Infor EAM SQL Query Optimization: Asset Downtime Reporting, KPI Calculations, and Performance Index Tuning
Running accurate downtime and KPI reports against a mature Infor EAM database is a different problem from general SQL optimization. The transaction volume on a production EAM instance – particularly on R5WORKORDERS and R5EVENTS – grows continuously, and reporting queries that perform acceptably at go-live will degrade as asset history accumulates. The challenge is not simply writing correct SQL. It is writing queries that are both logically accurate against EAM’s event and status model and structurally efficient enough to run without locking transactions or exhausting the reporting server. Both requirements must be met simultaneously, and most performance problems in EAM reporting arise from failing to satisfy at least one of them.
EAM Schema Foundations for Downtime Queries
Asset downtime in Infor EAM is not stored in a single field or table. It is derived from the intersection of work order records, equipment records, and event records, with the actual downtime duration either calculated from date fields or recorded explicitly depending on how the instance is configured.
R5WORKORDERS is the central transactional table. Each work order row carries a WOSTRTDATE and WOENDDATE representing the work period, a WOTYPE indicating whether the order is a corrective, preventive, or other class of work, and a WOSTATUS reflecting the order lifecycle. The WODOWNTIME column stores the recorded downtime duration in hours where operators have entered it manually, but this value is not always populated and cannot be relied upon as the sole source of downtime truth. WOEQUIPMENT links the work order to R5EQUIPMENT via the equipment code.
R5EQUIPMENT holds the master asset record. EQUIPMENTID is the primary key used throughout the schema. EQCLASS and EQDEPT are the classification and department attributes that become grouping keys in downtime aggregations. EQSTATUS reflects current operational state and is relevant when filtering to assets that are currently active rather than decommissioned.
R5EVENTS records failure and incident events tied to equipment. EVTEQUIP links to R5EQUIPMENT and EVTWONUM links to R5WORKORDERS, giving the event its work order context. EVTSTARTDATE and EVTENDDATE bracket the failure period as recorded in the event record, which is distinct from the work order dates. This distinction matters considerably for MTBF calculations. The event record captures when the failure occurred, while the work order captures when technicians worked on it. Conflating these two date ranges is one of the most common sources of KPI error in EAM reporting.
R5ACTIVITIES records the labor and task lines within a work order and is the correct source for actual repair labor hours when calculating MTTR from a labor-input perspective rather than a calendar-time perspective. ACTWONUM links each activity to its parent work order.
Query architects working against EAM downtime data need to understand that a single asset failure can generate one event record, one or more work orders, and multiple activity lines. A query joining all four tables without appropriate aggregation will produce row multiplication that inflates every duration and hour calculation.
Are your Infor EAM SQL queries returning slow asset downtime reports or inaccurate KPI calculations?
Sama's EAM consultants optimise SQL query structure, index configuration, and reporting logic in live Infor EAM environments to improve performance and data accuracy.
Writing Accurate Asset Downtime Queries
The foundational downtime query pattern in Infor EAM calculates downtime duration from R5WORKORDERS date fields for a defined equipment set and date range. A minimal accurate form targets WOSTRTDATE and WOENDDATE, casts the difference to hours, and filters on WOSTATUS to exclude orders that never reached a state reflecting actual downtime. In Oracle-based EAM instances, the date arithmetic uses standard Oracle date subtraction multiplied by 24 to produce hours. In SQL Server-based instances, DATEDIFF with the hour interval is appropriate.
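A minimal sketch of this pattern follows, using the column names described above. The closed-status codes and the `WONUM` order-number column are assumptions to be verified against the specific instance, and the bind variables (`:equip_code`, `:range_start`, `:range_end`) are placeholders:

```sql
-- Oracle form: downtime hours per closed work order for one asset
-- over a half-open date range. Status codes 'C'/'CO' are typical
-- closed statuses; confirm against R5WODEFINITIONS for the instance.
SELECT wo.WONUM,
       (wo.WOENDDATE - wo.WOSTRTDATE) * 24 AS downtime_hours
FROM   R5WORKORDERS wo
WHERE  wo.WOEQUIPMENT = :equip_code
AND    wo.WOSTATUS IN ('C', 'CO')
AND    wo.WOSTRTDATE >= :range_start
AND    wo.WOSTRTDATE <  :range_end;

-- SQL Server equivalent of the duration expression:
--   DATEDIFF(HOUR, wo.WOSTRTDATE, wo.WOENDDATE) AS downtime_hours
```

The half-open date range (`>= start`, `< end`) avoids boundary double-counting when adjacent periods are reported side by side.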
The status filter on R5WORKORDERS is not optional. Orders in a status of R (raised) or AP (approved but not started) do not represent completed downtime events. The query should filter to statuses indicating completed or closed work, typically C or CO depending on the instance configuration. EAM administrators sometimes create custom statuses, so the applicable closed statuses should be confirmed against the R5WODEFINITIONS or equivalent configuration table for the specific instance.
Overlapping downtime windows are a real problem on assets that generate multiple concurrent work orders, which occurs on complex equipment with multiple failure points addressed simultaneously. A naive SUM of WOENDDATE minus WOSTRTDATE across all orders for an asset in a period will double-count any overlapping intervals. Handling this requires collapsing overlapping intervals before summing. In Oracle, an analytic approach comparing each order’s start date against the running maximum end date of all earlier orders identifies where a new non-overlapping interval begins; comparing only against the immediately previous order’s end date with LAG is not sufficient when one long order spans several later ones. The collapsed intervals can then be summed. SQL Server 2012 and later supports the same window-function approach; on older versions a recursive CTE achieves the same result. Neither approach is trivial, and skipping it produces inflated downtime figures that distort every downstream KPI.
NULL handling in WOENDDATE requires explicit attention. Work orders that are still open at query time have a NULL WOENDDATE. Depending on the report purpose, these should either be excluded with a WHERE WOENDDATE IS NOT NULL clause, or the current timestamp should be substituted for WOENDDATE to calculate elapsed downtime on open events. Mixing open and closed orders in the same duration calculation without distinguishing them is a logic error that produces misleading results.
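One hedged way to include open orders without conflating them silently is to substitute the current timestamp and surface the open-order count alongside the total, as in this Oracle sketch:

```sql
-- Elapsed downtime including still-open orders, with the open-order
-- count reported so consumers can see the population is mixed.
SELECT WOEQUIPMENT,
       SUM((COALESCE(WOENDDATE, SYSDATE) - WOSTRTDATE) * 24) AS downtime_hours,
       SUM(CASE WHEN WOENDDATE IS NULL THEN 1 ELSE 0 END)    AS open_orders
FROM   R5WORKORDERS
WHERE  WOSTRTDATE >= :range_start
AND    WOSTRTDATE <  :range_end
GROUP  BY WOEQUIPMENT;
```

On SQL Server, `SYSDATETIME()` and `DATEDIFF(HOUR, ...)` replace the Oracle date arithmetic.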
EAM classifies downtime through both WOTYPE and event classification codes on R5EVENTS. Planned downtime from preventive maintenance orders should be separated from unplanned corrective downtime in all reporting that feeds reliability KPIs. A join from R5WORKORDERS to R5EVENTS on EVTWONUM, filtered to corrective event types, isolates unplanned downtime periods for MTTR and MTBF purposes.
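A sketch of that isolation join, with the event type code and the `WONUM` join column as assumptions to map against the instance's configuration:

```sql
-- Unplanned downtime only: restrict via the linked event's type.
-- 'BRKD' is a placeholder failure/breakdown event type code.
SELECT wo.WOEQUIPMENT,
       SUM((wo.WOENDDATE - wo.WOSTRTDATE) * 24) AS unplanned_hours
FROM   R5WORKORDERS wo
JOIN   R5EVENTS ev ON ev.EVTWONUM = wo.WONUM
WHERE  ev.EVTTYPE IN ('BRKD')
AND    wo.WOSTATUS IN ('C', 'CO')
AND    wo.WOSTRTDATE >= :range_start
AND    wo.WOSTRTDATE <  :range_end
GROUP  BY wo.WOEQUIPMENT;
```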
KPI Derivation from EAM Data Structures
MTTR, expressed as mean calendar hours from failure start to restoration, is derived by joining R5EVENTS to R5WORKORDERS on EVTWONUM and calculating the average of WOENDDATE minus EVTSTARTDATE for a corrective work order population. The EVTSTARTDATE field on the event record captures the actual failure onset, while WOENDDATE captures the moment the repair was completed and the order closed. Using WOSTRTDATE instead of EVTSTARTDATE for the start of the repair clock is a common error. WOSTRTDATE reflects when work began, not when the failure occurred. If the intent is to measure response and repair time combined – the full outage duration from failure to restoration – EVTSTARTDATE is the correct anchor. If the intent is to measure only active repair time, WOSTRTDATE is appropriate, but this should be documented as a distinct metric.
The aggregation query for MTTR groups by WOEQUIPMENT and calculates AVG(WOENDDATE - EVTSTARTDATE) * 24 over a date range filtered to corrective work types. Filtering on WOTYPE to corrective orders and on WOSTATUS to closed orders before aggregating is essential. Including preventive or inspection orders in an MTTR calculation produces a meaningless average.
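Put together, an Oracle sketch of the MTTR aggregation (the corrective type code, closed statuses, and `WONUM` join column are placeholders to confirm per instance):

```sql
-- MTTR in calendar hours per asset: failure onset (event record)
-- to restoration (work order close).
SELECT wo.WOEQUIPMENT,
       AVG((wo.WOENDDATE - ev.EVTSTARTDATE) * 24) AS mttr_hours,
       COUNT(*)                                   AS repair_count
FROM   R5EVENTS ev
JOIN   R5WORKORDERS wo ON wo.WONUM = ev.EVTWONUM
WHERE  wo.WOTYPE   = :corrective_type   -- corrective orders only
AND    wo.WOSTATUS IN ('C', 'CO')       -- closed orders only
AND    ev.EVTSTARTDATE >= :range_start
AND    ev.EVTSTARTDATE <  :range_end
GROUP  BY wo.WOEQUIPMENT;
```

Reporting the repair count alongside the mean guards against averages derived from one or two events being read as stable figures.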
MTBF requires calculating the intervals between successive failure events for each piece of equipment, then averaging those intervals. The query pattern selects EVTSTARTDATE from R5EVENTS for each equipment code, orders the events chronologically, and uses LAG(EVTSTARTDATE) to compute the gap between consecutive failure start dates. The average of those gaps, expressed in hours, is the MTBF for the period. A WHERE clause filtering to failure event types on R5EVENTS is mandatory. EAM event records include non-failure events such as inspections and condition monitoring entries that must be excluded. The EVTTYPE field carries the classification, and only codes mapped to actual failure events should be included in the MTBF population.
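The LAG pattern described above can be sketched as follows (the failure event type code is a placeholder for the instance's mapped codes):

```sql
-- MTBF per asset as the mean gap, in hours, between consecutive
-- failure start dates. Non-failure events must be excluded.
WITH failure_gaps AS (
  SELECT EVTEQUIP,
         (EVTSTARTDATE
            - LAG(EVTSTARTDATE) OVER (PARTITION BY EVTEQUIP
                                      ORDER BY EVTSTARTDATE)) * 24 AS gap_hours
  FROM   R5EVENTS
  WHERE  EVTTYPE IN ('BRKD')   -- placeholder failure type code
  AND    EVTSTARTDATE >= :range_start
  AND    EVTSTARTDATE <  :range_end
)
SELECT EVTEQUIP,
       AVG(gap_hours) AS mtbf_hours
FROM   failure_gaps
GROUP  BY EVTEQUIP;
```

The first failure per asset produces a NULL gap from LAG, which AVG ignores, so an asset needs at least two failures in the period to yield an MTBF.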
OEE introduces a third dimension – quality – that requires additional data sources beyond the standard maintenance tables. Availability is calculated as (scheduled operating time minus downtime) divided by scheduled operating time, where downtime comes from the corrective work order population and scheduled operating time requires either a shift schedule table or a configuration parameter specific to the asset class. Performance requires production rate data, which in EAM implementations is typically held in a separate production or meter reading table such as R5METERREADINGS or an integrated MES source. Quality requires defect or reject rate data, which is often external to EAM entirely.
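The availability component alone can be sketched from core tables, with scheduled operating time supplied externally since standard EAM tables do not hold it (the `:sched_hours` bind and corrective type code are assumptions):

```sql
-- Availability = (scheduled hours - unplanned downtime) / scheduled hours.
-- :sched_hours stands in for a shift schedule source or asset-class
-- configuration parameter; it is not derivable from core EAM tables.
SELECT wo.WOEQUIPMENT,
       (:sched_hours - SUM((wo.WOENDDATE - wo.WOSTRTDATE) * 24))
         / :sched_hours AS availability
FROM   R5WORKORDERS wo
WHERE  wo.WOTYPE   = :corrective_type
AND    wo.WOSTATUS IN ('C', 'CO')
AND    wo.WOSTRTDATE >= :range_start
AND    wo.WOSTRTDATE <  :range_end
GROUP  BY wo.WOEQUIPMENT;
```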
For implementations where OEE is calculated within EAM, the availability component is the only element fully derivable from core EAM tables. Performance and quality components frequently require either custom EAM extension tables or integration with external datasets. Attempting to calculate full OEE from standard EAM tables alone without confirming that performance and quality data is captured in the EAM schema will produce an incomplete metric that understates both the complexity of the calculation and the gaps in the data model. [internal link: Infor EAM implementation services page] covers data architecture considerations relevant to OEE instrumentation decisions during implementation.
Index Strategy for EAM Reporting Workloads
The standard index set delivered with Infor EAM is optimised for transactional performance – insert, update, and lookup operations driven by the application layer. Reporting workloads have fundamentally different access patterns: they scan large date ranges, join multiple large tables, and aggregate across equipment hierarchies. The standard indexes frequently do not serve these patterns efficiently.
R5WORKORDERS accumulates the largest row counts in most EAM instances. The columns that appear most frequently in reporting WHERE clauses are WOEQUIPMENT, WOSTRTDATE, WOENDDATE, WOSTATUS, and WOTYPE. If the instance does not carry a composite index on WOEQUIPMENT and WOSTRTDATE, range queries filtering by equipment and date will perform full or near-full table scans as the table grows. Adding a composite index on these two columns, with WOEQUIPMENT as the leading column because it has higher selectivity in most query patterns, materially reduces the scan cost for asset-specific downtime queries.
R5EVENTS similarly benefits from an index on EVTEQUIP and EVTSTARTDATE. The join between R5EVENTS and R5WORKORDERS on EVTWONUM requires that EVTWONUM is indexed on R5EVENTS, which the standard schema typically provides, but this should be verified against the execution plan rather than assumed.
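The candidate indexes described above amount to the following DDL. Index names are examples, and each addition should be validated against staging write performance before it reaches production:

```sql
-- Equipment + start date composite for asset/date-range predicates
-- on R5WORKORDERS, WOEQUIPMENT leading for its higher selectivity.
CREATE INDEX IDX_R5WO_EQUIP_STRT
  ON R5WORKORDERS (WOEQUIPMENT, WOSTRTDATE);

-- Event-side equivalent for EVTEQUIP/EVTSTARTDATE range filters.
CREATE INDEX IDX_R5EV_EQUIP_STRT
  ON R5EVENTS (EVTEQUIP, EVTSTARTDATE);

-- Join key to R5WORKORDERS: usually present in the standard schema;
-- create only if the execution plan shows it missing.
CREATE INDEX IDX_R5EV_WONUM
  ON R5EVENTS (EVTWONUM);
```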
Index bloat is a genuine risk on R5WORKORDERS in high-transaction environments. Work orders are frequently updated as they move through status transitions, and each update to an indexed column triggers an index maintenance operation. Excessive indexing on a heavily transacted table increases write overhead and causes index fragmentation that degrades read performance over time. The correct approach is to add only the indexes that demonstrably improve the specific reporting queries in scope, validate their impact on write performance in a staging environment, and schedule regular index rebuild or reorganize operations aligned with the EAM maintenance window.
Evaluating execution plans for EAM reporting queries should be done against production-representative data volumes. An execution plan generated against a development instance with a fraction of the production row count will not reveal the same access patterns. When examining execution plans, key warning signs are nested loop joins on large unindexed tables, sort operations on columns with no supporting index, and hash joins consuming excessive memory grants. A query plan showing a full scan of R5WORKORDERS without an index seek indicates that the available indexes do not match the query’s filter predicates, either because the index does not exist or because the query is written in a form that prevents index use – for example, applying a function to an indexed column in a WHERE clause.
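The function-on-column problem mentioned above looks like this in practice (the `WONUM` column name is an assumption, as elsewhere):

```sql
-- Anti-pattern: wrapping the indexed column in a function prevents
-- an index range scan on WOSTRTDATE and forces a wider scan.
SELECT WONUM
FROM   R5WORKORDERS
WHERE  TRUNC(WOSTRTDATE) = DATE '2024-01-15';

-- Sargable rewrite: a half-open range on the bare column lets the
-- optimiser use an index that includes WOSTRTDATE.
SELECT WONUM
FROM   R5WORKORDERS
WHERE  WOSTRTDATE >= DATE '2024-01-15'
AND    WOSTRTDATE <  DATE '2024-01-16';
```

Both queries return the same rows; only the second leaves the filter column bare, which is what the optimiser needs to match it against an index.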
Query Execution and Reporting Layer Considerations
Queries executed through the Infor EAM reporting layer – whether through the standard EAM report framework or through Infor Birst – behave differently from queries run via direct database connection. The EAM reporting framework constructs queries dynamically based on user-selected parameters and applies its own filtering logic above the base SQL. This can result in query structures that are not optimal for the underlying database, particularly when the dynamically constructed WHERE clause negates available indexes.
Direct database access, typically through a read replica or a reporting database fed by replication, removes the application layer from the query path and allows full control over query structure, hints, and execution plan management. Most production EAM environments should route heavy reporting workloads to a dedicated read replica to avoid contention with transactional processing on the primary database. Running large aggregation queries against the primary OLTP database during business hours creates lock contention risk on R5WORKORDERS and R5EVENTS, both of which receive continuous writes during normal operations.
Infor Birst, the analytics layer positioned as the EAM reporting front-end in current Infor product architecture, introduces its own query generation layer that issues SQL against either a Birst data warehouse or a live connection to the EAM database depending on the deployment configuration. When Birst operates against a live EAM database connection, its generated SQL passes through the same index and execution constraints as any other query. Understanding how Birst translates its data model constructs into SQL – and whether the resulting queries align with the available indexes – requires examining the SQL logs from the database layer rather than relying on the Birst interface alone. The Infor documentation portal publishes configuration guidance for Birst data warehouse refresh scheduling and live connection setup that is directly relevant to managing query performance at this layer.
Partitioning R5WORKORDERS by a date column – typically WOSTRTDATE or WOCRDATE – is a viable strategy on Oracle and SQL Server instances once the table exceeds several million rows. Range partitioning on the work order creation or start date aligns with the most common reporting filter patterns, which are date-bounded. Partition pruning allows the database engine to exclude irrelevant partitions entirely from a query’s scan scope, reducing I/O proportionally to the selectivity of the date range filter. Implementing partitioning on an existing large table requires careful planning around the partition key choice, existing index structures, and the potential downtime window for the conversion operation. Partitioning should be evaluated as part of a broader database architecture review rather than applied reactively to a single slow query.
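As an illustrative Oracle shape only – converting the live table in place requires an online redefinition or exchange-based migration planned within a maintenance window, not a simple CTAS:

```sql
-- Sketch: yearly interval range partitioning on WOSTRTDATE, built as
-- a copy for evaluation. Table name and the historical boundary are
-- placeholders; local index strategy must be designed alongside this.
CREATE TABLE R5WORKORDERS_PART
PARTITION BY RANGE (WOSTRTDATE)
INTERVAL (NUMTOYMINTERVAL(1, 'YEAR'))
( PARTITION p_hist VALUES LESS THAN (DATE '2020-01-01') )
AS SELECT * FROM R5WORKORDERS;
```

With this shape, a date-bounded reporting query touches only the partitions its range filter selects, which is the pruning benefit described above.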
As EAM transaction histories grow and reporting requirements become more complex, the separation of OLTP and analytical workloads becomes a structural necessity rather than an optional optimisation. The long-term architecture for high-volume EAM reporting should treat the core EAM database as a system of record that feeds a purpose-built analytical store, with KPI calculations and downtime aggregations executed against the analytical store on pre-aggregated or incrementally refreshed datasets rather than against live transaction tables.