Infor CloudSuite ERP Migration from On-Premise: Tenant Provisioning, Data Migration ETL, and User Acceptance Testing
Moving an Infor on-premise ERP to CloudSuite is not an upgrade. It is a migration. The distinction matters because the architecture, the data model, and the configuration approach in CloudSuite differ meaningfully from their on-premise equivalents, and treating the project as a lift-and-shift operation is the most reliable way to introduce problems that surface months after go-live. This post covers three of the highest-risk phases of an Infor CloudSuite migration: tenant provisioning, data migration ETL design, and user acceptance testing. Each phase has specific technical requirements, common failure modes, and decisions that carry long-term consequences for how well the system operates in production.
Why Infor CloudSuite Migrations Fail Where They Should Not
Most Infor CloudSuite migration failures are not caused by the technology. They are caused by underestimating the scope of what has to change when you move from an on-premise environment – where years of customisations, patched integrations, and workarounds have accumulated – to a multi-tenant cloud platform with a rigid update cadence and a fundamentally different extensibility model.
According to Panorama Consulting Group’s 2023 ERP Report, 49 percent of ERP implementations exceed their original budget, and 62 percent take longer than planned. For cloud migrations specifically, the gap between expected and actual complexity is most pronounced in data migration and testing, which are consistently under-scoped in project plans relative to the effort they actually require.
Infor CloudSuite operates on a shared infrastructure model with Infor managing the underlying platform, security patching, and release cadence. Unlike on-premise deployments, tenants cannot modify base application code, and customisations must be implemented through Infor’s approved extensibility tools: ION (Infor Operating Network), BODs (Business Object Documents), and Infor OS configuration layers. This means that every customisation in the on-premise environment needs to be assessed before migration begins, not during it.
Understanding what CloudSuite is designed to do – and where it differs from legacy Infor environments – shapes every decision in the migration. The guide to Infor CloudSuite capabilities and optimising efficiency on the Sama Consulting site covers the platform fundamentals that inform good migration design.
Planning an Infor CloudSuite migration from on-premise?
Sama guides you through tenant provisioning, ETL design, and UAT so your go-live lands cleanly, on schedule.
Phase One: Tenant Provisioning
Tenant provisioning in Infor CloudSuite is the process of standing up the cloud environment, configuring the multi-tenant infrastructure, and establishing the technical foundation before any migration work begins. It is frequently treated as an administrative step when it is actually a technical design exercise with downstream consequences.
Understanding the Multi-Tenant Architecture
Infor CloudSuite runs on Amazon Web Services for most deployments, with Azure available for specific product lines. Within that infrastructure, each customer tenant is logically isolated but shares underlying compute resources with other tenants. Infor manages the infrastructure layer. The customer configures the application layer.
Tenant provisioning involves three distinct environments: Development, Test/QA, and Production. A fourth Training environment is often provisioned separately for end-user onboarding. Each environment must be provisioned, configured, and connected to the integration layer independently. A configuration change made in the development environment does not automatically propagate to test or production – promotion between environments follows a defined process through Infor’s Lifecycle Management tooling.
The first provisioning decision is tenant region selection. CloudSuite tenants are provisioned in a specific AWS region, and data residency implications follow directly from that choice. For organisations subject to GDPR, data residency in an EU region is a compliance requirement, not a preference. Changing the tenant region after provisioning is not a simple reconfiguration. It effectively requires reprovisioning, so this decision must be made correctly at the start.
ION and Integration Layer Setup
Infor Operating Network (ION) is the middleware layer that connects CloudSuite applications to each other and to external systems. ION is provisioned as part of the CloudSuite tenant, but its configuration – defining connection points, message flows, and BOD routing – is a significant technical task that needs to begin during provisioning, not after go-live.
ION uses BODs (Business Object Documents) as its standard message format. A BOD is an XML document representing a business event – a supplier invoice, a purchase order, a journal entry – in a standardised schema. Every integration in the CloudSuite environment, whether to another Infor product or to an external system, communicates via BODs through ION.
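To make the BOD concept concrete, the sketch below builds a minimal BOD-style XML envelope for a supplier invoice event using Python's standard library. The element names here are illustrative only — real Infor BODs follow the published OAGIS-based verb/noun schemas (with namespaces, sender metadata, and many more fields), so treat this as a shape, not a schema.

```python
import xml.etree.ElementTree as ET

def build_supplier_invoice_bod(invoice_id: str, supplier_id: str,
                               amount: str, currency: str) -> str:
    """Build a minimal BOD-style XML envelope for a supplier invoice event.

    Element names are illustrative -- production BODs use the OAGIS-based
    verb/noun structure (e.g. a Sync verb wrapping a SupplierInvoice noun)
    with full namespaces and application-area metadata.
    """
    root = ET.Element("SyncSupplierInvoice")
    data_area = ET.SubElement(root, "DataArea")
    invoice = ET.SubElement(data_area, "SupplierInvoice")
    ET.SubElement(invoice, "DocumentID").text = invoice_id
    ET.SubElement(invoice, "SupplierID").text = supplier_id
    # Currency carried as an attribute, mirroring the OAGIS amount pattern.
    amt = ET.SubElement(invoice, "TotalAmount", currencyID=currency)
    amt.text = amount
    return ET.tostring(root, encoding="unicode")
```

The value of the standardised format is that every system on either side of ION parses the same structure, regardless of which application emitted the event.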
During provisioning, the ION connection points for each integrated system must be defined. Each connection point specifies the system it represents, the BOD types it can send and receive, and the authentication method. Getting ION architecture right at provisioning time prevents the most expensive category of post-go-live integration failures – ones rooted in a poorly designed foundation rather than a configuration error that can be quickly corrected.
The detailed guide to Infor ION integration architecture and configuration covers how ION is structured across deployment types, including the connection point and workflow design decisions that apply directly to migration projects.
Security Model and Role Configuration
CloudSuite’s security model is role-based and configured through Infor OS, the platform layer that sits beneath CloudSuite applications. Security roles are assigned to users and determine both application access and data access – which companies, ledgers, and transaction types each user can view or modify.
Provisioning the security model requires mapping the on-premise user roles to CloudSuite equivalents. This is rarely a clean mapping. On-premise Infor environments often have highly customised role structures built up over years, sometimes with role proliferation that nobody fully understands any more. The provisioning phase is the right time to rationalise the security model rather than replicate it, but doing so requires input from both IT and business process owners.
Single sign-on configuration is also handled during provisioning. Most enterprise deployments connect CloudSuite to the organisation’s identity provider – typically Azure Active Directory or Okta – via SAML 2.0. The SSO configuration must be completed before user acceptance testing begins, because UAT users need to authenticate through the production-equivalent path, not through temporary admin credentials.
Phase Two: Data Migration ETL Design
Data migration is consistently the phase that determines whether a CloudSuite migration delivers a clean system or inherits every data quality problem the on-premise environment accumulated over its lifetime. ETL design – Extract, Transform, Load – is the technical discipline that separates those two outcomes.
Scoping the Migration Objects
The first step in ETL design is agreeing on what data is being migrated. Not everything in the on-premise system belongs in CloudSuite. Historical transaction data, closed periods, and legacy reference data that is no longer operationally relevant should be evaluated for archival rather than migration. Migrating everything by default increases ETL complexity, extends testing timelines, and degrades CloudSuite performance post-go-live with data that serves no operational purpose.
A standard CloudSuite migration data scope includes:
- Chart of accounts and financial structure (companies, ledgers, accounting units, account categories)
- Open transactions (open purchase orders, open supplier invoices, open customer orders, open AR and AP balances)
- Master data (suppliers, customers, items, and employees where Infor HCM is in scope)
- Historical balances for reporting continuity (typically three to five years of summarised period data)
- Reference data and configuration tables (currency codes, tax codes, document types, approval structures)
Closed historical transactions – fully processed purchase orders, paid invoices, completed sales orders – are typically archived rather than migrated, with access maintained through a reporting database or archive tool. This decision needs to be made explicitly and documented, because it directly affects what users can query in CloudSuite on day one and what requires a separate lookup process.
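The "summarised period data" item in the scope list above is worth illustrating: rather than migrating closed transactions row by row, the ETL collapses them into one balance per period and account. A minimal sketch, with hypothetical field names:

```python
from collections import defaultdict

def summarise_period_balances(transactions: list[dict]) -> list[dict]:
    """Collapse closed transactions into one balance row per
    (period, account) pair -- the summarised history migrated in place of
    row-level detail. Field names are illustrative, not the Infor schema.
    """
    balances = defaultdict(float)
    for txn in transactions:
        balances[(txn["period"], txn["account"])] += txn["amount"]
    return [
        {"period": period, "account": account, "balance": round(total, 2)}
        for (period, account), total in sorted(balances.items())
    ]
```

The summarised rows preserve reporting continuity while keeping the migrated volume a fraction of the source transaction count.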
ETL Architecture for Infor CloudSuite
CloudSuite provides two primary mechanisms for bulk data loading: Infor Data Lake through the ION and Infor OS data pipeline, and direct API-based loading via CloudSuite’s REST APIs. For large-volume migration data, Infor also supports file-based loading through IEC (Infor Enterprise Collaborator) for specific modules.
The ETL architecture for a CloudSuite migration involves three layers.
Extraction pulls data from the on-premise Infor database – typically SQL Server or Oracle – using direct database queries or Infor’s reporting tools. The extraction layer should capture data as a point-in-time snapshot rather than a live connection, to ensure consistency across the full migration dataset. For large databases, extraction is staged by module and by date range to manage volume.
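A minimal extraction sketch, staged by module and date range against a frozen snapshot. SQLite stands in here for the SQL Server or Oracle source, and the table and column names are hypothetical; in a real migration the snapshot is enforced by a restored backup or read-only replica, not a live transactional database.

```python
import sqlite3

# Agreed point-in-time snapshot date for the migration dataset (illustrative).
SNAPSHOT_CUTOFF = "2024-06-30"

def extract_module(conn: sqlite3.Connection, table: str, date_column: str,
                   date_from: str, date_to: str) -> list[dict]:
    """Extract one module's rows for a date range from the frozen snapshot.

    Table and column names come from the migration's own mapping workbook,
    never from user input -- hence the direct string interpolation here is
    acceptable for a controlled migration script.
    """
    conn.row_factory = sqlite3.Row
    cur = conn.execute(
        f"SELECT * FROM {table} WHERE {date_column} BETWEEN ? AND ?",
        (date_from, date_to),
    )
    return [dict(row) for row in cur.fetchall()]
```

Staging by module and date range keeps each extract small enough to validate independently before transformation begins.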
Transformation is where the most complex logic sits. On-premise Infor data models often include legacy fields, custom tables, and structures that do not exist in CloudSuite. The transformation layer maps source fields to CloudSuite target fields, applies business rules for data cleansing – standardising address formats, resolving duplicate supplier records, normalising account codes – and handles the structural changes required by CloudSuite’s data model.
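As one example of that cleansing logic, the sketch below normalises supplier names for matching, collapses duplicates, and maps source fields onto hypothetical target field names. The survivorship rule (first record wins) is a placeholder for whatever the business signs off on.

```python
def transform_suppliers(source_rows: list[dict]) -> list[dict]:
    """Apply illustrative cleansing rules to supplier master records:
    normalise the name for duplicate matching, collapse duplicates on the
    normalised name, and map source columns onto hypothetical target fields.
    """
    seen: dict[str, dict] = {}
    for row in source_rows:
        # Collapse case and internal whitespace so "Acme  Corp" == "ACME CORP".
        name_key = " ".join(row["SUPPLIER_NAME"].upper().split())
        target = {
            "SupplierName": row["SUPPLIER_NAME"].strip(),
            "TaxID": (row.get("TAX_ID") or "").strip() or None,
            "Country": row.get("COUNTRY", "").upper(),
        }
        # First record wins; a real pipeline applies the business-approved
        # survivorship rule (most recent, most complete, and so on).
        seen.setdefault(name_key, target)
    return list(seen.values())
```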
Financial dimension mapping is particularly complex. On-premise Infor environments may use a chart of accounts structure that differs from what the organisation wants in CloudSuite, and the migration is often used as an opportunity to rationalise the financial structure. Every account code change requires a corresponding update to historical balance figures, open transaction assignments, and reporting mappings.
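A sketch of that remapping discipline: apply the source-to-target account mapping uniformly, and surface any code with no mapping as an explicit exception rather than defaulting it silently. The codes and field names are illustrative.

```python
def remap_account_codes(rows: list[dict],
                        mapping: dict[str, str]) -> tuple[list[dict], list[dict]]:
    """Apply a source-to-target chart of accounts mapping to balance or
    transaction rows. Rows whose code has no mapping are returned separately;
    an unmapped code needs a business decision, not a silent default.
    """
    mapped, unmapped = [], []
    for row in rows:
        target_code = mapping.get(row["account"])
        if target_code is None:
            unmapped.append(row)
        else:
            mapped.append({**row, "account": target_code})
    return mapped, unmapped
```

Running the same mapping function over historical balances, open transactions, and report definitions is what keeps the three consistent after rationalisation.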
Loading pushes the transformed data into CloudSuite via the appropriate channel. For financial master data and configuration, direct API loading via CloudSuite’s REST endpoints is the standard approach. For high-volume transaction data, file-based loading through IEC or the ION data pipeline is more efficient. The loading layer must include validation logic that confirms each record was accepted by CloudSuite and logs rejections with the specific error reason for remediation.
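The validation-and-logging requirement can be sketched as a load loop around an injected `post` function, which stands in for the real CloudSuite REST client. Every rejection is captured with its error reason so the batch can be remediated and re-run; the structure here is an assumption, not Infor's API.

```python
from typing import Callable

def load_records(records: list[dict],
                 post: Callable[[dict], tuple[bool, str]]) -> dict:
    """Push records through a caller-supplied `post` function (a stand-in
    for the actual CloudSuite REST client) and log every rejection with the
    specific error reason for later remediation.
    """
    result = {"loaded": 0, "rejections": []}
    for record in records:
        accepted, reason = post(record)
        if accepted:
            result["loaded"] += 1
        else:
            result["rejections"].append({"record": record, "reason": reason})
    return result
```

Injecting the client also makes the load layer testable offline, which matters when the same code must run across multiple test cycles and the production cutover.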
For organisations tackling data migration as part of a broader CloudSuite programme, the article on mastering data migrations – strategy, tools, and best practices covers the methodology and tooling decisions that apply across Infor migration projects.
ETL Tooling Options
The choice of ETL tooling affects how maintainable the migration process is, particularly for the multiple test load cycles that precede production cutover.
Custom Python-based ETL pipelines are common in organisations with strong internal engineering capability and complex transformation logic that benefits from procedural code rather than a visual mapping tool. Pandas handles the transformation layer, SQLAlchemy manages extraction queries against the source database, and CloudSuite REST API endpoints receive the load. For organisations using Infor ION as the primary integration layer, ION itself can serve as the data pipeline for migration loads, keeping the tooling within the Infor ecosystem and reusing the connection points established during provisioning.
Regardless of tooling choice, the ETL pipeline needs to be re-runnable. A migration that can only be executed once is not a migration – it is a one-way door. Every test load cycle runs the same pipeline against refreshed source data, and the production cutover run is the final execution of a pipeline that has already been validated multiple times. Building re-runnability into the ETL design from the start is what makes parallel testing and incremental validation feasible.
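The core of re-runnability is idempotent loading keyed on a natural key, so a repeated run updates rather than duplicates. A minimal sketch using a SQLite staging table (the schema is illustrative):

```python
import sqlite3

def upsert_suppliers(conn: sqlite3.Connection, rows: list[dict]) -> None:
    """Idempotent staging load keyed on the natural key (supplier_id):
    re-running the same pipeline against refreshed source data updates
    existing rows instead of creating duplicates. Schema is illustrative.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS staging_supplier ("
        "supplier_id TEXT PRIMARY KEY, name TEXT)"
    )
    conn.executemany(
        "INSERT INTO staging_supplier (supplier_id, name) "
        "VALUES (:supplier_id, :name) "
        "ON CONFLICT(supplier_id) DO UPDATE SET name = excluded.name",
        rows,
    )
    conn.commit()
```

The same keyed-upsert pattern applies whether the staging layer is a database, the ION data pipeline, or API payload preparation.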
Data Cleansing Before Migration
The most common mistake in data migration projects is treating data cleansing as something that happens inside the ETL transformation step. Cleansing logic embedded in the transformation layer is difficult to validate, hard to audit, and tends to propagate source-system problems into the target system in a slightly different form.
The better approach is a dedicated data cleansing phase before ETL design is finalised. Extract source data profiles from the on-premise system – record counts, null rates for key fields, duplicate rates for master data records, referential integrity violations between related tables. Present these profiles to business data owners and require sign-off on cleansing decisions before migration design locks in.
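The profiling step itself is straightforward to automate. A stdlib-only sketch producing the headline figures for business review (field names are illustrative):

```python
from collections import Counter

def profile_table(rows: list[dict], key_fields: list[str],
                  dup_field: str) -> dict:
    """Produce the headline data-quality figures for business data owners:
    record count, null rate per key field, and the duplicate rate on a
    matching field (name normalised for case and whitespace).
    """
    total = len(rows)
    null_rates = {
        field: (sum(1 for r in rows if not r.get(field)) / total if total else 0.0)
        for field in key_fields
    }
    counts = Counter(
        " ".join(str(r.get(dup_field, "")).upper().split()) for r in rows
    )
    duplicates = sum(c - 1 for c in counts.values() if c > 1)
    return {
        "record_count": total,
        "null_rates": null_rates,
        "duplicate_rate": duplicates / total if total else 0.0,
    }
```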
For supplier master data, a typical on-premise Infor environment contains duplicate supplier records created over years of manual entry, suppliers with missing tax IDs, and inactive suppliers never formally deactivated. Each of these requires a business decision: which duplicate is canonical, is the missing data obtainable before migration, should inactive records be migrated at all. These are not IT decisions. They require business input and sign-off, and getting that input means presenting the data quality findings in a format business users can engage with, not a raw database profiling report.
Phase Three: User Acceptance Testing
User acceptance testing is the phase where technical work is validated against real business requirements by the people who will use the system daily. It is also the phase most frequently compressed when projects run behind schedule, which consistently produces go-live environments that surprise users with problems that should have been caught in testing.
Designing a UAT Programme That Actually Works
Effective UAT is not ad-hoc testing by whoever is available during the two weeks before go-live. It is a structured programme with defined test scenarios, assigned testers, clear entry and exit criteria, and a defect management process that routes issues to the appropriate resolution team.
The UAT test library for a CloudSuite migration should cover three categories of scenarios.
Business process scenarios test end-to-end process flows as users will perform them after go-live. Procure-to-pay, order-to-cash, record-to-report, and manufacturing execution (where Factory Track or production modules are in scope) are the primary process families. Each scenario is written as a user story – starting condition, steps to perform, expected outcome – and executed by a business user, not a member of the technical team.
For manufacturers running Infor Factory Track alongside CloudSuite, the integration between shop floor operations and the ERP back-end needs its own testing track. Time entry, labour management, and inventory movement scenarios need to be validated end-to-end, from the Factory Track interface through to the CloudSuite ledger. The service page for Infor Factory Track consulting and implementation covers how Factory Track integrates with the broader CloudSuite environment, which is the context testers need to design those scenarios correctly.
Data validation scenarios confirm that migrated data is accurate, complete, and correctly structured in CloudSuite. Testers compare key figures – open AP balances, open PO values, inventory quantities, customer account balances – between the on-premise source system and CloudSuite. Discrepancies are logged as defects. This is distinct from the ETL validation performed by the technical team – it is a business-level confirmation that the numbers make sense to the people who own them.
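The figure comparison behind those scenarios can be automated so testers review exceptions rather than full listings. A sketch comparing per-key balances between the on-premise extract and CloudSuite, with an agreed tolerance (the key and tolerance are assumptions):

```python
def reconcile_balances(source: dict[str, float], target: dict[str, float],
                       tolerance: float = 0.01) -> list[dict]:
    """Compare key figures (e.g. open AP balance per supplier) between the
    on-premise source and CloudSuite, returning every discrepancy above the
    agreed tolerance so it can be logged as a defect. Keys missing on either
    side are compared against zero, so they surface as discrepancies too.
    """
    discrepancies = []
    for key in sorted(set(source) | set(target)):
        src, tgt = source.get(key, 0.0), target.get(key, 0.0)
        if abs(src - tgt) > tolerance:
            discrepancies.append(
                {"key": key, "source": src, "target": tgt,
                 "diff": round(src - tgt, 2)}
            )
    return discrepancies
```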
Integration scenarios test the connections between CloudSuite and external systems via ION. These scenarios require both the CloudSuite test environment and any connected external system test environments to be active simultaneously, with ION connection points configured against test endpoints rather than production. A failure in an integration scenario that is only discovered in production is significantly more disruptive than one caught in structured testing. The step-by-step guide to getting started with Infor ION system integration provides useful reference for how ION connections are validated during testing.
UAT Environment Management
UAT must run against a dedicated environment that contains migrated test data from the most recent ETL test cycle. Running UAT against a development environment with manually entered sample data produces results that do not reflect production conditions and misses the data-related defects that make up a significant proportion of go-live issues.
The UAT environment refresh cycle – how often the ETL is re-run to load updated source data – needs to be defined before UAT begins. For a three-week UAT programme, a mid-cycle refresh is typically scheduled at the end of week one, so that defects found in the first week that were caused by data quality issues can be retested against a corrected dataset in weeks two and three.
Cutover readiness criteria should be defined before UAT starts, not at the end. Common criteria include: all critical and high-severity defects resolved or accepted with a documented workaround, all business process scenarios executed with a pass rate above 95 percent, all integration scenarios passing against the test endpoint configuration, and sign-off obtained from the business process owners for each functional area.
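Those criteria are mechanical enough to evaluate in code, which removes ambiguity from the go/no-go discussion. A sketch, with illustrative data structures:

```python
def uat_exit_ready(scenarios: list[dict], defects: list[dict],
                   signoffs: dict[str, bool]) -> tuple[bool, list[str]]:
    """Evaluate the UAT exit criteria described above: scenario pass rate of
    at least 95 percent, no open critical or high defect without an accepted
    workaround, and sign-off from every functional area.
    """
    blockers = []
    passed = sum(1 for s in scenarios if s["status"] == "pass")
    pass_rate = passed / len(scenarios) if scenarios else 0.0
    if pass_rate < 0.95:
        blockers.append(f"pass rate {pass_rate:.0%} below 95% threshold")
    for d in defects:
        if (d["severity"] in ("critical", "high") and d["status"] == "open"
                and not d.get("accepted_workaround")):
            blockers.append(f"open {d['severity']} defect {d['id']}")
    for area, signed in signoffs.items():
        if not signed:
            blockers.append(f"missing sign-off: {area}")
    return (not blockers, blockers)
```

Returning the list of blockers, not just a boolean, gives the steering group something concrete to act on.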
Defect Management and Triage
The defect management process during UAT needs a clear severity classification and a defined resolution path for each severity level. Critical defects – those that block a core business process and have no workaround – require immediate escalation and a defined resolution timeline. High-severity defects are resolved before go-live. Medium and low-severity defects are triaged for post-go-live resolution with a documented workaround where one exists.
The most important discipline in defect management is distinguishing between defects – things that do not work as designed – and change requests – things that work as designed but where the user wants a different outcome. Change requests discovered during UAT represent scope additions that need to go through the project change control process, not the defect queue. Without this distinction, the defect log becomes a backlog of feature requests and the project never reaches UAT exit criteria.
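The routing rules from the two paragraphs above reduce to a small decision function — the labels here are placeholders for whatever queues the project's tracking tool uses:

```python
def triage_item(severity: str, works_as_designed: bool,
                has_workaround: bool) -> str:
    """Route a UAT finding per the rules above: items that work as designed
    are change requests for change control, never defects; genuine defects
    route by severity. Queue labels are illustrative.
    """
    if works_as_designed:
        return "change-control"          # scope addition, not a defect
    if severity == "critical":
        return "escalate-immediately"    # blocks a core process, no workaround
    if severity == "high":
        return "fix-before-go-live"
    # Medium / low: deferred, with the workaround status recorded.
    return "post-go-live" if has_workaround else "post-go-live-no-workaround"
```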
Regression testing is required after each defect fix. A fix to a data migration issue, a configuration error, or an integration mapping problem can have unexpected side effects on other parts of the system. Define which test scenarios are in scope for regression after each fix, and assign them explicitly to a tester rather than assuming the development team will catch regression issues independently.
Reporting and Analytics Readiness
One aspect of CloudSuite migrations that is consistently under-planned is the reporting layer. Finance and operations leaders expect to see familiar reports in CloudSuite from day one. If reporting is not built and validated before go-live, the system is technically live but operationally limited.
CloudSuite’s native reporting, combined with Infor Birst for advanced analytics, provides a capable reporting environment – but it requires configuration specific to your CloudSuite implementation. Report structures built on the on-premise system’s data model will not work directly in CloudSuite if the financial structure has changed during migration. Chart of accounts rationalisation, new cost centre hierarchies, or consolidated company structures all require corresponding updates to report definitions.
Build reporting validation into the UAT programme as a distinct workstream. Assign business users who own specific reports – the AP ageing report, the open order backlog, the cost centre spend summary – to validate that their reports produce results matching the source system for the same time period. Discrepancies between on-premise and CloudSuite report output are among the most visible failures at go-live and the hardest to explain to leadership after the fact.
For organisations planning to extend their reporting and analytics capability beyond standard CloudSuite reports, the Infor Birst business intelligence and analytics service provides the cloud-based BI layer that sits on top of CloudSuite data to support more complex financial and operational analysis.
Cutover Planning and the Production Run
The period between final UAT sign-off and production go-live is the most operationally sensitive part of the migration. Cutover planning defines exactly what happens, in what sequence, during the final data migration run and the transition from on-premise to CloudSuite as the system of record.
Cutover Sequencing
The production cutover sequence for a CloudSuite migration typically spans 48 to 72 hours. The general sequence is as follows.
- The on-premise system is placed in read-only mode at the agreed cutover start time. No new transactions are processed from this point.
- The final ETL run executes against the frozen source data, loading the delta between the last test load and the cutover snapshot into CloudSuite.
- The data validation team performs an accelerated check of key balances and open transaction counts against the pre-agreed validation checklist.
- Integration connections are switched from test endpoints to production endpoints and smoke-tested.
- SSO and user access are confirmed for a representative sample of users across each role.
- Go-live is declared and the on-premise system is formally decommissioned as the transactional system of record.
Each step has a defined time estimate, an owner, and a rollback decision point. If the data validation check fails materially, or if a critical integration is not passing smoke tests, the rollback decision – reverting to the on-premise system and rescheduling cutover – needs to be made by a named business sponsor, not by the technical team.
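One way to keep owners, time estimates, and rollback points explicit is to hold the runbook as structured data rather than prose. The steps below mirror the sequence described above; owners, durations, and step granularity are placeholders, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class CutoverStep:
    name: str
    owner: str
    estimated_hours: float
    rollback_point: bool  # can the migration still be aborted after this step?

# Illustrative runbook; every value here is an assumption for the sketch.
RUNBOOK = [
    CutoverStep("Freeze on-premise system (read-only)", "IT ops", 0.5, True),
    CutoverStep("Final delta ETL run", "Migration team", 8.0, True),
    CutoverStep("Accelerated balance validation", "Finance lead", 4.0, True),
    CutoverStep("Switch ION endpoints to production + smoke test",
                "Integration lead", 3.0, True),
    CutoverStep("Confirm SSO and role-sample access", "Security lead", 2.0, True),
    CutoverStep("Declare go-live, decommission source as system of record",
                "Business sponsor", 1.0, False),
]

def last_rollback_point(runbook: list[CutoverStep]) -> str:
    """Return the final step after which reverting to on-premise is still
    possible -- the point where the named business sponsor's go/no-go
    decision must be made."""
    return [step.name for step in runbook if step.rollback_point][-1]
```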
Parallel Run Considerations
Some organisations require a parallel run period where both the on-premise system and CloudSuite operate simultaneously, with transactions entered in both and results reconciled daily. Parallel runs are operationally expensive – they require double data entry, significant reconciliation effort, and a prolonged period of ambiguity about which system is authoritative – but they provide a safety net for high-risk migrations where the business cannot tolerate a failed cutover.
For most CloudSuite migrations, a rigorous UAT programme with well-defined exit criteria is a more effective risk mitigation than a parallel run. The parallel run itself introduces data synchronisation risks and user confusion that can create problems rather than prevent them. If a parallel run is required, define its scope tightly: which modules, which transaction types, and what the reconciliation threshold is for ending the parallel period.
Post-Go-Live Stabilisation
Go-live is not the end of the migration. The first 30 to 60 days in production carry their own risk profile and support requirements.
The most common post-go-live issues in CloudSuite migrations are integration failures that were not fully exercised in testing, performance issues with specific report or transaction types under production data volumes, and user adoption problems where training coverage was insufficient for actual daily usage patterns.
Staff the hyper-care period with the technical team members who built the migration, not a separate support function unfamiliar with the configuration decisions made during the project. Issues in the first 30 days require fast, well-informed diagnosis. The consultant who built the ION connection point for supplier data is the right person to diagnose why supplier invoices are not flowing on day three of go-live. Handing that work to a support analyst reading documentation adds time and introduces risk.
Post-go-live financial close validation is also part of stabilisation. Financial reports in CloudSuite need to be reconciled against equivalent on-premise reports for the first close period to confirm that migrated balances are correct and that the new financial structure is producing results the finance team expects. Differences discovered at first close are significantly more disruptive than those caught during UAT, because they involve real financial data in the live system.
For organisations also considering the longer-term path beyond initial migration – including optimisation of CloudSuite configuration, extension of integration coverage, and alignment with Infor’s annual release improvements – the Infor CloudSuite consulting and managed support services from Sama cover what that ongoing engagement looks like in practice.
Conclusion
An Infor CloudSuite migration from on-premise is a complex programme with three phases – tenant provisioning, data migration ETL, and user acceptance testing – each carrying distinct technical risks that are well understood and largely preventable with the right design and execution approach.
Provisioning decisions made without sufficient thought, particularly around region selection, ION architecture, and security model design, create problems that are expensive to unwind once configuration is established. Data migration ETL built without re-runnability, without a structured cleansing phase, and without business sign-off on data quality decisions produces a CloudSuite environment that inherits the problems of the on-premise system in a new form. User acceptance testing compressed into the final two weeks of a schedule produces go-lives that surprise everyone.
The organisations that execute these migrations well invest in the scoping and resourcing each phase requires, and make cutover decisions based on defined exit criteria rather than schedule pressure.
If your organisation is planning an Infor CloudSuite migration, working through a programme that is not progressing as expected, or looking for senior consultants to take ownership of specific high-risk phases, Sama Consulting has direct, hands-on experience delivering exactly these engagements across manufacturing, aerospace, automotive, and industrial organisations. Get in touch to talk through where your programme stands.