By utilising proprietary data manipulation tools, automation makes the whole data migration process repeatable. With the strict use of:
- automated source and target system refreshes
- data extracts
- script execution
- data quality checking, tagging and loading
- auditing and reporting
the data migration process can be run end-to-end as often as is practical. In a typical migration the objects are run through this process repeatedly until the results are correct and predictable every time. When working on critical systems with large data volumes, repeatable automated processes are the only viable approach.
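The end-to-end run described above can be sketched as a fixed, ordered pipeline. This is a minimal illustration, not Data MC's actual tooling; every function and log message below is a placeholder standing in for the real refresh, extract, script, quality and audit steps.

```python
# Minimal sketch of a repeatable end-to-end migration run.
# All step names and bodies are illustrative placeholders.

from datetime import datetime


def refresh_environments(log):
    log.append("refresh: source and target systems refreshed")


def extract_data(log):
    log.append("extract: source data extracted")


def execute_scripts(log):
    log.append("scripts: transformation scripts executed")


def quality_check_tag_load(log):
    log.append("quality: data checked, tagged and loaded")


def audit_and_report(log):
    log.append("audit: results audited and reported")


# The same ordered sequence every run is what makes the
# outcome repeatable and results comparable run-to-run.
STEPS = [
    refresh_environments,
    extract_data,
    execute_scripts,
    quality_check_tag_load,
    audit_and_report,
]


def run_migration():
    """Run every step in order and return the audit log."""
    log = [f"run started {datetime.now():%Y-%m-%d %H:%M}"]
    for step in STEPS:
        step(log)
    return log


for line in run_migration():
    print(line)
```

Because the step list is data, the identical sequence can be replayed for every rehearsal and for the final cut-over, which is the point of the repeatability argument.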
To ‘guarantee’ the result of the final data migration, the main cut-over and system go-live deployment, the expected result should be known beforehand. This outcome is extremely difficult to achieve, given:
- the ever-changing nature of the data
- unknown data quality errors introduced just prior to the final migration
- target system changes
- migration requirement changes
- changes in project resources.
At Data MC our whole methodology is geared towards building a predictable outcome for the business during the final data migration, the system go-live event. The ‘repeatability’ of the built data migration solution supports multiple end-to-end migration runs, so that all likely scenarios are encountered before the go-live migration.
As much as having a ‘repeatable’ and ‘predictable’ data migration is critical to success, so too is the time the go-live migration will take. Just as ‘predictability’ relies on the ‘repeatability’ of the data migration, ‘timeliness’ relies on the ‘predictable’ nature of the final migration. Under no circumstances should a go-live event be attempted without a confident, scheduled ‘Runsheet’ being developed beforehand.
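A Runsheet of the kind described is essentially an ordered task list with planned durations taken from rehearsal timings. The sketch below is illustrative only; the task names, durations and start time are invented placeholders, not figures from any real migration.

```python
# Illustrative go-live Runsheet: planned durations would come from
# timings observed in rehearsal runs. All values are placeholders.

from datetime import datetime, timedelta

# (task name, planned duration in minutes)
runsheet = [
    ("Freeze source system",        15),
    ("Final extract",               45),
    ("Transform and load",         120),
    ("Quality checks and tagging",  60),
    ("Audit and sign-off",          30),
]


def schedule(start, tasks):
    """Lay tasks end-to-end from a start time, returning
    (task, start, finish) rows for the printed Runsheet."""
    rows, clock = [], start
    for name, minutes in tasks:
        finish = clock + timedelta(minutes=minutes)
        rows.append((name, clock, finish))
        clock = finish
    return rows


start = datetime(2024, 6, 1, 20, 0)  # placeholder cut-over start
for name, begin, end in schedule(start, runsheet):
    print(f"{begin:%H:%M}-{end:%H:%M}  {name}")
```

Because each duration is grounded in rehearsal runs rather than guesswork, the finish time of the final task gives the business a defensible go-live completion estimate.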