Zero Downtime Migration – Logical Online Migration of Very Large Databases

Things always work on PowerPoint slides. And they almost always work in our lab environments as well. And recorded demos are also pretty bullet-proof.

But what happens when you have a huge database to migrate? A Very Large Database (VLDB). First, things get more exciting. Second, there are more knobs to turn and the smallest thing might become a big thing. Here are some things to consider when you migrate VLDBs using the Logical Online method.

Existing Data Guard

No need to worry about your existing on-prem Oracle Data Guard environment. The Logical Online method uses Oracle Data Pump and Oracle GoldenGate. Both tools can be used on a database in a Data Guard configuration and do not interfere with Data Guard operations.

However, no switchovers or failovers are allowed during replication. GoldenGate can be configured to fully support a Data Guard environment and seamlessly reconnect after a Data Guard switchover, but that requires advanced GoldenGate configuration. Zero Downtime Migration (ZDM) configures GoldenGate for you, but in a basic way that does not support Data Guard switchovers.

Data Pump

Data Pump is used for the initial load of data.

The initial load happens before downtime, so generally you shouldn’t worry too much about the time it takes to perform the Data Pump export and import. However, the longer it takes, the bigger a replication gap GoldenGate will need to catch up on. If the gap is too big, it might become a problem.

Apply a proper degree of parallelism. Rule of thumb:

- On-prem: 2 x number of physical cores
- OCI: number of OCPUs

You can change the Data Pump parallel settings using the ZDM response file parameters DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE and DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE.
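As an illustration, for a source host with 12 physical cores and a target with 16 OCPUs, the response file entries could look like this (the values are examples only; adjust them to your own hardware):

```
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=24
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=16
```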

When you are importing in OCI, consider scaling up on CPUs. More CPUs mean more Data Pump parallelism and a faster import. Bare Metal and Exadata DB Systems scale online, whereas Virtual Machine DB Systems need around 10 minutes of downtime.

ZDM applies Data Pump compression automatically, and remember that you don’t need a license for the Advanced Compression Option when you are migrating with ZDM. ZDM uses the MEDIUM compression algorithm, and it’s not possible to change it. That is a sensible choice: in our experience, MEDIUM is most of the time the best alternative, providing a good compression ratio at a low CPU footprint. Although HIGH can compress better, it often comes at a much higher CPU footprint.
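For reference, what ZDM applies corresponds roughly to the following Data Pump export parameters (shown for illustration only; ZDM sets this for you and the algorithm cannot be changed):

```
COMPRESSION=ALL
COMPRESSION_ALGORITHM=MEDIUM
```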

ZDM automatically transforms BasicFile LOBs into SecureFile LOBs – which is very good. SecureFile LOBs are superior in many ways, also when it comes to importing. Based on a customer case, we did a benchmark that measures the time it takes to import LOB data. The customer saw a 3x improvement by transforming to SecureFile LOBs – and got the benefits of SecureFile LOBs afterwards. Don’t turn the transformation off. Here are the log entries from the two imports. Notice the elapsed time:

BasicFile LOBs

10-OCT-20 21:43:21.848: W-3 . . imported "SCHEMA"."TABLE" 31.83 GB 681025 rows in 804 seconds using direct_path

SecureFile LOBs

15-OCT-20 18:16:48.663: W-13 . . imported "SCHEMA"."TABLES" 31.83 GB 681025 rows in 261 seconds using external_table

Don’t use import over network link. Use dump files. Performance-wise there are some restrictions when you import over a network link, and these restrictions really hurt on a VLDB.
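As for the SecureFile transformation shown in the log entries above, it corresponds to this Data Pump import transform (illustration only; ZDM applies it automatically):

```
TRANSFORM=LOB_STORAGE:SECUREFILE
```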

The Data Pump workers will automatically be started on all nodes if the target database is a RAC database. That allows for the fastest possible import. If needed, you can force all workers onto the same node using DATAPUMPSETTINGS_DATAPUMPPARAMETERS_NOCLUSTER.
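A sketch of that response file entry (I assume a boolean value here; check the ZDM documentation for the exact syntax):

```
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_NOCLUSTER=TRUE
```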

But when you export to dump files, remember to ensure you have adequate disk space on the source database host to hold the dump files. Use mount points that can deliver good write performance and, ideally, they should be separate from the storage that holds the database data files.
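To verify you have enough space before the real export, you can ask Data Pump for a size estimate first. A sketch, assuming a schema-mode export (adjust the connect string and schema names to your environment):

```
expdp system@sourcedb schemas=APP_SCHEMA estimate_only=YES estimate=BLOCKS nologfile=YES
```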

A Data Pump export does not affect your existing backup operations. But the export and the backup will fight over the same resources, so I recommend suspending major backup operations (level 0+1) until after the export.

Speaking of fighting over resources: you should run the export at off-peak hours, which allows the export to use far more resources. Note that the export is not a fully consistent export. Each table is consistent in itself, but ZDM does not use FLASHBACK_TIME or FLASHBACK_SCN to make the entire export consistent, which helps avoid ORA-01555 (snapshot too old) during the export. The SCN of each table is recorded in the dump file, and GoldenGate later uses that information to start replication for each table individually from the appropriate SCN.

GoldenGate Replication

How much data will GoldenGate need to transfer from the source database to the GoldenGate hub, and from the GoldenGate hub to the target database? GoldenGate stores the replication data in trail files. The trail files are smaller than the redo logs – typically 30-40% of their size. Imagine a database generating 15 TB of redo a day: the trail files will amount to 4.5-6 TB. If you further apply compression on the trail files, you can typically reduce the size to around 1/8. In our example, the trail files are now 550-750 GB, which is significantly less than the initial 15 TB.
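The arithmetic above can be sketched in a few lines (the 30-40% ratio and the 8x compression factor are the rules of thumb from the text, not guarantees):

```python
def trail_volume(redo_tb_per_day, trail_ratio=0.35, compression_factor=8):
    """Estimate daily GoldenGate trail volume from daily redo volume.

    trail_ratio: trail files are typically 30-40% of the redo size.
    compression_factor: trail-file compression often reduces size ~8x.
    Returns (uncompressed_tb, compressed_gb).
    """
    uncompressed_tb = redo_tb_per_day * trail_ratio
    compressed_gb = uncompressed_tb / compression_factor * 1024
    return uncompressed_tb, compressed_gb

uncompressed, compressed = trail_volume(15)
print(f"Trail files: {uncompressed:.2f} TB, compressed: about {compressed:.0f} GB")
```

With 15 TB of redo a day this lands at 5.25 TB of trail files, or roughly 672 GB compressed, right in the middle of the range above.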

In the ZDM response file there is a parameter called GOLDENGATESETTINGS_EXTRACT_PERFORMANCEPROFILE. The default value is MEDIUM, but you have the option of changing it to HIGH. This should increase the performance of the extract process.

ZDM configures GoldenGate to use Parallel Replicat. This gives you a few options. First, you can control the number of mappers using the parameter GOLDENGATESETTINGS_REPLICAT_MAPPARALLELISM (default is 4). Also, apply parallelism is auto-tuned; you can set minimum and maximum values using GOLDENGATESETTINGS_REPLICAT_MINAPPLYPARALLELISM and GOLDENGATESETTINGS_REPLICAT_MAXAPPLYPARALLELISM. The defaults are 4 and 50, so I don’t think it will be necessary to change them.
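Put together, the GoldenGate-related response file entries discussed above could look like this (the replicat values are the defaults; HIGH is the optional extract profile):

```
GOLDENGATESETTINGS_EXTRACT_PERFORMANCEPROFILE=HIGH
GOLDENGATESETTINGS_REPLICAT_MAPPARALLELISM=4
GOLDENGATESETTINGS_REPLICAT_MINAPPLYPARALLELISM=4
GOLDENGATESETTINGS_REPLICAT_MAXAPPLYPARALLELISM=50
```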

By default, ZDM does not configure GoldenGate with DDL replication. Replicating without DDL gives the best performance. If you must have DDL replication in your migration project, you can look at the ZDM response file parameter GOLDENGATESETTINGS_REPLICATEDDL. But be prepared for a performance degradation.

In our webinar, Migrating Very Large Databases, my team and I discuss what to consider when migrating databases with GoldenGate.

Last, you can find more advice in Administering Oracle GoldenGate 19.1, Tuning the Performance of Oracle GoldenGate.

GoldenGate Health Check

Oracle GoldenGate comes with health check scripts that you can use on the source and target databases. They will help you monitor and troubleshoot the replication. Here is information on how to install and use the health check scripts. You can also find information in Administering Oracle GoldenGate 19.1, Using Healthcheck Scripts to Monitor and Troubleshoot.

Backup of Target Database

When you use Logical Online migration in ZDM, you create the target database in advance. In contrast to physical migration, the target database is not overwritten. This means that you can configure automatic backup of your target OCI database before you complete the migration and switch over to OCI. You will even have the time and the possibility to test the backup.

I recommend that you configure automatic backup after the Data Pump initial load. This way there are fewer archive logs to back up as well.

Data Guard on Target Database

Similar to automatic backup, you can also create your Data Guard association before the switchover. Likewise, wait until after the initial load has completed.

Conclusion

Even huge Oracle Databases can be migrated to OCI using Zero Downtime Migration. You might need to make a few adjustments to the standard flow, but it is absolutely doable. Your database can be protected by backup and Data Guard from the very second you switch over to OCI. Besides the actual migration, you should also do your best to ensure performance stability once the database is open for business. For that purpose, have a look at our webinar Performance Stability, Tips and Tricks and Underscores.

Other Blog Posts in This Series

- Introduction
- Install And Configure ZDM
- Physical Online Migration
- Physical Online Migration to DBCS
- Physical Online Migration to ExaCS
- Physical Online Migration and Testing
- Physical Online Migration of Very Large Databases
- Logical Online Migration
- Logical Online Migration to DBCS
- Logical Offline Migration to Autonomous Database
- Logical Online Migration and Testing
- Logical Online Migration of Very Large Databases
- Logical Online and Sequences
- Logical Offline Migration
- How To Minimize Downtime
- Logical Migration and Statistics
- Logical Migration and the Final Touches
- Create GoldenGate Hub
- Monitor GoldenGate Replication
- The Pro Tips

4 thoughts on “Zero Downtime Migration – Logical Online Migration of Very Large Databases”

  1. Hi Daniel. We have an active DB that produces around 600 GB per week; it uses a lot of partitions, sub-partitions, MVs and the like. It’s around 230 TB on AIX. We tried a month later to install GG, and the initial load via expdp was a nightmare. The Data Pump jobs were cancelled because of the impact on production, and the client wasn’t happy at all. The thing is, for a VLDB this big, do you still agree that ZDM logical migration with an expdp initial load is the way to go?


  2. Hi,
    That’s a big database with huge data growth as well. A migration like this will always be challenging and there will be many bumps on the road and things to sort out. Depending on the requirements for downtime other options (like cross-platform transportable tablespace) might be worth investigating.
    I would need more details to answer your question in detail. Feel free to e-mail me with additional detail and I will see what I can find for you.


  3. Yes, that’s exactly what we’re testing. For now we are using the XTTS v4 Perl/RMAN tooling with just 2 tablespaces to gain know-how of the procedure; the QA DB is 16 TB. And yes, the plan for production is to move to OCI on ExaCS. Yes, we’re using FastConnect on-prem to OCI.

