Zero Downtime Migration – Logical Online Migration of Very Large Databases

Things always work on PowerPoint slides. And they almost always work in our lab environments as well. And recorded demos are also pretty bullet-proof.

But what happens when you have a huge database to migrate? A Very Large Database (VLDB). First, things get more exciting. Second, there are more knobs to turn and the smallest thing might become a big thing. Here are some things to consider when you migrate VLDBs using the Logical Online method.

Existing Data Guard

No need to worry about your existing on-prem Oracle Data Guard environment. The Logical Migration uses Oracle Data Pump and Oracle GoldenGate. Both tools can be used on a database in a Data Guard configuration and do not interfere with Data Guard operations.

But no switchovers or failovers are allowed during replication. GoldenGate can be configured to fully support a Data Guard environment and seamlessly reconnect in case of a Data Guard switchover, but that requires advanced configuration of GoldenGate. Zero Downtime Migration (ZDM) does the GoldenGate configuration for you, but in a basic way that does not support Data Guard switchovers.

Data Pump

Data Pump is used for the initial load of data.

The initial load happens before downtime, so generally you shouldn’t worry too much about the time it takes to perform the Data Pump export and import. However, the longer it takes, the bigger a replication gap GoldenGate will need to catch up on. If the gap is too big, it might become a problem.

Apply a proper degree of parallelism. Rule-of-thumb:

  • On-prem: 2 x number of physical cores
  • OCI: Number of OCPUs

You can change the Data Pump parallel setting using the ZDM response file parameters DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE and DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE.
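
As a sketch, assuming a source host with 12 physical cores and a target DB System with 16 OCPUs (both numbers are made up for the example), the relevant lines in the ZDM response file could look like this:

# source: 2 x 12 physical cores (example value)
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=24
# target: number of OCPUs on the OCI DB System (example value)
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=16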

When you are importing in OCI, consider scaling up on CPUs. More CPUs mean more Data Pump parallel processes and a faster import. Bare Metal and Exadata DB Systems scale online, whereas Virtual Machines need around 10 minutes of downtime.

ZDM applies Data Pump compression automatically. Remember, you don’t need a license for the Advanced Compression Option to use compression when you are migrating with ZDM. Our experience is that MEDIUM is most of the time the best alternative. It provides a good compression ratio at a low CPU footprint. Although HIGH can compress better, it often comes at a much higher CPU footprint. ZDM uses the MEDIUM compression algorithm, and it is not possible to change the Data Pump compression algorithm. For the interested reader, we did a benchmark that compares the different compression algorithms.

ZDM automatically transforms BasicFile LOBs into SecureFile LOBs – which is very good. SecureFile LOBs are superior in many ways, also when it comes to importing. Based on a customer case, we did a benchmark that measures the time it takes to import LOB data. The customer saw a 3x improvement in import time by transforming to SecureFile LOBs – and got the benefits of SecureFile LOBs afterwards. Don’t turn this transformation off. Here are the log entries from the two imports. Notice the elapsed time:

BasicFile LOBs

10-OCT-20 21:43:21.848: W-3 . . imported "SCHEMA"."TABLE" 31.83 GB 681025 rows in 804 seconds using direct_path

SecureFile LOBs

15-OCT-20 18:16:48.663: W-13 . . imported "SCHEMA"."TABLES" 31.83 GB 681025 rows in 261 seconds using external_table

Don’t use import over network link. Use dump files. Performance-wise there are some restrictions when you import over a network link, and these restrictions really hurt on a VLDB.

The Data Pump workers will automatically be started on all nodes if the target database is a RAC database. That should allow for the fastest possible import. If needed, you can force all workers onto the same node using DATAPUMPSETTINGS_DATAPUMPPARAMETERS_NOCLUSTER.
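
If you need to do that, a hedged one-liner in the ZDM response file could look like this (the exact value format should be verified against the response file template):

# assumption: force all Data Pump workers onto a single node
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_NOCLUSTER=TRUE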

But when you export to dump files, remember to ensure that you have adequate disk space on the source database host to hold the dump files. Use mount points that can deliver good write performance, and ideally they should be separated from the storage that holds the database data files.
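
To get an idea of the required disk space before the actual export, you can do an estimate run with Data Pump on the source host. The schema name, user and directory object below are placeholders, and the estimate does not account for compression, so treat it as an upper bound:

$ expdp system schemas=MYSCHEMA estimate_only=yes estimate=blocks directory=DATA_PUMP_DIR logfile=estimate.log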

A Data Pump export does not affect your existing backup operations. But the export and the backup will fight over the same resources, so I recommend suspending major backup operations (level 0+1) until after the export.

Speaking of fighting over resources: you should run the export at off-peak hours. That will allow the export to use many more resources. The export is not a fully consistent export. Each table is consistent, but ZDM does not use FLASHBACK_TIME or FLASHBACK_SCN to make the entire export consistent, which helps avoid ORA-01555 snapshot too old errors during the export. The SCN of each table is recorded in the dump file, and GoldenGate later uses that information to start replication on each table individually from the appropriate SCN.

GoldenGate Replication

How much data will GoldenGate need to transfer from the source database to the GoldenGate hub and from the GoldenGate hub to the target database? GoldenGate stores the replication data in trail files. The trail files are smaller than the redo logs; the size of the trail files is typically 30-40% of the size of the redo logs. Imagine a database generating 15 TB of redo a day. The size of the trail files will be 4.5-6 TB. If you further apply compression on the trail files, you can typically reduce the size to around 1/8. Using our example, the trail files are now 550-750 GB, which is significantly less than the initial 15 TB.

In the ZDM response file there is a parameter called GOLDENGATESETTINGS_EXTRACT_PERFORMANCEPROFILE. The default value is MEDIUM, but you have the option of changing it to HIGH. This should increase the performance of the extract process.

ZDM configures GoldenGate to use Parallel Replicat. This gives you a few options. First, you can control the number of mappers using the parameter GOLDENGATESETTINGS_REPLICAT_MAPPARALLELISM (default is 4). Also, apply parallelism is auto-tuned. You can set a minimum and maximum value using GOLDENGATESETTINGS_REPLICAT_MINAPPLYPARALLELISM and GOLDENGATESETTINGS_REPLICAT_MAXAPPLYPARALLELISM. The defaults are 4 and 50, so I don’t think it will be necessary to change them.
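
To illustrate, this is how the extract and replicat settings could look in the ZDM response file – the values shown are illustrations, not recommendations:

GOLDENGATESETTINGS_EXTRACT_PERFORMANCEPROFILE=HIGH
GOLDENGATESETTINGS_REPLICAT_MAPPARALLELISM=4
GOLDENGATESETTINGS_REPLICAT_MINAPPLYPARALLELISM=4
GOLDENGATESETTINGS_REPLICAT_MAXAPPLYPARALLELISM=50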

By default, ZDM does not configure GoldenGate with DDL replication. Replicating without DDL gives the best performance. If you must have DDL replication in your migration project, you can look at the ZDM response file parameter GOLDENGATESETTINGS_REPLICATEDDL. But be prepared for a performance degradation.

In our webinar, Migrating Very Large Databases, my team and I discuss what to consider when migrating databases with GoldenGate.

Last, you can find more advice in Administering Oracle GoldenGate 19.1, Tuning the Performance of Oracle GoldenGate.

GoldenGate Health Check

Oracle GoldenGate comes with health check scripts that you can use on the source and target database. These will help you monitor and troubleshoot the process. Here is information on how to install and use the health check scripts. You can also find information in Administering Oracle GoldenGate 19.1, Using Healthcheck Scripts to Monitor and Troubleshoot.

Backup of Target Database

When you use logical online migration in ZDM, you create the target database in advance. In contrast to physical migration, the target database is not overwritten. This means that you can configure automatic backup of your target OCI database before you complete the migration and switch over to OCI. You will even have the time and the possibility to test the backup.

I recommend that you configure automatic backup after the Data Pump initial load. This way there are fewer archive logs to back up as well.

Data Guard on Target Database

Similar to automatic backup, you can also create your Data Guard association before the switchover. Likewise, wait until after the initial load has completed.

Conclusion

Even huge Oracle Databases can be migrated to OCI using Zero Downtime Migration. You might need to make a few adjustments to the standard flow, but it is absolutely doable. Your database can be protected by backup and Data Guard from the very second you switch over to OCI. Besides the actual migration, you should also do your best to ensure performance stability once the database is open for business. For that purpose, you should have a look at our webinar Performance Stability, Tips and Tricks and Underscores.

Other Blog Posts in This Series

  • Introduction
  • Install And Configure ZDM
  • Physical Online Migration
  • Physical Online Migration to DBCS
  • Physical Online Migration to ExaCS
  • Physical Online Migration and Testing
  • Physical Online Migration of Very Large Databases
  • Logical Online Migration
  • Logical Online Migration to DBCS
  • Logical Offline Migration to Autonomous Database
  • Logical Online Migration and Testing
  • Logical Online Migration of Very Large Databases
  • Logical Online and Sequences
  • Logical Offline Migration How To Minimize Downtime
  • Logical Migration and Statistics
  • Logical Migration and the Final Touches
  • Create GoldenGate Hub
  • Monitor GoldenGate Replication
  • The Pro Tips

Zero Downtime Migration – Monitor GoldenGate Replication

When you are migrating your Oracle Database with Zero Downtime Migration (ZDM) and the Logical Online method, you are using Oracle GoldenGate to handle the replication. The replication keeps the OCI target database in sync until you can perform the switchover. Thus, it is a vital part of the process. Let’s see how you can monitor the replication.

I have already started a migration which is currently paused at the phase ZDM_MONITOR_GG_LAG.

Monitor Services

Log on to the Oracle GoldenGate Hub via HTTPS. Use the host name, private or public IP address or whatever your network allows.

You are prompted for credentials. Log on as oggadmin – in lowercase. You can find the password in the file ogg-credentials.json on the GoldenGate Hub.
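
A quick way to locate and display the file on the GoldenGate Hub host (the path below is just an example – use whatever find returns):

$ sudo find /u02 -name ogg-credentials.json 2>/dev/null
$ sudo cat /u02/deployments/ogg-credentials.json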

Once logged in, you can ensure that all the services are up and running. Most important are the Administration Server services – on source and target. They handle the replication. You can start and stop the services from the Action drop-down menu.

By clicking on the Port number of each of the services, you can go to the specific console for that service. Again, the Administration Servers are the ones of interest.

Monitor Extract

If you followed the link to the Source Administration Server (by clicking on the port number), you end up at the Administration Server overview page.

Ensure that the extract is running. Further down the page you see a list of events. From the Action drop-down menu, you can start and stop the extract, and you can get into the Details as well by following the Details link.

You can get a lot of details, like extract statistics. Here you can see that the extract has caught one insert into the table locations.

Monitor Replicat

Now that you have monitored the extract, you should also monitor the replication, which is handled by the replicat process. Basically, you do the same as with the extract, but use the link on the target Administration Server.

Here you should look at the replicat process and ensure that it is running.

You can start and stop the services and get into the details as described above.

Troubleshooting

Here is an example of an error. I have created a table in my source database that does not exist in the target. This breaks the replication. I can use the list of critical events to figure out what went wrong.

Also, if you go into the details of the replicat process and switch to the report tab, you can find even more details.

You can also find more logs on the GoldenGate host in the directories:

  • /u02/deployments/Source/etc/conf/ogg/
  • /u02/deployments/Target/etc/conf/ogg/
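
For example, to list the most recently updated files in those directories:

$ ls -ltr /u02/deployments/Source/etc/conf/ogg/
$ ls -ltr /u02/deployments/Target/etc/conf/ogg/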

Conclusion

With Oracle GoldenGate Microservices Architecture it is easy to get an overview of the replication process. There are nice graphical overviews and you can even get detailed information and statistics.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Online Migration

You can migrate a database with Zero Downtime Migration (ZDM) using the Logical Online method. The migration will happen with a combination of Data Pump and Oracle GoldenGate. This way even very large databases can be migrated with very little downtime.

In the introduction blog post I covered the basic characteristics of this approach and explained that Oracle GoldenGate can be used at no extra license cost and it is really simple to use.


With the Logical Online method, you can even target Autonomous Databases – both Oracle Autonomous Transaction Processing (ATP) and Oracle Autonomous Data Warehouse (ADW), shared as well as dedicated. When you target an Autonomous Database, you don’t have access to the underlying database host, so instead of SSH access, ZDM will require SQL*Net connectivity.

Benefits

  • First and most important, you don’t need a license for Oracle GoldenGate to use this approach. Yes, that’s right! You are allowed to use a marketplace edition of Oracle GoldenGate 19c for 183 days to migrate your database with ZDM.

Oracle GoldenGate for Oracle – Database Migrations can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using the following tools: Oracle Zero Downtime Migration and Oracle Cloud Infrastructure Database Migration

  • Even if you don’t have any GoldenGate experience, you can still use this approach. For example, you don’t need to install and configure GoldenGate yourself. The marketplace image creates the GoldenGate installation for you. Afterwards, ZDM configures GoldenGate and will eventually clean up as well.
  • You can migrate directly into a higher release. For example, if your source database is 12.1.0.2, you can migrate into 19c directly. This avoids the extra downtime normally needed to perform an upgrade.
  • You can migrate directly into a PDB. For example, if your source database is a non-CDB, you can migrate directly into a PDB. This avoids the extra downtime normally needed to convert the database into a PDB using the script noncdb_to_pdb.sql.
  • You can use the full power of Data Pump and GoldenGate to transform your data. By default, ZDM will convert any old BasicFile LOBs into SecureFile LOBs during import. But you can apply other transformations as well. You can partition tables in the target database, change the tablespace layout, apply compression, and do anything else that is possible with Data Pump.
  • You can migrate into a different character set (Data Pump and GoldenGate restrictions apply).
  • Your Data Pump export is automatically compressed if the source database is Enterprise Edition. Normally, Data Pump compression would require a license for the Advanced Compression Option. But when you migrate with ZDM to OCI, Oracle allows you to use Data Pump compression without having the Advanced Compression Option.
  • Your Data Pump export is automatically encrypted if the source database is Enterprise Edition. Normally, Data Pump encryption would require a license for the Advanced Security Option. But when you migrate with ZDM to OCI, Oracle allows you to use Data Pump encryption without having the Advanced Security Option.
  • You get recommendations on the migration before it is actually started. ZDM includes the Cloud Premigration Advisor Tool (CPAT), which analyzes your database and gives you specific recommendations that match the target database. For example, if you target an Autonomous Database, CPAT will warn you if you have unsupported objects.
  • You can migrate a SE2 database into OCI using this approach as well.
  • You can build your target database in advance and configure backup upfront as well. You can test your backup/recovery strategy before you complete the migration.
  • You can build a standby database in advance as well. Your new OCI target database can be protected by Data Guard from the very second you perform the switchover. I would recommend doing the initial Data Pump load first, and then building the standby database. This ensures that a minimum of changes has to be applied via redo.

Considerations

  • The time zone file version in the OCI target database must be the same as or higher than that in the source database. Typically, this is not a problem if you provision a brand-new DB System. But if you plan to use an existing CDB in OCI, do check the time zone file upfront:
SQL> select * from v$timezone_file;
  • In My Oracle Support you can find a list of recommended patches to apply on the source database when using Oracle GoldenGate. There is one for 11g and one for 12c and newer. You don’t have to apply those patches, but it is a recommendation. However, if you run into issues, you potentially have to patch your source database.
  • Most databases can be migrated with Oracle GoldenGate. Only very exotic data types are not supported. Be sure to check the documentation.
  • When you use Data Pump to migrate you will lose a lot of the information that is stored in the data dictionary, like
    • AWR
    • SQL Plan Baselines
    • SQL Profiles
    • SQL Patches
  • You can still transfer that information, but you need to use other tools. If this is relevant to you, I suggest that you watch our webinar Performance Stability, Tips and Tricks and Underscores where it is covered in detail.
  • Depending on whether you use a full or schema mode export you need to take care of your public objects afterwards. It could be synonyms or database links.
  • For a huge database you should be prepared for the time it takes to perform the Data Pump export and import. No need to worry too much because GoldenGate will keep track of the changes. But for a 100 TB database you should not start the job the day before the switchover is supposed to take place.
  • It is not recommended to execute DDL statements while GoldenGate is replicating changes. The extract and replicat processes that are created by ZDM do not replicate DDL statements. If you, for example, add a table after the initial Data Pump load, it will not get replicated by GoldenGate. However, GoldenGate is a very powerful tool, and you can configure it to replicate DDL as well. But you will need to adjust the GoldenGate user privileges and change the parameter files yourself, as sketched after this list. In the documentation you can read more about the possibilities of DDL replication and how to configure it.
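
As an illustration only – the exact setup should be taken from the GoldenGate documentation – enabling DDL replication typically involves adding a DDL clause to the parameter files, for example:

-- hypothetical addition to the extract parameter file
DDL INCLUDE MAPPED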

Fallback

Using this approach there is no straightforward way of falling back to the source on-prem database. You could use GoldenGate to reverse the replication when you switch over to the OCI target database. However, it is not configured automatically by ZDM, so this is where you would need to know GoldenGate and do the required configuration yourself.

Conclusion

You get a lot more options when you use the Logical Online method to migrate your database. Especially, the possibility of migrating directly into a higher release or into a PDB is very attractive. Plus, you can use GoldenGate during the migration at no extra license cost.

However, the flexibility comes at a cost. There is a little more configuration, and it is not as straightforward as the Physical Online method. But it is not complicated.

Want to Know More

If you want to know more about using Data Pump and GoldenGate for database migrations, I suggest that you take a look at our webinar Migration Strategies – Insights, Tips and Secrets.

In addition, these links contain additional useful information:

Other Blog Posts in This Series