Zero Downtime Migration – Logical Online and Sequences

When migrating with Oracle Zero Downtime Migration (ZDM) and the Logical Online method, you use Data Pump for the initial load and Oracle GoldenGate to keep the databases in sync. When using Oracle GoldenGate, you need to pay attention to your sequences; otherwise, you will probably end up with a lot of ORA-00001 unique constraint violated errors.

The Problem

I think it is best to illustrate with an example:

On the source database, sequence s1 currently has a value of 100. ZDM starts and performs the export for the initial load. On the target database, the Data Pump import creates sequence s1 with current value 100.

A user now inserts a row in the source database and uses the sequence:

insert into orders values(s1.nextval ...

s1.nextval takes the next number from the sequence, and the statement now looks like this:

insert into orders values(101 ...

GoldenGate replicates the statement. It does not take a new value from the sequence on the target. Doing so would corrupt the data because there is no guarantee that the same number would be taken. Instead, it uses the statement as it was executed on the source:

insert into orders values(101 ...

Now it is time to complete the migration. Users are disconnected, and GoldenGate synchronizes the final changes before users connect to the target database.

A user inserts another record into orders – this time in the target database:

insert into orders values(s1.nextval ...

s1.nextval takes the next number from the sequence on the target database. The sequence was created with current value 100, so the next value is 101:

insert into orders values(101 ...

101 has already been used, and typically there is a primary key or unique key constraint on such a column.
This causes ORA-00001 unique constraint violated.

I talked to our GoldenGate experts, and it is a common pitfall. During migrations, sequences are often forgotten, and soon after the migration users start seeing ORA-00001 unique constraint violated errors.

The Solution

After the switchover, you must ensure that the sequences have a proper value. The sequences must be forwarded or advanced. You can do that in many ways. Here are some ideas:

Recreate sequences

Right after the migration has been completed and GoldenGate has done the final synchronization, but before users connect, you can recreate the sequences.

  • Drop the sequences on the target database.
  • Recreate the sequences either using DBMS_METADATA or Data Pump.
  • Be sure to adjust the privileges on the sequences.

If you can wrap all those changes into a shell script, you can have ZDM execute it automatically as part of the migration flow. Check out Pro Tip 5: Adding Custom Scripts from a previous blog post.
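A minimal sketch of that approach could look like this (assuming a sequence s1 owned by a schema called app and a grantee called app_user – names and privileges are examples only):

-- On the source: extract the DDL; DBMS_METADATA includes a START WITH clause
-- that reflects the current value of the sequence
select dbms_metadata.get_ddl('SEQUENCE', 'S1', 'APP') from dual;

-- On the target: drop the stale sequence, run the extracted DDL, and restore privileges
drop sequence app.s1;
-- <run the CREATE SEQUENCE statement returned by DBMS_METADATA here>
grant select on app.s1 to app_user;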

Forward or advance the sequence

Another approach is to forward or advance the sequence. Increment the sequence on the target database until the next value is high enough. What do I mean by high enough?

  • The value of the same sequence on the source database.
  • If you know the sequence is only used to generate numbers for one column (let’s call it t1.c1), then get the highest value from that column SELECT max(c1) FROM t1. Typically, such a column is a primary key column with a unique index underneath, so a SELECT max should go very fast.

Now that you have the desired new value for the sequence, you can calculate how much to increment by. Example: The source database sequence (DBA_SEQUENCES.LAST_NUMBER) is 1500 and the target database sequence is 1000, so you need to increment by 500:

alter sequence seq1 increment by 500;
select seq1.nextval from dual;
alter sequence seq1 increment by 1;

If you use the SELECT max approach, then you need to increment by 501. In any case, better increment by too much than too little.
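If you prefer to script the advancing, here is a minimal PL/SQL sketch (the sequence app.s1 and the desired value are examples only – in a real migration you would take the desired value from the source database or from the SELECT max approach described above):

declare
   l_desired number := 1500;   -- value the sequence must reach (example)
   l_current number;
   l_dummy   number;
begin
   -- current value of the sequence on the target database
   select last_number
     into l_current
     from dba_sequences
    where sequence_owner = 'APP'
      and sequence_name  = 'S1';
   if l_desired > l_current then
      -- bump the increment, consume one value, then set the increment back to 1
      execute immediate 'alter sequence app.s1 increment by ' || to_char(l_desired - l_current);
      execute immediate 'select app.s1.nextval from dual' into l_dummy;
      execute immediate 'alter sequence app.s1 increment by 1';
   end if;
end;
/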

Similar to the former option, you can put the commands into a shell script and have ZDM execute it.

Replicate sequence changes

GoldenGate also has the possibility of replicating the changes to the sequences. If you decide to use this with ZDM, it will require additional configuration of GoldenGate, which is why I prefer any of the former options. If you want to know more about it, check out Oracle GoldenGate 19.1 – Using Oracle GoldenGate for Oracle Database, chapter 8.1 Installing Support for Oracle Sequences.


If your database uses sequences and you are using Oracle GoldenGate as part of your migration, ensure to handle your sequences properly. Otherwise, you will fall into a typical pitfall of GoldenGate migrations.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Migration and Statistics

If you decided to do a logical migration of your Oracle Database with Zero Downtime Migration (ZDM), here is something important about optimizer statistics. You must manually take care of the optimizer statistics on the target database after the Data Pump import – either by regathering the statistics or by transporting them from the source database.


Whenever I talk about migration of Oracle Database with Data Pump, I always mention that it is best practice to exclude optimizer statistics from the Data Pump export. Here is how you do it:

$ expdp ... exclude=statistics

Why is that recommended? Data Pump is not very good at extracting the optimizer statistics. There is nothing wrong with the statistics when they are imported. But it can take a very long time to do the export of statistics.

ZDM and Data Pump

The development team behind ZDM wanted the tool to be easy to use. Also, they wanted ZDM to use all the best practices that come with the various other tools that ZDM uses. When they talked to us about Data Pump, we told them to exclude statistics.

BUT – this also leaves you in a situation where you have a database completely without optimizer statistics. It goes without saying that it is a disaster waiting to happen. You must ensure that optimizer statistics are present on the target database before allowing the users to connect to it.


One option is to regather optimizer statistics on the target database after the Data Pump import. When the Data Pump import completes, and before you proceed with the switchover, it is time to regather the statistics. Typically, you would start ZDM something like this:

$ zdmcli migrate database .... -pauseafter ZDM_MONITOR_GG_LAG

It will start by performing the initial load of the database with Data Pump. Then it will configure GoldenGate before it pauses – waiting for your signal to complete the migration. At this time, use DBMS_STATS to gather statistics:

SQL> exec dbms_stats.gather_database_stats;

This has some drawbacks:

  • It requires time and resources – which might not be the biggest problem. The source database is still open for business. We haven’t performed the switchover yet.
  • Column usage information (table COL_USAGE$) is not populated, so in some cases, no histograms will be created. This will happen if the database is supposed to automatically determine whether histograms are needed (method_opt includes size auto). In that case, you can merge the column usage information from another database via a database link, which can be a good idea if your application is depending on histograms.
  • The table, schema, or database statistics preferences are not present. It could be degree, method_opt, stale_pct or any other preferences that you can set with DBMS_STATS.SET_TABLE_PREFS (or schema or database-wide preferences). These preferences can be transferred to the new system, which is what I will talk about next.

Transferring Statistics

Another option is to transfer the statistics using DBMS_STATS. We have covered this in detail in a webinar, so I suggest you watch Performance Stability, Tips and Tricks and Underscores for all the details.

In short,

  1. The optimizer statistics in the source database are extracted from the data dictionary and stored in a transportable format in a regular heap table (referred to as a staging table).
  2. Using Data Pump, you move that table to the target database.
  3. Then you put the statistics into the data dictionary of the target database so that optimizer can use them.

Pro tip: You should perform step #1 before you start ZDM and store the staging table in one of the schemas that you are migrating with ZDM. That way, you don’t have to move the table manually. It is moved by ZDM together with the real data.
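As a sketch, the schema-level procedures could be used like this (the schema APP and the staging table name STATTAB are examples only):

-- On the source, before starting ZDM: create the staging table and fill it
exec dbms_stats.create_stat_table(ownname => 'APP', stattab => 'STATTAB');
exec dbms_stats.export_schema_stats(ownname => 'APP', stattab => 'STATTAB', statown => 'APP');

-- APP.STATTAB is then moved to the target by ZDM/Data Pump together with the rest of the schema

-- On the target, after the Data Pump import: load the statistics into the data dictionary
exec dbms_stats.import_schema_stats(ownname => 'APP', stattab => 'STATTAB', statown => 'APP');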

One thing to be aware of is that the table, schema, or database statistics preferences are not transferred when you use e.g. DBMS_STATS.EXPORT_TABLE_STATS. There are dedicated procedures for transferring the statistics preferences.


You must figure out how to move the optimizer statistics into your target database when you use ZDM to perform logical migrations. If not, your target database will be without optimizer statistics which is a disaster waiting to happen.


You might now ask: if Data Pump is bad at exporting statistics, and there are already better ways available in the database, why don't we change the Data Pump code? And you are right – but so far, other things have been prioritized. I would love to see this one day fully embedded in Data Pump.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Online Migration and Testing

How can you test your OCI database before going live? With Logical Online migration it is possible using Flashback Database, restore or cloning. Let’s explore the options.

Flashback Database

Using Flashback Database is the easiest option and it is supported on all target platforms except Autonomous Database (shared and dedicated). In addition, your target database must be Enterprise Edition.

Does Oracle GoldenGate support Flashback Database? You can find the answer in the MOS note Does Goldengate Support Oracle RDBMS Flashback Features? (Doc ID 966212.1):

In a situation where there are only Replicats on a system, then FLASHBACK DATABASE… is fully supported if the Replicats are all using a checkpoint table and the trail files are available that go as far back as the flashback.

When you do Logical Online migrations, you only have Replicat processes in your target database. In addition, checkpoint tables are enabled by default. All good!

Flashback Database – How To

  1. Stop replicat process by logging on to the GoldenGate hub and navigate to the Target Administration Server: How to stop replicat process in Oracle GoldenGate Microservices Architecture Hub
  2. Set a guaranteed restore point in the target PDB (named tgtpdb):
alter session set container=tgtpdb;
create restore point grp4test guarantee flashback database;
  3. Do your tests.
  4. Revert the changes by issuing a FLASHBACK PLUGGABLE DATABASE command:
alter session set container=cdb$root;
alter pluggable database tgtpdb close immediate;
flashback pluggable database tgtpdb to restore point grp4test;
alter pluggable database tgtpdb open resetlogs;
  5. Restart replicat process: How to start replicat process in Oracle GoldenGate Microservices Architecture Hub

The above procedure uses flashback of the pluggable database. This feature was introduced in Oracle Database 12.2. If your target database is 12.1 or older, you have to flash back the entire CDB. If your target database is a non-CDB database, you have to flash back the entire database.
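When you are done testing and the replicat process is running again, remember to drop the guaranteed restore point; otherwise, the database keeps flashback logs around to honour it:

alter session set container=tgtpdb;
drop restore point grp4test;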


Backup/restore is also an option. In OCI it is easy to back up and restore; however, it does take longer than using Flashback Database. But the good thing is that you get to test your OCI backup and recovery strategy.

For Autonomous Database you can use the automatic backups which are enabled by default. In the other Database Cloud Offerings, you must enable automatic backup yourself.

Restore – How To

  1. Stop replicat process (see above)
  2. Create backup (or rely on automatic backup)
  3. Do your tests
  4. Restore backup
  5. Restart replicat process (see above)


Cloning the database or the entire DB System is also an option. A benefit is that the GoldenGate replication can continue while you are doing the tests. When you use Flashback Database or restore, the replication must be stopped; trail files will accumulate on disk, and at some point the target database will be very busy catching up with the lag. For very large and active systems it might be desirable to work on a clone. However, cloning to a new system means that you are no longer testing the specific database that will be your future production database. Also, cloning involves creating new DB Systems, which has a cost.

Clone – How To

  1. Clone database or DB System
  2. Do your tests
  3. Discard database or DB System


Before going live in your new OCI database, you should test it. With Logical Online method it is easy, and you can use well-known techniques. If you combine it with the recommendations from our webinar Performance Stability, Tips and Tricks and Underscores, you are well underway towards a successful migration.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration of Very Large Databases

Following the blog post on migrating Very Large Databases (VLDBs) using Logical Online method, let’s touch upon the Physical Online method as well.

Existing Data Guard

VLDBs are very often protected by Data Guard with one or more standby databases. When you start a migration with Zero Downtime Migration (ZDM), you don't want to compromise your existing Data Guard setup. If something happens during the preparation of the migration to OCI, you still want to be able to switch over to an on-prem standby database. Something like this:

Setup with on-prem Oracle Database Data Guard and standby database in OCI

When ZDM configures the OCI standby database, it is added as a manually managed standby database, i.e. it does not use Data Guard broker. It does not interfere with your Data Guard broker setup, and the setup remains valid. A few things to observe:

  • When ZDM is working, no switch-overs are allowed; a switch-over will cause the process to error out.
  • By working I mean – from the second you start the ZDM migration and until it is paused at ZDM_CONFIGURE_DG_SRC.
  • When ZDM is paused, you can do as many switch-overs as you like. Just ensure that the original source database becomes the primary database again as soon as possible.
  • When you need to complete the migration, the source database must be the primary database, and no switch-overs are allowed. This does make sense because in this last phase, ZDM is switching over to the OCI target database.
  • What about fail-overs? A fail-over means loss of data, and to accept that you need to open the database with RESETLOGS. This causes all sorts of havoc in your Data Guard setup. You are back to square one.

At which phases can you switch over to your on-prem standby database without jeopardizing the ZDM migration?


But you must remove your on-prem Data Guard setup before you complete the migration with ZDM, i.e. before you resume the paused job. ZDM will not be able to finish properly if there is a conflicting Data Guard configuration. To remove your existing Data Guard setup:

  1. ZDM migration is currently paused at ZDM_CONFIGURE_DG_SRC.
  2. Connect using Data Guard CLI (dgmgrl) to source on-prem database (primary). Remove configuration:
remove configuration
  3. Connect using SQL*Plus (sqlplus) to source on-prem database (primary). Remove Data Guard broker configuration:
alter system set dg_broker_start=FALSE scope=memory;
alter system reset dg_broker_start;
alter system reset dg_broker_config_file1;
alter system reset dg_broker_config_file2;
  4. Check log_archive_config. It must contain DB_UNIQUE_NAME of the source on-prem database (primary) and the OCI target database – nothing else. Using the above example, it should be:
alter system set log_archive_config='dg_config=(SALES_1,SALES_oci)' scope=both;
  5. Check log_archive_dest_n parameters. Ensure that Data Guard broker removed the correct ones. There should only be one remote redo log destination – and that is the OCI target database – all others should be removed. If you need to remove one, use this command (remember to adjust the suffix):
alter system set log_archive_dest_2='' scope=memory;
alter system reset log_archive_dest_2;
  6. Ensure that log_archive_config is set correctly (like #4) in the OCI target database:
alter system set log_archive_config='dg_config=(SALES_1,SALES_oci)' scope=both;
  7. Finally, complete migration:
$ZDM_HOME/bin/zdmcli resume job -jobid <id>

The Backup

ZDM needs a full backup that can be restored on your target.

  • DBCS: If your target database is one of the OCI Cloud Services (Virtual Machine, Bare Metal or Exadata DB System), ZDM will need to take a new full backup. Existing backups can’t be used.
  • ExaCC or Exadata on-prem: You can either take a new full backup or use an existing backup that is made available on disk. In addition, if you are so fortunate to have a Zero Data Loss Recovery Appliance (ZDLRA), you don’t need to take a backup. ZDM can just restore directly from ZDLRA.

If you are targeting a DBCS, DATA_TRANSFER_MEDIUM is set to OSS (Object Storage Service). The backup is stored in Object Storage using Oracle Database Cloud Backup Module for OCI. The backup on the source database and the restore on the target database happen via a special sbt channel that streams the backup directly to and from Object Storage. This means:

  • The backup never hits the disk, so you don’t need additional disk space to hold the backup.
  • The duration of the backup depends on your network speed to OCI. Since the backup is streamed directly to OCI, the network can become a bottleneck. If you have a slow connection to OCI, the backup will run equally slowly. The same applies to the restore; however, the target database is already in OCI and has a good connection to Object Storage.

ZDM will by default use 10 RMAN channels for the restore and the backup. With your knowledge of the source database, you might know better. You can tweak the number of channels in the response file. Look for the parameters SRC_RMAN_CHANNELS and TGT_RMAN_CHANNELS.

Based on your knowledge or testing, you can determine which RMAN Compression algorithm gives the best benefit on your database. You can adjust the compression algorithm in the response file using the parameter ZDM_RMAN_COMPRESSION_ALGORITHM. The default is MEDIUM, which normally gives the best balance between compression ratio and CPU time. And remember, RMAN Compression normally requires a license for the Advanced Compression Option, but when you migrate with ZDM, you can use it for free.
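In the response file, that could look like this (the channel counts below are just examples – the default is 10, and MEDIUM is the default compression algorithm):

SRC_RMAN_CHANNELS=20
TGT_RMAN_CHANNELS=20
ZDM_RMAN_COMPRESSION_ALGORITHM=MEDIUM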

While ZDM is taking a backup of the source database, no other backups should be running. Be sure to put your regular backups, including archive backups, on hold.

Redo Apply

In the source database, you should keep archive logs on disk until the target database has been restored, Data Guard has been configured, and the target database has caught up with redo apply. If you have a slow network connection and a huge database, it can take days until the backup has completed, the restore has completed, and redo apply has caught up.

  • Imagine you start the backup at sequence 100.
  • The restore of the target database finishes two days later. The source database is now at sequence 200.
  • ZDM configures Data Guard and starts redo transfer and redo apply. The source database is now at sequence 220.
  • Sequences 100-220 must be available on disk on the source database host, so the source (primary) database can transfer them to the target (standby) database.

It is not uncommon for VLDBs to generate a double-digit number of TB of redo on a daily basis. Just the other day I talked to a customer whose database generated 15 TB of archive logs a day.

First, you must be able to transfer the redo from the source database (primary) to the target database (standby). This is simple math: If you have 15 TB of redo a day, you should be able to transfer that using a 1.5 Gbps connection (amount of redo / 24 / 60 / 60 * 8). If transferring redo becomes a problem, you can look into using redo transport compression. This can reduce the amount of data that must be transferred at the cost of CPU cycles. You can read more about it in the MOS note Redo Transport Compression in a Data Guard Environment (Doc ID 729551.1). I learned from colleagues in the Maximum Availability Architecture (MAA) team that TDE Tablespace Encryption and redo transport compression don't play very well together. If your source database is encrypted, you should not expect that much benefit from redo transport compression.
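If you are unsure how much redo your source database generates, a simple query like this can give you an idea (a sketch only – it assumes archived logs are representative and that dest_id 1 is your local archive destination):

-- daily archived redo volume in GB on the source database
select trunc(completion_time) as day,
       round(sum(blocks * block_size)/1024/1024/1024) as redo_gb
  from v$archived_log
 where dest_id = 1
 group by trunc(completion_time)
 order by 1;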

Next, redo must be applied on the target database. Is the target database capable of applying redo that fast? On Exadata the answer is most likely: YES – but as always in IT, it depends.

Redo Apply benchmark of Oracle Database Data Guard

The numbers in the above graph come from Redo Apply Best Practices – Oracle Data Guard and Active Data Guard. Based on your database release and the type of workload you have in the database, you can see the amount of redo that can be applied daily (in TB). The last two columns use Multi-Instance Redo Apply (MIRA) with either two or four active RAC nodes. The numbers were generated on an Exadata.

Backup of Target Database

How do you backup your target database in OCI? You want to have a valid backup from the very second that you switch over to the target database.

The target placeholder database that you originally created will be overwritten by ZDM. This means that you can't configure and enable automatic backup in OCI in advance. You must wait until the migration has completed before you enable automatic backup.

You could:

  • Extend the downtime window to allow automatic backup to be configured after the migration. Also, allow enough time for the first backup to complete.
  • Or, do your own backup in OCI. This is more cumbersome but will allow you to open the database for business immediately after the switchover. But you are in charge of the backup now. All the various bits and pieces are available:
    • Original backup is still in object storage.
    • Archive logs are on disk – you can back them up manually.
    • Perform incrementals if needed – put them somewhere safe.
    • In case of emergency – glue it all together

Data Guard on Target Database

Similar to automatic backup, you can’t create a Data Guard Association until ZDM has completed the migration. The cloud tooling does not support creating the standby database through a cascading standby. This means that you can’t build your OCI standby database until after ZDM has completed the migration – and the OCI database is the primary database. You can start to create the OCI Data Guard as soon as ZDM is done, but

  • You must tolerate that the OCI Data Guard is missing
  • Or, take downtime


Even huge Oracle Databases can be migrated to OCI using Zero Downtime Migration. You might need to make a few adjustments from the standard flow, but it is absolutely doable. Automatic backup and data guard can’t be created in OCI until after the migration. This might force you to take downtime. Besides the actual migration, you should also do your best to ensure performance stability once the database is open for business. For that purpose, you should have a look at our webinar Performance Stability, Tips and Tricks and Underscores.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Online Migration of Very Large Databases

Things always work on PowerPoint slides. And they almost always work in our lab environments as well. And recorded demos are also pretty bullet-proof.

But what happens when you have a huge database to migrate? A Very Large Database (VLDB). First, things get more exciting. Second, there are more knobs to turn and the smallest thing might become a big thing. Here are some things to consider when you migrate VLDBs using the Logical Online method.

Existing Data Guard

No need to worry about your existing on-prem Oracle Data Guard environment. The Logical Migration uses Oracle Data Pump and Oracle GoldenGate. Both tools can be used on a database in a Data Guard configuration and do not interfere with Data Guard operations.

But no switchovers or failovers are allowed during replication. GoldenGate can be configured to fully support a Data Guard environment and seamlessly reconnect in case of a Data Guard switchover. But it requires advanced configuration of GoldenGate. Zero Downtime Migration (ZDM) does the GoldenGate configuration but does so in a basic way that does not support Data Guard switchovers.

Data Pump

  • Apply a proper degree of parallelism. Rule-of-thumb:

    • On-prem: 2 x number of physical cores
    • OCI: Number of OCPUs
  • When you are importing in OCI, consider scaling up on CPUs. More CPUs, faster import. Bare Metal and Exadata DB Systems scale online, whereas Virtual Machines need around 10 minutes of downtime.

  • ZDM applies Data Pump compression automatically. Remember you don’t need a license for the Advanced Compression Option to use compression when you are migrating with ZDM. Our experience is that MEDIUM is most of the time the best alternative. It provides a good compression ratio at a low CPU footprint. Although HIGH can compress better, it often comes at a much higher CPU footprint. I would recommend HIGH only in situations where the dump file size really matters (lack of disk space or a really slow network connection). Below you can find two benchmarks we did that compare the different compression algorithms: Comparing Data Pump compression algorithms

  • ZDM automatically transforms BasicFile LOBs into SecureFile LOBs – which is very good. SecureFile LOBs are superior in many ways, also when it comes to importing. Based on a customer case, we did a benchmark that measures the time it takes to import LOB data. The customer saw a 3x improvement by transforming to SecureFile LOBs – and got the benefits of SecureFile LOBs afterwards. ZDM transforms to SecureFile LOBs automatically. Don’t turn it off. Here are the log entries from the two imports. Notice the elapsed time:

# BasicFile LOBs
10-OCT-20 21:43:21.848: W-3 . .  imported "SCHEMA"."TABLE"      31.83 GB  681025 rows in 804 seconds using direct_path
# SecureFile LOBs
15-OCT-20 18:16:48.663: W-13 . . imported "SCHEMA"."TABLES"     31.83 GB  681025 rows in 261 seconds using external_table
  • Don’t use import over network link. Use dump files. Performance-wise there are some restrictions when you import over network link. These restrictions really hurt on a VLDB.

  • But when you export to dump files, remember to ensure you have adequate disk space on the source database host to hold the dump file. Use mount points that can deliver good write performance and ideally, they should be separated from the storage that holds the database data files.

  • A Data Pump export does not affect your existing backup operations. But the export and the backup will fight over the same resources, so I recommend suspending major backup operations (level 0+1) until after the export.

  • Speaking of fighting over resources. You should run the export at off peak hours. That will allow the export to use many more resources. The export is not a fully consistent export. Each table is consistent, but ZDM does not use FLASHBACK_TIME or FLASHBACK_SCN to make the entire export consistent. This helps avoid ORA-01555 snapshot too old during the export. The SCN of each table is recorded in the dump file and GoldenGate later on uses that information to start replication on each table individually from the appropriate SCN.

GoldenGate Replication

How much data will GoldenGate need to transfer from source database to GoldenGate hub and from GoldenGate hub to target database? GoldenGate stores the replication data in trail files. The trail files are smaller than redo logs. The size of trail files is typically 30-40 % of the size of the redo logs. Imagine a database generating 15 TB of redo a day. The size of the trail files will be 4.5-6 TB. If you further apply compression on the trail files, you can typically reduce the size to around 1/8. Using our example, the trail files are now 550-750 GB. Which is significantly less than the initial 15 TB.

In the ZDM response file there is a parameter called GOLDENGATESETTINGS_EXTRACT_PERFORMANCEPROFILE. The default value is MEDIUM, but you have the option of changing it to HIGH. This should increase the performance of the extract process.

ZDM configures GoldenGate to use Parallel Replicat. This gives you a few options. First, you can control the number of mappers using the parameter GOLDENGATESETTINGS_REPLICAT_MAPPARALLELISM (default is 4). Also, apply parallelism is auto-tuned. You can set a minimum and maximum value using GOLDENGATESETTINGS_REPLICAT_MINAPPLYPARALLELISM and GOLDENGATESETTINGS_REPLICAT_MAXAPPLYPARALLELISM. The defaults are 4 and 50, so I don't think it will be necessary to change them.
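As an illustration, the relevant response file settings could look like this (the values shown are the defaults mentioned above, except the extract performance profile, which is set to HIGH):

GOLDENGATESETTINGS_EXTRACT_PERFORMANCEPROFILE=HIGH
GOLDENGATESETTINGS_REPLICAT_MAPPARALLELISM=4
GOLDENGATESETTINGS_REPLICAT_MINAPPLYPARALLELISM=4
GOLDENGATESETTINGS_REPLICAT_MAXAPPLYPARALLELISM=50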

Last, you can find more advice in Administering Oracle GoldenGate 19.1, Tuning the Performance of Oracle GoldenGate.

GoldenGate Health Check

Oracle GoldenGate comes with health check scripts that you can use on the source and target database. This will help you monitor and troubleshoot the process. Here is information on how to install and use the health check scripts. You can also find information in Administering Oracle GoldenGate 19.1, Using Healthcheck Scripts to Monitor and Troubleshoot.

Backup of Target Database

When you use logical online migration in ZDM, you create the target database in advance. In contrast to physical migration, the target database is not overwritten. This means that you can configure automatic backup of your target OCI database before you complete the migration and switch over to OCI. You will even have time to test the backup.

I recommend that you configure automatic backup after the Data Pump initial load. This way there are fewer archive logs to back up as well.

Data Guard on Target Database

Similar to automatic backup, you can also create your Data Guard association before the switchover. Likewise, wait until after the initial load has completed.


Even huge Oracle Databases can be migrated to OCI using Zero Downtime Migration. You might need to make a few adjustments from the standard flow, but it is absolutely doable. Your database can be protected by backup and Data Guard from the very second you switch over to OCI. Besides the actual migration, you should also do your best to ensure performance stability once the database is open for business. For that purpose, you should have a look at our webinar Performance Stability, Tips and Tricks and Underscores.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration and Testing

Testing is an essential part of any migration project of your Oracle Database. With Zero Downtime Migration (ZDM) and the Physical Online method it has become a lot easier. Before going live (i.e. doing the Data Guard switchover) you can test on your production data on your future production system – the OCI database. That’s cool.


For the duration of your test, convert the OCI target database into a snapshot standby database. A short recap on snapshot standby databases:

  • A snapshot standby database is a type of updatable standby database that provides full data protection for a primary database.
  • A snapshot standby database receives and archives, but does not apply, redo data from its primary database. Redo data received from the primary database is applied when a snapshot standby database is converted back into a physical standby database, after discarding all local updates to the snapshot standby database.
  • A snapshot standby database diverges from its primary database over time because redo data from the primary database is not applied as it is received. Local updates to the snapshot standby database cause additional divergence. The data in the primary database is fully protected however, because a snapshot standby can be converted back into a physical standby database at any time, and the redo data received from the primary is then applied.

The plan is:

  1. Build the standby database in OCI.
  2. Migration is currently paused at ZDM_CONFIGURE_DG_SRC (-pauseafter ZDM_CONFIGURE_DG_SRC).
  3. Convert the OCI target database – which is a physical standby – into a snapshot standby.
  4. Do your test. Do whatever you want. The database is protected by Flashback Database, so you can insert and delete data, truncate, add tablespaces, you name it.
  5. When you are done with your tests, convert the OCI target database back into a physical standby database. Implicitly, the database is flashed back, and now the redo is getting applied again.
  6. After a little while the target database is now back in sync again.
  7. Complete migration at your will.


To convert the standby database to a snapshot standby database using broker (CDB19_fra3zt is the target DB_UNIQUE_NAME):

convert database 'CDB19_fra3zt' to snapshot standby;

Or manually:

alter database recover managed standby database cancel;
shutdown immediate
startup mount
alter database convert to snapshot standby;
alter database open;

Now, the database is opened in READ WRITE mode and you can use it for testing. To convert back to a physical standby database:

convert database 'CDB19_fra3zt' to physical standby;

Or manually:

shutdown immediate
startup mount
alter database convert to physical standby;
shutdown immediate
alter database recover managed standby database disconnect from session;

You can even do this multiple times if you want several test runs on your OCI database.


One of the really cool features of ZDM is that you can use your standby database for testing in OCI – before you decide to do the switchover. You can achieve this by converting to a snapshot standby database.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration to ExaCS

This will be an easy blog post. To migrate your Oracle Database to Exadata DB System (ExaCS), just follow this procedure from the DBCS blog post. Plus, execute these two commands on the target after the migration:

[root@tgthost]$ dbaascli registerdb prereqs --dbname [db_name] --db_unique_name [db_unique_name]
[root@tgthost]$ dbaascli registerdb begin --dbname [db_name] --db_unique_name [db_unique_name]

More Details, Please

Granted – the above statement is bold, and it is almost true. There are a few important details to share. First, have a look at Additional Information for Migrating to Exadata Cloud Service, which you find in the Release Notes.

Target Environment

To get the full benefits of Exadata, you should be running RAC databases. Exadata and RAC are a perfect match, but it is up to you to decide. If your source database is already a RAC database (or RAC One Node), you must migrate to a RAC database. However, if your source database is a single instance database, you have the option to either stay single instance or go RAC. If you go RAC, just create the target placeholder database as a RAC database, and everything else will happen automatically.

You must create a placeholder database on the target system. The placeholder database gets overwritten by ZDM during the migration, but it is initially used by ZDM to get information on how you want to configure your target database in OCI. For example, the migrated database will be placed in the same Oracle Home as the target placeholder database. Also, the architecture is determined this way. In other words, if you create the target placeholder database as a RAC database, then your source database is automatically converted to a RAC database during migration. If you create a single instance placeholder database, you get a single instance database.

Just like any other migration, when creating the placeholder database there are some things you should be aware of. On the OCI webpage you have to:

  • Set Database name to the DB_NAME of the source database.
  • Set Database version to the same as the source database.
  • Ensure the patch level of the Oracle Home matches that of the source system – or is higher.
  • Ensure that the Password matches the SYS password of the source database.

When using Zero Downtime Migration (ZDM) to migrate to Exadata DB System (ExaCS) be sure to create the target placeholder database in the correct manner.

You can choose your own DB_UNIQUE_NAME – it should differ from the source database. Select an Oracle Home that has the same or higher patch level than your source database. I recommend always migrating to the latest Release Update. If necessary, ZDM will automatically invoke datapatch after the switchover. The other parameters don’t matter – the database gets overwritten anyway by ZDM. Also, be aware that when using the OCI webpage you get a RAC database. There is no option to change it. But it is after all the perfect match for Exadata anyway.

Now, if you want more advanced options – like creating a single instance database – you can’t use the OCI webpage. You will have to use either dbaasapi or dbaascli. That gives you full control over the options – but they are not as easy to use as the webpage.

It’s A Wrap

I have created a video on YouTube that demos a migration to Exadata DB System.

Speaking of YouTube, I suggest that you subscribe to the Oracle Database Upgrades and Migrations YouTube channel so you never miss anything.

The Exadata Cloud Service is an awesome platform and it is really easy to migrate to it using Zero Downtime Migration. And converting to RAC is even easier.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration to DBCS

Let me show you how you can migrate an Oracle Database into OCI. My source database is a CDB, and I want to migrate to the same version using the Physical Online method which uses Data Guard.

This procedure can be used to migrate to:

  • VM DB Systems
  • Bare Metal DB Systems
  • Exadata DB System (ExaCS)

If you want to migrate to Exadata Cloud at Customer and Exadata on-prem, you can use a lot from this post, but there are some differences which I will not cover.


  1. I need a ZDM service host. It is called zdmhost and the user is called zdmuser.
  2. My source database is a CDB called SALES (DB_NAME), and the host is named srchost.
  3. I want to migrate to a VM DB System, and it is already created. I name it tgthost. I use the same name for the database, SALES. I must use ASM as storage management software.

Overview of the components in this demo

Prepare Source

Ensure the database is in ARCHIVELOG mode:

select log_mode from v$database;

If the database version is 12.2 or higher, a TDE keystore must be present – even if the database is not encrypted. A license is not needed to create a TDE keystore – not until data is actually encrypted. The keystore must be OPEN, and the type must be either AUTOLOGIN, LOCAL_AUTOLOGIN or PASSWORD. In a CDB, this applies to CDB$ROOT and all PDBs:

SELECT con_id, status, wallet_type FROM v$encryption_wallet;

If the status is OPEN_NO_MASTER_KEY, it means that no TDE master encryption key has been created, and I will need to create one. Instructions are also in the documentation.
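Creating one could look like this – a minimal sketch for a password-based keystore; the keystore itself must already be created (ADMINISTER KEY MANAGEMENT CREATE KEYSTORE ...), and the password is an example only:

administer key management set keystore open identified by "WalletPasswd123" container = all;
administer key management set key identified by "WalletPasswd123" with backup container = all;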

Ensure that the source host can resolve the network name of the target host. It is important to add two entries – one with the host name and one with the SCAN name (they should both point to the target host):

[root@srchost]$ echo -e "[ip address] tgthost" >> /etc/hosts
[root@srchost]$ echo -e "[ip address] tgthost-scan" >> /etc/hosts

Test connection over SQL*NET to the target:

[oracle@srchost]$ sqlplus system@tgthost-scan/[target-cdb-service-name]

Prepare Target

I need a target placeholder database. The placeholder target database is overwritten during migration, but it retains the overall configuration.

  • VM DB System: When I create a new system, a database is automatically created. This database is my target placeholder database.
  • Bare Metal and Exadata DB Systems: Create a new database on an existing system.

In any case:

  • DB_NAME must match that of the source database, and DB_UNIQUE_NAME must be different.
  • Use the same SYS password as the source database.
  • Do not enable automatic backup (until after the migration has completed).
  • The architecture (single instance or RAC) of the target placeholder database determines the architecture of the OCI database. If I want my OCI database to be a RAC database, I simply create the target placeholder database as a RAC database, and the end result will be a RAC database.
  • COMPATIBLE must match that of the source database.
  • The patch level of the target database must be equal to or higher than the source database. If they differ, ZDM will automatically execute datapatch as part of the migration. Use $ORACLE_HOME/OPatch/opatch lsinventory.
  • The source and target database must use the same time zone file version (check with select * from v$timezone_file;). If they don’t match, I must upgrade the time zone file in the source database. If I have a lot of data of type TIMESTAMP WITH TIMEZONE and I can’t afford the downtime it takes to do the upgrade, then I should consider using the logical online method instead.

There are a few other requirements listed in the document, but these are the most important ones.

Like the source host, the target host must be able to resolve the network name of its counterpart:

[root@tgthost]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@tgthost]$ echo -e "[ip address] srchost-scan" >> /etc/hosts

And I test the connection:

[oracle@tgthost]$ sqlplus system@srchost-scan/[source-cdb-service-name]

Prepare ZDM

The ZDM service host must be able to resolve the host names of the source and target database host:

[root@zdmhost]$ echo "<source IP address> srchost" >> /etc/hosts
[root@zdmhost]$ echo "<target IP address> tgthost" >> /etc/hosts

I put my private SSH keys to the source and target database host into ~/.ssh directory. Ensure permissions are set properly:

[zdmuser@zdmhost]$ cp srchost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/srchost_key_file 
[zdmuser@zdmhost]$ cp tgthost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/tgthost_key_file 

Test the connection. I connect as opc, but you might have a different user. Read more about access to the database host in the documentation:

[zdmuser@zdmhost]$ ssh -i ~/.ssh/srchost_key_file opc@srchost date
[zdmuser@zdmhost]$ ssh -i ~/.ssh/tgthost_key_file opc@tgthost date

Prepare Response File

I will use a template response file as the basis for my response file:

[zdmuser@zdmhost]$ cp $ZDM_HOME/rhp/zdm/template/zdm_template.rsp ~/physical_online.rsp
[zdmuser@zdmhost]$ chmod 700 ~/physical_online.rsp

This is the response file that I end up with (see appendix B for an explanation):


Perform Evaluation

I am now ready to perform a migration evaluation. It is a dry run of the migration and performs various sanity checks. Nothing is changed during the evaluation:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/physical_online.rsp \
   -sourcenode srchost \
   -sourcedb SALES_fra3fw \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -targethome /u01/app/oracle/product/ \
   -backupuser "" \
   -eval

A few comments:

  • sourcenode and targetnode are the host names of the source and target database host. Those names must be resolvable, which is why I put them into /etc/hosts already.
  • sourcedb is the DB_UNIQUE_NAME of the source database. My database is on ASM. If I didn’t use ASM I should use sourcesid instead and specify the database SID.
  • srcarg1 is the name of the user that I connect as to the source database host. You might need to change that.
  • srcarg2 and tgtarg2 are the locations of the private key files that I use to connect via SSH.

I am prompted for the SYS password to the source database and also the password for the backupuser (which is my OCI user). For the latter, please note that this is not my user password – it is an auth token. I also get a job ID.

The migration evaluation is then started, and I can use the job ID to monitor it:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli query job -jobid <job ID>

Output from zdmcli query job command

My colleague Sinan Petrus Toma showed how to loop:

[zdmuser@zdmhost]$ while :; do $ZDM_HOME/bin/zdmcli query job -jobid <job ID>; sleep 10; done

However, I prefer to get more details, so I tail the ZDM log file instead. This little one-liner finds the newest file and tails it:

[zdmuser@zdmhost]$ tail -n 50 -f "`ls -td /u01/app/oracle/chkbase/scheduled/*log | head -1`"

And I get this level of detail:

zdmhost: 2021-06-03T07:19:50.317Z : Starting zero downtime migrate operation ...
zdmhost: 2021-06-03T07:19:53.445Z : Executing phase ZDM_GET_SRC_INFO
zdmhost: 2021-06-03T07:19:53.446Z : Retrieving information from source node "srchost" ...
zdmhost: 2021-06-03T07:19:53.447Z : retrieving information about database "SALES_fra3fw" ...
zdmhost: 2021-06-03T07:20:02.743Z : Execution of phase ZDM_GET_SRC_INFO completed
zdmhost: 2021-06-03T07:20:02.826Z : Executing phase ZDM_GET_TGT_INFO
zdmhost: 2021-06-03T07:20:02.827Z : Retrieving information from target node "tgthost" ...
zdmhost: 2021-06-03T07:20:12.197Z : Determined value for parameter TGT_DATADG is '+DATA'
zdmhost: 2021-06-03T07:20:12.198Z : Determined value for parameter TGT_REDODG is '+RECO'
zdmhost: 2021-06-03T07:20:12.198Z : Determined value for parameter TGT_RECODG is '+RECO'
zdmhost: 2021-06-03T07:20:12.203Z : Execution of phase ZDM_GET_TGT_INFO completed

Eventually, I end up with a successful evaluation: When status of an evaluation is SUCCEEDED

Start Migration

When the evaluation passes, I can start the real migration. I am re-using the same command line, but I have removed the -eval option. Instead – and this is important – I am using -pauseafter to tell ZDM to pause the migration, just before the switchover takes place. Downtime has not started yet. The database is still open for business, but ZDM will copy the data and configure Data Guard:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/physical_online.rsp \
   -sourcenode srchost \
   -sourcedb SALES_fra3fw \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -targethome /u01/app/oracle/product/ \
   -backupuser "" \
   -pauseafter ZDM_CONFIGURE_DG_SRC

Again, I have to input the passwords as in eval mode. Use the job ID to monitor the progress or tail the log file. Note down the job ID. I need it later on to resume the migration.

When ZDM completes the phase ZDM_CONFIGURE_DG_SRC it will pause and wait. I can verify it with zdmcli query job: ZDM is currently paused

Now the standby database has been built in OCI. Redo gets transferred from my source database to the target database in OCI and is applied (see appendix D for monitoring queries). Also, this is a good time to test your new database.

Complete Migration

Now it is time to finalize the migration and switch over to the OCI target database. All I need to do is resume the paused ZDM job. I use the job ID that was created when I started the migration:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli resume job -jobid <job ID>

ZDM will now ensure that all redo is sent and applied before switching over to the OCI target database. Again, I use the zdmcli query job command to monitor the progress and I can tail the log file. After a short while the migration completes. ZDM migration completed

That’s it. I have now migrated into OCI!

I have a few post-migration tasks to carry out:

  • Configure automatic backup
  • Protect my new OCI database with a Data Guard in OCI
  • Take a peek at the alert log (just to be sure)

Other Blog Posts in This Series


A – RAC Databases

If your source or target database is a RAC database, then you only need access to one of the nodes. Select the node that you want to use and use the host name and SSH keys to that host. Also, ensure that the IP address you are using is not the floating one (VIP). In OCI that is referred to as the private IP.

B – Response File

Allow me to put some comments on the values:

  • DATA_TRANSFER_MEDIUM – When migrating to a DB System in OCI, OSS is the only option. It is also the default value, so I could omit the parameter. The other options are applicable for Exadata Cloud at Customer and Exadata on-prem.
  • HOST – Getting the right URL for the HOST parameter might be a little tricky, but check the documentation. In my demo I use the Frankfurt data center, and, thus, the region is set to eu-frankfurt-1. Visit the API documentation for a list of regions. Use OCI CLI and oci os ns get to find the tenancy object storage namespace. Alternatively, in the OCI Console (the web page) open the Profile menu and click Tenancy: <your tenancy name>. The namespace string is listed under Object Storage Settings. To read more about it, have a look at the OCI documentation. Thanks to Bartlomiej Sowa for putting in a comment with this information – much appreciated!
  • OPC_CONTAINER – The OCI Object Storage bucket that will be used as a staging area for the backup of the source database. I recommend using a separate bucket for each migration. It makes it a lot easier to clean up afterwards.
  • PLATFORM_TYPE – VMDB covers Virtual Machine and Bare Metal DB Systems. It is also the default value, so I could omit the parameter.
  • SHUTDOWN_SRC – I choose to shut down the source database to ensure no one uses it unintentionally after the migration.
  • SKIP_FALLBACK – To make my demo simple, I choose not to configure fallback. Remember falling back to the source database requires a license for Advanced Security Option.
  • TGT_DB_UNIQUE_NAME – The DB_UNIQUE_NAME of the target placeholder database. It must be different from that of the source database.

C – Backup Strategy

During a migration with ZDM, you should keep your regular backup strategy. Keep doing the same backups as you did before. But avoid having ZDM backups and regular backups run at the same time. Also, if you are dealing with a RAC database be sure to put the snapshot control file on shared storage. Otherwise, you might get ORA-00245 errors during backups.
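In RMAN, that could look like this (the disk group and path are examples only):

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+RECO/SALES/snapcf_sales.f';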

D – Monitoring

Use these queries to monitor the redo apply. On source/primary database:

SELECT host_name, instance_name, db_unique_name, status, database_role, open_mode 
  FROM v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log GROUP BY thread#;

Target/standby database:

SELECT host_name, instance_name, db_unique_name, status, database_role, open_mode 
  FROM v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log WHERE applied='YES' GROUP BY thread#;
--MRP process should be 'APPLYING_LOG'
SELECT process, status, sequence# FROM v$managed_standby;
SELECT * FROM v$archive_gap;

E – Troubleshooting


If you run into this error:

PRCZ-4001 : failed to execute command "/bin/uname" using the privileged execution plugin "zdmauth" on nodes "doverbyh-zdm-tgt" within 120 seconds
PRCZ-2006 : Unable to establish SSH connection to node "doverbyh-zdm-tgt" to execute command "/bin/uname"
No more authentication methods available

Check your key files. They must be in RSA/PEM format (the private key must start with -----BEGIN RSA PRIVATE KEY-----).


If you run into this error:

srchost: 07:08:00.000: Validating object store credentials..

Check your credentials to OCI. Remember when prompted for the password of your OCI account it is an auth token, not your password (even though the prompt text is misleading).


If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the source database host is added to the known_hosts file on the ZDM service host. Also, you can verify connectivity by trying to log on via SSH:

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-source-host>


If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the target database host is added to the known_hosts file on the ZDM service host. Again, you can verify connectivity by trying to log on via SSH:

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-target-host>

Zero Downtime Migration – Physical Online Migration

You can migrate a database with Zero Downtime Migration (ZDM) using the Physical Online method. A standby database is built in OCI and kept in sync via redo apply. At your will, a switchover is all it takes to complete the migration. This way even very large databases can be migrated with no or very little downtime.

Concept of physical online migration

With the Physical Online method, you can target:

  • Virtual Machine DB System
  • Bare Metal DB System
  • Exadata DB System (ExaCS)
  • Exadata Cloud at Customer (ExaCC)
  • Exadata (on-prem)

Since the method uses Data Guard as the migration vehicle, this can only be used with Enterprise Edition databases.


  • Personally, I like this approach because it builds on technology that most of us know already. Data Guard is heavily in use in most organizations. Although ZDM does all the heavy lifting, it is nice to know what happens underneath the hood.

  • To complete the migration all you need is a regular Data Guard switchover (which ZDM also takes care of). A switchover operation ensures that there will be no data loss at all. In addition, if you have a properly configured application, it won’t experience downtime. Just a brown-out while the switchover takes place.

  • This method has an excellent fallback possibility. When you switch over to the OCI database, the redo flow is reversed. Now, the on-prem database is a standby database, and it is kept in sync via redo from the OCI database. However, this option requires a license for Advanced Security Option on your on-prem system. Read more about it later on.

  • You migrate the entire database. All the internals are brought to the cloud as well. This includes:

    • AWR
    • SQL Plan Baseline
    • SQL Profiles
    • Public objects
    • Etc.
  • The initial backup of the source, on-prem database will use RMAN Compression. It will drastically reduce the size of the backup, and thus the amount of data that you must transport to the cloud. And the really good thing: ZDM can use RMAN Compression even if you don’t have a license for the Advanced Compression Option. You are allowed to use RMAN Compression with ZDM for migration purposes without paying license for Advanced Compression Option.

  • You can customize the RMAN backup. You know your system best, so it is possible to tweak the number of RMAN channels used and the compression algorithm applied. The defaults are 10 channels and MEDIUM compression.

  • When ZDM takes the full backup of your source database (Exadata on-prem and ExaCC excluded), it uses Oracle Database Cloud Backup Module for OCI to allocate a special sbt channel. The backup is sent directly to OCI Object Storage. Thus, you don’t need any additional disk space on your source system to hold the backup. The same applies on the target system.

  • When migrating to ExaCC or Exadata on-prem you can use an existing backup. Use the response file option DATA_TRANSFER_MEDIUM=EXTBACKUP. Also, if you have a Zero Data Loss Recovery Appliance (ZDLRA) you can restore directly from it, without taking a full backup first. If you are migrating to any other platform, ZDM will need to take a full backup as part of the workflow.

  • The standby database is kept in sync via redo apply. This means that there are no restrictions in supported data types. Further, DDL in any form is supported and even with a heavy workload on the primary database, your standby database should be able to keep up.

  • You can migrate to a higher patch level. ZDM will automatically invoke datapatch for you on the OCI database. But you must handle the on-prem database yourself. After you have switched over to the OCI database and datapatch has been executed, you should patch the on-prem Oracle Home to the same patch level.

Considerations

  • You can migrate to the same version only. This is a restriction of Data Guard. If you also need to upgrade the database, then you must do that after the migration. But it will incur additional downtime.
  • The same applies to PDB conversion. ZDM can run the noncdb_to_pdb.sql script for you. But that will incur downtime as well. If you decide to convert to PDB, you need to develop an alternate fallback plan (e.g., Data Pump or Transportable Tablespaces). The PDB conversion is irreversible, and your on-prem database will be useless as a fallback option.
  • It is not possible to migrate between editions.
  • The entire database is migrated. I listed this as a benefit, but it has a flipside as well. It is not possible to perform any transformation during the migration, e.g., converting old BasicFile LOBs to SecureFile LOBs. Also, you bring over any old baggage in your database. Sometimes it is nice to start from scratch because garbage tends to accumulate in a database over time.
  • If you have a Standard Edition database, you can’t use the Physical Online method. Only Physical Offline is supported.
  • You can’t configure Automatic backup on your target database until after the migration has been completed.
  • If you need to protect your OCI database with Data Guard, then you must build the standby database after the migration has been completed. Normally, you would use a cascading standby database to keep the target database protected by Data Guard even after the migration has been completed. But currently the OCI tooling does not support that option.

Fallback

Using this approach, you have a great fallback option. When you switch over to the OCI database, the redo flow is reversed and the source, on-prem database is now the standby database. If you need to fall back, simply issue another switchover (you could call that a switchback) and start using the on-prem database again – with no data loss. See step 5 in MAA Practices for Cloud Migration Using ZDM (Doc ID 2562063.1).
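
Just to illustrate what such a switchback could look like via the Data Guard broker, here is a sketch. It assumes a broker configuration is in place, and the DB_UNIQUE_NAMEs TGTDB_OCI (the OCI database, currently primary) and SRCDB_ONPREM (the on-prem database, currently standby) are made-up examples:

   dgmgrl sys@TGTDB_OCI
   DGMGRL> show configuration;
   DGMGRL> validate database SRCDB_ONPREM;
   DGMGRL> switchover to SRCDB_ONPREM;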

There is a catch, however. The OCI database is encrypted using TDE Tablespace Encryption. Any redo generated for an encrypted tablespace is also encrypted. This means that the on-prem database must be able to decrypt the redo before it can be applied. That requires a license for the Advanced Security Option. In other words, the fallback is only usable if your on-prem database has the proper license.

Conclusion

Using the Physical Online method in ZDM is a straightforward way of migrating your database to OCI. It uses Data Guard, which is very familiar to most of us. The method does, however, have some limitations, and you can’t target Autonomous Databases.

Want to Know More

If you want to know more about migrations in general, I suggest that you take a look at our webinar Migration Strategies – Insights, Tips and Secrets.

In addition, these links contain additional useful information:

Other Blog Posts in This Series

Zero Downtime Migration – Logical Migration and the Final Touches

Logical migration has many benefits, but there is also a catch. It does not transfer some of that nice metadata that makes your life easier – such as:

  • AWR
  • SQL Plan Baselines
  • SQL Profiles

Further, we can make a few tweaks to avoid having performance problems after the migration. This post applies to migrations into any database – except Autonomous Databases. In an Autonomous Database you don’t need to worry about such things. The database will fix the issues for you.

AWR

If you are licensed to use AWR in your source database, you can transport the AWR data into your target database.

  1. In your source database, execute the script $ORACLE_HOME/rdbms/admin/awrextr.sql. It extracts the AWR data and loads it into a Data Pump dump file (see the example below).
  2. Transfer the dump file to your target database.
  3. In your target database, import the AWR data using the script $ORACLE_HOME/rdbms/admin/awrload.sql.
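
For illustration, a run of these steps could look like this. The dump file name, paths, and host are just examples; both scripts prompt for details such as DBID, snapshot range, directory, and file name:

   # on the source database host
   sqlplus / as sysdba @?/rdbms/admin/awrextr.sql

   # transfer the dump file to the target host (path and file name are examples)
   scp /u01/app/oracle/dpdump/awr_export.dmp target-host:/u01/app/oracle/dpdump/

   # on the target database host
   sqlplus / as sysdba @?/rdbms/admin/awrload.sql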

When you look at the old AWR data in your target database, use it with caution. The AWR data was collected on a completely different system (CPU, memory, settings, database release, initialization parameters etc). Your new system is very different. Keep that in mind when you compare data from before and after the migration.

SQL Plan Baselines

If you are using SQL Plan Management to ensure plan stability, you want to transport your baselines as well.

First, in the source database, create a staging table. It is used to transport the baselines. Create the staging table in a schema that you will be migrating.

   begin
      dbms_spm.create_stgtab_baseline (
         table_name   => 'MIGR_SPB_1',
         table_owner  => 'SH');
   end;
   /

Next, I will pack the baselines into the staging table; this example takes the plans that are enabled and fixed. They are now stored in the staging table and will be migrated together with my data:

   declare
      l_count number;
   begin
      l_count := dbms_spm.pack_stgtab_baseline (
         table_name   => 'MIGR_SPB_1',
         table_owner  => 'SH',
         enabled      => 'YES',
         fixed        => 'YES');
   end;
   /

After the migration, in my target database, I can now unpack the baselines from the staging table into the SQL Management Base:

   declare
      l_count number;
   begin
      l_count := dbms_spm.unpack_stgtab_baseline (
         table_name   => 'MIGR_SPB_1',
         table_owner  => 'SH');
   end;
   /

That’s it. Now SQL Plan Management will ensure that the same plans are used for the statements that are in the baselines.
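
To verify the result, you could, for example, check the baselines on the target database:

   select sql_handle, plan_name, origin, enabled, accepted, fixed
   from   dba_sql_plan_baselines
   order  by created;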

If you are not using SQL Plan Management, you should look into it. In my opinion, it is one of the most underrated features in the Oracle Database. Many issues are caused by plan changes, which is exactly what you can avoid with SQL Plan Management.

SQL Profiles

If you have SQL Profiles you can transport them in a way very similar to transporting SQL Plan Baselines.

First, in the source database, create a staging table in a schema that you will be migrating:

   begin
      dbms_sqltune.create_stgtab_sqlprof (
         table_name   => 'MIGR_SQLP_1',
         schema_name  => 'SH');
   end;
   /

Next, put the SQL Profiles into the staging table. This example will load all profiles from the DEFAULT category, but there are options to cherry-pick those of interest:

   begin
      dbms_sqltune.pack_stgtab_sqlprof (
         staging_table_name   => 'MIGR_SQLP_1',
         staging_schema_owner => 'SH');
   end;
   /

Finally, in my target database, I can unpack the profiles from the staging table into the data dictionary:

   begin
      dbms_sqltune.unpack_stgtab_sqlprof (
         staging_table_name   => 'MIGR_SQLP_1',
         staging_schema_owner => 'SH',
         replace              => TRUE);
   end;
   /

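Again, you could verify the result on the target database, for example:

   select name, category, status, created
   from   dba_sql_profiles
   order  by created;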

Object Statistics

Should you do anything about object statistics?

  • If you have migrated into the same release, the answer is no.
  • If you have migrated into a higher release, the answer is it depends. It is not required, but if you come from a very old release, you might consider it.

Imagine a migration from an older release to 19c. Significant changes have been made to histograms over that period. If you want to benefit from the new, improved histogram types, you should re-gather statistics. If you don’t, the histograms will gradually change over time as the statistics become stale. Some people prefer taking such a change immediately rather than having it happen gradually over time.

If you want to refresh statistics:

exec dbms_stats.delete_database_stats;
exec dbms_stats.gather_database_stats;

To make it a little faster:

exec dbms_stats.set_global_prefs('CONCURRENT','AUTOMATIC');

To make it even faster – requires CPU:

exec dbms_stats.set_global_prefs('DEGREE', DBMS_STATS.AUTO_DEGREE);

To make it as fast as possible – requires a lot of CPU:

exec dbms_stats.set_global_prefs('CONCURRENT','AUTOMATIC');
exec dbms_stats.set_global_prefs('DEGREE', DBMS_STATS.AUTO_DEGREE);

Be sure to set the preferences back to the original value when you are done gathering statistics. Nigel Bayliss has an interesting blog post on How to gather optimizer statistics fast.
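
One way to do that is to note the current values before you change anything, for example with dbms_stats.get_prefs, and set them back afterwards. The value shown at the end is only an example; use the values you noted:

   -- note the current values before changing anything
   select dbms_stats.get_prefs('CONCURRENT') as concurrent_pref,
          dbms_stats.get_prefs('DEGREE')     as degree_pref
   from   dual;

   -- ... gather statistics ...

   -- then set the preferences back to the values you noted, e.g.
   exec dbms_stats.set_global_prefs('CONCURRENT','OFF');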

Dictionary Statistics

After the migration, the target database has many more tables, indexes, views and so on. This is represented in the data dictionary as additional rows in e.g. OBJ$. All that extra data probably means that the dictionary statistics are stale. Stale dictionary statistics can lead to all sorts of trouble.

Immediately after ZDM has completed the migration, you should re-gather dictionary statistics to avoid problems caused by stale statistics:

exec dbms_stats.gather_dictionary_stats;

Or even better:


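For example, by gathering schema statistics on SYS and SYSTEM separately:

exec dbms_stats.gather_schema_stats('SYS');
exec dbms_stats.gather_schema_stats('SYSTEM');
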
I prefer gathering statistics on SYS and SYSTEM independently. My colleagues and I have seen issues from time to time with dbms_stats.gather_dictionary_stats.

Fixed Object Statistics

After migration and once the system has been properly warmed up, you should re-gather fixed objects statistics. It is important that you don’t gather fixed objects statistics right after migration. You must wait until you have run a representative workload. If you wait a few days or until the next weekend, then you are most likely on the safe side:

exec dbms_stats.gather_fixed_objects_stats;

This recommendation is similar to the one we give after an upgrade. Read that blog post to learn more about fixed objects statistics in general. There, you can also find a very useful piece of code that gathers fixed objects statistics after seven days via a scheduler job.
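
If you want to automate the delay, one option is a one-off scheduler job along these lines (the job name and the seven-day offset are just examples):

   begin
      dbms_scheduler.create_job (
         job_name   => 'GATHER_FIXED_OBJ_STATS',   -- example name
         job_type   => 'PLSQL_BLOCK',
         job_action => 'begin dbms_stats.gather_fixed_objects_stats; end;',
         start_date => systimestamp + interval '7' day,
         enabled    => true);
   end;
   /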

SQL Tuning Sets

This is something that you should do before you start the migration. Capture your workload from the source database into SQL Tuning Sets.

What is a SQL Tuning Set? It is an object that contains information about a number of SQLs. For each SQL the following is collected:

  • SQL Statement – the SQL text itself
  • Context – executing schema, binds, and environment (like NLS settings)
  • Statistics – execution statistics like elapsed time, CPU time, buffer gets, rows processed
  • Plan – last, but not least, the actual execution plan

You can then transport the SQL Tuning Set into the target database. If a SQL is now running slowly, you can use the information from the SQL Tuning Set to identify why. For a given SQL, you can see the old and new plan, compare the number of buffer gets and CPU time, and so forth. You can also bring back the old plan if the new plan is less effective.

First, in the source database, create a SQL Tuning Set:

exec dbms_sqlset.create_sqlset (sqlset_name => 'MIGR_STS_1', description => 'For migration - from source');

Next, capture statements from AWR. This will take the top 5000 statements from your entire AWR ordered by elapsed time:

   declare
      begin_id number;
      end_id   number;
      cur      dbms_sqltune.sqlset_cursor;
   begin
      select min(snap_id), max(snap_id)
      into   begin_id, end_id
      from   dba_hist_snapshot;

      open cur for
         select value(p)
         from   table(dbms_sqltune.select_workload_repository(
                   begin_snap       => begin_id,
                   end_snap         => end_id,
                   basic_filter     => 'parsing_schema_name not in (''SYS'')',
                   ranking_measure1 => 'elapsed_time',
                   result_limit     => 5000,
                   attribute_list   => 'ALL')) p;

      dbms_sqltune.load_sqlset('MIGR_STS_1', cur);
      close cur;
   end;
   /
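
To check what was captured, you could, for example, count the statements in the set:

select count(*) from dba_sqlset_statements where sqlset_name = 'MIGR_STS_1';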


You can also sample directly from the cursor cache. Start the sampling and then run your workload. This example will sample every minute for 15 minutes:

   begin
      dbms_sqltune.capture_cursor_cache_sqlset (
         sqlset_name     => 'MIGR_STS_1',
         time_limit      => 900,
         repeat_interval => 60,
         capture_option  => 'MERGE',
         capture_mode    => DBMS_SQLTUNE.MODE_ACCUMULATE_STATS,
         basic_filter    => 'parsing_schema_name not in (''SYS'')',
         sqlset_owner    => NULL,
         recursive_sql   => 'HAS_RECURSIVE_SQL');
   end;
   /

Finally, put all that information into a staging table, so it can be transported to the target database. Put the staging table into a schema that you plan to migrate. This way you ensure that the SQL Tuning Set is migrated together with your data. In my example it is the schema SH:

   begin
      dbms_sqltune.create_stgtab_sqlset (
         schema_name          => 'SH',
         table_name           => 'MIGR_STGTAB_1');
      dbms_sqltune.pack_stgtab_sqlset (
         sqlset_name          => 'MIGR_STS_1',
         staging_schema_owner => 'SH',
         staging_table_name   => 'MIGR_STGTAB_1');
   end;
   /

After the migration, in my target database, I can now unpack the SQL Tuning Set from the staging table into the data dictionary:

   begin
      dbms_sqltune.unpack_stgtab_sqlset (
         sqlset_name          => '%',
         replace              => true,
         staging_schema_owner => 'SH',
         staging_table_name   => 'MIGR_STGTAB_1');
   end;
   /

Now – if you get into trouble with a misbehaving SQL – you can use the information in the SQL Tuning Set to investigate the problem.
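
And if you decide to bring back the old plan, one option is to load it from the SQL Tuning Set into a SQL plan baseline, for example (the sql_id is a made-up placeholder):

   declare
      l_count number;
   begin
      l_count := dbms_spm.load_plans_from_sqlset (
         sqlset_name  => 'MIGR_STS_1',
         basic_filter => 'sql_id = ''abcd1234efgh5678''',   -- placeholder sql_id
         fixed        => 'NO',
         enabled      => 'YES');
   end;
   /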

Conclusion

When you perform a logical migration, there are some additional tasks that you should consider at least for your mission-critical databases. These steps will help you avoid performance issues after the migration.

In addition to the topics discussed here, I recommend that you also watch our webinar Performance Stability, Tips and Tricks and Underscores (slides). It has the complete prescription to avoid performance problems after upgrade and migration.

Other Blog Posts in This Series