XTTS: Target Database and Data Guard

When you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, it is often a big database, and big databases are typically protected by Data Guard.

How do you ensure that Data Guard protects the target database at the end of the migration?

Build After Migration

A simple solution is to build the standby database after the migration. There are some downsides to this approach:

  • It takes time. In many situations, the business requires a standby database to be in place before the database can go live. Having to build a standby database prolongs the downtime.
  • It puts a load on the primary database. Right after the migration, you don’t have your backups in place yet, so you will need to build the standby database directly from the primary database. That requires a lot of I/O and network traffic. You might want to use those resources for other activities, like taking a level 0 backup or regathering statistics.
  • It can become even more complicated if you migrate into the multitenant architecture. Rebuilding the standby database of an already running, active CDB might not be an option. The other PDBs in the CDB are left without a standby while it is rebuilt.

Restore Data Files to Standby Host

Here is an approach that requires much less downtime: as part of the migration, restore the data files onto the standby host as well. During the Data Pump import, the plug-in operation propagates to the standby database via redo apply.

When migrating with XTTS and the Perl script, you are familiar with running the backup on the source database (xttdriver.pl --backup) and with restoring and recovering the data files on the target system (xttdriver.pl --restore).

Now, the idea is that you restore and recover the data files on the primary host and also on the standby host. During the Data Pump import, the data files are plugged into the primary database. The plug-in commands are propagated to the standby database via redo apply. The standby database can then plug in the data files if they are located in the same place as on the primary database. The rest of the Data Pump import propagates as well, and in the end you have a working Data Guard environment.

You must recover the data files on the primary and standby database to the exact same SCN. Be sure to restore all backups to both the primary and standby database.
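
For illustration, the restore and recover step could look like this on the two target hosts (the host names are examples only; both hosts use the same res.txt and the same backups, so they are recovered to the same SCN):

[oracle@tgt-primary]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore
[oracle@tgt-standby]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore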

Overview of the process

How To

The following assumes that your target databases are using ASM. Also, I am migrating directly into a PDB. But the procedure is similar for migrations into a non-CDB database. Use the procedure described in my previous blog post or MOS note V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1). Adjust the procedure according to the following.

  • Prepare the target database and build a standby database before starting the migration.
  • In xtt.properties set destconnstr to point to the target primary database.
  • Copy xtt.properties to the target primary host and target standby host.
  • On target standby host, change xtt.properties and set destconnstr to point to the target standby database.
  • Create the cross-platform backups on the source database using xttdriver.pl --backup in the usual way.
  • Copy the file res.txt to both the target primary and the target standby host.
  • The backups in the scratch location must also be available to both the target primary and the target standby database.
  • Restore and recover the backups on the target primary and the target standby database using xttdriver.pl --restore.
  • Repeat the backup/restore/recover process as many times as needed. Keep recovering the target primary and target standby database. It is very important that the data files on both the target primary and target standby database are recovered to the exact same SCN.
  • On the target databases the data files will be restored into ASM. The data file itself is created with an OMF file name:
    +<disk_group>/<db_unique_name>/<source_guid>/DATAFILE/<omf_name>
    
    It could be something like this:
    +DATA/CDB1_FRA356/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.281.1099469863
    
    • The disk group is the one you specified in dest_datafile_location in xtt.properties.
    • DB_UNIQUE_NAME differs between the primary and the standby database.
    • To comply with the OMF standard, the database must restore the data files into a folder corresponding to the PDB they belong to. However, at this point, the data files do not belong to a PDB yet. We haven’t done the Data Pump import plug-in operation. The database therefore creates a folder with the GUID of the source database. If you are interested, you can get the GUID from the source database using select guid from v$containers.
    • The last part of the OMF file name is the tablespace name followed by numbers representing the file number and a number to ensure uniqueness. This part differs on the primary and standby database as well.
  • We now know that the data file names differ between the primary and standby database. Previously, it was stated that it is important that the data files are stored in the same location and have the same name. This is a problem! But the Perl script solves that by creating ASM aliases.
  • The aliases will be created in the location specified by dest_datafile_location in xtt.properties. Use asmcmd to verify it. The column SYS (System-generated) is N, meaning this is not a proper OMF file. Also, we can see in the Name column that it is an alias:
    ASMCMD> ls -l +DATA
    Type      Redund  Striped  Time             Sys  Name
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    account_25.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    accountidx_26.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNTIDX.280.1099469855
    ...
    
  • If you look at the directory that the alias points to, you can see that the files are proper OMF files – real data files. Column SYS is Y, and Name does not contain the alias pointer =>:
    ASMCMD> cd +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE
    ASMCMD> ls -l 
    Type      Redund  Striped  Time             Sys  Name
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  Y    ACCOUNT.282.1099469855
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  Y    ACCOUNTIDX.280.1099469855
    ...
    
  • Thus, the aliases hide the fact that the data files have different names on the target primary and target standby database.
  • When you prepare the parameter file for the Data Pump import, be sure to reference the aliases – not the OMF named data files. The aliases have the same name on both the target primary and target standby database:
    $ cat import.par
    
    transport_datafiles=+DATA/account_25.dbf
    transport_datafiles=+DATA/accountidx_26.dbf
    transport_datafiles=...
    
  • Then start the Data Pump import (see the example after this list). The data files are plugged into the primary database during DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK. When the redo containing that information is applied on the standby database, the standby database will plug in the data files as well.
  • When the Data Pump import completes, you can verify that the standby database survived and is still applying redo. A switch-over is also a good way of testing it.
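
As an example, the Data Pump import mentioned above could be started like this. It is only a sketch: I assume a full transportable import over a database link called SOURCEDB, so adjust the parameters to the MOS note and your environment:

$ impdp system@[target-pdb-service-name] parfile=import.par \
    network_link=SOURCEDB full=y transportable=always \
    metrics=y logtime=all logfile=xtts_import.log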

Conclusion

You can prepare the target standby database in advance. This enables the target database to be protected by Data Guard as soon as the Data Pump import completes.

Restore and recover the data files to the target primary and target standby database. If they are recovered to the exact same SCN, the plug-in of the data files propagates to the standby database via redo apply. After the Data Pump import, your target database has a fully functional standby database.

Further Reading

Other Blog Posts in This Series

XTTS: Backup on Standby Database

When doing a cross-platform migration with transportable tablespaces and incremental backup, is it possible to perform backups on a standby database? If so, you could offload the work from a primary database.

The short answer is yes. You can use a physical standby database but not a snapshot standby database.

Using a standby database for the backups is an advantage in some situations:

  • The primary database does not have the capacity to perform the extra backups (CPU, disk space, or I/O).
  • You want the primary database to be completely unaffected by the migration.

How To

You follow the regular procedure described in the MOS note V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1) with a few changes. The changes are described in Using XTTs in a Data Guard Environment.

Be Careful

You must have an Active Data Guard license if you:

  • Open a physical standby database and enable redo apply.
  • Enable block change tracking on the standby database (for faster incremental backups).

Procedure

The primary database is called SRCPRI and runs on the host src-pri. The standby is called SRCSTDBY and runs on the host src-stdby.

  • Ensure allowstandby=1 in xtt.properties.

  • Whenever you need to perform a backup on the physical standby database – level 0 or incremental:

    • Cancel redo apply and open the physical standby database:
    SRCSTDBY SQL> alter database recover managed standby database cancel;
    SRCSTDBY SQL> alter database open;
    
    • Perform the backup:
    [oracle@src-stdby]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
    
    • Bring the standby database back to MOUNT mode and re-enable redo apply:
    SRCSTDBY SQL> shutdown immediate
    SRCSTDBY SQL> startup mount
    SRCSTDBY SQL> alter database recover managed standby database disconnect from session;
    
  • When it is time to perform the final incremental backup:

    • Set tablespaces in READ ONLY mode on the primary database:
    SRCPRI SQL> alter tablespace ... read only;
    
    • Archive the current log and ensure it is applied on the standby database:
    SRCPRI SQL> alter system archive log current;
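    -- One way to confirm the redo is applied (a sketch): compare the latest sequence
    -- on the primary with the latest applied sequence on the standby
    SRCPRI SQL> select thread#, max(sequence#) from v$archived_log group by thread#;
    SRCSTDBY SQL> select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;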
    
    • When you have confirmed the redo is applied on the standby database, cancel redo apply and open it:
    SRCSTDBY SQL> alter database recover managed standby database cancel;
    SRCSTDBY SQL> alter database open;
    
    • Verify tablespaces are READ ONLY:
    SRCSTDBY SQL> select tablespace_name, status from dba_tablespaces;
    
    • Perform the final incremental backup:
    [oracle@src-stdby]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
    
    • Perform the Data Pump export on the primary database:
    [oracle@src-pri]$ expdp system ... full=y transportable=always ...
    

Active Data Guard

If you have a license for Active Data Guard, you can simply keep the physical standby database in OPEN WITH REDO APPLY mode. You don’t need to switch from OPEN mode to MOUNT mode.
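
For reference, that could look like this (a sketch, using the same SRCSTDBY prompt as above):

SRCSTDBY SQL> alter database recover managed standby database cancel;
SRCSTDBY SQL> alter database open read only;
SRCSTDBY SQL> alter database recover managed standby database disconnect from session;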

Conclusion

Being able to perform the backups on a standby database is a huge advantage. But it makes the process slightly more complicated, so I would recommend it only if really needed. In addition, if you don’t follow the procedure strictly, you might accidentally use a feature that requires Active Data Guard.

The MOS note V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1) has some additional details about using a standby database. You can get those details by opening a Service Request and asking for it.

Other Blog Posts in This Series

XTTS: Slow Network Between Source and Target

When you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, you will need to transfer a large amount of data to the target host. In some situations, especially if the target host is in a remote data center or the cloud, you are restricted by the network throughput, and it can be a limiting factor.

One approach to speed up the data transfer is to compress it before it leaves the source host. Compressing and decompressing takes time and CPU. But if your network connection is slow enough, it may pay off in the end.

The Approach

Like all other posts in this series, I am using the Perl scripts found in MOS note V4 PERL Scripts to reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

First, you start the initial level 0 backup of the source database:

$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

The backups will be stored in the directory defined by src_scratch_location. In my case, the directory is /u01/app/oracle/xtts_src_scratch. Now you can simply compress the entire directory:

gzip -r /u01/app/oracle/xtts_src_scratch

This should significantly reduce the size of the files. Transfer the compressed files to your target host, and put them into the directory defined by dest_scratch_location. Then, uncompress the files:

gunzip -r /u01/app/oracle/xtts_dest_scratch

Continue the procedure described in the above MOS note and start the restore:

$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Some Figures

I made a little test to give you an idea of how much there is to save.

Size
  Data files total size     197 GB
  Backup set size           197 GB
  After gzip compression     12 GB

Because I don’t have any free space in my data files, the backup sets are almost the same size as the actual data files – 197 GB. By gzipping the files, I could reduce that to 12 GB. It makes a huge difference whether you have to transfer 197 GB or 12 GB. But you must also consider the time it takes to compress and decompress.
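
To put the transfer time into perspective, here is a rough calculation, assuming a 100 Mbit/s connection between the source and target hosts (your numbers will differ):

  • 197 GB is roughly 1,576 Gbit, which takes about 4.5 hours to transfer at 100 Mbit/s.
  • 12 GB is roughly 96 Gbit, which takes about 16 minutes – plus roughly 14 minutes of compression and 8 minutes of decompression, so around 40 minutes in total.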

Elapsed time
  Compression       13 min 52 sec
  Decompression      8 min 7 sec

I made the same test with image file backups, and the numbers were almost the same. Generally, you should always prefer backup sets over image file backups. Backup sets use unused block compression, which skips all the unused blocks in your data files. That can make quite a difference for a database with a lot of free space.

What About RMAN Compression

RMAN does have the option of compressing the backup set, but that is currently not supported by the Perl scripts. If the Perl scripts were able to do that, you could get a good compression ratio with RMAN compression as well (provided, of course, that you have a license for the Advanced Compression Option) and avoid the gzip trick. Using the example above, with RMAN compressed backup sets (medium compression) the backup files would be 0.3 GB, which is very good as well.

But for now, you are "stuck" with the gzip trick.

Final Words

A few things to consider:

  • You should only consider this approach if you have a slow network.
  • You will need extra disk space during the compression/decompression operation.
  • You should only compress the initial, level 0 backup. The subsequent level 1 incremental backups will be much smaller in size, and it is unlikely to pay off.
  • You should test on your own database and hardware to know the actual compression ratio. Add the network throughput to the equation, and you can do the math to see if it is a good idea for your specific project.

Other Blog Posts in This Series

What Is a Self-contained Transportable Tablespace Set

Yesterday a comment was made on a video on our YouTube channel. The question was: what happens when you try to transport a tablespace that is not self-contained? Let’s find out.

The Error

First, some test data:

create tablespace a;
create tablespace b;
create tablespace c;
create user daniel identified by oracle;
grant dba to daniel;
create table daniel.sales (c1 number) tablespace a;
create table daniel.orders (c1 number) tablespace c;
create index daniel.i_orders on daniel.orders (c1) tablespace b;

Then, let’s run the Data Pump command to start the process:

alter tablespace a read only;
alter tablespace b read only;

host expdp daniel/oracle transport_tablespaces=a,b
Export: Release 19.0.0.0.0 - Production on Fri Feb 4 12:32:14 2022
Version 19.14.0.0.0

...

Processing object type TRANSPORTABLE_EXPORT/STATISTICS/TABLE_STATISTICS
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is

ORA-39907: Index DANIEL.I_ORDERS in tablespace B points to table DANIEL.ORDERS in tablespace C.
Job "DANIEL"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at Fri Feb 4 12:32:33 2022 elapsed 0 00:00:17

The answer is: Data Pump checks the set of tablespaces and ensures they are self-contained when you perform the export.

Self-Contained

The documentation lists the following as one of the tasks for transporting tablespaces:

  1. Pick a self-contained set of tablespaces.

What does that mean exactly? Later on, the documentation gives some examples of violations:

  • An index inside the set of tablespaces is for a table outside of the set of tablespaces.
  • A partitioned table is partially contained in the set of tablespaces.
  • A referential integrity constraint points to a table across a set boundary.

Let me illustrate it. Imagine I want to transport tablespaces A and B.

This first illustration shows a self-contained tablespace set. Table SALES is in tablespace A and a corresponding index I_SALES in tablespace B. Both tablespaces are part of the set that I want to transport. All good!

An example of a self-contained tablespace set

The next example is no good. Tablespace B now contains an index I_ORDERS that refers to the table ORDERS, which is not in the tablespaces I want to transport. Either I have to drop the index I_ORDERS or move the table ORDERS into one of the tablespaces I am transporting.

An example of a tablespace set that is not self-contained

This example is also no good. Table SALES has three partitions. One of the partitions is in a tablespace that I am not transporting. I need to either drop the partition in tablespace C or move it to one of the other tablespaces.

An example of a tablespace set that is not self-contained

This last example is also no good. Table SALES has a foreign key constraint that refers to table ORDERS, but ORDERS is located outside the tablespace set. There is a solution to this, which will be discussed shortly.

An example of a tablespace set that is not self-contained

Checking

You can use DBMS_TTS to check whether a given set of tablespaces is self-contained. Using my previous example, I would run:

execute dbms_tts.transport_set_check('A,B');

Next, I could check the result:

SQL> select * from transport_set_violations;

By default, the procedure transport_set_check doesn’t check for foreign keys that refer to something outside the tablespace set. This means that my last illustration above would not be found by transport_set_check, and you would end up with an error when you try to transport the tablespaces.

To also check for bad foreign key constraints:

execute dbms_tts.transport_set_check('A,B', true);

Foreign Key Constraints

If you have foreign key constraints that refer to a table outside of the tablespace set, it is one of the issues that can be ignored. You can instruct Data Pump to exclude constraints, and this issue will be ignored:

$ expdp ... transport_tablespaces=A,B exclude=constraint

When you import the tablespaces into another database, there won’t be any constraints. Even the foreign key constraints that were valid are gone. And even check constraints. All of them! You should consider whether you want to create them again manually.
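
If you want to recreate some of them, one option is to extract the DDL from the source database before the export. A sketch using DBMS_METADATA and the tables from my example (adjust owner and table names; REF_CONSTRAINT returns the referential constraints, CONSTRAINT the remaining ones):

SQL> select dbms_metadata.get_dependent_ddl('REF_CONSTRAINT', 'SALES', 'DANIEL') from dual;
SQL> select dbms_metadata.get_dependent_ddl('CONSTRAINT', 'SALES', 'DANIEL') from dual;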

Indexes

If you can exclude constraints, can you also exclude indexes? If possible, you could avoid dropping or moving an offending index. Let’s try! I am using the same test data as the first example.

Check for violations:

execute dbms_tts.transport_set_check('A,B');
select * from transport_set_violations;

VIOLATIONS
-----------------------------------------------------------------
ORA-39907: Index DANIEL.I_ORDERS in tablespace B points to table DANIEL.ORDERS in tablespace C.

As expected, index I_ORDERS is a problem. Let’s try to export and exclude indexes (exclude=index):

alter tablespace a read only;
alter tablespace b read only;
host expdp daniel/oracle transport_tablespaces=a,b exclude=index
Export: Release 19.0.0.0.0 - Production on Fri Feb 4 12:32:14 2022
Version 19.14.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "DANIEL"."SYS_EXPORT_TRANSPORTABLE_01":  daniel/******** transport_tablespaces=a,b exclude=index 
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/TABLE_STATISTICS
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is

ORA-39907: Index DANIEL.I_ORDERS in tablespace B points to table DANIEL.ORDERS in tablespace C.
Job "DANIEL"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at Fri Feb 4 12:32:33 2022 elapsed 0 00:00:17

It is not possible to exclude indexes in the same way as constraints. But I think that makes sense. A constraint is just metadata – something that is recorded in the dictionary. We can easily ignore that when doing the Data Pump export. An index is an object with segments – data in the tablespace. Imagine we could somehow exclude the index; all the blocks belonging to it would then be zombie blocks belonging to nothing.

Standby Database

If you need to move data out of a running database and your tablespaces are not self-contained, you need to fix the issues. Some of the solutions involve dropping or moving data, but you might not want to do that in a running database.

Imagine the example above where an index refers to a table outside of the tablespace set (I_ORDERS). You probably need that index in your running database, so it is not a good idea to drop it.

But you can use your standby database if you have one.

  • Convert your physical standby into a snapshot standby.
  • On your snapshot standby, do the required changes to make your tablespace set self-contained.
  • Still, on the snapshot standby, copy the data files and perform the Data Pump export to generate your transportable tablespace set and corresponding Data Pump dump file.
  • Revert the snapshot standby back into a physical standby. This will automatically revert all the temporary changes you made and re-sync with the primary database.

If you don’t have a standby database, you could achieve the same with Flashback Database, but that would require an outage on the running database.

Conclusion

If you try to transport tablespaces that are not self-contained, you will get an error. There is no way around the issues except for foreign key constraints. If you need to make changes to your database to have self-contained tablespaces, you can do it on a snapshot standby database to avoid interfering with a running database.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Offline Migration – How To Minimize Downtime

With Oracle Zero Downtime Migration (ZDM) 21.2, you can now perform logical offline migrations from IBM AIX and Oracle Solaris. Since it is an offline approach, the database is unavailable during the entire migration. Let’s look at how you can minimize downtime.

Before Migration

Statistics

Ensure that dictionary statistics are up-to-date in the source database. Data Pump does a lot of querying in the data dictionary, and accurate statistics are vital for good execution plans. I recommend gathering stats on SYS and SYSTEM:

begin
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

You could also use dbms_stats.gather_dictionary_stats. But it gathers statistics on all internal schemas, and most often, only SYS and SYSTEM are sufficient:

begin
   dbms_stats.gather_dictionary_stats;
end;
/

Data

I think it goes without saying, but the less data, the faster the migration completes. If you have any junk data in your database, get rid of it.

Migration

Parallel

Apply a proper degree of parallelism. Rule-of-thumb:

  • On-prem: 2 x number of physical cores
  • OCI: Number of OCPUs

When you are importing in OCI, consider scaling up on CPUs. More CPUs, faster import. Bare Metal and Exadata DB Systems scale online, whereas Virtual Machines need around 10 minutes of downtime.

Import processes running in parallel can read from the same dump file. This means that the number of parallel processes for an import is not limited by the number of dump files (which is the case for an export). To illustrate with an example:

  • Export: parallel=8 – 8 dump files were created. Each parallel worker needs exclusive access to a dump file.
  • Import: parallel=16 – still only 8 dump files. Each parallel worker can read from all dump files, no locks required.
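
To illustrate the point with plain Data Pump commands (just a sketch – ZDM generates the actual commands, and the parallel degrees and file names here are examples only):

$ expdp ... parallel=8 dumpfile=exp%U.dmp ...
$ impdp ... parallel=16 dumpfile=exp%U.dmp ...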

You can control the Data Pump degree of parallelism in ZDM using dedicated response file parameters.

Data Pump Mode

Should you go via dump file or directly over a database link? Only your tests can tell the answer, but I am pretty sure that you will find that dump file mode is the quickest.

When you export via a dump file, you also need to figure out how you can transfer it to the target database:

  • Direct Transfer – you can use either scp or rsync. At the risk of starting a religious discussion, I don’t think it makes much of a difference in this use case. If you need rsync on Exadata, you can find guidance in MOS Doc ID 1556257.1.
  • Via Object Store – you can use either curl or OCI CLI. I would expect you can get better performance with OCI CLI. But you need to test it in your environment.

Remember to ensure you have adequate disk space on the source database host to hold the dump file. Use mount points that can deliver good write performance, and ideally, they should be separated from the storage that holds the database data files.
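
If you go via the Object Store, an upload with OCI CLI could look like this (a sketch; the bucket and file names are examples only):

$ oci os object put --bucket-name zdm-staging --file /u01/dumpfiles/exp01.dmp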

SecureFile LOB

ZDM automatically transforms BasicFile LOBs into SecureFile LOBs – which is very good. Don’t turn it off. Imports run faster when SecureFile LOBs are in play.

After Migration

Statistics

As soon as ZDM has completed the migration, gather dictionary stats in the target database. All those new objects that came in with the import have for sure made the statistics stale.

begin
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

Next, you need to figure out how you want to take care of the optimizer statistics. In a previous post, I discussed why this is needed and how to do it. If you decide to import statistics via DBMS_STATS, just ensure that you have gathered dictionary stats before.

Further Reading

I recommend that you also read the following posts:

Conclusion

When migrating from IBM AIX and Oracle Solaris, you use the logical offline approach, which means a lot of downtime. You have a few options to make it run as fast as possible.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Migration and Statistics

If you decided to do a logical migration of your Oracle Database with Zero Downtime Migration (ZDM), here is something important about optimizer statistics. You must manually take care of the optimizer statistics on the target database after the Data Pump import. Either by recreating the statistics or transporting the statistics from the source database.

Background

Whenever I talk about migration of Oracle Database with Data Pump, I always mention that it is best practice to exclude optimizer statistics from the Data Pump export. Here is how you do it:

$ expdp ... exclude=statistics

Why is that recommended? Data Pump is not very good at extracting the optimizer statistics. There is nothing wrong with the statistics when they are imported. But it can take a very long time to do the export of statistics.

ZDM and Data Pump

The development team behind ZDM wanted the tool to be easy to use. Also, they wanted ZDM to use all the best practices that come with the various other tools that ZDM uses. When they talked to us about Data Pump, we told them to exclude statistics.

BUT – this also leaves you in a situation where you have a database completely without optimizer statistics. It goes without saying that it is a disaster waiting to happen. You must ensure that optimizer statistics are present on the target database before allowing the users to connect to it.

Regather

One option is to regather optimizer statistics on the target database after the Data Pump import. When the Data Pump import completes, and before you proceed with the switchover, it is time to regather the statistics. Typically, you would start ZDM something like this:

$ zdmcli migrate database .... -pauseafter ZDM_MONITOR_GG_LAG

It will start by performing the initial load of the database with Data Pump. Then it will configure GoldenGate before it pauses – waiting for your signal to complete the migration. At this time, use DBMS_STATS to gather statistics:

SQL> exec dbms_stats.gather_database_stats;

This has some drawbacks:

  • It requires time and resources – which might not be the biggest problem. The source database is still open for business. We haven’t performed the switchover yet.
  • Column usage information (table COL_USAGE$) is not populated, so in some cases, no histograms will be created. This happens if the database is supposed to automatically determine whether histograms are needed (method_opt includes size auto). In that case, you can merge the column usage information from another database via a database link, which can be a good idea if your application depends on histograms.
  • The table, schema, or database statistics preferences are not present. It could be degree, method_opt, stale_percent or any other preference that you can set with DBMS_STATS.SET_TABLE_PREFS (or schema or database-wide preferences). These preferences can be transferred to the new system, which is what I will talk about next.

Transferring Statistics

Another option is to transfer the statistics using DBMS_STATS. We have covered this in detail in a webinar, so I suggest you watch Performance Stability, Tips and Tricks and Underscores for all the details.

In short,

  1. The optimizer statistics in the source database are extracted from the data dictionary and stored in a transportable format in a regular heap table (referred to as a staging table).
  2. Using Data Pump, you move that table to the target database.
  3. Then you put the statistics into the data dictionary of the target database so that optimizer can use them.

Pro tip: You should perform step #1 before you start ZDM and store the staging table in one of the schemas that you are migrating with ZDM. That way, you don’t have to move the table manually. It is moved by ZDM together with the real data.
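
A minimal sketch of the steps (the schema name APP and the staging table name MY_STATS are examples only):

-- 1. On the source database: create the staging table and export the statistics into it
SQL> exec dbms_stats.create_stat_table(ownname => 'APP', stattab => 'MY_STATS');
SQL> exec dbms_stats.export_schema_stats(ownname => 'APP', stattab => 'MY_STATS');
-- 2. The staging table APP.MY_STATS travels to the target together with the rest of the schema
-- 3. On the target database: import the statistics into the data dictionary
SQL> exec dbms_stats.import_schema_stats(ownname => 'APP', stattab => 'MY_STATS');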

One thing to be aware of is that the table, schema, or database statistics preferences are not transferred when you use e.g. DBMS_STATS.EXPORT_TABLE_STATS. There are dedicated procedures for transferring the statistics preferences.

Conclusion

You must figure out how to move the optimizer statistics into your target database when you use ZDM to perform logical migrations. If not, your target database will be without optimizer statistics, which is a disaster waiting to happen.

Appendix

You might now ask: if Data Pump is bad at exporting statistics, but there are already better ways available in the database, why don’t we change the Data Pump code? And you are right – but so far, other things have been prioritized. I would love to see this one day fully embedded in Data Pump.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration of Very Large Databases

This blog post is written based on ZDM version 21. The latest version of ZDM has significant changes. Please consult the documentation for updates.

Following the blog post on migrating Very Large Databases (VLDBs) using the Logical Online method, let’s touch upon the Physical Online method as well.

Existing Data Guard

VLDBs are very often protected by Data Guard with one or more standby databases. When you start a migration with Zero Downtime Migration (ZDM), you don’t want to compromise your existing Data Guard setup. If something happens during the preparation of the migration to OCI, you still want to be able to switch over to an on-prem standby database. Something like this:

Setup with on-prem Oracle Database Data Guard and standby database in OCI

I recommend that you always configure Data Guard using Data Guard broker. ZDM supports using the broker for the migration. If you don’t have Data Guard broker configured, ZDM can use a manual configuration as well.

A few things to observe:

  • When ZDM is working, no switch-overs are allowed. Doing so will cause the process to error out.
  • By working I mean: from the second you start the ZDM migration until it is paused at ZDM_CONFIGURE_DG_SRC.
  • When ZDM is paused, you can do as many switch-overs as you like. Just ensure that the original source database becomes the primary database again as soon as possible.
  • When you need to complete the migration, the source database must be the primary database, and no switch-overs are allowed. That makes sense, because in this last phase, ZDM is switching over to the OCI target database.
  • What about fail-overs? A fail-over means loss of data, and to accept that, you need to open the database with RESETLOGS. This causes all sorts of havoc in your Data Guard setup. You are back to square one.

At which phases can you switch over to your on-prem standby database without jeopardizing the ZDM migration

The Backup

ZDM needs a full backup that can be restored on your target.

  • DBCS: If your target database is one of the OCI Cloud Services (Virtual Machine, Bare Metal or Exadata DB System), ZDM will need to take a new full backup. Existing backups can’t be used.
  • ExaCC or Exadata on-prem: You can either take a new full backup or use an existing backup that is made available on disk. In addition, if you are so fortunate to have a Zero Data Loss Recovery Appliance (ZDLRA), you don’t need to take a backup. ZDM can just restore directly from ZDLRA.

If you are targeting a DBCS, your DATA_TRANSFER_MEDIUM is set to OSS (Object Storage Service). The backup is stored in Object Storage using the Oracle Database Cloud Backup Module for OCI. The backup on the source database and the restore on the target database happen via a special SBT channel that streams the backup directly to and from Object Storage. This means:

  • The backup never hits the disk, so you don’t need additional disk space to hold the backup.
  • The duration of the backup depends on your network speed to OCI. Since the backup is streamed directly to OCI, the network can become a bottleneck. If you have a slow connection to OCI, the backup will run equally slowly. The same applies to the restore; however, the target database is already in OCI and has a good connection to Object Storage.

ZDM will by default use 10 RMAN channels for the restore and the backup. With your knowledge of the source database, you might know better. You can tweak the number of channels in the response file. Look for the parameters SRC_RMAN_CHANNELS and TGT_RMAN_CHANNELS.

Based on your knowledge or testing, you can determine which RMAN compression algorithm gives the best benefit for your database. You can adjust the compression algorithm in the response file using the parameter ZDM_RMAN_COMPRESSION_ALGORITHM. The default is MEDIUM, which normally gives the best balance between compression ratio and CPU time. And remember, RMAN compression normally requires a license for the Advanced Compression Option, but when you migrate with ZDM, you can use it for free.
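
For example, in the response file (the values are examples only):

SRC_RMAN_CHANNELS=16
TGT_RMAN_CHANNELS=16
ZDM_RMAN_COMPRESSION_ALGORITHM=MEDIUM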

While ZDM is taking a backup of the source database, no other backups should be running. Be sure to put your regular backups, including archive backups, on hold.

Redo Apply

In the source database, you should keep archive logs on disk until the target database has been restored, Data Guard has been configured, and the target database has caught up with redo apply. If you have a slow network connection and a huge database, it can take days until the backup has completed, the restore has completed, and redo apply has caught up.

  • Imagine you start the backup at sequence 100.
  • The restore of the target database finishes two days later. The source database is now at sequence 200.
  • ZDM configures Data Guard and starts redo transfer and redo apply. The source database is now at sequence 220.
  • Sequences 100-220 must be available on disk on the source database host, so the source (primary) database can transfer them to the target (standby) database.

It is not uncommon for VLDBs to generate a daily amount of redo in the double-digit TB range. Just the other day I talked to a customer whose database generated 15 TB of archive logs a day.

First, you must be able to transfer the redo from the source database (primary) to the target database (standby). This is simple math: if you generate 15 TB of redo a day, you should be able to transfer that using a 1.5 Gbps connection (amount of redo / 24 / 60 / 60 * 8). If transferring redo becomes a problem, you can look into using redo transport compression. This can reduce the amount of data that must be transferred at the cost of CPU cycles. You can read more about it in the MOS note Redo Transport Compression in a Data Guard Environment (Doc ID 729551.1). I learned from colleagues in the Maximum Availability Architecture (MAA) team that TDE Tablespace Encryption and redo transport compression don’t play very well together. If your source database is encrypted, you should not expect much benefit from redo transport compression.
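
As a worked example: 15 TB of redo per day is 15,000 GB x 8 ≈ 120,000 Gbit, and spread over 86,400 seconds that is roughly 1.4 Gbit/s – so a 1.5 Gbps connection is just barely enough, and that is before accounting for protocol overhead and peaks in redo generation.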

Next, redo must be applied on the target database. Is the target database capable of applying redo that fast? On Exadata the answer is most likely: YES – but as always in IT, it depends.

Redo Apply benchmark of Oracle Database Data Guard

The numbers in the above graph come from Redo Apply Best Practices – Oracle Data Guard and Active Data Guard. Based on your database release and the type of workload you have in the database, you can see the amount of redo that can be applied daily (in TB). The last two columns are using Multi-Instance Redo Apply (MIRA) with either two or four active RAC nodes. The numbers were generated on an Exadata.

Backup of Target Database

How do you backup your target database in OCI? You want to have a valid backup from the very second that you switch over to the target database.

The target placeholder database that you originally created will be overwritten by ZDM. This means that you can’t configure and enable automatic backup in OCI in advance. You must wait until the migration has completed before you enable automatic backup.

You could:

  • Extend the downtime window to allow automatic backup to be configured after the migration. Also, allow enough time for the first backup to complete.
  • Or, do your own backup in OCI. This is more cumbersome but will allow you to open the database for business immediately after the switchover. But you are in charge of the backup now. All the various bits and pieces are available:
    • Original backup is still in object storage.
    • Archive logs are on disk – you can back them up manually.
    • Perform incrementals if needed – put them somewhere safe.
    • In case of emergency – glue it all together

Data Guard on Target Database

Similar to automatic backup, you can’t create a Data Guard Association until ZDM has completed the migration. The cloud tooling does not support creating the standby database through a cascading standby. This means that you can’t build your OCI standby database until after ZDM has completed the migration – and the OCI database is the primary database. You can start to create the OCI Data Guard as soon as ZDM is done, but

  • You must tolerate that the OCI Data Guard is missing
  • Or, take downtime

Conclusion

Even huge Oracle Databases can be migrated to OCI using Zero Downtime Migration. You might need to make a few adjustments to the standard flow, but it is absolutely doable. Automatic backup and Data Guard can’t be created in OCI until after the migration. This might force you to take downtime. Besides the actual migration, you should also do your best to ensure performance stability once the database is open for business. For that purpose, you should have a look at our webinar Performance Stability, Tips and Tricks and Underscores.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration to ExaCS

This will be an easy blog post. To migrate your Oracle Database to Exadata DB System (ExaCS), just follow this procedure from the DBCS blog post. Plus, execute these two commands on the target after the migration:

[root@tgthost]$ dbaascli registerdb prereqs --dbname [db_name] --db_unique_name [db_unique_name]
[root@tgthost]$ dbaascli registerdb begin --dbname [db_name] --db_unique_name [db_unique_name]

More Details, Please

Granted – the above statement is bold, and it is almost true. There are a few important details to share. First, have a look at Additional Information for Migrating to Exadata Cloud Service, which you find in the Release Notes.

Target Environment

To get the full benefits of Exadata, you should be running RAC databases. Exadata and RAC are a perfect match, but it is up to you to decide. If your source database is already a RAC database (or RAC One Node), you must migrate to a RAC database. However, if your source database is a single instance, you have the option to either stay single instance or go RAC. If you go RAC, just create the target placeholder database as a RAC database, and everything else will happen automatically.

You must create a placeholder database on the target system. The placeholder database gets overwritten by ZDM during the migration, but it is initially used by ZDM to get information on how you want to configure your target database in OCI. For example, the migrated database will be placed in the same Oracle Home as the target placeholder database. Also, the architecture is determined this way. In other words, if you create the target placeholder database as a RAC database, your source database is automatically converted to a RAC database during migration. If you create a single instance placeholder database, you get a single instance database.

Just like any other migration, when creating the placeholder database there are some things you should be aware of. On the OCI webpage you have to:

  • Set Database name to the DB_NAME of the source database.
  • Set Database version to the same as the source database.
  • Ensure the patch level of the Oracle Home match that of the source system – or be higher.
  • Ensure that the Password matches the SYS password of the source database.

When using Zero Downtime Migration (ZDM) to migrate to Exadata DB System (ExaCS) be sure to create the target placeholder database in the correct manner.

You can choose your own DB_UNIQUE_NAME – it should differ from the source database. Select an Oracle Home that has the same or higher patch level than your source database. I recommend always migrating to the latest Release Update. If necessary, ZDM will automatically invoke datapatch after the switchover. The other parameters don’t matter – the database gets overwritten by ZDM anyway. Also, be aware that when using the OCI webpage you get a RAC database. There is no option to change it. But it is, after all, the perfect match for Exadata anyway.

Now, if you want more advanced options – like creating a single instance database – you can’t use the OCI webpage. You will have to use either dbaasapi or dbaascli. That gives you full control over the options – but they are not as easy to use as the webpage.

It’s A Wrap

I have created a video on YouTube that demos a migration to Exadata DB System.

Speaking of YouTube, I suggest that you subscribe to the Oracle Database Upgrades and Migrations YouTube channel so you never miss anything.

The Exadata Cloud Service is an awesome platform and it is really easy to migrate to it using Zero Downtime Migration. And converting to RAC is even easier.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration to DBCS

Let me show you how you can migrate an Oracle Database into OCI. My source database is a CDB running on 12.1.0.2. I want to migrate to the same version using the Physical Online method which uses Data Guard.

This procedure can be used to migrate to:

  • VM DB Systems
  • Bare Metal DB Systems
  • Exadata DB System (ExaCS)

If you want to migrate to Exadata Cloud at Customer or Exadata on-prem, you can use a lot from this post, but there are some differences which I will not cover.

Prerequisites

  1. I need a ZDM service host. It is called zdmhost and the user is called zdmuser.
  2. My source database is a CDB called SALES (DB_NAME), and the host is named srchost.
  3. I want to migrate to a VM DB System, and it is already created. I name it tgthost. I use the same name for the database, SALES. I must use ASM as storage management software.

Overview of the components in this demo

Prepare Source

Ensure the database is in ARCHIVELOG mode:

select log_mode from v$database;

If the database version is 12.2 or higher, a TDE keystore must be present – even if the database is not encrypted. A license is not needed to create a TDE keystore – not until data is actually encrypted. The keystore must be OPEN, and the type must be either AUTOLOGIN, LOCAL_AUTOLOGIN, or PASSWORD. In a CDB, this applies to CDB$ROOT and all PDBs:

SELECT con_id, status, wallet_type FROM v$encryption_wallet;

If status is OPEN_NO_MASTER_KEY it means that no TDE master encryption key has been created and I will need to create one. Instructions are also in the documentation.
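
For reference, creating a keystore and setting a master key could look like the following. This is only a sketch for a password-based software keystore: the keystore location is an example, and the exact steps depend on your WALLET_ROOT/sqlnet.ora configuration, so follow the documentation:

SQL> administer key management create keystore '/u01/app/oracle/admin/SALES/wallet' identified by "<keystore-password>";
SQL> administer key management set keystore open identified by "<keystore-password>" container=all;
SQL> administer key management set key identified by "<keystore-password>" with backup container=all;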

Ensure that the source host can resolve the network name of the target host. It is important to add two entries – one with the host name and one with the SCAN name (they should both point to the target host):

[root@srchost]$ echo -e "[ip address] tgthost" >> /etc/hosts
[root@srchost]$ echo -e "[ip address] tgthost-scan" >> /etc/hosts

Test connection over SQL*NET to the target:

[oracle@srchost]$ sqlplus system@tgthost-scan/[target-cdb-service-name]

Prepare Target

I need a target placeholder database. The placeholder target database is overwritten during migration, but it retains the overall configuration.

  • VM DB System: When I create a new system, a database is automatically created. This database is my target placeholder database.
  • Bare Metal and Exadata DB Systems: Create a new database on an existing system.

In any case:

  • DB_NAME must exactly match that of the source database – also in the same case.
  • DB_UNIQUE_NAME must be different.
  • Use the same SYS password as the source database.
  • Any database parameters for the target database, including SGA parameters, are maintained during the migration, and the migrated database runs with this same configuration.
  • Do not enable automatic backup (until after the migration has completed).
  • The architecture (single instance or RAC) of the target placeholder database determines the architecture of the OCI database. If I want my OCI database to be a RAC database, I simply create the target placeholder database as a RAC database, and the end result will be a RAC database.
  • COMPATIBLE must match that of the source database.
  • The patch level of the target database must be equal to or higher than the source database. If they differ, ZDM will automatically execute datapatch as part of the migration. Use $ORACLE_HOME/OPatch/opatch lsinventory.
  • The source and target database must use the same time zone file version. If they don’t match, I must upgrade the time zone file in the source database. If I have a lot of data of type TIMESTAMP WITH TIME ZONE and I can’t afford the downtime it takes to do the upgrade, then consider using the logical online method: select * from v$timezone_file;

There are a few other requirements listed in the document, but these are the most important ones.

Like the source host, the target host must be able to resolve the network name of its counterpart:

[root@tgthost]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@tgthost]$ echo -e "[ip address] srchost-scan" >> /etc/hosts

And I test the connection:

[oracle@tgthost]$ sqlplus system@srchost-scan/[source-cdb-service-name]

Prepare ZDM

The ZDM service host must be able to resolve the host names of the source and target database host:

[root@zdmhost]$ echo "<source IP address> srchost" >> /etc/hosts
[root@zdmhost]$ echo "<target IP address> tgthost" >> /etc/hosts

I put my private SSH keys to the source and target database hosts into the ~/.ssh directory. Ensure permissions are set properly:

[zdmuser@zdmhost]$ cp srchost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/srchost_key_file 
[zdmuser@zdmhost]$ cp tgthost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/tgthost_key_file 

Test the connection. I connect as opc, but you might have a different user. Read more about access to the database host in the documentation:

[zdmuser@zdmhost]$ ssh -i ~/.ssh/srchost_key_file opc@srchost date
[zdmuser@zdmhost]$ ssh -i ~/.ssh/tgthost_key_file opc@tgthost date

Prepare Response File

I will use a template response file as the basis for my response file:

[zdmuser@zdmhost]$ cp $ZDM_HOME/rhp/zdm/template/zdm_template.rsp ~/physical_online.rsp
[zdmuser@zdmhost]$ chmod 700 ~/physical_online.rsp

This is the response file that I end up with (see appendix B for an explanation):

TGT_DB_UNIQUE_NAME=SALES_fra2t4
MIGRATION_METHOD=ONLINE_PHYSICAL
DATA_TRANSFER_MEDIUM=OSS
PLATFORM_TYPE=VMDB
HOST=https://swiftobjectstorage.[region].oraclecloud.com/v1/[my-tenancy-object-storage-namespace]
OPC_CONTAINER=zdm-staging
SKIP_FALLBACK=TRUE
SHUTDOWN_SRC=TRUE

If your database is Oracle Database 11.2.0.4 be sure to set these response file parameters:

ZDM_RMAN_DIRECT_METHOD=ACTIVE_DUPLICATE
ZDM_USE_DG_BROKER=FALSE

Perform Evaluation

I am now ready to perform a migration evaluation. It is a dry run of the migration and performs various sanity checks. Nothing is changed during the evaluation:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/physical_online.rsp \
   -sourcenode srchost \
   -sourcedb SALES_fra3fw \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -targethome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
   -backupuser "daniel.overby.hansen@oracle.com" \
   -eval

A few comments:

  • sourcenode and targetnode are the host names of the source and target database host. Those names must be resolvable, which is why I put them into /etc/hosts already.
  • sourcedb is the DB_UNIQUE_NAME of the source database. My database is on ASM. If I didn’t use ASM I should use sourcesid instead and specify the database SID.
  • srcarg1 is the name of the user that I connect as to the source database host. You might need to change that.
  • srcarg2 and tgtarg2 are the locations of the private key files for the source and target database host respectively. The private key file must be usable by the user that is specified in srcarg1 and tgtarg1, which in this case is opc.

I am prompted for the SYS password to the source database and also the password for the backupuser (which is my OCI user). For the latter, please note that this password is not my user password – it is an auth token. I also get a job ID:

When using OCI object storage this is NOT your user password, but an auth token

The migration evaluation is then started, and I can use the job ID to monitor it:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli query job -jobid <job ID>

Output from zdmcli query job command

My colleague Sinan Petrus Toma showed how to loop:

[zdmuser@zdmhost]$ while :; do $ZDM_HOME/bin/zdmcli query job -jobid <job ID>; sleep 10; done

However, I prefer to get more details, so I tail the ZDM log file instead. This little one-liner finds the newest file and tails it:

[zdmuser@zdmhost]$ tail -n 50 -f "`ls -td /u01/app/oracle/chkbase/scheduled/*log | head -1`"

And I get this level of detail:

zdmhost: 2021-06-03T07:19:50.317Z : Starting zero downtime migrate operation ...
zdmhost: 2021-06-03T07:19:53.445Z : Executing phase ZDM_GET_SRC_INFO
zdmhost: 2021-06-03T07:19:53.446Z : Retrieving information from source node "srchost" ...
zdmhost: 2021-06-03T07:19:53.447Z : retrieving information about database "SALES_fra3fw" ...
zdmhost: 2021-06-03T07:20:02.743Z : Execution of phase ZDM_GET_SRC_INFO completed
zdmhost: 2021-06-03T07:20:02.826Z : Executing phase ZDM_GET_TGT_INFO
zdmhost: 2021-06-03T07:20:02.827Z : Retrieving information from target node "tgthost" ...
zdmhost: 2021-06-03T07:20:12.197Z : Determined value for parameter TGT_DATADG is '+DATA'
zdmhost: 2021-06-03T07:20:12.198Z : Determined value for parameter TGT_REDODG is '+RECO'
zdmhost: 2021-06-03T07:20:12.198Z : Determined value for parameter TGT_RECODG is '+RECO'
zdmhost: 2021-06-03T07:20:12.203Z : Execution of phase ZDM_GET_TGT_INFO completed

Eventually, I end up with a successful evaluation:

When status of an evaluation is SUCCEEDED

Start Migration

When the evaluation passes, I can start the real migration. I am re-using the same command line, but I have removed the -eval option. Instead – and this is important – I am using -pauseafter to tell ZDM to pause the migration, just before the switchover takes place. Downtime has not started yet. The database is still open for business, but ZDM will copy the data and configure Data Guard:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/physical_online.rsp \
   -sourcenode srchost \
   -sourcedb SALES_fra3fw \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -targethome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
   -backupuser "daniel.overby.hansen@oracle.com" \
   -pauseafter ZDM_CONFIGURE_DG_SRC

Again, I have to input the passwords as in eval mode. Use the job ID to monitor the progress or tail the log file. Note down the job ID. I need it later on to resume the migration.

When ZDM completes the phase ZDM_CONFIGURE_DG_SRC, it will pause and wait. I can verify it with zdmcli query job:

ZDM is currently paused

Now the standby database has been built in OCI. Redo gets transferred from my source database to the target database in OCI and is applied (see appendix D for monitoring queries). Also, this is a good time to test your new database.

Complete Migration

Now it is time to finalize the migration and switch over to the OCI target database. All I need to do, is to resume the paused ZDM job. I use the job ID that was created when I started the migration:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli resume job -jobid <job ID>

ZDM will now ensure that all redo is sent and applied before switching over to the OCI target database. Again, I use the zdmcli query job command to monitor the progress, and I can tail the log file. After a short while, the migration completes:

ZDM migration completed

That’s it. I have now migrated into OCI!

I have a few post-migration tasks to carry out:

  • Configure automatic backup
  • Protect my new OCI database with a Data Guard in OCI
  • Take a peek at the alert log (just be sure)

Other Blog Posts in This Series

Appendix

A – RAC Databases

If your source or target database is a RAC database, then you only need access to one of the nodes. Select the node that you want to use and use the host name and SSH keys to that host. Also, ensure that the IP address you are using is not the floating one (VIP). In OCI that is referred to as the private IP.

B – Response File

Allow me to put some comments on the values:

  • DATA_TRANSFER_MEDIUM – When migrating to a DB System in OCI, OSS is the only option. It is also the default value, so I could omit the parameter. The other options are applicable to Exadata Cloud at Customer and Exadata on-prem.
  • HOST – Getting the right URL for the HOST parameter might be a little tricky, but check the documentation. In my demo I use the Frankfurt data center, and thus the region is set to eu-frankfurt-1. Visit the API documentation for a list of regions. Use the OCI CLI and oci os ns get to find the tenancy's object storage namespace. Alternatively, in the OCI Console (the web page) open the Profile menu and click Tenancy: <your tenancy name>; the namespace string is listed under Object Storage Settings. To read more, have a look at the OCI documentation. Thanks to Bartlomiej Sowa for putting in a comment with this information – much appreciated!
  • OPC_CONTAINER – The OCI Object Storage bucket that will be used as a staging area for the backup of the source database. I recommend using a separate bucket for each migration; it makes it a lot easier to clean up afterwards.
  • PLATFORM_TYPE – VMDB covers Virtual Machine and Bare Metal DB Systems. It is also the default value, so I could omit the parameter.
  • SHUTDOWN_SRC – I choose to shut down the source database to ensure no one uses it unintentionally after the migration.
  • SKIP_FALLBACK – To keep my demo simple, I choose not to configure fallback. Remember that falling back to the source database requires a license for the Advanced Security Option or setting the parameter TABLESPACE_ENCRYPTION correctly.
  • TGT_DB_UNIQUE_NAME – The DB_UNIQUE_NAME of the target placeholder database. It must be different from that of the source database.
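To put the settings above into context, here is a minimal sketch of how the relevant lines in physical_online.rsp could look. The values are illustrative assumptions (database name, bucket name, and the object storage namespace placeholder); adjust them to your own environment:

TGT_DB_UNIQUE_NAME=SALES_OCI
DATA_TRANSFER_MEDIUM=OSS
PLATFORM_TYPE=VMDB
HOST=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/<object-storage-namespace>
OPC_CONTAINER=zdm-sales-migration
SHUTDOWN_SRC=TRUE
SKIP_FALLBACK=TRUE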

C – Backup Strategy

During a migration with ZDM, you should keep your regular backup strategy. Keep doing the same backups as you did before. But avoid having ZDM backups and regular backups run at the same time. Also, if you are dealing with a RAC database be sure to put the snapshot control file on shared storage. Otherwise, you might get ORA-00245 errors during backups.
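For a RAC source database, a sketch of moving the snapshot control file to shared storage with RMAN could look like this (the +DATA disk group and file name are assumptions; any location visible to all instances will do):

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/snapcf_sales.f';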

D – Monitoring

Use these queries to monitor the redo apply. On source/primary database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log GROUP BY thread#;

Target/standby database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log WHERE applied='YES' GROUP BY thread#;
--MRP process should be 'APPLYING_LOG'
SELECT process, status, sequence# FROM v$managed_standby;
SELECT * FROM v$archive_gap;
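In addition to the queries above, this standard Data Guard view (not specific to ZDM) gives a quick overview of the transport and apply lag on the standby:

SELECT name, value, time_computed FROM v$dataguard_stats WHERE name IN ('transport lag', 'apply lag');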

E – Troubleshooting

PRCZ-4001

If you run into this error:

PRCZ-4001 : failed to execute command "/bin/uname" using the privileged execution plugin "zdmauth" on nodes "doverbyh-zdm-tgt" within 120 seconds
PRCZ-2006 : Unable to establish SSH connection to node "doverbyh-zdm-tgt" to execute command "/bin/uname"
No more authentication methods available

Check your key files. They must be in RSA/PEM format (the private key must start with -----BEGIN RSA PRIVATE KEY-----).
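If your key is in the newer OpenSSH format, a sketch of generating a new key pair in RSA/PEM format, or converting an existing private key in place, could look like this (the file names are just examples):

[zdmuser@zdmhost]$ ssh-keygen -t rsa -b 4096 -m PEM -f ~/.ssh/srchost_key_file
[zdmuser@zdmhost]$ ssh-keygen -p -m PEM -f ~/.ssh/srchost_key_file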

ZDM_OBC_INSTALL_CREDENTIALS_INVALID

If you run into this error:

srchost: 07:08:00.000: Validating object store credentials..
srchost: <ERR_FILE><Facility>PRGO</Facility><ID>ZDM_OBC_INSTALL_CREDENTIALS_INVALID</ID></ERR_FILE>

Check your credentials to OCI. Remember, when prompted for the password of your OCI account, you must supply an auth token, not your console password (even though the prompt text is misleading).
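You can create an auth token in the OCI Console (open the Profile menu, then User Settings and Auth Tokens) or, as a sketch, with the OCI CLI (the user OCID is a placeholder):

[zdmuser@zdmhost]$ oci iam auth-token create --user-id <user OCID> --description "zdm-migration"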

ZDM_GET_SRC_INFO

If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the source database host is added to the known_hosts file on the ZDM service host. Also, you can verify connectivity by trying to log on via SSH:

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-source-host>

ZDM_GET_TGT_INFO

If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the target database host is added to the known_hosts file on the ZDM service host. Again, you can verify connectivity by trying to log on via SSH:

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-target-host>
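In both cases, you can also add the host key non-interactively with ssh-keyscan (a sketch; replace the host name accordingly):

[zdmuser@zdm]$ ssh-keyscan -H <name-of-source-or-target-host> >> ~/.ssh/known_hosts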

ZDM_SWITCHOVER_SRC

If the migration is stuck at the phase ZDM_SWITCHOVER_SRC and you see the following in the ZDM log file:

zdmhost213: 2022-05-10T18:01:22.565Z : Executing phase ZDM_SWITCHOVER_SRC
zdmhost213: 2022-05-10T18:01:22.566Z : Switching database SALES2_fra3cx on the source node srchost to standby role ...
zdmhost213: 2022-05-10T18:01:22.566Z : checking if source database is ready for switching role...
srchost: 2022-05-10T18:01:35.340Z : Validating database SALES2_fra3cx role is PRIMARY...
srchost: 2022-05-10T18:01:35.742Z : Validating database SALES2_fra3cx is in open mode...
srchost: 2022-05-10T18:01:36.144Z : Waiting for SALES2_fra3cx to catch up, this could take a few minutes...

Then proceed to the primary and standby databases and investigate whether there are any log shipping or log apply problems. The root cause can be many different things, for instance conflicts when negotiating network security options.
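As a starting point, this standard Data Guard query (not specific to ZDM) reveals redo transport errors on the primary; combine it with the standby queries in appendix D:

SELECT dest_id, dest_name, status, error FROM v$archive_dest WHERE status <> 'INACTIVE';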

PRGZ-3420 / ZDM_MANIFEST_TO_CLOUD

Check the ZDM log file stored on the ZDM service host in $ZDM_BASE/chkbase/scheduled. If you find this error:

zdmhost213: 2022-05-10T18:20:34.330Z : Executing phase ZDM_MANIFEST_TO_CLOUD
zdmhost213: 2022-05-10T18:20:34.331Z : registering database "SALES2_OCI" to Oracle Cloud
tgthost: 2022-05-10T18:20:50.895Z : Updating DBAAS wallet keys ...
####################################################################
PRGZ-3420 : failed to modify keystore entry "oracle.security.client.password1" in wallet "/var/opt/oracle/dbaas_acfs/sales2/db_wallet".

Then proceed to the target host and find the log file from that phase. It is located in $ORACLE_BASE/zdm/zdm_<target_db_unique_name>_<zdm_job_id>/zdm/log. Do a grep:

cd /u02/app/oracle/zdm/zdm_SALES2_OCI_12/zdm/log
grep "cloud registry" zdm_manifest_to_cloud_*.log

If you find the following lines in the log file:

grep "Error: Missing cloud registry" zdm_manifest_to_cloud_*.log
zdm_manifest_to_cloud_337115.log:Error: Missing cloud registry file SALES2.ini
zdm_manifest_to_cloud_337115.log:Error: Missing cloud registry file SALES2.ini

You have most likely used the wrong case for DB_NAME on the target database. You choose that in the OCI console in the field Database name. In this case, DB_NAME of the source database is SALES2, but I had incorrectly used sales2 (lowercase) for the target database. Normally, DB_NAME is case insensitive, and to the database itself it is. But the OCI cloud tooling is case sensitive. Looking in /var/opt/oracle/creg on the target database, I can find a file called sales2.ini, but ZDM is looking for SALES2.ini.

How do you solve this? If this is a test database, scratch everything and start all over. But bear in mind that the switchover has already taken place, and the OCI database is now the primary database. For a production database this is tricky: the database has been migrated to OCI, it works, and all your data is fine on the target database, but the OCI cloud tooling does not work properly. You need to open an SR and get assistance from support.

PRGO-4086 : failed to query the "VALUE" details from the view "V$ARCHIVE_GAP" for database "target_db_name"

If the switchover fails during this query:

select count(*) from v$archive_gap

And your target database is 11.2.0.4 or 12.1.0.2, you have hit bug 18411339 (Low performance or ORA-1220 for query on V$ARCHIVE_GAP on 11.2.0.4 or later release). Apply the patch on the target database and retry the ZDM migration. You can also open the target database before you start the switchover process (requires Active Data Guard because redo apply must be on). The above query fails only when the standby database (the target database) is mounted. It works fine on an open database.
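A sketch of opening the target database read-only while keeping redo apply running (this requires Active Data Guard; if the configuration is managed by the Data Guard broker, stop and restart redo apply through the broker instead):

SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> alter database recover managed standby database disconnect from session;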

Zero Downtime Migration – Physical Online Migration

You can migrate a database with Zero Downtime Migration (ZDM) using the Physical Online method. A standby database is built in OCI and kept in sync via redo apply. When you are ready, a switchover is all it takes to complete the migration. This way, even very large databases can be migrated with no or very little downtime.

Concept of physical online migration

With the Physical Online method, you can target:

  • Virtual Machine DB System
  • Bare Metal DB System
  • Exadata DB System (ExaCS)
  • Exadata Cloud at Customer (ExaCC)
  • Exadata (on-prem)

Since the method uses Data Guard as the migration vehicle, it is only possible with Enterprise Edition databases.

Benefits

  • Personally, I like this approach because it builds on technology that most of us know already. Data Guard is heavily in use in most organizations. Although ZDM does all the heavy lifting, it is nice to know what happens underneath the hood.

  • To complete the migration all you need is a regular Data Guard switchover (which ZDM also takes care of). A switchover operation ensures that there will be no data loss at all. In addition, if you have a properly configured application, it won’t experience downtime. Just a brown-out while the switchover takes place.

  • This method has an excellent fallback possibility. When you switch over to the OCI database, the redo flow is reversed. Now, the on-prem database is a standby database, and it is kept in sync via redo from the OCI database. If you don't have a license for the Advanced Security Option, special attention is needed. Read more about it later on.

  • You migrate the entire database. All the internals are brought to the cloud as well. This includes:

    • AWR
    • SQL Plan Baseline
    • SQL Profiles
    • Public objects
    • Etc.
  • The initial backup of the source, on-prem database will use RMAN Compression. It will drastically reduce the size of the backup, and thus the amount of data that you must transport to the cloud. And the really good thing: ZDM can use RMAN Compression even if you don’t have a license for the Advanced Compression Option. Check the documentation for details on the restricted use license.

  • You can customize the RMAN backup. You know your system best, so it is possible to tweak the number of RMAN channels used and the compression algorithm applied. The defaults are 10 channels and medium compression.

  • When ZDM takes the full backup of your source database (Exadata on-prem and ExaCC excluded), it uses Oracle Database Cloud Backup Module for OCI to allocate a special sbt channel. The backup is sent directly to OCI Object Storage. Thus, you don’t need any additional disk space on your source system to hold the backup. The same applies on the target system.

  • When migrating to ExaCC or Exadata on-prem you can use an existing backup. Use the response file option DATA_TRANSFER_MEDIUM=EXTBACKUP. Also, if you have a Zero Data Loss Recovery Appliance (ZDLRA) you can restore directly from it, without taking a full backup first. If you are migrating to any other platform, ZDM will need to take a full backup as part of the workflow.

  • The standby database is kept in sync via redo apply. This means that there are no restrictions in supported data types. Further, DDL in any form is supported and even with a heavy workload on the primary database, your standby database should be able to keep up.

  • You can migrate to a higher patch level. ZDM will automatically invoke datapatch for you on the OCI database. But you must handle the on-prem database yourself. After switching over to the OCI database and datapatch has been executed, you should patch the on-prem Oracle Home to the same patch level.
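A simple way to confirm that datapatch has run on the OCI database after the switchover is to check the patch inventory in the data dictionary, for example:

SELECT patch_id, action, status, action_time FROM dba_registry_sqlpatch ORDER BY action_time;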

Considerations

  • You can migrate to the same version only. This is a restriction of Data Guard. If you also need to upgrade the database, then you must do that after the migration. But it will incur additional downtime.
  • The same applies to PDB conversion. ZDM can run the noncdb_to_pdb.sql script for you, but that will incur downtime as well. If you decide to convert to a PDB, you need to develop an alternative fallback plan (such as Data Pump or transportable tablespaces), because the PDB conversion is irreversible, and your on-prem database will be useless as a fallback option.
  • It is not possible to migrate between editions.
  • The entire database is migrated. I listed this as a benefit, but it has a flipside as well. It is not possible to perform any transformation during the migration, e.g. converting old BasicFile LOBs to SecureFile LOBs. Also, you bring over any old baggage in your database. Sometimes it is nice to start from scratch, because garbage tends to accumulate in a database over time.
  • If you have a Standard Edition database, you can't use the Physical Online method. Only the Physical Offline method is supported.
  • You can’t configure Automatic backup on your target database until after the migration has been completed.
  • If you need to protect your OCI database with Data Guard, then you must build the standby database after the migration has been completed. Normally, you would use a cascading standby database to keep the target database protected by Data Guard even after the migration has been completed. But currently the OCI tooling does not support that option.
  • If your source database is not encrypted, you must create a keystore that can be used later on for TDE Tablespace Encryption. The keystore is created in the source database with no downtime, and you don’t need a separate license just for creating a keystore. You do not have to actually encrypt your source database until it reaches the cloud.
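A sketch of preparing such a keystore on the source database, assuming 12c-style syntax and that the keystore location (WALLET_ROOT or ENCRYPTION_WALLET_LOCATION in sqlnet.ora) is already configured; the path and password are placeholders:

SQL> administer key management create keystore '/u01/app/oracle/admin/SALES/wallet' identified by "MyKeystorePwd";
SQL> administer key management set keystore open identified by "MyKeystorePwd";
SQL> administer key management set key identified by "MyKeystorePwd" with backup;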

Fallback

Using this approach, you have a great fallback option. When you switch over to the OCI database, then the redo flow is reversed and the source, on-prem database is now the standby database. If you need to fallback, simply issue another switchover (you could call that a switchback) and start to use the on-prem database again – with no data loss. See step 5 in MAA Practices for Cloud Migration Using ZDM (Doc ID 2562063.1).

There is a catch, however. The OCI database is encrypted using TDE Tablespace Encryption. Any redo generated for an encrypted tablespace is also encrypted. This means that the on-prem database must be able to decrypt the redo before it can be applied. This can happen in two ways:

  • You have a license for the Advanced Security Option.
  • You tell the on-prem database to decrypt the redo and apply it to unencrypted tablespaces.
    • On-prem database:
    SQL> alter system set tablespace_encryption=decrypt_only;
    
    • OCI database:
    SQL> alter system set tablespace_encryption=auto_enable;
    

Conclusion

Using the Physical Online method in ZDM is a straightforward way of migrating your database to OCI. It uses Data Guard, which is very familiar to most of us. The method does, however, have some limitations, and you can't target Autonomous Databases.

Want to Know More

If you want to know more about migrations in general, I suggest that you take a look at our webinar Migration Strategies – Insights, Tips and Secrets.

In addition, these links contain additional useful information:

Other Blog Posts in This Series