Zero Downtime Migration – Logical Offline Migration How To Minimize Downtime

With Oracle Zero Downtime Migration (ZDM) 21.2, you can now perform logical offline migrations from IBM AIX and Oracle Solaris. Since it is an offline approach, the database is unavailable during the entire migration. Let’s look at how you can minimize downtime.

Before Migration

Statistics

Ensure that dictionary statistics are up-to-date in the source database. Data Pump does a lot of querying in the data dictionary, and accurate statistics are vital for good execution plans. I recommend gathering stats on SYS and SYSTEM:

begin
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

You could also use dbms_stats.gather_dictionary_stats. But it gathers statistics on all internal schemas, and most often, SYS and SYSTEM are sufficient:

begin
   dbms_stats.gather_dictionary_stats;
end;
/

Data

I think it goes without saying, but the less data, the faster the migration completes. If you have any junk data in your database, get rid of it.
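If you are not sure where the bulk of the data sits, a quick look at the largest segments can help you decide what to clean up. A minimal sketch, assuming Oracle Database 12c or later for the FETCH FIRST syntax:

-- List the 20 largest segments to spot candidates for cleanup
select owner, segment_name, segment_type, round(bytes/1024/1024/1024) size_gb
from   dba_segments
order  by bytes desc
fetch  first 20 rows only;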

Migration

Parallel

Apply a proper degree of parallelism. Rule-of-thumb:

  • On-prem: 2 x number of physical cores
  • OCI: Number of OCPUs

When you are importing in OCI, consider scaling up on CPUs. More CPUs, faster import. Bare Metal and Exadata DB Systems scale online, whereas Virtual Machines need around 10 minutes of downtime.

Import processes running in parallel can read from the same dump file. This means that the number of parallel processes for an import is not limited by the number of dump files (which is the case for an export). To illustrate with an example (a command sketch follows the list):

  • Export: parallel=8 – 8 dump files were created. Each parallel worker needs exclusive access to a dump file.
  • Import: parallel=16 – still only 8 dump files. Each parallel worker can read from all dump files, no locks required.
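For reference, this is roughly how those settings would look when running Data Pump directly; the connect strings, schema, directory, and dump file names are placeholders:

$ expdp system@source_db schemas=SH directory=DUMP_DIR dumpfile=sales_%U.dmp parallel=8
$ impdp system@target_db schemas=SH directory=DUMP_DIR dumpfile=sales_%U.dmp parallel=16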

You can control the Data Pump degree of parallelism via parameters in the ZDM response file.

Data Pump Mode

Should you go via a dump file or directly over a database link? Only your own tests can tell, but I am pretty sure that you will find that dump file mode is the quickest.

When you export via a dump file, you also need to figure out how you can transfer it to the target database:

  • Direct Transfer – you can use either scp or rsync. At the risk of starting a religious discussion, I don’t think it makes much of a difference in this use case. If you need rsync on Exadata, you can find guidance in MOS Doc ID 1556257.1.
  • Via Object Store – you can use either curl or OCI CLI. I would expect you can get better performance with OCI CLI, but you need to test it in your environment (see the sketch below).
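As an illustration of the Object Store route, a minimal OCI CLI sketch; the bucket name and file paths are placeholders, and on the target you download the file again before importing:

$ oci os object put --bucket-name zdm-staging --file /u01/dumps/sales_01.dmp
$ oci os object get --bucket-name zdm-staging --name sales_01.dmp --file /u01/dumps/sales_01.dmp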

Remember to ensure you have adequate disk space on the source database host to hold the dump file. Use mount points that can deliver good write performance, and ideally, they should be separated from the storage that holds the database data files.

SecureFile LOB

ZDM automatically transforms BasicFile LOBs into SecureFile LOBs – which is very good. Don’t turn it off. Imports run faster when SecureFile LOBs are in play.
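ZDM adds the needed transformation for you. For reference, this is the Data Pump transform that achieves the same if you run the import yourself:

$ impdp ... transform=lob_storage:securefile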

After Migration

Statistics

As soon as ZDM has completed the migration, gather dictionary stats in the target database. All those new objects that came in with the import have certainly made the statistics stale.

begin
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

Next, you need to figure out how you want to take care of the optimizer statistics. In a previous post, I discussed why this is needed and how to do it. If you decide to import statistics via DBMS_STATS, just ensure that you have gathered dictionary stats before.

Further Reading

I recommend that you also read the following posts:

Conclusion

When migrating from IBM AIX and Oracle Solaris, you use the logical offline approach, which means a lot of downtime. You have a few options to make it run as fast as possible.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Online and Sequences

When migrating with Oracle Zero Downtime Migration (ZDM) and the logical online method, you use Data Pump for the initial load and Oracle GoldenGate to keep the database in sync. When using Oracle GoldenGate, your sequences need attention; otherwise, you will probably end up with a lot of ORA-00001 unique constraint violated errors.

The Problem

I think it is best to illustrate with an example:

  1. Source database: Sequence s1 currently has a value of 100.
  2. Source database: ZDM starts and performs the export for the initial load.
  3. Target database: The Data Pump import creates sequence s1 with current value 100.
  4. Source database: A user inserts a row and uses the sequence: insert into orders values(s1.nextval ...
  5. Source database: s1.nextval takes the next number from the sequence, and the statement now looks like this: insert into orders values(101 ...
  6. Target database: GoldenGate replicates the statement. It does not take a new value from the sequence on the target. Doing so would corrupt the data because there is no guarantee that the same number would be taken. Instead, it uses the statement as it was executed on the source: insert into orders values(101 ...
  7. Now it is time to complete the migration. Users are disconnected.
  8. Target database: GoldenGate synchronizes the final changes before users are connecting to the target database.
  9. Target database: A user inserts another record into orders: insert into orders values(s1.nextval ...
  10. Target database: s1.nextval takes the next number from the sequence on the target database. It was created with current value 100, so next value is 101: insert into orders values(101 ...
  11. Target database: 101 has already been used, and typically, there is a primary key or unique key constraint on such a column. This causes ORA-00001 unique constraint violated.

I talked to our GoldenGate experts, and this is a common pitfall. During migrations, sequences are often forgotten, and soon after the migration people start seeing ORA-00001 unique constraint violated.

The Solution

After the switchover, you must ensure that the sequences have a proper value. The sequences must be forwarded or advanced. You can do that in many ways. Here are some ideas:

Recreate sequences

Right after the migration has been completed and GoldenGate has done the final synchronization, but before users are connecting, you can recreate the sequences.

  • Drop the sequences on the target database.
  • Recreate the sequences either using DBMS_METADATA or Data Pump.
  • Ensure to adjust the privileges on the sequences.

If you can wrap all those changes into a shell script, you can have ZDM execute it automatically as part of the migration flow. Check out Pro Tip 5: Adding Custom Scripts from a previous blog post.

Forward or advance the sequence

Another approach is to forward or advance the sequence. Increment the sequence on the target database until the next value is high enough. What do I mean by high enough?

  • The value of the same sequence on the source database.
  • If you know the sequence is only used to generate numbers for one column (let’s call it t1.c1), then get the highest value from that column SELECT max(c1) FROM t1. Typically, such a column is a primary key column with a unique index underneath, so a SELECT max should go very fast.

Now that you have the desired new value for the sequence, you can calculate how much to increment by. Example: if the source database sequence (DBA_SEQUENCES.LAST_NUMBER) is at 1500 and the target database sequence is at 1000, you need to increment by 500:

alter sequence seq1 increment by 500;   -- temporarily set a large increment
select seq1.nextval from dual;          -- take one value to jump the sequence forward
alter sequence seq1 increment by 1;     -- restore the original increment

If you use the approach of SELECT max then you need to increment by 501. In any case, better increment by too much than too little.
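Here is a minimal PL/SQL sketch of the SELECT max approach, using the example names from above (t1.c1 and seq1 are placeholders; run it as the owner of the sequence):

declare
   l_col_max  number;
   l_seq_next number;
   l_delta    number;
begin
   -- Highest value already used in the column that the sequence feeds
   select max(c1) into l_col_max from t1;
   -- Take one value to see where the target sequence currently is
   execute immediate 'select seq1.nextval from dual' into l_seq_next;
   -- How far the sequence must be pushed to get past the column maximum
   l_delta := l_col_max - l_seq_next + 1;
   if l_delta > 0 then
      execute immediate 'alter sequence seq1 increment by ' || l_delta;
      execute immediate 'select seq1.nextval from dual' into l_seq_next;
      execute immediate 'alter sequence seq1 increment by 1';
   end if;
end;
/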

Similar to the former option, you can put the commands into a shell script and have ZDM execute it.

Replicate sequence changes

GoldenGate can also replicate the changes to sequences. If you decide to use this with ZDM, it will require additional configuration of GoldenGate, which is why I prefer any of the former options. If you want to know more about it, check out Oracle GoldenGate 19.1 – Using Oracle GoldenGate for Oracle Database, chapter 8.1 Installing Support for Oracle Sequences.

Conclusion

If your database uses sequences and you are using Oracle GoldenGate as part of your migration, be sure to handle your sequences properly. Otherwise, you will fall into a typical pitfall of GoldenGate migrations.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Migration and Statistics

If you decided to do a logical migration of your Oracle Database with Zero Downtime Migration (ZDM), here is something important about optimizer statistics. You must manually take care of the optimizer statistics on the target database after the Data Pump import, either by regathering the statistics or by transporting them from the source database.

Background

Whenever I talk about migration of Oracle Database with Data Pump, I always mention that it is best practice to exclude optimizer statistics from the Data Pump export. Here is how you do it:

$ expdp ... exclude=statistics

Why is that recommended? Data Pump is not very good at extracting optimizer statistics. There is nothing wrong with the statistics once they are imported, but exporting them can take a very long time.

ZDM and Data Pump

The development team behind ZDM wanted the tool to be easy to use. Also, they wanted ZDM to use all the best practices that come with the various other tools that ZDM uses. When they talked to us about Data Pump, we told them to exclude statistics.

BUT – this also leaves you in a situation where you have a database completely without optimizer statistics. It goes without saying that it is a disaster waiting to happen. You must ensure that optimizer statistics are present on the target database before allowing the users to connect to it.

Regather

One option is to regather optimizer statistics on the target database after the Data Pump import. When the Data Pump import completes, and before you proceed with the switchover, it is time to regather the statistics. Typically, you would start ZDM something like this:

$ zdmcli migrate database .... -pauseafter ZDM_MONITOR_GG_LAG

It will start by performing the initial load of the database with Data Pump. Then it will configure GoldenGate before it pauses – waiting for your signal to complete the migration. At this time, use DBMS_STATS to gather statistics:

SQL> exec dbms_stats.gather_database_stats;

This has some drawbacks:

  • It requires time and resources – which might not be the biggest problem. The source database is still open for business. We haven’t performed the switchover yet.
  • Column usage information (table COL_USAGE$) is not populated, so in some cases, no histograms will be created. This will happen if the database is supposed to automatically determine whether histograms are needed (method_opt includes size auto). In that case, you can merge the column usage information from another database via a database link, which can be a good idea if your application is depending on histograms.
  • The table, schema, or database statistics preferences are not present. It could be degree, method_opt, stale_pct or any other preferences that you can set with DBMS_STATS.SET_TABLE_PREFS (or schema or database-wide preferences). These preferences can be transferred to the new system, which is what I will talk about next.

Transferring Statistics

Another option is to transfer the statistics using DBMS_STATS. We have covered this in detail in a webinar, so I suggest you watch Performance Stability, Tips and Tricks and Underscores for all the details.

In short,

  1. The optimizer statistics in the source database are extracted from the data dictionary and stored in a transportable format in a regular heap table (referred to as a staging table).
  2. Using Data Pump, you move that table to the target database.
  3. Then you put the statistics into the data dictionary of the target database so that optimizer can use them.

Pro tip: You should perform step #1 before you start ZDM and store the staging table in one of the schemas that you are migrating with ZDM. That way, you don’t have to move the table manually. It is moved by ZDM together with the real data.
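A minimal sketch of the three steps above, assuming schema SH and a staging table called STATS_STG (both placeholders):

-- On the source, before starting ZDM: stage the statistics in a schema that ZDM migrates
begin
   dbms_stats.create_stat_table(ownname => 'SH', stattab => 'STATS_STG');
   dbms_stats.export_schema_stats(ownname => 'SH', stattab => 'STATS_STG', statown => 'SH');
end;
/

-- On the target, after the Data Pump import (ZDM moved SH.STATS_STG along with the data)
begin
   dbms_stats.import_schema_stats(ownname => 'SH', stattab => 'STATS_STG', statown => 'SH');
end;
/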

One thing to be aware of is that the table, schema or database statistics preferences are not transferred when you use e.g. DBMS_STATS.EXPORT_TABLE_STATS. There are dedicated DBMS_STATS procedures for transferring the statistics preferences.

Conclusion

You must figure out how to move the optimizer statistics into your target database when you use ZDM to perform logical migrations. If not, your target database will be without optimizer statistics which is a disaster waiting to happen.

Appendix

You might now ask: if Data Pump is bad at exporting statistics, but there are already better ways available in the database, why don't we change the Data Pump code? And you are right – but so far, other things have been prioritized. I would love to see this one day fully embedded in Data Pump.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration of Very Large Databases

Following the blog post on migrating Very Large Databases (VLDBs) using Logical Online method, let’s touch upon the Physical Online method as well.

Existing Data Guard

VLDBs are very often protected by Data Guard with one or more standby databases. When you start a migration with Zero Downtime Migration (ZDM), you don't want to compromise your existing Data Guard setup. If something happens during the preparation of the migration to OCI, you still want to be able to switch over to an on-prem standby database. Something like this:

Setup with on-prem Oracle Database Data Guard and standby database in OCI

When ZDM configures the OCI standby database, it will be added as a manually managed standby database, i.e. it does not use Data Guard broker. It does not interfere with your Data Guard broker setup, and the setup remains valid. A few things to observe:

  • When ZDM is working, no switch-overs are allowed. This will cause the process to error out.
  • By working I mean – from the second you start the ZDM migration and until it is paused at ZDM_CONFIGURE_DG_SRC.
  • When ZDM is paused, you can do as many switch-overs as you like. Just ensure that the original source database becomes the primary database again as soon as possible.
  • When you need to complete the migration, the source database must be the primary database and no switch-overs are allowed. Which does make sense because in this last phase, ZDM is switching over to the OCI target database.
  • What about fail-overs? A fail-over means loss of data, and to accept that you need to open the database with RESETLOGS. This causes all sorts of havoc in your Data Guard setup. You are back to square one.

At which phases can you switch over to your on-prem standby database without jeopardizing the ZDM migration?

Important

But you must remove your on-prem Data Guard setup before you complete the migration with ZDM, i.e. before you resume the paused job. ZDM will not be able to finish properly if there is a conflicting Data Guard configuration. To remove your existing Data Guard setup:

  1. The ZDM migration is currently paused at ZDM_CONFIGURE_DG_SRC.
  2. Connect using Data Guard CLI (dgmgrl) to the source on-prem database (primary) and remove the configuration:
remove configuration
  3. Connect using SQL*Plus (sqlplus) to the source on-prem database (primary) and remove the Data Guard broker configuration:
alter system set dg_broker_start=FALSE scope=memory;
alter system reset dg_broker_start;
alter system reset dg_broker_config_file1;
alter system reset dg_broker_config_file2;
  4. Check log_archive_config. It must contain the DB_UNIQUE_NAME of the source on-prem database (primary) and the OCI target database – nothing else. Using the above example, it should be:
alter system set log_archive_config='dg_config=(SALES_1,SALES_oci)' scope=both;
  5. Check the log_archive_dest_n parameters. Ensure that Data Guard broker removed the correct ones. There should only be one remote redo log destination – and that is the OCI target database – all others should be removed. If you need to remove one, use these commands (remember to adjust the suffix):
alter system set log_archive_dest_2='' scope=memory;
alter system reset log_archive_dest_2;
  6. Ensure that log_archive_config is set correctly (as in step 4) in the OCI target database:
alter system set log_archive_config='dg_config=(SALES_1,SALES_oci)' scope=both;
  7. Finally, complete the migration:
$ZDM_HOME/bin/zdmcli resume job -jobid <id>

The Backup

ZDM needs a full backup that can be restored on your target.

  • DBCS: If your target database is one of the OCI Cloud Services (Virtual Machine, Bare Metal or Exadata DB System), ZDM will need to take a new full backup. Existing backups can’t be used.
  • ExaCC or Exadata on-prem: You can either take a new full backup or use an existing backup that is made available on disk. In addition, if you are so fortunate to have a Zero Data Loss Recovery Appliance (ZDLRA), you don’t need to take a backup. ZDM can just restore directly from ZDLRA.

If you are targeting a DBCS, DATA_TRANSFER_MEDIUM is set to OSS (Object Storage Service). The backup is stored in Object Storage using Oracle Database Cloud Backup Module for OCI. The backup of the source database and the restore of the target database happen via a special SBT channel which streams the backup directly to and from Object Storage. This means:

  • The backup never hits the disk, so you don’t need additional disk space to hold the backup.
  • The duration of the backup depends on your network speed to OCI. Since the backup is streamed directly to OCI, the network can become a bottleneck. If you have a slow connection to OCI, the backup will run equally slowly. The same applies to the restore; however, the target database is already in OCI and has a good connection to Object Storage.

ZDM will by default use 10 RMAN channels for the restore and the backup. With your knowledge of the source database, you might know better. You can tweak the number of channels in the response file. Look for the parameters SRC_RMAN_CHANNELS and TGT_RMAN_CHANNELS.

Based on your knowledge or testing, you can determine which RMAN Compression algorithm gives the best benefit for your database. You can adjust the compression algorithm in the response file using the parameter ZDM_RMAN_COMPRESSION_ALGORITHM. The default is MEDIUM, which normally gives the best balance between compression ratio and CPU time. And remember, RMAN Compression normally requires a license for the Advanced Compression Option, but when you migrate with ZDM, you can use it for free.
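For reference, the defaults as they would appear in the response file; adjust them based on your own knowledge or testing:

SRC_RMAN_CHANNELS=10
TGT_RMAN_CHANNELS=10
ZDM_RMAN_COMPRESSION_ALGORITHM=MEDIUM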

While ZDM is taking a backup of the source database, no other backups should be running. Be sure to put your regular backups, including archive backups, on hold.

Redo Apply

In the source database, you should keep archive logs on disk until the target database has been restored, Data Guard has been configured, and the target database has caught up with redo apply. If you have a slow network connection and a huge database, it can take days until the backup has completed, the restore has completed, and redo apply has caught up.

  • Imagine you start the backup at sequence 100.
  • The restore of the target database finishes two days later. The source database is now at sequence 200.
  • ZDM configures Data Guard and starts redo transfer and redo apply. The source database is now at sequence 220.
  • Sequences 100-220 must be available on disk on the source database host, so the source (primary) database can transfer them to the target (standby) database.

It is not uncommon for VLDBs to generate double-digit TB of redo on a daily basis. Just the other day I talked to a customer whose database generated 15 TB of archive logs a day.

First, you must be able to transfer the redo from the source database (primary) to the target database (standby). This is simple math: if you have 15 TB of redo a day, you should be able to transfer that using a 1.5 Gbps connection (amount of redo / 24 / 60 / 60 * 8; 15 TB a day comes out to roughly 1.4 Gbps). If transferring redo becomes a problem, you can look into using redo transport compression. This can reduce the amount of data that must be transferred at the cost of CPU cycles. You can read more about it in the MOS note Redo Transport Compression in a Data Guard Environment (Doc ID 729551.1). I learned from colleagues in the Maximum Availability Architecture (MAA) team that TDE Tablespace Encryption and redo transport compression don't play very well together. If your source database is encrypted, you should not expect that much benefit from redo transport compression.

Next, redo must be applied on the target database. Is the target database capable of applying redo that fast? On Exadata the answer is most likely: YES – but as always in IT, it depends. The numbers in the redo apply benchmark of Oracle Database Data Guard come from Redo Apply Best Practices – Oracle Data Guard and Active Data Guard. Based on your database release and the type of workload you have in the database, you can see the amount of redo that can be applied daily (in TB). The last two columns use Multi-Instance Redo Apply (MIRA) with either two or four active RAC nodes. The numbers were generated on an Exadata.

Backup of Target Database

How do you backup your target database in OCI? You want to have a valid backup from the very second that you switch over to the target database.

The target placeholder database that you originally created will be overwritten by ZDM. This means that you can't configure and enable automatic backup in OCI in advance. You must wait until the migration has completed before you enable automatic backup.

You could:

  • Extend the downtime window to allow automatic backup to be configured after the migration. Also, allow enough time for the first backup to complete.
  • Or, do your own backup in OCI. This is more cumbersome but will allow you to open the database for business immediately after the switchover. But you are in charge of the backup now. All the various bits and pieces are available:
    • Original backup is still in object storage.
    • Archive logs are on disk – you can back them up manually (see the sketch after this list).
    • Perform incrementals if needed – put them somewhere safe.
    • In case of emergency – glue it all together.
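A minimal RMAN sketch of such a manual backup until automatic backup is enabled; where the backup pieces go depends on your configured channels, and this is not the cloud tooling doing it for you:

RMAN> backup incremental level 0 database;
RMAN> backup archivelog all not backed up 1 times;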

Data Guard on Target Database

Similar to automatic backup, you can’t create a Data Guard Association until ZDM has completed the migration. The cloud tooling does not support creating the standby database through a cascading standby. This means that you can’t build your OCI standby database until after ZDM has completed the migration – and the OCI database is the primary database. You can start to create the OCI Data Guard as soon as ZDM is done, but

  • You must tolerate that the OCI Data Guard is missing
  • Or, take downtime

Conclusion

Even huge Oracle Databases can be migrated to OCI using Zero Downtime Migration. You might need to make a few adjustments from the standard flow, but it is absolutely doable. Automatic backup and Data Guard can't be configured in OCI until after the migration. This might force you to take downtime. Besides the actual migration, you should also do your best to ensure performance stability once the database is open for business. For that purpose, you should have a look at our webinar Performance Stability, Tips and Tricks and Underscores.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration to ExaCS

This will be an easy blog post. To migrate your Oracle Database to Exadata DB System (ExaCS), just follow this procedure from the DBCS blog post. Plus, execute these two commands on the target after the migration:

[root@tgthost]$ dbaascli registerdb prereqs --dbname [db_name] --db_unique_name [db_unique_name]
[root@tgthost]$ dbaascli registerdb begin --dbname [db_name] --db_unique_name [db_unique_name]

More Details, Please

Granted – the above statement is bold, and it is almost true. There are a few important details to share. First, have a look at Additional Information for Migrating to Exadata Cloud Service, which you find in the Release Notes.

Target Environment

To get the full benefits of Exadata you should be running RAC databases. Exadata and RAC are a perfect match, but it is up to you to decide. If your source database is already a RAC database (or RAC One Node) you must migrate to a RAC database. However, if your source database is a single instance, you have the option to either stay single instance or go RAC. If you go RAC, just create the target placeholder database as a RAC database, and everything else will happen automatically.

You must create a placeholder database on the target system. The placeholder database gets overwritten by ZDM during the migration, but it is initially used by ZDM to get information on how you want to configure your target database in OCI. For example, the migrated database will be placed in the same Oracle Home as the target placeholder database. Also, the architecture is determined this way. In other words, if you create the target placeholder database as a RAC database, then your source database is automatically converted to a RAC database during migration. If you create a single instance placeholder database, you get a single instance database.

Just like any other migration, when creating the placeholder database there are some things you should be aware of. On the OCI webpage you have to:

  • Set Database name to the DB_NAME of the source database.
  • Set Database version to the same as the source database.
  • Ensure the patch level of the Oracle Home match that of the source system – or be higher.
  • Ensure that the Password matches the SYS password of the source database.

When using Zero Downtime Migration (ZDM) to migrate to Exadata DB System (ExaCS) be sure to create the target placeholder database in the correct manner.

You can choose your own DB_UNIQUE_NAME – it should differ from the source database. Select an Oracle Home that has the same or higher patch level than your source database. I recommend always migrating to the latest Release Update. If necessary, ZDM will automatically invoke datapatch after the switchover. The other parameters don't matter – the database gets overwritten anyway by ZDM. Also, be aware that when using the OCI webpage you get a RAC database. There is no option to change it. But it is, after all, the perfect match for Exadata anyway.

Now, if you want more advanced options – like creating a single instance database – you can't use the OCI webpage. You will have to use either dbaasapi or dbaascli. That gives you full control over the options – but they are not as easy to use as the webpage.

It’s A Wrap

I have created a video on YouTube that demos a migration to Exadata DB System.

Speaking of YouTube, I suggest that you subscribe to the Oracle Database Upgrades and Migrations YouTube channel so you never miss anything.

The Exadata Cloud Service is an awesome platform and it is really easy to migrate to it using Zero Downtime Migration. And converting to RAC is even easier.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration to DBCS

Let me show you how you can migrate an Oracle Database into OCI. My source database is a CDB running on 12.1.0.2. I want to migrate to the same version using the Physical Online method which uses Data Guard.

This procedure can be used to migrate to:

  • VM DB Systems
  • Bare Metal DB Systems
  • Exadata DB System (ExaCS)

If you want to migrate to Exadata Cloud at Customer or Exadata on-prem, you can use a lot from this post, but there are some differences which I will not cover.

Prerequisites

  1. I need a ZDM service host. It is called zdmhost and the user is called zdmuser.
  2. My source database is a CDB called SALES (DB_NAME), and the host is named srchost.
  3. I want to migrate to a VM DB System, and it is already created. I name it tgthost. I use the same name for the database, SALES. I must use ASM as storage management software.

Overview of the components in this demo

Prepare Source

Ensure the database is in ARCHIVELOG mode:

select log_mode from v$database;
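If it returns NOARCHIVELOG, a short outage is needed to enable it; a minimal sketch:

shutdown immediate
startup mount
alter database archivelog;
alter database open;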

If database version is 12.2 or higher, a TDE keystore must be present – even if the database is not encrypted. A license is not needed to create a TDE keystore – not until data is getting encrypted. The keystore must be OPEN and the type is either AUTOLOGIN, LOCAL_AUTOLOGIN or PASSWORD. In a CDB, this applies to CDB$ROOT and all PDBs:

SELECT con_id, status, wallet_type FROM v$encryption_wallet;

If status is OPEN_NO_MASTER_KEY it means that no TDE master encryption key has been created and I will need to create one. Instructions are also in the documentation.
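For reference, a minimal sketch of creating a keystore and master key on 12.2 or higher; the keystore location and password are placeholders, and the location must match your keystore configuration (sqlnet.ora or WALLET_ROOT, depending on release):

administer key management create keystore '/u01/app/oracle/admin/SALES/tde_wallet' identified by "MyWalletPassword1";
administer key management set keystore open identified by "MyWalletPassword1" container=all;
administer key management set key identified by "MyWalletPassword1" with backup container=all;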

Ensure that the source host can resolve the network name of the target host. It is important to add two entries – one with the host name and one with the SCAN name (they should both point to the target host):

[root@srchost]$ echo -e "[ip address] tgthost" >> /etc/hosts
[root@srchost]$ echo -e "[ip address] tgthost-scan" >> /etc/hosts

Test connection over SQL*NET to the target:

[oracle@srchost]$ sqlplus system@tgthost-scan/[target-cdb-service-name]

Prepare Target

I need a target placeholder database. The placeholder target database is overwritten during migration, but it retains the overall configuration.

  • VM DB System: When I create a new system, a database is automatically created. This database is my target placeholder database.
  • Bare Metal and Exadata DB Systems: Create a new database on an existing system.

In any case:

  • DB_NAME must match that of the source database, and DB_UNIQUE_NAME must be different.
  • Use the same SYS password as the source database.
  • Do not enable automatic backup (until after the migration has completed).
  • The architecture (single instance or RAC) of the target placeholder database determines the architecture of the OCI database. If I want my OCI database to be a RAC database, simply create the target placeholder database as a RAC database, and the end result will be a RAC database.
  • COMPATIBLE must match that of the source database.
  • The patch level of the target database must be equal to or higher than the source database. If they differ, ZDM will automatically execute datapatch as part of the migration. Use $ORACLE_HOME/OPatch/opatch lsinventory.
  • The source and target database must use the same time zone file version. If they don’t match, I must upgrade the time zone file in the source database. If I have a lot of data of type TIMESTAMP WITH TIMEZONE and I can’t afford the downtime it takes to do the upgrade, then consider using the logical online method: select * from v$timezone_file;

There are a few other requirements listed in the document, but these are the most important ones.

Like the source host, the target host must be able to resolve the network name of its counterpart:

[root@tgthost]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@tgthost]$ echo -e "[ip address] srchost-scan" >> /etc/hosts

And I test the connection:

[oracle@tgthost]$ sqlplus system@srchost-scan/[source-cdb-service-name]

Prepare ZDM

The ZDM service host must be able to resolve the host names of the source and target database host:

[root@zdmhost]$ echo "<source IP address> srchost" >> /etc/hosts
[root@zdmhost]$ echo "<target IP address> tgthost" >> /etc/hosts

I put my private SSH keys for the source and target database hosts into the ~/.ssh directory. Ensure permissions are set properly:

[zdmuser@zdmhost]$ cp srchost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/srchost_key_file 
[zdmuser@zdmhost]$ cp tgthost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/tgthost_key_file 

Test the connection. I connect as opc, but you might have a different user. Read more about access to the database host in the documentation:

[zdmuser@zdmhost]$ ssh -i ~/.ssh/srchost_key_file opc@srchost date
[zdmuser@zdmhost]$ ssh -i ~/.ssh/tgthost_key_file opc@tgthost date

Prepare Response File

I will use a template response file as the basis for my response file:

[zdmuser@zdmhost]$ cp $ZDM_HOME/rhp/zdm/template/zdm_template.rsp ~/physical_online.rsp
[zdmuser@zdmhost]$ chmod 700 ~/physical_online.rsp

This is the response file that I end up with (see appendix B for an explanation):

TGT_DB_UNIQUE_NAME=SALES_fra2t4
MIGRATION_METHOD=ONLINE_PHYSICAL
DATA_TRANSFER_MEDIUM=OSS
PLATFORM_TYPE=VMDB
HOST=https://swiftobjectstorage.[region].oraclecloud.com/v1/[my-tenancy-object-storage-namespace]
OPC_CONTAINER=zdm-staging
SKIP_FALLBACK=TRUE
SHUTDOWN_SRC=TRUE

Perform Evaluation

I am now ready to perform a migration evaluation. It is a dry run of the migration and performs various sanity checks. Nothing is changed during the evaluation:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/physical_online.rsp \
   -sourcenode srchost \
   -sourcedb SALES_fra3fw \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -targethome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
   -backupuser "daniel.overby.hansen@oracle.com" \
   -eval

A few comments:

  • sourcenode and targetnode are the host names of the source and target database host. Those names must be resolvable, which is why I put them into /etc/hosts already.
  • sourcedb is the DB_UNIQUE_NAME of the source database. My database is on ASM. If I didn’t use ASM I should use sourcesid instead and specify the database SID.
  • srcarg1 is the name of the user that I connect as to the source database host. You might need to change that.
  • srcarg2 and tgtarg2 are the locations of the private key files that I use to connect via SSH.

I am prompted for the SYS password of the source database and also the password for the backupuser (which is my OCI user). For the latter, please note that this is not my user password; it is an auth token. I also get a job ID.

The migration evaluation is then started, and I can use the job ID to monitor it:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli query job -jobid <job ID>

Output from zdmcli query job command

My colleague Sinan Petrus Toma showed how to loop:

[zdmuser@zdmhost]$ while :; do $ZDM_HOME/bin/zdmcli query job -jobid <job ID>; sleep 10; done

However, I prefer to get more details, so I tail the ZDM log file instead. This little one-liner finds the newest file and tails it:

[zdmuser@zdmhost]$ tail -n 50 -f "`ls -td /u01/app/oracle/chkbase/scheduled/*log | head -1`"

And I get this level of detail:

zdmhost: 2021-06-03T07:19:50.317Z : Starting zero downtime migrate operation ...
zdmhost: 2021-06-03T07:19:53.445Z : Executing phase ZDM_GET_SRC_INFO
zdmhost: 2021-06-03T07:19:53.446Z : Retrieving information from source node "srchost" ...
zdmhost: 2021-06-03T07:19:53.447Z : retrieving information about database "SALES_fra3fw" ...
zdmhost: 2021-06-03T07:20:02.743Z : Execution of phase ZDM_GET_SRC_INFO completed
zdmhost: 2021-06-03T07:20:02.826Z : Executing phase ZDM_GET_TGT_INFO
zdmhost: 2021-06-03T07:20:02.827Z : Retrieving information from target node "tgthost" ...
zdmhost: 2021-06-03T07:20:12.197Z : Determined value for parameter TGT_DATADG is '+DATA'
zdmhost: 2021-06-03T07:20:12.198Z : Determined value for parameter TGT_REDODG is '+RECO'
zdmhost: 2021-06-03T07:20:12.198Z : Determined value for parameter TGT_RECODG is '+RECO'
zdmhost: 2021-06-03T07:20:12.203Z : Execution of phase ZDM_GET_TGT_INFO completed

Eventually, I end up with a successful evaluation – the status of the evaluation job is SUCCEEDED.

Start Migration

When the evaluation passes, I can start the real migration. I am re-using the same command line, but I have removed the -eval option. Instead – and this is important – I am using -pauseafter to tell ZDM to pause the migration, just before the switchover takes place. Downtime has not started yet. The database is still open for business, but ZDM will copy the data and configure Data Guard:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/physical_online.rsp \
   -sourcenode srchost \
   -sourcedb SALES_fra3fw \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -targethome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
   -backupuser "daniel.overby.hansen@oracle.com" \
   -pauseafter ZDM_CONFIGURE_DG_SRC

Again, I have to input the passwords as in eval mode. Use the job ID to monitor the progress or tail the log file. Note down the job ID. I need it later on to resume the migration.

When ZDM completes the phase ZDM_CONFIGURE_DG_SRC, it will pause and wait. I can verify it with zdmcli query job; the job is shown as paused.

Now the standby database has been built in OCI. Redo gets transferred from my source database to the target database in OCI and is applied (see appendix D for monitoring queries). Also, this is a good time to test your new database.

Complete Migration

Now it is time to finalize the migration and switch over to the OCI target database. All I need to do, is to resume the paused ZDM job. I use the job ID that was created when I started the migration:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli resume job -jobid <job ID>

ZDM will now ensure that all redo is sent and applied before switching over to the OCI target database. Again, I use the zdmcli query job command to monitor the progress, and I can tail the log file. After a short while, the migration completes.

That’s it. I have now migrated into OCI!

I have a few post-migration tasks to carry out:

  • Configure automatic backup
  • Protect my new OCI database with a Data Guard in OCI
  • Take a peek at the alert log (just to be sure)

Other Blog Posts in This Series

Appendix

A – RAC Databases

If your source or target database is a RAC database, then you only need access to one of the nodes. Select the node that you want to use and use the host name and SSH keys to that host. Also, ensure that the IP address you are using is not the floating one (VIP). In OCI that is referred to as the private IP.

B – Response File

Allow me to put some comments on the values:

  • DATA_TRANSFER_MEDIUM – When migrating to a DB System in OCI, OSS is the only option. It is also the default value, so I could omit the parameter. The other options are applicable for Exadata Cloud at Customer and Exadata on-prem.
  • HOST – Getting the right URL for the HOST parameter might be a little tricky, but check the documentation. In my demo I use the Frankfurt data center, and, thus, the region is set to eu-frankfurt-1. Visit the API documentation for a list of regions. Use OCI CLI and oci os ns get to find the tenancy object storage namespace. Alternatively, in the OCI Console (the web page) open the Profile menu and click Tenancy: <your tenancy name>. The namespace string is listed under Object Storage Settings. To read more about it, have a look at the OCI documentation. Thanks to Bartlomiej Sowa for putting in a comment with this information – much appreciated!
  • OPC_CONTAINER – The OCI Object Storage bucket that will be used as a staging area for the backup of the source database. I recommend using a separate bucket for each migration. It makes it a lot easier to clean up afterwards.
  • PLATFORM_TYPE – VMDB covers Virtual Machine and Bare Metal DB Services. It is also the default value, so I could omit the parameter.
  • SHUTDOWN_SRC – I choose to shut down the source database to ensure no one uses it unintentionally after the migration.
  • SKIP_FALLBACK – To make my demo simple I choose not to configure fallback. Remember falling back to the source database requires a license for Advanced Security Option.
  • TGT_DB_UNIQUE_NAME – The DB_UNIQUE_NAME of the target placeholder database. It must be different than that of the source database.

C – Backup Strategy

During a migration with ZDM, you should keep your regular backup strategy. Keep doing the same backups as you did before. But avoid having ZDM backups and regular backups run at the same time. Also, if you are dealing with a RAC database be sure to put the snapshot control file on shared storage. Otherwise, you might get ORA-00245 errors during backups.
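A minimal RMAN sketch of moving the snapshot control file to shared storage; the disk group and file name are placeholders:

RMAN> configure snapshot controlfile name to '+DATA/snapcf_SALES.f';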

D – Monitoring

Use these queries to monitor the redo apply. On source/primary database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log GROUP BY thread#;

Target/standby database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log WHERE applied='YES' GROUP BY thread#;
--MRP process should be 'APPLYING_LOG'
SELECT process, status, sequence# FROM v$managed_standby;
SELECT * FROM v$archive_gap;

E – Troubleshooting

PRCZ-4001

If you run into this error:

PRCZ-4001 : failed to execute command "/bin/uname" using the privileged execution plugin "zdmauth" on nodes "doverbyh-zdm-tgt" within 120 seconds
PRCZ-2006 : Unable to establish SSH connection to node "doverbyh-zdm-tgt" to execute command "/bin/uname"
No more authentication methods available

Check your key files. They must be in RSA/PEM format (the private key must start with -----BEGIN RSA PRIVATE KEY-----).

ZDM_OBC_INSTALL_CREDENTIALS_INVALID

If you run into this error:

srchost: 07:08:00.000: Validating object store credentials..
srchost: <ERR_FILE><Facility>PRGO</Facility><ID>ZDM_OBC_INSTALL_CREDENTIALS_INVALID</ID></ERR_FILE>

Check your credentials to OCI. Remember when prompted for the password of your OCI account it is an auth token, not your password (even though the prompt text is misleading).

ZDM_GET_SRC_INFO

If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the source database host is added to the known_hosts file on the ZDM service host. Also, you can verify connectivity by trying to log on via SSH:

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-source-host>

ZDM_GET_TGT_INFO

If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the target database host is added to the known_hosts file on the ZDM service host.

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-target-host>

Zero Downtime Migration – Physical Online Migration

You can migrate a database with Zero Downtime Migration (ZDM) using the Physical Online method. A standby database is built in OCI and kept in sync via redo apply. At your will, a switchover is all it takes to complete the migration. This way even very large databases can be migrated with no or very little downtime.

Concept of physical online migration

With the Physical Online method, you can target:

  • Virtual Machine DB System
  • Bare Metal DB System
  • Exadata DB System (ExaCS)
  • Exadata Cloud at Customer (ExaCC)
  • Exadata (on-prem)

Since the method uses Data Guard as the migration vehicle, this can only be used with Enterprise Edition databases.

Benefits

  • Personally, I like this approach because it builds on technology that most of us know already. Data Guard is heavily in use in most organizations. Although ZDM does all the heavy lifting, it is nice to know what happens underneath the hood.

  • To complete the migration all you need is a regular Data Guard switchover (which ZDM also takes care of). A switchover operation ensures that there will be no data loss at all. In addition, if you have a properly configured application, it won’t experience downtime. Just a brown-out while the switchover takes place.

  • This method has an excellent fallback possibility. When you switch over to the OCI database, the redo flow is reversed. Now, the on-prem database is a standby database, and it is kept in sync via redo from the OCI database. However, this option requires a license for Advanced Security Option on your on-prem system. Read more about it later on.

  • You migrate the entire database. All the internals are brought to the cloud as well. This includes:

    • AWR
    • SQL Plan Baseline
    • SQL Profiles
    • Public objects
    • Etc.
  • The initial backup of the source, on-prem database will use RMAN Compression. It will drastically reduce the size of the backup, and thus the amount of data that you must transport to the cloud. And the really good thing: ZDM can use RMAN Compression even if you don’t have a license for the Advanced Compression Option. You are allowed to use RMAN Compression with ZDM for migration purposes without paying license for Advanced Compression Option.

  • You can customize the RMAN backup. You know your system best, so it is possible to tweak the number of RMAN channels used and the compression algorithm applied. The defaults are 10 channels and medium compression.

  • When ZDM takes the full backup of your source database (Exadata on-prem and ExaCC excluded), it uses Oracle Database Cloud Backup Module for OCI to allocate a special sbt channel. The backup is sent directly to OCI Object Storage. Thus, you don’t need any additional disk space on your source system to hold the backup. The same applies on the target system.

  • When migrating to ExaCC or Exadata on-prem you can use an existing backup. Use the response file option DATA_TRANSFER_MEDIUM=EXTBACKUP. Also, if you have a Zero Data Loss Recovery Appliance (ZDLRA) you can restore directly from it, without taking a full backup first. If you are migrating to any other platform, ZDM will need to take a full backup as part of the workflow.

  • The standby database is kept in sync via redo apply. This means that there are no restrictions in supported data types. Further, DDL in any form is supported and even with a heavy workload on the primary database, your standby database should be able to keep up.

  • You can migrate to a higher patch level. ZDM will automatically invoke datapatch for you on the OCI database. But you must handle the on-prem database yourself. After switching over to the OCI database and datapatch has been executed, you should patch the on-prem Oracle Home to the same patch level.

Considerations

  • You can migrate to the same version only. This is a restriction of Data Guard. If you also need to upgrade the database, then you must do that after the migration. But it will incur additional downtime.
  • The same applies to PDB conversion. ZDM can run the noncdb_to_pdb.sql script for you. But that will incur downtime as well. If you decide to convert to PDB, you need to develop an alternate fallback plan (which is Data Pump or Transportable Tablespaces). The PDB conversion is irreversible, and your on-prem database will be useless as a fallback option.
  • It is not possible to migrate between editions.
  • The entire database is migrated. I listed this as a benefit, but it also has a flipside. It is not possible to perform any transformation during the migration, e.g. converting any old BasicFile LOBs to SecureFile LOBs. Also, you bring over any old baggage in your database. Sometimes it is nice to start from scratch because garbage tends to accumulate in a database over time.
  • If you have a Standard Edition database, you can’t use Physical Online method. Only the Physical Offline is supported.
  • You can’t configure Automatic backup on your target database until after the migration has been completed.
  • If you need to protect your OCI database with Data Guard, then you must build the standby database after the migration has been completed. Normally, you would use a cascading standby database to keep the target database protected by Data Guard even after the migration has been completed. But currently the OCI tooling does not support that option.

Fallback

Using this approach, you have a great fallback option. When you switch over to the OCI database, then the redo flow is reversed and the source, on-prem database is now the standby database. If you need to fallback, simply issue another switchover (you could call that a switchback) and start to use the on-prem database again – with no data loss. See step 5 in MAA Practices for Cloud Migration Using ZDM (Doc ID 2562063.1).

There is a catch, however. The OCI database is encrypted using TDE Tablespace Encryption. Any redo generated for an encrypted tablespace is also encrypted. This means that the on-prem database must be able to decrypt the redo before it can be applied. That requires a license for the Advanced Security Option. This means that the fallback is only usable, if your on-prem database has the proper license.

Conclusion

Using the Physical Online method in ZDM is a straight-forward way of migrating your database to OCI. It uses Data Guard which is very familiar to most of us. The method does, however, have some limitations, and you can’t target Autonomous Databases.

Want to Know More

If you want to know more about migrations in general, I suggest that you take a look at our webinar Migration Strategies – Insights, Tips and Secrets

In addition, these links contain additional useful information:

Other Blog Posts in This Series

Zero Downtime Migration – Logical Migration and the Final Touches

Logical migration has many benefits, but there is also a catch. It does not transfer some of the nice metadata that makes your life easier – such as:

  • AWR
  • SQL Plan Baselines
  • SQL Profiles

Further, we can make a few tweaks to avoid having performance problems after the migration. This post applies to migrations into any database – except Autonomous Databases. In an Autonomous Database you don’t need to worry about such things. The database will fix the issues for you.

AWR

If you are licensed to use AWR in your source database, you can transport the AWR data into your target database.

  1. In your source database, execute the script $ORACLE_HOME/rdbms/admin/awrextr.sql. It extracts the AWR data and loads it into a Data Pump dump file (see the sketch after this list).
  2. Transfer the dump file to your target database.
  3. In your target database, import the AWR data using the script $ORACLE_HOME/rdbms/admin/awrload.sql.
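A minimal sketch of steps 1 and 3; both scripts are interactive – awrextr.sql prompts for DBID, snapshot range, directory, and file name, while awrload.sql prompts for directory and file name:

-- On the source database
SQL> @?/rdbms/admin/awrextr.sql

-- On the target database, after the dump file has been transferred
SQL> @?/rdbms/admin/awrload.sql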

When you look at the old AWR data in your target database, use it with caution. The AWR data was collected on a completely different system (CPU, memory, settings, database release, initialization parameters etc). Your new system is very different. Keep that in mind when you compare data from before and after the migration.

SQL Plan Baselines

If you are using SQL Plan Management to ensure plan stability, you want to transport your baselines as well.

First, in my source database, I create a staging table. It is used to transport the baselines. Create the staging table in a schema that you will be migrating.

begin
   dbms_spm.create_stgtab_baseline (
      table_name   => 'MIGR_SPB_1',
      table_owner  => 'SH');
   end;
/

Next, I will extract all the baselines, including the fixed or accepted plans. They are now stored in the staging table and will be migrated together with my data:

declare
   l_count number;
begin
   l_count := dbms_spm.pack_stgtab_baseline (
      table_name   => 'MIGR_SPB_1',
      table_owner  => 'SH',
      enabled      => 'YES',
      fixed        => 'YES');
   end;
/

After the migration, in my target database, I can now extract the baselines into the SQL Management Base:

declare
   l_count number;
begin
   l_count := dbms_spm.unpack_stgtab_baseline (
      table_name   => 'MIGR_SPB_1',
      table_owner  => 'SH');
   end;
/

That’s it. Now SQL Plan Management will ensure that the same plans are used for the statements that are in the baselines.

If you are not using SQL Plan Management, you should look into it. In my opinion it is one of the most underrated features in the Oracle Database. Many issues are caused by plan changes which is exactly what you can avoid with SQL Plan Management.

SQL Profiles

If you have SQL Profiles you can transport them in a way very similar to transporting SQL Plan Baselines.

First, in the source database, create a staging table in a schema that you will be migrating:

begin
   dbms_sqltune.create_stgtab_sqlprof (
      table_name   => 'MIGR_SQLP_1',
      table_owner  => 'SH');
end;
/

Next, put the SQL Profiles into the staging table. This example will load all profiles from the DEFAULT category, but there are options to cherry-pick those of interest:

begin
   dbms_sqltune.pack_stgtab_sqlprof (
      staging_table_name   => 'MIGR_SQLP_1',
      staging_schema_owner => 'SH');
end;
/

Finally, in my target database, I can unpack the profiles from the staging table and into the data dictionary:

begin
   dbms_sqltune.unpack_stgtab_sqlprof (
      staging_table_name   => 'MIGR_SQLP_1',
      staging_schema_owner => 'SH',
      replace              => TRUE);
   end;
/

Statistics

Should you do anything about object statistics?

  • If you have migrated into the same release, the answer is no.
  • If you have migrated into a higher release, the answer is it depends. It is not required, but if you come from a very old release, you might consider it.

Imagine a migration from 11.2.0.4 to 19c. Significant changes have been made to histograms in that period. If you want to benefit from the new, improved histogram types, you should re-gather statistics. If you don’t do it, the histograms will gradually change over time when the statistics become stale. Some people prefer taking such a change immediately because they don’t like the idea of things happening over the course of time.

If you want to refresh statistics:

exec dbms_stats.delete_database_stats;
exec dbms_stats.gather_database_stats;

To make it a little faster:

exec dbms_stats.set_global_prefs('CONCURRENT','AUTOMATIC');

To make it even faster – requires CPU:

exec dbms_stats.set_global_prefs('DEGREE', DBMS_STATS.AUTO_DEGREE);

To make it as fast as possible – requires a lot of CPU:

exec dbms_stats.set_global_prefs('CONCURRENT','AUTOMATIC');
exec dbms_stats.set_global_prefs('DEGREE', DBMS_STATS.AUTO_DEGREE);

Be sure to set the preferences back to the original value when you are done gathering statistics. Nigel Bayliss has an interesting blog post on How to gather optimizer statistics fast.

Dictionary Statistics

After the migration, the target database has many more tables, indexes, views and so on. This is represented in the data dictionary as additional rows in e.g. OBJ$. All that extra data probably means that the dictionary statistics are stale. Stale dictionary statistics can lead to all sorts of troubles.

Immediately after the migration is completed you should re-gather dictionary statistics to avoid problems caused by stale statistics. Do this right after ZDM has completed the migration:

exec dbms_stats.gather_dictionary_stats;

Or even better:

begin
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

I prefer gathering statistics on SYS and SYSTEM independently. My colleagues and I have seen issues from time to time with dbms_stats.gather_dictionary_stats.

Fixed Object Statistics

After migration and once the system has been properly warmed up, you should re-gather fixed objects statistics. It is important that you don't gather fixed objects statistics right after migration. You must wait until you have run a representative workload. If you wait a few days or until the next weekend, then you are most likely on the safe side:

exec dbms_stats.gather_fixed_objects_stats;

This recommendation is similar to the one we give for upgrades. Read the blog post to learn more about fixed objects statistics in general. There, you can also find a very useful piece of code that gathers fixed objects statistics after seven days via a scheduler job.
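
If you want to automate the delay, a one-off scheduler job is an easy way to do it. A minimal sketch (the job name is just an example, and this is not the exact code referenced above):

begin
   dbms_scheduler.create_job(
      job_name   => 'GATHER_FIXED_STATS_ONCE',
      job_type   => 'PLSQL_BLOCK',
      job_action => 'begin dbms_stats.gather_fixed_objects_stats; end;',
      start_date => systimestamp + interval '7' day,
      enabled    => true,
      auto_drop  => true,
      comments   => 'Gather fixed objects stats after a week of representative workload');
end;
/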

SQL Tuning Sets

This is something that you should do before you start the migration. Capture your workload from the source database into SQL Tuning Sets.

What is a SQL Tuning Set? It is an object that contains information about a number of SQL statements. For each statement, the following is collected:

  • SQL Statement – the SQL text itself
  • Context – executing schema, binds, and environment (like NLS settings)
  • Statistics – execution statistics like elapsed time, CPU time, buffer gets, rows processed
  • Plan – last, but not least, the actual execution plan

You can then transport the SQL Tuning Set to the target database. If a SQL statement is now running slowly, you can use the information from the SQL Tuning Set to identify why. For each statement you can see the old and new plan, compare the amount of buffer gets and CPU time, and so forth. You can also bring back the old plan if the new plan is less efficient.
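
Bringing back the old plan is done by loading it from the SQL Tuning Set into a SQL Plan Baseline. A minimal sketch, assuming the SQL Tuning Set is called MIGR_STS_1 (as created below) and the misbehaving statement has SQL ID 7h35uxf5uhmm1 (an example value):

declare
   l_plans pls_integer;
begin
   -- Load the plan captured in the source database as a baseline
   l_plans := dbms_spm.load_plans_from_sqlset(
      sqlset_name  => 'MIGR_STS_1',
      basic_filter => 'sql_id = ''7h35uxf5uhmm1''');
   dbms_output.put_line('Plans loaded: ' || l_plans);
end;
/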

First, in the source you create a SQL Tuning Set:

exec dbms_sqlset.create_sqlset (sqlset_name => 'MIGR_STS_1', description => 'For migration - from source');

Next, capture statements from AWR. This will take the top 5000 statements from your entire AWR ordered by elapsed time:

declare
   begin_id number;
   end_id   number;
   cur      sys_refcursor;
begin
   select min(snap_id), max(snap_id)
   into   begin_id, end_id
   from   dba_hist_snapshot;

   open cur for
      select value(p) from table(dbms_sqltune.select_workload_repository(
         begin_snap       => begin_id,
         end_snap         => end_id,
         basic_filter     => 'parsing_schema_name not in (''SYS'')',
         ranking_measure1 => 'elapsed_time',
         result_limit     => 5000,
         attribute_list   => 'ALL')) p;

   dbms_sqltune.load_sqlset('MIGR_STS_1', cur);

   close cur;
end;
/

You can also sample directly from the cursor cache. Start the sampling and then run your workload. This example samples every minute for 15 minutes:

begin
   dbms_sqlset.capture_cursor_cache_sqlset(
      sqlset_name     => 'MIGR_STS_1',
      time_limit      => 900,
      repeat_interval => 60,
      capture_option  => 'MERGE',
      capture_mode    => DBMS_SQLTUNE.MODE_ACCUMULATE_STATS,
      basic_filter    => 'parsing_schema_name not in (''SYS'')',
      sqlset_owner    => NULL,
      recursive_sql   => 'HAS_RECURSIVE_SQL');
end;
/

Finally, put all that information into a staging table, so it can be transported to the target database. Put the staging table into a schema that you plan to migrate. This way, you ensure that the SQL Tuning Set is migrated together with your data. In my example, it is the schema SH:

begin
   dbms_sqltune.create_stgtab_sqlset (
      schema_name          => 'SH',
      table_name           => 'MIGR_STGTAB_1');
   dbms_sqltune.pack_stgtab_sqlset (
      sqlset_name          => 'MIGR_STS_1',
      staging_schema_owner => 'SH',
      staging_table_name   => 'MIGR_STGTAB_1');
end;
/

After the migration, in the target database, I can unpack the SQL Tuning Set from the staging table into the data dictionary:

begin
   dbms_sqltune.unpack_stgtab_sqlset (
      sqlset_name          => '%',
      replace              => true,
      staging_schema_owner => 'SH',
      staging_table_name   => 'MIGR_STGTAB_1'
   );
end;
/

Now – if you get into trouble with a misbehaving SQL – you can use the information in the SQL Tuning Set to investigate the problem.
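
A minimal sketch of such an investigation, comparing what was captured in the source database with what you see now (the SQL ID is an example value):

-- Execution statistics captured in the source database
select sql_id, plan_hash_value, executions, elapsed_time, cpu_time, buffer_gets
from   dba_sqlset_statements
where  sqlset_name = 'MIGR_STS_1'
and    sql_id = '7h35uxf5uhmm1';

-- The execution plan as it looked in the source database
select * from table(dbms_xplan.display_sqlset('MIGR_STS_1', '7h35uxf5uhmm1'));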

Conclusion

When you perform a logical migration, there are some additional tasks that you should consider, at least for your mission-critical databases. These steps will help you avoid performance issues after the migration.

In addition to the topics discussed here, I recommend that you also watch our webinar Performance Stability, Tips and Tricks and Underscores (slides). It has the complete prescription to avoid performance problems after upgrade and migration.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Online Migration to DBCS

Let me show you how you can migrate an Oracle Database into OCI. My source database is a PDB running on 12.1.0.2. I want to migrate it directly into a 19c CDB. I will show you:

  • import via dump file and object storage
  • import via database link

This procedure can be used to migrate to:

  • VM DB Systems
  • Bare Metal DB Systems
  • Exadata DB System (ExaCS)

Prerequisites

  1. I need a ZDM service host. I already blogged about how to create one. The computer is called zdmhost and the user is called zdmuser.
  2. My source database is a PDB called srcpdb, the CDB is called srccdb, and the host is named srchost. All my data is in the schema SH.
  3. I have already created a GoldenGate hub as described in this blog post. It is called ogg19cora.
  4. I want to migrate to a VM DB System, and it is already created. I name it tgthost, the CDB is named tgtcdb and the PDB is named tgtpdb. You can name it whatever you want. Be sure to use ASM as your storage management software.

Overview of the components in this demo

Prepare Source

My source database is a PDB, so I need to prepare both the PDB and CDB$ROOT. If you have a non-CDB, perform all the steps in your non-CDB unless otherwise stated.

First, I prepare the database for GoldenGate replication:

alter session set container=CDB$ROOT;
alter database force logging;
alter database add supplemental log data;
alter system set enable_goldengate_replication=true scope=both;
--At least 2GB
alter system set streams_pool_size=2g scope=both;
alter system set global_names=false;

Create a common user for GoldenGate in CDB$ROOT. Skip this step for non-CDBs:

alter session set container=CDB$ROOT;
create user c##ggadmin identified by <my_secret_pwd> default tablespace users temporary tablespace temp;
grant connect, resource to c##ggadmin;
grant unlimited tablespace to c##ggadmin;
grant select any dictionary to c##ggadmin;
grant create view to c##ggadmin;
grant execute on dbms_lock to c##ggadmin;
exec dbms_goldengate_auth.grant_admin_privilege('c##ggadmin',container=>'all');

Create a user for GoldenGate:

alter session set container=SRCPDB;
create user ggadmin identified by <my_secret_pwd> default tablespace users temporary tablespace temp;
grant connect, resource to ggadmin;
grant unlimited tablespace to ggadmin;
grant select any dictionary to ggadmin;
grant create view to ggadmin;
grant execute on dbms_lock to ggadmin;
exec dbms_goldengate_auth.grant_admin_privilege('ggadmin');

Generate a list of tablespaces that must exist in the target database. Add all schemas to the in-list:

alter session set container=SRCPDB;
select distinct tablespace_name
from (
      select distinct tablespace_name from dba_segments where owner in ('SH')
      union
      select distinct default_tablespace from dba_users where username in ('SH')
      union
      select distinct tablespace_name from dba_ts_quotas where dropped = 'NO' and username in ('SH')
      );

Optional. The best starting point for Data Pump is good and accurate dictionary statistics. This helps Data Pump extract the information as fast as possible:

exec dbms_stats.gather_dictionary_stats;

Via Dump File

I need to create a directory that can be used by Data Pump:

[oracle@srchost]$ mkdir -p /u01/app/oracle/datapump/mydirsrc

Prepare Target

The target database must use a timezone file version that is equal to or higher than the source:

alter session set container=tgtpdb;
select * from v$timezone_file;
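
Run the same query in the source PDB and compare the two version numbers; the target must be equal to or higher. A minimal sketch of the check in the source:

alter session set container=SRCPDB;
select version from v$timezone_file;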

Prepare the database for GoldenGate replication:

alter session set container=CDB$ROOT;
alter system set enable_goldengate_replication=true scope=both;

Create a user for GoldenGate:

alter session set container=tgtpdb;
create user ggadmin identified by <my_secret_pwd> default tablespace users temporary tablespace temp;
grant connect, resource to ggadmin;
grant unlimited tablespace to ggadmin;
grant select any dictionary to ggadmin;
grant create view to ggadmin;
grant execute on dbms_lock to ggadmin;
exec dbms_goldengate_auth.grant_admin_privilege('ggadmin');

Allow the GoldenGate user to perform DML (see appendix A):

alter session set container=tgtpdb;
grant insert any table to ggadmin;
grant update any table to ggadmin;
grant delete any table to ggadmin;

I ensure that the tablespaces I found in the source database exist in the target database. If some are missing, I create them. Also, I extend the data files in advance, so their size is close to the expected end size. For new tablespaces, I set the initial size of the data files. If the data files are too small, I waste a lot of time during import on auto-extension. If I make them too big, I potentially waste space:

alter database datafile '...' resize 100G;
create tablespace ... size 100G ...;

Via Dump File

I need to create a directory that can be used by Data Pump. It will hold the Data Pump dump files and logs:

[oracle@tgthost]$ mkdir -p /u01/app/oracle/datapump/mydirtgt

Via DB Link

When I use a DB link, ZDM insists on using the built-in directory called DATA_PUMP_DIR. I ensure that the database directory exists. Also, I verify that the directory exists in the file system:

select directory_path from dba_directories where directory_name='DATA_PUMP_DIR';

I will do the import directly over a database link. In that case, the target host must be able to resolve the host name of my source database host:

[root@tgthost]$ echo "<source IP address> srchost" >> /etc/hosts

Prepare ZDM

The ZDM service host must be able to resolve the host names of the source and target database host and the GoldenGate hub:

[root@zdmhost]$ echo "<source IP address> srchost" >> /etc/hosts
[root@zdmhost]$ echo "<target IP address> tgthost" >> /etc/hosts
[root@zdmhost]$ echo "<GoldenGate IP address> ogg19cora" >> /etc/hosts

I put my private SSH keys to the source and target database hosts into the ~/.ssh directory. Ensure permissions are set properly:

[zdmuser@zdmhost]$ cp srchost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/srchost_key_file 
[zdmuser@zdmhost]$ cp tgthost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/tgthost_key_file 

Test the connection. I connect as opc, but you might have a different user. Read more about access to the database host in the documentation:

[zdmuser@zdmhost]$ ssh -i ~/.ssh/srchost_key_file opc@srchost date
[zdmuser@zdmhost]$ ssh -i ~/.ssh/tgthost_key_file opc@tgthost date

If you have configured a proper certificate on your GoldenGate hub, you can jump to the next chapter. If not, read on.

When deployed, the GoldenGate hub comes with a self-signed certificate. ZDM will complain about it because it is considered insecure. For migrations that involve real data, you should use a proper certificate on your GoldenGate hub to ensure your migration is secure. But for tests and demos, you might want to disregard the security warning. If so, you can add the self-signed certificate as a trusted one to the Java JDK's certificate store ($ZDM_HOME/jdk/jre/lib/security/cacerts). You find instructions on how to do so in the MOS support note Zero Downtime Migration – GoldenGate Hub Certificate Known Issues (Doc ID 2768483.1). Use the same host name that you added to /etc/hosts.

For your reference, this is the error that ZDM will throw if the certificate is not trusted:

Verifying status of Oracle GoldenGate Microservices at URL "https://..."
PRGZ-1136 : failed to verify configuration and status of Oracle GoldenGate Microservices at URL "https://..."
PRGG-1008 : failed to retrieve detailed information for the health of the Service Manager on Oracle GoldenGate hub "https://..."
PRGG-1001 : HTTP GET request "https://.../services/v2/config/health" failed.
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
unable to find valid certification path to requested target

Prepare Response File

I will use a template response file as the basis for my response file:

[zdmuser@zdmhost]$ cp $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp ~/logical_online.rsp
[zdmuser@zdmhost]$ chmod 700 ~/logical_online.rsp

Via Dump Files

This is the response file that I end up with (see appendix E for an explanation):

MIGRATION_METHOD=ONLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS

SOURCEDATABASE_ADMINUSERNAME=SYSTEM
SOURCEDATABASE_GGADMINUSERNAME=GGADMIN
SOURCEDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb....oraclevcn.com
SOURCECONTAINERDATABASE_ADMINUSERNAME=SYSTEM
SOURCECONTAINERDATABASE_GGADMINUSERNAME=C##GGADMIN
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_SERVICENAME=SRCCDB_fra3dd....oraclevcn.com

TARGETDATABASE_OCID=ocid1.database.oc1.eu-frankfurt-1....
TARGETDATABASE_ADMINUSERNAME=SYSTEM
TARGETDATABASE_GGADMINUSERNAME=GGADMIN
TARGETDATABASE_CONNECTIONDETAILS_HOST=tgthost
TARGETDATABASE_CONNECTIONDETAILS_PORT=1521
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=tgtpdb....oraclevcn.com

OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=58:b9:...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1

GOLDENGATEHUB_ADMINUSERNAME=oggadmin
GOLDENGATEHUB_URL=https://ogg19cora....oraclevcn.com
GOLDENGATEHUB_SOURCEDEPLOYMENTNAME=Source
GOLDENGATEHUB_TARGETDEPLOYMENTNAME=Target
GOLDENGATEHUB_COMPUTEID=ocid1.instance.oc1.eu-frankfurt-1....

DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=MYDIRTGT
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/datapump/mydirtgt
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=MYDIRSRC
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/datapump/mydirsrc
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME=zdm-staging
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME=oradbclouducm
INCLUDEOBJECTS-1=owner:SH

Via DB Link

This is the response file that I end up with (see appendix E for an explanation):

MIGRATION_METHOD=ONLINE_LOGICAL
DATA_TRANSFER_MEDIUM=DBLINK

SOURCEDATABASE_ADMINUSERNAME=SYSTEM
SOURCEDATABASE_GGADMINUSERNAME=GGADMIN
SOURCEDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb....oraclevcn.com
SOURCECONTAINERDATABASE_ADMINUSERNAME=SYSTEM
SOURCECONTAINERDATABASE_GGADMINUSERNAME=C##GGADMIN
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_SERVICENAME=SRCCDB_fra3dd....oraclevcn.com

TARGETDATABASE_OCID=ocid1.database.oc1.eu-frankfurt-1....
TARGETDATABASE_ADMINUSERNAME=SYSTEM
TARGETDATABASE_GGADMINUSERNAME=GGADMIN
TARGETDATABASE_CONNECTIONDETAILS_HOST=tgthost
TARGETDATABASE_CONNECTIONDETAILS_PORT=1521
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=tgtpdb....oraclevcn.com

OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=58:b9:...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1

GOLDENGATEHUB_ADMINUSERNAME=oggadmin
GOLDENGATEHUB_URL=https://ogg19cora....oraclevcn.com
GOLDENGATEHUB_SOURCEDEPLOYMENTNAME=Source
GOLDENGATEHUB_TARGETDEPLOYMENTNAME=Target
GOLDENGATEHUB_COMPUTEID=ocid1.instance.oc1.eu-frankfurt-1....

DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
INCLUDEOBJECTS-1=owner:SH

Perform Evaluation

I am now ready to perform a migration evaluation. It is a dry run of the migration and performs various sanity checks. Nothing is changed during the evaluation:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_online.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -eval

A few comments:

  • sourcenode and targetnode are the host names of the source and target database hosts. Those names must be resolvable, which is why I put them into /etc/hosts already.
  • sourcedb is the DB_UNIQUE_NAME of the source database.
  • srcarg1 is the name of the user that I connect as to the source database host. You might need to change that.
  • srcarg2 and tgtarg2 are the locations of the private key files that I use to connect via SSH.

Next, I am prompted for various passwords, and I also get a job ID.

The migration evaluation is then started, and I can use the job ID to monitor it:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli query job -jobid <job ID>

Output from zdmcli query job command

My colleague Sinan Petrus Toma showed how to loop:

[zdmuser@zdmhost]$ while :; do $ZDM_HOME/bin/zdmcli query job -jobid <job ID>; sleep 10; done

However, I prefer to get more details, so I tail the ZDM log file instead. This little one-liner finds the newest file and tails it:

[zdmuser@zdmhost]$ tail -n 50 -f "`ls -td /u01/app/oracle/chkbase/scheduled/*log | head -1`"

And I get this level of detail:

zdmhost: 2021-05-28T06:25:25.518Z : Executing phase ZDM_PRE_MIGRATION_ADVISOR
zdmhost: 2021-05-28T06:25:31.883Z : Source PDB name : SRCPDB
zdmhost: 2021-05-28T06:25:32.410Z : Running CPAT (Cloud Premigration Advisor Tool) on the source node srchost ...
zdmhost: 2021-05-28T06:25:38.533Z : Premigration advisor output:
Cloud Premigration Advisor Tool Version 21.0.0
Cloud Premigration Advisor Tool completed with overall result: WARNING
Cloud Premigration Advisor Tool generated report location: /u01/app/oracle/zdm/zdm_SRCCDB_fra3dd_6/out/premigration_advisor_report.json

Start Migration

When the evaluation passes, I can start the real migration. I am re-using the same command line, but I have removed the -eval option. Instead – and this is important – I am using -pauseafter to tell ZDM to pause the migration, just before the switchover takes place. Downtime has not started yet. The database is still open for business, but ZDM will copy the data and start the GoldenGate replication:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_online.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -pauseafter ZDM_MONITOR_GG_LAG

Again, I have to input the passwords as in eval mode. Use the job ID to monitor the progress or tail the log file. Note down the job ID. I need it later on to resume the migration.

When ZDM completes the phase ZDM_MONITOR_GG_LAG, it will pause and wait. I can verify with zdmcli query job that ZDM is currently paused.

My data has been imported to the target database in OCI, GoldenGate has been configured, and my changes are being replicated. I can monitor the replication by logging on to the GoldenGate hub.

Important: Now, I strongly recommend that you take care of the statistics in your target database:

  1. Gather dictionary statistics (DBMS_STATS.GATHER_DICTIONARY_STATS). After the import the data dictionary is now full of a lot of new data – the metadata about your tables and other objects. To ensure a smooth running database, gather dictionary statistics.
  2. Decide on what to do about statistics on user objects as described in this blog post.

Complete Migration

Now it is time to finalize the migration and switch over to the OCI target database. All I need to do is resume the paused ZDM job. I use the job ID that was created when I started the migration:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli resume job -jobid <job ID>

ZDM will now ensure that all changes are replicated before switching over to the OCI target database. Again, I use the zdmcli query job command to monitor the progress, or I can tail the log file. After a short while, the migration completes.

That’s it. I have now migrated into OCI! And on top of that, my new database is running Oracle Database 19c.

I have a few post-migration tasks to carry out:

  • I prefer to shut down the source database to ensure that no one uses it anymore.
  • ZDM will remove the GoldenGate configuration, but I need to disable the replication in the database: alter system set enable_goldengate_replication=false scope=both;.

Finally, I recommend that you take a look at Logical Migration and the Final Touches. It has a few pointers that can make your migration even more successful.

Other Blog Posts in This Series

Appendix

A – GoldenGate user privileges

When GoldenGate is replicating changes into the target database, it does so as a regular database user. Thus, it must have privileges to perform DML and DDL on the appropriate schemas. There are a number of ways to do that.

I granted INSERT ANY, UPDATE ANY, and DELETE ANY, which allows GoldenGate to perform DML in any schema. But that does not cover DDL. If I wanted to cover that as well, there are a number of similar ANY privileges that I would need to grant, like CREATE ANY TABLE, CREATE ANY INDEX, and so forth.

However, performing DDL statements during the replication is discouraged. Plus, the way ZDM configures the extract and replicat processes does not allow DDL to be replicated. If you want to replicate DDL, you need to tweak the GoldenGate settings:

During the migration period, to provide the most optimal environment for fast database replication, avoid the following database operations:

  • Data Definition Language (DDL) – While DDL is replicated, the Replicat serializes data to ensure that there are no locking issues between DML and DDL on the same objects.
  • Large batch DML – Running large batch operations, like a single transaction that affects multi-millions of rows, can slow down replication rates.

Other customers prefer an easier solution and grant the PDB_DBA role (or DBA in a non-CDB). Connor McDonald also blogged about a procedure to perform schema grants. If you need inspiration, you can have a look at the GGADMIN user in an Autonomous Database. It has the PDB_DBA role, amongst others.
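
If you prefer that approach, the grant itself is simple. A minimal sketch for the target PDB in this demo:

alter session set container=tgtpdb;
grant pdb_dba to ggadmin;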

In the end it comes down to preferences and security regulations in your organization.

B – Update Cloud Premigration Advisor Tool

As part of the migration ZDM will use the Cloud Premigration Advisor Tool (CPAT) to check your source database. The version of CPAT that ships with ZDM is not the latest one. So, you might be missing out on bug fixes and new recommendations. If you want to get the latest and greatest recommendations, I suggest that you manually update the CPAT tool in your ZDM home. You can find instructions in the MOS note Cloud Premigration Advisor Tool (CPAT) Analyzes Databases for Suitability of Cloud Migration (Doc ID 2758371.1)

C – Ignore Certain Data Pump Errors

If there is an error in either your Data Pump export or import, then ZDM will error out. But by default, ZDM will ignore the following errors:

  • ORA-31684: Object type string already exists
  • ORA-39111: Dependent object type string skipped, base object type string already exists
  • ORA-39082: Object type string created with compilation warnings

If you know that your data exports or imports with a specific error, and you want to ignore it, you can add your own errors to the list. The list of ignorable errors is maintained by the ZDM service host, and it applies to all migrations. So, if you add another error for one migration, that error is also ignored in a second migration. Read more about it in the documentation.

D – RAC Databases

If your source or target database is a RAC database, you only need access to one of the nodes. Select the node that you want to use, and use that node's host name and SSH keys.

E – Response File

Allow me to put some comments on the values:

Setting Comment
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME Name of the bucket in OCI Object Storage that will be used as a staging area. I recommend using a separate and dedicated bucket for each migration
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME This is your Object Storage Namespace. You find it in the OCI Console, Administration > Tenancy Details
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE For on-prem databases set to number of physical cores x 2. For OCI databases set to number of OCPUs. Standard Edition does not allow parallel, so set to 1
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE Set to the number of OCPUs. Standard Edition does not allow parallel, so set to 1
DATAPUMPSETTINGS_JOBMODE I recommend using SCHEMA. Each schema to import is specified in a separate parameter, INCLUDEOBJECTS-n
GOLDENGATEHUB_ADMINUSERNAME Remember the usernames in GoldenGate are case sensitive
GOLDENGATEHUB_SOURCEDEPLOYMENTNAME The name of the source deployment in GoldenGate hub. If you went with the default value, it should be Source
GOLDENGATEHUB_TARGETDEPLOYMENTNAME The name of the target deployment in GoldenGate hub. If you went with the default value, it should be Target
GOLDENGATEHUB_URL Construct the URL by using the Fully Qualified Domain Name (FQDN), e.g. https://ogg19cora.subx.oraclevcn.com
INCLUDEOBJECTS-n Specify each schema in a separate parameter, example: INCLUDEOBJECTS-1=owner:SH and INCLUDEOBJECTS-2=owner:OE
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT This is the fingerprint of the API keys that you used to configure OCI CLI on the ZDM service host. You can find the fingerprint in the OCI Console. Go to Identity > Users > User Details > API Keys
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE This is the private key file that you used when you configured OCI CLI on the ZDM service host
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_SERVICENAME This is a service that connects directly into CDB$ROOT. I normally use lsnrctl status to find the name of the service
SOURCECONTAINERDATABASE_ The settings apply to my source CDB. If you have a non-CDB simply remove these settings
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME This is a service that connects directly into the source PDB (or non-CDB). I normally use lsnrctl status to find the name of the service
SOURCEDATABASE_ These settings apply to my source PDB. Or, to your non-CDB
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME This is a service that connects directly into the target PDB. I normally use lsnrctl status to find the name of the service

Zero Downtime Migration – Logical Offline Migration to Autonomous Database

Let me show you how you can migrate an Oracle Database into an Autonomous Database. My source database is a PDB running on 12.1.0.2. I want to migrate it directly into an Oracle Autonomous Transaction Processing (ATP) database. I will do an offline migration using just Oracle Data Pump.

Prerequisites

  1. I need a Zero Downtime Migration (ZDM) service host. I already blogged about how to create one. The computer is called zdmhost and the user is called zdmuser.
  2. My source database is a PDB called srcpdb, the CDB is called srccdb, and the host is named srchost. All my data is in the schema SH.
  3. I have created an ATP database. I name it tgtadb.

Overview of the components in this demo

Prepare Source

My source database is a PDB. If you have a non-CDB, perform all the steps in your non-CDB.

First, set the streams pool to a reasonable size (needed by Data Pump):

alter session set container=CDB$ROOT;
--At least 256M
alter system set streams_pool_size=256M scope=both;

Generate a list of tablespaces. In ADB, there is only one tablespace, named DATA (unless you have more than 32 TB of data, in which case you get DATA_2, DATA_3, and so forth). All tablespaces must be remapped to it. Add all schemas to the in-list:

alter session set container=SRCPDB;
select distinct tablespace_name
from (
      select distinct tablespace_name from dba_segments where owner in ('SH')
      union
      select distinct default_tablespace from dba_users where username in ('SH')
      union
      select distinct tablespace_name from dba_ts_quotas where dropped = 'NO' and username in ('SH')
      );

I need to create a directory that can be used by Data Pump:

[oracle@srchost]$ mkdir -p /u01/app/oracle/datapump/mydirsrc

Optional. The best starting point for Data Pump is good and accurate dictionary statistics. This helps Data Pump extract the information as fast as possible:

exec dbms_stats.gather_dictionary_stats;

Prepare ZDM

The ZDM service host must be able to resolve the host name of the source database host:

[root@zdmhost]$ echo "<source IP address> srchost" >> /etc/hosts

I put my private SSH key to the source database host into the ~/.ssh directory. Ensure permissions are set properly:

[zdmuser@zdmhost]$ cp srchost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/srchost_key_file 

Test the connection. I connect as opc, but you might have a different user. Read more about access to the database host in the documentation:

[zdmuser@zdmhost]$ ssh -i ~/.ssh/srchost_key_file opc@srchost date

Prepare Response File

I will use a template response file as the basis for my response file:

[zdmuser@zdmhost]$ cp $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp ~/logical_offline.rsp
[zdmuser@zdmhost]$ chmod 700 ~/logical_offline.rsp

This is the response file that I end up with (see appendix D for an explanation):

MIGRATION_METHOD=OFFLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS

SOURCEDATABASE_ADMINUSERNAME=SYSTEM
SOURCEDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb....com

TARGETDATABASE_OCID=ocid1.autonomousdatabase.oc1.eu-frankfurt-1....
TARGETDATABASE_ADMINUSERNAME=ADMIN

OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=58:b9:...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1

DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME=oradbclouducm
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME=zdm-staging
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=MYDIRSRC
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/datapump/mydirsrc
DATAPUMPSETTINGS_CREATEAUTHTOKEN=FALSE
DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_SCHEMA,oldValue:SH,newValue:SH2
DATAPUMPSETTINGS_METADATAREMAPS-2=type:REMAP_TABLESPACE,oldValue:USERS,newValue:DATA
DATAPUMPSETTINGS_METADATAREMAPS-3=type:REMAP_TABLESPACE,oldValue:TEST,newValue:DATA
INCLUDEOBJECTS-1=owner:SH

A few comments:

  • I am remapping SH to SH2 because SH already exists in ADB as a common user. I want to import into a new user.
  • Check the appendix for recommendations on setting parallel degree. Remember scaling up on CPUs in ADB happens fast and online. If you have a lot of data, you can really benefit from scaling up during import.
  • The list of tablespaces that I created earlier is used to generate the list of tablespace remaps.
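
If you have many tablespaces, you can let the database generate the remap parameters for you. A minimal sketch that wraps the tablespace query from earlier; the numbering starts at 2 here because remap 1 is already used for the schema remap, so adjust it to fit your own response file:

select 'DATAPUMPSETTINGS_METADATAREMAPS-' || (rownum + 1) ||
       '=type:REMAP_TABLESPACE,oldValue:' || tablespace_name ||
       ',newValue:DATA' as remap_parameter
from (
      select distinct tablespace_name from dba_segments where owner in ('SH')
      union
      select distinct default_tablespace from dba_users where username in ('SH')
      union
      select distinct tablespace_name from dba_ts_quotas where dropped = 'NO' and username in ('SH')
     );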

Perform Evaluation

I am now ready to perform a migration evaluation. It is a dry run of the migration and performs various sanity checks. Nothing is changed during the evaluation:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_offline.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -eval

A few comments:

  • sourcenode is the host name of the source database host. The name must be resolvable, which is why I put it into /etc/hosts already.
  • sourcedb is the DB_UNIQUE_NAME of the source database.
  • srcarg1 is the name of the user that I connect as to the source database host. You might need to change that.
  • srcarg2 is the location of the private key file that I use to connect via SSH.

Next, I am prompted for passwords and my auth token. It also prompts for an encryption password, which is used to encrypt and decrypt the Data Pump dump file. Generate a secure password. I also get a job ID.

The migration evaluation is then started, and I can use the job ID to monitor it:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli query job -jobid <job ID>

Output from zdmcli query job command

However, I prefer to get more details, so I tail the ZDM log file instead. This little one-liner finds the newest file and tails it:

[zdmuser@zdmhost]$ tail -n 50 -f "`ls -td /u01/app/oracle/chkbase/scheduled/*log | head -1`"

And I get this level of detail:

zdmhost: 2021-05-28T06:25:25.518Z : Executing phase ZDM_PRE_MIGRATION_ADVISOR
zdmhost: 2021-05-28T06:25:31.883Z : Source PDB name : SRCPDB
zdmhost: 2021-05-28T06:25:32.410Z : Running CPAT (Cloud Premigration Advisor Tool) on the source node srchost ...
zdmhost: 2021-05-28T06:25:38.533Z : Premigration advisor output:
Cloud Premigration Advisor Tool Version 21.0.0
Cloud Premigration Advisor Tool completed with overall result: WARNING
Cloud Premigration Advisor Tool generated report location: /u01/app/oracle/zdm/zdm_SRCCDB_fra3dd_6/out/premigration_advisor_report.json

Start Migration

When the evaluation passes, I can start the real migration. This is an offline migration, so I need to drain the database before I start: kick the users off and ensure that they don't reconnect.
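
One way to do that is to stop the application services and kill the remaining sessions. A minimal sketch, where the service name SRCPDB_APP is hypothetical; adapt it to your environment and your organization's procedures:

alter session set container=SRCPDB;
-- Stop the service the application connects through (hypothetical name)
exec dbms_service.stop_service('SRCPDB_APP');
-- Kill the sessions that are still connected through that service
begin
   for s in (select sid, serial# from v$session where service_name = 'SRCPDB_APP') loop
      execute immediate 'alter system kill session ''' || s.sid || ',' || s.serial# || ''' immediate';
   end loop;
end;
/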

I am re-using the same command line, but I have removed the -eval option:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_offline.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo

Again, I have to input the passwords as in eval mode. Use the job ID to monitor the progress or tail the log file.

When the migration completes, the job status changes to succeeded.

That’s it. I have now migrated into an Autonomous Database in OCI!

I have a few post-migration tasks to carry out:

  • I prefer to shut down the source database to ensure that no one uses it anymore.
  • Take care of the optimizer statistics as described in this blog post. Since your target is an Autonomous Database, I recommend not moving your statistics preferences and, in addition, simply regathering statistics using DBMS_STATS.GATHER_DATABASE_STATS.
  • Gather dictionary statistics (DBMS_STATS.GATHER_DICTIONARY_STATS). After the import the data dictionary is now full of a lot of new data – the metadata about your tables and other objects. To ensure a smooth running database, gather dictionary statistics.
  • Figure out what to do with the Data Pump dump and log files that are stored in my object storage bucket.

I showed the simple, offline method. However, if you want to use the online method without downtime, have a look at my colleague Sinan’s blog post.

Other Blog Posts in This Series

Appendix

A – Update Cloud Premigration Advisor Tool

As part of the migration ZDM will use the Cloud Premigration Advisor Tool (CPAT) to check your source database. The version of CPAT that ships with ZDM is not the latest one. So, you might be missing out on bug fixes and new recommendations. If you want to get the latest and greatest recommendations, I suggest that you manually update the CPAT tool in your ZDM home. You can find instructions in the MOS note Cloud Premigration Advisor Tool (CPAT) Analyzes Databases for Suitability of Cloud Migration (Doc ID 2758371.1)

B – Ignore Certain Data Pump Errors

If there is an error in either your Data Pump export or import, then ZDM will error out. But by default, ZDM will ignore the following errors:

  • ORA-31684: Object type string already exists
  • ORA-39111: Dependent object type string skipped, base object type string already exists
  • ORA-39082: Object type string created with compilation warnings

If you know that your data exports or imports with a specific error and you want to ignore it then you can add your own errors. The list of ignorable errors is maintained by the ZDM service host and it applies to all migrations. So, if you add another error for one migration, that error is also ignored in a second migration. Read more about it in the documentation.

C – RAC Databases

If your source database is a RAC database, you only need access to one of the nodes. Select the node that you want to use, and use that node's host name and SSH key.

D – Response File

Allow me to put some comments on the values:

Setting Comment
DATAPUMPSETTINGS_CREATEAUTHTOKEN I set this to false to re-use an existing auth token. I will be prompted to enter the auth token when the migration starts
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME Name of the bucket in OCI Object Storage that will be used as a staging area. I recommend using a separate and dedicated bucket for each migration
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME This is your Object Storage Namespace. You find it in the OCI Console, Administration > Tenancy Details
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE For on-prem databases set to number of physical cores x 2. For OCI databases set to number of OCPUs. Standard Edition does not allow parallel, so set to 1
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE Set to the number of OCPUs
DATAPUMPSETTINGS_JOBMODE I recommend using SCHEMA. Each schema to import is specified in a separate parameter, INCLUDEOBJECTS-n
DATAPUMPSETTINGS_METADATAREMAPS-n You should remap any tablespace into DATA, which is the only one that exists in ADB. If you have a lot of data, you will have DATA_2, DATA_3, and so forth. In addition, I am remapping SH into SH2. SH is already created as a common user in ADB, and I want to import into an empty schema. If you need to specify multiple remaps, increment the number and specify only one remap per parameter
INCLUDEOBJECTS-n Specify each schema in a separate parameter, example: INCLUDEOBJECTS-1=owner:SH and INCLUDEOBJECTS-2=owner:OE
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT This is the fingerprint of the API keys that you used to configure OCI CLI on the ZDM service host. You can find the fingerprint in the OCI Console. Go to Identity > Users > User Details > API Keys
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE This is the private key file that you used when you configured OCI CLI on the ZDM service host
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME This is a service that connects directly into the source PDB (or non-CDB). I normally use lsnrctl status to find the name of the service
SOURCEDATABASE_ These settings apply to my source PDB. Or, to your non-CDB

E – Data Pump Logfile

Check out my Pro Tips for information on how to retrieve the Data Pump log file.