Zero Downtime Migration – The Pro Tips

Here are my pro tips: a mix of all the notes that didn’t make it into the previous blog posts but are still too good to throw away.

Pro Tip 1: Log Files

If something goes south where can you find the log files? On the ZDM service host:

  • $ZDM_BASE/chkbase/scheduled
  • $ZDM_BASE/crsdata/<hostname>/rhp

On the source and target hosts you can also find additional log files containing all the commands that are executed by ZDM:

  • $ORACLE_BASE/zdm/zdm_<db_unique_name>_<zdm_job_id>/zdm/log

Other sources:

  • Alert log
  • Data Pump process trace file DM00
  • Data Pump log file – in the directory referenced by the directory object, or in $ORACLE_HOME/rdbms/log/<PDB GUID>

Pro Tip 2: Troubleshooting

When you are troubleshooting, it is sometimes useful to get rid of all the log files and have ZDM start all over. Some of the log files get really big and are hard to read, so I usually stop the ZDM service, delete all the log files, and then restart the ZDM service and my troubleshooting. Only do this if no jobs other than the one you are troubleshooting are running:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmservice stop
[zdmuser@zdmhost]$ rm $ZDM_BASE/crsdata/*/rhp/rhpserver.log*
[zdmuser@zdmhost]$ rm $ZDM_BASE/chkbase/scheduled/*
[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmservice start
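The three cleanup commands above can be wrapped in a small function. This is only a sketch: the function name is mine, and it assumes the default log locations from Pro Tip 1. Only run it while the ZDM service is stopped and no other jobs are active.

```shell
#!/bin/bash
# Sketch: wipe ZDM log files before a fresh troubleshooting run.
# Takes ZDM_BASE as an argument; adjust if your layout differs.
clean_zdm_logs() {
  local base="$1"
  # rhpserver logs, one directory per host
  rm -f "$base"/crsdata/*/rhp/rhpserver.log*
  # scheduled job logs
  rm -f "$base"/chkbase/scheduled/*
}

# Typical use on the ZDM service host:
#   $ZDM_HOME/bin/zdmservice stop
#   clean_zdm_logs "$ZDM_BASE"
#   $ZDM_HOME/bin/zdmservice start
```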

There are also several chapters on troubleshooting in the documentation.

Pro Tip 3: Aborting A Job

Sometimes it is useful to completely restart a migration. If a database migration is already registered in ZDM, you are not allowed to specify another migration job. First, you have to abort the existing job before you can enter a new one.

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli abort job -jobid n

Now, you can use the zdmcli migrate database command again.

Pro Tip 4: Show All Phases

A ZDM migration is split into phases, and you can have ZDM pause after each of the phases. The documentation has a list of all phases but you can also get it directly from the ZDM tool itself for a specific migration job:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp ~/migrate.rsp
   ... \
   ... \
   ... \
   -listphases

Pro Tip 5: Adding Custom Scripts

You can add your own custom scripts to run before or after a phase in the migration job. You can use the -listphases option (described above) to get a list of all the phases. Then decide whether your script should run before or after that phase. This is called an action plug-in. You can bundle action plug-ins together in a template to make them easier to re-use. If this is something you need, you should dive into the documentation.

If you target an Autonomous Database, you are not allowed to execute scripts on the target database host. Instead, you can run .sql scripts.

The environment in which the script starts has some environment variables that you can use, like:

  • Database (ZDM_SRCDB)
  • Oracle Home (ZDM_SRCDBHOME)
  • ZDM Phase (RHP_PHASE)
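A minimal action plug-in could look like this sketch. Only the environment variables come from ZDM; the function name and output format are my own choices, not ZDM conventions:

```shell
#!/bin/bash
# Sketch of a ZDM action plug-in script. ZDM sets ZDM_SRCDB,
# ZDM_SRCDBHOME and RHP_PHASE in the environment when it invokes
# the script; here we simply log them.
zdm_custom_action() {
  echo "custom action: db=${ZDM_SRCDB:-unset}" \
       "home=${ZDM_SRCDBHOME:-unset}" \
       "phase=${RHP_PHASE:-unset}"
}

zdm_custom_action
```

In a real plug-in you would branch on these variables, for example to run a step only for a specific database or phase.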

Pro Tip 6: GoldenGate Health Check

You can use the healthcheck script on the source and target databases – where the extract and replicat processes are running. It will give you invaluable information for troubleshooting, and it is a good idea to run and attach a health check if you need to contact My Oracle Support. It is like an AWR report but with information specific to Oracle GoldenGate replication.

Generate the report by:

  • Installing the objects in the database: ogghc_install.sql
  • Executing the health check: ogghc_run.sql
  • Optionally, cleaning up the objects: ogghc_uninstall.sql

For GoldenGate Microservices Architecture, find the scripts on the GoldenGate hub:

  • /u01/app/ogg/oraclenn/lib/sql/healthcheck

Then run the scripts in the source and target databases.

Pro Tip 7: Convert From Single Instance To RAC

A useful feature of ZDM is that it can convert a single instance database to a RAC database in OCI. And it is super simple to do that. The only thing you have to do is to create the target placeholder database as a RAC database. ZDM will detect that and take care of the rest.

Finally, let me mention that if the source database is RAC One Node or RAC, then the target database must be a RAC database. Be sure to create the target placeholder database as RAC.

Pro Tip 8: Get Data Pump Log File in Autonomous Database

When you are importing into Autonomous Database, the Data Pump log file is stored in the directory DATA_PUMP_DIR. But in Autonomous Database you don’t have access to the underlying file system, so how do you get the log file? One approach is to upload the log file into Object Storage.

  1. ZDM creates a set of credentials as part of the migration workflow. Find the name of the credentials (or create new ones using DBMS_CLOUD):
select owner, credential_name, username, enabled from dba_credentials;
  2. Find the name of the Data Pump log file:
select * from dbms_cloud.list_files('DATA_PUMP_DIR');
  3. Upload it. If you need help generating the URI, check the documentation:
begin
    DBMS_CLOUD.PUT_OBJECT (
       credential_name => '<your credential>',
       object_uri      => 'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/',
       directory_name  => 'DATA_PUMP_DIR',
       file_name       => '<file name>');
end;
/
  4. Use the OCI Console to download the Data Pump log file from the bucket.
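The Object Storage URI follows a fixed pattern, so it can be assembled from your region, namespace, bucket and object name. A small helper sketch (the function name is mine; this builds the full object URI including the /o/<object> part, which DBMS_CLOUD also accepts):

```shell
#!/bin/bash
# Sketch: build a native Object Storage object URI for DBMS_CLOUD.
# All four arguments are your own values.
os_object_uri() {
  local region="$1" namespace="$2" bucket="$3" object="$4"
  printf 'https://objectstorage.%s.oraclecloud.com/n/%s/b/%s/o/%s\n' \
    "$region" "$namespace" "$bucket" "$object"
}

# Example:
#   os_object_uri eu-frankfurt-1 mytenancy mybucket import.log
```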

Other Blog Posts in This Series

Zero Downtime Migration – Install And Configure ZDM

To use Zero Downtime Migration (ZDM) I must install a Zero Downtime Migration service host. It is the piece of software that will control the entire process of migrating my database into Oracle Cloud Infrastructure (OCI). This post describes the process for ZDM version 21. The requirements are:

  • Must be running Oracle Linux 7 or newer.
  • 100 GB disk space according to the documentation. I could do with way less – basically a few GBs for the binaries plus space for configuration and log files.
  • SSH access (port 22) to each of the database hosts.
  • Recommended to install it on a separate server (although technically possible to use one of the database hosts).

Create and configure server

In my example, I will install the ZDM service host on a compute instance in OCI. There are no specific requirements for CPU or memory – ZDM only acts as a coordinator and all the work is done by the database hosts – so I can use the smallest compute shape available. I was inspired to use OCI CLI after reading a blog post by Michał. I will use that approach to create the compute instance. But I could just as well use the web interface or REST APIs.

First, I will define a few variables that you should adjust to your needs. DISPLAYNAME is the hostname of my compute instance – and also the name I see in the OCI webpage. AVAILDOM is the availability domain into which the compute instance is created. SHAPE is the compute shape:

DISPLAYNAME=zdm
AVAILDOM=OUGC:EU-FRANKFURT-1-AD-1
SHAPE=VM.Standard2.1

When I create a compute instance using the webpage, these are the values: [Screenshot of the OCI webpage showing display name, availability domain and shape]

In addition, I will define the OCID of my compartment, and also the OCID of the subnet that I will use. I am making sure to select a subnet that I can reach via SSH from my own computer. Last, I have the public key file:

COMPARTMENTID="..."
SUBNETID="..."
PUBKEYFILE="/path/to/key-file.pub"

Because I want to use the latest Oracle Linux 7 image I will query for the OCID of that and store it in a variable:

IMAGEID=`oci compute image list \
   --compartment-id $COMPARTMENTID \
   --operating-system "Oracle Linux" \
   --sort-by TIMECREATED \
   --query "data[?contains(\"display-name\", 'GPU')==\\\`false\\\` && contains(\"display-name\", 'Oracle-Linux-7')==\\\`true\\\`].{DisplayName:\"display-name\", OCID:\"id\"} | [0]" \
   | grep OCID \
   | awk -F'[\"|\"]' '{print $4}'`
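If the grep/awk pipeline looks cryptic: the -F'["|"]' field separator splits each line on double quotes (the | inside the bracket expression is redundant), so $4 is the quoted value. A self-contained illustration on a sample line:

```shell
#!/bin/bash
# Demonstrate what the awk field separator does on a JSON-ish line:
# fields are split on double quotes, so $4 holds the quoted value.
sample='    "OCID": "ocid1.image.oc1..exampleuniqueid",'
echo "$sample" | grep OCID | awk -F'["|"]' '{print $4}'
```

A more robust alternative, if your OCI CLI version supports it, is to let the CLI do the extraction itself with a narrower --query expression and --raw-output, avoiding the grep/awk parsing altogether.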

And now I can create the compute instance:

oci compute instance launch \
 --compartment-id $COMPARTMENTID \
 --display-name $DISPLAYNAME \
 --availability-domain $AVAILDOM \
 --subnet-id $SUBNETID \
 --image-id $IMAGEID \
 --shape $SHAPE \
 --ssh-authorized-keys-file $PUBKEYFILE \
 --wait-for-state RUNNING

The command will wait until the compute instance is up and running because I used the --wait-for-state RUNNING option. Now, I can get the public IP address so I can connect to the instance:

VMID=`oci compute instance list \
  --compartment-id $COMPARTMENTID \
  --display-name $DISPLAYNAME \
  --lifecycle-state RUNNING \
  | grep \"id\" \
  | awk -F'[\"|\"]' '{print $4}'`
oci compute instance list-vnics \
 --instance-id $VMID \
 | grep public-ip \
 | awk -F'[\"|\"]' '{print $4}'

Prepare Host

The installation process is described in the documentation which you should visit to get the latest changes. Log on to the ZDM service host and install required packages (python36 is needed for OCI CLI):

[root@zdm]$ yum -y install \
  glibc-devel \
  expect \
  unzip \
  libaio \
  kernel-uek-devel-$(uname -r) \
  python36

Create a group (zdm) and user (zdmuser). This is only needed on the ZDM service host; you don’t need this user on the database hosts:

[root@zdm]$ groupadd zdm ; useradd -g zdm zdmuser

Make it possible to SSH to the box as zdmuser. I will just reuse the SSH keys from opc:

[root@zdm]$ cp -r /home/opc/.ssh /home/zdmuser/.ssh ; chown -R zdmuser:zdm /home/zdmuser/.ssh

Create directory for Oracle software and change permissions:

[root@zdm]$ mkdir /u01 ; chown zdmuser:zdm /u01

Edit the hosts file and ensure name resolution works to the source host (srchost) and target host (tgthost):

[root@zdm]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@zdm]$ echo -e "[ip address] tgthost" >> /etc/hosts
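Appending to /etc/hosts like this duplicates entries if you re-run the setup. A sketch of an idempotent variant (the function name is mine; it takes the file as an argument, so you can try it on a scratch file before pointing it at /etc/hosts as root):

```shell
#!/bin/bash
# Sketch: add a host entry only if the name is not already present,
# so re-running the setup does not duplicate lines.
add_host_entry() {
  local file="$1" ip="$2" name="$3"
  grep -qw "$name" "$file" 2>/dev/null || echo "$ip $name" >> "$file"
}

# Example:
#   add_host_entry /etc/hosts 10.0.0.10 srchost
#   add_host_entry /etc/hosts 10.0.0.20 tgthost
```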

Install And Configure ZDM

Now, to install ZDM I will log on as zdmuser and set the environment in my .bashrc file:

[zdmuser@zdm]$ echo "INVENTORY_LOCATION=/u01/app/oraInventory; export INVENTORY_LOCATION" >> ~/.bashrc
[zdmuser@zdm]$ echo "ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_BASE=\$ORACLE_BASE; export ZDM_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_HOME=\$ZDM_BASE/zdm21; export ZDM_HOME" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_INSTALL_LOC=/u01/zdm21-inst; export ZDM_INSTALL_LOC" >> ~/.bashrc
[zdmuser@zdm]$ source ~/.bashrc

Create directories

[zdmuser@zdm]$ mkdir -p $ORACLE_BASE $ZDM_BASE $ZDM_HOME $ZDM_INSTALL_LOC

Next, download the ZDM software into $ZDM_INSTALL_LOC.

Once downloaded, start the installation:

[zdmuser@zdm]$ ./zdminstall.sh setup \
  oraclehome=$ZDM_HOME \
  oraclebase=$ZDM_BASE \
  ziploc=./zdm_home.zip -zdm

And it should look something like this: [Screenshot of a successful ZDM installation]

Start the ZDM service:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice start

Which should produce something like this: [Sample output when starting the ZDM service (jwcctl debug jwc)]

And, optionally, I can verify the status of the ZDM service:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice status

The ZDM service is running

Install OCI CLI

You might need OCI CLI as part of the migration. It is simple to install, so I always do it:

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
oci setup config

You can find further instructions in the OCI CLI documentation.

Configure Network Connectivity

The ZDM service host must communicate with the source and target hosts via SSH. For that purpose, I need a private key file for each of the hosts. The private key files must be without a passphrase and in RSA/PEM format, and I have to put them at /home/zdmuser/.ssh/[host name]. In my demo, the files are named:

  • /home/zdmuser/.ssh/srchost
  • /home/zdmuser/.ssh/tgthost

Ensure that only zdmuser can read them:

[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/srchost
[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/tgthost

Now, I will verify the connection. In my example I will connect to opc on both database hosts, but you can change it if you like:

[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/srchost opc@srchost
[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/tgthost opc@tgthost

If you get an error when connecting, ensure the following:

  • The public key is added to /home/opc/.ssh/authorized_keys on both database hosts (change opc if you are connecting as another user)
  • The key files are in RSA/PEM format (the private key file should start with -----BEGIN RSA PRIVATE KEY-----)
  • The key files are without a passphrase
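Parts of this checklist can be automated. A sketch (the function name is mine; a passphrase usually shows up as an ENCRYPTED marker inside a PEM key, but a manual ssh test remains the authoritative check):

```shell
#!/bin/bash
# Sketch: sanity-check a private key file against the checklist above:
# RSA/PEM header, no passphrase marker, restrictive permissions.
check_key() {
  local key="$1"
  head -n 1 "$key" | grep -q '^-----BEGIN RSA PRIVATE KEY-----' \
    || { echo "$key: not in RSA/PEM format"; return 1; }
  if grep -q 'ENCRYPTED' "$key"; then
    echo "$key: appears to have a passphrase"; return 1
  fi
  [ "$(stat -c '%a' "$key")" = "400" ] \
    || { echo "$key: permissions should be 400"; return 1; }
  echo "$key: looks OK"
}

# Example:
#   check_key /home/zdmuser/.ssh/srchost
#   check_key /home/zdmuser/.ssh/tgthost
```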

That’s It

Now, I have a working ZDM service host. I am ready to start the migration process.

It is probably also a good idea to find a way to start the ZDM service automatically, if the server restarts.

There is also a community marketplace image that comes with ZDM already installed; evaluate it and see if it is something for you.

Other Blog Posts in This Series

Zero Downtime Migration

When you need to migrate into OCI we have a cool – and free – tool that you can use: Zero Downtime Migration (ZDM).

You can use Zero Downtime Migration to easily migrate to Oracle Cloud Infrastructure - OCI

In short: ZDM builds a copy of your database in OCI. It keeps the OCI database in sync with the on-prem database until you are ready to complete the migration. Then connections are simply switched over to the OCI database. ZDM will take care of all the steps and ensure nothing is lost in the process.

In this blog post series, I will take you through the entire process using version 21 (the latest at time of writing). In the end you will know all there is to know – and you can start migrating your databases into OCI.

Source Database

Your source database can be located:

  • On-prem
  • OCI Classic (you know, our old cloud)
  • OCI (the new cloud, useful when you want to migrate between regions or locations or between system types)

Requirements:

  • The source database must be running 11.2.0.4 or newer
  • Source platform must be Linux

Options

You can migrate your database in two ways – each of them in an online and offline manner:

  • Physical Online – The entire database is migrated by restoring a backup of the database and instantiating it as a standby database. The standby database is kept in sync with redo apply. It is online because the only downtime needed is to perform a regular switchover.
  • Physical Offline – The entire database is migrated by restoring a backup of the database. It is offline because there is downtime while the backup is created, transferred and restored in OCI.
  • Logical Online – One or more schemas are migrated using Data Pump. In addition, Oracle GoldenGate is used to keep the OCI database in sync to avoid downtime. It is online because the only downtime needed is to switch over the users to the new database.
  • Logical Offline – Like the online option but without Oracle GoldenGate on top. It is offline because the database is unavailable during the export and import operation.

Free? Easy?

I know what you are thinking right now. First, you say it is a free tool, and now you mention Oracle GoldenGate. We don’t have a license for Oracle GoldenGate. Possibly, you are also thinking that Oracle GoldenGate is complex. Let me address that:

Free

If you migrate your database into OCI (currently ExaCC and Exadata on-prem are excluded), you can use Oracle GoldenGate (for migration purposes only) without paying a separate license – provided you use the OCI Marketplace image. It says:

Oracle GoldenGate for Oracle – Database Migrations can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using the following tools:

  • Oracle Zero Downtime Migration
  • Oracle Cloud Infrastructure Database Migration

So no license for Oracle GoldenGate to handle the migration. You still have to pay for the underlying compute instance – but no license.

Complex

To install Oracle GoldenGate simply follow the wizard to deploy your installation. It is basically just click-click-click. Afterwards, you need to configure the extract and replicat process, but ZDM takes care of that for you. If all goes well, you won’t even have to log into the Oracle GoldenGate Hub.

Yes, Oracle GoldenGate is a complex product, but for this purpose you should not worry. I promise you. It is super easy.

Comparison

|                                                        | Physical Online | Physical Offline | Logical Online | Logical Offline |
|--------------------------------------------------------|-----------------|------------------|----------------|-----------------|
| TARGET PLATFORMS                                       |                 |                  |                |                 |
| ATP-D                                                  |                 |                  | x              | x               |
| ATP-S                                                  |                 |                  | x              | x               |
| ADW-D                                                  |                 |                  | x              | x               |
| ADW-S                                                  |                 |                  | x              | x               |
| ExaCS                                                  | x               | x                | x              | x               |
| ExaCC                                                  | x               | x                | x              | x               |
| Exadata (on-prem)                                      | x               | x                | x              | x               |
| Bare Metal DBCS                                        | x               | x                | x              | x               |
| Virtual Machine DBCS                                   | x               | x                | x              | x               |
| RELEASE AND EDITIONS                                   |                 |                  |                |                 |
| SE2                                                    |                 | x                | x              | x               |
| EE                                                     | x               | x                | x              | x               |
| Migrate from SE2 to EE                                 |                 |                  | x              | x               |
| Migrate to same version                                | x               | x                | x              | x               |
| Migrate to same version, higher patch level            | x               | x                | x              | x               |
| Migrate to higher version (see note 1)                 |                 |                  | x              | x               |
| ARCHITECTURE                                           |                 |                  |                |                 |
| Non-CDB                                                | x               | x                | x              | x               |
| CDB (see note 2)                                       | x               | x                | x              | x               |
| Migrate into PDB – no extra downtime                   |                 |                  | x              | x               |
| Migrate into PDB – with extra downtime (see note 3)    | x               | x                |                |                 |
| Single instance                                        | x               | x                | x              | x               |
| RAC One Node (see note 4)                              | x               | x                | x              | x               |
| RAC                                                    | x               | x                | x              | x               |
| Migrate from single instance to RAC                    | x               | x                | x              | x               |
| Migrate from RAC to single instance                    |                 |                  | x              | x               |
| ENCRYPTION                                             |                 |                  |                |                 |
| Unencrypted (see note 5)                               | x               | x                | x              | x               |
| Encrypted                                              | x               | x                | x              | x               |
| Encrypt data-in-transit during migration (see note 6)  | x               | x                | x              | x               |

Note 1: When doing physical migrations, you can’t migrate directly into a higher release. However, you are free to upgrade the database afterwards. But that will incur additional downtime.

Note 2: When migrating databases using the physical option all PDBs are migrated. If you use the logical approach, you migrate each PDB individually and you can choose which you want.

Note 3: When you migrate databases using the physical option, you can optionally convert the database into a PDB afterwards. However, that will incur additional downtime while the noncdb_to_pdb.sql script is executed.

Note 4: RAC One Node is always migrated into RAC.

Note 5: All databases must be encrypted in OCI. An unencrypted database is always encrypted when it is created in OCI.

Note 6: A combination of techniques is in play here (dump file encryption, SQL*Net encryption, HTTPS, SSH/rsync), and it depends a lot on how you choose to carry out the migration. As an example, when doing a physical online migration of an unencrypted source database, the data files are encrypted on the fly as they are created in OCI. The initial backup of the source database is sent over an encrypted connection to OCI Object Storage, and redo is transferred over encrypted SQL*Net.

Network Connectivity

  • The ZDM service host needs SSH access (port 22) to the source database host. Unless the target database is an Autonomous Database, it also needs SSH access to the target database host.
  • If you plan on using OCI Object Storage as a staging area, the source database needs access to OCI Object Storage over HTTPS (443). Unless the target database is an Autonomous Database, the same applies to the target database host.
  • SQL*Net connections (port 1521) are needed between the two database hosts.
  • If you will be using Oracle GoldenGate as well, you need SQL*Net connections (port 1521) from the GoldenGate Hub to both the source and target databases. In addition, you need HTTPS (port 443) from the ZDM service host to the GoldenGate Hub.

And then there are all the special cases with proxy and Bastion host which I will not cover here.
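A quick way to sanity-check these ports before starting a migration is a small connectivity probe. This is a generic sketch, not part of ZDM; it uses bash's built-in /dev/tcp and only proves a TCP connection can be opened, not that the service behind it is configured correctly:

```shell
#!/bin/bash
# Sketch: test whether a TCP connection to host:port can be opened
# within 2 seconds. Substitute your own host/port pairs.
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example checks from the ZDM service host:
#   port_open srchost 22   && echo "SSH to source OK"
#   port_open tgthost 1521 && echo "SQL*Net to target OK"
```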

Other Blog Posts in This Series

This is the introduction blog post in this series. Over the next days the other blog posts will follow. Stay tuned!