To use Zero Downtime Migration (ZDM) I must install a Zero Downtime Migration service host. It is the piece of software that will control the entire process of migrating my database into Oracle Cloud Infrastructure (OCI). The requirements are:
- Must be running Oracle Linux 7 or newer.
- 100 GB of disk space according to the documentation. In practice you can get by with far less – a few GB for the binaries plus space for configuration and log files (see the quick check after this list).
- SSH access (port 22) to each of the database hosts.
- Recommended to install it on a separate server (although technically possible to use one of the database hosts).
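If you plan to reuse an existing server rather than provisioning a new one, the first two requirements are quick to verify up front. A minimal sketch (adjust the file system to wherever you intend to install):

# check the Oracle Linux release and available disk space
cat /etc/oracle-release
df -h /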
Create and Configure Server
In my example, I will install the ZDM service host on a compute instance in OCI. There are no specific CPU or memory requirements – ZDM only acts as a coordinator and all the work is done by the database hosts – so I can use the smallest compute shape available. I am using OCI CLI* to create the compute instance, which you can install on your own computer or use from Cloud Shell. But I could just as well use the web interface or the REST APIs.
First, I will define a few variables that you should adjust to your needs. DISPLAYNAME is the hostname of my compute instance – and also the name I see in the OCI web console. AVAILDOM is the availability domain into which the compute instance is created. SHAPE is the compute shape:
DISPLAYNAME=zdm
AVAILDOM=OUGC:EU-FRANKFURT-1-AD-1
SHAPE=VM.Standard2.1
These are the same values you would pick when creating a compute instance in the web console.
In addition, I will define the OCID of my compartment, and also the OCID of the subnet that I will use. I am making sure to select a subnet that I can reach via SSH from my own computer. Last, I have the public key file:
COMPARTMENTID="..."
SUBNETID="..."
PUBKEYFILE="/path/to/key-file.pub"
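If you do not have the compartment OCID at hand, you should be able to look it up with OCI CLI as well. This is a hedged example – in my CLI version the compartment parameter defaults to the tenancy from the config file; older versions may require you to pass -c with the tenancy OCID explicitly:

# lists compartment names and OCIDs; add -c <tenancy OCID> if your CLI version requires it
oci iam compartment list \
   --query "data[].{Name:name, OCID:id}" \
   --output table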
Because I want to use the latest Oracle Linux image, I will query for its OCID and store it in a variable:
IMAGEID=`oci compute image list \
   --compartment-id $COMPARTMENTID \
   --operating-system "Oracle Linux" \
   --sort-by TIMECREATED \
   --query "data[?contains(\"display-name\", 'GPU')==\\\`false\\\`].{DisplayName:\"display-name\", OCID:\"id\"} | [0]" \
   | grep OCID \
   | awk -F'[\"|\"]' '{print $4}'`
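As an optional sanity check, I can confirm that the variable actually contains an image OCID and see which image it resolved to:

echo $IMAGEID
oci compute image get \
   --image-id $IMAGEID \
   --query 'data."display-name"'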
And now I can create the compute instance:
oci compute instance launch \
   --compartment-id $COMPARTMENTID \
   --display-name $DISPLAYNAME \
   --availability-domain $AVAILDOM \
   --subnet-id $SUBNETID \
   --image-id $IMAGEID \
   --shape $SHAPE \
   --ssh-authorized-keys-file $PUBKEYFILE \
   --wait-for-state RUNNING
The command will wait until the compute instance is up and running because I used the --wait-for-state RUNNING option. Now, I can get the public IP address so I can connect to the instance:
VMID=`oci compute instance list \
   --compartment-id $COMPARTMENTID \
   --display-name $DISPLAYNAME \
   --lifecycle-state RUNNING \
   | grep \"id\" \
   | awk -F'[\"|\"]' '{print $4}'`
oci compute instance list-vnics \
   --instance-id $VMID \
   | grep public-ip \
   | awk -F'[\"|\"]' '{print $4}'
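Since I will need that IP address again in a moment, I can capture it in a variable as well. PUBLICIP is just a name I made up for this post, and the command is the same as above:

# PUBLICIP is a made-up variable name; the lookup mirrors the list-vnics call above
PUBLICIP=`oci compute instance list-vnics \
   --instance-id $VMID \
   | grep public-ip \
   | awk -F'[\"|\"]' '{print $4}'`
echo $PUBLICIP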
Prepare Host
The installation process is described in the documentation, which you should visit to get the latest changes. Log on to the ZDM service host as opc using the public IP address. By using -o ServerAliveInterval=300 I can avoid getting kicked off all the time:
ssh -o ServerAliveInterval=300 -i [key-file] opc@[ip-address]
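If I get tired of typing those options, the same settings can go into ~/.ssh/config on my own computer instead. A small sketch – the host alias zdm is just a name I picked:

# "zdm" is an alias I made up; adjust the IP address and key file to your environment
Host zdm
   HostName [ip-address]
   User opc
   IdentityFile [key-file]
   ServerAliveInterval 300

After that, a plain ssh zdm is enough.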
Now, switch to root and install required packages:
[opc@zdm]$ sudo su -
[root@zdm]$ yum -y install \
   gcc \
   kernel-devel \
   kernel-headers \
   dkms \
   make \
   bzip2 \
   perl \
   glibc-devel \
   expect \
   zip \
   unzip \
   kernel-uek-devel-$(uname -r)
Create a ZDM group and user:
[root@zdm]$ groupadd zdm ; useradd -g zdm zdmuser
Make it possible to SSH to the box as zdmuser. I will just reuse the SSH keys from opc:
[root@zdm]$ cp -r /home/opc/.ssh /home/zdmuser/.ssh ; chown -R zdmuser:zdm /home/zdmuser/.ssh
Create directory for Oracle software and change permissions:
[root@zdm]$ mkdir /u01 ; chown zdmuser:zdm /u01
Edit the hosts file, and ensure name resolution works to the source host (srchost) and target host (tgthost):
[root@zdm]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@zdm]$ echo -e "[ip address] tgthost" >> /etc/hosts
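To verify that the names now resolve as expected, getent reads them back through the normal resolver:

[root@zdm]$ getent hosts srchost tgthost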
Install And Configure ZDM
Now, to install ZDM I will log on as zdmuser and set the environment in my .bashrc file:
[root@zdm]$ su - zdmuser
[zdmuser@zdm]$ echo "INVENTORY_LOCATION=/u01/app/oraInventory; export INVENTORY_LOCATION" >> ~/.bashrc
[zdmuser@zdm]$ echo "ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_BASE=\$ORACLE_BASE; export ZDM_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_HOME=\$ZDM_BASE/zdm19; export ZDM_HOME" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_INSTALL_LOC=/u01/zdm19-inst; export ZDM_INSTALL_LOC" >> ~/.bashrc
[zdmuser@zdm]$ source ~/.bashrc
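Optionally, I can verify that the variables are set before continuing:

[zdmuser@zdm]$ env | grep -E 'INVENTORY|ORACLE_BASE|ZDM_'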
Create directories:
[zdmuser@zdm]$ mkdir -p $ORACLE_BASE $ZDM_BASE $ZDM_HOME $ZDM_INSTALL_LOC
Next, download the ZDM software into $ZDM_INSTALL_LOC.
Once downloaded, start the installation:
[zdmuser@zdm]$ $ZDM_INSTALL_LOC/zdminstall.sh setup \
   oraclehome=$ZDM_HOME \
   oraclebase=$ZDM_BASE \
   ziploc=$ZDM_INSTALL_LOC/zdm_home.zip -zdm
The installation process will output some warnings, which can safely be ignored:
[WARNING] [INS-42505] The installer has detected that the Oracle Grid Infrastructure home software at (/u01/app/oracle/zdm19) is not complete.
[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
[WARNING] [INS-41875] Oracle ASM Administrator (OSASM) Group specified is same as the users primary group.
[WARNING] [INS-32022] Grid infrastructure software for a cluster installation must not be under an Oracle base directory.
[WARNING] [INS-32055] The Central Inventory is located in the Oracle base.
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
There are also references to scripts that I should execute, but as mentioned in the documentation, this is not needed. ZDM piggybacks on Grid Infrastructure components, which is why these messages are displayed.
Start the ZDM service:
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice start
And, optionally, I can verify the status of the ZDM service:
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice status
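Should I ever need to stop the service again – for example before patching ZDM – there is a matching stop command:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice stop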
Configure Network Connectivity
The ZDM service host must communicate with the source and target hosts via SSH. For that purpose, I need a private key file for each of the hosts. The private key files must be without a passphrase, in RSA/PEM format, and placed at /home/zdmuser/.ssh/[host name]. In my demo, the files are to be named:
- /home/zdmuser/.ssh/srchost
- /home/zdmuser/.ssh/tgthost
Ensure that only zdmuser can read them:
[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/srchost
[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/tgthost
Now, I will verify the connection. In my example I connect as opc on both database hosts, but you can change that if you like:
[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/srchost opc@srchost
[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/tgthost opc@tgthost
If you get an error when connecting, ensure the following:
- The public key is added to /home/opc/.ssh/authorized_keys on both database hosts (change opc if you are connecting as another user)
- The key files are in RSA/PEM format (the private key file should start with -----BEGIN RSA PRIVATE KEY-----; see the key generation example below)
- The key files are without a passphrase
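If you do not have a suitable key pair yet, one can be generated directly on the ZDM service host. This is a sketch – the -m PEM flag forces the classic PEM format on newer OpenSSH releases, while on Oracle Linux 7 RSA keys are written in PEM format by default:

[zdmuser@zdm]$ ssh-keygen -t rsa -m PEM -N "" -f /home/zdmuser/.ssh/srchost

The matching public key (/home/zdmuser/.ssh/srchost.pub) must then be appended to /home/opc/.ssh/authorized_keys on the source host, and likewise for tgthost.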
That’s It
Now, I have a working ZDM service host. Previously, I prepared my source and target environments, which means I am ready to start the migration process. Stay tuned!
References
* I found this blog post by Michał very useful when figuring out how to use OCI CLI to create a compute instance.