In the previous blog posts we configured the environments and installed and configured the ZDM service host. Now we can start working on the actual migration.
I will use Object Storage as the staging area between the source and target host, and, hence, I need to create a bucket that I can use for that purpose:
$ oci os bucket create \
   --compartment-id "..." \
   --name "zdm-staging"
Connect to the ZDM service host as zdmuser and have a look at a template response file. It contains a description of each of the parameters that you can use:
[zdmuser@zdm]$ more $ZDM_HOME/rhp/zdm/template/zdm_template.rsp
Now, I can create my own response file:
[zdmuser@zdm]$ vi ~/migrate.rsp
In my demo it contains the following:
#Migration method: DG_OSS - Data Guard using Object Storage for standby initialization
MIGRATION_METHOD=DG_OSS

#This is the DB_UNIQUE_NAME of the target database. Connect to the target database and execute: SELECT db_unique_name FROM v$database;
TGT_DB_UNIQUE_NAME=CDB1_fra3kw

#Names of the ASM disk groups that I will use. To get a list of disk groups and free space, connect to the target database and execute: SELECT name, free_mb, total_mb FROM v$asm_diskgroup;
TGT_DATADG=+DATA
TGT_REDODG=+RECO
TGT_RECODG=+RECO

#BACKUP_PATH should be left blank when MIGRATION_METHOD=DG_OSS
BACKUP_PATH=

#URL to OCI Object Storage
HOST=https://swiftobjectstorage.[region].oraclecloud.com/v1/[your-tenancy-object-storage-namespace]

#The Object Storage bucket that will be used as a staging area
OPC_CONTAINER=zdm-staging

#In my simple demo I will skip configuration of fallback and just shut down the source database after the switchover
SKIP_FALLBACK=TRUE
SHUTDOWN_SRC=TRUE
If you need help figuring out what
HOST should be set to, you can look in the documentation.
In my demo I use the Frankfurt data center, so the region is set to eu-frankfurt-1. You can visit the API documentation for a list of regions, but the region string is also listed in many other places.
To figure out what your tenancy object storage namespace is you can use
oci os ns get. Alternatively, in the OCI Console, open the Profile menu and click Tenancy: <your tenancy name>. The namespace string is listed under Object Storage Settings. To read more, have a look at the OCI documentation. Thanks to Bartlomiej Sowa for putting in a comment with this information – much appreciated!
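As a sketch, the HOST value can be assembled like this (the namespace value below is a placeholder; look up your real one with oci os ns get):

```shell
REGION="eu-frankfurt-1"
NAMESPACE="frxxxxxxxxxx"   # placeholder; find yours with: oci os ns get

# Assemble the Swift endpoint for the response file
echo "https://swiftobjectstorage.${REGION}.oraclecloud.com/v1/${NAMESPACE}"
# prints: https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/frxxxxxxxxxx
```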
Now, let’s start a migration evaluation. Most of the parameters are self-explanatory, but you can also look them up in the documentation. Normally, you should leave
tgtauth set to zdmauth. The
identity_file arguments (srcarg2 and tgtarg2) refer to the private key files that are needed to SSH to the source and target hosts. The
backupuser is the user name that you want to use to connect to OCI Object Storage:
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp ~/migrate.rsp \
   -sourcesid CDB1 \
   -sourcenode srchost \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -targethome /u01/app/oracle/product/19.0.0.0/dbhome_1 \
   -backupuser "firstname.lastname@example.org" \
   -eval
Shortly after, you will be prompted for the SYS password of the source database and for the password of the
backupuser (which is your OCI user). For the latter, please note that this is not your user password, but an auth token:
Also, from the output you can see the ID of your job. Use it to query the status of the job:
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli query job -jobid 1
Hopefully, you will end up with a successful evaluation:
Before we start the actual migration, let me say a few words about backups during a ZDM migration. You should keep your regular backup strategy running during the migration, but avoid having ZDM backups and regular backups run at the same time. Also, if you are dealing with a RAC database, be sure to put the snapshot control file on shared storage; otherwise, you might get ORA-00245 errors during backups.
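A minimal sketch of moving the snapshot control file to shared storage with RMAN (the +DATA location below is just an example; use a disk group or shared path that exists in your environment):

```
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/snapcf_CDB1.f';
```

You only need to do this once per database; the setting is persisted in the RMAN configuration.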
Back on track! Start the migration, but – VERY IMPORTANT – specify that ZDM should pause after the standby has been built and redo apply has started, using the
-pauseafter option. If you fail to do so, the switchover will be executed as soon as the standby has been built:
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp ~/migrate.rsp \
   -sourcesid CDB1 \
   -sourcenode srchost \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -targethome /u01/app/oracle/product/19.0.0.0/dbhome_1 \
   -backupuser "email@example.com" \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -pauseafter ZDM_CONFIGURE_DG_SRC
I can use the same command (
zdmcli query job) to query the progress of the migration. After a while, the migration will pause at the phase ZDM_CONFIGURE_DG_SRC, as requested.
Now the standby database has been built in OCI. Redo gets transferred from my source database to the target database in OCI and applied. If you want to verify it, you can use these queries on the source/primary database:
SELECT host_name, instance_name, db_unique_name, status, database_role, open_mode FROM v$database, v$instance;

SELECT thread#, MAX(sequence#) FROM v$archived_log GROUP BY thread#;
And on the target/standby database:
SELECT host_name, instance_name, db_unique_name, status, database_role, open_mode FROM v$database, v$instance;

SELECT thread#, MAX(sequence#) FROM v$archived_log WHERE applied='YES' GROUP BY thread#;
One of the really cool features of ZDM is that I can now use my standby database for testing in OCI – before I decide to do the switchover. You can achieve this by converting it to a snapshot standby database. I will explain this in a later blog post.
Now I can just sit back and wait. All the changes from my source environment are continuously replicated to my target database in OCI. When it is time to complete the migration and perform the switchover, I can simply let ZDM finalize the job. You shouldn’t worry – ZDM won’t do the switchover until it has verified that all changes are applied on the target database.
However, to prove that it really works, let’s add another piece of fruit to our SALES PDB:
INSERT INTO zdmtest.items VALUES (4, 'Lemon', 4);
COMMIT;
And instruct ZDM to resume the job:
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli resume job -jobid 2
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli query job -jobid 2
In the end you will have output similar to this:
Job execution elapsed time: 3 minutes 37 seconds
ZDM_GET_SRC_INFO .............. COMPLETED
ZDM_GET_TGT_INFO .............. COMPLETED
ZDM_SETUP_SRC ................. COMPLETED
ZDM_SETUP_TGT ................. COMPLETED
ZDM_GEN_RMAN_PASSWD ........... COMPLETED
ZDM_PREUSERACTIONS ............ COMPLETED
ZDM_PREUSERACTIONS_TGT ........ COMPLETED
ZDM_VALIDATE_SRC .............. COMPLETED
ZDM_VALIDATE_TGT .............. COMPLETED
ZDM_OBC_INST_SRC .............. COMPLETED
ZDM_OBC_INST_TGT .............. COMPLETED
ZDM_BACKUP_FULL_SRC ........... COMPLETED
ZDM_BACKUP_INCREMENTAL_SRC .... COMPLETED
ZDM_DISCOVER_SRC .............. COMPLETED
ZDM_COPYFILES ................. COMPLETED
ZDM_PREPARE_TGT ............... COMPLETED
ZDM_SETUP_TDE_TGT ............. COMPLETED
ZDM_CLONE_TGT ................. COMPLETED
ZDM_FINALIZE_TGT .............. COMPLETED
ZDM_CONFIGURE_DG_SRC .......... COMPLETED
ZDM_SWITCHOVER_SRC ............ COMPLETED
ZDM_SWITCHOVER_TGT ............ COMPLETED
ZDM_SHUTDOWN_SRC .............. COMPLETED
ZDM_NONCDBTOPDB_PRECHECK ...... COMPLETED
ZDM_NONCDBTOPDB_CONVERSION .... COMPLETED
ZDM_POSTUSERACTIONS ........... COMPLETED
ZDM_POSTUSERACTIONS_TGT ....... COMPLETED
ZDM_CLEANUP_SRC ............... COMPLETED
ZDM_CLEANUP_TGT ............... COMPLETED
And now my database is migrated to the cloud. Let’s query the target database:
SELECT host_name, instance_name, db_unique_name, status, database_role, open_mode FROM v$database, v$instance;
And check that all four pieces of fruit are in our SALES PDB:
SELECT * FROM zdmtest.items;
And that should be it: Welcome to OCI! Personally, I would also take a quick peek at the alert log, just to ensure things are running smoothly.
Exadata DB System
If your target environment is an Exadata DB System (or ExaCS), it is a good idea to ensure that the cloud tooling is fully up to date with the new database:
[root@tgthost]$ dbaascli registerdb prereqs --dbname [db_name] --db_unique_name [db_unique_name]
[root@tgthost]$ dbaascli registerdb begin --dbname [db_name] --db_unique_name [db_unique_name]
You can read more about it in the ZDM run book which also has a sample output from the commands.
If you run into this error:
PRCZ-4001 : failed to execute command "/bin/uname" using the privileged execution plugin "zdmauth" on nodes "doverbyh-zdm-tgt" within 120 seconds
PRCZ-2006 : Unable to establish SSH connection to node "doverbyh-zdm-tgt" to execute command "/bin/uname"
No more authentication methods available
Check your key files. They must be in RSA/PEM format (the private key must start with
-----BEGIN RSA PRIVATE KEY-----).
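If you need to generate a fresh key pair in the accepted format, a sketch (writing to /tmp here purely for illustration; in practice use a path like /home/zdmuser/.ssh/srchost):

```shell
# Generate an RSA key pair in PEM format; keys in the newer OpenSSH format
# (-----BEGIN OPENSSH PRIVATE KEY-----) are not accepted
ssh-keygen -t rsa -m PEM -f /tmp/zdm_demo_key -N '' -q

# The private key must start with the RSA PEM header
head -1 /tmp/zdm_demo_key
# prints: -----BEGIN RSA PRIVATE KEY-----
```

An existing OpenSSH-format key can be converted in place with ssh-keygen -p -m PEM -f <keyfile> (make a backup first; the conversion overwrites the file).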
If you run into this error:
srchost: 07:08:00.000: Validating object store credentials..
srchost: <ERR_FILE><Facility>PRGO</Facility><ID>ZDM_OBC_INSTALL_CREDENTIALS_INVALID</ID></ERR_FILE>
Check your credentials to OCI. Remember, when you are prompted for the password of your OCI account, it is an auth token, not your password (even though the prompt text is misleading).
If the phase ends in
PRECHECK_FAILED and there is no real clue about the error, ensure that the source database host is added to the
known_hosts file on the ZDM service host.
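One way to add the host key is with ssh-keyscan (a sketch; "srchost" is the demo hostname, so substitute your own, and review the appended entry afterwards):

```shell
# Append the source host's public key to known_hosts on the ZDM service host
# ("srchost" is a placeholder for your source database host)
ssh-keyscan srchost >> ~/.ssh/known_hosts 2>/dev/null || true
```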
Also, you can verify connectivity by trying to log on via SSH:
[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-source-host>
If the phase ends in
PRECHECK_FAILED and there is no real clue about the error, ensure that the target database host is added to the
known_hosts file on the ZDM service host. Again, you can verify connectivity by trying to log on via SSH:
[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-target-host>