Zero Downtime Migration – Logical Online Migration to DBCS

Let me show you how you can migrate an Oracle Database into OCI. My source database is a PDB running on 12.1.0.2. I want to migrate it directly into a 19c CDB. I will show you:

  • import via dump file and object storage
  • import via database link

This procedure can be used to migrate to:

  • VM DB Systems
  • Bare Metal DB Systems
  • Exadata DB System (ExaCS)

Prerequisites

  1. I need a ZDM service host. I have already blogged about how to create one. The computer is called zdmhost and the user is called zdmuser.
  2. My source database is a PDB called srcpdb, the CDB is called srccdb, and the host is named srchost. All my data is in the schema SH.
  3. I have already created a GoldenGate hub as described in this blog post. It is called ogg19cora.
  4. I want to migrate to a VM DB System, and it is already created. The host is named tgthost, the CDB is named tgtcdb, and the PDB is named tgtpdb. You can name them whatever you want. Be sure to use ASM as your storage management software.

Overview of the components in this demo

Prepare Source

My source database is a PDB. I need to prepare the PDB and CDB$ROOT. If you have a non-CDB, perform all the steps in your non-CDB unless otherwise noted.

First, I prepare the database for GoldenGate replication:

alter session set container=CDB$ROOT;
alter database force logging;
alter database add supplemental log data;
alter system set enable_goldengate_replication=true scope=both;
--At least 2GB
alter system set streams_pool_size=2g scope=both;
alter system set global_names=false;

Create a common user for GoldenGate in CDB$ROOT. Skip this step for non-CDBs:

alter session set container=CDB$ROOT;
create user c##ggadmin identified by <my_secret_pwd> default tablespace users temporary tablespace temp;
grant connect, resource to c##ggadmin;
grant unlimited tablespace to c##ggadmin;
grant select any dictionary to c##ggadmin;
grant create view to c##ggadmin;
grant execute on dbms_lock to c##ggadmin;
exec dbms_goldengate_auth.grant_admin_privilege('c##ggadmin',container=>'all');

Create a user for GoldenGate:

alter session set container=SRCPDB;
create user ggadmin identified by <my_secret_pwd> default tablespace users temporary tablespace temp;
grant connect, resource to ggadmin;
grant unlimited tablespace to ggadmin;
grant select any dictionary to ggadmin;
grant create view to ggadmin;
grant execute on dbms_lock to ggadmin;
exec dbms_goldengate_auth.grant_admin_privilege('ggadmin');

Generate a list of tablespaces that must exist in the target database. Add all schemas to the in-list:

alter session set container=SRCPDB;
select distinct tablespace_name
from (
      select distinct tablespace_name from dba_segments where owner in ('SH')
      union
      select distinct default_tablespace from dba_users where username in ('SH')
      union
      select distinct tablespace_name from dba_ts_quotas where dropped = 'NO' and username in ('SH')
      );

Optional. The best starting point for Data Pump is good and accurate dictionary statistics. This helps Data Pump extract the information as fast as possible:

exec dbms_stats.gather_dictionary_stats;

Via Dump File

I need to create a directory that can be used by Data Pump:

[oracle@srchost]$ mkdir -p /u01/app/oracle/datapump/mydirsrc

Prepare Target

The target database must use a timezone file version that is equal to or higher than the source:

alter session set container=tgtpdb;
select * from v$timezone_file;
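
For comparison, here is a quick sketch of the same check in the source PDB; the target's version must be equal to or higher:

alter session set container=SRCPDB;
select version from v$timezone_file;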

Prepare the database for GoldenGate replication:

alter session set container=CDB$ROOT;
alter system set enable_goldengate_replication=true scope=both;

Create a user for GoldenGate:

alter session set container=tgtpdb;
create user ggadmin identified by <my_secret_pwd> default tablespace users temporary tablespace temp;
grant connect, resource to ggadmin;
grant unlimited tablespace to ggadmin;
grant select any dictionary to ggadmin;
grant create view to ggadmin;
grant execute on dbms_lock to ggadmin;
exec dbms_goldengate_auth.grant_admin_privilege('ggadmin');

Allow the GoldenGate user to perform DML (see appendix A):

alter session set container=tgtpdb;
grant insert any table to ggadmin;
grant update any table to ggadmin;
grant delete any table to ggadmin;

I ensure that the tablespaces I found in the source database exist in the target database. If some are missing, I create them. Also, I extend the data files in advance, so the size is pretty close to the expected end size. For new tablespaces I set the initial size of the data files. If the data files are too small, I waste a lot of time during import on auto-extension. If I make them too big, I potentially waste space:

alter database datafile '...' resize 100G;
create tablespace ... size 100G ...;
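
To estimate the expected end size, a simple sketch like this in the source PDB can help; adjust the owner list to your schemas:

alter session set container=SRCPDB;
select tablespace_name, round(sum(bytes)/1024/1024/1024) as size_gb
from dba_segments
where owner in ('SH')
group by tablespace_name;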

Via Dump File

I need to create a directory that can be used by Data Pump. It will hold the Data Pump dump files and logs:

[oracle@tgthost]$ mkdir -p /u01/app/oracle/datapump/mydirtgt

Via DB Link

When I use a DB link, ZDM insists on using the built-in directory called DATA_PUMP_DIR. I ensure that the database directory exists, and I also verify that the directory exists in the file system:

select directory_path from dba_directories where directory_name='DATA_PUMP_DIR';
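
To verify the file system location, a quick sketch; take the path returned by the query above:

[oracle@tgthost]$ ls -ld <directory_path>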

I will do the import directly over a database link. In that case, the target host must be able to resolve the host name of my source database host:

[root@tgthost]$ echo "<source IP address> srchost" >> /etc/hosts

Prepare ZDM

The ZDM service host must be able to resolve the host names of the source and target database host and the GoldenGate hub:

[root@zdmhost]$ echo "<source IP address> srchost" >> /etc/hosts
[root@zdmhost]$ echo "<target IP address> tgthost" >> /etc/hosts
[root@zdmhost]$ echo "<GoldenGate IP address> ogg19cora" >> /etc/hosts

I put my private SSH keys to the source and target database hosts into the ~/.ssh directory and ensure permissions are set properly:

[zdmuser@zdmhost]$ cp srchost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/srchost_key_file 
[zdmuser@zdmhost]$ cp tgthost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/tgthost_key_file 

Test the connection. I connect as opc, but you might have a different user. Read more about access to the database host in the documentation:

[zdmuser@zdmhost]$ ssh -i ~/.ssh/srchost_key_file opc@srchost date
[zdmuser@zdmhost]$ ssh -i ~/.ssh/tgthost_key_file opc@tgthost date

If you have configured a proper certificate on your GoldenGate hub, you can jump to the next chapter. If not, read on.

When deployed, the GoldenGate hub comes with a self-signed certificate. ZDM will complain about it, because it is considered insecure. For migrations that involve real data, you should use a proper certificate on your GoldenGate hub to ensure your migration is secure. But for tests and demos you might want to disregard the security warning. If so, you can add the self-signed certificate as a trusted one to the Java JDK’s certificate store ($ZDM_HOME/jdk/jre/lib/security/cacerts). You find instructions on how to do so in the MOS support note Zero Downtime Migration – GoldenGate Hub Certificate Known Issues (Doc ID 2768483.1). Use the same host name that you added to /etc/hosts.
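
The MOS note has the authoritative steps, but as a rough sketch you could download the certificate and import it into the keystore like this; the port 443 and the default keystore password changeit are assumptions:

[zdmuser@zdmhost]$ echo | openssl s_client -connect ogg19cora:443 2>/dev/null | openssl x509 > ogg19cora.crt
[zdmuser@zdmhost]$ $ZDM_HOME/jdk/bin/keytool -importcert \
   -alias ogg19cora \
   -file ogg19cora.crt \
   -keystore $ZDM_HOME/jdk/jre/lib/security/cacerts \
   -storepass changeit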

For your reference, this is the error that ZDM will throw if the certificate is not trusted:

Verifying status of Oracle GoldenGate Microservices at URL "https://..."
PRGZ-1136 : failed to verify configuration and status of Oracle GoldenGate Microservices at URL "https://..."
PRGG-1008 : failed to retrieve detailed information for the health of the Service Manager on Oracle GoldenGate hub "https://..."
PRGG-1001 : HTTP GET request "https://.../services/v2/config/health" failed.
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
unable to find valid certification path to requested target

Prepare Response File

I will use a template response file as the basis for my response file:

[zdmuser@zdmhost]$ cp $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp ~/logical_online.rsp
[zdmuser@zdmhost]$ chmod 700 ~/logical_online.rsp

Via Dump Files

This is the response file that I end up with (see appendix E for an explanation):

MIGRATION_METHOD=ONLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS

SOURCEDATABASE_ADMINUSERNAME=SYSTEM
SOURCEDATABASE_GGADMINUSERNAME=GGADMIN
SOURCEDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb....oraclevcn.com
SOURCECONTAINERDATABASE_ADMINUSERNAME=SYSTEM
SOURCECONTAINERDATABASE_GGADMINUSERNAME=C##GGADMIN
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_SERVICENAME=SRCCDB_fra3dd....oraclevcn.com

TARGETDATABASE_OCID=ocid1.database.oc1.eu-frankfurt-1....
TARGETDATABASE_ADMINUSERNAME=SYSTEM
TARGETDATABASE_GGADMINUSERNAME=GGADMIN
TARGETDATABASE_CONNECTIONDETAILS_HOST=tgthost
TARGETDATABASE_CONNECTIONDETAILS_PORT=1521
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=tgtpdb....oraclevcn.com

OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=58:b9:...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1

GOLDENGATEHUB_ADMINUSERNAME=oggadmin
GOLDENGATEHUB_URL=https://ogg19cora....oraclevcn.com
GOLDENGATEHUB_SOURCEDEPLOYMENTNAME=Source
GOLDENGATEHUB_TARGETDEPLOYMENTNAME=Target
GOLDENGATEHUB_COMPUTEID=ocid1.instance.oc1.eu-frankfurt-1....

DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=MYDIRTGT
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/datapump/mydirtgt
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=MYDIRSRC
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/datapump/mydirsrc
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME=zdm-staging
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME=oradbclouducm
INCLUDEOBJECTS-1=owner:SH

Via DB Link

This is the response file that I end up with (see appendix E for an explanation):

MIGRATION_METHOD=ONLINE_LOGICAL
DATA_TRANSFER_MEDIUM=DBLINK

SOURCEDATABASE_ADMINUSERNAME=SYSTEM
SOURCEDATABASE_GGADMINUSERNAME=GGADMIN
SOURCEDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb....oraclevcn.com
SOURCECONTAINERDATABASE_ADMINUSERNAME=SYSTEM
SOURCECONTAINERDATABASE_GGADMINUSERNAME=C##GGADMIN
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_SERVICENAME=SRCCDB_fra3dd....oraclevcn.com

TARGETDATABASE_OCID=ocid1.database.oc1.eu-frankfurt-1....
TARGETDATABASE_ADMINUSERNAME=SYSTEM
TARGETDATABASE_GGADMINUSERNAME=GGADMIN
TARGETDATABASE_CONNECTIONDETAILS_HOST=tgthost
TARGETDATABASE_CONNECTIONDETAILS_PORT=1521
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=tgtpdb....oraclevcn.com

OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=58:b9:...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1

GOLDENGATEHUB_ADMINUSERNAME=oggadmin
GOLDENGATEHUB_URL=https://ogg19cora....oraclevcn.com
GOLDENGATEHUB_SOURCEDEPLOYMENTNAME=Source
GOLDENGATEHUB_TARGETDEPLOYMENTNAME=Target
GOLDENGATEHUB_COMPUTEID=ocid1.instance.oc1.eu-frankfurt-1....

DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
INCLUDEOBJECTS-1=owner:SH

Perform Evaluation

I am now ready to perform a migration evaluation. It is a dry run of the migration and performs various sanity checks. Nothing is changed during the evaluation:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_online.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -eval

A few comments:

  • sourcenode and targetnode are the host names of the source and target database hosts. Those names must be resolvable, which is why I put them into /etc/hosts already.
  • sourcedb is the DB_UNIQUE_NAME of the source database.
  • srcarg1 is the name of the user that I connect as to the source database host. You might need to change that.
  • srcarg2 and tgtarg2 are the locations of the private key files that I use to connect via SSH.

Next, I am prompted for various passwords, and I also get a job ID.

The migration evaluation is then started, and I can use the job ID to monitor it:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli query job -jobid <job ID>

My colleague Sinan Petrus Toma showed how to loop:

[zdmuser@zdmhost]$ while :; do $ZDM_HOME/bin/zdmcli query job -jobid <job ID>; sleep 10; done

However, I prefer to get more details, so I tail the ZDM log file instead. This little one-liner finds the newest file and tails it:

[zdmuser@zdmhost]$ tail -n 50 -f "`ls -td /u01/app/oracle/chkbase/scheduled/*log | head -1`"

And I get this level of detail:

zdmhost: 2021-05-28T06:25:25.518Z : Executing phase ZDM_PRE_MIGRATION_ADVISOR
zdmhost: 2021-05-28T06:25:31.883Z : Source PDB name : SRCPDB
zdmhost: 2021-05-28T06:25:32.410Z : Running CPAT (Cloud Premigration Advisor Tool) on the source node srchost ...
zdmhost: 2021-05-28T06:25:38.533Z : Premigration advisor output:
Cloud Premigration Advisor Tool Version 21.0.0
Cloud Premigration Advisor Tool completed with overall result: WARNING
Cloud Premigration Advisor Tool generated report location: /u01/app/oracle/zdm/zdm_SRCCDB_fra3dd_6/out/premigration_advisor_report.json

Start Migration

When the evaluation passes, I can start the real migration. I am re-using the same command line, but I have removed the -eval option. Instead – and this is important – I am using -pauseafter to tell ZDM to pause the migration, just before the switchover takes place. Downtime has not started yet. The database is still open for business, but ZDM will copy the data and start the GoldenGate replication:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_online.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -pauseafter ZDM_MONITOR_GG_LAG

Again, I have to input the passwords as in eval mode. Use the job ID to monitor the progress or tail the log file. Also, note down the job ID; I need it later on to resume the migration.

When ZDM completes the phase ZDM_MONITOR_GG_LAG, it will pause and wait. I can verify it with zdmcli query job.

My data has been imported to the target database in OCI, GoldenGate has been configured, and my changes are being replicated. I can monitor the replication by logging on to the GoldenGate hub.

Important: Now, I strongly recommend that you take care of the statistics in your target database:

  1. Gather dictionary statistics (DBMS_STATS.GATHER_DICTIONARY_STATS). After the import, the data dictionary is full of new data – the metadata about your tables and other objects. To ensure a smoothly running database, gather dictionary statistics (see the sketch after this list).
  2. Decide what to do about statistics on user objects, as described in this blog post.
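
A minimal sketch of the first item, run in the target PDB:

alter session set container=tgtpdb;
exec dbms_stats.gather_dictionary_stats;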

Complete Migration

Now it is time to finalize the migration and switch over to the OCI target database. All I need to do is resume the paused ZDM job, using the job ID that was created when I started the migration:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli resume job -jobid <job ID>

ZDM will now ensure that all changes are replicated before switching over to the OCI target database. Again, I use the zdmcli query job command to monitor the progress, and I can tail the log file. After a short while the migration completes.

That’s it. I have now migrated into OCI! And on top of that, my new database is running Oracle Database 19c.

I have a few post-migration tasks to carry out:

  • I prefer to shut down the source database to ensure that no one uses it anymore.
  • ZDM will remove the GoldenGate configuration, but I need to disable the replication in the database: alter system set enable_goldengate_replication=false scope=both;.

Finally, I would recommend that you take a look at Logical Migration and the Final Touches. It has a few pointers that can make your migration even more successful.

Other Blog Posts in This Series

Appendix

A – GoldenGate user privileges

When GoldenGate is replicating changes into the target database, it does so as a regular database user. Thus, it must have privileges to perform DML and DDL on the appropriate schemas. There are a number of ways to do that.

I granted INSERT ANY, UPDATE ANY, and DELETE ANY, which allows me to perform DML in any schema. But that does not cover DDL. If I want to cover that as well, there are a number of similar ANY privileges that I would need to grant, like CREATE ANY TABLE, CREATE ANY INDEX, and so forth, as sketched below.
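
A sketch of what such grants could look like; this is not an exhaustive list, so match it to the DDL you expect:

grant create any table to ggadmin;
grant create any index to ggadmin;
grant alter any table to ggadmin;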

That said, performing DDL statements during the replication is discouraged. By default, ZDM does not configure DDL replication. If you want to replicate DDL, you need to use the response file parameter GOLDENGATESETTINGS_REPLICATEDDL=true.

Other customers prefer an easier solution and grant the PDB_DBA role (or DBA in a non-CDB). Connor McDonald also blogged about a procedure to perform schema grants. If you need inspiration, you can have a look at the GGADMIN user in an Autonomous Database. It has the PDB_DBA role amongst others.

In the end it comes down to preferences and security regulations in your organization.

B – Update Cloud Premigration Advisor Tool

As part of the migration ZDM will use the Cloud Premigration Advisor Tool (CPAT) to check your source database. The version of CPAT that ships with ZDM is not the latest one, so you might be missing out on bug fixes and new recommendations. If you want the latest and greatest recommendations, I suggest that you manually update the CPAT tool in your ZDM home. You can find instructions in the MOS note Cloud Premigration Advisor Tool (CPAT) Analyzes Databases for Suitability of Cloud Migration (Doc ID 2758371.1).

C – Ignore Certain Data Pump Errors

If there is an error in either your Data Pump export or import, then ZDM will error out. But by default, ZDM will ignore the following errors:

  • ORA-31684: Object type string already exists
  • ORA-39111: Dependent object type string skipped, base object type string already exists
  • ORA-39082: Object type string created with compilation warnings

If you know that your data exports or imports with a specific error and you want to ignore it, you can add your own errors. Please get in touch with My Oracle Support for details.

D – RAC Databases

If your source or target database is a RAC database, then you only need access to one of the nodes. Select the node that you want to use and use the host name and SSH keys to that host.

E – Response File

Allow me to put some comments on the values:

Setting Comment
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME Name of the bucket in OCI Object Storage that will be used as a staging area. I recommend using a separate and dedicated bucket for each migration
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME This is your Object Storage Namespace. You find it in the OCI Console, Administration > Tenancy Details
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE For on-prem databases set to number of physical cores x 2. For OCI databases set to number of OCPUs. Standard Edition does not allow parallel, so set to 1
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE Set to the number of OCPUs. Standard Edition does not allow parallel, so set to 1
DATAPUMPSETTINGS_JOBMODE I recommend using SCHEMA. Each schema to import is specified in a separate parameter, INCLUDEOBJECTS-n
GOLDENGATEHUB_ADMINUSERNAME Remember the usernames in GoldenGate are case sensitive
GOLDENGATEHUB_SOURCEDEPLOYMENTNAME The name of the source deployment in GoldenGate hub. If you went with the default value, it should be Source
GOLDENGATEHUB_TARGETDEPLOYMENTNAME The name of the target deployment in GoldenGate hub. If you went with the default value, it should be Target
GOLDENGATEHUB_URL Construct the URL by using the Fully Qualified Domain Name (FQDN), e.g. https://ogg19cora.subx.oraclevcn.com
INCLUDEOBJECTS-n Specify each schema in a separate parameter, example: INCLUDEOBJECTS-1=owner:SH and INCLUDEOBJECTS-2=owner:OE
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT This is the fingerprint of the API keys that you used to configure OCI CLI on the ZDM service host. You can find the fingerprint in the OCI Console. Go to Identity > Users > User Details > API Keys
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE This is the private key file that you used when you configured OCI CLI on the ZDM service host
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_SERVICENAME This is a service that connects directly into CDB$ROOT. I normally use lsnrctl status to find the name of the service
SOURCECONTAINERDATABASE_ These settings apply to my source CDB. If you have a non-CDB, simply remove these settings
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME This is a service that connects directly into the source PDB (or non-CDB). I normally use lsnrctl status to find the name of the service
SOURCEDATABASE_ These settings apply to my source PDB. Or, to your non-CDB
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME This is a service that connects directly into the target PDB. I normally use lsnrctl status to find the name of the service

F – Troubleshooting

If you forget to set global_names=false you might run into this error:

PRGD-1019 : creation of Data Pump job "..." with operation type "IMPORT_ONLINE" failed
PRGD-1016 : stored procedure "DBMS_DATABASE.OPEN" execution as user ...
ORA-20000: Datapump: Unexpected error
ORA-06512: at line 1
ORA-39006: internal error
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 6844
ORA-06512: at line 1
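
The remedy is the parameter setting from the Prepare Source section; set it in the source database before retrying:

alter system set global_names=false scope=both;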

Zero Downtime Migration – Logical Offline Migration to Autonomous Database

Let me show you how you can migrate an Oracle Database into an Autonomous Database. My source database is a PDB running on 12.1.0.2. I want to migrate it directly into an Oracle Autonomous Transaction Processing (ATP) database. I will do an offline migration using just Oracle Data Pump.

Prerequisites

  1. I need a Zero Downtime Migration (ZDM) service host. I have already blogged about how to create one. The computer is called zdmhost and the user is called zdmuser.
  2. My source database is a PDB called srcpdb, the CDB is called srccdb, and the host is named srchost. All my data is in the schema SH.
  3. I have created an ATP database. I name it tgtadb.

Overview of the components in this demo

Prepare Source

My source database is a PDB. If you have a non-CDB, perform all the steps in your non-CDB.

First, set the streams pool to a reasonable size (needed by Data Pump):

alter session set container=CDB$ROOT;
--At least 256M
alter system set streams_pool_size=256M scope=both;

Generate a list of tablespaces. In ADB there is only one tablespace named DATA (unless you have more than 32 TB of data, then you get DATA_2, DATA_3 and so forth). All tablespaces must be re-mapped to that. Add all schemas to the in-list:

alter session set container=SRCPDB;
select distinct tablespace_name
from (
      select distinct tablespace_name from dba_segments where owner in ('SH')
      union
      select distinct default_tablespace from dba_users where username in ('SH')
      union
      select distinct tablespace_name from dba_ts_quotas where dropped = 'NO' and username in ('SH')
      );

I need to create a directory that can be used by Data Pump:

[oracle@srchost]$ mkdir -p /u01/app/oracle/datapump/mydirsrc

Optional. The best starting point for Data Pump is good and accurate dictionary statistics. This helps Data Pump extract the information as fast as possible:

exec dbms_stats.gather_dictionary_stats;

Prepare ZDM

The ZDM service host must be able to resolve the host name of the source database host:

[root@zdmhost]$ echo "<source IP address> srchost" >> /etc/hosts

I put my private SSH key to the source database host into the ~/.ssh directory and ensure permissions are set properly:

[zdmuser@zdmhost]$ cp srchost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/srchost_key_file 

Test the connection. I connect as opc, but you might have a different user. Read more about access to the database host in the documentation:

[zdmuser@zdmhost]$ ssh -i ~/.ssh/srchost_key_file opc@srchost date

Prepare Response File

I will use a template response file as the basis for my response file:

[zdmuser@zdmhost]$ cp $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp ~/logical_offline.rsp
[zdmuser@zdmhost]$ chmod 700 ~/logical_offline.rsp

This is the response file that I end up with (see appendix D for an explanation):

MIGRATION_METHOD=OFFLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS

SOURCEDATABASE_ADMINUSERNAME=SYSTEM
SOURCEDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb....com

TARGETDATABASE_OCID=ocid1.autonomousdatabase.oc1.eu-frankfurt-1....
TARGETDATABASE_ADMINUSERNAME=ADMIN

OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=58:b9:...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1

DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME=oradbclouducm
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME=zdm-staging
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=MYDIRSRC
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/datapump/mydirsrc
DATAPUMPSETTINGS_CREATEAUTHTOKEN=FALSE
DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_SCHEMA,oldValue:SH,newValue:SH2
DATAPUMPSETTINGS_METADATAREMAPS-2=type:REMAP_TABLESPACE,oldValue:USERS,newValue:DATA
DATAPUMPSETTINGS_METADATAREMAPS-3=type:REMAP_TABLESPACE,oldValue:TEST,newValue:DATA
INCLUDEOBJECTS-1=owner:SH

A few comments:

  • I am remapping SH to SH2 because SH already exists in ADB as a common user. I want to import into a new user.
  • Check the appendix for recommendations on setting the parallel degree. Remember, scaling up on CPUs in ADB happens fast and online. If you have a lot of data, you can really benefit from scaling up during import (see the sketch after this list).
  • The list of tablespaces that I created earlier is used to generate the list of tablespace remaps.
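
If you decide to scale up during import, a sketch using the OCI CLI could look like this; the OCID and core count are placeholders:

[zdmuser@zdmhost]$ oci db autonomous-database update \
   --autonomous-database-id ocid1.autonomousdatabase.oc1.... \
   --cpu-core-count 8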

Perform Evaluation

I am now ready to perform a migration evaluation. It is a dry run of the migration and performs various sanity checks. Nothing is changed during the evaluation:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_offline.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -eval

A few comments:

  • sourcenode is the host name of the source database host. The name must be resolvable, which is why I put it into /etc/hosts already.
  • sourcedb is the DB_UNIQUE_NAME of the source database.
  • srcarg1 is the name of the user that I connect as to the source database host. You might need to change that.
  • srcarg2 is the location of the private key file that I use to connect via SSH.

Next, I am prompted for passwords and my auth token. I am also prompted for an encryption password, which is used to encrypt and decrypt the Data Pump dump file; generate a secure password. Finally, I get a job ID.

The migration evaluation is then started, and I can use the job ID to monitor it:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli query job -jobid <job ID>

However, I prefer to get more details, so I tail the ZDM log file instead. This little one-liner finds the newest file and tails it:

[zdmuser@zdmhost]$ tail -n 50 -f "`ls -td /u01/app/oracle/chkbase/scheduled/*log | head -1`"

And I get this level of detail:

zdmhost: 2021-05-28T06:25:25.518Z : Executing phase ZDM_PRE_MIGRATION_ADVISOR
zdmhost: 2021-05-28T06:25:31.883Z : Source PDB name : SRCPDB
zdmhost: 2021-05-28T06:25:32.410Z : Running CPAT (Cloud Premigration Advisor Tool) on the source node srchost ...
zdmhost: 2021-05-28T06:25:38.533Z : Premigration advisor output:
Cloud Premigration Advisor Tool Version 21.0.0
Cloud Premigration Advisor Tool completed with overall result: WARNING
Cloud Premigration Advisor Tool generated report location: /u01/app/oracle/zdm/zdm_SRCCDB_fra3dd_6/out/premigration_advisor_report.json

Start Migration

When the evaluation passes, I can start the real migration. This is an offline migration, so I need to drain the database before I start. Disconnect the users and ensure that they don’t reconnect.

I am re-using the same command line, but I have removed the -eval option:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_offline.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo

Again, I have to input the passwords as in eval mode. Use the job ID to monitor the progress or tail the log file.

When the migration completes, the job status changes to SUCCEEDED.

That’s it. I have now migrated into an Autonomous Database in OCI!

I have a few post-migration tasks to carry out:

  • I prefer to shut down the source database to ensure that no one uses it anymore.
  • Take care of the optimizer statistics as described in this blog post. Since your target is an Autonomous Database, I recommend not moving your statistics preferences and instead simply regathering statistics using DBMS_STATS.GATHER_DATABASE_STATS (see the sketch after this list).
  • Gather dictionary statistics (DBMS_STATS.GATHER_DICTIONARY_STATS). After the import, the data dictionary is full of new data – the metadata about your tables and other objects. To ensure a smoothly running database, gather dictionary statistics.
  • Figure out what to do with the Data Pump dump and log files that are stored in my object storage bucket.
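
A minimal sketch of the statistics tasks mentioned above, run as the ADMIN user in the target ADB (an assumption; adjust to your setup):

exec dbms_stats.gather_dictionary_stats;
exec dbms_stats.gather_database_stats;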

I showed the simple, offline method. However, if you want to use the online method without downtime, have a look at my colleague Sinan’s blog post.

Other Blog Posts in This Series

Appendix

A – Update Cloud Premigration Advisor Tool

As part of the migration ZDM will use the Cloud Premigration Advisor Tool (CPAT) to check your source database. The version of CPAT that ships with ZDM is not the latest one, so you might be missing out on bug fixes and new recommendations. If you want the latest and greatest recommendations, I suggest that you manually update the CPAT tool in your ZDM home. You can find instructions in the MOS note Cloud Premigration Advisor Tool (CPAT) Analyzes Databases for Suitability of Cloud Migration (Doc ID 2758371.1).

B – Ignore Certain Data Pump Errors

If there is an error in either your Data Pump export or import, then ZDM will error out. But by default, ZDM will ignore the following errors:

  • ORA-31684: Object type string already exists
  • ORA-39111: Dependent object type string skipped, base object type string already exists
  • ORA-39082: Object type string created with compilation warnings

If you know that your data exports or imports with a specific error and you want to ignore it, you can add your own errors. The list of ignorable errors is maintained by the ZDM service host, and it applies to all migrations. So, if you add another error for one migration, that error is also ignored in a second migration. Read more about it in the documentation.

C – RAC Databases

If your source database is a RAC database, then you only need access to one of the nodes. Select the node that you want to use and use the host name and SSH key to that host.

D – Response File

Allow me to put some comments on the values:

Setting Comment
DATAPUMPSETTINGS_CREATEAUTHTOKEN I set this to false to re-use an existing auth token. I will be prompted to input the auth token when the migration starts
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME Name of the bucket in OCI Object Storage that will be used as a staging area. I recommend using a separate and dedicated bucket for each migration
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME This is your Object Storage Namespace. You find it in the OCI Console, Administration > Tenancy Details
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE For on-prem databases set to number of physical cores x 2. For OCI databases set to number of OCPUs. Standard Edition does not allow parallel, so set to 1
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE Set to the number of OCPUs
DATAPUMPSETTINGS_JOBMODE I recommend using SCHEMA. Each schema to import is specified in a separate parameter, INCLUDEOBJECTS-n
DATAPUMPSETTINGS_METADATAREMAPS-n You should remap any tablespace into DATA, which is the only one that exists in ADB. If you have a lot of data, you will have DATA_2, DATA_3, and so forth. In addition, I am remapping SH into SH2. SH is already created as a common user in ADB, and I want to import into an empty schema. If you need to specify multiple remaps, increment the number and specify only one remap per parameter
INCLUDEOBJECTS-n Specify each schema in a separate parameter, example: INCLUDEOBJECTS-1=owner:SH and INCLUDEOBJECTS-2=owner:OE
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT This is the fingerprint of the API keys that you used to configure OCI CLI on the ZDM service host. You can find the fingerprint in the OCI Console. Go to Identity > Users > User Details > API Keys
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE This is the private key file that you used when you configured OCI CLI on the ZDM service host
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME This is a service that connects directly into the source PDB (or non-CDB). I normally use lsnrctl status to find the name of the service
SOURCEDATABASE_ These settings apply to my source PDB. Or, to your non-CDB

E – Data Pump Logfile

Check out my Pro Tips for information on how to retrieve the Data Pump log file.

Zero Downtime Migration – Monitor GoldenGate Replication

When you are migrating your Oracle Database with Zero Downtime Migration (ZDM) and the Logical Online method, you are using Oracle GoldenGate to handle the replication. The replication keeps the OCI target database in sync until you can perform the switchover. Thus, it is a vital part of the process. Let’s see how you can monitor the replication.

I have already started a migration which is currently paused at the phase ZDM_MONITOR_GG_LAG.

Monitor Services

Log on to the Oracle GoldenGate Hub via HTTPS. Use the host name, private or public IP address or whatever your network allows.

You are prompted for credentials. Log on as oggadmin – in lowercase. You can find the password in the file ogg-credentials.json on the GoldenGate hub.

Once logged in, ensure that all the services are up and running. Most important are the Administration Server services – on source and target – because they handle the replication. You can start and stop the services from the Action drop-down menu.

By clicking on the port number of each of the services, you can go to the specific console for that service. Again, the Administration Servers are the ones of interest.

Monitor Extract

If you followed the link to the Source Administration Server (by clicking on the port number), you end up at the Administration Server overview.

Ensure that the extract is running. Further down the page you see a list of events. From the Action drop-down menu, you can start and stop the extract, and you can follow the Details link for more information.

You can get a lot of details, like extract statistics. For example, you can see that the extract has caught one insert into the table LOCATIONS.

Monitor Replicat

Now that you have monitored the extract, you should also monitor the replication, which is handled by the replicat process. Basically, you do the same as with the extract, but use the link on the Target Administration Server.

Here you should look at the replicat process and ensure that it is running.

You can start and stop the services and get into the details as described above.

Troubleshooting

Here is an example of an error. I have created a table in my source database that does not exist in the target. This breaks the replication. I can use the list of critical events to figure out what went wrong.

Also, if you go into the details of the replicat process and switch to the report tab, you can find even more details.

You can also find more logs on the GoldenGate host in the directories:

  • /u02/deployments/Source/etc/conf/ogg/
  • /u02/deployments/Target/etc/conf/ogg/

Conclusion

With Oracle GoldenGate Microservices Architecture it is easy to get an overview of the replication process. There are nice graphical overviews and you can even get detailed information and statistics.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Online Migration

You can migrate a database with Zero Downtime Migration (ZDM) using the Logical Online method. The migration will happen with a combination of Data Pump and Oracle GoldenGate. This way even very large databases can be migrated with very little downtime.

In the introduction blog post I covered the basic characteristics of this approach and explained that Oracle GoldenGate can be used at no extra license cost and it is really simple to use.

Migrate your Oracle Database with Zero Downtime Migration ZDM using Data Pump and GoldenGate

With the Logical Online method, you can even target Autonomous Databases – both Oracle Autonomous Transaction Processing (ATP) and Oracle Autonomous Data Warehouse (ADW), shared as well as dedicated. When you target an Autonomous Database, you don’t have access to the underlying database host, so instead of SSH access, ZDM will require SQL*Net connectivity.

Benefits

  • First and most important, you don’t need a license for Oracle GoldenGate to use this approach. Yes, that’s right! You are allowed to use a marketplace edition of Oracle GoldenGate 19c for 183 days to migrate your database with ZDM.

Oracle GoldenGate for Oracle – Database Migrations can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using the following tools: Oracle Zero Downtime Migration and Oracle Cloud Infrastructure Database Migration

  • Even if you don’t have any GoldenGate experience, you can still use this approach. For example, you don’t need to install and configure GoldenGate yourself. The marketplace image creates the GoldenGate installation for you. Afterwards, ZDM configures GoldenGate and will eventually clean up as well.
  • You can migrate directly into a higher release. For example, if your source database is 12.1.0.2, you can migrate into 19c directly. This avoids the extra downtime normally needed to perform an upgrade.
  • You can migrate directly into a PDB. For example, if your source database is a non-CDB, you can migrate directly into a PDB. This avoids the extra downtime normally needed to convert the database into a PDB using the script noncdb_to_pdb.sql.
  • You can use the full power of Data Pump and GoldenGate to transform your data. By default, ZDM will convert any old BasicFile LOBs into SecureFile LOBs during import. But you can apply other transformations as well. You can partition the table in the target database, you can change the tablespace layout, you can apply compression and anything else that is possible with Data Pump.
  • You can migrate into a different character set (Data Pump and GoldenGate restrictions apply).
  • Your Data Pump export is automatically compressed if the source database is Enterprise Edition. Normally, Data Pump compression would require a license for the Advanced Compression Option. But when you migrate with ZDM to OCI, Oracle allows you to use Data Pump compression without having the Advanced Compression Option.
  • Your Data Pump export is automatically encrypted if the source database is Enterprise Edition. Normally, Data Pump encryption would require a license for the Advanced Security Option. But when you migrate with ZDM to OCI, Oracle allows you to use Data Pump encryption without having the Advanced Security Option.
  • You get recommendations on the migration before it is actually started. ZDM includes the Cloud Premigration Advisor Tool (CPAT) which analyzes your database and give you specific recommendations that matches the target database. For example, if you target an Autonomous Database, CPAT will warn if you have unsupported objects.
  • You can migrate a SE2 database into OCI using this approach as well.
  • You can build your target database in advance and configure backup upfront as well. You can test your backup/recovery strategy before you complete the migration.
  • You can build a standby database in advance as well. Your new OCI target database can be protected by Data Guard from the very second you perform the switchover. I would recommend doing the initial Data Pump load first and then building the standby database. This ensures that a minimum of changes has to be applied via redo.

Considerations

  • The time zone file version in the OCI target database must be equal to or higher than that in the source database. Typically, this is not a problem if you provision a brand-new DB System. But if you plan to use an existing CDB in OCI, do check the time zone file upfront:
SQL> select * from v$timezone_file;
  • In My Oracle Support you can find a list of recommended patches to apply on the source database when using Oracle GoldenGate. There is one for 11g and one for 12c and newer. You don’t have to apply those patches, but it is a recommendation. However, if you run into issues, you potentially have to patch your source database.
  • Most databases can be migrated with Oracle GoldenGate. Only very exotic data types are not supported. Be sure to check the documentation.
  • When you use Data Pump to migrate you will lose a lot of the information that is stored in the data dictionary, like
    • AWR
    • SQL Plan Baselines
    • SQL Profiles
    • SQL Patches
  • You can still transfer that information, but you need to use other tools. If this is relevant to you, I suggest that you watch our webinar Performance Stability, Tips and Tricks and Underscores where it is covered in detail.
  • Depending on whether you use a full or schema mode export you need to take care of your public objects afterwards. It could be synonyms or database links.
  • For a huge database you should be prepared for the time it takes to perform the Data Pump export and import. No need to worry too much because GoldenGate will keep track of the changes. But for a 100 TB database you should not start the job the day before the switchover is supposed to take place.
  • It is not recommended to execute DDL statements while GoldenGate is replicating changes. The extract and replicat processes created by GoldenGate do not replicate DDL statements. If you, for example, add a table after the initial Data Pump load, it will not get replicated by GoldenGate. However, GoldenGate is a very powerful tool, and you can configure it to replicate DDL as well. But you will need to adjust the GoldenGate user privileges and change the parameter files. In the documentation you can read more about the possibilities of DDL replication and how to configure it.

Fallback

Using this approach, there is no straightforward way of falling back to the source on-prem database. You could use GoldenGate to reverse the replication when you switch over to the OCI target database. However, it is not configured automatically by ZDM, so this is where you would need to know GoldenGate and do the required configuration yourself.

Conclusion

You get a lot more options when you use the Logical Online method to migrate your database. Especially, the possibility of migrating directly into a higher release or into a PDB is very attractive. Plus, you can use GoldenGate during the migration at no extra license cost.

However, the flexibility comes at a cost. There is a little more configuration, and it is not as straightforward as the Physical Online method. But it is not complicated.

Want to Know More

If you want to know more about using Data Pump and GoldenGate for database migrations, I suggest that you take a look at our webinar Migration Strategies – Insights, Tips and Secrets.

In addition, these links contain additional useful information:

Other Blog Posts in This Series

Zero Downtime Migration – Create GoldenGate Hub

The Logical Online migration method in Zero Downtime Migration (ZDM) uses Oracle GoldenGate to keep the Oracle Database in OCI in sync until you perform the switchover. You need an Oracle GoldenGate Hub to do that and you should use the deployment from the OCI Marketplace. Let’s do it!

  1. First, go to the OCI Marketplace and search for goldengate for oracle. You should use the image called Oracle GoldenGate for Oracle – Database Migrations.

  2. On the application page you can verify that the software is free but you still have to pay for the underlying infrastructure, like the compute instance. In addition, certain terms apply:

Oracle GoldenGate for Oracle – Database Migrations can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using the following tools: Oracle Zero Downtime Migration and Oracle Cloud Infrastructure Database Migration

  3. When you hit the Get App button, you need to log in to OCI. Be sure to select the right region and compartment. Again, you can verify that the software is free – but you pay for the compute instance. Hit Launch Stack.

  4. On the next page, Create Stack – Stack Information, simply give the stack a name. I call it OGG4DEMO.

  5. On the next page, Create Stack – Configure Variables, you fill in additional information like network, your SSH public key, etc. Most important – you also need to specify the Oracle Database versions of the source and target databases, and whether the target database is an Autonomous Database. Don’t worry about the deployment names; they are good as they are. Even if the target attributes are listed as optional, fill them out.

  6. Review the settings and hit Create. The stack is now being deployed and will be ready very soon.

  7. Now, head over to Compute > Instances to find the newly created compute instance with Oracle GoldenGate already installed. Depending on your network settings, use either the public or private IP address or the computer name (DNS).

  8. Use the IP address (or computer name) and your SSH key to connect to the server as opc. There is a file called ogg-credentials.json which contains the username and password needed to connect to the Oracle GoldenGate hub.

  9. Now you can connect via HTTPS (https://193.122.52.34) to the Oracle GoldenGate hub. Use the credentials from ogg-credentials.json and log in. That’s it – your very own Oracle GoldenGate deployment.

Other Blog Posts in This Series

Appendix

A. The very last thing is to apply the latest patches to Oracle GoldenGate. But that will be a topic for a future post.

B. When you connect to Oracle GoldenGate via HTTPS, you will probably get a security warning in your browser. The initial deployment comes with a self-signed certificate which alerts many browsers. Follow the procedures in your organization to deploy a proper certificate. Read more about Securing the Microservices Architecture.

C. The credentials for Oracle GoldenGate are case sensitive. It applies to the password but also the username (oggadmin, not OGGADMIN).

Zero Downtime Migration – The Pro Tips

Here are my pro tips. It is a little mix and match of all my notes that didn’t make it into the previous blog posts but are still too good to leave out.

Pro Tip 1: Log Files

If something goes south, where can you find the log files? On the ZDM service host:

  • $ZDM_BASE/chkbase/scheduled
  • $ZDM_BASE/crsdata/<hostname>/rhp

On the source and target hosts you can also find additional log files containing all the commands that are executed by ZDM:

  • $ORACLE_BASE/zdm/zdm_<db_unique_name>_<zdm_job_id>/zdm/log

Other sources:

  • Alert log
  • Data Pump process trace file (DM00)
  • Data Pump log file, found in:
    • The directory referenced by the directory object
    • $ORACLE_HOME/rdbms/log/<PDB GUID>
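
For example, to list the job-specific log directories on the source host, a sketch; the directory name embeds the DB_UNIQUE_NAME and the ZDM job ID:

[oracle@srchost]$ ls -ltr $ORACLE_BASE/zdm/zdm_*/zdm/log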

Pro Tip 2: Troubleshooting

When you are troubleshooting, it is sometimes useful to get rid of all the log files and have ZDM start all over. Some of the log files get really big and are hard to read, so I usually stop the ZDM service, delete all the log files, and restart ZDM and my troubleshooting. But only do this if there are no other jobs running than the one you are troubleshooting:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmservice stop
[zdmuser@zdmhost]$ rm $ZDM_BASE/crsdata/*/rhp/rhpserver.log*
[zdmuser@zdmhost]$ rm $ZDM_BASE/chkbase/scheduled/*
[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmservice start

There are also several chapters on troubleshooting in the documentation.

Pro Tip 3: Aborting A Job

Sometimes it is useful to completely restart a migration. If a database migration is already registered in ZDM, you are not allowed to specify another migration job. First, you have to abort the existing job before you can enter a new migration job:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli abort job -jobid n

Now, you can use zdmcli migrate database command again.

Pro Tip 4: Show All Phases

A ZDM migration is split into phases, and you can have ZDM pause after each of the phases. The documentation has a list of all phases but you can also get it directly from the ZDM tool itself for a specific migration job:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp ~/migrate.rsp \
   ... \
   ... \
   ... \
   -listphases

Pro Tip 5: Adding Custom Scripts

You can add your own custom scripts to run before or after a phase in the migration job. You can use the -listphases command (described above) to get a list of all the phases. Then decide whether your script should run before or after that phase. This is called an action plug-in. You can bundle those together in a template to make it easier to re-use. If this is something you need, you should dive into the documentation.

If you target an Autonomous Database, you are not allowed to execute scripts on the target database host. Instead, you can run .sql scripts.

The environment in which the script starts has some environment variables that you can use (see the sketch after this list):

  • Database (ZDM_SRCDB)
  • Oracle Home (ZDM_SRCDBHOME)
  • ZDM Phase (RHP_PHASE)
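
A hypothetical action plug-in could use them like this sketch; the log file location is my own choice:

#!/bin/sh
# Hypothetical ZDM action plug-in: record which database and phase triggered it
echo "db=${ZDM_SRCDB} home=${ZDM_SRCDBHOME} phase=${RHP_PHASE}" >> /tmp/zdm_custom_action.log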

Pro Tip 6: GoldenGate Health Check

You can use the healthcheck script on the source and target databases – where the extract and replicat processes run. It will give you invaluable information for your troubleshooting, and it is a good idea to run a health check and attach it if you need to contact My Oracle Support. It is like an AWR report but with information specific to Oracle GoldenGate replication.

Generate the report by:

  • Installing the objects in the database: ogghc_install.sql
  • Executing the health check: ogghc_run.sql
  • Optionally, cleaning up the objects: ogghc_uninstall.sql

For GoldenGate Microservices Architecture, find the scripts on the GoldenGate hub:

  • /u01/app/ogg/oraclenn/lib/sql/healthcheck

Run the scripts in the source and target databases.
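
In SQL*Plus, running them could look like this sketch; connect as a suitably privileged user in the database where the extract or replicat runs:

SQL> @ogghc_install.sql
SQL> @ogghc_run.sql
SQL> @ogghc_uninstall.sql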

Pro Tip 7: Convert From Single Instance To RAC

A useful feature of ZDM is that it can convert a single instance database to a RAC database in OCI. And it is super simple to do that. The only thing you have to do is to create the target placeholder database as a RAC database. ZDM will detect that and take care of the rest.

Finally, let me mention that if the source database is RAC One Node or RAC, then the target database must be a RAC database. Be sure to create the target placeholder database as RAC.

Pro Tip 8: Get Data Pump Log File in Autonomous Database

When you are importing into Autonomous Database, the Data Pump log file is stored in the directory DATA_PUMP_DIR. But in Autonomous Database you don’t have access to the underlying file system, so how do you get the log file? One approach is to upload the log file into Object Storage.

  1. ZDM will create a set of credentials as part of the migration workflow. Find the name of the credentials (or create new ones using DBMS_CLOUD):
select owner, credential_name, username, enabled from dba_credentials;
  2. Find the name of the Data Pump log file:
select * from dbms_cloud.list_files('DATA_PUMP_DIR');
  3. Upload it. If you need help generating the URI, check the documentation:
begin
    DBMS_CLOUD.PUT_OBJECT (
       credential_name => '<your credential>',
       object_uri      => 'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/',
       directory_name  => 'DATA_PUMP_DIR',
       file_name       => '<file name>');
end;
/
  4. Use the OCI Console to download the Data Pump log file.

Other Blog Posts in This Series

Zero Downtime Migration – Install And Configure ZDM

To use Zero Downtime Migration (ZDM) I must install a Zero Downtime Migration service host. It is the piece of software that will control the entire process of migrating my database into Oracle Cloud Infrastructure (OCI). This post describes the process for ZDM version 21. The requirements are:

  • Must be running Oracle Linux 7 or newer.
  • 100 GB disk space according to the documentation. I could do with way less – basically there should be a few GBs for the binaries and then space for configuration and log files.
  • SSH access (port 22) to each of the database hosts.
  • Recommended to install it on a separate server (although technically possible to use one of the database hosts).

Create and configure server

In my example, I will install the ZDM service host on a compute instance in OCI. There are no specific requirements for CPU or memory – ZDM only acts as a coordinator and all the work is done by the database hosts – so I can use the smallest compute shape available. I was inspired to use the OCI CLI after reading a blog post by Michał. I will use that approach to create the compute instance, but I could just as well use the web interface or REST APIs.

First, I will define a few variables that you have to adjust to your needs. DISPLAYNAME is the hostname of my compute instance – and also the name I see in the OCI webpage. AVAILDOM is the availability domain into which the compute instance is created. SHAPE is the compute shape:

DISPLAYNAME=zdm
AVAILDOM=OUGC:EU-FRANKFURT-1-AD-1
SHAPE=VM.Standard2.1

When I create a compute instance using the webpage, these are the corresponding values:

[Screenshot of the OCI webpage showing display name, availability domain and shape]

In addition, I will define the OCID of my compartment, and also the OCID of the subnet that I will use. I am making sure to select a subnet that I can reach via SSH from my own computer. Last, I have the public key file:

COMPARTMENTID="..."
SUBNETID="..."
PUBKEYFILE="/path/to/key-file.pub"

Because I want to use the latest Oracle Linux 7 image I will query for the OCID of that and store it in a variable:

IMAGEID=`oci compute image list \
   --compartment-id $COMPARTMENTID \
   --operating-system "Oracle Linux" \
   --sort-by TIMECREATED \
   --query "data[?contains(\"display-name\", 'GPU')==\\\`false\\\` && contains(\"display-name\", 'Oracle-Linux-7')==\\\`true\\\`].{DisplayName:\"display-name\", OCID:\"id\"} | [0]" \
   | grep OCID \
   | awk -F'[\"|\"]' '{print $4}'`

And now I can create the compute instance:

oci compute instance launch \
 --compartment-id $COMPARTMENTID \
 --display-name $DISPLAYNAME \
 --availability-domain $AVAILDOM \
 --subnet-id $SUBNETID \
 --image-id $IMAGEID \
 --shape $SHAPE \
 --ssh-authorized-keys-file $PUBKEYFILE \
 --wait-for-state RUNNING

The command will wait until the compute instance is up and running because I used the --wait-for-state RUNNING option. Now, I can get the public IP address so I can connect to the instance:

VMID=`oci compute instance list \
  --compartment-id $COMPARTMENTID \
  --display-name $DISPLAYNAME \
  --lifecycle-state RUNNING \
  | grep \"id\" \
  | awk -F'[\"|\"]' '{print $4}'`
oci compute instance list-vnics \
 --instance-id $VMID \
 | grep public-ip \
 | awk -F'[\"|\"]' '{print $4}'

Prepare Host

The installation process is described in the documentation which you should visit to get the latest changes. Log on to the ZDM service host and install required packages (python36 is needed for OCI CLI):

[root@zdm]$ yum -y install \
  glibc-devel \
  expect \
  unzip \
  libaio \
  kernel-uek-devel-$(uname -r) \
  python36

Create a group (zdm) and a user (zdmuser). This is only needed on the ZDM service host. You don’t need this user on the database hosts:

[root@zdm]$ groupadd zdm ; useradd -g zdm zdmuser

Make it possible to SSH to the box as zdmuser. I will just reuse the SSH keys from opc:

[root@zdm]$ cp -r /home/opc/.ssh /home/zdmuser/.ssh ; chown -R zdmuser:zdm /home/zdmuser/.ssh

Create directory for Oracle software and change permissions:

[root@zdm]$ mkdir /u01 ; chown zdmuser:zdm /u01

Edit the hosts file and ensure name resolution works to the source host (srchost) and the target host (tgthost):

[root@zdm]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@zdm]$ echo -e "[ip address] tgthost" >> /etc/hosts
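A quick check that both names now resolve:

[root@zdm]$ getent hosts srchost tgthost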

Install And Configure ZDM

Now, to install ZDM I will log on as zdmuser and set the environment in my .bashrc file:

[zdmuser@zdm]$ echo "INVENTORY_LOCATION=/u01/app/oraInventory; export INVENTORY_LOCATION" >> ~/.bashrc
[zdmuser@zdm]$ echo "ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_BASE=\$ORACLE_BASE; export ZDM_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_HOME=\$ZDM_BASE/zdm21; export ZDM_HOME" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_INSTALL_LOC=/u01/zdm21-inst; export ZDM_INSTALL_LOC" >> ~/.bashrc
[zdmuser@zdm]$ source ~/.bashrc

Create the directories:

[zdmuser@zdm]$ mkdir -p $ORACLE_BASE $ZDM_BASE $ZDM_HOME $ZDM_INSTALL_LOC

Next, download the ZDM software into $ZDM_INSTALL_LOC.
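The archive contains the installer script and the ZDM home. Extract it first – a sketch, assuming the downloaded file is named zdm21.zip (your file name may differ):

[zdmuser@zdm]$ cd $ZDM_INSTALL_LOC
[zdmuser@zdm]$ unzip zdm21.zip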

Once extracted, start the installation:

[zdmuser@zdm]$ ./zdminstall.sh setup \
  oraclehome=$ZDM_HOME \
  oraclebase=$ZDM_BASE \
  ziploc=./zdm_home.zip -zdm

It should look similar to this:

[Screenshot of a successful ZDM installation]

Start the ZDM service:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice start

Which should produce something like this:

[Sample output from starting the ZDM service (jwcctl debug jwc)]

Optionally, I can verify the status of the ZDM service:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice status

The ZDM service is running

Install OCI CLI

You might need OCI CLI as part of the migration. It is simple to install, so I always do it:

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
oci setup config

You find further instructions in the OCI CLI documentation.

Configure Network Connectivity

The ZDM service host must communicate with the source and target hosts via SSH. For that purpose, I need a private key file for each of the hosts. The private key files must be without a passphrase, in RSA/PEM format, and I have to put them at /home/zdmuser/.ssh/[host name]. In my demo, the files are to be named:

  • /home/zdmuser/.ssh/srchost
  • /home/zdmuser/.ssh/tgthost

Ensure that only zdmuser can read them:

[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/srchost
[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/tgthost

Now, I will verify the connection. In my example I will connect to opc on both database hosts, but you can change it if you like:

[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/srchost opc@srchost
[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/tgthost opc@tgthost

If you get an error when connecting ensure the following:

  • The public key is added to /home/opc/.ssh/authorized_keys on both database hosts (change opc if you are connecting as another user)
  • The key files are in RSA/PEM format (the private key file should start with -----BEGIN RSA PRIVATE KEY-----) – see the conversion sketch after this list
  • The key files are without a passphrase
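If your private key is in the newer OpenSSH format (it starts with -----BEGIN OPENSSH PRIVATE KEY-----), you can convert it to PEM in place. A sketch – note that ssh-keygen rewrites the key file, so keep a backup; pressing Enter for an empty passphrase keeps the key unprotected, as required:

[zdmuser@zdm]$ ssh-keygen -p -m PEM -f /home/zdmuser/.ssh/srchost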

That’s It

Now, I have a working ZDM service host. I am ready to start the migration process.

It is probably also a good idea to find a way to start the ZDM service automatically if the server restarts.
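One way of doing that is a small systemd unit. Here is a minimal sketch, assuming the ZDM home from the installation above (/u01/app/oracle/zdm21) – treat it as a starting point, not a definitive setup:

[root@zdm]$ cat > /etc/systemd/system/zdmservice.service <<'EOF'
[Unit]
Description=Oracle Zero Downtime Migration service
After=network.target

[Service]
# zdmservice start returns once the service is spawned, hence oneshot
Type=oneshot
RemainAfterExit=yes
User=zdmuser
ExecStart=/u01/app/oracle/zdm21/bin/zdmservice start
ExecStop=/u01/app/oracle/zdm21/bin/zdmservice stop

[Install]
WantedBy=multi-user.target
EOF
[root@zdm]$ systemctl daemon-reload
[root@zdm]$ systemctl enable --now zdmservice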

There is also a community marketplace image that comes with ZDM already installed. Evaluate it and see if it is something for you.


Zero Downtime Migration

When you need to migrate into OCI we have a cool – and free – tool that you can use: Zero Downtime Migration (ZDM).

[Illustration: You can use Zero Downtime Migration to easily migrate to Oracle Cloud Infrastructure (OCI)]

In short: ZDM builds a copy of your database in OCI. It keeps the OCI database in sync with the on-prem database until you are ready to complete the migration. Then connections are simply switched over to the OCI database. ZDM will take care of all the steps and ensure nothing is lost in the process.

In this blog post series, I will take you through the entire process using version 21 (the latest at time of writing). In the end you will know all there is to know – and you can start migrating your databases into OCI.

Source Database

Your source database can be located:

  • On-prem
  • OCI Classic (you know, our old cloud)
  • OCI (the new cloud, useful when you want to migrate between regions or locations or between system types)

Requirements:

  • The source database must be running 11.2.0.4 or newer
  • Source platform must be Linux

Options

You can migrate your database in two ways – each of them in an online and offline manner:

  • Physical Online – The entire database is migrated by restoring a backup of the database and instantiating it as a standby database. The standby database is kept in sync with redo apply. It is online because the only downtime needed is to perform a regular switchover.
  • Physical Offline – The entire database is migrated by restoring a backup of the database. It is offline because there is downtime while the backup is created, transferred and restored in OCI.
  • Logical Online – One or more schemas are migrated using Data Pump. In addition, Oracle GoldenGate is used to keep the OCI database in sync to avoid downtime. It is online because the only downtime needed is to switch over the users to the new database.
  • Logical Offline – Like the online option but without Oracle GoldenGate on top. It is offline because the database is unavailable during the export and import operation.
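Regardless of the option you choose, you drive the migration from the ZDM service host with the zdmcli command. To give you a flavor, here is a sketch of how a migration could be started – host and database names are the placeholders used throughout this series, and the response file (-rsp) holds the option-specific settings:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -sourcedb srccdb \
   -sourcenode srchost \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost \
   -rsp /home/zdmuser/migration.rsp

Adding -eval to the command performs all the prechecks without starting the actual migration – always do that first.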

Free? Easy?

I know what you are thinking right now. First, you say it is a free tool, and now you mention Oracle GoldenGate. We don’t have a license for Oracle GoldenGate. Possibly, you are also thinking that Oracle GoldenGate is complex. Let me address that:

Free

If you migrate your database into OCI (currently ExaCC and Exadata on-prem are excluded), you can use Oracle GoldenGate (for migration purposes only) without paying a separate license – provided you use the OCI Marketplace image. It says:

Oracle GoldenGate for Oracle – Database Migrations can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using the following tools:

  • Oracle Zero Downtime Migration
  • Oracle Cloud Infrastructure Database Migration

So no license for Oracle GoldenGate to handle the migration. You still have to pay for the underlying compute instance – but no license.

Complex

To install Oracle GoldenGate, simply follow the wizard to deploy your installation. It is basically just click-click-click. Afterwards, you need to configure the extract and replicat processes, but ZDM takes care of that for you. If all goes well, you won’t even have to log into the Oracle GoldenGate Hub.

Yes, Oracle GoldenGate is a complex product, but for this purpose you should not worry. I promise you. It is super easy.

Comparison

|                                                        | Physical Online | Physical Offline | Logical Online | Logical Offline |
| TARGET PLATFORMS                                       |                 |                  |                |                 |
| ATP-D                                                  |                 |                  | x              | x               |
| ATP-S                                                  |                 |                  | x              | x               |
| ADW-D                                                  |                 |                  | x              | x               |
| ADW-S                                                  |                 |                  | x              | x               |
| ExaCS                                                  | x               | x                | x              | x               |
| ExaCC                                                  | x               | x                | x              | x               |
| Exadata (on-prem)                                      | x               | x                | x              | x               |
| Bare Metal DBCS                                        | x               | x                | x              | x               |
| Virtual Machine DBCS                                   | x               | x                | x              | x               |
| RELEASE AND EDITIONS                                   |                 |                  |                |                 |
| SE2                                                    |                 | x                | x              | x               |
| EE                                                     | x               | x                | x              | x               |
| Migrate from SE2 to EE                                 |                 |                  | x              | x               |
| Migrate to same version                                | x               | x                | x              | x               |
| Migrate to same version, higher patch level            | x               | x                | x              | x               |
| Migrate to higher version (see note 1)                 |                 |                  | x              | x               |
| ARCHITECTURE                                           |                 |                  |                |                 |
| Non-CDB                                                | x               | x                | x              | x               |
| CDB (see note 2)                                       | x               | x                | x              | x               |
| Migrate into PDB – no extra downtime                   |                 |                  | x              | x               |
| Migrate into PDB – with extra downtime (see note 3)    | x               | x                |                |                 |
| Single instance                                        | x               | x                | x              | x               |
| RAC One Node (see note 4)                              | x               | x                | x              | x               |
| RAC                                                    | x               | x                | x              | x               |
| Migrate from single instance to RAC                    | x               | x                | x              | x               |
| Migrate from RAC to single instance                    |                 |                  | x              | x               |
| ENCRYPTION                                             |                 |                  |                |                 |
| Unencrypted (see note 5)                               | x               | x                | x              | x               |
| Encrypted                                              | x               | x                | x              | x               |
| Encrypt data-in-transit during migration (see note 6)  | x               | x                | x              | x               |

Note 1: When doing physical migrations, you can’t migrate directly into a higher release. However, you are free to upgrade the database afterwards. But that will incur additional downtime.

Note 2: When migrating databases using the physical option, all PDBs are migrated. If you use the logical approach, you migrate each PDB individually, and you can choose which ones you want.

Note 3: When you migrate databases using the physical option, you can optionally convert the database into a PDB afterwards. However, that will incur additional downtime while the noncdb_to_pdb.sql script is executed.

Note 4: RAC One Node databases are always migrated into RAC.

Note 5: All databases must be encrypted in OCI. An unencrypted database is always encrypted when it is created in OCI.

Note 6: A combination of techniques is in play here (dump file encryption, SQL*Net encryption, HTTPS, SSH/rsync), and it depends a lot on how you choose to carry out the migration. As an example, when doing a physical online migration of an unencrypted source database, the data files are encrypted on-the-fly as they are created in OCI. The initial backup of the source database is sent over an encrypted connection to OCI Object Storage, and redo is transferred over encrypted SQL*Net.

Network Connectivity

  • The ZDM service host needs SSH access (22) to the source database host. Unless the target database is an Autonomous Database, it also needs SSH access to the target database host.
  • If you plan on using OCI Object Storage as a staging area, the source database host needs access to OCI Object Storage over HTTPS (443). Unless the target database is an Autonomous Database, the same applies to the target database host.
  • A SQL*Net connection (1521) is needed between the two database hosts.
  • If you will be using Oracle GoldenGate as well, you need a SQL*Net connection (1521) from the GoldenGate Hub to the source and target databases. In addition, you need HTTPS (443) from the ZDM service host to the GoldenGate Hub – a few quick connectivity checks are sketched below.

And then there are all the special cases with proxy and Bastion host which I will not cover here.
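Before starting a migration, I like to verify that the ports listed above are actually open. A quick sketch using standard tools (assuming nc from the nmap-ncat package is installed; host names and region are placeholders):

[zdmuser@zdmhost]$ nc -zv srchost 22
[zdmuser@zdmhost]$ nc -zv tgthost 22
[oracle@srchost]$ nc -zv tgthost 1521
[oracle@srchost]$ curl -sI https://objectstorage.<region>.oraclecloud.com | head -1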


Other Blog Posts in This Series

This is the introduction blog post in this series. Over the coming days, the other blog posts will follow. Stay tuned!