Zero Downtime Migration – Logical Online Migration to DBCS

Let me show you how you can migrate an Oracle Database into OCI. My source database is a PDB running on 12.1.0.2. I want to migrate it directly into a 19c CDB. I will show you:

  • import via dump file and object storage
  • import via database link

This procedure can be used to migrate to:

  • VM DB Systems
  • Bare Metal DB Systems
  • Exadata DB System (ExaCS)

Prerequisites

  1. I need a ZDM service host. I have already blogged about how to create one. The computer is called zdmhost and the user is called zdmuser.
  2. My source database is a PDB called srcpdb, the CDB is called srccdb, and the host is named srchost. All my data is in the schema SH.
  3. I have already created a GoldenGate hub as described in this blog post. It is called ogg19cora.
  4. I want to migrate to a VM DB System, and it is already created. The host is named tgthost, the CDB is named tgtcdb, and the PDB is named tgtpdb. You can name them whatever you want. Be sure to use ASM as your storage management software.

Overview of the components in this demo

Prepare Source

My source database is a PDB. I need to prepare the PDB and CDB$ROOT. If you have a non-CDB, perform all the steps in your non-CDB unless otherwise noted.

First, I prepare the database for GoldenGate replication:

alter session set container=CDB$ROOT;
alter database force logging;
alter database add supplemental log data;
alter system set enable_goldengate_replication=true scope=both;
--At least 2GB
alter system set streams_pool_size=2g scope=both;
alter system set global_names=false;
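
To verify that the settings took effect, I can query v$database; a minimal sketch:

select force_logging, supplemental_log_data_min from v$database;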

Create a common user for GoldenGate in CDB$ROOT. Skip this step for non-CDBs:

alter session set container=CDB$ROOT;
create user c##ggadmin identified by <my_secret_pwd> default tablespace users temporary tablespace temp;
grant connect, resource to c##ggadmin;
grant unlimited tablespace to c##ggadmin;
grant select any dictionary to c##ggadmin;
grant create view to c##ggadmin;
grant execute on dbms_lock to c##ggadmin;
exec dbms_goldengate_auth.grant_admin_privilege('c##ggadmin',container=>'all');

Create a user for GoldenGate:

alter session set container=SRCPDB;
create user ggadmin identified by <my_secret_pwd> default tablespace users temporary tablespace temp;
grant connect, resource to ggadmin;
grant unlimited tablespace to ggadmin;
grant select any dictionary to ggadmin;
grant create view to ggadmin;
grant execute on dbms_lock to ggadmin;
exec dbms_goldengate_auth.grant_admin_privilege('ggadmin');

Generate a list of tablespaces that must exist in the target database. Add all schemas to the in-list:

alter session set container=SRCPDB;
select distinct tablespace_name
from (
      select distinct tablespace_name from dba_segments where owner in ('SH')
      union
      select distinct default_tablespace from dba_users where username in ('SH')
      union
      select distinct tablespace_name from dba_ts_quotas where dropped = 'NO' and username in ('SH')
      );

Optional. The best starting point for Data Pump is good and accurate dictionary statistics. This helps Data Pump extract the information as fast as possible:

exec dbms_stats.gather_dictionary_stats;

Via Dump File

I need to create a directory that can be used by Data Pump:

[oracle@srchost]$ mkdir -p /u01/app/oracle/datapump/mydirsrc

Prepare Target

The target database must use a timezone file version that is equal to or higher than the source:

alter session set container=tgtpdb;
select * from v$timezone_file;
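
For comparison, run the same check in the source PDB; the target version must be equal to or higher. A minimal sketch, using the container name from my demo:

alter session set container=SRCPDB;
select * from v$timezone_file;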

Prepare the database for GoldenGate replication:

alter session set container=CDB$ROOT;
alter system set enable_goldengate_replication=true scope=both;

Create a user for GoldenGate:

alter session set container=tgtpdb;
create user ggadmin identified by <my_secret_pwd> default tablespace users temporary tablespace temp;
grant connect, resource to ggadmin;
grant unlimited tablespace to ggadmin;
grant select any dictionary to ggadmin;
grant create view to ggadmin;
grant execute on dbms_lock to ggadmin;
exec dbms_goldengate_auth.grant_admin_privilege('ggadmin');

Allow the GoldenGate user to perform DML (see appendix A):

alter session set container=tgtpdb;
grant insert any table to ggadmin;
grant update any table to ggadmin;
grant delete any table to ggadmin;

I ensure that the tablespaces I found in the source database exist in the target database. If some are missing, I create them. Also, I extend the data files in advance, so the size is pretty close to the expected end size. For new tablespaces I set the initial size of the data files. If the data files are too small, I waste a lot of time during import on auto-extension. If I make them too big, I potentially waste space:

alter database datafile '...' resize 100G;
create tablespace ... size 100G ...;

Via Dump File

I need to create a directory that can be used by Data Pump. It will hold the Data Pump dump files and logs:

[oracle@tgthost]$ mkdir -p /u01/app/oracle/datapump/mydirtgt

Via DB Link

When I use a DB link, ZDM insists on using the built-in directory called DATA_PUMP_DIR. I ensure that the database directory exists. Also, I verify that the directory exists in the file system:

select directory_path from dba_directories where directory_name='DATA_PUMP_DIR';
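
If the directory object is missing, it can be re-created. A minimal sketch – the path below is only an example and must point to an existing directory on the target host:

--example path, adjust to your environment
create or replace directory DATA_PUMP_DIR as '/u01/app/oracle/datapump/dpdir';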

I will do the import directly over a database link. In that case, the target host must be able to resolve the host name of my source database host:

[root@tgthost]$ echo "<source IP address> srchost" >> /etc/hosts

Prepare ZDM

The ZDM service host must be able to resolve the host names of the source and target database host and the GoldenGate hub:

[root@zdmhost]$ echo "<source IP address> srchost" >> /etc/hosts
[root@zdmhost]$ echo "<target IP address> tgthost" >> /etc/hosts
[root@zdmhost]$ echo "<GoldenGate IP address> ogg19cora" >> /etc/hosts

I put my private SSH keys to the source and target database hosts into the ~/.ssh directory. Ensure permissions are set properly:

[zdmuser@zdmhost]$ cp srchost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/srchost_key_file 
[zdmuser@zdmhost]$ cp tgthost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/tgthost_key_file 

Test the connection. I connect as opc, but you might have a different user. Read more about access to the database host in the documentation:

[zdmuser@zdmhost]$ ssh -i ~/.ssh/srchost_key_file opc@srchost date
[zdmuser@zdmhost]$ ssh -i ~/.ssh/tgthost_key_file opc@tgthost date

If you have configured a proper certificate on your GoldenGate hub, you can jump to the next chapter. If not, read on.

When deployed, the GoldenGate hub comes with a self-signed certificate. ZDM will complain about it, because it is considered insecure. For migrations that involve real data, you should use a proper certificate on your GoldenGate hub to ensure your migration is secure. But for tests and demos you might want to disregard the security warning. If so, you can add the self-signed certificate as a trusted one to the Java JDK’s certificate store ($ZDM_HOME/jdk/jre/lib/security/cacerts). You find instructions on how to do so in the MOS support note Zero Downtime Migration – GoldenGate Hub Certificate Known Issues (Doc ID 2768483.1). Use the same host name that you added to /etc/hosts.

For your reference, this is the error that ZDM will throw if the certificate is not trusted:

Verifying status of Oracle GoldenGate Microservices at URL "https://..."
PRGZ-1136 : failed to verify configuration and status of Oracle GoldenGate Microservices at URL "https://..."
PRGG-1008 : failed to retrieve detailed information for the health of the Service Manager on Oracle GoldenGate hub "https://..."
PRGG-1001 : HTTP GET request "https://.../services/v2/config/health" failed.
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
unable to find valid certification path to requested target

Prepare Response File

I will use a template response file as the basis for my response file:

[zdmuser@zdmhost]$ cp $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp ~/logical_online.rsp
[zdmuser@zdmhost]$ chmod 700 ~/logical_online.rsp

Via Dump Files

This is the response file that I end up with (see appendix E for an explanation):

MIGRATION_METHOD=ONLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS

SOURCEDATABASE_ADMINUSERNAME=SYSTEM
SOURCEDATABASE_GGADMINUSERNAME=GGADMIN
SOURCEDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb....oraclevcn.com
SOURCECONTAINERDATABASE_ADMINUSERNAME=SYSTEM
SOURCECONTAINERDATABASE_GGADMINUSERNAME=C##GGADMIN
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_SERVICENAME=SRCCDB_fra3dd....oraclevcn.com

TARGETDATABASE_OCID=ocid1.database.oc1.eu-frankfurt-1....
TARGETDATABASE_ADMINUSERNAME=SYSTEM
TARGETDATABASE_GGADMINUSERNAME=GGADMIN
TARGETDATABASE_CONNECTIONDETAILS_HOST=tgthost
TARGETDATABASE_CONNECTIONDETAILS_PORT=1521
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=tgtpdb....oraclevcn.com

OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=58:b9:...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1

GOLDENGATEHUB_ADMINUSERNAME=oggadmin
GOLDENGATEHUB_URL=https://ogg19cora....oraclevcn.com
GOLDENGATEHUB_SOURCEDEPLOYMENTNAME=Source
GOLDENGATEHUB_TARGETDEPLOYMENTNAME=Target
GOLDENGATEHUB_COMPUTEID=ocid1.instance.oc1.eu-frankfurt-1....

DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=MYDIRTGT
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/datapump/mydirtgt
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=MYDIRSRC
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/datapump/mydirsrc
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME=zdm-staging
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME=oradbclouducm
INCLUDEOBJECTS-1=owner:SH

Via DB Link

This is the response file that I end up with (see appendix E for an explanation):

MIGRATION_METHOD=ONLINE_LOGICAL
DATA_TRANSFER_MEDIUM=DBLINK

SOURCEDATABASE_ADMINUSERNAME=SYSTEM
SOURCEDATABASE_GGADMINUSERNAME=GGADMIN
SOURCEDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb....oraclevcn.com
SOURCECONTAINERDATABASE_ADMINUSERNAME=SYSTEM
SOURCECONTAINERDATABASE_GGADMINUSERNAME=C##GGADMIN
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_SERVICENAME=SRCCDB_fra3dd....oraclevcn.com

TARGETDATABASE_OCID=ocid1.database.oc1.eu-frankfurt-1....
TARGETDATABASE_ADMINUSERNAME=SYSTEM
TARGETDATABASE_GGADMINUSERNAME=GGADMIN
TARGETDATABASE_CONNECTIONDETAILS_HOST=tgthost
TARGETDATABASE_CONNECTIONDETAILS_PORT=1521
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=tgtpdb....oraclevcn.com

OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=58:b9:...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1

GOLDENGATEHUB_ADMINUSERNAME=oggadmin
GOLDENGATEHUB_URL=https://ogg19cora....oraclevcn.com
GOLDENGATEHUB_SOURCEDEPLOYMENTNAME=Source
GOLDENGATEHUB_TARGETDEPLOYMENTNAME=Target
GOLDENGATEHUB_COMPUTEID=ocid1.instance.oc1.eu-frankfurt-1....

DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
INCLUDEOBJECTS-1=owner:SH

Perform Evaluation

I am now ready to perform a migration evaluation. It is a dry run of the migration and performs various sanity checks. Nothing is changed during the evaluation:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_online.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -eval

A few comments:

  • sourcenode and targetnode are the host names of the source and target database hosts. Those names must be resolvable, which is why I put them into /etc/hosts already.
  • sourcedb is the DB_UNIQUE_NAME of the source database.
  • srcarg1 is the name of the user that I connect as to the source database host. You might need to change that.
  • srcarg2 and tgtarg2 are the locations of the private key files that I use to connect via SSH.

Next, I am prompted for various passwords, and I also get a job ID.

The migration evaluation is then started, and I can use the job ID to monitor it:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli query job -jobid <job ID>

Output from zdmcli query job command

My colleague Sinan Petrus Toma showed how to loop:

[zdmuser@zdmhost]$ while :; do $ZDM_HOME/bin/zdmcli query job -jobid <job ID>; sleep 10; done

However, I prefer to get more details, so I tail the ZDM log file instead. This little one-liner finds the newest file and tails it:

[zdmuser@zdmhost]$ tail -n 50 -f "`ls -td /u01/app/oracle/chkbase/scheduled/*log | head -1`"

And I get this level of detail:

zdmhost: 2021-05-28T06:25:25.518Z : Executing phase ZDM_PRE_MIGRATION_ADVISOR
zdmhost: 2021-05-28T06:25:31.883Z : Source PDB name : SRCPDB
zdmhost: 2021-05-28T06:25:32.410Z : Running CPAT (Cloud Premigration Advisor Tool) on the source node srchost ...
zdmhost: 2021-05-28T06:25:38.533Z : Premigration advisor output:
Cloud Premigration Advisor Tool Version 21.0.0
Cloud Premigration Advisor Tool completed with overall result: WARNING
Cloud Premigration Advisor Tool generated report location: /u01/app/oracle/zdm/zdm_SRCCDB_fra3dd_6/out/premigration_advisor_report.json

Start Migration

When the evaluation passes, I can start the real migration. I am re-using the same command line, but I have removed the -eval option. Instead – and this is important – I am using -pauseafter to tell ZDM to pause the migration, just before the switchover takes place. Downtime has not started yet. The database is still open for business, but ZDM will copy the data and start the GoldenGate replication:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_online.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -pauseafter ZDM_MONITOR_GG_LAG

Again, I have to input the passwords as in eval mode. Use the job ID to monitor the progress or tail the log file. Note down the job ID. I need it later on to resume the migration.

When ZDM completes the phase ZDM_MONITOR_GG_LAG, it will pause and wait. I can verify it with zdmcli query job.

My data has been imported to the target database in OCI, GoldenGate has been configured, and my changes are being replicated. I can monitor the replication by logging on to the GoldenGate hub.

Important: Now, I strongly recommend that you take care of the statistics in your target database:

  1. Gather dictionary statistics (DBMS_STATS.GATHER_DICTIONARY_STATS). After the import, the data dictionary is full of a lot of new data – the metadata about your tables and other objects. To ensure a smooth-running database, gather dictionary statistics (see the sketch after this list).
  2. Decide on what to do about statistics on user objects as described in this blog post.
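
A minimal sketch of the first item, using the PDB name from my demo:

alter session set container=tgtpdb;
exec dbms_stats.gather_dictionary_stats;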

Complete Migration

Now it is time to finalize the migration and switch over to the OCI target database. All I need to do, is to resume the paused ZDM job. I use the job ID that was created when I started the migration:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli resume job -jobid <job ID>

ZDM will now ensure that all changes are replicated before switching over to the OCI target database. Again, I use the zdmcli query job command to monitor the progress and I can tail the log file. After a short while, the migration completes.

That’s it. I have now migrated into OCI! And on top of that, my new database is running Oracle Database 19c.

I have a few post-migration tasks to carry out:

  • I prefer to shut down the source database to ensure that no one uses it anymore.
  • ZDM will remove the GoldenGate configuration, but I need to disable the replication in the target database by setting enable_goldengate_replication back to false (see the sketch after this list).
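
A minimal sketch of disabling the parameter again, mirroring how it was set in CDB$ROOT during preparation:

alter session set container=CDB$ROOT;
alter system set enable_goldengate_replication=false scope=both;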

Finally, I would recommend that you take a look at Logical Migration and the Final Touches. It has a few pointers that can make your migration even more successful.

Other Blog Posts in This Series

Appendix

A – GoldenGate user privileges

When GoldenGate is replicating changes into the target database, it does so as a regular database user. Thus, it must have privileges to perform DML and DDL on the appropriate schemas. There are a number of ways to do that.

I granted INSERT ANY, UPDATE ANY, and DELETE ANY which will allow me to perform DML in any schema. But that does not cover DDL. If I want to cover that as well, there are a number of similar ANY privileges that I would need to grant, like CREATE ANY TABLE, CREATE ANY INDEX and so forth.

But it is discouraged to perform DDL statements during the replication. Plus, the way ZDM configures the extract and replicat processes does not allow DDL to be replicated. If you want to replicate DDL, you need to tweak the GoldenGate settings:

During the migration period, to provide the most optimal environment for fast database replication, avoid the following database operations:

  • Data Definition Language (DDL) – While DDL is replicated, the Replicat serializes data to ensure that there are no locking issues between DML and DDL on the same objects.
  • Large batch DML – Running large batch operations, like a single transaction that affects multi-millions of rows, can slow down replication rates.

Other customers prefer an easier solution and grant the PDB_DBA role (or DBA in a non-CDB). Connor McDonald also blogged about a procedure to perform schema grants. If you need inspiration, you can have a look at the GGADMIN user in an Autonomous Database. It has the PDB_DBA role amongst others.

In the end it comes down to preferences and security regulations in your organization.

B – Update Cloud Premigration Advisor Tool

As part of the migration ZDM will use the Cloud Premigration Advisor Tool (CPAT) to check your source database. The version of CPAT that ships with ZDM is not the latest one. So, you might be missing out on bug fixes and new recommendations. If you want to get the latest and greatest recommendations, I suggest that you manually update the CPAT tool in your ZDM home. You can find instructions in the MOS note Cloud Premigration Advisor Tool (CPAT) Analyzes Databases for Suitability of Cloud Migration (Doc ID 2758371.1)

C – Ignore Certain Data Pump Errors

If there is an error in either your Data Pump export or import, then ZDM will error out. But by default, ZDM will ignore the following errors:

  • ORA-31684: Object type string already exists
  • ORA-39111: Dependent object type string skipped, base object type string already exists
  • ORA-39082 Object type string created with compilation warnings

If you know that your data exports or imports with a specific error and you want to ignore it then you can add your own errors. The list of ignorable errors is maintained by the ZDM service host and it applies to all migrations. So, if you add another error for one migration, that error is also ignored in a second migration. Read more about it in the documentation.

D – RAC Databases

If your source or target database is a RAC database, then you only need access to one of the nodes. Select the node that you want to use and use the host name and SSH keys to that host.

E – Response File

Allow me to put some comments on the values:

Setting – Comment
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME – Name of the bucket in OCI Object Storage that will be used as a staging area. I recommend using a separate and dedicated bucket for each migration
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME – This is your Object Storage Namespace. You find it in the OCI Console, Administration > Tenancy Details
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE – For on-prem databases set to number of physical cores x 2. For OCI databases set to number of OCPUs. Standard Edition does not allow parallel, so set to 1
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE – Set to the number of OCPUs. Standard Edition does not allow parallel, so set to 1
DATAPUMPSETTINGS_JOBMODE – I recommend using SCHEMA. Each schema to import is specified in a separate parameter, INCLUDEOBJECTS-n
GOLDENGATEHUB_ADMINUSERNAME – Remember the usernames in GoldenGate are case sensitive
GOLDENGATEHUB_SOURCEDEPLOYMENTNAME – The name of the source deployment in the GoldenGate hub. If you went with the default value, it should be Source
GOLDENGATEHUB_TARGETDEPLOYMENTNAME – The name of the target deployment in the GoldenGate hub. If you went with the default value, it should be Target
GOLDENGATEHUB_URL – Construct the URL by using the Fully Qualified Domain Name (FQDN), e.g. https://ogg19cora.subx.oraclevcn.com
INCLUDEOBJECTS-n – Specify each schema in a separate parameter, example: INCLUDEOBJECTS-1=owner:SH and INCLUDEOBJECTS-2=owner:OE
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT – This is the fingerprint of the API keys that you used to configure OCI CLI on the ZDM service host. You can find the fingerprint in the OCI Console. Go to Identity > Users > User Details > API Keys
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE – This is the private key file that you used when you configured OCI CLI on the ZDM service host
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_SERVICENAME – This is a service that connects directly into CDB$ROOT. I normally use lsnrctl status to find the name of the service
SOURCECONTAINERDATABASE_ – These settings apply to my source CDB. If you have a non-CDB simply remove these settings
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME – This is a service that connects directly into the source PDB (or non-CDB). I normally use lsnrctl status to find the name of the service
SOURCEDATABASE_ – These settings apply to my source PDB. Or, to your non-CDB
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME – This is a service that connects directly into the target PDB. I normally use lsnrctl status to find the name of the service

Zero Downtime Migration – Logical Offline Migration to Autonomous Database

Let me show you how you can migrate an Oracle Database into an Autonomous Database. My source database is a PDB running on 12.1.0.2. I want to migrate it directly into an Oracle Autonomous Transaction Processing (ATP) database. I will do an offline migration using just Oracle Data Pump.

Prerequisites

  1. I need a Zero Downtime Migration (ZDM) service host. I have already blogged about how to create one. The computer is called zdmhost and the user is called zdmuser.
  2. My source database is a PDB called srcpdb, the CDB is called srccdb, and the host is named srchost. All my data is in the schema SH.
  3. I have created an ATP database. I name it tgtadb.

Overview of the components in this demo

Prepare Source

My source database is a PDB. If you have a non-CDB, perform all the steps in your non-CDB.

First, set the streams pool to a reasonable size (needed by Data Pump):

alter session set container=CDB$ROOT;
--At least 256M
alter system set streams_pool_size=256M scope=both;

Generate a list of tablespaces. In ADB there is only one tablespace named DATA (unless you have more than 32 TB of data, then you get DATA_2, DATA_3 and so forth). All tablespaces must be re-mapped to that. Add all schemas to the in-list:

alter session set container=SRCPDB;
select distinct tablespace_name
from (
      select distinct tablespace_name from dba_segments where owner in ('SH')
      union
      select distinct default_tablespace from dba_users where username in ('SH')
      union
      select distinct tablespace_name from dba_ts_quotas where dropped = 'NO' and username in ('SH')
      );

I need to create a directory that can be used by Data Pump:

[oracle@srchost]$ mkdir -p /u01/app/oracle/datapump/mydirsrc

Optional. The best starting point for Data Pump is good and accurate dictionary statistics. This helps Data Pump extract the information as fast as possible:

exec dbms_stats.gather_dictionary_stats;

Prepare ZDM

The ZDM service host must be able to resolve the host name of the source database host:

[root@zdmhost]$ echo "<source IP address> srchost" >> /etc/hosts

I put my private SSH key to the source database host into ~/.ssh directory. Ensure permissions are set properly:

[zdmuser@zdmhost]$ cp srchost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/srchost_key_file 

Test the connection. I connect as opc, but you might have a different user. Read more about access to the database host in the documentation:

[zdmuser@zdmhost]$ ssh -i ~/.ssh/srchost_key_file opc@srchost date

Prepare Response File

I will use a template response file as the basis for my response file:

[zdmuser@zdmhost]$ cp $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp ~/logical_offline.rsp
[zdmuser@zdmhost]$ chmod 700 ~/logical_offline.rsp

This is the response file that I end up with (see appendix D for an explanation):

MIGRATION_METHOD=OFFLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS

SOURCEDATABASE_ADMINUSERNAME=SYSTEM
SOURCEDATABASE_CONNECTIONDETAILS_HOST=srchost
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb....com

TARGETDATABASE_OCID=ocid1.autonomousdatabase.oc1.eu-frankfurt-1....
TARGETDATABASE_ADMINUSERNAME=ADMIN

OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1....
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=58:b9:...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1

DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME=oradbclouducm
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME=zdm-staging
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=MYDIRSRC
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/datapump/mydirsrc
DATAPUMPSETTINGS_CREATEAUTHTOKEN=FALSE
DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_SCHEMA,oldValue:SH,newValue:SH2
DATAPUMPSETTINGS_METADATAREMAPS-2=type:REMAP_TABLESPACE,oldValue:USERS,newValue:DATA
DATAPUMPSETTINGS_METADATAREMAPS-3=type:REMAP_TABLESPACE,oldValue:TEST,newValue:DATA
INCLUDEOBJECTS-1=owner:SH

A few comments:

  • I am remapping SH to SH2 because SH already exists in ADB as a common user. I want to import into a new user.
  • Check the appendix for recommendations on setting parallel degree. Remember scaling up on CPUs in ADB happens fast and online. If you have a lot of data, you can really benefit from scaling up during import.
  • The list of tablespaces that I created earlier is used to generate the list of tablespace remaps.

Perform Evaluation

I am now ready to perform a migration evaluation. It is a dry run of the migration and performs various sanity checks. Nothing is changed during the evaluation:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_offline.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -eval

A few comments:

  • sourcenode is the host name of the source database host. The name must be resolvable, which is why I put it into /etc/hosts already.
  • sourcedb is the DB_UNIQUE_NAME of the source database.
  • srcarg1 is the name of the user that I connect as to the source database host. You might need to change that.
  • srcarg2 is the location of the private key file that I use to connect via SSH.

Next, I am prompted for passwords and my auth token. It also prompts for an encryption password, which is used to encrypt and decrypt the Data Pump dump file. Generate a secure password. I also get a job ID.

The migration evaluation is then started, and I can use the job ID to monitor it:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli query job -jobid <job ID>

Output from zdmcli query job command

However, I prefer to get more details, so I tail the ZDM log file instead. This little one-liner finds the newest file and tails it:

[zdmuser@zdmhost]$ tail -n 50 -f "`ls -td /u01/app/oracle/chkbase/scheduled/*log | head -1`"

And I get this level of detail:

zdmhost: 2021-05-28T06:25:25.518Z : Executing phase ZDM_PRE_MIGRATION_ADVISOR
zdmhost: 2021-05-28T06:25:31.883Z : Source PDB name : SRCPDB
zdmhost: 2021-05-28T06:25:32.410Z : Running CPAT (Cloud Premigration Advisor Tool) on the source node srchost ...
zdmhost: 2021-05-28T06:25:38.533Z : Premigration advisor output:
Cloud Premigration Advisor Tool Version 21.0.0
Cloud Premigration Advisor Tool completed with overall result: WARNING
Cloud Premigration Advisor Tool generated report location: /u01/app/oracle/zdm/zdm_SRCCDB_fra3dd_6/out/premigration_advisor_report.json

Start Migration

When the evaluation passes, I can start the real migration. This is an offline migration, so I need to drain the database before I start. Kick the users off and ensure that they don’t reconnect.

I am re-using the same command line, but I have removed the -eval option:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/logical_offline.rsp \
   -sourcenode srchost \
   -sourcedb SRCCDB_fra3dd \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo

Again, I have to input the passwords as in eval mode. Use the job ID to monitor the progress or tail the log file.

When the migration completes, the status of the job changes to succeeded.

That’s it. I have now migrated into an Autonomous Database in OCI!

I have a few post-migration tasks to carry out:

  • I prefer to shut down the source database to ensure that no one uses it anymore.
  • Take care of the optimizer statistics as described in this blog post. Since your target is an Autonomous Database, I would recommend not moving your statistics preferences and, in addition, simply regathering statistics using DBMS_STATS.GATHER_DATABASE_STATS (see the sketch after this list).
  • Gather dictionary statistics (DBMS_STATS.GATHER_DICTIONARY_STATS). After the import the data dictionary is now full of a lot of new data – the metadata about your tables and other objects. To ensure a smooth running database, gather dictionary statistics.
  • Figure out what to do with the Data Pump dump and log files that are stored in my object storage bucket.
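
A minimal sketch of the two statistics calls mentioned above, connected to the target ADB as the ADMIN user:

exec dbms_stats.gather_dictionary_stats;
exec dbms_stats.gather_database_stats;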

I showed the simple, offline method. However, if you want to use the online method without downtime, have a look at my colleague Sinan’s blog post.

Other Blog Posts in This Series

Appendix

A – Update Cloud Premigration Advisor Tool

As part of the migration ZDM will use the Cloud Premigration Advisor Tool (CPAT) to check your source database. The version of CPAT that ships with ZDM is not the latest one. So, you might be missing out on bug fixes and new recommendations. If you want to get the latest and greatest recommendations, I suggest that you manually update the CPAT tool in your ZDM home. You can find instructions in the MOS note Cloud Premigration Advisor Tool (CPAT) Analyzes Databases for Suitability of Cloud Migration (Doc ID 2758371.1)

B – Ignore Certain Data Pump Errors

If there is an error in either your Data Pump export or import, then ZDM will error out. But by default, ZDM will ignore the following errors:

  • ORA-31684: Object type string already exists
  • ORA-39111: Dependent object type string skipped, base object type string already exists
  • ORA-39082 Object type string created with compilation warnings

If you know that your data exports or imports with a specific error and you want to ignore it then you can add your own errors. The list of ignorable errors is maintained by the ZDM service host and it applies to all migrations. So, if you add another error for one migration, that error is also ignored in a second migration. Read more about it in the documentation.

C – RAC Databases

If your source database is a RAC database, then you only need access to one of the nodes. Select the node that you want to use and use the host name and SSH key to that host.

D – Response File

Allow me to put some comments on the values:

Setting – Comment
DATAPUMPSETTINGS_CREATEAUTHTOKEN – I set this to false to re-use an existing auth token. I will be prompted to input the auth token when the migration starts
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME – Name of the bucket in OCI Object Storage that will be used as a staging area. I recommend using a separate and dedicated bucket for each migration
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME – This is your Object Storage Namespace. You find it in the OCI Console, Administration > Tenancy Details
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE – For on-prem databases set to number of physical cores x 2. For OCI databases set to number of OCPUs. Standard Edition does not allow parallel, so set to 1
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE – Set to the number of OCPUs
DATAPUMPSETTINGS_JOBMODE – I recommend using SCHEMA. Each schema to import is specified in a separate parameter, INCLUDEOBJECTS-n
DATAPUMPSETTINGS_METADATAREMAPS-n – You should remap any tablespace into DATA, which is the only one that exists in ADB. If you have a lot of data, you will have DATA_2, DATA_3 and so forth. In addition, I am remapping SH into SH2. SH is already created as a common user in ADB, and I want to import into an empty schema. If you need to specify multiple remaps, increment the number and specify only one remap per parameter
INCLUDEOBJECTS-n – Specify each schema in a separate parameter, example: INCLUDEOBJECTS-1=owner:SH and INCLUDEOBJECTS-2=owner:OE
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT – This is the fingerprint of the API keys that you used to configure OCI CLI on the ZDM service host. You can find the fingerprint in the OCI Console. Go to Identity > Users > User Details > API Keys
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE – This is the private key file that you used when you configured OCI CLI on the ZDM service host
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME – This is a service that connects directly into the source PDB (or non-CDB). I normally use lsnrctl status to find the name of the service
SOURCEDATABASE_ – These settings apply to my source PDB. Or, to your non-CDB

E – Data Pump Logfile

Check out my Pro Tips for information on how to retrieve the Data Pump log file.

Zero Downtime Migration – Logical Online Migration

You can migrate a database with Zero Downtime Migration (ZDM) using the Logical Online method. The migration will happen with a combination of Data Pump and Oracle GoldenGate. This way even very large databases can be migrated with very little downtime.

In the introduction blog post I covered the basic characteristics of this approach and explained that Oracle GoldenGate can be used at no extra license cost and it is really simple to use.

Migrate your Oracle Database with Zero Downtime Migration ZDM using Data Pump and GoldenGate

With the Logical Online method, you can even target Autonomous Databases – both Oracle Autonomous Transaction Processing (ATP) and Oracle Autonomous Data Warehouse (ADW), on shared as well as dedicated infrastructure. When you target an Autonomous Database, you don’t have access to the underlying database host, so instead of SSH access, ZDM will require SQL*Net connectivity.

Benefits

  • First and most important, you don’t need a license for Oracle GoldenGate to use this approach. Yes, that’s right! You are allowed to use a marketplace edition of Oracle GoldenGate 19c for 183 days to migrate your database with ZDM.

Oracle GoldenGate for Oracle – Database Migrations can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using the following tools: Oracle Zero Downtime Migration and Oracle Cloud Infrastructure Database Migration

  • Even if you don’t have any GoldenGate experience, you can still use this approach. For example, you don’t need to install and configure GoldenGate yourself. The marketplace image creates the GoldenGate installation for you. Afterwards, ZDM configures GoldenGate and will eventually clean up as well.
  • You can migrate directly into a higher release. For example, if your source database is 12.1.0.2, you can migrate into 19c directly. This avoids the extra downtime normally needed to perform an upgrade.
  • You can migrate directly into a PDB. For example, if your source database is a non-CDB, you can migrate directly into a PDB. This avoids the extra downtime normally needed to convert the database into a PDB using the script noncdb_to_pdb.sql.
  • You can use the full power of Data Pump and GoldenGate to transform your data. By default, ZDM will convert any old BasicFile LOBs into SecureFile LOBs during import. But you can apply other transformations as well. You can partition the table in the target database, you can change the tablespace layout, you can apply compression and anything else that is possible with Data Pump.
  • You can migrate into a different character set (Data Pump and GoldenGate restrictions apply).
  • Your Data Pump export is automatically compressed if the source database is Enterprise Edition. Normally, Data Pump compression would require a license for the Advanced Compression Option. But when you migrate with ZDM to OCI, Oracle allows you to use Data Pump compression without having the Advanced Compression Option.
  • Your Data Pump export is automatically encrypted if the source database is Enterprise Edition. Normally, Data Pump encryption would require a license for the Advanced Security Option. But when you migrate with ZDM to OCI, Oracle allows you to use Data Pump encryption without having the Advanced Security Option.
  • You get recommendations on the migration before it is actually started. ZDM includes the Cloud Premigration Advisor Tool (CPAT) which analyzes your database and gives you specific recommendations that match the target database. For example, if you target an Autonomous Database, CPAT will warn if you have unsupported objects.
  • You can migrate a SE2 database into OCI using this approach as well.
  • You can build your target database in advance and configure backup upfront as well. You can test your backup/recovery strategy before you complete the migration.
  • You can build a standby database in advance as well. Your new OCI target database can be protected by Data Guard from the very second you perform the switch-over. I would recommend doing the initial Data Pump load first, and then build the standby database. This ensures that a minimum of changes has to be applied via redo.

Considerations

  • The time zone file in the OCI target database must be equal to or higher than that in the source database. Typically, this is not a problem if you provision a brand-new DB System. But if you plan to use an existing CDB in OCI, do check the time zone file upfront:
SQL> select * from v$timezone_file;
  • In My Oracle Support you can find a list of recommended patches to apply on the source database when using Oracle GoldenGate. There is one for 11g and one for 12c and newer. You don’t have to apply those patches, but it is a recommendation. However, if you run into issues, you potentially have to patch your source database.
  • Most databases can be migrated with Oracle GoldenGate. Only very exotic data types are not supported. Be sure to check the documentation.
  • When you use Data Pump to migrate you will lose a lot of the information that is stored in the data dictionary, like
    • AWR
    • SQL Plan Baselines
    • SQL Profiles
    • SQL Patches
  • You can still transfer that information, but you need to use other tools. If this is relevant to you, I suggest that you watch our webinar Performance Stability, Tips and Tricks and Underscores where it is covered in detail.
  • Depending on whether you use a full or schema mode export you need to take care of your public objects afterwards. It could be synonyms or database links.
  • For a huge database you should be prepared for the time it takes to perform the Data Pump export and import. No need to worry too much because GoldenGate will keep track of the changes. But for a 100 TB database you should not start the job the day before the switchover is supposed to take place.
  • It is not recommended to execute DDL statements when GoldenGate is replicating changes. The extract and replicat processes that are created by GoldenGate do not replicate DDL statements. If you, for example, add a table after the initial Data Pump load, it will not get replicated by GoldenGate. However, GoldenGate is a very powerful tool, and you can configure it to replicate DDL as well. But you will need to adjust the GoldenGate user privileges and change the parameter files. In the documentation you can read more about the possibilities of DDL replication and how to configure it.

Fallback

Using this approach, there is no straightforward way of falling back to the source on-prem database. You could use GoldenGate to reverse the replication when you switch over to the OCI target database. However, it is not configured automatically by ZDM. So, this is where you would need to know GoldenGate and do the required configuration yourself.

Conclusion

You get a lot more options when you use the Logical Online method to migrate your database. Especially, the possibility of migrating directly into a higher release or into a PDB is very attractive. Plus, you can use GoldenGate during the migration at no extra license cost.

However, the flexibility comes at a cost. There is a little more configuration, and it is not as straightforward as the Physical Online method. But it is not complicated.

Want to Know More

If you want to know more about using Data Pump and GoldenGate for database migrations, I suggest that you take a look at our webinar Migration Strategies – Insights, Tips and Secrets

In addition, these links contain additional useful information:

Other Blog Posts in This Series

Zero Downtime Migration – Create GoldenGate Hub

The Logical Online migration method in Zero Downtime Migration (ZDM) uses Oracle GoldenGate to keep the Oracle Database in OCI in sync until you perform the switchover. You need an Oracle GoldenGate Hub to do that and you should use the deployment from the OCI Marketplace. Let’s do it!

  1. First, go to the OCI Marketplace and search for goldengate for oracle. You should use the image called Oracle GoldenGate for Oracle – Database Migrations.

  2. On the application page you can verify that the software is free but you still have to pay for the underlying infrastructure, like the compute instance. In addition, certain terms apply:

Oracle GoldenGate for Oracle – Database Migrations can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using the following tools: Oracle Zero Downtime Migration and Oracle Cloud Infrastructure Database Migration

  3. When you hit the Get App button, you need to log in to OCI. Be sure to select the right region and compartment. Again, you can verify that the software is free – but you pay for the compute instance. Hit Launch Stack.

  4. On the next page, Create Stack – Stack Information, simply give the stack a name. I call it OGG4DEMO.

  5. On the next page, Create Stack – Configure Variables, you fill in additional information like network, your SSH public key, etc. But most important – you also need to specify the Oracle Database version of the source and target database, and whether the target database is an Autonomous Database. Don’t worry about the deployment names; they are good as they are. Even if the target attributes are listed as optional, fill them out.

  6. Review the settings and hit Create. The stack is now being deployed and will be ready shortly.

  7. Now, head on over to Compute > Instances to find the newly created compute instance with Oracle GoldenGate already installed. Depending on your network settings, use either the public or private IP address or the computer name (DNS).

  8. Use the IP address (or computer name) and your SSH key to connect to the server as opc. There is a file called ogg-credentials.json which contains the username and password needed to connect to the Oracle GoldenGate Hub.

  9. Now you can connect via HTTPS (https://193.122.52.34) to the Oracle GoldenGate Hub. Use the credentials from ogg-credentials.json and log in. That’s it – your very own Oracle GoldenGate deployment.

Other Blog Posts in This Series

Appendix

A. The very last thing is to apply the latest patches to Oracle GoldenGate. But that will be a topic for a future post.

B. When you connect to Oracle GoldenGate via HTTPS, you will probably get a security warning in your browser. The initial deployment comes with a self-signed certificate which alerts many browsers. Follow the procedures in your organization to deploy a proper certificate. Read more about Securing the Microservices Architecture.

C. The credentials for Oracle GoldenGate are case sensitive. It applies to the password but also the username (oggadmin, not OGGADMIN).

Upgrading in the Cloud – VM DB Systems – Automated Upgrade to 19c – The Details

Following a previous blog post here are all the details on automated upgrades in OCI, and (possibly) the answers to your questions.

Precheck

The precheck ensures the database is ready to upgrade. It uses DBUA, which in turn uses preupgrade.jar to execute the checks. It is similar to running AutoUpgrade in analyze mode. The check is non-intrusive and can be executed while the database is in use.

Normally, when you use preupgrade.jar we always recommend you download the latest version from My Oracle Support. However, this is not possible when you use the tooling. The new, target Oracle Home is always deployed as part of the precheck process – and deleted again after the precheck. There is no way you can replace the preupgrade.jar package. You must use the version of preupgrade.jar that comes with the Oracle Home.

If there are no issues that prevent you from upgrading, you will see this message: The precheck completed and found no errors - the database is ready to upgrade

However, it could also be that there is an error in the database that must be fixed first; in that case, a message is displayed in the console.

If you want to see the output from the precheck you must log on to the database host and find the file:

vi $ORACLE_BASE/cfgtoollogs/dbua/upgrade<timestamp>/$ORACLE_UNQNAME/upgrade.xml

Only the XML output is available, which might be a little hard to read. If you prefer you can also download AutoUpgrade to the server and run it in analyze mode. It can produce a much better output, and it works even if the target Oracle Home is not present. Create a simple config file:

upg1.sid=DB11204
upg1.source_home=/u01/app/oracle/product/11.2.0.4/dbhome_1
upg1.target_version=19

And now start AutoUpgrade in analyze mode:

java -jar autoupgrade.jar -config DB11204.cfg -mode analyze

You can use the preupgrade report to determine which issues prevent the upgrade from starting.

The database must be in ARCHIVELOG mode, and the size of your Fast Recovery Area (FRA) must be at least 15G (parameter db_recovery_file_dest_size). In addition, you must have 15G of free space on the mount point that hosts the FRA.
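
A quick way to check both database requirements from SQL*Plus; a minimal sketch:

select log_mode from v$database;
select value from v$parameter where name = 'db_recovery_file_dest_size';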

Upgrade

When you upgrade your database, all PDBs in the database are upgraded as well. There is no way to change it. If a PDB is closed when the upgrade starts, it is opened and upgraded. After the upgrade, the PDB is left open and in READ WRITE state. But that state is not saved, so after a CDB restart the PDB will start in whatever state was previously saved.

The first version of the tooling does not support standby databases. If your database is a primary database, you must remove the standby database, upgrade, and then recreate the standby database. It is in the plan for future enhancements to get this streamlined.

Fallback

Enterprise Edition databases are protected by a guaranteed restore point (GRP) and Flashback Database. The tooling automatically creates the GRP before it starts to work on the database. If an error occurs during the upgrade, you can use the OCI console to initiate a roll back.

If the database upgrade fails on an Enterprise Edition database it is possible to roll back to a guaranteed restore point using Flashback Database

After successful upgrade the GRP is dropped again. The GRP only protects the database during the upgrade, so you can’t rely on the GRP as a fallback mechanism if you decide to fall back after the upgrade. Let’s say that your testing reveals a critical problem after the upgrade, then your only fallback mechanism is to restore a backup.
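
If you want to verify whether the guaranteed restore point is still present, a minimal sketch:

select name, guarantee_flashback_database, time from v$restore_point;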

Since Flashback Database is an Enterprise Edition feature, this fallback mechanism is not available on Standard Edition databases.

In addition, it is strongly recommended that you perform a manual backup of the database before you start the upgrade. The console will also give you this warning, before you can start the upgrade.

Monitoring and Troubleshooting

When you have started the upgrade, you can’t monitor it from the console. You must log on to the host. When you do so, be aware that the timestamps shown in the OCI console are UTC, but the timestamps in the log files on the host are local timestamps (depending on your region).

Using dbcli

Log on as root and use the dbcli tool to monitor the progress. First, list jobs:

[root@host]$ dbcli list-jobs

Which should produce a list of the jobs - including the upgrade - that run on the host. Next, you can get additional information about the job using the ID:

[root@host]$ dbcli describe-job -i <id>

Which gives you more detailed information about the upgrade.

Using DBUA Log Files

But you can get even better information by looking in the log files from DBUA. Use the job id from the dbcli command to find the log file:

[oracle@host]$ export ORACLE_BASE=/u01/app/oracle
[oracle@host]$ export DBCLI_JOBID=f4b2597f-990f-4442-a774-153f3713fb7a
[oracle@host]$ tail -f -n 10 $ORACLE_BASE/cfgtoollogs/dbua/$DBCLI_JOBID/silent.log

And for really detailed information look in this directory:

[oracle@host]$ export ORACLE_BASE=/u01/app/oracle
[oracle@host]$ export DBCLI_JOBID=f4b2597f-990f-4442-a774-153f3713fb7a
[oracle@host]$ cd $ORACLE_BASE/cfgtoollogs/dbua/$DBCLI_JOBID/$ORACLE_UNQNAME

Using DCS Agent

The OCI control plane communicates with your DB System using an agent, and sometimes it can be useful to look in those logs:

[root@host]$ cd /opt/oracle/dcs/log
[root@host]$ vi dcs-agent.log

To find the log entries that are related to a specific upgrade search for the job ID:

[root@host]$ cat dcs-agent.log | grep "<job-id>" | more

Q&A

Which version and release update can I upgrade to?

The tooling only allows upgrades to Database 19c. If you need to upgrade to any other version, you must do it manually.

You can decide to upgrade to an Oracle-provided image or your own custom database software image. However, for both types of images, the Release Update (or patch level) must be the latest or one of the previous two Release Updates. Even if you have a custom database software image that is older, it can’t be used; you must upgrade to one of the recent Release Updates. If you select 19.0.0.0, you will not get the base release, but the latest Release Update. If you use the APIs, this is a smart way of specifying that you always want the latest Release Update.

Where are my log files?

The output from the precheck is stored here:

  • $ORACLE_BASE/cfgtoollogs/dbua/upgrade<timestamp>

The output from the actual upgrade is stored here:

  • $ORACLE_BASE/cfgtoollogs/dbua/<job-id>
  • $ORACLE_BASE/cfgtoollogs/dbua/<job-id>/$ORACLE_UNQNAME

In addition, you can get details about the upgrade using dbcli:

[root@host]$ dbcli list-jobs
[root@host]$ dbcli describe-job -i <job-id>

Why is it taking so long to perform a precheck?

It consists of three phases:

  1. Deploy new Oracle Home to the VM DB System
  2. Precheck of the database
  3. Removing the new Oracle Home

The precheck (phase 2) is really fast, just as fast as if you ran AutoUpgrade in analyze mode or used preupgrade.jar yourself. The extra time is needed to deploy and remove the Oracle Home again. For each execution of the precheck the procedure repeats, and a new Oracle Home is deployed. It is never re-used.

Why is the upgrade slower than if I do it manually?

Typically, when you upgrade a database you have already – outside of the maintenance window – deployed a new Oracle Home. When you use the tooling, this happens inside the maintenance window. The tooling can’t deploy an Oracle Home prior to the upgrade. In addition, the upgrade is executed with DBUA using default options, which for instance means that the time zone file is upgraded as well. If you are sensitive to downtime and would like to complete the upgrade faster, you must perform the upgrade manually.

Will my 11.2.0.4 database get converted to a PDB?

No, the database is upgraded as-is and there is no PDB conversion. We are working on making it possible to perform the non-CDB conversion as well. If you must convert the non-CDB to a PDB, you must move the database to a new VM DB System that already has a CDB provisioned. In that case, I would recommend that you use the manual upgrade and plug-in method as described in another blog post.

Can I perform an automated upgrade using dbcli?

No, although the command line help of dbcli suggests that such an option exists, it can’t be used.

Other Posts in This Series

Upgrading in the Cloud – VM DB Systems – Automated Upgrade to 19c

It is really easy to upgrade a database to 19c using the console:

Upgrade your Oracle Database in OCI with just a few clicks

In addition, the documentation is quite good and elaborate. I won’t copy/paste from the documentation here, and because things really move fast in the cloud, you should visit the documentation for up-to-date information.

Before Upgrade

First, your VM must be running Oracle Linux 7 and Grid Infrastructure must be version 19. If your system doesn’t meet these requirements either:

  • Move the database to a new DB System that meets these requirements via cloning or backup/restore, or
  • Use a manual upgrade, which is described in another post in this series.

To check the OS version:

[oracle@host]$ cat /etc/os-release

To check GI version:

[grid@host]$ crsctl query crs activeversion

Precheck

Start by running a precheck. I recommend always upgrading to 19.0.0.0, as it automatically gives you an Oracle Home that includes the latest Release Update.

The console will notify you if something critical comes up during the precheck; if a critical issue prevents you from upgrading, a message is displayed in the console.

If you want to see the output from the precheck you must log on to the database host and find the file:

vi $ORACLE_BASE/cfgtoollogs/dbua/upgrade<timestamp>/$ORACLE_UNQNAME/upgrade.xml

Only the XML output is available, which might be a little hard to read. If you prefer you can also download AutoUpgrade to the server, and run it in analyze mode. It can produce a much better output, and it works even if the target Oracle Home is not present. Just use the target_version config file parameter.
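
If you go the AutoUpgrade route, here is a minimal sketch of an analyze-only run. The config file name, log directory, SID, and source home below are placeholders I made up for illustration, and you need a Java 8 runtime to execute autoupgrade.jar:

[oracle@host]$ cat /home/oracle/upg19.cfg
global.autoupg_log_dir=/home/oracle/autoupgrade
upg1.sid=<your-sid>
upg1.source_home=/u01/app/oracle/product/11.2.0.4/dbhome_1
upg1.target_version=19

[oracle@host]$ java -jar autoupgrade.jar -config /home/oracle/upg19.cfg -mode analyze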

Backup

Automatic backup must be disabled during upgrade.

In addition, it is recommended to create a manual backup before you start the upgrade. Enterprise Edition databases are protected by a guaranteed restore point, which is the primary fallback method; however, it is recommended to also have a manual backup. For Standard Edition databases, a manual backup is the only fallback method available.

Upgrade

Under the hood the upgrade consists of:

  1. Deploy new Oracle Home. Use the same Oracle Home version that was selected for the precheck.
  2. Create guaranteed restore point (Enterprise Edition only). The name of the restore point is prefixed BEFORE#DB#UPGRADE#, and the restore point is automatically removed after a successful upgrade.
  3. Upgrade using DBUA (including an additional precheck). The upgrade is executed with the default settings, which include an upgrade of the time zone file; this is not configurable. If you are upgrading a CDB, all PDBs are upgraded as well – it is not possible to exclude individual PDBs. Any PDB that is closed when the upgrade starts will be opened and upgraded.

While you are upgrading the database it is unavailable. Upgrade requires downtime. How long? As always in IT – it depends! These things matter a lot:

  • Number of installed components (select * from dba_registry)
  • Size & complexity of dictionary (select count(*) from obj$)
  • Some feature/version combinations

These things matter a little:

  • CPU
  • Disk speed
  • RAM

These things don't matter at all:

  • Amount of user data

But a rough estimate is anywhere from 20 minutes to an hour.
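
If you want a rough idea of how your database compares, the two queries mentioned in the lists above are cheap to run before the maintenance window (connect as a user with access to the dictionary, for example SYS):

SQL> --Installed components
SQL> select comp_id, version, status from dba_registry;
SQL> --Rough size of the dictionary
SQL> select count(*) from obj$;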

After Upgrade

Parameter COMPATIBLE

The COMPATIBLE parameter is not changed by the tooling. You are in full control of this parameter, and should decide when you want to change it to the default value in 19c. Check the documentation to read more about this parameter.

I would recommend that you do it one or two weeks after the upgrade. This preserves the option of performing a database downgrade. But it requires a database restart, so you must be able to tolerate the additional maintenance window. If you can’t tolerate additional downtime, or you don’t care about the downgrade option, you should update COMPATIBLE right away.

In any case this is how you do it:

SQL> alter system set compatible='19.0.0' scope=spfile;
SQL> shutdown immediate
SQL> startup

And always set COMPATIBLE to the default value of the release; Mike Dietrich explains it really well in a blog post.

Update Profile

.bashrc must be updated to reflect the new Oracle Home. A few environment variables should be updated; either do it manually or use sed:

export OLD_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
export NEW_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_2
cp ~/.bashrc ~/.bashrc_backup_$(date -u +%Y%m%d-%H%M%S)
sed -i "s,$OLD_HOME,$NEW_HOME,g" ~/.bashrc
source ~/.bashrc

Backup

Ensure that you have proper backups:

  • Re-enable automatic backup, if you want to use it
  • Personally, I would add a manual backup taken right after upgrade

Remove Old Oracle Home

You can remove the old Oracle Home; using it is no longer supported. Use dbcli for this. First, get a list of Oracle Homes:

[root@host]$ dbcli list-dbhomes

Output from the dbcli list-dbhomes command listing all Oracle Homes

Use the ID to delete the Oracle Home:

[root@host]$ dbcli delete-dbhome -i <id>

Output from the dbcli delete-dbhome command deleting an Oracle Home

If you want, you can monitor the progress of the deletion:

[root@host]$ dbcli describe-job -i <id>

Conclusion

It is simple to upgrade a database in OCI to 19c. The upgrade itself is just a click of a button, but for real databases there is a little more to it. Still, it is very easy, and most tasks can be executed from the console. If you want more control, or want to minimize the downtime required, you should look into doing the upgrade manually.

I made a video that I posted to our YouTube channel which demos the process.

Other Posts in This Series

Minimal Downtime Migration with Full Transportable Export Import and Incremental Backups

If you need to migrate a database to the cloud or anywhere else for that matter, you should consider using Full Transportable Export Import (FTEX) and incremental backups. Even for really large databases – 10s or 100s of TB – you can still migrate with minimal downtime. And it works across different endian formats.

FTEX uses transportable tablespaces and the solution has these benefits:

  • You can implicitly upgrade the database as part of the migration
  • You can migrate from a non-CDB and into a PDB
  • You can keep downtime at a minimum by using frequent incremental backups
  • You can migrate across endianness – e.g. from AIX or Solaris to Oracle Linux

How Does It Work

The concept is explained in this video on our YouTube Channel, which also includes a demo:

To make the backup and convert process really easy, Oracle provides Perl scripts that automate the entire process. You can download the scripts from My Oracle Support: V4 PERL Scripts to reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

What You Need to Be Aware Of

Transportable Tablespaces

For a complete list of limitations on transporting data, have a look in the documentation. Some limitations are specific to transportable tablespaces; the most notable are:

  • The character set and national character set should be the same. If not, a few options are still available.
  • No columns can be encrypted with TDE Column Encryption. The only option is to remove the encryption before migration and re-encrypt afterwards.
  • TDE Tablespace Encryption is supported for same-endian migration if the source database is 12.1.0.2 or newer. If you need to go across endianness, you must decrypt the tablespaces and re-encrypt after migration. Remember, in Oracle Database 12.2 you can encrypt (and decrypt) tablespaces online – see the sketch after this list.
  • If you are migrating across endianness, you must convert the data files. You must have disk space to hold a copy of all the data files. In addition, you should perform the convert on the platform that has the best I/O system and most CPUs. Typically, this is the cloud platform, which also offers scaling possibilities.
  • The database timezone and timezone file version must be identical.
  • Requires Enterprise Edition.
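
As a sketch of the online decrypt/re-encrypt mentioned in the TDE bullet above (requires Oracle Database 12.2 or later; the tablespace name is just an example, and with OMF/ASM no FILE_NAME_CONVERT clause is needed):

SQL> --On the source, before a cross-endian migration
SQL> ALTER TABLESPACE USERS ENCRYPTION ONLINE DECRYPT;
SQL> --On the target, after the migration
SQL> ALTER TABLESPACE USERS ENCRYPTION ONLINE ENCRYPT;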

Full Transportable Export Import

  • Source database must be 11.2.0.3 or higher
  • Target database must be 12.1.0.1 or higher
  • It is recommended to import directly into the target database using the NETWORK_LINK option.
  • Requires Enterprise Edition.

If you can't meet these requirements, you can still use this solution, but instead of doing an FTEX, you need to use Data Pump in another way.
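
To give an idea of what the FTEX import over a database link looks like, here is a sketch of the impdp command. The connect string, database link name, directory object, and data file path are placeholders, and VERSION=12 is only needed when the source database is 11.2.0.3 or 11.2.0.4:

[oracle@target]$ impdp system@targetpdb \
   network_link=src_db_link \
   full=yes transportable=always version=12 \
   transport_datafiles='+DATA/targetdb/users01.dbf' \
   directory=mydir logfile=ftex_imp.log \
   metrics=yes logtime=all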

Incremental Backups Using Perl Scripts

  • Source database must be 10.2.0.3 or higher
  • Target database must be 11.2.0.4 or higher

In addition, it is strongly recommended to use Block Change Tracking (BCT) on the source database. Note that this is an Enterprise Edition feature (in OCI: DBCS EE-EP or ExaCS). If you don't enable BCT, the incremental backups will be much slower, because RMAN has to scan every single data block for changes. With BCT, the database keeps track of changes in a special file, and when RMAN backs up the database, it just gets the list of changed blocks from that file.
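
Enabling BCT is a one-liner on the source database; a sketch, where the location of the change tracking file is just an example (with OMF you can omit the USING FILE clause entirely):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct/bct_sales.chg';
SQL> --Verify
SQL> SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;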

The scripts will create a level 0 image file backup, and you must have room to accommodate this on your file system.

Conclusion

By using a combination of Full Transportable Export Import and incremental backups, you can migrate even huge databases to the cloud. And it even works for cross-endian migrations, like AIX or Solaris to Oracle Linux.

If you want to learn more about endianness and transportable tablespaces, you should watch this video on our YouTube Channel:

Update 30 November 2020

Thanks to Mark for his comment (see below). I have added some additional useful information.

Further Reading

Move to the Cloud – Webinar

Yesterday, Mike Dietrich and I gave the final webinar in the Oracle Database 19c Upgrade Virtual Classroom series. It was about Move to the Cloud – not only for techies. Now, I say final – but we all know you should never say never. And in this case, it applies to final as well. We are already talking about subjects for a seventh webinar. If you have an interesting topic that you think we should cover, get in touch with me.

Oracle Database 19c Upgrade Virtual Classroom

Unfortunately, due to a technical glitch we skipped the part about migrating using transportable tablespaces and full transportable export/import. We uploaded the missing part to YouTube, so you can watch it.

For those interested, you can now download the slides. We had a lot of information to share, so browse through the deck to find the many hidden slides; they typically contain references and links to more information about a specific topic.

Within a week it should be possible to watch a recording of the webinar.

The Demos and Videos

The presentation we gave was a brand new one. We used as many demos and videos as we could – or rather, had time to prepare. We will post them on our YouTube channel as soon as possible; I suggest you subscribe to it, so you hear about new content as soon as it arrives. Further, we want to enhance the presentation even more, so we will be putting in more videos and demos. Let me know if there was a topic that could be improved with a video or demo.

Thank You

Thanks to everyone that participated yesterday. Happy migrating!

Debut on the Big Stage

Mike Dietrich and I will give two webinars in mid-October on database migrations. One of them will focus solely on migration to Oracle Cloud Infrastructure. The webinars are part of the Oracle Database 19c Upgrade Virtual Classroom series. If you are interested, you have to register.

Oracle Database 19c Upgrade Virtual Classroom

Migration Strategies: Insights, Tips and Secrets

Date: Tuesday 13 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Now it’s time for us to dig deeper. We’d like to offer you further insights and a deep dive from a technical point of view, starting with Data Pump and Transportable Tablespaces, then Full Transportable Export Import, and adding RMAN Incremental Backups to decrease the downtime. Best practices and real-world experience will round up this two-hour webinar.

Move to the Cloud (for techies)

Date: Thursday 15 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Whether you have databases in a cloud environment, or you plan to lift databases soon, this webinar is for you. We won't cover cloud solution benefits but will show you how you can migrate your database(s) into the Oracle Cloud. We'll start with Autonomous and also cover migration into VMs, Bare Metal, OCI, ExaCS and ExaCC. And we'll look at strategies for minimizing downtime, where ZDM can help. This two-hour webinar is not strictly for technical geeks ‒ but our focus will be on practical migration approaches.

Debut on the Big Stage

These two webinars will be my debut on the big stage. The previous webinars attracted huge interest and, unfortunately, some people couldn't join because the webinar had reached its maximum number of attendees. If you register, I recommend joining early to get your seat.

I have been presenting in person and virtually for some years now. Also, since I joined Oracle in January, I have been doing quite a few presentations. However, this is my first time in front of such a big crowd. I am excited and looking forward to it – but also a little intimidated. Luckily, I have super-star Mike Dietrich there to (virtually) hold my hand.

And, finally, I promise you: No marketing slides – just demos and details.

I hope to see you there!

Upgrading in the cloud – VM DB Systems – 11.2.0.4 to 19c (minimal downtime)

This blog post is a follow-up to a previous post. The procedure I described earlier was a simple approach that requires downtime while the entire database is moved from one VM DB System to another. If you have strict downtime requirements, you might not be able to use that approach, so in this blog post I will describe an alternative. I will show how you can use incremental backups to significantly lower the downtime required. Instead of doing a full backup while the database is down, my idea is to:

  • Take a level 0 backup while the source database is up and running
  • Restore the database on target system
  • These two steps take time – but I don’t care because the source database is still up
  • Take incremental backup on source database
  • Recover target database using incremental backup
  • Perform final incremental backup/recover after downtime has started

Overview of DB Systems and databases

My source environment is the red environment. The DB System is called SRCHOST11 and it has an 11.2.0.4 database that is called SALES. Due to the restrictions of the VM DB System I have to move the database to a new DB System in order to upgrade it. I have created a brand-new target environment – the green environment – on the release that I want to target. I have named the DB System TGTHOST19 and it has a multitenant database called CDB1. When I am done, the target environment – CDB1 – will also contain a PDB named SALES. The SALES PDB will be the original 11.2.0.4 database that has been upgraded and converted.

For a short period of time I need to spin up a second database instance on the target system. This second – or temporary – instance will be a duplicate of the source database (as non-CDB database) and I will upgrade it to the new release. Then I can plug in the database as a PDB in the precreated CDB database and get rid of the second/temporary instance. You will see how it works later in the blog post.

Backup Database

I need to exchange files between the source and the target systems, and I will use a File Storage service for that. Check out the documentation if you need help creating one – I created one already called upgsales and now I can mount it on my source system:

[opc@srchost11]$ sudo mkdir -p /mnt/upgsales
[opc@srchost11]$ sudo chmod 777 /mnt/upgsales/
[opc@srchost11]$ sudo mount x.x.x.x:/upgsales /mnt/upgsales

While the source database is still open and in use, I will start preparing the backup. First, the password file and wallet:

[oracle@srchost11]$ mkdir -p /mnt/upgsales/backup
[oracle@srchost11]$ cp /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME/ewallet.p12 /mnt/upgsales/
[oracle@srchost11]$ cp $ORACLE_HOME/dbs/orapw$ORACLE_SID /mnt/upgsales/orapw$ORACLE_SID

If you are really concerned about security, you can copy the wallet file directly to the target system – instead of via the File Storage service. The File Storage service itself is secured, but the fewer places you have a copy of the wallet – the better and the safer, I assume. Further, you can also encrypt traffic to and from the File Storage service.

Next, a PFile:

SALES SQL> CREATE PFILE='/mnt/upgsales/init.ora' FROM SPFILE;

And now I start a level 0 backup:

SALES RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/mnt/upgsales/backup/lvl0%U' PLUS ARCHIVELOG FORMAT '/mnt/upgsales/backup/arch%U' ;
SALES RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/mnt/upgsales/backup/controlfile';

Restore Database

On my target system, I need to access the File Storage service as well:

[opc@tgthost19]$ sudo mkdir -p /mnt/upgsales
[opc@tgthost19]$ sudo chmod 777 /mnt/upgsales/
[opc@tgthost19]$ sudo mount x.x.x.x:/upgsales /mnt/upgsales

Next, I will copy the password file and PFile into the target Oracle Home. I need that in order to start the temporary instance. Note, the name of the temporary instance will be SALES – the same as the source database SID:

[oracle@tgthost19]$ cp /mnt/upgsales/init.ora $ORACLE_HOME/dbs/initSALES.ora
[oracle@tgthost19]$ cp /mnt/upgsales/orapwSALES $ORACLE_HOME/dbs/orapwSALES

I also need to copy the wallet:

[oracle@tgthost19]$ mkdir -p /opt/oracle/dcs/commonstore/wallets/tde/SALES
[oracle@tgthost19]$ cp /mnt/upgsales/ewallet.p12 /opt/oracle/dcs/commonstore/wallets/tde/SALES/

And I need to create a directory for audit_file_dest:

[oracle@tgthost19]$ mkdir -p /u01/app/oracle/admin/SALES/adump

Now, I must edit the PFile:

[oracle@tgthost19]$ vi $ORACLE_HOME/dbs/initSALES.ora

And make the following changes:

  • Remove all the double-underscore parameters that contain the memory settings from the last restart, for instance SALES.__db_cache_size.
  • Set audit_file_dest='/u01/app/oracle/admin/SALES/adump'
  • Set control_files='+RECO/sales/controlfile/current.256.1048859635'
  • Set SALES.sga_target=6G
  • Set SALES.pga_aggregate_target=2G
  • Set db_unique_name='SALES'

I don't have an abundance of memory on this system, so I keep the memory settings low. Strictly speaking, you don't have to change db_unique_name, but I am doing it so it will be easier to clean up afterwards.
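
For reference, the relevant part of my edited initSALES.ora could end up looking something like this (the control file name is specific to my environment – yours will differ):

*.audit_file_dest='/u01/app/oracle/admin/SALES/adump'
*.control_files='+RECO/sales/controlfile/current.256.1048859635'
*.db_unique_name='SALES'
SALES.sga_target=6G
SALES.pga_aggregate_target=2G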

While I work on the temporary instance, I must shut down the other database – the pre-created one that eventually will hold the PDB. Most likely there is not enough memory on the system to support two databases:

[oracle@tgthost19]$ $ORACLE_HOME/bin/srvctl stop database -db $ORACLE_UNQNAME

Let’s start the temporary instance in NOMOUNT mode. Remember to set the environment:

[oracle@tgthost19]$ export ORACLE_UNQNAME=SALES
[oracle@tgthost19]$ export ORACLE_SID=SALES
[oracle@tgthost19]$ sqlplus / as sysdba

SALES SQL> STARTUP NOMOUNT

And finally, I can start the restore using RMAN. Once the database is mounted I must open the keystore, otherwise, the database can’t perform recovery. Then, I can use the catalog command to find the backup pieces in my staging area. And finally, do the restore:

[oracle@tgthost19]$ rman target /

SALES RMAN> RESTORE CONTROLFILE FROM '/mnt/upgsales/backup/controlfile';
SALES RMAN> ALTER DATABASE MOUNT;
SALES RMAN> sql 'ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN FORCE KEYSTORE IDENTIFIED BY <SALES-keystore-password>';
SALES RMAN> sql "ADMINISTER KEY MANAGEMENT CREATE LOCAL AUTO_LOGIN KEYSTORE FROM KEYSTORE ''/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME/'' IDENTIFIED BY <SALES-keystore-password>";
SALES RMAN> CATALOG START WITH '/mnt/upgsales/backup' NOPROMPT;
SALES RMAN> RESTORE DATABASE;

The SALES database is now restored on my target system. I will leave it there – unrecovered and in MOUNT mode so I can apply incremental backups later on.

Incremental Backup/Recover

I can do as many incremental backup/recover cycles as I want. But what matters is that I take one and apply it as close to the start of the downtime window as possible. This will significantly reduce the time it takes to perform the final incremental backup/recover later on.

On my source database, start an incremental backup:

[oracle@srchost11]$ rman target /

SALES RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/mnt/upgsales/backup/lvl1%U' PLUS ARCHIVELOG FORMAT '/mnt/upgsales/backup/arch%U';

Now, switch to the target system and recover using that backup. I use the CATALOG command to instruct RMAN to find new backups at the shared file storage.

[oracle@tgthost19]$ rman target /

RMAN SALES> CATALOG START WITH '/mnt/upgsales/backup' NOPROMPT;
RMAN SALES> RECOVER DATABASE;

RMAN will complain about a missing log file. But don't worry – this is expected and will be fixed later on:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/28/2020 09:06:51
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 60 and starting SCN of 793358

Down Time Starts

Now it is time to kick users off the database. Your precious downtime starts now.

Prepare Database For Upgrade

In my database I will create some sample data so I can verify the upgrade:

SALES SQL> CREATE USER UPG19 IDENTIFIED BY <secret-password>;
SALES SQL> ALTER USER UPG19 QUOTA UNLIMITED ON USERS;
SALES SQL> CREATE TABLE UPG19.ORDERS(ID NUMBER, CUSTOMER VARCHAR2(50), AMOUNT NUMBER) TABLESPACE USERS;
SALES SQL> INSERT INTO UPG19.ORDERS VALUES(1, 'John', 500);
SALES SQL> COMMIT;

I must prepare my database for upgrade on the source system. When I open the database on the target system, I can only do that in UPGRADE mode (because the database will be restored using the 19c Oracle Home), and in UPGRADE mode it is impossible to run the pre-upgrade tasks.

I will use the classic preupgrade.jar tool in this demo, but you could also use the newer AutoUpgrade. Always get the latest preupgrade tool from My Oracle Support. Upload the zip file (named preupgrade_19_cbuild_7_lf.zip in my demo) to the source system, extract to $ORACLE_HOME/rdbms/admin and do the pre-upgrade checks:

[oracle@srchost11]$ cp preupgrade_19_cbuild_7_lf.zip $ORACLE_HOME/rdbms/admin
[oracle@srchost11]$ cd $ORACLE_HOME/rdbms/admin
[oracle@srchost11]$ unzip preupgrade_19_cbuild_7_lf.zip

[oracle@srchost11]$ mkdir -p /mnt/upgsales/preupg_logs_SALES
[oracle@srchost11]$ cd /mnt/upgsales/preupg_logs_SALES
[oracle@srchost11]$ $ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/preupgrade.jar FILE TEXT DIR .

You must upload the same version of the preupgrade tool to the target system before you can run the post-upgrade fixups. Hence, save the zip file so you don’t have to download it again.

Next, I will review the report generated by the tool:

[oracle@srchost11]$ more /mnt/upgsales/preupg_logs_SALES/preupgrade.log

And I can execute the pre-upgrade fixups:

SALES SQL> SET SERVEROUT ON
SALES SQL> @/mnt/upgsales/preupg_logs_SALES/preupgrade_fixups.sql

Final Incremental Backup/Recover

I can now make the last incremental backup on my source system. To be absolutely sure nothing else gets into the source database from now on, I restart the database in restricted mode:

[oracle@srchost11]$ sqlplus / as sysdba

SALES SQL> SHUTDOWN IMMEDIATE
SALES SQL> STARTUP RESTRICT

Then I use RMAN to archive the current log file and start the last backup:

[oracle@srchost11]$ rman target /

SALES RMAN> sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
SALES RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/mnt/upgsales/backup/lvl1%U' PLUS ARCHIVELOG FORMAT '/mnt/upgsales/backup/arch%U';

Now, switch to the target system, catalog the new backups:

[oracle@tgthost19]$ rman target /

RMAN SALES> CATALOG START WITH '/mnt/upgsales/backup' NOPROMPT;

By default, RMAN will try to perform complete recovery. But I can’t do that because I don’t have the online redo logs. I must perform incomplete recovery. That sounds dangerous, but it is not. I archived the current log file after I had ensured that no one was using the system (I started in restricted mode, remember). To perform incomplete recovery, I must know at which sequence to stop. I will use the LIST command in RMAN to do that:

RMAN SALES> LIST BACKUP OF ARCHIVELOG ALL;

Sample output of the RMAN LIST command showing which sequence to recover until

I take the last available sequence and add one. In my case, I will recover until sequence 65:

RMAN SALES> run {
   SET UNTIL SEQUENCE 65 THREAD 1;
   RECOVER DATABASE;
}

Finally, you can switch to SQL*Plus and open the database. You could in theory also do that from RMAN, but you will likely hit ORA-04023: Object SYS.STANDARD could not be validated or authorized:

[oracle@tgthost19]$ sqlplus / as sysdba

SALES SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;

Upgrade Database

I must upload the same version of the preupgrade tool to the target Oracle Home, before I can do the post-upgrade fixups:

[oracle@tgthost19]$ cp preupgrade_19_cbuild_7_lf.zip $ORACLE_HOME/rdbms/admin
[oracle@tgthost19]$ cd $ORACLE_HOME/rdbms/admin
[oracle@tgthost19]$ unzip preupgrade_19_cbuild_7_lf.zip

I can now upgrade the database. Ensure to use the same prompt that has the environment set to the SALES database – the temporary instance:

[oracle@tgthost19]$ mkdir -p /mnt/upgsales/upg_logs_SALES
[oracle@tgthost19]$ dbupgrade -l /mnt/upgsales/upg_logs_SALES

Once the upgrade completes, I finish with the post-upgrade tasks:

SQL> STARTUP

SQL> --Recompile
SQL> @$ORACLE_HOME/rdbms/admin/utlrp
SQL> --Check outcome of upgrade
SQL> @$ORACLE_HOME/rdbms/admin/utlusts.sql
SQL> --Post-upgrade fixups
SQL> @/mnt/upgsales/preupg_logs_SALES/postupgrade_fixups.sql
SQL> --Timezone file upgrade
SQL> SET SERVEROUTPUT ON
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_check.sql
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_apply.sql

Last, have a look in the report generated by preupgrade.jar to see if there are any post-upgrade tasks that you have to execute:

[oracle@tgthost19]$ more /mnt/upgsales/preupg_logs_SALES/preupgrade.log

Plug In Database

Now that the temporary database is upgraded let’s look at what we need to prepare for the conversion to a PDB. First, I will export the encryption keys:

SALES SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" TO '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <SALES-keystore-password>;

And then I open the database in READ ONLY mode to create a manifest file. After that, I completely shut down the temporary database and, hopefully, it won’t be needed anymore:

SALES SQL> SHUTDOWN IMMEDIATE
SALES SQL> STARTUP MOUNT
SALES SQL> ALTER DATABASE OPEN READ ONLY;
SALES SQL> EXEC DBMS_PDB.DESCRIBE('/mnt/upgsales/manifest_sales.xml');
SALES SQL> SHUTDOWN IMMEDIATE

Now, I will restart CDB1 which I shut down previously. I will work in CDB1 for the rest of the blog post. Notice, how I am resetting my environment variables to the original values using the source command. You could also open a new SSH session instead. Anyway, just ensure that your environment is now set to work on the original database, CDB1:

[oracle@tgthost19]$ source ~/.bashrc
[oracle@tgthost19]$ env | grep ORA
[oracle@tgthost19]$ $ORACLE_HOME/bin/srvctl start database -db $ORACLE_UNQNAME

I check for plug in compatibility:

CDB1 SQL> SET SERVEROUT ON
CDB1 SQL> BEGIN 
    IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/mnt/upgsales/manifest_sales.xml', 'SALES') THEN
        DBMS_OUTPUT.PUT_LINE('SUCCESS');
    ELSE
        DBMS_OUTPUT.PUT_LINE('ERROR');
    END IF;
END;
/

Hopefully, it should read out SUCCESS. If not, you can query PDB_PLUG_IN_VIOLATIONS to find out why:

CDB1 SQL> SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

I can plug in the SALES database as a new PDB – which I also will call SALES. I am using the MOVE keyword to have my data files moved to a directory that matches the naming standard:

CDB1 SQL> CREATE PLUGGABLE DATABASE SALES USING '/mnt/upgsales/manifest_sales.xml' MOVE;
CDB1 SQL> ALTER PLUGGABLE DATABASE SALES OPEN;

I could also use the NOCOPY keyword and just use the data files from where they currently are placed. Later on, I could move the data files to a proper directory that follows the naming standard, and if I were on Enterprise Edition, I could even use online datafile move.
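
A sketch of such an online data file move (Enterprise Edition, 12c or later) – the file name and disk group below are just examples:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/SALES/users01.dbf' TO '+DATA';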

Next, I can switch to the SALES PDB and import my encryption keys from the file I made a little earlier. Note that I must enter the secret that I used during the export, and this time I have to enter the keystore password for CDB1:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" FROM '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <CDB1-keystore-password> WITH BACKUP;

Be aware that if your system tablespaces are encrypted, you might have to import the encryption key into CDB$ROOT as well before you can open the database.
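
In that case it is the same import command, just executed in the root container first – a sketch:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" FROM '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <CDB1-keystore-password> WITH BACKUP;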

Now, it is time to fully convert the database to a PDB:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
CDB1 SQL> SHUTDOWN IMMEDIATE
CDB1 SQL> STARTUP

Now, check and resolve any plug-in violations:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

And finally, ensure that OPEN_MODE=READ WRITE and RESTRICTED=NO. When that is the case, I can save the state of the PDB so it will auto-open whenever the CDB restarts:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> SELECT OPEN_MODE, RESTRICTED FROM V$PDBS WHERE NAME='SALES';
CDB1 SQL> ALTER PLUGGABLE DATABASE SALES SAVE STATE;

Verify that my test data exists:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> SELECT * FROM UPG19.ORDERS;

That’s it. The database is now fully upgraded to 19c and converted to a PDB. Be sure to:

  • Start a backup
  • Test your application
  • Adjust your connection strings
  • And what else your procedure mandates

Wrap-Up

Let’s clean up on the target system! I can remove the files and folders that were created to support the temporary instance:

[oracle@tgthost19]$ #audit dest
[oracle@tgthost19]$ rm -rf /u01/app/oracle/admin/SALES/adump
[oracle@tgthost19]$ #diag dest
[oracle@tgthost19]$ rm -rf /u01/app/oracle/diag/rdbms/sales
[oracle@tgthost19]$ #wallet
[oracle@tgthost19]$ rm -rf /opt/oracle/dcs/commonstore/wallets/tde/SALES
[oracle@tgthost19]$ #instance files
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/initSALES.ora
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/orapwSALES
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/spfileSALES.ora
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/hc_SALES.dat
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/lkSALES
[oracle@tgthost19]$ #exported master key
[oracle@tgthost19]$ rm /mnt/upgsales/key_export_SALES

Also, since I stored data files in ASM, I can delete those as well. Note that you have to log on as the grid user to do that:

[grid@tgthost19]$ asmcmd rm -rf +DATA/SALES
[grid@tgthost19]$ asmcmd rm -rf +RECO/SALES

I can also drop the PDB that gets created automatically when you deploy the new DB System. In my case it is named CDB1_PDB1:

SQL> ALTER PLUGGABLE DATABASE CDB1_PDB1 CLOSE;
SQL> DROP PLUGGABLE DATABASE CDB1_PDB1 INCLUDING DATAFILES;

Also, I can remove the File Storage service that I created. If you want to keep log files from the upgrade (or other files) be sure to copy them somewhere else.

Last, when I am convinced that my upgraded and converted database is doing well, I can terminate the entire source DB System.

Tweaks

If you have a license for any of the Enterprise Edition offerings, you might be able to use some of the features below to speed up backup and recovery. Before using any of them, be sure to check the license guide and confirm that you have a proper license. A sketch of the corresponding commands follows the list.

  • Block change tracking – reduces backup time because RMAN doesn’t need to scan the entire database.
  • Parallel backup and recovery – more channels, faster backups and faster restores.
  • Compression – reduces the size of the backups. Since you can apply incremental backups continuously the backup size should be fairly small anyway.
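
A sketch of the corresponding commands – enabling block change tracking this way assumes db_create_file_dest (OMF) is set, and the number of channels is just an example:

SALES SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

SALES RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
SALES RMAN> BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1 DATABASE FORMAT '/mnt/upgsales/backup/lvl1%U' PLUS ARCHIVELOG FORMAT '/mnt/upgsales/backup/arch%U';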

Disclaimer

I am not a backup expert (probably far from it). When writing this post, I struggled a lot with missing archive logs. I even had to call an old mentor for advice. But in the end, I decided just to include them in all backups. Possibly there is a die-hard RMAN expert out there who can tell me a better way of doing it. But it certainly doesn't hurt to include them…

If you come up with a better way, please leave a comment. I would love to learn more.

Conclusion

You can upgrade an 11.2.0.4 database to 19c by moving the database to a new VM DB System. You can reduce downtime by using incremental backups. You must also convert the database to a pluggable database, because multitenant is the only supported architecture for VM DB Systems on 19c.
