Zero Downtime Migration – Physical Online Migration to ExaCS

This will be an easy blog post. To migrate your Oracle Database to Exadata DB System (ExaCS), just follow the procedure from the DBCS blog post, and then execute these two commands on the target after the migration:

[root@tgthost]$ dbaascli registerdb prereqs --dbname [db_name] --db_unique_name [db_unique_name]
[root@tgthost]$ dbaascli registerdb begin --dbname [db_name] --db_unique_name [db_unique_name]

More Details, Please

Granted – the above statement is bold, and it is almost true. There are a few important details to share. First, have a look at Additional Information for Migrating to Exadata Cloud Service, which you can find in the Release Notes.

Target Environment

To get the full benefits of Exadata, you should be running RAC databases. Exadata and RAC are a perfect match, but it is up to you to decide. If your source database is already a RAC database (or RAC One Node), you must migrate to a RAC database. However, if your source database is a single instance, you have the option to either stay single instance or go RAC. If you go RAC, just create the target placeholder database as a RAC database, and everything else will happen automatically.

You must create a placeholder database on the target system. The placeholder database gets overwritten by ZDM during the migration, but it is initially used by ZDM to determine how you want to configure your target database in OCI. For example, the migrated database will be placed in the same Oracle Home as the target placeholder database. The architecture is determined the same way. In other words, if you create the target placeholder database as a RAC database, your source database is automatically converted to a RAC database during migration. If you create a single instance placeholder database, you get a single instance database.

Just like any other migration, when creating the placeholder database there are some things you should be aware of. On the OCI webpage you have to:

  • Set Database name to the DB_NAME of the source database.
  • Set Database version to the same as the source database.
  • Ensure the patch level of the Oracle Home matches that of the source system – or is higher.
  • Ensure that the Password matches the SYS password of the source database.


You can choose your own DB_UNIQUE_NAME – but it must differ from that of the source database. Select an Oracle Home that has the same or higher patch level than your source database. I recommend always migrating to the latest Release Update. If necessary, ZDM will automatically invoke datapatch after the switchover. The other parameters don’t matter – the database gets overwritten by ZDM anyway. Also, be aware that when using the OCI webpage you get a RAC database. There is no option to change it. But it is, after all, the perfect match for Exadata anyway.

Now, if you want more advanced options – like creating a single instance database – you can’t use the OCI webpage. You will have to use either dbaasapi or dbaascli. That gives you full control over the options – but they are not as easy to use as the webpage.

It’s A Wrap

I have created a video on YouTube that demos a migration to Exadata DB System.

Speaking of YouTube, I suggest that you subscribe to the Oracle Database Upgrades and Migrations YouTube channel so you never miss anything.

The Exadata Cloud Service is an awesome platform and it is really easy to migrate to it using Zero Downtime Migration. And converting to RAC is even easier.


Zero Downtime Migration – Physical Online Migration to DBCS

Let me show you how you can migrate an Oracle Database into OCI. My source database is a CDB running on 12.1.0.2. I want to migrate to the same version using the Physical Online method which uses Data Guard.

This procedure can be used to migrate to:

  • VM DB Systems
  • Bare Metal DB Systems
  • Exadata DB System (ExaCS)

If you want to migrate to Exadata Cloud at Customer or Exadata on-prem, you can use a lot from this post, but there are some differences which I will not cover.

Prerequisites

  1. I need a ZDM service host. It is called zdmhost and the user is called zdmuser.
  2. My source database is a CDB called SALES (DB_NAME), and the host is named srchost.
  3. I want to migrate to a VM DB System, and it is already created. I name it tgthost. I use the same name for the database, SALES. I must use ASM as storage management software.

Overview of the components in this demo

Prepare Source

Ensure the database is in ARCHIVELOG mode:

select log_mode from v$database;
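If the database is in NOARCHIVELOG mode, enabling archiving requires a short outage. A minimal sketch for a single instance database (a RAC database additionally needs cluster_database set to FALSE while switching):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;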

If database version is 12.2 or higher, a TDE keystore must be present – even if the database is not encrypted. A license is not needed to create a TDE keystore – not until data is getting encrypted. The keystore must be OPEN and the type is either AUTOLOGIN, LOCAL_AUTOLOGIN or PASSWORD. In a CDB, this applies to CDB$ROOT and all PDBs:

SELECT con_id, status, wallet_type FROM v$encryption_wallet;

If status is OPEN_NO_MASTER_KEY it means that no TDE master encryption key has been created and I will need to create one. Instructions are also in the documentation.
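For reference, a minimal sketch of creating a keystore and setting a master key in a CDB. The keystore location and password are assumptions – the location must match your sqlnet.ora (ENCRYPTION_WALLET_LOCATION) or WALLET_ROOT configuration:

SQL> --Create the keystore (path is a placeholder)
SQL> administer key management create keystore '/u01/app/oracle/admin/SALES/tde_wallet' identified by "MySecretPwd12";
SQL> --Open it in CDB$ROOT and all PDBs
SQL> administer key management set keystore open identified by "MySecretPwd12" container=all;
SQL> --Create the master encryption key; this does not encrypt any data
SQL> administer key management set key identified by "MySecretPwd12" with backup container=all;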

Ensure that the source host can resolve the network name of the target host. It is important to add two entries – one with the host name and one with the SCAN name (they should both point to the target host):

[root@srchost]$ echo -e "[ip address] tgthost" >> /etc/hosts
[root@srchost]$ echo -e "[ip address] tgthost-scan" >> /etc/hosts

Test connection over SQL*NET to the target:

[oracle@srchost]$ sqlplus system@tgthost-scan/[target-cdb-service-name]

Prepare Target

I need a target placeholder database. The placeholder target database is overwritten during migration, but it retains the overall configuration.

  • VM DB System: When I create a new system, a database is automatically created. This database is my target placeholder database.
  • Bare Metal and Exadata DB Systems: Create a new database on an existing system.

In any case:

  • DB_NAME must exactly match that of the source database – including the case.
  • DB_UNIQUE_NAME must be different.
  • Use the same SYS password as the source database.
  • Any database parameters for the target database, including SGA parameters, are maintained during the migration, and the migrated database runs with this same configuration.
  • Do not enable automatic backup (until after the migration has completed).
  • The architecture (single instance or RAC) of the target placeholder database determines the architecture of the OCI database. If I want my OCI database to be a RAC database, I simply create the target placeholder database as a RAC database, and the end result will be a RAC database.
  • COMPATIBLE must match that of the source database.
  • The patch level of the target database must be equal to or higher than that of the source database. If they differ, ZDM will automatically execute datapatch as part of the migration. Use $ORACLE_HOME/OPatch/opatch lsinventory to compare patch levels.
  • The source and target database must use the same time zone file version (check with: select * from v$timezone_file;). If they don’t match, I must upgrade the time zone file in the source database. If I have a lot of data of type TIMESTAMP WITH TIMEZONE and can’t afford the downtime the upgrade takes, I should consider using the logical online method instead. See the sketch after this list for a quick way to compare the environments.

There are a few other requirements listed in the document, but these are the most important ones.
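As mentioned in the list above, here is a quick sketch of comparing the two environments. Run the commands in both the source and the target Oracle Home; opatch lspatches gives a compact list of applied patches:

[oracle@srchost]$ $ORACLE_HOME/OPatch/opatch lspatches

SQL> select version from v$timezone_file;
SQL> show parameter compatible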

Like the source host, the target host must be able to resolve the network name of its counterpart:

[root@tgthost]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@tgthost]$ echo -e "[ip address] srchost-scan" >> /etc/hosts

And I test the connection:

[oracle@tgthost]$ sqlplus system@srchost-scan/[source-cdb-service-name]

Prepare ZDM

The ZDM service host must be able to resolve the host names of the source and target database host:

[root@zdmhost]$ echo "<source IP address> srchost" >> /etc/hosts
[root@zdmhost]$ echo "<target IP address> tgthost" >> /etc/hosts

I put my private SSH keys to the source and target database hosts into the ~/.ssh directory. Ensure permissions are set properly:

[zdmuser@zdmhost]$ cp srchost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/srchost_key_file 
[zdmuser@zdmhost]$ cp tgthost_key_file ~/.ssh
[zdmuser@zdmhost]$ chmod 400 ~/.ssh/tgthost_key_file 

Test the connection. I connect as opc, but you might have a different user. Read more about access to the database host in the documentation:

[zdmuser@zdmhost]$ ssh -i ~/.ssh/srchost_key_file opc@srchost date
[zdmuser@zdmhost]$ ssh -i ~/.ssh/tgthost_key_file opc@tgthost date

Prepare Response File

I will use a template response file as the basis for my response file:

[zdmuser@zdmhost]$ cp $ZDM_HOME/rhp/zdm/template/zdm_template.rsp ~/physical_online.rsp
[zdmuser@zdmhost]$ chmod 700 ~/physical_online.rsp

This is the response file that I end up with (see appendix B for an explanation):

TGT_DB_UNIQUE_NAME=SALES_fra2t4
MIGRATION_METHOD=ONLINE_PHYSICAL
DATA_TRANSFER_MEDIUM=OSS
PLATFORM_TYPE=VMDB
HOST=https://swiftobjectstorage.[region].oraclecloud.com/v1/[my-tenancy-object-storage-namespace]
OPC_CONTAINER=zdm-staging
SKIP_FALLBACK=TRUE
SHUTDOWN_SRC=TRUE

If your database is Oracle Database 11.2.0.4 be sure to set these response file parameters:

ZDM_RMAN_DIRECT_METHOD=ACTIVE_DUPLICATE
ZDM_USE_DG_BROKER=FALSE

Perform Evaluation

I am now ready to perform a migration evaluation. It is a dry run of the migration and performs various sanity checks. Nothing is changed during the evaluation:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/physical_online.rsp \
   -sourcenode srchost \
   -sourcedb SALES_fra3fw \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -targethome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
   -backupuser "daniel.overby.hansen@oracle.com" \
   -eval

A few comments:

  • sourcenode and targetnode are the host names of the source and target database hosts. Those names must be resolvable, which is why I put them into /etc/hosts already.
  • sourcedb is the DB_UNIQUE_NAME of the source database. My database is on ASM. If I didn’t use ASM, I would use sourcesid instead and specify the database SID.
  • srcarg1 is the name of the user that I connect as to the source database host. You might need to change that.
  • srcarg2 and tgtarg2 are the locations of the private key files to the source and target database host, respectively. The private key file must be usable by the user that is specified in srcarg1 and tgtarg1, which in this case is opc.

I am prompted for the SYS password to the source database and also the password for the backupuser (which is my OCI user). For the latter, please note that this is not my user password – it is an auth token. I also get a job ID.

The migration evaluation is then started, and I can use the job ID to monitor it:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli query job -jobid <job ID>


My colleague Sinan Petrus Toma showed how to loop:

[zdmuser@zdmhost]$ while :; do $ZDM_HOME/bin/zdmcli query job -jobid <job ID>; sleep 10; done

However, I prefer to get more details, so I tail the ZDM log file instead. This little one-liner finds the newest file and tails it:

[zdmuser@zdmhost]$ tail -n 50 -f "`ls -td /u01/app/oracle/chkbase/scheduled/*log | head -1`"

And I get this level of detail:

zdmhost: 2021-06-03T07:19:50.317Z : Starting zero downtime migrate operation ...
zdmhost: 2021-06-03T07:19:53.445Z : Executing phase ZDM_GET_SRC_INFO
zdmhost: 2021-06-03T07:19:53.446Z : Retrieving information from source node "srchost" ...
zdmhost: 2021-06-03T07:19:53.447Z : retrieving information about database "SALES_fra3fw" ...
zdmhost: 2021-06-03T07:20:02.743Z : Execution of phase ZDM_GET_SRC_INFO completed
zdmhost: 2021-06-03T07:20:02.826Z : Executing phase ZDM_GET_TGT_INFO
zdmhost: 2021-06-03T07:20:02.827Z : Retrieving information from target node "tgthost" ...
zdmhost: 2021-06-03T07:20:12.197Z : Determined value for parameter TGT_DATADG is '+DATA'
zdmhost: 2021-06-03T07:20:12.198Z : Determined value for parameter TGT_REDODG is '+RECO'
zdmhost: 2021-06-03T07:20:12.198Z : Determined value for parameter TGT_RECODG is '+RECO'
zdmhost: 2021-06-03T07:20:12.203Z : Execution of phase ZDM_GET_TGT_INFO completed

Eventually, I end up with a successful evaluation – the status of the evaluation job is SUCCEEDED.

Start Migration

When the evaluation passes, I can start the real migration. I am re-using the same command line, but I have removed the -eval option. Instead – and this is important – I am using -pauseafter to tell ZDM to pause the migration, just before the switchover takes place. Downtime has not started yet. The database is still open for business, but ZDM will copy the data and configure Data Guard:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp /home/zdmuser/physical_online.rsp \
   -sourcenode srchost \
   -sourcedb SALES_fra3fw \
   -srcauth zdmauth \
   -srcarg1 user:opc \
   -srcarg2 identity_file:/home/zdmuser/.ssh/srchost_key_file \
   -srcarg3 sudo_location:/usr/bin/sudo \
   -targetnode tgthost \
   -tgtauth zdmauth \
   -tgtarg1 user:opc \
   -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost_key_file \
   -tgtarg3 sudo_location:/usr/bin/sudo \
   -targethome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
   -backupuser "daniel.overby.hansen@oracle.com" \
   -pauseafter ZDM_CONFIGURE_DG_SRC

Again, I have to input the passwords as in eval mode. Use the job ID to monitor the progress or tail the log file. Note down the job ID. I need it later on to resume the migration.

When ZDM completes the phase ZDM_CONFIGURE_DG_SRC, it will pause and wait. I can verify it with zdmcli query job – the job status shows that ZDM is currently paused.

Now the standby database has been built in OCI. Redo gets transferred from my source database to the target database in OCI and is applied (see appendix D for monitoring queries). Also, this is a good time to test your new database.

Complete Migration

Now it is time to finalize the migration and switch over to the OCI target database. All I need to do is resume the paused ZDM job, using the job ID that was created when I started the migration:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli resume job -jobid <job ID>

ZDM will now ensure that all redo is sent and applied before switching over to the OCI target database. Again, I use the zdmcli query job command to monitor the progress, or I can tail the log file. After a short while the migration completes.

That’s it. I have now migrated into OCI!

I have a few post-migration tasks to carry out:

  • Configure automatic backup
  • Protect my new OCI database with a Data Guard in OCI
  • Take a peek at the alert log (just to be sure)


Appendix

A – RAC Databases

If your source or target database is a RAC database, then you only need access to one of the nodes. Select the node that you want to use and use the host name and SSH keys to that host. Also, ensure that the IP address you are using is not the floating one (VIP). In OCI that is referred to as the private IP.

B – Response File

Allow me to put some comments on the values:

  • DATA_TRANSFER_MEDIUM – When migrating to a DB System in OCI, OSS is the only option. It is also the default value, so I could omit the parameter. The other options are applicable for Exadata Cloud at Customer and Exadata on-prem.
  • HOST – Getting the right URL for the HOST parameter might be a little tricky, but check the documentation. In my demo I use the Frankfurt data center, and, thus, the region is set to eu-frankfurt-1. Visit the API documentation for a list of regions. Use OCI CLI and oci os ns get to find the tenancy object storage namespace. Alternatively, in the OCI Console (the web page) open the Profile menu and click Tenancy: <your tenancy name>. The namespace string is listed under Object Storage Settings. To read more, have a look at the OCI documentation. Thanks to Bartlomiej Sowa for putting in a comment with this information – much appreciated!
  • OPC_CONTAINER – The OCI Object Storage bucket that will be used as a staging area for the backup of the source database. I recommend using a separate bucket for each migration. It makes it a lot easier to clean up afterwards.
  • PLATFORM_TYPE – VMDB covers Virtual Machine and Bare Metal DB Services. It is also the default value, so I could omit the parameter.
  • SHUTDOWN_SRC – I choose to shut down the source database to ensure no one uses it unintentionally after the migration.
  • SKIP_FALLBACK – To make my demo simple, I choose not to configure fallback. Remember falling back to the source database requires a license for Advanced Security Option.
  • TGT_DB_UNIQUE_NAME – The DB_UNIQUE_NAME of the target placeholder database. It must be different from that of the source database.

C – Backup Strategy

During a migration with ZDM, you should keep your regular backup strategy. Keep doing the same backups as you did before, but avoid having ZDM backups and regular backups run at the same time. Also, if you are dealing with a RAC database, be sure to put the snapshot control file on shared storage. Otherwise, you might get ORA-00245 errors during backups.
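For reference, a sketch of moving the snapshot control file to shared storage with RMAN – the disk group and file name are assumptions:

RMAN> configure snapshot controlfile name to '+DATA/snapcf_SALES.f';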

D – Monitoring

Use these queries to monitor the redo apply. On source/primary database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log GROUP BY thread#;

Target/standby database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log WHERE applied='YES' GROUP BY thread#;
--MRP process should be 'APPLYING_LOG'
SELECT process, status, sequence# FROM v$managed_standby;
SELECT * FROM v$archive_gap;

E – Troubleshooting

PRCZ-4001

If you run into this error:

PRCZ-4001 : failed to execute command "/bin/uname" using the privileged execution plugin "zdmauth" on nodes "doverbyh-zdm-tgt" within 120 seconds
PRCZ-2006 : Unable to establish SSH connection to node "doverbyh-zdm-tgt" to execute command "/bin/uname"
No more authentication methods available

Check your key files. They must be in RSA/PEM format (the private key must start with -----BEGIN RSA PRIVATE KEY-----).
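If your key is in the newer OpenSSH format, a sketch of converting it in place to RSA/PEM (requires OpenSSH 7.8 or newer; take a backup first, and press Enter at the passphrase prompts if the key has none):

[zdmuser@zdmhost]$ cp ~/.ssh/srchost_key_file ~/.ssh/srchost_key_file.bak
[zdmuser@zdmhost]$ ssh-keygen -p -m PEM -f ~/.ssh/srchost_key_file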

ZDM_OBC_INSTALL_CREDENTIALS_INVALID

If you run into this error:

srchost: 07:08:00.000: Validating object store credentials..
srchost: <ERR_FILE><Facility>PRGO</Facility><ID>ZDM_OBC_INSTALL_CREDENTIALS_INVALID</ID></ERR_FILE>

Check your credentials to OCI. Remember, when prompted for the password of your OCI account, it is an auth token, not your password (even though the prompt text is misleading).

ZDM_GET_SRC_INFO

If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the source database host is added to the known_hosts file on the ZDM service host. Also, you can verify connectivity by trying to log on via SSH:

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-source-host>

ZDM_GET_TGT_INFO

If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the target database host is added to the known_hosts file on the ZDM service host. Again, verify connectivity by trying to log on via SSH:

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-target-host>
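For both of these phases, a sketch of populating known_hosts without an interactive logon, using ssh-keyscan:

[zdmuser@zdmhost]$ ssh-keyscan srchost >> ~/.ssh/known_hosts
[zdmuser@zdmhost]$ ssh-keyscan tgthost >> ~/.ssh/known_hosts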

ZDM_SWITCHOVER_SRC

If the migration is stuck at the phase ZDM_SWITCHOVER_SRC and you see the following in the ZDM log file:

zdmhost213: 2022-05-10T18:01:22.565Z : Executing phase ZDM_SWITCHOVER_SRC
zdmhost213: 2022-05-10T18:01:22.566Z : Switching database SALES2_fra3cx on the source node srchost to standby role ...
zdmhost213: 2022-05-10T18:01:22.566Z : checking if source database is ready for switching role...
srchost: 2022-05-10T18:01:35.340Z : Validating database SALES2_fra3cx role is PRIMARY...
srchost: 2022-05-10T18:01:35.742Z : Validating database SALES2_fra3cx is in open mode...
srchost: 2022-05-10T18:01:36.144Z : Waiting for SALES2_fra3cx to catch up, this could take a few minutes...

Then proceed to the primary and standby databases and investigate whether there are any log shipping or log apply problems. The root cause can be many different things, for instance conflicts when negotiating network security options.
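A few queries I would start with on the primary database – v$archive_dest shows transport errors for each destination, and v$dataguard_status shows recent Data Guard messages:

SQL> select dest_id, status, error from v$archive_dest where dest_id in (1, 2);
SQL> select timestamp, message from v$dataguard_status order by timestamp;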

PRGZ-3420 / ZDM_MANIFEST_TO_CLOUD

Check the ZDM log file stored on the ZDM service host in $ZDM_BASE/chkbase/scheduled. If you find this error:

zdmhost213: 2022-05-10T18:20:34.330Z : Executing phase ZDM_MANIFEST_TO_CLOUD
zdmhost213: 2022-05-10T18:20:34.331Z : registering database "SALES2_OCI" to Oracle Cloud
tgthost: 2022-05-10T18:20:50.895Z : Updating DBAAS wallet keys ...
####################################################################
PRGZ-3420 : failed to modify keystore entry "oracle.security.client.password1" in wallet "/var/opt/oracle/dbaas_acfs/sales2/db_wallet".

Then proceed to the target host and find the log file from that phase. It is located in $ORACLE_BASE/zdm/zdm_<target_db_unique_name>_<zdm_job_id>/zdm/log. Do a grep:

cd /u02/app/oracle/zdm/zdm_SALES2_OCI_12/zdm/log
grep "cloud registry" zdm_manifest_to_cloud_*.log

If you find the following lines in the log file:

grep "Error: Missing cloud registry" zdm_manifest_to_cloud_*.log
zdm_manifest_to_cloud_337115.log:Error: Missing cloud registry file SALES2.ini
zdm_manifest_to_cloud_337115.log:Error: Missing cloud registry file SALES2.ini

You have most likely used the wrong case of DB_NAME on the target database. You choose that in the OCI console in the field Database name. In this case, DB_NAME of the source database is SALES2, but I had incorrectly used sales2 (lowercase) for the target database. Normally, DB_NAME is case insensitive – and it is, to the database itself – but the OCI cloud tooling is case sensitive. Looking in /var/opt/oracle/creg on the target database, I can find a file called sales2.ini, but ZDM is looking for SALES2.ini.

How do you solve this? If this is a test database, then scratch everything and start all over. But bear in mind that the switchover has already taken place, and the OCI database is now the primary database. For a production database this is tricky. The database has been migrated to OCI, it works, and all your data is fine on the target database. But the OCI cloud tooling does not work properly. You need to open an SR and get assistance from support.

PRGO-4086 : failed to query the "VALUE" details from the view "V$ARCHIVE_GAP" for database "target_db_name"

If the switchover fails during this query:

select count(*) from v$archive_gap

And your target database is 11.2.0.4 or 12.1.0.2, you have hit bug 18411339 (Low performance or ORA-1220 for query on V$ARCHIVE_GAP on 11.2.0.4 or later release). Apply the patch on the target database and retry the ZDM migration. You can also open the target database before you start the switchover process (requires Active Data Guard because redo apply must be on). The above query fails only when the standby database (the target database) is mounted. It works fine on an open database.
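A sketch of the workaround mentioned above – open the target database read-only and restart redo apply (this combination is what requires Active Data Guard):

SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> alter database recover managed standby database disconnect from session;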

Zero Downtime Migration – Physical Online Migration

You can migrate a database with Zero Downtime Migration (ZDM) using the Physical Online method. A standby database is built in OCI and kept in sync via redo apply. When you are ready, a switchover is all it takes to complete the migration. This way, even very large databases can be migrated with no or very little downtime.

Concept of physical online migration

With the Physical Online method, you can target:

  • Virtual Machine DB System
  • Bare Metal DB System
  • Exadata DB System (ExaCS)
  • Exadata Cloud at Customer (ExaCC)
  • Exadata (on-prem)

Since the method uses Data Guard as the migration vehicle, it can only be used with Enterprise Edition databases.

Benefits

  • Personally, I like this approach because it builds on technology that most of us know already. Data Guard is heavily in use in most organizations. Although ZDM does all the heavy lifting, it is nice to know what happens underneath the hood.

  • To complete the migration all you need is a regular Data Guard switchover (which ZDM also takes care of). A switchover operation ensures that there will be no data loss at all. In addition, if you have a properly configured application, it won’t experience downtime. Just a brown-out while the switchover takes place.

  • This method has an excellent fallback possibility. When you switch over to the OCI database, the redo flow is reversed. Now, the on-prem database is a standby database, and it is kept in sync via redo from the OCI database. However, this option requires a license for Advanced Security Option on your on-prem system. Read more about it later on.

  • You migrate the entire database. All the internals are brought to the cloud as well. This includes:

    • AWR
    • SQL Plan Baseline
    • SQL Profiles
    • Public objects
    • Etc.
  • The initial backup of the source, on-prem database will use RMAN Compression. It will drastically reduce the size of the backup, and thus the amount of data that you must transport to the cloud. And the really good thing: you are allowed to use RMAN Compression with ZDM for migration purposes without paying license for the Advanced Compression Option.

  • You can customize the RMAN backup. You know your system best, so it is possible to tweak the number of RMAN channels used and the compression algorithm applied. Default is 10 channels and medium compression (see the sketch after this list).

  • When ZDM takes the full backup of your source database (Exadata on-prem and ExaCC excluded), it uses Oracle Database Cloud Backup Module for OCI to allocate a special sbt channel. The backup is sent directly to OCI Object Storage. Thus, you don’t need any additional disk space on your source system to hold the backup. The same applies on the target system.

  • When migrating to ExaCC or Exadata on-prem you can use an existing backup. Use the response file option DATA_TRANSFER_MEDIUM=EXTBACKUP. Also, if you have a Zero Data Loss Recovery Appliance (ZDLRA) you can restore directly from it, without taking a full backup first. If you are migrating to any other platform, ZDM will need to take a full backup as part of the workflow.

  • The standby database is kept in sync via redo apply. This means that there are no restrictions in supported data types. Further, DDL in any form is supported and even with a heavy workload on the primary database, your standby database should be able to keep up.

  • You can migrate to a higher patch level. ZDM will automatically invoke datapatch for you on the OCI database. But you must handle the on-prem database yourself. After switching over to the OCI database and datapatch has been executed, you should patch the on-prem Oracle Home to the same patch level.
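As an illustration of the RMAN tweaks mentioned above, a hedged sketch of response file entries controlling the number of RMAN channels. The parameter names are taken from the zdm_template.rsp response file template as I recall it – verify them against the template shipped with your ZDM version before use:

#Number of RMAN channels for the full backup of the source (assumption: from zdm_template.rsp)
ZDM_BACKUP_FULL_SRC_PROC=6
#Number of RMAN channels for the incremental backup (assumption: from zdm_template.rsp)
ZDM_BACKUP_INCREMENTAL_SRC_PROC=6

The compression algorithm can be changed in a similar way; check zdm_template.rsp for the exact parameter name.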

Considerations

  • You can migrate to the same version only. This is a restriction of Data Guard. If you also need to upgrade the database, you must do that after the migration, which will incur additional downtime.
  • The same applies to PDB conversion. ZDM can run the noncdb_to_pdb.sql script for you, but that will incur downtime as well. If you decide to convert to PDB, you need to develop an alternate fallback plan (such as Data Pump or Transportable Tablespaces). The PDB conversion is irreversible, and your on-prem database will be useless as a fallback option.
  • It is not possible to migrate between editions.
  • The entire database is migrated. I listed this as a benefit, but it has a flipside as well. It is not possible to perform any transformation during the migration, e.g. converting old BasicFile LOBs to SecureFile LOBs. Also, you bring over any old baggage in your database. Sometimes it is nice to start from scratch, because garbage tends to accumulate in a database over time.
  • If you have a Standard Edition database, you can’t use the Physical Online method. Only Physical Offline is supported.
  • You can’t configure Automatic backup on your target database until after the migration has been completed.
  • If you need to protect your OCI database with Data Guard, then you must build the standby database after the migration has been completed. Normally, you would use a cascading standby database to keep the target database protected by Data Guard even after the migration has been completed. But currently the OCI tooling does not support that option.
  • If your source database is not encrypted, you must create a keystore that can be used later on for TDE Tablespace Encryption. The keystore is created in the source database with no downtime, and you don’t need a separate license just for creating a keystore. You do not have to actually encrypt your source database until it reaches the cloud.

Fallback

Using this approach, you have a great fallback option. When you switch over to the OCI database, then the redo flow is reversed and the source, on-prem database is now the standby database. If you need to fallback, simply issue another switchover (you could call that a switchback) and start to use the on-prem database again – with no data loss. See step 5 in MAA Practices for Cloud Migration Using ZDM (Doc ID 2562063.1).

There is a catch, however. The OCI database is encrypted using TDE Tablespace Encryption. Any redo generated for an encrypted tablespace is also encrypted. This means that the on-prem database must be able to decrypt the redo before it can be applied. That requires a license for the Advanced Security Option. When you are migrating your database with ZDM, you can get a 1-year limited license for Advanced Security Option. This allows you to decrypt the encrypted redo on the source database, thus providing you with a great fallback option.

Conclusion

Using the Physical Online method in ZDM is a straightforward way of migrating your database to OCI. It uses Data Guard, which is very familiar to most of us. The method does, however, have some limitations, and you can’t target Autonomous Databases.

Want to Know More

If you want to know more about migrations in general, I suggest that you take a look at our webinar Migration Strategies – Insights, Tips and Secrets.



DataPump Export from Data Guard

Exporting data from your production database can be a pain for several reasons:

  • You probably want a consistent export, so you set the Data Pump parameter FLASHBACK_TIME. That requires a lot of undo space. If there is heavy activity on the database, you risk running into ORA-01555.
  • You want to export the data quickly using the parallel options in Data Pump. The quicker you export, the less susceptible you are to ORA-01555. But parallel uses more resources.

If you have an Active Data Guard, you can export your data from the standby database. It should be straightforward, right? An export is after all just reading data…

But Data Pump is not just a simple export tool. It is quite advanced – and to support all that functionality it requires a master table to be created, which is used to coordinate the operation. Further, it uses Advanced Queueing (AQ) for communicating between the processes. And AQ requires a writable database.

But how about DML Redirect that was introduced in Oracle Database 19c? Can it be used to handle those few DMLs? No, because creation of the master table is a DDL – not a DML. If you try, you will fail:

$  expdp system@localhost:1521/pdb1 schemas=SH directory=mydir

Export: Release 19.0.0.0.0 - Production on Mon Apr 12 18:19:22 2021
Version 19.10.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Password:

Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
ORA-31626: job does not exist
ORA-31633: unable to create master table "SYSTEM.SYS_EXPORT_SCHEMA_05"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1163
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.KUPV$FT", line 1056
ORA-06512: at "SYS.KUPV$FT", line 1044

Solution

I turned on my bat signal and got help from my fellow Product Manager, Pieter Van Puymbroeck.

Use a snapshot standby, he said. Of course – Snapshot Standby is part of Enterprise Edition, so if you have Data Guard, you also have Snapshot Standby. If you need a quick recap on what a snapshot standby database is, you should check out this video.

First, I convert my physical standby database (named CDB19_fra3zt) into a snapshot standby database:

DGMGRL> convert database 'CDB19_fra3zt' to snapshot standby;

Next, create a directory object that can be used with Data Pump:

SQL> create directory mydir as '/tmp';

Finally, start an export. Notice that I haven’t set FLASHBACK_TIME. When exporting from a snapshot standby there are probably no other users on the database, so you don’t need to worry about consistency of the export:

$ expdp system schemas=sales directory=mydir

My data has now been exported, and it is time to convert my snapshot standby database back into a physical standby database to properly protect my precious database:

DGMGRL> convert database 'CDB19_fra3zt' to physical standby;

But

Will this jeopardize my primary database? No, but…

Your primary database is still protected, because log files are still sent to the standby database – they are just not applied. But what will happen if I need to switch over or fail over?

  1. Convert the snapshot standby database back into a physical standby database
  2. That will implicitly issue a Flashback Database and revert all the changes made
  3. Then catch up with the primary by applying all the log files
  4. Finally, complete the switchover or failover

Bottom line: it will take longer! The flashback operation is really fast and should be done within minutes. After all, Data Pump isn’t making that many changes – only the master table and some AQ stuff. How much time then to do log apply? That depends entirely on how big the apply lag is.
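To see how far behind the standby is, query v$dataguard_stats on the standby database:

SQL> select name, value, time_computed from v$dataguard_stats where name in ('transport lag', 'apply lag');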

So, if you choose to go down this path, be prepared that in the event of a switchover or a failover, it will take a little longer until the standby is open for business. Unless you have two standby databases – then you will still have one ready to jump in immediately.

GoldenGate

If you need the Data Pump export to perform an initial load for GoldenGate, then you might want to control the SCN at which the export is happening. If so, cancel redo apply on the standby database and roll forward to the desired SCN:

alter database recover managed standby database cancel;
alter database recover managed standby database until change n;

Before converting the standby database to snapshot standby:

alter database convert to snapshot standby;

Later on, when you need to start replication in GoldenGate you can use the following option:

START REPLICAT ATCSN n
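For completeness, one way of picking the SCN n: query it on the primary database before cancelling redo apply, and use the same value in the recover ... until change command and in START REPLICAT ATCSN:

SQL> select current_scn from v$database;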

Conclusion

You can export from your Data Guard standby, if you temporarily convert it to a snapshot standby. Be prepared that a switchover or failover operation will take longer.

If you want to know more about Data Pump, you should check out our webinar Migration Strategies – Insights, Tips and Secrets.

And finally, speaking of Data Pump, remember to get the Data Pump bundle patch (bug 32551008) as Roy mentioned on Twitter.

How to Upgrade with AutoUpgrade and Data Guard

You can upgrade your database to a new release with AutoUpgrade and keep the Data Guard setup intact. The standby database(s) can be upgraded implicitly via the redo from the primary database, and there is no need to rebuild the standby database after upgrade.

The process, in overview: stop the standby, upgrade the primary, and let the standby catch up via redo.

In the following, I will be using this setup: a primary database (DB_UNIQUE_NAME PROD1) on host bm1 and a physical standby database (DB_UNIQUE_NAME PROD2) on host bm2, upgrading from Oracle Database 18c to 19c.

In advance, you should install the new Oracle Home on both primary and standby host. The two Oracle Homes should have the same patches applied, and I recommend that you always apply the latest Release Update.

Before Upgrade

You must use AutoUpgrade version 21.1.1 or newer. A newer version of AutoUpgrade can upgrade to older database releases as well, so don’t worry if the AutoUpgrade version doesn’t match the Oracle Database release that you are upgrading to.

AutoUpgrade can handle a Data Guard environment that is manually configured or via Data Guard Broker.

The procedure starts right before you start AutoUpgrade in DEPLOY mode (or alternatively in FIXUPS mode). Downtime has started, and users are no longer connected to the database.

Stop Data Guard

On the standby database, generate commands to copy the Data Guard broker config files. Don’t execute them yet:

PROD2 SQL> select 'cp ' || value || ' /tmp' as cmd from v$parameter where name like 'dg_broker_config_file%';

Shut down the standby database. Disabling the database is strictly speaking not necessary, but it is a better-safe-than-sorry approach:

[oracle@bm2]$ $ORACLE_HOME/bin/srvctl stop database -d PROD2 -stopoption immediate
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl disable database -d PROD2

If you are not managing the database with Grid Infrastructure (GI), just do a regular shutdown:

PROD2 SQL> shutdown immediate

Now, copy the broker config files into a temporary location, using the cp commands that were generated earlier:

[oracle@bm2]$ cp <broker_config_1> /tmp
[oracle@bm2]$ cp <broker_config_2> /tmp

Since redo transport has not been deferred yet in the primary database, it will complain about losing connection to the standby database. The alert log will contain an entry similar to this:

2020-12-03T06:30:12.751693+00:00
TT03 (PID:47477): Attempting LAD:2 network reconnect (3113)
TT03 (PID:47477): LAD:2 network reconnect abandoned
2020-12-03T06:30:12.752104+00:00
Errors in file /u01/app/oracle/diag/rdbms/prod1/PROD/trace/PROD_tt03_47477.trc:
ORA-03113: end-of-file on communication channel
TT03 (PID:47477): Error 3113 for LNO:3 to 'prod2'

It can be safely ignored, because it is after all a maintenance window and the database is about to be upgraded. Your monitoring system might detect this and start to complain.

Upgrade

Upgrade the database by starting AutoUpgrade in DEPLOY mode. AutoUpgrade will defer redo transport and stop Data Guard broker (if in use) automatically:

java -jar autoupgrade.jar -config PROD.cfg -mode deploy

After the upgrade, you should perform the necessary tests to validate the new database release. Only when you are convinced to go live on the new release should you continue.

Remember that the standby database was left behind before we started touching anything, so if all else fails, simply restart the standby database and connect your users to it.
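Restarting the standby database is just a matter of undoing the earlier stop and disable – a sketch for a GI-managed database (note that the standby still runs in the old Oracle Home):

[oracle@bm2]$ #Use the old Oracle Home - the standby was not upgraded
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl enable database -d PROD2
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl start database -d PROD2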

After Upgrade

Restart Data Guard

Update the listener on the standby host. Be sure to update the Oracle Home information in the listener.ora entry. Note, that your listener.ora might be stored in a non-default location, so use lsnrctl status to get the location. Finally, reload the listener:

[grid@bm2]$ $GRID_HOME/bin/lsnrctl status
[grid@bm2]$ vi $GRID_HOME/network/admin/listener.ora
[grid@bm2]$ $GRID_HOME/bin/lsnrctl reload

For the next commands, I will be using the same prompt, and I will need the following environment variables:

[oracle@bm2]$ export OLD_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
[oracle@bm2]$ export NEW_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
[oracle@bm2]$ export ORACLE_HOME=$NEW_HOME
[oracle@bm2]$ export ORACLE_SID=PROD
[oracle@bm2]$ #Set ORACLE_UNQNAME to DB_UNIQUE_NAME
[oracle@bm2]$ export ORACLE_UNQNAME=PROD2 

Next, if the standby database is using TNS_ADMIN in the default location ($ORACLE_HOME/network/admin), then be sure to copy the relevant TNS aliases into the new tnsnames.ora. There should be TNS aliases to the primary and standby database. Or, if there are no other databases in the same Oracle Home, you can simply copy the files:

[oracle@bm2]$ #Back up files
[oracle@bm2]$ cp $NEW_HOME/network/admin/sqlnet.ora $NEW_HOME/network/admin/sqlnet.ora.backup
[oracle@bm2]$ cp $NEW_HOME/network/admin/tnsnames.ora $NEW_HOME/network/admin/tnsnames.ora.backup
[oracle@bm2]$ #Copy from old to new home
[oracle@bm2]$ cp $OLD_HOME/network/admin/sqlnet.ora $NEW_HOME/network/admin
[oracle@bm2]$ cp $OLD_HOME/network/admin/tnsnames.ora $NEW_HOME/network/admin

Now, you can edit /etc/oratab and update the information about the Oracle Home to match the new Oracle Home. In my example, the database is managed by GI, so I should not configure auto-start in /etc/oratab. If you are not managing your databases with GI, you probably want to configure the standby database to start automatically (see appendix):

[oracle@bm2]$ #Backup file
[oracle@bm2]$ cp /etc/oratab /tmp/oratab
[oracle@bm2]$ #Use sed to remove the line that starts with ORACLE_SID
[oracle@bm2]$ sed '/^'"$ORACLE_SID"':/d' /tmp/oratab > /etc/oratab
[oracle@bm2]$ #Add new entry
[oracle@bm2]$ echo "$ORACLE_SID:$ORACLE_HOME:N" >> /etc/oratab

Copy SPFile and password file to the new Oracle Home:

[oracle@bm2]$ cp $OLD_HOME/dbs/orapw$ORACLE_SID $ORACLE_HOME/dbs
[oracle@bm2]$ cp $OLD_HOME/dbs/spfile$ORACLE_SID.ora $ORACLE_HOME/dbs

Copy the broker config files into the new Oracle Home. If you store your broker config files outside of the Oracle Home, this might not be necessary for you:

[oracle@bm2]$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
[oracle@bm2]$ export ORACLE_UNQNAME=PROD2 
[oracle@bm2]$ cp /tmp/dr1$ORACLE_UNQNAME.dat $ORACLE_HOME/dbs
[oracle@bm2]$ cp /tmp/dr2$ORACLE_UNQNAME.dat $ORACLE_HOME/dbs

Upgrade the database in GI, which updates the Oracle Home information so GI will start the database in the correct Oracle Home. Next, re-enable and start the database:

[oracle@bm2]$ $ORACLE_HOME/bin/srvctl upgrade database -db $ORACLE_UNQNAME -oraclehome $ORACLE_HOME
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl modify database -db $ORACLE_UNQNAME -startoption MOUNT -role PHYSICAL_STANDBY
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl enable database -d $ORACLE_UNQNAME
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl start database -d $ORACLE_UNQNAME

Or, if you are not using GI, simply start the database in the new Oracle Home:

PROD2 SQL> startup mount

Re-enable Data Guard

To re-enable the Data Guard config use DG CLI:

[oracle@bm1]$ $ORACLE_HOME/bin/dgmgrl sys@PROD1

And re-enable redo transport:

DGMGRL SYS@PROD1> edit database prod1 set state=transport-on;

Now redo is shipping to the standby database, and it will be applied there. When the redo that was generated during the upgrade is applied on the standby database, the standby is implicitly upgraded. You can monitor the progress by looking at the Apply Lag information. The Apply Lag will decrease until the standby database eventually catches up, and the two are fully synchronized:

DGMGRL SYS@PROD1> show database prod2;


Test

Use the broker to ensure everything is fine:

DGMGRL SYS@PROD1> show configuration
DGMGRL SYS@PROD1> show database prod1
DGMGRL SYS@PROD1> show database prod2

You should have SUCCESS listed for both databases.

Let’s validate the setup and try to make a switchover. The database will not allow a switchover if there are any problems in the Data Guard setup, so it is a good way of checking that things are fine:

DGMGRL SYS@PROD1> validate database prod1
DGMGRL SYS@PROD1> validate database prod2
DGMGRL SYS@PROD1> switchover to prod2


If you don’t use Data Guard Broker, you can use regular SQL and SQL*Plus to verify the Data Guard environment.

Conclusion

It is actually not that complicated to upgrade your database, even if it is part of a Data Guard setup. And with version 21.1.1 of AutoUpgrade, it has become even easier. A little extra legwork is needed to take care of the standby database. But the good thing is that your Data Guard setup is maintained throughout the process.

I made a video on YouTube that shows the procedure. And while you are there, I suggest that you subscribe to our channel.

Appendix

Config File

For your reference this is the config file, that I used. It contains only the required information. All other parameters have a default value:

upg1.sid=PROD
upg1.source_home=/u01/app/oracle/product/18.0.0.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_1

Synchronize Standby Database

When you run AutoUpgrade in ANALYZE mode and check the preupgrade report, you will find this information message:

[checkname]          SYNC_STANDBY_DB
[stage]              PRECHECKS
[fixup_available]    NO
[runfix]             N/A
[severity]           INFO
[action]             Synchronize your standby databases before database upgrade.
[broken rule]        The standby database is not currently synchronized with its associated primary database.
[rule]               To keep data in the source primary database synchronized with its associated standby databases, all standby databases must be synchronized before database upgrade.  See My Oracle Support Note 2064281.1 for details.

What does it say? Basically, all redo generated on the primary database before the downtime window started should be sent to and applied on the standby database. This way, your standby database is ready to replace your primary database at any time, if something goes really wrong. Strictly speaking it is not necessary, but it is strongly recommended.
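A simple way of verifying it is to compare the highest archived sequence on the primary with the highest applied sequence on the standby – the same queries that are useful for general Data Guard monitoring:

PROD1 SQL> select thread#, max(sequence#) from v$archived_log group by thread#;
PROD2 SQL> select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;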

GI-managed Database in /etc/oratab

When the database is managed by GI, you don’t need to have it configured in /etc/oratab. Personally, I like to have it anyway, because then you have a clear overview of which databases are on the server, and you can use /etc/oratab to set your environment, like when you are using the oraenv script.

I know that die-hard GI folks might roll their eyes when I say it, but I like it this way.


New Version of AutoUpgrade

Our developers just published a new version of AutoUpgrade. Head over to My Oracle Support and download version 21.1.1.

The version says 21, but it can still upgrade your databases to lower releases. So, you should use this new version to upgrade to Oracle Database 19c as well.


What’s New

A total of 48 enhancements and bug fixes made it into the new release. It has been 35 working days since the last release (and that includes Thanksgiving week), which means that the team has put in – on average – close to 1.5 changes into AutoUpgrade every single working day. I find that quite impressive. And it really supports our message that you should always use the latest version of AutoUpgrade.

In My Oracle Support note 2485457.1 you can find a complete change log. You can also visit the 21c New Feature Guide for additional information.

Unplug/plug Upgrade

The first thing that I would like to highlight is unplug/plug upgrade. Instead of upgrading an entire CDB, you can now choose to unplug one or more PDBs, plug them into a higher release CDB, and upgrade the individual PDBs.


How do you specify to upgrade only one PDB? Use target_cdb and pdbs:

upg1.sid=CDB1
upg1.pdbs=PDB3
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.target_cdb=CDB2

When doing unplug/plug upgrades, you must be aware of the COMPATIBLE setting in the higher release CDB. If COMPATIBLE is higher, then when the PDB plugs in, COMPATIBLE will silently and automatically be raised by the CDB. This means that you lose the capability of doing downgrades and using Flashback Database.
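A quick check of COMPATIBLE, to run in both the source CDB and the target CDB before plugging in:

SQL> select value from v$parameter where name = 'compatible';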

You can read more about unplug/plug upgrades in the documentation.

AutoUpgrade and Oracle Data Guard

In addition, AutoUpgrade now detects that you are upgrading a primary database in a Data Guard setup. In that case, it will automatically defer redo log transport to the standby databases and disable the Data Guard broker. After the upgrade, the broker is automatically restarted, but you need to manually reconfigure the standby databases, and eventually re-enable redo log transport.

You might ask: why doesn’t AutoUpgrade handle it all for me? This is to preserve your fallback capabilities. Even after a successful upgrade, you might still want to revert to the old version. Let’s say your testing finds a critical issue. You might still have a restore point on the primary database, but what if – for some reason – flashing back fails? Then you still have your standby database, which was disabled right before the upgrade.

You can read more about AutoUpgrade and Data Guard in the documentation.

More

A few hints for the interested reader to explore. Have a look at the config file option catctl_options. You can use that to control the level of parallelism for a specific upgrade.

Also, if you don’t want AutoUpgrade to handle your network files (e.g. sqlnet.ora, tnsnames.ora), you can disable that using manage_network_files.

You can read more about those options in the documentation.
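A hypothetical config file sketch combining the two options – the values are placeholders, so check the documentation for the exact syntax and allowed values in your AutoUpgrade version:

#Pass -n to catctl to control parallelism (value is a placeholder)
upg1.catctl_options=-n 8
#Tell AutoUpgrade not to handle the network files (value is a placeholder)
upg1.manage_network_files=SKIP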

Conclusion

Get the latest version of AutoUpgrade and benefit from the many new features. Keep an eye out for additional blog posts that will dig deeper into the new features.

How to Upgrade with Data Guard

You can upgrade your database to a new release and keep the Data Guard setup intact. The standby database(s) can be upgraded implicitly via the redo from the primary database, and there is no need to rebuild the standby database after upgrade.

The process, in overview: disable the broker, stop the standby, upgrade the primary, and let the standby catch up via redo.

In the following, I will be using this setup: a primary database (DB_UNIQUE_NAME PROD1) on host bm1 and a physical standby database (DB_UNIQUE_NAME PROD2) on host bm2, upgrading from Oracle Database 18c to 19c.

In advance, you should install the new Oracle Home on both primary and standby host. The two Oracle Homes should have the same patches applied, and I recommend that you always apply the latest Release Update.

Before Upgrade

The procedure starts right before you run the preupgrade fixups. Downtime has started, and users are no longer connected to the database.

Disable Data Guard Broker

If you don’t use Data Guard Broker, you can skip this chapter and go to the chapter Stop Data Guard. Connect to the broker and disable Fast Start Failover:

DGMGRL SYS@PROD1> disable fast_start failover

Next, you disable the broker configuration:

DGMGRL SYS@PROD1> disable configuration

Then, you can shut down the broker on the primary. Make a copy of the broker configuration files. Use the below SQL to generate commands to copy the files – and remember to execute the generated commands:

PROD1 SQL> alter system set dg_broker_start=false scope=both;
PROD1 SQL> select 'host cp ' || value || ' /tmp' as cmd from v$parameter where name like 'dg_broker_config_file%';
PROD1 SQL> --Now, execute the commands
PROD1 SQL> host ls /tmp/dr*.dat

Finally, you do the same for the standby database:

PROD2 SQL> alter system set dg_broker_start=false scope=both;
PROD2 SQL> select 'host cp ' || value || ' /tmp' as cmd from v$parameter where name like 'dg_broker_config_file%';
PROD2 SQL> --Now, execute the commands
PROD2 SQL> host ls /tmp/dr*.dat

Stop Data Guard

On the primary database, defer the redo log transport to the standby database. Strictly speaking, this is not necessary, but I do it from a "better-safe-than-sorry" aspect. Be sure to verify that log_archive_dest_2 is the actual archive destination for your standby database:

PROD1 SQL> show parameter log_archive_dest_2
PROD1 SQL> alter system set log_archive_dest_state_2='defer' scope=both;

Next, you cancel redo apply on the standby database:

PROD2 SQL> alter database recover managed standby database cancel;

Finally, you shut down the database:

PROD2 SQL> shutdown immediate

If you are using Grid Infrastructure (GI) to manage the database, you should stop and disable the database. Disabling the database is strictly speaking not necessary, but again a "better-safe-than-sorry" approach:

[oracle@bm2]$ $ORACLE_HOME/bin/srvctl stop database -d PROD2
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl disable database -d PROD2

Upgrade

Now you can upgrade the primary database using the method you prefer. Complete all the post-upgrade tasks and perform the necessary tests to validate the new database release.

If something happens during the upgrade and you want to revert, you can flash back the database (or restore a backup on Standard Edition) and simply undo the before-upgrade steps (start by enabling the database, starting the database, starting redo apply and so forth).
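A sketch of the flashback fallback, assuming a guaranteed restore point named before_upgrade was created prior to the upgrade (the restore point name is an assumption):

PROD1 SQL> --Start the database in the old Oracle Home
PROD1 SQL> shutdown immediate
PROD1 SQL> startup mount
PROD1 SQL> flashback database to restore point before_upgrade;
PROD1 SQL> alter database open resetlogs;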

Remember that the standby database was left behind before we started touching anything, so if all else fails, simply restart the standby database and connect your users to it.

After Upgrade

Restart Data Guard

When you are happy with the upgrade, and your tests validate the new database release, you can proceed.

Update the listener on the standby host. Be sure to update the Oracle Home information in the listener.ora entry. Note, that your listener.ora might be stored in a non-default location, so use lsnrctl status to get the location. Finally, reload the listener:

[grid@bm2]$ $GRID_HOME/bin/lsnrctl status
[grid@bm2]$ vi $GRID_HOME/network/admin/listener.ora
[grid@bm2]$ $GRID_HOME/bin/lsnrctl reload

For the next commands, I will be using the same prompt, and I will need the following environment variables:

[oracle@bm2]$ export OLD_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
[oracle@bm2]$ export NEW_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
[oracle@bm2]$ export ORACLE_HOME=$NEW_HOME
[oracle@bm2]$ export ORACLE_SID=PROD
[oracle@bm2]$ #Set ORACLE_UNQNAME to DB_UNIQUE_NAME
[oracle@bm2]$ export ORACLE_UNQNAME=PROD2 

Next, if the standby database is using TNS_ADMIN in the default location ($ORACLE_HOME/network/admin), then be sure to copy the relevant TNS aliases into the new tnsnames.ora. There should be TNS aliases to the primary and standby database. Or, if there are no other databases in the same Oracle Home, you can simply copy the files:

[oracle@bm2]$ #Back up files
[oracle@bm2]$ cp $NEW_HOME/network/admin/sqlnet.ora $NEW_HOME/network/admin/sqlnet.ora.backup
[oracle@bm2]$ cp $NEW_HOME/network/admin/tnsnames.ora $NEW_HOME/network/admin/tnsnames.ora.backup
[oracle@bm2]$ #Copy from old to new home
[oracle@bm2]$ cp $OLD_HOME/network/admin/sqlnet.ora $NEW_HOME/network/admin
[oracle@bm2]$ cp $OLD_HOME/network/admin/tnsnames.ora $NEW_HOME/network/admin

Now, you can edit /etc/oratab and update the information about the Oracle Home to match the new Oracle Home:

[oracle@bm2]$ vi /etc/oratab

Copy SPFile and password file to the new Oracle Home:

[oracle@bm2]$ cp $OLD_HOME/dbs/orapw$ORACLE_SID $ORACLE_HOME/dbs
[oracle@bm2]$ cp $OLD_HOME/dbs/spfile$ORACLE_SID.ora $ORACLE_HOME/dbs

If you are using GI to manage the database, you must upgrade the database in GI, meaning updating the Oracle Home information, so GI will start the database in the correct Oracle Home. Next, re-enable and start the database:

[oracle@bm2]$ $ORACLE_HOME/bin/srvctl upgrade database -db $ORACLE_UNQNAME -oraclehome $ORACLE_HOME
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl modify database -db $ORACLE_UNQNAME -startoption MOUNT -role PHYSICAL_STANDBY
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl enable database -d $ORACLE_UNQNAME
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl start database -d $ORACLE_UNQNAME

Or, if you are not using GI, simply start the database:

PROD2 SQL> startup mount

Re-enable Redo Log Transport and Apply

On the primary database, re-enable redo log transport to the standby database:

PROD1 SQL> alter system set log_archive_dest_state_2='enable' scope=both;

On the standby database, restart redo apply:

PROD2 SQL> alter database recover managed standby database disconnect from session;

Re-enable Data Guard Broker

First, we need to copy the broker config files into the new Oracle Home. If you store your broker config files outside of the Oracle Home, this might not be necessary for you:

[oracle@bm1]$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
[oracle@bm1]$ export ORACLE_UNQNAME=PROD1
[oracle@bm1]$ cp /tmp/dr1$ORACLE_UNQNAME.dat $ORACLE_HOME/dbs
[oracle@bm1]$ cp /tmp/dr2$ORACLE_UNQNAME.dat $ORACLE_HOME/dbs

Do the same on the standby database host:

[oracle@bm2]$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
[oracle@bm2]$ export ORACLE_UNQNAME=PROD2 
[oracle@bm2]$ cp /tmp/dr1$ORACLE_UNQNAME.dat $ORACLE_HOME/dbs
[oracle@bm2]$ cp /tmp/dr2$ORACLE_UNQNAME.dat $ORACLE_HOME/dbs

Now, you can restart the Data Guard Broker on both primary and standby database:

PROD1 SQL> alter system set dg_broker_start=true scope=both;

PROD2 SQL> alter system set dg_broker_start=true scope=both;

Finally, enable the broker configuration and fast start failover:

DGMGRL SYS@PROD1> show configuration
DGMGRL SYS@PROD1> enable configuration
DGMGRL SYS@PROD1> enable fast_start failover

Test

Use the broker to ensure everything is fine:

DGMGRL SYS@PROD1> show configuration
DGMGRL SYS@PROD1> show database prod1
DGMGRL SYS@PROD1> show database prod2

You should have SUCCESS listed for both databases.

Let’s try to make a switchover:

DGMGRL SYS@PROD1> switchover to prod2

If you don’t use Data Guard Broker, you can use regular SQL and SQL*Plus to verify the Data Guard environment.

Conclusion

It is actually not that complicated to upgrade your database, even if it is part of a Data Guard setup. A little extra legwork is needed to take care of the standby database. But the good thing is that your DR setup is maintained throughout the process.

I made a video on YouTube that shows the procedure. And while you are there, I suggest that you subscribe to our channel.

Keep an eye out for coming versions of AutoUpgrade. At the time of writing, our developers are working on streamlining the process. We want upgrade with Data Guard to be 100 % automated (or as close to that as possible).
