Upgrade Oracle Database 19c Non-CDB to 26ai and Convert to PDB with Data Guard and Reusing Data Files (Enabled Recovery)

Let me show you how to upgrade an Oracle Database 19c non-CDB with Data Guard to Oracle AI Database 26ai.

  • I reuse data files on the primary database.
  • I also reuse the data files on the standby database (enabled recovery).
  • This is a more complex procedure compared to deferred recovery. However, the PDB is immediately protected by the standby after plug-in.

The demo environment:

  • Two servers:
    • COPENHAGEN (primary)
    • AARHUS (standby)
  • Source non-CDB:
    • SID: DB19
    • Primary unique name: DB19_COPENHAGEN
    • Standby unique name: DB19_AARHUS
    • Oracle home: /u01/app/oracle/product/dbhome_19_27
  • Target CDB:
    • SID: CDB26
    • Primary unique name: CDB26_COPENHAGEN
    • Standby unique name: CDB26_AARHUS
    • Oracle home: /u01/app/oracle/product/dbhome_26_1
  • Data Guard broker manages the Data Guard configuration.
  • Oracle Restart (Grid Infrastructure) manages the databases.

Overview of the demo environment for upgrade to Oracle Database 26ai

You can still use the procedure if you don’t use Data Guard broker or Grid Infrastructure. Just be sure to change the commands accordingly.

1. Preparations On Primary

I’ve already prepared my database and installed a new Oracle home. I’ve also created a new CDB or decided to use an existing one. The CDB is also configured for Data Guard.

The maintenance window has started, and users have left the database.

  1. I create an AutoUpgrade config file. I call it upgrade26.cfg:
    global.global_log_dir=/home/oracle/autoupgrade/upgrade26
    upg1.source_home=/u01/app/oracle/product/dbhome_19_27
    upg1.target_home=/u01/app/oracle/product/dbhome_26_1
    upg1.sid=DB19
    upg1.target_cdb=CDB26
    upg1.manage_standbys_clause=standbys=all
    upg1.export_rman_backup_for_noncdb_to_pdb=yes
    
    • I specify the source and target Oracle homes. These are the Oracle homes of the source non-CDB and target CDB, respectively.
    • sid contains the name of my database.
    • target_cdb is the name of the CDB where I want to plug in.
    • manage_standbys_clause=standbys=all instructs AutoUpgrade to plug the PDB in with enabled recovery.
    • export_rman_backup_for_noncdb_to_pdb instructs AutoUpgrade to export RMAN metadata using DBMS_PDB.EXPORTRMANBACKUP. This makes it easier to use pre-plugin backups (see appendix).

2. Preparations On Standby

My standby database uses ASM. This complicates the plug-in on the primary because the standby database doesn’t know where to find the data files.

I solve this with ASM aliases. This allows the standby CDB to find the data files from the standby non-CDB and reuse them.

  1. On the standby, I connect to the non-CDB and get a list of all data files:

    select name from v$datafile;
    
  2. I also find the GUID of my non-CDB:

    select guid from v$containers;
    
    • The GUID doesn’t change in a Data Guard environment. It’s the same in all databases.
  3. Then I switch to the ASM instance on the standby host:

    export ORACLE_SID=+ASM1
    sqlplus / as sysasm	  
    
  4. Here, I create the directories for the OMF location of the future PDB (the non-CDB you are about to plug in). I insert the GUID found above:

    alter diskgroup data add directory '+DATA/CDB26_AARHUS/<GUID>';
    alter diskgroup data add directory '+DATA/CDB26_AARHUS/<GUID>/DATAFILE';
    
    • Notice how the CDB database unique name (CDB26_AARHUS) is part of the path.
    • During plug-in, the database keeps its GUID. It never changes.
  5. For each data file in my source non-CDB (including undo, excluding temp files), I must create an ASM alias, for instance:

    alter diskgroup data add alias
       '+DATA/CDB26_AARHUS/<GUID>/DATAFILE/users_273_1103046827'
       for '+DATA/DB19_AARHUS/DATAFILE/users.273.1103046827';
    
    • I must create an alias for each data file. I create the alias in the OMF location of the future PDB in the standby CDB.
    • The alias must not contain dots (.). That would violate the OMF naming standard. Notice how I replaced them with underscores.
    • The alias points to the data file’s location in the standby non-CDB.
    • You can find a script to create the aliases in MOS note KB117147. A sketch of a query that generates the alias commands follows at the end of this section.
  6. I stop redo apply in the standby non-CDB:

    edit database 'DB19_AARHUS' set state='apply-off';
    
  7. If my standby is open (Active Data Guard), I restart it in MOUNT mode:

    srvctl stop db -d DB19_AARHUS -o immediate
    srvctl start db -d DB19_AARHUS -o mount
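
For convenience, here’s a minimal sketch of a query that generates the ADD ALIAS commands from step 5. I run it in the standby non-CDB and substitute <GUID> with the value found in step 2; the script in MOS note KB117147 is the supported way to do this:

    select 'alter diskgroup data add alias ''+DATA/CDB26_AARHUS/<GUID>/DATAFILE/'
           || replace(substr(name, instr(name, '/', -1) + 1), '.', '_')
           || ''' for ''' || name || ''';' as add_alias_cmd
    from   v$datafile;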
    

3. Upgrade Primary

  1. On the primary, I start the upgrade:

    java -jar autoupgrade.jar -mode deploy -config upgrade26.cfg
    
    • AutoUpgrade re-analyzes my non-CDB and executes the pre-upgrade fixups.
    • Next, AutoUpgrade mounts the non-CDB and flushes redo to all standbys.
    • Then, it opens the non-CDB in read-only, generates the manifest file and shuts down.
    • Finally, AutoUpgrade stops the job.
  2. AutoUpgrade prints the following message:

    ----------------- Continue with the manual steps -----------------
    There is a job with manual steps pending.
    The checkpoint change number is 2723057 for database DB19.
    For the standby database <DB19_AARHUS>, use the checkpoint number <2723057> to recover the database.
    
    You can find the SCN information in:
    /home/oracle/autoupgrade/upgrade26/DB19/101/drain/scn.json
    Once these manual steps are completed, you can resume job 101
    ------------------------------------------------------------------
    
    • At this point, all data files on the primary non-CDB are consistent at a specified SCN, 2723057.
    • I must recover the standby data files to the same SCN.
  3. On the standby host, I connect to the non-CDB:

    alter database recover managed standby database
    until change 2723057;
    
    • until change is set to the SCN provided by AutoUpgrade.
    • The standby now recovers all data files to the exact same SCN as the primary. This is a requirement for re-using the data files on plug-in.
  4. I verify that no data files are at a different SCN:

    select file#
    from v$datafile_header
    where checkpoint_change# != 2723057;
    
    • checkpoint_change# is the SCN specified by AutoUpgrade.
    • The query should return no rows.
  5. I shut down the standby non-CDB and remove the database configuration:

    srvctl stop database -d DB19_AARHUS -o immediate
    srvctl remove database -d DB19_AARHUS -noprompt
    
    • I remove the database from /etc/oratab as well.
    • From this point, the standby non-CDB must not start again.
    • The standby CDB will use the data files.
  6. Back on the primary, in the AutoUpgrade console, I instruct AutoUpgrade to continue the plug-in, upgrade and conversion.

    upg> resume -job 101
    
    • AutoUpgrade plugs the non-CDB into the CDB.
    • The plug-in operation propagates via redo to the standby.
  7. On the standby, I verify that the standby CDB found the data files on plug-in. I find the following in the alert log:

    2025-12-18T16:08:28.772125+00:00
    Recovery created pluggable database DB19
    DB19(3):Recovery scanning directory +DATA/CDB26_AARHUS/<GUID>/DATAFILE for any matching files.
    DB19(3):Recovery created file +DATA/CDB26_AARHUS/<GUID>/DATAFILE/system_289_1220179503
    DB19(3):Successfully added datafile 12 to media recovery
    DB19(3):Datafile #12: '+DATA/CDB26_AARHUS/<GUID>/DATAFILE/system_289_1220179503'
    DB19(3):Recovery created file +DATA/CDB26_AARHUS/<GUID>/DATAFILE/sysaux_291_1220179503
    DB19(3):Successfully added datafile 13 to media recovery
    DB19(3):Datafile #13: '+DATA/CDB26_AARHUS/<GUID>/DATAFILE/sysaux_291_1220179503'
    
    • I can see that the standby CDB scans the OMF location (+DATA/CDB26_AARHUS/<GUID>/DATAFILE) for the data files.
    • However, the data files are in the non-CDB location (+DATA/DB19_AARHUS/DATAFILE).
    • For each data file, the standby finds the alias I created and uses it to plug in the data file.
  8. Back on the primary, I wait for AutoUpgrade to complete the plug-in, upgrade and PDB conversion. In the end, AutoUpgrade prints the following:

    Job 101 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    Please check the summary report at:
    /home/oracle/autoupgrade/upgrade26/cfgtoollogs/upgrade/auto/status/status.html
    /home/oracle/autoupgrade/upgrade26/cfgtoollogs/upgrade/auto/status/status.log
    
    • The non-CDB (DB19) is now a PDB in my CDB (CDB26).
    • AutoUpgrade removes the non-CDB (DB19) registration from /etc/oratab and Grid Infrastructure on the primary. I’ve already done that for the standby.

4. Check Data Guard

  1. On the standby CDB, ensure that the PDB is MOUNTED and recovery status is ENABLED:

    --Should be MOUNTED, ENABLED
    select open_mode, recovery_status 
    from   v$pdbs 
    where  name='DB19';
    
  2. I also ensure that the recovery process is running:

    --Should be APPLYING_LOG
    select process, status, sequence# 
    from   v$managed_standby
    where  process like 'MRP%'; 	
    
  3. Next, I use Data Guard broker (dgmgrl) to validate my configuration:

    validate database "CDB26_COPENHAGEN"
    validate database "CDB26_AARHUS"
    
    • Both databases must report that they are ready for switchover.
  4. Optionally, but strongly recommended, I perform a switchover:

    switchover to "CDB26_AARHUS"
    
  5. The former standby (CDB26_AARHUS) is now primary. I verify that my PDB opens:

    alter pluggable database DB19 open;
    
    --Must return READ WRITE, ENABLED, NO
    select open_mode, recovery_status, restricted
    from   v$pdbs
    where  name='DB19';
    
    • If something went wrong previously in the process, the PDB won’t open.

5. Post-upgrade Tasks

  1. I take care of the post-upgrade tasks.

  2. I update any profiles or scripts that use the database.

  3. I clean up and remove the old source non-CDB (DB19). On both primary and standby, I remove:

    • Database files, like PFile, SPFile, password file, control file, and redo logs.
    • Database directory, like diagnostic_dest, adump, or db_recovery_file_dest.
    • Since I reused the data files in my CDB, I don’t delete those.

Final Words

I’m using AutoUpgrade for the entire upgrade; nice and simple. AutoUpgrade helps me recover the non-CDB standby database, so I can reuse the data files on all databases in my Data Guard config.

When I reuse the data files, I no longer have the non-CDB for rollback. Be sure to plan accordingly.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

What Happens If I Make a Mistake?

If the standby database fails to find the data files, the recovery process stops. Not only for the recently plugged-in PDB, but for all PDBs in the CDB.

This is, of course, a critical situation!

  • I make sure to perform the operation correctly and double-check afterward (and then check again).
  • If I have Active Data Guard and PDB Recovery Isolation turned on, recovery stops only in the newly plugged-in PDB. The CDB continues to recover the other PDBs.

Data Files in ASM or OMF in Regular File System

Primary Database

  • When I create the PDB, I’m re-using the data files. The data files are in the OMF location of the non-CDB database, e.g.:

    +DATA/DB19_COPENHAGEN/DATAFILE
    
  • However, the proper OMF location for my PDB is:

    +DATA/CDB26_COPENHAGEN/<PDB GUID>/DATAFILE
    
  • The CDB doesn’t care about this anomaly. However, if I want to conform to the OMF naming standard, I must move the data files. Find the data files:

    select file#, name from v$datafile;
    
  • Use online data file move to move the files that are in the wrong location. The database automatically generates a new OMF name:

    alter database move datafile <file#>;
    
    • Online data file move creates a copy of the data files before switching to the new file and dropping the old one. So, I need additional disk space, and the operation takes time while the database copies the file.
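
Here’s a minimal sketch, assuming the disk group and database unique names from this demo, that generates the move commands for data files still in the old non-CDB location (run it while connected to the PDB):

    select 'alter database move datafile ' || file# || ';' as move_cmd
    from   v$datafile
    where  name like '+DATA/DB19_COPENHAGEN/%';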

Standby Database

The data files are in the wrong OMF location, like on the primary.

I can use online data file move on the standby database as well. However, I must stop redo apply and open the standby database to do that. If I don’t have a license for Active Data Guard, I must stop redo apply before opening the database.

  • Stop redo apply:

    edit database 'CDB26_AARHUS' set state='apply-off';
    
  • Open the standby CDB and PDB:

    alter database open;
    alter pluggable database DB19 open;
    
  • Find the data files:

    alter session set container=DB19;
    select file#, name from v$datafile;
    
  • Move those data files that are in the wrong location. This command also removes the original file and the alias:

    alter database move datafile <file#>;
    
  • Restart the standby CDB in mount mode. If I have Active Data Guard, I leave the standby database in OPEN mode:

    srvctl stop database -d CDB26_AARHUS
    srvctl start database -d CDB26_AARHUS -o mount
    
  • Restart redo apply:

    edit database 'CDB26_AARHUS' set state='apply-on';
    

There are two things to pay attention to:

  • I must open the standby database. Without an Active Data Guard license, I may only open the standby while redo apply is turned off. Be sure not to violate your license.
  • I must turn off redo apply while I perform the move. Redo transport is still active, so the standby database receives the redo logs. However, apply lag increases while I move the files. In case of a switchover or failover, I need more time to close the apply gap.

Rollback Options

When you convert a non-CDB to a PDB, you can’t use Flashback Database as a rollback method. You need to rely on other methods, like:

  • RMAN backups
  • Storage snapshots
  • Additional standby database (another source non-CDB standby, which you leave behind)
  • Refreshable clone PDB

What If I Have Multiple Standby Databases?

You must repeat the relevant steps for each standby database.

You can mix and match standbys with deferred and enabled recovery. Let’s say that you want to use enabled recovery on DR standbys and deferred recovery on the reporting standbys.

Role               Name                  Method
Local DR           CDB26_COPENHAGEN2     Enabled recovery
Local reporting    CDB26_COPENHAGENREP   Deferred recovery
Remote DR          CDB26_AARHUS          Enabled recovery
Remote reporting   CDB26_AARHUSREP       Deferred recovery

You would set the following in your AutoUpgrade config file:

upg1.manage_standbys_clause=standbys=CDB26_COPENHAGEN2,CDB26_AARHUS

You would need to merge the two procedures. This is left as an exercise for the reader.

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade detects this and sets CLUSTER_DATABASE=FALSE at the appropriate time. It also removes the non-CDB from the Grid Infrastructure configuration.

Pre-plugin Backups

After converting a non-CDB to a PDB, you can restore the PDB using a combination of backups from before and after the plug-in operation. Backups from before the plug-in are called pre-plugin backups.

A restore using pre-plugin backups is more complicated; however, AutoUpgrade eases that by exporting the RMAN backup metadata (config file parameter export_rman_backup_for_noncdb_to_pdb).

I suggest that you:

  • Start a backup immediately after the upgrade, so you don’t have to use pre-plugin backups.
  • Practice restoring with pre-plugin backups.
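
For reference, here’s a hedged sketch of what a pre-plugin restore could look like; verify the exact syntax in the RMAN documentation for your release before relying on it:

    rman target /
    RMAN> set preplugin container = DB19;
    RMAN> restore pluggable database DB19 from preplugin;
    RMAN> recover pluggable database DB19 from preplugin;
    # then continue recovery with post-plug-in backups and redo before opening the PDB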

Transparent Data Encryption (TDE)

AutoUpgrade fully supports upgrading an encrypted database. I can still use the above procedure with a few changes.

  • You’ll need to input the non-CDB database keystore password into the AutoUpgrade keystore. You can find the details in a previous blog post.

  • In the container database, AutoUpgrade always adds the database encryption keys to the unified keystore. After the conversion, you can switch to an isolated keystore.

  • At one point, you must copy the keystore to the standby database, and restart it. Check the MOS note KB117147 for additional details.

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use.

  • timezone_upg: AutoUpgrade upgrades the database time zone file after the actual upgrade. This requires an additional restart of the database and might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner.

  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.
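
For illustration, this is how those parameters could look in the config file; the script paths are hypothetical examples:

    upg1.timezone_upg=no
    upg1.before_action=/home/oracle/scripts/before_upgrade.sh
    upg1.after_action=/home/oracle/scripts/after_upgrade.sh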

Further Reading

Upgrade Oracle Database 19c Non-CDB to 26ai and Convert to PDB with Data Guard and Restoring Data Files (Deferred Recovery)

Let me show you how to upgrade an Oracle Database 19c non-CDB with Data Guard to Oracle AI Database 26ai.

  • I reuse data files on the primary database.
  • But restore the data files on the standby database after the migration (deferred recovery).

The demo environment:

  • Two servers:
    • COPENHAGEN (primary)
    • AARHUS (standby)
  • Source non-CDB:
    • SID: DB19
    • Primary unique name: DB19_COPENHAGEN
    • Standby unique name: DB19_AARHUS
    • Oracle home: /u01/app/oracle/product/dbhome_19_27
  • Target CDB:
    • SID: CDB26
    • Primary unique name: CDB26_COPENHAGEN
    • Standby unique name: CDB26_AARHUS
    • Oracle home: /u01/app/oracle/product/dbhome_26_1
  • Data Guard broker manages the Data Guard configuration.
  • Oracle Restart (Grid Infrastructure) manages the databases.

Overview of the demo environment for upgrade to Oracle Database 26ai

You can still use the procedure if you don’t use Data Guard broker or Grid Infrastructure. Just be sure to change the commands accordingly.

1. Upgrade and Convert

I’ve already prepared my database and installed a new Oracle home. I’ve also created a new CDB or decided to use an existing one. The CDB is also configured for Data Guard.

The maintenance window has started, and users have left the database.

  1. I ensure my standby database has received and applied all changes. Then, I stop it:

    srvctl stop database -d DB19_AARHUS
    
    • I won’t need the non-CDB standby database anymore.
    • For now, I shut it down and keep it just in case I need to roll back.
  2. On the primary, I create an AutoUpgrade config file. I call it upgrade26.cfg:

    global.global_log_dir=/home/oracle/autoupgrade/upgrade26
    upg1.source_home=/u01/app/oracle/product/dbhome_19_27
    upg1.target_home=/u01/app/oracle/product/dbhome_26_1
    upg1.sid=DB19
    upg1.target_cdb=CDB26
    upg1.manage_standbys_clause=standbys=none
    upg1.export_rman_backup_for_noncdb_to_pdb=yes
    
    • sid contains the name or SID of my database.
    • I specify the source and target Oracle homes. These are the Oracle homes of the source non-CDB and target CDB, respectively.
    • manage_standbys_clause=standbys=none instructs AutoUpgrade to plug the PDB in with deferred recovery.
    • export_rman_backup_for_noncdb_to_pdb instructs AutoUpgrade to export RMAN metadata using DBMS_PDB.EXPORTRMANBACKUP. This makes it easier to use pre-plugin backups (see appendix).
  3. On the primary host, I start AutoUpgrade to plug in, upgrade, and convert:

    java -jar autoupgrade.jar -mode deploy -config upgrade26.cfg
    
  4. While the job progresses, I monitor it:

    upg> lsj -a 30
    
    • The -a 30 option automatically refreshes the information every 30 seconds.
    • I can also use status -job 100 -a 30 to get detailed information about a specific job.
  5. When the upgrade completes, check the summary report for details:

    cat /home/oracle/autoupgrade/upgrade26/cfgtoollogs/upgrade/auto/status/status.log
    
  6. The summary report lists the following details from the POSTCHECKS stage:

    The following PDB(s) were created with standbys=none option. Refer to the postcheck result /home/oracle/autoupgrade/upgrade26/DB19/100/postchecks/db19_copenhagen_postupgrade.log for more details on manual actions needed. DB19

  7. I find additional details in the referenced log file:

    [action] Manual steps need to be performed after upgrade to copy the files to the standby database and enable recovery of the PDB from PRIMARY to STANDBY. Refer to MOS document Doc ID 1916648.1 for detailed steps. [broken rule] The following PDB(s) [DB19] were created with standbys=none option. [rule] On a Data Guard configuration, the CREATE PLUGGABLE DATABASE statement needs to be executed with clause STANDBYS=NONE to avoid impacting redo apply. That clause allows for deferral of file instantiation on the standby and the physical standby database to continue to protect existing pluggable databases. The clause allows the general structure of the PDB to be created on all physical standbys but all files belonging to the PDB are marked as OFFLINE/RECOVER at the standby.

    • I deal with this issue in the next chapter.

2. Restore PDB on Standby

  1. On the standby database, stop redo apply in the standby CDB using dgmgrl:

    edit database CDB26_AARHUS set state='apply-off';
    
  2. Use RMAN to connect to the standby CDB:

    rman target sys@CDB26_AARHUS
    
  3. Restore the PDB and switch to the new data files:

    run {
       allocate channel d1 device type disk;
       allocate channel d2 device type disk;
       set newname for pluggable database DB19 to new;
       restore pluggable database DB19 from service CDB26_COPENHAGEN;
       switch datafile all;
    }
    
  4. Connect to the standby CDB and enable recovery of the PDB:

    alter session set container=DB19;
    alter pluggable database enable recovery;
    
  5. Then, online all the data files in the PDB (a query that generates these statements is sketched after this list):

    select file#, status from v$datafile;
    alter database datafile <file#> online;
    alter database datafile <file#> online;
    ...
    alter database datafile <file#> online;
    
  6. Restart redo apply in the standby CDB:

    edit database CDB26_AARHUS set state='apply-on';
    
  7. Connect to the standby CDB and verify the PDB’s recovery status (should be ENABLED). Ensure that the recovery process is running (should be APPLYING_LOG):

    select recovery_status 
    from   v$pdbs 
    where  name='DB19';
    
    select process, status, sequence# 
    from   v$managed_standby
    where  process like 'MRP%'; 
    
  8. Optionally, but strongly recommended, perform a switchover as the ultimate test. Connect to dgmgrl using username and password:

    dgmgrl sys as sysdba
    
  9. Perform the switchover:

    validate database "CDB26_AARHUS";
    switchover to "CDB26_AARHUS";
    
  10. Then, finally, I connect to the new primary CDB, CDB26_AARHUS. I ensure the PDB is open in READ WRITE mode and unrestricted. I check that the status of all data files is SYSTEM or ONLINE:

    alter pluggable database DB19 open;
    
    alter session set container=DB19;
    
    select open_mode, restricted, recovery_status 
    from   v$pdbs;
    
    select name, status 
    from   v$datafile;
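
For step 5 above, instead of typing the ONLINE command for each data file, I can generate the statements. A minimal sketch, run while connected to the DB19 container on the standby CDB:

    select 'alter database datafile ' || file# || ' online;' as online_cmd
    from   v$datafile
    where  status not in ('ONLINE', 'SYSTEM');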
    

3. Post-upgrade Tasks

  1. I take care of the post-upgrade tasks.

  2. I update any profiles or scripts that use the database.

  3. I remove the non-CDB entry from /etc/oratab and Grid Infrastructure on the standby. AutoUpgrade takes care of the primary.

  4. I clean up and remove the old source non-CDB. On both primary and standby, I remove:

    • Database files, like PFile, SPFile, password file, control file, and redo logs.
    • Database directory, like diagnostic_dest, adump, or db_recovery_file_dest.

  5. On the standby, I remove the source non-CDB data files using asmcmd:

    cd DATA
    rm -rf DB19_AARHUS/DATAFILE
    
    • Important: I don’t do that on the primary because I reused the data files during the plug-in.

That’s It!

I’m using AutoUpgrade for the entire upgrade; nice and simple. I must take care of the standby database after the migration.

When I reuse the data files on the primary database, I no longer have the non-CDB for rollback. Be sure to plan accordingly.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Data Files in ASM/OMF

Primary Database

  • When I create the PDB, I reuse the data files from the primary database. The data files are in the OMF location of the non-CDB database, e.g.:

    +DATA/DB19_COPENHAGEN/DATAFILE
    
  • However, after plug-in, the proper OMF location for my PDB data files is:

    +DATA/CDB26_COPENHAGEN/<PDB GUID>/DATAFILE
    
  • The CDB doesn’t care about this anomaly. However, if I want to conform to the OMF naming standard, I must move the data files. Find the data files:

    select file#, name from v$datafile;
    
  • I use online data file move to move the files that are in the wrong location. I don’t specify a new data file name, so the database generates an OMF name:

    alter database move datafile <file#>;
    
    • Online data file move creates a copy of the data files before switching to the new file and dropping the old one. So I need additional disk space, and the operation takes time while the database copies the file.

Standby Database

  • There is nothing to do on the standby database. When I restore the data files, they are placed in the right OMF location.

Rollback Options

When you convert a non-CDB to a PDB, you can’t use Flashback Database as a rollback method. You need to rely on other methods, like:

  • RMAN backups
  • Storage snapshots
  • Standby database (the source non-CDB standby database, which you leave behind)
  • Refreshable clone PDB

What If I Have Multiple Standby Databases?

You must repeat the relevant steps for each standby database.

You can mix and match standbys with deferred and enabled recovery. Let’s say that you want to use enabled recovery on DR standbys and deferred recovery on the reporting standbys.

Role               Name                  Method
Local DR           CDB26_COPENHAGEN2     Enabled recovery
Local reporting    CDB26_COPENHAGENREP   Deferred recovery
Remote DR          CDB26_AARHUS          Enabled recovery
Remote reporting   CDB26_AARHUSREP       Deferred recovery

You would set the following in your AutoUpgrade config file:

upg1.manage_standbys_clause=standbys=CDB26_COPENHAGEN2,CDB26_AARHUS

You would need to merge the two procedures. This is left as an exercise for the reader.

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade detects this and sets CLUSTER_DATABASE=FALSE at the appropriate time. It also removes the non-CDB from the Grid Infrastructure configuration.

Pre-plugin Backups

After converting a non-CDB to a PDB, you can restore the PDB using a combination of backups from before and after the plug-in operation. Backups from before the plug-in are called pre-plugin backups.

A restore using pre-plugin backups is more complicated; however, AutoUpgrade eases that by exporting the RMAN backup metadata (config file parameter export_rman_backup_for_noncdb_to_pdb).

I suggest that you:

  • Start a backup immediately after the upgrade, so you don’t have to use pre-plugin backups.
  • Practice restoring with pre-plugin backups.

What If My Database Is Encrypted?

AutoUpgrade fully supports upgrading an encrypted database. I can still use the above procedure with a few changes.

You’ll need to input the non-CDB database keystore password into the AutoUpgrade keystore. You can find the details in a previous blog post.

In the container database, AutoUpgrade always adds the database encryption keys to the unified keystore. After the conversion, you can switch to an isolated keystore.

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use.

  • timezone_upg: AutoUpgrade upgrades the database time zone file after the actual upgrade. This requires an additional restart of the database and might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner.

  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.

Further Information

Using Refreshable Clone PDBs from a Standby Database

The other day, I found myself praising refreshable clone PDBs to a customer (which I often do because it’s a killer feature). They liked the feature too but asked:

> We are concerned about the impact on the source database. When AutoUpgrade connects to the source database and clones the database, can we offload the work to a standby database?

Refreshable clone PDBs can eat up your resources if you don’t constrain the target CDB. So, let’s see what we can do.

Mounted Standby Database

This won’t work, because you must be able to connect to the database via a regular database link. Further, AutoUpgrade and the cloning process must be able to execute queries in the source database, which is not possible on a mounted database.

Open Standby Database / Active Data Guard

What if you stop redo apply and open the standby database? Or if you have Active Data Guard?

In this case, the database would be open in read-only mode, and those queries would work. However, the refreshable clone PDB feature requires a read-write source database, so this won’t work either, not even if you enable automatic redirection of DML operations (ADG_REDIRECT_DML).

Even if it worked, we wouldn’t recommend it, because we recommend running AutoUpgrade in analyze and fixups mode on the source database, which isn’t possible on a read-only database. You could run analyze and fixups on the primary database, but is that really an option? If you’re worried about affecting your primary and want to offload work to the standby, running those commands on the primary probably isn’t an option either.

Snapshot Standby Database

What about a snapshot standby? That’s a read-write database. Let’s give it a try.

  1. Convert the source standby to a snapshot standby:
    DGMGRL> convert database '...' to snapshot standby;
    
    • The standby must remain a snapshot standby for the entire duration of the job. If you need to switch over or fail over to the standby, you must restart the entire operation.
  2. Ensure the PDB is open on the source standby.
    alter pluggable database ... open;
    
    • Otherwise, you will run into ORA-03150 when querying the source database over the database link.
  3. In the source standby, create the user used by the database link and grant appropriate permissions:
    create user dblinkuser identified by ...;
    grant create session, create pluggable database, select_catalog_role to dblinkuser;
    grant read on sys.enc$ to dblinkuser;
    
  4. In the target CDB, create a database link that points to the PDB in source standby:
    create database link clonepdb
    connect to dblinkuser identified by ...
    using '';
    
  5. Create an AutoUpgrade config file:
    global.global_log_dir=/home/oracle/autoupgrade/log
    global.keystore=/home/oracle/autoupgrade/keystore
    upg1.source_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
    upg1.target_home=/u01/app/oracle/product/23.0.0/dbhome_1
    upg1.sid=CDB19
    upg1.target_cdb=CDB23
    upg1.pdbs=PDB1
    upg1.source_dblink.PDB1=CLONEPDB 300
    upg1.target_pdb_copy_option.PDB1=file_name_convert=none
    upg1.start_time=+12h
    
  6. Start AutoUpgrade in deploy mode:
    java -jar autoupgrade.jar ... -mode deploy
    
  7. Convert the source standby back to a physical standby:
    DGMGRL> convert database '...' to physical standby;
    

Is It Safe?

Using a standby database for anything other than your DR strategy is sometimes perceived as risky. But it is not, as I explain in this blog post (section What Happens If I Need to Switch Over or Fail Over?).

Happy upgrading!

Introduction to Patching Oracle Data Guard

Here’s a blog post series about patching Oracle Data Guard in a single-instance configuration. For simplicity, I am patching with Oracle AutoUpgrade to automate the process as much as possible.

First, a few ground rules:

The Methods

There are three ways of patching Data Guard:

All At Once

  • You patch all databases at the same time.
  • You need an outage until you’ve patched all databases.
  • You need to do more work during the outage.
  • You turn off redo transport while you patch.

Standby-first with restart

  • All the patches you apply must be standby-first installable (see appendix).
  • You need an outage to stop the primary database and restart it in the target Oracle home.
  • During the outage, you have less work to do compared to all at once, and less work overall compared to standby-first with switchover.
  • The primary database remains the same. It is useful if you have an async configuration with a much more powerful primary database or just prefer to have a primary database at one specific location.

Standby-first with switchover

  • All the patches you apply must be standby-first installable (see appendix).
  • You need an outage to perform a switchover. If your application is well-configured, users will just experience it as a brownout (hanging for a short period while the switchover happens).
  • During the outage, you have little to do, but overall, there are more steps.
  • After the outage, if you switch over to an Active Data Guard standby, the read-only workload has pre-warmed the buffer cache and shared pool.

Summary

All at once                      Standby-first with restart       Standby-first with switchover
Works for all patches            Works for most patches           Works for most patches
Bigger interruption              Bigger interruption              Smaller interruption
Downtime is a database restart   Downtime is a database restart   Downtime/brownout is a switchover
Slightly more effort             Least effort                     Slightly more effort
Cold database                    Cold database                    Pre-warmed database if ADG

Here’s a decision tree you can use to find the method that suits you.

Decision tree showing which method to choose

What If

RAC

These blog posts focus on single instance configuration.

Conceptually, patching Data Guard with RAC databases is the same, but you can’t use the step-by-step guides in this blog post series. Further, AutoUpgrade doesn’t support all methods of patching RAC databases (yet).

I suggest that you take a look at these blog posts instead:

Or even better, use Oracle Fleet Patching and Provisioning.

Oracle Restart

You can use these blog posts if you’re using Oracle Restart. You can even combine patching Oracle Restart and Oracle Database into one operation using standby-first with restart.

We’re Really Sensitive To Downtime?

In these blog posts, I choose the easy way – and that’s using AutoUpgrade. It automates many of the steps for me and has built-in safeguards to ensure things don’t go south.

But this convenience comes at a price: a slightly longer outage. Partly because AutoUpgrade doesn’t finish a job before all post-upgrade tasks are done (like Datapatch and gathering dictionary stats).

If you’re really concerned about downtime, you might be better off with your own automation, where you can open the database for business as quickly as possible while you run Datapatch and other post-patching activities in the background.

Datapatch

Just a few words about patching Data Guard and Datapatch.

  • You always run Datapatch on the primary.
  • You run Datapatch just once, and the changes to the data dictionary propagate to the standby via redo.
  • You run Datapatch when all databases are running out of the new Oracle home or when redo transport is turned off. The important part is that the standby that applies the Datapatch redo must be on the same patch level as the primary.
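
As an illustration only, using the SALES primary on copenhagen and the target Oracle home from the posts below, a manual Datapatch run could look like this:

    [oracle@copenhagen] export ORACLE_SID=SALES
    [oracle@copenhagen] export ORACLE_HOME=/u01/app/oracle/product/19/dbhome_19_26_0
    [oracle@copenhagen] $ORACLE_HOME/OPatch/datapatch -verbose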

Happy patching!

Appendix

Standby-First Installable

Standby-first patch apply is when you patch the standby database first without disabling redo transport/apply.

You can only use standby-first patch apply if all the patches are classified as standby-first installable. For each patch, you must:

  • Examine the patch readme file.
  • One of the first lines tells you whether the patch is standby-first installable. It typically reads: > This patch is Data Guard Standby-First Installable
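
A quick, unofficial way to check is to search the unzipped patches for that phrase; a minimal sketch, assuming the patch folder used in the later posts:

    cd /home/oracle/autoupgrade-patching/patch
    grep -ril "standby-first installable" .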

Release Updates are always standby-first installable, and so are most of the patches for Oracle Database.

In rare cases, you find a patch that is not standby-first installable; in that case, you must patch Data Guard all at once.

Other Blog Posts in the Series

How To Patch Oracle Data Guard Using AutoUpgrade And Standby-First Patch Apply With Switchover

Let me show you how I patch my Oracle Data Guard configuration. I make it as easy as possible using Oracle AutoUpgrade. I reduce the interruption by doing standby-first patch apply with a switchover.

  • My Data Guard configuration consists of two databases:
    • SID: SALES
    • Databases: SALES_COPENHAGEN and SALES_AARHUS
    • Hosts: copenhagen and aarhus
    • Primary database: SALES_COPENHAGEN running on copenhagen

Preparations

You should do these preparations in advance of your maintenance window. They don’t interrupt operations on your databases.

  • I download the patches using AutoUpgrade.

    • Create a config file called sales-download.cfg:

      global.global_log_dir=/home/oracle/autoupgrade-patching/download
      global.keystore=/home/oracle/autoupgrade-patching/keystore
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.target_version=19
      patch1.platform=linux.x64
      
    • Start AutoUpgrade in download mode:

      java -jar autoupgrade.jar -config sales-download.cfg -patch -mode download
      
      • I can download the patches from any computer. It doesn’t have to be one of the database hosts, which typically don’t have internet access.
  • I verify all patches are standby-first installable and my configuration meets the requirements for standby-first patch apply.

  • I create a new Oracle home on all hosts.

    • Create a config file called sales.cfg:
      global.global_log_dir=/home/oracle/autoupgrade-patching/sales
      patch1.source_home=/u01/app/oracle/product/19.3.0.0/dbhome_1
      patch1.target_home=/u01/app/oracle/product/19/dbhome_19_26_0
      patch1.sid=SALES
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.download=no
      
      • Start AutoUpgrade in create_home mode:
      java -jar autoupgrade.jar -config sales.cfg -patch -mode create_home
      
      • AutoUpgrade also runs root.sh. It requires that either:
        • The oracle user has sudo privileges, or
        • I’ve stored the root credentials in the AutoUpgrade keystore.
        • Otherwise, I must manually execute root.sh.
  • Optionally, but recommended, I run an analysis on the primary database:

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -patch -mode analyze
    
    • Check the findings in the summary report.

Patching

Proceed with the following when your maintenance window starts.

  • Update listener.ora on the standby host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  • I reload the listener:

    [oracle@aarhus] lsnrctl reload
    
  • Patch the standby database:

    [oracle@aarhus] java -jar autoupgrade.jar -config sales.cfg -mode deploy
    
    • I don’t disable redo transport/apply.
  • Optionally, test the application of patches using a snapshot standby database.

  • Interruption starts!

  • Switch over to SALES_AARHUS:

    DGMGRL> switchover to sales_aarhus;
    
    • Perform draining in advance according to your practices.
    • Depending on how your application is configured, the users will experience this interruption as a brown-out or downtime.
  • Update listener.ora on the new standby host (copenhagen). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  • I reload the listener:

    [oracle@copenhagen] lsnrctl reload
    
  • Patch the new standby database (see appendix):

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -mode deploy
    
  • Verify the Data Guard configuration and ensure the standby database is receiving and applying redo:

    DGMGRL> show database SALES_COPENHAGEN;
    DGMGRL> show database SALES_AARHUS;
    DGMGRL> validate database SALES_COPENHAGEN;
    DGMGRL> validate database SALES_AARHUS;
    

Post-Patching

  • Connect to the new primary database and execute Datapatch. You do that by calling AutoUpgrade in upgrade mode:
    [oracle@aarhus] java -jar autoupgrade.jar -config sales.cfg -mode upgrade
    

Happy Patching!

Appendix

Static Listener Entry

In this blog post, I update the static listener entries required by Data Guard broker (suffixed DGMGRL). My demo environment doesn’t use Oracle Restart or Oracle Grid Infrastructure, so this entry is mandatory.

If you use Oracle Restart or Oracle Grid Infrastructure, such an entry is no longer needed.
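
For illustration, here’s a minimal sketch of such a static entry on the standby host aarhus, using the database name and target Oracle home from this post:

    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (GLOBAL_DBNAME = SALES_AARHUS_DGMGRL)
          (ORACLE_HOME = /u01/app/oracle/product/19/dbhome_19_26_0)
          (SID_NAME = SALES)
        )
      )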

Further Reading

Other Blog Posts in the Series

Avoid Problems on the Primary Database by Testing on a Snapshot Standby

One of the advantages of standby-first patch apply is that I can test the patches in a production-like environment (the standby) before applying them to the primary. Should I find any issues with the patches, I can stop the process and avoid impacting the primary database.

Here’s an overview of the process.

For demo purposes, my Data Guard configuration consists of two databases:

  • SID: SALES
  • Databases: SALES_COPENHAGEN and SALES_AARHUS
  • Hosts: copenhagen and aarhus
  • Primary database: SALES_COPENHAGEN running on copenhagen

How To

This procedure starts right after I’ve patched the standby (SALES_AARHUS). The standby now runs out of the target Oracle home, whereas the primary database (SALES_COPENHAGEN) still runs out of the source Oracle home.

  • Convert the patched standby to a snapshot standby:

    DGMGRL> convert database SALES_AARHUS to snapshot standby;
    
  • Test the patch apply by running Datapatch on the standby:

    [oracle@aarhus] $ORACLE_HOME/OPatch/datapatch
    
    • Normally, you run Datapatch on the primary database, and the changes made by the patches propagate to the standby via redo.
    • But since I converted to a snapshot standby, it is now open like a normal database, and I can run Datapatch on it.
    • If Datapatch completes without problems on the standby, I can be pretty sure it will do so on the primary as well. The standby is after all an exact copy of the primary database.
  • Optionally, perform additional testing on the standby.

    • I can connect any application and perform additional tests.
    • I can use SQL Performance Analyzer to check for regressing SQL statements.
    • I can make changes to any data in the standby. It is protected by a restore point.
  • When done, convert the snapshot standby back to a physical standby:

    DGMGRL> convert database SALES_AARHUS to physical standby;
    
    • This implicitly shuts down the standby, flashes back to the restore point, and restarts the database as a physical standby.
    • All changes made when it was a snapshot standby, including the Datapatch run, are undone.

Continue the patching procedure on the primary database as described elsewhere.

Is It Safe?

Sometimes, when I suggest using the standby for testing, people are like: Huh! Seriously?

What Happens If I Need to Switch Over or Fail Over?

I can still perform a switchover or a failover. However, they will take a little bit longer.

When I convert to snapshot standby:

  • Redo transport is still active.
  • Redo apply is turned off.

So, the standby receives all redo from the primary but doesn’t apply it. Since you normally test for 10-20 minutes, this would be the maximum apply lag. On a well-oiled standby, it shouldn’t take more than a minute or two to catch up.
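
To see how far behind the standby is while it catches up, I can query the standby; a minimal sketch:

    select name, value
    from   v$dataguard_stats
    where  name in ('apply lag', 'transport lag');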

When performing a switchover or failover on a snapshot standby, you should expect an increase in the time it takes to:

  • Shut down
  • Flashback
  • Apply redo

I’d be surprised if that would be more than 5 minutes. If your RTO doesn’t allow for a longer period:

  • Get a second standby.
  • Consider the reduction in risk you get when you test on the standby. Perhaps a short increase in RTO could be allowed after all.

What Happens If Datapatch Fails?

If Datapatch fails on my snapshot standby, I should be proud of myself. I just prevented the same problem from hitting production.

  • I grab all the diagnostic information I need, so I can work with Oracle Support on the issue.
  • Convert back to physical standby, which will undo the failed Datapatch run.
  • If I expect to solve the issue quickly, leave the standby running in the target Oracle home. Otherwise, put it back into the source Oracle home.

So, yes, it’s safe to use!

Happy testing!

Appendix

Other Blog Posts in the Series

How To Patch Oracle Data Guard Using AutoUpgrade For Non-Standby-First Installable Patches

Let me show you how I patch my Oracle Data Guard configuration. I make it easy using Oracle AutoUpgrade. I patch all at once – all databases at the same time – which means a short downtime. I can use this approach for all patches – even those that are not standby-first installable.

I recommend this approach only when you have patches that are not standby-first installable.

  • My Data Guard configuration consists of two databases:
    • SID: SALES
    • Databases: SALES_COPENHAGEN and SALES_AARHUS
    • Hosts: copenhagen and aarhus
    • Primary database: SALES_COPENHAGEN running on copenhagen

Preparations

You should do these preparations in advance of your maintenance window. They don’t interrupt operations on your databases.

  • I download the patches using AutoUpgrade.

    • Create a config file called sales-download.cfg:

      global.global_log_dir=/home/oracle/autoupgrade-patching/download
      global.keystore=/home/oracle/autoupgrade-patching/keystore
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.target_version=19
      patch1.platform=linux.x64
      
    • Start AutoUpgrade in download mode:

      java -jar autoupgrade.jar -config sales-download.cfg -patch -mode download
      
      • I can download the patches from any computer. It doesn’t have to be one of the database hosts, which typically don’t have internet access.
  • I create a new Oracle home on all hosts.

    • Create a config file called sales.cfg:
      global.global_log_dir=/home/oracle/autoupgrade-patching/sales
      patch1.source_home=/u01/app/oracle/product/19.3.0.0/dbhome_1
      patch1.target_home=/u01/app/oracle/product/19/dbhome_19_26_0
      patch1.sid=SALES
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.download=no
      
      • Start AutoUpgrade in create_home mode:
      java -jar autoupgrade.jar -config sales.cfg -patch -mode create_home
      
      • AutoUpgrade also runs root.sh. It requires that:
        • The oracle user has sudo privileges.
        • Otherwise, I must manually execute root.sh.
  • Optionally, but recommended, I run an analysis on the primary database:

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -patch -mode analyze
    
    • Check the findings in the summary report.

Patching

Proceed with the following when your maintenance window starts.

  • I connect to the primary database using Data Guard broker and disable redo transport from the primary database:

    DGMGRL> edit database sales_copenhagen set state='TRANSPORT-OFF';
    
  • I update listener.ora on both hosts (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  • I reload the listener on both hosts:

    lsnrctl reload
    
  • Downtime starts!

    • Perform draining in advance according to your practices.
    • Shut down your application.
  • Patch the primary database:

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -patch -mode deploy
    
  • Simultaneously, I patch the standby database:

    [oracle@aarhus] java -jar autoupgrade.jar -config sales.cfg -mode deploy
    
  • I update my profile and scripts so they point to the target Oracle home.

  • When patching completes on both hosts, I re-enable redo transport:

    DGMGRL> edit database sales_copenhagen set state='TRANSPORT-ON';
    
  • Verify the Data Guard configuration and ensure the standby database is receiving and applying redo:

    DGMGRL> show database SALES_COPENHAGEN;
    DGMGRL> show database SALES_AARHUS;
    DGMGRL> validate database SALES_COPENHAGEN;
    DGMGRL> validate database SALES_AARHUS;
    

That’s it.

Happy Patching!

Appendix

Static Listener Entry

In this blog post, I update the static listener entries required by Data Guard broker (suffixed DGMGRL). My demo environment doesn’t use Oracle Restart or Oracle Grid Infrastructure, so this entry is mandatory.

If you use Oracle Restart or Oracle Grid Infrastructure, such an entry is no longer needed.

Further Reading

Other Blog Posts in the Series

How To Patch Oracle Data Guard Using AutoUpgrade And Standby-First Patch Apply With Restart

Let me show you how I patch my Oracle Data Guard configuration. I make it as easy as possible using Oracle AutoUpgrade. I reduce the interruption by doing standby-first patch apply with a primary database restart.

  • My Data Guard configuration consists of two databases:
    • SID: SALES
    • Databases: SALES_COPENHAGEN and SALES_AARHUS
    • Hosts: copenhagen and aarhus
    • Primary database: SALES_COPENHAGEN running on copenhagen

Preparations

You should do these preparations in advance of your maintenance window. They don’t interrupt operations on your databases.

  • I download the patches using AutoUpgrade.

    • Create a config file called sales-download.cfg:

      global.global_log_dir=/home/oracle/autoupgrade-patching/download
      global.keystore=/home/oracle/autoupgrade-patching/keystore
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.target_version=19
      patch1.platform=linux.x64
      
    • Start AutoUpgrade in download mode:

      java -jar autoupgrade.jar -config sales-download.cfg -patch -mode download
      
      • I can download the patches from any computer. It doesn’t have to be one of the database hosts, which typically don’t have internet access.
  • I verify all patches are standby-first installable and my configuration meets the requirements for standby-first patch apply.

  • I create a new Oracle home on all hosts.

    • Create a config file called sales.cfg:
      global.global_log_dir=/home/oracle/autoupgrade-patching/sales
      patch1.source_home=/u01/app/oracle/product/19.3.0.0/dbhome_1
      patch1.target_home=/u01/app/oracle/product/19/dbhome_19_26_0
      patch1.sid=SALES
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.download=no
      
      • Start AutoUpgrade in create_home mode:
      java -jar autoupgrade.jar -config sales.cfg -patch -mode create_home
      
      • AutoUpgrade also runs root.sh. It requires that either:
        • The oracle user has sudo privileges, or
        • I’ve stored the root credentials in the AutoUpgrade keystore.
        • Otherwise, I must manually execute root.sh.
  • Optionally, but recommended, I run an analysis on the primary database:

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -patch -mode analyze
    
    • Check the findings in the summary report.

Patching

Proceed with the following when your maintenance window starts.

  • Update listener.ora on the standby host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  • I reload the listener:

    [oracle@aarhus] lsnrctl reload
    
  • Patch the standby database:

    [oracle@aarhus] java -jar autoupgrade.jar -config sales.cfg -mode deploy
    
    • I don’t disable redo transport/apply.
  • Optionally, test the application of patches using a snapshot standby database.

  • Downtime starts!

    • Perform draining in advance according to your practices.
    • Shut down your application.
  • Update listener.ora on the primary host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  • I reload the listener:

    [oracle@copenhagen] lsnrctl reload
    
  • Patch the primary database (see appendix):

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -patch -mode deploy
    
    • I use the sales.cfg config file.
    • AutoUpgrade detects that it’s running against the primary database and executes Datapatch and all the post-upgrade tasks.
  • Verify the Data Guard configuration and ensure the standby database is receiving and applying redo:

    DGMGRL> show database SALES_COPENHAGEN;
    DGMGRL> show database SALES_AARHUS;
    DGMGRL> validate database SALES_COPENHAGEN;
    DGMGRL> validate database SALES_AARHUS;
    

That’s it.

Happy Patching!

Appendix

Static Listener Entry

In this blog post, I update the static listener entries required by Data Guard broker (suffixed DGMGRL). My demo environment doesn’t use Oracle Restart or Oracle Grid Infrastructure, so this entry is mandatory.

If you use Oracle Restart or Oracle Grid Infrastructure, such an entry is no longer needed.

Further Reading

Other Blog Posts in the Series

How to Perform Standby-first Patch Apply When You Have Different Primary and Standby Databases in the Same Oracle Home

I am a big fan of Oracle Data Guard Standby-First Patch Apply. You can:

  • Reduce downtime to the time it takes to perform a switchover.
  • Test the patching procedure on the standby database.

I received an interesting question the other day:

I have the following two Data Guard configurations. I want to patch all the databases using standby-first patch apply. How do I do that when I have primary and standby databases running out of the same Oracle home on the same machine?

Overview of Data Guard standby-first environment

Requirements

In this case, the databases are on 19.17.0, and the customer wants to patch them to 19.23.0.

To use standby-first patch apply, you must meet a set of requirements, one being:

Data Guard Standby-First Patch Apply is supported between database patch releases that are a maximum of one year (1 year) apart based on the patch release date.

Here are the release dates of the following Release Updates:

  • 19.17.0: October 2022
  • 19.23.0: April 2024

So, in this case, the customer can’t use standby-first patch apply directly. There is a year and a half in between. They need two patch cycles in this case:

  • Patch to 19.21.0 (released October 2023)
  • Patch to 19.23.0 (released April 2024)

In the future, they should apply patches more often to avoid ending up in this situation again.

Patching Oracle Home

The customer has one Oracle home on each server from where both databases run. On any server, there is a primary and a standby database (from two different Data Guard configs).

The customer uses in-place patching. If they patch the entire Oracle home, one of the primary databases ends up on a higher patch level than its standby database, which is not allowed. The standby database is the only one that may run on a higher patch level.

With primary and standby databases running out of the same Oracle home, you can’t combine in-place patching with standby-first patch apply.

The customer must switch to out-of-place patching to achieve this. Then they can patch the standby databases first, then the primary databases.

Plus, you get all the other benefits of out-of-place patching.

Datapatch

Once all the databases in a Data Guard configuration run in the new Oracle home, you still haven’t completed the patching process:

A patch or patch bundle is not considered fully installed until all of the following actions have occurred:

  • Patch binary installation has been performed to the database home on all standby systems.
  • Patch binary installation has been performed to the database home on the primary system.
  • Patch SQL installation, if required by the patch, has been performed on the primary database and the redo applied to the standby database(s).

You must do the above steps in the specified order, and the last step is to execute Datapatch:

$ $ORACLE_HOME/OPatch/datapatch
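
Afterwards, you can verify the SQL portion of the patch on the primary database; the redo transports the changes to the standby databases. A simple check (the view and columns are standard; the output depends on your patch history):

    -- Run on the primary database after Datapatch completes
    select patch_id, status, action, action_time
    from   dba_registry_sqlpatch
    order  by action_time;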

Step-by-step

You can use AutoUpgrade to patch Oracle Data Guard.
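
As a rough sketch only, a patching config file for AutoUpgrade could look like the following. The parameter names and the -patch option are assumptions based on AutoUpgrade’s general config style; check the AutoUpgrade documentation for the exact syntax of your version.

    # Assumed syntax - verify against your AutoUpgrade version
    global.global_log_dir=/home/oracle/autoupgrade/patching
    patch1.sid=SALES
    patch1.source_home=/u01/app/oracle/product/dbhome_19_17
    patch1.target_home=/u01/app/oracle/product/dbhome_19_23

    $ java -jar autoupgrade.jar -config patch19.cfg -patch -mode deploy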

Happy Patching!

What Happens to Your Oracle Data Guard During Conversion to Multitenant

Oracle Data Guard is an amazing piece of tech. It helps keep your data safe. When you convert to the multitenant architecture, it is crucial that you don’t jeopardize your Data Guard configuration.

Follow the steps below to bring along your standby database.

What’s The Problem

When you prepare for multitenant conversion, you prepare two things:

  • Data files – you make the data files consistent by opening the non-CDB in read-only mode.
  • Manifest file – you create an XML file which contains information about the non-CDB.

The manifest file contains information about the data files, including the location. However, the manifest file lists only the location on the primary database. There is no information about the standby database.

When you plug in the non-CDB, the plug-in happens without problems on the CDB primary database. It reads the manifest file and finds the data files.

But what about the CDB standby database? Since the manifest file does not list the file location on the standby host, how can the standby database find the corresponding data files?
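
For reference, a minimal sketch of creating the manifest file on the source non-CDB (the file name is illustrative):

    -- On the source non-CDB: make the data files consistent, then describe the database
    shutdown immediate
    startup open read only
    exec DBMS_PDB.DESCRIBE(pdb_descr_file => '/home/oracle/noncdb.xml');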

The Options

There are two options, which you control with the standbys clause on the create pluggable database statement (a sketch of both follows the list):

  • Enabled recovery:
    • You specify standbys=all, or you explicitly list the standby database in the standbys clause.
    • On plug-in, the CDB standby database must find the data files. How the standby database finds the data files depends on the configuration.
    • The new PDB is protected by Data Guard immediately on plug-in.
    • If the standby database fails to find the data files, recovery stops for the entire CDB. All your PDBs are now unprotected unless you use PDB Recovery Isolation (see appendix).
  • Deferred recovery:
    • You specify standbys=none, or you don’t list the standby database in the standbys clause.
    • On plug-in, the CDB standby notes the creation of the PDB but does not attempt to find and recover the data files.
    • The new PDB is not protected by Data Guard until you provide the data files and re-enable recovery as described in Making Use Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1). Typically, this means restoring all data files to the standby system. The other PDBs in the CDB standby are fully protected during the entire process.
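
A minimal sketch of both variants; the PDB name and manifest file path are illustrative:

    -- Enabled recovery: the CDB standby must find the data files at plug-in time
    create pluggable database SALES using '/home/oracle/noncdb.xml'
       nocopy tempfile reuse
       standbys=all;

    -- Deferred recovery: the CDB standby defers recovery of the new PDB
    create pluggable database SALES using '/home/oracle/noncdb.xml'
       nocopy tempfile reuse
       standbys=none;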

Convert with AutoUpgrade

You must convert with deferred recovery on the CDB standby database. AutoUpgrade uses this approach by default:

upg1.manage_standbys_clause=standbys=none

When AutoUpgrade completes, you must follow the process to restore the data files on the CDB standby database and re-enable recovery.

With AutoUpgrade, there is no way to plug in with enabled recovery. This includes the alias trick, which requires work on both the primary and standby systems. AutoUpgrade is a fully automated process that does not allow you to intervene midway.

If you set manage_standbys_clause to anything but the default in order to plug in with enabled recovery, you will most likely run into problems. Either the data files are missing on the standby system, or they are not at the right SCN. This stops the MRP process in the standby database. Since the MRP process is responsible for recovering all the other PDBs as well, you break not only the recently added PDB but also all the other PDBs.

Convert Manually

ASM

You can plug in with enabled recovery and reuse the data files on the standby. The CDB standby database searches the OMF location for the data files. ASM does not allow you to manually move files into an OMF location. Instead, you can create aliases in the OMF location as described in Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration (Doc ID 2273304.1). The standby database then follows the plug-in operation.
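
A minimal sketch of such an alias, created in the standby ASM instance. The disk group, unique names, GUID, and file names are illustrative; the exact alias names you need are described in the MOS note:

    -- One alias per data file, pointing at the existing file of the standby non-CDB
    alter diskgroup data add alias '+DATA/<CDB_UNIQUE_NAME>/<GUID>/DATAFILE/system'
       for '+DATA/<NONCDB_UNIQUE_NAME>/DATAFILE/system.262.1151412345';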

This option won’t work if you use the as clone clause on the create pluggable database statement. The clause generates a new GUID, and since the GUID is part of the OMF location, you can’t create the aliases upfront.

Alternatively, you can plug in with deferred recovery.

OMF in File System

You can plug in with enabled recovery and use the data files on the standby. The CDB standby database searches the OMF location for the data files. Either:

  • Move the data files into the OMF location.
  • Create soft links in the OMF location for each data file, pointing to the current location (see the sketch after this list).
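
A minimal sketch of the soft-link approach, assuming OMF under /u02/oradata; all directory names are illustrative, and the exact file names the standby expects depend on your environment:

    # GUID of the non-CDB, as reported by v$containers (illustrative value)
    GUID=ABCDEF0123456789ABCDEF0123456789
    # OMF location of the future PDB in the CDB standby
    mkdir -p /u02/oradata/CDB_STDBY/$GUID/datafile
    # One soft link per data file, pointing at the current file of the standby non-CDB
    for f in /u02/oradata/NONCDB_STDBY/datafile/*.dbf; do
      ln -s "$f" "/u02/oradata/CDB_STDBY/$GUID/datafile/$(basename "$f")"
    done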

These options won’t work if you want to use the as clone clause. The clause generates a new GUID, and since the GUID is part of the OMF location, you don’t know the OMF location upfront.

If you set standby_pdb_source_file_directory in the CDB standby database, it looks for the data files in that directory. However, it always copies the data files into the OMF location, even if you specify create pluggable database ... nocopy. On the upside, standby_pdb_source_file_directory is compatible with the as clone clause.

Alternatively, you can plug in with deferred recovery.

Regular Files

The database uses regular files when db_create_file_dest is empty.

If you plug in with enabled recovery, the CDB standby database expects to find the data files in the exact same location (path and file name) as on the primary database. The location is either the full path from the manifest file or the location specified by create pluggable database ... source_file_directory='<data_file_location>'.

If the data files are in a different location on the CDB standby database, you either (see the sketch after this list):

  • Set db_file_name_convert in your CDB standby database. This changes the name of each of the data files accordingly.
  • Set standby_pdb_source_file_directory in your CDB standby database. When media recovery looks for a specific file during plug-in, it searches this directory instead of the full path from the manifest file.
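
A minimal sketch of both options, run on the CDB standby before the plug-in on the primary; all paths are illustrative:

    -- Option 1: translate the primary's path to the standby's path
    -- (db_file_name_convert is not dynamic; set it in the spfile and restart)
    alter system set db_file_name_convert='/u02/oradata/','/u03/oradata/' scope=spfile;

    -- Option 2: point media recovery at the directory holding the data files
    alter system set standby_pdb_source_file_directory='/u03/oradata/noncdb/datafile' scope=both;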

You can plug in using the as clone clause without problems.

Alternatively, you can plug in with deferred recovery.

Refreshable Clone PDBs

When you migrate a non-CDB using refreshable clone PDBs, you are using a clone of the non-CDB database. Thus, there are no existing data files on the standby database that you can use.

You can only create a refreshable clone PDB with deferred recovery (standbys=none). After you transition the refreshable clone PDB into a regular, stand-alone PDB using alter pluggable database ... refresh mode none, you must follow the process to restore the data files and re-enable recovery. If you use AutoUpgrade, you must wait until the entire job completes.
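
A minimal sketch; the PDB name, database link, and refresh interval are illustrative, and the full conversion involves more steps than shown here:

    -- On the target CDB primary: create the refreshable clone with deferred recovery
    create pluggable database SALES from NONCDB@noncdb_link
       refresh mode every 60 minutes
       standbys=none;

    -- At downtime: final refresh, then transition to a regular, stand-alone PDB
    alter pluggable database SALES close immediate;
    alter pluggable database SALES refresh;
    alter pluggable database SALES refresh mode none;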

Until you have completed the recovery process, the PDB is not protected by Data Guard.

For further information, including how Oracle Cloud Infrastructure makes it easier for you, have a look at Sinan’s blog post.

Important

Whichever method you choose, you must check your Data Guard configuration before going live.

  1. Check the recovery status on all standby databases:

    select name, recovery_status
    from   v$pdbs;
    
  2. Test the Data Guard configuration by performing a switchover, for example:
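
     A minimal sketch, using Data Guard broker; replace the placeholder with the standby database's DB_UNIQUE_NAME:

    DGMGRL> validate database <standby>;
    DGMGRL> switchover to <standby>;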

Don’t go live without checking your Data Guard configuration!

Appendix

PDB Recovery Isolation

PDB Recovery Isolation is a new feature in Oracle Database 21c.

In an Active Data Guard environment, PDB recovery isolation ensures that media recovery of a CDB on the standby is not impacted when one or more PDBs are not consistent with the rest of the CDB.

Source: About PDB Recovery Isolation

If you plug in a database with standbys=all (via a refreshable clone PDB) and the standby database can’t find the data files, PDB recovery isolation kicks in:

  • The standby database disables recovery of the affected PDB.
  • The standby database restores the data files from the primary database.
  • After restore, the standby database re-enables recovery of the PDB.
  • The affected PDB is unprotected until the process is completed.
  • The other PDBs are unaffected by the situation.

PDB Recovery Isolation reduces risk and automates the resolution of the problem.

At the time of writing, it requires a license for Active Data Guard.

Further Reading

Thank You

A big thank you to my valued colleague, Sinan Petrus Toma, for teaching me about PDB recovery isolation.