Upgrade Oracle Database 19c PDB to 26ai with Data Guard and Restoring Data Files (Deferred Recovery)

Let me show you how to upgrade a single PDB to Oracle AI Database 26ai.

This is called an unplug-plug upgrade and is much faster than a full CDB upgrade.

But what about Data Guard? I use deferred recovery, meaning I must restore the PDB after the upgrade.

Let’s see how it works.

1. Upgrade On Primary

I’ve already prepared my database and installed a new Oracle home. I’ve also created a new container database or decided to use an existing one. The new container database is configured for Data Guard.

The maintenance window has started, and users have left the database.

  1. This is my AutoUpgrade config file:

    global.global_log_dir=/home/oracle/autoupgrade/logs/SALES-CDB26
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/26
    upg1.sid=CDB19
    upg1.pdbs=SALES
    upg1.target_cdb=CDB26
    
    • I specify the source and target Oracle homes. I’ve already installed the target Oracle home.
    • sid contains the SID of my current CDB.
    • target_cdb specifies the SID of the container where I plug into.
    • pdbs is the PDB that I want to upgrade. I can specify additional PDBs in a comma-separated list.
    • I want to reuse the existing data files, so I omit target_pdb_copy_option.
    • To plug in with deferred recovery, I can either omit manage_standbys_clause or set manage_standbys_clause=none.
    • Check the appendix for additional parameters.
  2. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config SALES-CDB26.cfg -mode deploy
    
    • AutoUpgrade starts by analyzing the database for upgrade readiness and executes the pre-upgrade fixups.
    • Next, it creates a manifest file and unplugs from source CDB.
    • Then, it plugs into the target CDB with deferred recovery. At this point, the standby is not protecting the new PDB.
    • Finally, it upgrades the PDB.
  3. While the job progresses, I monitor it:

    upg> lsj -a 30
    
    • The -a 30 option automatically refreshes the information every 30 seconds.
    • I can also use status -job 100 -a 30 to get detailed information about a specific job.
  4. In the end, AutoUpgrade completes the upgrade:

    Job 100 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    The following PDB(s) were created with standbys=none option. Refer to the postcheck result CDB19_COPENHAGEN_postupgrade.log for more details on manual actions needed.
    SALES
    
    Please check the summary report at:
    /home/oracle/autoupgrade/logs/SALES-CDB26/cfgtoollogs/upgrade/auto/status/status.html
    /home/oracle/autoupgrade/logs/SALES-CDB26/cfgtoollogs/upgrade/auto/status/status.log
    
    • This includes the post-upgrade checks and fixups.
    • AutoUpgrade informs me that it plugged in with deferred recovery. To protect the PDB on the standby, I must restore it.
  5. I review the AutoUpgrade Summary Report. The path is printed to the console:

    vi /home/oracle/autoupgrade/logs/SALES-CDB26/cfgtoollogs/upgrade/auto/status/status.log
    
  6. I take care of the post-upgrade tasks.

  7. AutoUpgrade drops the PDB from the source CDB.

    • The data files used by the target CDB remain in their original location, inside the directory structure of the source CDB.
    • Take care not to delete them by mistake; they are now used by the target CDB.
    • Optionally, move them to a proper location using online data file move.

2. Restore PDB On Standby

  • I execute all commands on the same host – the standby system.

  • On the standby, I verify that recovery status is disabled:

    SQL> select open_mode, recovery_status
         from v$pdbs where name='SALES';
    
    OPEN_MODE    RECOVERY_STATUS
    ____________ __________________
    MOUNTED      DISABLED
    
    • This means that I plugged in with deferred recovery.
    • The standby is not protecting this PDB.
  • Next, I connect to the standby using RMAN, and I restore the PDB:

    connect target /
    run{
       allocate channel disk1 device type disk;
       allocate channel disk2 device type disk;
    
       set newname for pluggable database SALES to new;
    
       restore pluggable database SALES
          from service <primary_service>
          section size 64G;
    }
    
    • You can add more channels depending on your hardware.
    • Replace SALES with the name of your PDB.
    • <primary_service> is a connect string to the primary database.
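For the `from service` clause, I can use a TNS alias defined on the standby host. A minimal sketch of such an entry – the alias, host, and service names are examples, not from the original setup:

```
# tnsnames.ora on the standby host -- alias, host, and service name are examples
CDB26_PRIM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = CDB26_PRIM_SVC))
  )
```

With this alias in place, the restore command would read `from service CDB26_PRIM`.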
  • Next, I connect to the standby database using Data Guard broker and turn off redo apply:

    edit database <stdby_unique_name> set state='apply-off';
    
  • Back in RMAN, still connected to the standby, I switch to the newly restored data files:

    switch pluggable database SALES to copy;
    
  • Then, I connect to the standby and generate a list of commands that will online all the data files:

    alter session set container=SALES;
    select 'alter database datafile '||''''||name||''''||' online;' from v$datafile;
    
    • Save the commands for later.
    • There should be one row for each data file.
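As a sketch, with OMF in ASM the generated commands could look like this (the file names are hypothetical):

```
alter database datafile '+DATA/CDB26_STDBY/DATAFILE/system.263.1151681459' online;
alter database datafile '+DATA/CDB26_STDBY/DATAFILE/sysaux.264.1151681460' online;
alter database datafile '+DATA/CDB26_STDBY/DATAFILE/users.265.1151681461' online;
```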
  • If my standby uses Active Data Guard, I must restart it into MOUNT mode:

    alter session set container=CDB$ROOT;
    shutdown immediate
    startup mount
    
  • Now, I can re-enable recovery and online the data files:

    alter session set container=SALES;
    alter pluggable database enable recovery;
    alter database datafile <file1> online;
    alter database datafile <file2> online;
    ...
    alter database datafile <filen> online;
    
    • I must connect to the PDB.
    • I must execute the alter database datafile ... online command for each data file.
  • I turn on redo apply:

    edit database <stdby_unique_name> set state='apply-on';
    
  • At this point, the standby protects my PDB.

  • After a minute or two, I check the Data Guard config:

    validate database <stdby_unique_name>;
    
  • Once my standby is in sync, I can do a switchover as the ultimate test:

    switchover to <stdby_unique_name>;
    
  • Now, I connect to the new primary and ensure the PDB opens in read write mode and unrestricted:

    select open_mode, restricted
    from v$pdbs
    where name='SALES';
    
    OPEN_MODE     RESTRICTED
    _____________ _____________
    READ WRITE    NO
    
  • You can find the full procedure in Making Use of Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (KB90519). Check the sections Steps for Preparing to enable recovery of the PDB and Steps required for enabling recovery on the PDB after the files have been copied.

That’s It!

With AutoUpgrade, you can easily upgrade a single PDB using an unplug-plug upgrade. The easiest way to handle the standby is to restore the pluggable database after the upgrade.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Rollback Options

When you perform an unplug-plug upgrade, you can’t use Flashback Database as a rollback method. You need to rely on other methods, like:

  • Copy data files by using target_pdb_copy_option.
  • RMAN backups.
  • Storage snapshots.

In this blog post, I reuse the data files, so I must have an alternate rollback plan.

Copy or Re-use Data Files

When you perform an unplug-plug upgrade, you must decide what to do with the data files.

Copy:
  • Plug-in time: Slow; the data files must be copied.
  • Disk space: Needs room for a full copy of the data files.
  • Rollback: Source data files are left untouched; re-open the PDB in the source CDB.
  • AutoUpgrade: Enable using target_pdb_copy_option.
  • Location after plug-in: The data files are copied to the desired location.
  • Syntax used: CREATE PLUGGABLE DATABASE ... COPY

Re-use:
  • Plug-in time: Fast; the existing data files are re-used.
  • Disk space: No extra disk space needed.
  • Rollback: The PDB in the source CDB becomes unusable; rely on another rollback method.
  • AutoUpgrade: Default; omit the parameter target_pdb_copy_option.
  • Location after plug-in: The data files are left in the source CDB location. Don't delete them by mistake; consider moving them using online data file move.
  • Syntax used: CREATE PLUGGABLE DATABASE ... NOCOPY

target_pdb_copy_option

Use this parameter only when you want to copy the data files.

  • target_pdb_copy_option=file_name_convert=('/u02/oradata/SALES', '/u02/oradata/NEWSALES','search-1','replace-1') – Use when your data files are in a regular file system. The value is a list of search/replace string pairs.
  • target_pdb_copy_option=file_name_convert=none – Use when you want data files in ASM or use OMF in a regular file system. The database automatically generates new file names and puts the data files in the right location.
  • target_pdb_copy_option=file_name_convert=('+DATA/SALES', '+DATA/NEWSALES') – A very rare configuration: data files in ASM but without OMF. If you have ASM, I strongly recommend using OMF.

You can find further explanation in the documentation.

Move

The CREATE PLUGGABLE DATABASE statement also has a MOVE clause in addition to COPY and NOCOPY. In a regular file system, the MOVE clause works as you would expect. However, in ASM, it is implemented via copy-and-delete, so you might as well use the COPY option.

AutoUpgrade doesn’t support the MOVE clause.

Compatible

During plug-in, the PDB automatically inherits the compatible setting of the target CDB. You don’t have to raise the compatible setting manually.

Typically, the target CDB has a higher compatible setting, and the PDB raises its own setting on plug-in. This means you lose the option of downgrading.

If you want to preserve the option of downgrading, be sure to set the compatible parameter in the target CDB to the same value as the source CDB.
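To check the setting in both CDBs and, if needed, align the target CDB with the source before the plug-in, a sketch (the value shown is an example):

```
-- Run in both the source and target CDB
show parameter compatible

-- To preserve the downgrade option, set the target CDB to the source value
-- (requires a restart to take effect)
alter system set compatible='19.0.0' scope=spfile;
```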

Pre-plugin Backups

After doing an unplug-plug upgrade, you can restore a PDB using a combination of backups from before and after the plug-in operation. Backups taken before the plug-in are called pre-plugin backups.

A restore using pre-plugin backups is more complicated; however, AutoUpgrade eases that by exporting the RMAN backup metadata automatically.

I suggest that you:

  • Start a backup immediately after the upgrade, so you don’t have to use pre-plugin backups.
  • Practice restoring with pre-plugin backups.

What If My Database Is A RAC Database?

The procedure is unchanged if you have an Oracle RAC database; AutoUpgrade handles it transparently. However, you must manually recreate the services in the target CDB using srvctl.

What If I Use Oracle Restart?

No changes to the procedure. However, you must manually recreate the services in the target CDB using srvctl.

What If My Database Is Encrypted

AutoUpgrade fully supports upgrading an encrypted PDB.

You’ll need to input the source and target CDB keystore passwords into the AutoUpgrade keystore. You can find the details in a previous blog post.

In the container database, AutoUpgrade always adds the database encryption keys to the unified keystore. After the conversion, you can switch to an isolated keystore.

Before restoring the pluggable database on the standby, you must copy the keystore from the primary to the standby.

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use.

  • timezone_upg: AutoUpgrade upgrades the PDB time zone file after the actual upgrade. This might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner. In multitenant, a PDB can use a different time zone file than the CDB.

  • target_pdb_name: AutoUpgrade renames the PDB. I must specify the original PDB name (SALES) as a suffix to the parameter:

    upg1.target_pdb_name.SALES=NEWSALES
    

    If I have multiple PDBs, I can specify target_pdb_name multiple times:

    upg1.pdbs=SALES,OLDNAME1,OLDNAME2
    upg1.target_pdb_name.SALES=NEWSALES
    upg1.target_pdb_name.OLDNAME1=NEWNAME1
    upg1.target_pdb_name.OLDNAME2=NEWNAME2
    
  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.

  • Check the documentation for the full list.
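As a sketch, postponing the time zone file upgrade and hooking in a custom script could look like this in the config file (the script path is hypothetical):

```
upg1.timezone_upg=no
upg1.after_action=/home/oracle/scripts/post_upgrade.sh
```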

Multiple Standby Databases

You must restore the PDB to each standby database.

If you have multiple standby databases, it’s a lot of work for the primary to handle the restore pluggable database command from all standbys.

Imagine we have three standby databases. Here’s an alternative approach:

  • First, standby 1 restores from the primary.
  • Next, standby 2 restores from standby 1.
  • At the same time, standby 3 restores from the primary.
  • This spreads the load across your databases, allowing you to complete faster.
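Restoring from another standby only changes the `from service` clause. On standby 2, a sketch could be (the service name is an example, and the PDB name should match yours):

```
restore pluggable database SALES
   from service STDBY1_SVC
   section size 64G;
```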

Upgrade Oracle Database 19c CDB to 26ai with Data Guard – Standby Offline Method

Here’s how to upgrade an entire CDB to Oracle AI Database 26ai, including a standby database.

I’m using the Standby Offline Method (see appendix). The standby database is offline during the upgrade. This requires a little more downtime but adds an additional layer of safety.

1. Preparations

I’ve already prepared my database and installed a new Oracle home on both servers. The maintenance window has started, and users have left the database.

  1. This is my AutoUpgrade config file:

    global.global_log_dir=/home/oracle/autoupgrade/logs/CDB19
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/26
    upg1.sid=CDB19
    upg1.defer_standby_log_shipping=yes
    
    • sid contains the name or SID of my database.
    • I specify the source and target Oracle homes. I’ve already installed the target Oracle home.
    • I use defer_standby_log_shipping to instruct AutoUpgrade to disable redo transport. This prevents the primary from transporting any upgrade redo to the standby.
    • Check the appendix for additional parameters.
  2. I disable Fast-Start Failover using Data Guard broker:

    disable fast_start failover;
    

2. Prepare Standby

  1. I stop redo apply:

    edit database <stdby_unique_name> set state=apply-off;
    
  2. I create a guaranteed restore point on the standby database:

    create restore point stdby_before_upg
       guarantee flashback database;
    

3. Upgrade Primary

  1. I update listener.ora on the primary host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  2. I reload the listener:

    lsnrctl reload
    
  3. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config CDB19.cfg -mode deploy
    
    • AutoUpgrade first sets a guaranteed restore point, then proceeds with the upgrade.
  4. While the job progresses, I monitor it:

    upg> lsj -a 30
    
    • The -a 30 option automatically refreshes the information every 30 seconds.
    • I can also use status -job 100 -a 30 to get detailed information about a specific job.
  5. In the end, AutoUpgrade completes the upgrade:

    Job 100 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    ---- Drop GRP at your convenience once you consider it is no longer needed ----
    Drop GRP from CDB1: drop restore point AUTOUPGRADE_9212_CDB1_COPENHAGEN1924000
    
    ---- Data Guard broker configuration is disabled. Enable it once ready to sync up standby database  ----
    Enable log shipping against CDB1_COPENHAGEN, job100 using the following DGMGRL command: EDIT DATABASE CDB1_COPENHAGEN SET STATE=TRANSPORT-ON;
    
    Please check the summary report at:
    /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.html
    /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.log
    
    • This includes the post-upgrade checks and fixups, as well as updates to /etc/oratab and the Grid Infrastructure configuration.
    • AutoUpgrade instructs me to re-enable redo transport when I’m satisfied with the upgrade.
  6. I review the AutoUpgrade Summary Report. The path is printed to the console:

    vi /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.log
    
  7. I take care of the post-upgrade tasks.

  8. I update any profiles or scripts that use the primary.

4. Upgrade Standby

I only proceed with this step if I’m happy about the upgrade on the primary.

  1. I update listener.ora on the standby host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  2. I reload the listener:

    lsnrctl reload
    
  3. I remove the defer_standby_log_shipping parameter from my AutoUpgrade config file; it doesn't apply when upgrading the standby.

  4. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config CDB19.cfg -mode deploy
    
    • AutoUpgrade restarts the standby in the new Oracle home.
    • It also takes care of all config files.
    • In addition, it updates /etc/oratab and the Grid Infrastructure configuration.
  5. I update any profiles or scripts that use the standby.

  6. I re-enable redo transport and redo apply using Data Guard broker:

    edit database <prmy_unique_name> set state=transport-on;
    edit database <stdby_unique_name> set state=apply-on;
    
  7. It will take a few minutes for the standby to catch up. I monitor using:

    show database <stdby_unique_name>;
    validate database <stdby_unique_name>;
    
  8. When there is no transport and apply lag, my standby has caught up, and the overall upgrade is complete.

5. Finalizing

  1. When I’m done testing the database – including application testing – and I decide that a rollback is not needed, I drop the restore points – from primary and standby.

  2. I re-enable Fast-Start Failover using Data Guard broker:

    enable fast_start failover;
    

That’s It!

With AutoUpgrade, you can easily upgrade your entire CDB even when you use Data Guard.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Different Methods

When upgrading with Data Guard, there are two approaches:

  • Standby Offline – Before the upgrade starts on the primary database, you shut down the standby database. You keep it shut down until the upgrade has completed on the primary database and you have finished your tests. When you are sure you will stay on the new release, the standby database is restarted and synchronized with the primary database. It will take some time before you can go live because the standby database must apply all the redo generated during the upgrade.
    • If you need to fall back, you can use Flashback Database on the primary database. In addition, no matter what happens to the primary database, you still have the standby database immediately ready in the pre-upgrade state.
    • My team recommends this method. We prefer to sacrifice a little downtime to achieve even better protection.
  • Maximum Availability Architecture (MAA) – The standby database is open and applies redo while the primary database is upgraded. This means that the standby database closely follows the primary database. You can go live very soon after the upgrade completes because there is little or no apply lag.
    • The downside is when you must fall back. In that case, you have two databases to bring back in time with Flashback Database. In the very unlikely event that something happens during flashback on both databases, you may need to restore your backup.
    • The MAA team recommends this method as it guarantees the lowest downtime.
Standby Offline:
  • Maximum protection.
  • Recommended by the upgrade team.
  • Redo transport deferred.
  • Redo apply stopped.
  • Protected by the offline standby and a guaranteed restore point.

MAA:
  • Minimum downtime.
  • Recommended by the MAA team.
  • Redo transport enabled.
  • Redo apply active.
  • Protected by a guaranteed restore point.
  • AutoUpgrade default.

Static Listener Entry

In this blog post, I update the static listener entries required by Data Guard broker (suffixed _DGMGRL). My demo environment doesn't use Oracle Restart or Oracle Grid Infrastructure, so this entry is mandatory.

If you use Oracle Restart or Oracle Grid Infrastructure, such an entry is not needed.
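A static entry for the broker typically looks like the sketch below; during the upgrade, only ORACLE_HOME changes. The global name, SID, and paths are examples:

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = CDB19_COPENHAGEN_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/26)
      (SID_NAME = CDB19)
    )
  )
```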

Oracle Data Guard Broker and Static Service Registration (KB142582)

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade detects this and sets CLUSTER_DATABASE=FALSE at the appropriate time. It also updates the Grid Infrastructure configuration.

For warp-speed upgrades, take a look at distributed upgrade.

What If I Use Oracle Restart?

No changes. AutoUpgrade detects this and automatically updates the Grid Infrastructure configuration.

What If My Database Is Encrypted

You must use an auto-login keystore. AutoUpgrade checks this during the pre-upgrade analysis.
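A quick way to verify this yourself before starting is a sketch like:

```
select wrl_type, wallet_type, status
from   v$encryption_wallet;

-- WALLET_TYPE should show AUTOLOGIN or LOCAL_AUTOLOGIN
```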

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use.

  • timezone_upg: AutoUpgrade upgrades the database time zone file after the actual upgrade. This requires an additional restart of the database and might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner.

  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.

  • em_target_name: Allow AutoUpgrade to create blackouts before restarting the database and to update the target configuration after the upgrade. Requires a local installation of emcli. See this blog post for details.

  • rman_catalog_connect_string: Relevant for databases that store backup metadata in a recovery catalog. Upgrades the recovery catalog schema following an upgrade. See this blog post for details.

Upgrade Oracle Database 19c CDB to 26ai with Data Guard – MAA Method

Here’s how to upgrade an entire CDB to Oracle AI Database 26ai, including all standby databases.

I’m using the MAA Method (see appendix). The standby database is up during the upgrade and continuously applies redo. This gives me shorter downtime compared to the alternative.

1. Preparations

I’ve already prepared my database and installed a new Oracle home on both servers. The maintenance window has started, and users have left the database.

  1. This is my AutoUpgrade config file:

    global.global_log_dir=/home/oracle/autoupgrade/logs/CDB19
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/26
    upg1.sid=CDB19
    
    • sid contains the name or SID of my database.
    • I specify the source and target Oracle homes. I’ve already installed the target Oracle home.
    • Check the appendix for additional parameters.
  2. I disable Fast-Start Failover using Data Guard broker:

    disable fast_start failover;
    

2. Upgrade Standby

  1. I update listener.ora on the standby host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  2. I reload the listener:

    lsnrctl reload
    
  3. I create a guaranteed restore point on the standby database:

    alter database recover managed standby database cancel;
    create restore point stdby_before_upg guarantee flashback database;
    alter database recover managed standby database disconnect from session;
    
  4. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config CDB19.cfg -mode deploy
    
    • AutoUpgrade restarts the standby in the new Oracle home.
    • It also takes care of all config files.
    • In addition, it updates /etc/oratab and the Grid Infrastructure configuration.
  5. I update any profiles or scripts that use the standby.

  6. I use Data Guard broker to ensure that my standby is still applying logs:

    validate database <stdby_unique_name>;
    

3. Upgrade Primary

  1. I update listener.ora on the primary host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  2. I reload the listener:

    lsnrctl reload
    
  3. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config CDB19.cfg -mode deploy
    
    • AutoUpgrade starts by setting a guaranteed restore point and then proceeds with the upgrade.
  4. While the job progresses, I monitor it:

    upg> lsj -a 30
    
    • The -a 30 option automatically refreshes the information every 30 seconds.
    • I can also use status -job 100 -a 30 to get detailed information about a specific job.
  5. In the end, AutoUpgrade completes the upgrade:

    Job 100 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    ---- Drop GRP at your convenience once you consider it is no longer needed ----
    Drop GRP from CDB19: drop restore point AUTOUPGRADE_9212_CDB191927000
    
    Please check the summary report at:
    /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.html
    /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.log
    
    • This includes the post-upgrade checks and fixups, as well as updates to /etc/oratab and the Grid Infrastructure configuration.
  6. I review the AutoUpgrade Summary Report. The path is printed to the console:

    vi /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.log
    
  7. I take care of the post-upgrade tasks.

  8. I update any profiles or scripts that use the primary.

4. Finalizing

  1. When I’m done testing the database – including application testing – and I decide that a rollback is not needed, I drop the restore points – from primary and standby.

  2. I re-enable Fast-Start Failover using Data Guard broker:

    enable fast_start failover;
    

That’s It!

With AutoUpgrade, you can easily upgrade your entire CDB even when you use Data Guard.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Different Methods

When upgrading with Data Guard, there are two approaches:

  • Standby Offline – Before the upgrade starts on the primary database, you shut down the standby database. You keep it shut down until the upgrade has completed on the primary database and you have finished your tests. When you are sure you will stay on the new release, the standby database is restarted and synchronized with the primary database. It will take some time before you can go live because the standby database must apply all the redo generated during the upgrade.
    • If you need to fall back, you can use Flashback Database on the primary database. In addition, no matter what happens to the primary database, you still have the standby database immediately ready in the pre-upgrade state.
    • My team recommends this method. We prefer to sacrifice a little downtime to achieve even better protection.
  • Maximum Availability Architecture (MAA) – The standby database is open and applies redo while the primary database is upgraded. This means that the standby database closely follows the primary database. You can go live very soon after the upgrade completes because there is little or no apply lag.
    • The downside is when you must fall back. In that case, you have two databases to bring back in time with Flashback Database. In the very unlikely event that something happens during flashback on both databases, you may need to restore your backup.
    • The MAA team recommends this method as it guarantees the lowest downtime.
Standby Offline:
  • Maximum protection.
  • Recommended by the upgrade team.
  • Redo transport deferred.
  • Redo apply stopped.
  • Protected by the offline standby and a guaranteed restore point.

MAA:
  • Minimum downtime.
  • Recommended by the MAA team.
  • Redo transport enabled.
  • Redo apply active.
  • Protected by a guaranteed restore point.
  • AutoUpgrade default.

Static Listener Entry

In this blog post, I update the static listener entries required by Data Guard broker (suffixed _DGMGRL). My demo environment doesn't use Oracle Restart or Oracle Grid Infrastructure, so this entry is mandatory.

If you use Oracle Restart or Oracle Grid Infrastructure, such an entry is not needed.

Oracle Data Guard Broker and Static Service Registration (KB142582)

What If I Have Multiple Standbys?

  • Repeat step Upgrade Standby for each standby before moving on to the next step, Upgrade Primary.

  • This also applies if you have cascading standbys or use far-sync instances.

What About Rolling Upgrades?

This is a very different technique. You can learn more in Virtual Classroom #10: Zero Downtime Operations of Oracle Database.

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade detects this and sets CLUSTER_DATABASE=FALSE at the appropriate time. It also updates the Grid Infrastructure configuration.

For warp-speed upgrades, take a look at distributed upgrade.

What If I Use Oracle Restart?

No changes. AutoUpgrade detects this and automatically updates the Grid Infrastructure configuration.

What If My Database Is Encrypted

You must use an auto-login keystore. AutoUpgrade checks this during the pre-upgrade analysis.

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use.

  • timezone_upg: AutoUpgrade upgrades the database time zone file after the actual upgrade. This requires an additional restart of the database and might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner.

  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.

  • em_target_name: Allow AutoUpgrade to create blackouts before restarting the database and to update the target configuration after the upgrade. Requires a local installation of emcli. See this blog post for details.

  • rman_catalog_connect_string: Relevant for databases that store backup metadata in a recovery catalog. Upgrades the recovery catalog schema following an upgrade. See this blog post for details.

Upgrade Oracle Database 19c PDB to 26ai with Data Guard using Refreshable Clone PDB

I’ve already shown you how to upgrade a single PDB to Oracle AI Database 26ai using refreshable clone PDB.

But what about Data Guard? Refreshable clone PDB supports only deferred recovery, which means I must restore the PDB after the upgrade.

Let’s see how it works.

Upgrade

For the upgrade, I follow the procedure for a non-Data Guard upgrade.

  • In my AutoUpgrade config file, I must either:
    • Omit manage_standbys_clause.
    • Or, set it to manage_standbys_clause=none.
  • Both methods trigger AutoUpgrade to plug in with deferred recovery, which is the only supported option when using refreshable clone PDB.
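In the config file, as a sketch, that single line would read:

```
upg1.manage_standbys_clause=none
```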

Once the upgrade completes, I can proceed and restore the PDB to my standby database.

Standby Database

  • I execute all commands on the same host – the standby system.

  • On the standby, I verify that recovery status is disabled:

    SQL> select open_mode, recovery_status
         from v$pdbs where name='NEWPDB1';
    
    OPEN_MODE    RECOVERY_STATUS
    ____________ __________________
    MOUNTED      DISABLED
    
    • This means that I plugged in with deferred recovery.
    • The standby is not protecting this PDB.
  • Next, I connect to the standby using RMAN, and I restore the PDB:

    connect target /
    run{
       allocate channel disk1 device type disk;
       allocate channel disk2 device type disk;
    
       set newname for pluggable database NEWPDB1 to new;
    
       restore pluggable database NEWPDB1
          from service <primary_service>
          section size 64G;
    }
    
    • You can add more channels depending on your hardware.
    • Replace NEWPDB1 with the name of your PDB.
    • <primary_service> is a connect string to the primary database.
  • Next, I connect to the standby database using Data Guard broker and turn off redo apply:

    edit database <stdby_unique_name> set state='apply-off';
    
  • Back in RMAN, still connected to the standby, I switch to the newly restored data files:

    switch pluggable database NEWPDB1 to copy;
    
  • Then, I connect to the standby and generate a list of commands that will online all the data files:

    alter session set container=NEWPDB1;
    select 'alter database datafile '||''''||name||''''||' online;' from v$datafile;
    
    • Save the commands for later.
    • There should be one row for each data file.
  • If my standby database is open (Active Data Guard), I must restart it in MOUNT mode.

    alter session set container=CDB$ROOT;
    shutdown immediate
    startup mount
    
  • Now, I can re-enable recovery and online the data files:

    alter session set container=NEWPDB1;
    alter pluggable database enable recovery;
    alter database datafile <file1> online;
    alter database datafile <file2> online;
    ...
    alter database datafile <filen> online;
    
    • I must connect to the PDB.
    • I must execute the alter database datafile ... online command for each data file.
  • I turn on redo apply:

    edit database <stdby_unique_name> set state='apply-on';
    
  • At this point, the standby protects my PDB.

  • After a minute or two, I check the Data Guard config:

    validate database <stdby_unique_name>;
    
  • Once my standby is in sync, I can do a switchover as the ultimate test:

    switchover to <stdby_unique_name>;
    
  • Now, I connect to the new primary and ensure the PDB opens in read write mode and unrestricted:

    select open_mode, restricted
    from v$pdbs
    where name='NEWPDB1';
    
    OPEN_MODE     RESTRICTED
    _____________ _____________
    READ WRITE    NO
    
  • You can find the full procedure in Making Use of Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (KB90519). Check the sections "Steps for Preparing to enable recovery of the PDB" and "Steps required for enabling recovery on the PDB after the files have been copied".

That’s It!

Refreshable clone PDB is a great way of upgrading a single PDB. It’s very easy, but you must take care of your standby afterward.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Multiple Standby Databases

You must restore the PDB to each standby database.

If you have multiple standby databases, it’s a lot of work for the primary to handle the restore pluggable database command from all standbys.

Imagine we have three standby databases. Here’s an alternative approach:

  • First, standby 1 restores from the primary.
  • Next, standby 2 restores from standby 1.
  • At the same time, standby 3 restores from the primary.
  • This spreads the load across your databases, allowing you to complete faster.
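Since RMAN restores from a service, standby 2 can pull the files directly from standby 1. A sketch, reusing the restore block from above with a different service (`<standby1_service>` is a placeholder for a connect string to standby 1):

```
connect target /
run{
   allocate channel disk1 device type disk;
   allocate channel disk2 device type disk;

   set newname for pluggable database NEWPDB1 to new;

   restore pluggable database NEWPDB1
      from service <standby1_service>
      section size 64G;
}
```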

What If My Database Is Encrypted?

AutoUpgrade handles the keys in the primary database.

After the upgrade, but before you begin the restore, you must copy the database keystore from the primary to the standby.

What If My Database Is A RAC Database?

The procedure works on an Oracle RAC database as well.

Upgrade Oracle Database 19c Non-CDB to 26ai and Convert to PDB with Data Guard and Reusing Data Files (Enabled Recovery)

Let me show you how to upgrade an Oracle Database 19c non-CDB with Data Guard to Oracle AI Database 26ai.

  • I reuse data files on the primary database.
  • I also reuse the data files on the standby database (enabled recovery).
  • This is a more complex procedure compared to deferred recovery. However, the PDB is immediately protected by the standby after plug-in.

The demo environment:

  • Two servers:
    • COPENHAGEN (primary)
    • AARHUS (standby)
  • Source non-CDB:
    • SID: DB19
    • Primary unique name: DB19_COPENHAGEN
    • Standby unique name: DB19_AARHUS
    • Oracle home: /u01/app/oracle/product/dbhome_19_27
  • Target CDB:
    • SID: CDB26
    • Primary unique name: CDB26_COPENHAGEN
    • Standby unique name: CDB26_AARHUS
    • Oracle home: /u01/app/oracle/product/dbhome_26_1
  • Data Guard broker manages the Data Guard configuration.
  • Oracle Restart (Grid Infrastructure) manages the databases.

Overview of the demo environment for upgrade to Oracle Database 26ai

You can still use the procedure if you don’t use Data Guard broker or Grid Infrastructure. Just be sure to change the commands accordingly.

1. Preparations On Primary

I’ve already prepared my database and installed a new Oracle home. I’ve also created a new CDB or decided to use an existing one. The CDB is also configured for Data Guard.

The maintenance window has started, and users have left the database.

  1. I create an AutoUpgrade config file. I call it upgrade26.cfg:
    global.global_log_dir=/home/oracle/autoupgrade/upgrade26
    upg1.source_home=/u01/app/oracle/product/dbhome_19_27
    upg1.target_home=/u01/app/oracle/product/dbhome_26_1
    upg1.sid=DB19
    upg1.target_cdb=CDB26
    upg1.manage_standbys_clause=standbys=all
    upg1.export_rman_backup_for_noncdb_to_pdb=yes
    
    • I specify the source and target Oracle homes. These are the Oracle homes of the source non-CDB and target CDB, respectively.
    • sid contains the name of my database.
    • target_cdb is the name of the CDB where I want to plug in.
    • manage_standbys_clause=standbys=all instructs AutoUpgrade to plug the PDB in with enabled recovery.
    • export_rman_backup_for_noncdb_to_pdb instructs AutoUpgrade to export RMAN metadata using DBMS_PDB.EXPORTRMANBACKUP. This makes it easier to use pre-plugin backups (see appendix).

2. Preparations On Standby

My standby database uses ASM. This complicates the plug-in on the primary because the standby database doesn’t know where to find the data files.

I solve this with ASM aliases. This allows the standby CDB to find the data files from the standby non-CDB and reuse them.

  1. On the standby, I connect to the non-CDB and get a list of all data files:

    select name from v$datafile;
    
  2. I also find the GUID of my non-CDB:

    select guid from v$containers;
    
    • The GUID doesn’t change in a Data Guard environment. It’s the same in all databases.
  3. Then I switch to the ASM instance on the standby host:

    export ORACLE_SID=+ASM1
    sqlplus / as sysasm	  
    
  4. Here, I create the directories for the OMF location of the future PDB (the non-CDB you are about to plug in). I insert the GUID found above:

    alter diskgroup data add directory '+DATA/CDB26_AARHUS/<GUID>';
    alter diskgroup data add directory '+DATA/CDB26_AARHUS/<GUID>/DATAFILE';
    
    • Notice how the CDB database unique name (CDB26_AARHUS) is part of the path.
    • During plug-in, the database keeps its GUID. It never changes.
  5. For each data file in my source non-CDB (including undo, excluding temp files), I must create an ASM alias, for instance:

    alter diskgroup data add alias
       '+DATA/CDB26_AARHUS/<GUID>/DATAFILE/users_273_1103046827'
       for '+DATA/DB19_AARHUS/DATAFILE/users.273.1103046827';
    
    • I must create an alias for each data file. I create the alias in the OMF location of the future PDB in the standby CDB.
    • The alias must not contain dots (.). That would violate the OMF naming standard. Notice how I replaced them with underscores.
    • The alias points to the location of the data file in the standby non-CDB location.
    • You can find a script to create the aliases in MOS note KB117147.
  6. I stop redo apply in the standby non-CDB:

    edit database 'DB19_AARHUS' set state='apply-off';
    
  7. If my standby is open (Active Data Guard), I restart it in MOUNT mode:

    srvctl stop db -d DB19_AARHUS -o immediate
    srvctl start db -d DB19_AARHUS -o mount
    
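The aliases in step 5 follow a mechanical pattern, so I can generate the commands from the data file list instead of typing them. Here's a sketch in shell; the sample paths mimic the ones above, and the disk group name, CDB unique name, and <GUID> are placeholders you must substitute:

```shell
# Sample input: the output of "select name from v$datafile"
# on the standby non-CDB, one path per line.
cat > datafiles.txt <<'EOF'
+DATA/DB19_AARHUS/DATAFILE/system.289.1103046825
+DATA/DB19_AARHUS/DATAFILE/users.273.1103046827
EOF

# Generate one "alter diskgroup ... add alias" command per data file.
# The alias name is the file name with dots replaced by underscores,
# placed in the OMF location of the future PDB.
while read -r df; do
  base=$(basename "$df" | tr '.' '_')
  printf "alter diskgroup data add alias\n"
  printf "   '+DATA/CDB26_AARHUS/<GUID>/DATAFILE/%s'\n" "$base"
  printf "   for '%s';\n" "$df"
done < datafiles.txt
```

I review the generated statements before running them in the ASM instance.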

3. Upgrade Primary

  1. On the primary, I start the upgrade:

    java -jar autoupgrade.jar -mode deploy -config upgrade26.cfg
    
    • AutoUpgrade re-analyzes my non-CDB and executes the pre-upgrade fixups.
    • Next, AutoUpgrade mounts the non-CDB and flushes redo to all standbys.
    • Then, it opens the non-CDB in read-only, generates the manifest file and shuts down.
    • Finally, AutoUpgrade stops the job.
  2. AutoUpgrade prints the following message:

    ----------------- Continue with the manual steps -----------------
    There is a job with manual steps pending.
    The checkpoint change number is 2723057 for database DB19.
    For the standby database <DB19_AARHUS>, use the checkpoint number <2723057> to recover the database.
    
    You can find the SCN information in:
    /home/oracle/autoupgrade/upgrade26/DB19/101/drain/scn.json
    Once these manual steps are completed, you can resume job 101
    ------------------------------------------------------------------
    
    • At this point, all data files on the primary non-CDB are consistent at a specified SCN, 2723057.
    • I must recover the standby data files to the same SCN.
  3. On the standby host, I connect to the non-CDB:

    alter database recover managed standby database
    until change 2723057;
    
    • until change is set to the SCN provided by AutoUpgrade.
    • The standby now recovers all data files to the exact same SCN as the primary. This is a requirement for re-using the data files on plug-in.
  4. I verify that no data files are at a different SCN:

    select file#
    from v$datafile_header
    where checkpoint_change# != 2723057;
    
    • checkpoint_change# is the SCN specified by AutoUpgrade.
    • The query should return no rows.
  5. I shut down the standby non-CDB and remove the database configuration:

    srvctl stop database -d DB19_AARHUS -o immediate
    srvctl remove database -d DB19_AARHUS -noprompt
    
    • I remove the database from /etc/oratab as well.
    • From this point, the standby non-CDB must not start again.
    • The standby CDB will use the data files.
  6. Back on the primary, in the AutoUpgrade console, I instruct AutoUpgrade to continue the plug-in, upgrade, and conversion:

    upg> resume -job 101
    
    • AutoUpgrade plugs the non-CDB into the CDB.
    • The plug-in operation propagates via redo to the standby.
  7. On the standby, I verify that the standby CDB found the data files on plug-in. I find the following in the alert log:

    2025-12-18T16:08:28.772125+00:00
    Recovery created pluggable database DB19
    DB19(3):Recovery scanning directory +DATA/CDB26_AARHUS/<GUID>/DATAFILE for any matching files.
    DB19(3):Recovery created file +DATA/CDB26_AARHUS/<GUID>/DATAFILE/system_289_1220179503
    DB19(3):Successfully added datafile 12 to media recovery
    DB19(3):Datafile #12: '+DATA/CDB26_AARHUS/<GUID>/DATAFILE/system_289_1220179503'
    DB19(3):Recovery created file +DATA/CDB26_AARHUS/<GUID>/DATAFILE/sysaux_291_1220179503
    DB19(3):Successfully added datafile 13 to media recovery
    DB19(3):Datafile #13: '+DATA/CDB26_AARHUS/<GUID>/DATAFILE/sysaux_291_1220179503'
    
    • I can see that the standby CDB scans the OMF location (+DATA/CDB26_AARHUS/<GUID>/DATAFILE) for the data files.
    • However, the data files are in the non-CDB location (+DATA/DB19_AARHUS/DATAFILE).
    • For each data file, the standby follows the alias that I created and reuses the existing non-CDB data file.
  8. Back on the primary, I wait for AutoUpgrade to complete the plug-in, upgrade and PDB conversion. In the end, AutoUpgrade prints the following:

    Job 101 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    Please check the summary report at:
    /home/oracle/autoupgrade/upgrade26/cfgtoollogs/upgrade/auto/status/status.html
    /home/oracle/autoupgrade/upgrade26/cfgtoollogs/upgrade/auto/status/status.log
    
    • The non-CDB (DB19) is now a PDB in my CDB (CDB26).
    • AutoUpgrade removes the non-CDB (DB19) registration from /etc/oratab and Grid Infrastructure on the primary. I’ve already done that for the standby.

4. Check Data Guard

  1. On the standby CDB, ensure that the PDB is MOUNTED and recovery status is ENABLED:

    --Should be MOUNTED, ENABLED
    select open_mode, recovery_status 
    from   v$pdbs 
    where  name='DB19';
    
  2. I also ensure that the recovery process is running:

    --Should be APPLYING_LOG
    select process, status, sequence# 
    from   v$managed_standby
    where  process like 'MRP%'; 	
    
  3. Next, I use Data Guard broker (dgmgrl) to validate my configuration:

    validate database "CDB26_COPENHAGEN"
    validate database "CDB26_AARHUS"
    
    • Both databases must report that they are ready for switchover.
  4. Optionally, but strongly recommended, I perform a switchover:

    switchover to "CDB26_AARHUS"
    
  5. The former standby (CDB26_AARHUS) is now primary. I verify that my PDB opens:

    alter pluggable database DB19 open;
    
    --Must return READ WRITE, ENABLED, NO
    select open_mode, recovery_status, restricted
    from   v$pdbs
    where  name='DB19';
    
    • If something went wrong previously in the process, the PDB won’t open.

5. Post-upgrade Tasks

  1. I take care of the post-upgrade tasks.

  2. I update any profiles or scripts that use the database.

  3. I clean up and remove the old source non-CDB (DB19). On both primary and standby, I remove:

    • Database files, like PFile, SPFile, password file, control file, and redo logs.
    • Database directory, like diagnostic_dest, adump, or db_recovery_file_dest.
    • Since I reused the data files in my CDB, I don’t delete those.

Final Words

I’m using AutoUpgrade for the entire upgrade; nice and simple. AutoUpgrade helps me recover the non-CDB standby database, so I can reuse the data files on all databases in my Data Guard config.

When I reuse the data files, I no longer have the non-CDB for rollback. Be sure to plan accordingly.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

What Happens If I Make a Mistake?

If the standby database fails to find the data files, the recovery process stops. Not only for the recently plugged-in PDBs, but for all PDBs in the CDB.

This is, of course, a critical situation!

  • I ensure to perform the operation correctly and double-check afterward (and then check again).
  • If I have Active Data Guard and PDB Recovery Isolation turned on, recovery stops only in the newly plugged-in PDB. The CDB continues to recover the other PDBs.

Data Files in ASM or OMF in Regular File System

Primary Database

  • When I create the PDB, I’m re-using the data files. The data files are in the OMF location of the non-CDB database, e.g.:

    +DATA/DB19_COPENHAGEN/DATAFILE
    
  • However, the proper OMF location for my PDB is:

    +DATA/CDB26_COPENHAGEN/<PDB GUID>/DATAFILE
    
  • The CDB doesn’t care about this anomaly. However, if I want to conform to the OMF naming standard, I must move the data files. Find the data files:

    select file#, name from v$datafile;
    
  • Use online data file move to move the files that are in the wrong location. The database automatically generates a new OMF name:

    alter database move datafile <file#>;
    
    • Online data file move creates a copy of the data files before switching to the new file and dropping the old one. So, I need additional disk space, and the operation takes time while the database copies the file.

Standby Database

The data files are in the wrong OMF location, like on the primary.

I can use online data file move on the standby database as well. However, I must stop redo apply and open the standby database to do that. If I don’t have a license for Active Data Guard, I must stop redo apply before opening the database.

  • Stop redo apply:

    edit database 'CDB26_AARHUS' set state='apply-off';
    
  • Open the standby CDB and PDB:

    alter database open;
    alter pluggable database DB19 open;
    
  • Find the data files:

    alter session set container=DB19;
    select file#, name from v$datafile;
    
  • Move those data files that are in the wrong location. This command also removes the original file and the alias:

    alter database move datafile <file#>;
    
  • Restart the standby CDB in mount mode. If I have Active Data Guard, I leave the standby database in OPEN mode:

    srvctl stop database -d CDB26_AARHUS
    srvctl start database -d CDB26_AARHUS -o mount
    
  • Restart redo apply:

    edit database 'CDB26_AARHUS' set state='apply-on';
    

There are two things to pay attention to:

  • I must open the standby database. If redo apply is turned off, I can open the standby database without a license for Active Data Guard. Be sure not to violate your license.
  • I must turn off redo apply while I perform the move. Redo transport is still active, so the standby database receives the redo logs. However, apply lag increases while I move the files. In case of a switchover or failover, I need more time to close the apply gap.

Rollback Options

When you convert a non-CDB to a PDB, you can’t use Flashback Database as a rollback method. You need to rely on other methods, like:

  • RMAN backups
  • Storage snapshots
  • Additional standby database (another source non-CDB standby, which you leave behind)
  • Refreshable clone PDB

What If I Have Multiple Standby Databases?

You must repeat the relevant steps for each standby database.

You can mix and match standbys with deferred and enabled recovery. Let’s say that you want to use enabled recovery on DR standbys and deferred recovery on the reporting standbys.

Role              Name                  Method
Local DR          CDB26_COPENHAGEN2     Enabled recovery
Local reporting   CDB26_COPENHAGENREP   Deferred recovery
Remote DR         CDB26_AARHUS          Enabled recovery
Remote reporting  CDB26_AARHUSREP       Deferred recovery

You would set the following in your AutoUpgrade config file:

upg1.manage_standbys_clause=standbys=CDB26_COPENHAGEN2,CDB26_AARHUS

You would need to merge the two procedures. This is left as an exercise for the reader.

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade detects this and sets CLUSTER_DATABASE=FALSE at the appropriate time. It also removes the non-CDB from the Grid Infrastructure configuration.

Pre-plugin Backups

After converting a non-CDB to PDB, you can restore a PDB using a combination of backups from before and after the plug-in operation. Backups from before the plug-in are called pre-plugin backups.

A restore using pre-plugin backups is more complicated; however, AutoUpgrade eases that by exporting the RMAN backup metadata (config file parameter export_rman_backup_for_noncdb_to_pdb).

I suggest that you:

  • Start a backup immediately after the upgrade, so you don’t have to use pre-plugin backups.
  • Practice restoring with pre-plugin backups.
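In outline, a pre-plugin restore uses RMAN's PREPLUGIN clause against the converted PDB. This is a sketch only; it assumes the PDB is closed beforehand, the metadata was exported during the upgrade, and that you finish with a normal recovery and open of the PDB:

```
restore pluggable database DB19 from preplugin;
recover pluggable database DB19 from preplugin;
recover pluggable database DB19;
```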

Transparent Data Encryption (TDE)

AutoUpgrade fully supports upgrading an encrypted database. I can still use the above procedure with a few changes.

  • You’ll need to input the non-CDB database keystore password into the AutoUpgrade keystore. You can find the details in a previous blog post.

  • In the container database, AutoUpgrade always adds the database encryption keys to the unified keystore. After the conversion, you can switch to an isolated keystore.

  • At one point, you must copy the keystore to the standby database, and restart it. Check the MOS note KB117147 for additional details.

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use.

  • timezone_upg: AutoUpgrade upgrades the database time zone file after the actual upgrade. This requires an additional restart of the database and might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner.

  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.

Further Reading

Upgrade Oracle Database 19c Non-CDB to 26ai and Convert to PDB with Data Guard and Restoring Data Files (Deferred Recovery)

Let me show you how to upgrade an Oracle Database 19c non-CDB with Data Guard to Oracle AI Database 26ai.

  • I reuse data files on the primary database.
  • But restore the data files on the standby database after the migration (deferred recovery).

The demo environment:

  • Two servers:
    • COPENHAGEN (primary)
    • AARHUS (standby)
  • Source non-CDB:
    • SID: DB19
    • Primary unique name: DB19_COPENHAGEN
    • Standby unique name: DB19_AARHUS
    • Oracle home: /u01/app/oracle/product/dbhome_19_27
  • Target CDB:
    • SID: CDB26
    • Primary unique name: CDB26_COPENHAGEN
    • Standby unique name: CDB26_AARHUS
    • Oracle home: /u01/app/oracle/product/dbhome_26_1
  • Data Guard broker manages the Data Guard configuration.
  • Oracle Restart (Grid Infrastructure) manages the databases.

Overview of the demo environment for upgrade to Oracle Database 26ai

You can still use the procedure if you don’t use Data Guard broker or Grid Infrastructure. Just be sure to change the commands accordingly.

1. Upgrade and Convert

I’ve already prepared my database and installed a new Oracle home. I’ve also created a new CDB or decided to use an existing one. The CDB is also configured for Data Guard.

The maintenance window has started, and users have left the database.

  1. I ensure my standby database has received and applied all changes. Then, I stop it:

    srvctl stop database -d DB19_AARHUS
    
    • I won’t need the non-CDB standby database anymore.
    • For now, I shut it down and keep it just in case I need to roll back.
  2. On the primary, I create an AutoUpgrade config file. I call it upgrade26.cfg:

    global.global_log_dir=/home/oracle/autoupgrade/upgrade26
    upg1.source_home=/u01/app/oracle/product/dbhome_19_27
    upg1.target_home=/u01/app/oracle/product/dbhome_26_1
    upg1.sid=DB19
    upg1.target_cdb=CDB26
    upg1.manage_standbys_clause=standbys=none
    upg1.export_rman_backup_for_noncdb_to_pdb=yes
    
    • sid contains the name or SID of my database.
    • I specify the source and target Oracle homes. These are the Oracle homes of the source non-CDB and target CDB, respectively.
    • manage_standbys_clause=standbys=none instructs AutoUpgrade to plug the PDB in with deferred recovery.
    • export_rman_backup_for_noncdb_to_pdb instructs AutoUpgrade to export RMAN metadata using DBMS_PDB.EXPORTRMANBACKUP. This makes it easier to use pre-plugin backups (see appendix).
  3. On the primary host, I start AutoUpgrade to plug in, upgrade, and convert:

    java -jar autoupgrade.jar -mode deploy -config upgrade26.cfg
    
  4. While the job progresses, I monitor it:

    upg> lsj -a 30
    
    • The -a 30 option automatically refreshes the information every 30 seconds.
    • I can also use status -job 100 -a 30 to get detailed information about a specific job.
  5. When the upgrade completes, check the summary report for details:

    cat /home/oracle/autoupgrade/upgrade26/cfgtoollogs/upgrade/auto/status/status.log
    
  6. The summary report lists the following details from the POSTCHECKS stage:

    The following PDB(s) were created with standbys=none option. Refer to the postcheck result /home/oracle/autoupgrade/upgrade26/DB19/100/postchecks/db19_copenhagen_postupgrade.log for more details on manual actions needed. DB19

  7. I find additional details in the referenced log file:

    [action] Manual steps need to be performed after upgrade to copy the files to the standby database and enable recovery of the PDB from PRIMARY to STANDBY. Refer to MOS document Doc ID 1916648.1 for detailed steps. [broken rule] The following PDB(s) [DB19] were created with standbys=none option. [rule] On a Data Guard configuration, the CREATE PLUGGABLE DATABASE statement needs to be executed with clause STANDBYS=NONE to avoid impacting redo apply. That clause allows for deferral of file instantiation on the standby and the physical standby database to continue to protect existing pluggable databases. The clause allows the general structure of the PDB to be created on all physical standbys but all files belonging to the PDB are marked as OFFLINE/RECOVER at the standby.

    • I deal with this issue in the next chapter.

2. Restore PDB on Standby

  1. On the standby database, stop redo apply in the standby CDB using dgmgrl:

    edit database CDB26_AARHUS set state='apply-off';
    
  2. Use RMAN to connect to the standby CDB:

    rman target sys@CDB26_AARHUS
    
  3. Restore the PDB and switch to the new data files:

    run {
       allocate channel d1 device type disk;
       allocate channel d2 device type disk;
       set newname for pluggable database DB19 to new;
       restore pluggable database DB19 from service CDB26_COPENHAGEN;
    }
    
  4. Connect to the standby CDB and enable recovery of the PDB:

    alter session set container=DB19;
    alter pluggable database enable recovery;
    
  5. Then, online all the data files in the PDB:

    select file#, status from v$datafile;
    alter database datafile <file#> online;
    alter database datafile <file#> online;
    ...
    alter database datafile <file#> online;
    
  6. Restart redo apply in the standby CDB:

    edit database CDB26_AARHUS set state='apply-on';
    
  7. Connect to the standby CDB and verify the PDB’s recovery status (should be ENABLED). Ensure that the recovery process is running (should be APPLYING_LOG):

    select recovery_status 
    from   v$pdbs 
    where  name='DB19';
    
    select process, status, sequence# 
    from   v$managed_standby
    where  process like 'MRP%'; 
    
  8. Optionally, but strongly recommended, perform a switchover as the ultimate test. Connect to dgmgrl using username and password:

    dgmgrl sys as sysdba
    
  9. Perform the switchover:

    validate database "CDB26_AARHUS";
    switchover to "CDB26_AARHUS";
    
  10. Then, finally, I connect to the new primary CDB, CDB26_AARHUS. I ensure the PDB is open in READ WRITE mode and unrestricted. I check that the status of all data files is SYSTEM or ONLINE:

    alter pluggable database DB19 open;
    
    alter session set container=DB19;
    
    select open_mode, restricted, recovery_status 
    from   v$pdbs;
    
    select name, status 
    from   v$datafile;
    

3. Post-upgrade Tasks

  1. I take care of the post-upgrade tasks.

  2. I update any profiles or scripts that use the database.

  3. I remove the non-CDB entry from /etc/oratab and Grid Infrastructure on the standby. AutoUpgrade takes care of the primary.

  4. I clean up and remove the old source non-CDB. On both primary and standby, I remove:

    • Database files, like PFile, SPFile, password file, control file, and redo logs.
    • Database directory, like diagnostic_dest, adump, or db_recovery_file_dest.

  5. On the standby, I remove the source non-CDB data files using asmcmd:

    cd +DATA
    rm -rf DB19_AARHUS/DATAFILE
    
    • Important: I don’t do that on the primary because I reused the data files during the plug-in.

That’s It!

I’m using AutoUpgrade for the entire upgrade; nice and simple. But I must take care of the standby database after the migration.

When I reuse the data files on the primary database, I no longer have the non-CDB for rollback. Be sure to plan accordingly.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Data Files in ASM/OMF

Primary Database

  • When I create the PDB, I reuse the data files from the primary database. The data files are in the OMF location of the non-CDB database, e.g.:

    +DATA/DB19_COPENHAGEN/DATAFILE
    
  • However, after plug-in, the proper OMF location for my PDB data files is:

    +DATA/CDB26_COPENHAGEN/<PDB GUID>/DATAFILE
    
  • The CDB doesn’t care about this anomaly. However, if I want to conform to the OMF naming standard, I must move the data files. Find the data files:

    select file#, name from v$datafile;
    
  • I use online data file move to move the files that are in the wrong location. I don’t specify a new data file name, so the database generates an OMF name:

    alter database move datafile <file#>;
    
    • Online data file move creates a copy of the data files before switching to the new file and dropping the old one. So I need additional disk space, and the operation takes time while the database copies the file.

Standby Database

  • There is nothing to do on the standby database. When I restore the data files, they are placed in the right OMF location.

Rollback Options

When you convert a non-CDB to a PDB, you can’t use Flashback Database as a rollback method. You need to rely on other methods, like:

  • RMAN backups
  • Storage snapshots
  • Standby database (the source non-CDB standby database, which you leave behind)
  • Refreshable clone PDB

What If I Have Multiple Standby Databases?

You must repeat the relevant steps for each standby database.

You can mix and match standbys with deferred and enabled recovery. Let’s say that you want to use enabled recovery on DR standbys and deferred recovery on the reporting standbys.

Role              Name                  Method
Local DR          CDB26_COPENHAGEN2     Enabled recovery
Local reporting   CDB26_COPENHAGENREP   Deferred recovery
Remote DR         CDB26_AARHUS          Enabled recovery
Remote reporting  CDB26_AARHUSREP       Deferred recovery

You would set the following in your AutoUpgrade config file:

upg1.manage_standbys_clause=standbys=CDB26_COPENHAGEN2,CDB26_AARHUS

You would need to merge the two procedures. This is left as an exercise for the reader.

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade detects this and sets CLUSTER_DATABASE=FALSE at the appropriate time. It also removes the non-CDB from the Grid Infrastructure configuration.

Pre-plugin Backups

After converting a non-CDB to PDB, you can restore a PDB using a combination of backups from before and after the plug-in operation. Backups from before the plug-in are called pre-plugin backups.

A restore using pre-plugin backups is more complicated; however, AutoUpgrade eases that by exporting the RMAN backup metadata (config file parameter export_rman_backup_for_noncdb_to_pdb).

I suggest that you:

  • Start a backup immediately after the upgrade, so you don’t have to use pre-plugin backups.
  • Practice restoring with pre-plugin backups.

What If My Database Is Encrypted?

AutoUpgrade fully supports upgrading an encrypted database. I can still use the above procedure with a few changes.

You’ll need to input the non-CDB database keystore password into the AutoUpgrade keystore. You can find the details in a previous blog post.

In the container database, AutoUpgrade always adds the database encryption keys to the unified keystore. After the conversion, you can switch to an isolated keystore.

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use.

  • timezone_upg: AutoUpgrade upgrades the database time zone file after the actual upgrade. This requires an additional restart of the database and might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner.

  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.
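
For illustration, this is how those parameters could look in the config file. The script paths are hypothetical, and skipping the time zone file upgrade is just one possible choice:

```
upg1.timezone_upg=no
upg1.before_action=/home/oracle/scripts/pre_job.sh
upg1.after_action=/home/oracle/scripts/post_job.sh
```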

Further Information

Using Refreshable Clone PDBs from a Standby Database

The other day, I found myself praising refreshable clone PDBs to a customer (which I often do because it’s a killer feature). They liked the feature too but asked:

> We are concerned about the impact on the source database. When AutoUpgrade connects to the source database and clones the database, can we offload the work to a standby database?

Refreshable clone PDBs can eat up your resources if you don’t constrain the target CDB. So, let’s see what we can do.

Mounted Standby Database

This won’t work, because you must be able to connect to the database via a regular database link. Further, AutoUpgrade and the cloning process must be able to execute queries in the source database, which is not possible on a mounted database.

Open Standby Database / Active Data Guard

What if you stop redo apply and open the standby database? Or if you have Active Data Guard?

In this case, the database would be open in read-only mode, and those queries would work. However, the refreshable clone PDB feature was developed to work in, and requires, a read-write database, so this won't work either – not even if you enable automatic redirection of DML operations (ADG_REDIRECT_DML).

Even if it did work, we wouldn't recommend it, because we recommend that you run analyze and fixups mode on the source database, which isn't possible on a read-only database. You could run analyze and fixups on the primary database instead. But if you're worried about affecting your primary and want to offload to the standby, is that really an option?

Snapshot Standby Database

What about a snapshot standby? That’s a read-write database. Let’s give it a try.

  1. Convert the source standby to a snapshot standby:
    DGMGRL> convert database '...' to snapshot standby;
    
    • The standby must remain a snapshot standby for the entire duration of the job. If you need to switch over or fail over to the standby, you must restart the entire operation.
  2. Ensure the PDB is open on the source standby.
    alter pluggable database ... open;
    
    • Otherwise, you will run into ORA-03150 when querying the source database over the database link.
  3. In the source standby, create the user used by the database link and grant appropriate permissions:
    create user dblinkuser identified by ...;
    grant create session, create pluggable database, select_catalog_role to dblinkuser;
    grant read on sys.enc$ to dblinkuser;
    
  4. In the target CDB, create a database link that points to the PDB in source standby:
    create database link clonepdb
    connect to dblinkuser identified by ...
    using '';
    
  5. Create an AutoUpgrade config file:
    global.global_log_dir=/home/oracle/autoupgrade/log
    global.keystore=/home/oracle/autoupgrade/keystore
    upg1.source_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
    upg1.target_home=/u01/app/oracle/product/23.0.0/dbhome_1
    upg1.sid=CDB19
    upg1.target_cdb=CDB23
    upg1.pdbs=PDB1
    upg1.source_dblink.PDB1=CLONEPDB 300
    upg1.target_pdb_copy_option.PDB1=file_name_convert=none
    upg1.start_time=+12h
    
  6. Start AutoUpgrade in deploy mode:
    java -jar autoupgrade.jar ... -mode deploy
    
  7. Convert the source standby back to a physical standby:
    DGMGRL> convert database '...' to physical standby;
    

Is It Safe?

Using a standby database for anything other than your DR strategy is sometimes perceived as risky. But it is not, as I explain in this blog post (section What Happens If I Need to Switch Over or Fail Over?).

Happy upgrading!

Introduction to Patching Oracle Data Guard

Here’s a blog post series about patching Oracle Data Guard in single instance configuration. For simplicity, I am patching with Oracle AutoUpgrade to automate the process as much as possible.

First, a few ground rules:

The Methods

There are three ways of patching Data Guard:

All At Once

  • You patch all databases at the same time.
  • You need an outage until you’ve patched all databases.
  • You need to do more work during the outage.
  • You turn off redo transport while you patch.

Standby-first with restart

  • All the patches you apply must be standby-first installable (see appendix).
  • You need an outage to stop the primary database and restart it in the target Oracle home.
  • During the outage, you have less work to do compared to all at once, and less work overall compared to standby-first with switchover.
  • The primary database remains the same. It is useful if you have an async configuration with a much more powerful primary database or just prefer to have a primary database at one specific location.

Standby-first with switchover

  • All the patches you apply must be standby-first installable (see appendix).
  • You need an outage to perform a switchover. If your application is well-configured, users will just experience it as a brownout (hanging for a short period while the switchover happens).
  • During the outage, you have little to do, but overall, there are more steps.
  • After the outage, if you switch over to an Active Data Guard standby, the read-only workload has already pre-warmed the buffer cache and shared pool.

Summary

| All at once | Standby-first with restart | Standby-first with switchover |
| --- | --- | --- |
| Works for all patches | Works for most patches | Works for most patches |
| Bigger interruption | Bigger interruption | Smaller interruption |
| Downtime is a database restart | Downtime is a database restart | Downtime/brownout is a switchover |
| Slightly more effort | Least effort | Slightly more effort |
| Cold database | Cold database | Pre-warmed database if ADG |

Here’s a decision tree you can use to find the method that suits you.

Decision tree showing which method to choose

What If

RAC

These blog posts focus on single instance configuration.

Conceptually, patching Data Guard with RAC databases is the same, but you can't use the step-by-step guides in this blog post series. Further, AutoUpgrade doesn't support all methods of patching RAC databases (yet).

I suggest that you take a look at these blog posts instead:

Or even better, use Oracle Fleet Patching and Provisioning.

Oracle Restart

You can use these blog posts if you’re using Oracle Restart. You can even combine patching Oracle Restart and Oracle Database into one operation using standby-first with restart.

We’re Really Sensitive To Downtime?

In these blog posts, I choose the easy way – and that’s using AutoUpgrade. It automates many of the steps for me and has built-in safeguards to ensure things don’t go south.

But this convenience comes at a price: a slightly longer outage. Partly because AutoUpgrade doesn't finish a job before all post-upgrade tasks are done (like Datapatch and gathering dictionary stats).

If you’re really concerned about downtime, you might be better off with your own automation, where you can open the database for business as quickly as possible while you run Datapatch and other post-patching activities in the background.

Datapatch

Just a few words about patching Data Guard and Datapatch.

  • You always run Datapatch on the primary.
  • You run Datapatch just once, and the changes to the data dictionary propagate to the standby via redo.
  • You run Datapatch when all databases are running out of the new Oracle home or when redo transport is turned off. The important part is that the standby that applies the Datapatch redo must be on the same patch level as the primary.
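
One way to verify that primary and standby are on the same binary patch level is to compare the Oracle home inventories on both hosts (hostnames taken from the examples in this series):

```
[oracle@copenhagen] $ORACLE_HOME/OPatch/opatch lspatches
[oracle@aarhus]     $ORACLE_HOME/OPatch/opatch lspatches
```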

Happy patching

Appendix

Standby-First Installable

Standby-first patch apply is when you patch the standby database first, without disabling redo transport or redo apply.

You can only use standby-first patch apply if all the patches are classified as standby-first installable. For each patch, you must:

  • Examine the patch readme file.
  • One of the first lines tells you whether this specific patch is standby-first installable. It typically reads: > This patch is Data Guard Standby-First Installable

Release Updates are always standby-first installable, and so are most of the patches for Oracle Database.

In rare cases, you'll find a patch that is not standby-first installable; then you must patch Data Guard using the all at once method.
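
Assuming the patch readme has been extracted alongside the patch (file name may vary), a quick check could be:

```
grep -i "standby-first installable" README.html
```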

Other Blog Posts in the Series

How To Patch Oracle Data Guard Using AutoUpgrade And Standby-First Patch Apply With Switchover

Let me show you how I patch my Oracle Data Guard configuration. I make it as easy as possible using Oracle AutoUpgrade. I reduce the interruption by doing standby-first patch apply with a switchover.

  • My Data Guard configuration consists of two databases:
    • SID: SALES
    • Databases: SALES_COPENHAGEN and SALES_AARHUS
    • Hosts: copenhagen and aarhus
    • Primary database: SALES_COPENHAGEN running on copenhagen

Preparations

You should do these preparations in advance of your maintenance window. They don't interrupt operations on your databases.

  • I download the patches using AutoUpgrade.

    • I can download the patches from any computer. It doesn’t have to be one of the database hosts, which typically don’t have internet access.
  • I verify all patches are standby-first installable and my configuration meets the requirements for standby-first patch apply.

  • I create a new Oracle home on all hosts.

    • Create a config file called sales.cfg:
      global.global_log_dir=/home/oracle/autoupgrade-patching/sales
      patch1.source_home=/u01/app/oracle/product/19.3.0.0/dbhome_1
      patch1.target_home=/u01/app/oracle/product/19/dbhome_19_26_0
      patch1.sid=SALES
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.download=no
      
      • Start AutoUpgrade in create_home mode:
      java -jar autoupgrade.jar -config sales.cfg -patch -mode create_home
      
      • AutoUpgrade also runs root.sh if the oracle user has sudo privileges. Otherwise, I must execute root.sh manually.
  • Optionally, but recommended, I run an analysis on the primary database:

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -patch -mode analyze
    
    • Check the findings in the summary report.

Patching

Proceed with the following when your maintenance window starts.

  • Update listener.ora on the standby host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  • I reload the listener:

    [oracle@aarhus] lsnrctl reload
    
  • Patch the standby database:

    [oracle@aarhus] java -jar autoupgrade.jar -config sales.cfg -mode deploy
    
    • I don’t disable redo transport/apply.
  • Optionally, test the application of patches using a snapshot standby database.

  • The interruption starts!

  • Switch over to SALES_AARHUS:

    DGMGRL> switchover to sales_aarhus;
    
    • Perform draining in advance according to your practices.
    • Depending on how your application is configured, the users will experience this interruption as a brown-out or downtime.
  • Update listener.ora on the new standby host (copenhagen). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  • I reload the listener:

    [oracle@copenhagen] lsnrctl reload
    
  • Patch the new standby database (see appendix):

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -mode deploy
    
  • Verify the Data Guard configuration and ensure the standby database is receiving and applying redo:

    DGMGRL> show database SALES_COPENHAGEN;
    DGMGRL> show database SALES_AARHUS;
    DGMGRL> validate database SALES_COPENHAGEN;
    DGMGRL> validate database SALES_AARHUS;
    

Post-Patching

  • Connect to the new primary database and execute Datapatch. You do that by calling AutoUpgrade in upgrade mode:
    [oracle@aarhus] java -jar autoupgrade.jar -config sales.cfg -mode upgrade -clear_recovery_data
    
    • Since I’m reusing the same config file, I must add the -clear_recovery_data flag. Otherwise, AutoUpgrade gets a little confused.

Happy Patching!

Appendix

Static Listener Entry

In this blog post, I update the static listener entries required by Data Guard broker (suffixed _DGMGRL). My demo environment doesn't use Oracle Restart or Oracle Grid Infrastructure, so this entry is mandatory.

If you use Oracle Restart or Oracle Grid Infrastructure, such an entry is not needed.
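
For reference, a static entry for SALES_AARHUS could look like the sketch below in listener.ora. The Oracle home path is illustrative; use your own target home:

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = SALES_AARHUS_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/19/dbhome_19_26_0)
      (SID_NAME = SALES)
    )
  )
```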

Further Reading

Other Blog Posts in the Series

Avoid Problems on the Primary Database by Testing on a Snapshot Standby

One of the advantages of standby-first patch apply is that I can test the patches in a production-like environment (the standby) before applying them to the primary. Should I find any issues with the patches, I can stop the process and avoid impacting the primary database.

Here’s an overview of the process.

For demo purposes, my Data Guard configuration consists of two databases:

  • SID: SALES
  • Databases: SALES_COPENHAGEN and SALES_AARHUS
  • Hosts: copenhagen and aarhus
  • Primary database: SALES_COPENHAGEN running on copenhagen

How To

This procedure starts right after I've patched the standby (SALES_AARHUS). The standby now runs out of the target Oracle home, whereas the primary database (SALES_COPENHAGEN) still runs out of the source Oracle home.

  • Convert the patched standby to a snapshot standby:

    DGMGRL> convert database SALES_AARHUS to snapshot standby;
    
  • Test the patch apply by running Datapatch on the standby:

    [oracle@aarhus] $ORACLE_HOME/OPatch/datapatch
    
    • Normally, you run Datapatch on the primary database, and the changes made by the patches go to the standby via redo.
    • But since I converted to a snapshot standby, it is now open like a normal read-write database, so I can run Datapatch on it.
    • If Datapatch completes without problems on the standby, I can be pretty sure it will do so on the primary as well. The standby is after all an exact copy of the primary database.
  • Optionally, perform additional testing on the standby.

    • I can connect any application and perform additional tests.
    • I can use SQL Performance Analyzer to check for regressing SQL statements.
    • I can make changes to any data in the standby. It is protected by a restore point.
  • When done, convert the snapshot standby back to a physical standby:

    DGMGRL> convert database SALES_AARHUS to physical standby;
    
    • This implicitly shuts down the standby, flashes back to the restore point and re-opens the database as a physical standby.
    • All changes made when it was a snapshot standby, including the Datapatch run, are undone.

Continue the patching procedure on the primary database as described elsewhere.

Is It Safe?

Sometimes, when I suggest using the standby for testing, people are like: Huh! Seriously?

What Happens If I Need to Switch Over or Fail Over?

I can still perform a switchover or a failover. However, it will take a little longer.

When I convert to snapshot standby:

  • Redo transport is still active.
  • Redo apply is turned off.

So, the standby receives all redo from the primary but doesn’t apply it. Since you normally test for 10-20 minutes, this would be the maximum apply lag. On a well-oiled standby, it shouldn’t take more than a minute or two to catch up.
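
After converting back to a physical standby, you can watch the lag shrink in Data Guard broker; the output of show database includes the transport and apply lag:

```
DGMGRL> show database SALES_AARHUS;
```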

When performing a switchover or failover on a snapshot standby, you should expect an increase in the time it takes to:

  • Shut down
  • Flashback
  • Apply redo

I’d be surprised if that took more than 5 minutes. If your RTO doesn’t allow for a longer period:

  • Get a second standby.
  • Consider the reduction in risk you get when you test on the standby. Perhaps a short increase in RTO could be allowed after all.

What Happens If Datapatch Fails?

If Datapatch fails on my snapshot standby, I should be proud of myself. I just prevented the same problem from hitting production.

  • I grab all the diagnostic information I need, so I can work with Oracle Support on the issue.
  • I convert back to physical standby, which undoes the failed Datapatch run.
  • If I expect to solve the issue quickly, I leave the standby running in the target Oracle home. Otherwise, I put it back into the source Oracle home.

So, yes, it’s safe to use!

Happy testing

Appendix

Other Blog Posts in the Series