Upgrade Oracle Database PDB 19c to 26ai with Data Guard and Re-using Data Files (Enabled Recovery)

Let me show you how to upgrade a single PDB to Oracle AI Database 26ai.

This is called an unplug-plug upgrade and is much faster than a full CDB upgrade.

But what about Data Guard? I want to reuse the data files and plug in using enabled recovery. This ensures that my standby protects the PDB immediately.

Let’s see how it works.

Environment

The demo environment:

  • Two servers:
    • COPENHAGEN (primary)
    • AARHUS (standby)
  • Source CDB:
    • SID: CDB19
    • Primary unique name: CDB19_COPENHAGEN
    • Standby unique name: CDB19_AARHUS
  • Target CDB:
    • SID: CDB26
    • Primary unique name: CDB26_COPENHAGEN
    • Standby unique name: CDB26_AARHUS
  • PDB to upgrade: SALES

Overview of the demo environment for upgrade to Oracle Database 26ai

1. Prepare Standby

I’ve already prepared my database (SALES) and installed a new Oracle home. I’ve also created a new container database (CDB26) and configured it for Data Guard.

The maintenance window has started, and users have left the database.

  1. If I use Active Data Guard, I close my PDB (SALES) on the source standby (CDB19_AARHUS):
    alter pluggable database SALES close instances=all;
    
  2. Using Data Guard broker (DGMGRL), I stop redo apply on my target standby (CDB26_AARHUS):
    edit database cdb26_aarhus set state='apply-off';
    
    • I turn off redo apply, so I can control when the target standby should claim the PDB data files.
    • I must ensure the source standby (CDB19_AARHUS) has released the PDB data files before doing the plug-in on the target standby (CDB26_AARHUS).

2. Upgrade On Primary

  1. This is my AutoUpgrade config file:

    global.global_log_dir=/home/oracle/autoupgrade/logs/SALES-CDB26
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/26
    upg1.sid=CDB19
    upg1.pdbs=SALES
    upg1.target_cdb=CDB26
    upg1.manage_standbys_clause=standbys=all
    
    • I specify the source and target Oracle homes.
    • sid contains the SID of my current CDB.
    • target_cdb specifies the SID of the container database I plug into.
    • pdbs is the PDB that I want to upgrade. I can specify additional PDBs in a comma-separated list.
    • I want to reuse the existing data files, so I omit target_pdb_copy_option.
    • To plug in with enabled recovery, I set manage_standbys_clause=standbys=all.
    • Check the appendix for additional parameters.
  2. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config SALES-CDB26.cfg -mode deploy
    
    • AutoUpgrade starts by analyzing the database for upgrade readiness and executes the pre-upgrade fixups.
    • Next, it creates a manifest file and unplugs from source CDB (CDB19_COPENHAGEN).
    • Then, it plugs into the target CDB with enabled recovery (CDB26_COPENHAGEN).
    • The plug-in doesn’t happen on the target standby (CDB26_AARHUS), because I stopped redo apply.
    • Finally, it upgrades the PDB (SALES).
  3. I wait until AutoUpgrade reaches the DBUPGRADE phase. I monitor progress:

    upg> lsj -a 30
    
    • The -a 30 option automatically refreshes the information every 30 seconds.
    • I can also use status -job 100 -a 30 to get detailed information about a specific job.
  4. When AutoUpgrade reaches the DBUPGRADE phase, the PDB has been unplugged from the source CDB (CDB19_COPENHAGEN) and plugged into the target CDB (CDB26_COPENHAGEN).

3. Check Standby

I must ensure the source standby (CDB19_AARHUS) has applied the redo, which unplugs the PDB.

  1. On the standby, I look in the source standby alert log:
    tail -100f $ORACLE_BASE/diag/rdbms/cdb19_aarhus/CDB19/trace/alert_CDB19.log
    
    
  2. I look for proof that SALES PDB has been unplugged:
    2026-03-23T07:50:44.450680+00:00
    SALES(3):Recovery deleting file #16:'/u02/oradata/CDB19_AARHUS/4DAD2FA764C642E4E0638338A8C0A72D/datafile/o1_mf_users_nw1vs832_.dbf' from controlfile.
    SALES(3):Recovery dropped tablespace 'USERS'
    SALES(3):Recovery dropped temporary tablespace 'TEMP'
    SALES(3):Recovery deleting file #15:'/u02/oradata/CDB19_AARHUS/4DAD2FA764C642E4E0638338A8C0A72D/datafile/o1_mf_undotbs1_nw1vrgby_.dbf' from controlfile.
    SALES(3):Recovery dropped tablespace 'UNDOTBS1'
    SALES(3):Recovery deleting file #14:'/u02/oradata/CDB19_AARHUS/4DAD2FA764C642E4E0638338A8C0A72D/datafile/o1_mf_sysaux_nw1vrgbx_.dbf' from controlfile.
    SALES(3):Recovery dropped tablespace 'SYSAUX'
    SALES(3):Recovery deleting file #13:'/u02/oradata/CDB19_AARHUS/4DAD2FA764C642E4E0638338A8C0A72D/datafile/o1_mf_system_nw1vrgbg_.dbf' from controlfile.
    SALES(3):Recovery dropped tablespace 'SYSTEM'
    SALES(3):Recovery dropped pluggable database 'SALES'
    
    • Notice the last line informing me that the SALES PDB has been unplugged from my source standby.
    • The redo from the ALTER PLUGGABLE DATABASE ... UNPLUG command on the primary has been applied on the standby.
    • All PDB data files are now consistent and available for the target standby.
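
This check is easy to script. Below is a minimal sketch that polls the alert log for the marker line shown above; the function name, the polling loop, and the interval are my own, and the log path and PDB name must match your environment:

```shell
# Poll an alert log until the marker that recovery dropped the PDB
# appears, i.e., the unplug redo has been applied on the standby.
# This is a sketch, not an AutoUpgrade feature.
wait_for_unplug() {
  log=$1; pdb=$2; interval=${3:-10}
  until grep -q "Recovery dropped pluggable database '$pdb'" "$log" 2>/dev/null
  do
    sleep "$interval"
  done
  echo "$pdb unplugged from source standby"
}
```

Called with the alert log path and the PDB name, the function blocks until the marker appears and then prints a confirmation.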

4. Standby Data Files

I must ensure the target standby (CDB26_AARHUS) can find the PDB data files.

The plug-in operation happens on the primary. AutoUpgrade uses a manifest file for the plug-in. The manifest file contains a list of all the data files on the primary, but there’s no information about the standby database.

How does the standby find the data files to use for the plug-in?

1. Regular File System

  1. The standby (CDB26_AARHUS) expects to find the data files in the same location as on the primary (CDB26_COPENHAGEN).
  2. During plug-in, I reused the data files. So, the primary data files are in the location of the source CDB (CDB19_COPENHAGEN):
    /u02/oradata/CDB19_COPENHAGEN/SALES/system01.dbf
    /u02/oradata/CDB19_COPENHAGEN/SALES/sysaux01.dbf
    /u02/oradata/CDB19_COPENHAGEN/SALES/undo01.dbf
    /u02/oradata/CDB19_COPENHAGEN/SALES/users01.dbf
    
  3. However, on the standby (CDB19_AARHUS), the data files are here:
    /u02/oradata/CDB19_AARHUS/SALES/system01.dbf
    /u02/oradata/CDB19_AARHUS/SALES/sysaux01.dbf
    /u02/oradata/CDB19_AARHUS/SALES/undo01.dbf
    /u02/oradata/CDB19_AARHUS/SALES/users01.dbf
    
  4. On the standby, I move the data files to the correct location:
    mv /u02/oradata/CDB19_AARHUS/SALES /u02/oradata/CDB26_AARHUS/SALES
    
  5. On the target standby (CDB26_AARHUS), I set DB_FILE_NAME_CONVERT to redirect the files to the new location:
    alter system set db_file_name_convert='/u02/oradata/CDB19_COPENHAGEN','/u02/oradata/CDB26_AARHUS' scope=both;
    
    • Notice how I translate the source primary location to the target standby location.
    • The plug-in operation only knows where the primary data files are and expects the standby to use the exact same location (which it doesn’t).
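
The substitution DB_FILE_NAME_CONVERT performs is a plain prefix replacement. Here is a minimal shell sketch of that mapping; the helper function is mine, and the paths are the ones from the demo:

```shell
# Illustrate the prefix substitution DB_FILE_NAME_CONVERT applies: a file
# name starting with the search prefix is rewritten to the replace prefix;
# anything else passes through unchanged. Paths are from the demo.
convert_name() {
  # $1 = file name, $2 = search prefix, $3 = replace prefix
  case $1 in
    "$2"*) printf '%s%s\n' "$3" "${1#"$2"}" ;;
    *)     printf '%s\n' "$1" ;;   # no match: name is unchanged
  esac
}

# Prints /u02/oradata/CDB26_AARHUS/SALES/system01.dbf
convert_name /u02/oradata/CDB19_COPENHAGEN/SALES/system01.dbf \
             /u02/oradata/CDB19_COPENHAGEN /u02/oradata/CDB26_AARHUS
```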

2. OMF In Regular File System

  1. The standby (CDB26_AARHUS) expects to find the data files in the OMF location (db_create_file_dest).

  2. On standby, I create the OMF location for the PDB in the target CDB (CDB26_AARHUS):

    mkdir -p /u02/oradata/CDB26_AARHUS/4DB38AE91A244341E0638338A8C0934A
    
    • 4DB38AE91A244341E0638338A8C0934A is the GUID of the PDB.
    • It doesn’t change when you move the PDB.
    • You can find the GUID of the PDB in the primary: select name, guid from v$containers;.
  3. I move the PDB data files from the source OMF location to the target OMF location:

    mv /u02/oradata/CDB19_AARHUS/4DB38AE91A244341E0638338A8C0934A/datafile \
       /u02/oradata/CDB26_AARHUS/4DB38AE91A244341E0638338A8C0934A/datafile
    

3. ASM

  1. The standby (CDB26_AARHUS) expects to find the data files in the OMF location (db_create_file_dest).

  2. On standby, I connect to the ASM instance:

    export ORACLE_SID=+ASM1
    sqlplus / as sysasm
    
  3. I create the OMF location for the PDB in the target CDB (CDB26_AARHUS):

    alter diskgroup data add directory '+DATA/CDB26_AARHUS/4DB38AE91A244341E0638338A8C0934A';
    alter diskgroup data add directory '+DATA/CDB26_AARHUS/4DB38AE91A244341E0638338A8C0934A/DATAFILE';
    
    • 4DB38AE91A244341E0638338A8C0934A is the GUID of the PDB.
    • It doesn’t change when you move the PDB.
    • You can find the GUID of the PDB in the primary: select name, guid from v$containers;.
  4. For each data file in my PDB (including undo, excluding temp files), I must create an ASM alias, for instance:

    alter diskgroup data add alias
    '+DATA/CDB26_AARHUS/4DB38AE91A244341E0638338A8C0934A/DATAFILE/users_273_1103046827'
    for '+DATA/CDB19_AARHUS/4DB38AE91A244341E0638338A8C0934A/DATAFILE/users.273.1103046827';
    
    • I must create an alias for each data file. I create the alias in the OMF location of the PDB in the target standby.
    • The alias must not contain dots (.); that would violate the OMF naming standard. Notice how I replaced them with underscores.
    • The alias points to the location of the data file in the source standby location.
    • You can find a script to create the aliases in MOS note KB106558.
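
Writing the alias statements by hand is error-prone. Here is a hedged sketch of a generator, not the script from the MOS note: the function name is mine, and the diskgroup name data is assumed as in the demo. The rule is simple: keep the file name, swap dots for underscores, and prepend the target OMF path.

```shell
# Generate one ALTER DISKGROUP ... ADD ALIAS statement per data file.
# The alias lives under the target standby's OMF path for the PDB and
# must not contain dots, so they are replaced with underscores.
# Sketch only; directory and file names must match your environment.
gen_asm_alias() {
  src_dir=$1   # source standby OMF path, e.g. +DATA/CDB19_AARHUS/<GUID>/DATAFILE
  dst_dir=$2   # target standby OMF path, e.g. +DATA/CDB26_AARHUS/<GUID>/DATAFILE
  file=$3      # ASM file name, e.g. users.273.1103046827
  alias=$(printf '%s' "$file" | tr '.' '_')
  printf "alter diskgroup data add alias '%s/%s' for '%s/%s';\n" \
         "$dst_dir" "$alias" "$src_dir" "$file"
}

gen_asm_alias '+DATA/CDB19_AARHUS/4DB38AE91A244341E0638338A8C0934A/DATAFILE' \
              '+DATA/CDB26_AARHUS/4DB38AE91A244341E0638338A8C0934A/DATAFILE' \
              'users.273.1103046827'
```

Run it once per data file (including undo, excluding temp files) and execute the generated statements in the ASM instance.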

5. Standby Redo Apply

At this point:

  • The PDB data files are consistent and unplugged from the source standby (CDB19_AARHUS).
  • I’ve ensured that the target standby (CDB26_AARHUS) can find the data files.

I can now re-enable redo apply in my target standby (CDB26_AARHUS).

  1. On the standby, I re-enable redo apply:

    edit database cdb26_aarhus set state='apply-on';
    
  2. I monitor the alert log (CDB26_AARHUS):

    cd $ORACLE_BASE/diag/rdbms/cdb26_aarhus/CDB26/trace
    tail -100f alert_CDB26.log
    
  3. I ensure that the target standby (CDB26_AARHUS) finds and plugs in the PDB data files:

    2026-03-23T07:52:26.550343+00:00
    PR00 (PID:26567): Media Recovery Log /u01/app/oracle/CDB26_AARHUS/archivelog/2026_03_23/o1_mf_1_30_nw1w78oo_.arc [krd.c:10255]
    Recovery created pluggable database SALES
    SALES(3):Recovery scanning directory +DATA/CDB26_AARHUS/4DB38AE91A244341E0638338A8C0934A/DATAFILE/ for any matching files
    SALES(3):Successfully added datafile 12 to media recovery
    SALES(3):Datafile #12: '+DATA/CDB26_AARHUS/4DB38AE91A244341E0638338A8C0934A/DATAFILE/o1_mf_system_nw1vrgbg_.dbf'
    
    • There should be one entry for each PDB data file.

6. Complete Upgrade

As the upgrade progresses, the redo is shipped to the target standby and applied.

  1. I wait until AutoUpgrade completes the upgrade:
    Job 100 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    Please check the summary report at:
    /home/oracle/autoupgrade/logs/SALES-CDB26/cfgtoollogs/upgrade/auto/status/status.html
    /home/oracle/autoupgrade/logs/SALES-CDB26/cfgtoollogs/upgrade/auto/status/status.log
    
    • This includes the post-upgrade checks and fixups.
  2. I review the AutoUpgrade Summary Report. The path is printed to the console:
    vi /home/oracle/autoupgrade/logs/SALES-CDB26/cfgtoollogs/upgrade/auto/status/status.log
    
  3. I take care of the post-upgrade tasks.
  4. AutoUpgrade unplugged the PDB from the source CDB.
    • On the primary, the PDB data files remain in the original location, in the directory structure of the source CDB.
    • Take care you don’t delete them by mistake; they are now used by the target CDB.
    • Optionally, move them into the correct location using online datafile move.
    • The same applies to the standby if I use ASM. For regular file system, I moved the PDB data files to the correct location.

That’s It!

With AutoUpgrade, you can easily upgrade a single PDB using an unplug-plug upgrade. For maximum protection and to minimize downtime, you can reuse the data files on both primary and standby.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Rollback Options

When you perform an unplug-plug upgrade, you can’t use Flashback Database as a rollback method. You need to rely on other methods, like:

  • RMAN backups.
  • Storage snapshots.
  • If you have multiple standbys, you can leave one behind.

Compatible

During plug-in, the PDB automatically inherits the compatible setting of the target CDB. You don’t have to raise the compatible setting manually.

Typically, the target CDB has a higher compatible and the PDB raises it on plug-in. This means you don’t have the option of downgrading.

If you want to preserve the option to downgrade, be sure to set the compatible parameter in the target CDB to the same value as in the source CDB.

Pre-plugin Backups

After doing an unplug-plug upgrade, you can restore a PDB using a combination of backups from before and after the plug-in operation. Backups from before the plug-in are called pre-plugin backups.

A restore using pre-plugin backups is more complicated; however, AutoUpgrade eases that by exporting the RMAN backup metadata automatically.

I suggest that you:

  • Start a backup immediately after the upgrade, so you don’t have to use pre-plugin backups.
  • Practice restoring with pre-plugin backups.

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade handles it transparently. You must manually recreate services in the target CDB using srvctl.

What If I Use Oracle Restart?

No changes. You must manually recreate services in the target CDB using srvctl.

What If My Database Is Encrypted?

AutoUpgrade fully supports upgrading an encrypted PDB.

You’ll need to input the source and target CDB keystore passwords into the AutoUpgrade keystore. You can find the details in a previous blog post.

In the container database, AutoUpgrade always adds the database encryption keys to the unified keystore. After the conversion, you can switch to an isolated keystore.

Before starting redo apply on the standby, you must copy the keystore from the primary to the standby. Check (KB106558) Reusing the Source Standby Database Files When Plugging a PDB into the Primary Database of a Data Guard Configuration for details.

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use.

  • timezone_upg: (default: Yes) AutoUpgrade upgrades the PDB time zone file after the actual upgrade. This might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner. In multitenant, a PDB can use a different time zone file than the CDB.

  • target_pdb_name: AutoUpgrade renames the PDB. I must specify the original PDB name (SALES) as a suffix to the parameter:

    upg1.target_pdb_name.SALES=NEWSALES
    

    If I have multiple PDBs, I can specify target_pdb_name multiple times:

    upg1.pdbs=SALES,OLDNAME1,OLDNAME2
    upg1.target_pdb_name.SALES=NEWSALES
    upg1.target_pdb_name.OLDNAME1=NEWNAME1
    upg1.target_pdb_name.OLDNAME2=NEWNAME2
    
  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.

  • Check the documentation for the full list.

Upgrade Oracle Database 19c PDB to 26ai with Data Guard and Restoring Data Files (Deferred Recovery)

Let me show you how to upgrade a single PDB to Oracle AI Database 26ai.

This is called an unplug-plug upgrade and is much faster than a full CDB upgrade.

But what about Data Guard? I use deferred recovery, meaning I must restore the PDB after the upgrade.

Let’s see how it works.

1. Upgrade On Primary

I’ve already prepared my database and installed a new Oracle home. I’ve also created a new container database or decided to use an existing one. The new container database is configured for Data Guard.

The maintenance window has started, and users have left the database.

  1. This is my AutoUpgrade config file:

    global.global_log_dir=/home/oracle/autoupgrade/logs/SALES-CDB26
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/26
    upg1.sid=CDB19
    upg1.pdbs=SALES
    upg1.target_cdb=CDB26
    
    • I specify the source and target Oracle homes. I’ve already installed the target Oracle home.
    • sid contains the SID of my current CDB.
    • target_cdb specifies the SID of the container database I plug into.
    • pdbs is the PDB that I want to upgrade. I can specify additional PDBs in a comma-separated list.
    • I want to reuse the existing data files, so I omit target_pdb_copy_option.
    • To plug in with deferred recovery, I can either omit manage_standbys_clause or set manage_standbys_clause=none.
    • Check the appendix for additional parameters.
  2. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config SALES-CDB26.cfg -mode deploy
    
    • AutoUpgrade starts by analyzing the database for upgrade readiness and executes the pre-upgrade fixups.
    • Next, it creates a manifest file and unplugs from source CDB.
    • Then, it plugs into the target CDB with deferred recovery. At this point, the standby is not protecting the new PDB.
    • Finally, it upgrades the PDB.
  3. While the job progresses, I monitor it:

    upg> lsj -a 30
    
    • The -a 30 option automatically refreshes the information every 30 seconds.
    • I can also use status -job 100 -a 30 to get detailed information about a specific job.
  4. In the end, AutoUpgrade completes the upgrade:

    Job 100 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    The following PDB(s) were created with standbys=none option. Refer to the postcheck result CDB19_COPENHAGEN_postupgrade.log for more details on manual actions needed.
    SALES
    
    Please check the summary report at:
    /home/oracle/autoupgrade/logs/SALES-CDB26/cfgtoollogs/upgrade/auto/status/status.html
    /home/oracle/autoupgrade/logs/SALES-CDB26/cfgtoollogs/upgrade/auto/status/status.log
    
    • This includes the post-upgrade checks and fixups.
    • AutoUpgrade informs me that it plugged in with deferred recovery. To protect the PDB on the standby, I must restore it.
  5. I review the AutoUpgrade Summary Report. The path is printed to the console:

    vi /home/oracle/autoupgrade/logs/SALES-CDB26/cfgtoollogs/upgrade/auto/status/status.log
    
  6. I take care of the post-upgrade tasks.

  7. AutoUpgrade drops the PDB from the source CDB.

    • The data files used by the target CDB are in the original location, in the directory structure of the source CDB.
    • Take care you don’t delete them by mistake; they are now used by the target CDB.
    • Optionally, move them into the correct location using online datafile move.

2. Restore PDB On Standby

  • I execute all commands on the same host – the standby system.

  • On the standby, I verify that recovery status is disabled:

    SQL> select open_mode, recovery_status
         from v$pdbs where name='SALES';
    
    OPEN_MODE    RECOVERY_STATUS
    ____________ __________________
    MOUNTED      DISABLED
    
    • This means that I plugged in with deferred recovery.
    • The standby is not protecting this PDB.
  • Next, I connect to the standby using RMAN, and I restore the PDB:

    connect target /
    run{
       allocate channel disk1 device type disk;
       allocate channel disk2 device type disk;
    
       set newname for pluggable database SALES to new;
    
       restore pluggable database SALES
          from service <primary_service>
          section size 64G;
    }
    
    • You can add more channels depending on your hardware.
    • Replace SALES with the name of your PDB.
    • <primary_service> is a connect string to the primary database.
  • Next, I connect to the standby database using Data Guard broker and turn off redo apply:

    edit database <stdby_unique_name> set state='apply-off';
    
  • Back in RMAN, still connected to the standby, I switch to the newly restored data files:

    switch pluggable database SALES to copy;
    
  • Then, still on the standby, I connect with SQL*Plus and generate a list of commands that will online all the data files:

    alter session set container=SALES;
    select 'alter database datafile '||''''||name||''''||' online;' from v$datafile;
    
    • Save the commands for later.
    • There should be one row for each data file.
  • If my standby is open (Active Data Guard), I must restart it in MOUNT mode:

    alter session set container=CDB$ROOT;
    shutdown immediate
    startup mount
    
  • Now, I can re-enable recovery and online the data files:

    alter session set container=SALES;
    alter pluggable database enable recovery;
    alter database datafile <file1> online;
    alter database datafile <file2> online;
    ...
    alter database datafile <filen> online;
    
    • I must connect to the PDB.
    • I must execute the alter database datafile ... online command for each data file.
  • I turn on redo apply:

    edit database <stdby_unique_name> set state='apply-on';
    
  • At this point, the standby protects my PDB.

  • After a minute or two, I check the Data Guard config:

    validate database <stdby_unique_name>;
    
  • Once my standby is in sync, I can do a switchover as the ultimate test:

    switchover to <stdby_unique_name>;
    
  • Now, I connect to the new primary and ensure the PDB opens in read write mode and unrestricted:

    select open_mode, restricted
    from v$pdbs
    where name='SALES';
    
    OPEN_MODE     RESTRICTED
    _____________ _____________
    READ WRITE    NO
    
  • You can find the full procedure in Making Use Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (KB90519). Check sections Steps for Preparing to enable recovery of the PDB and Steps required for enabling recovery on the PDB after the files have been copied.
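
As an alternative to building the online commands with the SQL query shown earlier, you can generate them outside the database from a saved list of file names. A minimal sketch; the function name and the one-file-name-per-line input format are my own:

```shell
# Emit one "alter database datafile ... online" statement per line of an
# input file containing data file names, one per line.
# Sketch only; feed it the NAME column spooled from v$datafile.
gen_online_cmds() {
  while IFS= read -r df; do
    [ -n "$df" ] || continue   # skip blank lines
    printf "alter database datafile '%s' online;\n" "$df"
  done < "$1"
}
```

Spool the data file names to a text file, run the function against it, and execute the output inside the PDB after enabling recovery.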

That’s It!

With AutoUpgrade, you can easily upgrade a single PDB using an unplug-plug upgrade. The easiest way to handle the standby is to restore the pluggable database after the upgrade.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Rollback Options

When you perform an unplug-plug upgrade, you can’t use Flashback Database as a rollback method. You need to rely on other methods, like:

  • Copy data files by using target_pdb_copy_option.
  • RMAN backups.
  • Storage snapshots.

In this blog post, I reuse the data files, so I must have an alternate rollback plan.

Copy or Re-use Data Files

When you perform an unplug-plug upgrade, you must decide what to do with the data files.

  • Plug-in time. Copy: slow, needs to copy the data files. Re-use: fast, re-uses the existing data files.
  • Disk space. Copy: needs room for a full copy of the data files. Re-use: no extra disk space.
  • Rollback. Copy: source data files are left untouched; re-open the PDB in the source CDB. Re-use: the PDB in the source CDB is unusable; rely on another rollback method.
  • AutoUpgrade. Copy: enable using target_pdb_copy_option. Re-use: the default; omit the parameter target_pdb_copy_option.
  • Location after plug-in. Copy: the data files are copied to the desired location. Re-use: the data files are left in the source CDB location; don’t delete them by mistake, and consider moving them using online data file move.
  • Syntax used. Copy: CREATE PLUGGABLE DATABASE ... COPY. Re-use: CREATE PLUGGABLE DATABASE ... NOCOPY.

target_pdb_copy_option

Use this parameter only when you want to copy the data files.

  • target_pdb_copy_option=file_name_convert=('/u02/oradata/SALES', '/u02/oradata/NEWSALES','search-1','replace-1') When you have data files in a regular file system. The value is a list of pairs of search/replace strings.
  • target_pdb_copy_option=file_name_convert=none When you want data files in ASM or use OMF in a regular file system. The database automatically generates new file names and puts data files in the right location.
  • target_pdb_copy_option=file_name_convert=('+DATA/SALES', '+DATA/NEWSALES') This is a very rare configuration. Only when you have data files in ASM, but don’t use OMF. If you have ASM, I strongly recommend using OMF.

You can find further explanation in the documentation.
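
Conceptually, the search/replace pairs are worked through in order, and the first pair whose search string matches is used (a simplification of the database's behavior). A small shell sketch of that pairwise matching, using the example paths above; the function is my own illustration, not Oracle code:

```shell
# Apply ordered search/replace pairs to a file name, mimicking the way
# file_name_convert maps a source name to a target name: pairs are tried
# in order and the first matching prefix wins (a simplification).
convert_pairs() {
  name=$1; shift
  while [ $# -ge 2 ]; do
    search=$1; replace=$2; shift 2
    case $name in
      "$search"*)
        printf '%s%s\n' "$replace" "${name#"$search"}"
        return 0 ;;
    esac
  done
  printf '%s\n' "$name"   # no pair matched: the name is unchanged
}

# Maps to /u02/oradata/NEWSALES/users01.dbf
convert_pairs /u02/oradata/SALES/users01.dbf \
  /u02/oradata/SALES /u02/oradata/NEWSALES
```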

Move

The CREATE PLUGGABLE DATABASE statement also has a MOVE clause in addition to COPY and NOCOPY. In a regular file system, the MOVE clause works as you would expect. However, in ASM, it is implemented via copy-and-delete, so you might as well use the COPY option.

AutoUpgrade doesn’t support the MOVE clause.

Compatible

During plug-in, the PDB automatically inherits the compatible setting of the target CDB. You don’t have to raise the compatible setting manually.

Typically, the target CDB has a higher compatible and the PDB raises it on plug-in. This means you don’t have the option of downgrading.

If you want to preserve the option of downgrading, be sure to set the compatible parameter in the target CDB to the same value as the source CDB.

Pre-plugin Backups

After doing an unplug-plug upgrade, you can restore a PDB using a combination of backups from before and after the plug-in operation. Backups from before the plug-in are called pre-plugin backups.

A restore using pre-plugin backups is more complicated; however, AutoUpgrade eases that by exporting the RMAN backup metadata automatically.

I suggest that you:

  • Start a backup immediately after the upgrade, so you don’t have to use pre-plugin backups.
  • Practice restoring with pre-plugin backups.

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade handles it transparently. You must manually recreate services in the target CDB using srvctl.

What If I Use Oracle Restart?

No changes. You must manually recreate services in the target CDB using srvctl.

What If My Database Is Encrypted?

AutoUpgrade fully supports upgrading an encrypted PDB.

You’ll need to input the source and target CDB keystore passwords into the AutoUpgrade keystore. You can find the details in a previous blog post.

In the container database, AutoUpgrade always adds the database encryption keys to the unified keystore. After the conversion, you can switch to an isolated keystore.

Before restoring the pluggable database on the standby, you must copy the keystore from the primary to the standby.

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use.

  • timezone_upg: AutoUpgrade upgrades the PDB time zone file after the actual upgrade. This might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner. In multitenant, a PDB can use a different time zone file than the CDB.

  • target_pdb_name: AutoUpgrade renames the PDB. I must specify the original PDB name (SALES) as a suffix to the parameter:

    upg1.target_pdb_name.SALES=NEWSALES
    

    If I have multiple PDBs, I can specify target_pdb_name multiple times:

    upg1.pdbs=SALES,OLDNAME1,OLDNAME2
    upg1.target_pdb_name.SALES=NEWSALES
    upg1.target_pdb_name.OLDNAME1=NEWNAME1
    upg1.target_pdb_name.OLDNAME2=NEWNAME2
    
  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.

  • Check the documentation for the full list.

Multiple Standby Databases

You must restore the PDB to each standby database.

If you have multiple standby databases, it’s a lot of work for the primary to handle the restore pluggable database command from all standbys.

Imagine we have three standby databases. Here’s an alternative approach:

  • First, standby 1 restores from the primary.
  • Next, standby 2 restores from standby 1.
  • At the same time, standby 3 restores from the primary.
  • This spreads the load throughout your databases, allowing you to complete faster.

AutoUpgrade New Features: Control Start Time When Using Refreshable Clone PDBs

When you migrate or upgrade with refreshable clone PDBs, you sometimes want to decide when the final refresh happens. Perhaps you must finish certain activities in the source database before moving on.

I’ve discussed this in a previous post, but now there’s a better way.

The Final Refresh Dilemma

In AutoUpgrade, the final refresh happens at the time specified by the config file parameter start_time. This is the cut-over time after which no further changes to the source database are replicated to the target database.

Overview of the phases when using refreshable clone PDBs

You specify start_time in the config file, and then you start the job. Typically, you start it a long time before start_time to allow the creation of the new PDB.

You set start_time to the moment you believe the final refresh should happen. But things might change in your maintenance window. Perhaps it takes a little longer to shut down your application, or there’s a very important batch job that must finish. Or perhaps you can start even earlier.

In that case, a fixed start time is not very flexible.

The Solution

You can use the proceed command in the AutoUpgrade console to adjust the start time, i.e., the final refresh.

  1. Get the latest version of AutoUpgrade:

    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Start the job in deploy mode as you normally would:

    java -jar autoupgrade.jar ... -mode deploy
    
    • AutoUpgrade now starts the CLONEPDB stage and begins to copy the database.
  3. Wait until the job reaches the REFRESHPDB stage:

    +----+-------+----------+---------+-------+----------+-------+--------------------+
    |Job#|DB_NAME|     STAGE|OPERATION| STATUS|START_TIME|UPDATED|             MESSAGE|
    +----+-------+----------+---------+-------+----------+-------+--------------------+
    | 100|  CDB19|REFRESHPDB|EXECUTING|RUNNING|  14:10:29| 4s ago|Starts in 54 minutes|
    +----+-------+----------+---------+-------+----------+-------+--------------------+
    Total jobs 1
    
    • In this stage, AutoUpgrade is waiting for start_time to continue the migration. It refreshes the PDB with redo from the source at the specified refresh interval.
    • I must start well before the maintenance window, so AutoUpgrade has enough time to copy the database.
  4. You can now change the start time. If you want to perform the final refresh and continue immediately, use the proceed command:

    proceed -job 100
    

    Or, you can change the start time:

    proceed -job 100 -newStartTime 29/03/2025 02:00:00
    

    Or, you can change the start time to a relative value, for example 1 hour 30 minutes from now:

    proceed -job 100 -newStartTime +1h30m
    
  5. After the final refresh, AutoUpgrade disconnects the refreshable clone PDB, turns it into a regular PDB, and moves on with the job.
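
If you prefer the absolute form of -newStartTime, you can compute the timestamp on the shell. A sketch assuming GNU date (the -d option; BSD/macOS date behaves differently), with the DD/MM/YYYY HH:MM:SS format from the example above:

```shell
# Compute an absolute timestamp N minutes from now in the
# DD/MM/YYYY HH:MM:SS format shown for -newStartTime above.
# Requires GNU date (-d relative dates); BSD/macOS date differs.
new_start_time() {
  date -d "+$1 minutes" '+%d/%m/%Y %H:%M:%S'
}

new_start_time 90
```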

Wrapping Up

AutoUpgrade offers complete control over the process. You define a start time upfront, but as things change, you can adjust it in flight.

Refreshable clone PDBs are a fantastic method for non-CDB to PDB migrations and for upgrades of individual PDBs.

There are a few quirks to be aware of, and if you are using Data Guard, bear in mind that you can only plug in with deferred recovery. Other than that, there’s nothing left to say but…

Happy migrating, happy upgrading!

Further Reading

How to Upgrade a Single PDB

AutoUpgrade now supports unplug-plug upgrades. You unplug a PDB from a lower-release CDB and plug it into a higher-release CDB. After plug-in, the PDB is upgraded, and eventually it can be opened in normal, READ WRITE mode.

Concept of unplug-plug upgrades which are supported with AutoUpgrade version 21.1.1

When it comes to upgrading in the multitenant world, I am a big fan of unplug-plug upgrades. The concept comes with a number of benefits:

  • It is much faster to upgrade an individual PDB using unplug-plug compared to a CDB with just one PDB in it. In an unplug-plug upgrade, the database only needs to upgrade the PDB. Compare that to a CDB upgrade, which first upgrades CDB$ROOT, then PDB$SEED and any user PDBs.
  • You don’t have to arrange downtime for all the PDBs in the CDB. Downtime is just needed for the PDB that you will upgrade.
  • Combine it with refreshable PDBs and you can still have a really good fallback option. You can check out a previous blog post to see how you can use refreshable PDBs.
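As a sketch of the refreshable PDB technique, you create the clone on the higher release target CDB over a database link and let it refresh periodically until you are ready to upgrade. The link name and interval below are assumptions for illustration:

```sql
-- On the target CDB; clone_link is a hypothetical database link
-- pointing at the source CDB
create pluggable database HR_CLONE from HR@clone_link
  refresh mode every 10 minutes;
```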

AutoUpgrade and Unplug-plug Upgrade

Starting with version 21, AutoUpgrade can perform unplug-plug upgrades. A newer version of AutoUpgrade can upgrade to older database releases as well, so don't worry if the AutoUpgrade version doesn't match the Oracle Database release that you are upgrading to.

There are some requirements that must be met in order to perform an unplug-plug upgrade, so I suggest that you take a look at the documentation.

You have to create the target CDB yourself. It is by design that AutoUpgrade doesn't do this for you. First, creating a CDB requires a lot of information, and it can be done in many different ways (ASM? Which components? RAC?). You would need a very long config file to supply all that information. Also, it takes time to create a CDB, and if AutoUpgrade had to do that inside the maintenance window, the window would be prolonged considerably.

During unplug-plug upgrades, AutoUpgrade also allows you to change the names of the PDBs, and you can decide whether you want to reuse the unplugged data files or take a copy.
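Renaming is controlled per PDB with the target_pdb_name parameter, and copying instead of reusing the data files can be requested with target_pdb_copy_option. A minimal sketch of the two config entries (the paths are assumptions for illustration):

```
upg1.target_pdb_name.hr=people
upg1.target_pdb_copy_option.hr=file_name_convert=('/u02/oradata/CDB1', '/u02/oradata/CDB2')
```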

How to

Imagine the following AutoUpgrade config file:

upg1.sid=CDB1
upg1.target_cdb=CDB2
upg1.pdbs=hr,logistics
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.target_pdb_name.hr=people

AutoUpgrade will unplug PDBs hr and logistics from CDB1 and plug them into CDB2. In addition, it will rename hr to people when it is plugged into CDB2. Finally, you must specify the Oracle homes of the two CDBs, so AutoUpgrade can set the environment correctly and connect to the databases.
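With the config file in place, you start AutoUpgrade the usual way. A sketch, assuming the config file is saved as CDB1.cfg:

```
# Run the preupgrade checks first, then perform the unplug-plug upgrade
java -jar autoupgrade.jar -config CDB1.cfg -mode analyze
java -jar autoupgrade.jar -config CDB1.cfg -mode deploy
```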

If you use the lsj command to monitor the progress, it actually looks like you are only upgrading one of the PDBs:

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+------------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME| UPDATED|           MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+------------------+
| 100|   CDB1|DBUPGRADE|EXECUTING|RUNNING|20/12/22 15:25|15:29:03|13%Upgraded PEOPLE|
+----+-------+---------+---------+-------+--------------+--------+------------------+
Total jobs 1

But if you look into the details with status -job 100, you can see that both PDBs are upgraded in parallel:

upg> status -job 100

... (removed a lot of information)

Details:
[Upgrading] is [0%] completed for [cdb1-people] 
                 +---------+-------------+
                 |CONTAINER|   PERCENTAGE|
                 +---------+-------------+
                 |   PEOPLE|UPGRADE [13%]|
                 |LOGISTICS|UPGRADE [13%]|
                 +---------+-------------+

When the upgrade completes, the PDBs are ready to be used. I suggest that you verify that the PDBs are open in READ WRITE mode and not in restricted mode. Finally, save the state, so the PDBs start automatically together with the CDB:

SQL> select name, open_mode, restricted from v$pdbs where name in ('PEOPLE', 'LOGISTICS');
SQL> --Verify open_mode=read write and restricted=no
SQL> alter pluggable database people save state;
SQL> alter pluggable database logistics save state;

Caution

With unplug-plug upgrades, you can't use Flashback Database as your fallback plan. It doesn't work across the plug-in operation. You either have to:

  • Instruct AutoUpgrade to copy the unplugged data files before it plugs into the higher release CDB. That way, you still have the old unplugged data files and can simply re-create the PDB in the lower release CDB. But you will have extra downtime because you need to copy the data files.
  • Use Refreshable PDBs to build a copy of your PDB in the higher release, target CDB. When you want to do the upgrade, perform the last refresh and upgrade the refreshable PDB.

Both of the above options require additional disk space to hold a copy of the database.

Of course, you can also use your regular backups as fallback.
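For instance, you can take a final PDB-level backup right before the upgrade. A sketch in RMAN, connected to the source CDB (the PDB names come from the example config file):

```
RMAN> backup pluggable database hr, logistics;
```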

What If

Your Target CDB Has a Standby Database?

For now, don’t use AutoUpgrade to perform unplug-plug upgrades if the target CDB has standby databases. A plug-in operation with a standby database is a tricky maneuver, at least when you want to re-use the data files. We are still trying to figure out how to implement it in AutoUpgrade.

Having said that, it is absolutely doable. You can read more about it in the following MOS notes:

You Are Using TDE Tablespace Encryption?

For now, don’t use AutoUpgrade to perform unplug-plug upgrades if any tablespace in the PDB is encrypted with TDE Tablespace Encryption. We are working on making AutoUpgrade better at interacting with the TDE keystore, so keep an eye out for coming versions.

If TDE Tablespace Encryption is enabled in the target CDB, you can still use AutoUpgrade. The PDB will be plugged in as an unencrypted PDB.
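To check whether a PDB is affected before you decide, you can query DBA_TABLESPACES from inside the PDB. A minimal sketch (the container name is an assumption):

```sql
-- Run inside the PDB; lists any TDE-encrypted tablespaces
alter session set container = HR;
select tablespace_name, encrypted
from   dba_tablespaces
where  encrypted = 'YES';
```

If the query returns no rows, the PDB has no encrypted tablespaces and the restriction above does not apply.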

Conclusion

Doing unplug-plug upgrades is now supported by AutoUpgrade as of version 21. It includes useful features for renaming PDBs and using copies of unplugged data files.

There is a video on YouTube that shows the procedure. And while you are there, I suggest that you subscribe to our channel.

Further Reading