How To Downgrade a PDB from Oracle Database 23ai

When talking about upgrades, Oracle Database has a great fallback mechanism: a downgrade. Even after going live on Oracle Database 23ai, you can get back to Oracle Database 19c or 21c – with no data loss.

How to Downgrade From Oracle Database 23ai

My PDB, PDB1, has already been upgraded to Oracle Database 23ai. Now, I want to downgrade to Oracle Database 19c. In the downgrade process, I will unplug it from CDB23 and plug it into CDB19.

  1. I open the PDB in downgrade mode:
    alter pluggable database PDB1 close immediate;
    alter pluggable database PDB1 open downgrade;
    
    • Downgrade mode is a special mode – similar to upgrade mode. It enables exclusive access to the database and disables a lot of features.
  2. I set the environment to CDB23 and start the downgrade process:
    cd $ORACLE_HOME/bin
    ./dbdowngrade -c 'PDB1'
    
    • The -c command line parameter starts a downgrade of a specific PDB, not the entire CDB.
  3. After the downgrade, I close and unplug from CDB23:
    alter pluggable database PDB1 close;
    alter pluggable database PDB1 unplug into '/home/oracle/scripts/pdb1.xml';
    
  4. Now, I connect to CDB19 running on Oracle Database 19c. I plug in and open the PDB in upgrade mode:
    create pluggable database PDB1 using '/home/oracle/scripts/pdb1.xml';
    alter pluggable database PDB1 open upgrade;
    
  5. I switch to PDB1 and complete the downgrade by running the catrelod.sql script:
    alter session set container=PDB1;
    @$ORACLE_HOME/rdbms/admin/catrelod.sql
    
  6. Then, I recompile all invalid objects:
    @$ORACLE_HOME/rdbms/admin/utlrp.sql
    
  7. I restart the PDB and open it in normal mode:
    alter pluggable database PDB1 close;
    alter pluggable database PDB1 open;
    
  8. I gather new dictionary statistics:
    exec dbms_stats.gather_dictionary_stats;
    
    • After a while, when the database is warmed up, I also gather fixed objects statistics (see the example after this list).
  9. I verify that all components are VALID or OPTION OFF:
    select comp_id, version, status from dba_registry;
    
  10. I run Datapatch to ensure all SQL patches are properly applied:
    $ORACLE_HOME/OPatch/datapatch
    
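As a follow-up to step 8, gathering fixed objects statistics is a single call once the database has seen a representative workload – a minimal sketch, run as SYSDBA in the PDB:

exec dbms_stats.gather_fixed_objects_stats;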

That’s it!

Worth Knowing About Downgrades

  • The downgrade is a two-step process.

    • The first part happens while the database is running in the new Oracle Home. Start up in the special downgrade mode and execute catdwgrd.sql to start the downgrade.
    • Next, restart the database in the old Oracle Home in upgrade mode. Running catrelod.sql re-installs any missing objects in the database and finishes the downgrade.
  • Oracle recommends that you install the latest Release Update in both Oracle homes; the one that you downgrade from, and the one to which you downgrade.

  • You can only downgrade if the COMPATIBLE parameter hasn’t been raised after the upgrade (see the queries after this list).

  • If the timezone file was upgraded, the same timezone file must be present in the old Oracle Home.

  • Before you start the downgrade, there’s no need to roll off any patches with Datapatch. The downgrade mechanism takes care of that.

  • The data dictionary in a downgraded database is not identical to the pre-upgraded database. The data dictionary will be different, but compatible. Here are some examples:

    • Generally, dropping objects is avoided.
    • New tables are most likely not dropped but truncated.
    • New indexes are most likely kept.
  • Although you can downgrade a database from Oracle Database 23ai to 19c, you can’t undo the multitenant migration. To get back to a non-CDB you must use other means like Data Pump, transportable tablespaces, or GoldenGate.
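
A quick way to check the COMPATIBLE and timezone file prerequisites mentioned above – two simple sanity queries, nothing downgrade-specific:

select name, value from v$parameter where name = 'compatible';
select version from v$timezone_file;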

Want to Try?

Hopefully, you never need to downgrade a PDB. But I bet you can’t resist the urge to try it. Right? RIGHT?

In our hands-on lab, Hitchhiker’s Guide for Upgrading to Oracle Database 23ai, there is a downgrade exercise. Give it a try. The lab is free to use and doesn’t require any installation – it runs completely inside a browser.

Happy downgrading!

AutoUpgrade New Features: Custom Oracle Home Name

When you create a new Oracle home using AutoUpgrade, you can now give it a custom name.

What do I mean by a custom name? Let’s examine the Oracle Inventory:

cat /u01/app/oraInventory/ContentsXML/inventory.xml

<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2025, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.7.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDB19Home1" LOC="/u01/app/oracle/product/19" TYPE="O" IDX="1"/>
<HOME NAME="OraDB21Home1" LOC="/u01/app/oracle/product/21" TYPE="O" IDX="2"/>
<HOME NAME="OraDB23Home1" LOC="/u01/app/oracle/product/23" TYPE="O" IDX="3"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>

The Oracle home name is listed in the NAME attribute on the HOME element.

When you install a new Oracle home, the installer automatically generates a name for you, like OraDB19Home1.

In AutoUpgrade, you can decide to use a custom name instead of an auto-generated one.

How To Specify A Custom Oracle Home Name

  • I use the latest version of AutoUpgrade:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  • I create the following config file:
    global.global_log_dir=/home/oracle/autoupgrade-patching/log
    global.keystore=/home/oracle/autoupgrade-patching/keystore
    patch1.source_home=/u01/app/oracle/product/19
    patch1.target_home=/u01/app/oracle/product/19_28
    patch1.home_settings.home_name=DBHOME1928
    patch1.sid=FTEX
    patch1.folder=/home/oracle/patch-repo
    patch1.patch=RU,OCW,DPBP,OJVM,OPATCH
    
    • Notice the home_settings.home_name parameter. This is where I specify the custom name for the new Oracle home.
  • I patch the database using AutoUpgrade:
    java -jar autoupgrade.jar -config FTEX.cfg -mode deploy
    
  • I check the inventory:
    grep -i "19_28" /u01/app/oraInventory/ContentsXML/inventory.xml
    <HOME NAME="DBHOME1928" LOC="/u01/app/oracle/product/19_28" TYPE="O" IDX="4"/>
    
    • AutoUpgrade created the new Oracle home with the name DBHOME1928.

What Do I Use It For?

For a regular Oracle home, the name is seldom used. I’ve seen some customers use certain Oracle home names as part of their corporate standard, but most people don’t care.

However, for a read-only Oracle home, the name is important. It becomes part of the Oracle Base Home, which is the location for most of the writable files from the Oracle home.
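
If you want to see where the Oracle Base Home points on your system, the orabasehome utility in the Oracle home prints it – a minimal sketch; the example path is illustrative only:

# Prints the Oracle Base Home for this Oracle home.
# For a read-only home, the path includes the Oracle home name,
# for example /u01/app/oracle/homes/DBHOME1928.
$ORACLE_HOME/bin/orabasehome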

Do you have a specific use case for the Oracle home name or do you have a corporate standard mandating a certain name? Leave a comment and let me know.

Happy patching!

When A Refreshable Clone Takes Over The Service

Following my advice, a customer migrated a database to multitenant using a refreshable clone PDB. The CDB was on the same host as the non-CDB.

When we prepared the migration and created the refreshable clone, the users could no longer connect to the source database.

The users were getting ORA-01109: database not open when connecting to the source database.

But this was before the final refresh and before the migration was supposed to happen.

Why did the refreshable clone PDB interfere with operations in the source database?

The Details

The customer has a non-CDB called SALES, which they wanted to migrate into CDB1. They wanted the PDB to keep the original name, SALES.

Any database registers a default service at the listener. The name of the default service is the same as the name of the database.

In this case, the non-CDB registered the sales service:

$ lsnrctl status

Service "sales" has 1 instance(s).
  Instance "SALES", status READY, has 1 handler(s) for this service...

The customer used AutoUpgrade for the migration. When preparing for the migration, they started AutoUpgrade in deploy mode, and AutoUpgrade created the refreshable clone.

Bear in mind that these steps are preparations only. The switch to the PDB should happen at a later time.

Look what happens to the listener:

$ lsnrctl status

Service "sales" has 2 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
  Instance "SALES", status READY, has 1 handler(s) for this service...

The CDB also registers a sales service. It does so because of the refreshable clone PDB with the same name.

The users were connecting to the default service, sales. The listener handed off the connections to the CDB, not the non-CDB.

Since the PDB was still a refreshable clone PDB, it was not open, and users received ORA-01109: database not open.

Besides that, the refreshing process didn’t work either. The refresh of the PDB happens over a database link to the source database. Guess what happened?

2025-05-09T07:25:58.854934+00:00
Errors in file /u01/app/oracle/diag/rdbms/cdb19/CDB19/trace/CDB19_ora_61633.trc:
ORA-17627: ORA-01109: database not open
ORA-17629: Cannot connect to the remote database server
ORA-17627 signalled during: ALTER PLUGGABLE DATABASE FTEX REFRESH...

Yes, it couldn’t connect to the source database either. It ended up trying to connect to itself.

Optimal Solution

The real problem is the connections to the default service, sales – the service with the same name as the database.

This service is not meant for general use. You should create your own service and have your application connect through that.

Why is using default services a bad idea?

  • You can’t customize the default service.
  • The default service is for administrative use only.
  • You easily end up with collisions like this one. This can also happen with two PDBs in different CDBs on the same host.
  • If you rename the database, you also rename the default service and have to update all connection strings.

Why are custom services a good idea?

  • Custom services allow you to set many attributes. While this might not be important for a single-instance database, it is essential for Data Guard and RAC.
  • When you clone a database, a custom service doesn’t follow with it. You have to create the services in the clone when and if it is appropriate.

You can create custom services using DBMS_SERVICE or srvctl. You can find more about that in a previous blog post.
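
For a single-instance database without Grid Infrastructure, a minimal sketch using DBMS_SERVICE could look like this (the service name sales_app is an example, not the customer's):

-- Run in the database (or PDB) that should offer the service
begin
   dbms_service.create_service(
      service_name => 'sales_app',
      network_name => 'sales_app');
   dbms_service.start_service('sales_app');
end;
/
-- Then point the application's connect string at SERVICE_NAME=sales_app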

Other Solutions

Other feasible solutions exist, but none of them address the real issue, which I believe is the use of the default service.

  • Rename the PDB so it creates a default service with a different name. After the migration, you can rename it back.
  • Create a static listener entry that forces the listener to route connections to the non-CDB. However, static listener entries are really not nice, and you should use dynamic registration whenever possible.
  • Create a second listener for the CDB. That’s just cumbersome.

Recommendation

  • Keep using refreshable clone PDBs for migrations. It’s a great way to migrate, patch, or upgrade databases.

  • Always create your own custom service. Don’t use the default service.

Connor McDonald Is Wrong!

In a recent blog post, Connor McDonald was calling out LinkedIn clickbait.

Someone made incorrect claims about Oracle Database on LinkedIn to draw attention. According to sources (himself), Connor tried to resist the urge to jump in but failed. We all know that Resistance is futile.

Connor wrote a good blog post showing that the attention-seeking author of the LinkedIn post is wrong!

Upgrades Are Risky and Complex

One of the false claims was that:

Upgrades are risky and complex. Want to upgrade? Prepare for a long, nerve-wracking process where something will break.

Connor replied:

Upgrades are now editing a 10 line configuration file and running autoupgrade. One line command and you’re good to go.

Here’s the config file Connor used:

global.autoupg_log_dir=/default/current/location
upg1.dbname=employee
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.1.0/dbhome_1
upg1.sid=emp
upg1.log_dir=/scratch/auto
upg1.upgrade_node=node1
upg1.target_version=19.1

But you can make it even simpler.

  • autoupg_log_dir defaults to $ORACLE_BASE/cfgtoollogs/autoupgrade.
  • dbname is not used anymore.
  • start_time=now is the default.
  • log_dir is not needed when you use global.autoupg_log_dir.
  • upgrade_node is rarely needed. Only when you have one config file that is used on multiple hosts.
  • target_version is only needed when the target Oracle home doesn’t exist yet.

You could do the same with this simplified config file:

upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.1.0/dbhome_1
upg1.sid=emp

You don’t like config files? OK – use environment variables instead. Here’s how to upgrade completely without a config file:

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export TARGET_ORACLE_HOME=/u01/app/oracle/product/19.1.0/dbhome_1
export ORACLE_SID=emp
java -jar autoupgrade.jar -config_values -mode deploy

Upgrades Are Simple, Fast, and Safe

I might be a little biased, but I think upgrades are:

Simple

  • Since we introduced AutoUpgrade in 2019, upgrades have become even easier.
  • Keep it simple or customize it to your exact needs – you decide.
  • Use the preupgrade analysis to run hundreds of checks on your database (see the example after this list). If it passes the checks, you’re good to go.
  • Still worried? Use the dictionary check to ease your mind.
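
For reference, the preupgrade analysis mentioned above is just AutoUpgrade in a different mode – a minimal sketch against the simplified config file (the file name is an example):

java -jar autoupgrade.jar -config emp.cfg -mode analyze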

Fast

Safe

Was Connor Wrong?

No, Connor was not wrong. Connor was right!

He just needed a tiny correction. Connor made it look like AutoUpgrade is easy, when it is in fact very easy.

Sorry for the clickbait!

Happy upgrading!

Upgrade a PDB to Oracle Database 23ai Using Replay Upgrade

When you upgrade a PDB to Oracle Database 23ai, there is a new method for performing the upgrade. It’s called Replay Upgrade.

I would call it a convenience feature. You simply plug the PDB into a higher-release CDB and open it. The CDB detects the lower-release PDB and performs the upgrade. You don’t have to invoke AutoUpgrade.

Here’s how to do it.

A Few Words on Replay Upgrade

In Oracle Database 23ai, you can upgrade the data dictionary in two ways:

  • Parallel Upgrade – Has been around for quite a few releases. It’s what you’ve used before and can still use.
  • Replay Upgrade – The new method that enables you to upgrade the data dictionary by simply plugging in a lower-release PDB and letting the CDB perform the upgrade – without using AutoUpgrade.

I suggest you watch this video about the fundamental differences between the two methods.

Replay Upgrade is not a substitute for the entire upgrade project. Even with Replay Upgrade, you must still run the pre-upgrade and post-upgrade tasks. The version of the PDB must be one that allows for a direct upgrade to Oracle Database 23ai: 19c or 21c.
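
To confirm that the source PDB is on a release with a direct upgrade path, a quick check – the same query the hands-on lab below uses to verify the result:

select version_full from v$instance;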

AutoUpgrade uses Parallel Upgrade. You can force AutoUpgrade to use Replay Upgrade in your config file:

upg1.replay=yes

How To Upgrade Using Replay Upgrade

  1. You must perform the pre-upgrade tasks while the PDB is in the lower-release CDB.
  2. One such task is to analyze the PDB for upgrade readiness:
    java -jar autoupgrade.jar ... -mode analyze
    
  3. If needed, run the pre-upgrade fixups:
    java -jar autoupgrade.jar ... -mode fixups
    
  4. Plug the lower-release PDB into the higher-release CDB. It doesn’t matter whether you plug in using a manifest file, a refreshable clone PDB, or any other method.
  5. Open the PDB:
    alter pluggable database PDB1 open;
    
    • When you open the PDB in normal mode, Replay Upgrade starts.
    • The open command doesn’t complete until the upgrade completes. The command is not hanging; it’s simply upgrading in the background.
    • If you open the PDB in upgrade mode, Replay Upgrade does not start.
  6. During the open command, you can see in the alert log that the CDB upgrades the PDB:
    2025-03-31T14:02:37.955470+00:00
    ORANGE(6):Starting Upgrade on PDB Open
    
  7. When the open command completes, the PDB is upgraded, but it opens in restricted mode until you run Datapatch. From the alert log:
    ORANGE(6) Error Violation: SQL Patch, Cause: '23.5.0.24.07 Release_Update2407102158' is installed in the CDB but no release updates are installed in the PDB, Action: Call datapatch to install in the PDB or the CDB
    2025-03-31T14:11:03.803899+00:00
    ORANGE(6):Opening pdb with no Resource Manager plan active
    Violations: Type: 1, Count: 1
    Completed: Pluggable database ORANGE opened read write
    Completed:    alter pluggable database orange open
    
  8. Run Datapatch:
    $ORACLE_HOME/OPatch/datapatch -pdbs PDB1
    
  9. Restart the PDB to remove restricted mode:
    alter pluggable database PDB1 close immediate;
    alter pluggable database PDB1 open;
    
  10. Run post-upgrade tasks.

Want To Try It?

In our upgrade lab, Hitchhiker’s Guide for Upgrading to Oracle Database 23ai, there is no lab on Replay Upgrade.

You can still perform a Replay Upgrade if you want. I’ve created instructions in the appendix that you can use. The lab takes 15 minutes to complete.

My Database Is A Non-CDB

Replay Upgrade performs an upgrade-on-open. Interestingly, it can also perform a convert-on-open. The latter will run the same commands you’ll find in noncdb_to_pdb.sql, which you normally run to convert a non-CDB to a PDB.

So, you can simply plug a 19c non-CDB into a 23ai CDB. When you open the PDB, the CDB upgrades and converts to a PDB.
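
A minimal sketch of what that could look like – the names and paths are examples, and the manifest is created with DBMS_PDB.DESCRIBE while the non-CDB is open read-only:

-- In the 19c non-CDB (example name SALES)
shutdown immediate
startup mount
alter database open read only;
exec dbms_pdb.describe(pdb_descr_file => '/home/oracle/sales.xml');
shutdown immediate

-- In the 23ai CDB
create pluggable database SALES using '/home/oracle/sales.xml' nocopy;
alter pluggable database SALES open;  -- upgrade-on-open and convert-on-open happen here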

My Recommendation

I recommend using AutoUpgrade. It ensures that you run all the tasks and automates them completely, giving you the safest upgrade.

Replay Upgrade does look a lot easier at first glance, but you still need to remember all the pre-upgrade and post-upgrade tasks. When there’s something you must run manually, there’s always the risk that you forget one or two of the tasks.

For me, Replay Upgrade is a convenience feature you can use in a lab or demo environment or if you think it’s easier to incorporate in your automation. But even with automation, you can still use AutoUpgrade with the -noconsole command line option.

But the choice is yours.

Happy upgrading!

Appendix

Replay Upgrade Queries and Commands

  • Here’s how you can tell whether Replay Upgrade (Upgrade on Open and Convert On Open) is enabled:
    select property_name, property_value 
    from   database_properties
    where  property_name like '%OPEN%';
    
    You can set the property in the root container and in the PDB.
  • Here’s how to disable Replay Upgrade:
    alter database upgrade sync off;
    

Hands-On Lab

Here are the instructions for trying a Replay Upgrade in our Hands-On Lab.

The below steps perform a Replay Upgrade of the ORANGE PDB from CDB19 to CDB23.

  1. Start by provisioning a new lab and connecting to it. The lab runs in Oracle LiveLabs and is completely free. No installation is required.
  2. Start the CDB19 database. It’s a container database on Oracle Database 19c:
    . cdb19
    env | grep ORA
    sqlplus / as sysdba<<EOF
       startup;
    EOF
    
  3. Create an AutoUpgrade config file:
    cd /home/oracle/scripts
    cat > orange-replay.cfg <<EOF 
    global.autoupg_log_dir=/home/oracle/logs/orange-replay
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/23
    upg1.sid=CDB19
    upg1.target_cdb=CDB23
    upg1.pdbs=ORANGE
    upg1.target_pdb_copy_option.ORANGE=file_name_convert=none
    upg1.timezone_upg=NO
    EOF
    
  4. Run AutoUpgrade in analyze mode:
    cd
    java -jar autoupgrade.jar -config scripts/orange-replay.cfg -mode analyze
    
    • AutoUpgrade analyzes the ORANGE PDB for upgrade readiness.
  5. Check the preupgrade summary report:
    cat /home/oracle/logs/orange-replay/cfgtoollogs/upgrade/auto/status/status.log
    
    • The report states Check passed and no manual intervention needed.
  6. Run the preupgrade fixups:
    java -jar autoupgrade.jar -config scripts/orange-replay.cfg -mode fixups
    
    • AutoUpgrade runs pre-upgrade fixups, if any.
  7. Unplug ORANGE from the 19c CDB:
    . cdb19
    sqlplus / as sysdba<<EOF
        alter pluggable database ORANGE close;
        alter pluggable database ORANGE unplug into '/home/oracle/orange.xml';
        drop pluggable database ORANGE keep datafiles;
    EOF
    
  8. Plug into the 23ai CDB and open ORANGE:
    . cdb23
    env | grep ORA
    sqlplus / as sysdba<<EOF
       set timing on
       create pluggable database ORANGE using '/home/oracle/orange.xml' nocopy;
       alter pluggable database orange open;
    EOF
    
    • The open command upgrades the PDB. The command runs for several minutes.
    • In the end, the command completes but prints Warning: PDB altered with errors.
  9. Run Datapatch on the ORANGE PDB:
    $ORACLE_HOME/OPatch/datapatch -pdbs ORANGE
    
  10. Restart ORANGE:
    sqlplus / as sysdba<<EOF
    	alter pluggable database orange close;
    	alter pluggable database orange open;
    	select open_mode, restricted from v\$pdbs where name='ORANGE';
    EOF
    
    • The PDB now opens normally (READ WRITE) and unrestricted.
  11. Run the post-upgrade fixups:
    java -jar autoupgrade.jar \
       -preupgrade "dir=/home/oracle/logs/orange-replay/fixups,inclusion_list=ORANGE" \
       -mode postfixups
    
    
  12. That’s it. ORANGE has now been upgraded:
    sqlplus / as sysdba<<EOF
    	select open_mode, restricted from v\$pdbs where name='ORANGE';
    	alter session set container=ORANGE;
    	select version_full from v\$instance;
    EOF
    

AutoUpgrade New Features: Better Automation To Patch Oracle Database on Windows

Running Oracle Database on Microsoft Windows is slightly different from running it on other platforms. So, of course, patching Oracle Database is also slightly different.

The Oracle Database runs as a Windows service. AutoUpgrade must re-create the service when you perform out-of-place patching so the service starts oracle.exe from the new Oracle home.

Oracle Database on Windows runs as a Windows service with a hardcoded Oracle home path

To recreate the service, you must specify the credentials of the user who runs the service. Windows allows you to store the credentials in a special file; AutoUpgrade can use that when it recreates the service.

AutoUpgrade brings up a prompt to store credentials for a Windows service

For security purposes, AutoUpgrade deletes the credential file when it is no longer needed. For automation, however, that’s impractical because you would need to recreate the credential file every time you patch or upgrade.

AutoUpgrade now allows you to keep the file and reuse it. To do so, use the config file parameter delete_credential_file.

How To Patch Oracle Database on Windows

  1. Get the latest version of AutoUpgrade:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Create an AutoUpgrade config file:
    global.keystore=c:\oracle\autoupgrade\keystore
    patch1.source_home=c:\oracle\product\dbhome_19_26_0
    patch1.target_home=c:\oracle\product\dbhome_19_27_0
    patch1.sid=DB19
    patch1.folder=c:\oracle\patches
    patch1.patch=RECOMMENDED
    patch1.wincredential=c:\oracle\autoupgrade\credential
    patch1.delete_credential_file=false
    
  3. Load the credentials for the user running the service into a credential file:
    java -jar autoupgrade.jar -config ... -patch -load_win_credential "DB19"
    
  4. Start AutoUpgrade in deploy mode:
    java -jar autoupgrade.jar -config ... -patch -mode deploy
    
    • AutoUpgrade finds and downloads the right patches for Windows.
    • Creates a new Oracle home with the new patches.
    • Completes the entire patching process.

That’s it! You’ve patched your Oracle Database on Windows.

Here’s a little demo from our YouTube channel. Be sure to subscribe so you don’t miss out.

Happy patching!

Using Refreshable Clone PDBs from a Standby Database

The other day, I found myself praising refreshable clone PDBs to a customer (which I often do because it’s a killer feature). They liked the feature too but asked:

> We are concerned about the impact on the source database. When AutoUpgrade connects to the source database and clones the database, can we offload the work to a standby database?

Refreshable clone PDBs can eat up your resources if you don’t constrain the target CDB. So, let’s see what we can do.

Mounted Standby Database

This won’t work, because you must be able to connect to the database via a regular database link. Further, AutoUpgrade and the cloning process must be able to execute queries in the source database, which is not possible on a mounted database.

Open Standby Database / Active Data Guard

What if you stop redo apply and open the standby database? Or if you have Active Data Guard?

In this case, the database would be open in read-only mode, and those queries would work. However, the refreshable clone PDB feature was developed to work with, and requires, a read-write database, so this won’t work either – not even if you enable automatic redirection of DML operations (ADG_REDIRECT_DML).

Even if it did work, we wouldn’t recommend it, because we recommend running AutoUpgrade in analyze and fixups mode on the source database, which wouldn’t be possible on a read-only database. You could run analyze and fixups on the primary database instead. But is that really an option? If you’re worried about affecting your primary and want to offload the work to the standby, would running those commands on the primary be acceptable?

Snapshot Standby Database

What about a snapshot standby? That’s a read-write database. Let’s give it a try.

  1. Convert the source standby to a snapshot standby:
    DGMGRL> convert database '...' to snapshot standby;
    
    • The standby must remain a snapshot standby for the entire duration of the job. If you need to switch over or fail over to the standby, you must restart the entire operation.
  2. Ensure the PDB is open on the source standby:
    alter pluggable database ... open;
    
    • Otherwise, you will run into ORA-03150 when querying the source database over the database link.
  3. In the source standby, create the user used by the database link and grant appropriate permissions:
    create user dblinkuser identified by ...;
    grant create session, create pluggable database, select_catalog_role to dblinkuser;
    grant read on sys.enc$ to dblinkuser;
    
  4. In the target CDB, create a database link that points to the PDB in the source standby:
    create database link clonepdb
    connect to dblinkuser identified by ...
    using '';
    
  5. Create an AutoUpgrade config file:
    global.global_log_dir=/home/oracle/autoupgrade/log
    global.keystore=/home/oracle/autoupgrade/keystore
    upg1.source_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
    upg1.target_home=/u01/app/oracle/product/23.0.0/dbhome_1
    upg1.sid=CDB19
    upg1.target_cdb=CDB23
    upg1.pdbs=PDB1
    upg1.source_dblink.PDB1=CLONEPDB 300
    upg1.target_pdb_copy_option.PDB1=file_name_convert=none
    upg1.start_time=+12h
    
  6. Start AutoUpgrade in deploy mode:
    java -jar autoupgrade.jar ... -mode deploy
    
  7. Convert the source standby back to a physical standby:
    DGMGRL> convert database '...' to physical standby;
    

Is It Safe?

Using a standby database for anything other than your DR strategy is sometimes perceived as risky. But it is not, as I explain in this blog post (see the section What Happens If I Need to Switch Over or Fail Over?).

Happy upgrading!

Hey, Let Me Kill Your Network!

This short story is about the awesomeness of AutoUpgrade and refreshable clone PDBs.

Colleagues of mine were testing upgrades to Oracle Database 23ai using refreshable clone PDBs. They wanted to see how fast AutoUpgrade would clone the PDB and how that affected the source system.

The Systems

The source and target systems were identical:

  • Exadata X10M
  • 2-node RAC
  • 190 CPU/node
  • 25Gbps network/node

The database:

  • 1 TB in size
  • All data files on ASM

The Results

The source database is Oracle Database 19c. They configured AutoUpgrade to upgrade to Oracle Database 23ai using refreshable clone PDBs. However, this test measured only the initial copy of the data files – the CLONEDB stage in AutoUpgrade.

Parallel       Time     Throughput   Source CPU %
Default        269 s    3.6 GB/s     3%
Parallel 4     2060 s   0.47 GB/s    1%
Parallel 8     850 s    1.14 GB/s    1%
Parallel 16    591 s    1.65 GB/s    2%

A few observations:

  • Cloning a 1 TB database took just under 5 minutes with the default parallel degree.
  • Very little effect on CPU + I/O on source, entirely network-bound.
  • The throughput could scale almost up to the limit of the network.
  • By the way, this corresponds with reports we’ve received from other customers.

Learnings

  • The initial cloning of the database is very fast and efficient.
  • You should be prepared for the load on the source system. Especially since the network is a shared resource, it might affect other databases on the source system, too.
  • The target CDB determines the default parallel degree based on its own CPU_COUNT. If the target system is way more powerful than the source, this situation may worsen.
  • Use the AutoUpgrade config file entry parallel_pdb_creation_clause to select a specific parallel degree (see the sketch after this list). Since the initial copy happens before the downtime, you might want to set it low enough to prevent overloading the source system.
  • Be careful. Don’t kill your network!
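
A minimal sketch of such a config file entry – the parameter name comes from AutoUpgrade, but the exact value format here is an assumption, so check the documentation:

# Limit the parallel degree of the initial PDB copy (value format assumed)
upg1.parallel_pdb_creation_clause=PARALLEL 8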

Happy upgrading!

AutoUpgrade Never Fails, But When It Does

Any piece of software has errors. It’s just a fact of life.

Should you encounter problems with AutoUpgrade, you can help us by compiling a zip package. This package contains valuable information that we need to troubleshoot.

Are You Using the Latest Version?

  • Before generating a zip package, check that you’re using the latest version of AutoUpgrade – perhaps the issue is already fixed:
    java -jar autoupgrade.jar -version
    
  • Check the latest version on oracle.com or AutoUpgrade Tool (Doc ID 2485457.1).
  • Or, simply get the latest version, and compare:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    java -jar autoupgrade.jar -version
    

How To Generate a Zip Package

  1. Add -zip to the AutoUpgrade command that failed.
  2. Remove all other parameters except -config.
  3. Execute the command:
    java -jar autoupgrade.jar -config db19.cfg -zip
    
  4. AutoUpgrade generates a zip package in the current directory and outputs the name of the file in the console.
    • You can specify a different directory using -d <dir>, as shown below.
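
For example, to write the zip package to /tmp instead of the current directory (the directory is just an example):

java -jar autoupgrade.jar -config db19.cfg -zip -d /tmp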

What Does It Contain?

It contains:

  • All the files from the following locations:
    • global.global_log_dir
    • global.autoupg_log_dir
    • <prefix>.log_dir
  • Database alert log
  • Data Guard broker log
  • Attention log

When creating the zip package, AutoUpgrade doesn’t connect to the database but gathers the information from the file system.

If there are files that you don’t want to include in the package, you can exclude them using -zip_exclusion_list. Check the documentation for details.

AutoUpgrade New Features: Control Start Time When Using Refreshable Clone PDBs

When you migrate or upgrade with refreshable clone PDBs, you sometimes want to decide when the final refresh happens. Perhaps you must finish certain activities in the source database before moving on.

I’ve discussed this in a previous post, but now there’s a better way.

The Final Refresh Dilemma

In AutoUpgrade, the final refresh happens at the time specified by the config file parameter start_time. This is the cut-over time; after it, no further changes to the source database are replicated to the target database.

Overview of the phases when using refreshable clone PDBs

You specify start_time in the config file, and then you start the job. Typically, you start it a long time before start_time to allow the creation of the new PDB.

So, you must specify start_time in the config file at the time you believe the final refresh should happen. But things might change in your maintenance window. Perhaps it takes a little longer to shut down your application, or there’s a very important batch job that must finish. Or perhaps you can start even earlier.

In that case, a fixed start time is not very flexible.
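
For reference, a fixed start time in the config file could look like this – a minimal sketch; the date format mirrors the proceed example further down, but check the documentation for the exact formats your AutoUpgrade version accepts:

upg1.start_time=29/03/2025 01:00:00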

The Solution

You can use the proceed command in the AutoUpgrade console to adjust the start time, i.e., the final refresh.

  1. Get the latest version of AutoUpgrade:

    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Start the job in deploy mode as you normally would:

    java -jar autoupgrade.jar ... -mode deploy
    
    • AutoUpgrade now starts the CLONEPDB stage and begins to copy the database.
  3. Wait until the job reaches the REFRESHPDB stage:

    +----+-------+----------+---------+-------+----------+-------+--------------------+
    |Job#|DB_NAME|     STAGE|OPERATION| STATUS|START_TIME|UPDATED|             MESSAGE|
    +----+-------+----------+---------+-------+----------+-------+--------------------+
    | 100|  CDB19|REFRESHPDB|EXECUTING|RUNNING|  14:10:29| 4s ago|Starts in 54 minutes|
    +----+-------+----------+---------+-------+----------+-------+--------------------+
    Total jobs 1
    
    • In this stage, AutoUpgrade is waiting for start_time to continue the migration. It refreshes the PDB with redo from the source at the specified refresh interval.
    • You must start well before the maintenance window, so AutoUpgrade has enough time to copy the database.
  4. You can now change the start time. If you want to perform the final refresh and continue immediately, use the proceed command:

    proceed -job 100
    

    Or, you can change the start time:

    proceed -job 100 -newStartTime 29/03/2025 02:00:00
    

    Or, you can change the start time to a relative value, for example, 1 hour 30 minutes from now:

    proceed -job 100 -newStartTime +1h30m
    
  5. After the final refresh, AutoUpgrade disconnects the refreshable clone PDB, turns it into a regular PDB, and moves on with the job.

Wrapping Up

AutoUpgrade offers complete control over the process. You define a start time upfront, but as things change, you can adjust it in flight.

Refreshable clone PDBs are a fantastic method for non-CDB to PDB migrations and for upgrades of individual PDBs.

There are a few quirks to be aware of, and if you are using Data Guard, bear in mind that you can only plug in with deferred recovery. Other than that, all that’s left to say is…

Happy migrating, happy upgrading!
