Upgrade Oracle Database 19c Non-CDB to 26ai and Convert to PDB

Let me show you how to upgrade a non-CDB to Oracle AI Database 26ai. Since this release only supports the multitenant architecture, you must also convert it to a PDB.

How to Upgrade and Convert

I’ve already prepared my database and installed a new Oracle home. I’ve also created a new container database or decided to use an existing one.

The maintenance window has started, and users have disconnected from the database.

  1. This is my AutoUpgrade config file:

    global.global_log_dir=/home/oracle/autoupgrade/logs/FTEX-CDB26
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/26
    upg1.sid=FTEX
    upg1.target_cdb=CDB26
    upg1.export_rman_backup_for_noncdb_to_pdb=yes
    upg1.target_pdb_name=PDBFTEX
    upg1.target_pdb_copy_option=file_name_convert=none
    
    • I specify the source and target Oracle homes. I’ve already installed the target Oracle home.
    • sid contains the SID of my database.
    • target_cdb specifies the SID of the container database that I plug into.
    • export_rman_backup_for_noncdb_to_pdb instructs AutoUpgrade to export RMAN metadata using DBMS_PDB.EXPORTRMANBACKUP. This makes it easier to use pre-plugin backups (see appendix).
    • target_pdb_name tells AutoUpgrade to rename the PDB on plug-in; here, FTEX becomes PDBFTEX.
    • target_pdb_copy_option instructs AutoUpgrade to copy the data files on plug-in and generate new OMF names. If I omit the parameter, AutoUpgrade reuses the data files. See the appendix for more details on copying or re-using data files.
    • Check the appendix for additional parameters.
  2. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config FTEX-CDB26.cfg -mode deploy
    
    • AutoUpgrade starts by analyzing the database for upgrade readiness and executes the pre-upgrade fixups.
    • Next, it creates a manifest file and shuts down the non-CDB.
    • Finally, it plugs the database into the CDB, upgrades it, and converts it from a non-CDB to a PDB.
  3. While the job progresses, I monitor it:

    upg> lsj -a 30
    
    • The -a 30 option automatically refreshes the information every 30 seconds.
    • I can also use status -job 100 -a 30 to get detailed information about a specific job.
  4. In the end, AutoUpgrade completes the upgrade:

    Job 101 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    Please check the summary report at:
    /home/oracle/autoupgrade/logs/FTEX-CDB26/cfgtoollogs/upgrade/auto/status/status.html
    /home/oracle/autoupgrade/logs/FTEX-CDB26/cfgtoollogs/upgrade/auto/status/status.log
    
    • This includes the post-upgrade checks and fixups.
  5. I review the AutoUpgrade Summary Report. The path is printed to the console:

    vi /home/oracle/autoupgrade/logs/FTEX-CDB26/cfgtoollogs/upgrade/auto/status/status.log
    
  6. I take care of the post-upgrade tasks.

  7. I update any profiles or scripts that use the database.

  8. AutoUpgrade removes the non-CDB entry from /etc/oratab and Grid Infrastructure.

    • However, AutoUpgrade doesn’t remove:
      • Database files, like PFile, SPFile, password file, control file, and redo logs.
      • Database directory, like diagnostic_dest, adump, or db_recovery_file_dest.
      • Data files or temp files.
    • I can remove the above files and directories once I’m sure that I won’t need a rollback.
    • If I plug in and reuse the data files (omit target_pdb_copy_option), the data files remain in the non-CDB location. Take care you don’t delete them by mistake; they are now used by the PDB.
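
Before declaring victory, I like to run a quick sanity check against the CDB. This isn’t part of AutoUpgrade’s output; just a minimal sketch using the names from my example:

sqlplus / as sysdba

SQL> SELECT name, open_mode, restricted FROM v$pdbs WHERE name = 'PDBFTEX';
SQL> SELECT cause, message FROM pdb_plug_in_violations
     WHERE name = 'PDBFTEX' AND status <> 'RESOLVED';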

That’s It!

With AutoUpgrade, you can easily upgrade your non-CDB and convert it to a PDB. For maximum flexibility, AutoUpgrade lets you copy or re-use the data files.

Check the other blog posts related to upgrading to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Rollback Options

When you convert a non-CDB to a PDB, you can’t use Flashback Database as a rollback method. You need to rely on other methods, like:

  • Copy data files by using target_pdb_copy_option.
  • RMAN backups.
  • Storage snapshots.

In this blog post, I copied the data files (using target_pdb_copy_option). This means the original data files are left untouched. In case of a rollback, I just need to reinstate the non-CDB instance (FTEX).

java -jar autoupgrade.jar -config cdb1.cfg -restore -jobs 101
  • In the jobs parameter, I specify the job ID that originally upgraded the database.
  • AutoUpgrade now recreates the FTEX instance using the original files (data files, control file, redo logs, and so on).

Copy or Re-use Data Files

When you plug into a CDB and convert your database to a PDB, you must decide what to do with the data files.

|                        | Copy                                                               | Re-use                                                                                                                      |
|------------------------|--------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| Plug-in time           | Slow, needs to copy the data files                                 | Fast, re-uses the existing data files                                                                                         |
| Disk space             | Needs room for a full copy of the data files                       | No extra disk space                                                                                                           |
| Rollback               | Source data files are left untouched; re-open the source database | Source database unusable; rely on another rollback method                                                                     |
| AutoUpgrade            | Enable using target_pdb_copy_option                                | Default; omit target_pdb_copy_option                                                                                          |
| Location after plug-in | The data files are copied to the desired location                  | The data files remain in the non-CDB location; don’t delete them by mistake; consider moving them using online data file move |
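
The online data file move mentioned above is a one-liner per file. A minimal sketch (path hypothetical):

-- Run in the PDB; with db_create_file_dest set, omitting the TO clause
-- generates an OMF name in the new location
ALTER SESSION SET CONTAINER = PDBFTEX;
ALTER DATABASE MOVE DATAFILE '/u02/oradata/FTEX/users01.dbf';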

target_pdb_copy_option

Use this parameter only when you want to copy the data files.

  • Use target_pdb_copy_option=file_name_convert=('/u02/oradata/FTEX', '/u02/oradata/PDBFTEX','search-1','replace-1') when your data files are in a regular file system. The value is a list of pairs of search/replace strings.
  • Use target_pdb_copy_option=file_name_convert=none when you want the data files in ASM or use OMF in a regular file system. The database automatically generates new file names and puts the data files in the right location.
  • Use target_pdb_copy_option=file_name_convert=('+DATA/FTEX', '+DATA/PDBFTEX') only in the rare case where your data files are in ASM but you don’t use OMF. If you have ASM, I strongly recommend using OMF.

You can find further explanation in the documentation.

Move

The CREATE PLUGGABLE DATABASE statement also has a MOVE clause in addition to COPY and NOCOPY. In a regular file system, the MOVE clause works as you would expect. However, in ASM, it is implemented via copy-and-delete, so you might as well use the COPY option.

AutoUpgrade doesn’t support the MOVE clause.
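
For reference, here’s a minimal SQL sketch of the three variants (the PDB name and manifest path are hypothetical):

CREATE PLUGGABLE DATABASE pdbftex USING '/home/oracle/ftex.xml' COPY FILE_NAME_CONVERT = NONE;
CREATE PLUGGABLE DATABASE pdbftex USING '/home/oracle/ftex.xml' NOCOPY TEMPFILE REUSE;
CREATE PLUGGABLE DATABASE pdbftex USING '/home/oracle/ftex.xml' MOVE FILE_NAME_CONVERT = ('/u02/oradata/FTEX', '/u02/oradata/PDBFTEX');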

Pre-plugin Backups

After converting a non-CDB to a PDB, you can restore the PDB using a combination of backups from before and after the plug-in operation. Backups taken before the plug-in are called pre-plugin backups.

A restore using pre-plugin backups is more complicated; however, AutoUpgrade eases that by exporting the RMAN backup metadata (config file parameter export_rman_backup_for_noncdb_to_pdb).

I suggest that you:

  • Start a backup immediately after the upgrade, so you don’t have to use pre-plugin backups.
  • Practice restoring with pre-plugin backups.
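
For the first suggestion, a minimal RMAN sketch is all it takes (run against the CDB; your backup strategy may differ):

rman target /

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;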

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade detects this and sets CLUSTER_DATABASE=FALSE at the appropriate time. It also removes the non-CDB from the Grid Infrastructure configuration.

What If I Use Oracle Restart?

No changes. AutoUpgrade detects this and automatically removes the non-CDB from the Grid Infrastructure configuration.

What If My Database Is Encrypted?

AutoUpgrade fully supports upgrading an encrypted database.

You’ll need to input the non-CDB database keystore password into the AutoUpgrade keystore. You can find the details in a previous blog post.
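
Loading the password happens in AutoUpgrade’s keystore console. A minimal sketch, assuming my config file from above:

java -jar autoupgrade.jar -config FTEX-CDB26.cfg -load_password

In the console, I add the keystore password for FTEX and save the AutoUpgrade keystore before exiting.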

In the container database, AutoUpgrade always adds the database encryption keys to the unified keystore. After the conversion, you can switch to an isolated keystore.

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use.

  • timezone_upg: AutoUpgrade upgrades the database time zone file after the actual upgrade. This requires an additional restart of the database and might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner.

  • raise_compatible: If you want to raise the initialization parameter COMPATIBLE immediately after the upgrade, you can use this parameter. Don’t use it if you have standby databases, because there’s a specific procedure for raising COMPATIBLE in a Data Guard configuration. Don’t use it if you want to keep the option of downgrading.

  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.
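
As a sketch, here’s how these could look in my config file (the script paths are hypothetical):

# Postpone the time zone file upgrade
upg1.timezone_upg=no
# Raise COMPATIBLE right after the upgrade (no standby, no downgrade option needed)
upg1.raise_compatible=yes
# Hypothetical wrapper scripts
upg1.before_action=/home/oracle/scripts/before.sh
upg1.after_action=/home/oracle/scripts/after.sh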

AutoUpgrade New Features: Control Start Time When Using Refreshable Clone PDBs

When you migrate or upgrade with refreshable clone PDBs, you sometimes want to decide when the final refresh happens. Perhaps you must finish certain activities in the source database before moving on.

I’ve discussed this in a previous post, but now there’s a better way.

The Final Refresh Dilemma

In AutoUpgrade, the final refresh happens at the time specified by the config file parameter start_time. This is the cut-over time; changes made to the source database after this point are no longer replicated to the target database.

[Figure: overview of the phases when using refreshable clone PDBs]

You specify start_time in the config file and then start the job, typically long before start_time so AutoUpgrade has enough time to create the new PDB.

In other words, you must decide upfront when you believe the final refresh should happen. But things might change in your maintenance window. Perhaps it takes a little longer to shut down your application, or there’s a very important batch job that must finish. Or perhaps you can start even earlier.

In that case, a fixed start time is not very flexible.
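
For reference, here’s how the fixed start time could look in the config file (the timestamp is hypothetical; the format matches the proceed examples below):

upg1.start_time=29/03/2025 02:00:00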

The Solution

You can use the proceed command in the AutoUpgrade console to adjust the start time, that is, the time of the final refresh.

  1. Get the latest version of AutoUpgrade:

    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Start the job in deploy mode as you normally would:

    java -jar autoupgrade.jar ... -mode deploy
    
    • AutoUpgrade now starts the CLONEPDB stage and begins to copy the database.
  3. Wait until the job reaches the REFRESHPDB stage:

    +----+-------+----------+---------+-------+----------+-------+--------------------+
    |Job#|DB_NAME|     STAGE|OPERATION| STATUS|START_TIME|UPDATED|             MESSAGE|
    +----+-------+----------+---------+-------+----------+-------+--------------------+
    | 100|  CDB19|REFRESHPDB|EXECUTING|RUNNING|  14:10:29| 4s ago|Starts in 54 minutes|
    +----+-------+----------+---------+-------+----------+-------+--------------------+
    Total jobs 1
    
    • In this stage, AutoUpgrade is waiting for start_time to continue the migration. It refreshes the PDB with redo from the source at the specified refresh interval.
    • You must start the job well before the maintenance window, so AutoUpgrade has enough time to copy the database.
  4. You can now change the start time. If you want to perform the final refresh and continue immediately, use the proceed command:

    proceed -job 100
    

    Or, you can change the start time:

    proceed -job 100 -newStartTime 29/03/2025 02:00:00
    

    Or, you can change the start time to a relative value, for example, 1 hour 30 minutes from now:

    proceed -job 100 -newStartTime +1h30m
    
  5. After the final refresh, AutoUpgrade disconnects the refreshable clone PDB, turns it into a regular PDB, and moves on with the job.

Wrapping Up

AutoUpgrade offers complete control over the process. You define a start time upfront, but as things change, you can adjust it in flight.

Refreshable clone PDBs are a fantastic method for non-CDB to PDB migrations and for upgrades of individual PDBs.

There are a few quirks to be aware of, and if you use Data Guard, bear in mind that you can only plug in with deferred recovery. Other than that, there’s only one thing left to say…

Happy migrating, happy upgrading!

A Few Details about Using Refreshable Clone PDB for Non-CDB to PDB Migration

Our team has been advocating the use of refreshable clone PDB for non-CDB to PDB migrations using AutoUpgrade. It is a great feature, and our entire team loves it, as do many of the customers we work with.

However, in a recent non-CDB to PDB migration, we encountered some issues with refreshable clone PDB and AutoUpgrade.

Can My Target Container Database Be a RAC Database?

Yes, this works perfectly fine.

Be aware that the CREATE PLUGGABLE DATABASE statement scales out on all nodes in your cluster. By default, the database also uses parallel processes, so this can put quite a load on the source non-CDB. Consider restricting the number of parallel processes using the AutoUpgrade config file parameter:

upg1.parallel_pdb_creation_clause=4

Since the creation scales out on all nodes, all nodes must be able to resolve the connect identifier to the source non-CDB. If you use an alias from tnsnames.ora, be sure to add that on all nodes. Failure to do so will lead to an error during the CREATE PLUGGABLE DATABASE command:

ERROR at line 1:
ORA-65169: error encountered while attempting to copy file
+DATAC1/SRCDB/DATAFILE/system.262.1178083869
ORA-17627: ORA-12154: TNS:could not resolve the connect identifier specified
ORA-17629: Cannot connect to the remote database server
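
For example, a minimal tnsnames.ora entry to add on every node (host name hypothetical):

FTEX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = FTEX))
  )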

What Happens If the Source Database Extends a Data File?

If the source database extends a data file – either through AUTOEXTEND ON NEXT or manually by a user – the target database extends the matching data file as well. Here is an extract from the target alert log when it extends a data file:

2024-08-27T07:01:26.671975+00:00
PDB1(4):Media Recovery Log +RECOC1/SRCDB/partial_archivelog/2024_08_27/thread_2_seq_4.276.1178089277
2024-08-27T07:01:32.773191+00:00
PDB1(4):Resize operation completed for file# 26, fname +DATA/TGTCDB_HBZ_FRA/20A568D1FD5DB0A6E0633D01000AC89B/DATAFILE/srctbs02.290.1178089287, old size 10240K, new size 1058816K

It works with smallfile and bigfile tablespaces.

What Happens If I Create a Tablespace on the Source Database?

The target database attempts to create the same tablespace.

For this to work, one of the following must be true:

  • The target CDB uses Oracle Managed Files (OMF), so it can generate a new file name on its own.
  • The target CDB has the PDB_FILE_NAME_CONVERT parameter set, so it can translate the source file name.

If neither of the above is true, you’ll receive an error during ALTER PLUGGABLE DATABASE ... REFRESH:

ORA-00283: recovery session canceled due to errors
ORA-01274: cannot add data file that was originally created as
'+DATAC1/SRCDB/DATAFILE/srctbs04.282.1178091655'
You can use PDB_FILE_NAME_CONVERT instead.

It works with smallfile and bigfile tablespaces.
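
As a sketch, either of these settings in the target CDB would satisfy the requirement (paths hypothetical; check the parameter scope in your release):

-- Option 1: enable OMF, so the CDB generates new file names on its own
ALTER SYSTEM SET db_create_file_dest = '+DATAC1';

-- Option 2: translate source paths to target paths (pairs of search/replace strings)
ALTER SYSTEM SET pdb_file_name_convert = '+DATAC1/SRCDB/', '+DATAC1/TGTCDB/';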

What Happens If I Add a Data File to an Existing Tablespace?

The target database attempts to add a matching data file.

The target database must be able to translate the data file location according to the section above.

2024-08-27T06:51:19.294612+00:00
PDB1(4):Media Recovery Log +RECOC1/SRCDB/partial_archivelog/2024_08_27/thread_2_seq_4.276.1178088679
2024-08-27T06:51:20.268208+00:00
PDB1(4):Successfully added datafile 25 to media recovery
PDB1(4):Datafile #25: '+DATA/TGTCDB_HBZ_FRA/20A568D1FD5DB0A6E0633D01000AC89B/DATAFILE/srctbs01.289.1178088681'

What Happens If I Set a Tablespace Read-Only?

The refreshable clone PDB does not support this. Nor does it support the opposite: setting a read-only tablespace back to read-write.

If you do so, the database reports an error:

alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB

From the alert log:

2024-08-28T05:23:02.893946+00:00
PDB2(6):Error! unsupported source PDB operation: 21
2024-08-28T05:23:02.994035+00:00
PDB2(6):Media Recovery failed with error 65339

Operation 21 is setting a tablespace read-only. If you set a tablespace read-write, the database reports operation 20 instead.

PDB2(7):Error! unsupported source PDB operation: 20

You will not be able to refresh the PDB anymore. You must re-create the refreshable clone PDB.

What Happens If I Restart the Source Database?

Refreshable clone PDB does not support restarting the source database.

When you restart the source database, the source database places a special marker in the redo stream. This even happens for a clean shutdown (SHUTDOWN NORMAL). The target CDB does not understand how to recover beyond this marker.

alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB

From the alert log:

2024-08-28T05:27:00.451985+00:00
PDB2(4):Error! unsupported source PDB operation: 3
2024-08-28T05:27:00.710236+00:00
PDB2(4):Media Recovery failed with error 65339

Operation 3 is the source database restart.

You will not be able to refresh the PDB anymore. You must re-create the refreshable clone PDB.

How Do I Drain My Source Database Before Migration?

Right before the migration, when you cut the connection from the source non-CDB to the target PDB, it would be useful to restart the database to drain sessions. But as shown above, that’s not possible.

I suggest that you:

  • Ensure that the target CDB connects to the source non-CDB using a dedicated service. This applies to the database link that you establish between the two databases.
  • Stop all other services and specify a drain timeout.
  • Shut down the application that connects to the source non-CDB.
  • Kill sessions manually.
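
To make the service and session suggestions concrete, here’s a hedged sketch (the service name and session identifiers are hypothetical):

# Stop a service with a drain timeout (Grid Infrastructure)
srvctl stop service -db FTEX -service app_svc -drain_timeout 300 -stopoption IMMEDIATE

Then, for the stubborn sessions that remain:

ALTER SYSTEM KILL SESSION '123,4567' IMMEDIATE;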

Remember that the target database connects to the source database via a database link, so stopping the database listener is not an option. Nor is enabling RESTRICTED SESSION.

Update: Armando managed to perform the migration using restricted session; check his comment for details.

What Happens If I Restart the Target Container Database?

You can safely restart the target CDB while you have a refreshable clone PDB. This works fine.

What About NOLOGGING Operations?

You can’t perform NOLOGGING operations on the source database.

Since refreshable clone PDB relies on redo, a NOLOGGING operation on the source prevents that data from reaching the target. When you try to query the NOLOGGING table on the target database after the migration, you’ll receive an error:

SQL> select count(*) from t1
       *
ERROR at line 1:
ORA-28304: Oracle encrypted block is corrupt (file # 186, block # 131)
ORA-01110: data file 186:
'+DATA/TGTCDB_HBZ_FRA/20CF181D4A925E06E0633D01000ACB50/DATAFILE/srctbs01.297.1178266961'
ORA-26040: Data block was loaded using the NOLOGGING option

Thanks to Marcelo for leaving a comment. He suggests that you set the source non-CDB in FORCE LOGGING mode. This is a good idea to avoid this potential nightmare:

alter database force logging;

You can read more about NOLOGGING operations in The Gains and Pains of Nologging Operations (Doc ID 290161.1).

What About Hot Backups?

You can’t perform hot backup operations on the source database.

If you do so, you’ll run into the following error:

2025-11-21T14:31:06.845676+00:00
SALES(4):Error! unsupported source PDB operation: 1
2025-11-21T14:31:07.845923+00:00
SALES(4):Media Recovery failed with error 65339

Please note that I’m not referring to RMAN online backups. I’m talking about the old-school ALTER DATABASE BEGIN BACKUP and ALTER DATABASE END BACKUP commands.

Any Restrictions on Data Types or Object Types?

No. The refreshable clone is a physical copy of the database, so there are no restrictions on data types or object types.

Services

You must re-create your services after the migration. Neither database-managed services nor Clusterware-managed services survive the migration.
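
A minimal sketch of re-creating a database-managed service in the new PDB (the PDB and service names are hypothetical):

ALTER SESSION SET CONTAINER = PDB1;

BEGIN
   DBMS_SERVICE.CREATE_SERVICE(service_name => 'app_svc', network_name => 'app_svc');
   DBMS_SERVICE.START_SERVICE(service_name => 'app_svc');
END;
/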

Summary

Despite these minor restrictions, migration from non-CDB to PDB using refreshable clone PDB and AutoUpgrade is still a very handy method. Knowing the restrictions upfront ensures that you can successfully migrate the database.

Happy migrating!