At the recent Oracle DatabaseWorld at CloudWorld, I spoke to several customers who had to upgrade to Oracle Database 19c and convert their non-CDBs to the multitenant architecture.
Here is how to do it using Refreshable Clone PDBs.
My source database is:
- A non-CDB
- On Oracle Database 12.2 or newer
I want to:
- Upgrade to Oracle Database 19c
- Convert the database to a PDB
- Plug it into an existing CDB
The Problem With PDB Conversion
The conversion to multitenant does not offer the same rollback options as an upgrade. Normally, when you upgrade a database, you rely on Flashback Database as the primary rollback option. However, that does not work for conversion to multitenant.
When you plug your non-CDB into a CDB, the CDB makes changes to the data file headers. Those changes are irreversible and prevent you from ever using those data files in a non-CDB. Not even Flashback Database can revert the changes.
So, what are your rollback options?
- Restore a backup. It might take longer than your organization can accept.
- Make a copy of the data files before conversion. It requires disk space and a longer downtime window to copy the data files.
This is where Refreshable Clone PDBs come into play.
Refreshable Clone PDBs
Here is an overview of what AutoUpgrade does for you:

- AutoUpgrade creates a PDB in the target CDB as a refreshable clone PDB of the source non-CDB.
- The target CDB starts to copy the data files from the source non-CDB.
- The target CDB refreshes the PDB. In other words, it rolls forward the data files using the redo from the source non-CDB.
- Now, downtime starts. AutoUpgrade issues a final refresh to bring over the latest changes.
- AutoUpgrade disconnects the refreshable clone PDB from its source. Now, the PDB is a real, stand-alone PDB. AutoUpgrade upgrades the PDB and converts it into a proper PDB.
If something happens during the upgrade or conversion and you want to roll back, simply start the original non-CDB. It is left completely untouched.
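To give you an idea of what happens under the hood, here is a minimal sketch of how a refreshable clone PDB could be created manually. It assumes the database link (CLONEPDB) and the names (NONCDB1 and PDB1) that I use later in this post, and depending on your storage you might also need a file_name_convert or create_file_dest clause:
-- A sketch only - AutoUpgrade runs the equivalent steps for you during deploy
-- Run in the root container of the target CDB
create pluggable database PDB1
  from NONCDB1@CLONEPDB
  refresh mode every 10 minutes;  -- the target CDB rolls the copy forward with source redo at this interval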
You can learn about the concept in detail in our AutoUpgrade 2.0 webinar:
Refreshable clone PDBs do not work for cross-endian migrations (like AIX to Linux), but same-endian cross-platform migrations (like Windows to Linux) should work fine.
How To
- In the source non-CDB, I create a user:
create user dblinkuser identified by ... ;
grant create session, create pluggable database, select_catalog_role to dblinkuser;
grant read on sys.enc$ to dblinkuser;
- In my target CDB, I create a database link connecting to my source non-CDB:
create database link clonepdb connect to dblinkuser identified by ... using 'source-db-alias';
You can drop the database link after the migration.
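Before moving on, a quick sanity check of the database link doesn't hurt. A minimal sketch, assuming the link name clonepdb from above:
-- Run in the root container of the target CDB
-- Both queries should succeed if the link and the grants are in place
select * from dual@clonepdb;
select name, open_mode from v$database@clonepdb;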
- I create an AutoUpgrade config file called noncdb1.cfg:
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=NONCDB1
upg1.target_cdb=CDB1
upg1.source_dblink.NONCDB1=CLONEPDB 600
upg1.target_pdb_name.NONCDB1=PDB1
upg1.start_time=25/09/2023 06:30:00
source_home and target_home are the Oracle Homes of the source non-CDB and the target CDB, respectively.
sid is the source non-CDB that I want to upgrade and convert.
target_cdb is the CDB into which I want to plug the non-CDB. You must create the CDB in advance or use an existing one.
source_dblink has the name of the database link (CLONEPDB) and the rate at which the target CDB brings over redo and rolls forward the copy (600 seconds or 10 minutes).
target_pdb_name specifies that I want to rename the non-CDB to PDB1 when I plug it in. You can leave this out if you want to keep the name.
start_time specifies when downtime starts. At this point, AutoUpgrade refreshes the PDB for the last time and then moves on with upgrade and PDB conversion.
- Start AutoUpgrade in analyze mode on the source system:
java -jar autoupgrade.jar -mode analyze -config noncdb1.cfg
- Run AutoUpgrade in fixups mode on the source system:
java -jar autoupgrade.jar -mode fixups -config noncdb1.cfg
- This runs the fixups identified by AutoUpgrade in analyze mode. You can run this task even after you start AutoUpgrade in deploy mode. Just ensure that the fixups complete before the final refresh (as specified in the start_time parameter).
- If there are no errors found in the analysis, I start AutoUpgrade in deploy mode:
java -jar autoupgrade.jar -mode deploy -config noncdb1.cfg
- AutoUpgrade copies the data files over the database link.
- Rolls the copies of the data files forward with redo from the source non-CDB.
- At the specified start time, issues a final refresh and disconnects the PDB from the source non-CDB.
- Upgrades and converts the database to a PDB.
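While deploy is running, you can keep an eye on the refreshable clone from the target CDB. A minimal sketch, assuming the PDB name PDB1 from the config file and a 19c target:
-- Run in the root container of the target CDB
-- Shows the refresh settings and the SCN of the last completed refresh
select pdb_name, status, refresh_mode, refresh_interval, last_refresh_scn
from   dba_pdbs
where  pdb_name = 'PDB1';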
Here’s a demo of it:
Words of Caution
Disconnect Users from Source Database
Right before the upgrade and conversion starts, AutoUpgrade executes a final refresh. The last redo from the source non-CDB is applied to ensure no data is lost. You must ensure that no users are connected to the source non-CDB after this time. Otherwise, that data will be lost.
AutoUpgrade starts the final refresh at the start time specified in the config file:
upg1.start_time=25/09/2023 06:30:00
You must be careful about disconnecting users from the source non-CDB. Remember, AutoUpgrade connects to the source non-CDB over a database link as a regular user (not SYS). This means the listener must be available, and you can't enable restricted session or use similar means.
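Before the start time, it is a good idea to verify that only the expected sessions remain on the source non-CDB. A minimal sketch; dblinkuser is the user created earlier, and you should adjust the exclusion list to your environment:
-- Run in the source non-CDB
-- Any rows returned are sessions that could still generate changes
select username, machine, program, count(*) as cnt
from   v$session
where  username is not null
and    username not in ('SYS', 'SYSTEM', 'DBLINKUSER')
group  by username, machine, program;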
Data Guard
If the target CDB is protected by Data Guard, special attention is needed to handle the standby databases. I explain the details in our AutoUpgrade 2.0 webinar:
Redo
The procedure relies on redo from the source non-CDB. Ensure that redo is kept in the Fast Recovery Area of the source non-CDB until it has been applied on the target PDB. Either postpone your archive backups or change the archive log deletion policy so the archive logs remain on disk.
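If in doubt, you can check which archive logs are still on disk in the source non-CDB. A minimal sketch:
-- Run in the source non-CDB
-- deleted = 'NO' means the archive log is still available for the refresh
select thread#, sequence#, first_time, name
from   v$archived_log
where  deleted = 'NO'
order  by thread#, sequence#;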
Final Refresh
Check this blog post if you want to be in control of when the final refresh happens.
Services
You must recreate the services used in your connect strings.
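Here is a minimal sketch of recreating a service inside the new PDB using DBMS_SERVICE. The service name APP_SVC is just an example; on RAC or with Grid Infrastructure you would typically use srvctl instead:
-- Run inside the new PDB
begin
  dbms_service.create_service(service_name => 'APP_SVC', network_name => 'APP_SVC');
  dbms_service.start_service(service_name => 'APP_SVC');
end;
/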
Hi Daniel, is it possible to set up a refreshable PDB clone between cross-endian platforms? I.e. source DB on Solaris and target DB on RHEL.
Hi,
That’s a good question. Refreshable Clone PDBs do not work for cross-endian migrations. Cross-platform (like Windows to Linux) should work fine. I have updated the blog post with details.
Regards,
Daniel
Daniel
Thank you.
In step #5, should not “mode” be set to “deploy”? Seems like a typo. Just thought of checking.
Regards
Vijay
Oracle Exadata Tier1 Support
Hi Vijay,
Good catch. This is what happens when you copy/paste.
Thanks for letting me know.
Daniel
Can I use the autoupgrade method of cloning to a remote system? In other words, both ORACLE_HOMEs are on different systems.
In the past we did migrations from 12 to 19 by cloning:
– run preupgrade script against 12 pdb
– create database link from target to source CDB
– clone the pdb
– run dbupgrade
– apply post preupgrade scripts to 19 pdb
Can I do the above steps with autoupgrade ?
Hi,
This is exactly what you can do now with AutoUpgrade using refreshable clone PDBs.
Regards,
Daniel
What if the target DB is on another host?
This will work perfectly fine.
Hi Daniel,
thanks for the blog. I tried it out and it worked as expected.
One comment:
the command for deploy seems to be wrong:
java -jar autoupgrade.jar -mode analyze -deploy noncdb1.cfg
shouldn’t that be:
java -jar autoupgrade.jar -mode deploy -config noncdb1.cfg
And one question: is it possible to activate the refreshable pdb manually instead of using a distinct date?
Thx
Johannes
Hi Johannes,
Thanks for the feedback. You’re right – there is a typo. I changed the post to match your comment. Thanks!
Currently, having a fixed start time is the only option. However, there is a workaround. You can start AutoUpgrade and let it run until the phase where it waits for the start time. Then you kill AutoUpgrade. When you want to resume and complete the process, you simply restart AutoUpgrade with the same command line and it will continue where it left off. If “start_time” has now passed, then it will continue with the final refresh immediately.
Regards,
Daniel
Dan –
I have gotten this to work with the source being single-instance, non-CDB on a remote machine. However, it fails if the remote non-CDB is RAC (trying to check the status of all the instances in srvctl). Would this be expected?
Hi Pete,
I’ve never heard that there should be any issues when the source database is RAC. Can you share some more details like versions, platforms, error messages and such. Or, if you have an SR number, I can have a look.
Regards,
Daniel
Daniel –
A little more information for you.
I have only seen this with a source RAC database on a different RAC cluster than the target. I have gotten the same error with a 19c RAC PDB and a 12.2 RAC non-CDB. Target is a 19.22 CDB, and I am using AutoUpgrade 24.1.240315.
Both source and target hosts are Exadata (one OEL 7, the other OEL 8), but I do not think that is relevant.
I have opened SR 3-36464587071 with the logs from the 19c PDB “hot-clone” test case. Thanks for taking a look.
– Pete
Hi Pete,
Thanks for supplying the SR. I’ll take a look. However, I have some traveling coming up, so please expect a delay in my answer.
Regards,
Daniel
Hi Pete,
I’ve asked one of our developers to take a look. You can reach out to me on daniel.overby.hansen@oracle.com later on if we need to discuss details.
Thanks for letting me know about this issue!
Daniel
Hey Daniel,
Thanks for the amazing article. Regarding the “Words of Caution”: Disconnect Users from Source Database. Sometimes it’s impossible to prevent all DMLs, so I’d like to restart the source in “read-only” mode and perform the final refresh. But as soon as the source is restarted read-only, no refreshes are possible:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB
In alert.log:
PDBORCL(5):Errors with log +DATA/ORCL01/ARCHIVELOG/2024_07_20/thread_4_seq_55.935.1174841867
However, the mentioned archive log is accessible.
Is this an expected behavior?
Regards, Nariman
Hi Nariman,
That does not sound like expected behavior. I’ve tried the same myself and it worked fine. If the problem persists you should get in touch with Oracle Support.
Have you tried to enable restricted session instead? That still leaves the database in read-write mode, but only privileged users can connect.
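Just as a sketch, enabling restricted session on the source looks like this. Note that sessions that are already connected are not disconnected automatically:
-- Run in the source non-CDB
-- Only users with the RESTRICTED SESSION privilege can connect afterwards
alter system enable restricted session;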
Regards,
Daniel
Hi Daniel,
is it possible to separate the creation and refreshing of the refreshable PDB from the final upgrade? We would like to automate the migration but have some large databases (> 5 TB). So our idea was to create the refreshable pluggable databases some days before the actual upgrade date. Using a fixed time at that point is not feasible.
Any hint would be appreciated
Johannes
Hi Johannes,
Not at the moment in AutoUpgrade. If you want full control, you would need to do it outside of AutoUpgrade.
An idea:
1. Use AutoUpgrade as it is today and set an estimated – fixed – start time.
2. Let AutoUpgrade build the refreshable clone and go into the REFRESHPDB stage.
3. Kill AutoUpgrade
4. Manually refresh the PDB at regular intervals using the ALTER PLUGGABLE DATABASE command (see the sketch below)
5. Restart AutoUpgrade (using the same command line). It will find the config file and see that “start time” is overdue and start the rest of the process.
If you kill AutoUpgrade – it can’t complete the migration. However, the target CDB can still refresh the PDB. When you restart AutoUpgrade, it will pick up from where it left off. It will see that the start time has already passed, so it will move on with the final refresh and the rest of the job.
I haven’t tested this personally, so please do so yourself. But it should work.
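For step 4, a minimal sketch of a manual refresh, assuming the refreshable clone is named PDB1 (the PDB must be closed while it is being refreshed):
-- Run in the root container of the target CDB
alter pluggable database PDB1 close immediate;
alter pluggable database PDB1 refresh;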
Regards,
Daniel
Hello,
Using the exact same config file, I cannot run ANALYZE because of a missing required parameter:
AutoUpgrade 24.2.240411 launched with default internal options
Processing config file …
Unplug-Relocate requires the target_pdb_copy_option parameter to be specified in the configuration file.
upg1.sid=NONCDB
upg1.source_home=/PATH_TO_ORACLE_HOME/19.24.0.0.240716
upg1.target_home=/PATH_TO_ORACLE_HOME/23.5.0.24.07
upg1.target_cdb=MYCDB
upg1.target_pdb_name=MY_NEW_PDB
upg1.source_dblink.NONCDB=CLONEPDB 60
upg1.start_time=+10m
And if I add the missing parameter, I have another error… (But no error on Analyze)
PDB’s version does not match CDB’s version: PDB’s version 19.0.0.0.0. CDB’s version 23.0.0.0.0.
SR Opened for this issue.
Thank you !
Hi Baptiste,
Can you try to add the following line:
upg1.target_pdb_name.NONCDB=MY_NEW_PDB
And remove this line:
upg1.target_pdb_name=MY_NEW_PDB
Further, you are not using the latest version of AutoUpgrade. Please get the latest one from My Oracle Support.
Regards,
Daniel
Thank you Daniel.
Unfortunately, same problem with the latest one (24.6.240905), and same problem when I change:
upg1.target_pdb_name.NONCDB=MY_NEW_PDB
Reading the note about AutoUpgrade (AutoUpgrade Tool (Doc ID 2485457.1)), the “Target Versions Supported” section seems to say it is not yet supported for Oracle 23ai.
In the meantime, this is the only way to upgrade to 23ai today… So I don’t get it :)
Sorry to hear that. You need to file a service request and Oracle Support will look at it.
The MOS note was updated recently and now reflects that AutoUpgrade is supported for upgrades to 23ai.
Regards,
Daniel
Hello Daniel, just to let you know that I found the solution for my problem.
As I am using ASM, I needed to add this parameter with the value “none”:
upg1.target_pdb_copy_option.NONCDB=file_name_convert=none
Thank you for your help
Thanks for the update. I’m glad you can move on and start exploring Oracle Database 23ai. Enjoy.
Daniel
Hi Daniel! We migrated one of our rather active DBs from non-CDB to multitenant with the refreshable PDB functionality. It looked just like what we needed, but there were some tricky bits. While it generally worked, we struggled with the overall performance during peak times on the source. The refresh job on the target (at least a year ago on 19c) is designed to work on the redo apply with a single process, and that was at times not enough. The parallel option used for creating the refreshable clone is only for the initial data file copy. The refreshes after that are bound to a single CPU core which, depending on the amount of redo, is at times permanently close to 100% usage.
When we had some larger batch jobs, the amount of redo was too much for that single process on the target to keep up, so it fell way behind the source. Each refresh after the batch started took hours, and we carefully had to find a timeslot on the weekend with the least possible redo generation to do the switch to the new multitenant DB. Every night during the week we had phases where the target was 4-6 hours behind the source, and only during the morning did it finally catch up.
While it is a super useful feature, it could be even better. An RMAN-like section size for the initial copy of large bigfile tablespaces would be useful, as would using the parallel option for the redo apply as well.
Hi Daniel,
Thanks for sharing the information and the many details that you uncovered.
I agree with your findings and assumptions. For a very active database, this feature might not be fast enough. The redo apply process is a single thread and there’s no way to use more processes.
I’m glad that you made it work, but as you say, there’s room for improvement.
Regards,
Daniel
I tried dbupgrade -l /home/oracle/logs -c “PDB Name” after clone to upgrade to 23ai. It works.
Hi,
If you want to upgrade to Oracle Database 23ai, you can use this article:
Please note that using “dbupgrade” directly is no longer supported. You must use AutoUpgrade.
Regards,
Daniel
Daniel,
Thank you. Today I successfully converted a non-CDB (19c) database to a PDB (23ai) between two servers using the AutoUpgrade tool. Here my source is non-ASM and the target is ASM. Thank you for your help.
That’s good to hear.
Regards,
Daniel
Hi Daniel,
During fixups, every time I am getting this warning. I assume we can safely ignore it, or is any action required on the config file? My config file looks good, and my fixups completed successfully.
[oracle@src ~]$ java -jar /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/autoupgrade.jar -config pdb19c.cfg -mode fixups
AutoUpgrade 24.8.241119 launched with default internal options
Processing config file …
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 PDB(s) will be processed
Type ‘help’ to list console commands
upg> WARNING: Target home entry is not available. This could be because is not specified in the configuration file or the specified path does not exist, this may lead AutoUpgrade to not be able to run the fixups for certain checks which need the target Oracle home presence.
Thanks in advance.
Regards,
Pad
Hi Paddy,
Yes, you can safely disregard that message.
Regards,
Daniel
Hi Daniel ,
since cross-platform (same endian) is supported, is it possible to use a refreshable PDB clone for migration of a non-CDB 19c RAC/ASM from Solaris SPARC to an IBM Power 19c RAC/ASM PDB?
Thanks in advance.
Regards,
Thanasis
Hi Thanasis,
Cross-platform is supported from AutoUpgrade perspective, however, there is also the question of redo compatibility between the two platforms. I’ve never tried that combination. I might be able to ask around whether anyone has had experiences with that combination. If you’d like that, please send me an e-mail (daniel.overby.hansen (a) oracle.com). Include some details about your project and I’ll see what I can find.
Regards,
Daniel