When A Refreshable Clone Takes Over The Service

Following my advice, a customer migrated a database to multitenant using refreshable clone PDB. The CDB was on the same host as the non-CDB.

When we prepared the migration and created the refreshable clone, users could no longer connect to the source database.

The users were getting ORA-01109: database not open when connecting to the source database.

But this was before the final refresh and before the migration was supposed to happen.

Why did the refreshable clone PDB interfere with operations in the source database?

The Details

The customer has a non-CDB called SALES, which they wanted to migrate into CDB1. They wanted the PDB to keep the original name, SALES.

Any database registers a default service at the listener. The name of the default service is the same as the name of the database.

In this case, the non-CDB registered the sales service:

$ lsnrctl status

Service "sales" has 1 instance(s).
  Instance "SALES", status READY, has 1 handler(s) for this service...

The customer used AutoUpgrade for the migration. When preparing for the migration, they started in deploy mode, and AutoUpgrade created the refreshable clone.

Bear in mind that these steps are preparations only. The switch to the PDB should happen at a later time.

Look what happens to the listener:

$ lsnrctl status

Service "sales" has 2 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
  Instance "SALES", status READY, has 1 handler(s) for this service...

The CDB also registers a sales service. It does so because of the refreshable clone PDB with the same name.

The users were connecting to the default service, sales. The listener handed off the connections to the CDB, not the non-CDB.

Since the PDB was still a refreshable clone PDB, it was not open, and users received ORA-01109: database not open.
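
A quick check in the CDB confirms the state of the clone. A refreshable clone PDB is normally mounted (or at most open read-only), which is why the connections handed off to it fail:

select name, open_mode from v$pdbs where name = 'SALES';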

Besides that, the refreshing process didn’t work either. The refresh of the PDB happens over a database link to the source database. Guess what happened?

2025-05-09T07:25:58.854934+00:00
Errors in file /u01/app/oracle/diag/rdbms/cdb19/CDB19/trace/CDB19_ora_61633.trc:
ORA-17627: ORA-01109: database not open
ORA-17629: Cannot connect to the remote database server
ORA-17627 signalled during: ALTER PLUGGABLE DATABASE FTEX REFRESH...

Yes, it couldn’t connect to the source database either. It ended up trying to connect to itself.

Optimal Solution

The real problem is the connections to the default service, sales, which is the service with the same name as the database.

This service is not meant for general use. You should create your own service and have your application connect through that.

Why is using default services a bad idea?

  • You can’t customize the default service.
  • The default service is for administrative use only.
  • You easily end up with collisions like this one. This can also happen with two PDBs in different CDBs on the same host.
  • If you rename the database, you also rename the default service and have to update all connection strings.

Why are custom services a good idea?

  • Custom services allow you to set many attributes. While this might not be important for a single-instance database, it is essential for Data Guard and RAC.
  • When you clone a database, a custom service doesn’t follow with it. You have to create the services in the clone when and if it is appropriate.

You can create custom services using DBMS_SERVICE or srvctl. You can find more about that in a previous blog post.
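
As a minimal sketch, a database-managed service could look like this (the service name SALES_APP is just an example; use srvctl instead if Grid Infrastructure manages the database):

begin
   dbms_service.create_service(
      service_name => 'SALES_APP',
      network_name => 'SALES_APP');
   dbms_service.start_service('SALES_APP');
end;
/

Your application then connects to SALES_APP in its connect string instead of the default sales service.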

Other Solutions

Other feasible solutions exist, but none of them address the real issue, which I believe is the use of the default service.

  • Rename the PDB so it creates a default service with a different name. After the migration, you can rename the PDB to the desired name.
  • Create a static listener entry that forces the listener to route connections to the non-CDB. However, static listener entries are really not nice, and you should use dynamic registration whenever possible.
  • Create a second listener for the CDB. That’s just cumbersome.

Recommendation

  • Keep using refreshable clone PDB for migrations. It’s a great way to migrate, patch, or upgrade databases.

  • Always create your own custom service. Don’t use the default service.

Using Refreshable Clone PDBs from a Standby Database

The other day, I found myself praising refreshable clone PDBs to a customer (which I often do because it’s a killer feature). They liked the feature too but asked:

> We are concerned about the impact on the source database. When AutoUpgrade connects to the source database and clones the database, can we offload the work to a standby database?

Refreshable clone PDBs can eat up your resources if you don’t constrain the target CDB. So, let’s see what we can do.

Mounted Standby Database

This won’t work, because you must be able to connect to the database via a regular database link. Further, AutoUpgrade and the cloning process must be able to execute queries in the source database, which is not possible on a mounted database.

Open Standby Database / Active Data Guard

What if you stop redo apply and open the standby database? Or if you have Active Data Guard?

In this case, the database would be open in read-only mode, and those queries would work. However, the refreshable clone PDB feature was developed for, and requires, a read-write source database, so this won't work either – not even if you enable automatic redirection of DML operations (ADG_REDIRECT_DML).

Even if it did work, we wouldn't recommend it. We recommend that you run analyze and fixups mode on the source database, which isn't possible on a read-only database. You could run analyze and fixups on the primary database instead, but if you're worried about affecting your primary and want to offload work to the standby, running those commands on the primary is probably not an option either.

Snapshot Standby Database

What about a snapshot standby? That’s a read-write database. Let’s give it a try.

  1. Convert the source standby to a snapshot standby:
    DGMGRL> convert database '...' to snapshot standby;
    
    • The standby must remain a snapshot standby for the entire duration of the job. If you need to switch over or fail over to the standby, you must restart the entire operation.
  2. Ensure the PDB is open on the source standby.
    alter pluggable database ... open;
    
    • Otherwise, you will run into ORA-03150 when querying the source database over the database link.
  3. In the source standby, create the user used by the database link and grant appropriate permissions:
    create user dblinkuser identified by ...;
    grant create session, create pluggable database, select_catalog_role to dblinkuser;
    grant read on sys.enc$ to dblinkuser;
    
  4. In the target CDB, create a database link that points to the PDB in source standby:
    create database link clonepdb
    connect to dblinkuser identified by ...
    using '';
    
  5. Create an AutoUpgrade config file:
    global.global_log_dir=/home/oracle/autoupgrade/log
    global.keystore=/home/oracle/autoupgrade/keystore
    upg1.source_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
    upg1.target_home=/u01/app/oracle/product/23.0.0/dbhome_1
    upg1.sid=CDB19
    upg1.target_cdb=CDB23
    upg1.pdbs=PDB1
    upg1.source_dblink.PDB1=CLONEPDB 300
    upg1.target_pdb_copy_option.PDB1=file_name_convert=none
    upg1.start_time=+12h
    
  6. Start AutoUpgrade in deploy mode:
    java -jar autoupgrade.jar ... -mode deploy
    
  7. Convert the source standby back to a physical standby:
    DGMGRL> convert database '...' to physical standby;
    

Is It Safe?

Using a standby database for anything other than your DR strategy is sometimes perceived as risky. But it is not, as I explain in this blog post (section What Happens If I Need to Switch Over or Fail Over?).

Happy upgrading!

Hey, Let Me Kill Your Network!

This short story is about the awesomeness of AutoUpgrade and refreshable clone PDBs.

Colleagues of mine were testing upgrades to Oracle Database 23ai using refreshable clone PDBs. They wanted to see how fast AutoUpgrade would clone the PDB and how that affected the source system.

The Systems

The source and target systems were identical:

  • Exadata X10M
  • 2-node RAC
  • 190 CPU/node
  • 25Gbps network/node

The database:

  • 1 TB in size
  • All data files on ASM

The Results

The source database is Oracle Database 19c. They configured AutoUpgrade to upgrade to Oracle Database 23ai using refreshable clone PDBs. However, this test measured only the initial copy of the data files – the CLONEDB stage in AutoUpgrade.

Parallel      Time     Throughput   Source CPU %
Default       269s     3,6 GB/s     3%
Parallel 4    2060s    0,47 GB/s    1%
Parallel 8    850s     1,14 GB/s    1%
Parallel 16   591s     1,65 GB/s    2%

A few observations:

  • Cloning a 1 TB database in just 5 minutes.
  • Very little effect on CPU + I/O on source, entirely network-bound.
  • The throughput could scale almost up to the limit of the network.
  • By the way, this corresponds with reports we’ve received from other customers.

Learnings

  • The initial cloning of the database is very fast and efficient.
  • You should be prepared for the load on the source system. Especially since the network is a shared resource, it might affect other databases on the source system, too.
  • The target CDB determines the default parallel degree based on its own CPU_COUNT. If the target system is way more powerful than the source, this situation may worsen.
  • Use the AutoUpgrade config file entry parallel_pdb_creation_clause to select a specific parallel degree (see the example after this list). Since the initial copy happens before the downtime, you might want to set it low enough to prevent overloading the source system.
  • Be careful. Don’t kill your network!
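
For example, to cap the initial copy at a modest parallel degree (4 here is only an illustration; pick a value based on tests like the ones above), add this to the config file:

upg1.parallel_pdb_creation_clause=4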

Happy upgrading!

AutoUpgrade New Features: Control Start Time When Using Refreshable Clone PDBs

When you migrate or upgrade with refreshable clone PDBs, you sometimes want to decide when the final refresh happens. Perhaps you must finish certain activities in the source database before moving on.

I’ve discussed this in a previous post, but now there’s a better way.

The Final Refresh Dilemma

In AutoUpgrade, the final refresh happens at the time specified by the config file parameter start_time. This is the cut-over time after which no further changes to the source database are replicated to the target database.

Overview of the phases when using refreshable clone PDBs

You specify start_time in the config file, and then you start the job. Typically, you start it a long time before start_time to allow the creation of the new PDB.

So, you specify start_time in the config file as the time when you believe the final refresh should happen. But things might change in your maintenance window. Perhaps it takes a little longer to shut down your application, or there's a very important batch job that must finish. Or perhaps you can start even earlier.

In that case, a fixed start time is not very flexible.

The Solution

You can use the proceed command in the AutoUpgrade console to adjust the start time, i.e., the final refresh.

  1. Get the latest version of AutoUpgrade:

    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Start the job in deploy mode as you normally would:

    java -jar autoupgrade.jar ... -mode deploy
    
    • AutoUpgrade now starts the CLONEPDB stage and begins to copy the database.
  3. Wait until the job reaches the REFRESHPDB stage:

    +----+-------+----------+---------+-------+----------+-------+--------------------+
    |Job#|DB_NAME|     STAGE|OPERATION| STATUS|START_TIME|UPDATED|             MESSAGE|
    +----+-------+----------+---------+-------+----------+-------+--------------------+
    | 100|  CDB19|REFRESHPDB|EXECUTING|RUNNING|  14:10:29| 4s ago|Starts in 54 minutes|
    +----+-------+----------+---------+-------+----------+-------+--------------------+
    Total jobs 1
    
    • In this stage, AutoUpgrade is waiting for start_time to continue the migration. It refreshes the PDB with redo from the source at the specified refresh interval.
    • You must start well before the maintenance window, so AutoUpgrade has enough time to copy the database.
  4. You can now change the start time. If you want to perform the final refresh and continue immediately, use the proceed command:

    proceed -job 100
    

    Or, you can change the start time:

    proceed -job 100 -newStartTime 29/03/2025 02:00:00
    

    Or, you can change the start time to a relative value, for example, 1 hour 30 minutes from now:

    proceed -job 100 -newStartTime +1h30m
    
  5. After the final refresh, AutoUpgrade disconnects the refreshable clone PDB, turns it into a regular PDB, and moves on with the job.

Wrapping Up

AutoUpgrade offers complete control over the process. You define a start time upfront, but as things change, you can adjust it in flight.

Refreshable clone PDBs are a fantastic method for non-CDB to PDB migrations and for upgrades of individual PDBs.

There are a few quirks to be aware of, and if you are using Data Guard, bear in mind that you can only plug in with deferred recovery. Other than that – it's just to say…

Happy migrating, happy upgrading!

Recreate Database Services After Moving An Oracle Database

Oracle recommends that you connect to the database via custom services. In your connect string, don’t connect:

  • Directly to the SID
  • Or to the database’s default service (the service with the same name as the database).

When you move a database around, in some situations, the database does not retain these services, for instance, when you:

  • Migrate a non-CDB to PDB using refreshable clone PDB
  • Upgrade a PDB using refreshable clone PDB
  • Move a PDB to a different CDB using refreshable clone PDB
  • Migrate a database using Full Transportable Export/Import or transportable tablespaces

The services are important because your application and clients connect to the database through them. Also, a service might define important properties for things like Application Continuity or set a default drain timeout.

Here’s how to recreate such services.

Database Managed Services

A database-managed service is one that you create directly in the database using dbms_service:

begin
   dbms_service.create_service(
      service_name=>'SALESGOLD',
      network_name=>'SALESGOLD');
   dbms_service.start_service('SALESGOLD');   
end;
/

After the migration, you must manually recreate the service in the target database.

dbms_metadata does not support services. So, you must query v$services in the source database to find the service's definition. Then, construct a call to dbms_service.create_service and dbms_service.start_service.
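
A minimal sketch of the kind of query you could start from in the source database (it filters out the internal services; also ignore the default service with the database name):

select name, network_name
from   v$services
where  name not in ('SYS$BACKGROUND', 'SYS$USERS');

Use the output to build matching dbms_service.create_service and dbms_service.start_service calls in the target PDB.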

Clusterware Managed Services

I recommend defining services in Grid Infrastructure if you are using Oracle RAC or Oracle Restart to manage your single-instance database. Luckily, Grid Infrastructure supports exporting and importing service definitions.

  • You export all the services defined in the source database:

    srvctl config service \
       -db $ORACLE_UNQNAME \
       -exportfile my_services.json \
       -S 2
    
  • You edit the JSON file.

    1. Remove the default services. Keep only your custom services.
    2. Remove the dbunique_name attribute for all services.
    3. If you are renaming the PDB, you must update the pluggable_database attribute.
    4. Update the res_name attribute so it matches the resource name of the target database. Probably you just need to exchange the db_unique_name part of the resource name. You can find the resource name by executing crsctl stat resource -t as the grid user.
  • You can now import the services into the target database:

    srvctl add service \
       -db $ORACLE_UNQNAME \
       -importfile my_services.json
    
  • Finally, you start the service(s):

    export ORACLE_SERVICE_NAME=SALESGOLD
    srvctl start service \
       -db $ORACLE_UNQNAME \
       -service $ORACLE_SERVICE_NAME
    

Additional Information

  • The export/import features work from Oracle Database 19c, Release Update 19 and beyond.
  • You can also export/import the definition of:
    • Database: srvctl config database -db $ORACLE_UNQNAME -S 2 -exportfile my_db.json
    • PDB: srvctl config pdb -db $ORACLE_UNQNAME -S 2 -exportfile my_pdb.json
    • ACFS filesystem: srvctl config filesystem -S 2 -exportfile /tmp/my_filesystem.json
  • At the time of writing, this functionality hasn't made it into the documentation yet. Consider yourself lucky knowing this little hidden gem.

Final Words

Remember to recreate your custom services after a migration. Your application needs the service to connect properly.

AutoUpgrade New Features: Drop Database Link When Using Refreshable Clones

With the latest version, 24.8, AutoUpgrade can drop the database link after using refreshable clones.

Details

  • Certain jobs in AutoUpgrade require a database link to the source database.
  • Whenever you specify a database link using source_dblink, you can optionally instruct AutoUpgrade to drop it.
  • The default value is no, meaning AutoUpgrade leaves the database link in place.

How To

  • Get the latest version of AutoUpgrade:

    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  • Instruct AutoUpgrade to drop the database link after completing the job:

    upg1.drop_dblink=yes
    

Happy Upgrading

I’m a huge fan of using refreshable clones for upgrades and non-CDB to PDB migrations.

Granted, this is not the most ground-breaking enhancement we’ve introduced. But it’s yet another thing that makes your life a little easier.

What do you think could make AutoUpgrade even easier to use? Leave a comment and let us know.

Control The Final Refresh When Using Refreshable Clone PDBs in AutoUpgrade

When you migrate or upgrade with refreshable clone PDBs, you sometimes want to decide when the final refresh happens. Perhaps you must finish certain activities in the source database before moving on.

Here’s how to do that in AutoUpgrade.

The Final Refresh Dilemma

In AutoUpgrade, the final refresh happens at the time specified by the config file parameter start_time.

Overview of the phases when using refreshable clone PDBs

You specify start_time in the config file, but once you start the job, you cannot change it. Remember that you normally start AutoUpgrade a long time before start_time to allow the creation of the new PDB.

In some situations, you want more control. You might want to finish some work on the source database before AutoUpgrade starts the final refresh. Perhaps you need to kick users off or coordinate activities with other teams.

In that case, a fixed start time is not very flexible.

The Solution

Update: Check out the new proceed command.

Imagine my downtime window starts on Saturday, 30 November, at 02:00.

At that time, I need to ask the application team to shut down the applications connecting to the database, run certain pre-migration tasks, and finally kill sessions if needed. So, I don't want AutoUpgrade to start at 02:00 – I want to decide at which point after 02:00 AutoUpgrade should start.

Here's my approach:

  • I create a config file and set the start_time parameter to the start of my downtime window.
    upg1.start_time=30/11/2024 02:00:00
    
  • I start AutoUpgrade in deploy mode before my downtime starts:
    java -jar autoupgrade.jar -config ... -mode deploy
    
    • I must start well before the downtime window so AutoUpgrade has enough time to copy the database.
    • Imagine my tests show it takes around four hours to copy the database. I decide to start on Friday, 29 November, 16:00, so the copy should end around 20:00 – well before my downtime window.
  • AutoUpgrade now starts the CLONEPDB phase:
    +----+-------+--------+---------+-------+----------+-------+---------------------------+
    |Job#|DB_NAME|   STAGE|OPERATION| STATUS|START_TIME|UPDATED|                    MESSAGE|
    +----+-------+--------+---------+-------+----------+-------+---------------------------+
    | 100|   TEAL|CLONEPDB|EXECUTING|RUNNING|  02:00:00| 4s ago|Creating pluggable database|
    +----+-------+--------+---------+-------+----------+-------+---------------------------+
    
    • Note the START_TIME value. It is the time when the final refresh happens.
  • I wait for AutoUpgrade to create the PDB and enter the REFRESHPDB phase:
    +----+-------+----------+---------+-------+----------+-------+----------------------+
    |Job#|DB_NAME|     STAGE|OPERATION| STATUS|START_TIME|UPDATED|               MESSAGE|
    +----+-------+----------+---------+-------+----------+-------+----------------------+
    | 100|   TEAL|REFRESHPDB|EXECUTING|RUNNING|  02:00:00| 2s ago|PDB TEAL was refreshed|
    +----+-------+----------+---------+-------+----------+-------+----------------------+
    
  • Then I stop the job:
    upg> stop -job 100
    
    • It's fine to exit AutoUpgrade after stopping the job. As soon as I restart AutoUpgrade, it picks up from where it left off and continues with the job.
  • When I stop the job, there is no periodic refresh. I should refresh the PDB in the target CDB manually at regular intervals:
    SQL> alter pluggable database teal refresh;
    
    • If I don’t perform any periodic refresh, the redo will accumulate, and the final refresh will take longer. Keep the final refresh shorter by refreshing more often (see the scheduler sketch after this list).
  • After the start of my downtime window (the start_time parameter), when I’m done on the source database and want to proceed with the final refresh, I resume the job in AutoUpgrade.
    upg> resume -job 100
    
  • AutoUpgrade now realizes it is past the defined start_time and immediately moves on with the final refresh and the rest of the job.
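
If you'd rather not run the manual refresh yourself while the job is stopped, a scheduler job in the target CDB root can do it for you. Here's a minimal sketch, assuming the PDB is called TEAL and a 15-minute interval (job name and interval are just examples); drop the job again before you resume the AutoUpgrade job:

    begin
       dbms_scheduler.create_job(
          job_name        => 'REFRESH_TEAL_PDB',   -- example job name
          job_type        => 'PLSQL_BLOCK',
          job_action      => 'begin execute immediate ''alter pluggable database teal refresh''; end;',
          repeat_interval => 'FREQ=MINUTELY;INTERVAL=15',
          enabled         => true);
    end;
    /

    -- before resuming the AutoUpgrade job:
    exec dbms_scheduler.drop_job('REFRESH_TEAL_PDB')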

Wrapping Up

Ideally, AutoUpgrade should offer better control over the process. We have a task on our backlog to come up with a better solution.

Update: Use the proceed command in AutoUpgrade to control the start time

However, refreshable clone PDBs are still a fantastic method for non-CDB to PDB migrations and for upgrades of individual PDBs.

There are a few quirks to be aware of, and if you are using Data Guard, bear in mind that you can only plug in with deferred recovery. Other than that – it’s just to say…

Happy Migrating!

A Few Details about Using Refreshable Clone PDB for Non-CDB to PDB Migration

Our team has been advocating the use of refreshable clone PDB for non-CDB to PDB migrations using AutoUpgrade. It is a great feature and our entire team loves it – so do many of the customers we work with.

However, in a recent non-CDB to PDB migration, we encountered some issues with refreshable clone PDB and AutoUpgrade.

Can My Target Container Database Be a RAC Database?

Yes, this works perfectly fine.

Be aware that the CREATE PLUGGABLE DATABASE statement scales out across all nodes in your cluster. By default, the database also uses parallel processes, so this can potentially put quite a load on the source non-CDB. Consider restricting the use of parallel processes using the AutoUpgrade config file parameter:

upg1.parallel_pdb_creation_clause=4

Since the creation scales out on all nodes, all nodes must be able to resolve the connect identifier to the source non-CDB. If you use an alias from tnsnames.ora, be sure to add that on all nodes. Failure to do so will lead to an error during the CREATE PLUGGABLE DATABASE command:

ERROR at line 1:
ORA-65169: error encountered while attempting to copy file
+DATAC1/SRCDB/DATAFILE/system.262.1178083869
ORA-17627: ORA-12154: TNS:could not resolve the connect identifier specified
ORA-17629: Cannot connect to the remote database server
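
If you go with a tnsnames.ora alias, an entry along these lines (host, port, and service name are placeholders) must exist on every node:

SRCDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = srcdb_service))
  )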

What Happens If the Source Database Extends a Data File?

If the source database extends a data file – either through AUTOEXTEND ON NEXT or manually by a user – the target database extends the matching data file as well. Here is an extract from the target alert log when it extends a data file:

2024-08-27T07:01:26.671975+00:00
PDB1(4):Media Recovery Log +RECOC1/SRCDB/partial_archivelog/2024_08_27/thread_2_seq_4.276.1178089277
2024-08-27T07:01:32.773191+00:00
PDB1(4):Resize operation completed for file# 26, fname +DATA/TGTCDB_HBZ_FRA/20A568D1FD5DB0A6E0633D01000AC89B/DATAFILE/srctbs02.290.1178089287, old size 10240K, new size 1058816K

It works with smallfile and bigfile tablespaces.

What Happens If I Create a Tablespace on the Source Database?

The target database attempts to create the same tablespace.

For this to work, one of the following must be true:

  • The target CDB uses Oracle Managed Files (db_create_file_dest is set), so it can generate a new file name on its own.
  • The target CDB can translate the source file name into a valid location, for instance via the PDB_FILE_NAME_CONVERT parameter.

If neither of the above is true, you’ll receive an error during ALTER PLUGGABLE DATABASE ... REFRESH:

ORA-00283: recovery session canceled due to errors
ORA-01274: cannot add data file that was originally created as
'+DATAC1/SRCDB/DATAFILE/srctbs04.282.1178091655'
You can use PDB_FILE_NAME_CONVERT instead.

It works with smallfile and bigfile tablespaces.

What Happens If I Add a Data File to an Existing Tablespace?

The target database attempts to add a matching data file.

The target database must be able to translate the data file location according to the section above.

2024-08-27T06:51:19.294612+00:00
PDB1(4):Media Recovery Log +RECOC1/SRCDB/partial_archivelog/2024_08_27/thread_2_seq_4.276.1178088679
2024-08-27T06:51:20.268208+00:00
PDB1(4):Successfully added datafile 25 to media recovery
PDB1(4):Datafile #25: '+DATA/TGTCDB_HBZ_FRA/20A568D1FD5DB0A6E0633D01000AC89B/DATAFILE/srctbs01.289.1178088681'

What Happens If I Set a Tablespace Read-Only?

The refreshable clone PDB does not support this. Nor does it support going the other way: setting a tablespace read-write.

If you do so, the database reports an error:

alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB

From the alert log:

2024-08-28T05:23:02.893946+00:00
PDB2(6):Error! unsupported source PDB operation: 21
2024-08-28T05:23:02.994035+00:00
PDB2(6):Media Recovery failed with error 65339

Operation 21 is setting a tablespace read-only. If you set a tablespace read-write, the database reports operation 20 instead.

PDB2(7):Error! unsupported source PDB operation: 20

You will not be able to refresh the PDB anymore. You must re-create the refreshable clone PDB.

What Happens If I Restart the Source Database?

Refreshable clone PDB does not support restarting the source database.

When you restart the source database, the source database places a special marker in the redo stream. This even happens for a clean shutdown (SHUTDOWN NORMAL). The target CDB does not understand how to recover beyond this marker.

alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB

From the alert log:

2024-08-28T05:27:00.451985+00:00
PDB2(4):Error! unsupported source PDB operation: 3
2024-08-28T05:27:00.710236+00:00
PDB2(4):Media Recovery failed with error 65339

Operation 3 is the source database restart.

You will not be able to refresh the PDB anymore. You must re-create the refreshable clone PDB.

How Do I Drain My Source Database Before Migration?

Right before the migration, when you cut the connection from the source non-CDB to the target PDB, it could be useful to restart the database. But that’s not possible.

I suggest that you:

  • Ensure that the target CDB connects to the source non-CDB using a dedicated service. This applies to the database link that you establish between the two databases.
  • Stop all other services and specify a drain timeout (see the sketch after this list).
  • Shut down the application that connects to the source non-CDB.
  • Kill sessions manually.
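
If Clusterware manages your services, stopping one with a drain timeout could look like this (service name and timeout are examples only):

srvctl stop service -db $ORACLE_UNQNAME -service SALES_APP -drain_timeout 300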

Remember that the target database connects to the source database via a database link, so stopping the database listener is not an option. Nor is enabling RESTRICTED SESSION.

Update: Armando managed to perform the migration using restricted session. Check his comment for details.

What Happens If I Restart the Target Container Database?

You can safely restart the target CDB while you have a refreshable clone PDB. This works fine.

What About NOLOGGING Operations?

You can’t perform NOLOGGING operations on the source database.

Since refreshable clone PDB relies on redo, a NOLOGGING operation on the source will prevent that data from reaching the target. When you try to query the NOLOGGING table on the target database after the migration, you will receive an error:

SQL> select count(*) from t1
       *
ERROR at line 1:
ORA-28304: Oracle encrypted block is corrupt (file # 186, block # 131)
ORA-01110: data file 186:
'+DATA/TGTCDB_HBZ_FRA/20CF181D4A925E06E0633D01000ACB50/DATAFILE/srctbs01.297.117
8266961'
ORA-26040: Data block was loaded using the NOLOGGING option

Thanks to Marcelo for leaving a comment. He suggests that you set the source non-CDB in FORCE LOGGING mode. This is a good idea to avoid this potential nightmare:

alter database force logging;
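
You can verify the setting afterwards:

select force_logging from v$database;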

You can read more about NOLOGGING operations in The Gains and Pains of Nologging Operations (Doc ID 290161.1).

What About Hot Backups?

You can’t perform hot backup operations on the source database.

If you do so, you’ll run into the following error:

2025-11-21T14:31:06.845676+00:00
SALES(4):Error! unsupported source PDB operation: 1
2025-11-21T14:31:07.845923+00:00
SALES(4):Media Recovery failed with error 65339

Please note that I’m not referring to RMAN online backups. I’m talking about the old-school ALTER DATABASE BEGIN BACKUP and ALTER DATABASE END BACKUP commands.
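
You can check whether any data file on the source is currently in hot-backup mode:

select file#, status from v$backup where status = 'ACTIVE';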

Any Restrictions on Data Types or Object Types?

No. The refreshable clone is a physical copy of the database, so there are no restrictions on data types or object types.

Services

You must recreate your services after the migration. Neither database managed services nor Clusterware managed services survive the migration.

Summary

Despite these minor restrictions, migration from non-CDB to PDB using refreshable clone PDB and AutoUpgrade is still a very handy method. Knowing the restrictions upfront ensures that you can successfully migrate the database.

Happy migrating!

Upgrade Encrypted PDB in Cloud to Oracle AI Database 26ai

Here’s a cool way of upgrading your Oracle Database in OCI to Oracle AI Database 26ai.

This post was originally written for Oracle Database 23ai, but it applies to Oracle AI Database 26ai as well.

I will move my PDB to a new Base Database System using refreshable clone PDBs and AutoUpgrade.

The benefits of using this approach are:

  • Much shorter downtime than an in-place upgrade.
  • A brand-new Base Database System, which means the operating system and Oracle Grid Infrastructure are already on newer versions.

I’m using a Base Database Service as an example, but the target system could also be Exadata Database Service, Exadata Cloud@Customer, or, in fact, any other offering of Oracle Database.

My Environment

I have one PDB that I want to upgrade. It’s called SALES.

Source

  • Base Database System on 19.23.0
  • Name DBS19

Target

  • Base Database System on 23.4.0
  • Name DBS23

How To

Prepare AutoUpgrade

  • Get the latest version of AutoUpgrade:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
    Copy the new version of AutoUpgrade to your source and target system. You can put it into $ORACLE_HOME/rdbms/admin if you like, but it is not a requirement.
  • I create an AutoUpgrade config file, named sales.cfg, which I store on both servers:
    global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
    global.keystore=/u01/app/oracle/cfgtoollogs/keystore
    upg1.source_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
    upg1.target_home=/u01/app/oracle/product/23.0.0.0/dbhome_1
    upg1.sid=CDB19
    upg1.pdbs=SALES
    upg1.target_cdb=CDB23
    upg1.source_dblink.SALES=CLONEPDB 600
    upg1.target_pdb_copy_option.SALES=file_name_convert=none
    upg1.target_version=23
    upg1.start_time=12/05/2024 02:00:00
    
    • I must specify global.keystore to allow AutoUpgrade to create a keystore to work with my encrypted PDB.
    • source_home and target_home list the Oracle Home of the source and target CDB, respectively. It doesn’t matter that the two homes exist on one server only.
    • sid and target_cdb contain the SID of the source and target CDB, respectively. The parameters are case sensitive.
    • pdbs contains the name of the PDB I want to upgrade, sales. If needed, I could specify more PDBs. Don’t include the domain or use the service name.
    • source_dblink has the name of the database link (CLONEPDB) and the rate at which the target CDB brings over redo and rolls forward the copy (600 seconds or 10 minutes).
    • I want to use ASM and Oracle Managed Files, so I set target_pdb_copy_option accordingly.
    • Since my source and target CDB are not on the same host, AutoUpgrade can’t automatically determine the target version. I specify that manually using target_version.
    • start_time specifies when downtime starts. At this point, AutoUpgrade refreshes the PDB for the last time and then moves on with the upgrade.

Prepare Source PDB

  • I connect to the source PDB. I create a user (for a database link) and grant privileges:

    create user dblinkuser identified by ... ;
    grant create session, 
          create pluggable database, 
          select_catalog_role to dblinkuser;
    grant read on sys.enc$ to dblinkuser;
    
  • After the upgrade, I can drop the user.

Prepare Target CDB

  • I connect to the target CDB and create a database link pointing to my source PDB:

    create database link clonepdb
    connect to dblinkuser
    identified by dblinkuser
    using '';
    
  • Check that the database link works:

    select * from dual@clonepdb;
    

    If the query fails with ORA-02085, then you can use alter system set global_names=false;.

Analyze and Fix Source PDB

  • First, I analyze the source PDB for upgrade readiness. On the source system:
    java -jar autoupgrade.jar -config sales.cfg -mode analyze
    
  • The summary report lists the following precheck failures, which I safely ignore:
    • TDE_PASSWORDS_REQUIRED – I will fix that on the target system.
    • TARGET_CDB_AVAILABILITY – the target CDB is remote, and AutoUpgrade can’t analyze it.
  • Then, I execute the preupgrade fixups:
    java -jar autoupgrade.jar -config sales.cfg -mode fixups
    
    • This changes my source PDB, so I do it as close to my maintenance window as possible.
    • AutoUpgrade issues a warning that the target Oracle home is not present. You can safely disregard this.

Upgrade

  • Since my PDB is encrypted, I must add the source and target CDB keystore password to the AutoUpgrade keystore. I start the load password console on the target host:

    java -jar autoupgrade.jar -config sales.cfg -load_password
    
  • In the load password console, I add the keystore passwords of the source and target CDB:

    TDE> add CDB19
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    TDE> add CDB23
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    
  • I save the passwords and convert the AutoUpgrade keystore to an auto-login keystore:

    TDE> save
    Convert the keystore to auto-login [YES|NO] ? 
    
  • I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config sales.cfg -mode deploy
    
    • AutoUpgrade:
      • Copies the data files over the database link.
      • Rolls the copies of the data files forward with redo from the source.
      • At the time specified by start_time, issues a final refresh and disconnects the PDB from the source. You can change the start time using the proceed command in the AutoUpgrade console.
      • Upgrades the PDB.
    • You can monitor the upgrade by using the command lsj -a 30.
  • I have now upgraded my PDB to Oracle Database 23ai.

The Fine Print

Before Upgrade

  • Ensure that redo is kept in the Fast Recovery Area of the source database until it has been applied on the target. Either postpone your archive backups or change the archive log deletion policy so the archive logs remain on disk.
  • Familiarize yourself with the concept of the AutoUpgrade keystore.
  • The source PDB must be Oracle Database 19c or newer to upgrade directly to Oracle Database 23ai.

During Upgrade

  • Disconnect users from the source database. Right before the upgrade starts, AutoUpgrade executes a final refresh. The last redo from the source database is applied to ensure no data is lost. You must ensure that no users are connected to the source database after this time. Otherwise, that data will be lost.

  • AutoUpgrade starts the final refresh at the start time specified in the config file:

    upg1.start_time=25/09/2023 06:30:00
    
  • You must be careful about disconnecting users from the source database. Remember, AutoUpgrade connects to the source database over a database link as a regular user (not SYS). This means the listener must be available, and you can’t enable restricted session or similar means.

  • Check this blog post if you want to be in control over when the final refresh happens.

After Upgrade

  • Recreate the services that you use in your connect strings.
  • Update your connection string. The PDB is now on a different Base Database System.
  • Start a new full backup of the target database after the migration.
  • The OCI console will recognize and display the new PDB after a while. You don’t have to do anything other than wait for the automatic sync job.

Data Guard

When using refreshable clone PDBs you can’t reuse the data files on the standby database. It is similar to STANDBYS=NONE on the CREATE PLUGGABLE DATABASE statement. So, you plug in with deferred recovery on the standby database.

The easiest solution is to configure Data Guard after you have upgraded the database. However, it might not always be feasible. If your target CDB already has Data Guard, then you need to restore the data files to the standby database and enable recovery. Check Making Use Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1) for details.

Real Application Clusters (RAC)

AutoUpgrade detects if the target CDB is a RAC database. You don’t have to specify or do anything. AutoUpgrade handles everything for you.

Upgrade Base Database Cloud Service to Oracle Database 23c

Please see the updated post on Oracle Database 23ai.


Here’s a cool way of upgrading your database in OCI to Oracle Database 23c. I will move my PDB to a new Base Database System using refreshable clone PDBs and AutoUpgrade.

The benefits of using this approach are:

  • Shorter downtime than an in-place upgrade.
  • A brand-new Base Database System, which means the operating system and Oracle Grid Infrastructure are already on newer versions.

My Environment

I have one PDB that I want to upgrade. It’s called SALES.

Source

  • Base Database System on 19.20.0
  • Name DBS19

Target

  • Base Database System on 23.3.0
  • Name DBS23

How To

Prepare AutoUpgrade

  • I must use a version of AutoUpgrade that supports upgrades to Oracle Database 23c, and I must have AutoUpgrade on the source and target system.
    $ java -jar autoupgrade.jar -version
    
  • At the time of writing, the latest version of AutoUpgrade on My Oracle Support does not support 23 as a target release. Instead, I copy AutoUpgrade from the target Oracle Home (on DBS23) to the source Oracle Home (on DBS19). This version allows upgrades to Oracle Database 23c.
  • I create an AutoUpgrade config file, named sales.cfg, which I store on both servers:
    global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
    global.keystore=/u01/app/oracle/cfgtoollogs/keystore
    upg1.source_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
    upg1.target_home=/u01/app/oracle/product/23.0.0.0/dbhome_1
    upg1.sid=CDB19
    upg1.pdbs=SALES
    upg1.target_cdb=CDB23
    upg1.source_dblink.SALES=CLONEPDB 600
    upg1.target_pdb_copy_option.SALES=file_name_convert=none
    upg1.target_version=23
    upg1.start_time=05/11/2023 02:00:00
    
    • I must specify global.keystore to allow AutoUpgrade to create a keystore to work with my encrypted PDB.
    • source_home and target_home list the Oracle Home of the source and target CDB, respectively. It doesn’t matter that the two homes exist on one server only.
    • sid and target_cdb contain the SID of the source and target CDB, respectively.
    • pdbs contains the name of the PDB I want to upgrade, sales. If needed, I could specify more PDBs.
    • source_dblink has the name of the database link (CLONEPDB) and the rate at which the target CDB brings over redo and rolls forward the copy (600 seconds or 10 minutes).
    • I want to use ASM and Oracle Managed Files, so I set target_pdb_copy_option accordingly.
    • Since my source and target CDB are not on the same host, AutoUpgrade can’t automatically determine the target version. I specify that manually using target_version.
    • start_time specifies when downtime starts. At this point, AutoUpgrade refreshes the PDB for the last time and then moves on with the upgrade.

Prepare Source PDB

  • I connect to the source PDB. I create a user (for a database link) and grant privileges:

    create user dblinkuser identified by ... ;
    grant create session, 
          create pluggable database, 
          select_catalog_role to dblinkuser;
    grant read on sys.enc$ to dblinkuser;
    
  • After the upgrade, I can drop the user again.

Prepare Target CDB

  • I connect to the target CDB and create a database link pointing to my source PDB:
    create database link clonepdb
    connect to dblinkuser
    identified by dblinkuser
    using '<connection-string-to-source-pdb>';
    

Analyze and Fix Source PDB

  • First, I analyze the source PDB for upgrade readiness. On the source system:
    java -jar autoupgrade.jar -config sales.cfg -mode analyze
    
  • The summary report lists the following precheck failures, which I safely ignore:
    • TDE_PASSWORDS_REQUIRED – I will fix that on the target system.
    • TARGET_CDB_AVAILABILITY – the target CDB is remote, and AutoUpgrade can’t analyze it.
  • Then, I execute the preupgrade fixups:
    java -jar autoupgrade.jar -config sales.cfg -mode fixups
    
    • This changes my source PDB, so I do it as close to my maintenance window as possible.

Upgrade

  • Since my PDB is encrypted, I must add the source and target CDB keystore password to the AutoUpgrade keystore. I start the TDE console on the target host:

    java -jar autoupgrade.jar -config sales.cfg -load_password
    
  • In the TDE console, I add the keystore passwords of the source and target CDB:

    TDE> add CDB19
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    TDE> add CDB23
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    
  • I save the passwords and convert the AutoUpgrade keystore to an auto-login keystore:

    TDE> save
    Convert the keystore to auto-login [YES|NO] ? 
    
  • I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config sales.cfg -mode deploy
    
    • AutoUpgrade copies the data files over the database link.
    • Rolls the copies of the data files forward with redo from the source.
    • At one point, issues a final refresh and disconnects the PDB from the source.
    • Upgrades the PDB.
  • I have now upgraded my PDB to Oracle Database 23c.

The Fine Print

You should:

  • Check the words for caution in my previous blog post on AutoUpgrade and refreshable clone PDBs.
  • Start a new full backup of the target database after the migration.
  • Familiarize yourself with the concept of the AutoUpgrade keystore.

Also, notice:

  • The PDB is now on a different Base Database System. You need to update your connection string.
  • The source PDB must be Oracle Database 19c or newer to upgrade directly to Oracle Database 23c.
  • The OCI console will recognize and display the new PDB after a while. You don’t have to do anything other than wait for the automatic sync job.