Control The Final Refresh When Using Refreshable Clone PDBs in AutoUpgrade

When you migrate or upgrade with refreshable clone PDBs, you sometimes want to decide when the final refresh happens. Perhaps you must finish certain activities in the source database before moving on.

Here’s how to do that in AutoUpgrade.

The Final Refresh Dilemma

In AutoUpgrade, the final refresh happens at the time specified by the config file parameter start_time.

Overview of the phases when using refreshable clone PDBs

You specify start_time in the config file, but once you start the job, you cannot change it. Remember that you normally start AutoUpgrade a long time before start_time to allow the creation of the new PDB.

In some situations, you want more control. You might want to finish some work on the source database before AutoUpgrade starts the final refresh. Perhaps you need to kick users off or coordinate activities with other teams.

In that case, a fixed start time is not very flexible.

The Solution

Update: Check out the new proceed command.

Imagine my downtime window starts on Saturday, 30 November, at 02:00.

At that time, I need to ask the application team to shut down the applications, run certain pre-migration tasks, and finally kill sessions if needed. So, I don’t want AutoUpgrade to start the final refresh at exactly 02:00 – I want to decide at which point after 02:00 it starts.

Here’s my approach

  • I create a config file and set the start_time parameter to the start of my downtime window.
    upg1.start_time=30/11/2024 02:00:00
    
  • I start AutoUpgrade in deploy mode before my downtime starts:
    java -jar autoupgrade.jar -config ... -mode deploy
    
    • I must start well before the downtime window so AutoUpgrade has enough time to copy the database.
    • Imagine my tests show it takes around four hours to copy the database. I decide to start on Friday, 29 November, 16:00, so the copy should end around 20:00 – well before my downtime window.
  • AutoUpgrade now starts the CLONEPDB phase:
    +----+-------+--------+---------+-------+----------+-------+---------------------------+
    |Job#|DB_NAME|   STAGE|OPERATION| STATUS|START_TIME|UPDATED|                    MESSAGE|
    +----+-------+--------+---------+-------+----------+-------+---------------------------+
    | 100|   TEAL|CLONEPDB|EXECUTING|RUNNING|  02:00:00| 4s ago|Creating pluggable database|
    +----+-------+--------+---------+-------+----------+-------+---------------------------+
    
    • Note the START_TIME value. It is the time when the final refresh happens.
  • I wait for AutoUpgrade to create the PDB and enter the REFRESHPDB phase:
    +----+-------+----------+---------+-------+----------+-------+----------------------+
    |Job#|DB_NAME|     STAGE|OPERATION| STATUS|START_TIME|UPDATED|               MESSAGE|
    +----+-------+----------+---------+-------+----------+-------+----------------------+
    | 100|   TEAL|REFRESHPDB|EXECUTING|RUNNING|  02:00:00| 2s ago|PDB TEAL was refreshed|
    +----+-------+----------+---------+-------+----------+-------+----------------------+
    
  • Then I stop the job:
    upg> stop -job 100
    
    • If I exit AutoUpgrade after stopping the job, don’t worry. As soon as I restart AutoUpgrade, it picks up where it left off and continues with the job.
  • When I stop the job, there is no periodic refresh. I should refresh the PDB in the target CDB manually at regular intervals:
    SQL> alter pluggable database teal refresh;
    
    • If I don’t perform any periodic refresh, the redo will accumulate, and the final refresh will take longer. Keep the final refresh short by refreshing more often (see the sketch after this list).
  • After the start of my downtime window (the start_time parameter), when I’m done on the source database and want to proceed with the final refresh, I resume the job in AutoUpgrade.
    upg> resume -job 100
    
  • AutoUpgrade now realizes it is past the defined start_time and immediately moves on with the final refresh and the rest of the job.
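
If you don’t want to run the periodic refresh by hand while the job is stopped, a simple loop on the target CDB host can do it for you. This is a minimal sketch assuming OS authentication, the PDB name from this example, and a 10-minute interval – stop the loop before you resume the job:

while true; do
   echo "alter pluggable database teal refresh;" | sqlplus -s "/ as sysdba"
   sleep 600
done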

Wrapping Up

Ideally, AutoUpgrade should offer better control over the process. We have a task on our backlog to come up with a better solution.

Update: Use the proceed command in AutoUpgrade to control the start time

However, refreshable clone PDBs are still a fantastic method for non-CDB to PDB migrations and for upgrades of individual PDBs.

There are a few quirks to be aware of, and if you are using Data Guard, bear in mind that you can only plug in with deferred recovery. Other than that, there’s only one thing left to say…

Happy Migrating!

Further Reading

Easier Patching of Oracle Database on Windows

We just released a new version, 24.8, of AutoUpgrade that supports the new patching features on Windows.

Further, we also made it easier to download the latest version of AutoUpgrade:

wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar

How Does It Work?

Here’s a simple demo of the new functionality.

Also, you can get much more information about the new patching features in general in our webinar One-Button Patching – makes life easier for every Oracle DBA.

A Few Notes

Credential File

Currently, if you use a credential file to store the username/password of the Windows service, then AutoUpgrade deletes it afterwards. We consider that the most secure approach.

But that’s not suitable for automation, so in the next version of AutoUpgrade, a config file parameter will allow you to re-use the credential file repeatedly.

Missing Registry Entries

If you use the credential file – depending on your configuration – you might see that entries are missing in the Windows Registry for the database instance.

You can add those manually after patching by executing:

%NEW_ORACLE_HOME%\bin\oradim.exe -edit -sid %ORACLE_SID% -startmode auto

Oradim detects that registry keys are missing and adds them.

This will also be automated in a future version of AutoUpgrade. We’re also working on adding support for virtual users.

Happy Patching

AutoUpgrade patching makes it easier to patch your Oracle Database.

Give it a try on Windows, and let us know if something is missing.

Patching Oracle Database should be as easy as patching a smartphone.

How to Upgrade Oracle Database with Username and Password

Normally, you upgrade an Oracle Database with native operating system authentication. The upgrade tooling connects to the database as / as sysdba. However, in some organizations, this is not allowed for security reasons. Every connection must be with username and password.

How can you upgrade an Oracle Database using username and password?

Set-Up

First, I’ll disable connections using native operating system authentication. I do that in sqlnet.ora.

$ env | grep TNS
TNS_ADMIN=/u01/app/oracle/product/19/network/admin
$ cat $TNS_ADMIN/sqlnet.ora
sqlnet.authentication_services=(none)

Let me check that it is disabled:

$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 6 09:55:05 2024
Version 19.21.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

Upgrade Without Operating System Authentication

  1. I ensure that my AutoUpgrade is the latest version, or at least version 24.7.241021:
$ java -jar autoupgrade.jar -version
build.version 24.7.241021
build.date 2024/10/21 11:16:20 -0400
build.hash babf5a631
build.hash_date 2024/10/18 18:36:27 -0400
build.supported_target_versions 12.2,18,19,21,23
build.type production
build.label (HEAD, tag: v24.7, origin/stable_devel, stable_devel)
build.MOS_NOTE 2485457.1
build.MOS_LINK https://support.oracle.com/epmos/faces/DocumentDisplay?id=2485457.1
  2. This is my AutoUpgrade config file. There’s nothing special in it, except for global.keystore, which tells AutoUpgrade where to store its keystore. AutoUpgrade uses the keystore to keep your password safe until it is needed.
$ cat UPGR.cfg
global.autoupg_log_dir=/home/oracle/logs/autoupgrade-UPGR
global.keystore=/home/oracle/autoupgrade-keystore
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/23
upg1.sid=CDB19
upg1.timezone_upg=NO
  3. Now, I’m starting AutoUpgrade in -load_password mode. This is the first time I use it, so AutoUpgrade prompts for a password that it can use to encrypt the keystore:
$ java -jar autoupgrade.jar -config UPGR.cfg -load_password
Processing config file ...

Starting AutoUpgrade Password Loader - Type help for available options
Creating new AutoUpgrade keystore - Password required
Enter password:
Enter password again:
AutoUpgrade keystore was successfully created
  4. To enter the username and password for my database, I change the group to PWD:
PWD> group PWD
Group [PWD] is already active
  5. Then, I can enter my username and password for my database (CDB19). AutoUpgrade validates the credentials, so you better make sure they work at this point:
PWD> add CDB19 -user SYS
Enter your secret/Password:
Re-enter your secret/Password:
Database SID: cdb19              User: SYS
  6. I save the keystore and convert it into an auto-login keystore, so I don’t have to enter the keystore password every time I use AutoUpgrade:
PWD> save
Convert the AutoUpgrade keystore to auto-login [YES|NO] ? yes
  7. Exit the keystore.
PWD> exit

AutoUpgrade Password Loader finished - Exiting AutoUpgrade
  8. Finally, I start the upgrade by starting AutoUpgrade in deploy mode:
java -jar autoupgrade.jar -config UPGR.cfg -mode deploy

That’s it!

Now, I can upgrade my Oracle Database without relying on native operating system authentication, using username and password instead.

SQLNET.ORA

Just a few words about sqlnet.ora and how to set up native operating system authentication.

  • Your operating system user (for instance, oracle) must be added to the appropriate groups in your operating system.
  • On UNIX/Linux, you allow native operating system authentication by setting sqlnet.authentication_services=(beq) in sqlnet.ora, or you can omit the parameter completely.
  • On Windows, you allow native operating system authentication by setting sqlnet.authentication_services=(nts) in sqlnet.ora. If you omit the parameter, it won’t work.
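
For reference, re-enabling native operating system authentication on Linux is a single line in sqlnet.ora (on Windows, use nts instead of beq):

sqlnet.authentication_services=(beq)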

Happy Upgrading!

How to Patch Oracle Database With One Command

With the latest release of Oracle AutoUpgrade, patching an Oracle Database is much easier. Once I’ve configured AutoUpgrade, it takes just one command. That includes finding and downloading the right patches and creating a new Oracle home.

Let’s see how I can do it.

Configuration

  1. Get the latest version of AutoUpgrade:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. I need to create a config file:
    • global.global_log_dir is the logging and working directory of AutoUpgrade.
    • global.keystore is where AutoUpgrade can create a software keystore that stores my My Oracle Support credentials.
    • sid and source_home describe the database that I want to patch.
    • target_home is the location of the new Oracle home that AutoUpgrade creates for me.
    • folder is the location where AutoUpgrade can find or download patches.
    • patch tells AutoUpgrade to apply the latest Release Update including the OJVM and Data Pump bundle patches, plus update OPatch.
global.global_log_dir=/home/oracle/autoupgrade-patching/log
global.keystore=/home/oracle/autoupgrade-patching/keystore
patch1.source_home=/u01/app/oracle/product/19
patch1.target_home=/u01/app/oracle/product/19_25_0
patch1.sid=FTEX
patch1.folder=/home/oracle/autoupgrade-patching/patch
patch1.patch=RECOMMENDED
  3. Currently, AutoUpgrade can’t download the Oracle Database 19c base release (LINUX.X64_193000_db_home.zip). I must do that manually and place it in the folder specified by the config file entry folder. I expect the next version to handle it automatically.
$ cd /home/oracle/autoupgrade-patching/patch
$ ls -l
total 2987996
-rw-r--r--. 1 oracle oinstall 3059705302 Mar 18  2020 LINUX.X64_193000_db_home.zip
  4. I need to load my My Oracle Support credentials into the AutoUpgrade keystore so AutoUpgrade can use them to download patches. The first time, AutoUpgrade creates the keystore, and I must provide a keystore password:
$ java -jar autoupgrade.jar -config FTEX.cfg -patch -load_password
Processing config file ...

Starting AutoUpgrade Patching Password Loader - Type help for available options
Creating new AutoUpgrade Patching keystore - Password required
Enter password:
Enter password again:
AutoUpgrade Patching keystore was successfully created
  5. I add my credentials. This is the username and password that I use to connect to My Oracle Support:
MOS> add -user daniel.overby.hansen@oracle.com
Enter your secret/Password:
Re-enter your secret/Password:
  6. I save the keystore and convert it to an auto-login keystore:
MOS> save
Convert the AutoUpgrade Patching keystore to auto-login [YES|NO] ? YES
  7. I exit the keystore.
MOS> exit

AutoUpgrade Patching Password Loader finished - Exiting AutoUpgrade Patching

Patch

With one command, I will patch my Oracle Database:

$ java -jar autoupgrade.jar -config FTEX.cfg -patch -mode deploy
AutoUpgrade Patching 24.7.241021 launched with default internal options
Processing config file ...
Loading AutoUpgrade Patching keystore
AutoUpgrade Patching keystore was successfully loaded

Connected to MOS - Searching for specified patches

------------------------------------------------------------
Downloading files to /home/oracle/autoupgrade-patching/patch
------------------------------------------------------------
DATABASE RELEASE UPDATE 19.25.0.0.0
    File: p36912597_190000_Linux-x86-64.zip - VALIDATED

DATAPUMP BUNDLE PATCH 19.25.0.0.0
    File: p37056207_1925000DBRU_Generic.zip - VALIDATED

OJVM RELEASE UPDATE 19.25.0.0.0
    File: p36878697_190000_Linux-x86-64.zip - VALIDATED

OPatch 12.2.0.1.44 for DB 19.0.0.0.0 (Oct 2024)
    File: p6880880_190000_Linux-x86-64.zip - VALIDATED
------------------------------------------------------------

+-----------------------------------------+
| Starting AutoUpgrade Patching execution |
+-----------------------------------------+
1 Non-CDB(s) will be processed
Type 'help' to list console commands
patch> Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished                  [1]
Jobs failed                    [0]
Jobs restored                  [0]
Jobs pending                   [0]

---- Drop GRP at your convenience once you consider it is no longer needed ----
Drop GRP from FTEX: drop restore point AU_PATCHING_9212_FTEX1921000


Please check the summary report at:
/home/oracle/autoupgrade-patching/log/cfgtoollogs/patch/auto/status/status.html
/home/oracle/autoupgrade-patching/log/cfgtoollogs/patch/auto/status/status.log

That’s it! My database is now running on Oracle Database 19.25.0.

Fine Print

AutoUpgrade does out-of-place patching according to our recommendations. It uses a brand-new Oracle home and copies all configuration files from the old home to the new one.

AutoUpgrade currently requires:

  • Oracle Database 19c
  • Single instance database (non-CDB or entire CDB)

If you need more details, please check out the webinar One-Button Patching with AutoUpgrade. The slide deck has additional details.

What About

This is the first release, and naturally, it comes with restrictions. We’re working hard behind the scenes to lift those restrictions.

Will AutoUpgrade support other platforms? Yes. AutoUpgrade supports most of the platforms where you run Oracle Database, including Windows, AIX, and SPARC Solaris.

What about in-place patching, using cloned Oracle homes, and support for gold images? It’s in the plans.

Will you support other Oracle Database releases? Yes – anything newer than Oracle Database 19c.

What about Data Guard and RAC? You can use AutoUpgrade for Data Guard. RAC is in the plans.

What about Grid Infrastructure? We’re focusing on Oracle Database. You should have a look at Oracle Fleet Patching and Provisioning.

What about … We’re looking for new, great ideas. Please create an enhancement request in My Oracle Support and send me the number. I’ll check it and add it to our backlog. Feel free to also send me an email with your suggestion.

Happy Patching!

Can I Name My PDB in Lowercase?

A customer asked me:

I’m using AutoUpgrade to convert to a PDB, and I want to name the PDB in lowercase. How do I do that?

First, let’s understand how AutoUpgrade decides on the name for the PDB when you convert a non-CDB.

AutoUpgrade and PDB Name

AutoUpgrade uses the DB_UNIQUE_NAME of the non-CDB as the name of the PDB.

In the beginning, AutoUpgrade used the SID of the database, but that wasn’t smart for a RAC database since the SID is suffixed with the instance number.

Now, DB_UNIQUE_NAME might not be smart for a Data Guard configuration, but that’s how it is at the moment. We have a better solution on our backlog.

Anyway, you can override the default and choose the PDB name with the target_pdb_name config file parameter:

upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/23
upg1.sid=DB19
upg1.target_cdb=CDB23
upg1.target_pdb_name.DB19=SALES
  • In the above case, AutoUpgrade renames DB19 to SALES during plug-in.

If you write sales in lowercase, AutoUpgrade converts it to uppercase. If you put quotes around “sales”, AutoUpgrade throws an error.

AutoUpgrade accepts uppercase PDB names only. Why?

PDB Naming Rules

Let’s take a look in the documentation. I’ll find the CREATE PLUGGABLE DATABASE statement.

Syntax diagram for the CREATE PLUGGABLE DATABASE statement

The semantics for pdb_name lists:

The name must satisfy the requirements listed in “Database Object Naming Rules”. The first character of a PDB name must be an alphabet character. The remaining characters can be alphanumeric or the underscore character (_).

Let’s take a look at the Database Object Naming Rules:

… However, database names, global database names, database link names, disk group names, and pluggable database (PDB) names are always case insensitive and are stored as uppercase. If you specify such names as quoted identifiers, then the quotation marks are silently ignored. …

  • Names of disk groups, pluggable databases (PDBs), rollback segments, tablespaces, and tablespace sets are limited to 30 bytes.

So, AutoUpgrade is just playing by the rules.
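
You can see the rule in action yourself. This is a minimal sketch – the admin user is a placeholder, and your CDB needs OMF or a file name convert clause for the data files:

SQL> create pluggable database "sales" admin user pdbadmin identified by ...;
SQL> select pdb_name from dba_pdbs;

The quotation marks are silently ignored, and DBA_PDBS lists the new PDB as SALES.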

The Answer

So, the answer is that the database stores PDB names in uppercase. AutoUpgrade knows this and automatically converts the name to uppercase. The customer must accept that PDB names are uppercase.

These are the requirements for PDB names:

  • First character must be an alphabet character.
  • The name must be all uppercase.
  • The name can contain alphanumeric characters (A-Z, 0-9) and the underscore (_).
  • No longer than 30 bytes.
  • Don’t try to enquote the name.
  • Nonquoted identifiers (like PDB names) cannot be Oracle SQL reserved words.
  • The PDB name must be unique in the CDB, and it must be unique within the scope of all the CDBs whose instances are reached through a specific listener.

Daniel’s Recommendation

I recommend that you use globally unique PDB names. In your entire organization, no two PDBs should have the same name. That way, you can move PDBs around without worrying about name collisions.

I know one customer that generates a unique number and prefixes it with P:

  • P00001
  • P00002
  • P00003

They have a database with a simple sequence and a function that returns P concatenated with the sequence number. They expose the function to the entire organization through a REST API using ORDS. Simple yet elegant.
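
A minimal sketch of such a generator could look like this (the object names are made up, and the ORDS/REST layer on top is left out):

create sequence pdb_name_seq;

create or replace function next_pdb_name return varchar2 as
begin
   return 'P' || lpad(pdb_name_seq.nextval, 5, '0');
end;
/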

Final Words

I’ve spent more than 20 years working with computers. I have been burnt by naming issues so many times that I’ve defined a law: Daniel’s law for naming in computer science:

  • Use only uppercase alphanumeric characters
  • US characters only (no special Danish characters)
  • Underscores are fine
  • Never use spaces
  • Don’t try to push your luck when it comes to names :-)

A Few Details about Using Refreshable Clone PDB for Non-CDB to PDB Migration

Our team has been advocating the use of refreshable clone PDB for non-CDB to PDB migrations using AutoUpgrade. It is a great feature and our entire team loves it – so do many of the customers we work with.

However, in a recent non-CDB to PDB migration, we encountered some issues with refreshable clone PDB and AutoUpgrade.

Can My Target Container Database Be a RAC Database?

Yes, this works perfectly fine.

Be aware that the CREATE PLUGGABLE DATABASE statement scales out on all nodes in your cluster. By default, the database also uses parallel processes, so this can put quite a load on the source non-CDB. Consider restricting the number of parallel processes using the AutoUpgrade config file parameter:

upg1.parallel_pdb_creation_clause=4

Since the creation scales out on all nodes, all nodes must be able to resolve the connect identifier to the source non-CDB. If you use an alias from tnsnames.ora, be sure to add that on all nodes. Failure to do so will lead to an error during the CREATE PLUGGABLE DATABASE command:

ERROR at line 1:
ORA-65169: error encountered while attempting to copy file
+DATAC1/SRCDB/DATAFILE/system.262.1178083869
ORA-17627: ORA-12154: TNS:could not resolve the connect identifier specified
ORA-17629: Cannot connect to the remote database server
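
If you use an alias, a typical tnsnames.ora entry that must exist in the TNS_ADMIN directory on every node could look like this (host and service name are placeholders):

SOURCEDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = srcdb))
  )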

What Happens If the Source Database Extends a Data File?

If the source database extends a data file – either through AUTOEXTEND ON NEXT or manually by a user – the target database extends the matching data file as well. Here is an extract from the target alert log when it extends a data file:

2024-08-27T07:01:26.671975+00:00
PDB1(4):Media Recovery Log +RECOC1/SRCDB/partial_archivelog/2024_08_27/thread_2_seq_4.276.1178089277
2024-08-27T07:01:32.773191+00:00
PDB1(4):Resize operation completed for file# 26, fname +DATA/TGTCDB_HBZ_FRA/20A568D1FD5DB0A6E0633D01000AC89B/DATAFILE/srctbs02.290.1178089287, old size 10240K, new size 1058816K

It works with smallfile and bigfile tablespaces.

What Happens If I Create a Tablespace on the Source Database?

The target database attempts to create the same tablespace.

For this to work, one of the following must be true:

  • The target CDB uses Oracle Managed Files (OMF), or
  • The PDB_FILE_NAME_CONVERT initialization parameter is set in the target CDB.

If neither of the above is true, you’ll receive an error during ALTER PLUGGABLE DATABASE ... REFRESH:

ORA-00283: recovery session canceled due to errors
ORA-01274: cannot add data file that was originally created as
'+DATAC1/SRCDB/DATAFILE/srctbs04.282.1178091655'
You can use PDB_FILE_NAME_CONVERT instead.

It works with smallfile and bigfile tablespaces.
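
If you go the OMF route, enabling it on the target CDB is a single parameter (the disk group is a placeholder):

SQL> alter system set db_create_file_dest = '+DATA';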

What Happens If I Add a Data File to an Existing Tablespace?

The target database attempts to add a matching data file.

The target database must be able to translate the data file location according to the section above.

2024-08-27T06:51:19.294612+00:00
PDB1(4):Media Recovery Log +RECOC1/SRCDB/partial_archivelog/2024_08_27/thread_2_seq_4.276.1178088679
2024-08-27T06:51:20.268208+00:00
PDB1(4):Successfully added datafile 25 to media recovery
PDB1(4):Datafile #25: '+DATA/TGTCDB_HBZ_FRA/20A568D1FD5DB0A6E0633D01000AC89B/DATAFILE/srctbs01.289.1178088681'

What Happens If I Set a Tablespace Read-Only?

Refreshable clone PDBs do not support this. Neither do they support the reverse: setting a tablespace back to read-write.

If you do so, the database reports an error:

alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB

From the alert log:

2024-08-28T05:23:02.893946+00:00
PDB2(6):Error! unsupported source PDB operation: 21
2024-08-28T05:23:02.994035+00:00
PDB2(6):Media Recovery failed with error 65339

Operation 21 is setting a tablespace read-only. If you set a tablespace read-write, the database reports operation 20 instead.

PDB2(7):Error! unsupported source PDB operation: 20

You will not be able to refresh the PDB anymore. You must re-create the refreshable clone PDB.

What Happens If I Restart the Source Database?

Refreshable clone PDB does not support restarting the source database.

When you restart the source database, the source database places a special marker in the redo stream. This even happens for a clean shutdown (SHUTDOWN NORMAL). The target CDB does not understand how to recover beyond this marker.

alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB

From the alert log:

2024-08-28T05:27:00.451985+00:00
PDB2(4):Error! unsupported source PDB operation: 3
2024-08-28T05:27:00.710236+00:00
PDB2(4):Media Recovery failed with error 65339

Operation 3 is the source database restart.

You will not be able to refresh the PDB anymore. You must re-create the refreshable clone PDB.

How Do I Drain My Source Database Before Migration?

Right before the migration, when you cut the connection from the source non-CDB to the target PDB, it could be useful to restart the database. But that’s not possible.

I suggest that you:

  • Ensure that the target CDB connects to the source non-CDB using a dedicated service. This applies to the database link that you establish between the two databases.
  • Stop all other services and specify a drain timeout.
  • Shut down the application that connects to the source non-CDB.
  • Kill sessions manually.

Remember that the target database connects to the source database via a database link, so stopping the database listener is not an option. Nor is enabling RESTRICTED SESSION.
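
On a Grid Infrastructure setup, draining the source and clearing the remaining sessions could look like this. The database, service, and timeout values are placeholders, and CLONE_SVC stands for the dedicated service used by the database link – keep that one running:

$ srvctl stop service -db srcdb -service app_svc -drain_timeout 300 -stopoption immediate

SQL> select 'alter system kill session ''' || sid || ',' || serial# || ''' immediate;'
       from v$session
      where type = 'USER'
        and service_name <> 'CLONE_SVC';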

Update: Armando managed to perform the migration using restricted session. Check his comment (see below) for details.

What Happens If I Restart the Target Container Database?

You can safely restart the target CDB while you have a refreshable clone PDB. This works fine.

What About NOLOGGING Operations?

You can’t perform NOLOGGING operations on the source database.

Since refreshable clone PDB relies on redo, a NOLOGGING operation on the source prevents that data from reaching the target. When you try to query the NOLOGGING table on the target database after the migration, you will receive an error:

SQL> select count(*) from t1
       *
ERROR at line 1:
ORA-28304: Oracle encrypted block is corrupt (file # 186, block # 131)
ORA-01110: data file 186:
'+DATA/TGTCDB_HBZ_FRA/20CF181D4A925E06E0633D01000ACB50/DATAFILE/srctbs01.297.117
8266961'
ORA-26040: Data block was loaded using the NOLOGGING option

Thanks to Marcelo for leaving a comment. He suggests that you set the source non-CDB in FORCE LOGGING mode. This is a good idea to avoid this potential nightmare:

alter database force logging;
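
You can verify the setting afterwards with:

SQL> select force_logging from v$database;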

You can read more about NOLOGGING operations in The Gains and Pains of Nologging Operations (Doc ID 290161.1).

What About Hot Backups?

You can’t perform hot backup operations on the source database.

If you do so, you’ll run into the following error:

2025-11-21T14:31:06.845676+00:00
SALES(4):Error! unsupported source PDB operation: 1
2025-11-21T14:31:07.845923+00:00
SALES(4):Media Recovery failed with error 65339

Please note that I’m not referring to RMAN online backups. I’m talking about the old-school ALTER DATABASE BEGIN BACKUP and ALTER DATABASE END BACKUP commands.

Any Restrictions on Data Types or Object Types?

No. The refreshable clone is a physical copy of the database, so there are no restrictions on data types or object types.

Services

You must recreate your services after the migration. Neither database-managed services nor Clusterware-managed services survive the migration.
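
Inside the new PDB, recreating a database-managed service could look like this (the container and service names are placeholders); for Clusterware-managed services, use srvctl add service instead:

SQL> alter session set container = PDB1;
SQL> exec dbms_service.create_service('SALES_APP', 'SALES_APP')
SQL> exec dbms_service.start_service('SALES_APP')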

Summary

Despite these minor restrictions, migration from non-CDB to PDB using refreshable clone PDB and AutoUpgrade is still a very handy method. Knowing the restrictions upfront ensures that you can successfully migrate the database.

Happy migrating!

How to Prepare Your Oracle Database for Release 23ai

Here is a list of things to check and carry out while preparing for Oracle Database 23ai.

These tasks are not mandatory, but I recommend them based on my experience. They increase your chances of a successful upgrade.

Get ready for Oracle Database 23ai upgrade

Weeks Before

Platform Certification

Ensure that the new release supports your operating system. When available, you can check it in the installation guides. However, you can find the most up-to-date information in the Certifications tab in My Oracle Support.

Use Certifications tab in My Oracle Support to find up-to-date platform certifications

Clients

Check the clients connecting to the database and ensure their client driver version is compatible with Oracle Database 23ai. The client/server interoperability matrix gives you an overview of which clients work with which servers.

Optionally, monitor the database over time to generate a list of clients connecting.

AutoUpgrade

Get the latest version of AutoUpgrade from My Oracle Support. I recommend that you always use the latest version. AutoUpgrade is fully backward compatible, so any newer version can upgrade older releases of Oracle Database.

Upgrade Readiness

Run AutoUpgrade in analyze mode to determine the upgrade readiness of your Oracle Database. Check the summary report for findings with no fixups. Such findings must be dealt with manually.

You can run the analysis even before you install the new Oracle home. Simply specify target_version=23 in your AutoUpgrade config file, and AutoUpgrade knows which checks to run.
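
A minimal config file for such an early analysis could look like this – the paths, SID, and file name are placeholders – followed by the analyze command:

global.autoupg_log_dir=/home/oracle/logs/autoupgrade-analyze
upg1.source_home=/u01/app/oracle/product/19
upg1.sid=DB19
upg1.target_version=23

java -jar autoupgrade.jar -config DB19.cfg -mode analyze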

Dictionary Check

Check the dictionary on your most important databases. You can run a dictionary check together with an AutoUpgrade analysis. In your config file:

upg1.run_dictionary_health=full

Or, you can run it separately using DBMS_DICTIONARY_CHECK.
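
If you run it separately, a call could look like this (assuming your Release Update includes the DBMS_DICTIONARY_CHECK package):

SQL> set serveroutput on size unlimited
SQL> exec dbms_dictionary_check.full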

Behavior Changes

Review the upgrade guide to find information about deprecated and desupported features, plus any behavior changes.

Grid Infrastructure

If Grid Infrastructure manages your database either in RAC or Oracle Restart configuration, you should upgrade it as well. If you can tolerate an additional maintenance window, I’d recommend upgrading Grid Infrastructure in advance.

Days Before

Statistics

Gather dictionary and fixed objects statistics:

begin
   dbms_stats.gather_fixed_objects_stats;
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

I prefer gathering schema stats rather than dictionary stats. I’ve seen a few edge cases solved by schema stats, and generally, current stats on SYS and SYSTEM are sufficient.

Gather statistics no earlier than seven days before the upgrade. If AutoUpgrade determines your statistics are older than that, it will gather them as part of the upgrade. This extends downtime unnecessarily.

If you’re curious, you can find details about the importance of fixed objects statistics in a blog post by Maria Colgan.

Install Oracle Home

Install an Oracle Database 23ai Oracle home. In release 23ai, the Oracle homes that you download are already patched with the latest Release Update.

If you have additional patches to apply, do so now. You should have all patches in place before you start the upgrade.

On the Day of Upgrade

Backup

If time allows, run a level 0 backup. If not, you have to settle for a level 1.
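
For example, a level 0 backup can be as simple as this – adjust it to your own backup strategy:

RMAN> backup incremental level 0 database plus archivelog;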

Scheduler

Normally, it is not necessary to disable the scheduler. But in some situations, it can be beneficial.

That’s It

You are now ready to start the journey to Oracle Database 23ai and the many exciting new features.

Happy upgrading!

Show Me Your Upgrade Runbook and Let’s See What You Can Remove

Recently, I talked to a customer about upgrading Oracle Database. I was showing how you can get the most out of AutoUpgrade. The customer showed me their runbook, and together, we found several redundant tasks. AutoUpgrade already handled many of the checks and tasks.

Leave a comment and show me the tasks in your runbook. Let’s see if we can find something to remove.

What Did We Find At the Customer?

Before Upgrade

  • Purging recycle bin: The customer actively used the recycle bin and manually purged it before upgrading. However, AutoUpgrade has a check on the recycle bin. From the prechecks log file:
    2024-04-17 10:15:51.775 INFO Running check PURGE_RECYCLEBIN [DB12] - CheckTrigger.call#96 
       EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM sys.recyclebin$'
    
    And purges it – if needed – during prefixups. From the prefixups log file:
    2024-04-17 10:15:44.972 INFO Running fixup [PURGE_RECYCLEBIN][DB12][PLSQL][PURGE DBA_RECYCLEBIN
    2024-04-17 10:15:44.972 INFO [EB5A67] [PURGE DBA_RECYCLEBIN
    2024-04-17 10:15:44.976 INFO [EB5A67] Executing SQL [/* [EB5A67]  */PURGE DBA_RECYCLEBIN
    2024-04-17 10:15:45.316 INFO # PURGE_RECYCLEBIN - Check.runFix#292 
    2024-04-17 10:15:45.317 INFO Finished fixup [PURGE_RECYCLEBIN][DB12][SUCCESSFUL] - FixUpTrigger.executeFixUp#231 
    

After Upgrade

  • Execute catuppst.sql: This script performs:

    … remaining upgrade actions that do not require that the database be open in UPGRADE mode

    AutoUpgrade handles this part for you. Here is an extract from the phase overview during upgrade:

    $ grep -B2 "catuppst"  dbupgrade/phase.log
    [phase 102] type is 1 with 1 Files
    
    @catuppst.sql
    
  • Recompiling objects: The customer recompiled objects using utlrp.sql. This is usually not needed. By default, AutoUpgrade performs a recompilation in the postfixups phase unless you override the behavior with run_utlrp. Here is an extract from the autocompile log file found in the postfixups directory:

    13:12:42   6      select count(*) into cnt from sys.dba_objects
    13:12:42   7         where status = 'INVALID';
    13:12:42   8      IF cnt > 0 THEN
    13:12:42   9          :utlprp_name := '/u01/app/oracle/product/23.0.0.0/dbhome_2/rdbms/admin/utlprpom.sql 2';
    13:12:42  10          :utldtchk_name := '/u01/app/oracle/product/23.0.0.0/dbhome_2/rdbms/admin/utldtchk.sql';
    13:12:42  11      END IF;
    
  • Datapatch: The customer executed Datapatch after a successful upgrade. The upgrade engine already does this as one of the very last phases. Here is an extract from upg_summary.log:

    Component                               Current         Full     Elapsed Time
    Name                                    Status          Version  HH:MM:SS
    
    ...
    Datapatch                                                        00:00:09
    
  • Gathering database statistics: The customer already knew that AutoUpgrade gathers dictionary statistics and that they needed to gather fixed objects statistics after a while. But, they were gathering database statistics to refresh the statistics on their application schemas. Although this is a good idea in some cases, I would not recommend this. AutoUpgrade does not touch user data during a database upgrade, so if you had good statistics before the upgrade, they are also good after. The only case I can think of is upgrades from Oracle Database 11.2.0.4. Oracle Database introduced new histogram types in Oracle Database 12c, which could significantly benefit some databases.

    By the way, here is a handy trick to gather fixed objects statistics some time after the upgrade.
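
    For example, you could schedule a one-off job right after the upgrade. This is a sketch – the job name and the seven-day delay are just examples:

    begin
       dbms_scheduler.create_job(
          job_name   => 'GATHER_FIXED_OBJECTS_STATS',
          job_type   => 'PLSQL_BLOCK',
          job_action => 'begin dbms_stats.gather_fixed_objects_stats; end;',
          start_date => systimestamp + interval '7' day,
          enabled    => true);
    end;
    /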

Conclusion

While working with this customer, we found several redundant items. Getting those items out of the runbook means less overall downtime – and a less complex upgrade. AutoUpgrade makes upgrading Oracle Database much easier.

Please leave a comment if there are tasks that AutoUpgrade should do for you.

Appendix

Further Reading

How to Upgrade to Oracle Database 19c and Migrate to a PDB Using Refreshable Clone PDBs

At the recent Oracle DatabaseWorld at CloudWorld, I spoke to several customers who had to upgrade to Oracle Database 19c and convert their non-CDB into the multitenant architecture.

Here is how to do it using Refreshable Clone PDBs.

My source database is:

  • A non-CDB
  • On Oracle Database 12.2 or newer

I want to:

  • Upgrade to Oracle Database 19c
  • Convert the database to a PDB
  • Plug it into an existing CDB

The Problem With PDB Conversion

The conversion to multitenant does not offer the same rollback options as an upgrade. Normally, when you upgrade a database, you rely on Flashback Database as the primary rollback option. However, that does not work for conversion to multitenant.

When you plug your non-CDB into a CDB, the CDB makes changes to the data file headers. Those changes are irreversible and prevent you from ever using those data files in a non-CDB. Not even Flashback Database can revert the changes.

So, what are your rollback options?

  • Restore a backup. It might take longer than your organization can accept.
  • Make a copy of the data files before conversion. It requires disk space and a longer downtime window to copy the data files.

This is where Refreshable Clone PDBs come into play.

Refreshable Clone PDBs

Here is an overview of what AutoUpgrade does for you:

Overview of the process

  1. AutoUpgrade creates a PDB in the target CDB as a refreshable clone PDB of the source non-CDB.
  2. The target CDB starts to copy the data files from the source non-CDB.
  3. The target CDB refreshes the PDB. In other words, it rolls forward the data files using the redo from the source non-CDB.
  4. Now, downtime starts. AutoUpgrade issues a final refresh to bring over the latest changes.
  5. AutoUpgrade disconnects the refreshable clone PDB from its source. Now, the PDB is a real, stand-alone PDB. AutoUpgrade upgrades the PDB and converts it into a proper PDB.

If something happens during the upgrade or conversion and you want to roll back, simply start the original non-CDB. It is left completely untouched.

You can learn about the concept in detail in our AutoUpgrade 2.0 webinar:

Refreshable clone PDBs do not work for cross-endian migrations (like AIX to Linux), but same-endian cross-platform migrations should work fine (like Windows to Linux).

How To

  1. In the source non-CDB, I create a user:
    create user dblinkuser identified by ... ;
    grant create session, 
       create pluggable database, 
       select_catalog_role to dblinkuser;
    grant read on sys.enc$ to dblinkuser;
    
  2. In my target CDB, I create a database link connecting to my source non-CDB:
    create database link clonepdb 
       connect to dblinkuser identified by ...
       using 'source-db-alias';
    
    You can drop the database link after the migration.
  3. I create an AutoUpgrade config file called noncdb1.cfg:
    upg1.source_home=/u01/app/oracle/product/12.2.0.1
    upg1.target_home=/u01/app/oracle/product/19
    upg1.sid=NONCDB1
    upg1.target_cdb=CDB1
    upg1.source_dblink.NONCDB1=CLONEPDB 600
    upg1.target_pdb_name.NONCDB1=PDB1
    upg1.start_time=25/09/2023 06:30:00
    
    • source_home and target_home are the Oracle homes of the source non-CDB and the target CDB, respectively.
    • sid is the source non-CDB that I want to upgrade and convert.
    • target_cdb is the CDB into which I want to plug in the non-CDB. You must create the CDB in advance or use an existing one.
    • source_dblink has the name of the database link (CLONEPDB) and the rate at which the target CDB brings over redo and rolls forward the copy (600 seconds or 10 minutes).
    • target_pdb_name specifies that I want to rename the non-CDB to PDB1 when I plug it in. You can leave this out if you want to keep the name.
    • start_time specifies when downtime starts. At this point, AutoUpgrade refreshes the PDB for the last time and then moves on with upgrade and PDB conversion.
  4. Start AutoUpgrade in analyze mode on the source system:
    java -jar autoupgrade.jar -mode analyze -config noncdb1.cfg
    
  5. Run AutoUpgrade in fixups mode on the source system:
    java -jar autoupgrade.jar -mode fixups -config noncdb1.cfg
    
    • This runs the fixups identified by AutoUpgrade in analyze mode. You can run this task even after you start AutoUpgrade in deploy mode. Just ensure that the fixups complete before the final refresh (as specified in the start_time parameter).
  6. If there are no errors found in the analysis, I start AutoUpgrade in deploy mode:
    java -jar autoupgrade.jar -mode deploy -config noncdb1.cfg
    
    • AutoUpgrade copies the data files over the database link.
    • It rolls the copies of the data files forward with redo from the source non-CDB.
    • At the start time, it issues a final refresh and disconnects the PDB from the source non-CDB.
    • Finally, it upgrades the database and converts it to a PDB.

Here’s a demo of it:

Words of Caution

Disconnect Users from Source Database

Right before the upgrade and conversion starts, AutoUpgrade executes a final refresh. The last redo from the source non-CDB is applied to ensure no data is lost. You must ensure that no users are connected to the source non-CDB after this time. Otherwise, that data will be lost.

AutoUpgrade starts the final refresh at the start time specified in the config file:

upg1.start_time=25/09/2023 06:30:00

You must be careful about disconnecting users from the source non-CDB. Remember, AutoUpgrade connects to the source non-CDB over a database link as a regular user (not SYS). This means the listener must be available, and you can’t enable restricted session or similar means.

Data Guard

If the target CDB is protected by Data Guard, special attention is needed to handle the standby databases. I explain the details in our AutoUpgrade 2.0 webinar:

Redo

The procedure relies on redo from the source non-CDB. Ensure that redo is kept in the Fast Recovery Area of the source non-CDB until it has been applied on the target PDB. Either postpone your archive backups or change the archive log deletion policy so the archive logs remain on disk.

Final Refresh

Check this blog post if you want to be in control over when the final refresh happens.

Services

You must recreate the services used in your connect strings.

Appendix

Further Reading

Upgrading with Oracle Database Vault – AIOUG Follow-up

Last week I presented to AIOUG. My session was Upgrades and Migrations – What’s Cooking. I managed to answer most of the questions except one. As promised, here’s the answer.

Oracle Database Vault

One question came up on Oracle Database Vault. I couldn’t answer the question live. I knew we had recently made changes in this area, but I couldn’t recall the details.

What do you need to consider when upgrading an Oracle Database that uses Oracle Database Vault?

First, when AutoUpgrade performs the pre-upgrade analysis, it will detect the presence of Oracle Database Vault. Information is written in the pre-upgrade summary reminding you to take due care.

You have two options when you upgrade to Oracle Database 19c or later:

  • Disable Oracle Database Vault during the upgrade
  • Or, grant the role DV_PATCH_ADMIN to SYS during the upgrade

You can find more information in Requirement for Upgrading Database with Database Vault (Doc ID 2757126.1).
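
For the second option, a user with the DV_OWNER role grants the role before the upgrade and revokes it again afterwards:

grant dv_patch_admin to sys;
-- and after the upgrade completes:
revoke dv_patch_admin from sys;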

Thanks

Thanks to AIOUG for hosting my webinar. I really enjoy presenting to the community in India. The audience is always really engaging and asks a lot of questions.

Unfortunately, I haven’t yet had the opportunity to present in person in India. I hope to change that one day.