Support for RAC rolling patching in AutoUpgrade has been on our wishlist for a while. Finally, our developers are about to write the last lines of code.
We plan to release the feature next year, but we’d like to get some early feedback on the functionality.
If you’d like to give it a try, here’s your chance.
What Do You Get
Early access to the functionality.
The latest version of AutoUpgrade, plus the code for RAC rolling patching.
Introduction to the feature and a short description of how it works.
Possibility to provide feedback and influence our decisions.
A big THANK YOU for helping us.
What Do We Expect
You have time and resources to test the functionality in your own environment.
You will report any errors as service requests in My Oracle Support.
You can share the AutoUpgrade log files.
What We Hope
If you like the feature, you will help us spread the message.
We’d also love it if you could provide a quote, a video, or even a reference story.
How
Send me an e-mail at daniel.overby.hansen (a) oracle.com.
Add a few words about your setup and such.
I can’t guarantee that everyone will be able to help us.
I’m really excited about this feature, and I know many of you are waiting for it.
Get ready for the future and enjoy the many new cool features in Oracle AI Database 26ai. It is just an upgrade away.
This blog post gives you a quick overview of the upgrade to Oracle AI Database 26ai. Plus, it is the starting point of a whole series of blog posts with all the details.
If you have an older database, you must first upgrade to 19c and then upgrade again to 26ai.
If you’re on 23ai, you can just patch the database; no upgrade needed, no need to re-certify your app.
Oracle AI Database 26ai supports the multitenant architecture only. If your database is a non-CDB, you must also convert it to a pluggable database as part of the upgrade.
Although the Multitenant Option requires a separate license, you can still run databases on the multitenant architecture without it. Oracle allows a certain number of pluggable databases in a container database without the Multitenant Option. Check the license guide for details, and be sure to set max_pdbs=3 if you don’t have the license.
Oracle AI Database 26ai is the next long-term support release. It means you can stay current with patches for many years. At the time of writing, Premier Support ends in December 2031, but check Release Schedule of Current Database Releases (Doc ID 742060.1) for up-to-date information.
In Oracle AI Database 26ai, AutoUpgrade is the only recommended tool for upgrading your database. Oracle desupported the Database Upgrade Assistant (DBUA).
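As a rough sketch (the paths, SID, and names below are placeholders, and the full list of config parameters is in the AutoUpgrade documentation), an upgrade boils down to a small config file and two commands:
# minimal AutoUpgrade config – values are examples only
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
upg1.sid=DB19
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/26
upg1.target_cdb=CDB26
# analyze first, then deploy
java -jar autoupgrade.jar -config DB19.cfg -mode analyze
java -jar autoupgrade.jar -config DB19.cfg -mode deploy
If your source is a non-CDB, the target_cdb line is what tells AutoUpgrade to plug it into an existing CDB as part of the deployment.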
You can also use Data Pump or Transportable Tablespaces to migrate your data directly into a 26ai PDB, even if the source database runs on a lower release or is a non-CDB. In fact, you can export from Oracle v5 and import directly into a 26ai PDB.
Important Things about Multitenant Migration
The multitenant conversion is irreversible. Not even Flashback Database can help if you want to roll back. You must consider this when planning for a potential rollback.
For smaller databases, you can rely on RMAN backups. However, for larger databases, a restore may take too long.
For rollback, you can use a copy of the data files:
The CREATE PLUGGABLE DATABASE statement has a COPY clause, which copies the data files and uses the copies for the new PDB (see the sketch after this list).
A refreshable clone PDB can minimize the time needed to copy the data files by copying them in advance and rolling forward with redo.
Use image copies of your data files and roll forward with RMAN.
Use a standby database for rollback.
Use storage snapshots.
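To illustrate the COPY clause, a plug-in that leaves the original data files untouched could look like this (the names and paths are made up, and the XML manifest comes from DBMS_PDB.DESCRIBE on the source):
-- On the target CDB; the manifest was generated with DBMS_PDB.DESCRIBE
create pluggable database SALES using '/tmp/sales.xml'
  copy
  file_name_convert = ('/u01/oradata/SALES/', '+DATA/');
-- For a non-CDB source, run noncdb_to_pdb.sql inside the new PDB before opening it for use
Because the original data files are only read, they remain available as a fallback.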
Depending on your Data Guard configuration, the plug-in operation needs special attention on your standby databases. If you have standby databases, be very thorough and test the procedure properly.
In the worst case, you can break your standby databases without knowing it. Be sure to check your standby databases at the end of the migration. I recommend performing a switchover to be sure.
The multitenant conversion requires additional downtime. Normally, I’d say around 10-20 minutes of additional downtime. But if you have Data Guard and must fix your standby databases within the maintenance window, then you need even more time.
And Then …
Over the coming months, I will publish several blog posts with step-by-step instructions and other info. Stay tuned!
By the way, this blog post was originally posted a while ago for Oracle Database 23ai, but we all know what happened there. So, let’s start over with Oracle AI Database 26ai.
You can use NOLOGGING operations to speed up data loads. Imports are all about loading data.
When Data Pump loads the rows into your database and creates indexes, you can avoid generating redo.
You save:
Time – because a NOLOGGING operation is faster.
Space – because no redo means no archive logs.
But, there are serious implications! If you need to restore and recover the database, the data in those tables and indexes cannot be recovered.
How To
You can toggle the logging clause of a table or index using the Data Pump import parameter transform. To use NOLOGGING:
impdp ... transform=disable_archive_logging:y
The Effect
I made a small test with a simple schema with four identical tables. Each table is 3 GB.
| Setting   | Time to Import | Redo, GB |
|-----------|----------------|----------|
| LOGGING   | 4m 22s         | 12.2     |
| NOLOGGING | 1m 45s         | 0.047    |
Many factors affect the numbers, so your result might be very different. Check for yourself.
Underneath the Hood
The LOGGING information is only changed temporarily during the import; at the end of the import, it is reset to its original state.
And by grepping in the database trace directory, I could find these events:
FTEX_dw00_854409.trc:KUPW:09:38:39.351: 1: ALTER TABLE "CONSTR_VALIDATE"."T1" NOLOGGING
FTEX_dm00_854407.trc:KUPM:09:40:20.527: W-3 . . imported "CONSTR_VALIDATE"."T1" 381.6 MB 23312384 rows in 101 seconds using direct_path
FTEX_dw01_854413.trc:KUPW:09:40:20.619: 2: ALTER TABLE "CONSTR_VALIDATE"."T1" LOGGING
Words of Caution
I strongly recommend that you only use this feature when:
You fully understand the implications of NOLOGGING operations.
You install the Data Pump Bundle Patch. There are a handful of bugs related to this feature, including one where Data Pump fails to restore the original LOGGING value, leaving the object in NOLOGGING mode.
You have a plan for ensuring the recoverability of your database. Either you don’t care about the data at all, or you start a level 0 backup right away.
This feature has no effect on Oracle Autonomous Database (ADB) Serverless and Dedicated. ADB protects your data at any cost and therefore completely ignores the NOLOGGING clause.
A Data Pump import always generates redo, even when you are using NOLOGGING operations. The database logs all actions on the data dictionary and UNDO. Furthermore, DML operations on the Data Pump control table are also logged.
Migrations
You can safely use this feature during a migration if you plan to perform a level 0 backup and build your standby databases within the maintenance window.
However, if you’re tight on time, you often take a level 0 backup before the import and rely on archive logs for recoverability. In such a situation, you must not use this feature. The same applies if you build your standby database before the import.
Just the Indexes
A compromise is to use NOLOGGING on the indexes only.
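For example, the transform parameter accepts an optional object type, so something along these lines should limit the NOLOGGING behavior to indexes (verify the syntax against the Data Pump documentation for your release):
impdp ... transform=disable_archive_logging:y:index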
The rationale is that you can always rebuild your indexes. Keep your tables safe, but if something happens, simply recreate the indexes.
In such a situation, it’s good to have the index definitions ready. You can use the SQLFILE option to extract the index DDLs from the dump file:
impdp ... sqlfile=indexes.sql include=index
Conclusion
You can speed up imports by using NOLOGGING operations, which also reduces the amount of redo generated.
Appendix
Give It a Try
You can use our Data Pump hands-on lab to try it yourself. Provision a lab and use the commands below:
# Create directory and copy dump file
mkdir /home/oracle/dpdir
cp /home/oracle/scripts/faster-import-constraints.dmp /home/oracle/dpdir
# Create parameter file
# Change the transform parameter accordingly
cd
cat > imp.par <<EOF
directory=dpdir
dumpfile=faster-import-constraints.dmp
parallel=4
logtime=all
metrics=yes
transform=constraint_novalidate:y
transform=disable_archive_logging:y
trace=1FF0300
EOF
# Get rid of previous archived logs
. ftex
rman target /<<EOF
delete noprompt archivelog all;
EOF
rm -rf /u02/fast_recovery_area/FTEX/archivelog/*
# Data Pump prereqs and a restart to reset metrics
sqlplus / as sysdba<<EOF
drop user constr_validate cascade;
grant datapump_exp_full_database, datapump_imp_full_database to dpuser identified by oracle;
alter user dpuser default tablespace users;
alter user dpuser quota unlimited on users;
create or replace directory dpdir as '/home/oracle/dpdir';
alter system set streams_pool_size=128m scope=spfile;
shutdown immediate
startup
EOF
# Remove existing trace files
rm -rf /u01/app/oracle/diag/rdbms/ftex/FTEX/trace/*
# Start import
impdp dpuser/oracle parfile=imp.par
# Measure redo via DB and file system
du -h /u02/fast_recovery_area/FTEX/archivelog
sqlplus / as sysdba<<EOF
SELECT VALUE / 1024 / 1024 AS redo_generated_mb FROM v\$sysstat WHERE name = 'redo size';
EOF
In an Oracle home, you find a Java installation – a full JDK.
cd $ORACLE_HOME
ls -l jdk
Some of the tools related to Oracle AI Database, like AutoUpgrade, are Java-based, so it’s convenient to have a JDK right there.
How do you patch the JDK in your Oracle home?
Updating JDK
When you apply a Release Update to your Oracle home, you’re also updating the JDK. Those patches are part of the Release Update.
Here are two Oracle homes with different Release Updates:
cd /u01/app/oracle/product/dbhome_1927
jdk/bin/java -version
java version "1.8.0_441"
Java(TM) SE Runtime Environment (build 1.8.0_441-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.441-b07, mixed mode)
cd /u01/app/oracle/product/dbhome_1929
jdk/bin/java -version
java version "1.8.0_461"
Java(TM) SE Runtime Environment (build 1.8.0_461-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.461-b11, mixed mode)
Notice how the JDK is newer in the later Release Update.
So, all’s good, right?
The Latest Ones
The Java team builds its patches around the same time as the database team builds the Release Update.
This means that the JDK patches you find in a Release Update are from the previous quarter. There’s simply not enough time to include the very latest JDK patches in the Release Update. We also need time for regression testing and other tasks.
So, the very latest JDK patches are made available as a one-off patch.
You can instruct AutoUpgrade to get the latest available JDK patches using the jdk keyword in your patch configuration.
patch1.patch=RU,OPATCH,JDK
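For context, here’s a sketch of what a complete patching config might look like. I’ve made up the paths and home names, so treat it as an illustration and check the AutoUpgrade patching documentation for the exact parameters:
# keystore holding the My Oracle Support credentials for download mode
global.keystore=/u01/app/oracle/keystore
patch1.sid=DB19
patch1.source_home=/u01/app/oracle/product/dbhome_1929
patch1.target_home=/u01/app/oracle/product/dbhome_1929jdk
patch1.download=yes
patch1.folder=/u01/app/oracle/patches
patch1.patch=RU,OPATCH,JDK
java -jar autoupgrade.jar -config patch.cfg -patch -mode deploy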
Here’s an example of two 19.29 Oracle homes – one without and one with the JDK patch:
cd /u01/app/oracle/product/dbhome_1929
jdk/bin/java -version
java version "1.8.0_461"
Java(TM) SE Runtime Environment (build 1.8.0_461-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.461-b11, mixed mode)
cd /u01/app/oracle/product/dbhome_1929jdk
jdk/bin/java -version
java version "1.8.0_471"
Java(TM) SE Runtime Environment (build 1.8.0_471-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.471-b11, mixed mode)
Notice how the version changes from 461 to 471.
Daniel’s Recommendation
Now that it has become so easy to update the JDK, I recommend doing it.
Security is paramount!
I see minimal risk in doing that.
The Java installation in Oracle Database 19c is Java 8, which is very mature.
In Oracle AI Database 26ai, it is Java 11, which is also very mature.
Both releases are at a stage of their lifecycle where only a few things change with each update.
I recall no major issues caused by a JDK patch.
But what do you think?
Are you already updating the JDK, or do you plan to do so in the future?
Have you experienced issues with JDK in your Oracle home?
Let me know in the comments below.
Now that I’ve finished with Oracle AI World and the EMEA Community Tour is coming to a close, I have some free time on my hands. What better way to spend that time than to create webinars?
Webinar 26: Best Practices for Upgrading to Oracle AI Database 26ai
Webinar 27: Database Patching for DBAs – Patch smarter, not harder
Webinar 28: Patch smarter, not harder – MS Windows Special Edition
When and How
We’ll be airing the first webinar in December, and the other two will follow early next year.
| Webinar                                                 | Date                        |
|---------------------------------------------------------|-----------------------------|
| Best Practices for Upgrading to Oracle AI Database 26ai | December 11 2025, 14:00 CET |
| Database Patching for DBAs – Patch smarter, not harder  | February 12 2026, 14:00 CET |
| Patch smarter, not harder – MS Windows Special Edition  | March 12 2026, 14:00 CET    |
All our webinars are free. Just sign up and we’ll send you the details.
Why
First of all, these webinars are all tech, no marketing!
You can learn how to tackle your next upgrade project to Oracle AI Database 26ai. Although it’s not yet available for all platforms, there are still important preparations that you should start today.
Furthermore, we’ll share how other customers have upgraded their mission-critical systems to the latest release and whether there are any secret underscores (spoiler: there are…).
Over the last year, we’ve made significant enhancements to AutoUpgrade that make it so much easier to patch your databases. After watching this webinar, you’ll never download patches from My Oracle Support again.
Oracle on Windows is slightly different – but don’t worry, AutoUpgrade has you covered. Use the latest enhancements to streamline your operation of Oracle AI Database on Windows.
See You
If you can’t wait, remember that you can watch all our previous webinars on demand.
At Oracle AI World, I spoke to a customer who used Data Pump as part of their CI/CD pipeline. Exporting and importing a 7 TB database took more than 36 hours.
Unloading and loading the rows was generally just slow.
They were using NFS storage for the dump files.
I’ve seen far too many cases where misconfigured NFS caused slowness, so I suggested using ASM storage instead. At least, that could rule out NFS as the issue.
Here’s how to use Data Pump and ASM storage.
Export
It turns out that I already blogged about this topic. You can visit that post for step-by-step instructions.
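In short, and as a rough sketch (directory and file names are made up): point a directory object at an ASM location for the dump file, and keep the log file on a regular file system, because Data Pump can’t write log files into ASM:
SQL> create directory myasmdir as '+DATA/DPDIR';
SQL> create directory logdir as '/home/oracle/logs';
expdp ... directory=myasmdir dumpfile=mydump.dmp logfile=logdir:export.log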
Move File
We now have a dump file on our source database. But we need to move it to the target host.
DBMS_FILE_TRANSFER
On the remote database:
I create a directory in ASM where I can store the dump file:
ASMCMD> mkdir +DATA/DMPDIR
I create a user and grant privileges to connect. I’ll use this user to connect via a database link:
SQL> create user transportuser identified by ... ;
SQL> grant connect to transportuser;
I create a directory and allow my user to write to it:
SQL> create directory dmpdir as '+DATA/DMPDIR';
SQL> grant write on directory dmpdir to transportuser;
On the source database:
I create a user with the right privileges:
SQL> create user transportuser identified by ... ;
SQL> grant connect to transportuser;
SQL> grant create database link to transportuser;
SQL> grant read on directory myasmdir to transportuser;
SQL> grant execute on dbms_file_transfer to transportuser;
I connect as transportuser and create a database link to the remote database/PDB:
SQL> create database link transportlink
connect to transportuser identified by ...
using '<connect-string-to-remote-pdb>';
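Finally, still connected as transportuser on the source, a call along these lines pushes the dump file across the database link (the file name is a placeholder):
SQL> begin
  dbms_file_transfer.put_file(
    source_directory_object      => 'MYASMDIR',
    source_file_name             => 'mydump.dmp',
    destination_directory_object => 'DMPDIR',
    destination_file_name        => 'mydump.dmp',
    destination_database         => 'TRANSPORTLINK');
end;
/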
A closing remark: DBMS_FILE_TRANSFER is an effective means of copying the file. My wristwatch measurements show it’s slightly faster than using scp in the operating system.
ASMCMD
You could also transfer the file directly from one ASM instance to a remote one. Check out the cp command.
But I don’t recommend this approach, because you need the SYSDBA or SYSASM privilege to connect to asmcmd. Since the issue was related to a CI/CD pipeline, it’s a fully automated flow. If at all possible, I’d avoid using such a powerful privilege in my automation.
Import
Now that we have the dump file on our target system, we can perform a Data Pump import. Use the same approach as described above.
Outcome
If the NFS storage indeed caused the slowness, you should see much faster times.
If that’s not the case, here are some other ideas to explore.
Use the Data Pump bundle patch. The customer told me they already did, but I’ve seen far too many cases where that turned out not to be true. Trust, but verify.
If you’re a frequent visitor to my blog, you’ll know I’m a big AutoUpgrade fanboy – especially the download mode, which can save you hours of downloading patches on My Oracle Support.
Recently, I was preaching to another customer, and when they finally gave in, here’s what happened:
We gave it a shot, but we keep running into errors when we try the download mode.
So, let’s see how we can solve the most common problems.
My Oracle Support
You must have an Oracle SSO account.
That account must be linked to a Customer Support Identifier (CSI) that has an active support agreement.
In that CSI, you must have the privilege to download patches. The MOS administrator in your organization can help you with that.
In User Details, select the appropriate Support Identifier and set the Patches drop-down to Download.
If AutoUpgrade tells you that your username or password is incorrect:
*Connection Failed - You entered an incorrect user name or password.*
You should ensure that you’re using the latest version of AutoUpgrade; we recently fixed a bug affecting passwords with certain special characters.
Network Connectivity
AutoUpgrade must be able to connect to a number of Oracle servers.
Those servers are part of a Content Delivery Network (CDN), so expect changing IP addresses. Base your firewall rules on DNS names rather than IP addresses.
InsaneDBA, aka Osman, wrote a good blog post on this topic – especially for those of you on Windows.
If you see an error like this:
*SSL peer shut down incorrectly*
Check firewall settings and security groups to ensure they permit SSL traffic on the required ports.
Windows
If AutoUpgrade requires that you use an elevated command prompt (the run as administrator thing) when you use download mode, you should update to the latest version of AutoUpgrade. It no longer requires this.
I recently helped a customer get the most out of Data Pump. Here’s how it all started.
For legal reasons, we must keep a backup of our data for five years. We want to use Data Pump exports. Do you have any recommendations?
Before someone flames me, I know that an export is not a backup, but we’re not talking about disaster recovery here.
Data Pump Export
Here are some of the Data Pump export parameters I would use.
Log File
logtime=all
metrics=yes
logfile=dataexport.log
Whenever you store data for the long term, I think it’s a good idea to store relevant metadata too. So, be sure to save the log file together with the dump files, and include diagnostic information using logtime and metrics.
The dumpfile specification contains the %T wildcard, which translates into YYYYMMDD at runtime. It’s handy to have the date in the file names. Using %L allows the creation of multiple dump files.
I recommend breaking the dump files into smaller pieces using filesize. Smaller files are easier to handle and transport, and avoid some issues I’ve seen with object storage and huge files.
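For illustration (names and sizes are just placeholders), the dump file specification could look like this:
dumpfile=myapp_%T_%L.dmp
filesize=5g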
Content
schemas=myapp
tables=mytab
exclude=statistics
I would advise against doing a full export. It contains far too much information that you won’t need, like tablespace definitions, SQL management objects, and the audit trail. A schema or table export is more suitable.
Be sure to exclude statistics. They take time to export and have no value. To narrow down the objects you export, you can also use the include parameter.
The parameters include and exclude are mutually exclusive in Oracle Database 19c. That’s no longer the case in later releases.
Compression
compression=all
compression_algorithm=high
Use compression to make the dump files smaller. I think it’s worth spending a few more CPU cycles on export and compressing as much as possible to reduce the dump file size, so use the high algorithm.
Creating a compressed export requires the Advanced Compression Option. However, importing a compressed backup does not require the same option. Always check the license guide for up-to-date information.
If you don’t have a license for the Advanced Compression Option, you can compress the dump file using OS utilities, unless you also encrypt the dump file; compressing an encrypted dump file gives no benefit.
Integrity
data_options=verify_stream_format
checksum=yes
Since you are storing data for the long term, it’s good to double-check that there’s no corruption in it. Data Pump writes to the dump file in a streaming manner, and you can verify the stream format along the way using data_options.
I’d also recommend generating a checksum and storing that in the dump file using the checksum parameter.
You can periodically check your dump file for corruption using impdp ... verify_only=yes. It will recalculate the checksum and match it against the one stored in the dump file.
Note that the checksum parameter was introduced in Oracle Database 21c.
Usually, using the default checksum_algorithm is fine for checking data integrity. If you want to guard against someone tampering with the dump file, it’s better to use checksum_algorithm=sha512. But for proper protection of your data, use encryption.
Encryption
encryption=all
encryption_mode=password
You can encrypt the data in the dump using the encryption parameter.
I prefer using a separate passphrase to protect the dump file, instead of using the database master encryption key. If you use the latter, you also need to have a backup of the database keystore.
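If you go with password mode, you can keep the passphrase out of the parameter file and your shell history by letting Data Pump prompt for it (a small sketch):
encryption_pwd_prompt=yes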
Using Data Pump encryption requires a license for Advanced Security Option. Always check the license guide for up-to-date information.
Miscellaneous
flashback_time=systimestamp
parallel=<n>
Remember to make a consistent export using flashback_time (or flashback_scn if you prefer to specify an SCN). Alternatively, export from a quiesced database or a snapshot standby.
Use whatever parallel degree you can afford. Normally, on x86-64 architectures, the optimal setting is twice the number of physical cores.
Using OCI Object Storage
The customer also stated:
We plan on storing the dumps in OCI object storage.
In the multitenant architecture in Oracle AI Database, the container database is essential. If you create it the right way, you can avoid some difficulties later on.
Here’s my recommendation for creating container databases.
3xC
The three most important things all start with “C”. Makes it easier to remember.
Character Set
The first one is straightforward: stick with the default, AL32UTF8.
If the root container is AL32UTF8, you can plug in PDBs with any character set. You can have a mix of the classic US7ASCII, old-school Western European without the Euro sign (WE8ISO8859P1), Japanese JA16SJIS, the original Unicode UTF8, and so on.
But as soon as you set the root character set to anything but AL32UTF8, all PDBs must use that same character set.
Remain flexible and use AL32UTF8.
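If you want to verify what an existing CDB root uses, a quick query does the job:
select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';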
Components
The number of components has a direct influence on the time it takes to upgrade and patch a database.
Any component used by a PDB must also be installed in the root container. So, install as many components as you need in root, and no more than that (selectively tick the checkboxes in the blue box).
Also, from a least-privilege perspective, don’t install more than you need.
When you decide on a set of components to install, do that in root only. Ensure that new PDBs have none of the components installed (untick all checkboxes in the red box). This gives you the leanest PDBs. If a PDB requires a component, it’s easy to install afterward.
Compatible
The most important thing is that you have a uniform compatible setting in all your CDBs. This gives you the most flexibility.
When you plug in a PDB, it must have the same compatible setting as the root container.
If the PDB has a lower compatible setting, the database automatically and silently raises it to that of the root container.
If the PDB has a higher compatible setting, you won’t be able to plug it in.
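Before plugging in, it’s worth comparing the setting on both sides, for example with:
select value from v$parameter where name = 'compatible';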
Important Parameters
Once you have created a CDB, I recommend setting these parameters.
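max_pdbs
As mentioned earlier, if you don’t have a license for the Multitenant Option, limit the number of user-created PDBs (check the license guide for the number that applies to you):
alter system set max_pdbs=3 scope=both;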
The database now throws an error if you try to create a fourth PDB.
_exclude_seed_cdb_view
By default, the CDB views do not show information about PDB$SEED. I’m not too fond of this behavior. I want to know if there is a problem with it.
alter system set "_exclude_seed_cdb_view"=false scope=both;
Records from PDB$SEED now show up in the CDB views with CON_ID=2.
Create The CDB Using DBCA
You need to use the advanced configuration in Database Configuration Assistant (DBCA) to create a CDB that matches the above recommendations.
Select Create a database.
Select Advanced configuration.
Create a Custom Database.
On the following few pages, you can make the changes you like until you reach the Database Options page.
When you reach the Database Options page, select the components you need in the root container (blue box). In the Include in PDBs list, select only the components you need in all your PDBs. Otherwise, leave them out so you get a lean PDB. You can easily add components later on.
On the Configuration Options page, on the Character sets tab, ensure that you select Use Unicode (AL32UTF8).
On the Creation Option page, click All Initialization Parameters and select the compatible setting you use. I recommend using the default.
On the Summary page, you should Save Response File so you can create additional CDBs in silent mode.
Create Manually
After creating the first database using DBCA, you can create another using the response file and DBCA in silent mode.
Make the required changes to the response file.
Update the name of the database.
Verify the list of parameters.
Adjust other settings as needed.
For reference, in the appendix, you can find a sample response file that we use in our hands-on labs.
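Once the response file is ready, the silent-mode invocation is along these lines (the file name is a placeholder):
dbca -silent -createDatabase -responseFile /home/oracle/cdb26ai.rsp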
If you need to create the database with a non-default timezone file version, set the following parameter:
Back from Oracle AI World in Las Vegas – the conference formerly known as CloudWorld, formerly known as OpenWorld. The badge now joins the others on my wall.
Oracle AI World is special, especially for someone like me who usually works from home. In a few days, I reunited with colleagues, customers, and friends—a very intense social event for someone used to meeting people on Zoom.
It’s also the time of the year when I get to spend time with my team.
Long days left me with sore feet from extended standing and walking.
The motto of the conference: AI is changing everything! I only had time to play with AI-generated pictures.
Nine-hour time difference usually causes me jetlag, but not this time. I used the Timeshifter app, drank lots of water with electrolytes (no caffeine), got sunshine and darkness at the right times, and took Melatonin (this is not a medical endorsement, always check with your doctor). I recovered quickly. If you suffer from jetlag too, give it a shot.
Fruit and a protein bar in my bag served as emergency fuel when meals were delayed—a tip for future conferences.
First Air Canada flight—a pleasure. Flew a Boeing Dreamliner, my second favorite after the Airbus A350. As an aviation geek, I love watching planes. What’s your favorite aircraft?
While making a layover in Toronto, I saw Canada’s vast autumn forests from above on approach—a magnificent sight. I’d love to visit Canada in autumn. Any recommendations for must-see sights?
Saw a remake of The Wizard of Oz at The Sphere. What a wild ride—unbelievable. If you visit Las Vegas, catch a show there. That’s a true big screen.
So many people were taking selfies that we had to take a selfie-with-a-selfie.
Job Done
That’s it! We celebrated an amazing conference with the team and said our goodbyes.
Moving forward, stay tuned for news on the upcoming Oracle AI World Tour 2026—coming soon to a city near you.
Next year’s conference returns to The Venetian in late October. Hope to see you there! If you have a story to share, reach out—perhaps you can present it with us.
THANKS to everyone who took part in making Oracle AI World a big success!