Upgrade Oracle Database 19c CDB to Oracle AI Database 26ai

Let me show you how to upgrade an entire container database, including all PDBs, to Oracle AI Database 26ai.

How to Upgrade

I’ve already prepared my database and installed a new Oracle home. The maintenance window has started, and users have left the database.

  1. This is my AutoUpgrade config file:

    global.global_log_dir=/home/oracle/autoupgrade/logs/CDB19
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/26
    upg1.sid=CDB19
    
    • sid contains the SID of my database.
    • I specify the source and target Oracle homes. I’ve already installed the target Oracle home.
    • Check the appendix for additional parameters.
  2. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config CDB19.cfg -mode deploy
    
    • AutoUpgrade starts by creating a guaranteed restore point to protect my database.
    • Then, it analyzes the database for upgrade readiness and executes the pre-upgrade fixups. Next is the actual upgrade, followed by post-upgrade checks and fixups.
  3. While the job progresses, I monitor it:

    upg> lsj -a 30
    
    • The -a 30 option automatically refreshes the information every 30 seconds.
    • I can also use status -job 100 -a 30 to get detailed information about a specific job.
  4. In the end, AutoUpgrade completes the upgrade:

    Job 100 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    ---- Drop GRP at your convenience once you consider it is no longer needed ----
    Drop GRP from CDB19: drop restore point AUTOUPGRADE_9212_CDB191927000
    
    Please check the summary report at:
    /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.html
    /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.log
    
    • This includes the post-upgrade checks and fixups, including updating /etc/oratab and the Grid Infrastructure configuration.
  5. I review the AutoUpgrade Summary Report. The path is printed to the console:

    vi /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.log
    
  6. I take care of the post-upgrade tasks.

  7. I update any profiles or scripts that use the database.

  8. When I’m done testing the database – including application testing – and I’ve decided that a rollback is not needed, I’ll drop the GRP:

    SQL> drop restore point AUTOUPGRADE_9212_CDB191927000;
    

That’s It!

With AutoUpgrade, you can easily upgrade your entire CDB.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

Appendix

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade detects this and sets CLUSTER_DATABASE=FALSE at the appropriate time. It also updates the Grid Infrastructure configuration.

For warp-speed upgrades, take a look at distributed upgrade.

What If I Use Oracle Restart?

No changes. AutoUpgrade detects this and automatically updates the Grid Infrastructure configuration.

What If My Database Is Encrypted?

You must use an auto-login keystore. AutoUpgrade checks this during the pre-upgrade analysis.
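
A quick way to verify the keystore type is to query the wallet status in the CDB root; a minimal sketch:

SQL> select wallet_type, status from v$encryption_wallet;

An auto-login keystore shows AUTOLOGIN or LOCAL_AUTOLOGIN as the wallet type.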

Multitenant Upgrade

A few words about an upgrade of an entire CDB. AutoUpgrade first upgrades the root container. When that completes, it will upgrade the seed container and your PDBs. The latter part happens in parallel.

So, upgrading a CDB is always slower than just upgrading a single PDB. If you’re tight on time, consider upgrading just the PDB. This is called an unplug-plug upgrade.

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use; a combined config sketch follows the list.

  • drop_grp_after_upgrade: AutoUpgrade doesn’t drop the GRP after the upgrade. This allows you to roll back even after a successful upgrade if your application testing reveals a problem. However, it means that you must manually remember to drop the GRP. Otherwise, you’ll eventually run out of space in FRA, and that will halt your database. Set this parameter to yes, and AutoUpgrade drops the GRP if the upgrade completes without problems.

  • timezone_upg: AutoUpgrade upgrades the database time zone file after the actual upgrade. This requires an additional restart of the database and might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner.

  • raise_compatible: If you want to raise the initialization parameter COMPATIBLE immediately after the upgrade, you can use this parameter. Don’t use if you have standby databases, because there’s a specific procedure for raising COMPATIBLE in a Data Guard configuration. Don’t use it if you want to keep the option of downgrading.

  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.

  • em_target_name: Allow AutoUpgrade to create blackouts before restarting the database and to update the target configuration after the upgrade. Requires a local installation of emcli. See this blog post for details.

  • rman_catalog_connect_string: Relevant for databases that store backup metadata in a recovery catalog. Upgrades the recovery catalog schema following an upgrade. See this blog post for details.
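
For illustration, here’s a minimal sketch of how some of these parameters could look in the config file. The values and script paths are just examples; check the AutoUpgrade documentation for your version:

upg1.drop_grp_after_upgrade=yes
upg1.timezone_upg=no
upg1.raise_compatible=yes
upg1.before_action=/home/oracle/scripts/before.sh
upg1.after_action=/home/oracle/scripts/after.sh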

How to Exclude Audit Trail From Data Pump Export

A user left a comment in the Utilities Guide:

I want to exclude the audit trail from a Data Pump export, but I can’t find a specific example.

First, thanks to the user for leaving a comment. We read every one of them and use them to improve the documentation.

Let me show you how to exclude the audit trail from your Data Pump export.

Exclude Audit Trail

You can exclude the audit trail during export and import using the exclude parameter:

expdp … full=y exclude=audit_trails

Data Pump includes the audit trail only in full exports. Data Pump excludes all kinds of audit trail, including traditional, unified, and fine-grained. It’s not possible to exclude just one audit trail while keeping the others. It’s all or nothing.

I recommend using the exclude on the export command, if possible. Don’t try to exclude the underlying parts of the audit trails (such as the AUDSYS schema or the AUD$ table).

What About The Policies

Data Pump handles audit policies differently, and they might be included in schema and table exports.

To exclude traditional audit policies:

expdp … exclude=audit

To exclude unified audit policies:

expdp … exclude=audit_policy

How Did I Know That?

The exclude parameter is very powerful, and there’s no room in the documentation to list all the different options you can exclude.

But the database can tell you exactly what you can exclude. Here’s a list of the audit-related object paths that you can exclude in a full export:

SQL> select object_path, comments
     from   database_export_objects
     where  upper(comments) like '%AUD%';

OBJECT_PATH           COMMENTS
--------------------  ----------------------------------------------
AUDIT                 Audits of SQL statements
AUDIT_CONTEXT         Audit Namespace Context
AUDIT_DEFAULT         Procedural system privilege audits
AUDIT_OBJ             Object audits on the selected tables
AUDIT_POLICY          Audit Policy
AUDIT_POLICY_ENABLE   Audit Policy Enable
AUDIT_TRAILS          Database Audit Trails and their configuration

(output truncated)

75 rows selected.

For schema and table exports, check schema_export_objects and table_export_objects, respectively.

Additional Notes

  • Remember, traditional auditing is deprecated in Oracle AI Database 26ai. It’s time to start using Unified Auditing. Check out the syntax converter script in Traditional to Unified Audit Syntax Converter – Generate Unified Audit Policies from Current Traditional Audit Configuration (Doc ID 2909718.1).
  • The audit trail is important information. If you exclude the audit trail during a database migration, I recommend archiving it beforehand. You don’t want to lose your audit information.

Happy exporting!

How to Prepare Your Oracle Database for Release 26ai

Here is a list of things to check and carry out while preparing for Oracle AI Database 26ai.

These tasks are not mandatory, but I recommend them based on my experience. They increase your chances of a successful upgrade.

Weeks Before

Versioning

Oracle AI Database 26ai introduces a new versioning scheme. Although 26ai replaces 23ai, the base version reported by the database remains the same.

After the upgrade to 26ai, the version is still 23.0.0.0.0:

SQL> select version from v$instance;

VERSION
-------
23.0.0.0.0

If you check version_full, you’ll see that the second digit reflects the new release:

SQL> select version_full from v$instance;

VERSION_FULL
------------
23.26.0.0.0

Platform Certification

Ensure that the new release supports your operating system. You can check it in the installation guides when they become available. However, you’ll find the most up-to-date information on the Certifications tab in My Oracle Support.

Clients

Check which clients are connecting to the database and ensure their client driver versions are compatible with Oracle AI Database 26ai. The client/server interoperability matrix gives you an overview of which clients work with which servers.

Client      Database 26ai   Database 19c
26ai        Yes             Yes
21c         Yes             Yes
19c         Yes             Yes
18c         No              Yes
12.2.0.1    No              Yes
12.1.0.2    No              Yes
11.2.0.4    No              Yes

19c supports much older clients compared to 26ai. If you’re using such old clients, start upgrading them right away. This also applies to database links.

Optionally, monitor the database over time to generate a list of clients connecting.
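
One way to do that is to sample V$SESSION_CONNECT_INFO periodically; a minimal sketch:

SQL> select distinct s.machine, s.program, ci.client_driver, ci.client_version
     from   v$session s
     join   v$session_connect_info ci
     on     ci.sid = s.sid and ci.serial# = s.serial#
     order  by ci.client_version;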

AutoUpgrade

Get the latest version of AutoUpgrade from My Oracle Support. I recommend that you always use the latest version. AutoUpgrade is fully backward compatible, so any newer version can upgrade older releases of Oracle Database.

Upgrade Readiness

Run AutoUpgrade in analyze mode to determine your database’s upgrade readiness. Check the summary report for findings with no fixups. Such findings must be dealt with manually.

You can run the analysis even before you install the new Oracle home. Simply specify target_version=26 in your AutoUpgrade config file, and AutoUpgrade knows which checks to run.
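
As a sketch, the analysis run looks like this; the config file name is hypothetical:

upg1.target_version=26

java -jar autoupgrade.jar -config CDB19.cfg -mode analyze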

Dictionary Check

Check the dictionary in your most important databases. You can run a dictionary check together with an AutoUpgrade analysis. In your config file:

upg1.run_dictionary_health=full

Or, you can run it separately using DBMS_DICTIONARY_CHECK.
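
A minimal sketch of the standalone run; check the PL/SQL Packages reference for the exact procedure names in your release:

SQL> set serveroutput on size unlimited
SQL> execute dbms_dictionary_check.full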

Behavior Changes

Review the upgrade guide for information on deprecated and desupported features, plus any behavior changes.

Also, use ORAdiff to check for changes between your current release and 26ai. Use the Report menu item to generate a report that you can share with your company.

It’s also a good idea to look at the new keywords, especially those marked as reserved. Don’t use those keywords in object/column names, queries, or the like.
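
You can list the reserved ones directly in the new release; a minimal sketch:

SQL> select keyword from v$reserved_words where reserved = 'Y' order by keyword;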

Grid Infrastructure

  • If Grid Infrastructure manages your database either in RAC or Oracle Restart configuration, you should upgrade it as well. If you can tolerate an additional maintenance window, I’d recommend upgrading Grid Infrastructure in advance.

  • Grid Infrastructure 26ai can manage databases from 19c and onwards. Any older database must be moved away from the system.

  • I recommend keeping the GI and database patch levels in sync. If you want to upgrade the database to 26.1, then upgrade GI to the same Release Update. If that’s not possible, at least keep it within two Release Updates.

Days Before

Statistics

Gather dictionary and fixed objects statistics:

begin
   dbms_stats.gather_fixed_objects_stats;
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/
  • I prefer gathering schema stats rather than dictionary stats. I’ve seen a few edge cases solved by schema stats, and generally, current stats on SYS and SYSTEM are sufficient.

  • Gather statistics no earlier than seven days before the upgrade. If AutoUpgrade determines your statistics are older than that, it will gather them as part of the upgrade. This extends downtime unnecessarily.

  • If you’re curious, you can find details about the importance of fixed objects statistics in a blog post by Maria Colgan.

Install Oracle Home

You can use AutoUpgrade to download and install the new Oracle home.

In release 26ai, the Oracle homes that you download are already patched with the latest Release Update and OPatch.

On the Day of Upgrade

Backup

If time allows, run a level 0 backup. If not, you’ll have to settle for a level 1.

Don’t run the backups inside your maintenance windows; that’s a waste of downtime. Start the backup in advance, so it finishes just before the maintenance window begins.

AutoUpgrade automatically creates a guaranteed restore point before starting the upgrade.
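
A minimal RMAN sketch of such a level 0 backup, started ahead of the maintenance window:

RMAN> backup incremental level 0 database plus archivelog;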

Scheduler

Normally, it is not necessary to disable the scheduler. But in some situations, it can be beneficial.
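
If you do want to keep jobs from starting during the maintenance window, one option is to stop the job queue processes temporarily and reset the parameter afterwards:

SQL> alter system set job_queue_processes=0 scope=memory;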

That’s It

You are now ready to begin your journey to Oracle AI Database 26ai and its many exciting new features.

Check the other blog posts related to upgrade to Oracle AI Database 26ai.

Happy upgrading!

The Best Approach For Backup Before Upgrade

While doomscrolling on LinkedIn the other day, I came across an interesting post:

What is the best approach to take 65TB database backup before upgrade from 12c to 19c?

This post started a discussion with many good answers and ideas. Let me add my thoughts.

The Ideas

Cold Backup

Stop DB and take a cold backup.

  • I don’t recommend this.
  • A cold backup means downtime.
  • An online backup works just as well.

Don’t Make Any Preparations

You should be able to restore at any time within the timeframe specified by your business/SLA.

  • You should always be prepared!
  • Even if your SLA allows a four-hour restore, why not make it faster?
  • You could start an additional level 0 or level 1 backup before the maintenance window. That would reduce the number of archive logs, thereby reducing the time to restore.

Partial Offline Backup

Set app tablespace to read-only, then perform a partial offline backup (or a regular backup). In case of a restore, you need to restore less data.

  • This is a special backup strategy that you normally don’t use. Be sure to test and practice it.
  • A good solution for Standard Edition 2 databases, since you don’t have access to Flashback Database or parallel RMAN backups/restores.
  • A viable approach, but not my favorite, since it’s a technique that you rarely use.

Data Guard

Just keep your standby not upgraded.

  • This conflicts with the MAA guidelines that suggest keeping the standbys online and applying redo.
  • But I like this approach. You need a little more time after the upgrade to go live, because the standby must catch up. But that shouldn’t take long, as an upgrade doesn’t generate that much redo.
  • If you have multiple standbys, this technique becomes even more attractive. Have some standbys follow the primary (to go live faster) and a few lagging behind (in case you need to roll back).

Flashback Database

Or keep compatible at 12 and create a guaranteed restore point before the upgrade so you can flashback.

  • Yes – my favorite.
  • Flashback Database should be your first rollback option. It’s so fast and so convenient.

Another reply:

I just enable Flashback Database, create guaranteed restore point, and start the upgrade. After validation of application, drop the guarantee restore point.

  • The answer suggests that Flashback Database was the only rollback option.
  • As much as I love Flashback Database, don’t rely solely on it.
  • It should be your first option, but not the only one.
  • By the way, you don’t have to enable Flashback Database to create a guaranteed restore point. Even with Flashback Database turned off, you can still revert to a guaranteed restore point, as shown in the sketch below.
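
A minimal sketch; the restore point name is hypothetical:

SQL> create restore point before_upgrade guarantee flashback database;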

Downgrade

Oracle Database also has downgrade method.

  • Downgrade is a cool option. But it’s a fallback option – something you can use days or weeks later following a successful upgrade.
  • If the upgrade crashes midway, you can’t use a downgrade to go back.

Data Pump

Export import is a good option.

  • Export/import is not a rollback plan. Similar to downgrade, if the upgrade crashes midway, you won’t be able to use Data Pump.
  • Days or weeks later, you can use Data Pump as a fallback. But it might not be a viable option for larger databases.

GoldenGate

Another approach can be use of GoldenGate

  • This is an awesome way to upgrade.
  • Especially if you go the extra mile and use active/active replication. Then, you can gradually move sessions to the new database and test things slowly.
  • But are you willing to pay the price? Not only license fees, but also the added complexity.

What Does Oracle Say?

If you check the Upgrade Guide, you will find one recommendation:

Perform a level 0 backup if time allows. Otherwise, at least a level 1 backup.

By doing a level 1 backup close to the maintenance window, you limit the number of archive logs you need to apply on top of your level 0/1 backups. This can dramatically reduce the time it takes to restore.

In addition, AutoUpgrade automatically creates a guaranteed restore point prior to the upgrade.

My Final Thoughts

This discussion on LinkedIn is a really good example that there’s no such thing as one-size-fits-all in tech. It always depends…

I recommend finding a solution that:

  • You’re comfortable with.
  • Fits your business requirements and SLA.
  • You know works, and you gain that knowledge by testing it. You never know that it works until you test it.
  • Doesn’t stand alone. You need multiple rollback options.

What are your thoughts? Which rollback options do you have? Leave a comment below.

Happy upgrading!

RAC Rolling Patching in AutoUpgrade – Looking for Early Testers

Support for RAC rolling patching in AutoUpgrade has been on our wishlist for a while. Finally, our developers are about to write the last lines of code.

We plan to release the feature next year, but we’d like to get some early feedback on the functionality.

If you’d like to give it a try, here’s your chance.

What Do You Get

  • Early access to the functionality.
  • The latest version of AutoUpgrade, plus the code for RAC rolling patching.
  • Introduction to the feature and a short description of how it works.
  • Possibility to provide feedback and influence our decisions.
  • A big THANK YOU for helping us.

What Do We Expect

  • You have time and resources to test the functionality in your own environment.
  • You will report the errors as service requests using My Oracle Support.
  • You can share the AutoUpgrade log files.

What We Hope

  • If you like the feature, you will help us spread the message.
  • We’d also love it if you could provide a quote, a video, or even a reference story.

How

  • Send me an e-mail at daniel.overby.hansen (a) oracle.com.
  • Add a few words about your setup and such.
  • I can’t guarantee that everyone who reaches out will be able to participate.

I’m really excited about this feature, and I know many of you are waiting for it.

Upgrade to Oracle AI Database 26ai

Get ready for the future and enjoy the many new cool features in Oracle AI Database 26ai. It is just an upgrade away.

This blog post gives you a quick overview of the upgrade to Oracle AI Database 26ai. Plus, it is the starting point of a whole series of blog posts with all the details.

Things to Know

  • You can upgrade to Oracle AI Database 26ai if your database runs 19c or 21c.

    • If you have an older database, you must first upgrade to 19c and then upgrade again to 26ai.
    • If you’re on 23ai, you can just patch the database; no upgrade needed, no need to re-certify your app.
  • Oracle AI Database 26ai supports the multitenant architecture only. If your database is a non-CDB, you must also convert it to a pluggable database as part of the upgrade.

  • Although the Multitenant Option requires a separate license, you can still run databases on the multitenant architecture without it. Oracle allows a certain number of pluggable databases in a container database without the Multitenant Option. Check the license guide for details, and be sure to set max_pdbs=3 if you don’t have the license (see the sketch after this list).

  • Oracle AI Database 26ai is the next long-term support release. It means you can stay current with patches for many years. At the time of writing, Premier Support ends in December 2031, but check Release Schedule of Current Database Releases (Doc ID 742060.1) for up-to-date information.

  • In Oracle AI Database 26ai, AutoUpgrade is the only recommended tool for upgrading your database. Oracle desupported the Database Upgrade Assistant (DBUA).

  • You can also use Data Pump or Transportable Tablespaces to migrate your data directly into a 26ai PDB. Even if the source database runs on a lower release and in a non-CDB. In fact, you can export from Oracle v5 and import directly into a 26ai PDB.
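
For the max_pdbs setting mentioned above, a minimal sketch (verify the allowed limit against the license guide for your release):

SQL> alter system set max_pdbs=3 scope=both;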

Important Things about Multitenant Migration

  • The multitenant conversion is irreversible. Not even Flashback Database can help if you want to roll back. You must consider this when planning for a potential rollback.
  • For smaller databases, you can rely on RMAN backups. However, for larger databases, a restore may take too long.
  • For rollback, you can use a copy of the data files:
    • The CREATE PLUGGABLE DATABASE statement has a COPY clause, which copies the data files and uses the copies for the new PDB (see the sketch after this list).
    • Refreshable clone PDB can minimize the time needed to copy the data files by doing it in advance and rolling forward with redo.
    • Use image copies of your data files and roll forward with RMAN.
    • Use a standby database for rollback.
    • Storage snapshots
  • Depending on your Data Guard configuration, the plug-in operation needs special attention on your standby databases. If you have standby databases, be very thorough and test the procedure properly.
  • In the worst case, you can break your standby databases without knowing it. Be sure to check your standby databases at the end of the migration. I recommend performing a switchover to be sure.
  • The multitenant conversion requires additional downtime. Normally, I’d say around 10-20 minutes of additional downtime. But if you have Data Guard and must fix your standby databases within the maintenance window, then you need even more time.
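
For the COPY clause mentioned above, here’s a minimal sketch; the PDB name, manifest file, and paths are hypothetical:

SQL> create pluggable database sales19 using '/u01/app/oracle/sales19.xml'
     copy
     file_name_convert = ('/u02/oradata/SALES19/', '/u02/oradata/CDB26/sales19/');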

And Then …

Over the coming months, I will publish several blog posts with step-by-step instructions and other info. Stay tuned!

By the way, this blog post was originally posted a while ago for Oracle Database 23ai, but we all know what happened there. So, let’s start over with Oracle AI Database 26ai.

Happy upgrading!

Other Blog Posts

Non-CDB

  • Upgrade Oracle Database 19c Non-CDB to 26ai and Convert to PDB

  • Upgrade Oracle Database 19c Non-CDB to 26ai and Convert to PDB Using Refreshable Clone PDB

  • Upgrade Oracle Database 19c Non-CDB to 26ai and Convert to PDB with Data Guard and Re-using Data Files (Enabled Recovery)

  • Upgrade Oracle Database 19c Non-CDB to 26ai and Convert to PDB with Data Guard and Restoring Data Files (Deferred Recovery)

CDB

  • Upgrade Oracle Database 19c CDB to 26ai

  • Upgrade Oracle Database 19c CDB to 26ai with Data Guard

PDB

  • Upgrade Oracle Database 19c PDB to 26ai

  • Upgrade Oracle Database 19c PDB to 26ai using Refreshable Clone PDB

  • Upgrade Oracle Database 19c PDB to 26ai with Data Guard using Refreshable Clone PDB

  • Upgrade Oracle Database 19c PDB to 26ai with Data Guard and Re-using Data Files (Enabled Recovery)

  • Upgrade Oracle Database 19c PDB to 26ai with Data Guard and Restoring Data Files (Deferred Recovery)

OCI

How to Use NOLOGGING to Make Imports Faster

You can use NOLOGGING operations to speed up data loads. Imports are all about loading data.

When Data Pump loads the rows into your database and creates indexes, you can avoid generating redo.

You save:

  • Time – because a NOLOGGING operation is faster.
  • Space – because no redo means no archive logs.

But, there are serious implications! If you need to restore the database, those tables and indexes will not be there.

How To

You can toggle the logging clause of a table or index using the Data Pump import parameter transform. To use NOLOGGING:

impdp ... transform=disable_archive_logging:y

The Effect

I ran a small test with a simple schema containing four identical tables. Each table is 3 GB.

Setting     Time to Import   Redo, GB
LOGGING     4m 22s           12.2
NOLOGGING   1m 45s           0.047

Many factors affect the numbers, so your result might be very different. Check for yourself.

Underneath the Hood

The LOGGING information is only changed temporarily during the import. At the end of the import, the LOGGING information is reset to its original state.

I enabled Data Pump trace:

impdp ... \
   transform=disable_archive_logging:y \
   trace=1FF0300

And by grepping in the database trace directory, I could find these events:

FTEX_dw00_854409.trc:KUPW:09:38:39.351: 1:       ALTER TABLE "CONSTR_VALIDATE"."T1"  NOLOGGING
FTEX_dm00_854407.trc:KUPM:09:40:20.527: W-3 . . imported "CONSTR_VALIDATE"."T1"                      381.6 MB 23312384 rows in 101 seconds using direct_path
FTEX_dw01_854413.trc:KUPW:09:40:20.619: 2:       ALTER TABLE "CONSTR_VALIDATE"."T1" LOGGING

Words of Caution

I strongly recommend that you only use this feature when:

  1. You fully understand the implications of NOLOGGING operations.
  2. You install the Data Pump Bundle Patch. There are a handful of bugs related to this feature, including one where Data Pump fails to restore the original LOGGING value, leaving the object in NOLOGGING mode.
  3. You have a plan for ensuring the recoverability of your database. Either you don’t care about the data at all, or you start a level 0 backup right away.
  4. You have a plan for your standby databases if you use Automatic Correction of Non-logged Blocks at a Data Guard Standby Database.

Notes

  • This feature doesn’t work if you have FORCE LOGGING enabled. The database silently ignores the NOLOGGING clause.

    select force_logging from v$database;
    
  • In a Data Guard configuration, you most likely use FORCE LOGGING, so don’t expect it to work here. Unless you’re using Automatic Correction of Non-logged Blocks at a Data Guard Standby Database.

  • This feature doesn’t work on Oracle Autonomous Database (ADB) Serverless or Dedicated. ADB protects your data at any cost and simply ignores the NOLOGGING setting.

  • A Data Pump import always generates redo, even when you are using NOLOGGING operations. The database logs all actions on the data dictionary and UNDO. Furthermore, DML operations on the Data Pump control table are also logged.

Migrations

You can safely use this feature during a migration if you plan to perform a level 0 backup and build your standby databases within the maintenance window.

However, if you’re tight on time, you often take a level 0 backup before the import and rely on archive logs for recoverability. In such a situation, you must not use this feature. The same applies if you build your standby database before the import.

Just the Indexes

A compromise is to use NOLOGGING on the indexes only:

transform=disable_archive_logging:y:index
transform=disable_archive_logging:n:table

The rationale is that you can always rebuild your indexes. Keep your tables safe, but if something happens, simply recreate the indexes.

In such a situation, it’s good to have the index definitions ready. You can use the SQLFILE option to extract the index DDLs from the dump file:

impdp ... sqlfile=indexes.sql include=index

Conclusion

You can speed up imports by using NOLOGGING operations. It also reduces the amount of redo generated.

Appendix

Give It a Try

You can use our Data Pump hands-on lab to try it yourself. Provision a lab and use the commands below:

# Create directory and copy dump file
mkdir /home/oracle/dpdir
cp /home/oracle/scripts/faster-import-constraints.dmp /home/oracle/dpdir

# Create parameter file
# Change the transform parameter accordingly
cd
cat > imp.par <<EOF
directory=dpdir
dumpfile=faster-import-constraints.dmp
parallel=4
logtime=all
metrics=yes
transform=constraint_novalidate:y
transform=disable_archive_logging:y
trace=1FF0300
EOF

# Get rid of previous archived logs
. ftex
rman target /<<EOF
   delete noprompt archivelog all;
EOF
rm -rf /u02/fast_recovery_area/FTEX/archivelog/*

# Data Pump prereqs and a restart to reset metrics
sqlplus / as sysdba<<EOF
   drop user constr_validate cascade;
   grant datapump_exp_full_database, datapump_imp_full_database to dpuser identified by oracle;
   alter user dpuser default tablespace users;
   alter user dpuser quota unlimited on users;
   create or replace directory dpdir as '/home/oracle/dpdir';
   alter system set streams_pool_size=128m scope=spfile;
   shutdown immediate
   startup
EOF

# Remove existing trace files
rm -rf /u01/app/oracle/diag/rdbms/ftex/FTEX/trace/*

# Start import
impdp dpuser/oracle parfile=imp.par

# Measure redo via DB and file system
du -h /u02/fast_recovery_area/FTEX/archivelog
sqlplus / as sysdba<<EOF
   SELECT VALUE / 1024 / 1024 AS redo_generated_mb FROM v\$sysstat WHERE name = 'redo size';
EOF

AutoUpgrade New Features: Get Latest JDK Patches

In an Oracle home, you find a Java installation – a full JDK.

cd $ORACLE_HOME
ls -l jdk

Some of the tools related to Oracle AI Database are Java-based, like AutoUpgrade, making it convenient to have.

How do you patch the JDK in your Oracle home?

Updating JDK

When you apply a Release Update to your Oracle home, you’re also updating its JDK. Those patches are part of the Release Update.

Here are two Oracle homes with different Release Updates:

cd /u01/app/oracle/product/dbhome_1927
jdk/bin/java -version

java version "1.8.0_441"
Java(TM) SE Runtime Environment (build 1.8.0_441-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.441-b07, mixed mode)

cd /u01/app/oracle/product/dbhome_1929
jdk/bin/java -version

java version "1.8.0_461"
Java(TM) SE Runtime Environment (build 1.8.0_461-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.461-b11, mixed mode)
  • Notice how the JDK is newer in the later Release Update.

So, all’s good, right?

The Latest Ones

The Java team builds its patches around the same time as the database team builds the Release Update.

This means that the JDK patches you find in a Release Update are from the previous quarter. There’s simply not enough time to include the very latest JDK patches in the Release Update. We also need time for regression testing and other tasks.

So, the very latest JDK patches are made available as a one-off patch.

You can find the patch number of the latest JDK patch in JDK and PERL Patches for Oracle Database Home and Grid Home (Doc ID 2584628.1). Apply that and you’re fully up to date.

AutoUpgrade

You can instruct AutoUpgrade to get the latest available JDK patches using the jdk keyword in your patch configuration.

patch1.patch=RU,OPATCH,JDK
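
For context, here’s a minimal sketch of a patching config. Apart from the patch keyword shown above, the parameter names and values are assumptions based on the standard AutoUpgrade patching setup, so verify them in the documentation:

global.global_log_dir=/home/oracle/autoupgrade/logs
patch1.sid=CDB19
patch1.source_home=/u01/app/oracle/product/dbhome_1927
patch1.target_home=/u01/app/oracle/product/dbhome_1929
patch1.patch=RU,OPATCH,JDK
patch1.download=yes

java -jar autoupgrade.jar -config patch.cfg -patch -mode deploy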

Here’s an example of two 19.29 Oracle homes – one without and one with the JDK patch:

cd /u01/app/oracle/product/dbhome_1929
jdk/bin/java -version

java version "1.8.0_461"
Java(TM) SE Runtime Environment (build 1.8.0_461-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.461-b11, mixed mode)

cd /u01/app/oracle/product/dbhome_1929jdk
jdk/bin/java -version

java version "1.8.0_471"
Java(TM) SE Runtime Environment (build 1.8.0_471-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.471-b11, mixed mode)
  • Notice how the version changes from 461 to 471.

Daniel’s Recommendation

Now that it has become so easy to update the JDK, I recommend doing it.

Security is paramount!

I see minimal risk in doing that.

  • The Java installation in Oracle Database 19c is Java 8, which is very mature.
  • In Oracle AI Database 26ai, it is Java 11, which is also very mature.
  • Both Java releases are at a point in their lifecycle where little changes between updates.
  • I recall no major issues caused by a JDK patch.

But what do you think?

  • Are you already updating the JDK, or do you plan to do so in the future?
  • Have you experienced issues with JDK in your Oracle home? Let me know in the comments below.

Happy patching!

New Webinars

Now that I’ve finished with Oracle AI World and the EMEA Community Tour is coming to a close, I have some free time on my hands. What better way to spend that time than to create webinars?

  • Webinar 26: Best Practices for Upgrading to Oracle AI Database 26ai
  • Webinar 27: Database Patching for DBAs – Patch smarter, not harder
  • Webinar 28: Patch smarter, not harder – MS Windows Special Edition

When and How

We’ll be airing the first webinar in December, and the other two will follow early next year.

Webinar                                                    Date
Best Practices for Upgrading to Oracle AI Database 26ai    December 11, 2025, 14:00 CET
Database Patching for DBAs – Patch smarter, not harder     February 12, 2026, 14:00 CET
Patch smarter, not harder – MS Windows Special Edition     March 12, 2026, 14:00 CET

All our webinars are free. Just sign up and we’ll send you the details.

Why

First of all, these webinars are all tech, no marketing!

  • You can learn how to tackle your next upgrade project to Oracle AI Database 26ai. Although it’s not yet available for all platforms, there are still important preparations that you should start today.
  • Furthermore, we’ll share how other customers have upgraded their mission-critical systems to the latest release and whether there are any secret underscores (spoiler: there is…).
  • Over the last year, we’ve made significant enhancements to AutoUpgrade that make it so much easier to patch your databases. After watching this webinar, you’ll never download patches from My Oracle Support again.
  • Oracle on Windows is slightly different – but don’t worry, AutoUpgrade has you covered. Use the latest enhancements to streamline your operation of Oracle AI Database on Windows.

See You

If you can’t wait, remember that you can watch all our previous webinars on demand.

I hope to see you there!

How To Export To ASM Storage – The Full Picture

At Oracle AI World, I spoke to a customer who used Data Pump as part of their CI/CD pipeline. Exporting and importing a 7 TB database took more than 36 hours.

That’s far too much, I said.

A few details:

  • They had the Data Pump bundle patch installed.
  • Unloading and loading rows were generally just slow.
  • They were using NFS storage for the dump files.

I’ve seen far too many cases where misconfigured NFS caused slowness, so I suggested using ASM storage instead. At least, that could rule out NFS as the issue.

Here’s how to use Data Pump and ASM storage.

Export

It turns out I’ve already blogged about this topic. You can visit that post for step-by-step instructions.

Move File

We now have a dump file on our source database. But we need to move it to the target host.

DBMS_FILE_TRANSFER

On the remote database:

  1. I create a directory in ASM where I can store the dump file:
    ASMCMD> mkdir +DATA/DMPDIR
    
  2. I create a user and grant privileges to connect. I’ll use this user to connect via a database link:
    SQL> create user transportuser identified by ... ;
    SQL> grant connect to transportuser;
    
  3. I create a directory and allow my user to write to it:
    SQL> create directory dmpdir as '+DATA/DMPDIR';
    SQL> grant write on directory dmpdir to transportuser;
    

On the source database:

  1. I create a user with the right privileges:
    SQL> create user transportuser identified by ... ;
    SQL> grant connect to transportuser;
    SQL> grant create database link to transportuser;
    SQL> grant read on directory myasmdir to transportuser;
    SQL> grant execute on dbms_file_transfer to transportuser;
    
  2. I connect as transportuser and create a database link to the remote database/PDB:
    SQL> create database link transportlink
         connect to transportuser identified by ...
         using '<connect-string-to-remote-pdb>';
    
  3. I copy the file:
    begin
    dbms_file_transfer.put_file(
       source_directory_object      => 'MYASMDIR',
       source_file_name             => 'exp01.dmp',
       destination_directory_object => 'DMPDIR',
       destination_file_name        => 'exp01.dmp',
       destination_database         => 'TRANSPORTLINK');
    end;
    /
    

A closing remark: DBMS_FILE_TRANSFER is an effective means of copying the file. My wristwatch measurements show it’s slightly faster than using scp in the operating system.

ASMCMD

You could also transfer the file directly from one ASM instance to a remote one. Check out the cp command.

But I don’t recommend this approach, because you need the SYSDBA or SYSASM privilege to connect to asmcmd. Since the issue was related to a CI/CD pipeline, it’s a fully automated flow. If at all possible, I’d avoid using such a powerful privilege in my automation.
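
For completeness, a minimal sketch of such a remote copy; host, instance name, and paths are hypothetical:

ASMCMD> cp +DATA/DMPDIR/exp01.dmp sys@targethost.+ASM:+DATA/DMPDIR/exp01.dmp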

Import

Since we have the dump file on our target system, we can perform a Data Pump import. Use the same approach as described above.

Outcome

If the NFS storage indeed caused the slowness, you should see much faster times.

If that’s not the case, here are some other ideas to explore.

Happy Data Pumping!