Pump Your Data Pump Skills

Oracle Data Pump is a powerful tool for moving data between databases, but many users only scratch the surface of what it can do. That’s why we created a hands-on lab that will take you beyond the basics and into the details.

Get started with Supercharge data movement with Data Pump.

Learn Oracle Database on Oracle LiveLabs

What Can I Learn

If you invest two hours, this is some of what you get in return:

  • Apply best practices
  • Deal with LOBs effectively
  • Speed up imports using NOVALIDATE constraints
  • Use checksums and encryption to validate and protect your dump files
  • Monitor and trace a job
  • Start a job using the PL/SQL interface

If you’re looking for more details, check out the lab instructions.

Oracle LiveLabs

The lab runs in Oracle LiveLabs.

  • Completely free
  • Nothing to install
  • Runs in a browser

You can check our other labs or flip through the huge catalog of labs on Oracle Database.

Get Started

Start your lab today.

If you want more on Data Pump, check our webinars Data Pump Extreme – Deep Dive with Development and Data Pump Best Practices and Real World Scenarios.

Never Stop Learning

When A Refreshable Clone Takes Over The Service

Following my advice, a customer migrated a database to multitenant using refreshable clone PDB. The CDB was on the same host as the non-CDB.

When we prepared the migration and created the refreshable clone, users could no longer connect to the source database.

The users were getting ORA-01109: database not open when connecting to the source database.

But this was before the final refresh and before the migration was supposed to happen.

Why did the refreshable clone PDB interfere with operations in the source database?

The Details

The customer has a non-CDB called SALES, which they wanted to migrate into CDB1. They wanted the PDB to keep the original name, SALES.

Any database registers a default service at the listener. The name of the default service is the same as the name of the database.

In this case, the non-CDB registered the sales service:

$ lsnrctl status

Service "sales" has 1 instance(s).
  Instance "SALES", status READY, has 1 handler(s) for this service...

The customer used AutoUpgrade for the migration. When preparing for the migration, they started AutoUpgrade in deploy mode, and AutoUpgrade created the refreshable clone.

Bear in mind that these steps are preparations only. The switch to the PDB should happen at a later time.

Look what happens to the listener:

$ lsnrctl status

Service "sales" has 2 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
  Instance "SALES", status READY, has 1 handler(s) for this service...

The CDB also registers a sales service. It does so because of the refreshable clone PDB with the same name.

The users were connecting to the default service, sales. The listener handed off the connections to the CDB, not the non-CDB.

Since the PDB was still a refreshable clone PDB, it was not open, and users received ORA-01109: database not open.

Besides that, the refreshing process didn’t work either. The refresh of the PDB happens over a database link to the source database. Guess what happened?

2025-05-09T07:25:58.854934+00:00
Errors in file /u01/app/oracle/diag/rdbms/cdb19/CDB19/trace/CDB19_ora_61633.trc:
ORA-17627: ORA-01109: database not open
ORA-17629: Cannot connect to the remote database server
ORA-17627 signalled during: ALTER PLUGGABLE DATABASE FTEX REFRESH...

Yes, it couldn’t connect to the source database either. It ended up trying to connect to itself.

Optimal Solution

The real problem is the connections to the default service, sales, which has the same name as the database.

This service is not meant for general use. You should create your own service and have your application connect through that.

Why is using default services a bad idea?

  • You can’t customize the default service.
  • The default service is for administrative use only.
  • You easily end up with collisions like this one. This can also happen with two PDBs in different CDBs on the same host.
  • If you rename the database, you also rename the default service and have to update all connection strings.

Why are custom services a good idea?

  • Custom services allow you to set many attributes. While this might not be important for a single-instance database, it is essential for Data Guard and RAC.
  • When you clone a database, a custom service doesn’t follow along. You have to create the services in the clone when and if it is appropriate.

You can create custom services using DBMS_SERVICE or srvctl. You can find more about that in a previous blog post.
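As a sketch, creating and starting a custom service with DBMS_SERVICE could look like this (the service name salesapp is illustrative):

```sql
-- Create and start a custom service in the non-CDB (or later in the PDB).
-- The name 'salesapp' is just an example.
BEGIN
   DBMS_SERVICE.CREATE_SERVICE(
      service_name => 'salesapp',
      network_name => 'salesapp');
   DBMS_SERVICE.START_SERVICE('salesapp');
END;
/
```

Point your application connect strings at salesapp instead of the default sales service. Note that services created this way don’t start automatically after an instance restart; start them in a startup trigger, or let srvctl handle it if you use Oracle Restart or Grid Infrastructure.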

Other Solutions

Other feasible solutions exist, but none of them address the real issue, which I believe is the use of the default service.

  • Rename the PDB so it creates a default service with a different name. After the migration, you can rename the PDB back to its original name.
  • Create a static listener entry that forces the listener to route connections to the non-CDB. However, static listener entries are really not nice, and you should use dynamic registration whenever possible.
  • Create a second listener for the CDB. That’s just cumbersome.

Recommendation

  • Keep using refreshable clone PDB for migrations. It’s a great way to migrate, patch, or upgrade databases.

  • Always create your own custom service. Don’t use the default service.

Data Pump Creates Your Indexes Even Faster

In Oracle Database 23ai, Oracle has enhanced Data Pump to create indexes more efficiently. This can significantly reduce the time it takes to create indexes during a Data Pump import.

Oracle also backported the enhancement to Oracle Database 19c.

In any case, the new feature is on by default. No configuration is needed; just enjoy faster imports.

Benchmark

I made a benchmark using a schema with:

  • 100 small tables (125 MB)
  • 50 medium tables (1.5 GB)
  • 10 big tables (25 GB)
  • 1 huge table (100 GB)
  • Each table had three indexes – 483 indexes in total

Using the new index method, the import went from almost 18 minutes to 11 minutes.

Here are extracts from the import log file:

# The old method
10-MAY-25 16:36:46.902: W-30 Completed 483 INDEX objects in 1071 seconds

# The new method
10-MAY-25 15:59:17.006: W-3 Completed 483 INDEX objects in 686 seconds

Details

So far, I haven’t seen a case where the new method is slower than the former method. However, should you want to revert to the old way of creating indexes, you can do that with the Data Pump parameter ONESTEP_INDEX=TRUE.
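For example, to force the old behavior in a test import (all parameters other than ONESTEP_INDEX are illustrative):

```
# Revert to the pre-23ai single-step index creation (illustrative command)
impdp system schemas=SALES directory=DUMP_DIR dumpfile=sales%L.dmp \
      parallel=16 ONESTEP_INDEX=TRUE
```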

What Happens

To understand what happens, let’s go back in time to Oracle Database 11g. Imagine an import with PARALLEL=16. Data Pump would use one worker process to create indexes one at a time using CREATE INDEX ... PARALLEL 16. This is efficient for large indexes.

In Oracle Database 12c, the algorithm changed to better fit schemas with more indexes and especially many smaller indexes. Now, Data Pump would use all 16 workers, and each would create indexes using CREATE INDEX ... PARALLEL 1. However, this turned out to be a performance-killer for large indexes.

In Oracle Database 23ai (and 19c), you get the best of both worlds. Data Pump uses the size of the table to determine an optimal parallel degree. It creates smaller indexes in large batches with PARALLEL 1, and larger indexes using an optimal parallel degree up to PARALLEL 15.
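Conceptually, the generated DDL looks like this (illustrative statements, not actual Data Pump output; object names and the degree of 12 are made up):

```sql
-- Small indexes: created serially, in large batches
CREATE INDEX sales.small_tab_ix1 ON sales.small_tab (col1) PARALLEL 1;

-- Large indexes: created with a degree derived from the table size
CREATE INDEX sales.huge_tab_ix1 ON sales.huge_tab (col1) PARALLEL 12;
-- Afterwards, the index is set back to the degree recorded in the dump file
ALTER INDEX sales.huge_tab_ix1 PARALLEL 1;
```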

Happy importing!

AutoUpgrade New Features: Run root.sh After Oracle Home Creation

AutoUpgrade relies on out-of-place patching; thus, it creates a new Oracle home. After installing a new Oracle home, you must execute root.sh as the root user.

AutoUpgrade now does this for you under the following circumstances:

  • The oracle user has sudo privileges.
  • The sudo command doesn’t require a password.

AutoUpgrade handles root.sh in the ROOTSH stage, and writes log messages to <autoupgrade_log_dir>/<sid>/<job-no>/rootsh/rootsh.log.

Sudo Privileges

If oracle has sudo privileges, no configuration is needed. AutoUpgrade sees this and automatically executes root.sh after installing the Oracle home and applying the desired patches.

What If

If AutoUpgrade can’t execute root.sh, you must do it manually. AutoUpgrade writes a message into the rootsh.log:

2025-03-24 07:39:22.297 INFO Root Script Path : /u01/app/oracle/product/19_25/root.sh - RootScript.executePatchStage#66
2025-03-24 07:39:22.305 INFO Executing Root Script - RootScript.updateAndLogInfoMessage#91
sudo: a password is required
2025-03-24 07:39:22.327 INFO Root script execution failed sudo: a password is required
 - RootScript.executeRootScript#112
2025-03-24 07:39:22.327 INFO AutoUpgrade Patching has not run /u01/app/oracle/product/19_25/root.sh for the newly installed ORACLE_HOME. This needs to be performed manually after AutoUpgrade Patching completes. - RootScript.executePatchStage#83

If AutoUpgrade can’t run root.sh for you, you shouldn’t use the one-button approach, where it handles everything in one operation. Instead, you should follow this approach:

  • Install the new Oracle home using -mode create_home.

  • Manually run root.sh.

  • Patch the database using -mode deploy. AutoUpgrade sees that you’ve already created the new Oracle home, and skips that part.

Why do you need to do it in multiple steps?

Some functionality in the database requires that you run root.sh. Also, certain patches require you to run root.sh before starting the database. One example is patch 34672698, ORA-00800: SOFT EXTERNAL ERROR, ARGUMENTS: [SET PRIORITY FAILED], [VKTM] , DISM(16). If you haven’t set the proper file permissions with root.sh before starting the database, the database can’t elevate the priority of some of its most critical background processes.

The Special Patches

As mentioned above, there are some special patches that require that you fiddle with file system permissions during the apply.

One example is patch 34672698, ORA-00800: SOFT EXTERNAL ERROR, ARGUMENTS: [SET PRIORITY FAILED], [VKTM] , DISM(16). Check the readme to see the instructions explaining how to change permissions on $ORACLE_HOME/bin/oradism.
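A quick way to confirm the result is to check the file after root.sh has run. Here’s a small sketch, assuming Linux with GNU stat; the expected setuid-root permissions come from the patch readme:

```shell
# Check whether a binary is owned by root and has the setuid bit set,
# as root.sh configures for $ORACLE_HOME/bin/oradism.
check_setuid_root() {
  local f=$1 perms owner
  perms=$(stat -c '%A' "$f")   # e.g. -rwsr-x---
  owner=$(stat -c '%U' "$f")
  if [ "$owner" = "root" ] && [ "${perms:3:1}" = "s" ]; then
    echo "OK: $f is setuid root"
  else
    echo "NOT OK: $f is $perms owned by $owner"
  fi
}

# Example:
# check_setuid_root "$ORACLE_HOME/bin/oradism"
```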

Using AutoUpgrade, installing such a patch is very easy. AutoUpgrade takes all that into consideration. This is how AutoUpgrade creates the new Oracle home:

  • Unzips the Oracle home.
  • Installs the Oracle home using runInstaller, but doesn’t run root.sh.
  • Applies patches using OPatch, including such special patches.
  • Executes root.sh, which applies all the required file permissions.

Executing root.sh at the end takes care of all the extra work with permissions.

Happy patching!

Connor McDonald Is Wrong!

In a recent blog post, Connor McDonald was calling out LinkedIn clickbait.

Someone made incorrect claims about Oracle Database on LinkedIn to draw attention. According to sources (himself), Connor tried to resist the urge to jump in but failed. We all know that Resistance is futile.

Connor wrote a good blog post showing that the attention-seeking author of the LinkedIn post is wrong!

Upgrades Are Risky and Complex

One of the false claims was that:

Upgrades are risky and complex. Want to upgrade? Prepare for a long, nerve-wracking process where something will break.

Connor replied:

Upgrades are now editing a 10 line configuration file and running autoupgrade. One line command and you’re good to go.

Here’s the config file Connor used:

global.autoupg_log_dir=/default/current/location
upg1.dbname=employee
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.1.0/dbhome_1
upg1.sid=emp
upg1.log_dir=/scratch/auto
upg1.upgrade_node=node1
upg1.target_version=19.1

But you can make it even simpler.

  • autoupg_log_dir defaults to $ORACLE_BASE/cfgtoollogs/autoupgrade.
  • dbname is not used anymore.
  • start_time=now is the default.
  • log_dir is not needed when you use global.autoupg_log_dir.
  • upgrade_node is rarely needed. Only when you have one config file that is used on multiple hosts.
  • target_version is only needed when the target Oracle home doesn’t exist yet.

You could do the same with this simplified config file:

upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.1.0/dbhome_1
upg1.sid=emp

You don’t like config files? OK – use environment variables instead. Here’s how to upgrade completely without a config file:

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export TARGET_ORACLE_HOME=/u01/app/oracle/product/19.1.0/dbhome_1
export ORACLE_SID=emp
java -jar autoupgrade.jar -config_values -mode deploy

Upgrades Are Simple, Fast, and Safe

I might be a little biased, but I think upgrades are:

Simple

  • Since we introduced AutoUpgrade in 2019, upgrades have become even easier.
  • Keep it simple or customize it to your exact needs – you decide.
  • Use the preupgrade analysis to run hundreds of checks on your database. If it passes, you’re good to go.
  • Still worried? Use the dictionary check to ease your mind.

Fast

Safe

Was Connor Wrong?

No, Connor was not wrong. Connor was right!

He just needed a tiny correction. Connor made it look like AutoUpgrade is easy, when it is in fact very easy.

Sorry for the clickbait!

Happy upgrading!

How to Patch Oracle Net Listener

When running Oracle Database on a server, the Oracle Net Listener is a vital component.

Let’s discuss how you can patch it.

The Basics

Normally, on a server, you have:

  • One listener
  • One or many database instances

That one listener handles new connections to all database instances. You patch the listener simply by starting it in the new Oracle home.

You can have multiple listeners and use remote listeners, but if you do that, you’re smart and probably won’t need the advice in this blog post.

How to Patch the Listener

  1. Copy listener.ora from the old Oracle home to the new Oracle home.
    export OLD_ORACLE_HOME=/u01/app/oracle/product/19.26.0/dbhome_1
    export NEW_ORACLE_HOME=/u01/app/oracle/product/19.27.0/dbhome_1
    cp $OLD_ORACLE_HOME/network/admin/listener.ora $NEW_ORACLE_HOME/network/admin
    
    • You can avoid this step by redirecting the location of your network files using the TNS_ADMIN environment variable.
    • If you have a completely default installation with no listener configuration, you don’t have a listener.ora file and can skip this step.
    • In an advanced configuration, you might have references to the listener Oracle home; those you must update. But don’t confuse the listener Oracle home with the Oracle homes of databases that use static registration; those you update when you patch the matching database.
  2. Restart the listener in the new Oracle home:
    export ORACLE_HOME=$OLD_ORACLE_HOME
    $OLD_ORACLE_HOME/bin/lsnrctl stop
    export ORACLE_HOME=$NEW_ORACLE_HOME
    $NEW_ORACLE_HOME/bin/lsnrctl start 
    
    • If you use Oracle Grid Infrastructure or Oracle Restart, they manage the listener. You automatically restart, and thus patch, the listener when you patch those components.
  3. If you use the dbstart script to start the listener, update it with the new Oracle home.
    • On Windows, you must delete the matching Windows service when you stop the listener. The lsnrctl start command then creates a new Windows service for the listener. Be sure to set that service to start automatically.

The Outage

  • When you restart the listener, there’s a short period where there’s no listener. It takes a second or so for the new listener to start. In that period, any new connection attempt fails:

    ORA-12541: TNS:no listener
    

    This affects new connections only. Existing connections continue to work unaffected.

  • As soon as the listener starts, it’ll read listener.ora and go through the static listener registrations defined by the SID_LIST. Connections to those databases are ready immediately.

  • For optimal flexibility, however, it’s better to use dynamic listener registration, where the database instance automatically registers with the listener. When you restart the listener, it doesn’t know of all the dynamic registrations and will have to wait until the database registers (up to 60 seconds).

    • You can force a database to register with the listener:
      SQL> alter system register;
      
    • It’s a good idea to run through all of your databases after the listener restart and force a listener registration with the above command.
    • Until the dynamic registration completes, any new connection to the database will fail:
      ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
      
  • You can hide the short outage from the client by using RETRY_COUNT and RETRY_DELAY in your connect strings. Allow the client to retry for a short period before throwing the above errors.

  • You can avoid the outage completely by using multiple listeners and patching them separately. Your connect strings should have an ADDRESS_LIST containing both listeners.
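Put together, a connect descriptor covering both suggestions could look like this (host names, ports, and the service name are illustrative):

```
# tnsnames.ora entry with retry logic and two listeners
SALESAPP =
  (DESCRIPTION =
    (RETRY_COUNT = 3)
    (RETRY_DELAY = 2)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1)(PORT = 1522))
    )
    (CONNECT_DATA = (SERVICE_NAME = salesapp))
  )
```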

Do I Need to Patch the Listener and Database At the Same Time?

No, that’s up to you. If you want just one interruption, then do it all at once. Otherwise, you can do it in separate maintenance windows.

What About AutoUpgrade?

At the moment, AutoUpgrade doesn’t patch the listener. But it’s on the backlog.

Happy patching!

How to Patch Oracle Database on Exadata Cloud@Customer using Out-Of-Place

At our recent workshop in London, I was advocating for using out-of-place patching and the many benefits it brings. One of the questions was:

How do I do that on Exadata Cloud@Customer?

When you use the patching functionality in the cloud tooling, it will patch an entire Oracle home, including all databases, using in-place patching. This is a convenient option, but it extends the maintenance window on your database.

Let’s find out how to do it using out-of-place patching!

Benefits of Out-Of-Place Patching

  • You can patch the database with less downtime. You just shut down the database in the old Oracle home and immediately restart it in the new Oracle home.
    • With in-place patching, you need more downtime. While the database is down, you need time to roll off patches and apply the new ones. Depending on the number of patches you have, this can take 5, 10, or 20 minutes.
  • You prepare the new Oracle home in advance.
    • With in-place patching, all changes to the Oracle home occur during the maintenance window when the database is down.
  • There are fewer tasks in your downtime window, so less risk and less stress.
  • You can use a fresh, brand-new Oracle home to avoid issues with rolling off patches. Plus, a brand-new Oracle home means you don’t carry any old luggage in your Oracle home.
    • With in-place patching, your Oracle home will grow over time. You’ll see that disk space keeps increasing.
  • In the unlikely event that you need to roll back or fall back, that’s a lot easier. You just restart the database in the old Oracle home.
    • With in-place patching, you have to reverse the entire process. So, all that was cumbersome when you patched will be cumbersome when you roll back.

AutoUpgrade

In the workshop, I showed some cool demos of using AutoUpgrade for out-of-place patching. You can’t do that on Exadata Cloud@Customer. You must use the cloud tooling when you patch an entire CDB.

Alternatively, you can patch an individual PDB using AutoUpgrade and refreshable clone PDBs. I’ll have to write a blog post about that.

How to Patch Out-Of-Place on Exadata Cloud@Customer

  1. Create a software image at the desired Release Update with any additional one-off patches. You can skip this if you want to use the Oracle-provided images.
  2. Create a database home using the software image you just created, or use an Oracle-provided one.
  3. I recommend that you update the cloud tooling.
  4. Your maintenance window starts.
  5. Move the database to the new Oracle home. The cloud tooling does that in a rolling manner, so there’s no database outage.
    • It also executes Datapatch and other post-patching steps.

That’s it; your database has been patched!

  • After a while, you can remove the old Oracle home.
  • If you want to script the process, you can also use the REST APIs or dbaascli.
  • This also works for Exadata Database Service.
  • There’s no way to use out-of-place patching on Grid Infrastructure.

Happy patching!

Appendix

Other Methods

You are free to use Exadata Fleet Update for your Exadata Cloud@Customer system. It can greatly reduce your workload and automate your patching operations.

Upgrade a PDB to Oracle Database 23ai Using Replay Upgrade

When you upgrade a PDB to Oracle Database 23ai, there is a new method for performing the upgrade. It’s called Replay Upgrade.

I would call it a convenience feature. You simply plug into a higher-release CDB and open the PDB. The CDB detects the lower-release PDB and performs the upgrade. You don’t have to invoke AutoUpgrade.

Here’s how to do it.

A Few Words on Replay Upgrade

In Oracle Database 23ai, you can upgrade the data dictionary in two ways:

  • Parallel Upgrade – Has been around for quite a few releases. It’s what you’ve used before and can still use.
  • Replay Upgrade – The new method that enables you to upgrade the data dictionary by simply plugging in a lower-release PDB and allowing the CDB to perform the upgrade – without using AutoUpgrade.

I suggest you watch this video about the fundamental differences between the two methods.

Replay Upgrade is not a substitute for the entire upgrade project. Even with Replay Upgrade, you must still run the pre-upgrade and post-upgrade tasks. The version of the PDB must be one that allows for a direct upgrade to Oracle Database 23ai: 19c or 21c.

AutoUpgrade uses Parallel Upgrade. You can force AutoUpgrade to use Replay Upgrade in your config file:

upg1.replay=yes

How To Upgrade Using Replay Upgrade

  1. You must perform the pre-upgrade tasks while the PDB is in the lower-release CDB.
  2. One such task is to analyze the PDB for upgrade readiness:
    java -jar autoupgrade.jar ... -mode analyze
    
  3. If needed, run the pre-upgrade fixups:
    java -jar autoupgrade.jar ... -mode fixups
    
  4. Plug the lower-release PDB into the higher-release CDB. It doesn’t matter whether you plug in using a manifest file, refreshable clone PDBs, or any other method.
  5. Open the PDB:
    alter pluggable database PDB1 open;
    
    • When you open the PDB in normal mode, Replay Upgrade starts.
    • The open command doesn’t complete until the upgrade completes. The command is not hanging; it’s simply upgrading in the background.
    • If you open the PDB in upgrade mode, Replay Upgrade does not start.
  6. During the open command, you can see in the alert log that the CDB upgrades the PDB:
    2025-03-31T14:02:37.955470+00:00
    ORANGE(6):Starting Upgrade on PDB Open
    
  7. When the open command completes, the PDB will be upgraded. But it will open in restricted mode until you run Datapatch. From alert.log:
    ORANGE(6) Error Violation: SQL Patch, Cause: '23.5.0.24.07 Release_Update2407102158' is installed in the CDB but no release updates are installed in the PDB, Action: Call datapatch to install in the PDB or the CDB
    2025-03-31T14:11:03.803899+00:00
    ORANGE(6):Opening pdb with no Resource Manager plan active
    Violations: Type: 1, Count: 1
    Completed: Pluggable database ORANGE opened read write
    Completed:    alter pluggable database orange open
    
  8. Run Datapatch:
    $ORACLE_HOME/OPatch/datapatch -pdbs PDB1
    
  9. Restart the PDB to remove restricted mode:
    alter pluggable database PDB1 close immediate;
    alter pluggable database PDB1 open;
    
  10. Run post-upgrade tasks.

Want To Try It?

In our upgrade lab, Hitchhiker’s Guide for Upgrading to Oracle Database 23ai, there is no lab on Replay Upgrade.

You can still perform a Replay Upgrade if you want. I’ve created instructions in the appendix that you can use. The lab takes 15 minutes to complete.

My Database Is A Non-CDB

Replay Upgrade performs an upgrade-on-open. Interestingly, it can also perform a convert-on-open. The latter will run the same commands you’ll find in noncdb_to_pdb.sql, which you normally run to convert a non-CDB to a PDB.

So, you can simply plug a 19c non-CDB into a 23ai CDB. When you open the PDB, the CDB upgrades and converts to a PDB.
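As a sketch, the non-CDB flow could look like this (names and paths are illustrative):

```sql
-- On the 19c non-CDB, open read only, generate the manifest:
BEGIN
   DBMS_PDB.DESCRIBE(pdb_descr_file => '/u01/app/oracle/sales.xml');
END;
/

-- In the 23ai CDB, plug in the non-CDB:
CREATE PLUGGABLE DATABASE sales USING '/u01/app/oracle/sales.xml' NOCOPY;

-- Opening triggers both the upgrade and the noncdb_to_pdb conversion:
ALTER PLUGGABLE DATABASE sales OPEN;
```

As in the PDB case, finish with Datapatch and a restart of the PDB afterward.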

My Recommendation

I recommend using AutoUpgrade. It ensures that you run all the tasks and automates them completely, giving you the safest upgrade.

Replay Upgrade does look a lot easier at first glance, but you still need to remember all the pre-upgrade and post-upgrade tasks. When there’s something you must run manually, there’s always the risk that you forget one or two of the tasks.

For me, Replay Upgrade is a convenience feature you can use in a lab or demo environment or if you think it’s easier to incorporate in your automation. But even with automation, you can still use AutoUpgrade with the -noconsole command line option.

But the choice is yours.

Happy upgrading!

Appendix

Replay Upgrade Queries and Commands

  • Here’s how you can tell whether Replay Upgrade (Upgrade on Open and Convert On Open) is enabled:
    select property_name, property_value 
    from   database_properties
    where  property_name like '%OPEN%';
    
    You can set the property in the root container and in the PDB.
  • Here’s how to disable Replay Upgrade:
    alter database upgrade sync off;
    

Hands-On Lab

Here are the instructions for trying a Replay Upgrade in our Hands-On Lab.

The below steps perform a Replay Upgrade of the ORANGE PDB from CDB19 to CDB23.

  1. Start by provisioning a new lab and connecting to it. The lab runs in Oracle LiveLabs and is completely free. No installation is required.
  2. Start the CDB19 database. It’s a container database on Oracle Database 19c:
    . cdb19
    env | grep ORA
    sqlplus / as sysdba<<EOF
       startup;
    EOF
    
  3. Create an AutoUpgrade config file:
    cd /home/oracle/scripts
    cat > orange-replay.cfg <<EOF 
    global.autoupg_log_dir=/home/oracle/logs/orange-replay
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/23
    upg1.sid=CDB19
    upg1.target_cdb=CDB23
    upg1.pdbs=ORANGE
    upg1.target_pdb_copy_option.ORANGE=file_name_convert=none
    upg1.timezone_upg=NO
    EOF
    
  4. Run AutoUpgrade in analyze mode:
    cd
    java -jar autoupgrade.jar -config scripts/orange-replay.cfg -mode analyze
    
    • AutoUpgrade analyzes the ORANGE PDB for upgrade readiness.
  5. Check the preupgrade summary report:
    cat /home/oracle/logs/orange-replay/cfgtoollogs/upgrade/auto/status/status.log
    
    • The report states Check passed and no manual intervention needed.
  6. Run the preupgrade fixups:
    java -jar autoupgrade.jar -config scripts/orange-replay.cfg -mode fixups
    
    • AutoUpgrade runs pre-upgrade fixups, if any.
  7. Unplug ORANGE from the 19c CDB:
    . cdb19
    sqlplus / as sysdba<<EOF
        alter pluggable database ORANGE close;
        alter pluggable database ORANGE unplug into '/home/oracle/orange.xml';
        drop pluggable database ORANGE keep datafiles;
    EOF
    
  8. Plug into the 23ai CDB and open ORANGE:
    . cdb23
    env | grep ORA
    sqlplus / as sysdba<<EOF
       set timing on
       create pluggable database ORANGE using '/home/oracle/orange.xml' nocopy;
       alter pluggable database orange open;
    EOF
    
    • The open command upgrades the PDB. The command runs for several minutes.
    • In the end, the command completes but prints Warning: PDB altered with errors.
  9. Run Datapatch on the ORANGE PDB:
    $ORACLE_HOME/OPatch/datapatch -pdbs ORANGE
    
  10. Restart ORANGE:
    sqlplus / as sysdba<<EOF
        alter pluggable database orange close;
        alter pluggable database orange open;
        select open_mode, restricted from v\$pdbs where name='ORANGE';
    EOF
    
    • The PDB now opens normally (READ WRITE) and unrestricted.
  11. Run the post-upgrade fixups:
    java -jar autoupgrade.jar \
       -preupgrade "dir=/home/oracle/logs/orange-replay/fixups,inclusion_list=ORANGE" \
       -mode postfixups
    
    
  12. That’s it. ORANGE has now been upgraded:
    sqlplus / as sysdba<<EOF
        select open_mode, restricted from v\$pdbs where name='ORANGE';
        alter session set container=ORANGE;
        select version_full from v\$instance;
    EOF
    

AutoUpgrade New Features: Better Automation To Patch Oracle Database on Windows

Running Oracle Database on Microsoft Windows is slightly different from running it on other platforms. So, of course, patching Oracle Database is also slightly different.

The Oracle Database runs as a Windows service. AutoUpgrade must re-create the service when you perform out-of-place patching so the service starts oracle.exe from the new Oracle home.

Oracle Database on Windows runs as a Windows service with a hardcoded Oracle home path

To recreate the service, you must specify the credentials of the user who runs the service. Windows allows you to store the credentials in a special file; AutoUpgrade can use that when it recreates the service.

AutoUpgrade brings up a prompt to store credentials for a Windows service

For security purposes, AutoUpgrade deletes the credential file when it is no longer needed. For automation, however, that’s impractical because you would need to recreate the credential file every time you patch or upgrade.

AutoUpgrade now allows you to keep the file and reuse it. To do so, set the config file parameter delete_credential_file to false.

How To Patch Oracle Database on Windows

  1. Get the latest version of AutoUpgrade:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Create an AutoUpgrade config file:
    global.keystore=c:\oracle\autoupgrade\keystore
    patch1.source_home=c:\oracle\product\dbhome_19_26_0
    patch1.target_home=c:\oracle\product\dbhome_19_27_0
    patch1.sid=DB19
    patch1.folder=c:\oracle\patches
    patch1.patch=RECOMMENDED
    patch1.wincredential=c:\oracle\autoupgrade\credential
    patch1.delete_credential_file=false
    
  3. Load the credentials for the user running the service into a credential file:
    java -jar autoupgrade.jar
         -config ...
         -patch
         -load_win_credential "DB19"
    
  4. Start AutoUpgrade in deploy mode:
    java -jar autoupgrade.jar
         -config ...
         -patch
         -mode deploy
    
    • AutoUpgrade finds and downloads the right patches for Windows.
    • Creates a new Oracle home with the new patches.
    • Completes the entire patching process.

That’s it! You’ve patched your Oracle Database on Windows.

Here’s a little demo from our YouTube channel. Be sure to subscribe so you don’t miss out.

Happy patching!

AutoUpgrade New Features: Include Monthly Recommended Patches When Patching

Staying even more up-to-date with Monthly Recommended Patches (MRP) is now easier when patching using AutoUpgrade.

An MRP contains the fixes listed in My Oracle Support note 555.1, conveniently bundled together in a single patch.

> To provide customers more frequent access to recommended and well-tested collections of patches, Oracle is pleased to introduce Monthly Recommended Patches (MRPs)

From My Oracle Support note ID 555.1

How To

  1. Get the latest version of AutoUpgrade:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Add MRP to the patch entry in your AutoUpgrade config file:
    global.keystore=/home/oracle/autoupgrade-patching/keystore
    patch1.source_home=/u01/app/oracle/product/19/dbhome_19_25_0
    patch1.target_home=/u01/app/oracle/product/19/dbhome_19_26_0
    patch1.sid=DB19
    patch1.folder=/home/oracle/autoupgrade-patching/patch
    patch1.patch=RECOMMENDED,MRP
    
  3. Run AutoUpgrade in analyze, download, or deploy mode, and AutoUpgrade gets:
    • RECOMMENDED – the latest available Release Update, OPatch, Data Pump Bundle Patch, and OJVM Bundle Patch
    • MRP – the latest available MRP matching the Release Update
  4. Optionally, you can add one-off fixes that are not part of the Release Update or MRP:
    patch1.patch=RECOMMENDED,MRP,34672698
    
    • You can add more one-off fixes using a comma-separated list.

Tell Me More About Monthly Recommended Patches

Here’s a little snippet from Virtual Classroom #16: Oracle Database Release and Patching Strategy for 19c and 23ai, where Mike explains the concept.

I recommend that you also install MRPs when possible. Using AutoUpgrade, it’s really easy to add them, so why miss out?

Happy patching!
