Patching Eats Your SYSTEM Tablespace

Everyone says: Patch your software!

I keep saying: Patch your Oracle Database!

Let me tell you a little secret. All that patching is eating space in your SYSTEM tablespace.

This blog post is a continuation of Mike’s many blog posts on the topic. It adds an example with some real numbers.

Apply And Rollback Scripts

Normally, the apply and rollback scripts are stored in the Oracle home. Here’s an example:

$ ll $ORACLE_HOME/sqlpatch
drwxr-xr-x. 4 oracle oinstall     38 Apr 18  2019 29517242
drwxr-xr-x. 3 oracle oinstall     22 Jul 28 05:28 36878697
drwxr-xr-x. 3 oracle oinstall     22 Jul 28 05:27 36912597
drwxr-xr-x. 3 oracle oinstall     22 Jul 28 05:30 37056207
drwxr-xr-x. 3 oracle oinstall     22 Aug 11 08:34 37102264
drwxr-xr-x. 3 oracle oinstall     22 Aug 11 08:32 37260974
drwxr-xr-x. 3 oracle oinstall     22 Aug 11 08:36 37470729
drwxr-xr-x. 3 oracle oinstall     22 Aug 11 08:47 37499406
drwxr-xr-x. 3 oracle oinstall     22 Aug 11 08:44 37642901
drwxr-xr-x. 3 oracle oinstall     22 Aug 11 08:49 37777295

When Datapatch applies patches to your Oracle Database, it uses the apply and rollback scripts from the Oracle home.

But during apply, Datapatch also takes the rollback scripts and stores them inside the database – in the SYSTEM tablespace. This ensures that Datapatch can always roll back patches, regardless of whether the rollback scripts are in the Oracle home.

When you use the OPatch cleanup or out-of-place patching, there is a risk that Datapatch needs a rollback script that is no longer present in the Oracle home. In that case, Datapatch simply finds it in the SYSTEM tablespace instead.

This avoids a lot of chaotic situations with missing rollback scripts.

Storing The Scripts

Datapatch uses two tables that both have a column named PATCH_DIRECTORY:

select * from dba_registry_sqlpatch;
select * from dba_registry_sqlpatch_ru_info;

This query shows each patch action and how much space its apply and rollback scripts use:

select * from (
   select description, round(dbms_lob.getlength(PATCH_DIRECTORY)/1024/1024, 2) as size_mb
   from dba_registry_sqlpatch
   where action='APPLY' and description not like 'Database Release Update%'
   union
   select 'Release Update ' || RU_version as description, round(dbms_lob.getlength(PATCH_DIRECTORY)/1024/1024) as size_mb
   from dba_registry_sqlpatch_ru_info)
order by description;

Here’s an example of an Oracle Database that I have patched a few times.

DESCRIPTION                                       SIZE_MB
------------------------------------------------- -------
DATAPUMP BUNDLE PATCH 19.25.0.0.0                    1.03
DATAPUMP BUNDLE PATCH 19.26.0.0.0                    1.03
DATAPUMP BUNDLE PATCH 19.27.0.0.0                    1.03
DATAPUMP BUNDLE PATCH 19.28.0.0.0                    1.04
OJVM RELEASE UPDATE: 19.25.0.0.241015 (36878697)      .01
OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)      .02
OJVM RELEASE UPDATE: 19.27.0.0.250415 (37499406)      .02
OJVM RELEASE UPDATE: 19.28.0.0.250715 (37847857)      .02
Release Update 19.25.0.0.0                            175
Release Update 19.26.0.0.0                            184
Release Update 19.27.0.0.0                            194
Release Update 19.28.0.0.0                            203
Release Update 19.3.0.0.0                               4

10 rows selected.

That’s around 750 MB.
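If you want to see where that space actually sits, you can look at the LOB segments behind the two views. This is just a minimal sketch; I am assuming the underlying SYS tables follow the REGISTRY$SQLPATCH naming pattern, which may differ between releases:

sqlplus / as sysdba<<'EOF'
   -- The PATCH_DIRECTORY scripts live in LOB segments in SYSTEM.
   -- The table name pattern is an assumption; adjust it if your release uses different names.
   select l.table_name, l.column_name, s.tablespace_name,
          round(s.bytes/1024/1024, 2) as size_mb
   from   dba_lobs l
   join   dba_segments s
          on s.owner = l.owner
          and s.segment_name = l.segment_name
   where  l.owner = 'SYS'
   and    l.table_name like 'REGISTRY%SQLPATCH%';
EOF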

Cleaning Up

Datapatch only needs the rollback scripts for the patches that are currently applied. You can remove all other scripts:

$ORACLE_HOME/OPatch/datapatch -purge_old_metadata

Using the above environment, this is the result of the cleanup:

DESCRIPTION                                       SIZE_MB
------------------------------------------------- -------
DATAPUMP BUNDLE PATCH 19.25.0.0.0                    
DATAPUMP BUNDLE PATCH 19.26.0.0.0                    
DATAPUMP BUNDLE PATCH 19.27.0.0.0                    
DATAPUMP BUNDLE PATCH 19.28.0.0.0                    1.04
OJVM RELEASE UPDATE: 19.25.0.0.241015 (36878697)     
OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)     
OJVM RELEASE UPDATE: 19.27.0.0.250415 (37499406)     
OJVM RELEASE UPDATE: 19.28.0.0.250715 (37847857)      .02
Release Update 19.25.0.0.0                           
Release Update 19.26.0.0.0                           
Release Update 19.27.0.0.0                           
Release Update 19.28.0.0.0                            203
Release Update 19.3.0.0.0                               

10 rows selected.

Datapatch now uses only a little more than 200 MB.

A few comments about the cleanup functionality:

  • It is available via patch 37738908. Hopefully, it will be part of the 19.29 Release Update.
  • You can safely execute the cleanup. Datapatch doesn’t remove scripts that it might need at a later point.
  • Oracle recommends that you run the cleanup in an off-peak period.
  • The cleanup happens via a TRUNCATE TABLE command, which effectively reclaims space so other segments can use it. However, it doesn’t shrink the tablespace, so the physical size of the data files remains the same (see the sketch after this list).
  • To facilitate the TRUNCATE TABLE command, the records that must remain are copied to a temporary table. After truncating the original table, those records are moved back and the temporary table is dropped. This might lead to a small, temporary increase in space usage while Datapatch cleans up.
  • When you upgrade, the upgrade engine truncates those tables. They are of no use following an upgrade.
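If you also want to hand the space back to the operating system, you would have to shrink the SYSTEM data files afterwards. Here is a minimal sketch; the file name and target size are made up, and the resize only succeeds if no used data sits beyond the new size:

sqlplus / as sysdba<<EOF
   select file_name, round(bytes/1024/1024) as size_mb
   from dba_data_files
   where tablespace_name = 'SYSTEM';

   -- Hypothetical example; adjust the file name and size to your environment.
   -- The resize fails with ORA-03297 if used data sits beyond the requested size.
   -- alter database datafile '/u02/oradata/DB19/system01.dbf' resize 900M;
EOF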

Final Words

Check your Oracle Database. How much space does Datapatch use? What was the largest amount of space you could reclaim? Let me know in the comments below.

Update

  • 03-SEP-2025: I incorrectly wrote that the tables use the SYSAUX tablespace. They use the SYSTEM tablespace. Also, I added a detail about the cleanup temporarily taking up a little more space. Thanks to Pete for letting me know.

AutoUpgrade New Features: Better Automation To Patch Oracle Database on Windows

Running Oracle Database on Microsoft Windows is slightly different from running it on other platforms. So, of course, patching Oracle Database is also slightly different.

The Oracle Database runs as a Windows service. AutoUpgrade must re-create the service when you perform out-of-place patching so the service starts oracle.exe from the new Oracle home.

Oracle Database on Windows runs as a Windows service with a hardcoded Oracle home path

To recreate the service, you must specify the credentials of the user who runs the service. Windows allows you to store the credentials in a special file; AutoUpgrade can use that when it recreates the service.

AutoUpgrade brings up a prompt to store credentials for a Windows service

For security purposes, AutoUpgrade deletes the credential file when it is no longer needed. For automation, however, that’s impractical because you would need to recreate the credential file every time you patch or upgrade.

AutoUpgrade now allows you to keep the file and reuse it. To do so, use the config file parameter delete_credential_file.

How To Patch Oracle Database on Windows

  1. Get the latest version of AutoUpgrade:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Create an AutoUpgrade config file:
    global.keystore=c:\oracle\autoupgrade\keystore
    patch1.source_home=c:\oracle\product\dbhome_19_26_0
    patch1.target_home=c:\oracle\product\dbhome_19_27_0
    patch1.sid=DB19
    patch1.folder=c:\oracle\patches
    patch1.patch=RECOMMENDED
    patch1.wincredential=c:\oracle\autoupgrade\credential
    patch1.delete_credential_file=false
    
  3. Load the credentials for the user running the service into a credential file:
    java -jar autoupgrade.jar
         -config ...
         -patch
         -load_win_credential "DB19"
    
  4. Start AutoUpgrade in deploy mode:
    java -jar autoupgrade.jar
         -config ...
         -patch
         -mode deploy
    
    • AutoUpgrade finds and downloads the right patches for Windows.
    • It creates a new Oracle home with the new patches.
    • It completes the entire patching process.

That’s it! You’ve patched your Oracle Database on Windows.

Here’s a little demo from our YouTube channel. Be sure to subscribe so you don’t miss out.

Happy patching!

How To Roll Back After Patching

Here’s a question I received from a customer:

I’ve patched my database to 19.25 using AutoUpgrade and out-of-place patching. How do I roll back, if needed?

Basically, a rollback is the same as patching the database. You just do it the other way around – from the higher to the lower Oracle home. But let’s look at the details.

AutoUpgrade

I’m glad to hear that the customer uses AutoUpgrade for patching. It’s my recommended method, and it has many benefits.

If you use AutoUpgrade to patch your Oracle Database, you can also use it to roll back, but only before going live:

java -jar autoupgrade.jar -restore -jobs <n>
  • n is the job ID of the patching job.
  • AutoUpgrade undoes everything it did.

AutoUpgrade relies on Flashback Database as its rollback mechanism. So, it’s no good if users have already connected to the database and added/changed data.
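Before relying on this, you can verify that the guaranteed restore point created by AutoUpgrade still exists. A minimal check:

sqlplus / as sysdba<<'EOF'
   -- AutoUpgrade's guaranteed restore point should appear here until it is dropped.
   select name, guarantee_flashback_database, time
   from v$restore_point;
EOF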

Allow me to repeat: Only use AutoUpgrade to roll back before go-live!

After go-live, you must roll back manually.

Manually

You can manually roll back at any time – even after go-live.

Imagine you want to roll back from 19.25 (the new Oracle home) to 19.24 (the old Oracle home). Here’s how to do it.

  • You start by setting the environment.
    export OLD_ORACLE_HOME=/u01/app/oracle/product/dbhome_19_24
    export NEW_ORACLE_HOME=/u01/app/oracle/product/dbhome_19_25
    export ORACLE_HOME=$NEW_ORACLE_HOME
    export PATH=$ORACLE_HOME/bin:$PATH
    
  • Optionally, you run Datapatch sanity check in the new Oracle home (thanks Erik for pointing that out).
    $ORACLE_HOME/OPatch/datapatch -sanity_checks
    
  • You shut down the database running in the new Oracle home.
    sqlplus / as sysdba<<EOF
       shutdown immediate
    EOF
    
  • You move the following files back to the old Oracle home:
    • PFile
    • SPFile
    • Password file
    • Network files (like tnsnames.ora and sqlnet.ora)
    • Some of the files might not be present at all or be placed outside the Oracle home
    • Check this blog post for other files that might be stored in the Oracle home
    mv $NEW_ORACLE_HOME/dbs/init$ORACLE_SID.ora $OLD_ORACLE_HOME/dbs
    mv $NEW_ORACLE_HOME/dbs/spfile$ORACLE_SID.ora $OLD_ORACLE_HOME/dbs
    mv $NEW_ORACLE_HOME/dbs/orapw$ORACLE_SID $OLD_ORACLE_HOME/dbs
    mv $NEW_ORACLE_HOME/network/admin/sqlnet.ora $OLD_ORACLE_HOME/network/admin
    mv $NEW_ORACLE_HOME/network/admin/tnsnames.ora $OLD_ORACLE_HOME/network/admin
    
  • Update /etc/oratab and set the Oracle home to the old one.
  • Update your profile scripts to reflect the old Oracle home. It could be .bashrc.
  • Start the database in the old Oracle home.
    export ORACLE_HOME=$OLD_ORACLE_HOME
    export PATH=$ORACLE_HOME/bin:$PATH
    sqlplus / as sysdba<<EOF
       startup
    EOF
    
  • Run Datapatch.
    $ORACLE_HOME/OPatch/datapatch
    
  • Finally, you fix internal directories that point to paths inside the Oracle home (a quick check is sketched below):
    @?/rdbms/admin/utlfixdirs.sql
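To verify the directory fix, you can list the directory objects and look for paths that still reference the Oracle home you just left. A minimal check:

sqlplus / as sysdba<<'EOF'
   -- After utlfixdirs.sql, no directory should point into the rolled-back Oracle home.
   select directory_name, directory_path
   from dba_directories
   order by directory_name;
EOF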
    

Datapatch

When you roll back, you must execute Datapatch. It will automatically detect that you are rolling back and perform the necessary actions.

For each patch there is an apply script that brings changes into the database. Datapatch executes the apply script during patching.

For each apply script, there is always a rollback script. It will reverse the actions of the apply script. Datapatch executes the rollback script when you roll back.
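You can see the apply and rollback actions that Datapatch has recorded. A minimal query; the registry view holds one row per patch action:

sqlplus / as sysdba<<'EOF'
   -- After a rollback, you see both APPLY and ROLLBACK rows for the same patch.
   select patch_id, action, status, action_time
   from dba_registry_sqlpatch
   order by action_time;
EOF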

You can learn much more about Datapatch in this video.

Normally, you would roll back to the Oracle home you came from, but that’s not a requirement. This scenario is fully supported:

  • Patch from 19.23 to 19.25
  • Roll back to 19.24

How To Practice?

We have a hands-on lab in which you can try rollbacks – using AutoUpgrade and manually.

Patch Me If You Can

The lab runs in Oracle Live Labs.

  • It’s completely free
  • It runs in just a browser

Happy patching!

Further Reading

How to Patch Oracle Database With One Command

With the latest release of Oracle AutoUpgrade, patching an Oracle Database is much easier. Once I’ve configured AutoUpgrade, it takes just one command. That includes finding and downloading the right patches and creating a new Oracle home.

Let’s see how I can do it.

Configuration

  1. Get the latest version of AutoUpgrade:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. I need to create a config file:
    • global.global_log_dir is the logging and working directory of AutoUpgrade.
    • global.keystore is where AutoUpgrade can create a software keystore that stores my My Oracle Support credentials.
    • sid and source_home describe the database that I want to patch.
    • target_home is the location of the new Oracle home that AutoUpgrade creates for me.
    • folder is the location where AutoUpgrade can find or download patches.
    • patch tells AutoUpgrade to apply the latest Release Update including the OJVM and Data Pump bundle patches, plus update OPatch.
global.global_log_dir=/home/oracle/autoupgrade-patching/log
global.keystore=/home/oracle/autoupgrade-patching/keystore
patch1.source_home=/u01/app/oracle/product/19
patch1.target_home=/u01/app/oracle/product/19_25_0
patch1.sid=FTEX
patch1.folder=/home/oracle/autoupgrade-patching/patch
patch1.patch=RECOMMENDED
  3. Currently, AutoUpgrade can’t download the Oracle Database 19c base release (LINUX.X64_193000_db_home.zip). I must do that manually and place it in the folder specified by the config file entry folder. I expect the next version to handle it automatically.
$ cd /home/oracle/autoupgrade-patching/patch
$ ls -l
total 2987996
-rw-r--r--. 1 oracle oinstall 3059705302 Mar 18  2020 LINUX.X64_193000_db_home.zip
  4. I need to load my My Oracle Support credentials into the AutoUpgrade keystore so AutoUpgrade can use them to download patches. The first time, AutoUpgrade creates the keystore, and I must provide a keystore password:
$ java -jar autoupgrade.jar -config FTEX.cfg -patch -load_password
Processing config file ...

Starting AutoUpgrade Patching Password Loader - Type help for available options
Creating new AutoUpgrade Patching keystore - Password required
Enter password:
Enter password again:
AutoUpgrade Patching keystore was successfully created
  5. I add my credentials. This is the username and password that I use to connect to My Oracle Support:
MOS> add -user daniel.overby.hansen@oracle.com
Enter your secret/Password:
Re-enter your secret/Password:
  6. I save the keystore and convert it to an auto-login keystore:
MOS> save
Convert the AutoUpgrade Patching keystore to auto-login [YES|NO] ? YES
  7. I exit the keystore.
MOS> exit

AutoUpgrade Patching Password Loader finished - Exiting AutoUpgrade Patching

Patch

With one command, I will patch my Oracle Database:

$ java -jar autoupgrade.jar -config FTEX.cfg -patch -mode deploy
AutoUpgrade Patching 24.7.241021 launched with default internal options
Processing config file ...
Loading AutoUpgrade Patching keystore
AutoUpgrade Patching keystore was successfully loaded

Connected to MOS - Searching for specified patches

------------------------------------------------------------
Downloading files to /home/oracle/autoupgrade-patching/patch
------------------------------------------------------------
DATABASE RELEASE UPDATE 19.25.0.0.0
    File: p36912597_190000_Linux-x86-64.zip - VALIDATED

DATAPUMP BUNDLE PATCH 19.25.0.0.0
    File: p37056207_1925000DBRU_Generic.zip - VALIDATED

OJVM RELEASE UPDATE 19.25.0.0.0
    File: p36878697_190000_Linux-x86-64.zip - VALIDATED

OPatch 12.2.0.1.44 for DB 19.0.0.0.0 (Oct 2024)
    File: p6880880_190000_Linux-x86-64.zip - VALIDATED
------------------------------------------------------------

+-----------------------------------------+
| Starting AutoUpgrade Patching execution |
+-----------------------------------------+
1 Non-CDB(s) will be processed
Type 'help' to list console commands
patch> Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished                  [1]
Jobs failed                    [0]
Jobs restored                  [0]
Jobs pending                   [0]

---- Drop GRP at your convenience once you consider it is no longer needed ----
Drop GRP from FTEX: drop restore point AU_PATCHING_9212_FTEX1921000


Please check the summary report at:
/home/oracle/autoupgrade-patching/log/cfgtoollogs/patch/auto/status/status.html
/home/oracle/autoupgrade-patching/log/cfgtoollogs/patch/auto/status/status.log

That’s it! My database is now running on Oracle Database 19.25.0.

Fine Print

AutoUpgrade does out-of-place patching according to our recommendations. It uses a brand-new Oracle home and copies all configuration files from the old to the new Oracle home.

AutoUpgrade currently requires:

  • Oracle Database 19c
  • Single instance database (non-CDB or entire CDB)

If you need more details, please check out the webinar One-Button Patching with AutoUpgrade. The slide deck has additional details.

What About

This is the first release, and naturally, it comes with restrictions. We’re working hard behind the scenes to lift those restrictions.

Will AutoUpgrade support other platforms? Yes. AutoUpgrade supports most of the platforms where you run Oracle Database, including Windows, AIX and SPARC Solaris.

What about in-place patching, using cloned Oracle homes, and support for gold images? It’s in the plans.

Will you support other Oracle Database releases? Yes – anything newer than Oracle Database 19c.

What about Data Guard and RAC? You can use AutoUpgrade for Data Guard. RAC is in the plans.

What about Grid Infrastructure? We’re focusing on Oracle Database. You should have a look at Oracle Fleet Patching and Provisioning.

What about … We’re looking for new, great ideas. Please create an enhancement request in My Oracle Support and send me the number. I’ll check it and add it to our backlog. Feel free to also send me an email with your suggestion.

Happy Patching!

It’s a Wrap – MakeIT 2024

In the last days of May, I had the absolute pleasure of attending the MakeIT 2024 conference. Once again, the conference was held in the beautiful seaside city of Portorož, Slovenia.

The conference is a joint venture with the JCON conference, so you can catch one or two sessions on Java as well. I really value learning about related technologies. This year, I listened in on caching options in Java and the use of the JDBC driver.

You should come next year. There is also a good selection of English speakers, so even for foreigners, there is a reason to travel.

MakeIT 2024 banner

The Slides

Workshop – Patching Oracle Database

This is a full workshop on patching Oracle Database. It includes a hands-on lab, so you can try it on your own.

You should flip through the slides if you want a deep dive on patching.

You can also try the hands-on lab for free.

Best Practices for Upgrade to Oracle Database 23ai

This session and the slides help you prepare for the next long-term support release of Oracle Database.

Patch Me If You Can

This session and the slides give a quick fly-over of the essentials of patching.

Oracle Data Pump – News, Internals, Tips and Tricks

I had the pleasure of talking about Oracle Data Pump and presenting some new features. If you’re curious about a faster way of creating indexes and adding constraints, you can flip through the slides.

Thanks

Thanks to the organizer of MakeIT 2024 for pulling off yet another successful conference, to the sponsors for making it all possible, and to everyone who attended my sessions or the conference in general.

Impressions

Quote of the conference

My Data Pump talk

Going to conference is hard

Low hanging clouds at the airport

How to Apply Patches Out-of-place to Oracle Grid Infrastructure and Oracle Data Guard Using Standby-First

I strongly recommend that you always patch out-of-place. Here’s an example of how to do it on Oracle Grid Infrastructure (GI) and Oracle Data Guard using Standby-First Patch Apply.

Standby-First Patch Apply allows you to minimize downtime to the time it takes to perform a Data Guard switchover. Further, it allows you to test the apply mechanism on the standby database by temporarily converting it into a snapshot standby database.

The scenario:

  • Oracle Grid Infrastructure 19c and Oracle Database 19c
  • Patching from Release Update 19.17.0 to 19.19.0
  • Vertical patching – GI and database at the same time
  • Data Guard setup with two RAC databases
    • Cluster 1: copenhagen1 and copenhagen2
    • Cluster 2: aarhus1 and aarhus2
    • DB_NAME: CDB1
    • DB_UNIQUE_NAME: CDB1_COPENHAGEN and CDB1_AARHUS
  • Using Data Guard broker
  • Patching GI using SwitchGridHome method

Let’s get started!

Step 1: Prepare

I can make the preparations without interrupting the database.

  • I ensure my environment meets the requirements for Standby-First Patch Apply.

  • I deploy new GI homes to all four hosts.

    • I use the SwitchGridHome method.
    • Very important: I only perform step 1 (Prepare a New Grid Home).
    • I apply the Release Update 19.19.0 as part of the deployment using gridSetup.sh ... -applyRU ... -applyOneOffs as described in the blog post.
  • I deploy new database homes to all four hosts.

  • I also recompile invalid objects. This can make it easier for Datapatch later in the process:

    PRIMARY SQL> @?/rdbms/admin/utlrp
    

Step 2: Restart Standby in New Oracle Homes

Now, I can move the standby database to the new GI and database homes.

  • On the standby hosts, aarhus1 and aarhus2, I first move the database configuration files from the old database home to the new one.

  • I change the database configuration in GI. Next time the database restarts, it will be in the new Oracle Home:

    [oracle@aarhus1]$ $OLD_ORACLE_HOME/bin/srvctl modify database \
       -db $ORACLE_UNQNAME \
       -oraclehome $NEW_ORACLE_HOME
    
  • I switch to the new GI on all standby hosts, aarhus1 and aarhus2, by executing step 2 (Switch to the new Grid Home) of the SwitchGridHome method.

    • It involves running gridSetup.sh ... -switchGridHome and root.sh.
    • You can perform the switch in a rolling manner or all at once.
    • The switch restarts the standby database instance, which then runs in the new Oracle Home (a quick verification is sketched after this list).
    • If the profile of grid (like .bashrc) sets the ORACLE_HOME environment variable, I ensure to update it.
  • If I had multiple standby databases, I would process all standby databases in this step.
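After the switch, a quick way to confirm that GI now starts the standby database from the new Oracle home is to check the database resource configuration. A minimal check; the exact output wording may differ between versions:

[oracle@aarhus1]$ $NEW_ORACLE_HOME/bin/srvctl config database \
   -db $ORACLE_UNQNAME | grep -i "oracle home"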

Step 3: Test Standby Database

This is an optional step, but I recommend that you do it.

  • I convert the standby database (CDB1_AARHUS) to a snapshot standby database:
    DGMGRL> convert database CDB1_AARHUS to snapshot standby;
    
  • I test Datapatch on the standby database. It is important that I run the command on the standby database:
    [oracle@aarhus1]$ $ORACLE_HOME/OPatch/datapatch -verbose
    
  • I can also test my application on the standby database.
  • At the end of my testing, I revert the standby database to a physical standby database. The database automatically reverts all the changes made during testing:
    DGMGRL> convert database CDB1_AARHUS to physical standby;
    

Step 4: Switchover

I can perform the previous steps without interrupting my users. This step requires a maintenance window because I am doing a Data Guard switchover.

  • I check that my standby database is ready to become primary. Then, I start a Data Guard switchover:
    DGMGRL> connect sys/<password> as sysdba
    DGMGRL> validate database CDB1_AARHUS;
    DGMGRL> switchover to CDB1_AARHUS;
    

A switchover does not have to mean downtime.

If my application is configured properly, the users will experience a brownout; a short hang, while the connections switch to the new primary database.
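To confirm the roles after the switchover, I can ask the broker. A minimal check:

DGMGRL> show configuration;
DGMGRL> show database CDB1_AARHUS;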

Step 5: Restart New Standby in New Oracle Homes

Now, the primary database runs on aarhus1 and aarhus2. Next, I can move the new standby hosts, copenhagen1 and copenhagen2, to the new GI and database homes.

  • I repeat step 2 (Restart Standby In New Oracle Homes) but this time for the new standby hosts, copenhagen1 and copenhagen2.

Step 6: Complete Patching

Now, both databases in my Data Guard configuration run out of the new Oracle Homes.

Only proceed with this step once all databases run out of the new Oracle Home.

I need to run this step as soon as possible after completing the previous step.

  • I complete the patching by running Datapatch on the primary database (CDB1_AARHUS). I add the recomp_threshold parameter to ensure Datapatch recompiles all objects that the patching invalidated:

    [oracle@aarhus1]$ $ORACLE_HOME/OPatch/datapatch \
       -verbose \
       -recomp_threshold 10000
    
    • I only need to run Datapatch once: on the primary database and only on one of the instances.
  • I can run Datapatch while users are connected to my database. A quick check of the result is sketched after this list.

  • Optionally, I can switch back to the original primary database on copenhagen1 and copenhagen2, if I prefer to run it there.
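A quick way to verify the result: list the binary patches in the new Oracle home and check that no patch action ended with a non-success status. A minimal sketch; if anything shows up, check the Datapatch log files:

[oracle@aarhus1]$ $ORACLE_HOME/OPatch/opatch lspatches

[oracle@aarhus1]$ sqlplus / as sysdba<<'EOF'
   -- Any row returned here deserves a closer look in the Datapatch logs.
   select patch_id, action, status
   from dba_registry_sqlpatch
   where status <> 'SUCCESS';
EOF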

That’s it. Happy patching!

Appendix

Further Reading

Other Blog Posts in This Series

Patching Oracle Grid Infrastructure And Oracle Data Guard

How do you patch Oracle Grid Infrastructure 19c (GI) when Oracle Data Guard protects your Oracle Database?

I had a talk with Ludovico Caldara, the product manager for Oracle Data Guard, about it:

To provide more details, I will use the following setup as an example:

  • Data Guard setup with two databases.
  • Each database is a 2-node RAC database.
  • Sites are called copenhagen and aarhus.

Patching Oracle Grid Infrastructure Only

  1. Prepare new GI homes on all nodes in both sites (copenhagen and aarhus).
  2. Disable Fast-Start Failover (FSFO) for the reasons described below; the broker commands are sketched after this list. You can leave the observer running.
  3. Start with the standby site, aarhus.
  4. Complete the patching process by switching to the new GI home in a rolling manner on all nodes at aarhus site.
  5. If you use Active Data Guard and have read-only sessions in your standby database, you should ensure that instances are properly drained before restarting the GI stack (via root.sh).
  6. Proceed with the primary site, copenhagen.
  7. Complete the patching process by switching to the new GI home in a rolling manner on all nodes at copenhagen site.
  8. Be sure to handle draining properly to ensure there are no interruptions.
  9. Re-enable FSFO.
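The FSFO part (steps 2 and 9) could look like this in the broker; a minimal sketch, assuming you connect with DGMGRL as usual:

DGMGRL> disable fast_start failover;
DGMGRL> enable fast_start failover;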

Later, when you want to patch the database, you can follow the standby-first method described in Oracle Patch Assurance – Data Guard Standby-First Patch Apply (Doc ID 1265700.1). If the database patches you install are RAC Rolling Installable (like Release Updates), you should choose option 1 in phase 3 to avoid any downtime or brownout.

Alternative Approach

If you have many nodes in your cluster and an application that doesn’t behave well during draining, consider switching over to the standby site instead of patching the primary site in a rolling manner. When you switch over, there is only one interruption, whereas a rolling patch apply causes many small interruptions.

  1. Patch standby site, aarhus.
  2. Switch over to aarhus.
  3. Patch former primary, copenhagen.

What If You Want to Patch the Database At the Same Time?

Out-of-place SwitchGridHome

You get complete control over the process with Out-of-place SwitchGridHome. It is my preferred method. There are more commands to execute, but it doesn’t matter if you automate it.

Here is an overview of the process. You can use many of the commands from this blog post:

  1. Prepare new GI homes using gridSetup. Be sure to apply the needed patches. Do it on one node in both primary (copenhagen) and standby site (aarhus). The process will copy the new GI home to all other nodes in the cluster. Do not execute root.sh.
  2. Prepare new database homes. Be sure to apply the needed patches. Here is an example. Do it on one node in both primary (copenhagen) and standby site (aarhus). The process will copy the new database home to all other nodes in the cluster. Remember to execute root.sh.
  3. Disable FSFO.
  4. Start with the standby site, aarhus.
  5. Configure the standby database to start in the new database home:
    $ $OLD_ORACLE_HOME/bin/srvctl modify database \
         -db $STDBY_ORACLE_UNQNAME \
         -oraclehome $NEW_ORACLE_HOME
    
  6. If you use Active Data Guard and have read-only sessions connected, drain the instance.
  7. Switch to the new GI home using gridSetup.sh -switchGridHome ... and root.sh (a command sketch follows this list).
    1. root.sh restarts the entire GI stack. When it restarts the database, the database instance runs in the new database home.
    2. Repeat the process on all nodes in the standby site (aarhus).
  8. Proceed with the primary site, copenhagen.
  9. Configure the primary database to start in the new database home:
    $ $OLD_ORACLE_HOME/bin/srvctl modify database \
         -db $PRMY_ORACLE_UNQNAME \
         -oraclehome $NEW_ORACLE_HOME
    
  10. Be sure to drain the instance.
  11. Switch to the new GI home using gridSetup.sh -switchGridHome ... and root.sh.
    1. root.sh restarts the entire GI stack. When it restarts the database, the database instance runs in the new database home.
    2. Repeat the process on all nodes in the primary site (copenhagen).
  12. Execute datapatch -verbose on one of the primary database instances to finish the patch apply.
  13. Re-enable FSFO.
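The switch itself (steps 7 and 11) could look like this. Just a sketch: I am assuming NEW_GRID_HOME points to the prepared Grid home, and the exact gridSetup.sh options depend on your setup:

$ $NEW_GRID_HOME/gridSetup.sh -switchGridHome -silent

# Afterwards, as root, when the installer instructs you to:
$ $NEW_GRID_HOME/root.sh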

Out-of-place OPatchAuto

Out-of-place OPatchAuto is a convenient way of patching because it also automates the database operations. However, I still recommend the Out-of-place SwitchGridHome method because it gives you more control over draining.

Here is an overview of the process:

  1. Deploy new GI and database homes using opatchauto apply ... -prepare-clone. Do it on all nodes in both primary (copenhagen) and standby site (aarhus). Since you want to patch GI and database homes, you should omit the -oh parameter.
  2. Disable FSFO.
  3. Start with the standby site, aarhus.
  4. Complete patching of all nodes in the standby site (aarhus) using opatchauto apply -switch-clone.
    1. When OPatchAuto completes the switch on a node, it takes down the entire GI stack on that node, including database instance.
    2. GI restarts using the new GI home. But the database instance still runs in the old database home.
    3. On the last node, after the GI stack has been restarted, all database instances restart again to switch to the new database home. This means that each database instance restarts twice.
  5. Proceed with the primary site, copenhagen.
  6. Complete patching of all nodes in the primary site (copenhagen) using opatchauto apply -switch-clone.
    1. The procedure is the same as on the standby site.
    2. In addition, OPatchAuto executes Datapatch to complete the database patching.
  7. Re-enable FSFO.

Fast-Start Failover

When you perform maintenance operations, like patching, consider what to do about Fast-Start Failover (FSFO).

If you have one standby database

  • Single instance standby: I recommend disabling FSFO. If something happens to the primary database while you are patching the standby site, you don’t want to switch over or fail over automatically. Since the standby site is being patched, the standby database might be about to restart. You should evaluate the situation and determine what to do rather than relying on FSFO to handle it.
  • RAC standby: I recommend disabling FSFO for the same reasons as above. You could argue that the standby database is up all the time if you perform rolling patching. That’s correct, but nodes are restarted as part of the patching process, and services are relocated. Having sessions switch over or fail over while you are in the middle of a rolling patch apply is a delicate situation. Technically, it works; the Oracle stack can handle it. But I prefer to evaluate the situation before switching or failing over, unless you have a super-cool application that can handle it transparently.

Nevertheless, leaving FSFO enabled when you patch GI or a database is fully supported.

If you have multiple standby databases

I recommend keeping FSFO enabled if you have multiple standby databases.

When you patch one standby database, you can set FastStartFailoverTarget to the other standby database. When patching completes, you can set FastStartFailoverTarget to the first standby database and continue patching the second standby database. This keeps your primary database protected at all times.
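With the broker, moving the failover target could look like this; a minimal sketch with hypothetical database names (CDB1_PRIMARY, CDB1_STANDBY2):

DGMGRL> edit database CDB1_PRIMARY set property FastStartFailoverTarget = 'CDB1_STANDBY2';
DGMGRL> show database CDB1_PRIMARY 'FastStartFailoverTarget';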

The Easy Way

As shown above, you can patch Oracle Grid Infrastructure even when you have Oracle Data Guard configured. But why not take the easy way and use Oracle Fleet Patching and Provisioning (FPP)?

FPP automatically detects the presence of Data Guard and executes the commands in the appropriate order, including invoking Datapatch when needed.

If you need to know more, you can reach out to Philippe Fierens, product manager for FPP. He is always willing to get you started.

Happy Patching

Appendix

Other Blog Posts in This Series

Can I Run Datapatch When Users Are Connected

The short answer is: Yes! The longer answer is: Yes, but on very busy systems or in certain situations, you might experience a few hiccups.

The obvious place to look for the answer would be in the documentation. Unfortunately, there is no Patching Guide similar to the Upgrade Guide. The information in this blog post is pieced together from many different sources.

A few facts about patching with Datapatch:

  • The database must be open in read write mode.
  • You can’t run Datapatch on a physical standby database – even if it’s open (Active Data Guard).
  • A patch is not fully installed until you have executed Datapatch successfully.

How To

First, let me state that it is fully supported to run Datapatch on a running database with users connected.

The procedure:

  1. Install a new Oracle Home and use OPatch to apply the desired patches.
  2. Shut down the database.
  3. Restart the database in the new, patched Oracle Home.
  4. Downtime is over! Users are allowed to connect to the database.
  5. Execute ./datapatch -verbose.
  6. End of procedure. The patch is now fully applied.

Often, users move step 4 (Downtime is over) to the end of the procedure. That’s of course also perfectly fine, but it extends the downtime and is usually unnecessary.

What About RAC and Data Guard

The above procedure is exactly what happens in a rolling patch apply on a RAC database. When you perform a rolling patch apply on a RAC database, there is no downtime at all. You use opatchauto to patch a RAC database. opatchauto restarts all instances of the database in the patched Oracle Home in a rolling manner. Finally, it executes datapatch on the last node. Individual instances are down temporarily, but the database is always up.

It is a similar situation when you use the Standby First Patch Apply. First, you restart all standby databases in the patched Oracle Home. Then, you perform a switchover and restart the former primary database in the patched Oracle Home. Finally, you execute datapatch to complete the patch installation. You must execute datapatch on the primary database.

Either way, don’t use Datapatch until all databases or instances run on the new, patched Oracle Home.

That’s It?

Yes, but I did write initially that there might be hiccups.

Waits

Datapatch connects to the database like any other session to make changes inside the database. These changes could be:

  • Creating new tables
  • Altering existing tables
  • Creating or altering views
  • Recreating PL/SQL packages like DBMS_STATS

Imagine this scenario:

  1. You restart the database in the patched Oracle Home.
  2. A user connects and starts to use DBMS_STATS.
  3. You execute datapatch.
    1. Datapatch must replace DBMS_STATS to fix a bug.
    2. Datapatch executes CREATE OR REPLACE PACKAGE SYS.DBMS_STATS .....
    3. The Datapatch database session goes into a wait.
  4. The user is done with DBMS_STATS.
  5. The Datapatch session comes out of the wait and replaces the package.

In this scenario, the patching procedure was prolonged due to the wait. But it completed eventually.

Hangs

From time to time, we are told that Datapatch hangs. Most likely, it is not a real hang, but just a wait on a lock. You can identify the blocking session by using How to Analyze Library Cache Timeout with Associated: ORA-04021 ‘timeout occurred while waiting to lock object %s%s%s%s%s.’ Errors (Doc ID 1486712.1).

You might even want to kill the blocking session to allow Datapatch to do its work.
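A minimal sketch of how to find a candidate blocker from SQL; note that library cache waits do not always surface in BLOCKING_SESSION, so the MOS note above is the more thorough route:

sqlplus / as sysdba<<'EOF'
   -- Sessions that currently block someone else.
   select sid, serial#, username, program, event
   from v$session
   where sid in (select blocking_session from v$session
                 where blocking_session is not null);

   -- Hypothetical example; only kill a session when you know what it is doing.
   -- alter system kill session '123,45678' immediate;
EOF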

Timeouts

What will happen in the above scenario if the user never releases the lock on DBMS_STATS? By default, Datapatch waits for 15 minutes (controlled by _kgl_time_to_wait_for_locks) before throwing an error:

ORA-04021: timeout occurred while waiting to lock object

To resolve this problem, restart Datapatch and ensure that there are no blocking sessions. Optionally, increase the DDL timeout:

./datapatch -ddl_lock_timeout <time-in-sec>

Really Busy Databases

I recommend patching at off-peak hours to reduce the likelihood of hitting the above problems.

If possible, you can also limit the activity in the database while you perform the patching. If your application uses, for example, DBMS_STATS and locking on that object is often a problem, you can hold off those sessions for a little while.

The Usual Suspects

Based on my experience, when there is a locking situation, these are often the culprits:

  • Scheduler Jobs – if you have jobs running very frequently, they may all try to start when you restart your database in the new Oracle Home. Suspend the workload temporarily by setting job_queue_processes to 0 (see the sketch after this list).
  • Advanced Queuing – if you have a lot of activity happening via AQ, you can suspend it temporarily by setting aq_tm_processes to 0. If you disable the scheduler, you also disable AQ.
  • Materialized Views – when the database refreshes materialized views, it uses internal functionality (or dependent objects) that Datapatch needs to replace. By disabling the scheduler, you also disable the materialized view refreshes.
  • Backup jobs – I have seen several situations where Datapatch couldn’t replace the package dbms_backup_restore because the backup system took archive backups.
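Suspending the scheduler and AQ could look like this; a minimal sketch, and note the current values first so you can restore them afterwards:

sqlplus / as sysdba<<'EOF'
   -- Remember the current settings so you can restore them after patching.
   show parameter job_queue_processes
   show parameter aq_tm_processes

   alter system set job_queue_processes = 0 scope = memory;
   alter system set aq_tm_processes = 0 scope = memory;
EOF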

Last Resort

If you want to be absolutely sure no one intervenes with your patching, use this approach. But it means downtime:

  1. SQL> startup restrict
  2. ./datapatch -verbose
  3. SQL> alter system disable restricted session;

I don’t recommend starting in upgrade mode. To get out of upgrade mode, the database must be restarted, which extends the downtime window.

Datapatch And Resources

How many resources does Datapatch need? Should I be worried about Datapatch depleting the system?

No, you should not. The changes that Datapatch needs to make are not resource-intensive. However, a consequence of the DDL statements might be object invalidation. But even here, you should not worry. Datapatch will automatically recompile any ORACLE_MAINTAINED object that was invalidated by the patch apply. But the recompilation happens serially, i.e., it requires fewer resources.

Of course, if your system is running at 99% capacity, it might be a problem. On the other hand, if your system is at 99%, patching problems are probably the least of your worries.

What About OJVM

If you are using OJVM and you apply the OJVM bundle patch, things are a little different.

Release                      RAC Rolling   Standby-First   Datapatch
---------------------------  ------------  --------------  ----------------------------------------------
Oracle Database 21c          Fully         No              No Datapatch downtime.
Oracle Database 19c + 18c    Partial       No              No Datapatch downtime, but the java system is
                                                           patched, which requires a ~10 second outage.
                                                           Connected clients using java will receive
                                                           ORA-29548.
Oracle Database 12.2 + 12.1  No            No              Datapatch must execute in upgrade mode.
Oracle Database 11.2.0.4     No            No              Similar to 12.2 and 12.1, except you don't use
                                                           Datapatch.

Mike Dietrich also has a good blog that you might want to read: Do you need STARTUP UPGRADE for OJVM?

What About Oracle GoldenGate

You should stop Oracle GoldenGate when you execute datapatch. When datapatch is done, you can restart Oracle GoldenGate.

If you are manually recompiling objects after datapatch, I recommend that you restart Oracle GoldenGate after the recompilation.

The above applies even if the patches being applied do not contain any Oracle GoldenGate specific patches.

Oracle GoldenGate uses several objects owned by SYS. When datapatch is running, it might change some of those objects, and in that case, unexpected errors might occur.

Recommendations

Based on my experience, these are my recommendations:

Before Patching

  • Recompile invalid objects (utlrp).
  • Perform a Datapatch sanity check ($ORACLE_HOME/OPatch/datapatch -sanity_checks).
  • Postpone your backup jobs.
  • Stop any Oracle GoldenGate processes that connect to the database.
  • Disable the scheduler.

Patching

  • Always use the latest OPatch.
  • Always use out-of-place patching, even for RAC databases.
  • Always enable verbose output in Datapatch ($ORACLE_HOME/OPatch/datapatch -verbose).

After Patching

  • If applicable, re-enable
    • Backup jobs.
    • Oracle GoldenGate processes.
    • The scheduler.
  • Check Datapatch output. If Datapatch failed to recompile any objects, a message is printed to the console. If you patch interactively, you can find the same information in the log files. A query for leftover invalid objects is sketched below.
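To spot leftover invalid Oracle-maintained objects, a query like this helps (a minimal sketch; utlrp.sql recompiles them if needed):

sqlplus / as sysdba<<'EOF'
   -- Oracle-maintained objects still invalid after patching; run utlrp.sql if any appear.
   select owner, object_name, object_type
   from dba_objects
   where status = 'INVALID'
   and oracle_maintained = 'Y';
EOF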

Still Don’t Believe Me?

In Autonomous Database (ADB), there is no downtime for patching. An ADB runs on RAC and patching is fully rolling. The automation tooling executes Datapatch while users are connected to the database.

Of course, one might run into the same issues described above. But Oracle has automation to handle the situation. If necessary, the database kills any session blocking Datapatch. During the defined maintenance window of your ADB, you may end up in a situation where a long-running, blocking session is terminated because it was blocking a Datapatch execution. But if you minimize your activity in the defined maintenance windows, the chances of that happening are minimal.

Conclusion

Go ahead and patch your database with Datapatch while users are connected.

Further Reading