AutoUpgrade New Features: Patch OCW Component In Oracle Home

Every Oracle home contains an Oracle Clusterware (OCW) component. It’s used to interact with Grid Infrastructure when you are using Oracle Restart or Oracle RAC. But even when you don’t use those, the component is still part of your Oracle home.

The Database Release Update doesn’t update the OCW component in your Oracle home. You must use the Grid Infrastructure Release Update for that.

In AutoUpgrade, it is easy to update the OCW component. Let’s see how it works.

How To Also Patch The OCW Component

  • My database hasn’t been patched for a while:

    $ORACLE_HOME/OPatch/opatch lspatches
    
    35648110;OJVM RELEASE UPDATE: 19.21.0.0.231017 (35648110)
    35787077;DATAPUMP BUNDLE PATCH 19.21.0.0.0
    35643107;Database Release Update : 19.21.0.0.231017 (35643107)
    29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
    
    • I’ve never updated the OCW component, so it’s still on the patch level of the base release, 19.3.0.0.0.
  • I use the latest version of AutoUpgrade:

    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  • I create an AutoUpgrade config file, FTEX.cfg:

    global.global_log_dir=/home/oracle/autoupgrade-patching/log
    global.keystore=/home/oracle/autoupgrade-patching/keystore
    patch1.source_home=/u01/app/oracle/product/19
    patch1.target_home=/u01/app/oracle/product/19_27
    patch1.sid=FTEX
    patch1.folder=/home/oracle/patch-repo
    patch1.patch=OPATCH,RU,OCW,DPBP,OJVM
    
    • When I add OCW to the patch parameter, AutoUpgrade also downloads the GI Release Update and updates the OCW component.
  • I patch the database:

    java -jar autoupgrade.jar -config FTEX.cfg -patch -mode deploy
    
  • When AutoUpgrade completes, I check the new patch level:

    $ORACLE_HOME/OPatch/opatch lspatches
    
    37499406;OJVM RELEASE UPDATE: 19.27.0.0.250415 (37499406)
    37654975;OCW RELEASE UPDATE 19.27.0.0.0 (37654975)
    37777295;DATAPUMP BUNDLE PATCH 19.27.0.0.0
    37642901;Database Release Update : 19.27.0.0.250415 (37642901)
    
    • Notice how the OCW Release Update is now 19.27.0.0.0.

Some Details

  • Because I specified OCW, AutoUpgrade also downloads the GI Release Update when it downloads the patches:

     --------------------------------------------
     Downloading files to /home/oracle/patch-repo
     --------------------------------------------
     DATABASE RELEASE UPDATE 19.27.0.0.0
         File: p37642901_190000_Linux-x86-64.zip - LOCATED
     
     DATAPUMP BUNDLE PATCH 19.27.0.0.0
         File: p37777295_1927000DBRU_Generic.zip - LOCATED
     
     GI RELEASE UPDATE 19.27.0.0.0
         File: p37641958_190000_Linux-x86-64.zip / 83%
    
  • Including OCW is a smart way of downloading the GI Release Update. You can use it to patch your Grid Infrastructure.

  • In Oracle Database 23ai, you can download fully updated gold images. Besides having the latest Release Update, they also come with fully updated OCW components.

Is It Needed?

Should you update the OCW component when you patch your Oracle Database? Is it needed if you don’t use Oracle Restart, Oracle RAC, or Oracle ASM?

It is optional, but even if no GI Stack (ASM, Clusterware or RAC) is used inside the server, it is recommended not to ignore the security patches of the installed components. And apply the most recent OCW Patch.

How to apply OCW Release Update patches on db_home non-RAC / non-ASM (Doc ID 2970542.1)

Mike Dietrich has a good point as well:

As I neither use RHP/FPP or any of the HA components nor EM in my tiny little lab environments, I’m pretty certain that I won’t need the OCW bundle. But this may be different in your environments. And it doesn’t harm to apply it of course.

Adding the Oracle 19.14.0 OCW / GI bundle patch to my database home

Further, I know many customers who never patch the OCW component and haven’t run into related problems.

My recommendation: Update the OCW component when you patch your Oracle Database. With AutoUpgrade, it is so easy that there’s no reason not to.

Happy patching!

The Easiest Way To Download Patches for Oracle Grid Infrastructure

Whether you patch Oracle Grid Infrastructure manually, using your own automation, or Oracle Fleet Patching and Provisioning, it all starts with downloading the patches.

Although AutoUpgrade doesn’t patch Grid Infrastructure, it can still download the latest version of OPatch and the GI Release Update.

How To Download Grid Infrastructure Release Update

I have already configured the AutoUpgrade keystore and saved my My Oracle Support credentials. You find instructions on how to do that here (search for Creating an AutoUpgrade Keystore).
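
If you haven’t set up the keystore yet, a minimal sketch looks like this; it uses the config file created in step 2 below, and the -load_password option opens an interactive prompt where you enter and save your My Oracle Support credentials (the exact prompts vary by AutoUpgrade version):

java -jar autoupgrade.jar -config get-gi-patches.cfg -load_password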

  1. I download the latest version of AutoUpgrade:

    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. I create a config file called get-gi-patches.cfg:

    global.global_log_dir=/home/oracle/autoupgrade/logs
    global.keystore=/home/oracle/autoupgrade/keystore
    
    patch1.patch=RU,OPATCH,OCW
    patch1.target_version=19
    patch1.platform=LINUX.X64
    patch1.folder=/home/oracle/autoupgrade/patches
    
  3. I create the folder for the patches (config file parameter folder).

    mkdir -p /home/oracle/autoupgrade/patches
    
  4. I download the patches by starting AutoUpgrade in download mode:

    java -jar autoupgrade.jar -config get-gi-patches.cfg -mode download
    
  5. Here’s the output from AutoUpgrade:

    AutoUpgrade Patching 25.3.250509 launched with default internal options
    Processing config file ...
    Loading AutoUpgrade Patching keystore
    AutoUpgrade Patching keystore is loaded
    
    Connected to MOS - Searching for specified patches
    
    -----------------------------------------------------
    Downloading files to /home/oracle/autoupgrade/patches
    -----------------------------------------------------
    DATABASE RELEASE UPDATE 19.27.0.0.0
        File: p37642901_190000_Linux-x86-64.zip - VALIDATED
    
    GI RELEASE UPDATE 19.27.0.0.0
        File: p37641958_190000_Linux-x86-64.zip - VALIDATED
    
    OPatch 12.2.0.1.46 for DB 19.0.0.0.0 (Apr 2025)
        File: p6880880_190000_Linux-x86-64.zip - VALIDATED
    -----------------------------------------------------   
    
  6. That’s it! After a few minutes, I’ve downloaded the latest GI Release Update and OPatch:
    • GI RELEASE UPDATE 19.27.0.0.0 – p37641958_190000_Linux-x86-64.zip
    • OPatch 12.2.0.1.46 for DB 19.0.0.0.0 (Apr 2025) – p6880880_190000_Linux-x86-64.zip

Is it really that easy? Yes, it is…

I can now patch my GI Oracle home – for Oracle RAC Database and Oracle Restart.

Happy patching!

Appendix

I Want To Download A Previous Release Update

Rather than downloading the latest Release Update, you can choose to download a specific Release Update. You can specify that in the patch parameter:

patch1.patch=RU:19.26,OPATCH,OCW
  • Notice how the RU keyword has a suffix specifying the exact Release Update.

Which Grid Infrastructure Should I Install on My Brand New Exadata?

I received a question from a customer:

We just got a brand-new Exadata. We will use it for critical databases and stay on Oracle Database 19c for the time being. Which Grid Infrastructure should we install: 19c or 23ai?

I recommend installing Oracle Grid Infrastructure 19c (GI 19c).

The Reason

GI 19c has been out since 2019. It is currently at the 23rd Release Update (19.26), and used on many thousands of systems. Those systems include some of the most critical systems you can find.

GI 19c is a very proven release that has reached a very stable state. Proven and stable – two attributes that are very valuable for a mission-critical system.

Additionally, I would apply the latest Release Update – at the time of writing that’s 19.26. Also, I would include fixes from Oracle Database 19c Important Recommended One-off Patches (Doc ID 555.1).

Further, I would ensure the databases are on the same Release Update, 19.26. If that’s impossible, at least keep the database within two Release Updates of Grid Infrastructure, so 19.24 at a minimum.

In this case, the customer migrates the databases from a different platform onto the new Exadata system. The source database is already running GI 19c, and keeping the same GI release on the new system means there’s one less change to deal with.

Why Not Oracle Grid Infrastructure 23ai?

First, there’s absolutely nothing wrong with the quality of Oracle Grid Infrastructure 23ai (GI 23ai).

When I recommend GI 19c over GI 23ai, it is a matter of choosing between two good options.

But GI 23ai has only been out for Exadata for a little over half a year, much less than GI 19c, which is about to reach six years of general availability.

Every piece of software has a few rough edges to grind off, and I would expect that for GI 23ai as well.

For a mission-critical system, there’s no need to take any chances, which is why I recommend GI 19c.

When To Use Oracle Grid Infrastructure 23ai

If the customer wants to use Oracle Database 23ai – either now or in the foreseeable future – then they should install GI 23ai. No doubt about that.

Also, for less critical systems, including test and development systems, I would recommend GI 23ai as well.

Why Not Both?

I added this after a report on LinkedIn by my colleague, Alex Blyth.

Alex agrees with my recommendation but adds the following:

What I also would have said is, you can have your cake, and you can eat it too. This means Exadata can do more than one thing at a time. With virtualization, you can have a VM cluster with 19c Database and GI for critical databases, and another VM cluster (up to 50 per DB server with X10M / X11M and the latest Exadata System Software) that is running DB and GI 23ai. What’s more, you could also deploy Exadata Exascale and take advantage of the high-performance shared storage for the VM images, and the awesome instant database snapshot and cloning capabilities for DB 23ai.

He raises a really good point.

Exadata is the world’s best database platform, and the flexibility it offers with virtualization would give this customer the stability they need for their mission-critical databases while also letting them get started with the many new features in 23ai.

The best of both worlds!

Final Words

Although I work in the upgrade team and love upgrades, I don’t recommend them at any cost.

For mission-critical systems, stability and maturity are paramount, and that influences my recommendation of GI 19c.

But get started with Oracle Grid Infrastructure and Database 23ai today. Install it in your lab and then on your less important systems. There are many great enhancements to explore in Oracle Database 23ai.

Prepare yourself for the next release in due time.

Recreate Database Services After Moving An Oracle Database

Oracle recommends that you connect to the database via custom services. In your connect string, don’t connect:

  • Directly to the SID
  • Or to the database’s default service (the service with the same name as the database).

When you move a database around, in some situations, the database does not retain these services, for instance, when you:

  • Migrate a non-CDB to PDB using refreshable clone PDB
  • Upgrade a PDB using refreshable clone PDB
  • Move a PDB to a different CDB using refreshable clone PDB
  • Migrate a database using Full Transportable Export/Import or transportable tablespaces

The services are important because your application and clients connect to the database through them. Also, a service might define important properties for things like Application Continuity or set a default drain timeout.
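
To make that concrete, here is a sketch of a Clusterware-managed service carrying such properties. The service name, the PDB name, and the attribute values are examples only; set them to whatever your application needs:

srvctl add service -db $ORACLE_UNQNAME -service SALESGOLD \
   -pdb SALESPDB \
   -failovertype AUTO \
   -commit_outcome TRUE \
   -drain_timeout 60 \
   -stopoption IMMEDIATE
srvctl start service -db $ORACLE_UNQNAME -service SALESGOLD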

Here’s how to recreate such services.

Database Managed Services

A database-managed service is one that you create directly in the database using dbms_service:

begin
   dbms_service.create_service(
      service_name=>'SALESGOLD',
      network_name=>'SALESGOLD');
   dbms_service.start_service('SALESGOLD');   
end;
/

After the migration, you must manually recreate the service in the target database.

dbms_metadata does not support services, so you must query v$services in the source database to find the service’s definition. Then, construct a call to dbms_service.create_service and dbms_service.start_service.
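
A minimal sketch of that query, with an illustrative filter; check the output for any other attributes your services rely on:

-- List the service definitions in the source database
-- (the filter below is only an example; adjust it to your naming)
select name, network_name, pdb
from   v$services
where  name not like 'SYS$%';

For each custom service returned, construct the matching dbms_service.create_service and dbms_service.start_service calls in the target database, just like the example above.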

Clusterware Managed Services

I recommend defining services in Grid Infrastructure if you are using Oracle RAC or using Oracle Restart to manage your single instance database. Luckily, Grid Infrastructure supports exporting and importing service definitions.

  • You export all the services defined in the source database:

    srvctl config service \
       -db $ORACLE_UNQNAME \
       -exportfile my_services.json \
       -S 2
    
  • You edit the JSON file.

    1. Remove the default services. Keep only your custom services.
    2. Remove the dbunique_name attribute for all services.
    3. If you are renaming the PDB, you must update the pluggable_database attribute.
    4. Update the res_name attribute so it matches the resource name of the target database. You probably just need to replace the db_unique_name part of the resource name. You can find the resource name as grid by executing crsctl stat resource -t (see the filtered example after these steps).
  • You can now import the services into the target database:

    srvctl add service \
       -db $ORACLE_UNQNAME \
       -importfile my_services.json
    
  • Finally, you start the service(s):

    export ORACLE_SERVICE_NAME=SALESGOLD
    srvctl start service \
       -db $ORACLE_UNQNAME \
       -service $ORACLE_SERVICE_NAME
    
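Finding the correct res_name for the editing step above is easier if you filter the crsctl output to database resources only. This is just a convenience on top of the plain crsctl stat resource -t; run it as grid:

crsctl stat resource -t -w "TYPE = ora.database.type"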

Additional Information

  • The export/import features work from Oracle Database 19c, Release Update 19 and beyond.
  • You can also export/import the definition of:
    • Database: srvctl config database -db $ORACLE_UNQNAME -S 2 -exportfile my_db.json
    • PDB: srvctl config pdb -db $ORACLE_UNQNAME -S 2 -exportfile my_pdb.json
    • ACFS filesystem: srvctl config filesystem -S 2 -exportfile /tmp/my_filesystem.json
  • At the time of writing, this functionality hasn’t made it into the documentation yet. Consider yourself lucky knowing this little hidden gem.

Final Words

Remember to recreate your custom services after a migration. Your application needs the service to connect in a proper way.

Further Reading

Grid Infrastructure 19c Out-Of-Place Patching Fails on AIX

I’m a strong advocate for out-of-place patching, and I can see that many of my blog readers are interested in that topic as well. Thank you for that!

But a reader notified me about a specific issue that occurs during out-of-place patching of Oracle Grid Infrastructure 19c. The issue occurs when using OPatchAuto as well as SwitchGridHome.

Normally, I recommend creating a new Oracle home using the base release (so 19.3.0) and then applying the latest Release Update on top:

# Unzipping base release, 19.3.0
unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
# Install and patch Oracle home
./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
   -applyRU ...

However, that fails on AIX:

Preparing the home to patch...
Applying the patch /u01/software/36916690/36916690/36917416...
OPatch command failed while applying the patch. For details look at the logs 
from /u01/app/19.25.0/grid/cfgtoollogs/opatchauto/.

The log file has a little more detail:

DeleteAction : Destination File ''/u01/app/19.25.0/grid/perl/bin/perl'' is not writeable.
Copy Action: Destination File ''/u01/app/19.25.0/grid/perl/bin/perl'' is not writeable.

The Solution

There is already a MOS note that describes a potential workaround:

Out of place (OOP) patching of 19c Release Update (RU) fails on AIX (Doc ID 2948468.1)

But the reader leaving the comment asked for a few more words.

My Words

First, you should continue to use out-of-place patching despite the above issue.

Second, instead of using the base release (19.3.0) as the basis for any new Oracle home, you must create a new base release. One that doesn’t contain the error that leads to the above issue.

  1. On a non-prod system, create a brand-new Grid Infrastructure installation using the base release (19.3.0).
  2. Use in-place patching to patch it to the latest Release Update (currently 19.25.0). You need to add a few parameters to the opatchauto command:
    <path_to_temp_home>/OPatch/opatchauto \
       apply <path-to-patch> \
       -binary \
       -oh <path_to_temp_home> \
       -target_type cluster
    
  3. Create a gold image of this 19.25.0 Oracle home.
    export NEW_GRID_HOME=/u01/app/19.25.0/grid
    $NEW_GRID_HOME/gridSetup.sh -createGoldImage \
       -destinationLocation $GOLDIMAGEDIR \
       -name gi_gold_image.zip \
       -silent
    
  4. You now have a new base release. It is almost as pristine as the 19.3.0 base release. It just contains the additional Release Update (19.3.0 + 19.25.0).
  5. When you need to patch another system, use out-of-place patching using SwitchGridHome. But instead of using the base release 19.3.0, you use your new gold image that is already patched to 19.25.0.
    #Don't do this
    #unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
    #Do this
    unzip -oq /u01/software/gi_gold_image.zip
    
  6. When you install using gridSetup.sh, you don’t have to apply the Release Update because the gold image already contains it. You can still apply any one-offs you need.
    ./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
       -applyOneOffs <path_to_one_offs> \
       ...
    
  • There are no other changes to the procedure.

The issue is fixed in bug 34962446. However, it doesn’t seem to be available in 19c, so you have to repeat the above process for every Release Update.

If you still run into issues patching the Perl component, take a look at this MOS note:

Final Words

  • Is it a viable workaround? Yes, I believe so. There’s a little more work to do, but on the other hand, you’ve now started to use gold images, which is a huge advantage.

  • If you continue patching in-place or out-of-place using OPatchAuto, be sure to clean up the Oracle home from time to time.

  • The issue occurs starting with Release Update 19.18 because that’s where Oracle started to add patches to Perl in the Oracle home.

  • Thanks to Axel Dellin for helping me with some details.

You should not let this little bump in the road prevent you from using out-of-place patching.

Happy patching!

How to Patch Oracle Restart 19c and Oracle Database Using Out-Of-Place Switch Home

Let me show you how I patch Oracle Restart and Oracle Database 19c using the out-of-place method by switching to the new Oracle homes.

The advantages of this solution:

  • I get more control over the process
  • I can perform the entire operation with just one database restart
  • I can create my Oracle homes using gold images
  • I can prepare the new Oracle homes in advance
  • Overall, I find this method less risky

My demo system

  • Single instance database in Oracle Restart configuration
  • Runs Oracle Linux
  • GI and database home are currently on 19.24

I want to:

  • patch to 19.25
  • patch both the GI and database home in one operation

Preparation

I need to download:

  1. The base releases of:
    • Oracle Grid Infrastructure (LINUX.X64_193000_grid_home.zip)
    • Oracle Database (LINUX.X64_193000_db_home.zip)
  2. Latest OPatch from My Oracle Support (6880880).
  3. Patches from My Oracle Support:
    • 19.25 Release Update for Grid Infrastructure (36916690)
    • Matching OJVM bundle patch (36878697)
    • Matching Data Pump bundle patch (37056207)

You can use AutoUpgrade to easily download GI patches.

I place the software in /u01/software.

How to Patch Oracle Restart 19c and Oracle Database

1. Prepare a New GI Home

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

  1. I need to create a folder for the new GI home. I must do this as root:

    [root@node1]$ mkdir -p /u01/app/19.25.0/grid
    [root@node1]$ chown -R grid:oinstall /u01/app/19.25.0
    [root@node1]$ chmod -R 775 /u01/app/19.25.0
    
  2. I switch to the GI home owner, grid.

  3. I extract the base release of Oracle Grid Infrastructure into the new GI home:

    [grid@node1]$ export OLDGRIDHOME=$ORACLE_HOME
    [grid@node1]$ export NEWGRIDHOME=/u01/app/19.25.0/grid
    [grid@node1]$ cd $NEWGRIDHOME
    [grid@node1]$ unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
    

    Optionally, I can use a golden image.

  4. I update OPatch to the latest version:

    [grid@node1]$ cd $NEWGRIDHOME
    [grid@node1]$ rm -rf OPatch
    [grid@node1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  5. Then, I check the Oracle Grid Infrastructure prerequisites. I am good to go if the check doesn’t write any error messages to the console:

    [grid@node1]$ export ORACLE_HOME=$NEWGRIDHOME
    [grid@node1]$ $ORACLE_HOME/gridSetup.sh -executePrereqs -silent
    
  6. I want to apply the 19.25 Release Update while I install the GI home. To do that, I must extract the patch file:

     [grid@node1]$ cd /u01/software
     [grid@node1]$ unzip -oq p36916690_190000_Linux-x86-64.zip -d 36916690
    
    • The GI Release Update is a bundle patch consisting of:
      • OCW Release Update (patch 36917416)
      • Database Release Update (36912597)
      • ACFS Release Update (36917397)
      • Tomcat Release Update (36940756)
      • DBWLM Release Update (36758186)
    • I will apply all of them.
  7. Finally, I can install the new GI home:

    • The parameter -applyRU is the path to the OCW Release Update.
    • The parameter -applyOneOffs is a comma-separated list of the paths to each of the other Release Updates in the GI bundle patch.
    • The environment variable CLUSTER_NAME is the name of my Oracle Restart stack.
    [grid@node1]$ export ORACLE_BASE=/u01/app/grid
    [grid@node1]$ export ORA_INVENTORY=/u01/app/oraInventory
    [grid@node1]$ export ORACLE_HOME=$NEWGRIDHOME
    [grid@node1]$ cd $ORACLE_HOME
    [grid@node1]$ ./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
       -applyRU /u01/software/36916690/36916690/36917416 \
       -applyOneOffs /u01/software/36916690/36916690/36912597,/u01/software/36916690/36916690/36917397,/u01/software/36916690/36916690/36940756,/u01/software/36916690/36916690/36758186 \
       -responseFile $ORACLE_HOME/install/response/gridsetup.rsp \
       INVENTORY_LOCATION=$ORA_INVENTORY \
       ORACLE_BASE=$ORACLE_BASE \
       SELECTED_LANGUAGES=en \
       oracle.install.option=CRS_SWONLY \
       oracle.install.asm.OSDBA=asmdba \
       oracle.install.asm.OSOPER=asmoper \
       oracle.install.asm.OSASM=asmadmin \
       oracle.install.crs.config.ClusterConfiguration=STANDALONE \
       oracle.install.crs.config.configureAsExtendedCluster=false \
       oracle.install.crs.config.gpnp.configureGNS=false \
       oracle.install.crs.config.autoConfigureClusterNodeVIP=false
    
    • Although the script says so, I don’t run root.sh.
    • I install it in silent mode, but I could use the wizard instead.
    • You need to install the new GI home in a way that matches your environment.
    • For inspiration, you can check the response file used for the previous GI home when setting the various parameters.
    • If I have additional one-off patches to install, I add them to the comma-separated list.
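
    Before moving on, I like to verify that the new GI home contains the patches I expect. This is just a sanity check; the home isn’t active yet:

    [grid@node1]$ export ORACLE_HOME=$NEWGRIDHOME
    [grid@node1]$ $ORACLE_HOME/OPatch/opatch lspatches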

2. Prepare a New Database Home

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

  1. I need to create a folder for the new database home. I must do this as oracle:

    [oracle@node1]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/dbhome_1925
    [oracle@node1]$ mkdir -p $NEW_ORACLE_HOME
    
  2. I extract the base release of Oracle Database into the new database home:

    [oracle@node1]$ cd $NEW_ORACLE_HOME
    [oracle@node1]$ unzip -oq /u01/software/LINUX.X64_193000_db_home.zip
    

    Optionally, I can use a golden image.

  3. I update OPatch to the latest version:

    [oracle@node1]$ rm -rf OPatch
    [oracle@node1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  4. I want to apply the 19.25 Database Release Update, and I must also apply the OCW Release Update to the database home; I take both from the GI Release Update I extracted earlier. In addition, I want to apply the OJVM and Data Pump bundle patches, which I must extract first:

    [oracle@node1]$ cd /u01/software
    [oracle@node1]$ unzip -oq p36878697_190000_Linux-x86-64.zip -d 36878697
    [oracle@node1]$ unzip -oq p37056207_1925000DBRU_Generic_1925.zip -d 37056207   
    
  5. Then, I can install the new database home and apply the patches at the same time:

    • The parameter -applyRU is the path to the Database Release Update.
    • The parameter -applyOneOffs is a comma-separated list of the paths to the OCW Release Update plus OJVM and Data Pump bundle patches.
    [oracle@node1]$ export ORACLE_BASE=/u01/app/oracle
    [oracle@node1]$ export ORA_INVENTORY=/u01/app/oraInventory
    [oracle@node1]$ export OLD_ORACLE_HOME=$ORACLE_HOME
    [oracle@node1]$ export ORACLE_HOME=$NEW_ORACLE_HOME
    [oracle@node1]$ cd $ORACLE_HOME
    [oracle@node1]$ ./runInstaller -ignorePrereqFailure -waitforcompletion -silent \
         -responseFile $ORACLE_HOME/install/response/db_install.rsp \
         -applyRU /u01/software/36916690/36916690/36912597 \
         -applyOneOffs /u01/software/36916690/36916690/36917416,/u01/software/36878697/36878697,/u01/software/37056207/37056207 \
         oracle.install.option=INSTALL_DB_SWONLY \
         UNIX_GROUP_NAME=oinstall \
         INVENTORY_LOCATION=$ORA_INVENTORY \
         SELECTED_LANGUAGES=en \
         ORACLE_HOME=$ORACLE_HOME \
         ORACLE_BASE=$ORACLE_BASE \
         oracle.install.db.InstallEdition=EE \
         oracle.install.db.OSDBA_GROUP=dba \
         oracle.install.db.OSBACKUPDBA_GROUP=dba \
         oracle.install.db.OSDGDBA_GROUP=dba \
         oracle.install.db.OSKMDBA_GROUP=dba \
         oracle.install.db.OSRACDBA_GROUP=dba \
         oracle.install.db.isRACOneInstall=false \
         oracle.install.db.rac.serverpoolCardinality=0 \
         SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
         DECLINE_SECURITY_UPDATES=true
    
    • I install it in silent mode, but I could use the wizard instead.
    • You need to install the new database home in a way that matches your environment.
    • For inspiration, you can check the response file used for the previous database home when setting the various parameters.
    • If I have additional one-off patches to install, I add them to the comma-separated list.
  6. I run the database root script:

    [root@node1]$ $NEW_ORACLE_HOME/root.sh
    
    • I run just the database root script. Not the GI root script.

3. Prepare Database

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

I will move the database into a new Oracle home, so I need to ensure the database configuration files are either outside the Oracle home or move them to the new Oracle home.

  1. I verify that my SP file and password file are stored in ASM – or at least outside the Oracle home:
    [oracle@node1]$ export ORACLE_HOME=$OLD_ORACLE_HOME
    [oracle@node1]$ srvctl config database -db $ORACLE_UNQNAME | grep file  
    
    • If the files are stored in the dbs folder, I copy them to the new Oracle home (see the sketch after this list).
  2. I copy tnsnames.ora and sqlnet.ora to the new Oracle home:
    [oracle@node1]$ cp $OLD_ORACLE_HOME/network/admin/sqlnet.ora $NEW_ORACLE_HOME/network/admin
    [oracle@node1]$ cp $OLD_ORACLE_HOME/network/admin/tnsnames.ora $NEW_ORACLE_HOME/network/admin
    
  3. I take care of any other configuration files in the Oracle home.
  4. I modify the database so it starts in the new Oracle home on the next restart.
    [oracle@node1]$ srvctl modify database -d $ORACLE_UNQNAME -o $NEW_ORACLE_HOME
    
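If the SPFILE or password file does live in the dbs folder, a minimal copy sketch could look like this. It assumes the default spfile<SID>.ora and orapw<SID> file names, so adjust the names and ORACLE_SID to your environment:

    [oracle@node1]$ cp $OLD_ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora $NEW_ORACLE_HOME/dbs/
    [oracle@node1]$ cp $OLD_ORACLE_HOME/dbs/orapw${ORACLE_SID} $NEW_ORACLE_HOME/dbs/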

4. Switch to the New GI and Database Homes

Now, I can complete the patching process by switching to the new Oracle homes.

  1. I connect as root and start the switch:

    [root@node1]$ export ORACLE_HOME=/u01/app/19.25.0/grid
    [root@node1]$ $ORACLE_HOME/rdbms/install/rootadd_rdbms.sh
    [root@node1]$ $ORACLE_HOME/crs/install/roothas.sh -prepatch -dstcrshome $ORACLE_HOME
    
  2. Downtime starts now!

  3. Then, I complete the switch.

    • This step stops the entire GI stack, including resources it manages (databases, listener, etc.).
    • Everything is restarted in the new Oracle homes.
    [root@node1]$ $ORACLE_HOME/crs/install/roothas.sh -postpatch -dstcrshome $ORACLE_HOME
    
  4. Downtime ends now. Users may connect to the database.

  5. As grid, I update the inventory, so the new GI home is registered as the active one:

    [grid@node1]$ export OLD_ORACLE_HOME=/u01/app/19.24.0/grid
    [grid@node1]$ export NEW_ORACLE_HOME=/u01/app/19.25.0/grid
    [grid@node1]$ $NEW_ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$NEW_ORACLE_HOME CRS=TRUE
    [grid@node1]$ $OLD_ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$OLD_ORACLE_HOME CRS=FALSE
    
  6. I update any profiles (e.g., .bash_profile) and other scripts referring to the GI home.

  7. As oracle, I update any profiles (e.g., .bash_profile) and other scripts referring to the database home.
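
Before completing the patching, I like to confirm that everything really runs out of the new homes. A quick sanity check (the grep is just a convenience):

    [grid@node1]$ $NEW_ORACLE_HOME/srvm/admin/getcrshome
    [oracle@node1]$ srvctl config database -db $ORACLE_UNQNAME | grep -i 'oracle home'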

5. Complete Patching

  1. I complete patching of the database by running Datapatch (ensure the environment is set correctly):
    [oracle@node1]$ env | grep ORA
    [oracle@node1]$ $ORACLE_HOME/OPatch/datapatch
    
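To confirm that Datapatch applied everything, I can query the SQL patch registry. A minimal sketch:

    [oracle@node1]$ sqlplus / as sysdba
    SQL> select patch_id, patch_type, action, status
         from   dba_registry_sqlpatch
         order  by action_time;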

Most likely, there are other changes that you need to make in your own environment:

  • Update Enterprise Manager registration
  • Upgrade RMAN catalog
  • Update other scripts
  • Update /etc/oratab
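
For example, updating /etc/oratab is a one-liner. The old home path below is a placeholder, so adjust both paths to your actual old and new database homes:

    # Placeholder paths - replace with your actual old and new database homes
    sed -i 's|/u01/app/oracle/product/dbhome_old|/u01/app/oracle/product/dbhome_1925|' /etc/oratab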

That’s it! I have now patched my Oracle Restart deployment.

Happy Patching!

Appendix

Deinstall

In the future, I should remove the old Oracle homes. I use the deinstall tool in the respective Oracle homes.

I would recommend waiting a week or two until I’m confident the new Release Updates are fine.

CRS-0245: User doesn’t have enough privilege to perform the operation

  • If you get the following error:
    [oracle@node1]$ srvctl modify database -d $ORACLE_UNQNAME -o $NEW_ORACLE_HOME
    PRCD-1163 : Failed to modify database DB19
    PRCR-1071 : Failed to register or update resource ora.db19.db
    CRS-0245:  User doesn't have enough privilege to perform the operation
    
  • Be sure to include patch 29326865 in GI and database home.
  • Run the srvctl modify database command as grid instead.
  • Be sure that the Oracle user is still set to oracle after running the command as grid:
    [oracle@node1]$ srvctl config database -db $ORACLE_UNQNAME | grep user
    

Rollback

If you need to roll back, you more or less reverse the process. The switch home method works for switching both to a higher and to a lower patch level.

OCW Release Update

Thanks to Jan for commenting on the blog post. The initial version didn’t include the OCW Release Update in the database home, which is needed when the database is managed by Grid Infrastructure in any way.

Incorrect Information in ocr.loc

In the ocr.loc file for Oracle Restart, only the local_only property is used. All other properties can be ignored (like ocrconfig_loc).

In Oracle Database 23ai, the file will be cleaner in Oracle Restart. But for Oracle Database 19c, these superfluous properties remain.

Further Reading

Other Blog Posts in This Series

Oracle Grid Infrastructure and Apache Tomcat

Oracle Grid Infrastructure (GI) uses some functionality from Apache Tomcat. You can find Apache Tomcat in the GI Home. How do you ensure that Apache Tomcat is up to date?

It’s Easy

The Release Updates for Oracle Grid Infrastructure also contain patches for Apache Tomcat:

Patching of Tomcat within the GI home is handled via the Quarterly Grid Infrastructure Release Updates.

The following example shows the output of a GI home on 19.19.0. You can see there is a specific patch for Apache Tomcat (TOMCAT RELEASE UPDATE):

$ cd $ORACLE_HOME/OPatch
$ ./opatch lspatches
35050341;OJVM RELEASE UPDATE: 19.19.0.0.230418 (35050341)
35004974;JDK BUNDLE PATCH 19.0.0.0.230418
35107512;TOMCAT RELEASE UPDATE 19.0.0.0.0 (35107512)
35050325;ACFS RELEASE UPDATE 19.19.0.0.0 (35050325)
35042068;Database Release Update : 19.19.0.0.230418 (35042068)
33575402;DBWLM RELEASE UPDATE 19.0.0.0.0 (33575402)

How to Find Tomcat Version

Use the following command to find the Apache Tomcat version:

$ cd $ORACLE_HOME/tomcat/lib

$ java -cp catalina.jar org.apache.catalina.util.ServerInfo
Server version: Apache Tomcat/8.5.84
Server built:   Nov 16 2022 13:34:24 UTC
Server number:  8.5.84.0
OS Name:        Linux
OS Version:     4.14.35-2047.510.5.5.el7uek.x86_64
Architecture:   amd64
JVM Version:    1.8.0_371-b11
JVM Vendor:     Oracle Corporation

Can I Update Tomcat Manually?

No, you can only update Apache Tomcat as part of a Release Update:

Oracle is continuously monitoring TOMCAT fixes for CVEs, once a fix is found and the fix is in an object in one of the JAR files of the compact distribution, we start the process to incorporate the TOMCAT version with the fix in GI. … Patching outside of GI Release Updates is NOT supported.

My Security Team Flags Tomcat as Out-of-date

Many customers use tools to scan for vulnerabilities. Such tools might scan a GI home and find an out-of-date Apache Tomcat. To update Apache Tomcat, you must apply a newer Release Update.

If the latest Release Update does not contain a fix for a specific issue in Apache Tomcat:

  • Check 555.1 for a one-off patch
  • Wait for the next Release Update

If you find the issue so critical that you can’t wait, reach out to Oracle Support with your concerns.

Further Reading

Tomcat in the Grid Infrastructure Home (Doc ID 2655066.1)

How to Clone Oracle Grid Infrastructure Home Using Golden Images

Cloning Oracle Grid Infrastructure (GI) homes is a convenient way of getting a new GI Home. It’s particularly helpful when you need to patch out-of-place using the SwitchGridHome method.

When you have created a new GI home and applied all the necessary patches, you can turn it into a golden image. Later on, you can deploy from that golden image and avoid having to update OPatch and apply patches again.

How to Create a Golden Image

  1. First, only create a golden image from a freshly installed Oracle Home. Never use an Oracle Home that is already in use. As soon as you start to use an Oracle Home you taint it with various files and you don’t want to carry those files around in your golden image. The golden image must be completely clean.

  2. Then, you create a directory where you can store the golden image:

    export GOLDIMAGEDIR=/u01/app/grid/goldimages
    mkdir -p $GOLDIMAGEDIR
    
  3. Finally, you create the golden image. This command creates a golden image of the specified GI home:

    export NEW_GRID_HOME=/u01/app/19.20.0/grid
    $NEW_GRID_HOME/gridSetup.sh -createGoldImage \
       -destinationLocation $GOLDIMAGEDIR \
       -silent
    

    Be sure to do this before you start to use the new GI home.

  4. The installer creates the golden image as a zip file in the specified directory. The name of the zip file is unique and printed on the console. You can also use the secret parameter -name to specify a name for the zip file. To name the zip file gi_19_20_0.zip:

    $NEW_GRID_HOME/gridSetup.sh -createGoldImage \
       ... \
       -name gi_19_20_0.zip
    

No software must be running out of the Oracle Home when you create the gold image. Don’t use a production Oracle Home; I recommend using a test or staging server instead.

Check the documentation for further details.

How to Deploy from a Golden Image

  1. You must create a folder for the new GI home. You do it as root:

    export NEW_GRID_BASE=/u01/app/19.20.0
    export NEW_GRID_HOME=$NEW_GRID_BASE/grid
    mkdir -p $NEW_GRID_HOME
    chown -R grid:oinstall $NEW_GRID_BASE
    chmod -R 775 $NEW_GRID_BASE
    

    If you install the new GI home in a cluster, you must create the folder on all nodes.

  2. Then, you extract the golden image as grid:

    export NEW_GRID_HOME=/u01/app/19.20.0/grid
    cd $NEW_GRID_HOME
    unzip -q /u01/app/grid/goldimages/gi_19_20_0.zip
    
  3. Finally, you use gridSetup.sh to perform the installation:

    ./gridSetup.sh 
    

That’s it!

I recommend using golden images when you patch out-of-place using the SwitchGridHome method.

Appendix

Oracle Restart vs. Oracle RAC

If you create a GI home for use with Oracle RAC, you can’t use that gold image for a new GI home for Oracle Restart.

Those two GI homes are very different, so you must have two gold images: one for RAC and one for Restart.

Further Reading

Other Blog Posts in This Series

How to Remove an Old Oracle Grid Infrastructure 19c Home

When you patch your Oracle Grid Infrastructure 19c (GI) using the out-of-place method, you should also remove the old GI homes.

I recommend that you keep the old GI home for a while. At least until you are convinced that a rollback is not needed. Once you are comfortable with the new GI home, you can safely get rid of the old one.

How to Remove an Oracle Grid Infrastructure 19c Home

  1. I set the path to my old GI home as an environment variable:
    export REMOVE_ORACLE_HOME=/u01/app/19.0.0.0/grid
    
  2. Optionally, I take a backup of the GI home for safekeeping:
    export GOLDIMAGEDIR=/u01/app/grid/goldimages
    mkdir -p $GOLDIMAGEDIR
    $REMOVE_ORACLE_HOME/gridSetup.sh -createGoldImage \
       -destinationLocation $GOLDIMAGEDIR \
       -silent
    
  3. I verify that the GI home is not the active one. This command returns the active GI home; it must not return the path of the GI home that I want to delete. As grid:
    $REMOVE_ORACLE_HOME/srvm/admin/getcrshome
    
  4. I double-check that the GI home to remove is not the active one. The XML tag returned must not contain a CRS="true" attribute. As grid:
    export ORA_INVENTORY_XML=/u01/app/oraInventory/ContentsXML/inventory.xml
    grep "$REMOVE_ORACLE_HOME" $ORA_INVENTORY_XML
    
    #This is good
    #   <HOME NAME="OraGrid190" LOC="/u01/app/19.0.0.0/grid" TYPE="O" IDX="1"/>
    #This is bad
     #   <HOME NAME="OraGrid190" LOC="/u01/app/19.0.0.0/grid" TYPE="O" IDX="1" CRS="true"/>
    
  5. I run the deinstall tool. I switch to my home directory to ensure I am not interfering with the de-installation. As grid:
    cd ~
    $REMOVE_ORACLE_HOME/deinstall/deinstall
    
    The script:
    • Detects the nodes in my cluster.
    • Prints a summary and prompts for confirmation.
    • Deinstalls the GI home on all nodes.
    • Instructs me to run a script as root on all nodes.
    • Prints a summary including any manual tasks in the end.
  6. I verify that the GI home is marked as deleted in the inventory. The XML tag should have a Removed="T" attribute. As grid:
    export ORA_INVENTORY_XML=/u01/app/oraInventory/ContentsXML/inventory.xml
    grep "$REMOVE_ORACLE_HOME" $ORA_INVENTORY_XML
    
    #This is good
    #   <HOME NAME="OraGrid190" LOC="/u01/app/19.0.0.0/grid" TYPE="O" IDX="1" Removed="T"/>
    
  7. Often the deinstall tool can’t remove some files because of missing permissions. I remove the GI home manually. As root on all nodes:
    export REMOVE_ORACLE_HOME=/u01/app/19.0.0.0/grid
    rm -rf $REMOVE_ORACLE_HOME
    

Silent Mode

There is also a silent mode if you want to script the removal. Check the -checkonly and -silent parameters in the documentation.

You can also find a sample response file in the documentation.
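
A minimal sketch of a scripted removal could look like this; -checkonly generates a response file that you review, and -silent together with -paramfile then runs the de-installation without prompts (the response file name is generated by the tool, so substitute the actual name):

    # Generate and review a response file first
    $REMOVE_ORACLE_HOME/deinstall/deinstall -checkonly -o /tmp/deinstall
    # Then run the de-installation non-interactively
    $REMOVE_ORACLE_HOME/deinstall/deinstall -silent -paramfile /tmp/deinstall/<generated_response_file>.rsp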

Appendix

Further Reading

Other Blog Posts in This Series

How to Roll Back Oracle Grid Infrastructure 19c Using SwitchGridHome

Let me show you how I roll back a patch from Oracle Grid Infrastructure 19c (GI) using the out-of-place method and the -switchGridHome parameter.

My demo system:

  • Is a 2-node RAC (nodes copenhagen1 and copenhagen2).
  • Runs Oracle Linux.
  • Was patched from 19.17.0 to 19.19.0. I patched both GI and database. Now I want GI back on 19.17.0.

I only roll back the GI home. See the appendix for a few thoughts on rolling back the database as well.

This method works if you applied the patch out-of-place – regardless of whether you used the OPatchAuto or SwitchGridHome method.

Preparation

  • I use the term old Oracle Home for the original, lower patch level Oracle Home.

    • It is my 19.17.0 Oracle Home
    • It is stored in /u01/app/19.0.0.0/grid
    • I refer to this home using the environment variable OLD_GRID_HOME
    • This is the Oracle Home that I want to roll back to
  • I use the term new Oracle Home for the higher patch level Oracle Home.

    • It is my 19.19.0 Oracle Home
    • It is stored in /u01/app/19.19.0/grid
    • I refer to this home using the environment variable NEW_GRID_HOME
    • This is the Oracle Home that I want to roll back from

Both GI homes are present in the system already.

How to Roll Back Oracle Grid Infrastructure 19c

1. Sanity Checks

I execute the following checks on both nodes, copenhagen1 and copenhagen2. I show the commands for one node only.

  • I verify that the active GI home is the new GI home:

    [grid@copenhagen1]$ export ORACLE_HOME=$NEW_GRID_HOME
    [grid@copenhagen1]$ $ORACLE_HOME/srvm/admin/getcrshome
    
  • I verify that the cluster upgrade state is NORMAL:

    [grid@copenhagen1]$ $ORACLE_HOME/bin/crsctl query crs activeversion -f
    
  • I verify all CRS services are online:

    [grid@copenhagen1]$ $ORACLE_HOME/bin/crsctl check cluster
    
  • I verify that the cluster patch level is 19.19.0 – the new patch level:

    [grid@copenhagen1]$ $ORACLE_HOME/bin/crsctl query crs releasepatch
    

2. Cluster Verification Utility

  • I use Cluster Verification Utility (CVU) to verify that my cluster meets all prerequisites for a patch/rollback. I do this on one node only:
    [grid@copenhagen1]$ $CVU_HOME/bin/cluvfy stage -pre patch
    
    • You can find CVU in the GI home, but I recommend always getting the latest version from My Oracle Support.

3. Roll Back Node 1

The GI stack (including database, listener, etc.) needs to restart on each node. But I do the rollback in a rolling manner, so the database stays up all the time.

  • I drain connections from the first node, copenhagen1.

  • I unlock the old GI home as root:

    [root@copenhagen1]$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
    [root@copenhagen1]$ cd $OLD_GRID_HOME/crs/install
    [root@copenhagen1]$ ./rootcrs.sh -unlock -crshome $OLD_GRID_HOME
    
    • This is required because the next step (gridSetup.sh) runs as grid and must have access to the GI home.
    • Later on, when I run root.sh, the script will lock the GI home.
  • I switch to the old GI home as grid:

    [grid@copenhagen1]$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
    [grid@copenhagen1]$ export ORACLE_HOME=$OLD_GRID_HOME
    [grid@copenhagen1]$ export CURRENT_NODE=$(hostname)
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh \
       -silent -switchGridHome \
       oracle.install.option=CRS_SWONLY \
       ORACLE_HOME=$ORACLE_HOME \
       oracle.install.crs.config.clusterNodes=$CURRENT_NODE \
       oracle.install.crs.rootconfig.executeRootScript=false
    
  • I complete the switch by running root.sh as root:

    [root@copenhagen1]$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
    [root@copenhagen1]$ $OLD_GRID_HOME/root.sh
    
    • This step restarts the entire GI stack, including resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
    • In that period, GI marks the services as OFFLINE so users can connect to other nodes.
    • If my database listener runs out of the Grid Home, GI will move it to the GI home I am switching to, including copying listener.ora.
    • In the end, GI restarts the resources (databases and the like).
  • I update any profiles (e.g., .bashrc) and other scripts referring to the GI home.

  • I verify that the active GI home is now the old GI home:

     [grid@copenhagen1]$ $OLD_GRID_HOME/srvm/admin/getcrshome
    
  • I verify that the cluster upgrade state is ROLLING PATCH:

     [grid@copenhagen1]$ $OLD_GRID_HOME/bin/crsctl query crs activeversion -f
    
  • I verify all CRS services are online:

     [grid@copenhagen1]$ $OLD_GRID_HOME/bin/crsctl check cluster
    
  • I verify all resources are online:

     [grid@copenhagen1]$ $OLD_GRID_HOME/bin/crsctl stat resource -t
    
  • I verify that the GI patch level is 19.17.0 – the old patch level:

     [grid@copenhagen1]$ $OLD_GRID_HOME/bin/crsctl query crs releasepatch
    

4. Roll Back Node 2

  • I roll back the second node, copenhagen2, using the same process as the first node, copenhagen1.
    • I double-check that the CURRENT_NODE environment variable gets updated to copenhagen2.
    • When I use crsctl query crs activeversion -f to check the cluster upgrade state, it will now be back in NORMAL mode, because copenhagen2 is the last node in the cluster.

5. Cluster Verification Utility

  • I use Cluster Verification Utility (CVU) again. Now I perform a post-rollback check. I do this on one node only:
    [grid@copenhagen1]$ $CVU_HOME/bin/cluvfy stage -post patch
    

That’s it!

My cluster is now operating at the previous patch level.

Appendix

SwitchGridHome Does Not Have Dedicated Rollback Functionality

OPatchAuto has dedicated rollback functionality that will revert the previous patch operation. Similar functionality does not exist when you use the SwitchGridHome method.

This is described in Steps for Minimal Downtime Grid Infrastructure Out of Place ( OOP ) Patching using gridSetup.sh (Doc ID 2662762.1). To roll back, simply switch back to the previous GI home using the same method as for the patch.

There is no real rollback option as this is a switch from OLD_HOME to NEW_HOME. To return to the old version you need to recreate another new home and switch to that.

Should I Roll Back the Database as Well?

This post describes rolling back the GI home only. Usually, I recommend keeping the database and GI patch level in sync. If I roll back GI, should I also roll back the database?

The short answer is no!

Keeping the GI and database patch levels in sync is a good idea. But when you need to roll back, you are in a contingency. Only roll back the component that gives you problems. Then, you will be out of sync for a period of time until you can get a one-off patch or move to the next Release Update. Being in this state for a shorter period is perfectly fine – and supported.

Other Blog Posts in This Series