Files to Move During Oracle Grid Infrastructure Out-Of-Place Patching

Like with Oracle Database, I strongly recommend patching Oracle Grid Infrastructure using the out-of-place method. It has many advantages over in-place patching.

A while ago, I wrote about the files you must move during Oracle Database out-of-place patching. There are quite a few files to consider. A blog reader left a comment asking about a similar blog post for Oracle Grid Infrastructure, so here it is.

Listener

The listener runs out of the GI home. When you patch out-of-place, the listener must also start in the new GI home.

If you patch using either of the following methods, the tooling moves the listener for you: the SwitchGridHome method or out-of-place patching with opatchauto.

Both methods also move listener.ora as part of the process.

If you use opatchauto, note that the tool moves listener.ora when preparing the new GI home using opatchauto apply ... -prepare-clone. You can run that command hours or days before you move the listener. If you add entries to listener.ora in between, you must also add them to listener.ora in the new GI home.
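For illustration, preparing the new Grid Home with opatchauto looks roughly like this. The patch location and prompts are not from the original post: opatchauto asks for details about the new Grid Home, and you should check the opatchauto documentation for the exact syntax in your version:

    # Run as root from the existing Grid Home; <GI_RU_PATH> is a placeholder for the unzipped patch
    [root@node1]$ <EXISTING_GRID_HOME>/OPatch/opatchauto apply <GI_RU_PATH> -prepare-clone

Later, when you are ready to move to the new Grid Home, you complete the operation with the corresponding -switch-clone step.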

Conclusion

There is really nothing to worry about when you patch Oracle Grid Infrastructure out-of-place. The above-mentioned two tools will take care of it for you.

Happy Patching

How to Apply Patches Out-of-place to Oracle Grid Infrastructure and Oracle Data Guard Using Standby-First

I strongly recommend that you always patch out-of-place. Here’s an example of how to do it on Oracle Grid Infrastructure (GI) and Oracle Data Guard using Standby-First Patch Apply.

Standby-First Patch Apply allows you to minimize downtime to the time it takes to perform a Data Guard switchover. Further, it allows you to test the apply mechanism on the standby database by temporarily converting it into a snapshot standby database.

The scenario:

  • Oracle Grid Infrastructure 19c and Oracle Database 19c
  • Patching from Release Update 19.17.0 to 19.19.0
  • Vertical patching – GI and database at the same time
  • Data Guard setup with two RAC databases
    • Cluster 1: copenhagen1 and copenhagen2
    • Cluster 2: aarhus1 and aarhus2
    • DB_NAME: CDB1
    • DB_UNIQUE_NAME: CDB1_COPENHAGEN and CDB1_AARHUS
  • Using Data Guard broker
  • Patching GI using SwitchGridHome method

Let’s get started!

Step 1: Prepare

I can make the preparations without interrupting the database.

  • I ensure my environment meets the requirements for Standby-First Patch Apply.

  • I deploy new GI homes to all four hosts.

    • I use the SwitchGridHome method.
    • Very important: I only perform step 1 (Prepare a New Grid Home).
    • I apply the Release Update 19.19.0 as part of the deployment using gridSetup.sh ... -applyRU ... -applyOneOffs as described in the blog post.
  • I deploy new database homes to all four hosts.

  • I also recompile invalid objects. This can make it easier for Datapatch later in the process:

    PRIMARY SQL> @?/rdbms/admin/utlrp
    

Step 2: Restart Standby in New Oracle Homes

Now, I can move the standby database to the new GI and database homes.

  • On the standby hosts, aarhus1 and aarhus2, I first move the database configuration files from the old database home to the new one (see the sketch after this list).

  • I change the database configuration in GI. Next time the database restarts, it will be in the new Oracle Home:

    [oracle@aarhus1]$ $OLD_ORACLE_HOME/bin/srvctl modify database \
       -db $ORACLE_UNQNAME \
       -oraclehome $NEW_ORACLE_HOME
    
  • I switch to the new GI on all standby hosts, aarhus1 and aarhus2, by executing step 2 (Switch to the new Grid Home) of the SwitchGridHome method.

    • It involves running gridSetup.sh ... -switchGridHome and root.sh.
    • You can perform the switch in a rolling manner or all at once.
    • The switch restarts the standby database instance. The standby database instance restarts in the new Oracle Home.
    • If the grid user's profile (e.g., .bashrc) sets the ORACLE_HOME environment variable, I make sure to update it.
  • If I had multiple standby databases, I would process all standby databases in this step.
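The Files to Move During Oracle Database Out-Of-Place Patching post (later in this series) lists the files in detail. Here is a minimal sketch for one standby host, assuming default file locations under $ORACLE_HOME/dbs and hypothetical home paths:

    [oracle@aarhus1]$ export OLD_ORACLE_HOME=/u01/app/oracle/product/19.17.0
    [oracle@aarhus1]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/19.19.0
    # Skip any file that is already stored outside the Oracle Home, for example in ASM
    [oracle@aarhus1]$ cp $OLD_ORACLE_HOME/dbs/orapw$ORACLE_SID $NEW_ORACLE_HOME/dbs/
    [oracle@aarhus1]$ cp $OLD_ORACLE_HOME/dbs/init$ORACLE_SID.ora $NEW_ORACLE_HOME/dbs/
    [oracle@aarhus1]$ cp $OLD_ORACLE_HOME/dbs/dr?$ORACLE_SID.dat $NEW_ORACLE_HOME/dbs/
    # Also update the Oracle Home in the instance entry in /etc/oratab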

Step 3: Test Standby Database

This is an optional step, but I recommend that you do it.

  • I convert the standby database (CDB1_AARHUS) to a snapshot standby database:
    DGMGRL> convert database CDB1_AARHUS to snapshot standby;
    
  • I test Datapatch on the standby database. It is important that I run the command on the standby database:
    [oracle@aarhus1]$ $ORACLE_HOME/OPatch/datapatch -verbose
    
  • I can also test my application on the standby database.
  • At the end of my testing, I revert the standby database to a physical standby database. The database automatically reverts all the changes made during testing:
    DGMGRL> convert database CDB1_AARHUS to physical standby;
    

Step 4: Switchover

I can perform the previous steps without interrupting my users. This step requires a maintenance window because I am doing a Data Guard switchover.

  • I check that my standby database is ready to become primary. Then, I start a Data Guard switchover:
    DGMGRL> connect sys/<password> as sysdba
    DGMGRL> validate database CDB1_AARHUS;
    DGMGRL> switchover to CDB1_AARHUS;
    

A switchover does not have to mean downtime.

If my application is configured properly, the users will experience a brownout: a short hang while the connections switch to the new primary database.

Step 5: Restart New Standby in New Oracle Homes

Now, the primary database runs on aarhus1 and aarhus2. Next, I can move the new standby hosts, copenhagen1 and copenhagen2, to the new GI and database homes.

  • I repeat step 2 (Restart Standby In New Oracle Homes) but this time for the new standby hosts, copenhagen1 and copenhagen2.

Step 6: Complete Patching

Now, both databases in my Data Guard configuration run out of the new Oracle Homes.

Only proceed with this step once all databases run out of the new Oracle Home.

I need to run this step as soon as possible after completing the previous step.

  • I complete the patching by running Datapatch on the primary database (CDB1_AARHUS). I add the recomp_threshold parameter to ensure Datapatch recompiles all objects that the patching invalidated:

    [oracle@aarhus1]$ $ORACLE_HOME/OPatch/datapatch \
       -verbose \
       -recomp_threshold 10000
    
    • I only need to run Datapatch once: on the primary database and only on one of its instances.
  • I can run Datapatch while users are connected to my database.

  • Optionally, I can switch back to the original primary database on copenhagen1 and copenhagen2, if I prefer to run it there.

That’s it. Happy patching!

Appendix

Further Reading

Other Blog Posts in This Series

Files to Move During Oracle Database Out-Of-Place Patching

I strongly recommend that you patch your Oracle Database using the out-of-place method. It has many advantages over in-place patching. But when you move your Oracle Database from one Oracle Home to another, you also need to move a lot of files.

Which files are they, and how can you make the process easier? Also, some files might already exist in the target Oracle Home; what do you do then?

Files to Move

Password File

Linux:

dbs/orapw<ORACLE_SID>

Windows:

database\pwd<ORACLE_SID>.ora

You can override the default location in Windows using the following registry entries:

ORA_<ORACLE_SID>_PWFILE
ORA_PWFILE

If you use Grid Infrastructure, you can put the password file outside of the Oracle Home:

srvctl modify database \
   -d $ORACLE_UNQNAME \
   -pwfile <NEW_LOCATION_OUTSIDE_ORACLE_HOME>

I recommend storing it in ASM.
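One way to do that is ASMCMD's pwcopy command, run as the Grid Infrastructure owner with the ASM environment set, followed by pointing the cluster registration at the new location. The disk group, paths, and file names below are examples only:

asmcmd pwcopy $ORACLE_HOME/dbs/orapw$ORACLE_SID +DATA/$ORACLE_UNQNAME/orapwfile
srvctl modify database -d $ORACLE_UNQNAME -pwfile +DATA/$ORACLE_UNQNAME/orapwfile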

Parameter Files

Linux:

dbs/init<ORACLE_SID>.ora
dbs/spfile<ORACLE_SID>.ora

Windows:

database\init<ORACLE_SID>.ora
database\spfile<ORACLE_SID>.ora

Parameter files may include other files using the IFILE parameter.

You can redirect the server parameter file to a location outside the Oracle Home using the SPFILE parameter in your parameter file. If you use Grid Infrastructure, you can also redirect the server parameter file:

srvctl modify database \
   -d $ORACLE_UNQNAME \
   -spfile <NEW_LOCATION_OUTSIDE_ORACLE_HOME>

I recommend storing it in ASM.
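If you keep the server parameter file in ASM, the init<ORACLE_SID>.ora in the new Oracle Home only needs to contain a pointer to it. A minimal example; the path is hypothetical:

SPFILE='+DATA/<DB_UNIQUE_NAME>/PARAMETERFILE/spfile.ora'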

Oratab

You need to update the database instance entry in the oratab file:

/etc/oratab

On Solaris, you find the file in:

/var/opt/oracle/oratab

On Windows, the file does not exist. Instead, you re-register the instance in the registry using oradim.exe.
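For example, a hypothetical oratab entry before and after moving the database to the new Oracle Home:

# Before
MYDB:/u01/app/oracle/product/19.18.0:N
# After
MYDB:/u01/app/oracle/product/19.19.0:N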

Profile Scripts

Many people have profile scripts that set the environment to a specific database. Be sure to update the Oracle Home in such scripts.
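For example, a typical profile snippet that must reflect the new Oracle Home; the SID and path are hypothetical:

export ORACLE_SID=MYDB
export ORACLE_HOME=/u01/app/oracle/product/19.19.0
export PATH=$ORACLE_HOME/bin:$PATH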

Network Files

Network configuration files:

network/admin/ldap.ora
network/admin/listener.ora
network/admin/sqlnet.ora
network/admin/tnsnames.ora

tnsnames.ora, sqlnet.ora, and listener.ora can include contents from other files using the IFILE parameter, although support for it is somewhat questionable according to Allows for Ifile Support and Oracle Net (Doc ID 1339269.1).

You can redirect the files using the TNS_ADMIN environment variable. On Windows, you can also redirect using the TNS_ADMIN registry entry. If you use Grid Infrastructure, you can set the TNS_ADMIN environment variable as part of the cluster registration:

srvctl setenv database \
   -d $ORACLE_UNQNAME \
   -env "TNS_ADMIN=<NEW_LOCATION_OUTSIDE_ORACLE_HOME>"

Data Guard Broker Config Files

Linux:

dbs/dr1<ORACLE_SID>.dat
dbs/dr2<ORACLE_SID>.dat

Windows:

database\dr1<ORACLE_SID>.dat
database\dr2<ORACLE_SID>.dat

You can redirect the broker config files using the parameter DG_BROKER_CONFIG_FILEn:

alter system set dg_broker_start=false;
alter system set dg_broker_config_file1='<NEW_LOCATION>/dr1<ORACLE_SID>.dat';
alter system set dg_broker_config_file2='<NEW_LOCATION>/dr2<ORACLE_SID>.dat';
alter system set dg_broker_start=true;

I recommend storing the files in ASM.

Admin directory

admin subdirectory in Oracle Home:

admin

If you don't set the ORACLE_BASE environment variable, the database uses the Oracle Home for that location. It can contain diagnostic information, like logs and trace files, which you might want to move to the new Oracle Home.

In rare cases, the TDE keystore will go in there as well. This is definitely a folder that you want to keep.

admin/$ORACLE_UNQNAME/wallet

I recommend having a dedicated ORACLE_BASE location. Always set the ORACLE_BASE environment variable for all databases. This ensures that the database does not create an admin directory in the Oracle Home.

If you use TDE Tablespace Encryption, I strongly recommend that you store the database keystore outside of the Oracle Home using the WALLET_ROOT parameter.
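Here is a minimal sketch of redirecting the keystore with WALLET_ROOT, assuming a hypothetical location under ORACLE_BASE and a file-based keystore:

alter system set wallet_root='/u01/app/oracle/admin/MYDB/wallet' scope=spfile;
-- Restart the database, then:
alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE';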

Direct NFS

The Direct NFS configuration file:

dbs/oranfstab

The file might exist in the target Oracle Home, in which case you must merge the contents.

Typically, on Windows, the files from dbs are stored in the database folder. But that's different for this specific file (thanks, Connor, for helping out).
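For reference, a minimal oranfstab entry looks roughly like this; the server name, addresses, and paths are placeholders:

server: mynfsserver
local: 192.0.2.10
path: 192.0.2.20
export: /vol/oradata mount: /u02/oradata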

Centrally Managed Users

One of the default locations of the configuration file for Active Directory servers for centrally managed users is:

ldap/admin/dsi.ora

I recommend using the LDAP_ADMIN environment variable to redirect the file to a location outside of the Oracle Home.

LDAP

Configuration of Directory Usage Parameters:

ldap/admin/ldap.ora

I recommend using the LDAP_ADMIN or TNS_ADMIN environment variable to redirect the file to a location outside of the Oracle Home.

Oracle Messaging Gateway

The Messaging Gateway default initialization file:

mgw/admin/mgw.ora

The file might exist in the target Oracle Home, in which case you must merge the contents.

Oracle Database Provider for DRDA

Configuration file for Oracle Database Provider for DRDA:

drdaas/admin/drdaas.ora

The file might exist in the target Oracle Home, in which case you must merge the contents.

Oracle Text

If you use Oracle Text, you can generate a list of files that you must copy to the target Oracle Home:

ctx/admin/ctx_oh_files.sql

Oracle Database Gateway for ODBC

ODBC gateway initialization file:

hs/admin/init<ORACLE_SID>.ora

External Procedures

You can define the environment for external procedures in extproc.ora. Such configuration might exist in the target Oracle Home already, in which case you must merge the contents:

hs/admin/extproc.ora
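For example, a hypothetical extproc.ora entry that restricts which libraries external procedures may load. If both the old and the new Oracle Home define EXTPROC_DLLS, merge the lists:

SET EXTPROC_DLLS=ONLY:/u01/app/oracle/extproc/libmyextproc.so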

Other Things to Consider

Enterprise Manager

After moving the database to a new Oracle Home, you need to reconfigure the target in Enterprise Manager. The Oracle Home path is part of the configuration. You can easily change it with emcli:

emcli modify_target \
   -type='oracle_database' \
   -name='<target_name>' \
   -properties='OracleHome:<new_oracle_home_path>'

Also, if you moved the listener to a new Oracle Home as part of the patching, you need to update that target as well.

On My Oracle Support you can find an example on how to bulk update multiple targets.

A future version of AutoUpgrade will be able to modify the target in Enterprise Manager for you.

Oracle Key Vault

There should be no additional configuration needed if you are using Oracle Key Vault and you move the database to a new Oracle Home.

You can find information on how to configure Oracle Key Vault in an Oracle Database in the documentation.

ZDLRA

When you patch out-of-place, you should always ensure that the target Oracle Home has the latest version of libra.so. AutoUpgrade does not copy this file for you, because there is no way to tell which version is the latest version.

Ideally, you configure ZDLRA via sqlnet.ora and store the wallet outside of the Oracle Home. If so, the ZDLRA configuration works in the target Oracle Home because AutoUpgrade takes care of sqlnet.ora.

If you use ra_install.jar to configure ZDLRA, the script will:

  • Create a file: $ORACLE_HOME/dbs/ra<ORACLE_SID>.ora
  • Create a folder with a wallet: $ORACLE_HOME/dbs/wallet

In this case, you must manually copy the files to the target Oracle Home. You can avoid this by using sqlnet.ora for the configuration instead.

AutoUpgrade does not copy these files for you, because of the issue described above with libra.so.

Database Directories

You must update certain internal database directories when you move the database to a new Oracle Home. The easiest way is to run:

@?/rdbms/admin/utlfixdirs.sql

In multitenant, you need to run the script in CDB$ROOT only.

On My Oracle Support you find a list of all the directories that you must update if you don’t use the script above.

A future version of AutoUpgrade will change all the applicable directories for you automatically.

How to Make It Easy

Use AutoUpgrade

The easiest way to patch your Oracle Database is to use AutoUpgrade. It takes care of everything for you (unless stated otherwise above). You need a config file:

patch1.source_home=/u01/app/oracle/product/19.18.0
patch1.target_home=/u01/app/oracle/product/19.19.0
patch1.sid=MYDB

Then you start AutoUpgrade:

java -jar autoupgrade.jar -config mydb.cfg -mode deploy

That's it! You don't need to worry about all those configuration files. AutoUpgrade has you covered.
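Before the deploy, you can optionally let AutoUpgrade analyze the configuration without making any changes:

java -jar autoupgrade.jar -config mydb.cfg -mode analyze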

Use Read-Only Oracle Home

If you use Read-Only Oracle Home, the process is slightly easier.

You can't store any custom files in the Oracle Home. Instead, you store all configuration files in the Oracle Base Home, a directory outside the Oracle Home. You can find all the files you need to copy in the Oracle Base Home.

When you create a new Read-Only Oracle Home, the installer will create a new Oracle Base Home specifically for that new Oracle Home. As part of the patching, you move the files into that new Oracle Base Home.
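You can ask an Oracle Home where its Oracle Base Home is using the orabasehome utility; the output below is just an example:

$ORACLE_HOME/bin/orabasehome
/u01/app/oracle/homes/OraDB19Home1

In a regular, read/write Oracle Home, the command simply prints the Oracle Home path instead.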

Conclusion

Did I miss any files? Please leave a comment if you move other files when you patch out-of-place.

How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place SwitchGridHome

Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using the out-of-place method and the -switchGridHome parameter.

My demo system:

  • Is a 2-node RAC
  • Runs Oracle Linux
  • Is currently on 19.16.0, and I want to patch to 19.17.0
  • Uses grid as the owner of the software

I patch only the GI home. I can patch the database later.

Preparation

I need to download:

  1. The base release of Oracle Grid Infrastructure (LINUX.X64_193000_grid_home.zip) from oracle.com or Oracle Software Delivery Cloud.
  2. Latest OPatch from My Oracle Support (6880880).
  3. The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I will use the combo patch (34449117).

You can use AutoUpgrade to easily download GI patches.

I place the software in /u01/software.

How to Patch Oracle Grid Infrastructure 19c

1. Prepare a New Grid Home

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

  1. I need to create a folder for the new Grid Home. I must do this as root on all nodes in my cluster (copenhagen1 and copenhagen2):

    [root@copenhagen1]$ mkdir -p /u01/app/19.17.0/grid
    [root@copenhagen1]$ chown -R grid:oinstall /u01/app/19.17.0
    [root@copenhagen1]$ chmod -R 775 /u01/app/19.17.0
    
    [root@copenhagen2]$ mkdir -p /u01/app/19.17.0/grid
    [root@copenhagen2]$ chown -R grid:oinstall /u01/app/19.17.0
    [root@copenhagen2]$ chmod -R 775 /u01/app/19.17.0
    
  2. I switch to the Grid Home owner, grid.

  3. I ensure that there is passwordless SSH access between all the cluster nodes. It is a requirement for the installation, but sometimes it is disabled to strengthen security:

    [grid@copenhagen1]$ ssh copenhagen2 date
    
    [grid@copenhagen2]$ ssh copenhagen1 date
    
  4. I extract the base release of Oracle Grid Infrastructure into the new Grid Home. I work on one node only:

    [grid@copenhagen1]$ export NEWGRIDHOME=/u01/app/19.17.0/grid
    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
    

    Optionally, you can use a golden image (see the sketch at the end of this section).

  5. I update OPatch to the latest version:

    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ rm -rf OPatch
    [grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  6. Then, I check the Oracle Grid Infrastructure prerequisites. I am good to go if the check doesn't write any error messages to the console:

    [grid@copenhagen1]$ export ORACLE_HOME=$NEWGRIDHOME
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh -executePrereqs -silent
    
  7. I want to apply the 19.17.0 Release Update while I install the Grid Home. To do that, I must extract the patch file:

     [grid@copenhagen1]$ cd /u01/software
     [grid@copenhagen1]$ unzip -oq p34449117_190000_Linux-x86-64.zip -d 34449117
    
    • The combo patch contains the GI bundle patch which consists of:
      • OCW Release Update (patch 34444834)
      • Database Release Update (34419443)
      • ACFS Release Update (34428761)
      • Tomcat Release Update (34580338)
      • DBWLM Release Update (33575402)
    • I will apply all of them.
  8. Finally, I can install the new Grid Home:

    • I need to update the environment variables.
    • CLUSTER_NODES is a comma-separated list of all the nodes in my cluster.
    • The parameter -applyRU must point to the directory holding the OCW Release Update.
    • The parameter -applyOneOffs is a comma-separated list of the paths to each of the other Release Updates in the GI bundle patch.
    [grid@copenhagen1]$ export ORACLE_BASE=/u01/app/grid
    [grid@copenhagen1]$ export ORA_INVENTORY=/u01/app/oraInventory
    [grid@copenhagen1]$ export CLUSTER_NAME=$(olsnodes -c)
    [grid@copenhagen1]$ export CLUSTER_NODES=$(olsnodes | tr '\n' ','| sed 's/,\s*$//')
    [grid@copenhagen1]$ cd $ORACLE_HOME
    [grid@copenhagen1]$ ./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
       -applyRU /u01/software/34449117/34449117/34416665 \
       -applyOneOffs /u01/software/34449117/34449117/34419443,/u01/software/34449117/34449117/34428761,/u01/software/34449117/34449117/34580338,/u01/software/34449117/34449117/33575402 \
       -responseFile $ORACLE_HOME/install/response/gridsetup.rsp \
       INVENTORY_LOCATION=$ORA_INVENTORY \
       ORACLE_BASE=$ORACLE_BASE \
       SELECTED_LANGUAGES=en \
       oracle.install.option=CRS_SWONLY \
       oracle.install.asm.OSDBA=asmdba \
       oracle.install.asm.OSOPER=asmoper \
       oracle.install.asm.OSASM=asmadmin \
       oracle.install.crs.config.ClusterConfiguration=STANDALONE \
       oracle.install.crs.config.configureAsExtendedCluster=false \
       oracle.install.crs.config.clusterName=$CLUSTER_NAME \
       oracle.install.crs.config.gpnp.configureGNS=false \
       oracle.install.crs.config.autoConfigureClusterNodeVIP=false \
       oracle.install.crs.config.clusterNodes=$CLUSTER_NODES
    
    • Although the script says so, I don’t run root.sh yet.
    • I install it in silent mode, but I could use the wizard instead.
    • You need to install the new GI home in a way that matches your environment.
    • For inspiration, you can check the response file used for the previous Grid Home when setting the various parameters.
    • If I have one-off patches to install, I can add them to the -applyOneOffs parameter.
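Regarding the golden image option mentioned in step 4: if you already have a Grid Home at the desired patch level, you can create a gold image from it and extract that instead of the base release plus the Release Update. A sketch with a hypothetical destination directory:

    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh -createGoldImage \
       -destinationLocation /u01/software/images -silent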

2. Switch to the new Grid Home

Now, I can complete the patching process by switching to the new Grid Home. I do this one node at a time. Step 2 involves downtime.

  1. First, on copenhagen1, I switch to the new Grid Home:
    [grid@copenhagen1]$ export ORACLE_HOME=/u01/app/19.17.0/grid
    [grid@copenhagen1]$ export CURRENT_NODE=$(hostname)
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh \
       -silent -switchGridHome \
       oracle.install.option=CRS_SWONLY \
       ORACLE_HOME=$ORACLE_HOME \
       oracle.install.crs.config.clusterNodes=$CURRENT_NODE \
       oracle.install.crs.rootconfig.executeRootScript=false
    
  2. Then, I run the root.sh script as root:
    • This step restarts the entire GI stack, including resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
    • In that period, GI marks the services as OFFLINE so users can connect to other nodes.
    • If my database listener runs out of the Grid Home, GI will move it to the new Grid Home, including copying listener.ora.
    • Optionally, if I want a more graceful approach, I can manually stop the services and perform draining (see the sketch after this list).
    • In the end, GI restarts the resources (databases and the like).
    [root@copenhagen1]$ /u01/app/19.17.0/grid/root.sh
    
  3. I update any profiles (e.g., .bashrc) and other scripts referring to the Grid Home.
  4. I connect to the other node, copenhagen2, and repeat steps 2.1 to 2.3. I double-check that the CURRENT_NODE environment variable gets updated to copenhagen2.
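For the graceful approach mentioned in step 2, I can stop services with a drain timeout before running root.sh. The database name, service name, and timeout below are just examples:

    [oracle@copenhagen1]$ srvctl stop service -db <db_unique_name> -service <service_name> \
       -node copenhagen1 -drain_timeout 60 -stopoption IMMEDIATE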

That’s it! I have now patched my Grid Infrastructure deployment.

Later on, I can patch my databases as well.

A Word about the Directory for the New Grid Home

Be careful when choosing a location for the new Grid Home. The documentation lists some requirements you should be aware of.

In my demo environment, the existing Grid Home is:

/u01/app/19.0.0.0/grid

Since I am patching to 19.17.0, I think it makes sense to use:

/u01/app/19.17.0/grid

If your organization has a different naming standard, that’s fine. Just ensure you comply with the requirements specified in the documentation.

Don’t Forget to Clean Your Room

At a future point, I need to remove the old Grid Home. I use the deinstall tool in the Grid Home. I execute the command on all nodes in my cluster:

$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
$ export ORACLE_HOME=$OLD_GRID_HOME
$ $ORACLE_HOME/deinstall/deinstall -local

I will wait until:

  • I have seen the new Grid Home run without problems for a week or two.
  • I have patched my Oracle Databases managed by GI.
  • I have seen my Oracle Databases run without GI-related problems for a week or two.

Happy Patching!

Appendix

Windows

Oracle supports this functionality, SwitchGridHome, on Microsoft Windows starting from Oracle Database 23ai.

AIX

Check this out: Grid Infrastructure 19c Out-Of-Place Patching Fails on AIX

Further Reading

Other Blog Posts in This Series

Patching Oracle Grid Infrastructure 19c – Beginner’s Guide

This is the start of a blog post series on patching Oracle Grid Infrastructure 19c (GI). It is supposed to be easy to follow, so I may have skipped a detail here and there.

I know my way around database patching. I have done it countless times. When it comes to GI, it’s the other way around. I have never really done it in the real world (i.e., before joining Oracle) and my knowledge was limited. I told my boss, Mike, and he gave me a challenge: Learn about it by writing a blog post series.

Why Do I Need to Patch Oracle Grid Infrastructure

Like any other piece of software, you need to patch GI to fix security vulnerabilities and bugs.

You should keep the GI and Oracle Database patch level in sync. This means that you need to patch GI and your Oracle Database at the same cadence. Ideally, that cadence is quarterly.

It is supported to run GI and Oracle Database at different patch levels as long as they are on the same release. GI is also certified to run some of the older Oracle Database releases. This is useful in upgrade projects. Check Oracle Clusterware (CRS/GI) – ASM – Database Version Compatibility (Doc ID 337737.1) for details.

A few examples:

GI       Database  Supported
19.18.0  19.18.0   Yes – recommended
19.16.0  19.18.0   Yes
19.18.0  19.16.0   Yes
19.18.0  11.2.0.4  Yes – used during upgrade, for instance
19.18.0  21.9.0    No

If possible and not too cumbersome, I recommend that you first patch GI and then Oracle Database. Some prefer to patch the two components in two separate operations, while others do it in one operation.

Which Patches Should You Apply to Oracle Grid Infrastructure

You should apply the quarterly Release Updates for Oracle Grid Infrastructure, i.e., the GI bundle patches.

Whether you download the bundle patches individually or go with the combo patch is a matter of personal preference. Ultimately, they contain the same patches.

Some prefer an N-1 approach: when the April Release Update comes, they patch with the previous one from January; always one quarter behind. For stability reasons, I assume.

What about OJVM patches for GI? The short answer is no; you don't apply OJVM patches to the GI home.

Which Method Do I Use For Patching

You can patch in two ways:

  • In-place patching
  • Out-of-place patching
In-place patching:
  • You apply patches to an existing Grid Home.
  • You need disk space for the patches.
  • You patch the existing Grid Home. When you start patching a node, GI drains all connections and moves services to other nodes. The node is down during patching.
  • Longer node downtime.
  • No changes to profile and scripts.

Out-of-place patching:
  • You apply patches to a new Grid Home.
  • You need disk space for a brand new Grid Home and the patches.
  • You create and patch a new Grid Home without downtime. You complete patching by switching to the new Grid Home. The node is down only during switching.
  • Shorter node downtime.
  • Profile, scripts and the like must be updated to reflect the new Grid Home.
  • My recommended method.

Note: When I write node downtime, it does not necessarily mean database downtime. I discuss this shortly.

In other words:

In-place patching replaces the Oracle Clusterware software with the newer version in the same Grid home. Out-of-place upgrade has both versions of the same software present on the nodes at the same time, in different Grid homes, but only one version is active.

Oracle Fleet Patching and Provisioning

When you have more systems to manage, it is time to consider Fleet Patching and Provisioning (FPP).

Oracle Fleet Patching & Provisioning is the recommended solution for performing lifecycle operations (provisioning, patching & upgrades) across entire Oracle Grid Infrastructure and Oracle RAC Database fleets and the default solution used for Oracle Database Cloud services

It will make your life so much easier; more about that in a later blog post.

Zero Downtime Oracle Grid Infrastructure Patching

As of 19.16.0 you can also do Zero Downtime Oracle Grid Infrastructure Patching (ZDOGIP).

Use the zero-downtime Oracle Grid Infrastructure patching method to keep your Oracle RAC database instances running and client connections active during patching.

ZDOGIP is an extension to out-of-place patching, but ZDOGIP will not update the operating system drivers and will not bring down the Oracle stack (database instance, listener, etc.). The new GI takes over control of the Oracle stack without users noticing. However, to update the operating system drivers, you must eventually take down the node. You can postpone that to a later point in time.

More details about ZDOGIP in a later blog post.

What about Oracle Database Downtime

When you patch GI on a node, the node is down. You don’t need to restart the operating system itself, but you do shut down the entire GI stack, including everything GI manages (database, listeners etc.).

What does that mean for Oracle Database?

Single Instance

If you have a single instance database managed by GI, your database is down during patching. Your users will experience downtime. By using out-of-place patching, you can reduce downtime.

Data Guard

If you have a Data Guard configuration, you can hide the outage from the end users.

First, you patch GI on your standby databases, then perform a switchover, and finally patch GI on the former primary database.

The only interruption is the switchover: a brownout period while the database switches roles. During the brownout, the database appears to hang, but under the hood, the connections wait for the role switch to complete and then connect to the new primary database.

If you have configured your application properly, it will not encounter any ORA-errors. Your users experience a short hang and continue as if nothing had happened.

RAC

If you have a RAC database, you can perform the patching in a rolling manner – node by node.

When you take down a node for patching, GI tells connections to drain from the affected instances and connect to other nodes.

If your application is properly configured, it will react to the drain events and connect seamlessly to another instance. The end users will not experience any interruption nor receive any errors.

If you haven't configured your application properly or your application doesn't react in due time, the connections will be forcefully terminated. How that affects your users depends on the application. But it won't look pretty.

Unless you configure Application Continuity. If so, the database can replay any in-flight transaction. From a user perspective, all looks fine. They won’t even notice that they have connected to a new instance and that the database replayed their transaction.

Happy Patching!

Appendix

Further Reading

Other Blog Posts in This Series

Can I Run Datapatch When Users Are Connected

The short answer is: Yes! The longer answer is: Yes, but on very busy systems or in certain situations, you might experience a few hiccups.

The obvious place to look for the answer would be in the documentation. Unfortunately, there is no Patching Guide similar to the Upgrade Guide. The information in this blog post is pieced together from many different sources.

A few facts about patching with Datapatch:

  • The database must be open in read write mode.
  • You can’t run Datapatch on a physical standby database – even if it’s open (Active Data Guard).
  • A patch is not fully installed until you have executed Datapatch successfully.

How To

First, let me state that it is fully supported to run Datapatch on a running database with users connected.

The procedure:

  1. Install a new Oracle Home and use OPatch to apply the desired patches.
  2. Shut down the database.
  3. Restart the database in the new, patched Oracle Home.
  4. Downtime is over! Users are allowed to connect to the database.
  5. Execute ./datapatch -verbose.
  6. End of procedure. The patch is now fully applied.
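As a concrete illustration, steps 2 to 5 for a single-instance database managed by GI could look like this; the database name and paths are hypothetical:

srvctl stop database -db MYDB
srvctl modify database -db MYDB -oraclehome /u01/app/oracle/product/19.19.0
srvctl start database -db MYDB
# Downtime is over; users may reconnect while Datapatch completes the patch installation
/u01/app/oracle/product/19.19.0/OPatch/datapatch -verbose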

Often, users move step 4 (Downtime is over) to the end of the procedure. That's of course also perfectly fine, but it extends the downtime and is often unnecessary.

What About RAC and Data Guard

The above procedure is exactly what happens in a rolling patch apply on a RAC database. When you perform a rolling patch apply on a RAC database, there is no downtime at all. You use opatchauto to patch a RAC database. opatchauto restarts all instances of the database in the patched Oracle Home in a rolling manner. Finally, it executes datapatch on the last node. Individual instances are down temporarily, but the database is always up.

It is a similar situation when you use the Standby First Patch Apply. First, you restart all standby databases in the patched Oracle Home. Then, you perform a switchover and restart the former primary database in the patched Oracle Home. Finally, you execute datapatch to complete the patch installation. You must execute datapatch on the primary database.

Either way, don’t use Datapatch until all databases or instances run on the new, patched Oracle Home.

That’s It?

Yes, but I did write initially that there might be hiccups.

Waits

Datapatch connects to the database like any other session to make changes inside the database. These changes could be:

  • Creating new tables
  • Altering existing tables
  • Creating or altering views
  • Recreating PL/SQL packages like DBMS_STATS

Imagine this scenario:

  1. You restart the database in the patched Oracle Home.
  2. A user connects and starts to use DBMS_STATS.
  3. You execute datapatch.
    1. Datapatch must replace DBMS_STATS to fix a bug.
    2. Datapatch executes CREATE OR REPLACE PACKAGE SYS.DBMS_STATS .....
    3. The Datapatch database session goes into a wait.
  4. The user is done with DBMS_STATS.
  5. The Datapatch session comes out of the wait and replaces the package.

In this scenario, the patching procedure was prolonged due to the wait. But it completed eventually.

Hangs

From time to time, we are told that Datapatch hangs. Most likely, it is not a real hang, but just a wait on a lock. You can identify the blocking session by using How to Analyze Library Cache Timeout with Associated: ORA-04021 ‘timeout occurred while waiting to lock object %s%s%s%s%s.’ Errors (Doc ID 1486712.1).

You might even want to kill the blocking session to allow Datapatch to do its work.
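The MOS note describes the full method. As a generic starting point, you can look for sessions that block others:

SQL> select sid, serial#, username, event, blocking_session
     from v$session
     where blocking_session is not null;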

Timeouts

What will happen in the above scenario if the user never releases the lock on DBMS_STATS? By default, Datapatch waits for 15 minutes (controlled by _kgl_time_to_wait_for_locks) before throwing an error:

ORA-04021: timeout occurred while waiting to lock object

To resolve this problem, restart Datapatch and ensure that there are no blocking sessions. Optionally, increase the DDL timeout:

./datapatch -ddl_lock_timeout <time-in-sec>

Really Busy Databases

I recommend patching at off-peak hours to reduce the likelihood of hitting the above problems.

If possible, you can also limit the activity in the database while you perform the patching. If your application uses, e.g., DBMS_STATS, and locking on that object is often a problem, you can hold off those sessions for a little while.

The Usual Suspects

Based on my experience, when there is a locking situation, these are often the culprits:

  • Scheduler Jobs – if you have jobs running very frequently, they may all try to start when you restart your database in the new Oracle Home. Suspend the workload temporarily by setting job_queue_processes to 0 (see the sketch after this list).
  • Advanced Queuing – if you have a lot of activity happening via AQ, you can suspend it temporarily by setting aq_tm_processes to 0. If you disable the scheduler, you also disable AQ.
  • Materialized Views – when the database refreshes materialized views, it uses internal functionality (or dependent objects) that Datapatch needs to replace. By disabling the scheduler, you also disable the materialized view refreshes.
  • Backup jobs – I have seen several situations where Datapatch couldn’t replace the package dbms_backup_restore because the backup system took archive backups.
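Here is a sketch of suspending the scheduler and AQ as described above. Note your original values before changing anything; the values used when restoring below are examples only:

SQL> alter system set job_queue_processes=0;
SQL> alter system set aq_tm_processes=0;
-- After Datapatch completes, restore the original values, for example:
SQL> alter system set job_queue_processes=160;
SQL> alter system set aq_tm_processes=1;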

Last Resort

If you want to be absolutely sure that no one interferes with your patching, use this approach. But it means downtime:

  1. SQL> startup restrict
  2. ./datapatch -verbose
  3. SQL> alter system disable restricted session;

I don't recommend starting in upgrade mode. To get out of upgrade mode, you need a database restart, which extends the downtime window.

Datapatch And Resources

How many resources does Datapatch need? Should I be worried about Datapatch depleting the system?

No, you should not. The changes that Datapatch needs to make are not resource-intensive. However, a consequence of the DDL statements might be object invalidation. But even here, you should not worry. Datapatch will automatically recompile any ORACLE_MAINTAINED object that was invalidated by the patch apply. But the recompilation happens serially, i.e., it needs fewer resources.

Of course, if your system is running at 99% capacity, it might be a problem. On the other hand, if your system is at 99%, patching problems are probably the least of your worries.

What About OJVM

If you are using OJVM and you apply the OJVM bundle patch, things are a little different.

  • Oracle Database 21c – RAC rolling: Fully. Standby-First: No. Datapatch: No Datapatch downtime.
  • Oracle Database 19c and 18c – RAC rolling: Partial. Standby-First: No. Datapatch: No Datapatch downtime, but the java system is patched, which requires a ~10 second outage. Connected clients using java will receive ORA-29548.
  • Oracle Database 12.2 and 12.1 – RAC rolling: No. Standby-First: No. Datapatch: Datapatch must execute in upgrade mode.
  • Oracle Database 11.2.0.4 – RAC rolling: No. Standby-First: No. Datapatch: Similar to 12.2 and 12.1, except you don't use Datapatch.

Mike Dietrich also has a good blog that you might want to read: Do you need STARTUP UPGRADE for OJVM?

What About Oracle GoldenGate

You should stop Oracle GoldenGate when you execute datapatch. When datapatch is done, you can restart Oracle GoldenGate.

If you are manually recompiling objects after datapatch, I recommend that you restart Oracle GoldenGate after the recompilation.

The above applies even if the patches being applied do not contain any Oracle GoldenGate-specific patches.

Oracle GoldenGate uses several objects owned by SYS. When datapatch is running it might change some of those objects. In that case, unexpected errors might occur.

Recommendations

Based on my experience, these are my recommendations:

Before Patching

  • Recompile invalid objects (utlrp).
  • Perform a Datapatch sanity check ($ORACLE_HOME/OPatch/datapatch -sanity_checks).
  • Postpone your backup jobs.
  • Stop any Oracle GoldenGate processes that connect to the database.
  • Disable the scheduler.

Patching

  • Always use the latest OPatch.
  • Always use out-of-place patching, even for RAC databases.
  • Always enable verbose output in Datapatch ($ORACLE_HOME/OPatch/datapatch -verbose).

After Patching

  • If applicable, re-enable
    • Backup jobs.
    • Oracle GoldenGate processes.
    • The scheduler.
  • Check Datapatch output. If Datapatch failed to recompile any objects, a message is printed to the console. If you patch interactively, you can find the same information in the log files.

Still Don’t Believe Me?

In Autonomous Database (ADB), there is no downtime for patching. An ADB runs on RAC and patching is fully rolling. The automation tooling executes Datapatch while users are connected to the database.

Of course, one might run into the same issues described above. But Oracle has automation to handle the situation. If necessary, the database kills any sessions blocking Datapatch. During the defined maintenance window of your ADB, you may end up in a situation where a long-running session is terminated because it was blocking a Datapatch execution. But if you minimize your activities in the defined maintenance windows, the chances of that happening are minimal.

Conclusion

Go ahead and patch your database with Datapatch while users are connected.

Further Reading