Use Cluster Verification Utility (cluvfy) and Avoid Surprises

I guess no one really wants nasty surprises. That’s also true when you patch Oracle Grid Infrastructure 19c (GI). Luckily, you can prepare yourself using the Cluster Verification Utility and its odd-sounding command-line interface, cluvfy (I pronounce it cluffy).

The Grid Infrastructure Release Update readme says:

2.1.3 Patch Installation Checks

The Cluster Verification Utility (CVU) command line interface (CLUVFY) may be used to verify the readiness of the Grid Home to apply the patch. … The CLUVFY command line for patching ensures that the Grid Home can receive the new patch and also ensures that the patch application process completed successfully leaving the home in the correct state.

That sounds like a good idea. You can check the GI home before and after patching.

Where Is Cluster Verification Utility?

Although Cluster Verification Utility (CVU) is present in my Grid Infrastructure (GI) home, I always get the latest version from My Oracle Support. I find it via patch 30839369.

CVU is backward compatible, so I download the latest version, which is for Oracle Database 21.0.0.0.0.

  1. I must install CVU on one node only. CVU connects to the other nodes and checks them as well. I log on as grid:
    [grid@copenhagen1]$ export CVUHOME=/u01/app/grid/cvu
    [grid@copenhagen1]$ mkdir $CVUHOME
    
  2. I extract the zip file I downloaded from My Oracle Support patch 30839369:
    [grid@copenhagen1]$ cd $CVUHOME
    [grid@copenhagen1]$ unzip /u01/app/grid/patches/cvupack_linux_ol7_x86_64.zip   
    
  3. I check that it works:
    [grid@copenhagen1]$ export PATH=$CVUHOME/bin:$PATH
    [grid@copenhagen1]$ cluvfy -version
    Version 21.0.0.0.0 Build 011623x8664
    Full version 21.9.0.0.0
    

Check Before Patching

Before I start patching GI, I check my cluster with cluvfy. I log on as grid:

[grid@copenhagen1]$ export PATH=$CVUHOME/bin:$PATH
[grid@copenhagen1]$ cluvfy stage -pre patch

cluvfy prints the report to the screen. In this case, all is good; no problems were found:

Performing following verification checks ...

  cluster upgrade state ...PASSED
  OLR Integrity ...PASSED
  Hosts File ...PASSED
  Free Space: copenhagen2:/ ...PASSED
  Free Space: copenhagen1:/ ...PASSED
  OPatch utility version consistency ...PASSED
  Software home: /u01/app/19.0.0.0/grid ...PASSED
  ORAchk checks ...PASSED

Pre-check for Patch Application was successful.

Here are a few examples of problems detected by cluvfy (after the list, I show a quick way to verify these things manually):

  • No SSH connection between the nodes:
    User Equivalence ...FAILED (PRVG-2019, PRKC-1191)
    PRVF-4009 : User equivalence is not set for nodes: copenhagen2
    Verification will proceed with nodes: copenhagen1
    
  • Different versions of OPatch on the individual nodes:
    Performing following verification checks ...
    
       cluster upgrade state ...PASSED
       OLR Integrity ...PASSED
       Hosts File ...PASSED
       Free Space: copenhagen2:/ ...PASSED
       Free Space: copenhagen1:/ ...PASSED
       OPatch utility version consistency ...WARNING (PRVH-0668)
    
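If cluvfy reports one of these problems, I can verify the usual suspects manually before rerunning the check. A minimal sketch, using the node names from my demo cluster:

  [grid@copenhagen1]$ # user equivalence requires passwordless SSH in both directions
  [grid@copenhagen1]$ ssh copenhagen2 date
  [grid@copenhagen2]$ ssh copenhagen1 date

  [grid@copenhagen1]$ # the OPatch version should match on all nodes
  [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch version
  [grid@copenhagen2]$ $ORACLE_HOME/OPatch/opatch version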

Check After Patching

The patch readme advises me to rerun cluvfy after patching:

[grid@copenhagen1]$ export PATH=$CVUHOME/bin:$PATH
[grid@copenhagen1]$ cluvfy stage -post patch

Luckily, I patched GI without any problems. cluvfy tells me all is good:

Performing following verification checks ...

  cluster upgrade state ...PASSED

Post-check for Patch Application was successful.

Happy Cluvfy’ing!

Appendix

Cluster Verification Utility in Your Grid Infrastructure Home

I can also find Cluster Verification Utility (CVU) in my GI home.

[grid@copenhagen1]$ cd $ORACLE_HOME/bin
[grid@copenhagen1]$ ls -l cluvfy
-rwxr-xr-x 1 root oinstall 10272 Feb 20 11:47 cluvfy

The tool gets updated with Release Updates. However, when I try to use it, it prints this warning:

[grid@copenhagen1]$ cd $ORACLE_HOME/bin
[grid@copenhagen1]$ ./cluvfy stage -pre patch
This software is "223" days old. It is a best practice to update the CRS home by downloading and applying the latest release update. Refer to MOS note 756671.1 for more details.

Even if I patch my GI home with the newest Release Update, I am not guaranteed to get the latest version of CVU.

Thus, I recommend always getting the latest version of CVU from My Oracle Support.

How to Avoid Interruptions When You Patch Oracle Grid Infrastructure 19c

In previous blog posts, I have shown you how to patch Oracle Grid Infrastructure 19c (GI). When you patch GI, it happens in a rolling manner, node by node. The database is always up, but each instance is temporarily down.

If my application is connected to an instance going down, how do I ensure my application is not interrupted?

My demo system:

  • Is a 2-node RAC. The nodes are named copenhagen1 and copenhagen2.
  • Database is called CDB1_COPENHAGEN
  • PDB is called SALES

Connect Using A Custom Service

  1. First, I create a dedicated service that my application must use:

    [oracle@copenhagen1]$ srvctl add service \
       -db CDB1_COPENHAGEN \
       -pdb SALES \
       -service SALESGOLD \
       -preferred CDB11,CDB12 \
       -failover_restore AUTO \
       -failoverretry 1 \
       -failoverdelay 3 \
       -commit_outcome TRUE \
       -failovertype AUTO \
       -replay_init_time 600 \
       -retention 86400 \
       -notification TRUE \
       -drain_timeout 300 \
       -stopoption IMMEDIATE
    
    • I want to use all nodes in my cluster for this service, so I set preferred to all instances of my database.
    • I set failovertype AUTO and failover_restore AUTO to enable Transparent Application Continuity for the service.
    • When I stop the service, the sessions have 5 minutes to move to another instance, as specified by drain_timeout.
    • If the sessions don’t drain in due time, the stopoption parameter specifies that the remaining sessions are killed immediately.
  2. Now I start the service:

    [oracle@copenhagen1]$ srvctl start service \
       -db CDB1_COPENHAGEN \
       -service SALESGOLD
    
  3. I add an alias called SALESGOLD to tnsnames.ora:

    SALESGOLD =
    (DESCRIPTION =
       (CONNECT_TIMEOUT=90)(RETRY_COUNT=50)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)
       (ADDRESS_LIST =
          (LOAD_BALANCE=on)
          (ADDRESS = (PROTOCOL = TCP)(HOST=copenhagen-scan)(PORT=1521))
       )
       (CONNECT_DATA=
          (SERVICE_NAME = SALESGOLD)
       )
    )
    
    • I connect to SALESGOLD service using the SERVICE_NAME attribute.
    • I connect via the SCAN listener.
  4. I configure my application to connect via the SALESGOLD alias.

You should always configure your application to connect to a custom service. The database has default services, but you should not use them for your application. Custom services are the way to go if you want a highly available system.

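Before moving on, I can verify the service definition and see where the service is currently running. A quick sanity check, using the names from my demo system:

[oracle@copenhagen1]$ srvctl config service \
   -db CDB1_COPENHAGEN \
   -service SALESGOLD
[oracle@copenhagen1]$ srvctl status service \
   -db CDB1_COPENHAGEN \
   -service SALESGOLD
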
Let’s Patch

It doesn’t matter whether you use in-place or out-of-place patching. If you are using out-of-place patching, do as much of the preparation as possible in advance. This procedure starts right before a node goes down.

  1. First, I want to patch GI on copenhagen1.
  2. I stop my service SALESGOLD on that node only:
    [oracle@copenhagen1]$ srvctl stop service \
       -db CDB1_COPENHAGEN \
       -service SALESGOLD \
       -instance CDB11 \
       -force
    
    • I use the instance parameter to stop the service only on the node I am about to patch. In this case, CDB11 is the database instance name of my database CDB1_COPENHAGEN running on copenhagen1.
    • GI marks the service as OFFLINE on copenhagen1, and new connections will be directed to the other node.
    • GI sends out a FAN event announcing that the instance is going down. The Oracle driver on the clients detects the event.
    • When I created the service, I set the drain timeout to 300. This gives database sessions 5 minutes to finish their work (after this list, I show a quick way to monitor the draining sessions).
      • If desired, I can override the value using -drain_timeout.
    • If a session becomes idle before the timeout, the driver on the client automatically reconnects to the other node. The session continues on the other node; no error message is displayed.
    • If the session is still active after the timeout, it is killed immediately. I defined this when I created the service using the stopoption parameter.
      • I enabled Transparent Application Continuity (TAC) on my database service using the failovertype and failover_restore parameters. TAC detects that the session is killed and attempts to intervene.
      • If my application is fit for TAC, a new connection is made, any in-flight transaction is replayed in the new session; no error message is displayed.
      • If my application is not fit for TAC or if I was doing something that is not replayable, my session is terminated, and my application receives an error.
  3. Now, all sessions using the SALESGOLD service have been drained from copenhagen1. I can bring down the GI stack to patch.
    • I use opatchauto apply for in-place patching.
    • Or, I use opatchauto apply -switch-clone for out-of-place patching.
    • Or, I use root.sh for out-of-place patching with switchGridHome.
    • Or, whatever method suits my environment. It doesn’t matter.
  4. When the GI stack is restarted, my service SALESGOLD is automatically restarted. Now, my application can connect to the instance on copenhagen1 again.
  5. I ensure that the service is started:
    [grid@copenhagen1]$ crsctl stat resource -t
    
  6. I repeat steps 2-5 for the second node, copenhagen2.

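To watch the draining mentioned in step 2, I can count the sessions per instance that still use the service. A minimal sketch in SQL, using the service name from my demo system:

select inst_id, count(*) from gv$session where service_name = 'SALESGOLD' group by inst_id;

When copenhagen1 no longer shows any sessions for the service, draining is complete.
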
My Application Can’t Use Application Continuity

If, for some reason, you are not ready to use Application Continuity or Transparent Application Continuity yet, you can still use this procedure. You can change the failover_restore setting on the service to something that suits your application. You can still perform draining, but you need to find another method for dealing with those sessions that don’t drain in due time.

One option is to set the drain_timeout parameter high enough to allow everyone to finish their work and reconnect to another node. Just be aware that the other node must be able to handle the entire workload during that period.
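
For example, here is a sketch of raising the drain timeout on my demo service (the value 900 is purely illustrative):

[oracle@copenhagen1]$ srvctl modify service \
   -db CDB1_COPENHAGEN \
   -service SALESGOLD \
   -drain_timeout 900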

If draining doesn’t work for you either (or the patch is non-rolling), you must take an outage to patch. You can follow the Minimum Downtime approach described in Rolling Patch – OPatch Support for RAC (Doc ID 244241.1):

  • Shut down GI stack on node 1
  • Patch node 1
  • Shut down GI stack on node 2
  • Patch node 2
  • Shut down GI stack on node 3
  • Start up node 1 and node 2
  • Patch node 3
  • Start up node 3

Happy Patching!


New Webinars Coming Up

I am really excited to announce two new webinars:

  • Data Pump Best Practices and Real World Scenarios – April 5, 2023, 16:00 CET
  • Release and Patching Strategies for Oracle Database 23c – May 10, 2023, 16:00 CET

Oracle Database 19c Upgrade Virtual Classroom

You can sign up here.

The entire team, Roy, Mike, Bill, Rodrigo, and myself, are working hard to polish all the details.

Data Pump Best Practices and Real World Scenarios

In short: It’s all the stuff we couldn’t fit into our last Data Pump webinar.

Here’s the full abstract:

> We promised to share more information in our last Data Pump Deep Dive With Development seminar. And here we are back again. Data Pump best practices is the topic we would like to emphasize today. This will include some common tips and tricks but target especially parallel optimizations and transformation. It is quite common that you restructure objects and types when you use Data Pump for a migration. So we will give a detailed overview on the most common scenarios. This will guide us directly to real world scenarios where we’ll demonstrate several of those best practices used by customers.

Release and Patching Strategies for Oracle Database 23c

This is a revamped version of our very first webinar, Release and Patching Strategy. It’s updated to reflect the latest changes, and we have included even more details and demos.

Last time, the interest in this webinar was huge, and we ended up maxing out our Zoom capacity. A lot of you couldn’t get in. So, you better be ready on time, or you might miss your seat.

The full abstract:

> This is a session every Oracle customer needs to attend. Oracle Database 23c, the next long-term support release, will be available sometime this year. Now it is time to refresh your knowledge about the best and most efficient strategies for your future release planning. Are there changes to the release numbering? Are there important changes regarding database patching? We will give you a complete overview on the available patch bundles and recent and future changes. We’ll discuss and showcase why a proper patching strategy is of vital importance – and how you can automate and optimize certain essential tasks.

But I Can’t Make It

Don’t worry. As usual, we will publish the recording on our YouTube channel and share the slides with you. Keep an eye on my Webinars page.

But it’s better to watch it live. The entire team will be there, and we will answer all your questions. I promise you: we won’t leave until all questions have been answered.

All Tech, No Marketing

Remember, our mantra is: All tech, no marketing.

These webinars are technical. This is the place for you if you want all the gory details and cool demos.

I hope to see you there!

How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place OPatchAuto

Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using the out-of-place method and opatchauto.

My demo system:

  • Is a 2-node RAC
  • Runs Oracle Linux
  • Is currently on 19.16.0, and I want to patch to 19.17.0

When I use this procedure, I patch the GI home only. I can patch the database later.

Preparation

I need to download:

  1. The latest OPatch from My Oracle Support (patch 6880880).
  2. The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I use patch 34416665.

You can use AutoUpgrade to easily download GI patches.

I place the software in /u01/software.

How to Patch Oracle Grid Infrastructure 19c

1. Make Ready for Patching

I can do this in advance. It doesn’t cause any downtime.

  1. I ensure that there is passwordless SSH access between all the cluster nodes:

    [grid@copenhagen1]$ ssh copenhagen2 date
    
    [grid@copenhagen2]$ ssh copenhagen1 date
    
  2. I update OPatch to the latest available version.

    • On all nodes (copenhagen1 and copenhagen2)
    • In the GI home as grid

    I just show how to do it for the GI home on one node:

    [grid@copenhagen1]$ cd $ORACLE_HOME
    [grid@copenhagen1]$ rm -rf OPatch
    [grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  3. I extract the patch file on both nodes. I show it for just the first node:

     [grid@copenhagen1]$ cd /u01/software
     [grid@copenhagen1]$ mkdir 34416665
      [grid@copenhagen1]$ mv p34416665_190000_Linux-x86-64.zip 34416665
     [grid@copenhagen1]$ cd 34416665
     [grid@copenhagen1]$ unzip p34416665_190000_Linux-x86-64.zip
    
  4. I check for patch conflicts.

    • I can find the details about which checks to make in the patch readme file.
    • I must check for conflicts in the GI home only.
    • Since I have the same patches on all nodes, I can check for conflicts on just one of the nodes.

    As grid with ORACLE_HOME set to the GI home:

    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34419443 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34444834 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34428761 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34580338 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/33575402 | grep checkConflictAgainstOHWithDetail
    

2. Clone Home and Patch

Like phase 1, I can do this in advance. No downtime.

  1. First, on copenhagen1, I clone the existing GI home. I set ORACLE_HOME to my existing GI home. As root:
    [root@copenhagen1]$ export ORACLE_HOME=/u01/app/19.0.0.0/grid
    [root@copenhagen1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
    [root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto \
       apply /u01/software/34416665/34416665 \
       -prepare-clone \
       -oh $ORACLE_HOME
    
    • opatchauto prompts me for the location of the new GI home. I choose the following location:
      • /u01/app/19.17.0/grid
    • Optionally, I can specify the new location in a response file. See appendix.
    • opatchauto clones the GI home and applies the patches.
  2. Next, I clone on the other node. On copenhagen2, I repeat step 1.
    • opatchauto automatically chooses the same locations for GI home as I used on the first node.

Do not touch the new, cloned homes. The next step will fail if you make any changes (like applying additional patches).

3. Switch to the New GI Home

Now, I can complete the patching process by switching to the new GI home. I do this one node at a time.

  1. First, on copenhagen1, I switch to the new GI home. ORACLE_HOME is still set to my existing GI home. As root:
    [root@copenhagen1]$ export ORACLE_HOME=/u01/app/19.0.0.0/grid
    [root@copenhagen1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
    [root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto apply -switch-clone
    
    • Be sure to start with the same node you did when you cloned the GI home (phase 2).
    • This step stops the entire GI stack, including the resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
    • In that period, GI marks the services as OFFLINE so users can connect to other nodes.
    • If my database listener runs out of the GI home, opatchauto will move it to the new GI home, including copying listener.ora.
    • Optionally, if I want a more graceful approach, I can manually stop the services and perform draining.
    • In the end, GI restarts using the new, patched GI home. GI restarts the resources (databases and the like) as well.
    • Proceed with the rest of the nodes as quickly as possible.
  2. I connect to the other node, copenhagen2, and repeat step 1.
    • If I had more nodes in my cluster, I would process them in the same order as I did the cloning (phase 2).
  3. I update any profiles (e.g., .bashrc) and other scripts referring to the GI home on all nodes.

That’s it! I have now patched my Grid Infrastructure deployment.

In my demo environment, each node was down for around 7 minutes. But the database remained up on other nodes all the time.
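
To confirm that the cluster now runs out of the new home at the expected patch level, I can run a quick check on each node. A sketch, assuming the new GI home location from above:

[grid@copenhagen1]$ export ORACLE_HOME=/u01/app/19.17.0/grid
[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch lspatches
[grid@copenhagen1]$ $ORACLE_HOME/bin/crsctl query crs activeversion -f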

For simplicity, I have removed some of the prechecks. Please follow the patch readme file instructions and perform all the described prechecks.

In this blog post, I decided to perform the out-of-place patching as a two-step process. This gives me more control. I can also do it all in just one operation. Please see Grid Infrastructure Out of Place ( OOP ) Patching using opatchauto (Doc ID 2419319.1) for details.

Later on, I patch my database.

A Word about the Directory for the New GI Home

Be careful when choosing a location for the new GI home. The documentation lists some requirements you should be aware of.

In my demo environment, the existing GI home is:

/u01/app/19.0.0.0/grid

Since I am patching to 19.17.0, I think it makes sense to use:

/u01/app/19.17.0/grid

If your organization has a different naming standard, that’s fine. Just ensure you comply with the requirements specified in the documentation.

Don’t Forget to Clean Your Room

At a future point, I need to remove the old GI home. I use the deinstall tool in the old GI home.

I execute the command on all nodes in my cluster:

$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
$ export ORACLE_HOME=$OLD_GRID_HOME
$ $ORACLE_HOME/deinstall/deinstall -local

But I wait a week or two before doing so, to ensure that everything runs smoothly on the new patch level.

Happy Patching!

Appendix

Installing Multiple Patches

In the example above, I am just installing one patch. Often, you need to install multiple patches, like a Release Update, Monthly Recommended Patches (MRPs), and perhaps also one-offs.

OPatchAuto can install multiple patches in one operation. You can use -phBaseDir to specify a directory where you place all the patches as subdirectories:

opatchauto apply -phBaseDir <directory> ...
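
For example, here is a sketch of such a base directory; the directory name and subdirectory placeholders are illustrative, each one holding an extracted patch:

[root@copenhagen1]$ ls /u01/software/patches
34416665  <mrp_dir>  <oneoff_dir>
[root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto apply \
   -phBaseDir /u01/software/patches \
   -oh $ORACLE_HOME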

How to Use Prepare-Clone in Silent Mode

When I execute opatchauto apply -prepare-clone, the program prompts for the location of the new GI home. For an unattended execution, I can specify the old and new GI home in a file, and reference that file using the -silent parameter.

[root@copenhagen1]$ cat prepare-clone.properties
/u01/app/19.0.0.0/grid=/u01/app/19.17.0/grid

[root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto apply ... -prepare-clone -silent prepare-clone.properties -oh $ORACLE_HOME


How to Patch Oracle Grid Infrastructure 19c Using In-Place OPatchAuto

Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using the in-place method and opatchauto.

My demo system:

  • Is a 2-node RAC
  • Runs Oracle Linux
  • Is currently on 19.16.0, and I want to patch to 19.17.0

When I use this procedure, I patch the GI home only. However, opatchauto has the option of patching the database home as well.

Preparation

I need to download:

  1. The latest OPatch from My Oracle Support (patch 6880880).
  2. The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I use patch 34416665.

You can use AutoUpgrade to easily download GI patches.

I place the software in /u01/software.

How to Patch Oracle Grid Infrastructure 19c

1. Make Ready for Patching

I can do this in advance. It doesn’t cause any downtime.

  1. I update OPatch to the latest available version.

    • On all nodes (copenhagen1 and copenhagen2)
    • In the GI home as grid

    I just show how to do it for the GI home on one node:

    [grid@copenhagen1]$ cd $ORACLE_HOME
    [grid@copenhagen1]$ rm -rf OPatch
    [grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  2. I extract the patch file on both nodes. I show it for just the first node:

     [grid@copenhagen1]$ cd /u01/software
     [grid@copenhagen1]$ mkdir 34416665
      [grid@copenhagen1]$ mv p34416665_190000_Linux-x86-64.zip 34416665
     [grid@copenhagen1]$ cd 34416665
     [grid@copenhagen1]$ unzip p34416665_190000_Linux-x86-64.zip
    
  3. I check for patch conflicts.

    • In the patch readme file I can find the details about which checks to make.
    • I must check for conflicts in the GI home only.
    • Since I have the same patches on all nodes, I can check for conflicts on just one of the nodes.

    As grid with ORACLE_HOME set to the GI home:

    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34419443 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34444834 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34428761 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34580338 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/33575402 | grep checkConflictAgainstOHWithDetail
    

2. Start Patching

Now, I start the actual patch process. I do this one node at a time.

  1. First, as root on copenhagen1, I use opatchauto to patch my GI home:

    [root@copenhagen1]$ export ORACLE_HOME=/u01/app/19.0.0.0/grid
    [root@copenhagen1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
    [root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto \
       apply /u01/software/34416665/34416665 \
       -oh $ORACLE_HOME
    
    • This step stops the entire GI stack, including the resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
    • In that period, GI marks the services as OFFLINE so users can connect to other nodes.
    • Optionally, if I want a more graceful approach, I can manually stop the services, and perform draining.
    • opatchauto applies the patches to the GI home.
    • In the end, opatchauto restarts GI, including the managed resources (databases and the like).
    • Since I restricted the operation to the GI home with -oh, opatchauto does not patch the database home.
  2. I connect to the other node, copenhagen2, and repeat step 1.

That’s it! I have now patched my Grid Infrastructure.

In my demo environment, each node was down for around 15 minutes. But the database remained up on other nodes all the time.
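
After both nodes are patched, I like to verify that the whole stack is healthy and that the patches are in the inventory. A quick sketch:

[grid@copenhagen1]$ crsctl check cluster -all
[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch lspatches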

For simplicity, I have removed some of the prechecks. Please follow the patch readme file instructions and perform all the described prechecks.

Later on, I can patch my database.

Happy Patching!

Appendix

Installing Multiple Patches

In the example above, I am just installing one patch. Often, you need to install multiple patches, like a Release Update, Monthly Recommended Patches (MRPs), and perhaps also one-offs.

OPatchAuto can install multiple patches in one operation. You can use -phBaseDir to specify a directory where you place all the patches as subdirectories:

opatchauto apply -phBaseDir <directory> ...


How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place SwitchGridHome

Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using the out-of-place method and the -switchGridHome parameter.

My demo system:

  • Is a 2-node RAC
  • Runs Oracle Linux
  • Is currently on 19.16.0, and I want to patch to 19.17.0
  • Uses grid as the owner of the software

I patch only the GI home. I can patch the database later.

Preparation

I need to download:

  1. The base release of Oracle Grid Infrastructure (LINUX.X64_193000_grid_home.zip) from oracle.com or Oracle Software Delivery Cloud.
  2. The latest OPatch from My Oracle Support (patch 6880880).
  3. The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I will use the combo patch (34449117).

You can use AutoUpgrade to easily download GI patches.

I place the software in /u01/software.

How to Patch Oracle Grid Infrastructure 19c

1. Prepare a New Grid Home

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

  1. I need to create a folder for the new Grid Home. I must do this as root on all nodes in my cluster (copenhagen1 and copenhagen2):

    [root@copenhagen1]$ mkdir -p /u01/app/19.17.0/grid
    [root@copenhagen1]$ chown -R grid:oinstall /u01/app/19.17.0
    [root@copenhagen1]$ chmod -R 775 /u01/app/19.17.0
    
    [root@copenhagen2]$ mkdir -p /u01/app/19.17.0/grid
    [root@copenhagen2]$ chown -R grid:oinstall /u01/app/19.17.0
    [root@copenhagen2]$ chmod -R 775 /u01/app/19.17.0
    
  2. I switch to the Grid Home owner, grid.

  3. I ensure that there is passwordless SSH access between all the cluster nodes. It is a requirement for the installation, but sometimes it is disabled to strengthen security:

    [grid@copenhagen1]$ ssh copenhagen2 date
    
    [grid@copenhagen2]$ ssh copenhagen1 date
    
  4. I extract the base release of Oracle Grid Infrastructure into the new Grid Home. I work on one node only:

    [grid@copenhagen1]$ export NEWGRIDHOME=/u01/app/19.17.0/grid
    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
    

    Optionally, you can use a golden image (a sketch of creating one follows at the end of this list).

  5. I update OPatch to the latest version:

    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ rm -rf OPatch
    [grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  6. Then, I check the Oracle Grid Infrastructure prerequisites. I am good to go if the check doesn’t write any error messages to the console:

    [grid@copenhagen1]$ export ORACLE_HOME=$NEWGRIDHOME
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh -executePrereqs -silent
    
  7. I want to apply the 19.17.0 Release Update while I install the Grid Home. To do that, I must extract the patch file:

     [grid@copenhagen1]$ cd /u01/software
     [grid@copenhagen1]$ unzip -oq p34449117_190000_Linux-x86-64.zip -d 34449117
    
    • The combo patch contains the GI bundle patch, which consists of:
      • OCW Release Update (patch 34444834)
      • Database Release Update (patch 34419443)
      • ACFS Release Update (patch 34428761)
      • Tomcat Release Update (patch 34580338)
      • DBWLM Release Update (patch 33575402)
    • I will apply all of them.
  8. Finally, I can install the new Grid Home:

    • I need to update the environment variables.
    • CLUSTER_NODES is a comma-separated list of all the nodes in my cluster.
    • The parameter -applyRU must point to the directory holding the Grid Infrastructure Release Update (34416665), as shown in the command below.
    • The parameter -applyOneOffs is a comma-separated list of the paths to each of the other Release Updates in the GI bundle patch.
    [grid@copenhagen1]$ export ORACLE_BASE=/u01/app/grid
    [grid@copenhagen1]$ export ORA_INVENTORY=/u01/app/oraInventory
    [grid@copenhagen1]$ export CLUSTER_NAME=$(olsnodes -c)
    [grid@copenhagen1]$ export CLUSTER_NODES=$(olsnodes | tr '\n' ','| sed 's/,\s*$//')
    [grid@copenhagen1]$ cd $ORACLE_HOME
    [grid@copenhagen1]$ ./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
       -applyRU /u01/software/34449117/34449117/34416665 \
       -applyOneOffs /u01/software/34449117/34449117/34419443,/u01/software/34449117/34449117/34428761,/u01/software/34449117/34449117/34580338,/u01/software/34449117/34449117/33575402 \
       -responseFile $ORACLE_HOME/install/response/gridsetup.rsp \
       INVENTORY_LOCATION=$ORA_INVENTORY \
       ORACLE_BASE=$ORACLE_BASE \
       SELECTED_LANGUAGES=en \
       oracle.install.option=CRS_SWONLY \
       oracle.install.asm.OSDBA=asmdba \
       oracle.install.asm.OSOPER=asmoper \
       oracle.install.asm.OSASM=asmadmin \
       oracle.install.crs.config.ClusterConfiguration=STANDALONE \
       oracle.install.crs.config.configureAsExtendedCluster=false \
       oracle.install.crs.config.clusterName=$CLUSTER_NAME \
       oracle.install.crs.config.gpnp.configureGNS=false \
       oracle.install.crs.config.autoConfigureClusterNodeVIP=false \
       oracle.install.crs.config.clusterNodes=$CLUSTER_NODES
    
    • Although the script says so, I don’t run root.sh yet.
    • I install it in silent mode, but I could use the wizard instead.
    • You need to install the new GI home in a way that matches your environment.
    • For inspiration, you can check the response file used for the previous Grid Home when setting the various parameters.
    • If I have one-off patches to install, I can add them to the -applyOneOffs parameter.

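As mentioned in step 4, I can deploy from a golden image instead of extracting the base release and patching it. Here is a sketch of creating a golden image from an already patched Grid Home (the destination path is illustrative):

[grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh -silent -createGoldImage \
   -destinationLocation /u01/software/images
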
2. Switch to the New Grid Home

Now, I can complete the patching process by switching to the new Grid Home. I do this one node at a time. Step 2 involves downtime.

  1. First, on copenhagen1, I switch to the new Grid Home:
    [grid@copenhagen1]$ export ORACLE_HOME=/u01/app/19.17.0/grid
    [grid@copenhagen1]$ export CURRENT_NODE=$(hostname)
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh \
       -silent -switchGridHome \
       oracle.install.option=CRS_SWONLY \
       ORACLE_HOME=$ORACLE_HOME \
       oracle.install.crs.config.clusterNodes=$CURRENT_NODE \
       oracle.install.crs.rootconfig.executeRootScript=false
    
  2. Then, I run the root.sh script as root:
    • This step restarts the entire GI stack, including resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
    • In that period, GI marks the services as OFFLINE so users can connect to other nodes.
    • If my database listener runs out of the Grid Home, GI will move it to the new Grid Home, including copying listener.ora.
    • Optionally, if I want a more graceful approach, I can manually stop the services, and perform draining.
    • In the end, GI restarts the resources (databases and the like).
    [root@copenhagen1]$ /u01/app/19.17.0/grid/root.sh
    
  3. I update any profiles (e.g., .bashrc) and other scripts referring to the Grid Home.
  4. I connect to the other node, copenhagen2, and repeat steps 2.1 to 2.3. I double-check that the CURRENT_NODE environment variable gets updated to copenhagen2.

That’s it! I have now patched my Grid Infrastructure deployment.

Later on, I can patch my databases as well.

A Word about the Directory for the New Grid Home

Be careful when choosing a location for the new Grid Home. The documentation lists some requirements you should be aware of.

In my demo environment, the existing Grid Home is:

/u01/app/19.0.0.0/grid

Since I am patching to 19.17.0, I think it makes sense to use:

/u01/app/19.17.0/grid

If your organization has a different naming standard, that’s fine. Just ensure you comply with the requirements specified in the documentation.

Don’t Forget to Clean Your Room

At a future point, I need to remove the old Grid Home. I use the deinstall tool in the Grid Home. I execute the command on all nodes in my cluster:

$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
$ export ORACLE_HOME=$OLD_GRID_HOME
$ $ORACLE_HOME/deinstall/deinstall -local

I will wait until:

  • I have seen the new Grid Home run without problems for a week or two.
  • I have patched my Oracle Databases managed by GI.
  • I have seen my Oracle Databases run without GI-related problems for a week or two.

Happy Patching!

Appendix

Windows

Oracle supports this functionality, SwitchGridHome, on Microsoft Windows starting from Oracle Database 23ai.

AIX

Check this out: Grid Infrastructure 19c Out-Of-Place Patching Fails on AIX


Installing Oracle Database 19c and All the Things to Put on Top

When you prepare for patching or upgrading Oracle Database 19c, you must also prepare an Oracle Home. Installing the Oracle Home is easy, but there is more to it.

Out-of-place Installation

I always use out-of-place installation. I install a new, fresh Oracle Home. I will move the databases into that Oracle Home as I upgrade or patch.

The alternative, in-place installation, leads to more downtime, is more error-prone, and makes fallback more complicated. In addition, in-place installation gradually slows down patching, as Mike Dietrich describes in Binary patching is slow because of the inventory.

Download and Prepare Oracle Home

First, I download the base release from Oracle Software Delivery Cloud, aka e-delivery.

Find REL: Oracle Database 19.3.0.0.0 – Long Term Release, select the right platform, and download.

Extract the zip file into a new Oracle Home location:

export NEW_ORACLE_HOME=<path>
mkdir -p $NEW_ORACLE_HOME
cd $NEW_ORACLE_HOME
unzip -oq /tmp/LINUX.X64_193000_db_home.zip
rm /tmp/LINUX.X64_193000_db_home.zip

Don’t run the installer yet.

Clone Existing Oracle Home

I could clone an existing Oracle Home and then just apply the new patches. But that would make me susceptible to the same inventory issue described above for in-place installation. Plus, if you clone an Oracle Home with one-offs, you might need to roll them back before you can apply the next Release Update.

Update OPatch

OPatch is needed later on to apply patches to the new Oracle Home. Get the latest version and install it into the new Oracle Home:

rm -rf $NEW_ORACLE_HOME/OPatch
cd $NEW_ORACLE_HOME
unzip -oq /tmp/<opatch_zip_file>
rm /tmp/<opatch_zip_file>

Patches

Now, I will determine which patches to apply to the Oracle Home.

  • Start by getting the latest Release Update. I really mean the latest. I have helped too many customers with issues, only to find out the issue is already solved in a later Release Update. If your database has JAVAVM installed, then get the combo patch.
  • Review the list of important one-off patches for the specific Release Update. The list contains important fixes that haven’t made it into a Release Update yet. I don’t need to get all of them, but based on my knowledge of my database, I can cherry-pick those that could be relevant.
  • If I am using Data Pump, I get the Data Pump bundle patch. Data Pump fixes rarely make it into Release Updates because they are not RAC-rolling installable, which is a clear requirement for inclusion in a Release Update.
  • If I am using GoldenGate, I get the GoldenGate bundle patch.
  • If my database uses OJVM (see appendix), I get the OJVM patch that matches the Release Update I am using. I can also get the OJVM patch as a combo patch together with the Release Update.

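To see which options my database actually uses, and thereby which of these patches are relevant, I can check the component registry. A simple check:

select comp_id, version, status from dba_registry;
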
Unzip

Now that I have downloaded a number of zip files, I go ahead and unzip the files into separate directories. In the example below, I am using the 19.16 Release Update and the Data Pump bundle patch:

#Release Update 19.16.0
mkdir -p $NEW_ORACLE_HOME/patch/p34133642
cd $NEW_ORACLE_HOME/patch/p34133642
unzip -oq /tmp/p34133642_190000_Linux-x86-64.zip
rm /tmp/p34133642_190000_Linux-x86-64.zip

#Data Pump bundle patch
mkdir -p $NEW_ORACLE_HOME/patch/p34294932
cd $NEW_ORACLE_HOME/patch/p34294932
unzip -oq /tmp/p34294932_1916000DBRU_Generic.zip
rm /tmp/p34294932_1916000DBRU_Generic.zip

Install

Now, I can install the Oracle Home and apply all the patches in one operation. Mike has a really good description of the functionality and a demo.

I do a silent installation using a response file. Notice how I am applying the patches during the installation using -applyRU and -applyOneOffs:

export ORACLE_BASE=<path_to_oracle_base>
export ORACLE_HOME=<path_to_oracle_home>
#Path to inventory is most likely /u01/app/oraInventory
export ORA_INVENTORY=<path_to_inventory>
cd $ORACLE_HOME
./runInstaller -ignorePrereqFailure -waitforcompletion -silent \
   -responseFile $ORACLE_HOME/install/response/db_install.rsp \
   -applyRU patch/p34133642/34133642 \
   -applyOneOffs patch/p34294932/34294932 \
   oracle.install.option=INSTALL_DB_SWONLY \
   UNIX_GROUP_NAME=oinstall \
   INVENTORY_LOCATION=$ORA_INVENTORY \
   SELECTED_LANGUAGES=en,en_GB \
   ORACLE_HOME=$ORACLE_HOME \
   ORACLE_BASE=$ORACLE_BASE \
   oracle.install.db.InstallEdition=EE \
   oracle.install.db.OSDBA_GROUP=dba \
   oracle.install.db.OSBACKUPDBA_GROUP=dba \
   oracle.install.db.OSDGDBA_GROUP=dba \
   oracle.install.db.OSKMDBA_GROUP=dba \
   oracle.install.db.OSRACDBA_GROUP=dba \
   oracle.install.db.isRACOneInstall=false \
   oracle.install.db.rac.serverpoolCardinality=0 \
   oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \
   oracle.install.db.ConfigureAsContainerDB=false \
   SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
   DECLINE_SECURITY_UPDATES=true

You can read more about silent installation on oracle-base.com. That’s where I got the inspiration. The response file db_install.rsp is the default one that comes with the Oracle Home. I don’t change anything in it.

Finally, I execute root.sh as root:

$ORACLE_HOME/root.sh
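
To verify that the Release Update and the bundle patch made it into the new Oracle Home, a quick check:

$ORACLE_HOME/OPatch/opatch lspatches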

AutoUpgrade

Download the latest version of AutoUpgrade, and put it into $ORACLE_HOME/rdbms/admin.
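
To confirm which AutoUpgrade version now ships in the home, a quick sketch:

java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -version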

Et Voilà

That’s it. I can now use the Oracle Home to upgrade or patch my Oracle Database 19c.

When you move your Oracle Database to the new Oracle Home, be sure to move all the necessary configuration files as well.

Appendix

Patches

As if the list of patches to apply wasn’t long enough. There are even more MOS notes!

The good news is that you don’t have to go through them, as long as you stay on the latest Release Update. If you check the notes, you will see that almost all bugs are already included in a Release Update. That’s a pretty strong argument for always using the latest Release Update.

  • Things to Consider to Avoid Prominent Wrong Result Problems on 19C Proactively (Doc ID 2606585.1)
  • Things to Consider to Avoid Database Performance Problems on 19c (Doc ID 2773012.1)
  • Things to Consider to Avoid SQL Performance Problems on 19c (Doc ID 2773715.1)
  • Things to Consider to Avoid SQL Plan Management (SPM) Related Problems on 19c (Doc ID 2774029.1)

Grid Infrastructure

If Grid Infrastructure manages my database, I must remember to keep GI and database patch level in sync.

It Looks Complicated

It is a little too cumbersome. We know, and that’s why there are several initiatives to make it easier.

You could also look at Oracle Fleet Patching & Provisioning (FPP). Philippe Fierens is product manager for FPP. You can read his blog posts or reach out to him (he is a nice guy who takes every opportunity to talk about FPP).

OJVM

If your database is using OJVM, then you must also apply the OJVM patch to your Oracle Home. You can check it using:

select version, status from dba_registry where comp_id='JAVAVM';

I have seen many databases that had OJVM installed, but it was never used. In that case, you can remove the component from your database. Then you no longer need to apply the OJVM patch to your Oracle Home. Plus, it has the added benefit of making your upgrades faster.

Mike Dietrich has a good blog – the OJVM Patching Saga. Catchy title!