How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place SwitchGridHome

Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using the out-of-place method and the -switchGridHome parameter.

My demo system:

  • Is a 2-node RAC
  • Runs Oracle Linux
  • Is currently on 19.16.0, and I want to patch to 19.17.0
  • Uses grid as the owner of the software

I patch only the GI home. If I want to patch the database as well, I must do it separately.

Preparation

I need to download:

  1. The base release of Oracle Grid Infrastructure (LINUX.X64_193000_grid_home.zip) from oracle.com or Oracle Software Delivery Cloud.
  2. The latest version of OPatch from My Oracle Support (patch 6880880).
  3. The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I will use the combo patch (34449117).

I place the software in /u01/software.
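
With that, /u01/software contains the three files referenced by the commands below. A listing like this is what I expect to see (the OPatch and Release Update file names are the ones I use later; yours may differ):

$ ls /u01/software
LINUX.X64_193000_grid_home.zip
p34449117_190000_Linux-x86-64.zip
p6880880_190000_Linux-x86-64.zip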

How to Patch Oracle Grid Infrastructure 19c

1. Prepare a New Grid Home

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

  1. I need to create a folder for the new Grid Home. I must do this as root on all nodes in my cluster (copenhagen1 and copenhagen2):

    [root@copenhagen1]$ mkdir -p /u01/app/19.17.0/grid
    [root@copenhagen1]$ chown -R grid:oinstall /u01/app/19.17.0
    [root@copenhagen1]$ chmod -R 775 /u01/app/19.17.0
    
    [root@copenhagen2]$ mkdir -p /u01/app/19.17.0/grid
    [root@copenhagen2]$ chown -R grid:oinstall /u01/app/19.17.0
    [root@copenhagen2]$ chmod -R 775 /u01/app/19.17.0
    
  2. I switch to the Grid Home owner, grid.

  3. I ensure that there is passwordless SSH access between all the cluster nodes. It is a requirement for the installation, but it is sometimes disabled to strengthen security. If it is disabled in your environment, see the sketch after this list for one way to re-enable it:

    [grid@copenhagen1]$ ssh copenhagen2 date
    
    [grid@copenhagen2]$ ssh copenhagen1 date
    
  4. I extract the base release of Oracle Grid Infrastructure into the new Grid Home. I work on one node only:

    [grid@copenhagen1]$ export NEWGRIDHOME=/u01/app/19.17.0/grid
    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
    

    Optionally, you can use a golden image.

  5. I update OPatch to the latest version:

    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ rm -rf OPatch
    [grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  6. Then, I check the Oracle Grid Infrastructure prerequisites. I am good to go if the check doesn’t write any error messages to the console:

    [grid@copenhagen1]$ export ORACLE_HOME=$NEWGRIDHOME
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh -executePrereqs -silent
    
  7. I want to apply the 19.17.0 Release Update while I install the Grid Home. To do that, I must extract the patch file:

    [grid@copenhagen1]$ cd /u01/software
    [grid@copenhagen1]$ mkdir 34449117
    [grid@copenhagen1]$ mv p34449117_190000_Linux-x86-64.zip 34449117
    [grid@copenhagen1]$ cd 34449117
    [grid@copenhagen1]$ unzip p34449117_190000_Linux-x86-64.zip
    
  8. Finally, I can install the new Grid Home:

    • I need to update the environment variables.
    • CLUSTER_NODES is a comma-separated list of all the nodes in my cluster.
    • The parameter -applyRU must point to the directory holding the GI Release Update. Since I am using the combo patch, I need to specify the subdirectory containing the GI Release Update.
    [grid@copenhagen1]$ export ORACLE_BASE=/u01/app/grid
    [grid@copenhagen1]$ export ORA_INVENTORY=/u01/app/oraInventory
    [grid@copenhagen1]$ export CLUSTER_NAME=$(olsnodes -c)
    [grid@copenhagen1]$ export CLUSTER_NODES=$(olsnodes | tr '\n' ','| sed 's/,\s*$//')
    [grid@copenhagen1]$ cd $ORACLE_HOME
    [grid@copenhagen1]$ ./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
       -applyRU /u01/software/34449117/34449117/34416665 \
       -responseFile $ORACLE_HOME/install/response/gridsetup.rsp \
       INVENTORY_LOCATION=$ORA_INVENTORY \
       ORACLE_BASE=$ORACLE_BASE \
       SELECTED_LANGUAGES=en \
       oracle.install.option=CRS_SWONLY \
       oracle.install.asm.OSDBA=asmdba \
       oracle.install.asm.OSOPER=asmoper \
       oracle.install.asm.OSASM=asmadmin \
       oracle.install.crs.config.ClusterConfiguration=STANDALONE \
       oracle.install.crs.config.configureAsExtendedCluster=false \
       oracle.install.crs.config.clusterName=$CLUSTER_NAME \
       oracle.install.crs.config.gpnp.configureGNS=false \
       oracle.install.crs.config.autoConfigureClusterNodeVIP=false \
       oracle.install.crs.config.clusterNodes=$CLUSTER_NODES
    
    • Although the installer output says so, I don’t run root.sh yet.
    • I install in silent mode, but I could use the wizard instead.
    • For inspiration on setting the various parameters, you can check the response file used in the previous Grid Home.
    • If I have one-off patches to install, I can use the -applyOneOffs parameter.
    • When the installer completes, I verify the patch level of the new home, as shown in the sketch after this list.
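
If the passwordless SSH check in step 3 fails, one way to re-enable it for the grid user is a standard key exchange. This is a minimal sketch, assuming default key locations and that your security policy allows it; run it on both nodes and re-run the date checks afterwards:

[grid@copenhagen1]$ ssh-keygen -t rsa
[grid@copenhagen1]$ ssh-copy-id grid@copenhagen2

[grid@copenhagen2]$ ssh-keygen -t rsa
[grid@copenhagen2]$ ssh-copy-id grid@copenhagen1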
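
Once gridSetup.sh finishes, I can check that the new home is at the expected patch level. A quick sanity check with OPatch (lspatches lists the patches installed in a given home):

[grid@copenhagen1]$ export NEWGRIDHOME=/u01/app/19.17.0/grid
[grid@copenhagen1]$ $NEWGRIDHOME/OPatch/opatch lspatches -oh $NEWGRIDHOME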

2. Switch to the New Grid Home

Now, I can complete the patching process by switching to the new Grid Home. I do this one node at a time. Step 2 involves downtime.

  1. First, on copenhagen1, I switch to the new Grid Home:
    [grid@copenhagen1]$ export ORACLE_HOME=/u01/app/19.17.0/grid
    [grid@copenhagen1]$ export CURRENT_NODE=$(hostname)
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh \
       -silent -switchGridHome \
       oracle.install.option=CRS_SWONLY \
       ORACLE_HOME=$ORACLE_HOME \
       oracle.install.crs.config.clusterNodes=$CURRENT_NODE \
       oracle.install.crs.rootconfig.executeRootScript=false
    
  2. Then, I run the root.sh script as root:
    • This step restarts the entire GI stack, including the resources it manages (databases, listeners, and so on). This means downtime on this node only; the remaining nodes stay up.
    • In that period, GI marks the services as OFFLINE so users can connect to other nodes.
    • If my database listener runs out of the Grid Home, GI moves it to the new Grid Home, including copying listener.ora.
    • Optionally, if I want a more graceful approach, I can manually stop the services and perform draining, as sketched after this list.
    • In the end, GI restarts the resources (databases and the like).
    [root@copenhagen1]$ /u01/app/19.17.0/grid/root.sh
    
  3. I update any profiles (e.g., .bashrc) and other scripts referring to the Grid Home.
  4. I connect to the other node, copenhagen2, and repeat steps 1-3. I double-check that the CURRENT_NODE environment variable gets updated to copenhagen2.
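
If I choose the more graceful approach mentioned in step 2, I stop the services with a drain timeout before running root.sh. This is only a sketch; the database name mydb and service name myservice are hypothetical, and the drain timeout must match your own service-level requirements:

$ # drain sessions for up to 300 seconds, then disconnect them
$ srvctl stop service -db mydb -service myservice -node copenhagen1 \
     -drain_timeout 300 -stopoption IMMEDIATE -force
$ # then stop the local instance
$ srvctl stop instance -db mydb -node copenhagen1 -stopoption IMMEDIATE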

That’s it! I have now patched my Grid Infrastructure deployment.
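
To verify, I can query the active version and patch level of the cluster with standard crsctl commands. On a fully switched cluster, the active version output includes the new patch level:

[grid@copenhagen1]$ crsctl query crs activeversion -f
[grid@copenhagen1]$ crsctl query crs softwarepatch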

Later on, I can patch my databases as well.

A Word about the Directory for the New Grid Home

Be careful when choosing a location for the new Grid Home. The documentation lists some requirements you should be aware of.

In my demo environment, the existing Grid Home is:

/u01/app/19.0.0.0/grid

Since I am patching to 19.17.0, I think it makes sense to use:

/u01/app/19.17.0/grid

If your organization has a different naming standard, that’s fine. Just ensure you comply with the requirements specified in the documentation.

Don’t Forget to Clean Your Room

At a future point, I need to remove the old Grid Home. I use the deinstall tool in the Grid Home. I execute the command on all nodes in my cluster:

$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
$ export ORACLE_HOME=$OLD_GRID_HOME
$ $ORACLE_HOME/deinstall/deinstall -local

I will wait until:

  • I have seen the new Grid Home run without problems for a week or two.
  • I have patched my Oracle Databases managed by GI.
  • I have seen my Oracle Databases run without GI-related problems for a week or two.

Happy Patching!


24 thoughts on “How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place SwitchGridHome”

  1. Rather than install the software from scratch – wouldn’t it be better to clone the software from one of the existing nodes? In that way, any specific patches that might have been applied (and potentially the order of them) will be preserved?

    (If you agree, could you add how to clone with the installer to this post – I plan on this post being my template for future patching)


  2. Hi Blair,

    When I patch Oracle Database, I always use brand-new homes (instead of cloning). Several issues arise during patching when you keep cloning existing Oracle Homes.
    Without knowing whether they still apply to Grid Infrastructure (I think they do), I continue to use my approach with brand-new Oracle Homes. But I will put it on my list, and if time allows, I will follow up on it later on.

    Mike Dietrich has several posts about the issues that arise when you clone existing Oracle Homes. Here’s one of them:
    https://mikedietrichde.com/2022/05/10/binary-patching-is-slow-because-of-the-inventory/

    Regards,
    Daniel


  3. Did you try zero-downtime GI patching? Unless you’re using ACFS or things like that, a database on 19.16 or above will remain up while switching the GI home with the transparent option 😉


  4. Hi Daniel,

    Thanks for this great article.
    I think you could even skip (on Linux) the RU patching of the target binaries by using:

    34695857: PLACEHOLDER BUG FOR GRID SOFTWARE CLONE 19.17.0.0.221018 OCT 2022
    or for 19.18:
    34979367: PLACEHOLDER BUG FOR GRID SOFTWARE CLONE 19.18.0.0.230117 JAN 2023

    Regards,
    Andreas


  5. Hi Andreas,

    Oh. At first glance, they look promising. However, I think they are meant for Exadata only. If I remember correctly, gold images have been available for Exadata for a while. Let me find out whether they can be used for non-Exadata platforms as well.

    Great tip.

    Regards,
    Daniel


  6. Hi Jorge,
    I think it does. The MOS note doesn’t mention any restrictions. Unfortunately, I don’t have a Windows RAC that I can use for testing.
    One thing that won’t work is -applyRU and -applyOneOffs. Those are, for some reason, not available on Windows.
    Regards,
    Daniel


  7. Hi Neil,

    I’ve never tried it myself on Oracle Restart. But the procedure does seem similar for Oracle Restart, except for a few changes in the commands.
    Have you tried it, and what was your experience?

    I have it on my to-do list but – you know – there are so many things on it 🙂 I’ll see what I can do.

    Regards,
    Daniel


  8. Good Afternoon
    I am working on transitioning our site to out-of-place patching. Are there any examples/documents on doing this for GRID HAS systems?


  9. Hi,
    Is that single-instance systems with Grid Infrastructure? Like an Oracle Restart configuration?
    I have that on my list of things to do… When time allows 🙂
    Regards,
    Daniel


  10. You are correct. We have built scripting to do the basic copy of the unconfigured tar image we create (we also use it for provisioning), and that installs fine; if we set the home to it and query sqlplus, it shows we are on the new version. We just need to qualify how to move from the old Grid Home to the new Grid Home for our standalone HAS install (created using the HA_SWONLY option).


  11. Hello and thank you for your hard work on patching!

    I’m transitioning from in-place patching to OOP as described in your article. Does gridSetup.sh -applyRU perform prechecks? Normally I would run opatchauto -analyze before applying….

    Thanks again


  12. Hi David,
    It does not check upfront. It tries to install the patches and will bail out on conflicts.

    But the good thing about switchGridHome is that you do all of this in advance. You create the new GI home before the maintenance window and have plenty of time to deal with conflicts. When you know the patches you want to install, either use the MOS conflict checker or run the conflict check manually (opatch prereq CheckConflictAgainstOHWithDetail).

    But let me stress again: you prepare the new Oracle Home in advance. There is no stress, and you can deal with the conflicts. Ideally, you create a golden image (and deal with conflicts one time). Then you save that golden image and deploy it to all your servers. Then there is no patching on the servers when you deploy; you just use the golden image, which is already patched.
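
    For example, a manual conflict check of the GI Release Update against the new home could look like this (a sketch using the paths from the blog post):

    $ /u01/app/19.17.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
         -phBaseDir /u01/software/34449117/34449117/34416665 \
         -oh /u01/app/19.17.0/grid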

    Regards,
    Daniel


  13. Hi Daniel

    thanks for the great blog. It worked on my environment (2-node RAC cluster with 2-node Data Guard) without any changes.

    But how can I fail back to the old version in case of a bug? switchGridHome alone probably wouldn’t be sufficient?


  14. Hi Daniel!

    Great manual, many thanks!

    One additional note: It’s better to stop the DB instance on the host in step 2 in advance, because the cluster software stops it in abort mode, which is not a very graceful method.

    Regards, Vadim

