Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using Zero Downtime Oracle Grid Infrastructure Patching (ZDOGIP).
My demo system:
- Is a 2-node RAC
- Runs Oracle Linux
- Is currently on 19.16.0, and I want to patch to 19.17.0
- Uses neither ACFS nor ASM Filter Driver
I patch only the GI home. If I want to patch the database as well, I must do it separately.
I suggest you read A Word about Zero Downtime Oracle Grid Infrastructure Patching, especially if your system uses ACFS or ASM Filter Driver.
Preparation
I need to download:
- The base release of Oracle Grid Infrastructure (LINUX.X64_193000_grid_home.zip) from oracle.com or Oracle Software Delivery Cloud.
- The latest OPatch from My Oracle Support (patch 6880880).
- The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I will use the combo patch (34449117).
You can use AutoUpgrade to easily download GI patches.
I place the software in /u01/software.
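Before continuing, I like to confirm that all three files actually landed in the staging area. This is just a convenience sketch; the filenames and the /u01/software path are the ones used in this post:

```shell
# Sanity check: confirm the downloaded files are in the staging area
# (filenames and path are the ones used in this post)
STAGE=/u01/software
for f in LINUX.X64_193000_grid_home.zip \
         p6880880_190000_Linux-x86-64.zip \
         p34449117_190000_Linux-x86-64.zip; do
  if [ -f "$STAGE/$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
  fi
done
```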
How to Patch Oracle Grid Infrastructure 19c
1. Prepare a New Grid Home
I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.
I need to create a folder for the new Grid Home. I must do this as root on all nodes in my cluster (copenhagen1 and copenhagen2):

[root@copenhagen1]$ mkdir -p /u01/app/19.17.0/grid
[root@copenhagen1]$ chown -R grid:oinstall /u01/app/19.17.0
[root@copenhagen1]$ chmod -R 775 /u01/app/19.17.0

[root@copenhagen2]$ mkdir -p /u01/app/19.17.0/grid
[root@copenhagen2]$ chown -R grid:oinstall /u01/app/19.17.0
[root@copenhagen2]$ chmod -R 775 /u01/app/19.17.0
I switch to the Grid Home owner, grid.
I ensure that there is passwordless SSH access between all the cluster nodes. It is a requirement for the installation, but sometimes it is disabled to strengthen security:
[grid@copenhagen1]$ ssh copenhagen2 date
[grid@copenhagen2]$ ssh copenhagen1 date
I extract the base release of Oracle Grid Infrastructure into the new Grid Home. I work on one node only:
[grid@copenhagen1]$ export NEWGRIDHOME=/u01/app/19.17.0/grid
[grid@copenhagen1]$ cd $NEWGRIDHOME
[grid@copenhagen1]$ unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
I update OPatch to the latest version:
[grid@copenhagen1]$ cd $NEWGRIDHOME
[grid@copenhagen1]$ rm -rf OPatch
[grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
Then, I check the Oracle Grid Infrastructure prerequisites. I am good to go if the check doesn't write any error messages to the console:

[grid@copenhagen1]$ export ORACLE_HOME=$NEWGRIDHOME
[grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh -executePrereqs -silent
I want to apply the 19.17.0 Release Update while I install the Grid Home. To do that, I must extract the patch file:
[grid@copenhagen1]$ cd /u01/software
[grid@copenhagen1]$ unzip -oq p34449117_190000_Linux-x86-64.zip -d 34449117

The combo patch contains the GI bundle patch, which consists of:
- OCW Release Update (patch 34444834)
- Database Release Update (34419443)
- ACFS Release Update (34428761)
- Tomcat Release Update (34580338)
- DBWLM Release Update (33575402)
- I will apply all of them.
Finally, I can install the new Grid Home:
- I need to update the environment variables:
  - CLUSTER_NODES is a comma-separated list of all the nodes in my cluster.
  - The parameter -applyRU must point to the directory holding the OCW Release Update.
  - The parameter -applyOneOffs is a comma-separated list of the paths to each of the other Release Updates in the GI bundle patch.
[grid@copenhagen1]$ export ORACLE_BASE=/u01/app/grid
[grid@copenhagen1]$ export ORA_INVENTORY=/u01/app/oraInventory
[grid@copenhagen1]$ export CLUSTER_NAME=$(olsnodes -c)
[grid@copenhagen1]$ export CLUSTER_NODES=$(olsnodes | tr '\n' ',' | sed 's/,\s*$//')
[grid@copenhagen1]$ cd $ORACLE_HOME
[grid@copenhagen1]$ ./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
    -applyRU /u01/software/34449117/34449117/34416665 \
    -applyOneOffs /u01/software/34449117/34449117/34419443,/u01/software/34449117/34449117/34428761,/u01/software/34449117/34449117/34580338,/u01/software/34449117/34449117/33575402 \
    -responseFile $ORACLE_HOME/install/response/gridsetup.rsp \
    INVENTORY_LOCATION=$ORA_INVENTORY \
    ORACLE_BASE=$ORACLE_BASE \
    SELECTED_LANGUAGES=en \
    oracle.install.option=CRS_SWONLY \
    oracle.install.asm.OSDBA=asmdba \
    oracle.install.asm.OSOPER=asmoper \
    oracle.install.asm.OSASM=asmadmin \
    oracle.install.crs.config.ClusterConfiguration=STANDALONE \
    oracle.install.crs.config.configureAsExtendedCluster=false \
    oracle.install.crs.config.clusterName=$CLUSTER_NAME \
    oracle.install.crs.config.gpnp.configureGNS=false \
    oracle.install.crs.config.autoConfigureClusterNodeVIP=false \
    oracle.install.crs.config.clusterNodes=$CLUSTER_NODES

- Although the script says so, I don't run root.sh yet.
- I install it in silent mode, but I could use the wizard instead.
- You need to install the new GI home in a way that matches your environment.
- For inspiration, you can check the response file used in the previous Grid Home when setting the various parameters.
- If I have one-off patches to install, I can add them to the -applyOneOffs parameter.
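The -applyOneOffs list is long and error-prone to type by hand. A small sketch like this can build it from the patch numbers; the numbers and base path are the ones used in this post, so adjust them to your patch:

```shell
# Build the comma-separated -applyOneOffs list from the patch numbers
# (patch numbers and base path are the ones used in this post)
BASE=/u01/software/34449117/34449117
ONEOFFS=""
for p in 34419443 34428761 34580338 33575402; do
  # ${ONEOFFS:+...} prepends a comma only when the list is non-empty
  ONEOFFS="${ONEOFFS:+$ONEOFFS,}$BASE/$p"
done
echo "$ONEOFFS"
```

The resulting string can then be passed straight to gridSetup.sh as the -applyOneOffs value.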
2. Switch to the new Grid Home
Now, I can complete the patching process by switching to the new Grid Home. I do this one node at a time. Since I am using ZDOGIP, there is no downtime.
- First, on copenhagen1, I switch to the new Grid Home:
[grid@copenhagen1]$ export ORACLE_HOME=/u01/app/19.17.0/grid
[grid@copenhagen1]$ export CURRENT_NODE=$(hostname)
[grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh \
    -silent -switchGridHome \
    oracle.install.option=CRS_SWONLY \
    ORACLE_HOME=$ORACLE_HOME \
    oracle.install.crs.config.clusterNodes=$CURRENT_NODE \
    oracle.install.crs.rootconfig.executeRootScript=false

- Then, I run the root.sh script as root:
  - I must use the -transparent and -nodriverupdate parameters.
  - This step restarts the entire GI stack, but the resources it manages (databases, listeners, etc.) stay up.
  - This step also restarts the ASM instance. Database instances on this node switch to a remote ASM instance (Flex ASM) and do not switch back to the local ASM instance after the GI restart.
  - During that period, the services stay ONLINE on this node.
  - The database instance on this node stays up the whole time.
  - If my database listener runs out of the Grid Home, GI moves it to the new Grid Home, including copying listener.ora.

[root@copenhagen1]$ /u01/app/19.17.0/grid/root.sh \
    -transparent \
    -nodriverupdate
- I update any profiles (e.g., .bashrc) and other scripts referring to the Grid Home.
- I connect to the other node, copenhagen2, and repeat steps 1-3. I double-check that the CURRENT_NODE environment variable gets updated to copenhagen2.
- You must proceed as quickly as possible.
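To find profiles and scripts that still reference the old Grid Home, a simple scan like this can help. The old home path is the one from my demo environment, and the file list is just a starting point; extend it to cover your own scripts:

```shell
# Scan common profile files for references to the old Grid Home
# (path is the one from this post; extend the file list for your environment)
OLD_HOME="/u01/app/19.0.0.0/grid"
for f in "$HOME/.bashrc" "$HOME/.bash_profile"; do
  if [ -f "$f" ] && grep -q "$OLD_HOME" "$f"; then
    echo "update needed: $f"
  fi
done
echo "scan complete"
```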
That’s it! I have now patched my Grid Infrastructure deployment.
Later on, I can patch my databases as well.
A Word about the Directory for the New Grid Home
Be careful when choosing a location for the new Grid Home. The documentation lists some requirements you should be aware of.
In my demo environment, the existing Grid Home is:
/u01/app/19.0.0.0/grid
Since I am patching to 19.17.0, I think it makes sense to use:
/u01/app/19.17.0/grid
If your organization has a different naming standard, that’s fine. Just ensure you comply with the requirements specified in the documentation.
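One check I find useful, whatever naming standard you use, is making sure the new home is not nested inside the old one; see the documentation for the full list of requirements. A tiny sketch, using the paths from this post:

```shell
# Check that the new Grid Home is not nested inside the old one
# (paths are the ones used in this post)
OLD_HOME="/u01/app/19.0.0.0/grid"
NEW_HOME="/u01/app/19.17.0/grid"
case "$NEW_HOME" in
  "$OLD_HOME"/*) echo "invalid: new home is inside the old home" ;;
  *)             echo "ok: separate directories" ;;
esac
```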
Don’t Forget to Clean Your Room
At a future point, I need to remove the old Grid Home. I use the deinstall tool in the Grid Home. I execute the command on all nodes in my cluster:
$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
$ export ORACLE_HOME=$OLD_GRID_HOME
$ $ORACLE_HOME/deinstall/deinstall -local
I will wait until:
- I have seen the new Grid Home run without problems for a week or two.
- I have patched my Oracle Databases managed by GI.
- I have seen my Oracle Databases run without GI-related problems for a week or two.
Happy Patching!
Appendix
Important Patches
- Bug 37033171 : ZERO DOWNTIME PATCHING HANGS ON ROOT SCRIPT EXECUTION.
- Bug 37827355 : ZERO DOWNTIME SWITCH GRID HOME HANGS ON ‘CRSCTL START CRS -WAIT -TGIP’ WITH BUG 37033171 FIX APPLIED
Further Reading
- MOS note: Zero-Downtime Oracle Grid Infrastructure Patching (ZDOGIP). (Doc ID 2635015.1)
- MOS note: Step by Step Zero Downtime Oracle Grid Infrastructure Patching in Silent Mode (Doc ID 2865083.1)
- Documentation: Applying Patches Using Zero-Downtime Oracle Grid Infrastructure Patching, Grid Infrastructure Installation and Upgrade Guide for Linux 19c
Other Blog Posts in This Series
- Introduction
- How to Patch Oracle Grid Infrastructure 19c Using In-Place OPatchAuto
- How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place OPatchAuto
- How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place SwitchGridHome
- How to Patch Oracle RAC Database 19c Using Manual Out-Of-Place
- How to Patch Oracle Grid Infrastructure 19c and Oracle Data Guard Using Standby-First
- How to Patch Oracle Grid Infrastructure 19c Using Zero Downtime Oracle Grid Infrastructure Patching
- How to Patch Oracle Restart 19c and Oracle Database Using Out-Of-Place Switch Home
- How to Patch Oracle Restart 19c and Oracle Data Guard Using Out-Of-Place Switch Home
- Which Method Should I Choose When Patching Oracle Grid Infrastructure 19c
- How to Avoid Interruptions When You Patch Oracle Grid Infrastructure 19c
- Patching Oracle Grid Infrastructure And Oracle Data Guard
- How to Clone Oracle Grid Infrastructure Home Using Golden Images
- How to Roll Back Oracle Grid Infrastructure 19c Using SwitchGridHome
- How to Remove an Old Oracle Grid Infrastructure 19c Home
- Use Cluster Verification Utility (cluvfy) and Avoid Surprises
- A Word about Zero Downtime Oracle Grid Infrastructure Patching
- Why You Need to Use Oracle Fleet Patching and Provisioning
- My Best Advice on Patching Oracle Grid Infrastructure
- Pro Tips
I tested this technology to upgrade GI on my laptop from 19.15 to 19.16 with zero downtime. I found that the described technology works: my DB was online on both cluster nodes during the GI upgrade.
But after the patching had finished, I ran crsctl and got the old GI version:
$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [3063913975] and the complete list of patches [33575402 33806152 33815596 33815607 33911149 ] have been applied on the local node. The release patch string is [19.15.0.0.0].
Then I decided to restart GI and got an error:
$ crsctl start crs
CRS-6706: Oracle Clusterware Release patch level ('3063913975') does not match Software patch level ('1148548346'). Oracle Clusterware cannot be started.
CRS-4000: Command Start failed, or completed with errors.
Then I decided to reboot the node. After the node reboot, GI and DB started successfully.
Please explain: is this error my fault, or do we have to reboot the node to finalize the GI patching procedure?
Hi Yury,
If you use ASM Filter Driver or ACFS you need to restart the GI at one point in time to load the new kernel drivers. There is a special parameter on root.sh for this. Please see this blog post: https://dohdatabase.com/2023/03/08/a-word-about-zero-downtime-oracle-grid-infrastructure-patching/
It also links to the documentation where it is described. Further, there is a link to a blog post made by an Oracle ACE which is quite good as well. It goes a little deeper (although it is about Oracle Database 21c).
Regards,
Daniel
Hi Daniel,
After reading this article, I have another question:
Is it possible?
I found no appropriate commands in the documentation.
Thank you !
Hi Yury,
You can find a little more information about the technical implementation in this blog post:
Otherwise, it has links to the Oracle documentation with more details.
But the database stays up during patching when you use ZDOGIP. Further, the database remains operational because the local database instance switches to a remote ASM instance (Flex ASM). Flex ASM came in 12c, I believe, but if you search the internet you can find more details about it.
I hope that helps.
Regards,
Daniel