Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using the out-of-place method and opatchauto.
My demo system:
- Is a 2-node RAC
- Runs Oracle Linux
- Is currently on 19.16.0, and I want to patch to 19.17.0
When I use this procedure, I patch the GI home only. However, opatchauto has the option of patching the database home as well.
Preparation
I need to download:
- Latest OPatch from My Oracle Support (6880880).
- The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I use patch 34416665.
I place the software in /u01/software.
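For reference, my staging directory then contains the two zip files (file names as delivered by My Oracle Support):
[grid@copenhagen1]$ ls /u01/software
p34416665_190000_Linux-x86-64.zip  p6880880_190000_Linux-x86-64.zip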
How to Patch Oracle Grid Infrastructure 19c
1. Prepare for Patching
I can do this in advance. It doesn’t cause any downtime.
- I ensure that there is passwordless SSH access between all the cluster nodes:
[grid@copenhagen1]$ ssh copenhagen2 date
[grid@copenhagen2]$ ssh copenhagen1 date
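If passwordless SSH is not already in place, a minimal sketch of setting it up for the grid user (assuming OpenSSH; run the equivalent in the other direction on copenhagen2):
[grid@copenhagen1]$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # create a key pair if none exists
[grid@copenhagen1]$ ssh-copy-id grid@copenhagen2               # append the public key on the remote node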
- I update OPatch to the latest available version:
  - On all nodes (copenhagen1 and copenhagen2)
  - In the GI home as grid
I just show how to do it for the GI home on one node:
[grid@copenhagen1]$ cd $ORACLE_HOME
[grid@copenhagen1]$ rm -rf OPatch
[grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
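To confirm the update took effect, I can check the version afterwards (opatch version is a standard OPatch command):
[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch version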
- I extract the patch file on both nodes. I show it for just the first node:
[grid@copenhagen1]$ cd /u01/software
[grid@copenhagen1]$ mkdir 34416665
[grid@copenhagen1]$ mv p34416665_190000_Linux-x86-64.zip 34416665
[grid@copenhagen1]$ cd 34416665
[grid@copenhagen1]$ unzip p34416665_190000_Linux-x86-64.zip
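To sanity-check the extraction, I can list the patch directory. I expect to see the sub-patch directories used in the conflict checks below:
[grid@copenhagen1]$ ls /u01/software/34416665/34416665
# expect sub-patch directories 33575402, 34419443, 34428761, 34444834, 34580338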
- I check for patch conflicts.
- I can find the details about which checks to make in the patch readme file.
- I must check for conflicts in the GI home only.
- Since I have the same patches on all nodes, I can check for conflicts on just one of the nodes.
As grid with ORACLE_HOME set to the GI home:
[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34419443 | grep checkConflictAgainstOHWithDetail
[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34444834 | grep checkConflictAgainstOHWithDetail
[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34428761 | grep checkConflictAgainstOHWithDetail
[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34580338 | grep checkConflictAgainstOHWithDetail
[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/33575402 | grep checkConflictAgainstOHWithDetail
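Since the five checks differ only in the sub-patch directory, a small shell loop does the same job (a sketch using the same sub-patch numbers):
[grid@copenhagen1]$ for p in 34419443 34444834 34428761 34580338 33575402
> do
>   $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
>     -phBaseDir /u01/software/34416665/34416665/$p | grep checkConflictAgainstOHWithDetail
> done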
2. Clone Home and Patch
Like phase 1, I can do this in advance. No downtime.
- First, on copenhagen1, I clone the existing GI home. I set ORACLE_HOME to my existing GI home. As root:
[root@copenhagen1]$ export ORACLE_HOME=/u01/app/19.0.0.0/grid
[root@copenhagen1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto \
   apply /u01/software/34416665/34416665 \
   -prepare-clone \
   -oh $ORACLE_HOME
opatchauto prompts me for the location of the new GI home. I choose the following location: /u01/app/19.17.0/grid
- Optionally, I can specify the new location in a response file. See appendix.
opatchauto clones the GI home and applies the patches.
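Once the clone completes, I can verify that the new home contains the Release Update before going any further. opatch lspatches with the -oh option is a standard OPatch command; the path is the new home I chose above:
[grid@copenhagen1]$ /u01/app/19.17.0/grid/OPatch/opatch lspatches -oh /u01/app/19.17.0/grid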
- Next, I clone on the other node. On copenhagen2, I repeat step 1. opatchauto automatically chooses the same location for the GI home as I used on the first node.
- Proceed with the rest of the nodes as quickly as possible.
Do not touch the new, cloned homes. The next step will fail if you make any changes (like applying additional patches).
3. Switch to the New GI Home
Now, I can complete the patching process by switching to the new GI home. I do this one node at a time.
- First, on copenhagen1, I switch to the new GI home. ORACLE_HOME is still set to my existing GI home. As root:
[root@copenhagen1]$ export ORACLE_HOME=/u01/app/19.0.0.0/grid
[root@copenhagen1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto apply /u01/software/34416665/34416665 -switch-clone
- Be sure to start with the same node you did when you cloned the GI home (phase 2).
- This step stops the entire GI stack, including the resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
- In that period, GI marks the services as OFFLINE so users can connect to other nodes.
- If my database listener runs out of the GI home, opatchauto will move it to the new GI home, including copying listener.ora.
- Optionally, if I want a more graceful approach, I can manually stop the services and perform draining; see the sketch after this list.
- In the end, GI restarts using the new, patched GI home. GI restarts the resources (databases and the like) as well.
- I connect to the other node, copenhagen2, and repeat step 1.
- If I had more nodes in my cluster, I must process the nodes in the same order as I did the cloning (phase 2).
- I update any profiles (e.g., .bashrc) and other scripts referring to the GI home on all nodes.
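For the optional drain mentioned above, and to verify things after the switch, here is a sketch. The database name CDB1 and service name app_svc are hypothetical, and the drain option assumes 19c srvctl; check the srvctl reference for your version:
[oracle@copenhagen1]$ srvctl stop service -db CDB1 -service app_svc -drain_timeout 60 -force
After the switch, I check the cluster and confirm the patch level of the active home:
[root@copenhagen1]$ crsctl check cluster -all
[grid@copenhagen1]$ crsctl query crs activeversion -f
[grid@copenhagen1]$ /u01/app/19.17.0/grid/OPatch/opatch lspatches -oh /u01/app/19.17.0/grid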
That’s it! I have now patched my Grid Infrastructure deployment.
In my demo environment, each node was down for around 7 minutes, but the database remained up on the other node the entire time.
For simplicity, I have removed some of the prechecks. Please follow the patch readme file instructions and perform all the described prechecks.
In this blog post, I decided to perform the out-of-place patching as a two-step process. This gives me more control. I can also do it all in just one operation. Please see Grid Infrastructure Out of Place (OOP) Patching using opatchauto (Doc ID 2419319.1) for details.
Later on, I patch my database.
A Word about the Directory for the New GI Home
Be careful when choosing a location for the new GI home. The documentation lists some requirements you should be aware of.
In my demo environment, the existing GI home is:
/u01/app/19.0.0.0/grid
Since I am patching to 19.17.0, I think it makes sense to use:
/u01/app/19.17.0/grid
If your organization has a different naming standard, that’s fine. Just ensure you comply with the requirements specified in the documentation.
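One practical point worth checking up front: the new GI home needs roughly as much free space as the existing one, since opatchauto copies the home before patching it. A quick sketch, assuming both homes live under /u01:
[grid@copenhagen1]$ du -sh /u01/app/19.0.0.0/grid
[grid@copenhagen1]$ df -h /u01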
Don’t Forget to Clean Your Room
At a future point, I need to remove the old GI home. I use the deinstall tool in the Oracle home.
I execute the command on all nodes in my cluster:
$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
$ export ORACLE_HOME=$OLD_GRID_HOME
$ $ORACLE_HOME/deinstall/deinstall -local
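Before I deinstall, I double-check that the node runs out of the new home and not the one I am about to remove. A quick hedged check on Linux is to look at the path of the running ohasd binary:
[root@copenhagen1]$ ps -ef | grep ohasd.bin | grep -v grep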
But I wait a week or two before doing so, to ensure that everything runs smoothly on the new patch level.
Happy Patching!
Appendix
Other Blog Posts in This Series
- Introduction
- How to Patch Oracle Grid Infrastructure 19c Using In-Place OPatchAuto
- How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place OPatchAuto
- How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place SwitchGridHome
- How to Patch Oracle Grid Infrastructure 19c Using Zero Downtime Oracle Grid Infrastructure Patching
- Which Method Should I Choose When Patching Oracle Grid Infrastructure 19c
- How to Avoid Interruptions When You Patch Oracle Grid Infrastructure 19c
- Patching Oracle Grid Infrastructure And Oracle Data Guard
- Use Cluster Verification Utility (cluvfy) and Avoid Surprises
- A Word about Zero Downtime Oracle Grid Infrastructure Patching
- Why You Need to Use Oracle Fleet Patching and Provisioning
- My Best Advice on Patching Oracle Grid Infrastructure
- Pro Tips
Further Reading
- MOS note: Grid Infrastructure Out of Place (OOP) Patching using opatchauto (Doc ID 2419319.1)
- Documentation: OPatch User’s Guide, Out-of-Place Patching
How to Use Prepare-Clone in Silent Mode
When I execute opatchauto apply -prepare-clone, the program prompts for the location of the new GI home. For an unattended execution, I can specify the old and new GI home in a file, and reference that file using the -silent parameter.
[root@copenhagen1]$ cat prepare-clone.properties
/u01/app/19.0.0.0/grid=/u01/app/19.17.0/grid
[root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto apply ... -prepare-clone -silent prepare-clone.properties -oh $ORACLE_HOME
Please tell me you will do a similar blog for Oracle Restart (i.e., single node)
The reason I ask is that many of the MOS support notes exclude Oracle Restart from the procedure.
Hi Neil,
I have it on my “perhaps” list. If time allows I would like to cover Oracle Restart and SIHA. But time runs so fast! 🙂
Regards,
Daniel
Hi Daniel,
Does out-of-place patching with cloning get rid of the patch history in the old home, or will this history be cloned into the new home as well?
Thx,
Robert
Hi,
Thank you for the work you do with these posts. I find them very useful.
Regarding this post here, I have a question. You stress the point, that one must not modify the cloned/patched ORACLE_HOME prior to doing the switch to the new version.
But… How do we handle OneOff patches that need to be applied? Either those recommended in MOS ID 555.1 or OneOffs for bugs we have encountered (and therefore need) but which haven’t been included in the RUs, yet.
Hi Jan,
That’s a good question. To my knowledge you need to complete a separate patch cycle for each of the patches that you want to install. Or use the switchgridhome method instead.
Regards,
Daniel
Hi Robert,
I *think* this procedure will maintain the patch history, but let me check that.
Regards,
Daniel