Let me show you how I roll back a patch from Oracle Grid Infrastructure 19c (GI) using the out-of-place method and the -switchGridHome parameter.
My demo system:
- Is a 2-node RAC (nodes copenhagen1 and copenhagen2).
- Runs Oracle Linux.
- Was patched from 19.17.0 to 19.19.0. I patched both GI and the database. Now I want GI back on 19.17.0.
I only roll back the GI home. See the appendix for a few thoughts on rolling back the database as well.
This method works if you applied the patch out-of-place – regardless of whether you used the OPatchAuto or SwitchGridHome method.
Preparation
- I use the term old Oracle Home for the original, lower patch level Oracle Home.
  - It is my 19.17.0 Oracle Home.
  - It is stored in /u01/app/19.0.0.0/grid.
  - I refer to this home using the environment variable OLD_GRID_HOME.
  - This is the Oracle Home that I want to roll back to.
- I use the term new Oracle Home for the higher patch level Oracle Home.
  - It is my 19.19.0 Oracle Home.
  - It is stored in /u01/app/19.19.0/grid.
  - I refer to this home using the environment variable NEW_GRID_HOME.
  - This is the Oracle Home that I want to roll back from.
Both GI homes are present in the system already.
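For reference, this is how the two environment variables could be set in a shell session on each node; the paths are the ones from my demo system, so adjust them to match yours:
[grid@copenhagen1]$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
[grid@copenhagen1]$ export NEW_GRID_HOME=/u01/app/19.19.0/grid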
How to Roll Back Oracle Grid Infrastructure 19c
1. Sanity Checks
I execute the following checks on both nodes, copenhagen1 and copenhagen2. I show the commands for one node only.
- I verify that the active GI home is the new GI home:
  [grid@copenhagen1]$ export ORACLE_HOME=$NEW_GRID_HOME
  [grid@copenhagen1]$ $ORACLE_HOME/srvm/admin/getcrshome
- I verify that the cluster upgrade state is NORMAL:
  [grid@copenhagen1]$ $ORACLE_HOME/bin/crsctl query crs activeversion -f
- I verify all CRS services are online:
  [grid@copenhagen1]$ $ORACLE_HOME/bin/crsctl check cluster
- I verify that the cluster patch level is 19.19.0 – the new patch level:
  [grid@copenhagen1]$ $ORACLE_HOME/bin/crsctl query crs releasepatch
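Since the checks must run on both nodes, a small loop saves some typing. This is a convenience sketch only, assuming passwordless SSH is set up between the nodes:
[grid@copenhagen1]$ for node in copenhagen1 copenhagen2; do
>    echo "== $node =="
>    ssh $node "$NEW_GRID_HOME/bin/crsctl check cluster; $NEW_GRID_HOME/bin/crsctl query crs releasepatch"
> done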
2. Cluster Verification Utility
- I use Cluster Verification Utility (CVU) to verify that my cluster meets all prerequisites for a patch/rollback. I do this on one node only:
[grid@copenhagen1]$ $CVU_HOME/bin/cluvfy stage -pre patch
- You can find CVU in the GI home, but I recommend always getting the latest version from My Oracle Support.
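If I download the standalone CVU from My Oracle Support, I extract it to its own directory and point CVU_HOME at it. A sketch only; the zip file name below is a placeholder for whatever the current download is called:
[grid@copenhagen1]$ mkdir -p /u01/app/cvu
[grid@copenhagen1]$ unzip -q cvupack.zip -d /u01/app/cvu
[grid@copenhagen1]$ export CVU_HOME=/u01/app/cvu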
3. Roll Back Node 1
The GI stack (including database, listener, etc.) needs to restart on each node. But I do the rollback in a rolling manner, so the database stays up all the time.
- I drain connections from the first node, copenhagen1. One way to do that is sketched below.
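A minimal drain sketch: stop the services on that node with a drain timeout, giving sessions time to relocate to the other node. It assumes a database named CDB1 with a service named SALES (both hypothetical names):
[oracle@copenhagen1]$ srvctl stop service -db CDB1 -service SALES \
      -node copenhagen1 -drain_timeout 300 -stopoption IMMEDIATE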
- I unlock the old GI home as root:
  [root@copenhagen1]$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
  [root@copenhagen1]$ cd $OLD_GRID_HOME/crs/install
  [root@copenhagen1]$ ./rootcrs.sh -unlock -crshome $OLD_GRID_HOME
  - This is required because the next step (gridSetup.sh) runs as grid and must have access to the GI home.
  - Later on, when I run root.sh, the script will lock the GI home again.
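A quick way to see the effect of the unlock: rootcrs.sh -unlock hands ownership of the GI home back from root to the grid user, which a simple spot check of the directory ownership can confirm:
[grid@copenhagen1]$ ls -ld $OLD_GRID_HOME $OLD_GRID_HOME/bin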
- I switch to the old GI home as grid:
  [grid@copenhagen1]$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
  [grid@copenhagen1]$ export ORACLE_HOME=$OLD_GRID_HOME
  [grid@copenhagen1]$ export CURRENT_NODE=$(hostname)
  [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh \
      -silent -switchGridHome \
      oracle.install.option=CRS_SWONLY \
      ORACLE_HOME=$ORACLE_HOME \
      oracle.install.crs.config.clusterNodes=$CURRENT_NODE \
      oracle.install.crs.rootconfig.executeRootScript=false
- I complete the switch by running root.sh as root:
  [root@copenhagen1]$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
  [root@copenhagen1]$ $OLD_GRID_HOME/root.sh
  - This step restarts the entire GI stack, including the resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
  - In that period, GI marks the services as OFFLINE so users can connect to other nodes.
  - If my database listener runs out of the Grid Home, GI will move it to the Grid Home I am switching to (the old Grid Home), including copying listener.ora.
  - In the end, GI restarts the resources (databases and the like).
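If I want to confirm where the listener now runs from, I can check its configuration, which prints the Oracle home the listener uses:
[grid@copenhagen1]$ srvctl config listener
[grid@copenhagen1]$ srvctl status listener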
- I update any profiles (e.g., .bashrc) and other scripts referring to the GI home.
- I verify that the active GI home is now the old GI home:
  [grid@copenhagen1]$ $OLD_GRID_HOME/srvm/admin/getcrshome
- I verify that the cluster upgrade state is ROLLING PATCH:
  [grid@copenhagen1]$ $OLD_GRID_HOME/bin/crsctl query crs activeversion -f
- I verify all CRS services are online:
  [grid@copenhagen1]$ $OLD_GRID_HOME/bin/crsctl check cluster
- I verify all resources are online:
  [grid@copenhagen1]$ $OLD_GRID_HOME/bin/crsctl stat resource -t
- I verify that the GI patch level is 19.17.0 – the old patch level:
  [grid@copenhagen1]$ $OLD_GRID_HOME/bin/crsctl query crs releasepatch
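Before moving on to the next node, I also confirm the database instance is back and restart anything I stopped while draining. Again a sketch with the hypothetical names CDB1 and SALES:
[oracle@copenhagen1]$ srvctl status database -db CDB1
[oracle@copenhagen1]$ srvctl start service -db CDB1 -service SALES -node copenhagen1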
4. Roll Back Node 2
- I roll back the second node, copenhagen2, using the same process as the first node, copenhagen1.
- I double-check that the CURRENT_NODE environment variable gets updated to copenhagen2.
- When I use crsctl query crs activeversion -f to check the cluster upgrade state, it will now report NORMAL again, because copenhagen2 is the last node in the cluster.
5. Cluster Verification Utility
- I use Cluster Verification Utility (CVU) again. Now I perform a post-rollback check. I do this on one node only:
[grid@copenhagen1]$ $CVU_HOME/bin/cluvfy stage -post patch
That’s it!
My cluster is now operating at the previous patch level.
Appendix
SwitchGridHome Does Not Have Dedicated Rollback Functionality
OPatchAuto has dedicated rollback functionality that will revert the previous patch operation. Similar functionality does not exist when you use the SwitchGridHome method.
This is described in Steps for Minimal Downtime Grid Infrastructure Out of Place (OOP) Patching using gridSetup.sh (Doc ID 2662762.1). To roll back, simply switch back to the previous GI home using the same method as for the patch. As the note puts it:
"There is no real rollback option as this is a switch from OLD_HOME to NEW_HOME. To return to the old version you need to recreate another new home and switch to that."
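For comparison, a rollback with OPatchAuto is a dedicated command that reverts the patch operation. A sketch only; the patch directory is a placeholder, and the command runs as root:
[root@copenhagen1]$ export PATH=$NEW_GRID_HOME/OPatch:$PATH
[root@copenhagen1]$ opatchauto rollback /u01/patches/<patch_id> -oh $NEW_GRID_HOME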
Should I Roll Back the Database as Well?
This post describes rolling back the GI home only. Usually, I recommend keeping the database and GI patch level in sync. If I roll back GI, should I also roll back the database?
The short answer is no!
Keeping the GI and database patch levels in sync is a good idea. But when you need to roll back, you are in a contingency. Only roll back the component that gives you problems. You will then be out of sync for a period of time until you can get a one-off patch or move to the next Release Update. Being in this state for a short period is perfectly fine – and supported.
Other Blog Posts in This Series
- Introduction
- How to Patch Oracle Grid Infrastructure 19c Using In-Place OPatchAuto
- How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place OPatchAuto
- How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place SwitchGridHome
- How to Patch Oracle Grid Infrastructure 19c and Oracle Data Guard Using Standby-First
- How to Patch Oracle Grid Infrastructure 19c Using Zero Downtime Oracle Grid Infrastructure Patching
- Which Method Should I Choose When Patching Oracle Grid Infrastructure 19c
- How to Avoid Interruptions When You Patch Oracle Grid Infrastructure 19c
- Patching Oracle Grid Infrastructure And Oracle Data Guard
- How to Clone Oracle Grid Infrastructure Home Using Golden Images
- How to Roll Back Oracle Grid Infrastructure 19c Using SwitchGridHome
- How to Remove an Old Oracle Grid Infrastructure 19c Home
- Use Cluster Verification Utility (cluvfy) and Avoid Surprises
- A Word about Zero Downtime Oracle Grid Infrastructure Patching
- Why You Need to Use Oracle Fleet Patching and Provisioning
- My Best Advice on Patching Oracle Grid Infrastructure
- Pro Tips