How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place SwitchGridHome

Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using the out-of-place method and the -switchGridHome parameter.

My demo system:

  • Is a 2-node RAC
  • Runs Oracle Linux
  • Is currently on 19.16.0, and I want to patch to 19.17.0
  • Uses grid as the owner of the software

I patch only the GI home. I can patch the database later.

Preparation

I need to download:

  1. The base release of Oracle Grid Infrastructure (LINUX.X64_193000_grid_home.zip) from oracle.com or Oracle Software Delivery Cloud.
  2. The latest OPatch from My Oracle Support (patch 6880880).
  3. The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I will use the combo patch (34449117).

You can use AutoUpgrade to easily download GI patches.

I place the software in /u01/software.
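
Before going further, it is worth confirming that all three files are staged and intact. A quick sketch, comparing the output against the checksums published alongside the downloads:

$ ls -l /u01/software
$ sha256sum /u01/software/LINUX.X64_193000_grid_home.zip \
    /u01/software/p6880880_190000_Linux-x86-64.zip \
    /u01/software/p34449117_190000_Linux-x86-64.zip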

How to Patch Oracle Grid Infrastructure 19c

1. Prepare a New Grid Home

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

  1. I need to create a folder for the new Grid Home. I must do this as root on all nodes in my cluster (copenhagen1 and copenhagen2):

    [root@copenhagen1]$ mkdir -p /u01/app/19.17.0/grid
    [root@copenhagen1]$ chown -R grid:oinstall /u01/app/19.17.0
    [root@copenhagen1]$ chmod -R 775 /u01/app/19.17.0
    
    [root@copenhagen2]$ mkdir -p /u01/app/19.17.0/grid
    [root@copenhagen2]$ chown -R grid:oinstall /u01/app/19.17.0
    [root@copenhagen2]$ chmod -R 775 /u01/app/19.17.0
    
  2. I switch to the Grid Home owner, grid.

  3. I ensure that there is passwordless SSH access between all the cluster nodes. It is a requirement for the installation, but sometimes it is disabled to strengthen security:

    [grid@copenhagen1]$ ssh copenhagen2 date
    
    [grid@copenhagen2]$ ssh copenhagen1 date
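
    If the check prompts for a password, passwordless SSH must be re-enabled first. A minimal sketch using standard OpenSSH tooling (the key type and file locations are only an example; follow your own security policies):

    [grid@copenhagen1]$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    [grid@copenhagen1]$ ssh-copy-id grid@copenhagen2

    [grid@copenhagen2]$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    [grid@copenhagen2]$ ssh-copy-id grid@copenhagen1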
    
  4. I extract the base release of Oracle Grid Infrastructure into the new Grid Home. I work on one node only:

    [grid@copenhagen1]$ export NEWGRIDHOME=/u01/app/19.17.0/grid
    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
    

    Optionally, you can use a golden image.
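
    If a patched 19.17.0 Grid Home already exists somewhere, a gold image created from it can replace the base-release unzip above and the Release Update application in step 8. A sketch of how that could look; the source host and the generated zip file name are placeholders:

    [grid@sourcehost]$ $ORACLE_HOME/gridSetup.sh -createGoldImage -destinationLocation /u01/software

    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ unzip -oq /u01/software/<gold_image_file>.zip

    Because the gold image is already patched, the -applyRU and -applyOneOffs parameters in step 8 would not be needed in that case.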

  5. I update OPatch to the latest version:

    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ rm -rf OPatch
    [grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
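
    A quick way to confirm that the new home now has the latest OPatch is to ask it for its version:

    [grid@copenhagen1]$ $NEWGRIDHOME/OPatch/opatch version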
    
  6. Then, I check the Oracle Grid Infrastructure prerequisites. I am good to go if the check doesn’t write any error messages to the console:

    [grid@copenhagen1]$ export ORACLE_HOME=$NEWGRIDHOME
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh -executePrereqs -silent
    
  7. I want to apply the 19.17.0 Release Update while I install the Grid Home. To do that, I must extract the patch file:

     [grid@copenhagen1]$ cd /u01/software
     [grid@copenhagen1]$ unzip -oq p34449117_190000_Linux-x86-64.zip -d 34449117
    
    • The combo patch contains the GI bundle patch which consists of:
      • OCW Release Update (patch 34444834)
      • Database Release Update (34419443)
      • ACFS Release Update (34428761)
      • Tomcat Release Update (34580338)
      • DBWLM Release Update (33575402)
    • I will apply all of them.
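
    Listing the extracted directory shows where each sub-patch lands; those sub-patch directories are what the -applyRU and -applyOneOffs parameters point to in the next step:

     [grid@copenhagen1]$ ls /u01/software/34449117/34449117
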
  8. Finally, I can install the new Grid Home:

    • I need to update the environment variables.
    • CLUSTER_NODES is a comma-separated list of all the nodes in my cluster.
    • The parameter -applyRU must point to the directory holding the OCW Release Update.
    • The parameter -applyOneOffs is a comma-separated list of the paths to each of the other Release Updates in the GI bundle patch.
    [grid@copenhagen1]$ export ORACLE_BASE=/u01/app/grid
    [grid@copenhagen1]$ export ORA_INVENTORY=/u01/app/oraInventory
    [grid@copenhagen1]$ export CLUSTER_NAME=$(olsnodes -c)
    [grid@copenhagen1]$ export CLUSTER_NODES=$(olsnodes | tr '\n' ','| sed 's/,\s*$//')
    [grid@copenhagen1]$ cd $ORACLE_HOME
    [grid@copenhagen1]$ ./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
       -applyRU /u01/software/34449117/34449117/34416665 \
       -applyOneOffs /u01/software/34449117/34449117/34419443,/u01/software/34449117/34449117/34428761,/u01/software/34449117/34449117/34580338,/u01/software/34449117/34449117/33575402 \
       -responseFile $ORACLE_HOME/install/response/gridsetup.rsp \
       INVENTORY_LOCATION=$ORA_INVENTORY \
       ORACLE_BASE=$ORACLE_BASE \
       SELECTED_LANGUAGES=en \
       oracle.install.option=CRS_SWONLY \
       oracle.install.asm.OSDBA=asmdba \
       oracle.install.asm.OSOPER=asmoper \
       oracle.install.asm.OSASM=asmadmin \
       oracle.install.crs.config.ClusterConfiguration=STANDALONE \
       oracle.install.crs.config.configureAsExtendedCluster=false \
       oracle.install.crs.config.clusterName=$CLUSTER_NAME \
       oracle.install.crs.config.gpnp.configureGNS=false \
       oracle.install.crs.config.autoConfigureClusterNodeVIP=false \
       oracle.install.crs.config.clusterNodes=$CLUSTER_NODES
    
    • Although the script says so, I don’t run root.sh yet.
    • I install it in silent mode, but I could use the wizard instead.
    • You need to install the new GI home in a way that matches your environment.
    • For inspiration on setting the various parameters, you can check the response file used for the previous Grid Home.
    • If I have one-off patches to install, I can add them to the -applyOneOffs parameter.
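
    Once gridSetup.sh finishes, the patch level of the new (still inactive) home can be confirmed with OPatch. This is also a convenient time to run a manual conflict check for any extra one-offs, since there is no maintenance-window pressure yet. A sketch, where the -phBaseDir value is a placeholder for wherever a one-off patch is extracted:

    [grid@copenhagen1]$ $NEWGRIDHOME/OPatch/opatch lspatches -oh $NEWGRIDHOME
    [grid@copenhagen1]$ $NEWGRIDHOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
       -phBaseDir /u01/software/<one-off-patch-directory> -oh $NEWGRIDHOME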

2. Switch to the New Grid Home

Now, I can complete the patching process by switching to the new Grid Home. I do this one node at a time. Step 2 involves downtime.

  1. First, on copenhagen1, I switch to the new Grid Home:
    [grid@copenhagen1]$ export ORACLE_HOME=/u01/app/19.17.0/grid
    [grid@copenhagen1]$ export CURRENT_NODE=$(hostname)
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh \
       -silent -switchGridHome \
       oracle.install.option=CRS_SWONLY \
       ORACLE_HOME=$ORACLE_HOME \
       oracle.install.crs.config.clusterNodes=$CURRENT_NODE \
       oracle.install.crs.rootconfig.executeRootScript=false
    
  2. Then, I run the root.sh script as root:
    • This step restarts the entire GI stack, including resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
    • In that period, GI marks the services as OFFLINE so users can connect to other nodes.
    • If my database listener runs out of the Grid Home, GI will move it to the new Grid Home, including copying listener.ora.
    • Optionally, if I want a more graceful approach, I can manually stop the services and perform draining, as sketched below.
    • In the end, GI restarts the resources (databases and the like).
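
    A rough sketch of that graceful approach, run before root.sh, assuming a database named SALES registered with this cluster and managed by the oracle user (the database name, user, and options are examples and must match your environment):

    [oracle@copenhagen1]$ srvctl stop service -db SALES -node copenhagen1 -drain_timeout 300
    [oracle@copenhagen1]$ srvctl stop instance -db SALES -node copenhagen1 -stopoption immediate
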
    [root@copenhagen1]$ /u01/app/19.17.0/grid/root.sh
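
    After root.sh completes on the node, the standard crsctl checks can be used to confirm that the stack came back on the new Grid Home and reports the new patch level; for example:

    [grid@copenhagen1]$ /u01/app/19.17.0/grid/bin/crsctl check crs
    [grid@copenhagen1]$ /u01/app/19.17.0/grid/bin/crsctl query crs activeversion -f
    [grid@copenhagen1]$ /u01/app/19.17.0/grid/bin/crsctl query crs releasepatch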
    
  3. I update any profiles (e.g., .bashrc) and other scripts referring to the Grid Home.
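    A quick way to find leftover references is to grep for the old Grid Home path; the file list here is only an example of the usual suspects:

    [grid@copenhagen1]$ grep -l '/u01/app/19.0.0.0/grid' ~/.bashrc ~/.bash_profile /etc/oratab 2>/dev/null
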
  4. I connect to the other node, copenhagen2, and repeat steps 2.1 to 2.3. I double-check that the CURRENT_NODE environment variable gets updated to copenhagen2.

That’s it! I have now patched my Grid Infrastructure deployment.

Later on, I can patch my databases as well.

A Word about the Directory for the New Grid Home

Be careful when choosing a location for the new Grid Home. The documentation lists some requirements you should be aware of.

In my demo environment, the existing Grid Home is:

/u01/app/19.0.0.0/grid

Since I am patching to 19.17.0, I think it makes sense to use:

/u01/app/19.17.0/grid

If your organization has a different naming standard, that’s fine. Just ensure you comply with the requirements specified in the documentation.

Don’t Forget to Clean Your Room

At a future point, I need to remove the old Grid Home. I use the deinstall tool in the Grid Home. I execute the command on all nodes in my cluster:

$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
$ export ORACLE_HOME=$OLD_GRID_HOME
$ $ORACLE_HOME/deinstall/deinstall -local
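
Before running deinstall, it is worth double-checking that the running stack no longer uses the old home; for example, the process listing reveals which Grid Home the cluster daemons were started from:

$ ps -ef | grep '[o]cssd.bin'
$ cat /etc/oracle/olr.loc

On Linux, olr.loc typically records the crs_home of the active Grid Home (the location may differ on other platforms).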

I will wait until:

  • I have seen the new Grid Home run without problems for a week or two.
  • I have patched my Oracle Databases managed by GI.
  • I have seen my Oracle Databases run without GI-related problems for a week or two.

Happy Patching!

Appendix

Windows

Oracle supports this functionality, SwitchGridHome, on Microsoft Windows starting from Oracle Database 23ai.

AIX

Check this out: Grid Infrastructure 19c Out-Of-Place Patching Fails on AIX

Further Reading

Other Blog Posts in This Series

63 thoughts on “How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place SwitchGridHome”

  1. Rather than install the software from scratch – wouldn’t it be better to clone the software from one of the existing nodes? In that way, any specific patches that might have been applied (and potentially the order of them) will be preserved?

    (If you agree, could you add how to clone with the installer to this post – I plan on this post being my template for future patching)


    1. Hi Blair,

      When I patch Oracle Database I always use brand-new homes (instead of cloning). There are several issues during patching when you keep cloning existing Oracle Homes.
      Without knowing whether they still apply for Grid Infrastructure (I think they do), I continued to use my approach with brand-new Oracle Homes. But I will put it on my list and if time allows, I will follow up on it later on.

      Mike Dietrich has several posts about the issues that arise when you clone existing Oracle Homes. Here’s one of them:

      Binary patching is slow because of the inventory

      Regards,
      Daniel


  2. Did you try zero downtime GI patching? Unless you’re using ACFS or things like that, a DB on 19.16 or above will remain up while switching the GI home with the transparent option ;)


  3. Hi Daniel,

    Thanks for this great article.
    I think you could even skip (on Linux) the RU patching of the target binaries by using:

    34695857: PLACEHOLDER BUG FOR GRID SOFTWARE CLONE 19.17.0.0.221018 OCT 2022
    or for 19.18:
    34979367: PLACEHOLDER BUG FOR GRID SOFTWARE CLONE 19.18.0.0.230117 JAN 2023

    Regards,
    Andreas


    1. Hi Andreas,

      Oh. At first glance, they look promising. However, I think they are meant for Exadata only. If I remember correctly, gold images have been available for Exadata for a while. Let me find out whether they can be used for non-Exadata platforms as well.

      Great tip.

      Regards,
      Daniel


    1. Hi Jorge,
      I think it does. The MOS note doesn’t mention any restrictions. Unfortunately, I don’t have a Windows RAC that I can use for testing.
      One thing that won’t work is -applyRU and -applyOneOffs. Those are – for some reason – not available on Windows.
      Regards,
      Daniel


    1. Hi Neil,

      I’ve never tried it myself for Oracle Restart. But the procedure does seem similar for Oracle Restart – except for a few changes in the commands.
      Have you tried it, and what was your experience?

      I have it on my to-do list but – you know – there are so many things on it :) I’ll see what I can do.

      Regards,
      Daniel


  4. Good Afternoon
    I am working on trying to transition our site to out-of-place patching. Are there any examples/documents on doing this for GRID HAS systems?


  5. You are correct. We have built scripting to do the basic copy of the unconfigured tar image we create (we use this for provisioning also), and that installs fine. If we set the home to it and query SQL*Plus, it shows we are on the new version. We just need to qualify how to move from the old grid home to the new grid home for our standalone HAS install (created using the HA_SWONLY option).


  6. Hello and thank you for your hard work on patching!

    I’m transitioning from in-place patching to OOP as described in your article. Does gridSetup.sh -applyRU perform prechecks? Normally I would run opatchauto -analyze before applying….

    Thanks again


    1. Hi David,
      It does not check upfront. It tries to install the patches and will bail out on conflicts.

      But the good thing about switchgridhome is that you do all this in advance. You create the new GI home before the maintenance window and have plenty of time to deal with conflicts. When you know the patches you want to install either use the MOS conflict checker or run the conflict check manually (opatch prereq CheckConflictAgainstOHWithDetail).

      But let me stress again. You prepare the new Oracle Home in advance. There is no stress, and you can deal with the conflicts. Ideally, you create a golden image (and deal with conflicts one time). Then you save that golden image and deploy it to all your servers. Then there is no patching on the servers when you deploy; you just use the golden image, which is already patched.

      Regards,
      Daniel


  7. Hi Daniel

    Thanks for the great blog. It worked without any changes in my environment (2-node RAC cluster with 2-node Data Guard).

    But how can I fail back to the old version in case of a bug? switchGridHome alone probably wouldn’t be sufficient?


  8. Hi Daniel!

    Great manual, many thanks!

    One additional note: It’s better to stop the DB instance on the host in step 2 in advance, because the cluster software stops it in abort mode, which is not a very graceful method.

    Regards, Vadim


  9. Hi Daniel,
    just tried out the “switchGridHome” method. With your tutorial it has worked!

    I already asked here https://community.oracle.com/mosc/discussion/4533331/oop-rac-database-without-downtime
    how to patch a DB home in RAC with OOP. I’ve learned I should use FPP.
    But we only have two clusters and I’m not sure if it is worthwhile to build an FPP environment.

    Some time has passed: So, are there any insights on when Oracle will officially support OOP of Oracle homes in a RAC env with opatchauto or a “-switchDBHome” switch for ./runInstaller? I think autoupgrade.jar could be an option, too.
    In Doc ID 2853839.1 I found this: “Patching database home by in-place method is usually preferable.”
    I’m a little bit confused. On the other side, I always read that Oracle recommends “use OOP”.
    Will FPP become the recommended tool for OOP in RAC envs, so that I have to test and implement it?
    Thank you, Peter


    1. Hi Peter,

      I’m glad that you used my blog post to try the new procedure – and that you had success.

      If you just have two clusters, I’d say FPP is overkill. Although it has a stand-alone mode, it requires a new skillset and there’s little benefit. Don’t get me wrong – FPP is great – but I don’t think the benefits kick in with just two clusters. Unless you also have Exadata – then perhaps.

      My team strongly disagrees with the authors of Doc ID 2853839.1. For database and GI, my team strongly recommends out-of-place patching. No doubt! But I have no control to change all MOS notes :)

      FPP always uses out-of-place patching, and more and more of our tools are going that way. E-Business Suite is also working on certifying OOP patching.

      I am not aware of any plans to make FPP mandatory. I would recommend that you focus on getting OOP patching right for your current cluster.

      Just to get things straight. Oracle DO SUPPORT out-of-place patching using the -switchGridHome option. It’s in the documentation and our tools are actively using it.

      Regards,
      Daniel


  10. Hi Daniel,

    I would like to thank you for your OOP patching approach; it helped me a lot during my latest RU in 19c RAC deployments.

    I followed your instructions and managed to update both the GI & DB Oracle homes using OOP.

    There is a minor mistake at the end of your instructions that needs to be corrected: “I connect to the other node, copenhagen2, and repeat steps 1-3. I double-check that the CURRENT_NODE environment variable gets updated to copenhagen2.”

    It should be “repeat steps 2-3”, since in a RAC env the new ORACLE_HOME is automatically propagated to the remote node, so only the switch home and root.sh need to be performed.

    Thank you for helping us and adding an extra value to our skills.


    1. Hi,
      I’m really glad that you found the blog post useful and that you found the value of the new method.
      I believe you still need to execute steps 1-3 for the second node. When you use the SwitchGridHome method, it applies to just the current node. It does not update the inventory on other nodes.
      Regards,
      Daniel


  11. Hi Daniel,

    The first step includes the installation and patching process that takes place on all participating nodes and does not have to be performed on the second one.

    But the material says that we need to follow the steps from 1 to 3 on the second node, which is wrong, since the binaries have already been installed.

    I installed the binaries on the first node and the process completed the installation also on the second system.

    After that I performed the switch home process and root.sh script on first node only.

    Once the task had been completed successfully I did the switch home and root.sh process on the second node.

    Thank you!


    1. Hi,

      I think we agree – we just read the post differently.

      You need to complete steps 2.1 to 2.3 on the second node. Not the entire section 1 and 2. I made the post clearer. I hope you agree to this.

      Regards,
      Daniel


  12. Hi Daniel,

    Thanks for this blogpost, very useful.

    While doing the gridSetup in the preparation phase, do we only need the parameters you specified in this blog?

    What about the parameters related to the SCAN, the parameter oracle.install.crs.config.networkInterfaceList, etc.?

    If we do not specify these parameters, will gridSetup keep the existing configuration for the new GI home?

    Or do we have to take the rsp file from the installation (as specified in the MOS note 2853839.1)?
    In our case, the rsp file is from an installation on Oracle Linux 9 with AFD (as ASMLib is not supported on OL9), so we will have to change oracle.install.asm.configureAFD (as AFD is already configured), and the same for the creation of the voting disks, etc.

    thanks,

    Els


    1. Hi Els,

      Thanks for the positive feedback. Much appreciated.

      You should install the new GI home just like you want it. I’m using parameters that match my environment, but in your case with AFD you need to change accordingly. There might be other settings where you want it differently. Typically, you would want to more or less copy the settings from the previous response file.

      The possibility of using out-of-place patching and the switch home method doesn’t depend on the way that you install the new GI home.

      I hope that helps,
      Daniel


  13. Thanks for your answer Daniel, but it is still not clear to me which parameters we have to specify in the preparation phase. Maybe my question was not clear (sorry for that), but I would like to know which parameters need to be configured only once for a RAC cluster and which parameters depend on the GI home.

    I could imagine that AFD needs to be configured only once (during the initial installation of the cluster) and that the clusterNodes need to be specified while installing every new GI home.

    But what about the config of the voting disks, the VIP and SCAN addresses, networkInterfaceList, etc.? When executing the “gridSetup.sh -switchGridHome” command, will the config of the whole RAC cluster be transferred to the new GI home?

    I tried to look it up in the “Grid Infrastructure Installation and Upgrade Guide”, but I didn’t find detailed information. Can you point me to the documentation you used?

    Thanks!


    1. Hi Els,

      The entire GI configuration is transferred to the new GI home as part of the switch. Normally, you’d need just the parameters that I use, but since I don’t know your environment, I can’t give you specific advice.
      If you open the template response file from the GI home, you can see which sections you need to fill out to do a CRS_SWONLY install.

      The procedure is described further here:
      https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/switching-oracle-grid-infrastructure-home-after-patching.html

      And the MOS note referenced in the appendix.

      I hope that helps.

      If you need any further clarification than this, you should probably open an SR.

      Regards,
      Daniel


  14. Hi Daniel,

    up to now we have used the OOP patching feature of opatchauto. As the runtimes for the patching process are growing and growing with each update, we would like to use “your” way of OOP patching. But: we are using AIX (sorry for that) and we are facing bug 34962446 when we try to create a new 19.25.0 grid home :-(

    There is a knowledge base article for that:

    Oracle Support Document 2948468.1 (Out of place (OOP) patching of 19c Release Update (RU) fails on AIX) can be found at: https://support.oracle.com/epmos/faces/DocumentDisplay?id=2948468.1

    I’m not familiar with the ‘createGoldImage’ option of gridSetup.sh, so I would like to know if the described solution is a real workaround. The solution would mean that we have to create a golden image from the current installation (which is a 19.22.0). Is that really comparable to a fresh new installation? Do you see another workaround?

    Thanks & warm regards & Happy New Year 2025


  15. Hi Daniel,

    Prereqs & Setup Software Steps completed without issues.

    Switch grid on Node 1 runs into

    [FATAL] [INS-41885] The group specified for OSOPER is not same as the group ” retrieved from the current configuration of grid software. The patching operation will not be successful if the same group name is not selected.
    ACTION: Select the same group and proceed further.

    How to proceed with this?

    Thanks,

    Surya


    1. Hi Surya,

      I’m glad to hear that you’re trying out the new, improved method for out-of-place patching. Once you sort out the little details, you won’t regret it! :)

      The error probably comes from the installation of the new GI home. Here you specify the command line parameter “oracle.install.asm.OSOPER”. In my example, I use “asmoper” but it could be different in your case. So, check your current installation and see which value you used.
      If in doubt, you can check the response file from the previous installation. It is placed in $ORACLE_HOME/install/response.
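
      For example, something along these lines shows the group settings recorded at install time (the grep pattern is only an illustration; point it at the previous Grid Home):

      $ grep -E 'OSDBA|OSOPER|OSASM' /u01/app/19.0.0.0/grid/install/response/*.rsp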

      Regards,
      Daniel


  16. Hi Daniel,

    We just recently started OOP and have followed the steps outlined in this blogpost.
    Thanks a lot, we have found it very useful.

    But we have run into a rather peculiar situation:
    After switchGridHome and during execution of root.sh to move the GI stack to the new home, services are not failed over to node2.
    And root.sh on node1 fails on startup of ASM; we were able to start it up manually later after following Doc ID 1383737.1 (Case 4, solution 4.2).
    We have also discovered that HAIP is not working correctly, i.e., the autogenerated 169.254.x.x address is not assigned to the network interface.

    After switchGridHome on node2 (same issue, same manual solution), the cluster is running just fine on 19.25. We have tested the failover feature thoroughly and it seems to work just fine, and therefore decided to go forward with the test and production clusters.

    After rolling back our sandbox cluster to 19.22 (using your blog on rollback), HAIP is working as expected!
    I logged an SR (3-39525566221), but Oracle Support recommends rolling back production to 19.22 and trying in-place patching, as OOP is “not recommended for HAIP”.

    Any thoughts will be very welcome.

    Regards
    Temsgen


  17. Hi Daniel,

    thanks for your great blogpost – when following your instructions I had a strange behaviour when executing root.sh. In the logfile it complains about missing programs (clsecho, cluutil, etc.).

    But I figured out that I need to execute root.sh twice on each node. On the second run the final switch from the old GRID_HOME to the new GRID_HOME worked properly. Maybe this information is useful for others having the same issue.

    BR

    Michael.


    1. Hi Michael,

      Thanks for sharing your experience. It’s not intentional that you need to run root.sh twice. That does sound like some sort of issue to me – possibly a bug. If the issue persists, perhaps you should pursue it with support.

      But thanks for sharing the workaround.

      Daniel


    1. Hi Kirk,

      Thanks for leaving a comment. Whether you use opatchauto or switchgridhome, you can do it in a rolling manner. I like switchgridhome because:
      1. I get more control over the process compared to opatchauto.
      2. I can use brand-new Oracle homes for the new GI home. I don’t have to re-use an old one (like opatchauto does).
      3. It allows me to create gold images that I can replicate to all other servers.

      If you’re happy with opatchauto today and it works fine for you, then keep using it. However, if you think that those benefits would be good to have, then give the new method a try.

      Regards,
      Daniel


  18. Hi – great article – can you confirm that if I run gridSetup -switchGridHome and use the GUI, I will have to run just the root.sh script on both nodes and not the actual gridSetup command on node 2?


    1. Hi Nik,

      Thanks for the nice feedback – much appreciated.

      I haven’t tried this the GUI way, but I’d strongly assume that you should do it in the same way – whether by GUI or command line. So, running -switchGridHome interactively on all nodes in your cluster.

      Regards,
      Daniel



  19. Hiya Daniel, thanks for getting back to me – just wanted to confirm that I tried it via the GUI last night on my test system, and it does prompt you to run root.sh on both nodes after the initial setup is done, so there is no need to run the command from the second node. Hope this helps.


  20. Hi Daniel – quick update – during my tests I tried to mimic the production environment as much as I could. Our current env is a 2-node RAC cluster with the DB on 12.2 and GI on 19.7, plus a 2-node RAC standby in the same config. The password file is stored on the file system on all systems. On my test box everything is on 19 and the password file is in ASM. I did the switch to 19.27 – it worked fine. I did the switchback to 19.3 – it worked fine. I did another switch to 19.27 and again it worked fine. Before the latest switchback I moved the password file out of ASM into its default directory on the OS and edited the srvctl config so that it was blank. The switchback worked fine, but I could not connect as sys using the service – I was getting ORA-01017: invalid username/password; logon denied. I assumed the password file would be picked up from the OS side. Once I modified the config to point back to the original password file in ASM, it all worked. I found this to be the case when I did this on our PREPROD environment – I had to explicitly create a password file in ASM to get past the errors. Are you able to shed any light on this?

    Thanks,


    1. Hi Nik,
      That’s a good amount of testing you’re doing there. Great.

      I don’t know the reason for that behavior. I haven’t seen that before. Sounds like something in the Clusterware space. You could file an SR if you want to dig deeper into that.

      Regards,
      Daniel


  21. Hi Nik B and Daniel, on which side did the passwordless connect not work any more, primary/standby or both? Maybe we have a similar problem. We switch a snapshot standby twice a day to physical and then back to snapshot to get fresh data in the snapshot db. We had the same problem getting ORA-01017 when connecting on the standby side. We fixed it with a pre-step, always copying the pw-file from primary to standby. Regards, Holger


  22. Hello Daniel,

    We followed your procedure to patch from 19.12 to 19.28 on a 2-node RAC (Linux OEL).

    Everything is working fine, the process was OK, without interruption. And our clusters are up and running.

    We also patched the RDBMS with your other procedure.

    But in the grid dir, lots of files are missing (crsctl for example). After lots of tests and exchanges with Oracle Support, it seems that roothas.sh (or similar) was not executed.
    I can’t find when it’s executed with switchGridHome.

    In this doc: https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/switch-gi-home-patching.html, there’s a command to execute this file, but in my understanding it’s another patching method.

    Did it happen to you?

    I did some tests patching from 19.26 to 19.28, with the same issue each time.

    Here’s my procedure:

    For 19c the file system needs at least 10GB of free space.

    GI RU and optional one-off patch(es) in use should be unzipped elsewhere.

    and its parent directory should be owned by grid:oinstall

    directory should be empty.

     USER =====> orapro in /opt/app/oracle/source/grid, OPatch USER =====> root

     mkdir -p /opt/app/grid/19.28.0.0
     chown -R orapro:oinstall /opt/app/grid/19.28.0.0
     chmod -R 775 /opt/app/grid/19.28.0.0

     INFO =====> check
     rpm -q policycoreutils policycoreutils-python

     USER =====> with orapro on boule

     1crs
     export ORACLE_HOME=/opt/app/grid/19.28.0.0
     export ACTUAL_ORACLE_HOME=/opt/app/grid/19.26.0.0
     export SOURCE_DIR=/opt/app/oracle/source/

     INFO =====> As grid user, unzip base release or gold image only on first node.
     cd $ORACLE_HOME
     unzip -oq $SOURCE_DIR/grid/V982068-01.zip

     rm -Rf OPatch/*
     unzip -oq $SOURCE_DIR/OPatch/p6880880_190000_Linux-x86-64.zip

     INFO =====> Unzip only on first node; gridSetup.sh will copy binaries to other nodes during deployment.
     cd $SOURCE_DIR/grid/
     unzip -oq p37952382_190000_Linux-x86-64.zip -d 37952382

     $ORACLE_HOME/gridSetup.sh -executePrereqs

     INFO =====> In this procedure all command lines regarding gridSetup.sh must be
     INFO =====> - running from the destination grid home
     INFO =====> - with variable oracle.install.option=CRS_SWONLY
     INFO =====> Create response file
     INFO =====> Copy from template file /install/response/gridsetup.rsp to create your own response file.
     INFO =====> If your cluster was initially installed via response file, then that file is a better start.

     cp -v $ACTUAL_ORACLE_HOME/install/response/grid_2025-03-04_10-06-02AM.rsp $ORACLE_HOME/install/response/new_gridsetup.rsp
     sed -i 's/oracle.install.option=UPGRADE/oracle.install.option=CRS_SWONLY/' $ORACLE_HOME/install/response/new_gridsetup.rsp
     sed -i 's/oracle.install.asm.OSDBA=/oracle.install.asm.OSDBA=oinstall/' $ORACLE_HOME/install/response/new_gridsetup.rsp
     sed -i 's/oracle.install.asm.OSASM=/oracle.install.asm.OSASM=asmadmin/' $ORACLE_HOME/install/response/new_gridsetup.rsp
     sed -i 's/oracle.install.asm.OSOPER=/oracle.install.asm.OSOPER=oper/' $ORACLE_HOME/install/response/new_gridsetup.rsp

     INFO =====> Fill out the response file
     WARN =====> This response file should not have the variable 'ORACLE_HOME' and must have oracle.install.option=CRS_SWONLY

     vi /opt/app/grid/19.28.0.0/install/response/new_gridsetup.rsp
     # Do not change the following system generated value.
     oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
     oracle.install.option=CRS_SWONLY
     # The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges.
     oracle.install.asm.OSDBA=orapro
     # The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This must be different than the previous two.
     oracle.install.asm.OSASM=asmadmin
     # Specify the complete path of the Oracle Base.
     ORACLE_BASE=/opt/app/grid/base
     # You can specify a range of nodes in the tuple using colon separated fields of format
     # hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node
     oracle.install.crs.config.clusterNodes=xxxx,zzzzz

     $ORACLE_HOME/gridSetup.sh -silent -ignorePrereqFailure -applyRU $SOURCE_DIR/grid/37952382/37952382/37957391 -applyOneOffs $SOURCE_DIR/grid/37952382/37952382/37847857 -responseFile $ORACLE_HOME/install/response/new_gridsetup.rsp

     INFO =====> on the first node
     INFO =====> check

     1crs
     $ORACLE_HOME/bin/crsctl query crs releasepatch
     $ORACLE_HOME/bin/crsctl check cluster
     $ORACLE_HOME/bin/crsctl query crs activeversion -f

     export ORACLE_HOME=/opt/app/grid/19.28.0.0
     export CURRENT_NODE=xxxxx
     $ORACLE_HOME/gridSetup.sh -silent -switchGridHome oracle.install.option=CRS_SWONLY ORACLE_HOME=$ORACLE_HOME oracle.install.crs.config.clusterNodes=$CURRENT_NODE oracle.install.crs.rootconfig.executeRootScript=false

     INFO =====> execute root.sh as ROOT
     USER =====> root

     INFO =====> on second node

     1crs
     export ORACLE_HOME=/opt/app/grid/19.28.0.0
     export CURRENT_NODE=zzzzz
     $ORACLE_HOME/gridSetup.sh -silent -switchGridHome oracle.install.option=CRS_SWONLY ORACLE_HOME=$ORACLE_HOME oracle.install.crs.config.clusterNodes=$CURRENT_NODE oracle.install.crs.rootconfig.executeRootScript=false

     INFO =====> execute root.sh as ROOT
     USER =====> root

    Thanks

    Regards

    Cyrille


    1. Hi Cyrille,

      I haven’t heard of such issues before and I know many customers that are using it with success.

      The link to the docs that you provide is for Oracle Restart (single instance database managed by GI). It includes a step to run roothas.sh. I have another blog post that deals with Oracle Restart.

      Since you have an Oracle RAC database, you should follow a different procedure which doesn’t include roothas.sh.
      It includes the switchGridHome command and the root script – and that’s it. There should be no more commands.

      Do you have the original SR? You can send it to me at daniel.overby.hansen (a) oracle.com.

      Thanks,
      Daniel


      1. Hi Daniel,

        Thanks for your answer.

        I was searching for which program deploys the binaries, and found roothas.sh, but I don’t know if it’s relevant.

        But for our RAC, we follow your documentation with switchgrid : https://dohdatabase.com/2023/03/10/how-to-patch-oracle-grid-infrastructure-19c-using-zero-downtime-oracle-grid-infrastructure-patching/

        I did it with success when I patched from 19.12 to 19.26 in dev, but when we did 19.26 to 19.28 in dev and 19.12 to 19.28 in prod, we got the issue with the missing binaries.

        I’ll send you my SR number by email.

        Thanks for your time and your help.


      2. Hello Daniel,

        I found the source of my problem.

        It’s our fault; we have a standard for naming directories, but this time the standard was not used.
        I was looking under /opt/app/grid/19.28.0.0/ (standard) but the deployment was done in /opt/app/grid/19_28_0_0/.

        Everything is in the /opt/app/grid/19_28_0_0/ directory.

        Sorry to have bothered you with a human error.

        Regards

        Cyrille


