Avoid Problems on the Primary Database by Testing on a Snapshot Standby

One of the advantages of standby-first patch apply is that I can test the patches in a production-like environment (the standby) before applying them to the primary. Should I find any issues with the patches, I can stop the process and avoid impacting the primary database.

Here’s an overview of the process.

For demo purposes, my Data Guard configuration consists of two databases:

  • SID: SALES
  • Databases: SALES_COPENHAGEN and SALES_AARHUS
  • Hosts: copenhagen and aarhus
  • Primary database: SALES_COPENHAGEN running on copenhagen

How To

This procedure starts right after I’ve patched the standby (SALES_AARHUS). The standby now runs out of the target Oracle home, whereas the primary database (SALES_COPENHAGEN) still runs out of the source Oracle home.

  • Convert the patched standby to a snapshot standby:

    DGMGRL> convert database SALES_AARHUS to snapshot standby;
    
  • Test the patch apply by running Datapatch on the standby:

    [oracle@aarhus] $ORACLE_HOME/OPatch/datapatch
    
    • Normally, you run Datapatch on the primary database only, and the changes made by the patches are shipped via redo to the standby.
    • But since I converted to a snapshot standby, it is now open like a normal read-write database, and I can run Datapatch on it.
    • If Datapatch completes without problems on the standby, I can be pretty sure it will do so on the primary as well. The standby is, after all, an exact copy of the primary database.
  • Optionally, perform additional testing on the standby.

    • I can connect any application and perform additional tests.
    • I can use SQL Performance Analyzer to check for regressing SQL statements.
    • I can make changes to any data in the standby. It is protected by a restore point.
  • When done, convert the snapshot standby back to a physical standby:

    DGMGRL> convert database SALES_AARHUS to physical standby;
    
    • This implicitly shuts down the standby, flashes it back to the restore point, and restarts the database as a physical standby.
    • All changes made when it was a snapshot standby, including the Datapatch run, are undone.

Continue the patching procedure on the primary database as described elsewhere.
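
Before I do, I like to verify that the standby is back in the physical standby role and applying redo. A quick check might look like this (a sketch; adjust the names to your configuration):

    DGMGRL> show configuration;
    DGMGRL> show database SALES_AARHUS;

    SQL> select database_role, open_mode from v$database;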

Is It Safe?

Sometimes, when I suggest using the standby for testing, people are like: Huh! Seriously?

What Happens If I Need to Switch Over or Fail Over?

I can still perform a switchover or a failover. However, they will take a little bit longer.

When I convert to snapshot standby:

  • Redo transport is still active.
  • Redo apply is turned off.

So, the standby receives all redo from the primary but doesn’t apply it. Since you normally test for 10-20 minutes, this would be the maximum apply lag. On a well-oiled standby, it shouldn’t take more than a minute or two to catch up.
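
If I want to keep an eye on the lag while testing, a quick check on the standby could look like this (a sketch):

    DGMGRL> show database SALES_AARHUS;

    SQL> select name, value from v$dataguard_stats where name in ('transport lag', 'apply lag');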

When performing a switchover or failover on a snapshot standby, you should expect an increase in the time it takes to:

  • Shut down
  • Flashback
  • Apply redo

I’d be surprised if that added more than 5 minutes. If your RTO doesn’t allow for a longer period:

  • Get a second standby.
  • Consider the reduction in risk you get when you test on the standby. Perhaps a short increase in RTO could be allowed after all.

What Happens If Datapatch Fails?

If Datapatch fails on my snapshot standby, I should be proud of myself. I just prevented the same problem from hitting production.

  • I grab all the diagnostic information I need, so I can work with Oracle Support on the issue.
  • I convert back to a physical standby, which undoes the failed Datapatch run.
  • If I expect to solve the issue quickly, I leave the standby running in the target Oracle home. Otherwise, I put it back into the source Oracle home.

So, yes, it’s safe to use!

Happy testing

Appendix

Other Blog Posts in the Series

How To Patch Oracle Data Guard Using AutoUpgrade For Non-Standby-First Installable Patches

Let me show you how I patch my Oracle Data Guard configuration. I make it easy using Oracle AutoUpgrade. I patch all at once – all databases at the same time – which means a short downtime. I can use this approach for all patches – even those that are not standby-first installable.

I recommend this approach only when you have patches that are not standby-first installable.

  • My Data Guard configuration consists of two databases:
    • SID: SALES
    • Databases: SALES_COPENHAGEN and SALES_AARHUS
    • Hosts: copenhagen and aarhus
    • Primary database: SALES_COPENHAGEN running on copenhagen

Preparations

You should do these preparations in advance of your maintenance window. They don’t interrupt operations on your databases.

  • I download the patches using AutoUpgrade.

    • Create a config file called sales-download.cfg:

      global.global_log_dir=/home/oracle/autoupgrade-patching/download
      global.keystore=/home/oracle/autoupgrade-patching/keystore
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.target_version=19
      patch1.platform=linux.x64
      
    • Start AutoUpgrade in download mode:

      java -jar autoupgrade.jar -config sales-download.cfg -patch -mode download
      
      • I can download the patches from any computer. It doesn’t have to be one of the database hosts, which typically don’t have internet access.
  • I create a new Oracle home on all hosts.

    • Create a config file called sales.cfg:
      global.global_log_dir=/home/oracle/autoupgrade-patching/sales
      patch1.source_home=/u01/app/oracle/product/19.3.0.0/dbhome_1
      patch1.target_home=/u01/app/oracle/product/19/dbhome_19_26_0
      patch1.sid=SALES
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.download=no
      
      • Start AutoUpgrade in create_home mode:
      java -jar autoupgrade.jar -config sales.cfg -patch -mode create_home
      
      • AutoUpgrade also runs root.sh. It requires that either:
        • The oracle user has sudo privileges, or
        • I execute root.sh manually.
  • Optionally, but recommended, I run an analysis on the primary database:

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -patch -mode analyze
    
    • Check the findings in the summary report.

Patching

Proceed with the following when your maintenance window starts.

  • I connect to the primary database using Data Guard broker and disable redo transport from the primary database:

    DGMGRL> edit database sales_copenhagen set state='TRANSPORT-OFF';
    
  • I update listener.ora on both hosts (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  • I reload the listener on both hosts:

    lsnrctl reload
    
  • Downtime starts!

    • Perform draining in advance according to your practices.
    • Shut down your application.
  • Patch the primary database:

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -patch -mode deploy
    
  • Simultaneously, I patch the standby database:

    [oracle@aarhus] java -jar autoupgrade.jar -config sales.cfg -patch -mode deploy
    
  • I update my profile and scripts so they point to the target Oracle home.

  • When patching completes on both hosts, I re-enable redo transport:

    DGMGRL> edit database sales_copenhagen set state='TRANSPORT-ON';
    
  • Verify the Data Guard configuration and ensure the standby database is receiving and applying redo:

    DGMGRL> show database SALES_COPENHAGEN;
    DGMGRL> show database SALES_AARHUS;
    DGMGRL> validate database SALES_COPENHAGEN;
    DGMGRL> validate database SALES_AARHUS;
    

That’s it.

Happy Patching!

Appendix

Static Listener Entry

In this blog post, I update the static listener entries required by Data Guard broker (suffixed _DGMGRL). My demo environment doesn’t use Oracle Restart or Oracle Grid Infrastructure, so this entry is mandatory.

If you use Oracle Restart or Oracle Grid Infrastructure, such an entry is not needed.
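
For illustration, the static entry on the copenhagen host could look something like the following sketch (the aarhus host uses SALES_AARHUS_DGMGRL instead; adjust names and paths to your environment). The Oracle home path is the target home from sales.cfg:

    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (GLOBAL_DBNAME = SALES_COPENHAGEN_DGMGRL)
          (ORACLE_HOME = /u01/app/oracle/product/19/dbhome_19_26_0)
          (SID_NAME = SALES)
        )
      )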

Further Reading

Other Blog Posts in the Series

How To Patch Oracle Data Guard Using AutoUpgrade And Standby-First Patch Apply With Restart

Let me show you how I patch my Oracle Data Guard configuration. I make it as easy as possible using Oracle AutoUpgrade. I reduce the interruption by doing standby-first patch apply with a primary database restart.

  • My Data Guard configuration consists of two databases:
    • SID: SALES
    • Databases: SALES_COPENHAGEN and SALES_AARHUS
    • Hosts: copenhagen and aarhus
    • Primary database: SALES_COPENHAGEN running on copenhagen

Preparations

You should do these preparations in advance of your maintenance window. They don’t interrupt operations on your databases.

  • I download the patches using AutoUpgrade.

    • Create a config file called sales-download.cfg:

      global.global_log_dir=/home/oracle/autoupgrade-patching/download
      global.keystore=/home/oracle/autoupgrade-patching/keystore
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.target_version=19
      patch1.platform=linux.x64
      
    • Start AutoUpgrade in download mode:

      java -jar autoupgrade.jar -config sales-download.cfg -patch -mode download
      
      • I can download the patches from any computer. It doesn’t have to be one of the database hosts, which typically don’t have internet access.
  • I verify all patches are standby-first installable and my configuration meets the requirements for standby-first patch apply.

  • I create a new Oracle home on all hosts.

    • Create a config file called sales.cfg:
      global.global_log_dir=/home/oracle/autoupgrade-patching/sales
      patch1.source_home=/u01/app/oracle/product/19.3.0.0/dbhome_1
      patch1.target_home=/u01/app/oracle/product/19/dbhome_19_26_0
      patch1.sid=SALES
      patch1.folder=/home/oracle/autoupgrade-patching/patch
      patch1.patch=RECOMMENDED,MRP
      patch1.download=no
      
      • Start AutoUpgrade in create_home mode:
      java -jar autoupgrade.jar -config sales.cfg -patch -mode create_home
      
      • AutoUpgrade also runs root.sh. It requires that either:
        • The oracle user has sudo privileges, or
        • I’ve stored the root credentials in the AutoUpgrade keystore.
        • Otherwise, I must execute root.sh manually.
  • Optionally, but recommended, I run an analysis on the primary database:

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -patch -mode analyze
    
    • Check the findings in the summary report.

Patching

Proceed with the following when your maintenance window starts.

  • Update listener.ora on the standby host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  • I reload the listener:

    [oracle@aarhus] lsnrctl reload
    
  • Patch the standby database:

    [oracle@aarhus] java -jar autoupgrade.jar -config sales.cfg -patch -mode deploy
    
    • I don’t disable redo transport/apply.
  • Optionally, test the application of patches using a snapshot standby database.

  • Downtime starts!

    • Perform draining in advance according to your practices.
    • Shut down your application.
  • Update listener.ora on the primary host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  • I reload the listener:

    [oracle@copenhagen] lsnrctl reload
    
  • Patch the primary database (see appendix):

    [oracle@copenhagen] java -jar autoupgrade.jar -config sales.cfg -patch -mode deploy
    
    • I use the sales.cfg config file.
    • AutoUpgrade detects it’s running against the primary database, and executes Datapatch and all the post-upgrade tasks.
  • Verify the Data Guard configuration and ensure the standby database is receiving and applying redo:

    DGMGRL> show database SALES_COPENHAGEN;
    DGMGRL> show database SALES_AARHUS;
    DGMGRL> validate database SALES_COPENHAGEN;
    DGMGRL> validate database SALES_AARHUS;
    

That’s it.

Happy Patching!

Appendix

Static Listener Entry

In this blog post, I update the static listener entries required by Data Guard broker (suffixed _DGMGRL). My demo environment doesn’t use Oracle Restart or Oracle Grid Infrastructure, so this entry is mandatory.

If you use Oracle Restart or Oracle Grid Infrastructure, such an entry is not needed.

Further Reading

Other Blog Posts in the Series

Why Is the Data Pump Bundle Patch Not Included in Release Updates?

The Data Pump bundle patch (DPBP) contains many handy fixes for users of Data Pump and DBMS_METADATA. In 19.26, it includes 218 fixes – most of them are functional fixes, but there’s a fair share of performance fixes as well.

I often advocate for applying the DPBP, and that leads to the following question:

If the Data Pump bundle patch is that important, why isn’t it included in the Release Updates?

Touché!

Release Updates

There are a number of requirements that any patch must meet to be included in a Release Update. One of them is that the patch must be RAC Rolling Installable.

DPBP doesn’t meet this requirement, meaning it will never be part of a Release Update (unless there’s an improved way of patching – uhh, cliffhanger, read on…).

Why?

The short version:

  • Data Pump fixes generally contain PL/SQL changes (mostly to DBMS_DATAPUMP).
  • When Datapatch applies such fixes, it will issue a CREATE OR REPLACE command for the relevant packages.
  • If someone is using Data Pump, they have a pin on the packages, and Datapatch can’t replace them (you can check for such pins yourself; see the sketch after this list). Datapatch will wait for 15 minutes maximum, at which point it bails out (ORA-04021), and the patching is incomplete.
  • The PL/SQL engine is optimized for speed and such pins are held longer than you might expect. Normally, that’s good because it gives you faster PL/SQL execution, but when patching it is potentially a problem.
  • Data Pump strictly obeys the rules, and since its fixes don’t meet the RAC Rolling criteria, we don’t include them in Release Updates.
  • There’s a longer version, too, but that’ll have to wait for another day.
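
If you’re curious, a rough way to see which sessions currently have DBMS_DATAPUMP pinned could look like this (a sketch):

    select sid, owner, object, type
    from   v$access
    where  object = 'DBMS_DATAPUMP';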

Will This Continue in Oracle Database 23ai?

Yes, so far, nothing has changed in Oracle Database 23ai. Like with Oracle Database 19c, there is a Data Pump bundle patch for Oracle Database 23ai.

What About Oracle Database 21c?

There’s no Data Pump bundle patch in Oracle Database 21c; it’s an innovation release. If you’re on that release, you need to request the individual fixes you need.

Patching With Data Pump Bundle Patch

Here are some facts about DPBP:

  • The patch is bound to one Release Update. When you move to the next Release Update, you need a newer version of DPBP.
  • If you patch with AutoUpgrade Patching (which I strongly recommend), then DPBP is automatically added when you set patch=recommended. AutoUpgrade finds the right bundle patch for your platform and adds it together with the Release Update and other patches.

Do I Need to Remove DPBP Before Applying the Next Patch?

No, if you’re using out-of-place patching (which you should). When you prepare the new Oracle home, simply install the DPBP matching the Release Update, and that’s it. Datapatch will figure it out when it runs.

If you’re using in-place patching, then you need to roll off DPBP before you can apply the newer Release Update. After that, you can apply the newer DPBP as well. This is a tedious task and yet another reason why in-place patching is not preferable.

Non-Binary Online Installable

Although DPBP is not RAC Rolling Installable, you can still apply it easily without any database downtime.

DPBP is a non-binary online installable patch, which means that you can apply it to a running database (opatch apply + Datapatch). Just ensure that no Data Pump jobs are running, and it will apply without problems. This applies even to single instance databases.
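
Before running Datapatch, a simple sanity check for active Data Pump jobs could look like this (a sketch):

    select owner_name, job_name, state
    from   dba_datapump_jobs
    where  state not in ('NOT RUNNING');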

You can read more about non-binary online installable patches in MOS note Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1).

Roy Swonger explains how to apply the Data Pump bundle patch as a non-binary online installable patch

It is not the same as a hot patch or an online patch:

A patch that only affects SQL scripts, PL/SQL, view definitions and XSL style sheets (i.e. non-binary components). This is different than an Online Patch, which can change binary files. Since it does not touch binaries, it can be installed while the database instance is running, provided the component it affects is not in use at the time. Unlike an Online Patch, it does not require later patching with an offline patch at the next maintenance period.

Source: Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1)

The Cliffhanger

We are working on improving our patching mechanism. Data Pump and Datapatch will become aware of each other, and there will be ways for Datapatch to engage with Data Pump during patching that allow patching to complete.

Stay tuned for more information.

You Need More Information

Let me finish off with some additional information for you to consume if you’re interested in the inner workings of Data Pump:

Happy Patching

AutoUpgrade New Features: Extended Platform Support

From the inception of Oracle AutoUpgrade, our clear ambition has been to support all platforms where you can run Oracle Database.

We still work with this ambition in mind, and the latest version of AutoUpgrade lifts additional platform restrictions. Now, you can enjoy AutoUpgrade’s true awesomeness on even more platforms.

Patching

You can now use one-button patching on the following platforms:

  • Linux x86-64
  • Linux ARM
  • Microsoft Windows x64
  • Oracle Solaris on SPARC
  • Oracle Solaris on x86-64
  • IBM AIX on POWER Systems

How to Patch a Database

  1. Ensure that you have the latest version of AutoUpgrade:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Patch your database using the instructions from my previous blog post or our webinar.

How to Download Patches for Other Platforms

You can also use AutoUpgrade just to download patches from My Oracle Support, and you can download for multiple platforms at once.

  1. Ensure that you have the latest version of AutoUpgrade:

    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Create a config file. Note there are two prefixes – one for Linux and one for Windows.

    global.global_log_dir=/home/oracle/autoupgrade-patching/log
    global.keystore=/home/oracle/autoupgrade-patching/keystore
    
    patch1.folder=/home/oracle/autoupgrade-patching/patch
    patch1.patch=RECOMMENDED
    patch1.target_version=19
    patch1.platform=LINUX.X64
    
    patch2.folder=/home/oracle/autoupgrade-patching/patch
    patch2.patch=RECOMMENDED
    patch2.target_version=19
    patch2.platform=WINDOWS.X64
    
  3. Download the latest Release Update and other recommended patches for Linux and Windows:

    java -jar autoupgrade.jar -config download.cfg -patch -mode download
    

You can tweak it even further:

  • Download for additional platforms by adding more prefixes. Check the documentation for supported platform names.
  • Download one-off patches or other patches by adding them to the patch parameter.
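
For instance, a config file prefix that adds a one-off patch on top of the recommended patches could look like this (the patch number below is purely hypothetical):

    patch3.folder=/home/oracle/autoupgrade-patching/patch
    patch3.patch=RECOMMENDED,34567890
    patch3.target_version=19
    patch3.platform=LINUX.X64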

Some useful information:

  • If you omit the platform parameter, it defaults to your current platform. If AutoUpgrade doesn’t support your current platform (like when you run AutoUpgrade on a Mac), it defaults to LINUX.X64.
  • Rodrigo Jorge from my team has a good blog post showing how to use download mode.

Windows

AutoUpgrade has supported upgrading on Windows right from the beginning – with the exception of Oracle RAC. But now, this restriction is lifted, so you can upgrade your Oracle RAC databases as well.

For patching, AutoUpgrade also supports Oracle RAC on Windows. But for now, it doesn’t perform rolling patching (nor does it do so on any platform). You still need opatchauto for that.

Instead, AutoUpgrade performs an all-node patching. It brings down all nodes at once, which means database downtime.

Linux 9

The latest release also includes a few bug fixes related to Oracle Linux 9. You should be able to use AutoUpgrade on this platform without problems.

Happy patching

Full Day Workshop in London

Calling all database friends in London!

The full-day workshop Real World Database Upgrade and Migration 19c and 23ai comes to London on April 9.

If you’re interested in database tech, Mike Dietrich and I invite you to join our workshop.

During this workshop, Mike Dietrich and Daniel Overby Hansen will guide you through various examples of upgrade, migration, and consolidation techniques and strategies featuring real-world customer cases.

The Agenda

Here are the topics that we will cover:

  • Release Strategy with Oracle Database 19c and 23ai
  • Oracle Database Patching
  • Upgrade to Oracle Database 19c and 23ai
  • Migration to the CDB architecture
  • Data Pump: Performance best practices and tweaks
  • Simple ADB Migrations for beginners and experts
  • Performance Stability Prescription and Secret Underscores
  • Insights into the Oracle Database Development process
  • The coolest new features in Oracle Database 23ai for DBAs and Developers

It’s all tech, no marketing!

When and Where

It takes place at the Oracle office in London on April 9. We start at 09:00 and finish at 16:30.

The workshop is an in-person event. It’s not possible to join remotely.

Sign Up

The workshop is free, but registration is required.

Sign up here.

Seats are limited, so sign up to secure your seat. If you can’t make it, please cancel your registration so a fellow geek can join instead.

I hope to see you there for a full day of fun!

AutoUpgrade New Features: List All Checks

Oracle AutoUpgrade was made to make upgrading and patching easier. In doing so, there is a risk that we hide too much information and turn AutoUpgrade into a black box.

It has always been the intention that AutoUpgrade is fully transparent and enables you to see exactly what’s going on.

This new feature increases transparency by allowing you to get a list of all the checks that are performed before and after upgrading or patching.

How Does It Work?

  1. Ensure that you have the latest version of AutoUpgrade:
    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  2. Start AutoUpgrade and list all checks. Redirect the output to a file for easier reading:
    java -jar autoupgrade.jar -listchecks > autoupgrade_checks.txt
    
  3. You can shorten the output to just one of the checks:
    java -jar autoupgrade.jar -listchecks ORACLE_RESERVED_USERS
    

Here’s the output from one of the checks:

Check : ORACLE_RESERVED_USERS
        Description : Oracle occasionally adds new internal USERs and ROLEs as the database evolves. To avoid a name conflict in the upgraded version, a source database must not contain any USER or ROLE with a name that matches one reserved by Oracle in the target release.
        Fixup Action : You must drop the following USERs or ROLEs from the database: {1}
        Severity : ERROR
        Fixup Stage : PRE
        Min Version(inclusive) Check applies : NONE
        Max Version(exclusive) Check applies : NONE
        Check Introduced Version : NONE
        Check Removed Version : NONE
        Manual Fixup or Automatic : MANUAL
        AutoUpgrade Only : NO
        Run for Datapatch : NO
  • Severity may take one of the following values: INFO, WARNING, RECOMMEND, ERROR.
  • Fixup Stage tells you when AutoUpgrade executes the check: PRE (before) or POST (after).
  • If Manual Fixup or Automatic is AUTO, it means AutoUpgrade will fix any issues for you during fixups or deploy mode. Otherwise, it is something that you must fix. Depending on the severity, a fix may be mandatory.
  • If AutoUpgrade also executes the check during patching, then Run for Datapatch is set to YES.
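
Since the output is plain text, you can slice it with standard tools. For example, a rough way to count the checks that also run during patching (relying on the field layout shown above):

    java -jar autoupgrade.jar -listchecks | grep -c "Run for Datapatch : YES"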

How Can I Use the Information?

First, this feature adds transparency. We don’t want AutoUpgrade to become a black box.

Second, it allows you to correlate the information with your own runbook. Perhaps you are performing some of the same checks, and that’s an opportunity to trim your runbook. I have seen this countless times when I talk to customers. Their runbook has evolved over many years and often contains checks that are no longer needed or that AutoUpgrade already executes.

Final Words

At the time of writing, there are more than 200 checks in AutoUpgrade:

java -jar autoupgrade.jar -listchecks | grep "Check : " | wc -l

201

Happy upgrading!

Further Reading

How to Patch Oracle RAC Database 19c Using Manual Out-Of-Place Patching

Let me show you how I patch an Oracle RAC Database 19c using out-of-place patching and by doing it manually – that is without using OPatchAuto.

The advantages of this solution:

  • I get more control over the process, which is handy for automation.
  • I can create my Oracle homes using gold images (see the sketch after this list).
  • Or, I can use brand-new Oracle homes instead of cloning existing ones, which is what OPatchAuto does.
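
If I go the gold image route, creating one from an existing, already patched Oracle home could look like this sketch (runInstaller’s -createGoldImage option; the destination path is just an example):

    [oracle@copenhagen1]$ $ORACLE_HOME/runInstaller -silent -createGoldImage \
         -destinationLocation /u01/software/goldimages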

My Demo System

  • 2-node Oracle RAC database
    • Nodes: copenhagen1 and copenhagen2
    • SID: CDB1
    • Instances: CDB11 and CDB12
  • Runs Oracle Linux
  • GI home is already on 19.26
  • Database home is on 19.25

I want to:

  • Patch my database to 19.26 in a rolling manner
  • Patch just the database – GI home is already patched
  • Patch manually without using OPatchAuto

Preparation

I need to download the following to /u01/software on copenhagen1:

  1. The base release of:
    • Oracle Database (LINUX.X64_193000_db_home.zip)
  2. Latest OPatch from My Oracle Support (6880880).
  3. Patches from My Oracle Support:
    • 19.26 Release Update for Grid Infrastructure (37257886)

You can use AutoUpgrade to easily download GI patches.

I have already established SSH equivalence between the two nodes.

How to

1. Prepare a New Database Home

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

  1. I need to create a folder for the new database home. I must do this as oracle on both nodes, copenhagen1:
    [oracle@copenhagen1]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/dbhome_1926
    [oracle@copenhagen1]$ mkdir -p $NEW_ORACLE_HOME
    
    copenhagen2:
    [oracle@copenhagen2]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/dbhome_1926
    [oracle@copenhagen2]$ mkdir -p $NEW_ORACLE_HOME
    
  2. I extract the base release of Oracle Database into the new database home on the first node only:
    [oracle@copenhagen1]$ cd $NEW_ORACLE_HOME
    [oracle@copenhagen1]$ unzip -oq /u01/software/LINUX.X64_193000_db_home.zip
    
    Optionally, I can use a gold image instead. A gold image from a single instance install works too; the Oracle home is identical.
  3. I update OPatch to the latest version:
    [oracle@copenhagen1]$ rm -rf OPatch
    [oracle@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  4. I want to apply the 19.26 Release Update while I install the database home. To do that, I must extract the patch file:
     [oracle@copenhagen1]$ cd /u01/software
     [oracle@copenhagen1]$ unzip -oq p37257886_190000_Linux-x86-64.zip -d 37257886
    
    • I must use the GI Release Update, not the database Release Update. The GI Release Update is a bundle patch consisting of the database and OCW Release Updates. I must apply both to my database home.
  5. Then, I can install the new database home and apply the patches at the same time:
    • The variable CLUSTER_NODES contains a comma separated list of all nodes in my cluster.
    • The parameter -applyRU is the path to the Database Release Update.
    • The parameter -applyOneOffs is the path to the OCW Release Update.
    [oracle@copenhagen1]$ export CV_ASSUME_DISTID=OEL7.8
    [oracle@copenhagen1]$ export ORACLE_HOSTNAME=$(hostname)
    [oracle@copenhagen1]$ export CLUSTER_NODES=copenhagen1,copenhagen2
    [oracle@copenhagen1]$ export ORACLE_BASE=/u01/app/oracle
    [oracle@copenhagen1]$ export ORA_INVENTORY=/u01/app/oraInventory
    [oracle@copenhagen1]$ export OLD_ORACLE_HOME=$ORACLE_HOME
    [oracle@copenhagen1]$ export ORACLE_HOME=$NEW_ORACLE_HOME
    [oracle@copenhagen1]$ cd $ORACLE_HOME
    [oracle@copenhagen1]$ ./runInstaller -ignorePrereqFailure -waitforcompletion -silent \
         -responseFile $ORACLE_HOME/install/response/db_install.rsp \
         -applyRU /u01/software/37257886/37260974 \
         -applyOneOffs /u01/software/37257886/37268031 \
         oracle.install.option=INSTALL_DB_SWONLY \
         ORACLE_HOSTNAME=$ORACLE_HOSTNAME \
         UNIX_GROUP_NAME=oinstall \
         INVENTORY_LOCATION=$ORA_INVENTORY \
         SELECTED_LANGUAGES=en \
         ORACLE_HOME=$ORACLE_HOME \
         ORACLE_BASE=$ORACLE_BASE \
         oracle.install.db.InstallEdition=EE \
         oracle.install.db.OSDBA_GROUP=dba \
         oracle.install.db.OSBACKUPDBA_GROUP=dba \
         oracle.install.db.OSDGDBA_GROUP=dba \
         oracle.install.db.OSKMDBA_GROUP=dba \
         oracle.install.db.OSRACDBA_GROUP=dba \
         oracle.install.db.CLUSTER_NODES=$CLUSTER_NODES \
         oracle.install.db.isRACOneInstall=false \
         oracle.install.db.rac.serverpoolCardinality=0 \
         SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
         DECLINE_SECURITY_UPDATES=true
    
    • I install it in silent mode, but I could use the wizard instead.
    • You need to install the new database home in a way that matches your environment.
    • For inspiration, you can check the response file used in the previous database home on setting the various parameters.
    • If I have additional one-off patches to install, I add them to the comma-separated list.
  6. I run the root script on all nodes, copenhagen1:
    [root@copenhagen1]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/dbhome_1926
    [root@copenhagen1]$ $NEW_ORACLE_HOME/root.sh
    
    copenhagen2:
    [root@copenhagen2]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/dbhome_1926
    [root@copenhagen2]$ $NEW_ORACLE_HOME/root.sh
    

2. Prepare Database

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

I will move the database into a new Oracle home, so I need to ensure the database configuration files are either outside the Oracle home or move them to the new Oracle home.

  1. I verify that my SPFILE and password file are stored in ASM – or at least outside the Oracle home:
    [oracle@copenhagen1]$ export ORACLE_HOME=$OLD_ORACLE_HOME
    [oracle@copenhagen1]$ srvctl config database -db $ORACLE_UNQNAME | grep file  
    
    • If the files are stored in the dbs folder, I copy them to the new Oracle home on both nodes.
  2. I copy tnsnames.ora and sqlnet.ora to the new Oracle home on both nodes, copenhagen1:
    [oracle@copenhagen1]$ cp $OLD_ORACLE_HOME/network/admin/sqlnet.ora $NEW_ORACLE_HOME/network/admin
    [oracle@copenhagen1]$ cp $OLD_ORACLE_HOME/network/admin/tnsnames.ora $NEW_ORACLE_HOME/network/admin
    
    copenhagen2:
    [oracle@copenhagen2]$ cp $OLD_ORACLE_HOME/network/admin/sqlnet.ora $NEW_ORACLE_HOME/network/admin
    [oracle@copenhagen2]$ cp $OLD_ORACLE_HOME/network/admin/tnsnames.ora $NEW_ORACLE_HOME/network/admin
    
  3. I take care of any other configuration files in the Oracle home.
  4. I modify the database so it starts in the new Oracle home on the next restart:
    [oracle@copenhagen1]$ srvctl modify database -d $ORACLE_UNQNAME -o $NEW_ORACLE_HOME
    

3. Restart Instances

Now, I restart each instance in a rolling manner. On restart, each instance starts in the new Oracle home. There is no database outage; however, each instance needs to restart.

Maintenance window starts now!

  1. I start draining the first instance, CDB11, running on copenhagen1:
    [oracle@copenhagen1]$ export ORACLE_HOME=$OLD_ORACLE_HOME
    [oracle@copenhagen1]$ export PATH=$ORACLE_HOME/bin:$PATH
    $ORACLE_HOME/bin/srvctl stop service \
       -d $ORACLE_UNQNAME \
       -service SALESGOLD \
       -drain_timeout 60 \
       -stopoption IMMEDIATE \
       -node copenhagen1
    
    • In this example, I have just one service that I’m shutting down. You probably have other services and other drain settings. You can adjust accordingly.
  2. After draining, I can shut down the instance, CDB11, on copenhagen1:
    [oracle@copenhagen1]$ $ORACLE_HOME/bin/srvctl stop instance \
       -d $ORACLE_UNQNAME \
       -i CDB11 \
       -force
    
  3. I immediately restart the instance:
    [oracle@copenhagen1]$ $ORACLE_HOME/bin/srvctl start instance \
       -d $ORACLE_UNQNAME \
       -i CDB11
    
    • The instance now restarts in the new Oracle home.
  4. I repeat the drain and restart cycle on the second instance, CDB12, running on copenhagen2. Repeat steps 1-3 but change any reference to the node and instance:
    • Change -node to copenhagen2
    • Change -i to CDB12
  5. I have now restarted all instances and they are running in the new Oracle home.
  6. I update any profiles (e.g., .bash_profile) and other scripts referring to the database home on all nodes.
  7. If the database is added to /etc/oratab, I change the file accordingly on all nodes.
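
As an illustration only, the /etc/oratab change could be scripted like this (the old home path below is an assumption; use your actual source home):

    # update the database home in /etc/oratab on all nodes (old path is hypothetical)
    sed -i 's|/u01/app/oracle/product/dbhome_1925|/u01/app/oracle/product/dbhome_1926|' /etc/oratab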

4. Complete Patching

  1. I complete patching of the database by running Datapatch (ensure the environment is set correctly):
    [oracle@copenhagen1]$ env | grep ORA
    [oracle@copenhagen1]$ $ORACLE_HOME/OPatch/datapatch
    
    • I can do this while users are connected to the database.

Most likely, there are other changes that you need to make in your own environment:

  • Update Enterprise Manager registration
  • Upgrade RMAN catalog
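
For the RMAN catalog, a minimal sketch could look like this (the catalog connect string is made up for illustration):

    # connect to the recovery catalog owner (connect string is hypothetical)
    rman CATALOG rco@rcat
    # RMAN asks you to enter the command twice to confirm
    RMAN> UPGRADE CATALOG;
    RMAN> UPGRADE CATALOG;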

That’s it! I have now patched my Oracle RAC Database.

Happy Patching!

Appendix

Deinstall

In the future, I should remove the old Oracle homes. I use the deinstall tool in the respective Oracle homes.

I would recommend waiting a week or two until I’m confident the new Release Updates are fine.

Rollback

If you need to roll back, you just reverse the process. Restart the instances in the old Oracle home and let Datapatch perform the rollback of the database changes.
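
A rough sketch of the reversal, reusing the commands from the forward path (drain services first, just like during patching):

    # point the database back at the old Oracle home
    srvctl modify database -d $ORACLE_UNQNAME -o $OLD_ORACLE_HOME
    # restart each instance in a rolling manner, for example the first one
    srvctl stop instance -d $ORACLE_UNQNAME -i CDB11 -force
    srvctl start instance -d $ORACLE_UNQNAME -i CDB11
    # when all instances run out of the old home, roll back the SQL changes
    $OLD_ORACLE_HOME/OPatch/datapatch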

Combine with SwitchGridHome

You can combine this method with SwitchGridHome for Grid Infrastructure patching.

Patch with OPatchAuto

An alternative to this method is to use OPatchAuto and out-of-place patching. Check the patch readme for instructions on how to use that method.

Disable Binary Options

If you need to disable binary options using make, you can do it immediately after installing the new Oracle home. Do it on all nodes.

Other Blog Posts in This Series

Which Grid Infrastructure Should I Install on My Brand New Exadata?

I received a question from a customer:

We just got a brand-new Exadata. We will use it for critical databases and stay on Oracle Database 19c for the time being. Which Grid Infrastructure should we install: 19c or 23ai?

I recommend installing Oracle Grid Infrastructure 19c (GI 19c).

The Reason

GI 19c has been out since 2019. It is currently at the 23rd Release Update (19.26) and is used on many thousands of systems. Those systems include some of the most critical systems you can find.

GI 19c is a very proven release that has reached a very stable state. Proven and stable – two attributes that are very valuable for a mission-critical system.

Additionally, I would apply the latest Release Update – at the time of writing that’s 19.26. Also, I would include fixes from Oracle Database 19c Important Recommended One-off Patches (Doc ID 555.1).

Further, I would ensure the databases were on the same Release Update, 19.26. If that’s impossible, at least keep the database within two Release Updates of Grid Infrastructure, so, minimum 19.24.

In this case, the customer migrates the databases from a different platform onto the new Exadata system. The source system is already running GI 19c, and keeping the same GI release on the new system means there’s one less change to deal with.

Why Not Oracle Grid Infrastructure 23ai?

First, there’s absolutely nothing wrong with the quality of Oracle Grid Infrastructure 23ai (GI 23ai).

When I recommend GI 19c over GI 23ai, it is a matter of choosing between two good options.

But GI 23ai has been out for Exadata for just over half a year. That’s much less than GI 19c, which is about to reach six years of general availability.

Every piece of software has a few rough edges to grind off, and I would expect the same for GI 23ai.

For a mission-critical system, there’s no need to take any chances, which is why I recommend GI 19c.

When To Use Oracle Grid Infrastructure 23ai

If the customer wants to use Oracle Database 23ai – either now or in the foreseeable future – then they should install GI 23ai. No doubt about that.

For less critical systems, including test and development systems, I would recommend GI 23ai as well.

Why Not Both?

I added this after a comment on LinkedIn from my colleague, Alex Blyth.

Alex agrees with my recommendation but adds the following:

What I also would have said is, you can have your cake, and you can eat it too. This means Exadata can do more than one thing at a time. With virtualization, you can have a VM cluster with 19c Database and GI for critical databases, and another VM cluster (up to 50 per DB server with X10M / X11M and the latest Exadata System Software) that is running DB and GI 23ai. What’s more, you could also deploy Exadata Exascale and take advantage of the high-performance shared storage for the VM images, and the awesome instant database snapshot and cloning capabilities for DB 23ai.

He raises a really good point.

Exadata is the world’s best database platform, and the flexibility it offers with virtualization would give this customer the stability they need for their mission-critical databases while also letting them get started with the many new features in 23ai.

The best of both worlds!

Final Words

Although I work in the upgrade team and love upgrades, I don’t recommend them at any cost.

For mission-critical systems, stability and maturity are paramount, and that influences my recommendation of GI 19c.

But get started with Oracle Grid Infrastructure and Database 23ai today. Install it in your lab and then on your less important systems. There are many great enhancements to explore in Oracle Database 23ai.

Prepare yourself for the next release in due time.

Recreate Database Services After Moving An Oracle Database

Oracle recommends that you connect to the database via custom services. In your connect string, don’t connect:

  • Directly to the SID
  • Or to the database’s default service (the service with the same name as the database).
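
For example, a tnsnames.ora entry pointing at a custom service could look like this (host and port are just examples):

    SALESGOLD =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = copenhagen)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = SALESGOLD))
      )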

When you move a database around, in some situations, the database does not retain these services, for instance, when you:

  • Migrate a non-CDB to PDB using refreshable clone PDB
  • Upgrade a PDB using refreshable clone PDB
  • Move a PDB to a different CDB using refreshable clone PDB
  • Migrate a database using Full Transportable Export/Import or transportable tablespaces

The services are important because your application and clients connect to the database through them. Also, a service might define important properties for things like Application Continuity or set a default drain timeout.

Here’s how to recreate such services.

Database Managed Services

A database-managed service is one that you create directly in the database using dbms_service:

begin
   dbms_service.create_service(
      service_name=>'SALESGOLD',
      network_name=>'SALESGOLD');
   dbms_service.start_service('SALESGOLD');   
end;
/

After the migration, you must manually recreate the service in the target database.

dbms_metadata does not support services. So, you must query v$services in the source database to find the service’s definition. Then, construct a call to dbms_service.create_service and dbms_service.start_service.
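
A starting point for extracting the definition could be a query like this (a sketch; add whichever attributes matter in your setup):

    select name, network_name, failover_method, failover_type, goal, clb_goal
    from   v$services
    where  name = 'SALESGOLD';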

Clusterware Managed Services

I recommend defining services in Grid Infrastructure if you are using Oracle RAC or using Oracle Restart to manage your single instance database. Luckily, Grid Infrastructure supports exporting and importing service definitions.

  • You export all the services defined in the source database:

    srvctl config service \
       -db $ORACLE_UNQNAME \
       -exportfile my_services.json \
       -S 2
    
  • You edit the JSON file.

    1. Remove the default services. Keep only your custom services.
    2. Remove the dbunique_name attribute for all services.
    3. If you are renaming the PDB, you must update the pluggable_database attribute.
    4. Update the res_name attribute so it matches the resource name of the target database. Probably, you just need to exchange the db_unique_name part of the resource name. You can find the resource name by executing crsctl stat resource -t as the grid user.
  • You can now import the services into the target database:

    srvctl add service \
       -db $ORACLE_UNQNAME \
       -importfile my_services.json
    
  • Finally, you start the service(s):

    export ORACLE_SERVICE_NAME=SALESGOLD
    srvctl start service \
       -db $ORACLE_UNQNAME \
       -service $ORACLE_SERVICE_NAME
    

Additional Information

  • The export/import features work from Oracle Database 19c, Release Update 19 and beyond.
  • You can also export/import the definition of:
    • Database: srvctl config database -db $ORACLE_UNQNAME -S 2 -exportfile my_db.json
    • PDB: srvctl config pdb -db $ORACLE_UNQNAME -S 2 -exportfile my_pdb.json
    • ACFS filesystem: srvctl config filesystem -S 2 -exportfile /tmp/my_filesystem.json
  • At the time of writing, this functionality hasn’t made it into the documentation yet. Consider yourself lucky to know this little hidden gem.

Final Words

Remember to recreate your custom services after a migration. Your application needs the service to connect in a proper way.

Further Reading