Upgrade Oracle Database 19c CDB to 26ai with Data Guard – MAA Method

Here’s how to upgrade an entire CDB to Oracle AI Database 26ai, including all standby databases.

I’m using the MAA Method (see appendix). The standby database is up during the upgrade and continuously applies redo. This gives me shorter downtime compared to the alternative, the Standby Offline method.

1. Preparations

I’ve already prepared my database and installed a new Oracle home on both servers. The maintenance window has started, and users have left the database.

  1. This is my AutoUpgrade config file:

    global.global_log_dir=/home/oracle/autoupgrade/logs/CDB19
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/26
    upg1.sid=CDB19
    
    • sid contains the name or SID of my database.
    • I specify the source and target Oracle homes. I’ve already installed the target Oracle home.
    • Check the appendix for additional parameters.
  2. I disable Fast-Start Failover using Data Guard broker:

    disable fast_start failover;
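
    • If you're not already inside a broker session, this is a minimal sketch of how I run it (the connect string is an example from my environment; adjust it to yours):

    dgmgrl sys@CDB19
    DGMGRL> disable fast_start failover;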
    

2. Upgrade Standby

  1. I update listener.ora on the standby host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  2. I reload the listener:

    lsnrctl reload
    
  3. I create a guaranteed restore point on the standby database. Redo apply must be stopped while I create it, then restarted:

    alter database recover managed standby database cancel;
    create restore point stdby_before_upg guarantee flashback database;
    alter database recover managed standby database disconnect from session;
    
  4. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config CDB19.cfg -mode deploy
    
    • AutoUpgrade restarts the standby in the new Oracle home.
    • It also takes care of all config files.
    • Plus, it updates /etc/oratab and the Grid Infrastructure configuration.
  5. I update any profiles or scripts that use the standby.

  6. I use Data Guard broker to ensure that my standby is still applying logs:

    validate database <stdby_unique_name>;
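
    • In addition, a quick cross-check from SQL*Plus on the standby confirms that redo apply keeps up. A minimal sketch (thresholds are up to you):

    select name, value, time_computed
    from   v$dataguard_stats
    where  name in ('apply lag', 'transport lag');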
    

3. Upgrade Primary

  1. I update listener.ora on the primary host (see appendix). I change the ORACLE_HOME parameter in the static listener entry (suffixed _DGMGRL) so it matches my target Oracle home.

  2. I reload the listener:

    lsnrctl reload
    
  3. I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config CDB19.cfg -mode deploy
    
    • AutoUpgrade starts by setting a guaranteed restore point and then proceeds with the upgrade.
  4. While the job progresses, I monitor it:

    upg> lsj -a 30
    
    • The -a 30 option automatically refreshes the information every 30 seconds.
    • I can also use status -job 100 -a 30 to get detailed information about a specific job.
  5. In the end, AutoUpgrade completes the upgrade:

    Job 100 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    ---- Drop GRP at your convenience once you consider it is no longer needed ----
    Drop GRP from CDB19: drop restore point AUTOUPGRADE_9212_CDB191927000
    
    Please check the summary report at:
    /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.html
    /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.log
    
    • This includes the post-upgrade checks and fixups, including updating /etc/oratab and the Grid Infrastructure configuration.
  6. I review the AutoUpgrade Summary Report. The path is printed to the console:

    vi /home/oracle/autoupgrade/logs/CDB19/cfgtoollogs/upgrade/auto/status/status.log
    
  7. I take care of the post-upgrade tasks.

  8. I update any profiles or scripts that use the primary.
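
As a quick sanity check of my own (not something AutoUpgrade requires), I confirm the release and component status from SQL*Plus:

    select banner_full from v$version;
    select comp_id, version, status from dba_registry;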

4. Finalizing

  1. When I’m done testing the database, including application testing, and I decide that a rollback is not needed, I drop the restore points on both primary and standby (see the sketch at the end of this list).

  2. I re-enable Fast-Start Failover using Data Guard broker:

    enable fast_start failover;
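
    • And here's the sketch of dropping the restore points, using the names from this post. AutoUpgrade prints the exact drop command for the primary in its summary:

    -- On the primary (name from the AutoUpgrade summary)
    drop restore point AUTOUPGRADE_9212_CDB191927000;
    -- On the standby (the one I created manually)
    drop restore point stdby_before_upg;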
    

That’s It!

With AutoUpgrade, you can easily upgrade your entire CDB even when you use Data Guard.

Check the other blog posts related to upgrading to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Different Methods

When upgrading with Data Guard, there are two approaches:

  • Standby Offline – Before the upgrade starts on the primary database, you shut down the standby database. You keep it shut down until the upgrade has completed on the primary database and you have finished your tests. When you are sure you will stay on the new release, the standby database is restarted and synchronized with the primary database. It will take some time before you can go live because the standby database must apply all the redo generated during the upgrade.
    • If you need to fall back, you can use Flashback Database on the primary database. In addition, no matter what happens to the primary database, you still have the standby database immediately ready in the pre-upgrade state.
    • My team recommends this method. We prefer to sacrifice a little downtime to achieve even better protection.
  • Maximum Availability Architecture (MAA) – The standby database is open and applies redo while the primary database is upgraded. This means that the standby database closely follows the primary database. You can go live very soon after the upgrade completes because there is little or no apply lag.
    • The downside shows when you must fall back. In that case, you have two databases to bring back in time with Flashback Database. In the very unlikely event that something goes wrong during flashback on both databases, you may need to restore your backup.
    • The MAA team recommends this method as it guarantees the lowest downtime.
Standby Offline                                             MAA
Maximum protection                                          Minimum downtime
Upgrade team recommendation                                 MAA recommendation
Redo transport deferred                                     Redo transport enabled
Redo apply stopped                                          Redo apply active
Protected by offline standby and guaranteed restore point   Protected by guaranteed restore point
AutoUpgrade default

Static Listener Entry

In this blog post, I update the static listener entries required by Data Guard broker (suffixed _DGMGRL). My demo environment doesn’t use Oracle Restart or Oracle Grid Infrastructure, so this entry is mandatory.

If you use Oracle Restart or Oracle Grid Infrastructure, such an entry is not needed.

Oracle Data Guard Broker and Static Service Registration (KB142582)
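
For illustration, a minimal sketch of such a static entry in listener.ora after the change (host-specific values are examples; note ORACLE_HOME pointing to the target home):

    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (GLOBAL_DBNAME = CDB19_DGMGRL)
          (ORACLE_HOME = /u01/app/oracle/product/26)
          (SID_NAME = CDB19)
        )
      )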

What If I Have Multiple Standbys?

  • Repeat the Upgrade Standby steps for each standby before moving on to the next step, Upgrade Primary.

  • This also applies if you have cascading standbys or use far-sync instances.

What About Rolling Upgrades?

This is a very different technique. You can learn more in Virtual Classroom #10: Zero Downtime Operations of Oracle Database.

What If My Database Is A RAC Database?

There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade detects this and sets CLUSTER_DATABASE=FALSE at the appropriate time. It also updates the Grid Infrastructure configuration.

For warp-speed upgrades, take a look at distributed upgrade.

What If I Use Oracle Restart?

No changes. AutoUpgrade detects this and automatically updates the Grid Infrastructure configuration.

What If My Database Is Encrypted?

You must use an auto-login keystore. AutoUpgrade checks this during the pre-upgrade analysis.
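
You can check this yourself before running the analysis. A minimal sketch; WALLET_TYPE should report AUTOLOGIN or LOCAL_AUTOLOGIN:

    select wallet_type, status from v$encryption_wallet;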

Other Config File Parameters

The config file shown above is a basic one. Let me address some of the additional parameters you can use. A combined sketch follows the list.

  • timezone_upg: AutoUpgrade upgrades the database time zone file after the actual upgrade. This requires an additional restart of the database and might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner.

  • before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.

  • em_target_name: Allow AutoUpgrade to create blackouts before restarting the database and to update the target configuration after the upgrade. Requires a local installation of emcli. See this blog post for details.

  • rman_catalog_connect_string: Relevant for databases that store backup metadata in a recovery catalog. Upgrades the recovery catalog schema following an upgrade. See this blog post for details.
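
Putting it together, a config file using these parameters could look like this sketch (script paths, target name, and connect string are fictitious examples):

    global.global_log_dir=/home/oracle/autoupgrade/logs/CDB19
    upg1.source_home=/u01/app/oracle/product/19
    upg1.target_home=/u01/app/oracle/product/26
    upg1.sid=CDB19
    upg1.timezone_upg=no
    upg1.before_action=/home/oracle/scripts/before.sh
    upg1.after_action=/home/oracle/scripts/after.sh
    upg1.em_target_name=CDB19_prod
    upg1.rman_catalog_connect_string=rcowner@rcat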

Patching Oracle Grid Infrastructure 19c – Beginner’s Guide

This is the start of a blog post series on patching Oracle Grid Infrastructure 19c (GI). It is supposed to be easy to follow, so I may have skipped a detail here and there.

I know my way around database patching. I have done it countless times. When it comes to GI, it’s the other way around. I have never really done it in the real world (i.e., before joining Oracle) and my knowledge was limited. I told my boss, Mike, and he gave me a challenge: Learn about it by writing a blog post series.

Why Do I Need to Patch Oracle Grid Infrastructure

Like any other piece of software, you need to patch GI to get rid of security vulnerabilities and to fix bugs.

You should keep the GI and Oracle Database patch level in sync. This means that you need to patch GI and your Oracle Database at the same cadence. Ideally, that cadence is quarterly.

It is supported to run GI and Oracle Database at different patch levels as long as they are on the same release. GI is also certified to run some of the older Oracle Database releases. This is useful in upgrade projects. Check Oracle Clusterware (CRS/GI) – ASM – Database Version Compatibility (Doc ID 337737.1) for details.

A few examples:

GI        Database   Supported
19.18.0   19.18.0    Yes – recommended
19.16.0   19.18.0    Yes
19.18.0   19.16.0    Yes
19.18.0   11.2.0.4   Yes – used during upgrade, for instance
19.18.0   21.9.0     No
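
To see what you are currently running on both sides, you can query the active GI version and list the patches in each home. A quick sketch using standard tools:

    # Active Grid Infrastructure version
    crsctl query crs activeversion

    # Patches installed in a given home
    $ORACLE_HOME/OPatch/opatch lspatches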

If possible and not too cumbersome, I recommend that you first patch GI and then Oracle Database. Some prefer to patch the two components in two separate operations, while others do it in one operation.

Which Patches Should You Apply to Oracle Grid Infrastructure

You should apply the most recent quarterly Release Update (RU) to GI.

Whether you download the bundle patches individually or go with the combo patch is a matter of personal preference. Ultimately, they contain the same patches.

Some prefer an N-1 approach: when the April Release Update comes, they patch with the previous one from January, always staying one quarter behind. For stability reasons, I assume.

What about OJVM patches for GI? The short answer is no; OJVM patches are for database homes, not the Grid Home.

Which Method Do I Use For Patching

You can patch in two ways:

  • In-place patching
  • Out-of-place patching
In-place patching:
  • You apply patches to an existing Grid Home.
  • You need disk space for the patches.
  • You patch the existing Grid Home. When you start patching a node, GI drains all connections and moves services to other nodes. The node is down during patching.
  • Longer node downtime.
  • No changes to profile and scripts.

Out-of-place patching:
  • You apply patches to a new Grid Home.
  • You need disk space for a brand new Grid Home and the patches.
  • You create and patch a new Grid Home without downtime. You complete patching by switching to the new Grid Home. The node is down only during switching.
  • Shorter node downtime.
  • Profile, scripts and the like must be updated to reflect the new Grid Home.
  • My recommended method.

Note: When I write node downtime, it does not mean database downtime. I discuss this shortly.

In other words:

In-place patching replaces the Oracle Clusterware software with the newer version in the same Grid home. Out-of-place upgrade has both versions of the same software present on the nodes at the same time, in different Grid homes, but only one version is active.
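
To make the out-of-place idea concrete, here is a rough sketch of the SwitchGridHome approach (paths and patch location are placeholders; the detailed flow follows in a later blog post):

    # Install a new Grid Home (software only), applying the Release Update on the fly
    unzip -oq /stage/grid_gold_image.zip -d /u01/app/19.18.0/grid
    /u01/app/19.18.0/grid/gridSetup.sh -silent -applyRU /stage/<GI_RU> \
      -responseFile /stage/grid_install.rsp

    # Later, in the maintenance window, switch the active Grid Home
    # and run the root script on each node as prompted
    /u01/app/19.18.0/grid/gridSetup.sh -switchGridHome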

Oracle Fleet Patching and Provisioning

When you have more systems to manage, it is time to consider Fleet Patching and Provisioning (FPP).

Oracle Fleet Patching & Provisioning is the recommended solution for performing lifecycle operations (provisioning, patching & upgrades) across entire Oracle Grid Infrastructure and Oracle RAC Database fleets and the default solution used for Oracle Database Cloud services.

It will make your life so much easier; more about that in a later blog post.

Zero Downtime Oracle Grid Infrastructure Patching

As of 19.16.0, you can also do Zero Downtime Oracle Grid Infrastructure Patching (ZDOGIP).

Use the zero-downtime Oracle Grid Infrastructure patching method to keep your Oracle RAC database instances running and client connections active during patching.

ZDOGIP is an extension to out-of-place patching. But ZDOGIP will not update the operating system drivers and will not bring down the Oracle stack (database instance, listener, etc.). The new GI takes over control of the Oracle stack without users noticing. However, you must still update the operating system drivers at some point, and that requires taking down the node. You can postpone it to a later point in time.

More details about ZDOGIP in a later blog post.

What about Oracle Database Downtime

When you patch GI on a node, the node is down. You don’t need to restart the operating system itself, but you do shut down the entire GI stack, including everything GI manages (database, listeners etc.).

What does that mean for Oracle Database?

Single Instance

If you have a single instance database managed by GI, your database is down during patching. Your users will experience downtime. By using out-of-place patching, you can reduce downtime.

Data Guard

If you have a Data Guard configuration, you can hide the outage from the end users.

First, you patch GI on your standby databases, then perform a switchover, and finally patch GI on the former primary database.

The only interruption is the switchover: a brownout period while the database switches roles. In the brownout period, the database appears to hang, but under the hood, you wait for the role switch to complete and connect to the new primary database.

If you have configured your application properly, it will not encounter any ORA-errors. Your users experience a short hang and continue as if nothing had happened.
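
A minimal sketch of that flow with Data Guard broker (database names are examples):

    DGMGRL> validate database 'CDB19_STDBY';
    DGMGRL> switchover to 'CDB19_STDBY';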

RAC

If you have a RAC database, you can perform the patching in a rolling manner – node by node.

When you take down a node for patching, GI tells connections to drain from the affected instances and connect to other nodes.

If your application is properly configured, it will react to the drain events and connect seamlessly to another instance. The end users will not experience any interruption nor receive any errors.

If you haven’t configured your application properly or your application doesn’t react in due time, the connections will be forcefully terminated. How that affects your users depends on the application. But it won’t look pretty.

That is, unless you configure Application Continuity. In that case, the database can replay any in-flight transaction. From a user perspective, all looks fine. They won’t even notice that they have connected to a new instance and that the database replayed their transaction.
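
For illustration, a sketch of a service configured for draining and Application Continuity (database and service names are made up; check the documentation for the full set of attributes):

    srvctl modify service -db cdb19 -service app_svc \
      -drain_timeout 300 -stopoption IMMEDIATE \
      -failovertype TRANSACTION -commit_outcome TRUE -failoverretry 30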

Happy Patching!
