What to Do about RMAN Recovery Catalog When You Upgrade Oracle Database

When you upgrade Oracle Database, you want to ensure your backup strategy is not compromised. The RMAN recovery catalog is a key part of that strategy, so what do you need to take care of during the upgrade? Don’t let a database upgrade jeopardize your RMAN backups.

First, let’s agree on the terminology:

  • Target Database – The database that you want to back up. This is where your data is stored.
  • Catalog Database – A regular Oracle Database with one or more catalog schemas.
  • Catalog Schema – A schema inside the catalog database which holds the metadata about the RMAN backups. You can register multiple databases in the same catalog schema.
  • Recovery Catalog – The catalog schema and catalog database together form the recovery catalog.

The topology of RMAN catalog and Oracle Database

What’s Required?

RMAN Client

The RMAN client you use for the backup must be the same version as the target database. I find it easiest to always use the RMAN executable from the same Oracle Home as the target database.

Previously, this was not a requirement, but it is in current versions.

Catalog Schema

The catalog schema version must be the same or higher compared to the RMAN client. When you upgrade the target database and start to use a newer RMAN client, you also need to upgrade the catalog schema.

  • Upgrade the target database.
  • Start RMAN using the executable from the target database Oracle Home.
  • Connect to the target database and recovery catalog and perform the upgrade:
    $ $ORACLE_HOME/bin/rman
    
    RMAN> connect target /
    RMAN> connect catalog <catalog_schema>/<catalog_schema_password>@<catalog_database_alias>
    RMAN> upgrade catalog noprompt;
    

If you register multiple databases into the same catalog schema, and you have already upgraded another target database to the same version, then there is no need to upgrade the catalog schema again.
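If you are unsure which version a catalog schema is on, you can query the RCVER table in the catalog schema. A sketch, reusing the placeholders from the connect string above:

```
$ sqlplus <catalog_schema>/<catalog_schema_password>@<catalog_database_alias>

SQL> SELECT version FROM rcver;
```

The table shows the current catalog schema version.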

When you upgrade with AutoUpgrade, I find it easiest to let AutoUpgrade perform the catalog schema upgrade for me as well.

Since the catalog schema is backward compatible, you can downgrade the target database and still use a higher version catalog schema. In case of a target database downgrade, no changes are needed in the catalog schema.

Catalog Database

As long as the catalog database runs on a supported version, you should be safe. Target databases on Oracle Database 19c support catalog databases all the way back to 10.2.0.3. Hopefully, you don’t use such old versions anymore. The version of the catalog database does not have to match either the catalog schema or the target database.

You can upgrade the catalog database like any other database. Use AutoUpgrade in deploy mode, and that’s it. A catalog database requires no special attention. But you can always run $ORACLE_HOME/rdbms/admin/dbmsrmansys.sql in the catalog database to ensure all the prerequisites are met (plus dbmsrmanvpc.sql for VPC users).
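Connected to the catalog database as a privileged user, that could look like this (a sketch; in SQL*Plus, ? expands to the Oracle Home):

```
$ sqlplus / as sysdba

SQL> @?/rdbms/admin/dbmsrmansys.sql
```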

If your catalog database is on Oracle Database 11g, there are a few details in My Oracle Support Doc ID 1970049.1 to be aware of.

In some situations, building a new catalog database is desirable. For example:

  • Your catalog database is on Standard Edition. Nowadays, a catalog database must be Enterprise Edition.
  • Your catalog database is very old and can’t be directly upgraded.

The IMPORT CATALOG command can move a recovery catalog or selected catalog schemas into a new catalog database.
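A sketch of what that could look like; the schema names and aliases are placeholders:

```
$ $ORACLE_HOME/bin/rman

RMAN> connect catalog <new_catalog_schema>/<password>@<new_catalog_database_alias>
RMAN> import catalog <old_catalog_schema>/<password>@<old_catalog_database_alias>;
```

By default, the imported databases are unregistered from the source catalog schema; add the NO UNREGISTER clause if you want to keep them registered there as well.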

Example

Given the above requirements, here is an example:

  • You have three target databases running on various releases, let’s say Oracle Database 19c, 12.1.0.2, and 11.2.0.4.
  • Those target databases must use a catalog schema matching their respective release. You have three catalog schemas in total:
    • One catalog schema is on catalog schema version 19
    • One catalog schema is on catalog schema version 12.1.0.2
    • One catalog schema is on catalog schema version 11.2.0.4
  • You have just one catalog database. The catalog database is on Oracle Database 21c.
  • You can store all three catalog schemas in the same catalog database. The catalog schema version and the catalog database release do not have to match.

Now imagine you upgrade the 11.2.0.4 database to 12.2.0.1. You must then upgrade its catalog schema to version 12.2.0.1 using the UPGRADE CATALOG command and switch to a 12.2.0.1 RMAN client.

What About Zero Data Loss Recovery Appliance?

If you are fortunate enough to have a Zero Data Loss Recovery Appliance (ZDLRA), you must not touch the catalog database yourself. All operations on the catalog database must happen via racli. When you update the appliance software, it upgrades the catalog database under the hood.

You can still upgrade the catalog schema using RMAN and the upgrade catalog command.

My Recommendation

  1. Keep your catalog database on Long-Term Support releases. At the time of writing, it means keeping your catalog database on Oracle Database 19c.
  2. Upgrade your catalog database to the next Long-Term Support release before you upgrade the target databases.
  3. Apply Release Updates regularly.

Thanks

Thanks to my good colleagues, Jony and Andrew, for their help and good pointers. Much appreciated.

Further Reading

Use Cluster Verification Utility (cluvfy) and Avoid Surprises

I guess no one really wants nasty surprises. That’s also true when you patch Oracle Grid Infrastructure 19c (GI). Luckily, you can prepare yourself using the Cluster Verification Utility and its oddly sounding command line interface, cluvfy (I pronounce it cluffy).

The Grid Infrastructure Release Update readme says:

2.1.3 Patch Installation Checks

The Cluster Verification Utility (CVU) command line interface (CLUVFY) may be used to verify the readiness of the Grid Home to apply the patch. … The CLUVFY command line for patching ensures that the Grid Home can receive the new patch and also ensures that the patch application process completed successfully leaving the home in the correct state.

That sounds like a good idea. You can check the GI home before and after patching.

Where Is Cluster Verification Utility?

Although Cluster Verification Utility (CVU) is present in my Grid Infrastructure (GI) home, I always get the latest version from My Oracle Support. I find it via patch 30839369.

CVU is backward compatible, so I download the latest release, Oracle Database 21.0.0.0.0.

  1. I must install CVU on one node only. CVU connects to the other nodes and checks them as well. I log on as grid:
    [grid@copenhagen1]$ export CVUHOME=/u01/app/grid/cvu
    [grid@copenhagen1]$ mkdir $CVUHOME
    
  2. I extract the zip file I downloaded from My Oracle Support patch 30839369:
    [grid@copenhagen1]$ cd $CVUHOME
    [grid@copenhagen1]$ unzip /u01/app/grid/patches/cvupack_linux_ol7_x86_64.zip   
    
  3. I check that it works:
    [grid@copenhagen1]$ export PATH=$CVUHOME/bin:$PATH
    [grid@copenhagen1]$ cluvfy -version
    Version 21.0.0.0.0 Build 011623x8664
    Full version 21.9.0.0.0
    

Check Before Patching

Before I start patching GI, I check my cluster with cluvfy. I log on as grid:

[grid@copenhagen1]$ export PATH=$CVUHOME/bin:$PATH
[grid@copenhagen1]$ cluvfy stage -pre patch

cluvfy prints the report to the screen. In this case, all is good, no problems were found:

Performing following verification checks ...

  cluster upgrade state ...PASSED
  OLR Integrity ...PASSED
  Hosts File ...PASSED
  Free Space: copenhagen2:/ ...PASSED
  Free Space: copenhagen1:/ ...PASSED
  OPatch utility version consistency ...PASSED
  Software home: /u01/app/19.0.0.0/grid ...PASSED
  ORAchk checks ...PASSED

Pre-check for Patch Application was successful.

Here are a few examples of problems detected by cluvfy:

  • No SSH connection between the nodes:
    User Equivalence ...FAILED (PRVG-2019, PRKC-1191)
    PRVF-4009 : User equivalence is not set for nodes: copenhagen2
    Verification will proceed with nodes: copenhagen1
    
  • Different versions of OPatch on the individual nodes:
    Performing following verification checks ...
    
       cluster upgrade state ...PASSED
       OLR Integrity ...PASSED
       Hosts File ...PASSED
       Free Space: copenhagen2:/ ...PASSED
       Free Space: copenhagen1:/ ...PASSED
       OPatch utility version consistency ...WARNING (PRVH-0668)
    

Check After Patching

The patch readme advises me to rerun cluvfy after patching:

[grid@copenhagen1]$ export PATH=$CVUHOME/bin:$PATH
[grid@copenhagen1]$ cluvfy stage -post patch

Luckily, I patched GI without any problems. cluvfy tells me all is good:

Performing following verification checks ...

  cluster upgrade state ...PASSED

Post-check for Patch Application was successful.

Happy Cluvfy’ing!

Appendix

Further Reading

Cluster Verification Utility in Your Grid Infrastructure Home

I can also find Cluster Verification Utility (CVU) in my GI home.

[grid@copenhagen1]$ cd $ORACLE_HOME/bin
[grid@copenhagen1]$ ls -l cluvfy
-rwxr-xr-x 1 root oinstall 10272 Feb 20 11:47 cluvfy

The tool gets updated with Release Updates. However, when I try to use it, it prints this warning:

[grid@copenhagen1]$ cd $ORACLE_HOME/bin
[grid@copenhagen1]$ ./cluvfy stage -pre patch
This software is "223" days old. It is a best practice to update the CRS home by downloading and applying the latest release update. Refer to MOS note 756671.1 for more details.

Even if I patch my GI home with the newest Release Update, I am not guaranteed to get the latest version of CVU.

Thus, I recommend always getting the latest version of CVU from My Oracle Support.

Other Blog Posts in This Series

How to Avoid Interruptions When You Patch Oracle Grid Infrastructure 19c

In previous blog posts, I have shown you how to patch Oracle Grid Infrastructure 19c (GI). When you patch GI, it happens in a rolling manner, node by node. The database is always up, but each instance is temporarily down.

If my application is connected to an instance going down, how do I ensure my application is not interrupted?

My demo system:

  • Is a 2-node RAC. The nodes are named copenhagen1 and copenhagen2.
  • Database is called CDB1_COPENHAGEN
  • PDB is called SALES

Connect Using A Custom Service

  1. First, I create a dedicated service that my application must use:

    [oracle@copenhagen1]$ srvctl add service \
       -db CDB1_COPENHAGEN \
       -pdb SALES \
       -service SALESGOLD \
       -preferred CDB11,CDB12 \
       -failover_restore AUTO \
       -failoverretry 1 \
       -failoverdelay 3 \
       -commit_outcome TRUE \
       -failovertype AUTO \
       -replay_init_time 600 \
       -retention 86400 \
       -notification TRUE \
       -drain_timeout 300 \
       -stopoption IMMEDIATE
    
    • I want to use all nodes in my cluster for this service, so I set preferred to all instances of my database.
    • I set failover_restore AUTO to enable Transparent Application Continuity for the service.
    • When I stop the service, the sessions have 5 minutes to move to another instance as specified by drain_timeout.
    • If the sessions don’t drain in due time, the stopoption parameter specifies that they are killed immediately.
  2. Now I start the service:

    [oracle@copenhagen1]$ srvctl start service \
       -db CDB1_COPENHAGEN \
       -service SALESGOLD
    
  3. I add an alias called SALESGOLD to tnsnames.ora:

    SALESGOLD =
    (DESCRIPTION =
       (CONNECT_TIMEOUT=90)(RETRY_COUNT=50)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)
       (ADDRESS_LIST =
          (LOAD_BALANCE=on)
          (ADDRESS = (PROTOCOL = TCP)(HOST=copenhagen-scan)(PORT=1521))
       )
       (CONNECT_DATA=
          (SERVICE_NAME = SALESGOLD)
       )
    )
    
    • I connect to SALESGOLD service using the SERVICE_NAME attribute.
    • I connect via the SCAN listener.
  4. I configure my application to connect via the SALESGOLD alias.

You should always configure your application to connect to a custom service. The database has default services, but you should not use them for your application. Custom services are the way to go if you want a highly available system.
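To see where my service currently runs, I can ask srvctl. A sketch using the demo names from this post:

```
[oracle@copenhagen1]$ srvctl status service \
   -db CDB1_COPENHAGEN \
   -service SALESGOLD
Service SALESGOLD is running on instance(s) CDB11,CDB12
```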

Let’s Patch

It doesn’t matter whether you use in-place or out-of-place patching. If you use out-of-place patching, complete as much of the preparation as possible in advance. The following procedure starts right before a node goes down.

  1. First, I want to patch GI on copenhagen1.
  2. I stop my service SALESGOLD on that node only:
    [oracle@copenhagen1]$ srvctl stop service \
       -db CDB1_COPENHAGEN \
       -service SALESGOLD \
       -instance CDB11 \
       -force
    
    • I use the instance parameter to stop the service only on the node I am about to patch. In this case, CDB11 is the instance name of my database CDB1_COPENHAGEN running on copenhagen1.
    • GI marks the service as OFFLINE on copenhagen1, and new connections will be directed to the other node.
    • GI sends out a FAN event announcing that the instance is going down. The Oracle driver on the clients detects the event.
    • When I created the service, I set the drain timeout to 300. This gives database sessions 5 minutes to finish their work.
      • If desired, I can override the value using -drain_timeout.
    • If a session becomes idle before the timeout, the driver on the client automatically reconnects to the other node. The session continues on the other node; no error message is displayed.
    • If the session is still active after the timeout, it is killed immediately, as I defined with the stopoption parameter when I created the service.
      • I enabled Transparent Application Continuity (TAC) on my database service using the failover_restore parameter. TAC detects the session is killed and attempts to intervene.
      • If my application is fit for TAC, a new connection is made, any in-flight transaction is replayed in the new session; no error message is displayed.
      • If my application is not fit for TAC or if I was doing something that is not replayable, my session is terminated, and my application receives an error.
  3. Now, all sessions using SALESGOLD service have been drained from copenhagen1. I can bring down the GI stack to patch.
    • I use opatchauto apply for in-place patching.
    • Or, I use opatchauto apply -switch-clone for out-of-place patching.
    • Or, I use root.sh for out-of-place patching with switchGridHome.
    • Or, whatever method suits my environment. It doesn’t matter.
  4. When the GI stack is restarted, my service SALESGOLD is automatically restarted. Now, my application can connect to the instance on copenhagen1 again.
  5. I ensure that the service is started:
    [grid@copenhagen1]$ crsctl stat resource -t
    
  6. I repeat steps 2-5 for the second node, copenhagen2.

My Application Can’t Use Application Continuity

If, for some reason, you are not ready to use Application Continuity or Transparent Application Continuity yet, you can still use this procedure. You can change the failover_restore setting on the service to something that suits your application. You can still perform draining, but you need to find another method for dealing with those sessions that don’t drain in due time.

One option is to set the drain_timeout parameter high enough to allow everyone to finish their work and reconnect to another node. Just be aware that the other node must be able to handle the entire workload during that period.
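I can change the draining window without recreating the service. A sketch using srvctl modify service with the demo names from this post:

```
[oracle@copenhagen1]$ srvctl modify service \
   -db CDB1_COPENHAGEN \
   -service SALESGOLD \
   -drain_timeout 600
```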

If draining doesn’t work for you either (or the patch is non-rolling), you must take an outage to patch. You can follow the Minimum Downtime approach described in Rolling Patch – OPatch Support for RAC (Doc ID 244241.1):

  • Shut down GI stack on node 1
  • Patch node 1
  • Shut down GI stack on node 2
  • Patch node 2
  • Shut down GI stack on node 3
  • Start up node 1 and node 2
  • Patch node 3
  • Start up node 3

Happy Patching!

Appendix

Further Reading

Other Blog Posts in This Series

New Webinars Coming Up

I am really excited to announce two new webinars:

  • Data Pump Best Practices and Real World Scenarios April 5, 2023, 16:00 CET
  • Release and Patching Strategies for Oracle Database 23c May 10, 2023, 16:00 CET

Oracle Database 19c Upgrade Virtual Classroom

You can sign up here.

The entire team, Roy, Mike, Bill, Rodrigo, and myself, are working hard to polish all the details.

Data Pump Best Practices and Real World Scenarios

In short: It’s all the stuff we couldn’t fit into our last Data Pump webinar.

Here’s the full abstract:

We promised to share more information in our last Data Pump Deep Dive With Development seminar. And here we are back again. Data Pump best practices is the topic we would like to emphasize on today. This will include some common tips and tricks but target especially parallel optimizations and transformation. It is quite common that you restructure objects and types when you use Data Pump for a migration. So we will give a detailed overview on the most common scenarios. This will guide us directly to real world scenarios where we’ll demonstrate several of those best practices used by customers.

Release and Patching Strategies for Oracle Database 23c

This is a revamped version of our very first webinar, Release and Patching Strategy. It’s updated to reflect the latest changes, and we have included even more details and demos.

Last time, the interest in this webinar was huge, and we ended up maxing out our Zoom capacity. A lot of you couldn’t get in. So, you better be ready on time, or you might miss your seat.

The full abstract:

This is a session every Oracle customer needs to attend to. Oracle Database 23c, the next long-term support release will be available sometime this year. Now it is time to refresh your knowledge about the best and most efficient strategies for your future release planning. Are there changes to the release numbering? Are there important changes regarding database patching? We will give you a complete overview on the available patch bundles and recent and future changes. We’ll discuss and showcase why a proper patching strategy is of vital importance – and how you can automate and optimize certain essential tasks.

But I Can’t Make It

Don’t worry. As usual, we will publish the recording on our YouTube channel and share the slides with you. Keep an eye on my Webinars page.

But it’s better to watch it live. The entire team will be there, and we will answer all your questions. I promise you: we won’t leave until all questions have been answered.

All Tech, No Marketing

Remember, our mantra is: All tech, no marketing.

These webinars are technical. This is the place for you if you want all the gory details and cool demos.

I hope to see you there!

How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place OPatchAuto

Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using the out-of-place method and opatchauto.

My demo system:

  • Is a 2-node RAC
  • Runs Oracle Linux
  • Is currently on 19.16.0, and I want to patch to 19.17.0

When I use this procedure, I patch the GI home only. I can patch the database later.

Preparation

I need to download:

  1. Latest OPatch from My Oracle Support (6880880).
  2. The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I use patch 34416665.

You can use AutoUpgrade to easily download GI patches.

I place the software in /u01/software.

How to Patch Oracle Grid Infrastructure 19c

1. Make Ready for Patching

I can do this in advance. It doesn’t cause any downtime.

  1. I ensure that there is passwordless SSH access between all the cluster nodes:

    [grid@copenhagen1]$ ssh copenhagen2 date
    
    [grid@copenhagen2]$ ssh copenhagen1 date
    
  2. I update OPatch to the latest available version.

    • On all nodes (copenhagen1 and copenhagen2)
    • In the GI home as grid

    I just show how to do it for the GI home on one node:

    [grid@copenhagen1]$ cd $ORACLE_HOME
    [grid@copenhagen1]$ rm -rf OPatch
    [grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  3. I extract the patch file on both nodes. I show it for just the first node:

     [grid@copenhagen1]$ cd /u01/software
     [grid@copenhagen1]$ mkdir 34416665
     [grid@copenhagen1]$ mv p34416665_190000_Linux-x86-64.zip 34416665
     [grid@copenhagen1]$ cd 34416665
     [grid@copenhagen1]$ unzip p34416665_190000_Linux-x86-64.zip
    
  4. I check for patch conflicts.

    • I can find the details about which checks to make in the patch readme file.
    • I must check for conflicts in the GI home only.
    • Since I have the same patches on all nodes, I can check for conflicts on just one of the nodes.

    As grid with ORACLE_HOME set to the GI home:

    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34419443 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34444834 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34428761 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34580338 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/33575402 | grep checkConflictAgainstOHWithDetail
    
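The five conflict checks differ only in the sub-patch directory, so they can be generated in a loop. A minimal sketch that only prints the commands, so you can review them before running; the sub-patch IDs are copied from the checks above:

```shell
# Dry run: build one conflict-check command per sub-patch of the
# 19.17.0 GI Release Update (IDs copied from the readme checks above).
BASE=/u01/software/34416665/34416665
CMDS=""
for p in 34419443 34444834 34428761 34580338 33575402; do
  CMDS="${CMDS}\$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $BASE/$p
"
done
# Print the generated commands; paste them into a shell to execute.
printf '%s' "$CMDS"
```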

2. Clone Home and Patch

Like phase 1, I can do this in advance. No downtime.

  1. First, on copenhagen1, I clone the existing GI home. I set ORACLE_HOME to my existing GI home. As root:
    [root@copenhagen1]$ export ORACLE_HOME=/u01/app/19.0.0.0/grid
    [root@copenhagen1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
    [root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto \
       apply /u01/software/34416665/34416665 \
       -prepare-clone \
       -oh $ORACLE_HOME
    
    • opatchauto prompts me for the location of the new GI home. I choose the following location:
      • /u01/app/19.17.0/grid
    • Optionally, I can specify the new location in a response file. See appendix.
    • opatchauto clones the GI home and applies the patches.
  2. Next, I clone on the other node. On copenhagen2, I repeat step 1.
    • opatchauto automatically chooses the same locations for GI home as I used on the first node.

Do not touch the new, cloned homes. The next step will fail if you make any changes (like applying additional patches).

3. Switch to the New GI Home

Now, I can complete the patching process by switching to the new GI home. I do this one node at a time.

  1. First, on copenhagen1, I switch to the new GI home. ORACLE_HOME is still set to my existing GI home. As root:
    [root@copenhagen1]$ export ORACLE_HOME=/u01/app/19.0.0.0/grid
    [root@copenhagen1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
    [root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto apply -switch-clone
    
    • Be sure to start with the same node you did when you cloned the GI home (phase 2).
    • This step stops the entire GI stack, including the resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
    • In that period, GI marks the services as OFFLINE so users can connect to other nodes.
    • If my database listener runs out of the GI home, opatchauto will move it to the new GI home, including copying listener.ora.
    • Optionally, if I want a more graceful approach, I can manually stop the services and perform draining.
    • In the end, GI restarts using the new, patched GI home. GI restarts the resources (databases and the like) as well.
    • Proceed with the rest of the nodes as quickly as possible.
  2. I connect to the other node, copenhagen2, and repeat step 1.
    • If I had more nodes in my cluster, I would process them in the same order as I did the cloning (phase 2).
  3. I update any profiles (e.g., .bashrc) and other scripts referring to the GI home on all nodes.

That’s it! I have now patched my Grid Infrastructure deployment.

In my demo environment, each node was down for around 7 minutes. But the database remained up on other nodes all the time.
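If I want to confirm that a node now runs out of the new home, I can check the OLR configuration file, which records the active Grid home. A sketch from my demo environment (on Oracle Linux, the file is /etc/oracle/olr.loc):

```
[grid@copenhagen1]$ grep crs_home /etc/oracle/olr.loc
crs_home=/u01/app/19.17.0/grid
```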

For simplicity, I have removed some of the prechecks. Please follow the patch readme file instructions and perform all the described prechecks.

In this blog post, I decided to perform the out-of-place patching as a two-step process. This gives me more control. I can also do it all in just one operation. Please see Grid Infrastructure Out of Place (OOP) Patching using opatchauto (Doc ID 2419319.1) for details.

Later on, I patch my database.

A Word about the Directory for the New GI Home

Be careful when choosing a location for the new GI home. The documentation lists some requirements you should be aware of.

In my demo environment, the existing GI home is:

/u01/app/19.0.0.0/grid

Since I am patching to 19.17.0, I think it makes sense to use:

/u01/app/19.17.0/grid

If your organization has a different naming standard, that’s fine. Just ensure you comply with the requirements specified in the documentation.

Don’t Forget to Clean Your Room

At a future point, I need to remove the old GI home. I use the deinstall tool in the Oracle home.

I execute the command on all nodes in my cluster:

$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
$ export ORACLE_HOME=$OLD_GRID_HOME
$ $ORACLE_HOME/deinstall/deinstall -local

But I wait a week or two before doing so, to ensure that everything runs smoothly on the new patch level.

Happy Patching!

Appendix

Installing Multiple Patches

In the example above, I install just one patch. Often, you need to install multiple patches, like a Release Update, Monthly Recommended Patches (MRPs), and perhaps one-offs.

OPatchAuto can install multiple patches in one operation. You can use -phBaseDir to specify a directory where you place all the patches as subdirectories:

opatchauto apply -phBaseDir <directory> ...
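For example, with the Release Update from this post and a one-off staged side by side (a hypothetical layout; <one-off_patch_id> is a placeholder):

```
[root@copenhagen1]$ ls /u01/software/patches
34416665  <one-off_patch_id>
[root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto apply \
   -phBaseDir /u01/software/patches \
   -oh $ORACLE_HOME
```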

How to Use Prepare-Clone in Silent Mode

When I execute opatchauto apply -prepare-clone, the program prompts for the location of the new GI home. For an unattended execution, I can specify the old and new GI home in a file, and reference that file using the -silent parameter.

[root@copenhagen1]$ cat prepare-clone.properties
/u01/app/19.0.0.0/grid=/u01/app/19.17.0/grid

[root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto apply ... -prepare-clone -silent prepare-clone.properties -oh $ORACLE_HOME

Further Reading

Other Blog Posts in This Series

How to Patch Oracle Grid Infrastructure 19c Using In-Place OPatchAuto

Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using the in-place method and opatchauto.

My demo system:

  • Is a 2-node RAC
  • Runs Oracle Linux
  • Is currently on 19.16.0, and I want to patch to 19.17.0

When I use this procedure, I patch the GI home only. However, opatchauto has the option of patching the database home as well.

Preparation

I need to download:

  1. Latest OPatch from My Oracle Support (6880880).
  2. The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I use patch 34416665.

You can use AutoUpgrade to easily download GI patches.

I place the software in /u01/software.

How to Patch Oracle Grid Infrastructure 19c

1. Make Ready For Patching

I can do this in advance. It doesn’t cause any downtime.

  1. I update OPatch to the latest available version.

    • On all nodes (copenhagen1 and copenhagen2)
    • In the GI home as grid

    I just show how to do it for the GI home on one node:

    [grid@copenhagen1]$ cd $ORACLE_HOME
    [grid@copenhagen1]$ rm -rf OPatch
    [grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  2. I extract the patch file on both nodes. I show it for just the first node:

     [grid@copenhagen1]$ cd /u01/software
     [grid@copenhagen1]$ mkdir 34416665
     [grid@copenhagen1]$ mv p34416665_190000_Linux-x86-64.zip 34416665
     [grid@copenhagen1]$ cd 34416665
     [grid@copenhagen1]$ unzip p34416665_190000_Linux-x86-64.zip
    
  3. I check for patch conflicts.

    • In the patch readme file I can find the details about which checks to make.
    • I must check for conflicts in the GI home only.
    • Since I have the same patches on all nodes, I can check for conflicts on just one of the nodes.

    As grid with ORACLE_HOME set to the GI home:

    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34419443 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34444834 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34428761 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/34580338 | grep checkConflictAgainstOHWithDetail
    [grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/34416665/34416665/33575402 | grep checkConflictAgainstOHWithDetail
    

2. Start Patching

Now, I start the actual patch process. I do this one node at a time.

  1. First, as root on copenhagen1, I use opatchauto to patch my GI home:

    [root@copenhagen1]$ export ORACLE_HOME=/u01/app/19.0.0.0/grid
    [root@copenhagen1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
    [root@copenhagen1]$ $ORACLE_HOME/OPatch/opatchauto \
       apply /u01/software/34416665/34416665 \
       -oh $ORACLE_HOME
    
    • This step stops the entire GI stack, including the resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
    • In that period, GI marks the services as OFFLINE so users can connect to other nodes.
    • Optionally, if I want a more graceful approach, I can manually stop the services, and perform draining.
    • opatchauto applies the patches to the GI home.
    • In the end, opatchauto restarts GI, including the managed resources (databases and the like).
    • opatchauto does not patch the database home.
  2. I connect to the other node, copenhagen2, and repeat step 1.

That’s it! I have now patched my Grid Infrastructure.

In my demo environment, each node was down for around 15 minutes. But the database remained up on other nodes all the time.
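Afterward, I can list the patches now installed in the GI home (a sketch; the output shows one line per installed patch):

```
[grid@copenhagen1]$ export ORACLE_HOME=/u01/app/19.0.0.0/grid
[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch lspatches
```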

For simplicity, I have removed some of the prechecks. Please follow the patch readme file instructions and perform all the described prechecks.

Later on, I can patch my database.

Happy Patching!

Appendix

Installing Multiple Patches

In the example above, I install just one patch. Often, you need to install multiple patches, like a Release Update, Monthly Recommended Patches (MRPs), and perhaps one-offs.

OPatchAuto can install multiple patches in one operation. You can use -phBaseDir to specify a directory where you place all the patches as subdirectories:

opatchauto apply -phBaseDir <directory> ...

Other Blog Posts in This Series

How to Patch Oracle Grid Infrastructure 19c Using Out-Of-Place SwitchGridHome

Let me show you how I patch Oracle Grid Infrastructure 19c (GI) using the out-of-place method and the -switchGridHome parameter.

My demo system:

  • Is a 2-node RAC
  • Runs Oracle Linux
  • Is currently on 19.16.0, and I want to patch to 19.17.0
  • Uses grid as the owner of the software

I patch only the GI home. I can patch the database later.

Preparation

I need to download:

  1. The base release of Oracle Grid Infrastructure (LINUX.X64_193000_grid_home.zip) from oracle.com or Oracle Software Delivery Cloud.
  2. Latest OPatch from My Oracle Support (6880880).
  3. The 19.17.0 Release Update for Oracle Grid Infrastructure from My Oracle Support. I will use the combo patch (34449117).

You can use AutoUpgrade to easily download GI patches.

I place the software in /u01/software.

How to Patch Oracle Grid Infrastructure 19c

1. Prepare a New Grid Home

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

  1. I need to create a folder for the new Grid Home. I must do this as root on all nodes in my cluster (copenhagen1 and copenhagen2):

    [root@copenhagen1]$ mkdir -p /u01/app/19.17.0/grid
    [root@copenhagen1]$ chown -R grid:oinstall /u01/app/19.17.0
    [root@copenhagen1]$ chmod -R 775 /u01/app/19.17.0
    
    [root@copenhagen2]$ mkdir -p /u01/app/19.17.0/grid
    [root@copenhagen2]$ chown -R grid:oinstall /u01/app/19.17.0
    [root@copenhagen2]$ chmod -R 775 /u01/app/19.17.0
    
  2. I switch to the Grid Home owner, grid.

  3. I ensure that there is passwordless SSH access between all the cluster nodes. It is a requirement for the installation, but sometimes it is disabled to strengthen security:

    [grid@copenhagen1]$ ssh copenhagen2 date
    
    [grid@copenhagen2]$ ssh copenhagen1 date
    
  4. I extract the base release of Oracle Grid Infrastructure into the new Grid Home. I work on one node only:

    [grid@copenhagen1]$ export NEWGRIDHOME=/u01/app/19.17.0/grid
    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
    

    Optionally, you can use a golden image.

  5. I update OPatch to the latest version:

    [grid@copenhagen1]$ cd $NEWGRIDHOME
    [grid@copenhagen1]$ rm -rf OPatch
    [grid@copenhagen1]$ unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip
    
  6. Then, I check the Oracle Grid Infrastructure prerequisites. If the check doesn’t write any error messages to the console, I am good to go:

    [grid@copenhagen1]$ export ORACLE_HOME=$NEWGRIDHOME
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh -executePrereqs -silent
    
  7. I want to apply the 19.17.0 Release Update while I install the Grid Home. To do that, I must extract the patch file:

     [grid@copenhagen1]$ cd /u01/software
     [grid@copenhagen1]$ unzip -oq p34449117_190000_Linux-x86-64.zip -d 34449117
    
    • The combo patch contains the GI bundle patch which consists of:
      • OCW Release Update (patch 34444834)
      • Database Release Update (34419443)
      • ACFS Release Update (34428761)
      • Tomcat Release Update (34580338)
      • DBWLM Release Update (33575402)
    • I will apply all of them.
  8. Finally, I can install the new Grid Home:

    • I need to update the environment variables.
    • CLUSTER_NODES is a comma-separated list of all the nodes in my cluster.
    • The parameter -applyRU must point to the directory holding the Release Update.
    • The parameter -applyOneOffs is a comma-separated list of the paths to each of the other Release Updates in the GI bundle patch.
    [grid@copenhagen1]$ export ORACLE_BASE=/u01/app/grid
    [grid@copenhagen1]$ export ORA_INVENTORY=/u01/app/oraInventory
    [grid@copenhagen1]$ export CLUSTER_NAME=$(olsnodes -c)
    [grid@copenhagen1]$ export CLUSTER_NODES=$(olsnodes | tr '\n' ','| sed 's/,\s*$//')
    [grid@copenhagen1]$ cd $ORACLE_HOME
    [grid@copenhagen1]$ ./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
       -applyRU /u01/software/34449117/34449117/34416665 \
       -applyOneOffs /u01/software/34449117/34449117/34419443,/u01/software/34449117/34449117/34428761,/u01/software/34449117/34449117/34580338,/u01/software/34449117/34449117/33575402 \
       -responseFile $ORACLE_HOME/install/response/gridsetup.rsp \
       INVENTORY_LOCATION=$ORA_INVENTORY \
       ORACLE_BASE=$ORACLE_BASE \
       SELECTED_LANGUAGES=en \
       oracle.install.option=CRS_SWONLY \
       oracle.install.asm.OSDBA=asmdba \
       oracle.install.asm.OSOPER=asmoper \
       oracle.install.asm.OSASM=asmadmin \
       oracle.install.crs.config.ClusterConfiguration=STANDALONE \
       oracle.install.crs.config.configureAsExtendedCluster=false \
       oracle.install.crs.config.clusterName=$CLUSTER_NAME \
       oracle.install.crs.config.gpnp.configureGNS=false \
       oracle.install.crs.config.autoConfigureClusterNodeVIP=false \
       oracle.install.crs.config.clusterNodes=$CLUSTER_NODES
    
    • Although the script says so, I don’t run root.sh yet.
    • I install it in silent mode, but I could use the wizard instead.
    • You need to install the new GI home in a way that matches your environment.
    • For inspiration on setting the various parameters, you can check the response file used for the previous Grid Home.
    • If I have one-off patches to install, I can add them to the -applyOneOffs parameter.
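To illustrate what the CLUSTER_NODES pipeline above produces, here is the same tr/sed chain run against simulated olsnodes output (the node names are just this post’s demo names; on a real cluster you would pipe olsnodes itself):

```shell
# Simulated `olsnodes` output: one node name per line
nodes=$(printf 'copenhagen1\ncopenhagen2\n')
# Join the lines with commas and strip the trailing comma
CLUSTER_NODES=$(echo "$nodes" | tr '\n' ',' | sed 's/,\s*$//')
echo "$CLUSTER_NODES"
# copenhagen1,copenhagen2
```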

2. Switch to the new Grid Home

Now, I can complete the patching process by switching to the new Grid Home. I do this one node at a time. Step 2 involves downtime.

  1. First, on copenhagen1, I switch to the new Grid Home:
    [grid@copenhagen1]$ export ORACLE_HOME=/u01/app/19.17.0/grid
    [grid@copenhagen1]$ export CURRENT_NODE=$(hostname)
    [grid@copenhagen1]$ $ORACLE_HOME/gridSetup.sh \
       -silent -switchGridHome \
       oracle.install.option=CRS_SWONLY \
       ORACLE_HOME=$ORACLE_HOME \
       oracle.install.crs.config.clusterNodes=$CURRENT_NODE \
       oracle.install.crs.rootconfig.executeRootScript=false
    
  2. Then, I run the root.sh script as root:
    • This step restarts the entire GI stack, including resources it manages (databases, listener, etc.). This means downtime on this node only. The remaining nodes stay up.
    • In that period, GI marks the services as OFFLINE so users can connect to other nodes.
    • If my database listener runs out of the Grid Home, GI will move it to the new Grid Home, including copying listener.ora.
    • Optionally, if I want a more graceful approach, I can manually stop the services, and perform draining.
    • In the end, GI restarts the resources (databases and the like).
    [root@copenhagen1]$ /u01/app/19.17.0/grid/root.sh
    
  3. I update any profiles (e.g., .bashrc) and other scripts referring to the Grid Home.
  4. I connect to the other node, copenhagen2, and repeat steps 2.1 to 2.3. I double-check that the CURRENT_NODE environment variable gets updated to copenhagen2.

That’s it! I have now patched my Grid Infrastructure deployment.

Later on, I can patch my databases as well.

A Word about the Directory for the New Grid Home

Be careful when choosing a location for the new Grid Home. The documentation lists some requirements you should be aware of.

In my demo environment, the existing Grid Home is:

/u01/app/19.0.0.0/grid

Since I am patching to 19.17.0, I think it makes sense to use:

/u01/app/19.17.0/grid

If your organization has a different naming standard, that’s fine. Just ensure you comply with the requirements specified in the documentation.

Don’t Forget to Clean Your Room

At a future point, I need to remove the old Grid Home. I use the deinstall tool in the Grid Home. I execute the command on all nodes in my cluster:

$ export OLD_GRID_HOME=/u01/app/19.0.0.0/grid
$ export ORACLE_HOME=$OLD_GRID_HOME
$ $ORACLE_HOME/deinstall/deinstall -local
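Since deinstall is destructive, a defensive check before running it can help avoid pointing at the wrong home by mistake. This guard is my own sketch, not part of the deinstall tool:

```shell
# Refuse to continue if the "old" home is actually the active new home
OLD_GRID_HOME=/u01/app/19.0.0.0/grid
NEW_GRID_HOME=/u01/app/19.17.0/grid
if [ "$OLD_GRID_HOME" = "$NEW_GRID_HOME" ]; then
    echo "Refusing to deinstall the active Grid Home" >&2
    exit 1
fi
echo "OK to deinstall $OLD_GRID_HOME"
```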

I will wait until:

  • I have seen the new Grid Home run without problems for a week or two.
  • I have patched my Oracle Databases managed by GI.
  • I have seen my Oracle Databases run without GI-related problems for a week or two.

Happy Patching!

Appendix

Windows

Oracle supports this functionality, SwitchGridHome, on Microsoft Windows starting from Oracle Database 23ai.

AIX

Check this out: Grid Infrastructure 19c Out-Of-Place Patching Fails on AIX

Further Reading

Other Blog Posts in This Series

Upgrading with Oracle Database Vault – AIOUG Follow-up

Last week I presented to AIOUG. My session was Upgrades and Migrations – What’s Cooking. I managed to answer all of the questions except one. As promised, here’s the answer.

Oracle Database Vault

One question came up on Oracle Database Vault. I couldn’t answer the question live. I knew we had recently made changes in this area, but I couldn’t recall the details.

What do you need to consider when upgrading an Oracle Database that uses Oracle Database Vault?

First, when AutoUpgrade performs the pre-upgrade analysis, it will detect the presence of Oracle Database Vault. Information is written in the pre-upgrade summary reminding you to take due care.

You have two options when you upgrade to Oracle Database 19c or later:

  • Disable Oracle Database Vault during the upgrade
  • Or, grant the role DV_PATCH_ADMIN to SYS during the upgrade

You can find more information in Requirement for Upgrading Database with Database Vault (Doc ID 2757126.1).

Thanks

Thanks to AIOUG for hosting my webinar. I really enjoy presenting to the community in India. The audience is always really engaging and asks a lot of questions.

Unfortunately, I have yet to have the opportunity to present in person in India. I hope to change that one day.

Patching Oracle Grid Infrastructure 19c – Beginner’s Guide

This is the start of a blog post series on patching Oracle Grid Infrastructure 19c (GI). It is meant to be easy to follow, so I may have skipped a detail here and there.

I know my way around database patching. I have done it countless times. When it comes to GI, it’s the other way around. I have never really done it in the real world (i.e., before joining Oracle) and my knowledge was limited. I told my boss, Mike, and he gave me a challenge: Learn about it by writing a blog post series.

Why Do I Need to Patch Oracle Grid Infrastructure

Like any other piece of software, you need to patch GI to fix security vulnerabilities and other bugs.

You should keep the GI and Oracle Database patch level in sync. This means that you need to patch GI and your Oracle Database at the same cadence. Ideally, that cadence is quarterly.

It is supported to run GI and Oracle Database at different patch levels as long as they are on the same release. GI is also certified to run some of the older Oracle Database releases. This is useful in upgrade projects. Check Oracle Clusterware (CRS/GI) – ASM – Database Version Compatibility (Doc ID 337737.1) for details.

A few examples:

GI        Database   Supported
19.18.0   19.18.0    Yes – recommended
19.16.0   19.18.0    Yes
19.18.0   19.16.0    Yes
19.18.0   11.2.0.4   Yes – used during upgrade, for instance
19.18.0   21.9.0     No
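The general rule in the table can be summarized in a small helper of my own making (not an Oracle tool): GI and the database may be on different patch levels as long as they share the same major release. Note that exceptions like 11.2.0.4 under 19c GI are special cases certified in Doc ID 337737.1 and are not covered by this simple check:

```shell
# Sketch: same major release => different patch levels are supported
same_release() {
    [ "${1%%.*}" = "${2%%.*}" ]
}

same_release 19.18.0 19.16.0 && echo "same release: supported"
same_release 19.18.0 21.9.0  || echo "different release: check Doc ID 337737.1"
```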

If possible and not too cumbersome, I recommend that you first patch GI and then Oracle Database. Some prefer to patch the two components in two separate operations, while others do it in one operation.

Which Patches Should You Apply to Oracle Grid Infrastructure

You should apply the latest Release Update for Grid Infrastructure, either as the individual bundle patches or as the combo patch.

Whether you download the bundle patches individually or go with the combo patch is a matter of personal preference. Ultimately, they contain the same patches.

Some prefer an N-1 approach: When the April Release Update comes, they patch with the previous one from January; Always one quarter behind. For stability reasons, I assume.

What about OJVM patches for GI? The short answer is no; GI does not need them.

Which Method Do I Use For Patching

You can patch in two ways:

  • In-place patching
  • Out-of-place patching
In-place patching:

  • You apply patches to the existing Grid Home.
  • You need disk space for the patches.
  • When you start patching a node, GI drains all connections and moves services to other nodes. The node is down during patching.
  • Longer node downtime.
  • No changes to profiles and scripts.

Out-of-place patching (my recommended method):

  • You apply patches to a new Grid Home.
  • You need disk space for a brand-new Grid Home and the patches.
  • You create and patch the new Grid Home without downtime, then complete patching by switching to it. The node is down only during the switch.
  • Shorter node downtime.
  • Profiles, scripts, and the like must be updated to reflect the new Grid Home.

Note: When I write node downtime, it does not mean database downtime. I discuss this shortly.

In other words:

In-place patching replaces the Oracle Clusterware software with the newer version in the same Grid home. Out-of-place upgrade has both versions of the same software present on the nodes at the same time, in different Grid homes, but only one version is active.

Oracle Fleet Patching and Provisioning

When you have more systems to manage, it is time to consider Fleet Patching and Provisioning (FPP).

Oracle Fleet Patching & Provisioning is the recommended solution for performing lifecycle operations (provisioning, patching & upgrades) across entire Oracle Grid Infrastructure and Oracle RAC Database fleets and the default solution used for Oracle Database Cloud services

It will make your life so much easier; more about that in a later blog post.

Zero Downtime Oracle Grid Infrastructure Patching

As of 19.16.0 you can also do Zero Downtime Oracle Grid Infrastructure Patching (ZDOGIP).

Use the zero-downtime Oracle Grid Infrastructure patching method to keep your Oracle RAC database instances running and client connections active during patching.

ZDOGIP is an extension to out-of-place patching. But ZDOGIP will not update the operating system drivers and will not bring down the Oracle stack (database instances, listener, etc.). The new GI takes over control of the Oracle stack without users noticing. However, you must still update the operating system drivers by taking down the node, but you can postpone that to a later point in time.

More details about ZDOGIP in a later blog post.

What about Oracle Database Downtime

When you patch GI on a node, the node is down. You don’t need to restart the operating system itself, but you do shut down the entire GI stack, including everything GI manages (database, listeners etc.).

What does that mean for Oracle Database?

Single Instance

If you have a single instance database managed by GI, your database is down during patching. Your users will experience downtime. By using out-of-place patching, you can reduce downtime.

Data Guard

If you have a Data Guard configuration, you can hide the outage from the end users.

First, you patch GI on your standby databases, then perform a switchover, and finally patch GI on the former primary database.

The only interruption is the switchover: a brownout period while the database switches roles. During the brownout, the database appears to hang, but under the hood, the session waits for the role switch to complete and connects to the new primary database.

If you have configured your application properly, it will not encounter any ORA-errors. Your users experience a short hang and continue as if nothing had happened.

RAC

If you have a RAC database, you can perform the patching in a rolling manner – node by node.

When you take down a node for patching, GI tells connections to drain from the affected instances and connect to other nodes.

If your application is properly configured, it will react to the drain events and connect seamlessly to another instance. The end users will not experience any interruption nor receive any errors.

If you haven’t configured your application properly or your application doesn’t react in due time, the connections will be forcefully terminated. How that affects your users depends on the application. But it won’t look pretty.

Unless you configure Application Continuity, that is. In that case, the database can replay any in-flight transaction. From a user perspective, all looks fine. They won’t even notice that they connected to a new instance and that the database replayed their transaction.

Happy Patching!

Appendix

Further Reading

Other Blog Posts in This Series

How to Migrate to Autonomous Database Using Database Migration Service and OCI CLI

You can migrate your Oracle Database to Autonomous Database using Database Migration Service (DMS). You can use the GUI or one of several other interfaces:

  • REST API
  • OCI CLI
  • PL/SQL
  • SDK for various programming languages (Java, Python, and others)

In this blog post, I use OCI CLI. You can install it on your computer or use the cloud shell.

The Scenario

This is my setup:

  • Source database: Oracle Database 11.2.0.4 running on a Base Database System in OCI.
  • Target database: Autonomous Database (transaction processing).

For simplicity, I will migrate all schemas in the database using Data Pump. You can cherry-pick individual schemas or exclude/include specific objects if needed.

I have already created a Vault. DMS needs one to store sensitive information.

How to

I must execute all commands in the same shell. I need to specify a lot of information that I use later on:

#Specify a base name for the migration. All migration-related objects are prefixed with this name
export MIGRNAME=SALES

#Specify the OCID of the compartment where all the resources are running. This procedure assumes all resources are placed in the same compartment
export COMPARTMENTOCID="ocid1.compartment.oc1...."

#Vault details
export VAULTOCID="ocid1.vault.oc1...."
export VAULTKEYOCID="ocid1.key.oc1...."

Next, I specify information about the source database:

#Source database OCID
export SRCDBOCID="ocid1.database.oc1...."

#Network stuff
#Private IP address of the source DB System
export SRCHOSTIP=10.0.1.186
#Subnet OCID that the source DB System uses
export SRCSUBNETOCID="ocid1.subnet.oc1...."
#VCN OCID that the DB System uses
export SRCVCNOCID="ocid1.vcn.oc1...."
#Location of the private key file that can be used to communicate over SSH to the source host
export SRCHOSTKEYFILE=/Users/daniel/Documents/ssh/my-private-key

#Name and path of the database directory object. Remember to create the directory in the file system
export SRCDBDIRNAME=EXPDIR
export SRCDBDIRPATH=/u01/app/oracle/$SRCDBDIRNAME

#Connection details
#Details for non-CDB or PDB
export SRCPDBUSERNAME=SYSTEM
export SRCPDBPASSWORD=*****
export SRCPDBSVCNAME=SALESDB_fra1b4....oraclevcn.com
#If source is a PDB, fill in details for CDB. For non-CDB leave them blank
export SRCCDBUSERNAME=
export SRCCDBPASSWORD=
export SRCCDBSVCNAME=
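As the comment above notes, the directory named in SRCDBDIRNAME must also exist on the source host’s file system before the export runs. A minimal sketch, using a demo path under /tmp (on the real source host, use the path from SRCDBDIRPATH above):

```shell
# Demo: create the export directory on the (source) file system
SRCDBDIRNAME=EXPDIR
SRCDBDIRPATH=/tmp/demo/oracle/$SRCDBDIRNAME
mkdir -p "$SRCDBDIRPATH"
[ -d "$SRCDBDIRPATH" ] && echo "directory ready: $SRCDBDIRPATH"
```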

Finally, I specify information about the target database – the autonomous database:

#Target ADB OCID
export TGTADBOCID="ocid1.autonomousdatabase.oc1...."
#Username and password - typically the ADMIN user
export TGTDBUSERNAME=ADMIN
export TGTDBPASSWORD=*****

Now, let the fun begin. I first create an object storage bucket which DMS uses to store dump files, log files, CPAT output and the like:

export BUCKETNAME=$MIGRNAME
export OSNAMESPACE=$(oci os bucket create \
  --compartment-id $COMPARTMENTOCID \
  --name $BUCKETNAME \
  --query "data.namespace" \
  --raw-output)

Then, I create a connection to the source database (non-CDB or PDB). If the source database is a PDB, I also create a connection to the source CDB:

#Create connection to source PDB/non-CDB
export SRCSUDOLOCATION=/usr/bin/sudo
export SRCSSHUSER=opc
export SRCPDBCONNNAME=$MIGRNAME"-SRC-PDB-CONN"

#The 'ssh-details' parameters include the contents of the private key file as a single-line string. Newlines from the file are converted to \n by the 'awk' command
export SRCPDBCONNOCID=$(oci database-migration connection create \
   --compartment-id $COMPARTMENTOCID \
   --database-type USER_MANAGED_OCI \
   --admin-credentials '{"password":"'$SRCPDBPASSWORD'","username":"'$SRCPDBUSERNAME'"}' \
   --vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
   --database-id $SRCDBOCID \
   --display-name $SRCPDBCONNNAME \
   --connect-descriptor '{"connectString": "'$SRCHOSTIP':1521/'$SRCPDBSVCNAME'"}' \
   --ssh-details '{"host":"'$SRCHOSTIP'","sudoLocation": "'$SRCSUDOLOCATION'","user":"'$SRCSSHUSER'","sshkey":"'"$(awk '{printf "%s\\n", $0}' $SRCHOSTKEYFILE)"'"}' \
   --private-endpoint '{"subnetId":"'$SRCSUBNETOCID'","vcnId":"'$SRCVCNOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
   --wait-for-state SUCCEEDED \
   --max-wait-seconds 120 \
   --query "data.resources[0].identifier" \
   --raw-output)

#Create connection to source CDB, if needed
if [[ -n $SRCCDBUSERNAME ]];then
	export SRCCDBCONNNAME=$MIGRNAME"-SRC-CDB-CONN"
	export SRCCDBCONNOCID=$(oci database-migration connection create \
	   --compartment-id $COMPARTMENTOCID \
	   --database-type USER_MANAGED_OCI \
	   --admin-credentials '{"password":"'$SRCCDBPASSWORD'","username":"'$SRCCDBUSERNAME'"}' \
	   --vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
	   --database-id $SRCDBOCID \
	   --display-name $SRCCDBCONNNAME \
	   --connect-descriptor '{"connectString":"'$SRCHOSTIP':1521/'$SRCCDBSVCNAME'"}' \
	   --ssh-details '{"host":"'$SRCHOSTIP'","sudoLocation": "'$SRCSUDOLOCATION'","user":"'$SRCSSHUSER'","sshkey":"'"$(awk '{printf "%s\\n", $0}' $SRCHOSTKEYFILE)"'"}' \
	   --private-endpoint '{"subnetId":"'$SRCSUBNETOCID'","vcnId":"'$SRCVCNOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
	   --wait-for-state SUCCEEDED \
	   --max-wait-seconds 120 \
	   --query "data.resources[0].identifier" \
	   --raw-output)
fi
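The awk one-liner used in ssh-details above can be tried in isolation. With a hypothetical two-line stand-in file (not a real key), it flattens the contents into a single line with literal \n separators, which is what the JSON payload expects:

```shell
# Demo with a stand-in key file (not a real private key)
printf 'line1\nline2\n' > /tmp/demo-key
awk '{printf "%s\\n", $0}' /tmp/demo-key
# line1\nline2\n   (one line, backslash-n kept literally)
```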

Next, I create a connection to the target autonomous database:

export TGTCONNNAME=$MIGRNAME"-TGT-CONN"
export TGTCONNOCID=$(oci database-migration connection create \
   --compartment-id $COMPARTMENTOCID \
   --admin-credentials '{"password":"'$TGTDBPASSWORD'","username":"'$TGTDBUSERNAME'"}' \
   --database-type AUTONOMOUS \
   --vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
   --database-id $TGTADBOCID \
   --display-name $TGTCONNNAME \
   --wait-for-state SUCCEEDED \
   --max-wait-seconds 120 \
   --query "data.resources[0].identifier" \
   --raw-output)

Now, I will create a migration object which describes the migration. No changes are made to the database yet:

#Create the migration
export MIGROBJNAME=$MIGRNAME
if [[ -n $SRCCDBCONNOCID ]];then
   export MIGRSRCCDBPARAM="--source-container-database-connection-id $SRCCDBCONNOCID"
else
   export MIGRSRCCDBPARAM=""
fi
export MIGROBJOCID=$(oci database-migration migration create \
   --compartment-id $COMPARTMENTOCID \
   --vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
   --source-database-connection-id $SRCPDBCONNOCID $MIGRSRCCDBPARAM \
   --target-database-connection-id $TGTCONNOCID \
   --type OFFLINE \
   --display-name $MIGROBJNAME \
   --data-transfer-medium-details '{"databaseLinkDetails": null,"objectStorageDetails": {"namespaceName": "'$OSNAMESPACE'","bucketName": "'$BUCKETNAME'"},"awsS3Details": null}' \
   --datapump-settings '{"exportDirectoryObject": {"name": "'$SRCDBDIRNAME'","path": "'$SRCDBDIRPATH'"}}' \
   --wait-for-state SUCCEEDED \
   --max-wait-seconds 120 \
   --query "data.resources[0].identifier" \
   --raw-output
)
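The MIGRSRCCDBPARAM trick above relies on an empty, unquoted variable expanding to no arguments at all, so the optional flag simply disappears for a non-CDB source. A quick stand-in illustration (print_args is a hypothetical helper, not part of OCI CLI):

```shell
# Stand-in for the optional-parameter pattern: an empty unquoted
# variable contributes zero words to the command line
print_args() { echo "argc=$#"; }

EXTRA=""
print_args --type OFFLINE $EXTRA
# argc=2

EXTRA="--source-id abc"
print_args --type OFFLINE $EXTRA
# argc=4
```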

Now, I can perform an evaluation. This is a sanity check which performs a lot of checks upfront. The command runs until the evaluation finishes.

#Evaluate
oci database-migration migration evaluate \
   --migration-id $MIGROBJOCID \
   --wait-for-state SUCCEEDED \
   --wait-for-state FAILED \
   --max-wait-seconds 3600

I can check the evaluation outcome, including the Cloud Premigration Advisor Tool (CPAT) report. You can find this information in the object storage bucket as well. You can run the evaluation as many times as needed:

#Get the last job and the details about it
export MIGRLASTJOBOCID=$(oci database-migration job list \
   --migration-id $MIGROBJOCID \
   --limit 1 \
   --sort-by timeCreated \
   --sort-order desc \
   --query "data.items[0].id" \
   --raw-output
)
oci database-migration job get-job-output-content \
   --job-id $MIGRLASTJOBOCID \
   --file -
#Get the CPAT report  
oci database-migration job get-advisor-report \
   --job-id $MIGRLASTJOBOCID 

Once I have cleared any issues preventing the migration, I can start the actual migration. The command will return control immediately when the migration is started. Optionally, I use the parameters --wait-for-state and --max-wait-seconds to keep it running until the command completes:

#Start the real migration
export MIGRSTARTJOBID=$(oci database-migration migration start \
   --migration-id $MIGROBJOCID \
   --query "data.id" \
   --raw-output
)

I use the two commands below to monitor the migration. The first command gives me an overall status. The second command returns a log file with additional details.

#Get the current status
oci database-migration job get \
   --job-id $MIGRSTARTJOBID 
oci database-migration job get-job-output-content \
   --job-id $MIGRSTARTJOBID \
   --file -

That’s it!

I have migrated my database to an autonomous database.

Appendix

Additional Resources

Using GUI to find REST API calls

Although the OCI CLI commands are documented, it can be hard to figure out exactly which parameters to add and the exact syntax. My colleague, Alex Kotopoulis, gave me rock-star advice.

Switch to the GUI and configure the migration as you want. Turn on "Web developer tools" (might have a different name in your browser) and investigate the network traffic. You can see the REST API calls made by the GUI and easily translate those into OCI CLI parameters.

Thanks Alex!