Show Me Your Upgrade Runbook and Let’s See What You Can Remove

Recently, I talked to a customer about upgrading Oracle Database. I was showing how you can get the most out of AutoUpgrade. The customer showed me their runbook, and together, we found several redundant tasks. AutoUpgrade already handled many of the checks and tasks.

Leave a comment and show me the tasks in your runbook. Let’s see if we can find something to remove.

What Did We Find at the Customer?

Before Upgrade

  • Purging recycle bin. The customer actively used the recycle bin and manually purged it before upgrading. However, AutoUpgrade has a check on the recycle bin. From the prechecks log file:
    2024-04-17 10:15:51.775 INFO Running check PURGE_RECYCLEBIN [DB12] - CheckTrigger.call#96 
       EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM sys.recyclebin$'
    
    And purges it – if needed – during prefixups. From the prefixups log file:
    2024-04-17 10:15:44.972 INFO Running fixup [PURGE_RECYCLEBIN][DB12][PLSQL][PURGE DBA_RECYCLEBIN
    2024-04-17 10:15:44.972 INFO [EB5A67] [PURGE DBA_RECYCLEBIN
    2024-04-17 10:15:44.976 INFO [EB5A67] Executing SQL [/* [EB5A67]  */PURGE DBA_RECYCLEBIN
    2024-04-17 10:15:45.316 INFO # PURGE_RECYCLEBIN - Check.runFix#292 
    2024-04-17 10:15:45.317 INFO Finished fixup [PURGE_RECYCLEBIN][DB12][SUCCESSFUL] - FixUpTrigger.executeFixUp#231 
    

After Upgrade

  • Execute catuppst.sql. This script performs:

    … remaining upgrade actions that do not require that the database be open in UPGRADE mode

    AutoUpgrade handles this part for you. Here is an extract from the phase overview during upgrade:

    $ grep -B2 "catuppst"  dbupgrade/phase.log
    [phase 102] type is 1 with 1 Files
    
    @catuppst.sql
    
  • Recompiling objects. The customer recompiled objects using utlrp.sql. This is usually not needed. By default, AutoUpgrade recompiles invalid objects in the postfixups phase unless you override the behavior with the run_utlrp config parameter (see the snippet after the log extract). Here is an extract from the autocompile log file found in the postfixups directory:

    13:12:42   6      select count(*) into cnt from sys.dba_objects
    13:12:42   7         where status = 'INVALID';
    13:12:42   8      IF cnt > 0 THEN
    13:12:42   9          :utlprp_name := '/u01/app/oracle/product/23.0.0.0/dbhome_2/rdbms/admin/utlprpom.sql 2';
    13:12:42  10          :utldtchk_name := '/u01/app/oracle/product/23.0.0.0/dbhome_2/rdbms/admin/utldtchk.sql';
    13:12:42  11      END IF;
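
    If you prefer to handle the recompilation yourself, you can turn off AutoUpgrade’s recompilation in the config file. A one-line sketch; run_utlrp accepts yes (the default) or no:

    upg1.run_utlrp=no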
    
  • Datapatch. The customer executed Datapatch after a successful upgrade. The upgrade engine already does this as one of the very last phases. Here is an extract from upg_summary.log:

    Component                               Current         Full     Elapsed Time
    Name                                    Status          Version  HH:MM:SS
    
    ...
    Datapatch                                                        00:00:09
    
  • Gathering database statistics. The customer already knew that AutoUpgrade gathers dictionary statistics and that they needed to gather fixed objects statistics after a while. But they were also gathering database statistics to refresh the statistics on their application schemas. Although this can make sense in selected cases, I would generally not recommend it. AutoUpgrade does not touch user data during a database upgrade, so if you had good statistics before the upgrade, they are also good after. The only exception I can think of is upgrades from Oracle Database 11.2.0.4: Oracle Database 12c introduced new histogram types, which could significantly benefit some databases.

    By the way, here is a handy trick to gather fixed objects statistics some time after the upgrade.
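
    As an illustration, here is a minimal sketch of such a trick: a one-off scheduler job that waits a week and then gathers fixed objects statistics. The job name and the seven-day delay are my own choices:

    BEGIN
       DBMS_SCHEDULER.create_job (
          job_name   => 'GATHER_FIXED_OBJECTS_STATS_ONCE',
          job_type   => 'PLSQL_BLOCK',
          job_action => 'BEGIN DBMS_STATS.GATHER_FIXED_OBJECTS_STATS; END;',
          start_date => SYSTIMESTAMP + INTERVAL '7' DAY,
          enabled    => TRUE,
          auto_drop  => TRUE,
          comments   => 'One-off job: gather fixed objects stats a week after upgrade');
    END;
    /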

Conclusion

While working with this customer, we found several redundant items. Getting those items out of the runbook means less overall downtime – and a less complex upgrade. AutoUpgrade makes upgrading Oracle Database much easier.

Please leave a comment if there are tasks that AutoUpgrade should do for you.

Appendix

Further Reading

Are Your Oracle Database Clients Ready for the next Database Upgrade?

Each new Oracle Database release changes the client/server communication protocol. A newer protocol supports new features, strengthens security, and so forth. Before upgrading your Oracle Database, you must ensure your clients can connect afterward. If not, your clients may face:

ORA-28040: No matching authentication protocol
ORA-03134: Connections to this server version are no longer supported

Oracle lists the supported clients in the client/server interoperability support matrix (Doc ID 207303.1). Take a look – it is probably the most colorful MOS note.

For Oracle Database 23ai, your clients must be version:

  • 23ai
  • 21c
  • 19c

How do you know if your clients are ready for the next release?

A Solution

The view V$SESSION_CONNECT_INFO lists the client driver and version of connected sessions. You can join V$SESSION (or the GV$ equivalents, as in the query below) to get more details.

SELECT PROGRAM, CLIENT_DRIVER, CLIENT_VERSION 
FROM   GV$SESSION S, GV$SESSION_CONNECT_INFO CI 
WHERE  S.INST_ID=CI.INST_ID AND S.SID=CI.SID AND S.SERIAL#=CI.SERIAL#;

Here’s an example of how that data might look:

PROGRAM        CLIENT_DRIVER            CLIENT_VERSION
oracle         SERVER                   19.20.0.0.0
MyApp.exe      ODPM.NET : 19.1.0.0.0    19.1.0.0.0
MyApp.exe      ODPU.NET : 19.21.0.0.0   19.21.0.0.0
MyApp2.exe     (null)                   19.21.0.0.0
emagent        jdbcthin                 12.1.0.2.0
SQL Developer  jdbcthin : 21.4.0.0.0    21.4.0.0.0
java           jdbcoci : 19.22.0.0.0    19.22.0.0.0
sqlplus        SQL*PLUS                 19.22.0.0.0
  • If I want to upgrade to Oracle Database 23ai, it looks good except for the emagent with a 12.1.0.2 driver.
  • When the client driver is SERVER, it is the database itself making connections. In this case, the scheduler was running jobs. You can disregard those entries.
  • The entry with client driver (null) was a thick OCI client used by a C++ program.

The above query gives an overview of the situation right now. But we need to persist the data to check all clients over time. We can solve that with a scheduler job.

If you monitor your database for weeks or a month, you should know which clients connect to your Oracle Database.

A high-five to my friend Frank for helping with some sample connection data.

The Code

Here’s a piece of code that creates a schema, a few objects, and a scheduler job.

The scheduler runs the job every 5 minutes. The job finds new connections, notes the client’s driver, and stores that info in a table.

Run the following code as SYS AS SYSDBA:

--Create a schema
DROP USER ORA_CLIENT_CHECK CASCADE;
CREATE USER ORA_CLIENT_CHECK NO AUTHENTICATION;

--Required privileges
GRANT SELECT ON GV_$SESSION TO ORA_CLIENT_CHECK;
GRANT SELECT ON GV_$SESSION_CONNECT_INFO TO ORA_CLIENT_CHECK;
ALTER USER ORA_CLIENT_CHECK DEFAULT TABLESPACE USERS;
ALTER USER ORA_CLIENT_CHECK QUOTA UNLIMITED ON USERS;

--View contains a current list of clients connected
--(NULL drivers kept: thick OCI clients report no driver; INST_ID join added for RAC)
CREATE VIEW ORA_CLIENT_CHECK.CURRENT_SESSIONS AS (
SELECT DISTINCT S.MACHINE, S.OSUSER, S.PROGRAM, S.MODULE, CI.CLIENT_DRIVER, CI.CLIENT_VERSION FROM GV$SESSION S, GV$SESSION_CONNECT_INFO CI WHERE S.INST_ID=CI.INST_ID AND S.SID=CI.SID AND S.SERIAL#=CI.SERIAL# AND (CI.CLIENT_DRIVER != 'SERVER' OR CI.CLIENT_DRIVER IS NULL));

--Create a table to hold the result
CREATE TABLE ORA_CLIENT_CHECK.CONNECTED_SESSIONS AS SELECT * FROM ORA_CLIENT_CHECK.CURRENT_SESSIONS WHERE 1=0;

--Create a scheduler job that runs every 5 minutes
--(NVL in the ON clause prevents rows with NULL module/driver from being re-inserted on every run)
BEGIN
   DBMS_SCHEDULER.create_job (
      job_name        => '"ORA_CLIENT_CHECK"."GET_NEW_CONNECTIONS"',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN MERGE INTO ORA_CLIENT_CHECK.CONNECTED_SESSIONS OLD USING (SELECT * FROM ORA_CLIENT_CHECK.CURRENT_SESSIONS) NEW ON (OLD.MACHINE=NEW.MACHINE AND OLD.OSUSER=NEW.OSUSER AND OLD.PROGRAM=NEW.PROGRAM AND NVL(OLD.MODULE,''-'')=NVL(NEW.MODULE,''-'') AND NVL(OLD.CLIENT_DRIVER,''-'')=NVL(NEW.CLIENT_DRIVER,''-'') AND OLD.CLIENT_VERSION=NEW.CLIENT_VERSION) WHEN NOT MATCHED THEN INSERT (MACHINE, OSUSER, PROGRAM, MODULE, CLIENT_DRIVER, CLIENT_VERSION) VALUES(NEW.MACHINE, NEW.OSUSER, NEW.PROGRAM, NEW.MODULE, NEW.CLIENT_DRIVER, NEW.CLIENT_VERSION); COMMIT; END;',
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'freq=minutely; interval=5; bysecond=0;',
      end_date        => NULL,
      enabled         => TRUE,
      comments        => 'Checks for new connections and stores the client version/driver in a table.');
END;
/

Query the table to find clients that you must upgrade:

SELECT * 
FROM   ORA_CLIENT_CHECK.CONNECTED_SESSIONS;
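
To zoom in on clients that need attention, you can filter on the major version. A sketch, assuming you target Oracle Database 23ai and therefore need clients on 19c or newer:

SELECT MACHINE, PROGRAM, CLIENT_DRIVER, CLIENT_VERSION 
FROM   ORA_CLIENT_CHECK.CONNECTED_SESSIONS 
WHERE  TO_NUMBER(REGEXP_SUBSTR(CLIENT_VERSION, '^\d+')) < 19 
ORDER  BY MACHINE, PROGRAM;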

The Limitations

  • The code monitors a single database only.

  • If you use Active Data Guard, you must consider the users that connect to the standby only. You can use DML redirect to send the data back to the primary database.

  • If you have a job that connects very seldom or connects for a very short time only, there is a risk that the code won’t capture it.

Other Ideas

Unified Auditing captures much information, but the client driver and version are not part of it. If you can’t sample from V$SESSION_CONNECT_INFO, this could be a viable alternative.

You can use the below query to find a unique list of sessions and the program they used to connect. Then, you connect to each host and manually ensure that the client driver is up-to-date:

SELECT DISTINCT userhost, program, client_program_name 
FROM   unified_audit_trail;

Of course, this requires you to have configured Unified Auditing.

How to Upgrade Oracle Database and Replace the Operating System

A reader asked me for advice on upgrading Oracle Database and replacing the underlying operating system.

  • Currently on Oracle Database 12.1.0.2
  • Currently on Windows Server 2012
  • Upgrade to Oracle Database 19c
  • Move to new servers with Microsoft Windows Server 2022

What’s the recommended approach for transitioning to Oracle 19c on Windows 2022?

Oracle Data Guard

My first option is always Oracle Data Guard. It is often a superior option. You move the entire database, and the only interruption is a Data Guard switchover.

In this case, where the reader needs to replace the operating system, the first thing to check is platform certifications. Always check platform certifications on My Oracle Support. It has the most up-to-date information.

Here is an overview of the platform certification for Oracle Database 12.1.0.2 and 19c.

Oracle Database 12.1.0.2   Oracle Database 19c
Windows Server 2008
Windows Server 2008 R2
Windows Server 2012
Windows Server 2012 R2     Windows Server 2012 R2
                           Windows Server 2016
                           Windows Server 2019
                           Windows Server 2022

Oracle Database 19c does not support the current platform, Windows Server 2012. Thus, the reader can’t set up a standby database on the new servers and transition via a regular switchover.

Windows Server 2012 R2

Let’s imagine the current servers were using Windows Server 2012 R2. Both database releases support this platform. I would recommend this approach:

  1. Upgrade to Oracle Database 19c on current servers.
  2. Set up new servers with Windows Server 2022.
  3. Create standby database on new server.
  4. Transition to new servers with a regular Data Guard switchover.

This approach requires two maintenance windows. Yet, it is still my favorite because it is very simple.

RMAN Backups

You could also use RMAN and incremental backups. You don’t need much downtime – just the time necessary for a final incremental backup and restore. Like with Data Guard, you bring over the entire database.

RMAN can restore backups from a previous version, and you can use that to your advantage.

  1. Provision new servers with just Oracle Database 19c.
  2. Back up the database on 12.1.0.2 (a full backup followed by incremental backups).
  3. Restore and recover the database on the new servers with Oracle Database 19c binaries.
  4. After the final incremental backup, open the new database in upgrade mode and perform the upgrade.
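
Here is a minimal RMAN sketch of the final steps, assuming the initial restore has already happened on the new server; the backup location is illustrative:

# On the source (12.1.0.2): take the final incremental backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/backup/db_%U';

# On the new server (19c binaries): catalog and apply the backup without opening
RMAN> CATALOG START WITH '/backup/';
RMAN> RECOVER DATABASE NOOPEN;

# Open the database in upgrade mode using the 19c binaries
SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;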

We covered this approach in one of our webinars; you can also find details in this blog post.

Move Storage

You can also unmount the storage from the old server, and attach it to the new server.

  1. Run AutoUpgrade in analyze mode to determine upgrade readiness.
  2. Downtime starts.
  3. Run AutoUpgrade in fixup mode to fix any issues preventing the upgrade from starting.
  4. Cleanly shut down the source database.
  5. Move the storage to the new server.
  6. Start the database on the new server in upgrade mode.
  7. Start AutoUpgrade in upgrade mode to complete the upgrade.

This is just a high-level overview. For a real move, there are many more intermediate steps.

Be sure to have a proper rollback plan. You are re-using the data files, and AutoUpgrade in upgrade mode does not create a guaranteed restore point.
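
One way to keep a fallback is to create a guaranteed restore point yourself before shutting down the source database. A sketch, assuming Flashback Database is usable and the Fast Recovery Area moves along with the rest of the storage; the restore point name is my own:

SQL> CREATE RESTORE POINT BEFORE_UPGRADE GUARANTEE FLASHBACK DATABASE;

Drop the restore point once you are sure you won’t go back to the old release.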

Data Pump

Data Pump is also a viable option, especially for smaller, less complex databases. It also enables you to restructure your database, for example:

  • Transform old BasicFile LOBs to SecureFile
  • Implement partitioning
  • Exclude data (for archival)
  • You can import directly into a higher release and even directly into a PDB.

But – the larger the database, the longer the downtime (generally speaking).

When you use Data Pump for upgrades, I recommend using a full database export.
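
A minimal sketch of such a full export and import; the directory object, file names, and parallel degree are illustrative:

$ expdp system full=y directory=DP_DIR dumpfile=full%U.dmp logfile=exp_full.log parallel=4

$ impdp system full=y directory=DP_DIR dumpfile=full%U.dmp logfile=imp_full.log parallel=4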

Full Transportable Export/Import

You can also use transportable tablespaces for upgrades. You can even migrate directly into a PDB on Oracle Database 19c.

The downside of transportable tablespaces is that you must copy the data files to the new system.

But often, you can unmount the storage and mount the storage on the new servers. This avoids the cumbersome process of copying the data files to the new system.

Another approach is to combine transportable tablespaces with incremental backups, if you want to lower the downtime needed. This approach leaves the original database untouched, leaving you with a perfect rollback option.
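
For reference, a minimal sketch of a plain full transportable export/import; the directory object and data file path are illustrative. Remember that the user tablespaces must be read-only while the export runs:

$ expdp system full=y transportable=always directory=DP_DIR dumpfile=ftex.dmp logfile=ftex_exp.log

$ impdp system directory=DP_DIR dumpfile=ftex.dmp transport_datafiles='/u02/oradata/SALES/sales01.dbf' logfile=ftex_imp.log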

Oracle GoldenGate

You could also use Oracle GoldenGate. But for most upgrades, it is overkill, partly because of the restrictions and considerations. I see this as a sensible option only if you have very strict downtime or fallback requirements.

Conclusion

What is the best option?

It depends…

This post helps you make the best decision for your organization.

Upgrade Base Database Cloud Service to Oracle Database 23c

Please see the updated post on Oracle Database 23ai.


Here’s a cool way of upgrading your database in OCI to Oracle Database 23c. I will move my PDB to a new Base Database System using refreshable clone PDBs and AutoUpgrade.

The benefits of using this approach are:

  • Shorter downtime than an in-place upgrade.
  • A brand-new Base Database System, which means the operating system and Oracle Grid Infrastructure are already on newer versions.

My Environment

I have one PDB that I want to upgrade. It’s called SALES.

Source

  • Base Database System on 19.20.0
  • Name DBS19

Target

  • Base Database System on 23.3.0
  • Name DBS23

How To

Prepare AutoUpgrade

  • I must use a version of AutoUpgrade that supports upgrades to Oracle Database 23c, and I must have AutoUpgrade on the source and target system.
    $ java -jar autoupgrade.jar -version
    
  • At the time of writing, the latest version of AutoUpgrade on My Oracle Support does not support 23 as a target release. Instead, I copy AutoUpgrade from the target Oracle Home (on DBS23) to the source Oracle Home (on DBS19). This version allows upgrades to Oracle Database 23c.
  • I create an AutoUpgrade config file, named sales.cfg, which I store on both servers:
    global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
    global.keystore=/u01/app/oracle/cfgtoollogs/keystore
    upg1.source_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
    upg1.target_home=/u01/app/oracle/product/23.0.0.0/dbhome_1
    upg1.sid=CDB19
    upg1.pdbs=SALES
    upg1.target_cdb=CDB23
    upg1.source_dblink.SALES=CLONEPDB 600
    upg1.target_pdb_copy_option.SALES=file_name_convert=none
    upg1.target_version=23
    upg1.start_time=05/11/2023 02:00:00
    
    • I must specify global.keystore to allow AutoUpgrade to create a keystore to work with my encrypted PDB.
    • source_home and target_home list the Oracle Home of the source and target CDB, respectively. It doesn’t matter that each of the two homes exists on one server only.
    • sid and target_cdb contain the SID of the source and target CDB, respectively.
    • pdbs contains the name of the PDB I want to upgrade, sales. If needed, I could specify more PDBs.
    • source_dblink has the name of the database link (CLONEPDB) and the rate at which the target CDB brings over redo and rolls forward the copy (600 seconds or 10 minutes).
    • I want to use ASM and Oracle Managed Files, so I set target_pdb_copy_option accordingly.
    • Since my source and target CDB are not on the same host, AutoUpgrade can’t automatically determine the target version. I specify that manually using target_version.
    • start_time specifies when downtime starts. At this point, AutoUpgrade refreshes the PDB for the last time and then moves on with the upgrade.

Prepare Source PDB

  • I connect to the source PDB. I create a user (for a database link) and grant privileges:

    create user dblinkuser identified by ... ;
    grant create session, 
          create pluggable database, 
          select_catalog_role to dblinkuser;
    grant read on sys.enc$ to dblinkuser;
    
  • After the upgrade, I can drop the user again.

Prepare Target CDB

  • I connect to the target CDB and create a database link pointing to my source PDB:
    create database link clonepdb
    connect to dblinkuser
    identified by dblinkuser
    using '<connection-string-to-source-pdb>';
    

Analyze and Fix Source PDB

  • First, I analyze the source PDB for upgrade readiness. On the source system:
    java -jar autoupgrade.jar -config sales.cfg -mode analyze
    
  • The summary report lists the following precheck failures, which I safely ignore:
    • TDE_PASSWORDS_REQUIRED – I will fix that on the target system.
    • TARGET_CDB_AVAILABILITY – the target CDB is remote, and AutoUpgrade can’t analyze it.
  • Then, I execute the preupgrade fixups:
    java -jar autoupgrade.jar -config sales.cfg -mode fixups
    
    • This changes my source PDB, so I do it as close to my maintenance window as possible.

Upgrade

  • Since my PDB is encrypted, I must add the source and target CDB keystore password to the AutoUpgrade keystore. I start the TDE console on the target host:

    java -jar autoupgrade.jar -config sales.cfg -load_password
    
  • In the TDE console, I add the keystore passwords of the source and target CDB:

    TDE> add CDB19
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    TDE> add CDB23
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    
  • I save the passwords and convert the AutoUpgrade keystore to an auto-login keystore:

    TDE> save
    Convert the keystore to auto-login [YES|NO] ? 
    
  • I start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config sales.cfg -mode deploy
    
    • AutoUpgrade copies the data files over the database link.
    • Rolls the copies of the data files forward with redo from the source.
    • At one point, issues a final refresh and disconnects the PDB from the source.
    • Upgrades the PDB.
  • I have now upgraded my PDB to Oracle Database 23c.

The Fine Print

You should:

  • Check the words for caution in my previous blog post on AutoUpgrade and refreshable clone PDBs.
  • Start a new full backup of the target database after the migration.
  • Familiarize yourself with the concept of the AutoUpgrade keystore.

Also, notice:

  • The PDB is now on a different Base Database System. You need to update your connection string.
  • The source PDB must be Oracle Database 19c or newer to upgrade directly to Oracle Database 23c.
  • The OCI console will recognize and display the new PDB after a while. You don’t have to do anything other than wait for the automatic sync job.

How to Upgrade to Oracle Database 19c and Migrate to a PDB Using Refreshable Clone PDBs

At the recent Oracle DatabaseWorld at CloudWorld I spoke to several customers that had to upgrade to Oracle Database 19c and convert their non-CDB into the multitenant architecture.

Here is how to do it using Refreshable Clone PDBs.

My source database is:

  • A non-CDB
  • On Oracle Database 12.2 or newer

I want to:

  • Upgrade to Oracle Database 19c
  • Convert the database to a PDB
  • Plug it into an existing CDB

The Problem With PDB Conversion

The conversion to multitenant does not offer the same rollback options as an upgrade. Normally, when you upgrade a database, you rely on Flashback Database as the primary rollback option. However, that does not work for conversion to multitenant.

When you plug your non-CDB into a CDB, the CDB makes changes to the data file headers. Those changes are irreversible and prevent you from ever using those data files in a non-CDB again. Not even Flashback Database can revert the changes.

So, what are your rollback options?

  • Restore a backup. It might take longer than your organization can accept.
  • Make a copy of the data files before conversion. It requires disk space and a longer downtime window to copy the data files.

This is where Refreshable Clone PDBs come into play.

Refreshable Clone PDBs

Here is an overview of what AutoUpgrade does for you:

Overview of the process

  1. AutoUpgrade creates a PDB in the target CDB as a refreshable clone PDB of the source non-CDB.
  2. The target CDB starts to copy the data files from the source non-CDB.
  3. The target CDB refreshes the PDB. In other words, it rolls forward the data files using the redo from the source non-CDB.
  4. Now, downtime starts. AutoUpgrade issues a final refresh to bring over the latest changes.
  5. AutoUpgrade disconnects the refreshable clone PDB from its source. Now, the PDB is a real, stand-alone PDB. AutoUpgrade upgrades the PDB and converts it into a proper PDB.

If something happens during the upgrade or conversion and you want to roll back, simply start the original non-CDB. It is left completely untouched.

You can learn about the concept in detail in our AutoUpgrade 2.0 webinar.

Refreshable clone PDBs do not work for cross-endian migrations (like AIX to Linux), but cross-platform migrations within the same endian format should work fine (like Windows to Linux).
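
You can check the endian format of the source and target platforms with the standard view V$TRANSPORTABLE_PLATFORM:

SELECT PLATFORM_NAME, ENDIAN_FORMAT 
FROM   V$TRANSPORTABLE_PLATFORM 
ORDER  BY PLATFORM_NAME;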

How To

  1. In the source non-CDB, I create a user:
    create user dblinkuser identified by ... ;
    grant create session, 
       create pluggable database, 
       select_catalog_role to dblinkuser;
    grant read on sys.enc$ to dblinkuser;
    
  2. In my target CDB, I create a database link connecting to my source non-CDB:
    create database link clonepdb 
       connect to dblinkuser identified by ...
       using 'source-db-alias';
    
    You can drop the database link after the migration.
  3. I create an AutoUpgrade config file called noncdb1.cfg:
    upg1.source_home=/u01/app/oracle/product/12.2.0.1
    upg1.target_home=/u01/app/oracle/product/19
    upg1.sid=NONCDB1
    upg1.target_cdb=CDB1
    upg1.source_dblink.NONCDB1=CLONEPDB 600
    upg1.target_pdb_name.NONCDB1=PDB1
    upg1.start_time=25/09/2023 06:30:00
    
    • source_home and target_home are the Oracle Homes of the source non-CDB and target CDB, respectively.
    • sid is the source non-CDB that I want to upgrade and convert.
    • target_cdb is the CDB into which I want to plug in the non-CDB. You must create the CDB in advance or use an existing one.
    • source_dblink has the name of the database link (CLONEPDB) and the rate at which the target CDB brings over redo and rolls forward the copy (600 seconds or 10 minutes).
    • target_pdb_name specifies that I want to rename the non-CDB to PDB1 when I plug it in. You can leave this out if you want to keep the name.
    • start_time specifies when downtime starts. At this point, AutoUpgrade refreshes the PDB for the last time and then moves on with upgrade and PDB conversion.
  4. Start AutoUpgrade in analyze mode on the source system:
    java -jar autoupgrade.jar -mode analyze -config noncdb1.cfg
    
  5. Run AutoUpgrade in fixups mode on the source system:
    java -jar autoupgrade.jar -mode fixups -config noncdb1.cfg
    
    • This runs the fixups identified by AutoUpgrade in analyze mode. You can run this task even after you start AutoUpgrade in deploy mode. Just ensure that the fixups complete before the final refresh (as specified in the start_time parameter).
  6. If there are no errors found in the analysis, I start AutoUpgrade in deploy mode:
    java -jar autoupgrade.jar -mode deploy -config noncdb1.cfg
    
    • AutoUpgrade copies the data files over the database link.
    • Rolls the copies of the data files forward with redo from the source non-CDB.
    • At one point, issues a final refresh and disconnects the PDB from the source non-CDB.
    • Upgrades and converts the database to a PDB.


Words of Caution

Disconnect Users from Source Database

Right before the upgrade and conversion starts, AutoUpgrade executes a final refresh. The last redo from the source non-CDB is applied to ensure no data is lost. You must ensure that no users are connected to the source non-CDB after this time. Otherwise, that data will be lost.

AutoUpgrade starts the final refresh at the start time specified in the config file:

upg1.start_time=25/09/2023 06:30:00

You must be careful about disconnecting users from the source non-CDB. Remember, AutoUpgrade connects to the source non-CDB over a database link as a regular user (not SYS). This means the listener must be available, and you can’t enable restricted session or similar means.

Data Guard

If the target CDB is protected by Data Guard, special attention is needed to handle the standby databases. I explain the details in our AutoUpgrade 2.0 webinar.

Redo

The procedure relies on redo from the source non-CDB. Ensure that redo is kept in the Fast Recovery Area of the source non-CDB until it has been applied on the target PDB. Either postpone your archive backups or change the archive log deletion policy so the archive logs remain on disk.
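
For example, one possibility (adjust it to your backup setup) is an RMAN deletion policy that requires archive logs to be backed up multiple times before RMAN may delete them, which keeps them on disk longer:

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE SBT;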

Final Refresh

Check this blog post if you want to be in control over when the final refresh happens.

Services

You must recreate the services used in your connect strings.
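
A sketch of recreating a service inside the new PDB using DBMS_SERVICE; the service name SALES_APP is hypothetical. If the CDB runs under Grid Infrastructure, use srvctl add service instead:

BEGIN
   DBMS_SERVICE.CREATE_SERVICE(service_name => 'SALES_APP', network_name => 'SALES_APP');
   DBMS_SERVICE.START_SERVICE(service_name => 'SALES_APP');
END;
/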

Appendix

Further Reading

What to Do about RMAN Recovery Catalog When You Upgrade Oracle Database

When you upgrade your Oracle Database, you want to ensure your backup strategy is not compromised. The RMAN recovery catalog is a key part of your backup strategy. What do you need to take care of when upgrading Oracle Database? Don’t let the upgrade of your Oracle Database jeopardize your RMAN backup strategy.

First, let’s agree on the terminology:

  • Target Database – The database that you want to back up. This is where your data is stored.
  • Catalog Database – A regular Oracle Database with one or more catalog schemas.
  • Catalog Schema – A schema inside the catalog database which holds the metadata about the RMAN backups. You can register multiple databases in the same catalog schema.
  • Recovery Catalog – The catalog schema and catalog database together form the recovery catalog.

The topology of RMAN catalog and Oracle Database

What’s Required?

RMAN Client

The RMAN client you use for the backup must be the same version as the target database. I find it easiest to always use the RMAN executable from the same Oracle Home as the target database.

Previously, this was not a requirement, but it is in current versions.

Catalog Schema

The catalog schema version must be the same or higher compared to the RMAN client. When you upgrade the target database and start to use a newer RMAN client, you also need to upgrade the catalog schema.

  • Upgrade the target database.
  • Start RMAN using the executable from the target database Oracle Home.
  • Connect to the target database and recovery catalog and perform the upgrade:
    $ $ORACLE_HOME/bin/rman
    
    RMAN> connect target /
    RMAN> connect catalog <catalog_schema>/<catalog_schema_password>@<catalog_database_alias>
    RMAN> upgrade catalog noprompt;
    

If you register multiple databases into the same catalog schema, and you have already upgraded another target database to the same version, then there is no need to upgrade the catalog schema again.

When you upgrade with AutoUpgrade, I find it easiest to allow AutoUpgrade to perform the upgrade for me.

Since the catalog schema is backward compatible, you can downgrade the target database and still use a higher version catalog schema. In case of a target database downgrade, no changes are needed in the catalog schema.

Catalog Database

As long as the catalog database runs on a supported version, you should be home safe. Target databases on Oracle Database 19c support using catalog databases all the way back to 10.2.0.3. Hopefully, you don’t use such old versions anymore. The version of the catalog database does not have to match either the catalog schema or the target database.

You can upgrade the catalog database like any other database. Use AutoUpgrade in deploy mode, and that’s it. A catalog database requires no special attention. But you can always run $ORACLE_HOME/rdbms/admin/dbmsrmansys.sql in the catalog database to ensure all the prerequisites are met (plus dbmsrmanvpc.sql for VPC users).

If your catalog database is on Oracle Database 11g, there are a few details in My Oracle Support Doc ID 1970049.1 to be aware of.

In some situations, building a new catalog database is desirable. Like:

  • Your catalog database is on Standard Edition. Nowadays, a catalog database must be Enterprise Edition.
  • Your catalog database is very old and can’t be directly upgraded.

The IMPORT CATALOG command can move a recovery catalog or selected catalog schemas into a new catalog database.
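
A sketch; the schema names and connect strings are illustrative, and the destination catalog schema must already exist (created with CREATE CATALOG):

RMAN> CONNECT CATALOG rco_new/<password>@newcatdb
RMAN> IMPORT CATALOG rco_old/<password>@oldcatdb;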

Example

Given the above requirements, here is an example:

  • You have three target databases running on various releases, let’s say Oracle Database 19c, 12.1.0.2, and 11.2.0.4.
  • Those target databases must use a catalog schema matching their respective release. You have three catalog schemas in total:
    • One catalog schema is on catalog schema version 19
    • One catalog schema is on catalog schema version 12.1.0.2
    • One catalog schema is on catalog schema version 11.2.0.4
  • You have just one catalog database. The catalog database is on Oracle Database 21c.
  • You can store all three catalog schemas in the same catalog database. The catalog schema version and the catalog database release do not have to match.

Now imagine you upgrade the 11.2.0.4 database to 12.2.0.1. You must then upgrade its catalog schema to version 12.2.0.1 using the UPGRADE CATALOG command and switch to an RMAN client on 12.2.0.1.

What About Zero Data Loss Recovery Appliance?

If you are fortunate enough to have a Zero Data Loss Recovery Appliance (ZDLRA), you must not touch the catalog database yourself. All operations on the catalog database must happen via racli. When you update the appliance software, it upgrades the catalog database under the hood.

You can still upgrade the catalog schema using RMAN and the upgrade catalog command.

My Recommendation

  1. Keep your catalog database on Long-Term Support releases. At the time of writing, it means keeping your catalog database on Oracle Database 19c.
  2. Upgrade your catalog database to the next Long-Term Support release before you upgrade the target databases.
  3. Apply Release Updates regularly.

Thanks

Thanks to my good colleagues, Jony and Andrew, for their help and good pointers. Much appreciated.

Further Reading

Upgrading with Oracle Database Vault – AIOUG Follow-up

Last week I presented to AIOUG. My session was Upgrades and Migrations – What’s Cooking. I managed to answer all of the questions except one. As promised, here’s the answer.

Oracle Database Vault

One question came up on Oracle Database Vault. I couldn’t answer the question live. I knew we had recently made changes in this area, but the details were lost.

What do you need to consider when upgrading an Oracle Database that uses Oracle Database Vault?

First, when AutoUpgrade performs the pre-upgrade analysis, it will detect the presence of Oracle Database Vault. Information is written in the pre-upgrade summary reminding you to take due care.

You have two options when you upgrade to Oracle Database 19c or later:

  • Disable Oracle Database Vault during the upgrade
  • Or, grant the role DV_PATCH_ADMIN to SYS during the upgrade

You can find more information in Requirement for Upgrading Database with Database Vault (Doc ID 2757126.1).
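
As an illustration of the second option, here is a sketch; the grant must be issued by a user with the DV_OWNER role:

-- Before the upgrade:
GRANT DV_PATCH_ADMIN TO SYS;

-- After the upgrade:
REVOKE DV_PATCH_ADMIN FROM SYS;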

Thanks

Thanks to AIOUG for hosting my webinar. I really enjoy presenting to the community in India. The audience is always really engaging and asks a lot of questions.

Unfortunately, I have yet to have the opportunity to present in person in India. I hope to change that one day.

Upgrade, Data Guard and Downtime

How about downtime when you upgrade your Oracle Database with Data Guard?

Short answer: You should expect slightly more downtime. Unless you head into a rolling upgrade.

How Long Does An Upgrade Take?

If I had a dime for every time someone asked me this question, I would have bought a tractor, retired, and worked full-time at our farm.

What a handsome Ford

My boss, Mike, created a long blog post about it without giving a real answer. If he can’t estimate it, no one can!

But I will give it a try anyway. A ballpark figure is 15-45 minutes for the actual upgrade. Then add all the extra stuff that surrounds the upgrade.

Does Data Guard Change That?

Yes, it does. Extra downtime is needed.

You don’t add any extra time to the actual upgrade of the primary database. During the upgrade, redo transport is on, and the primary database ships redo to the standby database, but it has no significant impact on the duration of the upgrade.

But you do need a little extra time before and after the upgrade.

Task                                    Added Time
Restart Data Guard in new Oracle Home   5 min
Upgrade primary database                (no added time)
Verify standby has applied all redo     2 min
Optionally, test a switchover           5 min
TOTAL                                   7-12 min

If you follow the standby offline approach, you need more downtime. In that approach, you keep the standby database offline during the entire upgrade, and it needs time to catch up afterward. I would estimate it adds 5 min of extra downtime, totaling 12-17 min instead.

This is the best you can do with a physical standby database. But …

Rolling Upgrades

If you want to upgrade with almost no downtime at all, you can do rolling upgrades. A rolling upgrade uses DBMS_ROLLING to reduce the downtime for an upgrade to the time it takes to perform a regular switchover.

Under the hood, a rolling upgrade uses Data Guard as well. This blog post series is about upgrading with physical standby databases, but a rolling upgrade requires a logical standby database. It’s out of scope for this blog post series.
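
Still, to give you an idea, here is a minimal sketch of the DBMS_ROLLING flow, assuming a physical standby named STBY that becomes the transient logical standby; the name is illustrative:

-- On the primary: designate the future primary and build the plan
EXEC DBMS_ROLLING.INIT_PLAN(future_primary => 'STBY');
EXEC DBMS_ROLLING.BUILD_PLAN;

-- Convert STBY to a transient logical standby and start the plan
EXEC DBMS_ROLLING.START_PLAN;

-- Upgrade STBY to the new release, then switch roles
EXEC DBMS_ROLLING.SWITCHOVER;

-- Finalize; the old primary is converted and caught up with redo
EXEC DBMS_ROLLING.FINISH_PLAN;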

Why don’t we all do rolling upgrades?

  • A logical standby database has more restrictions than a physical one.
  • A logical standby database uses SQL apply instead of redo apply. SQL apply is not as efficient as redo apply and can become a performance bottleneck.
  • It’s more complicated.

But rolling upgrades are really cool.

You can find more details in our webinar How Low Can You Go? Zero Downtime Operations. Slides and recording are available on demand.


Upgrade, Data Guard, and Restore Points

Flashback Database is the best way to protect your Oracle Database during upgrade. It requires that you create restore points before the upgrade. If the upgrade fails, getting back to the starting point is easy and fast.

Enterprise Edition

It requires Enterprise Edition, but so does Data Guard. So it is not an issue here.

Guaranteed Restore Points

The restore points that you create to protect your Oracle Database upgrade should be guaranteed restore points. This ensures that you can always go back to the starting point. If you run out of disk space in your Fast Recovery Area (FRA), the database halts rather than jeopardizing your fallback plan. With regular restore points, the database instead overwrites the oldest flashback logs to stay operational, which is undesirable here because you are relying on the restore point as your fallback.

Order of Creation

Always follow this order when creating restore points:

  1. All standby databases
  2. Primary database

The SCN of the restore points on the standby database must be the same or lower than on the primary database.

Imagine this scenario:

SCN   Primary                 Standby
100                           Restore point created
105   Restore point created
110   Upgrade starts
200   Upgrade fails

To flash back:

  1. Flash back the standby database to SCN 100.
  2. Flash back the primary database to SCN 105.
  3. Open the primary database with RESETLOGS. This creates a new incarnation as of SCN 105.
  4. Roll forward the standby database from SCN 101 to 105 and follow the new incarnation.

If the restore point on the standby database was created at SCN 106, then the primary database is already at a new incarnation (as of SCN 105). The standby database can’t follow and needs to be completely rebuilt.

Primary Database

When you upgrade with AutoUpgrade, it will create a guaranteed restore point for you. It is the default behavior.

The restore point created by AutoUpgrade protects the primary database only.

Standby Database

Always create the restore points on the standby database manually.

From the docs:

Restore points that are created on a primary database are automatically replicated to the standby database. The restore points created on the standby database are called replicated restore points. Irrespective of whether a restore point on the primary database is a guaranteed restore point or a normal restore point, the corresponding replicated restore point is always a normal restore point.

By design, replicated restore points are not guaranteed, even if the corresponding restore point on the primary database was. When you protect a standby database during upgrade, you want the restore point to be guaranteed; thus, you must create it manually.

To create a restore point on the standby database, you must cancel redo apply:

SQL> alter database recover managed standby database cancel;
SQL> create restore point STBY_GRP guarantee flashback database;
SQL> alter database recover managed standby database disconnect from session;

If you forget to cancel redo apply, you run into:

SQL> create restore point STBY_GRP guarantee flashback database;
create restore point STBY_GRP guarantee flashback database
*
ERROR at line 1:
ORA-38784: Cannot create restore point 'STBY_GRP'.
ORA-01153: an incompatible media recovery is active

Remove Restore Points

Don’t forget to remove the restore points after the upgrade. Do so when you have run your tests, and you are sure you won’t go back to the old release:

SQL> drop restore point STBY_GRP;
  • The order of removal is not important
  • It does not require downtime to remove the restore points

If you instruct AutoUpgrade to remove the restore point after upgrade (drop_grp_after_upgrade), it will happen only on the primary database. You must manually remove the restore points from the standby database.

Failure to remove the restore points will eventually fill up the FRA, bringing the database to a complete halt.
