Downgrade and Data Guard

Oracle Database supports downgrading, and it is a cool fallback mechanism that you can use even after go-live. Plus, Data Guard plays nicely with downgrade: if you follow the correct procedure, you don't have to rebuild the standby database afterward.

When To Use Downgrade

You should only consider downgrading after go-live, when users have already made changes to your Oracle Database. In that case, you can downgrade to the previous release without any data loss. Before go-live, Flashback Database is our preferred fallback mechanism.

A comparison between the two:

Flashback Database | Downgrade
------------------ | ---------
Data loss | No data loss
Use before go-live | Use after go-live
After flashback, database is identical with before-upgrade state | After downgrade, database is compatible with before-upgrade state, but not identical
Requires Enterprise Edition | Works in all editions
Preferred method |

General Considerations

It is a requirement that you have not changed the COMPATIBLE parameter. As soon as you change COMPATIBLE after upgrade, you can no longer downgrade your Oracle Database. If you have already changed the COMPATIBLE parameter, you must use other fallback methods like Data Pump or RMAN restore.
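A quick way to verify that downgrade is still an option is to check the current setting, for example:

    show parameter compatible

    select name, value
    from   v$parameter
    where  name = 'compatible';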

The old release Oracle Home must still exist on the primary and standby hosts. I recommend that you keep those homes until you are absolutely sure you will not downgrade your Oracle Database.
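If you want to verify that the old homes are still in place, a simple check on each host could look like this (the Oracle Home path and inventory location below are examples only; adjust them to your installation):

    # Verify the old Oracle Home directory still exists (example path)
    ls -d /u01/app/oracle/product/12.1.0.2/dbhome_1

    # Optionally, confirm it is still registered in the central inventory
    grep "HOME NAME" /u01/app/oraInventory/ContentsXML/inventory.xml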

When you downgrade your database, I recommend leaving your Grid Infrastructure at the new release. Don’t downgrade Grid Infrastructure as well. Ideally, you upgraded Grid Infrastructure to the new release in advance in a previous maintenance window. Thus, you know it can handle the old release of the database. Save yourself the added complexity of also downgrading Grid Infrastructure.

Data Guard broker does not support going back to a previous version. You must disable the broker before downgrade and afterward create a new configuration or restore broker configuration files from the old release.
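For example, you could note the broker configuration file locations and stop the broker on both the primary and the standby before you start; a minimal sketch (restore or recreate the configuration afterward as described above):

    -- Note where the broker configuration files live
    show parameter dg_broker_config_file

    -- Stop the broker before the downgrade
    alter system set dg_broker_start=false scope=both;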

The following works for Data Guard configuration with a physical standby database. Other considerations and a different procedure apply if you have a logical standby database.

Downgrade

To downgrade an Oracle Database protected by Data Guard the following applies:

  • You can downgrade hours, days, or even months after the upgrade, as long as you haven't changed the COMPATIBLE parameter.
  • You must mount the standby database and start redo apply in real-time apply mode. Keep it running during the entire process.
  • You must not open the standby database until the end of the process.
  • The downgrade happens on the primary database. A downgrade will make changes to the data dictionary. Those changes are applied on the standby via redo.
  • A downgrade with a standby database generally follows the same procedure as a regular database. However, at the end of each step, you must ensure that the standby database has applied all the redo generated by the primary database during that step. I usually issue a few log switches and check the current sequence (SEQUENCE#) on the primary database. Then I wait for that sequence to be applied on the standby database. A sketch of these checks follows the list.
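A minimal sketch of those checks, assuming a physical standby with standby redo logs (recent releases then use real-time apply by default when you start managed recovery):

    -- On the standby: mount and start redo apply
    startup mount
    alter database recover managed standby database disconnect from session;

    -- On the primary, at the end of each step: force a log switch and note the sequence
    alter system archive log current;
    select thread#, max(sequence#) from v$archived_log group by thread#;

    -- On the standby: wait until the same sequence is reported as applied
    select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;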

Check our video on YouTube for essential information on downgrade and flip through the slides.

How To Downgrade

AutoUpgrade can’t perform downgrades. You need to do this manually.

In the following, $NEW_ORACLE_HOME refers to the release you were upgrading to (e.g., 19c), and $OLD_ORACLE_HOME refers to the release you upgraded from (e.g., 12.1.0.2).
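As a sketch, the environment variables could be set like this (the paths are examples only; use your actual Oracle Homes):

    export OLD_ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
    export NEW_ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
    export ORACLE_UNQNAME=<your DB_UNIQUE_NAME>   # srvctl config database lists it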

  1. Follow the pre-downgrade instructions.
  2. Set the environment in your session to the new Oracle Home.
  3. For RAC, set the CLUSTER_DATABASE parameter to FALSE:
    alter system set cluster_database=false scope=spfile sid='*';
    
  4. Stop the primary database:
    srvctl stop database -d $ORACLE_UNQNAME
    
  5. Start the primary database (if RAC, just one instance) in downgrade mode:
    startup downgrade
    
  6. Set the executable flag on the downgrade script in the new Oracle Home:
    chmod +x $ORACLE_HOME/bin/dbdowngrade
    
  7. Start the downgrade script on the primary database. The environment is still set to the new release Oracle Home.
    $ORACLE_HOME/bin/dbdowngrade
    
  8. Important: Ensure that the standby database has applied all redo from the downgrade operation. Perform a few log switches and note the sequence number. Ensure the standby database has applied that sequence.
  9. Shut down primary database and standby database (if RAC, all instances).
  10. Downgrade the standby database in Grid Infrastructure. This tells Grid Infrastructure to start the database in the old Oracle Home (in this example, it is 12.1.0.2):
    $ORACLE_HOME/bin/srvctl downgrade database \
      -db $ORACLE_UNQNAME \
      -oraclehome $OLD_ORACLE_HOME \
      -targetversion 12.1.0.2
    
  11. Important: Switch the environment in your session to the old Oracle Home.
  12. Start the standby database (if RAC, all instances).
  13. Start redo apply:
    alter database recover managed standby database disconnect from session;
    
  14. Start primary database in upgrade mode (if RAC, only one instance):
    startup upgrade
    
  15. Finish the downgrade:
    set echo on termout on serveroutput on timing on
    spool catrelod.log
    @?/rdbms/admin/catrelod.sql
    
  16. Recompile:
    @?/rdbms/admin/utlrp.sql
    
  17. Set the CLUSTER_DATABASE parameter back to TRUE and shut down:
    alter system set cluster_database=true scope=spfile sid='*';
    shutdown immediate
    
  18. Downgrade the primary database in Grid Infrastructure. Grid Infrastructure will now start the database in the old Oracle Home (in this example it is 12.1.0.2):
    $NEW_ORACLE_HOME/bin/srvctl downgrade database \
      -db $ORACLE_UNQNAME \
      -oraclehome $OLD_ORACLE_HOME \
      -targetversion 12.1.0.2
    
  19. Start the primary database:
    $ORACLE_HOME/bin/srvctl start database -d $ORACLE_UNQNAME
    
  20. Ensure all components are either VALID or OPTION OFF:
    select comp_id, status
    from   dba_registry
    where  status not in ('VALID', 'OPTION OFF');
    
  21. Important: Ensure that the standby database has applied all redo from the downgrade operation. Perform a few log switches and note the sequence number. Ensure the standby database has applied that sequence.
  22. Start Data Guard broker on the primary and standby databases:
    • Either restore the broker config files from the old release, and start Data Guard broker.
    • Or, start Data Guard broker and recreate the configuration (a sketch follows this list).
  23. Important: Ensure that your Data Guard configuration works. Use the validate database command in Data Guard broker on all databases and ensure they are ready for switchover.
  24. Optionally, test your Data Guard by doing a switchover.
  25. Perform the post-downgrade instructions.
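If you choose to recreate the broker configuration (step 22) rather than restore the old configuration files, a sketch could look like the following. The configuration name, database names, and connect identifiers are examples only:

    -- On primary and standby (SQL*Plus): start the broker again
    alter system set dg_broker_start=true scope=both;

    -- Then in DGMGRL (connected to the primary): recreate and validate the configuration
    DGMGRL> create configuration 'dg_config' as primary database is 'proddb' connect identifier is 'proddb';
    DGMGRL> add database 'proddb_stby' as connect identifier is 'proddb_stby' maintained as physical;
    DGMGRL> enable configuration;
    DGMGRL> show configuration;
    DGMGRL> validate database 'proddb';
    DGMGRL> validate database 'proddb_stby';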

That’s it!

CDB

No specific Data Guard-related changes. The above procedure is written for a non-CDB database. The procedure is slightly different for a CDB; check the documentation or the demo below.
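One notable difference in a CDB is that catrelod.sql must run in all containers, typically driven by catcon.pl; a hedged sketch (check the documentation for the exact options in your release):

    # In a CDB, run catrelod.sql in all containers with catcon.pl (old Oracle Home environment set)
    cd $OLD_ORACLE_HOME/rdbms/admin
    $OLD_ORACLE_HOME/perl/bin/perl catcon.pl -n 1 -b catrelod -d $OLD_ORACLE_HOME/rdbms/admin -l /tmp catrelod.sql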

RAC

On the standby database, you can leave all instances running, and you don’t need to change CLUSTER_DATABASE.

Demo

Downgrade of a CDB in a RAC and Data Guard environment:

Other Blog Posts in This Series

Upgrading in the cloud – VM DB Systems – What about downgrade?

In a previous blog post I showed how you could upgrade a 12.2 PDB by plugging it into a 19c CDB. But what about downgrade? Yeah, downgrade. You know, that really cool feature that you never practice, but you know you should.

In the previous blog post, I used the CDB that gets automatically created when you deploy a new 19c VM DB System, and it comes with COMPATIBLE set to 19.0.0 – the default. When you provision a new VM DB System there is no way to control that parameter. Thus, when I plug my old release PDB into the new release CDB, the COMPATIBLE parameter is automatically raised, and I have lost the possibility of doing a downgrade. The only option to get back to the old release would be a Data Pump export.

If you want to preserve the possibility of doing a database downgrade, I strongly recommend you switch to Bare Metal DB Systems or ExaCS which are much more flexible. But if you insist on using VM DB Systems there is an option – it is cumbersome – but doable. And believe it or not – after working with Oracle Database for so many years, it was the first time ever that I had to do a downgrade – not even in a lab or a test environment (which became fairly obvious after I had asked for advice the 100th time that day).

Drop pre-created 19c database

To get a 19c CDB with a non-default COMPATIBLE setting we will drop the pre-created database and replace it with a backup that has the proper COMPATIBLE setting. This is supported and the same approach is used in the whitepaper “Hybrid Data Guard to Oracle Cloud Infrastructure”. It is a good read and it has a lot of script examples that I stole… oh… got inspired by.

Connect to the new release VM DB System and ensure that the environment variable ORACLE_UNQNAME is set to the DB_UNIQUE_NAME of the database:

echo $ORACLE_UNQNAME

If it is not set, you can get it from srvctl:

srvctl config database
export ORACLE_UNQNAME=...

Generate a script that can delete all data files, temp files, redo log files, and control files:

SET HEADING OFF LINESIZE 999 PAGESIZE 999 FEEDBACK OFF TRIMSPOOL ON TIMING OFF
SPOOL /tmp/db_replace_files.lst
SELECT 'asmcmd rm ' || name   FROM V$DATAFILE
UNION ALL
SELECT 'asmcmd rm ' || name   FROM V$TEMPFILE
UNION ALL
SELECT 'asmcmd rm ' || member FROM V$LOGFILE
UNION ALL
SELECT 'asmcmd rm ' || name   FROM V$CONTROLFILE;
SPOOL OFF
host chmod 777 /tmp/db_replace_files.lst

You must edit the script and get rid of the unneeded lines. Next, stop the database:

srvctl stop database -d $ORACLE_UNQNAME -o immediate

And log on as grid and delete the files using the script we just created:

. /tmp/db_replace_files.lst

As oracle, restart the database instance in NOMOUNT mode (we can't really go further since we nuked the control files) and set COMPATIBLE to the same setting as the old release CDB.

srvctl start database -db $ORACLE_UNQNAME -o NOMOUNT

sql / as sysdba
SQL> ALTER SYSTEM SET COMPATIBLE='12.2.0' SCOPE=SPFILE;
SQL> CREATE PFILE FROM SPFILE;

srvctl stop database -db $ORACLE_UNQNAME -o immediate
srvctl start database -db $ORACLE_UNQNAME -o nomount

sql / as sysdba
SQL> SHOW PARAMETER COMPATIBLE

Now we have a new release instance running with the old COMPATIBLE setting. Obviously, it is of no use – yet.

[Screenshot: starting a 19c instance with a lower COMPATIBLE setting]

Create a backup of old release CDB

I will use the old release CDB as the source for my new release CDB and thus preserve my COMPATIBLE setting. Obviously, the old release CDB must be upgraded to the new release, so let us first ensure that the source CDB can actually be upgraded. Use AutoUpgrade in ANALYZE and FIXUPS mode, which is described in a previous blog post. We must execute the ANALYZE and FIXUPS modes on the source system because the target system will only be able to open the database in UPGRADE mode, and these steps must be executed while the database is running in normal mode.
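As a sketch, the AutoUpgrade calls on the source system could look like this; the config file name and its entries are examples only:

# Example config file (upg.cfg) - adjust homes, SID, and log directory
# global.autoupg_log_dir=/u01/app/oracle/autoupgrade
# upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1
# upg1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
# upg1.sid=CDB1

java -jar autoupgrade.jar -config upg.cfg -mode analyze
java -jar autoupgrade.jar -config upg.cfg -mode fixups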

Next, I will create a File Storage Service that I can use to share files between the two VM DB Systems. The File Storage Service is really nice because the transfer speed in and out of the service depends on the network bandwidth of your VM DB System. So, if you add more CPUs to the system, you get more network bandwidth to the service automatically. It is really easy to create a File Storage Service and it is very well documented, so I will skip that part here. After that you can mount the file system in your systems (as opc):

sudo mkdir -p /mnt/db-downgrade-122
sudo mount x.x.x.x:/db-downgrade-122 /mnt/db-downgrade-122
sudo chmod 777 /mnt/db-downgrade-122

I can now take a backup of the CDB and store it directly in my NFS mount point:

rman target / 

RMAN> BACKUP DATABASE ROOT FORMAT '/mnt/db-downgrade-122/cdbroot_%U' PLUGGABLE DATABASE 'PDB$SEED' FORMAT '/mnt/db-downgrade-122/pdbseed_%U' PLUS ARCHIVELOG FORMAT '/mnt/db-downgrade-122/arch_%U';
RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/mnt/db-downgrade-122/cf_%U';

Since the database is encrypted, we also need a copy of the keystore (or wallet). An easy solution could be to put the keystore files into the File Storage Service, but a safer approach is to transfer the files directly using scp. Although the keystore is protected by a password, it is still safer to keep them separate.

At the time of writing, in OCI you can find the location of the keystore in sqlnet.ora.

cat $ORACLE_HOME/network/admin/sqlnet.ora | grep -i encryption_wallet

But that will change at some point in time because as of Oracle Database release 19c the sqlnet.ora parameter ENCRYPTION_WALLET_LOCATION is deprecated. You might have to look at the database parameter WALLET_ROOT instead.
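On a 19c system you can also check the database parameters directly; a small sketch:

show parameter wallet_root
select name, value from v$parameter where name in ('wallet_root', 'tde_configuration');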

For now, just note down the location of the keystore.

Restore old release CDB and upgrade

Now we can restore the backup of the 12.2 database using the 19c binaries. This is supported, and in fact newer releases of RMAN can always restore lower release backups. However, normally RMAN will try to open the database in normal mode, which we can't do because of the version mismatch. I will use the NOOPEN keyword, which causes RMAN to leave the database in MOUNT mode so I can manually open the database with RESETLOGS and in UPGRADE mode.

But first we must copy the keystore files from the source database to the new release DB System. In my example I am also copying the auto-login keystore which you shouldn’t do if you are using local auto-login keystores. In that situation the auto-login keystore should be re-created at the new release system:

cd <keystore-location>
scp -i <ssh-key> oracle@<source-host>:<keystore-location>/cwallet.sso cwallet.sso
scp -i <ssh-key> oracle@<source-host>:<keystore-location>/ewallet.p12 ewallet.p12

Now let’s do the restore and open the database in upgrade mode:

rman auxiliary /

RMAN> DUPLICATE DATABASE TO CDB1 AS ENCRYPTED NOOPEN SKIP PLUGGABLE DATABASE SALES BACKUP LOCATION '/mnt/db-downgrade-122/';

And in the end, you can see that RMAN does not open the database.

[Screenshot: using the NOOPEN keyword prevents RMAN from opening the database at the end of the restore and recover operation]

Then, you can open the database in UPGRADE mode and with the RESETLOGS option to complete the restore. Also, drop the skipped pluggable databases from the data dictionary:

ALTER DATABASE OPEN RESETLOGS UPGRADE;
DROP PLUGGABLE DATABASE SALES INCLUDING DATAFILES;

Finally, you can use AutoUpgrade in UPGRADE mode to upgrade the CDB to the new release. You now have a new target system running on 19c but with a lower COMPATIBLE setting.
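A sketch only, reusing the example config file from earlier; AutoUpgrade's upgrade mode skips the pre-upgrade fixups, which we already ran on the source system:

java -jar autoupgrade.jar -config upg.cfg -mode upgrade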

If you want to know more about the UPGRADE mode, have a look in the documentation.

Downgrading a PDB

Previously, we laid the groundwork for being able to do a downgrade. For the following, I am assuming that you have already upgraded your PDB to 19c and now ended up in a big doo doo and have to downgrade.

Downgrading in VM DB Systems is also slightly more complicated than on other systems. Remember that VM DB Systems only support one Oracle Home (the one that comes deployed automatically), which means that we must move the database back to the source system. For that operation we can't use Data Guard, RMAN, or any other fancy approach because they only work to the same or a newer release. So, we will have to do an old-school cold copy of the database – and that requires additional downtime. But let's get started!

I will create a file that gives me all the commands I need to copy the data files out of ASM and into my File Storage Service (you could also scp them directly to the old release system):

SET HEADING OFF LINESIZE 999 PAGESIZE 999 FEEDBACK OFF TRIMSPOOL ON TIMING OFF
SPOOL /tmp/db_downgrade_files.lst
SELECT 'asmcmd cp ' || name || ' ''/mnt/dbbackupstaging' || SUBSTR(name, INSTR(name, '/', -1 )) || '''' FROM V$DATAFILE
UNION
SELECT 'chown oracle:oinstall /mnt/dbbackupstaging' || SUBSTR(name, INSTR(name, '/', -1 )) FROM V$DATAFILE ORDER BY 1;
SPOOL OFF

And then I can proceed with the actual downgrade. I need to ensure that the unified audit trail is emptied before the downgrade:

ALTER PLUGGABLE DATABASE SALES CLOSE;
ALTER PLUGGABLE DATABASE SALES OPEN DOWNGRADE;
ALTER SESSION SET CONTAINER=SALES;
EXEC DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, FALSE);
SPOOL /tmp/db_downgrade.lst
SET TERMOUT ON TIMING ON SERVEROUT ON ECHO ON
@?/rdbms/admin/catdwgrd.sql
SPOOL OFF

Unplug the PDB. Because the PDB is encrypted, I have to specify a password that protects the sensitive information inside the manifest file:

ALTER SESSION SET CONTAINER=CDB$ROOT;
ALTER PLUGGABLE DATABASE SALES CLOSE;
ALTER PLUGGABLE DATABASE SALES UNPLUG INTO '/mnt/db-downgrade-122/manifest_sales.xml' ENCRYPT USING [a-secret-password];

Now we can copy the data files from the local storage onto the File Storage Service so we can use them at the source system. Use the commands that we generated previously:

asmcmd ...
chown ...

Now switch to the old release system and create the PDB from the manifest file. I will use the data files right off the File Storage Service, and optionally I can move them afterwards – as an online operation (you might not want to do that, but I ran out of disk space):

CREATE PLUGGABLE DATABASE SALES USING '/mnt/db-downgrade-122/manifest_sales.xml' SOURCE_FILE_DIRECTORY='/mnt/db-downgrade-122' NOCOPY KEYSTORE IDENTIFIED BY [keystore-password] DECRYPT USING [a-secret-password];

Open the PDB in UPGRADE mode and open the keystore:

ALTER PLUGGABLE DATABASE SALES OPEN UPGRADE;
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE CONTAINER=all;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY [keystore-password] CONTAINER=all;

Switch to the SALES PDB and complete the downgrade:

ALTER SESSION SET CONTAINER=SALES;
SET TERMOUT ON ECHO ON TIMING ON
SPOOL /home/oracle/sales_catreload.log
@?/rdbms/admin/catrelod.sql
SPOOL OFF

Restart, recompile, and gather fresh statistics:

ALTER PLUGGABLE DATABASE SALES CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE SALES OPEN;
@?/rdbms/admin/utlrp.sql
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECT_STATS;

Check the state of the Oracle Data Dictionary:

SET SQLFORMAT ANSICONSOLE LINES 300
SELECT COMP_ID, COMP_NAME, VERSION, STATUS FROM DBA_REGISTRY ORDER BY MODIFIED;

And there you have it. Not exactly super easy, which is why I highly recommend you look at Bare Metal DB Systems or Exadata DB Systems if you need to be able to do downgrades (or be prepared to use Data Pump instead).
