Oracle Data Pump and Compression – Also Without a License

Whenever you use Data Pump to export from Oracle Database, you should use compression. It’s conveniently built into Data Pump.

Pros:

  • The dump file is much smaller:
    • Less disk space is needed.
    • Easier to transfer over the network.
  • Often it is faster to use compression when you measure the entire workflow (export, transfer, and import).
  • Imports are often faster because less data needs to be read from disk.

Cons:

  • Compression uses extra CPU during the export.
  • It requires a license for the Advanced Compression Option (more on that below).

How Do I Enable Data Pump Compression

You simply set the COMPRESSION option:

$ expdp ... compression=all

You use the COMPRESSION option only for exports. When you import, Data Pump handles it automatically.

You only need a license for Advanced Compression Option when you use compression during export. You don’t need a license to import a compressed dump file.
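
If you are unsure whether Data Pump compression has ever been used in a database (and thus whether the Advanced Compression Option comes into play), the feature usage view can give a hint. A minimal sketch; the broad LIKE filter is an assumption, so verify the exact feature names in your release:

SQL> select name, detected_usages, last_usage_date
     from   dba_feature_usage_statistics
     where  name like '%Compression%';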

Medium Is a Good Compression Algorithm

I recommend you use the medium compression algorithm:

$ expdp ... compression=all compression_algorithm=medium

Our experience and tests show that it offers the best balance between compression ratio and CPU usage.

Here are the results of a test my team did:

Algorithm   File Size (MB)   Compression Ratio   Elapsed Time
NONE        5,800            1.0                 2m 33s
BASIC       705              8.2                 3m 03s
LOW         870              6.6                 3m 11s
MEDIUM      701              8.2                 3m 01s
HIGH        509              11.3                12m 16s

I would recommend the high algorithm only if you need to transfer the dump file over a really slow network.

But I Don’t Have a License

gzip

You can still compress the dump file, just not with Data Pump. Use OS utilities instead. In that case, I recommend splitting the dump file into pieces. The pieces are easier to handle, and you can start transferring them as they are compressed:

$ expdp ... filesize=5G dumpfile=myexp%L.dmp
$ gzip -r /u01/app/oracle/dpdir

Now, you transfer the files, uncompress and import:

[target]$ gunzip -r /u01/app/oracle/dpdir
[target]$ impdp ...
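
If you want to overlap compression and transfer, you can compress each piece and ship it in the background as soon as it is done. A minimal sketch; the target host and paths are placeholders:

#Hypothetical: compress each piece and copy it in the background
for f in /u01/app/oracle/dpdir/myexp*.dmp; do
  (gzip "$f" && scp "$f.gz" target:/u01/app/oracle/dpdir/) &
done
wait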

rsync

Another option is to use rsync. It has an option to compress the data during the network transfer only:

$ expdp ... filesize=5G dumpfile=myexp%L.dmp
$ rsync -z ...
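
For reference, a complete rsync command could look like the sketch below; the -z flag compresses in transit only, and host and paths are placeholders:

$ rsync -avz --progress /u01/app/oracle/dpdir/ target:/u01/app/oracle/dpdir/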

Cheatsheet

If you have the proper license, use Data Pump compression during export:

$ expdp ... compression=all compression_algorithm=medium

If you don’t have a license, compress the dump file over the wire only:

$ rsync -z ...

Don’t combine Data Pump compression and gzip/rsync! Compressing compressed stuff is not a good idea.

Can I Run Datapatch When Users Are Connected

The short answer is: Yes! The longer answer is: Yes, but on very busy systems or in certain situations, you might experience a few hiccups.

The obvious place to look for the answer would be in the documentation. Unfortunately, there is no Patching Guide similar to the Upgrade Guide. The information in this blog post is pieced together from many different sources.

A few facts about patching with Datapatch:

  • The database must be open in read write mode.
  • You can’t run Datapatch on a physical standby database – even if it’s open (Active Data Guard).
  • A patch is not fully installed until you have executed Datapatch successfully.

How To

First, let me state that it is fully supported to run Datapatch on a running database with users connected.

The procedure:

  1. Install a new Oracle Home and use OPatch to apply the desired patches.
  2. Shut down the database.
  3. Restart the database in the new, patched Oracle Home.
  4. Downtime is over! Users are allowed to connect to the database.
  5. Execute ./datapatch -verbose.
  6. End of procedure. The patch is now fully applied.

Often users move step 4 to the end of the procedure, allowing connections only after Datapatch has completed. That's of course also perfectly fine, but it does extend the downtime and is often not needed.

What About RAC and Data Guard

The above procedure is exactly what happens in a rolling patch apply on a RAC database. When you perform a rolling patch apply on a RAC database, there is no downtime at all. You use opatchauto to patch a RAC database. opatchauto restarts all instances of the database in the patched Oracle Home in a rolling manner. Finally, it executes datapatch on the last node. Individual instances are down temporarily, but the database is always up.

It is a similar situation when you use the Standby-First Patch Apply. First, you restart all standby databases in the patched Oracle Home. Then, you perform a switchover and restart the former primary database in the patched Oracle Home. Finally, you execute datapatch to complete the patch installation. You must execute datapatch on the primary database.

Either way, don’t use Datapatch until all databases or instances run on the new, patched Oracle Home.
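
To check whether Datapatch has completed the SQL-level part of a patch installation, I can query the patch registry. A minimal sketch; the STATUS column exists in recent releases:

SQL> select patch_id, action, status, action_time
     from   dba_registry_sqlpatch
     order  by action_time;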

That’s It?

Yes, but I did write initially that there might be hiccups.

Waits

Datapatch connects to the database like any other session to make changes inside the database. These changes could be:

  • Creating new tables
  • Altering existing tables
  • Creating or altering views
  • Recreating PL/SQL packages like DBMS_STATS

Imagine this scenario:

  1. Database is restarted in patched Oracle Home.
  2. A user connects and starts to use DBMS_STATS.
  3. You execute datapatch.
    1. DBMS_STATS must be recreated to fix a bug.
    2. Datapatch executes CREATE OR REPLACE PACKAGE SYS.DBMS_STATS .....
    3. The Datapatch session will go into a wait.
  4. User is done with DBMS_STATS.
  5. The Datapatch session will come out of wait and replace the package.

In this scenario, the patching procedure was prolonged due to the wait. But it was completed eventually.

Hangs

From time to time, we are told that Datapatch hangs. Most likely, it is not a real hang, but just a wait on a lock. You can connect to the database and identify the blocker. You might even want to kill the blocking session to allow Datapatch to do its work.
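
A minimal sketch to identify the blocker; it lists the sessions that are blocked and the session blocking them:

SQL> select sid, serial#, username, event, blocking_session
     from   v$session
     where  blocking_session is not null;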

Timeouts

What will happen in the above scenario if the user never releases the lock on DBMS_STATS? After a while, the DDL statement executed by Datapatch will error out:

ORA-04021: timeout occurred while waiting to lock object

To resolve this problem, restart Datapatch and ensure that there are no blocking sessions.

Really Busy Databases

I recommend patching at off-peak hours to reduce the likelihood of hitting the above problems.

If possible, you can also limit the activity in the database while you perform the patching. If your application is using e.g. DBMS_STATS and locking on that object is often a problem, you can hold off these sessions for a little while.

Similarly, if Advanced Queuing is causing problems, it might help to temporarily set aq_tm_processes to 0. Or, in the case of the scheduler, job_queue_processes.
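
A minimal sketch of what that could look like; remember to restore the original values after patching:

SQL> alter system set aq_tm_processes=0 scope=memory;
SQL> alter system set job_queue_processes=0 scope=memory;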

If nothing helps your situation, you can patch in restricted mode. But that means downtime:

  1. SQL> startup restrict
  2. ./datapatch -verbose
  3. SQL> alter system disable restricted session;

I don’t recommend starting in upgrade mode. To get out of upgrade mode, a database restart is needed, which extends the downtime window.

Datapatch And Resources

How many resources does Datapatch need? Should I be worried about Datapatch depleting the system?

No, you should not. The changes that Datapatch needs to make are not resource-intensive. However, a consequence of the DDL statements might be object invalidation. But even here, you should not worry. Datapatch will automatically recompile any ORACLE_MAINTAINED object that was invalidated by the patch apply. But the recompilation happens serially, i.e., fewer resources are needed.

Of course, if your system is running at 99% capacity, it might be a problem. On the other hand, if your system is at 99%, patching problems are probably the least of your worries.

What About OJVM

If you are using OJVM and you apply the OJVM bundle patch, things are a little different.

Release                      RAC Rolling   Standby-First   Datapatch
Oracle Database 21c          Fully         No              No Datapatch downtime.
Oracle Database 19c + 18c    Partial       No              No Datapatch downtime, but the Java system is patched, which requires a ~10 second outage. Connected clients using Java will receive ORA-29548.
Oracle Database 12.2 + 12.1  No            No              Datapatch must execute in upgrade mode.
Oracle Database 11.2.0.4     No            No              Similar to 12.2 and 12.1, except you don’t use Datapatch.

Mike Dietrich also has a good blog that you might want to read: Do you need STARTUP UPGRADE for OJVM?

Recommendations

  • Before starting the patching procedure and downtime, I recommend you recompile invalid objects.
    SQL> @?/rdbms/admin/utlrp
    
  • Always execute Datapatch with the -verbose flag. This will give you much better information about what is going on.
    $ $ORACLE_HOME/OPatch/datapatch -verbose
    
  • Always use the latest OPatch.
  • Always use out-of-place patching, even for RAC databases.

Conclusion

Go ahead and patch your database with Datapatch while users are connected.

Installing Oracle Database 19c and All the Things to Put on Top

When you prepare for patching or upgrading Oracle Database 19c, you must also prepare an Oracle Home. Installing the Oracle Home is easy, but there is more to it.

Out-of-place Installation

I always use out-of-place installation. I install a new, fresh Oracle Home. I will move the databases into that Oracle Home as I upgrade or patch.

The alternative, in-place installation, leads to more downtime, is more error-prone, and makes fallback more complicated. In addition, in-place installation will gradually slow down patching; as Mike Dietrich describes in Binary patching is slow because of the inventory.

Download and Prepare Oracle Home

First, I download the base release from Oracle Software Delivery Cloud, aka e-delivery.

Find REL: Oracle Database 19.3.0.0.0 – Long Term Release, the right platform, and download.

Extract the zip file into a new Oracle Home location:

export NEW_ORACLE_HOME=<path>
mkdir -p $NEW_ORACLE_HOME
cd $NEW_ORACLE_HOME
unzip -oq /tmp/LINUX.X64_193000_db_home.zip
rm /tmp/LINUX.X64_193000_db_home.zip

Don’t run the installer yet.

Clone Existing Oracle Home

I could clone an existing Oracle Home and then just apply the new patches. But that would make me susceptible to the same inventory issue described above for in-place installation.

Update OPatch

OPatch is needed later on to apply patches to the new Oracle Home. Get the latest version and install it into the new Oracle Home:

rm -rf $NEW_ORACLE_HOME/OPatch
cd $NEW_ORACLE_HOME
unzip -oq /tmp/<opatch_zip_file>
rm /tmp/<opatch_zip_file>

Patches

Now, I will determine which patches to apply to the Oracle Home.

  • Start by getting the latest Release Update. I really mean the latest. I have helped too many customers with issues, only to find out the issue is already solved in a later Release Update. If your database has JAVAVM installed, then get the combo patch.
  • Review the list of important one-off patches for the specific Release Update. The list contains important fixes that haven’t made it into a Release Update yet. I don’t need to get all of them, but based on my knowledge of my database, I can cherry-pick those that could be relevant.
  • If I am using Data Pump, I get the Data Pump bundle patch. Data Pump fixes rarely make it into Release Updates because they are not RAC-Rolling installable, which is a clear requirement for inclusion in a Release Update.
  • If my databases use time zone files that are newer than version 32, then I must also apply the matching time zone patch. The default time zone file in Oracle Database 19c is version 32, and the base release contains all previous versions. To check the time zone file version in a database:
    SQL> select * from v$timezone_file;
    
  • If I am using GoldenGate, I get the GoldenGate bundle patch.
  • If my database uses OJVM (see appendix), I get the OJVM patch that matches the Release Update I am using. I can also get the OJVM patch as a combo patch together with the Release Update.

Unzip

Now that I have downloaded a number of zip files, I go ahead and unzip the files into separate directories. In the below example, I am using 19.16 Release Update, Data Pump bundle patch and a time zone patch:

#Release Update 19.16.0
mkdir -p $NEW_ORACLE_HOME/patch/p34133642
cd $NEW_ORACLE_HOME/patch/p34133642
unzip -oq /tmp/p34133642_190000_Linux-x86-64.zip
rm /tmp/p34133642_190000_Linux-x86-64.zip

#Data Pump bundle patch
mkdir -p $NEW_ORACLE_HOME/patch/p34294932
cd $NEW_ORACLE_HOME/patch/p34294932
unzip -oq /tmp/p34294932_1916000DBRU_Generic.zip
rm /tmp/p34294932_1916000DBRU_Generic.zip

#Time zone patch version 37
mkdir -p $NEW_ORACLE_HOME/patch/p33613829
cd $NEW_ORACLE_HOME/patch/p33613829
unzip -oq /tmp/DSTV37_p33613829_190000_Linux-x86-64.zip
rm /tmp/DSTV37_p33613829_190000_Linux-x86-64.zip

Install

Now, I can install the Oracle Home and apply all the patches in one operation. Mike has a really good description of the functionality and a demo.

I do a silent installation using a response file. Notice how I am applying the patches during the installation using -applyRU and -applyOneOffs:

export ORACLE_BASE=<path_to_oracle_base>
export ORACLE_HOME=<path_to_oracle_home>
#Path to inventory is most likely /u01/app/oraInventory
export ORA_INVENTORY=<path_to_inventory>
cd $ORACLE_HOME
./runInstaller -ignorePrereqFailure -waitforcompletion -silent \
   -responseFile $ORACLE_HOME/install/response/db_install.rsp \
   -applyRU patch/p34133642/34133642 \
   -applyOneOffs patch/p34294932/34294932,patch/p33613829/33613829 \
   oracle.install.option=INSTALL_DB_SWONLY \
   UNIX_GROUP_NAME=oinstall \
   INVENTORY_LOCATION=$ORA_INVENTORY \
   SELECTED_LANGUAGES=en,en_GB \
   ORACLE_HOME=$ORACLE_HOME \
   ORACLE_BASE=$ORACLE_BASE \
   oracle.install.db.InstallEdition=EE \
   oracle.install.db.OSDBA_GROUP=dba \
   oracle.install.db.OSBACKUPDBA_GROUP=dba \
   oracle.install.db.OSDGDBA_GROUP=dba \
   oracle.install.db.OSKMDBA_GROUP=dba \
   oracle.install.db.OSRACDBA_GROUP=dba \
   oracle.install.db.isRACOneInstall=false \
   oracle.install.db.rac.serverpoolCardinality=0 \
   oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \
   oracle.install.db.ConfigureAsContainerDB=false \
   SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
   DECLINE_SECURITY_UPDATES=true

You can read more about silent installation on oracle-base.com. That’s where I got my inspiration. The response file db_install.rsp is the default one that comes with the Oracle Home. I don’t change anything in it.

Finally, I execute root.sh as root:

$ORACLE_HOME/root.sh
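
Afterward, I can verify that the Release Update and the one-off patches made it into the new home:

$ORACLE_HOME/OPatch/opatch lspatches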

AutoUpgrade

Download the latest version of AutoUpgrade, and put it into $ORACLE_HOME/rdbms/admin.

Et Voilà

That’s it. I can now use the Oracle Home to upgrade or patch my Oracle Database 19c.

Appendix

Patches

As if the list of patches to apply wasn’t long enough. There are even more MOS notes!

The good news is that you don’t have to go through them, as long as you stay on the latest Release Update. If you check the notes, you will see that almost all the bugs are already included in a Release Update. That’s a pretty strong argument for always using the latest Release Update.

  • Things to Consider to Avoid Prominent Wrong Result Problems on 19C Proactively (Doc ID 2606585.1)
  • Things to Consider to Avoid Database Performance Problems on 19c (Doc ID 2773012.1)
  • Things to Consider to Avoid SQL Performance Problems on 19c (Doc ID 2773715.1)
  • Things to Consider to Avoid SQL Plan Management (SPM) Related Problems on 19c (Doc ID 2774029.1)

Grid Infrastructure

If Grid Infrastructure manages my database, I must remember to keep GI and database patch level in sync.

It Looks Complicated

It is a little too cumbersome. We know, and that’s why there are several initiatives to make it easier.

You could also look at Oracle Fleet Patching & Provisioning (FPP). Philippe Fierens is product manager for FPP. You can read his blog posts or reach out to him (he is a nice guy who takes every opportunity to talk about FPP).

OJVM

If your database is using OJVM, then you must also apply the OJVM patch to your Oracle Home. You can check it using:

select version, status from dba_registry where comp_id='JAVAVM';

I have seen many databases that had OJVM installed, but it was never used. In that case, you can remove the component from your database. Then you no longer need to apply the OJVM patch to your Oracle Home. Plus, it has the added benefit of making your upgrades faster.

Mike Dietrich has a good blog – the OJVM Patching Saga. Catchy title!

Upgrade, Data Guard and Downtime

How about downtime when you upgrade your Oracle Database with Data Guard?

Short answer: You should expect slightly more downtime. Unless you head into a rolling upgrade.

How Long Does An Upgrade Take?

If I had a dime for every time someone asked me this question, I would have bought a tractor, retired, and worked full-time at our farm.

My boss, Mike, created a long blog post about it without giving a real answer. If he can’t estimate it, no one can!

But I will give it a try anyway. A ballpark figure is 15-45 minutes for the actual upgrade. Then add all the extra stuff that surrounds the upgrade.

Does Data Guard Change That?

Yes, it does. Extra downtime is needed.

You don’t add any extra time to the actual upgrade of the primary database. During the upgrade, redo transport is on, and the primary database ships redo to the standby database, but it has no significant impact on the duration of the upgrade.

But you do need a little extra time before and after the upgrade.

Step                                       Added Time
Restart Data Guard in new Oracle Home      5 min
Upgrade primary database                   -
Verify standby has applied all redo        2 min
Optionally, test a switchover              5 min
TOTAL                                      7-12 min

If you follow the standby offline approach, you need more downtime. In that approach, you keep the standby database offline during the entire upgrade, and it needs time to catch up afterward. I would estimate it adds 5 min of extra downtime, totaling 12-17 min instead.

This is the best you can do with a physical standby database. But …

Rolling Upgrades

If you want to upgrade with almost no downtime at all, you can do rolling upgrades. A rolling upgrade uses DBMS_ROLLING to reduce the downtime for an upgrade to the time it takes to perform a regular switchover.

Under the hood, a rolling upgrade uses Data Guard as well. This blog post series is about upgrading with physical standby databases, but a rolling upgrade requires a logical standby database. It’s out of scope for this blog post series.

Why don’t we all do rolling upgrades?

  • A logical standby database has more restrictions than a physical one.
  • A logical standby database uses SQL apply instead of redo apply. SQL apply is not as efficient as redo apply and can become a performance bottleneck.
  • It’s more complicated.

But rolling upgrades are really cool, as you will see in this demo.

You can find more details in our webinar How Low Can You Go? Zero Downtime Operations. Slides and recording are available on demand.

Upgrade, Data Guard, and Restore Points

Flashback Database is the best way to protect your Oracle Database during upgrade. It requires that you create restore points before the upgrade. If the upgrade fails, getting back to the starting point is easy and fast.

Enterprise Edition

It requires Enterprise Edition, but so does Data Guard. So it is not an issue here.

Guaranteed Restore Points

The restore points that you create to protect your Oracle Database upgrade should be guaranteed restore points. This ensures that you can always go back to the starting point. With a guaranteed restore point, if you run out of disk space in your Fast Recovery Area (FRA), the database halts rather than jeopardize your fallback plan. With regular restore points, the database instead overwrites flashback logs to maintain operation, which is not desirable in this situation: you are relying on the restore point as your fallback.

Order of Creation

Always follow this order when creating restore points:

  1. All standby databases
  2. Primary database

The SCN of the restore points on the standby database must be the same or lower than on the primary database.

Imagine this scenario:

SCN   Primary                 Standby
100                           Restore point created
105   Restore point created
110   Upgrade starts
200   Upgrade fails

To flashback:

Step   Database   Action
1      Standby    Flash back standby database to SCN 100
2      Primary    Flash back primary database to SCN 105
3      Primary    Open primary database with RESETLOGS. Creates a new incarnation as of SCN 105
4      Standby    Roll forward standby database from SCN 101 to 105 and follow the new incarnation

If the restore point on the standby database was created at SCN 106, then the primary database is already at a new incarnation (as of SCN 105). The standby database can’t follow and needs to be completely rebuilt.
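
A quick way to verify the ordering is to compare the restore point SCNs on both databases before starting the upgrade. A minimal sketch:

SQL> select name, scn, guarantee_flashback_database from v$restore_point;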

Primary Database

When you upgrade with AutoUpgrade, it will create a guaranteed restore point for you. It is the default behavior.

The restore point created by AutoUpgrade protects the primary database only.

Standby Database

Always create the restore points on the standby database yourself.

From the docs:

Restore points that are created on a primary database are automatically replicated to the standby database. The restore points created on the standby database are called replicated restore points. Irrespective of whether a restore point on the primary database is a guaranteed restore point or a normal restore point, the corresponding replicated restore point is always a normal restore point.

By design, replicated restore points are never guaranteed, even if the restore point on the primary database is. When you protect a standby database during upgrade, you want the restore point to be guaranteed; thus, you must create it manually.

To create a restore point on the standby database, you must cancel redo apply:

SQL> alter database recover managed standby database cancel;
SQL> create restore point STBY_GRP guarantee flashback database;
SQL> alter database recover managed standby database disconnect from session;

If you forget to cancel redo apply, you run into:

SQL> create restore point STBY_GRP guarantee flashback database;
create restore point STBY_GRP guarantee flashback database
*
ERROR at line 1:
ORA-38784: Cannot create restore point 'STBY_GRP'.
ORA-01153: an incompatible media recovery is active

Remove Restore Points

Don’t forget to remove the restore points after the upgrade. Do so when you have run your tests, and you are sure you won’t go back to the old release:

SQL> drop restore point STBY_GRP;

  • The order of removal is not important.
  • It does not require downtime to remove the restore points.

If you instruct AutoUpgrade to remove the restore point after upgrade (drop_grp_after_upgrade), it will happen only on the primary database. You must manually remove the restore points from the standby database.

Failure to remove the restore points will eventually fill up the FRA, bringing the database to a complete halt.

How To Upgrade Data Guard – Standby Offline Method

Let me show you how to upgrade your database to Oracle Database 19c when it is protected by Data Guard. I will use AutoUpgrade and follow the Standby Offline method.

This is my demo environment:

  • Grid Infrastructure is managing the database.
  • Data Guard is configured using Data Guard broker.

Overall, the process looks like this: [diagram: overview of the upgrade with Data Guard]

This procedure starts right before I start AutoUpgrade in DEPLOY mode. Downtime has begun, and no users are connected to the database.

Before Upgrade

I always use the latest version of AutoUpgrade. Download it and put it into $ORACLE_HOME/rdbms/admin.

A newer version of AutoUpgrade can also upgrade to older database releases, so don’t worry if the AutoUpgrade version doesn’t match the Oracle Database release that you are upgrading to.

Disable Fast-Start Failover (FSFO)

Fast-Start Failover must be disabled during upgrade. I connect to Data Guard CLI (dgmgrl) and disable it:

DGMGRL> disable fast_start failover;

Restore Point

My standby database is protected against errors during the upgrade by keeping it offline. However, to be on the safe side, I also create a guaranteed restore point on the standby database:

STANDBY SQL> alter database recover managed standby database cancel;
STANDBY SQL> create restore point stdby_before_upg guarantee flashback database;
STANDBY SQL> alter database recover managed standby database disconnect from session;

Stop Data Guard

I shut down and disable the standby database. Disabling the database is, strictly speaking, not necessary, but it is a better-safe-than-sorry approach:

[oracle@standby]$ $ORACLE_HOME/bin/srvctl stop database -d $ORACLE_UNQNAME -stopoption immediate
[oracle@standby]$ $ORACLE_HOME/bin/srvctl disable database -d $ORACLE_UNQNAME

Upgrade

During the upgrade, I want to defer redo log shipping to the standby database. I use the AutoUpgrade config file parameter defer_standby_log_shipping (see appendix for complete config file):

upg1.defer_standby_log_shipping=yes

Next, I upgrade the primary database by starting AutoUpgrade in DEPLOY mode:

[oracle@primary]$ java -jar autoupgrade.jar -config DB.cfg -mode deploy

As the upgrade progresses, it makes a lot of changes to the database data dictionary. Those changes are written to redo, which will accumulate on the primary database. The database will not send the redo logs to the standby database (yet).

At the end of the upgrade, AutoUpgrade informs me that redo log transport has been deferred and that I must enable it again:

---- Data Guard broker configuration is disabled. Enable it once ready to sync up standby database  ----
Enable Data Guard from DB_BOSTON using the following DGMGRL command: 
EDIT DATABASE DB_BOSTON SET STATE=TRANSPORT-ON;

After the upgrade, you should perform the necessary tests to validate the new database release. You should continue only when you are convinced to go live on the new release.

Remember that the standby database was left behind before we started, so if all else fails, simply restart the standby database and connect your users to it.

After Upgrade

Restart Standby Database In New Oracle Home

  • Ideally, I have stored these files outside the Oracle Home; like ASM. In that case, I don’t need to do anything:

    • SPFile
    • Password file
    • Data Guard broker config files
    • Network files (sqlnet.ora, tnsnames.ora etc.)

    If I stored the files inside the Oracle Home, I must move them to the new Oracle Home (see appendix).

  • I change my profiles and /etc/oratab (see appendix).

  • I upgrade the database in Grid Infrastructure, which updates the Oracle Home information. Grid Infrastructure now starts the database in the new Oracle Home. If you are licensed for Active Data Guard, you can change the start option from mount to read only:

    [oracle@standby]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
    [oracle@standby]$ export ORACLE_HOME=$NEW_ORACLE_HOME
    [oracle@standby]$ $ORACLE_HOME/bin/srvctl upgrade database -db $ORACLE_UNQNAME -o $ORACLE_HOME
    [oracle@standby]$ $ORACLE_HOME/bin/srvctl modify database -db $ORACLE_UNQNAME -s "mount" -r PHYSICAL_STANDBY
    [oracle@standby]$ $ORACLE_HOME/bin/srvctl enable database -d $ORACLE_UNQNAME
    [oracle@standby]$ $ORACLE_HOME/bin/srvctl start database -d $ORACLE_UNQNAME
    

Set Parameters

Suppose I made any changes to the database initialization parameters as part of the upgrade of the primary database. In that case, I probably also want to make the same changes on the standby database.

I can make changes to database initialization parameters using these AutoUpgrade config file parameters:

  • remove_underscore_parameters
  • add_after_upgrade_pfile
  • add_during_upgrade_pfile
  • del_after_upgrade_pfile
  • del_during_upgrade_pfile

I can also review the Preupgrade Summary Report for suggested changes to the database initialization parameters.

STANDBY SQL> alter system set ... scope=both;

Re-enable Data Guard

To re-enable the Data Guard config, I use Data Guard CLI (dgmgrl) and re-enable redo transport:

DGMGRL> edit database "DB_BOSTON" set state=transport-on;

Now, redo is shipping to the standby database, and it will apply it. When the redo generated during the upgrade is applied on the standby database, it is implicitly upgraded.

Wait For Redo Apply

Ensure the standby database has caught up and applied all the redo generated during the upgrade.

DGMGRL> show database "DB_fra24r"

Database - DB_fra24r

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          1 hours 56 seconds (computed 1 second ago)
  Average Apply Rate: 3.38 MByte/s
  Real Time Query:    OFF
  Instance(s):
    DB

  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold

Database Status:
WARNING

The database reports a WARNING because there is an apply lag. The standby database has not caught up yet.

Wait a while and repeat the show database command until the status changes to SUCCESS and there is no apply lag.

If Data Guard broker reports an error or the standby database is not applying logs, I can try to enable the configuration again:

DGMGRL> enable configuration;

Validate Data Guard

I validate the setup and ensure both databases are ready for a switchover. The database will not allow a switchover if there are any problems in the Data Guard setup. It is a good way of checking that things are fine:

DGMGRL> validate database "DB_BOSTON"
DGMGRL> validate database "DB_fra24r"

Optionally, I perform a switchover as well:

DGMGRL> switchover to "DB_fra24r"

Re-enable Fast-Start Failover

I can now re-enable FSFO:

DGMGRL> enable fast_start failover;

Remove Restore Points

After the upgrade, I should perform the necessary tests to validate the new database release. I should remove the restore points on both databases when I am convinced to go live on the new release.

Miscellaneous

Check the database registration in listener.ora. I must update the Oracle Home information if there is a static configuration.
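
A static registration in listener.ora could look like the sketch below; the names are taken from the examples in this post, and the ORACLE_HOME attribute is the one to update:

# Static registration (names from this post's examples)
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DB_BOSTON)
      (SID_NAME = DB)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0.0/dbhome_1)
    )
  )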

What If

  • What if your Oracle Database is not managed by Grid Infrastructure? You can still use the above procedure, but you must change the commands accordingly.

  • What if you don’t use Data Guard broker? Manually configured Data Guard environments are fully supported by AutoUpgrade, but you must change some commands accordingly.

Conclusion

It is not that complicated to upgrade your database, even if it is part of a Data Guard setup. Using AutoUpgrade is fully supported and highly recommended. A little extra legwork is needed to take care of the standby database. But the good thing is that your Data Guard setup is maintained throughout the process. This procedure showed how to use the standby offline approach, which requires slightly more downtime but gives you extra protection.

Appendix

Broker Config Files

Here is a query to determine the location of the Data Guard broker config files. In this example, the files are stored outside the Oracle Home; in ASM:

SQL> select name, value from v$parameter where name like 'dg_broker_config_file%';

NAME                   VALUE
---------------------- --------------------------------
dg_broker_config_file1 +DATA/DB_FRA2PR/dr1db_fra2pr.dat
dg_broker_config_file2 +DATA/DB_FRA2PR/dr2db_fra2pr.dat

Database Files

Here is a command to see where the SPFile and password file are located. In this example, the SPFile is stored outside the Oracle Home. However, the password file is in the default location inside the Oracle Home. The latter must be moved when you restart a database in the new Oracle Home:

[oracle@standby]$ srvctl config database -d $ORACLE_UNQNAME | grep -E "Spfile|Password"
Spfile: +DATA/<DB_UNIQUE_NAME>/PARAMETERFILE/spfileDB.ora
Password file: 

Updating /etc/oratab

Here is a little snippet to update /etc/oratab to match the new release Oracle Home. Since I am using Grid Infrastructure to manage my database, I don’t set the database to start automatically:

export NEW_ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
export ORACLE_SID=DB
#Backup file
cp /etc/oratab /tmp/oratab
#Use sed to remove the line that starts with ORACLE_SID
sed '/^'"$ORACLE_SID"':/d' /tmp/oratab > /etc/oratab
#Add new entry
echo "$ORACLE_SID:$NEW_ORACLE_HOME:N" >> /etc/oratab

Updating .bashrc

Here is a little snippet to update .bashrc replacing the old Oracle Home with the new Oracle Home:

export OLD_ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
export NEW_ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
cp ~/.bashrc ~/.bashrc-backup
sed 's|'"$OLD_ORACLE_HOME"'|'"$NEW_ORACLE_HOME"'|g' ~/.bashrc-backup > ~/.bashrc

Alert Log Messages

Before the upgrade of the primary database, I stop the standby database. Since I didn’t defer redo transport in the primary database, it will complain about losing connection to the standby database. The primary database alert log will contain an entry similar to this:

2022-08-19T05:21:39.389976+00:00
ARCH: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (1089)
ARCH: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
krsg_gap_ping: Error 1089 when pinging <standby> (Process:TT00) (PID:82492)
2022-08-19T05:21:46.606834+00:00
TT04: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3113)
TT04: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
2022-08-19T05:21:46.607255+00:00
Errors in file /u01/app/oracle/diag/rdbms/db_boston/DB/trace/DB_tt04_66302.trc:
ORA-03113: end-of-file on communication channel
Error 3113 for archive log file 3 to '<standby>'
2022-08-19T05:21:47.002610+00:00
Errors in file /u01/app/oracle/diag/rdbms/db_boston/DB/trace/DB_tt04_66302.trc:
ORA-03113: end-of-file on communication channel

I can safely ignore it. My monitoring system might detect this and start to complain.

Config File

For your reference, this is the config file I used. It contains only the required information. All other parameters have a default value:

upg1.sid=DB
upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
upg1.defer_standby_log_shipping=yes

Synchronize Standby Database

When I run AutoUpgrade in ANALYZE mode and check the preupgrade summary report, I find this information message:

Synchronize your standby databases before database upgrade.

The standby database is not currently synchronized with its associated primary database.

To keep data in the source primary database synchronized with its associated standby databases, all standby databases must be synchronized before database upgrade. See My Oracle Support Note 2064281.1 for details.

Don’t worry about it. It tells me to ensure that all redo gets applied.

What does it say? Basically, it says that all redo generated on the primary database before the downtime window started should be sent to and applied on the standby database. This way, my standby database is ready to replace my primary database at any time if something goes really wrong. Strictly speaking, it is not necessary to ensure that, but it is strongly recommended.

Changing COMPATIBLE Parameter and Data Guard

When you upgrade your Oracle Database, you should also decide how to raise the COMPATIBLE parameter. The considerations are the same when you use Data Guard, but the procedure is different.

Why

The main reason for raising COMPATIBLE is to enable new features.

Some new features are not backward compatible, e.g., blockchain tables. When you introduce such features, the database:

  • is no longer backward compatible
  • can no longer be downgraded

When you upgrade, COMPATIBLE stays at the old setting. You need to actively raise it and allow the use of these new features. You are in total control.

Here is a video with more details about COMPATIBLE.

When

I recommend that you raise COMPATIBLE one or two weeks after the upgrade.

  • When you have seen that the new release works fine in production.
  • When you are confident that a database downgrade won’t be needed.

Raising COMPATIBLE requires a database restart, i.e., an outage. If that is unacceptable, you must raise COMPATIBLE as part of the upgrade. But be advised: it severely limits your fallback options.

AutoUpgrade does not change COMPATIBLE, unless you explicitly state it in the config file:

upg1.drop_grp_after_upgrade=yes
upg1.raise_compatible=yes

What

I recommend that you always set COMPATIBLE to the default of a database release:

  • 19.0.0
  • 21.0.0

If you only have non-CDBs, then it might not matter much. But in a multitenant environment, having a uniform COMPATIBLE setting is very beneficial, because it allows PDBs to move between CDBs without problems.

How

You need to raise COMPATIBLE on all databases in your Data Guard configuration. The order is important:

  • First, standby databases (individual order is not important)
    STANDBY SQL> --Ensure redo apply is running
    STANDBY SQL> alter database recover managed standby database disconnect from session;
    STANDBY SQL> alter system set compatible='19.0.0' scope=spfile sid='*';
    STANDBY SQL> alter database recover managed standby database cancel;
    
    [oracle@standby]$ srvctl stop database -d $ORACLE_UNQNAME
    [oracle@standby]$ srvctl start database -d $ORACLE_UNQNAME -o mount
    
    STANDBY SQL> alter database recover managed standby database disconnect from session;
    
  • Last, primary database
    PRIMARY SQL> alter system set compatible='19.0.0' scope=spfile sid='*';
    
    [oracle@primary]$ srvctl stop database -d $ORACLE_UNQNAME
    [oracle@primary]$ srvctl start database -d $ORACLE_UNQNAME
    

How To Upgrade Data Guard – MAA Method

Let me show you how to upgrade your database to Oracle Database 19c when it is protected by Data Guard. I will use AutoUpgrade and follow the MAA method.

This is my demo environment:

  • Grid Infrastructure is managing the database.
  • Data Guard is configured using Data Guard broker.

Overall, the process looks like this: [diagram: overview of the upgrade with Data Guard]

This procedure starts right before I start AutoUpgrade in DEPLOY mode. Downtime has begun, and no users are connected to the database.

Before Upgrade

I always use the latest version of AutoUpgrade. Download it and put it into $ORACLE_HOME/rdbms/admin.

A newer version of AutoUpgrade can also upgrade to older database releases, so don’t worry if the AutoUpgrade version doesn’t match the Oracle Database release that you are upgrading to.

Disable Fast-Start Failover (FSFO)

Fast-Start Failover must be disabled during upgrade. I connect to Data Guard CLI (dgmgrl) and disable it:

DGMGRL> disable fast_start failover;

Restore Point

I need to protect the standby database against errors during the upgrade. I create a guaranteed restore point on the standby database:

STANDBY SQL> alter database recover managed standby database cancel;
STANDBY SQL> create restore point stdby_before_upg guarantee flashback database;
STANDBY SQL> alter database recover managed standby database disconnect from session;

Set Parameters

If I plan on making changes to the database initialization parameters as part of the upgrade of the primary database, I probably also want to make the same changes on the standby database.

I can make changes to database initialization parameters using these AutoUpgrade config file parameters:

  • remove_underscore_parameters
  • add_after_upgrade_pfile
  • add_during_upgrade_pfile
  • del_after_upgrade_pfile
  • del_during_upgrade_pfile

I can also review the Preupgrade Summary Report for suggested changes to the database initialization parameters.

Save the changes to the SPFile only:

STANDBY SQL> alter system set ... scope=spfile;

Restart Standby Database In New Oracle Home

The standby database must be started in the new Oracle Home; the Oracle Home I upgrade to.

  • First, I stop the standby database:

    [oracle@standby]$ srvctl stop database -d $ORACLE_UNQNAME
    
  • Ideally, I have stored these files outside the Oracle Home; like ASM. In that case, I don’t need to do anything:

    • SPFile
    • Password file
    • Data Guard broker config files
    • Network files (sqlnet.ora, tnsnames.ora etc.)

    If I stored the files inside the Oracle Home, I must move them to the new Oracle Home (see appendix).

  • Next, I restart the database in the new Oracle Home. I must restart in mount mode:

    [oracle@standby]$ #Set environment to new Oracle Home
    [oracle@standby]$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
    [oracle@standby]$ export PATH=$ORACLE_HOME/bin:$PATH
    [oracle@standby]$ srvctl upgrade database -d $ORACLE_UNQNAME -oraclehome $ORACLE_HOME
    [oracle@standby]$ srvctl start database -d $ORACLE_UNQNAME -o mount
    

    I don’t need to explicitly start redo apply. My Data Guard broker configuration has APPLY-ON for the standby database. Since the broker is still enabled, it will ensure that redo apply is started.

  • Finally, I change my profiles and /etc/oratab (see appendix).

Upgrade

Next, I upgrade the primary database by starting AutoUpgrade in DEPLOY mode:

[oracle@primary]$ java -jar autoupgrade.jar -config DB.cfg -mode deploy

As the upgrade progresses, it makes a lot of changes to the database data dictionary. Those changes are written to redo, which the standby database will apply. Thus, the upgrade happens implicitly on the standby database via the redo.

After Upgrade

Check Standby Database

Ensure that the standby database has caught up and applied all the redo generated during the upgrade.

I use Data Guard CLI (dgmgrl) to check it. First, the configuration:

DGMGRL> show configuration

Configuration - DB_BOSTON_DB_fra2vp

  Protection Mode: MaxPerformance
  Members:
  DB_BOSTON - Primary database
    DB_fra2vp - Physical standby database 

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 28 seconds ago)

Status must be SUCCESS.

Next, I check each of the databases:

DGMGRL> show database "DB_BOSTON"
DGMGRL> show database "DB_fra2vp"

Both databases should have status SUCCESS, and the standby database must report no apply lag.

If Data Guard broker reports an error or the standby database is not applying logs, I can try to enable the configuration again:

DGMGRL> enable configuration;

Validate Data Guard

I validate the setup and ensure both databases are ready for a switchover. The database will not allow a switchover if there are any problems in the Data Guard setup. It is a good way of checking that things are fine:

DGMGRL> validate database "DB_BOSTON"
DGMGRL> validate database "DB_fra2vp"

Optionally, I perform a switchover as well:

DGMGRL> switchover to "DB_fra2vp"

Re-enable Fast-Start Failover

I can now re-enable FSFO:

DGMGRL> enable fast_start failover;

Active Data Guard

If the database is licensed to use Active Data Guard, I can now open the standby database in READ ONLY mode.

Remove Restore Points

After the upgrade, I should perform the necessary tests to validate the new database release. Only when I am convinced to go live on the new release should I remove the restore points on both databases.

Miscellaneous

Check the database registration in listener.ora. I must update the Oracle Home information if there is a static configuration.

What If

  • What if your Oracle Database is not managed by Grid Infrastructure? You can still use the above procedure, but you must change the commands accordingly.

  • What if you don’t use Data Guard broker? Manually configured Data Guard environments are fully supported by AutoUpgrade, but you must change some commands accordingly.

Conclusion

It is not that complicated to upgrade your database, even if it is part of a Data Guard setup. Using AutoUpgrade is fully supported and highly recommended. A little extra legwork is needed to take care of the standby database. But the good thing is that your Data Guard setup is maintained throughout the process.

Here is a cool demo of the entire process:

Appendix

Broker Config Files

Here is a query to determine the location of the Data Guard broker config files. In this example, the files are stored outside the Oracle Home; in ASM:

SQL> select name, value from v$parameter where name like 'dg_broker_config_file%';

NAME                   VALUE
---------------------- --------------------------------
dg_broker_config_file1 +DATA/DB_FRA2PR/dr1db_fra2pr.dat
dg_broker_config_file2 +DATA/DB_FRA2PR/dr2db_fra2pr.dat

Database Files

Here is a command to see where the SPFile and password file are located. In this example, the SPFile is stored outside the Oracle Home. However, the password file is in the default location inside the Oracle Home. The latter must be moved when you restart a database in the new Oracle Home:

[oracle@standby]$ srvctl config database -d $ORACLE_UNQNAME | grep -E "Spfile|Password"
Spfile: +DATA/<DB_UNIQUE_NAME>/PARAMETERFILE/spfileDB.ora
Password file: 

Updating /etc/oratab

Here is a little snippet to update /etc/oratab to match the new release Oracle Home. Since I am using Grid Infrastructure to manage my database, I don’t set the database to start automatically:

export NEW_ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
export ORACLE_SID=DB
#Backup file
cp /etc/oratab /tmp/oratab
#Use sed to remove the line that starts with ORACLE_SID
sed '/^'"$ORACLE_SID"':/d' /tmp/oratab > /etc/oratab
#Add new entry
echo "$ORACLE_SID:$NEW_ORACLE_HOME:N" >> /etc/oratab

Updating .bashrc

Here is a little snippet to update .bashrc replacing the old Oracle Home with the new Oracle Home:

export OLD_ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
export NEW_ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
cp ~/.bashrc ~/.bashrc-backup
sed 's|'"$OLD_ORACLE_HOME"'|'"$NEW_ORACLE_HOME"'|g' ~/.bashrc-backup > ~/.bashrc

Config File

For your reference, this is the config file I used. It contains only the required information. All other parameters have a default value:

upg1.sid=DB
upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_1

Synchronize Standby Database

When I run AutoUpgrade in ANALYZE mode and check the preupgrade summary report, I find this information message:

Synchronize your standby databases before database upgrade.

The standby database is not currently synchronized with its associated primary database.

To keep data in the source primary database synchronized with its associated standby databases, all standby databases must be synchronized before database upgrade. See My Oracle Support Note 2064281.1 for details.

Don’t worry about it. It tells me to ensure that all redo gets applied.

What does it say? Basically, it says that all redo generated on the primary database before the downtime window started should be sent to and applied on the standby database. This way, my standby database is ready to replace my primary database at any time if something goes really wrong. Strictly speaking, it is not necessary to ensure that, but it is strongly recommended.

Downgrade and Data Guard

Oracle Database is capable of downgrading, and it is a cool fallback mechanism. You can use it even after go-live. Plus, Data Guard plays nicely together with downgrade. You don’t have to rebuild the standby database following a downgrade if you follow the correct procedure.

When To Use Downgrade

You should only consider downgrading after go-live, when users have made changes to your Oracle Database. In that case, you can downgrade to the previous release without any data loss. Before go-live, Flashback Database is our preferred fallback mechanism.

A comparison between the two:

Flashback Database                                                 Downgrade
Data loss                                                          No data loss
Use before go-live                                                 Use after go-live
After flashback, database is identical with before-upgrade state   After downgrade, database is compatible with before-upgrade state, but not identical
Requires Enterprise Edition                                        Works in all editions
Preferred method

General Considerations

It is a requirement that you have not changed the COMPATIBLE parameter. As soon as you change COMPATIBLE after upgrade, you can no longer downgrade your Oracle Database. If you have already changed the COMPATIBLE parameter, you must use other fallback methods like Data Pump or RMAN restore.

The old release Oracle Home must still exist on primary and standby hosts. I recommend that you keep them until you are absolutely sure you will not downgrade your Oracle Database.

When you downgrade your database, I recommend leaving your Grid Infrastructure at the new release. Don’t downgrade Grid Infrastructure as well. Ideally, you upgraded Grid Infrastructure to the new release in advance in a previous maintenance window. Thus, you know it can handle the old release of the database. Save yourself the added complexity of also downgrading Grid Infrastructure.

Data Guard broker does not support going back to a previous version. You must disable the broker before downgrade and afterward create a new configuration or restore broker configuration files from the old release.

The following works for Data Guard configuration with a physical standby database. Other considerations and a different procedure apply if you have a logical standby database.

Downgrade

To downgrade an Oracle Database protected by Data Guard the following applies:

  • You can downgrade hours, days, or even months after the upgrade. As long as you haven’t changed the COMPATIBLE parameter.
  • You must mount the standby database and start redo apply in real-time apply mode. Keep it running during the entire process.
  • You must not open the standby database until the end of the process.
  • The downgrade happens on the primary database. A downgrade will make changes to the data dictionary. Those changes are applied on the standby via redo.
  • A downgrade with a standby database generally follows the same procedure as a regular database. However, at the end of each step, you must ensure that the standby database has applied all the redo generated by the primary database during that step. I usually issue a few log switches and check the current sequence (SEQUENCE#) on the primary database. Then I wait for that sequence to be applied on the standby database (see the sketch after this list).
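
A minimal sketch of that sequence check; run it after each step that generates redo:

PRIMARY SQL> alter system archive log current;
PRIMARY SQL> select thread#, max(sequence#) from v$archived_log group by thread#;

STANDBY SQL> select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;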

Check our video on YouTube for essential information on downgrade and flip through the slides.

How To Downgrade

AutoUpgrade can’t perform downgrades. You need to do this manually.

In the following, $NEW_ORACLE_HOME refers to the release you were upgrading to (e.g., 19c), and $OLD_ORACLE_HOME refers to the release you upgraded from (e.g., 12.1.0.2).

  1. Follow the pre-downgrade instructions.
  2. Set the environment in your session to the new Oracle Home.
  3. For RAC, set CLUSTER_DATABASE parameter:
    alter system set cluster_database=false scope=spfile sid='*'
    
  4. Stop the primary database:
    srvctl stop database -d $ORACLE_UNQNAME
    
  5. Start the primary database (if RAC, just one instance) in downgrade mode:
    startup downgrade
    
  6. Set the executable flag on downgrade script in the new Oracle Home:
    chmod +x $ORACLE_HOME/bin/dbdowngrade
    
  7. Start downgrade script on primary database. The environment is still set to the new release Oracle Home.
    $ORACLE_HOME/bin/dbdowngrade
    
  8. Important: Ensure that the standby database has applied all redo from the downgrade operation. Perform a few log switches and note the sequence number. Ensure the standby database has applied that sequence.
  9. Shut down primary database and standby database (if RAC, all instances).
  10. Downgrade the standby database in Grid Infrastructure. This tells Grid Infrastructure to start the database in the old Oracle Home (in this example, it is 12.1.0.2):
    $ORACLE_HOME/bin/srvctl downgrade database \
      -db $ORACLE_UNQNAME \
      -oraclehome $OLD_ORACLE_HOME \
      -targetversion 12.1.0.2
    
  11. Important: Switch the environment in your session to the old Oracle Home.
  12. Start the standby database (if RAC, all instances).
  13. Start redo apply:
    alter database recover managed standby database disconnect from session
    
  14. Start primary database in upgrade mode (if RAC, only one instance):
    startup upgrade
    
  15. Finish the downgrade:
    set echo on termout on serveroutput on timing on
    spool catrelod.log
    @?/rdbms/admin/catrelod.sql
    
  16. Recompile:
    @?/rdbms/admin/utlrp.sql
    
  17. Set CLUSTER_DATABASE parameter and shut down:
    alter system set cluster_database=true scope=spfile sid='*'
    shutdown immediate
    
  18. Downgrade the primary database in Grid Infrastructure. Grid Infrastructure will now start the database in the old Oracle Home (in this example it is 12.1.0.2):
    $NEW_ORACLE_HOME/bin/srvctl downgrade database \
      -db $ORACLE_UNQNAME \
      -oraclehome $OLD_ORACLE_HOME \
      -targetversion 12.1.0.2
    
  19. Start the primary database:
    $ORACLE_HOME/bin/srvctl start database -d $ORACLE_UNQNAME
    
  20. Ensure all components are valid or option off:
    select comp_id, status 
    from   dba_registry
    where  status not in ('VALID', 'OPTION OFF')
    
  21. Important: Ensure that the standby database has applied all redo from the downgrade operation. Perform a few log switches and note the sequence number. Ensure the standby database has applied that sequence.
  22. Start Data Guard broker on the primary and standby database:
    • Either restore broker config files from old release, and start Data Guard broker.
    • Or, start Data Guard broker and recreate the configuration.
  23. Important: Ensure that your Data Guard configuration works. Use the validate database command in Data Guard broker on all databases and ensure they are ready for switchover.
  24. Optionally, test your Data Guard by doing a switchover.
  25. Perform the post-downgrade instructions.

That’s it!

CDB

There are no specific Data Guard-related changes. The above procedure is written for a non-CDB. The procedure is slightly different for a CDB; check the documentation or the demo below.

RAC

On the standby database, you can leave all instances running, and you don’t need to change CLUSTER_DATABASE.

Demo

Downgrade of a CDB running RAC and Data Guard environment:

Upgrade of Time Zone File and Flashback Database

Someone asked me on the blog:

Can you revert an upgrade of the time zone file using Flashback Database?

And the short answer: Yes!

A time zone file update is a two-step process:

  • Time zone patch to Oracle Home
  • Time zone file upgrade of the database

Applying the patch to the Oracle Home is a simple process. The Oracle Home already has a lot of time zone files present (the previous versions), so you can safely add a newer version. It doesn’t hurt anything; you should generally never roll them off again. The time zone patch adds a file with time zone definitions (in $ORACLE_HOME/oracore/zoneinfo).

The second step is to upgrade the time zone file inside the database. This step makes changes to the data dictionary using the information from the time zone file in the Oracle Home. All the changes made in this step are made inside the database. There are situations where you want to revert this second step, the time zone file upgrade inside the database. For instance:

  • If the time zone file upgrade fails.
  • You performed a database upgrade to a new release and upgraded the time zone file afterward. Now you find a critical issue in the new release and decide to roll everything back.

How To Upgrade

Let’s give it a try and see how it works. I start with an Oracle Database running 19c, and I want to upgrade the time zone file and then revert the operation.

  • The database uses the default time zone file version (32 in Oracle Database 19c):

    SQL> select * from v$timezone_file;
    
    FILENAME                VERSION     CON_ID
    -------------------- ---------- ----------
    timezlrg_32.dat              32 	        0
    
  • I create a user and a table. Also, I insert data related to a time zone change in Yukon, Canada, that comes in with time zone file 35:

    SQL> create user sales no authentication;
    SQL> alter user sales quota unlimited on users;
    SQL> grant create table to sales;
    SQL> create table sales.t1 (
            c1 timestamp with time zone
         );   
    
    SQL> insert into sales.t1 values(TIMESTAMP '2020-03-07 12:00:00.00 America/Whitehorse');
    SQL> insert into sales.t1 values(TIMESTAMP '2020-03-08 12:00:00.00 America/Whitehorse');
    SQL> insert into sales.t1 values(TIMESTAMP '2020-03-09 12:00:00.00 America/Whitehorse');
    SQL> commit;
    
  • I download the time zone patch from Master Note DST FAQ : Updated DST Transitions and New Time Zones in Oracle RDBMS and OJVM Time Zone File Patches (Doc ID 412160.1). Time zone files are cumulative, so I take the latest one, version 38. I apply it using opatch like any other patch to the Oracle Home. This is the result after applying the patch:

    $ ./opatch lsinventory
    Oracle Interim Patch Installer version 12.2.0.1.28
    Copyright (c) 2022, Oracle Corporation.  All rights reserved.
    
    ...
    
    Patch  34006614     : applied on Sat Jul 30 12:46:16 CEST 2022
    Unique Patch ID:  24751709
    Patch description:  "RDBMS - DSTV38 UPDATE - TZDATA2022A"
       Created on 27 Apr 2022, 04:16:52 hrs PST8PDT
       Bugs fixed:
         34006614
    
  • So far, no changes have been made to the database. It still runs with the old time zone file.

  • Before performing the time zone file upgrade, I create a restore point:

    SQL> create restore point before_tz_upg guarantee flashback database;
    
    Restore point created.
    
  • I start the time zone file upgrade by running the pre-upgrade check:

    SQL> @utltz_upg_check.sql
    
    INFO: Starting with RDBMS DST update preparation.
    INFO: NO actual RDBMS DST update will be done by this script.
    ...
    A prepare window has been successfully ended.
    INFO: A newer RDBMS DST version than the one currently used is found.
    INFO: Note that NO DST update was yet done.
    INFO: Now run utltz_upg_apply.sql to do the actual RDBMS DST update.
    INFO: Note that the utltz_upg_apply.sql script will
    INFO: restart the database 2 times WITHOUT any confirmation or prompt.
    
  • It looks good, so I go ahead with the actual upgrade. This step requires a maintenance window:

    SQL> @utltz_upg_apply.sql
    
    INFO: If an ERROR occurs, the script will EXIT SQL*Plus.
    INFO: The database RDBMS DST version will be updated to DSTv38 .
    WARNING: This script will restart the database 2 times
    ...
    An upgrade window has been successfully ended.
    INFO: Your new Server RDBMS DST version is DSTv38 .
    INFO: The RDBMS DST update is successfully finished.
    
  • That’s it. I have upgraded the time zone file in the database.

    SQL> select * from v$timezone_file;
    
    FILENAME		VERSION     CON_ID
    -------------------- ---------- ----------
    timezlrg_38.dat 	     38 	 0
    

How To Fall Back

To answer the original question: Can I use Flashback Database to back out of a time zone file upgrade?

  • I restart the database in mount mode:

    SQL> shutdown immediate
    ...
    ORACLE instance shut down.
    
    SQL> startup mount
    ORACLE instance started.
    ...
    Database mounted.
    
  • I try to use Flashback Database:

    SQL> flashback database to restore point before_tz_upg;
    
    Flashback complete.
    
    SQL> alter database open resetlogs;
    
    Database altered.
    
  • It works. The database time zone file is back at version 32:

    SQL> select * from v$timezone_file;
    
    FILENAME		VERSION     CON_ID
    -------------------- ---------- ----------
    timezlrg_32.dat 	     32 	 0
    
  • I can also select my data again:

    SQL> select * from sales.t1;
     
    C1
    -----------------------------------------------
    07-MAR-20 12.00.00.000000 PM AMERICA/WHITEHORSE
    08-MAR-20 12.00.00.000000 PM AMERICA/WHITEHORSE
    09-MAR-20 12.00.00.000000 PM AMERICA/WHITEHORSE
    
  • Don’t roll off the time zone patch from the Oracle Home. It makes no difference to the database. Once you have reverted the second step, the database will work like before the upgrade. If you insist on doing so, just ensure that no other database uses the time zone file that came in with the patch.

Conclusion

This proves that you can use Flashback Database to revert an upgrade of the database time zone file. Flashback Database is easy and effective. There is no need to roll off the time zone patch from the Oracle Home. It makes no difference to the database.