How to Apply Patches Out-of-place to Oracle Grid Infrastructure and Oracle Data Guard Using Standby-First

I strongly recommend that you always patch out-of-place. Here’s an example of how to do it on Oracle Grid Infrastructure (GI) and Oracle Data Guard using Standby-First Patch Apply.

Standby-First Patch Apply allows you to minimize downtime to the time it takes to perform a Data Guard switchover. Further, it allows you to test the apply mechanism on the standby database by temporarily converting it into a snapshot standby database.

The scenario:

  • Oracle Grid Infrastructure 19c and Oracle Database 19c
  • Patching from Release Update 19.17.0 to 19.19.0
  • Vertical patching – GI and database at the same time
  • Data Guard setup with two RAC databases
    • Cluster 1: copenhagen1 and copenhagen2
    • Cluster 2: aarhus1 and aarhus2
    • DB_NAME: CDB1
    • DB_UNIQUE_NAME: CDB1_COPENHAGEN and CDB1_AARHUS
  • Using Data Guard broker
  • Patching GI using SwitchGridHome method

Let’s get started!

Step 1: Prepare

I can make the preparations without interrupting the database.

  • I ensure my environment meets the requirements for Standby-First Patch Apply.

  • I deploy new GI homes to all four hosts.

    • I use the SwitchGridHome method.
    • Very important: I only perform step 1 (Prepare a New Grid Home).
    • I apply the Release Update 19.19.0 as part of the deployment using gridSetup.sh ... -applyRU ... -applyOneOffs as described in the blog post.
  • I deploy new database homes to all four hosts.

  • I also recompile invalid objects. This can make it easier for Datapatch later in the process:

    PRIMARY SQL> @?/rdbms/admin/utlrp
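  • After recompiling, I can check the result with a quick sanity query (just a sketch):

    PRIMARY SQL> select count(*) from dba_objects where status = 'INVALID';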
    

Step 2: Restart Standby in New Oracle Homes

Now, I can move the standby database to the new GI and database homes.

  • On the standby hosts, aarhus1 and aarhus2, I first move the database configuration files from the old database home to the new one.

  • I change the database configuration in GI. Next time the database restarts, it will be in the new Oracle Home:

    [oracle@aarhus1]$ $OLD_ORACLE_HOME/bin/srvctl modify database \
       -db $ORACLE_UNQNAME \
       -oraclehome $NEW_ORACLE_HOME
    
  • I switch to the new GI on all standby hosts, aarhus1 and aarhus2, by executing step 2 (Switch to the new Grid Home) of the SwitchGridHome method.

    • It involves running gridSetup.sh ... -switchGridHome and root.sh.
    • You can perform the switch in a rolling manner or all at once.
    • The switch restarts the standby database instance, and the instance starts up in the new Oracle Home.
    • If the profile of the grid user (like .bashrc) sets the ORACLE_HOME environment variable, I make sure to update it.
  • If I had multiple standby databases, I would process all standby databases in this step.
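The file move in the first bullet can be sketched like this. Everything below uses made-up /tmp paths and a made-up SID so the sketch can run anywhere; on a real system, you would copy between the actual old and new homes:

```shell
# Sketch: copy the instance-specific files from the old dbs directory to the new one.
# Made-up demo paths and SID; substitute your real homes and ORACLE_SID.
OLD_ORACLE_HOME=/tmp/demo/19.17.0
NEW_ORACLE_HOME=/tmp/demo/19.19.0
ORACLE_SID=CDB11

# Create a throwaway structure so the sketch is runnable outside a real environment
mkdir -p "$OLD_ORACLE_HOME/dbs" "$NEW_ORACLE_HOME/dbs"
touch "$OLD_ORACLE_HOME/dbs/orapw$ORACLE_SID" "$OLD_ORACLE_HOME/dbs/spfile$ORACLE_SID.ora"

# Copy password file, spfile, and init file - but only the ones that exist
for f in "orapw$ORACLE_SID" "spfile$ORACLE_SID.ora" "init$ORACLE_SID.ora"; do
  if [ -f "$OLD_ORACLE_HOME/dbs/$f" ]; then
    cp "$OLD_ORACLE_HOME/dbs/$f" "$NEW_ORACLE_HOME/dbs/"
  fi
done
ls "$NEW_ORACLE_HOME/dbs"
```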

Step 3: Test Standby Database

This is an optional step, but I recommend that you do it.

  • I convert the standby database (CDB1_AARHUS) to a snapshot standby database:
    DGMGRL> convert database CDB1_AARHUS to snapshot standby;
    
  • I test Datapatch on the standby database. It is important that I run the command on the standby database:
    [oracle@aarhus1]$ $ORACLE_HOME/OPatch/datapatch -verbose
    
  • I can also test my application on the standby database.
  • At the end of my testing, I revert the standby database to a physical standby database. The database automatically reverts all the changes made during testing:
    DGMGRL> convert database CDB1_AARHUS to physical standby;
    

Step 4: Switchover

I can perform the previous steps without interrupting my users. This step requires a maintenance window because I am doing a Data Guard switchover.

  • I check that my standby database is ready to become primary. Then, I start a Data Guard switchover:
    DGMGRL> connect sys/<password> as sysdba
    DGMGRL> validate database CDB1_AARHUS;
    DGMGRL> switchover to CDB1_AARHUS;
    

A switchover does not have to mean downtime.

If my application is configured properly, the users will experience only a brownout: a short hang while the connections switch to the new primary database.
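
For example, a client alias along these lines lets sessions find whichever database is currently primary. The host and service names are made up, and the service must only be running on the primary database:

CDB1_APP =
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 5)(RETRY_COUNT = 3)
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = copenhagen-scan)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = aarhus-scan)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = cdb1_app)))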

Step 5: Restart New Standby in New Oracle Homes

Now, the primary database runs on aarhus1 and aarhus2. Next, I can move the new standby hosts, copenhagen1 and copenhagen2, to the new GI and database homes.

  • I repeat step 2 (Restart Standby in New Oracle Homes), but this time for the new standby hosts, copenhagen1 and copenhagen2.

Step 6: Complete Patching

Now, both databases in my Data Guard configuration run out of the new Oracle Homes.

Only proceed with this step once all databases run out of the new Oracle Home.

I need to run this step as fast as possible after I have completed the previous step.

  • I complete the patching by running Datapatch on the primary database (CDB1_AARHUS). I add the recomp_threshold parameter to ensure Datapatch recompiles all objects that the patching invalidated:

    [oracle@aarhus1]$ $ORACLE_HOME/OPatch/datapatch \
       -verbose \
       -recomp_threshold 10000
    
    • I only need to run Datapatch once: on the primary database, and only on one of its instances.
  • I can run Datapatch while users are connected to my database.

  • Optionally, I can switch back to the original primary database on copenhagen1 and copenhagen2, if I prefer to run it there.

That’s it. Happy patching!

Appendix

Further Reading

Other Blog Posts in This Series

Files to Move During Oracle Database Out-Of-Place Patching

I strongly recommend that you patch your Oracle Database using the out-of-place method. It has many advantages over in-place patching. But when you move your Oracle Database from one Oracle Home to another, you also need to move a lot of files.

Which files are they, and how can you make the move easier? Also, some files might already exist in the target Oracle Home; what do you do then?

Files to Move

Password File

Linux:

dbs/orapw<ORACLE_SID>

Windows:

database\pwd<ORACLE_SID>.ora

You can override the default location in Windows using the following registry entries:

ORA_<ORACLE_SID>_PWFILE
ORA_PWFILE

If you use Grid Infrastructure, you can put the password file outside of the Oracle Home:

srvctl modify database \
   -d $ORACLE_UNQNAME \
   -pwfile <NEW_LOCATION_OUTSIDE_ORACLE_HOME>

I recommend storing it in ASM.

Parameter Files

Linux:

dbs/init<ORACLE_SID>.ora
dbs/spfile<ORACLE_SID>.ora

Windows:

database\init<ORACLE_SID>.ora
database\spfile<ORACLE_SID>.ora

Parameter files may include other files using the IFILE parameter.

You can redirect the server parameter file to a location outside the Oracle Home using the SPFILE parameter in your parameter file. If you use Grid Infrastructure, you can also redirect the server parameter file:

srvctl modify database \
   -d $ORACLE_UNQNAME \
   -spfile <NEW_LOCATION_OUTSIDE_ORACLE_HOME>

I recommend storing it in ASM.

Oratab

You need to update the database instance entry in the oratab file:

/etc/oratab

On Solaris, you find the file in:

/var/opt/oracle/oratab

On Windows, the file does not exist. Instead, you re-register the instance in the registry using oradim.exe.
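
Updating the entry can be scripted. This sketch works on a demo copy of the file with made-up paths; on a real system, you would edit /etc/oratab itself:

```shell
# Sketch: point an oratab entry at the new Oracle Home.
# A demo copy is used here; on a real system, edit /etc/oratab.
ORATAB=/tmp/oratab.demo
OLD_HOME=/u01/app/oracle/product/19.17.0
NEW_HOME=/u01/app/oracle/product/19.19.0

printf 'CDB1:%s:N\n' "$OLD_HOME" > "$ORATAB"           # demo entry
sed -i "s|^CDB1:$OLD_HOME:|CDB1:$NEW_HOME:|" "$ORATAB" # swap the home
cat "$ORATAB"
```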

Profile Scripts

Many people have profile scripts that set the environment to a specific database. Be sure to update the Oracle Home in such scripts.
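
Such a snippet could look like this (the path is a made-up example):

```shell
# Sketch of a profile snippet: point ORACLE_HOME at the new home (made-up path)
# and make sure its bin directory comes first in PATH.
export ORACLE_HOME=/u01/app/oracle/product/19.19.0
export PATH=$ORACLE_HOME/bin:$PATH
echo "ORACLE_HOME is $ORACLE_HOME"
```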

Network Files

Network configuration files:

network/admin/ldap.ora
network/admin/listener.ora
network/admin/sqlnet.ora
network/admin/tnsnames.ora

tnsnames.ora, sqlnet.ora, and listener.ora can include contents from other files using the IFILE parameter, although support for it is somewhat… questionable according to IFILE Support and Oracle Net (Doc ID 1339269.1).

You can redirect the files using the TNS_ADMIN environment variable. On Windows, you can also redirect using the TNS_ADMIN registry entry. If you use Grid Infrastructure, you can set the TNS_ADMIN environment variable as part of the cluster registration:

srvctl setenv database \
   -d $ORACLE_UNQNAME \
   -env "TNS_ADMIN=<NEW_LOCATION_OUTSIDE_ORACLE_HOME>"

Data Guard Broker Config Files

Linux:

dbs/dr1<ORACLE_SID>.dat
dbs/dr2<ORACLE_SID>.dat

Windows:

database\dr1<ORACLE_SID>.dat
database\dr2<ORACLE_SID>.dat

You can redirect the broker config files using the parameter DG_BROKER_CONFIG_FILEn:

alter system set dg_broker_start=false;
alter system set dg_broker_config_file1='<NEW_LOCATION>/dr1<ORACLE_SID>.dat';
alter system set dg_broker_config_file2='<NEW_LOCATION>/dr2<ORACLE_SID>.dat';
alter system set dg_broker_start=true;

I recommend storing the files in ASM.

Admin directory

admin subdirectory in Oracle Home:

admin

If you don’t set the ORACLE_BASE environment variable, the database uses the Oracle Home for that location. It can contain diagnostic information like logs and trace files, which you might want to move to the new Oracle Home.

In rare cases, the TDE keystore will go in there as well. This is definitely a folder that you want to keep.

admin/$ORACLE_UNQNAME/wallet

I recommend having a dedicated ORACLE_BASE location and always setting the ORACLE_BASE environment variable for all databases. This ensures that the database does not create an admin directory in the Oracle Home.

If you use TDE Tablespace Encryption, I strongly recommend that you store the database keystore outside of the Oracle Home using the WALLET_ROOT parameter.

Direct NFS

The Direct NFS configuration file:

dbs/oranfstab

The file might exist in the target Oracle Home, in which case you must merge the contents.

Typically, on Windows, the files from dbs are stored in the database folder. But that’s different for this specific file (thanks, Connor, for helping out).

Centrally Managed Users

One of the default locations of the configuration file for Active Directory servers for centrally managed users is:

ldap/admin/dsi.ora

I recommend using the LDAP_ADMIN environment variable to redirect the file to a location outside of the Oracle Home.

LDAP

Configuration of Directory Usage Parameters:

ldap/admin/ldap.ora

I recommend using the LDAP_ADMIN or TNS_ADMIN environment variable to redirect the file to a location outside of the Oracle Home.

Oracle Messaging Gateway

The Messaging Gateway default initialization file:

mgw/admin/mgw.ora

The file might exist in the target Oracle Home, in which case you must merge the contents.

Oracle Database Provider for DRDA

Configuration file for Oracle Database Provider for DRDA:

drdaas/admin/drdaas.ora

The file might exist in the target Oracle Home, in which case you must merge the contents.

Oracle Text

If you use Oracle Text, you can generate a list of files that you must copy to the target Oracle Home:

ctx/admin/ctx_oh_files.sql

Oracle Database Gateway for ODBC

ODBC gateway initialization file:

hs/admin/init<ORACLE_SID>.ora

External Procedures

You can define the environment for external procedures in extproc.ora. Such configuration might exist in the target Oracle Home already, in which case you must merge the contents:

hs/admin/extproc.ora

Other Things to Consider

Enterprise Manager

After moving the database to a new Oracle Home, you need to reconfigure the target in Enterprise Manager. The Oracle Home path is part of the configuration. You can easily change it with emcli:

emcli modify_target \
   -type='oracle_database' \
   -name='<target_name>' \
   -properties='OracleHome:<new_oracle_home_path>'

Also, if you moved the listener as part of the patching to a new Oracle Home, you need to update that as well.

On My Oracle Support, you can find an example of how to bulk update multiple targets.

A future version of AutoUpgrade will be able to modify the target in Enterprise Manager for you.

Oracle Key Vault

If you use Oracle Key Vault, there should be no additional configuration needed when you move the database to a new Oracle Home.

You can find information on how to configure Oracle Key Vault in an Oracle Database in the documentation.

ZDLRA

When you patch out-of-place, you should always ensure that the target Oracle Home has the latest version of libra.so. AutoUpgrade does not copy this file for you, because there is no way to tell which version is the latest version.

Ideally, you configure ZDLRA via sqlnet.ora and store the wallet outside of the Oracle Home. If so, the ZDLRA configuration works in the target Oracle Home because AutoUpgrade takes care of sqlnet.ora.

If you use ra_install.jar to configure ZDLRA, the script will:

  • Create a file: $ORACLE_HOME/dbs/ra<ORACLE_SID>.ora
  • Create a folder with a wallet: $ORACLE_HOME/dbs/wallet

In this case, you must manually copy the files to the target Oracle Home. You can avoid this by using sqlnet.ora for the configuration instead.

AutoUpgrade does not copy these files for you, because of the issue described above with libra.so.

Database Directories

You must update certain internal database directories when you move the database to a new Oracle Home. The easiest way is to run:

@?/rdbms/admin/utlfixdirs.sql

In multitenant, you need to run the script in CDB$ROOT only.

On My Oracle Support you find a list of all the directories that you must update if you don’t use the script above.

A future version of AutoUpgrade will change all the applicable directories for you automatically.

How to Make It Easy

Use AutoUpgrade

The easiest way to patch your Oracle Database is to use AutoUpgrade. It takes care of everything for you (unless stated otherwise above). You need a config file:

patch1.source_home=/u01/app/oracle/product/19.18.0
patch1.target_home=/u01/app/oracle/product/19.19.0
patch1.sid=MYDB

Then you start AutoUpgrade:

java -jar autoupgrade.jar -config mydb.cfg -mode deploy

That’s it! You don’t need to worry about all those configuration files. AutoUpgrade has you covered.

Use Read-Only Oracle Home

If you use Read-Only Oracle Home, the process is slightly easier.

You can’t store any custom files in the Oracle Home. Instead, you store all configuration files in the Oracle Base Home, a directory outside the Oracle Home. There you can find all the files you need to copy.

When you create a new Read-Only Oracle Home, the installer will create a new Oracle Base Home specifically for that new Oracle Home. As part of the patching, you move the files into that new Oracle Base Home.
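
You can ask an Oracle Home where its Oracle Base Home is by running the orabasehome utility in that home:

[oracle@copenhagen1]$ $ORACLE_HOME/bin/orabasehome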

Conclusion

Did I miss any files? Please leave a comment if you move other files when you patch out-of-place.

Release and Patching Strategies for Oracle Database 23c

A few weeks ago my team hosted the sixteenth episode of our Virtual Classroom Seminars. The webinar is called Release and Patching Strategies for Oracle Database 23c.

If you couldn’t participate you can now watch the recording and flip through the slides.

Recording

The recording of the webinar is posted on our YouTube channel:

The video is divided into chapters and from the video description you can jump right into the topic of your interest.

My favorite moments:

If you have a question, leave a comment on YouTube and we will get back to you.

Slides

You can download the slides here.

We received a lot of questions during the webinar. So many of them were really good and relevant. I decided to make a new version of the slide deck which answers many of the questions you asked.

I would like to thank all that asked a question. It provides us with valuable feedback and enables us to make even better material for you.

What’s Next?

Mike Dietrich and I will host episode 17 of our Virtual Classroom on Thursday June 22, 16:00 CEST:

From SR to Patch – Insights into the Oracle Database Development Process

Have you ever wondered why a bug fix hasn’t been included in the next Release Update? Or why somebody from Oracle Support asked you to upgrade to Oracle Database 23c, even though Oracle Database 23c is not yet available for your environment? We’ll explain this and much more. From SR to Patch describes the whole process, from you opening a service request for a defect to the final delivery of a fix. This is your rare chance to get insights into the Oracle Database development process from insiders. And even if you are a long-time Oracle expert, you will still learn something new that you were not aware of.

Sign up and put a mark in your calendar.

And After That?

The autumn will be quite busy with Oracle CloudWorld and all the preparations. But I hope we will be able to make another webinar towards the end of the year.

If you have any ideas or a request for a topic that we should cover, please leave a comment and we will take it into consideration.

Did You Sign Up For Oracle CloudWorld Yet?

Oracle CloudWorld takes place in Las Vegas, September 18-21. Before the summer holiday season kicks in, you should get a ticket and mark your calendar for the coolest Oracle event of the year.

This year will be even better!

Our Sessions and Labs

I am very excited about the plans we have for Oracle CloudWorld. My team (like most other teams at Oracle) works really hard to prepare new content for you. We will have brand-new sessions and hands-on labs ready.

Currently, this is our confirmed session:

And our hands-on labs:

And we will repeat our very popular no-slide zone on patching GI and database:

Is that it? Of course not; we have more in the pipeline. It’s just waiting for a final confirmation. Keep an eye out on the session catalog; we update it constantly.

Master Classes

In addition, we have something new and special for you this year – a 4-hour top-notch learning experience.

  • Oracle Database Upgrade and Performance Tuning Master Class

You’ll leave this master class knowing how to use all the tools in your toolbox to ensure great database performance after your upgrade.

We have taken as much knowledge as we can and compiled it into an intense 4-hour learning experience. It’s our course, and we will be there to train you. This is a unique opportunity.

Check out the details and other training classes at the Pre-Event Training page.

Free Digital Training and Certification

Registering for Oracle CloudWorld gives you access to free digital training on Oracle Cloud Infrastructure and Cloud Apps. After that, you can take free certification exams as well. All part of the deal.

You can start your learning experience now – and then complete it with sessions at Oracle CloudWorld. This is a great opportunity.

Want More?

Then there’s also:

  • CloudWorld Party – who’s gonna play this year?
  • Demogrounds – see all the cool stuff in action
  • Exhibition area – wander around and get inspired
  • Networking – talk to your peers and grow your career
  • LiveLabs – try and learn
  • Events – cool stuff waiting to happen
  • Las Vegas – it’s a unique and crazy place, worth a visit (although I do miss San Francisco)

Book Your Ticket

If you need to convince your manager, here’s some ammo for that talk.

The sooner you book your ticket, the cheaper it is.

Fun Facts 2022

  • 1,304 customer sessions with 855 different customers
  • 168 partners sponsoring and exhibiting in the CloudWorld Hub
  • 7,233 hours of live-stream and on-demand video
  • 25,000+ liters of coffee served
  • 2,250 cake pops consumed at the CloudWorld party (I won’t say how many I ate)
  • 6 tons of leftover food and materials donated to local organizations
  • 100% of energy sourced from renewables such as solar and wind
  • 0 plastic water bottles distributed to CloudWorld attendees

See you in Las Vegas

Is Oracle Java Virtual Machine (OJVM) Used In My Database?

In Oracle Database, you can store and execute Java code directly in the database using Oracle Java Virtual Machine (OJVM). You can take your Java code, put it into the database and execute it. Now your code sits right next to the data, and you can get some amazing benefits.

But it comes at a price:

It’s really great if you are using OJVM; it has awesome functionality. But if you don’t, consider removing it. When I talk to customers, the big question is often:

How do I know if it is in use in my database?

Setting Things Straight

First, this blog post is about using OJVM: Java code in the database. Often, people mistakenly think they use OJVM because they have a Java application connecting to the database. They don’t. Having a Java application connecting to the database and using OJVM are two completely different things.

OJVM is often also referred to as:

  • JAVAVM
  • JServer JAVA Virtual Machine

Is OJVM Installed?

You can check if the OJVM component is installed in the database:

SQL> select con_id, comp_id, comp_name, version, status
     from cdb_registry
     where comp_id='JAVAVM';

If the query returns no rows, then OJVM is not installed.

Has Someone Added Java Code?

You can check if someone has added Java code to the database:

SQL> select con_id, owner, oracle_maintained, status, count(*) 
     from cdb_objects
     where object_type like '%JAVA%' 
     group by con_id, owner, oracle_maintained, status;

The column ORACLE_MAINTAINED tells you whether Oracle or a regular user added the Java code. If a user has added Java code, you can use the columns CREATED and LAST_DDL_TIME to find out when it happened. This might help your investigation.
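
For instance, a sketch that narrows the query down to user-added Java objects and shows when they changed:

SQL> select con_id, owner, object_name, created, last_ddl_time
     from cdb_objects
     where object_type like '%JAVA%'
     and oracle_maintained = 'N';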

Is Someone Using OJVM Right Now?

You can query x$kglob to determine the use of OJVM since the last startup. Refer to RAC Rolling Install Process for the "Oracle JavaVM Component Database PSU/RU" (OJVM PSU/RU) Patches (Doc ID 2217053.1) for details.

The MOS note also shows how to identify which sessions use OJVM.

OJVM Dependencies

Be advised that the following components in Oracle Database depend on OJVM. If you use any of them, you can’t remove OJVM:

  • Spatial Data Option (SDO)
  • Oracle XDK (XDK)
  • Oracle Multimedia (ORDIM)

Conclusion

Oracle Database is a converged database. You have so many great features directly available in the database. You can do many cool things with them – including OJVM. But if you are not using OJVM or any dependent components in Oracle Database, you can remove OJVM. It will save you time during patch installation and upgrades.

I used an example from oracle-base.com. Visit the article for all the details.

Appendix

Further Reading

Test Data

conn / as sysdba

--Query the current status
select comp_id, comp_name, version, status
from dba_registry
where comp_id='JAVAVM';

select con_id, owner, oracle_maintained, status, count(*) 
from cdb_objects
where object_type like '%JAVA%' 
group by con_id, owner, oracle_maintained, status;

--Shows status in current container and on current instance
--since last startup (MOS Doc ID 2217053.1)
select count(*) from x$kglob where KGLOBTYP = 29 OR KGLOBTYP = 56;

--Create user and small java code 
create tablespace appts;
create user appuser identified by appuser;
alter user appuser default tablespace appts;
grant dba to appuser;
conn appuser/appuser
--Thanks to oracle-base.com for a good example
--For further details:
--https://oracle-base.com/articles/8i/jserver-java-in-the-database
CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "Mathematics" AS
import java.lang.*;
import java.sql.*;
import oracle.sql.*;
import oracle.jdbc.driver.*;

public class Mathematics
{
  
  public static int addNumbers (int Number1, int Number2)
  {
    try
    {
      int iReturn = -1;
      
      // Connect to the database
      Connection conn = null;
      OracleDriver ora = new OracleDriver(); 
      conn = ora.defaultConnection(); 
      
      // Run the query and fetch the result
      Statement statement = conn.createStatement();
      ResultSet resultSet = statement.executeQuery("SELECT " + Number1 + " + " + Number2 + " FROM dual");
      if (resultSet.next())
      {
        iReturn = resultSet.getInt(1);
      }
      resultSet.close();
      statement.close();
      conn.close();

      return iReturn;
    }
    catch (Exception e)
    {
      return -1;
    }
  } 

};
/
show errors java source "Mathematics"

--Query the current status
conn / as sysdba
select con_id, owner, oracle_maintained, status, count(*) 
from cdb_objects
where object_type like '%JAVA%' 
group by con_id, owner, oracle_maintained, status;

select comp_id, comp_name, version, status
from dba_registry
where comp_id='JAVAVM';

select count(*) from x$kglob where KGLOBTYP = 29 OR KGLOBTYP = 56;

--Expose java code
conn appuser/appuser
CREATE OR REPLACE FUNCTION AddNumbers (p_number1  IN  NUMBER, p_number2  IN  NUMBER) RETURN NUMBER
AS LANGUAGE JAVA 
NAME 'Mathematics.addNumbers (int, int) return int';
/

--Query the current status
conn / as sysdba
select con_id, owner, oracle_maintained, status, count(*) 
from cdb_objects
where object_type like '%JAVA%' 
group by con_id, owner, oracle_maintained, status;

select comp_id, comp_name, version, status
from dba_registry
where comp_id='JAVAVM';

select count(*) from x$kglob where KGLOBTYP = 29 OR KGLOBTYP = 56;

--Use Java code
conn appuser/appuser
SELECT addNumbers(1,2)
FROM   dual;

--Check which sessions have actively used Java since last startup
--Refer to MOS Doc ID 2217053.1 for the query

Why Does Data Pump Need a Read Write Tablespace

The other day I had to export data using Data Pump and ran into an error:

expdp appuser/appuser ...

ORA-31626: job does not exist
ORA-31633: unable to create master table "APPUSER.SYS_EXPORT_SCHEMA_01"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1163
ORA-01647: tablespace 'APPTS' is read-only, cannot allocate space in it
ORA-06512: at "SYS.KUPV$FT", line 1056
ORA-06512: at "SYS.KUPV$FT", line 1044

APPTS is the name of the default tablespace of the exporting user, APPUSER. It appears that Data Pump requires that the default tablespace of the exporting user is in read write mode. Why is that?

The Control Table

To keep track of an export or import job, Data Pump writes information to a so-called control table. The control table is a regular table stored in the database.

Data Pump creates the control table in the default tablespace of the user executing the Data Pump job. It is not possible to store the control table in a different tablespace.

At the end of an export, as the very last thing, Data Pump exports the control table to the dump file. Similarly, at the start of an import, as the very first thing, Data Pump imports the control table.

Normally, you don’t see the control table in the database because Data Pump drops it when a job ends. You can change that by setting KEEP_MASTER=YES. This allows you to query the control table. The name of the control table is the same as the Data Pump job:

SQL> select * 
     from <executing_user>.<name_of_data_pump_job>

If you change the name of the Data Pump job using JOB_NAME, you can alter the name of the control table.

Previously, the control table was called the master table. That’s why the parameter KEEP_MASTER is named as it is.
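
For example (a sketch with made-up file and job names):

expdp appuser/appuser schemas=appuser dumpfile=app.dmp job_name=MYJOB keep_master=yes

After this job finishes, the control table remains in the database as APPUSER.MYJOB.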

Learn more about the control table in the webinar Data Pump Extreme – Deep Dive with Development.

Other Requirements

So, if Data Pump must create a table in the database, does that mean there are other requirements?

Yes, it does. Here is a list of the requirements:

  • Database must be open in read write mode
  • Executing user must have CREATE TABLE privilege
  • Executing user must have a quota on its default tablespace
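
If the export fails like mine did, a DBA can meet the requirements like this (a sketch reusing the names from the error above):

SQL> alter tablespace appts read write;
SQL> grant create table to appuser;
SQL> alter user appuser quota unlimited on appts;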

Exporting From a Standby Database

Since there is a requirement that the database must be open in READ WRITE mode, it means that you can’t use Data Pump to export from a standby database. A standby database is always either MOUNTED or OPEN READ ONLY.

Workarounds

  1. If you have a standby database, you can temporarily convert it into a snapshot standby database and perform the export.
  2. Perform the import directly over a database link using the NETWORK_LINK option. See My Oracle Support note How To Use DataPump Export (EXPDP) To Export From Physical Standby Database (Doc ID 1356592.1) for details. Thanks to Franck Pachot for the tip.

Option 1 is preferable since Data Pump in network mode has a few restrictions.

How to use Transportable Tablespace with TDE Tablespace Encryption

You can use transportable tablespace to migrate your data between databases. It is a great way of moving your data. How does it work together with TDE Tablespace Encryption?

It depends on which kind of migration you are planning. The endian format of the source and target platforms (or operating system) plays a vital role.

First, you determine the endian format of the source and target database:

SQL> select platform_name, endian_format from v$transportable_platform;
  • If the endian formats are identical, you will perform a same-endian migration.
  • If the endian formats differ, you will perform a cross-endian migration.

Same-endian Migration of TDE Encrypted Tablespaces

It is supported to use transportable tablespace when you migrate to a different platform as long as the endian format does not change.

Oracle Database uses a two-tier key architecture which consists of two encryption keys:

  • Master Encryption Key (MEK)
  • Tablespace Encryption Key (TEK)

The TEK is stored in the tablespace itself. When you use transportable tablespace, you copy the data files, and thus the TEK remains the same. The data in the tablespace stays encrypted with the same TEK during the entire migration.

But what about the MEK? It is required to get access to the TEK.

Option 1: Use ENCRYPTION_PASSWORD parameter

  1. During Data Pump transportable tablespace export, you specify an ENCRYPTION_PASSWORD:

    expdp ... encryption_password=<a_strong_and_secure_password>
    

    The encryption password is not the MEK of the source but a special password you choose for the migration only.

  2. On import, you specify the encryption password:

    impdp ... encryption_password=<a_strong_and_secure_password>
    

A benefit of this option is that the source and target databases are encrypted using different MEKs. You can query the target database and verify that no new MEK was added. The target database continues to use its own MEK:

SQL> select * from v$encryption_keys;

According to the documentation, this is the recommended option.

Option 2: Import Master Encryption Key

  1. You start the Data Pump transportable tablespace export.
  2. In the source database, you export the source database MEK:
    SQL> administer key management export keys 
         with secret "<another_strong_and_secure_password>"
         to '/home/oracle/secure_location/exported-keys'
         force keystore identified by "<source_keystore_password>";
    
  3. You import the source database MEK into the target database:
    SQL> administer key management import keys 
         with secret "<another_strong_and_secure_password>"
         from '/home/oracle/secure_location/exported-keys'
         force keystore identified by "<target_keystore_password>"
         with backup;
    
  4. You start the Data Pump transportable tablespace import.

By querying v$encryption_keys, you can see that another key has been added to the database.

You can read more about export and import of MEKs in the documentation.

Option 3: Oracle Key Vault

If you are using Oracle Key Vault, it’s very easy to allow the target database to access the source database master encryption key.

When you perform the Data Pump transportable tablespace import in the target database, it will already have access to the encryption keys that protect the tablespaces. Nothing further is needed.

What About Rekeying?

If you make the source database encryption key available to the target database, consider whether you also want to perform a rekey operation. This applies to options 2 and 3.

ORA-39396

If you use options 2 or 3, you will receive the below warning during Data Pump transportable tablespace export:

ORA-39396: Warning: exporting encrypted data using transportable option without password

This is expected behavior:

This warning points out that in order to successfully import such a transportable tablespace job, the target database wallet must contain a copy of the same database master key used in the source database when performing the export. Using the ENCRYPTION_PASSWORD parameter during the export and import eliminates this requirement.

Cross-endian Migration of TDE Encrypted Tablespaces

If your database is encrypted, you must use the newer M5 migration method. It supports encrypted tablespaces.

The older XTTS v4 doesn’t support encrypted tablespaces.

If you have encrypted tablespaces and must use XTTS v4:

  1. Decrypt the tablespaces
  2. Migrate the tablespaces using transportable tablespace
  3. Re-encrypt the tablespaces
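If you go down that route, the decrypt and re-encrypt steps could look like this sketch, assuming a tablespace named USERS and online tablespace conversion (available since Oracle Database 12.2):

```sql
-- Step 1: decrypt in the source database before migrating with XTTS v4
alter tablespace users encryption online decrypt;

-- Step 3: re-encrypt in the target database after the migration
alter tablespace users encryption online using 'AES256' encrypt;
```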


How to Clone Oracle Home Without Using Clone.pl

Cloning Oracle Homes is a convenient way of getting a new Oracle Home. It’s particularly helpful when you need to patch out-of-place.

A popular method for cloning Oracle Homes is to use clone.pl. However, in Oracle Database 18c, it is deprecated.

[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image. 
You must use runInstaller script available to perform the Software Only install. For more details on image based installation, refer to help documentation.

This Is How You Should Clone Oracle Home

You should use runInstaller to create golden images instead of clone.pl. A golden image is just another name for the zip file containing the Oracle Home.

How to Create a Golden Image

  1. First, only create a golden image from a freshly installed Oracle Home. Never use an Oracle Home that is already in use. As soon as you start to use an Oracle Home you taint it with various files and you don’t want to carry those files around in your golden image. The golden image must be completely clean.

  2. Then, you create a directory where you can store the golden image:

    export GOLDIMAGEDIR=/u01/app/oracle/goldimages
    mkdir -p $GOLDIMAGEDIR
    
  3. Finally, you create the golden image:

    $ORACLE_HOME/runInstaller -createGoldImage \
       -destinationLocation $GOLDIMAGEDIR \
       -silent
    
    • If you need to exclude files, you can use -exclFiles. It accepts a wildcard, so, for example, you can specify -exclFiles network/admin* to exclude all files and subdirectories in a directory.
  4. The installer creates the golden image as a zip file in the specified directory. The name of the zip file is unique and printed on the console.

Check the documentation for further details.

How to Deploy from a Golden Image

  1. First, you create a directory for the new Oracle Home and unzip the golden image:
    export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_2
    mkdir -p $ORACLE_HOME
    cd $ORACLE_HOME
    unzip -q /u01/app/oracle/goldimages/my_golden_image.zip
    
  2. Next, you need to install the Oracle Home. You can do it interactively:
    cd $ORACLE_HOME
    ./runInstaller
    
    Or, you can do it in silent mode:
    cd $ORACLE_HOME
    ./runInstaller -ignorePrereq -waitforcompletion -silent \
    ...
    

That’s it!

If you need to read further on silent installations, check out oracle-base.com.

The Oracle Database 23ai documentation has a good example of using the new procedure.

But What About oraInst.loc?

One of the differences between clone.pl and runInstaller is that the latter does not include the file $ORACLE_HOME/oraInst.loc.

This is intentional because the file is not needed for golden image deployment. runInstaller recreates the file when you install the golden image.

One of the things listed in oraInst.loc is the location of the Oracle inventory. Either runInstaller finds the value itself, or you can specify it on the command line using INVENTORY_LOCATION=<path-to-inventory>.
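As an illustration, this is the typical shape of oraInst.loc and one way to read the inventory location from it. The file contents and paths below are examples, not taken from a real system (on Linux, the real file usually lives in /etc/oraInst.loc):

```shell
# Write a sample oraInst.loc purely for illustration
cat > /tmp/oraInst.loc <<'EOF'
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
EOF

# Extract the inventory location, e.g. to pass on as INVENTORY_LOCATION=<path>
inventory_loc=$(grep '^inventory_loc=' /tmp/oraInst.loc | cut -d= -f2)
echo "$inventory_loc"
```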

You can read more about oraInst.loc in the documentation or MOS note Significance of oraInst.loc When Installing Oracle Products and Applying Patches (Doc ID 418537.1).

Naming Your Golden Image

If you want your zip file (the golden image) to have a specific name, you have two options:

  1. Rename the zip file after executing runInstaller -createGoldImage.
  2. Use the secret parameter -name, which allows you to specify a name for the zip file. To name the zip file my_golden_image.zip:
    $ORACLE_HOME/runInstaller -createGoldImage \
       ... \
       -name my_golden_image.zip
    

Why Is Clone.pl Deprecated?

Previously, many tools existed to do the same thing: clone an Oracle Home. Now, we have consolidated our resources into one tool.

From now on, there is one method for cloning Oracle Home. That is easier for everyone.

In addition, runInstaller has some extra features that clone.pl doesn’t. For instance:

  • Better error reporting
  • Precheck run
  • Multimode awareness
  • Ability to apply patches during installation

When Will It Be Desupported?

I don’t know. Keep an eye on the Upgrade Guide, which contains information about desupported features.

However, I can see in the Oracle Database 23c documentation that clone.pl is still listed. But that’s subject to change until Oracle Database 23c is released.

Pro Tips

  • Remember, you can install patches to your golden image after it has been unzipped – as part of the installation.

  • If you clone Oracle Homes because you are doing out-of-place patching, you are on the right track. I strongly recommend always using out-of-place patching. Also, when you patch out-of-place, remember to move all the database configuration files.

  • If you clone Oracle Homes, you keep adding stuff to the same Oracle Home. The more patches you install over time, the more the Oracle Home increases in size. OPatch has functionality to clean up inactive patches from an Oracle Home. Consider running it from time to time using opatch util deleteinactivepatches. Mike Dietrich has a really good blog post about it. I also describe it in one of our previous webinars.

Appendix

Thanks to Anil Nair for pointing me in the right direction.


Why Do User ID Columns Change Data Type When You Recreate a Queue Table

Advanced Queueing offers great queueing functionality built into Oracle Database. When you want to create a queue, Oracle Database will create several supporting objects. Depending on how you use Advanced Queueing and Oracle Database, these objects might change.

Let’s investigate a case that came up during a project.

The Situation

After I recreated a queue table, some of the underlying objects changed definition. Specifically, columns that apparently contained user information changed from a NUMBER to a VARCHAR2.

Object Name            Column Name    Data Type Before   Data Type After
AQ$<queue_table>       ENQ_USER_ID    NUMBER             VARCHAR2
AQ$<queue_table>       DEQ_USER_ID    NUMBER             VARCHAR2
AQ$_<queue_table>_F    ENQ_UID        NUMBER             VARCHAR2
AQ$_<queue_table>_F    DEQ_UID        NUMBER             VARCHAR2
AQ$_<queue_table>_H    DEQUEUE_USER   NUMBER             VARCHAR2
AQ$_<queue_table>_L    DEQUEUE_USER   NUMBER             VARCHAR2

The column data type changed from NUMBER to VARCHAR2. I created the queue tables using DBMS_AQADM and Oracle Database created the AQ$ objects recursively.

Is this something to be worried about?

Why Does the Data Type Change

Advanced Queueing has been around for a while and it has evolved. To control the behavior of Advanced Queueing, you can use the compatible parameter when you create queue tables.

In Oracle Database 19c, you can set the compatible parameter on a queue table to one of the following:

  • 8.0
  • 8.1
  • 10.0

When you create a queue table, it is an optional parameter:

SQL> begin
        dbms_aqadm.create_queue_table (
           ...
           compatible => '10.0'
        );
     end;
     /

If you don’t explicitly specify a compatible setting, it is derived from the database instance parameter compatible.

You can find the compatible setting of a queue table using:

SQL> select queue_table, compatible from user_queue_tables;

In the documentation, you can find information on which functionality gets enabled by which compatible setting. In this case, the following is of interest:

Mixed case (upper and lower case together) queue names, queue table names, and subscriber names are supported if database compatibility is 10.0

When you set compatible on the queue table to 10.0 there is better support for certain user names (subscriber names), and that requires a different data type on the underlying objects.

How to Solve the Problem

There are two options:

  1. You can recreate the queue tables using the same compatible setting. You start by querying USER_QUEUE_TABLES to find the compatible setting. Then, use DBMS_AQADM.CREATE_QUEUE_TABLE to recreate the queue and remember to specify the correct compatible setting.
  2. You can adopt the newest compatible setting on your queues. The underlying objects change, and you can use all the features of Advanced Queueing. You should test your application and ensure it works with the new setting.
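For option 1, using the queue table from the test case in the appendix as an example, a sketch could look like this (note that DBMS_AQADM.DROP_QUEUE_TABLE with force => true also drops the queues in the table):

```sql
--find the current compatible setting
select compatible from user_queue_tables where queue_table = 'CAR_QUEUE_TAB8';

--recreate the queue table with the same compatible setting
begin
   dbms_aqadm.drop_queue_table (
      queue_table => 'APPUSER.CAR_QUEUE_TAB8',
      force       => true);
   dbms_aqadm.create_queue_table (
      queue_table        => 'APPUSER.CAR_QUEUE_TAB8',
      queue_payload_type => 'APPUSER.CAR_TYPE',
      compatible         => '8.0');
end;
/
```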

I recommend option 2. It uses the default setting for compatible. The default setting has been around for many years, so it is thoroughly tested, and I assume that most customers use this configuration.

You Can Migrate Old Queue Tables

You can migrate old queues in your Oracle Database. Any queues that don’t have compatible set to 10.0 can be migrated to the newest compatible setting:

SQL> begin
        dbms_aqadm.migrate_queue_table(..., compatible => '10.0.0');
     end;
     /

Now you can start to use all the features in Advanced Queueing.

You can query the data dictionary to find old queues in your Oracle Database:

SQL> select queue_table, compatible
     from user_queue_tables
     where compatible != '10.0.0';

Appendix

Thanks to oracle-base.com for supplying the starting point for my test case.

Test Case

conn / as sysdba
--create user and grant privileges
drop user appuser cascade;
create user appuser identified by appuser;
grant dba to appuser;

conn appuser/appuser
--type used for queue payload
create type car_type as object (
  name            varchar2(20),
  color           varchar2(10)
);
/

--get the database instance compatible setting
--used to derive the queue table compatible setting
--if not specified
select value from v$parameter where name='compatible';

begin
   --create queue table without explicit 'compatible'
   --compatible should be 10.0.0
   dbms_aqadm.create_queue_table (
      queue_table            => 'APPUSER.CAR_QUEUE_TAB',
      queue_payload_type     => 'APPUSER.CAR_TYPE');
   --create new queue table with lower compatible setting
   dbms_aqadm.create_queue_table (
      queue_table            => 'APPUSER.CAR_QUEUE_TAB8',
      queue_payload_type     => 'APPUSER.CAR_TYPE',
      compatible => '8.0');
end;
/

--verify queue table compatible setting
select queue_table, compatible from user_queue_tables;
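To see the data type change from the table above, you could compare the user columns of the two AQ$ views in the data dictionary; a sketch:

```sql
--compare ENQ_USER_ID/DEQ_USER_ID across the two queue tables
--expect VARCHAR2 for CAR_QUEUE_TAB (compatible 10.0.0)
--and NUMBER for CAR_QUEUE_TAB8 (compatible 8.0)
select table_name, column_name, data_type
from   user_tab_columns
where  table_name in ('AQ$CAR_QUEUE_TAB', 'AQ$CAR_QUEUE_TAB8')
and    column_name in ('ENQ_USER_ID', 'DEQ_USER_ID')
order  by table_name, column_name;
```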

Data Pump and a Norwegian Sneak Peek

Earlier this month, the team and I presented our webinar Data Pump – Best Practices and Real World Scenarios. Over the years, we have accumulated information from many different customer projects, and we wanted to compile all that information into a webinar. You can watch the result on YouTube or flip through the slides.

On YouTube, the recording is divided into pieces, so you can easily dive right into the subject that has your particular interest.


Next month, in May, we are hosting another webinar about Release and Patching Strategies for Oracle Database 23c. Sign up now and secure your seat. It is, by the way, the 16th webinar in our series. If you want, you can watch the previous ones on demand.

The Norwegian Sneak Peek

Next week, the Norwegian Oracle User Group is having its annual spring conference. I am attending, and I am really excited. The agenda and line-up are really amazing. It’s not too late to sign up.

I am presenting at the conference as well. The slides are ready and uploaded to my blog. If you want, you can take a sneak peek.

See you in Oslo?