How to Upgrade Encrypted Oracle Database and Move to New Server

Is AutoUpgrade with TDE only possible for an in-place upgrade (same server)? Is there any way to do it out of place (new DB home on a different server) with AutoUpgrade? It seems like target_home has to be specified.

A reader asked that question on my blog.

The answer is yes; you can upgrade an Oracle Database and move it to a new server at the same time. This post covers upgrading a non-CDB or an entire CDB that uses Transparent Data Encryption (TDE) tablespace encryption.

Move to New Server and Transparent Data Encryption

When you upgrade your Oracle Database, you often want to move to new hardware. AutoUpgrade fully supports this use case. Mike Dietrich mentions this in his blog post and video.

When you upgrade an encrypted non-CDB or entire CDB, the database must have an auto-login keystore.

There are no further requirements.

The Instructions

I am using the DB12 database from our hands-on lab. You can provision a lab and try it out yourself (for free). See the appendix for instructions on how to encrypt the DB12 database.

Old Server

  1. Always use the latest version of AutoUpgrade.
  2. Create a config file:
    upg1.source_home=/u01/app/oracle/product/12.2.0.1
    upg1.sid=DB12
    upg1.target_version=19
    
    • I don’t specify target_home because it does not exist on the old server. Instead, I specify target_version, so AutoUpgrade knows which checks to execute.
  3. Check the database for upgrade readiness:
    java -jar autoupgrade.jar -config DB12.cfg -mode analyze
    
  4. Downtime starts.
  5. Run the preupgrade fixups:
    java -jar autoupgrade.jar -config DB12.cfg -mode fixups
    
  6. Perform a clean shutdown of the database:
    shutdown immediate
    

New Server

There is only one server in the lab environment, so I can’t physically move to a new server. But by moving the instance manually to the new home, I can simulate the same behavior.

  1. Move SPFile and password file to the new Oracle home on the new server. The below instructions work in the hands-on lab only:
    export OLD_ORACLE_HOME=/u01/app/oracle/product/12.2.0.1
    export NEW_ORACLE_HOME=/u01/app/oracle/product/19
    export ORACLE_SID=DB12
    cp $OLD_ORACLE_HOME/dbs/spfile$ORACLE_SID.ora $NEW_ORACLE_HOME/dbs
    cp $OLD_ORACLE_HOME/dbs/orapw$ORACLE_SID $NEW_ORACLE_HOME/dbs
    
  2. Register the instance in /etc/oratab:
    export NEW_ORACLE_HOME=/u01/app/oracle/product/19
    export ORACLE_SID=DB12   
    cp /etc/oratab /tmp/oratab
    sed '/^'"$ORACLE_SID"':/d' /tmp/oratab > /etc/oratab
    echo "$ORACLE_SID:$NEW_ORACLE_HOME:N" >> /etc/oratab
    cat /etc/oratab
    
    • Use srvctl as well if you have Oracle Grid Infrastructure.
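If you want to verify the oratab edit logic before touching /etc/oratab, you can dry-run it against a scratch copy. This sketch uses made-up entries and a temporary file, so it is safe to run anywhere:

```shell
# Dry-run of the oratab edit against a scratch file instead of /etc/oratab
ORACLE_SID=DB12
NEW_ORACLE_HOME=/u01/app/oracle/product/19
TAB=$(mktemp)
printf '%s\n' \
  'DB12:/u01/app/oracle/product/12.2.0.1:N' \
  'OTHER:/u01/app/oracle/product/19:N' > "$TAB"
# Remove the old entry for the SID, then append the new one
sed -i '/^'"$ORACLE_SID"':/d' "$TAB"
echo "$ORACLE_SID:$NEW_ORACLE_HOME:N" >> "$TAB"
cat "$TAB"
rm -f "$TAB"
```

Other entries (here OTHER) survive untouched; only the line for your SID is replaced.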
  3. Move the database files (control files, redo logs, and data and temp files) to the new server.
    • If you need to change any of the paths, see the appendix.
    • Alternatively, unmount the storage from the old server and mount it on the new one.
  4. I want to use the new wallet_root parameter to configure TDE. I copy the keystore files to a new location that matches the naming requirements of wallet_root:
    export OLD_KEYSTORE=$ORACLE_BASE/admin/$ORACLE_SID/wallet
    export NEW_KEYSTORE=$ORACLE_BASE/admin/$ORACLE_SID/wallet/tde
    mkdir -p $NEW_KEYSTORE
    cp $OLD_KEYSTORE/cwallet.sso $NEW_KEYSTORE
    cp $OLD_KEYSTORE/ewallet.p12 $NEW_KEYSTORE
    
    • You should consider moving any backup keystore files as well.
  5. I start a new instance of the database in the new Oracle home and configure TDE using the new parameters:
    export ORACLE_HOME=/u01/app/oracle/product/19
    export PATH=$ORACLE_HOME/bin:$PATH
    sqlplus / as sysdba<<EOF
       startup nomount
       alter system set wallet_root='$ORACLE_BASE/admin/$ORACLE_SID/wallet' scope=spfile;
       shutdown immediate
       startup nomount
       alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE' scope=both;
    EOF
    
  6. Start the instance in upgrade mode:
    sqlplus / as sysdba<<EOF
       alter database mount;
       alter database open upgrade;
    EOF
    
  7. Create an AutoUpgrade config file:
    upg1.target_home=/u01/app/oracle/product/19
    upg1.sid=DB12
    
  8. Start AutoUpgrade in upgrade mode:
    java -jar autoupgrade.jar -config DB12.cfg -mode upgrade
    

That’s it! I just upgraded my encrypted Oracle Database and moved it to a new server.

Appendix

Keystore Type

You must have an auto-login database keystore to upgrade it on the new server. Check the keystore type in the source database:

select wrl_type, wallet_type from v$encryption_wallet;
  • AUTOLOGIN – You can copy the auto-login keystore file (cwallet.sso) from the old to the new server.
  • LOCAL_AUTOLOGIN – The keystore file is bound to the old server. You must create a new auto-login keystore on the new server.

To create a new local auto-login keystore:

startup mount
administer key management create local auto_login keystore ...;
shutdown immediate
startup upgrade

Hands-on Lab

If you want to try the procedure in our hands-on lab, you can use these instructions to encrypt the DB12 database.

  1. Add TDE configuration to sqlnet.ora:
echo 'ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=$ORACLE_BASE/admin/$ORACLE_SID/wallet)))' >> $ORACLE_HOME/network/admin/sqlnet.ora
  1. Create keystore directory:
mkdir -p $ORACLE_BASE/admin/$ORACLE_SID/wallet
  1. Create the keystore and complete the TDE configuration:
sqlplus / as sysdba <<EOF
   --Restart to re-read sqlnet.ora with keystore setting
   shutdown immediate
   startup
   
   --Configure TDE
   ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '$ORACLE_BASE/admin/$ORACLE_SID/wallet' IDENTIFIED BY "<tde-keystore-pwd>";
   ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "<tde-keystore-pwd>";
   ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "<tde-keystore-pwd>" WITH BACKUP;
   ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '$ORACLE_BASE/admin/$ORACLE_SID/wallet' IDENTIFIED BY "<tde-keystore-pwd>";

   --Create data
   create tablespace users2 encryption encrypt;
   grant dba to appuser identified by oracle;  
   create table appuser.t1 tablespace users2 as select * from all_objects;
EOF

Locations

In the instructions, I am using the same paths for the database files. If you need to change the location of the control file or redo logs, then it might be easier to create a PFile on the source and use that instead of the SPFile.

If you need to change the location of data or temp files, it might be easier to re-create the control file. In this case, run alter database backup controlfile to trace as '/tmp/ctl.txt' on the source database. Then edit the trace file to generate the create controlfile statement.
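As a sketch, the control file re-creation could look like this (the file names and paths are assumptions, not from the lab):

```sql
-- On the source database: dump a CREATE CONTROLFILE template to a text file
alter database backup controlfile to trace as '/tmp/ctl.txt';

-- Edit /tmp/ctl.txt and adjust the data file paths to the new locations, e.g.:
-- CREATE CONTROLFILE REUSE DATABASE "DB12" RESETLOGS ...
--    DATAFILE '/u02/oradata/DB12/system01.dbf', ...

-- On the new server: start the instance in NOMOUNT and run the edited script
```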

How to Upgrade Oracle Database and Replace the Operating System

A reader asked me for advice on upgrading Oracle Database and replacing the underlying operating system.

  • Currently on Oracle Database 12.1.0.2
  • Currently on Windows Server 2012
  • Upgrade to Oracle Database 19c
  • Move to new servers with Microsoft Windows Server 2022

What’s the recommended approach for transitioning to Oracle 19c on Windows 2022?

Oracle Data Guard

My first option is always Oracle Data Guard. It is often a superior option. You move the entire database, and the only interruption is a Data Guard switchover.

In this case, where the reader needs to replace the operating system, the first thing to check is platform certifications. Always check platform certifications on My Oracle Support. It has the most up-to-date information.

Here is an overview of the platform certification for Oracle Database 12.1.0.2 and 19c.

Oracle Database 12.1.0.2   Oracle Database 19c
Windows Server 2008        –
Windows Server 2008 R2     –
Windows Server 2012        –
Windows Server 2012 R2     Windows Server 2012 R2
–                          Windows Server 2016
–                          Windows Server 2019
–                          Windows Server 2022

Oracle Database 19c does not support the current platform, Windows Server 2012. Thus, the reader can’t set up a standby database on the new servers and transition via a regular switchover.

Windows Server 2012 R2

Let’s imagine the current servers were using Windows Server 2012 R2. Both database releases support this platform. I would recommend this approach:

  1. Upgrade to Oracle Database 19c on current servers.
  2. Set up new servers with Windows Server 2022.
  3. Create standby database on new server.
  4. Transition to the new servers with a regular Data Guard switchover.

This approach requires two maintenance windows. Yet, it is still my favorite because it is very simple.
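With the Data Guard broker, step 4 could look like this (a sketch; the broker configuration and database names DB12_PRIM and DB12_STBY are assumptions):

```shell
# Assumes a working broker configuration; dgmgrl prompts for the SYS password
dgmgrl sys@DB12_PRIM <<EOF
show configuration;
validate database 'DB12_STBY';
switchover to 'DB12_STBY';
EOF
```

Always run validate database first; it flags anything that would make the switchover fail.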

RMAN Backups

You could also use RMAN and incremental backups. You don’t need much downtime – just the time necessary for a final incremental backup and restore. Like with Data Guard, you bring over the entire database.

RMAN can restore backups from a previous version, and you can use that to your advantage.

  1. Provision new servers with just Oracle Database 19c.
  2. Backup on 12.1.0.2.
  3. Restore and recover the database on the new servers with Oracle Database 19c binaries.
  4. After the final incremental backup, open the new database in upgrade mode and perform the upgrade.
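A minimal sketch of the restore part, assuming the backups are reachable from the new server (channel configuration, catalog, and the final incremental apply are omitted; the backup location is made up):

```shell
# On the new server, using the 19c binaries
rman target / <<EOF
startup nomount;
restore controlfile from '/backup/DB12/ctl.bkp';
alter database mount;
restore database;
recover database;
EOF
# After applying the final incremental backup:
# SQL> alter database open resetlogs upgrade;
```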

We covered this approach in one of our webinars; you can also find details in this blog post.

Move Storage

You can also unmount the storage from the old server, and attach it to the new server.

  1. Run AutoUpgrade in analyze mode to determine upgrade readiness.
  2. Downtime starts.
  3. Run AutoUpgrade in fixup mode to fix any issues preventing the upgrade from starting.
  4. Cleanly shut down the source database.
  5. Move the storage to the new server.
  6. Start the database on the new server in upgrade mode.
  7. Start AutoUpgrade in upgrade mode to complete the upgrade.

This is just a high-level overview. For a real move, there are many more intermediate steps.

Be sure to have a proper rollback plan. You are re-using the data files, and AutoUpgrade in upgrade mode does not create a guaranteed restore point.
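One way to establish a rollback anchor yourself is a guaranteed restore point, created on the source before downtime starts (a sketch; assumes the fast recovery area is configured and you keep the old Oracle home intact):

```sql
-- On the source database, before the clean shutdown:
create restore point before_upg_19c guarantee flashback database;

-- Fallback sketch: mount with the NEW binaries and rewind,
-- then open the database from the OLD Oracle home:
--   startup mount
--   flashback database to restore point before_upg_19c;
--   shutdown immediate
--   (switch to the old home) startup mount; alter database open resetlogs;
```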

Data Pump

Data Pump is also a viable option, especially for smaller, less complex databases. It also enables you to restructure your database, for example:

  • Transform old BasicFile LOBs to SecureFile
  • Implement partitioning
  • Exclude data (for archival)
  • You can import directly into a higher release and even directly into a PDB.

But – the larger the database, the longer the downtime (generally speaking).

When you use Data Pump for upgrades, I recommend using a full database export.
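A minimal full export/import sketch (the directory object, dump file names, and parallel degree are assumptions):

```shell
# On the source (12.1.0.2)
expdp system full=y directory=DATA_PUMP_DIR dumpfile=full_%U.dmp \
      logfile=full_exp.log parallel=4 metrics=yes logtime=all

# On the target (19c), after transferring the dump files
impdp system full=y directory=DATA_PUMP_DIR dumpfile=full_%U.dmp \
      logfile=full_imp.log parallel=4 metrics=yes logtime=all
```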

Full Transportable Export/Import

You can also use transportable tablespaces for upgrades. You can even migrate directly into a PDB on Oracle Database 19c.

The downside of transportable tablespace is that you must copy the data files to the new system.

But often, you can unmount the storage and mount the storage on the new servers. This avoids the cumbersome process of copying the data files to the new system.

Another approach is to combine transportable tablespaces with incremental backups, if you want to lower the downtime needed. This approach leaves the original database untouched, leaving you with a perfect rollback option.
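A sketch of full transportable export/import from 12.1.0.2 into a 19c PDB (paths are assumptions; the user tablespaces must be read-only during the export):

```shell
# On the source (12.1.0.2), with user tablespaces set read-only
expdp system full=y transportable=always \
      directory=DATA_PUMP_DIR dumpfile=ftex.dmp logfile=ftex_exp.log

# On the target 19c PDB, after copying or re-mounting the data files
impdp system full=y directory=DATA_PUMP_DIR dumpfile=ftex.dmp \
      transport_datafiles='/u02/oradata/CDB19/PDB1/users01.dbf' \
      logfile=ftex_imp.log
```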

Oracle GoldenGate

You could also use Oracle GoldenGate. But for most upgrades, it is overkill, partly because of the restrictions and considerations. I see this as a sensible option only if you have very strict downtime or fallback requirements.

Conclusion

What is the best option?

It depends…

This post helps you make the best decision for your organization.

What Happens to Your Oracle Data Guard During Conversion to Multitenant

Oracle Data Guard is an amazing piece of tech. It helps keep your data safe. When you convert to the multitenant architecture, it is crucial that you don't jeopardize your Data Guard configuration.

Follow the below steps to bring along your standby database.

What’s The Problem

When you prepare for multitenant conversion, you prepare two things:

  • Data files – you make the data files consistent by opening the non-CDB in read-only mode.
  • Manifest file – you create an XML file which contains information about the non-CDB.

The manifest file contains information about the data files, including the location. However, the manifest file lists only the location on the primary database. There is no information about the standby database.

When you plug in the non-CDB, the plug-in happens without problems on the CDB primary database. It reads the manifest file and finds the data files.

But what about the CDB standby database? Since the manifest file does not list the file location on the standby host, how can the standby database find the corresponding data files?

The Options

There are two options which you control with the standbys clause on the create pluggable database statement:

  • Enabled recovery:
    • You specify standbys=all, or you explicitly list the standby database in the standbys clause.
    • On plug-in, the CDB standby database must find the data files. How the standby database finds the data files depends on the configuration.
    • The new PDB is protected by Data Guard immediately on plug-in.
    • If the standby database fails to find the data files, recovery stops for the entire CDB. All your PDBs are now unprotected unless you use PDB Recovery Isolation (see appendix).
  • Deferred recovery:
    • You specify standbys=none, or you don’t list the standby database in the standbys clause.
    • On plug-in, the CDB standby notes the creation of the PDB but does not attempt to find and recover the data files.
    • The new PDB is not protected by Data Guard until you provide the data files and re-enable recovery as described in Making Use Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1). Typically, this means restoring all data files to the standby system. The other PDBs in the CDB standby are fully protected during the entire process.

Convert with AutoUpgrade

You must convert with deferred recovery on the CDB standby database. AutoUpgrade uses this approach by default:

upg1.manage_standbys_clause=standbys=none

When AutoUpgrade completes, you must follow the process to restore the data files on the CDB standby database and re-enable recovery.
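The restore process from the MOS note could look roughly like this on the CDB standby (a sketch; the PDB name PDB1 and the service name CDB19_PRIM are assumptions):

```sql
-- Stop redo apply on the CDB standby
alter database recover managed standby database cancel;

-- In RMAN, connected to the standby: fetch the PDB's data files from the primary
-- RMAN> restore pluggable database PDB1 from service CDB19_PRIM;

-- Re-enable recovery of the PDB and restart redo apply
alter session set container=PDB1;
alter pluggable database enable recovery;
alter session set container=CDB$ROOT;
alter database recover managed standby database disconnect from session;
```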

With AutoUpgrade, there is no way to plug in with enabled recovery – not even with the alias trick described below. That trick requires work on the primary and standby systems, and AutoUpgrade is a fully automated process that does not allow you to intervene midway.

If you set manage_standbys_clause to anything but the default in order to plug in with enabled recovery, you will most likely run into problems. Either the data files are missing on the standby system, or they are not at the right SCN. This stops the MRP process in the standby database. Since the MRP process recovers all the other PDBs as well, you break not only the recently added PDB but also all the other PDBs.

Convert Manually

ASM

You can plug in with enabled recovery and reuse the data files on the standby. The standby database searches the OMF location for the data files. ASM does not allow you to manually move files into an OMF location. Instead, you can create aliases in the OMF location as described in Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration (Doc ID 2273304.1). The standby database then follows the plug-in operation.

This option won’t work, if you use the as clone clause on the create pluggable database statement. The clause generates a new GUID and since the GUID is part of the OMF location, you won’t be able to create aliases upfront.

Alternatively, you can plug in with deferred recovery.

OMF in File System

You can plug in with enabled recovery and reuse the data files on the standby. The CDB standby database searches the OMF location for the data files. Either:

  • Move the data files into the OMF location.
  • Create soft links in the OMF location for each data file pointing to the current location.

These options won’t work, if you want to use the as clone clause. The clause generates a new GUID and since the GUID is part of the OMF location, you don’t know the OMF location upfront.

If you set standby_pdb_source_file_directory in the CDB standby database, it looks for the data files in that directory. However, it always copies the data files into the OMF location, even if you specify create pluggable database ... nocopy. Setting standby_pdb_source_file_directory is, however, compatible with the as clone clause.

Alternatively, you can plug in with deferred recovery.

Regular Files

The database uses regular files when db_create_file_dest is empty.

If you plug in with enabled recovery, the CDB standby database expects to find the data files in the exact same location (path and file name) as on the primary database. The location is either the full path from the manifest file or the location specified by create pluggable database ... source_file_directory='<data_file_location>'.

If the data files are in a different location on the CDB standby database, you either:

  • Set db_file_name_convert in your CDB standby database. This changes the name of each of the data files accordingly.
  • Set standby_pdb_source_file_directory in your CDB standby database. When media recovery looks for a specific file during plug-in, it searches this directory instead of the full path from the manifest file.

You can plug in using the as clone clause without problems.

Alternatively, you can plug in with deferred recovery.
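The two parameters mentioned above could be set like this on the CDB standby (a sketch; the paths are assumptions):

```sql
-- Option 1: rename files via a path pattern (static parameter, needs a restart)
alter system set db_file_name_convert='/u02/oradata/','/u03/oradata/' scope=spfile;

-- Option 2: point media recovery at a directory to search during plug-in
alter system set standby_pdb_source_file_directory='/u03/oradata/DB12/' scope=both;
```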

Refreshable Clone PDBs

When you migrate a non-CDB using refreshable clone PDBs, you are using a clone of the non-CDB database. Thus, there are no existing data files on the standby database that you can use.

You can only create a refreshable clone PDB with deferred recovery (standbys=none). After you transition the refreshable clone PDB into a regular, stand-alone PDB using alter pluggable database ... refresh mode none, you must follow the process to restore the data files and re-enable recovery. If you use AutoUpgrade, you must wait until the entire job completes.

Until you have completed the recovery process, the PDB is not protected by Data Guard.

For further information, including how Oracle Cloud Infrastructure makes it easier for you, have a look at Sinan’s blog post.

Important

Whichever method you choose, you must check your Data Guard configuration before going live.

  1. Check the recovery status on all standby databases:

    select name, recovery_status
    from   v$pdbs;
    
  2. Test the Data Guard configuration by performing a switchover.

Don’t go live without checking your Data Guard configuration!

Appendix

PDB Recovery Isolation

PDB Recovery Isolation is a new feature in Oracle Database 21c.

In an Active Data Guard environment, PDB recovery isolation ensures that media recovery of a CDB on the standby is not impacted when one or more PDBs are not consistent with the rest of the CDB.

Source: About PDB Recovery Isolation

If you plug in a database with standbys=all (via a refreshable clone PDB) and the standby database can’t find the data files, PDB recovery isolation kicks in:

  • The standby database disables recovery of the affected PDB.
  • The standby database restores the data files from the primary database.
  • After restore, the standby database re-enables recovery of the PDB.
  • The affected PDB is unprotected until the process is completed.
  • The other PDBs are unaffected by the situation.

PDB Recovery Isolation reduces risk and automates the resolution of the problem.

At the time of writing, it requires a license for Active Data Guard.


Thank You

A big thank you to my valued colleague, Sinan Petrus Toma, for teaching me about PDB recovery isolation.

How to Fix ORA-14519 Conflicting Tablespace Blocksizes During Oracle Data Pump Import

Recently, our team has received multiple questions about an error occurring during an Oracle Data Pump import:

ORA-14519: Conflicting tablespace blocksizes for table : Tablespace <string> block size <number> [partition specification] conflicts with previously specified/implied tablespace <string> block size <number> [object-level default]

Data Pump can’t import a partitioned table if you don’t store the table itself and all partitions in tablespaces with the same block size.

What is the Problem?

In almost all cases, you must store a partitioned table and all its partitions in tablespaces with the same block size. The documentation states:

Use caution when creating partitioned objects in a database with tablespaces of different block sizes. The storage of partitioned objects in such tablespaces is subject to some restrictions. Specifically, all partitions of the following entities must reside in tablespaces of the same block size:

  • Conventional tables
  • Indexes

You find the same restriction in Common Questions and Errors on Partition Operations Involving Tablespaces of Different Block Sizes (Doc ID 1401064.1):

Q: Can I store different partitions of the same table in tablespaces of different block sizes? A: No. For each conventional table (except index-organized tables), all partitions of that table must be stored in tablespaces with the same block size.

In the cases we looked at, Data Pump import recreates the partitioned table as it was defined in the source database. But the definition of the partitioned table is invalid – it spans tablespaces of different block size. The target database rejects the CREATE TABLE statement.

Solution

Fix in Source Database

In My Oracle Support note IMPORT OF PARTITIONED TABLE IN NON DEFAULT BLOCKSIZE TABLESPACE FAILS WITH ORA-14519 (Doc ID 272229.1), the specified solution is to fix the invalid table in the source database. Often, users don't want to change the source database. Such a change might have to go through a change management process, or the user might not be willing to take the risk of changing the source database.

The solution works, but not ideal.

Fix in Target Database

My colleague, Klaus Gronau, came up with a better solution. Fix it during import in the target database. The example below uses a schema-based Data Pump job.

  1. Extract the definition of the offending table and schema:
    impdp ... include="table:""in('MYPARTTAB1')""" \
              include="user:""in('APPUSER')""" \
              include=tablespace_quota \
              sqlfile=myparttab1.sql
    
    • The SQL file also contains objects depending on the table like constraints and indexes.
  2. Edit the SQL file:
    vi myparttab1.sql
    
    • Ensure that the tablespace of the table itself matches the tablespaces used by the partitions. Check the tablespace clauses.
    • Optionally, you can move the constraint and index creation out of the file. After the next Data Pump import, you can add the constraints and indexes. This might speed up the process. You might even change the parallel degree on the index creation to create it faster.
  3. Create the user, quotas, partitioned table and dependent objects using the SQL file.
    @myparttab1.sql
    
  4. Perform the Data Pump import:
    impdp ... table_exists_action=truncate \
              data_options=trust_existing_table_partitions
    
    • The table_exists_action tells Data Pump to truncate any existing tables. The partitioned table is already empty since we just created it.
    • The data_options parameter instructs Data Pump to trust that the definition of the partitioned table matches that stored in the dump file. Data Pump does not perform any validation of the table, which is fine because we just created the table using the definition from the dump file.

Word of advice:

  • This method uses include filters to carve out the offending table. If the dump file contains a table of the same name in a different schema, the filter applies to both tables. The filter does not take the schema into account; it matches all tables named MYPARTTAB1 in any schema.

Transportable Tablespace

If you move data with transportable tablespaces, there is no way to change the table definition on import. You must fix the problem in the source database.

Appendix

Thanks

A big shoutout to my colleague, Klaus Gronau, who provided the test case and solution for this issue.

How Can It Happen?

  • Observe the following test case:

    alter system set db_32k_cache_size=30M scope=both;
    create tablespace users_8 datafile size 50m blocksize 8k;
    create tablespace users_32 datafile size 50m blocksize 32k;
    
    create user appuser identified by "appuser" default tablespace users_8;
    grant connect, create table to appuser;
    alter user appuser quota unlimited on users_8;
    alter user appuser quota unlimited on users_32;
    
    connect appuser/appuser
    
  • The following create table statement fails because of the different tablespace block sizes:

    create table myparttab1 (
       col1 number
    ) tablespace users_8 
    partition by range (col1) (
       partition myparttab1_1 values less than (100001) tablespace users_32 ,
       partition myparttab1_2 values less than (maxvalue) tablespace users_32
    );
    
    ERROR at line 5:
    ORA-14519: Conflicting tablespace blocksizes for table : Tablespace USERS_32
    block size 32768 [partition specification] conflicts with previously
    specified/implied tablespace USERS_8 block size 8192 [object-level default]
    
  • But if you remove the table tablespace specification, it works. See how I removed tablespace users_8. However, the user default tablespace is users_8. The table segment of the partitioned table inherits the user default. The table definition is now unsupported:

    create table myparttab1 (
       col1 number
    ) 
    partition by range (col1) (
       partition myparttab1_1 values less than (100001) tablespace users_32 ,
       partition myparttab1_2 values less than (maxvalue) tablespace users_32
    );
    
    Table created.
    
  • You can check the table and partition definition in the data dictionary:

    select 'TABLE' as type, 
           table_name as segment_name, 
           nvl(tablespace_name, (select default_tablespace from user_users))  as tablespace 
    from user_tables 
    where table_name='MYPARTTAB1'
    union all
    select 'PARTITION', 
           partition_name, 
           tablespace_name 
    from user_tab_partitions 
    where table_name='MYPARTTAB1';
    
    TYPE	  SEGMENT_NAME	  TABLESPACE
    --------- --------------- ---------------
    TABLE	  MYPARTTAB1	  USERS_8
    PARTITION MYPARTTAB1_1	  USERS_32
    PARTITION MYPARTTAB1_2	  USERS_32
    
  • We filed a bug to tighten the syntax validation in the create table statement. For now, store the table segment in the same tablespace as the partitions using the tablespace clause on the create table statement.
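To avoid the issue in the test case above, specify a tablespace with the same block size for the table segment. This is the supported variant of the statement from the test case:

```sql
-- The table segment and all partitions now use 32k-blocksize tablespaces
create table myparttab1 (
   col1 number
) tablespace users_32
partition by range (col1) (
   partition myparttab1_1 values less than (100001) tablespace users_32,
   partition myparttab1_2 values less than (maxvalue) tablespace users_32
);
```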

How Often Do You Patch Your Oracle Database Client?

The most important reason why you must patch your Oracle Database is security. The threats any company faces today are very different from 10 or 20 years ago. Especially with the introduction of ransomware, everyone is a target.

When I talk to database specialists, there is consensus about the importance of applying patches. Luckily, it's rare nowadays that you have to argue with people over it. Further, I see more and more companies putting this higher on the agenda. I know patching is extra work, but you must do it. You must avoid the embarrassment and potentially devastating effect on your company, as explained by Connor McDonald.

How do you patch the Oracle Database? Release Updates are the vehicle for delivering security fixes to your Oracle Database; so far, so good. But what about the client? How often do you patch your Oracle Database clients? Do you need to patch the client?

As always, the answer is: It depends… on which clients you are using.

The Critical Patch Updates

The place to look for information about security issues is the critical patch updates (CPU). Be sure to check it every quarter when the new one comes out.

If a specific client is affected, it is listed. For example, check the CPU from January 2023. It lists a vulnerability in Oracle Data Provider for .NET.

If you look back a few more quarters, vulnerabilities in the client appear to be rare. But they do occur.

Client-Only

If you use client-only installations, you can check the Oracle Database Server Risk Matrix in the CPU. In the text, Oracle states whether the vulnerabilities affect client-only installations.

Here is an overview of the last three years showing whether the vulnerabilities affected the client-only installation.

Release Update Client-only affected
October 2023 No
July 2023 Yes
April 2023 No
January 2023 Yes
October 2022 No
July 2022 Yes
April 2022 No
January 2022 No
October 2021 No
July 2021 Yes
April 2021 Yes
January 2021 No

To patch a client-only installation, you download the latest Release Update and apply it to the Oracle home, just as if it were the Oracle Database server itself.
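Applying a Release Update to a client-only home could look like this (a sketch; the patch number and staging directory are made up):

```shell
# Stop applications using the client home, then apply the RU with OPatch
export ORACLE_HOME=/u01/app/oracle/product/19
cd /u01/patches/12345678          # hypothetical unzipped Release Update
$ORACLE_HOME/OPatch/opatch apply
$ORACLE_HOME/OPatch/opatch lspatches
```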

You can use ORAdiff to find a list of included fixes.

Instant Client

For instant client, you download a new package and overwrite the existing instant client.

JDBC

To update JDBC, you download new jar files or use Maven Central Repository. On the JDBC download page, you can find a list of bugs fixed in the various releases. Here is the list for 19c.

ODP.NET

For Oracle Data Provider for .NET (ODP.NET), you can find the latest drivers on NuGet Gallery. The readme section has a list of bugs fixed since the previous version.

OCI

For OCI (Oracle Call Interface), you get the latest instant client and extract the relevant files from there.

ODAC

The Oracle Data Access Components (ODAC) package also contains other clients. You download the latest version and follow the instructions to unpack it.

Recommendation

For the database server, I strongly recommend:

  • Applying patches every quarter.
  • Using the latest Release Updates, although I do also understand why some people prefer to use the second latest Release Update (an N-1 approach).

For the database clients, I recommend:

  • Having a structured process to evaluate the risk when the Critical Patch Update Advisories come out every quarter.
  • Assess not only the security issues but also functional issues. Some drivers have a list of fixed bugs. Use it to determine whether you use functionality that could benefit from the bug fixes.
  • Applying patches to your clients periodically. This ensures you have a structured and well-tested process. When it becomes urgent to patch your client, it’s easy because you’ve already done it so many times.


How to Trace Oracle Data Pump

Data Pump is hanging!

If you ever think the above, the answer is probably:

No, it’s not, it’s working…

It might be working in a suboptimal way, but nevertheless, it’s working. Here’s how you can tell what’s going on.

My Ultimate Tracing Guide

I use this approach every time I work with Oracle Data Pump and need to see what happens underneath the hood.

Before Data Pump

  • I always ensure dictionary and fixed objects statistics are current before starting Data Pump:

    begin
       dbms_stats.gather_schema_stats('SYS');
       dbms_stats.gather_schema_stats('SYSTEM');
       dbms_stats.gather_fixed_objects_stats;
    end;
    /
    
    • I usually don’t use gather_dictionary_stats. I prefer gathering schema stats instead. I’ve seen a few edge cases with gather_dictionary_stats over the years, but mostly it works fine. But you know, old dog and new tricks…
    • You should always follow this advice even if you are not tracing a specific problem.
  • I change the AWR snapshot interval to 15 minutes to get better granularity and manually create an AWR snapshot:

    begin
       dbms_workload_repository.modify_snapshot_settings(null, 15);
       dbms_workload_repository.create_snapshot;
    end;
    /
    
    
  • Optionally, if I need to diagnose a performance problem, I enable SQL trace for Data Pump processes:

    alter system set events 'sql_trace {process: pname = dw | process: pname = dm} level=8';
    
    • Or a specific SQL ID:
    alter system set events 'sql_trace[SQL:  ]';
    

Starting Data Pump

  • I add diagnostic information (metrics and logtime) to the log file and turn on tracing:
    expdp ... metrics=yes logtime=all trace=1FF0300
    impdp ... metrics=yes logtime=all trace=1FF0300
    
    • 1FF0300 turns on tracing for more or less everything in Data Pump, but there are other trace levels.

After Data Pump

  • I create a new AWR snapshot:
    begin
       dbms_workload_repository.create_snapshot;
    end;
    /
    
    • You might also want to reset the AWR snap interval to the previous setting.
  • Plus, I create an AWR report spanning the entire period.
    @?/rdbms/admin/awrrpt
    
  • Data Pump writes the tracing into the process trace files. You can find them in the database trace directory:
    • Control process trace files named *dm*.trc.
    • Worker process trace files named *dw*.trc.
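A small self-contained demo of those name patterns may help. The file names and the /tmp directory below are made up for illustration; your real trace files sit in the database trace directory under the ADR (e.g. $ORACLE_BASE/diag/rdbms/&lt;dbname&gt;/&lt;instance&gt;/trace):

```shell
# Demo only: create a mock trace directory with made-up file names,
# then match the Data Pump control (dm) and worker (dw) trace files.
mkdir -p /tmp/dp_trace_demo
touch /tmp/dp_trace_demo/ORCL_dm00_1234.trc \
      /tmp/dp_trace_demo/ORCL_dw00_5678.trc \
      /tmp/dp_trace_demo/ORCL_ora_9999.trc
# Only the dm/dw process trace files match; the ora process file does not
ls /tmp/dp_trace_demo | grep -E '_(dm|dw)[0-9]+'
```

Running the same `grep` pattern against your real trace directory narrows the listing down to the Data Pump processes.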

Happy Debugging

Normally, the above information is enough to figure out the underlying problem. If you supply the above to Oracle Support when you open the next case, they’ll be super excited.

Further Reading

It’s a Wrap – DOAG 2023 Conference + Exhibition

Relaxing a bit after yet another exciting visit to Germany for the annual DOAG conference. The conference always delivers: many great speakers and many interesting sessions. If you have never attended, you should try it out. Even if you don’t speak German, there are so many English sessions. Don’t hesitate to visit next year.

This year was slightly different, as all my sessions were with either Mike or Roy. I really enjoy presenting with other people. It brings extra energy into the session, and I’m sure the audience feels it, too. When you also have a completely full room – even with a standing crowd – that serves as additional motivation.

The Slides

Move to Oracle Database 23c – CDB-architecture migration on auto-pilot

The slides contain essential information for everyone who will work on upgrading to Oracle Database 23c and adopting the multitenant architecture.

Data Pump Best Practices, Secrets and Real World Scenarios

The slides give an overview of some new features and tips and tricks for troubleshooting in Data Pump. Finally, they describe how LOBs work in the database, why Data Pump might not deliver the expected performance – and what you can do about it.

Help! My Database Is Still on 8i!

What do product managers do when they find an empty slot at a conference? Of course, they quickly throw in another presentation, which is what happened this year. We took the audience on a walk down memory lane to the database of the previous millennium.

Thanks

Thanks to the organizers at DOAG. Again, the conference was a huge success. I enjoy coming here; it is a great conference. If you can’t go to Oracle CloudWorld, this is a perfect second option.

Thanks to everyone who attended my sessions – especially those with all the good questions.

See you next year!

Pictures

[Pictures from DOAG 2023]

Data Pump and Parallel Transportable Jobs

In migrations, you often use transportable tablespaces or Full Transportable Export/Import (FTEX). Downtime is always of concern when you migrate, so having Data Pump perform the transportable job in parallel is a huge benefit.

How much benefit? Let’s find out with a little benchmark.

The Results

Let’s start with the interesting part: how much time can you save using parallel transportable jobs in Data Pump?

The following table lists four different scenarios. Each scenario consists of an export and an import. The total is the time Data Pump needs to finish the migration – the sum of export and import. In a real migration, many other things are in play, but here, we are looking solely at Data Pump performance.

| Export            | Time   | Import             | Time   | Total  |
|-------------------|--------|--------------------|--------|--------|
| 19c, no parallel  | 2h 2m  | 19c, no parallel   | 6h 44m | 8h 46m |
| 23ai, parallel 4  | 1h 48m | 23ai, parallel 4   | 2h 33m | 4h 21m |
| 19c, no parallel  | 2h 2m  | 23ai, parallel 16  | 1h 23m | 3h 25m |
| 23ai, parallel 16 | 1h 8m  | 23ai, parallel 16  | 1h 23m | 2h 31m |

The first row is what you can do in Oracle Database 19c: almost 9 hours. Compare that to the last row, which is what you can do with parallel 16 in Oracle Database 23ai: almost a 3.5x reduction.

If you migrate from Oracle Database 19c to Oracle Database 23ai (3rd row), you can still benefit from parallel import and gain a significant benefit.
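As a quick sanity check of those numbers, converting the first and last totals to minutes confirms the claimed speed-up (a back-of-the-envelope shell sketch, not part of the benchmark):

```shell
# Totals from the table above, converted to minutes
OLD_MIN=$(( 8*60 + 46 ))   # 19c, no parallel:  8h 46m = 526 minutes
NEW_MIN=$(( 2*60 + 31 ))   # 23ai, parallel 16: 2h 31m = 151 minutes
# 526 / 151 is roughly 3.48, hence "almost a 3.5x reduction"
awk -v o="$OLD_MIN" -v n="$NEW_MIN" 'BEGIN { printf "%.2fx\n", o/n }'
```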

The Benchmark

I used the following Oracle homes:

My test database:

  • E-Business Suite database
  • 300 GB physical size
  • 630,000 database objects, including
    • 89,000 tables
    • 66,000 indexes
    • 60,000 packages
  • CPU_COUNT = 16
  • SGA_TARGET = 64G

My test machine:

  • OCI shape VM.Standard.E4.Flex
  • 8 CPUs
  • 128 GB mem
  • Fast disks (max. 128,000 IOPS)

How to

It’s very easy to enable parallel transportable jobs. If you want to use 16 parallel workers, on export:

expdp ... parallel=16 dumpfile=ftex%L.dmp

On import:

impdp ... parallel=16

What Happens

A Data Pump job consists of several object paths that Data Pump must process. An object path could be tables, indexes, or package bodies.

Parallel Export

Each worker takes an object path and starts to process it. The worker works alone on this object path. You get parallelism by multiple workers processing multiple object paths simultaneously.

Parallel Import

During import, Data Pump must process each object path in a certain order. Data Pump can only import constraints once it has imported tables, for example.

Data Pump processes each object path in the designated order, then splits the work in one object path to many workers. You get parallelism by multiple workers processing one object path in parallel.

The Fine Print

  • Parallel transportable jobs work in Oracle Database 21c and later. In Oracle Database 19c, a transportable job has no parallel capabilities.

  • Data Pump can use parallel only on transportable jobs via a dump file. Network mode is not an option for parallel transportable jobs.

  • If you export in Oracle Database 19c (which does not support parallel transportable jobs), you can still perform a parallel import into Oracle Database 23ai.

  • The export parallel degree and import parallel degree do not have to match. You can export with parallel degree 1 and import with parallel degree 16.

  • When you enable parallel jobs, Data Pump starts more workers. How many extra resources do they use?

    • I didn’t notice any significant difference in undo or temp tablespace use.
    • I didn’t notice any extra pressure on the streams pool, either. I had the streams pool set to 256M, and the database didn’t perform any SGA resize operation during my benchmark.

Conclusion

For migrations, parallel transportable jobs in Data Pump are a huge benefit. Every minute of downtime often counts, and this has a massive impact.

It’s a Wrap – UKOUG Conference ’23

I just finished my presentation at the UKOUG conference. This time, it was held at the Oracle office in Reading. Two intense days full of learning experiences.

It’s the 40th anniversary of UKOUG – that’s truly amazing. The community started when I was just a little child and still lives on today. What a change tech has undergone since then!

Congratulations to the board and the entire community on the 40th anniversary.

The Slides

Patch Me If You Can – Grid Infrastructure Edition

This is a modification of an existing talk about database patching, now focusing mostly on Oracle Grid Infrastructure. But since Oracle Database and Grid Infrastructure go hand in hand, it also covers some database topics.

You should flip through the slides if you work with Oracle Grid Infrastructure. And remember – always patch out-of-place.

Help! My Database is still on 8i!

I also had the opportunity to close the conference with my 8i talk. I really like this talk because it is a walk down memory lane. Plus, it includes demos using Oracle 8i Database. It’s cool to be old school.

For a little laugh, you can find a comparison of Oracle Database releases and mobile phones of the same age.

Thanks

Thanks to the board of UKOUG and the organizers for pulling off yet another successful conference. Thanks to the sponsors for making it all possible and to everyone who attended my sessions or the conference in general.

It keeps impressing me how much you can learn in such a short time. My head is full. Luckily, the weekend is coming up.

P.S. The chocolate fountain was amazing (see below)!

Pictures

  • Welcome to UKOUG Conference '23
  • Presenting Patch Me If You Can – Grid Infrastructure Edition
  • Red Carpet at the 40th Anniversary Celebration
  • The Chocolate Fountain
  • Cool art

How to Use Oracle Data Pump When You Have Common Objects

One of the benefits of the multitenant architecture in Oracle Database is the concept of common objects. If you want to move data around with Oracle Data Pump, is there anything you must know?

What Are Common Objects

The documentation describes common objects as:

A common phenomenon defined in a root is the same in all containers plugged in to this root. … For example, if you create a common user account while connected to CDB$ROOT, then this user account is common to all PDBs and application roots in the CDB.

A simpler explanation:

The stuff you create as C##.... in the root container (CDB$ROOT).

The principles of commonality are:

  • A common phenomenon is the same in every existing and future container. …
  • Only a common user can alter the existence of common phenomena. More precisely, only a common user logged in to either the CDB root or an application root can create, destroy, or modify attributes of a user, role, or object that is common to the current container.

For illustration purposes, imagine a common profile used by a local user. You would create it like this:

alter session set container=CDB$ROOT;
create profile c##commonprofile1 ... ;
alter session set container=pdb1;
create user localuser1 ... profile c##commonprofile1;

With other object types, you can make it even more complex, but the principles remain the same.

What Happens In Oracle Data Pump

On Export

Data Pump

  • Connects to the PDB.
  • Extracts the relevant data and metadata.

Using the above example, Data Pump extracts the user DDL. The user DDL specifies the use of a common profile.

On Import

Data Pump

  • Connects to the PDB.
  • Assumes that you have already created the common objects.
  • Creates the local user localuser1 and specifies the use of the common profile c##commonprofile1.

If the common profile does not exist already, the creation of the user fails:

Processing object type DATABASE_EXPORT/SCHEMA/USER
ORA-39083: Object type USER:"LOCALUSER1" failed to create with error:
ORA-02380: profile C##COMMONPROFILE1 does not exist

Failing sql is:
 CREATE USER "LOCALUSER1" ... PROFILE "C##COMMONPROFILE1"

Why Doesn’t Data Pump Create the Common Objects?

  • First, Data Pump connects to the PDB itself to perform the import. Since you must create common objects in the root container, that’s obviously a problem. The importing user might not have access to the root container, so switching containers and creating common objects is impossible.

  • Also, it would break the contract in the multitenant architecture that each PDB is isolated. If one PDB were allowed to create common objects, those objects would also be available in other PDBs, and the isolation would no longer exist.

  • Finally, it would open the door to all sorts of security issues if a user connected to a PDB could create common objects.

What Can You Do?

A CDB Blueprint

Ideally, you should have a CDB blueprint: a definition of the common objects used in your databases. When you create a new CDB, you have an afterburner that creates those common objects according to your company guidelines.

Examine the Dump File

You can also examine the DDLs in the Data Pump dump file and create a list of common objects.

  1. Extract the DDL:
    impdp ... sqlfile=import_ddls.sql
    
  2. Search for C## and build a list of common objects needed:
    grep -n -i "C##" import_ddls.sql
    
    This is a crude search, and I imagine awk aficionados can improve it.
  3. Extract the DDL for the common objects from the source CDB:
    l_stmt := dbms_metadata.get_ddl(...);
    dbms_output.put_line(l_stmt);
    l_stmt := dbms_metadata.get_granted_ddl(...);
    dbms_output.put_line(l_stmt);
    
    Use dbms_metadata.get_granted_ddl to extract object and role grants and system privileges.
  4. Create the common objects in the target CDB before import:
    alter session set container=cdb$root;
    create profile c##commonprofile1 ... ;
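If it’s useful, the crude grep from step 2 can be refined to list each distinct common object name exactly once. The sample import_ddls.sql content below is made up purely for illustration:

```shell
# Create a tiny sample of what "impdp sqlfile=import_ddls.sql" might produce.
# The DDL content here is invented for the demo.
cat > import_ddls.sql <<'EOF'
CREATE PROFILE "C##COMMONPROFILE1" LIMIT SESSIONS_PER_USER 10;
CREATE USER "LOCALUSER1" IDENTIFIED BY VALUES '...' PROFILE "C##COMMONPROFILE1";
GRANT "C##COMMONROLE1" TO "LOCALUSER1";
EOF
# Extract each distinct C## identifier instead of whole matching lines
grep -oE 'C##[A-Z0-9_$#]+' import_ddls.sql | sort -u
```

This prints each common object name once (here, the profile and the role), giving you a checklist of objects to create in the target CDB before the import.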