It’s a Wrap – Transportable Tablespaces to the Extreme

Recently, we hosted our webinar Cross Platform Migration – Transportable Tablespaces to the Extreme. You can now watch the recording on our YouTube channel.

The audience at our webinar was very active and asked many good questions. For all the details, you can read the Q&A and the slides.

Some of the attendees asked questions we were not able to answer during the webinar. Those questions – and answers – are included in the Q&A.

The New Procedure

For cross-platform and cross-endian migrations, we have a new procedure called M5. It replaces the former XTTS v4 Perl script that has been used for many migrations. We could see the need for change as Oracle Databases worldwide grow in size and complexity. M5 implements the latest RMAN and Data Pump technology to deliver the fastest possible migrations.

You can download the M5 scripts and read the procedure on My Oracle Support (Doc ID 2999157.1).

Next Webinar

Mark your calendar for our next webinar:

Move to Oracle Database 23c – Everything you need to know about Oracle Multitenant

Oracle Database 23c supports only the CDB architecture. If you haven’t migrated to Oracle Multitenant yet, you will be doing so as part of your upgrade to 23c. How do you approach it in the most efficient way? What are the other options? And why is this a migration unless you already have PDBs? All this and much more about how to work with Multitenant, how AutoUpgrade automates the entire move for you, end-to-end, plus best practices and tips and tricks. We’ll guide you, and you will be ready to move to Oracle Database 23c right away.

Sign up now and secure your seat.

All tech – no marketing!

Happy Migrating!

The Next-generation Cross-platform Migration for Oracle Database

I am very pleased to share that Oracle has officially launched a new method for cross-platform migrations of Oracle Database.

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)

The M5 script is the next-generation cross-platform transportable tablespace migration procedure for Oracle Database.

You can also use the script for cross-endian migrations, so this is perfect for AIX, HP-UX or SPARC migrations to Exadata Database Machine or any other little endian platform.

Next-generation

Before we launched M5, you would use the XTTS v4 Perl script for such migrations.

Timeline of methods for cross-endian migrations

We launched the latest version of the XTTS v4 Perl script many years ago. Over the years, as databases grew bigger and bigger, we saw multiple issues with the XTTS v4 Perl script, including:

  • No multisection backups for bigfile tablespaces
  • No encrypted tablespaces
  • Inefficient parallelism
  • Incomplete multitenant support

We wanted to solve all those issues with M5 – and we did! M5 uses newer RMAN functionality, and we made the procedure much simpler. It relies entirely on out-of-the-box RMAN functionality. On the source:

BACKUP ... TABLESPACE ... ;

On target we use:

RESTORE ALL FOREIGN DATAFILES ... ;

The latter command was introduced in Oracle Database 18c and enhanced in Oracle Database 19c. This sets the release requirements for the source and target databases.
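Put together, the core of an M5 migration is plain RMAN. The exact commands and options come from the M5 scripts in Doc ID 2999157.1; the following is only a sketch with example tablespace and file names:

```
# Source: cross-platform, cross-endian backup of a tablespace being transported
BACKUP FOR TRANSPORT ALLOW INCONSISTENT
   INCREMENTAL LEVEL 0
   TABLESPACE users
   FORMAT '/stage/users_L0_%U.bck';

# Target: restore the foreign data files from the transported backup sets
RESTORE ALL FOREIGN DATAFILES TO NEW
   FROM BACKUPSET '/stage/users_L0_01.bck';
```

Level 1 incremental backups are taken and applied the same way until the final, consistent backup during the downtime window.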

Want to Know More?

We have a webinar later today about this new method. If you are interested, we still have open seats.

Cross Platform Migration – Transportable Tablespaces to the Extreme, February 22, 16:00 CET

Next week, we will add the recording to our YouTube channel. Be sure to subscribe so you don’t miss out on anything.

If you want a sneak peek at the slides, go ahead.

I have an elaborate blog post series about cross-platform migrations. I will soon update it with more information about the M5 migration method.

Happy Migrating!

Kickstart Learning in 2024

What better way to start the new Year of the Dragon than to learn something new? If that sounds like a good idea, you’ll be pleased to know that we have two new webinars coming your way shortly.

Like the previous webinars, it’s all tech, no marketing!

Cross Platform Migration – Transportable Tablespaces to the Extreme

Recently, the team and I worked with two big customers who had to migrate cross-endian to Exadata. Both customers had huge and very active databases and needed to migrate with as little downtime as possible.

In this webinar, you can hear details on how one of the customers performed the migration using a new procedure developed by Oracle. In addition, we will share the lessons learned and our new best practices for such migrations.

> Transportable Tablespaces and Full Transportable Export Import aren’t a secret feature. But there are many things to take care of when you migrate cross-platform, and especially cross-endian. Today we will give you insights and show you how we worked together to migrate a 200TB database writing up to 15TB redo per day from Oracle SPARC SuperCluster to Oracle Exadata with a database downtime of less than 6 hours. Follow us into the extreme. Extreme load. Extreme complexity. And an extremely skilled team working together. Such projects aren’t done in 4 weeks. And we will tell you the entire story, from start to the nights of the nights.

Sign up here.

Move to Oracle Database 23c – Everything you need to know about Oracle Multitenant

Oracle Database 23c is on its way. Hopefully, it doesn’t come as a surprise to you that this release supports the multitenant architecture only. If you haven’t migrated your Oracle Database yet, you’ll need to do so as part of the upgrade to Oracle Database 23c.

This webinar shares all the things you need to know to migrate a database to the multitenant architecture successfully, including essential parts like Data Guard, backup, and your rollback plans.

> Oracle Database 23c supports only the CDB architecture. If you haven’t migrated to Oracle Multitenant yet, you will be doing so as part of your upgrade to 23c. How do you approach it in the most efficient way? What are the other options? And why is this a migration unless you already have PDBs? All this and much more about how to work with Multitenant, how AutoUpgrade automates the entire move for you, end-to-end, plus best practices and tips and tricks. We’ll guide you, and you will be ready to move to Oracle Database 23c right away.

Sign up here.

Registration

The webinars are free, of course. You just need to sign up.

Our entire team will be present during the webinar to answer all your questions. I promise we won’t stop the webinar until we have answered all the questions. If you cannot participate, you can find the recording shortly after the webinar.

If you can’t wait, you can start learning already by watching the previous webinars.

Register your seat now.

How to Export and Import Statistics Faster Using DBMS_STATS in Parallel

When you migrate Oracle Database, you often need to transport statistics using DBMS_STATS. Also, during migrations, you want to minimize the downtime. How can you transfer statistics as quickly as possible?

Export and Import

The easiest solution is to use dbms_stats.export_database_stats and dbms_stats.import_database_stats. But the procedures have no parallel capabilities. If you have an Oracle Database with many objects, it will take a long time.

Parallel Export and Import

dbms_stats also allows you to export and import statistics on schema or table level. You can use this to your advantage to run multiple export or import commands at the same time.

Imagine a database with four schemas.

If you export database stats, the database creates a list of statistics from all four schemas. Then, it exports them one by one.

  • Session 1: exec dbms_stats.export_database_stats( ... );

In contrast, if you start four different sessions, you can export one schema in each session and thus finish much faster.

  • Session 1: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA1');
  • Session 2: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA2');
  • Session 3: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA3');
  • Session 4: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA4');

Benchmark

This benchmark gives you an idea of how much time you can save.

It is an Oracle E-Business Suite (EBS) database with 1,400 schemas and 150,000 tables/indexes.

I compare exporting and importing statistics on database level and on schema level. For schema level, I divide the schemas into chunks and process them from separate sessions simultaneously.

Method               Time to export   Time to import   Total
Database             4m 5s            21m 5s           25m 10s
Schema, parallel 8   57s              3m 58s           4m 55s
Schema, parallel 16  53s              3m 45s           4m 38s

I can save more than 20 minutes in this benchmark by doing the work in parallel. Between parallel 8 and parallel 16, there is not much to gain, probably because of contention.

How to Export and Import in Parallel Using DBMS_STATS

Here is some simple code you can use to export and import in parallel.

I haven’t coded for years, so use at your own risk :)

  1. Create a dedicated user for the statistics staging table and create a control table with the list of schemas to process.

    drop user statuser cascade;
    grant dba to statuser identified by statuser;
    alter user statuser quota unlimited on users;
    --Create the DBMS_STATS staging table
    exec dbms_stats.create_stat_table('STATUSER', 'DBSTATS', 'USERS');
    --Populate control table
    declare
       l_stmt VARCHAR2(1000);
       l_ctl_tab_name ALL_TABLES.TABLE_NAME%TYPE := 'DOH$STATXFERCTL';
       l_stat_tab_owner ALL_USERS.USERNAME%TYPE := 'STATUSER';
    begin
       --Create control table and populate it
       --Order schemas by how many tables they have
       l_stmt := 'CREATE TABLE ' || l_stat_tab_owner || '.' || l_ctl_tab_name; 
       l_stmt := l_stmt || ' as select o.owner stat_id, count(o.table_name) as cnt, 0 exp_status, 0 imp_status from dba_tables o, dba_users u where u.username not in (''SYS'', ''SYSTEM'', ''' || l_stat_tab_owner || ''') and u.oracle_maintained=''N'' and o.owner=u.username group by owner order by 2 desc';
       execute immediate l_stmt;
    end;
    /
    
  2. Start any number of database sessions and run the code below in each. The code selects a schema from the control table and performs a schema-level export of statistics into the staging table.

    declare
       l_cur_stat_id ALL_USERS.USERNAME%TYPE;
       l_statown ALL_USERS.USERNAME%TYPE := 'STATUSER';
       l_stattab ALL_TABLES.TABLE_NAME%TYPE := 'DBSTATS';
       l_ctl_tab_name ALL_TABLES.TABLE_NAME%TYPE := 'DOH$STATXFERCTL';
       l_stmt VARCHAR2(1000);
    begin
       begin
          --Loop until you reach "no_data_found exception"
          while true loop
             --Select one unprocessed schema from the control table
             --The control table was created ordered by table count (cnt),
             --so larger schemas tend to be picked first
             --Select for update to ensure only one session processes a schema
             l_stmt := 'select stat_id from ' || l_statown || '.' || l_ctl_tab_name || ' where exp_status = 0 and rownum=1 for update';
             execute immediate l_stmt into l_cur_stat_id;
             
             --Mark the schema as "in progress"
             --Commit to release the lock on the control table
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set exp_status=1 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
    
             --Perform the schema level export into DBMS_STATS staging table
             dbms_stats.export_schema_stats (ownname=>l_cur_stat_id, stattab=>l_stattab,statid=>l_cur_stat_id, statown=>l_statown);
    
             --Mark the schema as completed in the control table
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set exp_status=2 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
          end loop;
       exception when no_data_found then
          --"No_data_found" exception occurs when there are no more rows/schemas to process	  
          null;
       end;
    end;
    /
    
  3. Now move the STATUSER schema (including the DBSTATS staging table and the control table) to the target database and run the import. Start any number of database sessions and run the code below in each. The code selects a schema from the control table and performs a schema-level import of statistics from the staging table.

    declare
       l_cur_stat_id ALL_USERS.USERNAME%TYPE;
       l_statown ALL_USERS.USERNAME%TYPE := 'STATUSER';
       l_stattab ALL_TABLES.TABLE_NAME%TYPE := 'DBSTATS';
       l_ctl_tab_name ALL_TABLES.TABLE_NAME%TYPE := 'DOH$STATXFERCTL';
       l_stmt VARCHAR2(1000);
       no_stats_imp EXCEPTION;
       PRAGMA EXCEPTION_INIT(no_stats_imp, -20000);
    begin
       begin
            --Loop until you reach "no_data_found exception"
          while true loop
             --Select one unprocessed schema from the control table
             --The control table was created ordered by table count (cnt),
             --so larger schemas tend to be picked first
             --Select for update to ensure only one session processes a schema
             l_stmt := 'select stat_id from ' || l_statown || '.' || l_ctl_tab_name || ' where imp_status = 0 and rownum=1 for update';
             execute immediate l_stmt into l_cur_stat_id;
     		
             --Mark the schema as "in progress"
             --Commit to release the lock on the control table
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set imp_status=1 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
    
             begin
                --Perform the schema level import from the DBMS_STATS staging table
                dbms_stats.import_schema_stats (ownname=>l_cur_stat_id, stattab=>l_stattab,statid=>l_cur_stat_id, statown=>l_statown);
             exception when no_stats_imp THEN
                --An empty schema, i.e., no tables or indexes			
                null;
             end;
     		
             --Mark the schema as completed in the control table             
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set imp_status=2 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
          end loop;
       exception when no_data_found then
          --"No_data_found" exception occurs when there are no more rows/schemas to process	  	  
          null;
       end;
    end;
    /   
    

Does Exporting Database Statistics Include the Dictionary Statistics?

I recently worked on a migration of Oracle Database to a new platform. The customer was sensitive to downtime. We looked at each item in the migration run book and asked: “Can we make it faster?”

We discussed exporting database statistics, and the question: Does dbms_stats.export_database_stats include dictionary statistics?

The documentation states:

This procedure exports statistics for all objects in the database and stores them in the user statistics tables identified by statown.stattab.

What exactly does “all objects” mean?

Let’s Find Out

Here is a little test case.

I gather dictionary statistics. All but 77 dictionary tables now have statistics (it’s intentional that some are left un-analyzed):

exec dbms_stats.gather_dictionary_stats;
select count(*) cnt 
from   dba_tables 
where  owner in ('SYS', 'SYSTEM') 
       and last_analyzed is null;
	   
       CNT
----------
        77

I export database statistics into a staging table:

grant dba to statuser identified by statuser;
begin
   dbms_stats.create_stat_table(
      'STATUSER', 
      'DBSTATS');
   dbms_stats.export_database_stats(
      stattab=>'DBSTATS',
      statown=>'STATUSER');
   dbms_stats.gather_table_stats(
      'STATUSER', 
      'DBSTATS');
end;
/

I delete dictionary statistics. Now, 1674 dictionary tables are without statistics:

exec dbms_stats.delete_dictionary_stats(force=>true);
select count(*) cnt 
from   dba_tables 
where  owner in ('SYS', 'SYSTEM') 
       and last_analyzed is null;

       CNT
----------
      1674

I import statistics from the staging table. I am back at 77 dictionary tables without statistics.

begin
   dbms_stats.import_database_stats(
      stattab=>'DBSTATS',
      statown=>'STATUSER', 
      force=>TRUE);
end;
/
select count(*) cnt 
from   dba_tables 
where  owner in ('SYS', 'SYSTEM') 
       and last_analyzed is null;

       CNT
----------
        77

Conclusion: dbms_stats.export_database_stats includes dictionary statistics.

What about Fixed Objects Statistics?

I can use the same test case as above (more or less). Here is the query I use to check for missing statistics on fixed objects:

select count(*) cnt
from   dba_tab_statistics 
where  object_type='FIXED TABLE'
       and last_analyzed is null;

The answer is: dbms_stats.export_database_stats does not include fixed objects statistics.

To transfer fixed object statistics, you must use the dedicated export and import functions.
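If I recall the DBMS_STATS API correctly, the dedicated procedures look like this (staging table and owner names reused from the example above):

```sql
-- On the source: export fixed objects statistics into the staging table
exec dbms_stats.export_fixed_objects_stats(stattab=>'DBSTATS', statown=>'STATUSER');

-- On the target, after moving the staging table: import them
exec dbms_stats.import_fixed_objects_stats(stattab=>'DBSTATS', statown=>'STATUSER');
```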

Is It a Good Idea to Transfer Dictionary Statistics?

Well… It depends.

If time allows, I recommend you always gather dictionary statistics in the target database at the appropriate time.

If you want to transfer statistics, this is my recommendation:

Type                      Command                        Recommendation
Full export               full=y                         Should be acceptable
Full transportable        full=y transportable=always    Should be acceptable
Transportable tablespace  transport_tablespaces          Hmm, not sure it’s a good idea
Tablespace export         tablespaces=...                Hmm, not sure it’s a good idea
Schema export             schemas=...                    Hmm, probably a bad idea
Table export              tables=...                     No, most likely a bad idea

How to Upgrade Encrypted Oracle Database and Move to New Server

> Is Autoupgrade with TDE only possible for in place upgrade (same server)? Are there any ways to do it for out of place (new db home in a different server) with autoupgrade? It seems like the target_home have to be specified.

A reader asked that question on my blog.

The answer is yes; you can upgrade an Oracle Database and move to a new server. Here, we consider upgrading a non-CDB or an entire CDB that uses Transparent Data Encryption (TDE) tablespace encryption.

Move to New Server and Transparent Data Encryption

When you upgrade your Oracle Database, you often want to move to new hardware. AutoUpgrade fully supports this use case. Mike Dietrich mentions this in his blog post and video.

When you upgrade an encrypted non-CDB or entire CDB, the database must have an auto-login keystore.

There are no further requirements.

The Instructions

I am using the DB12 database from our hands-on lab. You can provision a lab and try it out yourself (for free). See the appendix for instructions on how to encrypt the DB12 database.

Old Server

  1. Always use the latest version of AutoUpgrade.
  2. Create a config file:
    upg1.source_home=/u01/app/oracle/product/12.2.0.1
    upg1.sid=DB12
    upg1.target_version=19
    
    • I don’t specify target_home because it does not exist on the old server. Instead, I specify target_version, so AutoUpgrade knows which checks to execute.
  3. Check the database for upgrade readiness:
    java -jar autoupgrade.jar -config DB12.cfg -mode analyze
    
  4. Downtime starts.
  5. Run the preupgrade fixups:
    java -jar autoupgrade.jar -config DB12.cfg -mode fixups
    
  6. Perform a clean shutdown of the database
    shutdown immediate
    

New Server

There is only one server in the lab environment, so I can’t physically move to a new server. But by moving the instance manually to the new home, I can simulate the same behavior.

  1. Move SPFile and password file to the new Oracle home on the new server. The below instructions work in the hands-on lab only:
    export OLD_ORACLE_HOME=/u01/app/oracle/product/12.2.0.1
    export NEW_ORACLE_HOME=/u01/app/oracle/product/19
    export ORACLE_SID=DB12
    cp $OLD_ORACLE_HOME/dbs/spfile$ORACLE_SID.ora $NEW_ORACLE_HOME/dbs
    cp $OLD_ORACLE_HOME/dbs/orapw$ORACLE_SID $NEW_ORACLE_HOME/dbs
    
  2. Register the instance in /etc/oratab:
    export NEW_ORACLE_HOME=/u01/app/oracle/product/19
    export ORACLE_SID=DB12   
    cp /etc/oratab /tmp/oratab
    sed '/^'"$ORACLE_SID"':/d' /tmp/oratab > /etc/oratab
    echo "$ORACLE_SID:$NEW_ORACLE_HOME:N" >> /etc/oratab
    cat /etc/oratab
    
    • Use srvctl as well if you have Oracle Grid Infrastructure.
  3. Move the database files (control files, redo logs, and data and temp files) to the new server.
    • If you need to change any of the paths, see the appendix.
    • Alternatively, unmount the storage from the old server and mount it on the new one.
  4. I want to use the new wallet_root parameter to configure TDE. I copy the keystore files to a new location that matches the naming requirements of wallet_root:
    export OLD_KEYSTORE=$ORACLE_BASE/admin/$ORACLE_SID/wallet
    export NEW_KEYSTORE=$ORACLE_BASE/admin/$ORACLE_SID/wallet/tde
    mkdir -p $NEW_KEYSTORE
    cp $OLD_KEYSTORE/cwallet.sso $NEW_KEYSTORE
    cp $OLD_KEYSTORE/ewallet.p12 $NEW_KEYSTORE
    
    • You should consider moving any backup keystore files as well.
  5. I start a new instance of the database in the new Oracle home and configure TDE using the new parameters:
    export ORACLE_HOME=/u01/app/oracle/product/19
    export PATH=$ORACLE_HOME/bin:$PATH
    sqlplus / as sysdba<<EOF
       startup nomount
       alter system set wallet_root='$ORACLE_BASE/admin/$ORACLE_SID/wallet' scope=spfile;
       shutdown immediate
       startup nomount
       alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE' scope=both;
    EOF
    
  6. Start the instance in upgrade mode:
    sqlplus / as sysdba<<EOF
       alter database mount;
       alter database open upgrade;
    EOF
    
  7. Create an AutoUpgrade config file:
    upg1.target_home=/u01/app/oracle/product/19
    upg1.sid=DB12
    
  8. Start AutoUpgrade in upgrade mode:
    java -jar autoupgrade.jar -config DB12.cfg -mode upgrade
    

That’s it! I just upgraded my encrypted Oracle Database and moved it to a new server.

Appendix

Keystore Type

You must have an auto-login database keystore to upgrade it on the new server. Check the keystore type in the source database:

select wrl_type, wallet_type from v$encryption_wallet;
  • AUTOLOGIN – You can copy the auto-login keystore file (cwallet.sso) from the old to the new server.
  • LOCAL_AUTOLOGIN – The keystore file is bound to the old server. You must create a new auto-login keystore on the new server.

To create a new local auto-login keystore:

startup mount
administer key management create local auto_login keystore ...;
shutdown immediate
startup upgrade

Hands-on Lab

If you want to try the procedure in our hands-on lab, you can use these instructions to encrypt the DB12 database.

  1. Add TDE configuration to sqlnet.ora:
echo "ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=$ORACLE_BASE/admin/$ORACLE_SID/wallet)))" >> $ORACLE_HOME/network/admin/sqlnet.ora
  2. Create keystore directory:
mkdir -p $ORACLE_BASE/admin/$ORACLE_SID/wallet
  3. Create the keystore and complete the TDE configuration:
sqlplus / as sysdba <<EOF
   --Restart to re-read sqlnet.ora with keystore setting
   shutdown immediate
   startup
   
   --Configure TDE
   ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '$ORACLE_BASE/admin/$ORACLE_SID/wallet' IDENTIFIED BY "<tde-keystore-pwd>";
   ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "<tde-keystore-pwd>";
   ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "<tde-keystore-pwd>" WITH BACKUP;
   ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '$ORACLE_BASE/admin/$ORACLE_SID/wallet' IDENTIFIED BY "<tde-keystore-pwd>";

   --Create data
   create tablespace users2 encryption encrypt;
   grant dba to appuser identified by oracle;  
   create table appuser.t1 tablespace users2 as select * from all_objects;
EOF

Locations

In the instructions, I am using the same paths for the database files. If you need to change the location of the control file or redo logs, then it might be easier to create a PFile on the source and use that instead of the SPFile.

If you need to change the location of data or temp files, it might be easier to re-create the control file. In this case, run alter database backup controlfile to trace as '/tmp/ctl.txt' on the source database. Edit the trace file to generate the create controlfile statement.
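As a sketch, the trace-based approach looks like this (the output path is just an example):

```sql
-- On the source: dump a human-readable CREATE CONTROLFILE script
alter database backup controlfile to trace as '/tmp/ctl.txt';

-- Edit /tmp/ctl.txt: adjust the data file, temp file, and redo log paths,
-- then run the edited CREATE CONTROLFILE statement on the new server.
```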

How to Upgrade Oracle Database and Replace the Operating System

A reader asked me for advice on upgrading Oracle Database and replacing the underlying operating system.

  • Currently on Oracle Database 12.1.0.2
  • Currently on Windows Server 2012
  • Upgrade to Oracle Database 19c
  • Move to new servers with Microsoft Windows Server 2022

What’s the recommended approach for transitioning to Oracle 19c on Windows 2022?

Oracle Data Guard

My first option is always Oracle Data Guard. It is often a superior option. You move the entire database, and the only interruption is a Data Guard switchover.

In this case, where the reader needs to replace the operating system, the first thing to check is platform certifications. Always check platform certifications on My Oracle Support. It has the most up-to-date information.

Here is an overview of the platform certification for Oracle Database 12.1.0.2 and 19c.

Oracle Database 12.1.0.2   Oracle Database 19c
Windows Server 2008        –
Windows Server 2008 R2     –
Windows Server 2012        –
Windows Server 2012 R2     Windows Server 2012 R2
–                          Windows Server 2016
–                          Windows Server 2019
–                          Windows Server 2022

Oracle Database 19c does not support the current platform, Windows Server 2012. Thus, the reader can’t set up a standby database on the new servers and transition via a regular switchover.

Windows Server 2012 R2

Let’s imagine the current servers were using Windows Server 2012 R2. Both database releases support this platform. I would recommend this approach:

  1. Upgrade to Oracle Database 19c on current servers.
  2. Set up new servers with Windows Server 2022.
  3. Create standby database on new server.
  4. Transition to the new servers with a regular Data Guard switchover.

This approach requires two maintenance windows. Yet, it is still my favorite because it is very simple.

RMAN Backups

You could also use RMAN and incremental backups. You don’t need much downtime – just the time necessary for a final incremental backup and restore. Like with Data Guard, you bring over the entire database.

RMAN can restore backups from a previous version, and you can use that to your advantage.

  1. Provision new servers with just Oracle Database 19c.
  2. Backup on 12.1.0.2.
  3. Restore and recover the database on the new servers with Oracle Database 19c binaries.
  4. After the final incremental backup, open the new database in upgrade mode and perform the upgrade.
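Heavily simplified, the flow could look like this. This is not the complete procedure from the webinar; paths and options are examples only:

```
# Source server, RMAN with 12.1.0.2 binaries:
BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/stage/db_%U.bck';
# ...later, repeated level 1 incrementals keep the copy on the new server current

# New server, RMAN with 19c binaries (after restoring SPFile and control file):
RESTORE DATABASE;
RECOVER DATABASE;

# After applying the final incremental, in SQL*Plus:
ALTER DATABASE OPEN UPGRADE;
```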

We covered this approach in one of our webinars; you can also find details in this blog post.

Move Storage

You can also unmount the storage from the old server, and attach it to the new server.

  1. Run AutoUpgrade in analyze mode to determine upgrade readiness.
  2. Downtime starts.
  3. Run AutoUpgrade in fixup mode to fix any issues preventing the upgrade from starting.
  4. Cleanly shut down the source database.
  5. Move the storage to the new server.
  6. Start the database on the new server in upgrade mode.
  7. Start AutoUpgrade in upgrade mode to complete the upgrade.

This is just a high-level overview. For a real move, there are many more intermediate steps.

Be sure to have a proper rollback plan. You are re-using the data files, and AutoUpgrade in upgrade mode does not create a guaranteed restore point.

Data Pump

Data Pump is also a viable option, especially for smaller, less complex databases. It also enables you to restructure your database, for example:

  • Transform old BasicFile LOBs to SecureFile
  • Implement partitioning
  • Exclude data (for archival)
  • You can import directly into a higher release and even directly into a PDB.

But – generally speaking – the larger the database, the longer the downtime.

When you use Data Pump for upgrades, I recommend using a full database export.
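For example, a minimal parameter file for a full Data Pump export could look like this (directory object, file names, and parallel degree are examples):

```
full=y
directory=DATA_PUMP_DIR
dumpfile=full_%U.dmp
logfile=full_exp.log
parallel=8
```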

Full Transportable Export/Import

You can also use transportable tablespaces for upgrades. You can even migrate directly into a PDB on Oracle Database 19c.

The downside of transportable tablespace is that you must copy the data files to the new system.

But often, you can unmount the storage and mount the storage on the new servers. This avoids the cumbersome process of copying the data files to the new system.

Another approach is to combine transportable tablespaces with incremental backups, if you want to lower the downtime needed. This approach leaves the original database untouched, leaving you with a perfect rollback option.

Oracle GoldenGate

You could also use Oracle GoldenGate. But for most upgrades, it is overkill, partly because of the restrictions and considerations. I see this as a sensible option only if you have very strict downtime or fallback requirements.

Conclusion

What is the best option?

It depends…

This post helps you make the best decision for your organization.

What Happens to Your Oracle Data Guard During Conversion to Multitenant

Oracle Data Guard is an amazing piece of tech. It helps keep your data safe. When you convert to the multitenant architecture, it is crucial that you don’t jeopardize your Data Guard configuration.

Follow the below steps to bring along your standby database.

What’s The Problem?

When you prepare for multitenant conversion, you prepare two things:

  • Data files – you make the data files consistent by opening the non-CDB in read-only mode.
  • Manifest file – you create an XML file which contains information about the non-CDB.

The manifest file contains information about the data files, including the location. However, the manifest file lists only the location on the primary database. There is no information about the standby database.

When you plug in the non-CDB, the plug-in happens without problems on the CDB primary database. It reads the manifest file and finds the data files.

But what about the CDB standby database? Since the manifest file does not list the file location on the standby host, how can the standby database find the corresponding data files?

The Options

There are two options which you control with the standbys clause on the create pluggable database statement:

  • Enabled recovery:
    • You specify standbys=all, or you explicitly list the standby database in the standbys clause.
    • On plug-in, the CDB standby database must find the data files. How the standby database finds the data files depends on the configuration.
    • The new PDB is protected by Data Guard immediately on plug-in.
    • If the standby database fails to find the data files, recovery stops for the entire CDB. All your PDBs are now unprotected unless you use PDB Recovery Isolation (see appendix).
  • Deferred recovery:
    • You specify standbys=none, or you don’t list the standby database in the standbys clause.
    • On plug-in, the CDB standby notes the creation of the PDB but does not attempt to find and recover the data files.
    • The new PDB is not protected by Data Guard until you provide the data files and re-enable recovery as described in Making Use Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1). Typically, this means restoring all data files to the standby system. The other PDBs in the CDB standby are fully protected during the entire process.
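In SQL, the difference between the two options is just the standbys clause on the plug-in statement. A sketch with an example PDB name and manifest path:

```sql
-- Enabled recovery: the CDB standby must find the data files on plug-in
create pluggable database SALES using '/u01/app/oracle/sales.xml'
   nocopy standbys=all;

-- Deferred recovery: the CDB standby notes the PDB but skips its data files
create pluggable database SALES using '/u01/app/oracle/sales.xml'
   nocopy standbys=none;
```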

Convert with AutoUpgrade

You must convert with deferred recovery on the CDB standby database. AutoUpgrade uses this approach by default:

upg1.manage_standbys_clause=standbys=none

When AutoUpgrade completes, you must follow the process to restore the data files on the CDB standby database and re-enable recovery.
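The authoritative steps are in Doc ID 1916648.1, but in outline the re-enable phase looks roughly like this. Service name and PDB name are examples, and details vary by version:

```
# RMAN, connected to the CDB standby (redo apply stopped):
RESTORE PLUGGABLE DATABASE SALES FROM SERVICE cdb1_prim;

# SQL*Plus on the CDB standby:
ALTER SESSION SET CONTAINER = SALES;
ALTER PLUGGABLE DATABASE ENABLE RECOVERY;

# Then restart redo apply on the standby
```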

With AutoUpgrade, there is no way to plug in with enabled recovery. This includes the alias trick, which requires work on the primary and standby systems midway through the process. AutoUpgrade is a fully automated process that does not allow you to intervene midway.

If you set manage_standbys_clause to anything other than the default in an attempt to plug in with enabled recovery, you will most likely run into problems. Either the data files are missing on the standby system, or they are not at the right SCN. This stops the MRP process in the standby database. Since the MRP process is responsible for recovering all the other PDBs as well, you break not only the recently added PDB but also all the other PDBs.

Convert Manually

ASM

You can plug in with enabled recovery and use the data files on the standby. The standby database searches the OMF location for the data files. ASM does not allow you to manually move files into an OMF location. Instead, you can create aliases in the OMF location as described in Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration (Doc ID 2273304.1). The standby database then follows the plug-in operation.

This option won't work if you use the as clone clause on the create pluggable database statement. The clause generates a new GUID, and since the GUID is part of the OMF location, you won't be able to create the aliases upfront.
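As a rough sketch, the alias trick could look like this in asmcmd. The disk group, directory names, GUID placeholder, and file numbers are all hypothetical; the real steps are in Doc ID 2273304.1:

```
ASMCMD> mkdir +DATA/CDB_STANDBY/<GUID>/DATAFILE
ASMCMD> mkalias +DATA/NCDB_STANDBY/DATAFILE/USERS.280.1122334455 +DATA/CDB_STANDBY/<GUID>/DATAFILE/users_01.dbf
```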

Alternatively, you can plug in with deferred recovery.

OMF in File System

You can plug in with enabled recovery and use the data files on the standby. The CDB standby database searches the OMF location for the data files. Either:

  • Move the data files into the OMF location.
  • Create soft links in the OMF location for each data file pointing to the current location.

These options won't work if you want to use the as clone clause. The clause generates a new GUID, and since the GUID is part of the OMF location, you don't know the OMF location upfront.
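The soft link option can be sketched as follows. All paths are made up for illustration; in a real system, the OMF directory contains the CDB name and the PDB GUID:

```shell
#!/bin/sh
# Hypothetical OMF location on the CDB standby and hypothetical current
# location of the non-CDB data files.
OMF_DIR=/tmp/CDB_STANDBY/86B637B62FDF7A65E053F706E80A27CA/datafile
CUR_DIR=/tmp/current_location

mkdir -p "$OMF_DIR" "$CUR_DIR"
touch "$CUR_DIR/users01.dbf"     # stand-in for the real data file

# Create a soft link in the OMF location pointing to the current location.
ln -sf "$CUR_DIR/users01.dbf" "$OMF_DIR/users01.dbf"
ls -l "$OMF_DIR"
```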

If you set standby_pdb_source_file_directory in the CDB standby database, it looks for the data files in that directory. However, it always copies the data files into the OMF location, even if you specify create pluggable database ... nocopy. Setting standby_pdb_source_file_directory is, however, compatible with the as clone clause.

Alternatively, you can plug in with deferred recovery.

Regular Files

The database uses regular files when db_create_file_dest is empty.

If you plug in with enabled recovery, the CDB standby database expects to find the data files in the exact same location (path and file name) as on the primary database. The location is either the full path from the manifest file or the location specified by create pluggable database ... source_file_directory='<data_file_location>'.

If the data files are in a different location on the CDB standby database, you either:

  • Set db_file_name_convert in your CDB standby database. This changes the name of each of the data files accordingly.
  • Set standby_pdb_source_file_directory in your CDB standby database. When media recovery looks for a specific file during plug-in, it searches this directory instead of the full path from the manifest file.
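A sketch of the two settings on the CDB standby database; the paths are hypothetical:

```sql
-- Option 1: translate the primary's data file paths to the standby's paths.
-- db_file_name_convert is not dynamic, so set it in the spfile.
alter system set db_file_name_convert =
  '/u01/oradata/primary', '/u01/oradata/standby' scope=spfile;

-- Option 2: let media recovery search a single directory for the files.
alter system set standby_pdb_source_file_directory =
  '/u01/oradata/standby/ncdb' scope=both;
```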

You can plug in using the as clone clause without problems.

Alternatively, you can plug in with deferred recovery.

Refreshable Clone PDBs

When you migrate a non-CDB using refreshable clone PDBs, you are using a clone of the non-CDB database. Thus, there are no existing data files on the standby database that you can use.

You can only create a refreshable clone PDB with deferred recovery (standbys=none). After you transition the refreshable clone PDB into a regular, stand-alone PDB using alter pluggable database ... refresh mode none, you must follow the process to restore the data files and re-enable recovery. If you use AutoUpgrade, you must wait until the entire job completes.

Until you have completed the recovery process, the PDB is not protected by Data Guard.
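A sketch of the statements involved; the PDB name, source database name, and database link are hypothetical:

```sql
-- On the CDB primary: create the refreshable clone with deferred recovery.
create pluggable database PDB1 from NCDB@CLONE_LINK
  refresh mode manual standbys=none;

-- Refresh as often as needed, then transition to a regular, stand-alone PDB.
alter pluggable database PDB1 refresh;
alter pluggable database PDB1 refresh mode none;
```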

For further information, including how Oracle Cloud Infrastructure makes it easier for you, have a look at Sinan’s blog post.

Important

Whichever method you choose, you must check your Data Guard configuration before going live.

  1. Check the recovery status on all standby databases:

    select name, recovery_status
    from   v$pdbs;
    
  2. Test the Data Guard configuration by performing a switchover.

Don’t go live without checking your Data Guard configuration!
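If you use Data Guard broker, the verification could look like this. CDB_PRIM and CDB_STBY are hypothetical database names, and you would of course verify the configuration between each step:

```
DGMGRL> validate database CDB_STBY;
DGMGRL> switchover to CDB_STBY;
DGMGRL> switchover to CDB_PRIM;
```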

Appendix

PDB Recovery Isolation

PDB Recovery Isolation is a new feature in Oracle Database 21c.

In an Active Data Guard environment, PDB recovery isolation ensures that media recovery of a CDB on the standby is not impacted when one or more PDBs are not consistent with the rest of the CDB.

Source: About PDB Recovery Isolation

If you plug in a database with standbys=all (via a refreshable clone PDB) and the standby database can’t find the data files, PDB recovery isolation kicks in:

  • The standby database disables recovery of the affected PDB.
  • The standby database restores the data files from the primary database.
  • After restore, the standby database re-enables recovery of the PDB.
  • The affected PDB is unprotected until the process is completed.
  • The other PDBs are unaffected by the situation.

PDB Recovery Isolation reduces risk and automates the resolution of the problem.

At the time of writing, it requires a license for Active Data Guard.

Further Reading

Thank You

A big thank you to my valued colleague, Sinan Petrus Toma, for teaching me about PDB recovery isolation.

How to Fix ORA-14519 Conflicting Tablespace Blocksizes During Oracle Data Pump Import

Recently, our team has received multiple questions about an error occurring during an Oracle Data Pump import:

ORA-14519: Conflicting tablespace blocksizes for table : Tablespace <string> block size <number> [partition specification] conflicts with previously specified/implied tablespace <string> block size <number> [object-level default]

Data Pump can’t import a partitioned table if you don’t store the table itself and all partitions in tablespaces with the same block size.
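Before the import, you can check the source database for affected tables with a query along these lines. It is a sketch against the DBA views; note that for a partitioned table, dba_tables.tablespace_name can be null, in which case the owner's default tablespace applies:

```sql
-- Find partitioned tables where the table-level tablespace block size
-- differs from a partition tablespace block size.
select t.owner, t.table_name,
       ts1.tablespace_name as table_tablespace,
       ts1.block_size      as table_block_size,
       ts2.tablespace_name as partition_tablespace,
       ts2.block_size      as partition_block_size
from   dba_tables t
join   dba_tablespaces ts1
       on ts1.tablespace_name = t.tablespace_name
join   dba_tab_partitions p
       on p.table_owner = t.owner and p.table_name = t.table_name
join   dba_tablespaces ts2
       on ts2.tablespace_name = p.tablespace_name
where  ts1.block_size <> ts2.block_size;
```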

What is the Problem?

In almost all cases, you must store a partitioned table and all its partitions in tablespaces with the same block size. The documentation states:

Use caution when creating partitioned objects in a database with tablespaces of different block sizes. The storage of partitioned objects in such tablespaces is subject to some restrictions. Specifically, all partitions of the following entities must reside in tablespaces of the same block size:

  • Conventional tables
  • Indexes

You find the same restriction in Common Questions and Errors on Partition Operations Involving Tablespaces of Different Block Sizes (Doc ID 1401064.1):

Q: Can I store different partitions of the same table in tablespaces of different block sizes? A: No. For each conventional table (except index-organized tables), all partitions of that table must be stored in tablespaces with the same block size.

In the cases we looked at, Data Pump import recreates the partitioned table as it was defined in the source database. But the definition of the partitioned table is invalid – it spans tablespaces of different block size. The target database rejects the CREATE TABLE statement.

Solution

Fix in Source Database

In My Oracle Support note IMPORT OF PARTITIONED TABLE IN NON DEFAULT BLOCKSIZE TABLESPACE FAILS WITH ORA-14519 (Doc ID 272229.1), the specified solution is to fix the invalid table in the source database. Often, users don't want to change the source database. Such a change might have to go through a change management process, or the user might not be willing to take the risk of changing things in the source database.

The solution works, but it's not ideal.

Fix in Target Database

My colleague, Klaus Gronau, came up with a better solution. Fix it during import in the target database. The example below uses a schema-based Data Pump job.

  1. Extract the definition of the offending table and schema:
    impdp ... include="table:""in('MYPARTTAB1')""" \
              include="user:""in('APPUSER')""" \
              include=tablespace_quota \
              sqlfile=myparttab1.sql
    
    • The SQL file also contains objects depending on the table like constraints and indexes.
  2. Edit the SQL file:
    vi myparttab1.sql
    
    • Ensure that the tablespace of the table itself matches the tablespaces used by the partitions. Check the tablespace clauses.
    • Optionally, you can move the constraint and index creation out of the file. After the next Data Pump import, you can add the constraints and indexes. This might speed up the process. You might even change the parallel degree on the index creation to create it faster.
  3. Create the user, quotas, partitioned table and dependent objects using the SQL file.
    @myparttab1.sql
    
  4. Perform the Data Pump import:
    impdp ... table_exists_action=truncate \
              data_options=trust_existing_table_partitions
    
    • The table_exists_action tells Data Pump to truncate any existing tables. The partitioned table is already empty since we just created it.
    • The data_options parameter instructs Data Pump to trust that the definition of the partitioned table matches that stored in the dump file. Data Pump does not perform any validation of the table, which is fine because we just created the table using the definition from the dump file.

Word of advice:

  • This method uses the include clause to filter on the offending table. If the dump file contains a table of the same name, but in a different schema, then the filter applies to both tables. The filter does not take the schema into account; it matches all tables named MYPARTTAB1 in any schema.

Transportable Tablespace

If you move data with transportable tablespaces, there is no way to change the table definition on import. You must fix the problem in the source database.

Appendix

Thanks

A big shoutout to my colleague, Klaus Gronau, who provided the test case and solution for this issue.

How Can It Happen?

  • Observe the following test case:

    alter system set db_32k_cache_size=30M scope=both;
    create tablespace users_8 datafile size 50m blocksize 8k;
    create tablespace users_32 datafile size 50m blocksize 32k;
    
    create user appuser identified by "appuser" default tablespace users_8;
    grant connect, create table to appuser;
    alter user appuser quota unlimited on users_8;
    alter user appuser quota unlimited on users_32;
    
    connect appuser/appuser
    
  • The following create table statement fails because of the different tablespace block sizes:

    create table myparttab1 (
       col1 number
    ) tablespace users_8 
    partition by range (col1) (
       partition myparttab1_1 values less than (100001) tablespace users_32 ,
       partition myparttab1_2 values less than (maxvalue) tablespace users_32
    );
    
    ERROR at line 5:
    ORA-14519: Conflicting tablespace blocksizes for table : Tablespace USERS_32
    block size 32768 [partition specification] conflicts with previously
    specified/implied tablespace USERS_8 block size 8192 [object-level default]
    
  • But if you remove the table tablespace specification, it works. See how I removed tablespace users_8. However, the user default tablespace is users_8. The table segment of the partitioned table inherits the user default. The table definition is now unsupported:

    create table myparttab1 (
       col1 number
    ) 
    partition by range (col1) (
       partition myparttab1_1 values less than (100001) tablespace users_32 ,
       partition myparttab1_2 values less than (maxvalue) tablespace users_32
    );
    
    Table created.
    
  • You can check the table and partition definition in the data dictionary:

    select 'TABLE' as type, 
           table_name as segment_name, 
           nvl(tablespace_name, (select default_tablespace from user_users))  as tablespace 
    from user_tables 
    where table_name='MYPARTTAB1'
    union all
    select 'PARTITION', 
           partition_name, 
           tablespace_name 
    from user_tab_partitions 
    where table_name='MYPARTTAB1';
    
    TYPE	  SEGMENT_NAME	  TABLESPACE
    --------- --------------- ---------------
    TABLE	  MYPARTTAB1	  USERS_8
    PARTITION MYPARTTAB1_1	  USERS_32
    PARTITION MYPARTTAB1_2	  USERS_32
    
  • We filed a bug to tighten the syntax validation in the create table statement. For now, store the table segment in the same tablespace as the partitions using the tablespace clause on the create table statement.
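Applied to the test case above, the recommended create table statement stores the table segment in the same tablespace as the partitions:

```sql
create table myparttab1 (
   col1 number
) tablespace users_32
partition by range (col1) (
   partition myparttab1_1 values less than (100001) tablespace users_32,
   partition myparttab1_2 values less than (maxvalue) tablespace users_32
);
```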

How Often Do You Patch Your Oracle Database Client?

The most important reason why you must patch your Oracle Database is security. The threats any company faces today are very different from those of 10 or 20 years ago. Especially with the introduction of ransomware, everyone is a target.

When I talk to database specialists, there is consensus about the importance of applying patches. Luckily, it's rare nowadays that you have to argue with people over it. Further, I see more and more companies putting this higher on the agenda. I know patching is extra work, but you must do it. You must avoid the embarrassment and potentially devastating effect on your company, as explained by Connor McDonald.

How do you patch the Oracle Database? Release Updates are the vehicle for delivering security fixes to your Oracle Database; so far, so good. But what about the client? How often do you patch your Oracle Database clients? Do you need to patch the client?

As always, the answer is: It depends… on which clients you are using.

The Critical Patch Updates

The place to look for information about security issues is the critical patch updates (CPU). Be sure to check it every quarter when the new one comes out.

If a specific client is affected, it is listed. For example, check the CPU from January 2023. It lists a vulnerability in Oracle Data Provider for .NET.

If you look back a few more quarters, vulnerabilities in the client appear to be rare. But they do occur.

Client-Only

If you use client-only installations, you can check the Oracle Database Server Risk Matrix in the CPU. In the text, Oracle states whether the vulnerabilities affect client-only installations.

Here is an overview of the last three years showing whether the vulnerabilities affected the client-only installation.

Release Update   Client-only affected
October 2023     No
July 2023        Yes
April 2023       No
January 2023     Yes
October 2022     No
July 2022        Yes
April 2022       No
January 2022     No
October 2021     No
July 2021        Yes
April 2021       Yes
January 2021     No

To patch a client-only installation, you download the latest Release Update and apply it to the Oracle home, just as if it were the Oracle Database itself (the server).
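The mechanics are the same OPatch routine you know from the server; a sketch, with hypothetical paths and a placeholder for the Release Update patch directory:

```
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/client_1
export PATH=$ORACLE_HOME/OPatch:$PATH
cd /path/to/unzipped/ru/<patch_number>
opatch apply
opatch lspatches
```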

You can use ORAdiff to find a list of included fixes.

Instant Client

For instant client, you download a new package and overwrite the existing instant client.

JDBC

To update JDBC, you download new jar files or use Maven Central Repository. On the JDBC download page, you can find a list of bugs fixed in the various releases. Here is the list for 19c.
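If you pull the driver from Maven Central, updating is a matter of bumping the version in your build file. A sketch for Maven — the coordinates below are for the 19c/JDK8 driver; check Maven Central for the current version:

```xml
<dependency>
  <groupId>com.oracle.database.jdbc</groupId>
  <artifactId>ojdbc8</artifactId>
  <!-- Pick the version matching the latest Release Update -->
  <version>19.21.0.0</version>
</dependency>
```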

ODP.NET

For Oracle Data Provider for .NET (ODP.NET), you can find the latest drivers on NuGet Gallery. The readme section has a list of bugs fixed since the previous version.

OCI

For OCI (Oracle Call Interface), you get the latest instant client and extract the relevant files from there.

ODAC

The Oracle Data Access Components (ODAC) package also contains other clients. You download the latest version and follow the instructions to unpack it.

Recommendation

For the database server, I strongly recommend:

  • Applying patches every quarter.
  • Using the latest Release Updates, although I do also understand why some people prefer to use the second latest Release Update (an N-1 approach).

For the database clients, I recommend:

  • Having a structured process to evaluate the risk when the Critical Patch Update Advisories come out every quarter.
  • Assess not only the security issues but also functional issues. Some drivers publish a list of fixed bugs. Use it to determine whether you use functionality that could benefit from the bug fixes.
  • Applying patches to your clients periodically. This ensures you have a structured and well-tested process. When it becomes urgent to patch your client, it’s easy because you’ve already done it so many times.

In general, I strongly recommend: