It’s a Wrap – Transportable Tablespaces to the Extreme

Recently, we hosted our webinar Cross Platform Migration – Transportable Tablespaces to the Extreme. You can now watch the recording on our YouTube channel.

The audience at our webinar was very active and asked many good questions. For all the details, you can read the Q&A and flip through the slides.

Some of the attendees asked questions we were not able to answer during the webinar. Those questions – and answers – are included in the Q&A.

The New Procedure

For cross-platform and cross-endian migrations, we have a new procedure called M5. It replaces the former XTTS v4 Perl script that has been used for many migrations. We could see the need for a change as Oracle Databases worldwide grow in size and complexity. M5 implements the latest RMAN and Data Pump technology to deliver the fastest possible migrations.

You can download the M5 scripts and read the procedure on My Oracle Support (Doc ID 2999157.1).

Next Webinar

Mark your calendar for our next webinar:

Move to Oracle Database 23c – Everything you need to know about Oracle Multitenant

Oracle Database 23c only supports the CDB architecture. If you haven’t migrated to Oracle Multitenant yet, you will with your upgrade to 23c. How do you approach it in the most efficient way? What are the other options? And why is this a migration unless you already have PDBs? All this and much more about how to work with Multitenant, how AutoUpgrade automates the entire move for you, end-to-end, plus best practices and tips and tricks. We’ll guide you, and you will be ready to move to Oracle Database 23c right away.

Sign up now and secure your seat.

All tech – no marketing!

Happy Migrating!

The Next-generation Cross-platform Migration for Oracle Database

I am very pleased to share that Oracle has officially launched a new method for cross-platform migrations of Oracle Database.

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)

The M5 script is the next-generation cross-platform transportable tablespace migration procedure for Oracle Database.

You can also use the script for cross-endian migrations, so this is perfect for AIX, HP-UX or SPARC migrations to Exadata Database Machine or any other little endian platform.

Next-generation

Before we launched M5, you would use the XTTS v4 Perl script for such migrations.

Timeline of methods for cross-endian migrations

We launched the latest version of the XTTS v4 Perl script many years ago. Over the years, as databases grew bigger and bigger, we saw multiple issues with the XTTS v4 Perl script, including:

  • No multisection backups for bigfile tablespaces
  • No encrypted tablespaces
  • Inefficient parallelism
  • Incomplete multitenant support

We wanted to solve all those issues with M5 – and we did! M5 uses newer RMAN functionality, and we made the procedure much simpler. It relies entirely on out-of-the-box RMAN functionality. On the source:

BACKUP ... TABLESPACE ... ;

On target we use:

RESTORE ALL FOREIGN DATAFILES ... ;

The latter command was introduced in Oracle Database 18c and enhanced in Oracle Database 19c, which determines the version requirements for the source and target databases.

Want to Know More?

We have a webinar later today about this new method. If you are interested, we still have open seats.

Cross Platform Migration – Transportable Tablespaces to the Extreme, February 22, 16:00 CET

Next week, we will add the recording to our YouTube channel. Be sure to subscribe so you don’t miss out on anything.

If you want a sneak peek at the slides, go ahead.

I have an elaborate blog post series about cross-platform migrations. I will soon update it with more information about the M5 migration method.

Happy Migrating!

Data Pump and Parallel Transportable Jobs

In migrations, you often use transportable tablespaces or Full Transportable Export/Import (FTEX). Downtime is always of concern when you migrate, so having Data Pump perform the transportable job in parallel is a huge benefit.

How much benefit? Let’s find out with a little benchmark.

The Results

Let’s start with the interesting part: how much time can you save using parallel transportable jobs in Data Pump?

The following table lists four different scenarios. Each scenario consists of an export and an import. The total is the time Data Pump needs to finish the migration – the sum of export and import. In a real migration, many other things are in play, but here, we are looking solely at Data Pump performance.

Export             Export time   Import             Import time   Total
19c, no parallel   2h 2m         19c, no parallel   6h 44m        8h 46m
23c, parallel 4    1h 48m        23c, parallel 4    2h 33m        4h 21m
19c, no parallel   2h 2m         23c, parallel 16   1h 23m        3h 25m
23c, parallel 16   1h 8m         23c, parallel 16   1h 23m        2h 31m

The first row is what you can do in Oracle Database 19c: almost 9 hours. Compare that to the last row with parallel 16 in Oracle Database 23c: almost a 3.5x reduction.

If you migrate from Oracle Database 19c to Oracle Database 23c (third row), you can still use parallel import and gain a significant benefit.

The Benchmark

I used an Oracle Database 19c home and an Oracle Database 23c home.

My test database:

  • E-Business Suite database
  • 300 GB physical size
  • 630,000 database objects, including
    • 89,000 tables
    • 66,000 indexes
    • 60,000 packages
  • CPU_COUNT = 16
  • SGA_TARGET = 64G

My test machine:

  • OCI shape VM.Standard.E4.Flex
  • 8 CPUs
  • 128 GB mem
  • Fast disks (max. IOPS 128,000)

How to

It’s very easy to enable parallel transportable jobs. If you want to use 16 parallel workers, on export:

expdp ... parallel=16 dumpfile=ftex%L.dmp

On import:

impdp ... parallel=16
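If you prefer parameter files, the parallel setting fits in naturally. Below is a minimal sketch of an export parameter file; the directory name, dump file name, and parallel degree are assumptions you should adapt, and parallel transportable export requires Oracle Database 21c or later:

$ cat ftex_exp.par
full=y
transportable=always
directory=dpdir
dumpfile=ftex%L.dmp
filesize=5g
parallel=16
metrics=y
logtime=all

$ expdp system parfile=ftex_exp.par

On import, you reference the dump files and the copied data files (transport_datafiles) and choose whatever parallel degree you like; it does not have to match the export degree.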

What Happens

A Data Pump job consists of several object paths that Data Pump must process. An object path could be tables, indexes, or package bodies.

Parallel Export

Each worker takes an object path and starts to process it. The worker works alone on this object path. You get parallelism by multiple workers processing multiple object paths simultaneously.

Parallel Import

During import, Data Pump must process each object path in a certain order. Data Pump can only import constraints once it has imported tables, for example.

Data Pump processes each object path in the designated order, then splits the work in one object path to many workers. You get parallelism by multiple workers processing one object path in parallel.

The Fine Print

  • Parallel transportable jobs work in Oracle Database 21c and later. In Oracle Database 19c, a transportable job has no parallel capabilities.

  • Data Pump can use parallel only on transportable jobs via a dump file. Network mode is not an option for parallel transportable jobs.

  • If you export in Oracle Database 19c (which does not support parallel transportable jobs), you can still perform a parallel import into Oracle Database 23c.

  • The export parallel degree and import parallel degree do not have to match. You can export with parallel degree 1 and import with parallel degree 16.

  • When you enable parallel jobs, Data Pump starts more workers. How many extra resources do they use?

    • I didn’t notice any significant difference in undo or temp tablespace use.
    • I didn’t notice any extra pressure on the streams pool, either. I had the streams pool set to 256M, and the database didn’t perform any SGA resize operation during my benchmark.

Conclusion

For migrations, parallel transportable jobs in Data Pump are a huge benefit. Every minute of downtime counts, and this feature has a massive impact.

Understand How a Change of Database Time Zone Affects Transportable Tablespaces

In a recent migration using full transportable export/import, I noticed a lot of time spent on the following SQL:

SELECT NVL((SELECT 2
            FROM   sys.ku$_all_tsltz_tables 
            WHERE  owner = :1 AND table_name = :2), 0) 
FROM   sys.dual

The SQL ID was g3qu7py3g0yg0. Each execution of the SQL took only a few seconds, but Data Pump executed the statement many times.

Data Pump also wrote in the log file that the database timezone differed:

01-NOV-23 07:43:22.152: W-1 Source time zone is +00:00 and target time zone is -07:00.

The following applies to full transportable export/imports using the following Data Pump parameters:

full=y
transportable=always

What Happens?

In a full transportable import, if the database time zone is different, Data Pump translates any data of type Timestamp with local timezone (TSLTZ) to the new database timezone.

On startup, Data Pump detects the difference between the source and target database timezone. For each table, Data Pump checks whether it must convert data. If needed, the data is converted from the source database timezone and stored in the target database timezone.

The check and conversion take time. This is the price you must pay for changing the database timezone on import.

Alternatively, you can import into a database with the same timezone. Then Data Pump skips the check and conversion completely.

In this migration, we could have saved 22 minutes on import by importing into a database with the same time zone.

Changing the Database Timezone

You can find the database timezone using:

select dbtimezone from dual;

If you don’t have any tables with TSLTZ columns, you can change the database timezone:

alter database set time_zone='+00:00';
shutdown immediate
startup
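Before changing it, you may want to double-check that no application tables use TSLTZ columns. Here is an illustrative query; it assumes access to the DBA views, and the schema filter is something you should adapt to your environment:

select owner, table_name, column_name
from   dba_tab_columns
where  data_type like 'TIMESTAMP%WITH LOCAL TIME ZONE'
and    owner in (select username from dba_users where oracle_maintained = 'N');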

The database timezone affects only:

  • TSLTZ columns
  • Function CURRENT_DATE
  • Function CURRENT_TIMESTAMP
  • Function LOCALTIMESTAMP

If you don’t use any of the above, it should be safe to change the database timezone.

Columns of type Timestamp with timezone (TSTZ) and the database timezone file (v$timezone_file) are totally unrelated to the database timezone.

Full Transportable vs. Traditional Transportable

In a traditional transportable import, Data Pump does not import tables with columns of type TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ). You must move such tables using a regular Data Pump import.

As mentioned above, in a full transportable export/import, Data Pump translates the data to match the new database timezone.

Other Blog Posts in This Series

How to use Transportable Tablespace with TDE Tablespace Encryption

You can use transportable tablespace to migrate your data between databases. It is a great way of moving your data. How does it work together with TDE Tablespace Encryption?

It depends on which kind of migration you are planning. The endian format of the source and target platforms (or operating system) plays a vital role.

First, you determine the endian format of the source and target database:

SQL> select platform_name, endian_format from v$transportable_platform;
  • If the endian formats are identical, you will perform a same-endian migration.
  • If the endian formats differ, you will perform a cross-endian migration.

Same-endian Migration of TDE Encrypted Tablespaces

Using transportable tablespaces is supported when you migrate to a different platform, as long as the endian format does not change.

Oracle Database uses a two-tier key architecture which consists of two encryption keys:

  • Master Encryption Key (MEK)
  • Tablespace Encryption Key (TEK)

The TEK is stored in the tablespace itself. When you use transportable tablespace, you copy the data files, and thus the TEK remains the same. The data in the tablespace stays encrypted with the same TEK during the entire migration.

But what about the MEK? The MEK is required to access the TEK.

Option 1: Use ENCRYPTION_PASSWORD parameter

  1. During Data Pump transportable tablespace export, you specify an ENCRYPTION_PASSWORD:

    expdp ... encryption_password=<a_strong_and_secure_password>
    

    The encryption password is not the MEK of the source but a special password you choose for the migration only.

  2. On import, you specify the encryption password:

    impdp ... encryption_password=<a_strong_and_secure_password>
    

A benefit of this option is that the source and target databases are encrypted using different MEKs. You can query the target database and verify that no new MEK has been added. The target database continues to use its own MEK:

SQL> select * from v$encryption_keys;

According to the documentation, this is the recommended option.

Option 2: Import Master Encryption Key

  1. You start the Data Pump transportable tablespace export.
  2. In the source database, you export the source database MEK:
    SQL> administer key management export keys 
         with secret "<another_strong_and_secure_password>"
         to '/home/oracle/secure_location/exported-keys'
         force keystore identified by "<source_database_keystore_password>";
    
  3. You import the source database MEK into the target database:
    SQL> administer key management import keys 
         with secret "<another_strong_and_secure_password>"
         from '/home/oracle/secure_location/exported-keys'
         force keystore identified by "<target_database_keystore_password>"
         with backup;
    
  4. You start the Data Pump transportable tablespace import.

By querying v$encryption_keys, you can see that another key has been added to the database.
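A query along these lines shows it; the exact columns you select are up to you (this is just an illustrative sketch):

SQL> select key_id, creation_time, creator_dbname
     from   v$encryption_keys
     order  by creation_time;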

You can read more about export and import of MEKs in the documentation.

Option 3: Oracle Key Vault

If you are using Oracle Key Vault, it’s very easy to allow the target database to access the source database master encryption key.

When you perform the Data Pump transportable tablespace import in the target database, it will already have access to the encryption keys that protect the tablespaces. Nothing further is needed.

What About Rekeying?

If you make the source database encryption key available to the target database, consider whether you also want to perform a rekey operation. This applies to options 2 and 3.
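A rekey generates a new MEK in the target database so that the imported source key is no longer the active key. A minimal sketch, assuming a password-protected keystore (replace the password placeholder):

SQL> administer key management set key
     force keystore identified by "<target_database_keystore_password>"
     with backup;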

ORA-39396

If you use option 2 or 3, you will receive the following warning during the Data Pump transportable tablespace export:

ORA-39396: Warning: exporting encrypted data using transportable option without password

This is expected behavior:

This warning points out that in order to successfully import such a transportable tablespace job, the target database wallet must contain a copy of the same database master key used in the source database when performing the export. Using the ENCRYPTION_PASSWORD parameter during the export and import eliminates this requirement.

Cross-endian Migration of TDE Encrypted Tablespaces

You can’t migrate an encrypted tablespace to a platform with a different endian format using transportable tablespace. It is not supported.

If you have an encrypted tablespace and you want to use transportable tablespace, follow these steps (see the sketch after the list):

  1. Decrypt the tablespace
  2. Migrate the tablespace using transportable tablespace
  3. Re-encrypt the tablespace
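If your source database is Oracle Database 12.2 or later, you can do steps 1 and 3 online. A sketch, assuming a tablespace named USERS and AES256 as the algorithm:

SQL> --On the source, before the migration
SQL> alter tablespace users encryption online decrypt;

SQL> --On the target, after the migration
SQL> alter tablespace users encryption online using 'AES256' encrypt;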

Appendix

Further Reading

Other Blog Posts in This Series

Understand Why Data Pump Produces ORA-39218 or ORA-39216 on Evolved Types

Transportable Tablespaces is a great way of migrating data from one Oracle Database to another. However, if your database uses object types and those object types have evolved over time, you might hit an error during import:

W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
ORA-39083: Object type TABLE:"APPUSER"."CARS" failed to create with error:
ORA-39218: type check on object type "APPUSER"."CAR_TYPE" failed
ORA-39216: object type "APPUSER"."CAR_TYPE" hashcode or version number mismatch

Failing sql is:
BEGIN SYS.DBMS_METADATA.CHECK_TYPE('APPUSER','CAR_TYPE','4','613350A0E37618A6108C500936D0C6162C',''); END;

Typically, a number of dependent objects will fail with ORA-39112 as well.

Let me explain the problem and what you can do about it.

Object Types and Evolution

The term object types refers to database objects that you create using the CREATE TYPE statement.

Object types are also known as user-defined types (UDT) and abstract data types (ADT).

When Oracle Database introduced object types many releases ago, they could not evolve. In simple words, you only had CREATE TYPE available. If you needed to change the type, you had to drop and recreate it. Later on, Oracle Database allowed type evolution, and you could also do ALTER TYPE.

An Example of Evolution

Let me illustrate object types and evolution using an example:

  • I create a supertype called VEHICLE_TYPE
  • I create another type called CAR_INFO_TYPE
  • I create a subtype called CAR_TYPE which inherits VEHICLE_TYPE and includes an attribute based on CAR_INFO_TYPE

Overview of the types used in the example - part 1

I add the three types:

CREATE TYPE "APPUSER"."VEHICLE_TYPE" 
   IS OBJECT (
      make VARCHAR2 (40)
) NOT FINAL;
/
CREATE TYPE "APPUSER"."CAR_INFO_TYPE" 
   IS OBJECT (
      model VARCHAR2(40)
);
/
CREATE TYPE "APPUSER"."CAR_TYPE" 
   UNDER VEHICLE_TYPE (
      car_info CAR_INFO_TYPE
);
/

I query the data dictionary to find the types and verify that their version is 1. I expect this because the types are brand-new.

select o.name, t.version# 
from obj$ o, user$ u,type$ t 
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;

NAME             VERSION#
-------------------------
CAR_INFO_TYPE           1
CAR_TYPE                1
VEHICLE_TYPE            1

Now I want to add two additional attributes to CAR_INFO_TYPE:

  • horsepower
  • color

Overview of the types used in the example - part 2

ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE horsepower NUMBER CASCADE
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE color VARCHAR2(20) CASCADE
/

Now, I check the data dictionary again. CAR_INFO_TYPE has now advanced two versions to version 3. Two ALTER TYPE statements mean two new versions. But notice CAR_TYPE. It advances as well.

select o.name, t.version# 
from obj$ o, user$ u,type$ t 
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;

NAME             VERSION#
-------------------------
CAR_INFO_TYPE           1
CAR_INFO_TYPE           2
CAR_INFO_TYPE           3
CAR_TYPE                1
CAR_TYPE                2
CAR_TYPE                3
VEHICLE_TYPE            1

If you alter a type that another type uses, both types evolve. No ALTER TYPE was issued on CAR_TYPE, yet it still evolved. This becomes a problem later on.

Same Objects but Different Evolution

If I use the example above but change the order of the statements, the types evolve differently. They are still the same objects, but their evolution history is different.

Let me illustrate how the types can evolve differently. I execute the ALTER TYPE statements on CAR_INFO_TYPE immediately after its CREATE TYPE statement:

CREATE TYPE "APPUSER"."VEHICLE_TYPE" 
   IS OBJECT (
      make VARCHAR2 (40)
) NOT FINAL;
/
CREATE TYPE "APPUSER"."CAR_INFO_TYPE" 
   IS OBJECT (
      model VARCHAR2(40)
);
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE horsepower NUMBER CASCADE
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE color VARCHAR2(20) CASCADE
/
CREATE TYPE "APPUSER"."CAR_TYPE" 
   UNDER VEHICLE_TYPE (
      car_info CAR_INFO_TYPE
);
/

Here you can see the two approaches compared to each other:

First approach                            Second approach
CREATE TYPE "APPUSER"."VEHICLE_TYPE"      CREATE TYPE "APPUSER"."VEHICLE_TYPE"
CREATE TYPE "APPUSER"."CAR_INFO_TYPE"     CREATE TYPE "APPUSER"."CAR_INFO_TYPE"
CREATE TYPE "APPUSER"."CAR_TYPE"          ALTER TYPE "APPUSER"."CAR_INFO_TYPE"
ALTER TYPE "APPUSER"."CAR_INFO_TYPE"      ALTER TYPE "APPUSER"."CAR_INFO_TYPE"
ALTER TYPE "APPUSER"."CAR_INFO_TYPE"      CREATE TYPE "APPUSER"."CAR_TYPE"

When I check the version of the types, I see that CAR_TYPE doesn’t evolve together with CAR_INFO_TYPE:

select o.name, t.version# 
from obj$ o, user$ u,type$ t 
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;

NAME             VERSION#
-------------------------
CAR_INFO_TYPE           1
CAR_INFO_TYPE           2
CAR_INFO_TYPE           3
CAR_TYPE                1
VEHICLE_TYPE            1

In contrast to the previous example, because I changed the order of the statements, CAR_TYPE remains on version 1 (not 3 as before).

When you alter a type, you decide whether you also want to update the rows in a table that uses that type.

  • If you choose to update the corresponding table data, it can take quite some time on a big table.
  • If you choose to skip the update, the alteration is fast. However, when the database later reads the data, it must use the evolution information to determine how to interpret each row (see the sketch below).
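The choice is made with the CASCADE clause of ALTER TYPE. A sketch of both variants, reusing CAR_INFO_TYPE from the example (the new attribute is purely illustrative):

--Converts the existing table data right away; can be slow on big tables
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE weight NUMBER
   CASCADE INCLUDING TABLE DATA
/
--Fast alteration; rows are converted when the database later reads them
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE weight NUMBER
   CASCADE NOT INCLUDING TABLE DATA
/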

What Data Pump Does

Data Pump can recreate the types during a regular import and during a transportable tablespace import. But it handles the types one by one:

  • Create VEHICLE_TYPE (and all alterations if there were any)
  • Then create CAR_INFO_TYPE and all alterations
  • Then create CAR_TYPE (and all alterations if there were any)

Now, the definition of the types is correct (they have the right attributes and so on). But the evolution of the types does not necessarily match that of the source database. Unless the types in the source database were created and altered in exactly the same order as Data Pump recreates them, the versions of the types might differ.

What Happens During a Regular Import

During a regular (not transportable) export, Data Pump always uses the external table access method when an evolved type is involved. This access method performs a CREATE TABLE <external table> ... AS SELECT, which converts each row to the highest version of the type. All rows in the dump file are then of the highest version of the type.

On import, Data Pump creates the types, and the versions of the types might end up different. When Data Pump loads the data into the table, all rows are already on the same version because they were converted during export. The data access layer underneath Data Pump can therefore be more lenient: if a type version is found with a matching hash code, the data can be imported safely.

Data Pump then checks for a matching object type using a hash code only. The version does not have to match. Here’s a query to get the hash code:

select o.name, t.version#, t.hashcode 
from obj$ o, user$ u, type$ t 
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;

What Happens During a Transportable Tablespace Import

In transportable tablespace mode, things are different. Data Pump does not unload the data into a dump file. Instead, the data stays in the data files. To read the data correctly, you must have the correct type definitions and information on how they evolved.

If the type evolution doesn’t match that of the source database, Data Pump can’t be sure the data is accurate and issues an error. Using the above query, the type evolution must match on both version and hash code.

Solution

This is not a bug in Data Pump. Data Pump ensures that your data is not corrupted during import. Due to a restriction in the implementation of type evolution, Data Pump is not able to completely recreate a type’s evolution and this might cause problems.

These are your options:

Option 1: Use Regular Data Pump Import

As described above, during a regular import Data Pump handles the types and dependent objects differently.

  1. Perform transportable tablespace import.
  2. Find types and dependent objects that Data Pump can’t import.
  3. Import the affected types and objects using a regular Data Pump import.

Option 2: Recreate Types and Dependent Objects Without Data

In one case I worked on, a queue table used the evolved type. Before migration, the application team would ensure that the application processed all messages in the queue. Thus, at the time of migration, the queue was empty.

  1. The transportable tablespace import failed to create the queue table due to evolved type.
  2. After import, the DBA recreated the queue using DBMS_AQADM.
  3. Also, the DBA granted permissions on the queue.

If a regular table used the evolved type, you would need to recreate the table using CREATE TABLE syntax instead.

Option 3: Recreate Types Without Evolution Before Export

Recreate the types in the source database in the same order as Data Pump will create them in the target database, one type at a time: first the CREATE TYPE statement and any ALTER TYPE statements on the same type, then the next type, and the next, and the next…

If a dependent table or queue uses the evolved type, it can be quite cumbersome and time consuming to recreate the type in the source database.

Option 4: Create Type in Target Using Same Evolution Before Import

If you know the evolution history of the types in the source database (i.e., in which order the CREATE TYPE and ALTER TYPE statements were issued), then you can create the types manually in the target database and ensure they evolve in the same way. You must create the types in exactly the same way (including the number of attributes, the order in which they were added, and so forth).

  1. In the target database, you create and alter the types in the correct order. To do that, you must also create the schema first.

  2. You verify that types are of the same version in source and target database. The hash code must be identical as well:

    select o.name, t.version#, t.hashcode 
    from obj$ o, user$ u, type$ t 
    where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
    order by o.name, t.version#;
    
  3. You perform the transportable tablespace import.

  4. Data Pump reports an error for the types that already exist. You ignore those errors. You can also exclude types from the import (previous step) using EXCLUDE=TYPES.

  5. Ensure that dependent objects that use the affected types are valid. It could be other types, tables, or queues.

Conclusion

Evolved types can cause ORA-39218 or ORA-39216 during a transportable tablespace import. Implementation restrictions in object types cause these errors. Data Pump can’t evolve the types in the same way and issues an error. You must find a different way of moving the affected types and dependent objects.

Happy migrating!

Appendix

References

Complete Test Case

For reference, here is the test case I used for my research. It uses our hands-on lab, Hitchhiker’s Guide for Upgrading to Oracle Database 19c.

1. Create Test Data

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES

CONN / AS SYSDBA

DROP USER APPUSER CASCADE;
DROP TABLESPACE APPS INCLUDING CONTENTS AND DATAFILES;

CREATE TABLESPACE APPS;
CREATE USER APPUSER IDENTIFIED BY APPUSER DEFAULT TABLESPACE APPS;
ALTER USER APPUSER QUOTA UNLIMITED ON APPS;
GRANT CONNECT TO APPUSER;
GRANT CREATE TYPE TO APPUSER;
GRANT CREATE TABLE TO APPUSER;
GRANT AQ_ADMINISTRATOR_ROLE TO APPUSER;

CONN APPUSER/APPUSER

CREATE TYPE "APPUSER"."VEHICLE_TYPE" 
   IS OBJECT (
      make VARCHAR2 (40)
) NOT FINAL;
/

CREATE TYPE "APPUSER"."CAR_INFO_TYPE" 
   IS OBJECT (
      model VARCHAR2(40)
);
/

CREATE TYPE "APPUSER"."CAR_TYPE" 
   UNDER VEHICLE_TYPE (
      car_info CAR_INFO_TYPE
);
/

ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE horsepower NUMBER CASCADE
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE color VARCHAR2(20) CASCADE
/

CREATE TABLE "APPUSER"."CARS" (
   id number, 
   car_info car_type
);

INSERT INTO "APPUSER"."CARS"
VALUES (1, APPUSER.CAR_TYPE('FORD', APPUSER.CAR_INFO_TYPE('FIESTA', 200, 'BLUE')));
COMMIT;

BEGIN
   DBMS_AQADM.CREATE_QUEUE_TABLE (
      queue_table        => 'APPUSER.CAR_QUEUE_TABLE',
      queue_payload_type => 'APPUSER.CAR_TYPE');

   DBMS_AQADM.CREATE_QUEUE (
      queue_name  => 'APPUSER.CAR_QUEUE',
      queue_table => 'APPUSER.CAR_QUEUE_TABLE');

   DBMS_AQADM.START_QUEUE (
      queue_name => 'APPUSER.CAR_QUEUE',
      enqueue    => TRUE);
END;
/

DECLARE
  l_enqueue_options     DBMS_AQ.enqueue_options_t;
  l_message_properties  DBMS_AQ.message_properties_t;
  l_message_handle      RAW(16);
  l_event_msg           APPUSER.CAR_TYPE;
BEGIN
  l_event_msg := APPUSER.CAR_TYPE('VOLVO', APPUSER.CAR_INFO_TYPE('V90', 400, 'GREY'));

  DBMS_AQ.enqueue(queue_name          => 'APPUSER.CAR_QUEUE',        
                  enqueue_options     => l_enqueue_options,     
                  message_properties  => l_message_properties,   
                  payload             => l_event_msg,             
                  msgid               => l_message_handle);

  COMMIT;
END;
/

conn / as sysdba
select o.name, o.oid$, t.hashcode, t.version# 
from obj$ o, user$ u,type$ t 
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;

2. Transportable Tablespace Using FTEX

mkdir /tmp/dpdir
rm /tmp/dpdir/*
mkdir /u02/oradata/CDB2/pdb2

sqlplus / as sysdba<<EOF
ALTER TABLESPACE USERS READ ONLY;
ALTER TABLESPACE APPS READ ONLY;
create or replace directory dpdir as '/tmp/dpdir';
grant read, write on directory dpdir to system;
EOF

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba<<EOF
   startup
   alter pluggable database PDB2 close;
   drop pluggable database PDB2 including datafiles;
   create pluggable database PDB2 admin user adm identified by adm file_name_convert=('pdbseed', 'pdb2');
   alter pluggable database PDB2 open;
   alter pluggable database PDB2 save state;
   alter session set container=PDB2;
   create directory mydir as '/tmp/dpdir';
   grant read, write on directory mydir to system;
   create public database link SOURCEDB connect to system identified by oracle using 'UP19';
EOF

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba<<EOF
set line 300
select 'cp ' || file_name || ' /u02/oradata/CDB2/pdb2/df' || file_id as c1 from dba_data_files where tablespace_name in ('USERS', 'APPS');
EOF

#Copy the files

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
impdp system/oracle@pdb2 network_link=sourcedb full=y \
transportable=always metrics=y exclude=statistics directory=mydir \
logfile=pdb2.log \
transport_datafiles='/u02/oradata/CDB2/pdb2/df4' \
transport_datafiles='/u02/oradata/CDB2/pdb2/df5'

sqlplus appuser/APPUSER@PDB2<<EOF
  select * from appuser.cars;
  select * from APPUSER.CAR_QUEUE_TABLE;
EOF

If Importing Statistics Using DBMS_STATS Is Slow

When migrating Oracle Databases, you often transport the statistics using dbms_stats. Our team got involved in an interesting case the other day, and I would like to share the solution with you.

Problem

A customer migrated to Oracle Database 19c and decided to move the statistics using dbms_stats.import_schema_stats. They noticed that the procedure started to burn a lot of CPU, and after 38 hours, they gave up.

A SQL Monitor report showed a single statement that consumed almost all of the time: an insert into sys.dbms_stats_id_map_tab with a subselect referencing the staging table. The staging table is the one you specify in the stattab argument of dbms_stats.import_schema_stats. It holds the optimizer statistics in a portable format, from which the procedure inserts them into the data dictionary in the proper format.

Obviously, they could already see that the procedure would take far too long.

Diagnosing

The first shot was to look at the code behind dbms_stats. But this specific code hadn’t changed since its introduction many releases ago.

Recursive statements that touch the data dictionary immediately brought our attention to dictionary statistics. But the customer told us that they were not stale.

The customer ran SQL Tuning Advisor on the offending statement, and one of the findings was about stale statistics. SQL Tuning Advisor recommended gathering statistics on the staging table and corresponding indexes.

One of our performance experts looked at the execution plan and found a pattern he had seen before. He tried to disable Join Predicate Push Down in the session. It helped, but this was just a workaround. We wanted to find the root cause.

Solution

SQL Tuning Advisor had found the real problem: stale statistics on the staging table and corresponding indexes. Once the customer gathered statistics on the staging table and its indexes, the import of statistics finished in 2 hours and 27 minutes, which was acceptable.

We also discovered that the dictionary statistics were not as accurate as the customer had initially concluded. In fact, by mistake, they had misinformed us. A fresh run of dbms_stats.gather_dictionary_stats gave a slight performance improvement as well.

Recommendations

These recommendations are now part of our best practices for migrations.

  1. Before importing statistics using dbms_stats.import_schema_stats, gather stats on the staging table.

  2. Immediately after importing a lot of data, and before you import statistics with dbms_stats or anything else, you should gather dictionary statistics. This applies to regular Data Pump imports and transportable tablespace metadata imports.

The run book should look like this:

$ impdp parfile=import.par ...
$ sqlplus / as sysdba

SQL> exec dbms_stats.gather_dictionary_stats; 
SQL> exec dbms_stats.gather_table_stats(
		ownname=>'SYSTEM',
		tabname=>'MY_STG_TAB_1',
		cascade=>TRUE);
SQL> exec dbms_stats.import_schema_stats(
		ownname=>'SYSTEM', 
		stattab=>'MY_STG_TAB_1');

In the above example, the staging table is called SYSTEM.MY_STG_TAB_1.

The recommendation applies as well if you are using the procedures dbms_stats.import_database_stats or dbms_stats.import_table_stats.

Dictionary Statistics

Why is it important to gather dictionary statistics immediately after the import?

When you import data or use transportable tablespaces, you often have a brand new, empty database. Then you import a lot of objects. Those objects are represented as rows in the data dictionary. For instance, the tables you import now appear as rows in SYS.TAB$, the partitions in SYS.TABPART$, the indexes in SYS.IND$, and so forth.

Those internal tables were almost empty before – there were not a lot of tables. Now they have a lot of rows. This means that the statistics are stale. When you start to use functionality in the database, like importing statistics, recursive queries against the internal tables are executed. With stale statistics on the dictionary, you can get suboptimal execution plans and bad performance. Gathering dictionary statistics fixes this for you.
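You can see this for yourself after a large import. A rough illustrative query shows which SYS tables have stale statistics:

SQL> select owner, table_name, num_rows, last_analyzed
     from   dba_tab_statistics
     where  owner = 'SYS'
     and    stale_stats = 'YES';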

Conclusion

Statistics are always vital, whether optimizer statistics on user data or internally in the data dictionary. Be sure to verify the accuracy of your statistics when you have problems.

Also, SQL Tuning Advisor is a great tool. It can quickly come up with suggestions for fixing problems. Use the recommendations as input to your troubleshooting. SQL Tuning Advisor also works on internal tables.

Additional Information

We have a few videos on our YouTube channel which have more information about transporting statistics with dbms_stats.

XTTS: Full Transportable Export/Import

In this blog post series, I use Full Transportable Export/Import (FTEX) to move the metadata during a cross-platform transportable tablespace migration (XTTS). The documentation states:

You can use the full transportable export/import feature to copy an entire database from one Oracle Database instance to another.

Requirements

A different blog post already covers the requirements for FTEX. Below is a supplement to that list:

  • The user performing the export and import must have the roles DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE, respectively. Don’t run the Data Pump jobs as SYS AS SYSDBA!
  • During export, the default tablespace of the user performing the export must not be one of the tablespaces being transported. In addition, the default tablespace of the user performing the export must be writable. Data Pump needs this to create the control table.
  • The target database (non-CDB) or PDB must not contain a tablespace of the same name as one of the tablespaces being transported. Often this is the case with the USERS tablespace. Either use Data Pump remap_tablespace or rename the tablespace (alter tablespace users rename to users2).
  • All tablespaces are transported. It is not possible to exclude a tablespace or a user from the operation.

What Is Included?

Generally, you should count on everything being included, except SYS objects and the items listed in the next chapter. Below is a list of things that are also included, based on questions I have previously been asked.

  • If a user schema has tables in SYSTEM or SYSAUX tablespace, such tables are also transported. But they are not stored in the transported tablespaces. Instead, those tables are exported into the dump file using conventional export. Examples:
    SQL> --Exported into dump file
    SQL> create table app.my_tab1 (...) tablespace system;
    SQL> --Exported via transportable tablespace
    SQL> create table app.my_tab2 (...) tablespace users;
    
  • If you created any new tables in the SYSTEM schema or any other internal schema, except SYS, those tables will also be transported. If such tables are in the SYSTEM or SYSAUX tablespace, they are exported into the dump file. Examples:
    SQL> --Exported into dump file
    SQL> create table system.my_tab1 (...) tablespace system;
    SQL> --Exported via transportable tablespace
    SQL> create table system.my_tab2 (...) tablespace users;
    
    No need to emphasize that you should never create any objects in Oracle maintained schemas. But we all know it happens…
  • Public and private database links.
  • Private synonyms.
  • Profiles.
  • Directories including the privileges granted on them, although they are owned by SYS. The contents stored in the directory in the file system must be moved manually.
  • External tables definition, but the underlying external files must be moved manually.
  • BFILE LOBs, but the underlying external files must be moved manually.
  • Queues, but you must manually start the queues after import (DBMS_AQADM.START_QUEUE).
  • Temporary tables – both global and private ones.
  • All schema-level triggers (CREATE TRIGGER ... ON SCHEMA), including those on system events, except triggers owned by SYS.
  • All database-level triggers (CREATE TRIGGER ... ON DATABASE) owned by an internal schema, except SYS.
  • SQL patches.
  • SQL plan baselines.
  • SQL profiles.
  • SQL plan directives.
  • User-owned scheduler objects.
  • Unified auditing policies and audit records.

What Is Not Included?

The transport does not include any object owned by SYS. Here are some examples:

  • User-created tables in SYS schema are not transported at all. You must re-create such tables (but you should never create such tables in the first place).
    SQL> --Not moved, recreate manually
    SQL> create table sys.my_tab1 (...) tablespace system;
    SQL> --Not moved, recreate manually
    SQL> create table sys.my_tab2 (...) tablespace users;
    
  • Grants on tables or views owned by SYS, like DBA_USERS or v$datafile.
  • Any trigger owned by SYS.
  • SYS-owned scheduler objects.

In addition, the following is not included:

  • Index monitoring (ALTER INDEX ... MONITORING USAGE).
  • Public synonyms.
  • AWR data is not included. You can move such data using the script $ORACLE_HOME/rdbms/admin/awrextr.sql (see the sketch below).
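A sketch of moving AWR data: you run the extract script on the source and the load script on the target; both scripts prompt for the details, such as directory, DBID, and snapshot range.

SQL> --On the source database
SQL> @?/rdbms/admin/awrextr.sql

SQL> --On the target database
SQL> @?/rdbms/admin/awrload.sql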

How Does It Work?

There are two keywords used to start a full transportable job: TRANSPORTABLE and FULL. If you want to start an FTEX import directly over a network link:

$ impdp ... transportable=always full=y network_link=<db_link> ...

If you want to use dump files:

$ expdp ... transportable=always full=y
$ impdp ... full=y
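With dump files, the import additionally needs transport_datafiles entries pointing at the copied data files. A minimal sketch of an import parameter file; directory, dump file names, and data file paths are assumptions you must adapt:

$ cat ftex_imp.par
full=y
directory=dpdir
dumpfile=ftex%L.dmp
transport_datafiles=/u02/oradata/TARGET/users01.dbf
transport_datafiles=/u02/oradata/TARGET/apps01.dbf
logfile=ftex_imp.log
metrics=y
logtime=all

$ impdp system parfile=ftex_imp.par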

A Few Words of Advice

Practice, practice, practice

  • Start on a small database and work on your runbook.
  • Eventually, prove it works on a production-size database.

Automate

  • To ensure consistency. There are many steps, and it is easy to overlook a step or miss a detail.
  • To avoid human error. Humans make mistakes. Period!

Save logs

  • Data Pump
  • RMAN
  • Terminal output

Automate clean-up procedure

  • To repeat tests and effectively clean up the target environment.
  • In case of failure and rollback during production migration, you should know how to resume operations safely.

Shut down source database

  • Be sure to take the source database offline after the migration. Having users connect to the wrong database after a migration is a disaster.

Data Pump Import

  • Importing directly into the target database using the NETWORK_LINK option is recommended.

Timezone File Version

Check the timezone file version of your source and target database:

SQL> select * from v$timezone_file;

If they differ and the target timezone file version is higher than the source database, Data Pump will convert any TIMESTAMP WITH TIME ZONE (TSTZ) column to the newer timezone conventions. The conversion happens automatically during import.

Since Data Pump must update data during import, it requires that Data Pump can turn the tablespaces READ WRITE. Thus, you can’t use TRANSPORTABLE=KEEP_READ_ONLY if you have tables with TSTZ columns. Trying to do so will result in:

ORA-39339: Table "SCHEMA"."TABLE" was skipped due to transportable import and TSTZ issues resulting from time zone version mismatch.
Source time zone version is ?? and target time zone version is ??.
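To gauge how many tables the conversion will touch, you can look for TSTZ columns up front. An illustrative query; adjust the schema filter to your environment:

SQL> select owner, table_name, column_name
     from   dba_tab_columns
     where  data_type like 'TIMESTAMP%WITH TIME ZONE'
     and    owner in (select username from dba_users where oracle_maintained = 'N');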

If your target database has a lower timezone file version, you can’t use FTEX. You must upgrade the timezone file in your database.

TDE Tablespace Encryption

If the source database has one or more encrypted tablespaces, you must either:

  • Supply the keystore password on export using the Data Pump option ENCRYPTION_PASSWORD.
  • Specify ENCRYPTION_PWD_PROMPT=YES and Data Pump will prompt for the keystore password. This approach is safer because otherwise the encryption password ends up in the shell history.

You can read more about Full Mode and transportable tablespaces in the documentation.

You can only transport encrypted tablespaces if the source and target platforms share the same endian format. For example, going from Windows to Linux is fine because both are little endian platforms. Going from AIX to Linux will not work; that’s big to little endian. When a tablespace is transported to a platform of a different endian format, the data files must be converted. The conversion does not work on encrypted tablespaces, so the only option is to decrypt the tablespace before transport.

Further Reading

Documentation:

MOS notes:

Blog posts:

Other Blog Posts in This Series

XTTS: Prerequisites

When migrating Oracle Database to a different endian format using transportable tablespaces and incremental backups (XTTS), a list of requirements must be met. The following requirements apply when using:

V4 Perl Script

The most important requirements – for a complete list check MOS note:

  • Windows is not supported.
  • RMAN on the source system must not have DEVICE TYPE DISK configured to COMPRESSED.
  • RMAN on the source system must not have default channel configured to type SBT.
  • For Linux: Minimum version for source and destination is 11.2.0.3.
  • Other platforms: Minimum version for source and destination is 12.1.0.2.
  • Disk space for a complete backup of the database on both source and target host. If your data files take up 100 TB, you need an additional 100 TB of free disk space. For 12c databases, and if your data files have a lot of free space, the backup might be smaller due to RMAN unused block compression.

Also worth mentioning: during the roll-forward phase (applying the level 1 incrementals), the Perl script needs to restart the target database. Applying the incremental backups to the target data files happens in NOMOUNT mode. Be sure nothing else uses the target database while you roll forward.

Block Change Tracking (BCT) is strongly recommended on the source database. Note that this is an Enterprise Edition feature (in OCI: DBCS EE-EP or ExaCS). If you don’t enable BCT, the incremental backups will be much slower because RMAN has to scan every single data block for changes. With BCT, the database keeps track of changes in a special file, and when RMAN backs up the database, it simply gets a list of changed data blocks from the change tracking file.
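Enabling BCT is a single statement. A sketch, where the file location is an assumption (you can also omit the USING FILE clause and let the database place the file in db_create_file_dest):

SQL> alter database enable block change tracking
     using file '/u01/app/oracle/bct/bct.chg';

SQL> select status, filename from v$block_change_tracking;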

What If – V3 Perl Script

If disk space is a problem or if you can’t meet any of the other requirements, check out the below two MOS notes:

They describe a previous version of the Perl script, version 3. The scripts use DBMS_FILE_TRANSFER to perform the conversion of the data files in-flight. That way no extra disk space is needed. However, DBMS_FILE_TRANSFER has a limitation that data files can’t be bigger than 2 TB.

Also, the V3 scripts might be useful for very old databases.

Transportable Tablespaces In General

To get a complete list of limitations on transporting data, you should look in the documentation. The most notable are:

  • Source and target database must have compatible character sets. If the character sets in both databases are not the same, check documentation for details.
  • No columns can be encrypted with TDE Column Encryption. The only option is to remove the encryption before migration and re-encrypt afterward.
  • TDE Tablespace Encryption is supported for same-endian migration if the source database is 12.1.0.2 or newer. If you need to go across endianness, you must decrypt the tablespaces and re-encrypt after migration. Remember, Oracle Database 12.2 can encrypt tablespaces online.
  • If you are migrating across endianness, you must convert the data files. You must have disk space to hold a copy of all the data files. In addition, you should perform the conversion on the platform that has the best I/O system and most CPUs. Typically, this is the cloud platform, which also offers scaling possibilities.
  • Requires Enterprise Edition.
  • The database timezone file version in the target database must be equal to or higher than the source database.
  • The database time zone must match if you have tables with TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ). If you have such tables, and the database time zone does not match, those tables are skipped during import. You can then move the affected tables using a normal Data Pump table mode export and import. To check the database time zone:
    SQL> select dbtimezone from dual;
    
    You can alter the time zone for a database with an ALTER DATABASE statement.

Full Transportable Export/Import

FTEX automates the process of importing the metadata. It is simpler to use and automatically includes all the metadata in your database. Compared to a traditional transportable tablespace import, FTEX is a lot easier and removes a lot of manual work from the end user. But there are a few requirements that must be met:

  • Source database must be 11.2.0.3 or higher.
  • Target database must be 12.1.0.1 or higher.
  • Requires Enterprise Edition.
  • COMPATIBLE must be set to 12.0.0 or higher in both source and target database. If your source database is an Oracle Database 11g, this is not possible. In that case, set version to 12 or higher during Data Pump export instead.

If you can’t meet the requirements, check out traditional transportable tablespaces. They have different requirements and allow more customization.

Other Blog Posts in This Series

XTTS: Pro Tips

When doing the XTTS blog post series, I came across a lot of valuable information. These nuggets were not suitable for a dedicated blog post but are still worth sharing.

Pro Tip 1: The Other Backups

When you are preparing for an XTTS migration, you will be doing a lot of backups of the source database. Will those backups somehow interfere with your existing backups?

The Perl script takes the first backup – the initial level 0 backup – using:

RMAN> backup for transport .... ;

It is a backup created for cross-platform transport and not something to be used to restore the source database. The documentation states about cross-platform backups:

RMAN does not catalog backup sets created for cross-platform transport in the control file. This ensures that backup sets created for cross-platform transport are not used during regular restore operations.

This is good because it ensures that your recovery strategy will not take those backups into account. Most likely, you will be moving the files from the source system pretty soon, and in that case, you don’t want RMAN depending on them.

But after the initial level 0 backup, you will create level 1 incremental backups. The incremental backups are just regular incremental backups:

RMAN> backup incremental from scn ... tablespace ... ;

It is not a cross-platform backup, so it will be recorded in the control file and/or recovery catalog. Once you move those incremental backups away from the source system, be sure to tidy them up in the RMAN catalog:

RMAN> crosscheck backupset;
RMAN> #Optionally, remove them
RMAN> delete expired backupset;

To sum up: while preparing for the migration, keep taking your regular backups.

Pro Tip 2: Data Pump Parameters

Use a parameter file for your Data Pump export and import. Especially, the import will be a lot easier because you don’t need to write a very long command line with character escapes and the like:

$ cat export.par
directory=mydir
full=y
transportable=always
...

$ expdp ... parfile=export.par

Use multiple dump files and split the files at a reasonable size:

dumpfile=xtts%L.dmp
filesize=5g

Add diagnostic information to the log file (Oracle Database 12c and beyond):

metrics=y
logtime=all

Exclude statistics and check my blog post on how to deal with statistics:

exclude=table_statistics,index_statistics

Pro Tip 3: Generate Data Files Parameters For Data Pump

I generate the list of data files that Data Pump needs with the command below. It works if you are using ASM, and transport_datafiles will point to the alias – not the real file:

export ASMDG=+DATA
asmcmd ls -l $ASMDG | grep '^DATAFILE' | sed -n -e 's/ => .*//p' | sed -n -e 's/^.* N \s*/transport_datafiles='$ASMDG'\//p'

Pro Tip 4: Queries

Generate a comma separated list of tablespaces:

select 
   listagg(tablespace_name, ',') within group (order by tablespace_name) 
from 
   dba_tablespaces 
where 
   contents='PERMANENT' 
   and tablespace_name not in ('SYSTEM','SYSAUX');

Generate a comma separated list of tablespaces in n batches:

define batches=8
select 
   batch, 
   listagg(tablespace_name, ',') within group (order by tablespace_name) 
from (
   select 
      mod(rownum, &batches) as batch, 
      tablespace_name 
   from (
      select 
         t.tablespace_name, 
         sum(d.bytes) 
      from 
         dba_tablespaces t, 
         dba_data_files d 
      where 
         t.tablespace_name=d.tablespace_name
         and t.contents='PERMANENT' 
         and t.tablespace_name not in ('SYSTEM','SYSAUX')
      group by 
         t.tablespace_name
         order by 2 desc
      )
   ) 
group by batch;

Generate read-only commands

select 
   'ALTER TABLESPACE ' || tablespace_name ||' READ ONLY;' 
from 
   dba_tablespaces 
where 
   contents='PERMANENT' 
   and tablespace_name not in ('SYSTEM','SYSAUX');

Pro Tip 5: Troubleshooting

Be sure to always run the Perl script with debug option enabled:

$ #Set environment variable
$ export XTTDEBUG=1
$ #Or use --debug flag on commands
$ $ORACLE_HOME/perl/bin/perl xttdriver.pl ... --debug 3

MOS notes:

Here’s a video on how to troubleshoot in Data Pump:

Pro Tip 6: Update Perl

Rarely have I seen issues in the Perl script caused by an old, outdated version of Perl, but it does happen. Depending on which Perl installation you are using:

Pro Tip 7: Additional Information

These webinars might be of interest to you:

You can also flip through the slides from all the webinars.

Pro Tip 8: Zero Data Loss Recovery Appliance (ZDLRA)

If you have a ZDLRA you can use it to make your migration easier:

Pro Tip 9: E-Business Suite (EBS)

Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12.2 Using Oracle Database 19c Enterprise Edition On a Multitenant Environment (Doc ID 2674405.1)

Pro Tip 10: ORA-39032: function TRANSPORTABLE is not supported in FULL jobs

If you get this error stack when exporting from an Oracle Database 11g:

ORA-39005: inconsistent arguments
ORA-39032: function TRANSPORTABLE is not supported in FULL jobs

You most likely forgot to add the version parameter to your Data Pump export:

$ expdp ... version=12

Pro Tip 11: Advanced Queues (AQ)

A few good MOS notes to read:

The first MOS note is especially interesting:

Now it comes the interesting part, let say DBA needs to export a particular schema that contains queues, and import into another database, so if at the end of the impdp operation the DBA does a simple comparison of the number objects from the origin database schema with the target database schema, there is a big chance these counts will not match, but there will be no errors or warnings at the expdp/impdp logs. This happens exactly because during the export/import we will not consider the queue objects created on the fly on the source side, usually the ones ending by _P and _D, thus the target database may not have these objects, but of course, they may get created later on whenever required during the use of the queue. This is an expected behavior and the functionally is working as expected. A suggested way to check whether everything has been imported successfully is to use a simple query to check the total number of "QUEUE" type objects instead, for example: SQL> select count(*) from DBA_OBJECTS where owner=’&schema’ and object_type = ‘QUEUE’;

Pro Tip 12: ORA-39218 or ORA-39216

If your Data Pump metadata import fails with the below error, you are having problems with evolved types. This blog post tells you what to do:

W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
ORA-39083: Object type TABLE:"APPUSER"."CARS" failed to create with error:
ORA-39218: type check on object type "APPUSER"."CAR_TYPE" failed
ORA-39216: object type "APPUSER"."CAR_TYPE" hashcode or version number mismatch

Failing sql is:
BEGIN SYS.DBMS_METADATA.CHECK_TYPE('APPUSER','CAR_TYPE','4','613350A0E37618A6108C500936D0C6162C',''); END;

Other Blog Posts in This Series