How to Migrate to Autonomous Database Using Database Migration Service and OCI CLI

You can migrate your Oracle Database to Autonomous Database using Database Migration Service (DMS). Besides the GUI, you can use one of several other interfaces:

  • PL/SQL
  • SDK for various programming languages (Java, Python, and others)

In this blog post, I use OCI CLI. You can install it on your computer or use the cloud shell.

The Scenario

This is my setup:

  • Source database: Oracle Database running on a Base Database System in OCI.
  • Target database: Autonomous Database (transaction processing).

For simplicity, I will migrate all schemas in the database using Data Pump. You can cherry-pick individual schemas or exclude/include specific objects if needed.

I have already created a Vault. DMS needs one to store sensitive information.

How to

I must execute all commands in the same shell because I store a lot of information in environment variables that I use later on:

#Specify a base name for the migration. All migration-related objects are prefixed with the name. The derived names below are my own convention
export MIGRNAME=mig1
export BUCKETNAME=$MIGRNAME-bucket
export SRCPDBCONNNAME=$MIGRNAME-src-conn
export SRCCDBCONNNAME=$MIGRNAME-src-cdb-conn
export TGTCONNNAME=$MIGRNAME-tgt-conn
export MIGROBJNAME=$MIGRNAME-migration

#Specify the OCID of the compartment where all the resources are running. This procedure assumes all resources are placed in the same compartment
export COMPARTMENTOCID="ocid1.compartment.oc1...."

#Vault details
export VAULTOCID="ocid1.vault.oc1...."
export VAULTKEYOCID="ocid1.key.oc1...."

Next, I specify information about the source database:

#Source database OCID
export SRCDBOCID="ocid1.database.oc1...."

#Network stuff
#Private IP address of the source DB System
export SRCHOSTIP="<private-ip>"
#Subnet OCID that the source DB System uses
export SRCSUBNETOCID="ocid1.subnet.oc1...."
#VCN OCID that the DB System uses
export SRCVCNOCID="ocid1.vcn.oc1...."
#Location of the private key file that can be used to communicate over SSH to the source host
export SRCHOSTKEYFILE=/Users/daniel/Documents/ssh/my-private-key

#Name and path of the database directory object. Remember to create the directory in the file system
export SRCDBDIRNAME=DMSDIR
export SRCDBDIRPATH=/u01/app/oracle/$SRCDBDIRNAME

#Connection details
#Details for non-CDB or PDB
export SRCPDBUSERNAME=system
export SRCPDBPASSWORD=*****
export SRCPDBSVCNAME="<pdb-service-name>"
#If source is a PDB, fill in details for CDB. For non-CDB leave them blank
export SRCCDBUSERNAME=
export SRCCDBPASSWORD=
export SRCCDBSVCNAME=

Finally, I specify information about the target database – the autonomous database:

#Target ADB OCID
export TGTADBOCID="ocid1.autonomousdatabase.oc1...."
#Username and password - typically the ADMIN user
export TGTDBUSERNAME=ADMIN
export TGTDBPASSWORD=*****
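
Since everything downstream depends on these variables, a quick sanity check can catch an unset one before a long-running command fails. This is my own little helper, not part of OCI CLI; the variable names in the example are arbitrary:

```shell
# Hypothetical helper (my own addition, not part of OCI CLI):
# verify that a list of environment variables are all non-empty
check_vars() {
  missing=0
  for v in "$@"; do
    eval "val=\${$v}"
    if [ -z "$val" ]; then
      echo "Missing: $v"
      missing=1
    fi
  done
  return $missing
}

# Example: COMPARTMENTOCID is set, VAULTOCID is not
COMPARTMENTOCID="ocid1.compartment.oc1...."
VAULTOCID=""
check_vars COMPARTMENTOCID VAULTOCID || echo "Fix the missing variables first"
```

Run it with the full list of variables used in this post before starting.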

Now, let the fun begin. I first create an object storage bucket which DMS uses to store dump files, log files, CPAT output and the like:

export OSNAMESPACE=$(oci os bucket create \
  --compartment-id $COMPARTMENTOCID \
  --name $BUCKETNAME \
  --query "data.namespace" \
  --raw-output)

Then, I create a connection to the source database (non-CDB or PDB). If the source database is a PDB, I also create a connection to the source CDB:

#Create connection to source PDB/non-CDB
export SRCSUDOLOCATION=/usr/bin/sudo
export SRCSSHUSER=opc

#The 'ssh-details' parameter includes the contents of the private key file as a single-line string. Newlines from the file are converted to \n by the 'awk' command
export SRCPDBCONNOCID=$(oci database-migration connection create \
   --compartment-id $COMPARTMENTOCID \
   --database-type USER_MANAGED_OCI \
   --admin-credentials '{"password":"'$SRCPDBPASSWORD'","username":"'$SRCPDBUSERNAME'"}' \
   --vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
   --database-id $SRCDBOCID \
   --display-name $SRCPDBCONNNAME \
   --connect-descriptor '{"connectString": "'$SRCHOSTIP':1521/'$SRCPDBSVCNAME'"}' \
   --ssh-details '{"host":"'$SRCHOSTIP'","sudoLocation": "'$SRCSUDOLOCATION'","user":"'$SRCSSHUSER'","sshkey":"'"$(awk '{printf "%s\\n", $0}' $SRCHOSTKEYFILE)"'"}' \
   --private-endpoint '{"subnetId":"'$SRCSUBNETOCID'","vcnId":"'$SRCVCNOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
   --wait-for-state SUCCEEDED \
   --max-wait-seconds 120 \
   --query "data.resources[0].identifier" \
   --raw-output)
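
The awk newline conversion used in ssh-details can be seen in isolation. A minimal sketch; the file name and contents are just stand-ins for a real private key:

```shell
# Stand-in for a multi-line private key file (the path is just an example)
printf 'line1\nline2\n' > /tmp/demo-key

# awk prints every line followed by a literal backslash-n, turning the
# file into the single-line string that the JSON payload expects
ONELINE=$(awk '{printf "%s\\n", $0}' /tmp/demo-key)
printf '%s\n' "$ONELINE"
# → line1\nline2\n
```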

#Create connection to source CDB, if needed
if [[ -n $SRCCDBUSERNAME ]];then
	export SRCCDBCONNOCID=$(oci database-migration connection create \
	   --compartment-id $COMPARTMENTOCID \
	   --database-type USER_MANAGED_OCI \
	   --admin-credentials '{"password":"'$SRCCDBPASSWORD'","username":"'$SRCCDBUSERNAME'"}' \
	   --vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
	   --database-id $SRCDBOCID \
	   --display-name $SRCCDBCONNNAME \
	   --connect-descriptor '{"connectString":"'$SRCHOSTIP':1521/'$SRCCDBSVCNAME'"}' \
	   --ssh-details '{"host":"'$SRCHOSTIP'","sudoLocation": "'$SRCSUDOLOCATION'","user":"'$SRCSSHUSER'","sshkey":"'"$(awk '{printf "%s\\n", $0}' $SRCHOSTKEYFILE)"'"}' \
	   --private-endpoint '{"subnetId":"'$SRCSUBNETOCID'","vcnId":"'$SRCVCNOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
	   --wait-for-state SUCCEEDED \
	   --max-wait-seconds 120 \
	   --query "data.resources[0].identifier" \
	   --raw-output)
fi

Next, I create a connection to the target autonomous database:

export TGTCONNOCID=$(oci database-migration connection create \
   --compartment-id $COMPARTMENTOCID \
   --admin-credentials '{"password":"'$TGTDBPASSWORD'","username":"'$TGTDBUSERNAME'"}' \
   --database-type AUTONOMOUS \
   --vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
   --database-id $TGTADBOCID \
   --display-name $TGTCONNNAME \
   --wait-for-state SUCCEEDED \
   --max-wait-seconds 120 \
   --query "data.resources[0].identifier" \
   --raw-output)

Now, I will create a migration object which describes the migration. No changes are made to the database yet:

#Create the migration
if [[ -n $SRCCDBCONNOCID ]]; then
   export MIGRSRCCDBPARAM="--source-container-database-connection-id $SRCCDBCONNOCID"
fi
export MIGROBJOCID=$(oci database-migration migration create \
   --compartment-id $COMPARTMENTOCID \
   --vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
   --source-database-connection-id $SRCPDBCONNOCID $MIGRSRCCDBPARAM \
   --target-database-connection-id $TGTCONNOCID \
   --type OFFLINE \
   --display-name $MIGROBJNAME \
   --data-transfer-medium-details '{"databaseLinkDetails": null,"objectStorageDetails": {"namespaceName": "'$OSNAMESPACE'","bucketName": "'$BUCKETNAME'"},"awsS3Details": null}' \
   --datapump-settings '{"exportDirectoryObject": {"name": "'$SRCDBDIRNAME'","path": "'$SRCDBDIRPATH'"}}' \
   --wait-for-state SUCCEEDED \
   --max-wait-seconds 120 \
   --query "data.resources[0].identifier" \
   --raw-output)

Now, I can perform an evaluation. This is a sanity check that runs many validations upfront. The command runs until the evaluation finishes.

oci database-migration migration evaluate \
   --migration-id $MIGROBJOCID \
   --wait-for-state SUCCEEDED \
   --wait-for-state FAILED \
   --max-wait-seconds 3600

I can check the evaluation outcome, including the Cloud Premigration Advisor Tool (CPAT) report. You can find this information in the object storage bucket as well. You can run the evaluation as many times as needed:

#Get the last job and the details about it
export MIGRLASTJOBOCID=$(oci database-migration job list \
   --migration-id $MIGROBJOCID \
   --limit 1 \
   --sort-by timeCreated \
   --sort-order desc \
   --query "data.items[0].id" \
   --raw-output)
oci database-migration job get-job-output-content \
   --job-id $MIGRLASTJOBOCID \
   --file -
#Get the CPAT report
oci database-migration job get-advisor-report \
   --job-id $MIGRLASTJOBOCID

Once I have cleared any issues preventing the migration, I can start the actual migration. The command returns control immediately after the migration is started. Optionally, I can use the parameters --wait-for-state and --max-wait-seconds to keep it running until the migration completes:

#Start the real migration
export MIGRSTARTJOBID=$(oci database-migration migration start \
   --migration-id $MIGROBJOCID \
   --query "data.id" \
   --raw-output)

I use the two commands below to monitor the migration. The first command gives me the overall status. The second command returns a log file with additional details.

#Get the current status
oci database-migration job get \
   --job-id $MIGRSTARTJOBID 
oci database-migration job get-job-output-content \
   --job-id $MIGRSTARTJOBID \
   --file -

That’s it!

I have migrated my database to an autonomous database.


Additional Resources

Using GUI to find REST API calls

Although the OCI CLI commands are documented, it can be hard to figure out exactly which parameters to add and the exact syntax. My colleague, Alex Kotopoulis, gave me rock-star advice.

Switch to the GUI and configure the migration as you want. Turn on "Web developer tools" (might have a different name in your browser) and investigate the network traffic. You can see the REST API calls made by the GUI and easily translate those into OCI CLI parameters.

Thanks Alex!

Why Does It Take so Long to Import Check Constraints?

Short answer: If the data was validated originally, it must be validated (again) during import. If you are short on time, you can add the constraints as NOT VALIDATED instead. But be aware of the drawbacks.

The Problem

I got a question from a colleague importing customer data as part of a migration. Data Pump was spending a lot of time in TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT.

We attached to the Data Pump session and used the status command:

$ impdp attach=<job_name>

Import> status
Worker 1 Status:
  State: EXECUTING                      
  Object Schema: SH
  Object Name: C_CUST2_EMAIL
  Worker Parallelism: 1

In another session, we ran a new Data Pump import, but this time into a SQL file so we could examine the metadata for the object C_CUST2_EMAIL:

$ impdp parfile=<par_file_name> sqlfile=ddl.sql

$ cat ddl.sql | grep C_CUST2_EMAIL

Data Pump adds a check constraint and enables it with the ENABLE clause. But there is a missing detail: the constraint is also validated even though there is no VALIDATE clause. Validation is the default.

What Happens

In the source database, the check constraint was enabled and validated. During import, Data Pump adds the constraint with the same attributes. The only way to add a validated constraint is to validate it.

But if the constraint was validated during export, then we know data in the dump file is validated as well. When importing data that we know is validated, why do we need to validate it again?

Because we know it, but the database doesn't. For a check constraint to be marked as validated, the database must ensure that this is truly the case. It trusts no one and insists on validating the data.

And when you import data from a dump file, can you actually be sure all data is good? What if …

  • someone tampered with the dump file
  • data was changed during export using the remap option
  • data was changed as part of character set conversion
  • data was changed as part of timezone file conversion

Why Does It Take So Long To Validate A Constraint

To answer that, let’s trace:

SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 4';
SQL> ALTER TABLE sh.customers2 ADD CONSTRAINT c_cust2_email
     CHECK (cust_email like '%@%') ENABLE VALIDATE;

Run the trace file through tkprof and have a look at the first recursive statement after the ALTER TABLE ... ADD CONSTRAINT statement:

select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3 
from "SH"."CUSTOMERS2" A 
where not ( cust_email like '%@%')

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1     49.03      66.61    3103448    3103510          1           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3     49.03      66.62    3103448    3103510          1           0

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 100     (recursive depth: 3)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  TABLE ACCESS FULL CUSTOMERS2 (cr=3103510 pr=3103448 pw=0 time=66618486 us starts=1 cost=2 size=39 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file scattered read                      24274        0.20         22.45
  PGA memory operation                            1        0.00          0.00

The database checks the table for any existing data that violates the constraint. It does so by issuing a SELECT using the inverse function of the check constraint. This results in a full table scan, and that is what takes so long.

If you have multiple check constraints on the same table, Data Pump adds them one by one, and that results in multiple full table scans.

Since full table scans rarely benefit from the buffer cache, you end up with a lot of disk reads as well. You can verify that in the row source part of the example above. Physical reads (pr) are almost the same as consistent gets (cr); almost every read requires a physical read.

But I Know Data Is Good

If you are sure that the data is good and you want to avoid the validation of data, you can create the check constraints as enabled but not validated. Existing data is not checked, but new data will be validated.

There are two ways of doing that:

1. Change Constraints in Source

Before exporting the data, in the source database, change all validated check constraints to NOVALIDATE:

SQL> ALTER TABLE sh.customers2 MODIFY CONSTRAINT c_cust2_email NOVALIDATE;

But be aware that you are changing the source database. If you need to use it for other purposes or as fallback, this might not be a good idea.

2. Change Constraints During Import

First, import the data, but exclude the constraints. This will be much faster because no time is spent on validating constraints:

$ impdp ... exclude=constraint

Next, extract the definition of all constraints from the Data Pump dump file and into a SQL file:

$ impdp ... include=constraint sqlfile=ddl.sql

Now, add the NOVALIDATE clause to all the statements:

ALTER TABLE "SH"."CUSTOMERS2" ADD CONSTRAINT "C_CUST2_EMAIL"
  CHECK (cust_email like '%@%') ENABLE NOVALIDATE;

But There Is a Catch!

Regardless of which method you choose, there is a catch. Validated constraints are very useful because they enable the optimizer to perform query rewrite and potentially improve query performance. Also, an index access method might become available instead of a full table scan.

You want to get those constraints validated. But you don’t have to do it during the import. Validating an enabled, not validated constraint does not require a lock on the table, so you can do it while the database is open for business.
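
Validating after the fact is a single statement; a sketch using the example constraint from earlier:

```sql
SQL> ALTER TABLE sh.customers2 MODIFY CONSTRAINT c_cust2_email VALIDATE;
```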

What About Rely

If you follow option 2 and create the constraints after import as enabled and not validated, you could also add the rely clause:

ALTER TABLE "SH"."CUSTOMERS2" ADD CONSTRAINT "C_CUST2_EMAIL"
  CHECK (cust_email like '%@%') RELY ENABLE NOVALIDATE;

Rely tells the database that you know the data is good. The optimizer still doesn't trust you until you set the parameter QUERY_REWRITE_INTEGRITY to TRUSTED. Then the optimizer can benefit from some query rewrite options, but not all of them.
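
Changing the parameter is straightforward; a sketch at session level (it can also be set system-wide):

```sql
SQL> ALTER SESSION SET query_rewrite_integrity = trusted;
```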

Nothing beats a truly validated constraint!


Importing validated check constraints takes time. If you want to save time in your maintenance window, instruct Data Pump to exclude the constraints, and add them yourself as enabled and not validated. This will save a lot of time that Data Pump otherwise would have to spend on validating the constraints.

The database benefits from validated constraints, so you should validate them. The database can validate constraints without a lock on the table if the constraints are already enabled. Thus, you can postpone the validation to a later time in your maintenance window, and you can perform other activities at the same time. Perhaps you can validate constraints while users are testing the database. Or wait until the next maintenance window. Just be aware that you might see some SQLs degrade until all constraints are validated.

If Importing Statistics Using DBMS_STATS Is Slow

When migrating Oracle Databases, you often transport the statistics using dbms_stats. Our team got involved in an interesting case the other day, and I would like to share the solution with you.


A customer migrated to Oracle Database 19c and decided to move the statistics using dbms_stats.import_schema_stats. They noticed that the procedure started to burn a lot of CPU, and after 38 hours, they gave up.

A SQL Monitor report showed one statement that consumed almost all the time: an insert into sys.dbms_stats_id_map_tab with a subselect referencing the staging table. The staging table is the one you specify in the stattab argument in the call to dbms_stats.import_schema_stats. It holds the optimizer statistics in a portable format, and from it the procedure inserts the statistics into the data dictionary in the proper format.

Obviously, they could already see that the procedure would take far too long.


The first shot was to look at the code behind dbms_stats. But this specific code hadn’t changed since its introduction many releases ago.

Recursive statements touching the data dictionary immediately turned our attention to dictionary statistics. But the customer told us that those statistics were not stale.

The customer ran SQL Tuning Advisor on the offending statement, and one of the findings was about stale statistics. SQL Tuning Advisor recommended gathering statistics on the staging table and corresponding indexes.

One of our performance experts looked at the execution plan and found a pattern he had seen before. He tried to disable Join Predicate Push Down in the session. It helped, but this was just a workaround. We wanted to find the root cause.


The SQL Tuning Advisor came up with the real problem: stale statistics on the staging table and corresponding indexes. Once the customer gathered statistics on the staging table and its indexes, the import of statistics finished in 2 hours and 27 minutes, which was acceptable.

We also discovered that the dictionary statistics were not as accurate as the customer had initially concluded. In fact, by mistake, they had misinformed us. A fresh run of dbms_stats.gather_dictionary_stats gave a slight performance improvement as well.


These recommendations are now part of our best practices for migrations.

  1. Before importing statistics using dbms_stats.import_schema_stats, gather stats on the staging table.

  2. Immediately after importing a lot of data, and before you import statistics with dbms_stats or anything else, you should gather dictionary statistics. This applies to regular Data Pump imports and transportable tablespace metadata imports.

The run book should look like this:

$ impdp parfile=import.par ...
$ sqlplus / as sysdba

SQL> exec dbms_stats.gather_dictionary_stats;
SQL> exec dbms_stats.gather_table_stats(ownname=>'SYSTEM', tabname=>'MY_STG_TAB_1');
SQL> exec dbms_stats.import_schema_stats(ownname=>'<schema>', stattab=>'MY_STG_TAB_1', statown=>'SYSTEM');

In the above example, the staging table is called SYSTEM.MY_STG_TAB_1.

The recommendation applies as well if you are using the procedures dbms_stats.import_database_stats or dbms_stats.import_table_stats.

Dictionary Statistics

Why is it important to gather dictionary statistics immediately after the import?

When you import data or use transportable tablespaces, you often have a brand new, empty database. Then you import a lot of objects. Those objects are represented as rows in the data dictionary. For instance, the tables you import now appear as rows in SYS.TAB$, the partitions in SYS.TABPART$, the indexes in SYS.IND$, and so forth.
Those internal tables were almost empty before – there were not a lot of tables. Now they have a lot of rows. This means that the statistics are stale. When you start to use functionality in the database, like importing statistics, recursive queries using the internal tables will be executed. With stale statistics on the dictionary, you can have suboptimal execution plans and bad performance. Gathering dictionary statistics can fix this for you.


Statistics are always vital, whether optimizer statistics on user data or internally in the data dictionary. Be sure to verify the accuracy of your statistics when you have problems.

Also, SQL Tuning Advisor is a great tool. It can quickly come up with suggestions for fixing problems. Use the recommendations as input to your troubleshooting. SQL Tuning Advisor also works on internal tables.

Additional Information

We have a few videos on our YouTube channel which have more information about transporting statistics with dbms_stats.

XTTS: Full Transportable Export/Import

In this blog post series, I use Full Transportable Export/Import (FTEX) to move the metadata during a cross-platform transportable tablespace migration (XTTS). The documentation states:

You can use the full transportable export/import feature to copy an entire database from one Oracle Database instance to another.


A different blog post already covers the requirements for FTEX. Below is a supplement to that list:

  • The user performing the export and import must have the roles DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE, respectively. Don’t run the Data Pump jobs as SYS AS SYSDBA!
  • During export, the default tablespace of the user performing the export must not be one of the tablespaces being transported. In addition, the default tablespace of the user performing the export must be writable. Data Pump needs this to create the control table.
  • The target database (non-CDB) or PDB must not contain a tablespace of the same name as one of the tablespaces being transported. Often this is the case with the USERS tablespace. Either use Data Pump remap_tablespace or rename the tablespace (alter tablespace users rename to users2).
  • All tablespaces are transported. It is not possible to exclude a tablespace or a user from the operation.

What Is Included?

Generally, you should count on everything being included, except SYS objects and the things listed in the next chapter. Below is a list of things that are included as well, based on questions I have been asked.

  • If a user schema has tables in SYSTEM or SYSAUX tablespace, such tables are also transported. But they are not stored in the transported tablespaces. Instead, those tables are exported into the dump file using conventional export. Examples:
    SQL> --Exported into dump file
    SQL> create table app.my_tab1 (...) tablespace system;
    SQL> --Exported via transportable tablespace
    SQL> create table app.my_tab2 (...) tablespace users;
  • If you created any tables as SYSTEM or any other internal user, except SYS, those tables will also be transported. If such tables are in the SYSTEM or SYSAUX tablespace, then they are exported into the dump file. Examples:
    SQL> --Exported into dump file
    SQL> create table system.my_tab1 (...) tablespace system;
    SQL> --Exported via transportable tablespace
    SQL> create table system.my_tab2 (...) tablespace users;
    No need to emphasize that you should never create any objects in Oracle maintained schemas. But we all know it happens…
  • Public and private database links.
  • Private synonyms.
  • Profiles.
  • Directories including the privileges granted on them, although they are owned by SYS. The contents stored in the directory in the file system must be moved manually.
  • External table definitions, but the underlying external files must be moved manually.
  • Temporary tables.
  • All schema-level triggers (CREATE TRIGGER ... ON SCHEMA), including those on system events, except those owned by SYS.
  • All database-level triggers (CREATE TRIGGER ... ON DATABASE) owned by an internal schema, except SYS.
  • SQL patches.
  • SQL plan baselines.
  • SQL profiles.

What Is Not Included?

The transport does not include any object owned by SYS. Here are some examples:

  • User-created tables in SYS schema are not transported at all. You must re-create such tables (but you should never create such tables in the first place).
    SQL> --Not moved, recreate manually
    SQL> create table sys.my_tab1 (...) tablespace system;
    SQL> --Not moved, recreate manually
    SQL> create table sys.my_tab2 (...) tablespace users;
  • Grants on tables or views owned by SYS, like DBA_USERS or v$datafile.
  • Any trigger owned by SYS.

In addition, the following is not included:

  • Public synonyms.
  • AWR data is not included. You can move such data using the script $ORACLE_HOME/rdbms/admin/awrextr.sql.

How Does It Work?

There are two keywords used to start a full transportable job: TRANSPORTABLE and FULL. If you want to start an FTEX import directly over a network link:

$ impdp ... transportable=always full=y network_link=<db_link_name> ...

If you want to use dump files:

$ expdp ... transportable=always full=y
$ impdp ... full=y

A Few Words of Advice

Practice, practice, practice

  • Start on a small database and work on your runbook.
  • Eventually, prove it works on a production-size database.


  • To ensure consistency. There are many steps, and it is easy to overlook a step or miss a detail.
  • To avoid human error. Humans make mistakes. Period!

Save logs

  • Data Pump
  • RMAN
  • Terminal output

Automate clean-up procedure

  • To repeat tests and effectively clean up the target environment.
  • In case of failure and rollback during production migration, you should know how to resume operations safely.

Shut down source database

  • Be sure to take the source database offline after the migration. Having users connect to the wrong database after a migration is a disaster.

Data Pump Import

  • Importing directly into the target database using the NETWORK_LINK option is recommended.

Timezone File Version

Check the timezone file version of your source and target database:

SQL> select * from v$timezone_file;

If they differ and the target timezone file version is higher than the source database, Data Pump will convert any TIMESTAMP WITH TIME ZONE (TSTZ) column to the newer timezone conventions. The conversion happens automatically during import.

Since Data Pump must update data during import, it requires that Data Pump can turn the tablespaces READ WRITE. Thus, you can’t use TRANSPORTABLE=KEEP_READ_ONLY if you have tables with TSTZ columns. Trying to do so will result in:

ORA-39339: Table "SCHEMA"."TABLE" was skipped due to transportable import and TSTZ issues resulting from time zone version mismatch.
Source time zone version is ?? and target time zone version is ??.

If your target database has a lower timezone file version, you can’t use FTEX. You must upgrade the timezone file in your database.

TDE Tablespace Encryption

If the source database has one or more encrypted tablespaces, you must either:

  • Supply the keystore password on export using the Data Pump option ENCRYPTION_PASSWORD.
  • Specify ENCRYPTION_PWD_PROMPT=YES and Data Pump will prompt for the keystore password. This approach is safer because the encryption password otherwise ends up in the shell history.

You can read more about Full Mode and transportable tablespaces in the documentation.

You can only transport encrypted tablespaces if the source and target platforms share the same endian format. For example, going from Windows to Linux is fine because they are both little-endian platforms. Going from AIX to Linux will not work; that's big to little endian. When a tablespace is transported to a platform of a different endian format, the data files must be converted. The conversion does not work on encrypted tablespaces. The only option is to decrypt the tablespaces before transport.

Further Reading


MOS notes:

Blog posts:

Other Blog Posts in This Series

XTTS: Prerequisites

When migrating Oracle Database to a different endian format using transportable tablespaces and incremental backups (XTTS), a number of requirements must be met. The following requirements exist when using:

V4 Perl Script

The most important requirements (for a complete list, check the MOS note):

  • Windows is not supported.
  • RMAN on the source system must not have DEVICE TYPE DISK configured to COMPRESSED.
  • RMAN on the source system must not have default channel configured to type SBT.
  • For Linux: Minimum version for source and destination is
  • Other platforms: Minimum version for source and destination is
  • Disk space for a complete backup of the database on both source and target host. If your data files take up 100 TB, you need an additional 100 TB of free disk space. For 12c databases, and if your data files have a lot of free space, the backup might be smaller due to RMAN unused block compression.

Also worth mentioning: during the roll forward phase (applying level 1 incremental backups), the Perl script needs to restart the target database. Applying the incremental backups on the target data files happens in NOMOUNT mode. Be sure nothing else uses the target database while you roll forward.

Block Change Tracking (BCT) is strongly recommended on the source database. Note that this is an Enterprise Edition feature (in OCI: DBCS EE-EP or ExaCS). If you don't enable BCT, the incremental backups will be much slower because RMAN has to scan every single data block for changes. With BCT, the database keeps track of changes in a special file. When RMAN backs up the database, it just gets a list of data blocks to include from the change tracking file.
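
Enabling BCT is a single statement. A sketch; without a file name, the database places the change tracking file according to db_create_file_dest:

```sql
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
```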

What If – V3 Perl Script

If disk space is a problem or if you can’t meet any of the other requirements, check out the below two MOS notes:

They describe a previous version of the Perl script, version 3. The scripts use DBMS_FILE_TRANSFER to convert the data files in-flight. That way, no extra disk space is needed. However, DBMS_FILE_TRANSFER has a limitation: data files can't be bigger than 2 TB.

Also, the V3 scripts might be useful for very old databases.

Transportable Tablespaces In General

To get a complete list of limitations on transporting data, you should look in the documentation. The most notable are:

  • Source and target database must have compatible character sets. If the character sets in both databases are not the same, check documentation for details.
  • No columns can be encrypted with TDE Column Encryption. The only option is to remove the encryption before migration and re-encrypt afterward.
  • TDE Tablespace Encryption is supported for same-endian migration if the source database is or newer. If you need to go across endianness, you must decrypt the tablespaces and re-encrypt after migration. Remember, Oracle Database 12.2 can encrypt tablespaces online.
  • If you are migrating across endianness, you must convert the data files. You must have disk space to hold a copy of all the data files. In addition, you should perform the conversion on the platform that has the best I/O system and most CPUs. Typically, this is the cloud platform, which also offers scaling possibilities.
  • Requires Enterprise Edition.
  • The database timezone file version in the target database must be equal to or higher than the source database.
  • The database time zone must match if you have tables with TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ). If you have such tables, and the database time zone does not match, those tables are skipped during import. You can then move the affected tables using a normal Data Pump table mode export and import. To check the database time zone:
    SQL> select dbtimezone from dual;
    You can alter the time zone for a database with an ALTER DATABASE statement.
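
A sketch of that statement; the time zone value is just an example, and the change only takes effect after a restart and is only allowed when the database has no tables with TSLTZ columns:

```sql
SQL> ALTER DATABASE SET TIME_ZONE = 'Europe/Copenhagen';
```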

Full Transportable Export/Import

FTEX automates the process of importing the metadata. It is simpler to use and automatically includes all the metadata in your database. Compared to a traditional transportable tablespace import, FTEX is a lot easier and removes a lot of manual work from the end user. But there are a few requirements that must be met:

  • Source database must be 11.2.0.3 or higher.
  • Target database must be 12.1.0.1 or higher.
  • Requires Enterprise Edition.
  • COMPATIBLE must be set to 12.0.0 or higher in both source and target database. If your source database is an Oracle Database 11g, this is not possible. In that case, set version to 12 or higher during Data Pump export instead.

If you can’t meet the requirements, check out traditional transportable tablespace. It has different requirements, and it allows more customization.

Other Blog Posts in This Series

XTTS: Pro Tips

When doing the XTTS blog post series, I came across a lot of valuable information. These nuggets were not suitable for a dedicated blog post but are still worth sharing.

Pro Tip 1: The Other Backups

When you are preparing for an XTTS migration, you will be doing a lot of backups of the source database. Will those backups somehow interfere with your existing backups?

The Perl script takes the first backup – the initial level 0 backup – using:

RMAN> backup for transport .... ;

It is a backup created for cross-platform transport and not something to be used to restore the source database. The documentation states about cross-platform backups:

RMAN does not catalog backup sets created for cross-platform transport in the control file. This ensures that backup sets created for cross-platform transport are not used during regular restore operations.

This is good because it ensures that your recovery strategy will not take those backups into account. Most likely, you will be moving the files from the source system pretty soon, and in that case, you don’t want RMAN depending on them.

But after the initial level 0 backup, you will create level 1 incremental backups. The incremental backups are just regular incremental backups:

RMAN> backup incremental from scn ... tablespace ... ;

It is not a cross-platform backup, so it will be recorded in the control file and/or recovery catalog. Once you move those incremental backups away from the source system, be sure to tidy them up in the RMAN catalog:

RMAN> crosscheck backupset;
RMAN> #Optionally, remove them
RMAN> delete expired backupset;

To sum up: While preparing for the migration, keep taking your regular backups.

Pro Tip 2: Data Pump Parameters

Use a parameter file for your Data Pump export and import. Especially the import will be a lot easier because you don’t need to write a very long command line with character escapes and the like:

$ cat export.par

$ expdp ... parfile=export.par

Use multiple dump files and split the files at a reasonable size:

dumpfile=xtts_%U.dmp
filesize=5g
Add diagnostic information to the log file (Oracle Database 12c and beyond):

logtime=all
metrics=yes
Exclude statistics and check my blog post on how to deal with statistics:

exclude=statistics
Pro Tip 3: Generate Data Files Parameters For Data Pump

I generate the list of files that Data Pump needs with the following command. It works if you are using ASM, and transport_datafiles will point to the alias – not the real file:

export ASMDG=+DATA
asmcmd ls -l $ASMDG | grep '^DATAFILE' | sed -n -e 's/ => .*//p' | sed -n -e 's/^.* N \s*/transport_datafiles='$ASMDG'\//p'
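To sanity check the pipeline, you can feed it a sample line. The input below is made-up data that mimics the asmcmd ls -l alias listing shown later in this post:

```shell
# Simulate one line of 'asmcmd ls -l' output for an ASM alias (sample data, not real ASM output)
line='DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    users_65.dbf => +DATA/SALES2/DATAFILE/USERS.280.1099469855'
export ASMDG=+DATA
# Same filters as above: keep DATAFILE lines, drop the '=> target' part,
# then rewrite everything up to the Sys column into the transport_datafiles prefix
echo "$line" | grep '^DATAFILE' \
  | sed -n -e 's/ => .*//p' \
  | sed -n -e 's/^.* N \s*/transport_datafiles='$ASMDG'\//p'
```

The output is a line you can paste straight into a Data Pump parameter file: transport_datafiles=+DATA/users_65.dbf.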

Pro Tip 4: Queries

Generate a comma separated list of tablespaces:

select
   listagg(tablespace_name, ',') within group (order by tablespace_name)
from
   dba_tablespaces
where
   contents='PERMANENT'
   and tablespace_name not in ('SYSTEM','SYSAUX');

Generate a comma separated list of tablespaces in n batches:

define batches=8
select
   batch,
   listagg(tablespace_name, ',') within group (order by tablespace_name)
from (
   select
      tablespace_name,
      mod(rownum, &batches) as batch
   from (
      select
         t.tablespace_name,
         sum(d.bytes)
      from
         dba_tablespaces t,
         dba_data_files d
      where
         t.tablespace_name=d.tablespace_name
         and t.contents='PERMANENT'
         and t.tablespace_name not in ('SYSTEM','SYSAUX')
      group by
         t.tablespace_name
      order by 2 desc
   )
)
group by batch;
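The round-robin assignment that mod(rownum, &batches) performs can be illustrated outside the database. Here is a minimal sketch in the shell, assuming eight made-up tablespace names already sorted by size descending and four batches:

```shell
# Hypothetical tablespace names, sorted by size descending (mimics the query's 'order by 2 desc')
printf '%s\n' TS_A TS_B TS_C TS_D TS_E TS_F TS_G TS_H \
  | awk -v batches=4 '
      { batch = NR % batches
        # round-robin: row number modulo batch count picks the batch
        if (batch in list) list[batch] = list[batch] "," $0
        else list[batch] = $0 }
      END { for (b = 0; b < batches; b++) print "batch " b ": " list[b] }'
```

Because the list is sorted largest first, the round-robin spreads the biggest tablespaces across different batches, so no single batch carries all the heavy data files.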

Generate read-only commands:

select
   'ALTER TABLESPACE ' || tablespace_name || ' READ ONLY;'
from
   dba_tablespaces
where
   contents='PERMANENT'
   and tablespace_name not in ('SYSTEM','SYSAUX');
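If you prefer to generate the commands in the shell from a plain list of tablespace names, here is a minimal sketch (the names are made up, one per line):

```shell
# Hypothetical list of tablespace names, one per line
printf '%s\n' ACCOUNT ACCOUNTIDX USERS \
  | awk '{ print "ALTER TABLESPACE " $0 " READ ONLY;" }'
```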

Pro Tip 5: Troubleshooting

Be sure to always run the Perl script with the debug option enabled:

$ #Set environment variable
$ export XTTDEBUG=1
$ #Or use --debug flag on commands
$ $ORACLE_HOME/perl/bin/perl ... --debug 3

MOS notes:

Here’s a video on how to troubleshoot in Data Pump:

Pro Tip 6: Update Perl

Rarely, I have seen issues in the Perl script caused by an old, outdated version of Perl. How to update depends on which Perl installation you are using:

Pro Tip 7: Additional Information

These webinars might be of interest to you:

You can also flip through the slides of all the webinars.

Pro Tip 8: Zero Data Loss Recovery Appliance (ZDLRA)

If you have a ZDLRA, you can use it to make your migration easier:

Pro Tip 9: E-Business Suite (EBS)

Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12.2 Using Oracle Database 19c Enterprise Edition On a Multitenant Environment (Doc ID 2674405.1)

Pro Tip 10: ORA-39032: function TRANSPORTABLE is not supported in FULL jobs

If you get this error stack when exporting in an Oracle Database 11g:

ORA-39005: inconsistent arguments
ORA-39032: function TRANSPORTABLE is not supported in FULL jobs

You most likely forgot to add the version parameter to your Data Pump export:

$ expdp ... version=12

Pro Tip 11: Advanced Queues (AQ)

A few good MOS notes to read:

The first MOS note is especially interesting:

Now it comes the interesting part, let say DBA needs to export a particular schema that contains queues, and import into another database, so if at the end of the impdp operation the DBA does a simple comparison of the number objects from the origin database schema with the target database schema, there is a big chance these counts will not match, but there will be no errors or warnings at the expdp/impdp logs. This happens exactly because during the export/import we will not consider the queue objects created on the fly on the source side, usually the ones ending by _P and _D, thus the target database may not have these objects, but of course, they may get created later on whenever required during the use of the queue. This is an expected behavior and the functionally is working as expected. A suggested way to check whether everything has been imported successfully is to use a simple query to check the total number of "QUEUE" type objects instead, for example: SQL> select count(*) from DBA_OBJECTS where owner='&schema' and object_type = 'QUEUE';

Pro Tip 12: ORA-39218 or ORA-39216

If your Data Pump metadata import fails with the below error, you are having problems with evolved types. This blog post tells you what to do:

ORA-39083: Object type TABLE:"APPUSER"."CARS" failed to create with error:
ORA-39218: type check on object type "APPUSER"."CAR_TYPE" failed
ORA-39216: object type "APPUSER"."CAR_TYPE" hashcode or version number mismatch

Failing sql is:
BEGIN SYS.DBMS_METADATA.CHECK_TYPE('APPUSER','CAR_TYPE','4','613350A0E37618A6108C500936D0C6162C',''); END;

Other Blog Posts in This Series

XTTS: Make It Fast

You have various means at your disposal if you want the migration using cross-platform transportable tablespaces (XTTS) and incremental backups to perform better.


I strongly recommend that you apply the most recent Release Update to your target Oracle Database. Use the download assistant to find it.

Also, I strongly recommend that you apply the Data Pump bundle patch as well. The Data Pump bundle patch is a collection of important Data Pump patches. The MOS note Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1) has more details and links to the patches.

Use Backup Sets

If both source and target databases are Oracle Database 12c or newer, you should set the following in xtt.properties:

usermantransport=1
RMAN will use backup sets using the new backup for transport syntax. Backup sets are better than image copies because RMAN automatically adds unused block compression. Unused block compression can shrink the size of the backup and improve performance.

Block Change Tracking

Enable block change tracking on the source database. Although strictly speaking not required, it is strongly recommended because it dramatically shortens the time it takes to perform incremental backups. It requires Enterprise Edition (on-prem), DBCS EE-EP (cloud), or Exadata:

SQL> select status, filename from v$block_change_tracking;
SQL> alter database enable block change tracking;

If the source database is on 19.9 or earlier, and you don’t get much benefit out of block change tracking, you should look at Bug 29148799 – Performance Issue During Rman Backup When Block Change Tracking Is Enabled (Doc ID 29148799.8).

Parallel Options

If you look in xtt.properties, there is a parameter called parallel. What does it do?

It controls the number of batches in which the backup and restore/recover commands run. The Perl script splits the tablespaces into n batches, where n is parallel from xtt.properties. One batch processes all the data files belonging to those tablespaces. If you have 20 tablespaces and parallel=4, the Perl script runs four batches of five tablespaces. If each tablespace has three data files, each of the four batches processes 15 data files.

Each batch will process n data files at the same time, n being the default parallelism assigned to the disk channel. To find the current parallelism (here it is two):

RMAN> show all;

If you want to change it to eight:

RMAN> configure device type disk parallelism 8;
When you restore and convert the data files on the target database, it will also use the RMAN configuration parameter.

To enable parallel backup and restore, be sure to change the default disk parallelism on both source and target database host.

For image file backups (usermantransport=0), the data files are converted on the target database using the parallel degree specified in the parameter parallel. Backup sets are converted using the RMAN configuration parameter.

Multiple Perl Scripts

If you really want to squeeze the very last drop of performance out of your system, or if you want to use multiple RAC nodes, you can use multiple Perl scripts.

Normally, you only have one Perl script (xttdriver.pl) with corresponding files like xtt.properties:

[oracle@sales12 xtts]$ pwd
/home/oracle/xtts
[oracle@sales12 xtts]$ ls -l
total 260
-rw-r--r-- 1 oracle oinstall   5169 Mar 11 19:30 xtt.newproperties
-rw-r--r-- 1 oracle oinstall    266 Mar 11 19:30 xtt.properties
-rw-r--r-- 1 oracle oinstall   1390 Mar 11 19:30 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle oinstall     71 Mar 11 19:30 xttdbopen.sql
-rw-r--r-- 1 oracle oinstall 180408 Mar 11 19:30 xttdriver.pl
-rw-r--r-- 1 oracle oinstall  11710 Mar 11 19:30 xttprep.tmpl
-rw-r--r-- 1 oracle oinstall     52 Mar 11 19:30 xttstartupnomount.sql

That one script will process all the tablespaces:

[oracle@sales12 xtts]$ cat xtt.properties

The idea with multiple Perl scripts is that you have many sets of Perl scripts; each set working on a unique batch of tablespaces.

So instead of just one folder, I could have four folders. Each folder is a complete Perl script with all the files. Download and extract to four folders:

[oracle@sales12 ~]$ pwd
/home/oracle
[oracle@sales12 ~]$ ls -l
drwxr-xr-x 2 oracle oinstall  4096 Mar 11 19:30 xtts1
drwxr-xr-x 2 oracle oinstall  4096 Mar 11 19:30 xtts2
drwxr-xr-x 2 oracle oinstall  4096 Mar 11 19:30 xtts3
drwxr-xr-x 2 oracle oinstall  4096 Mar 11 19:30 xtts4

Each of the files will work on a unique set of tablespaces:

[oracle@sales12 xtts]$ cat /home/oracle/xtts1/xtt.properties
[oracle@sales12 xtts]$ cat /home/oracle/xtts2/xtt.properties
[oracle@sales12 xtts]$ cat /home/oracle/xtts3/xtt.properties
[oracle@sales12 xtts]$ cat /home/oracle/xtts4/xtt.properties

You must also ensure that src_scratch_location and dest_scratch_location are set to different locations. Each set of Perl scripts must have dedicated scratch locations.

When you need to backup and restore/recover, you have multiple concurrent sessions running. Each session uses one of the Perl scripts and thus processes the tablespaces concurrently.

SSH session 1:

export TMPDIR=/home/oracle/xtts1
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

SSH session 2 (notice I changed TMPDIR to another directory, xtts2):

export TMPDIR=/home/oracle/xtts2
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

SSH session 3:

export TMPDIR=/home/oracle/xtts3
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

SSH session 4:

export TMPDIR=/home/oracle/xtts4
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
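Instead of four interactive SSH sessions, the same fan-out can be scripted with background jobs. This is only a sketch: run_backup is a hypothetical wrapper, and the echo stands in for the real xttdriver.pl call:

```shell
# Sketch: run_backup is a hypothetical wrapper; the echo stands in for the real
# '$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup' call
run_backup() {
  TMPDIR=$1
  export TMPDIR
  echo "backup started with TMPDIR=$TMPDIR"
}

for i in 1 2 3 4; do
  run_backup /home/oracle/xtts$i &
done
wait   # block until all four concurrent backups have finished
```

Each background job gets its own TMPDIR, so each one picks up a different set of Perl script files, just like the four separate SSH sessions above.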

In the above example, I used four different Perl scripts and four concurrent sessions. But you can scale up if you have the resources for it. One of our customers ran with 40 concurrent sessions!

You must make sure to include all your tablespaces. The sum of the tablespaces in your Perl scripts must be all of the tablespaces in your database. Don’t forget any of them.

On RAC, you can run the Perl scripts on all the nodes, utilizing all your resources.

Watch this video to learn how a customer migrated a 230 TB database using multiple Perl scripts

Database Complexity

The Data Pump export and import must be done during downtime. The time it takes to perform these two tasks is often critical.

How long will it take? It depends (classic IT answer)!

The complexity of your user data dictionary has the biggest impact. The more objects, the longer the export and import will generally take. Certain features, like partitioning, have a big impact as well. Reducing the complexity of the user data dictionary might not be possible, but when it is, it can make a big difference. In some situations, I have seen old or obsolete data in the database. Or partitions that should have been archived. Or entire groups of tables used by an application feature no longer in use. Getting rid of such data can have an impact.

Another thing to look at is invalid objects. If you have objects that can’t compile, check the reason and whether the object can be dropped. Often, these invalid objects are just remnants from old times. Getting rid of them makes the export faster, makes the import faster, and saves the database from spending time trying to compile them.

One thing to test: does Data Pump work best in dump file mode or network mode for your migration? Normally, we recommend dump file mode because it has much better parallel capabilities. But metadata export and import for transportable tablespace jobs happen serially anyway, so there might be a benefit to using Data Pump in network mode. In network mode, you just start the Data Pump import without first doing an export. The information is loaded directly into the target database over a database link.

Skip Statistics

I recommend that you skip statistics when you export:

exclude=statistics
Instead, either:

  1. Regather new statistics on target database.
  2. Import statistics from source database using DBMS_STATS.
  3. Import statistics from a test system using DBMS_STATS.

Options 1 and 3 are especially appealing if your target database is very different from the source. Imagine going from AIX to Exadata, from an older release to 19c, and from non-CDB to PDB. The platform itself is very different; Exadata has superior capabilities. In addition, it is a new release with other histogram types and a different architecture. In this case, it makes sense to gather new statistics that better reflect the new environment.

We discuss statistics during migrations in detail in our webinar Performance Stability, Tips and Tricks and Underscores.

Dictionary Statistics

Accurate statistics are always important, and that applies to Data Pump jobs as well. You should gather dictionary statistics:

  • Within reasonable time before Data Pump export.
  • Immediately after Data Pump import.

I usually gather just schema stats on SYS and SYSTEM, but you can use the dedicated procedure as well:

SQL> exec dbms_stats.gather_schema_stats('SYS');
SQL> exec dbms_stats.gather_schema_stats('SYSTEM');
SQL> --Or
SQL> exec dbms_stats.gather_dictionary_stats;

Parallel Metadata Export and Import

Starting with Oracle Database 21c, Data Pump supports parallel export and import of metadata when using transportable tablespaces. Add the following to your Data Pump parameter file, where n is the level of parallelism:

parallel=n
If the export was made in a lower release that didn’t support parallel export, you can still import in parallel. Parallel metadata import works regardless of how the Data Pump export was made.

Other Blog Posts in This Series

XTTS: ASM Aliases and Why You Should Get Rid of Them

If you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, and your target database uses ASM, you should get rid of the aliases after the migration.

What Is an ASM Alias?

When you use ASM, there is tight control over the file names. ASM strictly enforces the naming standard dictated by Oracle Managed Files (OMF), and only the database can create file names that comply with OMF.

Sometimes it is handy to create files in other locations in ASM that still refer to a database file. Here you can use aliases. Aliases work like a symbolic link in the file system.

How can you tell if a file is an alias?

Alias Oracle ASM file names are distinguished from fully qualified file names or numeric file names because they do not end in a dotted pair of numbers. It is an error to attempt to create an alias that ends in a dotted pair of numbers, such as in the format USERS.259.685366091.

When you use ls -l, you can also tell whether a file is an alias. The column Sys (system-generated) is N, meaning this is not a proper OMF file. Also, you can see in the Name column that it is an alias; the => indicates it:

Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    account_25.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855
DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    accountidx_26.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNTIDX.280.1099469855

You can read about Fully Qualified File Name Form in the ASM documentation, if you are interested.

Why Are the Aliases Created?

When the Perl script is restoring and recovering the data files on the target database, they do not belong to any database yet. The tablespaces have not been plugged into any database yet. Hence, it is impossible to figure out the right OMF name of the data files. As an alternative, ASM names the data files according to the syntax of the source database. For instance, it will use the source database GUID (select guid from v$containers) as part of the name. In addition, the Perl script creates ASM aliases using the following format: <dest_datafile_location>/<tablespace_name>_<file#>.dbf
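To illustrate the format, the alias name can be reproduced in the shell. This is a sketch with made-up values; lowercasing the tablespace name matches the listings shown earlier:

```shell
# Reproduce the alias format <dest_datafile_location>/<tablespace_name>_<file#>.dbf
# (sample values; the lowercase tablespace name matches the listings above)
dest_datafile_location=+DATA
tablespace_name=USERS
file_no=65
lower_ts=$(printf '%s' "$tablespace_name" | tr '[:upper:]' '[:lower:]')
printf '%s/%s_%s.dbf\n' "$dest_datafile_location" "$lower_ts" "$file_no"
```

With these values, the constructed alias is +DATA/users_65.dbf, which is exactly the name used in the Data Pump example below.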

When you perform the Data Pump import, you can refer to the aliases in your Data Pump parameter file (transport_datafiles). Using the aliases is especially useful if you plan on having a standby database.

How Do I Get Rid of the Aliases?

After performing the Data Pump import, the tablespaces are plugged into a database, and now the data files belong to a database. But the target database is referring to the data files either via:

  • An ASM alias
  • Or directly via the absolute file name. As described earlier, the absolute file path uses the OMF syntax of the source database

Let me illustrate that. Imagine:

  • dest_datafile_location is set to +DATA.
  • My data file is named users01.dbf, belongs to tablespace USERS and has file ID 65.
  • Target DB_UNIQUE_NAME is SALES2.
  • Source database GUID is 86D5DC2587337002E0532AB2A8C0A57C.

How will the file be registered in the database?

  • If I used the aliases, it is known as +DATA/users_65.dbf.
  • If I used the absolute file name, it is known as +DATA/SALES2/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/users.280.1099469855. ASM generates the last two sets of numbers.

Neither of the two formats is a proper OMF name. What is the real OMF name? Imagine:

  • Target database GUID is DA495482D68D0220E0530F01000A98DF
  • The real OMF file name is (notice the change in GUID): +DATA/SALES2/DA495482D68D0220E0530F01000A98DF/DATAFILE/users.280.1099469855

You can get the GUID of a database by using select guid from v$containers.

In ASM, only the database can store a file in OMF syntax. You must fix this from the target database. The easiest way is to use online data file move. If you don’t specify the target location, the database will generate an OMF name:

SQL> --using file number
SQL> alter database move datafile 65;
SQL> --using full name
SQL> alter database move datafile '+DATA/users_65.dbf';

How does the move work?

  • It is an entirely online operation.
  • It is a block-by-block copy.
  • The database copies the data file. While the copy operation takes place, the two files are kept in sync until the database can switch to the new file. After that, the database removes the original file.
  • If the data file belongs to a PDB, you must switch your session to that container.

You can learn more about online data file move in our YouTube video:

Why Bother?

If my database works fine, why should I worry? I can think of at least two reasons:

  • Comply with the naming standard
  • Avoid problems in other migrations

Comply with the naming standard

I highly recommend that you use and comply with a naming standard, including OMF. Data files that are not stored appropriately according to OMF should be moved to the correct location.

When I worked out in the real world as a DBA, I remember multiple occasions of data files being lost. In many situations, a DBA had found a file apparently not belonging to any database – at least according to the naming standard. But the file was used by a database; it was just not stored in the correct location. When the file was removed, it meant big problems for the database.

With many databases and many hosts, it is very important that you define standards and stick to them. Otherwise, you will end up in a big mess.

Avoid problems in other migrations

This is especially relevant if you need to perform multiple migrations to the same database host.

The Perl script used for the migration creates the aliases in the location specified by dest_datafile_location. The names of the aliases are very simple, and there is a risk that another migration will try to create the same alias.

Imagine you already did one migration. The database uses the alias +DATA/users_4.dbf. Now you want to make a second migration, and this database also wants to use +DATA/users_4.dbf. The same alias can’t be used for two different files. Big problem!

A user left a comment on my blog telling me this actually led to corruption in the first database. That risk is a very good reason for getting rid of the aliases and using only proper OMF file names.


ASM aliases are created automatically as part of the migration. The aliases are very useful during the migration, but I highly recommend getting rid of the aliases right after the migration.

Other Blog Posts in This Series

XTTS: Testing the Procedure Multiple Times Using Read-Only Tablespaces

You can perform multiple tests of the procedure for cross-platform transportable tablespaces (XTTS) and incremental backups. You don’t have to restore and recover the data files every time. It makes your testing effort a lot more efficient.

Default Behaviour

When you perform a Data Pump import of transportable tablespaces, the data files plug into the target database, and the tablespaces turn into read write mode immediately. It happens in the phase DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK.

Data Pump must turn the tablespaces into read write mode for several reasons:

  • Suppose you have tables with columns of type TIMESTAMP WITH TIME ZONE (TSTZ), and there is a difference in the database time zone file version. In that case, all TSTZ columns must be updated to reflect the new time zone conventions.
  • The data file bitmap of free space must be updated in case you have segments in your tablespaces that are not imported.

When the tablespaces switch to read write mode, the data file headers are updated. The data files change in such a way that they belong to that database (I imagine it has something to do with DBID, SCN, and so forth).

This also means that you can only import the tablespaces one time. If you want to repeat the process, you need to restore and recover the data files. Not even Flashback Database can save you.


A new option was added to Data Pump in Oracle Database 19c to solve this: TRANSPORTABLE=KEEP_READ_ONLY. Here is what the documentation says:

If specified, then data files remain in read-only mode. As a result of this setting, the tables containing TSTZ column data cannot be updated, and are dropped from the import. In addition, data file bitmaps cannot be rebuilt.

Keeping the tablespaces read only and leaving the data files untouched sounds good. But there are some restrictions:

  • If you have TSTZ columns, they can’t be checked and updated. This means that the entire table with a TSTZ column is skipped during import. You will see this error in the Data Pump log file: ORA-39339: Table "SCHEMA"."TABLE" was skipped due to transportable import and TSTZ issues. To solve it, you need to manually import the table afterward using a Data Pump table mode export and import.
  • If you have any segments in your data files that no longer belong to an object (e.g. if that specific object was not imported), the free space bitmap can’t be updated, and you have free space that can’t be used. You can solve this later on when the tablespaces are in read write mode using dbms_space_admin.tablespace_rebuild_bitmaps.

If you can live with these restrictions, you can use this feature.


You can use TRANSPORTABLE=KEEP_READ_ONLY when testing in the following way:

  1. Perform the backups on the source database using $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup.
  2. Transfer res.txt and restore/recover the data files using $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore.
  3. Now you want to test the migration procedure. Set the tablespaces in the source database in read only mode, do an incremental backup and perform the Data Pump export. Optionally, use a temporarily activated standby database to avoid interruptions on the primary production database.
  4. Set a guaranteed restore point in the target database.
  5. After recovering the data files on the target system, you perform the Data Pump import. You must set the Data Pump parameter TRANSPORTABLE=KEEP_READ_ONLY to leave the data files untouched.
  6. When you are done with your tests, you either flash back to the restore point or delete the data in your database. The latter would be a series of DROP commands (schema, roles etc.) followed by DROP TABLESPACE commands. To preserve the data files, be sure to use DROP TABLESPACE ... INCLUDING CONTENTS KEEP DATAFILES. The KEEP clause is especially vital on ASM and with OMF.

    When you specify INCLUDING CONTENTS, the KEEP DATAFILES clause lets you instruct the database to leave untouched the associated operating system files, including Oracle Managed Files. You must specify this clause if you are using Oracle Managed Files and you do not want the associated operating system files removed by the INCLUDING CONTENTS clause.

  7. You can now continue to recover the data files on your target system with new incremental backups from the source database. You can roll forward the data files even though we had them plugged into a database. This is really cool. This allows you to repeat the test with new fresh production data without the tedious task of completely restoring and recovering the data files.
  8. Repeat the test cycle as many times as you want. If the data files are left untouched using TRANSPORTABLE=KEEP_READ_ONLY and you don’t turn them into read write mode manually, you can repeat the process.

Production Migration

Can you use TRANSPORTABLE=KEEP_READ_ONLY for the production migration? Yes, you can. But eventually, you will need to turn the tablespaces into read write mode.

I would not usually recommend it. But recently, I was involved in a specific case where it was helpful.

During testing, a customer ran into an issue where Data Pump hung during import. They killed the import and tried to repeat the Data Pump import. Unfortunately, Data Pump imports of transportable tablespaces are not resumable until Oracle Database 21c. So they had to start Data Pump all over. But the data files had already been touched by the target database. Now Data Pump refused to progress with the import because the data files were not as expected. The customer was advised on how to avoid the Data Pump hang. But they were still a little concerned about their upcoming production migration. They would like to keep the tablespaces in read only mode. Just in case something similar would happen. In that case, they could easily start all over because the data files were untouched.


During a migration project, you should use the Data Pump feature TRANSPORTABLE=KEEP_READ_ONLY to ease your testing efforts. In some rare cases, it might also be helpful for production migrations.

Other Blog Posts in This Series

XTTS: Testing the Procedure On Activated Standby Database

While preparing for the production migration of your Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, you should test the procedure.

It is often impossible to get a full copy of the production database or a comparable test system. If you still want to test the migration procedure, you need to take the backups on the production database. That might have an impact on your production database. You could offload the backups to a standby database, but you would still periodically have to set tablespaces in read only mode to perform the Data Pump exports. This means downtime for your application.

But there is a way to do all the work on the standby database. This will allow you to test the migration procedure without affecting the primary production database.

You can also read about the procedure in the MOS note Using XTTs in a Data Guard Environment.

How To

You should use the procedure described in a previous blog post about backups on a standby database. When you reach the final incremental backups you need to follow the information below.

I assume you have conducted a level 0 and a number of incremental backups on the standby database. Now it is time for the final incremental backup. We will do this on the standby database as well. You don’t need to touch the primary database except for a few minor changes (archive current log and defer redo log transport).

Now you have a consistent backup of the data files and a corresponding Data Pump export; both components were taken from a standby database. Transfer the files to your target database and test the procedure.

Production Migration

The heading says you can use this procedure for testing. How about the production migration?

Well, it could work, but I would not recommend it.

When you use this approach, you leave the primary database open for business while you take the final incremental backup. This opens the door to a potential catastrophe: users entering data into the open primary database after you have taken the last incremental backup. That would mean data loss!

Using the other standby database approach, you are sure that no more data is entered into the source database when you perform the final migration. You ensure this by setting the tablespaces read only in the primary database. That is why I do not recommend using an activated standby database for your production migration.


You should test your migration procedure and get comfortable with your runbook. You can use a temporarily activated standby database if you don’t have a dedicated test environment. This allows you to perform realistic testing without interrupting the primary database and your users.

Other Blog Posts in This Series