XTTS: Full Transportable Export/Import

In this blog post series, I use Full Transportable Export/Import (FTEX) to move the metadata during a cross-platform transportable tablespace migration (XTTS). The documentation states:

You can use the full transportable export/import feature to copy an entire database from one Oracle Database instance to another.

Requirements

A different blog post already covers the requirements for FTEX. Below is a supplement to that list:

  • The user performing the export and import must have the roles DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE, respectively. Don’t run the Data Pump jobs as SYS AS SYSDBA! See the sketch after this list.
  • During export, the default tablespace of the user performing the export must not be one of the tablespaces being transported. In addition, the default tablespace of the user performing the export must be writable. Data Pump needs this to create the control table.
  • The target database (non-CDB) or PDB must not contain a tablespace of the same name as one of the tablespaces being transported. Often this is the case with the USERS tablespace. Either use Data Pump remap_tablespace or rename the tablespace (alter tablespace users rename to users2).
  • All tablespaces are transported. It is not possible to exclude a tablespace or a user from the operation.
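
As a sketch, the role and default tablespace requirements could be verified like this. The user MIGR_ADMIN is a placeholder for your own migration user:

SQL> --On the source database
SQL> grant datapump_exp_full_database to migr_admin;
SQL> --On the target database or PDB
SQL> grant datapump_imp_full_database to migr_admin;
SQL> --On the source: check that the default tablespace is writable
SQL> --and not part of the transport set (SYSTEM and SYSAUX are not transported)
SQL> select default_tablespace from dba_users where username = 'MIGR_ADMIN';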

What Is Included?

Generally, you can count on everything being included, except objects owned by SYS and the things listed in the next section. Below is a list of examples of things that are included, based on questions I have previously been asked.

  • If a user schema has tables in the SYSTEM or SYSAUX tablespace, such tables are also transported. But they are not stored in the transported tablespaces; instead, those tables are exported into the dump file using a conventional export. Examples:
    SQL> --Exported into dump file
    SQL> create table app.my_tab1 (...) tablespace system;
    SQL> --Exported via transportable tablespace
    SQL> create table app.my_tab2 (...) tablespace users;
    
  • If you created any new tables in the SYSTEM schema or any other internal schema (except SYS), those tables are also transported. If such tables are in the SYSTEM or SYSAUX tablespace, they are exported into the dump file. Examples:
    SQL> --Exported into dump file
    SQL> create table system.my_tab1 (...) tablespace system;
    SQL> --Exported via transportable tablespace
    SQL> create table system.my_tab2 (...) tablespace users;
    
    No need to emphasize that you should never create any objects in Oracle-maintained schemas. But we all know it happens…
  • Public and private database links.
  • Private synonyms.
  • Profiles.
  • Directories, including the privileges granted on them, even though directories are owned by SYS. The files stored in the directory’s location in the file system must be moved manually.
  • External table definitions, but the underlying external files must be moved manually.
  • Temporary tables.
  • All schema-level triggers (CREATE TRIGGER ... ON SCHEMA), including those on system events, except triggers owned by SYS.
  • All database-level triggers (CREATE TRIGGER ... ON DATABASE) owned by an internal schema, except SYS.
  • SQL patches.
  • SQL plan baselines.
  • SQL profiles.

What Is Not Included?

The transport does not include any object owned by SYS. Here are some examples:

  • User-created tables in the SYS schema are not transported at all. You must re-create such tables manually (but you should never create such tables in the first place).
    SQL> --Not moved, recreate manually
    SQL> create table sys.my_tab1 (...) tablespace system;
    SQL> --Not moved, recreate manually
    SQL> create table sys.my_tab2 (...) tablespace users;
    
  • Grants on tables or views owned by SYS, like DBA_USERS or v$datafile.
  • Any trigger owned by SYS.

In addition, the following is not included:

  • Public synonyms.
  • AWR data. You can move such data using the script $ORACLE_HOME/rdbms/admin/awrextr.sql (see the sketch below).
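
A rough sketch of moving AWR data with the scripts shipped in $ORACLE_HOME/rdbms/admin. Both scripts are interactive and prompt for DBID, snapshot range, and a directory object:

SQL> --On the source database: export the AWR data to a dump file
SQL> @?/rdbms/admin/awrextr.sql
SQL> --On the target database: load the dump file
SQL> @?/rdbms/admin/awrload.sql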

How Does It Work?

There are two keywords used to start a full transportable job: TRANSPORTABLE and FULL. If you want to start an FTEX import directly over a network link:

$ impdp ... transportable=always full=y network_link=...

If you want to use dump files:

$ expdp ... transportable=always full=y
$ impdp ... full=y
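
To make this more concrete, here is a sketch of a network-mode FTEX import using a parameter file. The connect string, database link name, and data file paths are placeholders, and TRANSPORT_DATAFILES must list the data files you have already copied (and, if needed, converted) on the target:

$ cat ftex_imp.par
full=y
transportable=always
network_link=srclink
directory=data_pump_dir
logfile=ftex_imp.log
metrics=yes
logtime=all
transport_datafiles='/u02/oradata/TGT/users01.dbf'
transport_datafiles='/u02/oradata/TGT/appdata01.dbf'

$ impdp system@targetpdb parfile=ftex_imp.par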

A Few Words of Advice

Practice, practice, practice

  • Start on a small database and work on your runbook.
  • Eventually, prove it works on a production-size database.

Automate

  • To ensure consistency. There are many steps, and it is easy to overlook a step or miss a detail.
  • To avoid human error. Humans make mistakes. Period!

Save logs

  • Data Pump
  • RMAN
  • Terminal output

Automate clean-up procedure

  • To repeat tests and effectively clean up the target environment.
  • In case of failure and rollback during production migration, you should know how to resume operations safely.

Shut down source database

  • Be sure to take the source database offline after the migration (see the sketch below). Having users connect to the wrong database after a migration is a disaster.
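
A minimal sketch, assuming a Grid Infrastructure managed source database named srcdb (a placeholder); on a plain single-instance setup, a SQL*Plus SHUTDOWN IMMEDIATE does the same job:

$ srvctl stop database -db srcdb -stopoption immediate
$ srvctl disable database -db srcdb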

Data Pump Import

  • Importing directly into the target database using the NETWORK_LINK option is recommended.

Timezone File Version

Check the timezone file version of your source and target database:

SQL> select * from v$timezone_file;

If they differ and the target timezone file version is higher than that of the source database, Data Pump converts any TIMESTAMP WITH TIME ZONE (TSTZ) columns to the newer timezone conventions. The conversion happens automatically during import.

Since Data Pump must update data during the import, it needs to be able to turn the tablespaces READ WRITE. Thus, you can’t use TRANSPORTABLE=KEEP_READ_ONLY if you have tables with TSTZ columns. Trying to do so will result in:

ORA-39339: Table "SCHEMA"."TABLE" was skipped due to transportable import and TSTZ issues resulting from time zone version mismatch.
Source time zone version is ?? and target time zone version is ??.
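
If you are unsure whether your application schemas contain any TSTZ columns, a query along these lines can help. It is just a sketch; the ORACLE_MAINTAINED column requires Oracle Database 12c or later:

SQL> select owner, table_name, column_name
     from   dba_tab_columns
     where  data_type like 'TIMESTAMP%WITH TIME ZONE'
     and    owner not in (select username from dba_users where oracle_maintained = 'Y');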

If your target database has a lower timezone file version than the source database, you can’t use FTEX. You must first upgrade the timezone file in the target database.

TDE Tablespace Encryption

If the source database has one or more encrypted tablespaces, you must either:

  • Supply the keystore password on export using the Data Pump option ENCRYPTION_PASSWORD.
  • Specify ENCRYPTION_PWD_PROMPT=YES and Data Pump will prompt for the keystore password (see the sketch below). This approach is safer because the encryption password otherwise ends up in the shell history.
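
As a sketch, an export of a database with encrypted tablespaces could look like this; the connect string, directory, and file names are placeholders, and the prompt keeps the keystore password off the command line and out of the shell history:

$ expdp system@sourcedb full=y transportable=always encryption_pwd_prompt=yes \
    directory=data_pump_dir dumpfile=ftex.dmp logfile=ftex_exp.log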

You can read more about Full Mode and transportable tablespaces in the documentation.

You can only transport encrypted tablespaces if the source and target platforms share the same endian format. For example, going from Windows to Linux is fine because both are little-endian platforms. Going from AIX to Linux will not work; that is big to little endian. When a tablespace is transported to a platform of a different endian format, the data files must be converted, and the conversion does not work on encrypted tablespaces. The only option is to decrypt the tablespace before transport. You can check the endian format of each platform as shown below.
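
The endian format of every supported platform is listed in V$TRANSPORTABLE_PLATFORM, for example:

SQL> select platform_id, platform_name, endian_format
     from   v$transportable_platform
     order  by platform_name;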

