XTTS: Full Transportable Export/Import

In this blog post series, I use Full Transportable Export/Import (FTEX) to move the metadata during a cross-platform transportable tablespace migration (XTTS). The documentation states:

You can use the full transportable export/import feature to copy an entire database from one Oracle Database instance to another.

Requirements

A different blog post already covers the requirements for FTEX. Below is a supplement to that list:

  • The user performing the export and import must have the roles DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE, respectively. Don’t run the Data Pump jobs as SYS AS SYSDBA!
  • During export, the default tablespace of the user performing the export must not be one of the tablespaces being transported. In addition, that default tablespace must be writable, because Data Pump needs it to create the control table (see the check after this list).
  • The target database (non-CDB) or PDB must not contain a tablespace of the same name as one of the tablespaces being transported. Often this is the case with the USERS tablespace. Either use Data Pump remap_tablespace or rename the tablespace (alter tablespace users rename to users2).
  • All tablespaces are transported. It is not possible to exclude a tablespace or a user from the operation.
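
A quick way to verify the role and default tablespace requirements before you start could look like this. It is a minimal sketch; DPUSER is a hypothetical export user, so adjust names to your environment:

SQL> --Verify the Data Pump roles of the export user
SQL> select grantee, granted_role
     from dba_role_privs
     where grantee = 'DPUSER'
     and granted_role like 'DATAPUMP%FULL_DATABASE';
SQL> --Verify the default tablespace of the export user
SQL> select username, default_tablespace
     from dba_users
     where username = 'DPUSER';
SQL> --If needed, move the default tablespace to one that stays behind
SQL> alter user dpuser default tablespace <non_transported_tbs>;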

What Is Included?

Generally, you should count on everything being included, except SYS objects and the things listed in the next section. Below is a list of examples of what is included, based on questions I have previously been asked.

  • If a user schema has tables in the SYSTEM or SYSAUX tablespace, such tables are also transported. However, since they are not stored in any of the transported tablespaces, those tables are exported into the dump file using conventional export. Examples:
    SQL> --Exported into dump file
    SQL> create table app.my_tab1 (...) tablespace system;
    SQL> --Exported via transportable tablespace
    SQL> create table app.my_tab2 (...) tablespace users;
    
  • If you created any new tables in the SYSTEM schema or any other internal schema (except SYS), those tables will also be transported. If such tables are in the SYSTEM or SYSAUX tablespace, they are exported into the dump file. Examples:
    SQL> --Exported into dump file
    SQL> create table system.my_tab1 (...) tablespace system;
    SQL> --Exported via transportable tablespace
    SQL> create table system.my_tab2 (...) tablespace users;
    
    No need to emphasize that you should never create any objects in Oracle maintained schemas. But we all know it happens…
  • Public and private database links.
  • Private synonyms.
  • Profiles.
  • Directories, including the privileges granted on them, even though directories are owned by SYS. Any files stored in the directory location on the file system must be moved manually.
  • External table definitions, but the underlying external files must be moved manually.
  • Temporary tables.
  • All schema-level triggers (CREATE TRIGGER ... ON SCHEMA), including triggers on system events, except those owned by SYS.
  • All database-level triggers (CREATE TRIGGER ... ON DATABASE) owned by an internal schema, except SYS.
  • SQL patches.
  • SQL plan baselines.
  • SQL profiles.

What Is Not Included?

The transport does not include any object owned by SYS. Here are some examples:

  • User-created tables in SYS schema are not transported at all. You must re-create such tables (but you should never create such tables in the first place).
    SQL> --Not moved, recreate manually
    SQL> create table sys.my_tab1 (...) tablespace system;
    SQL> --Not moved, recreate manually
    SQL> create table sys.my_tab2 (...) tablespace users;
    
  • Grants on tables or views owned by SYS, like DBA_USERS or v$datafile.
  • Any trigger owned by SYS.

In addition, the following is not included:

  • Public synonyms.
  • AWR data. You can move such data using the script $ORACLE_HOME/rdbms/admin/awrextr.sql (see the example after this list).
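
If you want to carry AWR data over, a minimal sketch looks like this. The extract runs on the source and the load on the target; both scripts prompt for DBID, snapshot range, directory, and a staging schema:

SQL> --On the source database: export AWR data to a Data Pump dump file
SQL> @?/rdbms/admin/awrextr.sql
SQL> --On the target database: load the dump file
SQL> @?/rdbms/admin/awrload.sql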

How Does It Work?

There are two keywords used to start a full transportable job: TRANSPORTABLE and FULL. If you want to start an FTEX import directly over a network link:

$ impdp ... transportable=always full=y network_link=<db_link> ...

If you want to use dump files:

$ expdp ... transportable=always full=y
$ impdp ... full=y
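
For reference, a slightly fuller dump-file-based sketch could look like this. Directory, dump file, and data file names are placeholders, and on import you must also point to the transported data files with transport_datafiles:

$ expdp system directory=DATA_PUMP_DIR dumpfile=ftex.dmp logfile=ftex_exp.log transportable=always full=y
$ impdp system directory=DATA_PUMP_DIR dumpfile=ftex.dmp logfile=ftex_imp.log full=y transport_datafiles='/u01/oradata/sales01.dbf'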

A Few Words of Advice

Practice, practice, practice

  • Start on a small database and work on your runbook.
  • Eventually, prove it works on a production-size database.

Automate

  • To ensure consistency. There are many steps, and it is easy to overlook a step or miss a detail.
  • To avoid human error. Humans make mistakes. Period!

Save logs

  • Data Pump
  • RMAN
  • Terminal output

Automate clean-up procedure

  • To repeat tests and effectively clean up the target environment.
  • In case of failure and rollback during production migration, you should know how to resume operations safely.

Shut down source database

  • Be sure to take the source database offline after the migration. Having users connect to the wrong database after a migration is a disaster.

Data Pump Import

  • Importing directly into the target database using the NETWORK_LINK option is recommended.

Timezone File Version

Check the timezone file version of your source and target database:

SQL> select * from v$timezone_file;

If they differ and the target timezone file version is higher than the source database, Data Pump will convert any TIMESTAMP WITH TIME ZONE (TSTZ) column to the newer timezone conventions. The conversion happens automatically during import.

Since Data Pump must update data during import, it must be able to turn the tablespaces READ WRITE. Thus, you can’t use TRANSPORTABLE=KEEP_READ_ONLY if you have tables with TSTZ columns. Trying to do so will result in:

ORA-39339: Table "SCHEMA"."TABLE" was skipped due to transportable import and TSTZ issues resulting from time zone version mismatch.
Source time zone version is ?? and target time zone version is ??.
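
To check in advance whether you have tables with TSTZ columns, you can run a query along these lines. It is a sketch; the owner filter assumes 12c or later, where DBA_USERS has the ORACLE_MAINTAINED column:

SQL> select owner, table_name, column_name
     from dba_tab_columns
     where data_type like 'TIMESTAMP%WITH TIME ZONE'
     and owner not in (select username from dba_users where oracle_maintained = 'Y');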

If your target database has a lower timezone file version than the source, you can’t use FTEX until you have upgraded the timezone file in the target database.

TDE Tablespace Encryption

If the source database has one or more encrypted tablespaces, you must either:

  • Supply the keystore password on export using the Data Pump option ENCRYPTION_PASSWORD.
  • Specify ENCRYPTION_PWD_PROMPT=YES and Data Pump will prompt for the keystore password. This approach is safer, because a password given on the command line otherwise ends up in the shell history.
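
For example (a sketch, using a hypothetical par file name):

$ expdp system parfile=ftex_exp.par encryption_pwd_prompt=yes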

You can read more about Full Mode and transportable tablespaces in the documentation.

You can only transport encrypted tablespaces if the source and target platforms share the same endian format. For example, going from Windows to Linux is fine, because both are little-endian platforms. Going from AIX to Linux will not work; that is big-endian to little-endian. When a tablespace is transported to a platform of a different endian format, the data files must be converted, and the conversion does not work on encrypted tablespaces. The only option is to decrypt the tablespaces before transport.
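
If you do have to decrypt first, on Oracle Database 12.2 or later something like the following could be used, assuming the keystore is open (a sketch; verify against your version and test the runtime impact):

SQL> alter tablespace sales encryption online decrypt;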

Further Reading

Documentation:

MOS notes:

Blog posts:

Other Blog Posts in This Series

XTTS: How to Migrate a Database Using Full Transportable Export Import and Incremental Backups

These steps will guide you through a migration of a database using Full Transportable Export/Import (FTEX) and incremental backups. I covered the concept in a previous blog post, which you should read to understand the basics. Remember that Transportable Tablespaces and Full Transportable Export/Import require Enterprise Edition.

My demo environment looks like this: [Figure: Overview of demo environment for migrating using FTEX and incremental backups]

I have a 12.1.0.2 database that I want to migrate to a PDB in a new CDB running 19c.

Check Prerequisites

Create a new PDB called SALES in the target CDB:

TARGET/CDB1 SQL> create pluggable database sales admin user admin identified by admin;
TARGET/CDB1 SQL> alter pluggable database sales open;
TARGET/CDB1 SQL> alter pluggable database sales save state;

Prepare the database to use TDE Tablespace Encryption:

TARGET/CDB1 SQL> alter session set container=sales;
TARGET/CDB1 SQL> administer key management set key force keystore identified by <keystore-pwd> with backup;

Verify SQL*Net connectivity from source host to target PDB:

[oracle@source]$ sqlplus system@<target ip>/<pdb-service-name>

Verify that the database character set and national character set are the same in the source and target:

SOURCE/SALES SQL> select property_name, property_value from database_properties where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

TARGET/SALES SQL> select property_name, property_value from database_properties where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

Ensure the source database is in ARCHIVELOG mode:

SOURCE/SALES SQL> select log_mode from v$database;

Enable block change tracking on the source database. It requires Enterprise Edition (on-prem), DBCS EE-EP (cloud), or Exadata. Although strictly speaking not required, it is strongly recommended:

SOURCE/SALES SQL> select status, filename from v$block_change_tracking;
SOURCE/SALES SQL> alter database enable block change tracking;

Ensure that you can connect from the source to the target host as oracle:

[oracle@source]$ ssh <target ip> date

Identify Tablespaces And External Data

Identify all the tablespaces that you will migrate. With FTEX you transport all tablespaces except those that contain Oracle-maintained data, such as SYSTEM, SYSAUX, UNDO, and so forth:

SOURCE/SALES SQL> select tablespace_name from dba_tablespaces;

Save the list of tablespaces for later. In my demo, the only tablespace besides the Oracle-maintained ones is SALES.

Next, on the target database, ensure that none of the existing tablespaces conflict with the ones you are transporting:

TARGET/SALES SQL> select tablespace_name from dba_tablespaces;

If there is a conflict of names, you have to drop or rename the tablespaces in the target database.
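
For example, to rename a conflicting USERS tablespace in the target (a sketch; pick a name that suits your environment):

TARGET/SALES SQL> alter tablespace users rename to users_old;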

Use DBMS_TDB to easily identify external objects such as directories, external tables, and BFILEs. Any files stored in the file system outside the database must be manually transferred to the file system on the target host:

SOURCE/SALES SQL> SET SERVEROUTPUT ON
SOURCE/SALES SQL> DECLARE
      external BOOLEAN;
   BEGIN
      external := DBMS_TDB.CHECK_EXTERNAL;
   END;
/

Download and Configure Perl Scripts

Create a folder to hold the perl scripts, download the scripts from MOS doc ID 2471245.1, and unzip:

[oracle@source]$ rm -rf /home/oracle/xtts
[oracle@source]$ mkdir /home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ # Download rman_xttconvert_VER4.3.zip from MOS doc ID 2471245.1
[oracle@source]$ unzip rman_xttconvert_VER4.3.zip

Create a working directory (a.k.a. scratch location) that will hold the backups. Ensure that you have enough space at this location on both the source and target hosts.

[oracle@source]$ rm -rf /u01/app/oracle/xtts_scratch
[oracle@source]$ mkdir -p /u01/app/oracle/xtts_scratch

Create the same location on the target host:

[oracle@target]$ rm -rf /u01/app/oracle/xtts_scratch
[oracle@target]$ mkdir -p /u01/app/oracle/xtts_scratch

Configure your migration in xtt.properties. In this demo the file looks like this:

tablespaces=SALES
platformid=13
src_scratch_location=/u01/app/oracle/xtts_scratch
dest_scratch_location=/u01/app/oracle/xtts_scratch
dest_datafile_location=+DATA
asm_home=/u01/app/19.0.0.0/grid
asm_sid=+ASM1
parallel=4
rollparallel=4
getfileparallel=4
metatransfer=1
dest_user=oracle
dest_host=<target_ip>
desttmpdir=/u01/app/oracle/xtts_scratch
srcconnstr=sys/<password>@sales
destconnstr=sys/<password>@newsales
usermantransport=1

A little explanation:

  • platformid is set to 13 because this is a Linux migration. You can get the number by querying v$transportable_platform (see the query after this list).
  • Adjust the parallel options according to the capabilities of the source and target system.
  • When you use an ASM disk group in dest_datafile_location, you must also set asm_home and asm_sid.
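
To look up the platform ID and endian format, you can run:

SOURCE/SALES SQL> select platform_id, platform_name, endian_format
                  from v$transportable_platform
                  order by platform_id;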

Finally, copy the scripts (and the configuration) to your target system:

[oracle@source]$ scp -r /home/oracle/xtts/ <target_ip>:/home/oracle/

Initial Backup and Restore

Now you can start the initial backup of the database. You take it while the source database is up and running, so it doesn’t matter if the backup/restore cycle takes hours or days to complete:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

The perl script has been configured to automatically transfer the backups to the target system. In addition to that, a small text file, res.txt, must be transferred as well:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

Now, on the target system, you can restore the backup that was just taken. If needed, the data files are automatically converted to the proper endian format; in that case, you need space for a copy of all the data files:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Incremental Backup and Restore

You can – and should – run the incremental backups and restores as many times as possible. The more frequently you run them, the faster they complete, because there are fewer changes to transfer. As the migration downtime window approaches, run them often to minimize the time it takes to perform the final backup and restore:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

Transfer res.txt:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

And restore on the target system:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Final Incremental Backup and Restore

Gather dictionary statistics to ensure the Data Pump export runs as fast as possible:

SOURCE/SALES SQL> exec dbms_stats.gather_dictionary_stats;

Now downtime starts! Set the tablespaces read-only:

SOURCE/SALES SQL> alter tablespace SALES read only;
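
If you are transporting many tablespaces, you can generate the statements instead of typing them (a sketch; review the output before running it):

SOURCE/SALES SQL> select 'alter tablespace ' || tablespace_name || ' read only;'
                  from dba_tablespaces
                  where contents = 'PERMANENT'
                  and tablespace_name not in ('SYSTEM', 'SYSAUX');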

Perform the final incremental backup:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

You will receive an error because the tablespace is now read-only. The error is ignorable, since the tablespace was set read-only on purpose.

Transfer res.txt:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

And restore on the target system:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Import Metadata Using FTEX

Create a directory object that points to the xtts folder:

TARGET/SALES SQL> create directory LOGDIR as '/home/oracle/xtts';

Next, create a database link to the source database that can be used to import the metadata. If the source database is already a PDB, ensure that the database link points directly into the PDB:

TARGET/SALES SQL> create public database link SRCLNK connect to system identified by <password> using '//<source_ip>:1521/<service_name>';

Test that it works:

TARGET/SALES SQL> select * from dual@srclnk;

Next, create a par file (sales_imp.par) that you can use for the Data Pump import (see appendix below for explanation):

network_link=SRCLNK
full=y
transportable=always
metrics=y
logtime=all
exclude=TABLE_STATISTICS,INDEX_STATISTICS
exclude=SYS_USER
exclude=TABLESPACE:"IN('TEMP')"
exclude=SCHEMA:"IN('SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR')"
directory=logdir
logfile=sales_imp.log
transport_datafiles='+DATA/CDB1_FRA2VD/B2D617FCB79B0684E053AF01000A6DCE/DATAFILE/SALES.281.105552609'

Start Data Pump and perform the import. newsales is a TNS alias that points into the SALES PDB in the target CDB. If you have encrypted tablespaces, you should use the option encryption_pwd_prompt, which allows you to input the TDE keystore password. It can be omitted if there are no encrypted tablespaces.

$ impdp system@newsales parfile=sales_imp.par encryption_pwd_prompt=yes

Once the import has completed, you should examine the Data Pump log file for any critical errors. Check the appendix (see below) for ignorable errors:

[oracle@target]$ vi /home/oracle/xtts/sales_imp.log
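
If you prefer, you can do a quick scan for errors first:

[oracle@target]$ grep ORA- /home/oracle/xtts/sales_imp.log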

That’s it! Your data has been migrated. Now would be a good time to:

  • Check data files for corruption using RMAN VALIDATE command

    Although not mandatory, it is recommended if time allows. It is a read-only check that you can run while other work is happening in the database. See step 6.1 in MOS doc ID 2471245.1. A minimal example follows this list.

  • Gather dictionary statistics

  • Test your application

  • Start a backup

  • Gather statistics – they were excluded from the export

  • Drop the database link that points to the source database

  • Clean up the file system:

    • /home/oracle/xtts
    • /u01/app/oracle/xtts_scratch
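
For the RMAN validation mentioned above, a minimal sketch is:

RMAN> validate check logical database;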

Conclusion

Even huge, TB-sized databases can be migrated with very little downtime by using incremental backups. Using the perl scripts from My Oracle Support combined with Full Transportable Export/Import makes it a simple process. In addition, you can migrate to a new endian format, to a higher release, and into a PDB in one operation. It requires Enterprise Edition, and you must have plenty of disk space – potentially twice the size of your database.

There is a video on our YouTube channel that demos the entire process. I suggest that you subscribe to our channel to get notified whenever there are new videos.

Thanks to my good colleague, Robert Pastijn, for supplying a runbook that was used as inspiration.

Further Reading

Other Blog Posts in This Series

Appendix

If Source Database Is in OCI and Automatic Backup Is Enabled

If the source database is running in OCI and you have enabled automatic backup, you must make a few changes.

In xttprep.tmpl around line 319 change:

cp('backup for transport allow inconsistent ' ||

to

cp('set encryption off for all tablespaces;set compression algorithm "basic";backup for transport allow inconsistent ' ||

In xttdriver.pl around line 4268 change:

my $rman_str1 = "set nocfau;";

to

my $rman_str1 = "set nocfau;".
                "set encryption off for all tablespaces ;".
                "set compression algorithm 'basic' ;" ;

ORA-02085

If you get ORA-02085 when querying over the database link:

TARGET/SALES SQL> alter system set global_names=false;

ORA-39187, ORA-39921 And ORA-39922

If the Data Pump job aborts and complains about objects named CLI_SWPXXXXXXXX or SYS_ILYYYYYYYYY:

ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39921: Default Partition (Table) Tablespace SYSAUX for CLI_SWPXXXXXXXX not contained in transportable set.
ORA-39922: Default Partition (Index) Tablespace SYSAUX for SYS_ILYYYYYYYYY not contained in transportable set.
Job "SYS"."SYS_EXPORT_FULL_01" stopped due to fatal error at Wed Jul 18 13:51:01 2018 elapsed 0 00:05:55

You should have a look at MOS note ORA-39187 & ORA-39921 for CLI_SWP$ tables while performing Full Transportable Tablespace export (Doc ID 2426177.1). The objects are related to Unified Auditing.

Data Pump Parameters

Use network_link to specify the name of the database link that points back to the source database.

full=y and transportable=always instructs Data Pump to perform a full transportable export/import.

exclude=TABLE_STATISTICS,INDEX_STATISTICS excludes statistics from the import. It is better and faster to gather new, fresh statistics on the target database. If you insist on importing your statistics, you should use DBMS_STATS instead.

exclude=SYS_USER excludes the import of the SYS user. In a PDB that is not even allowed, and most likely you are not interested in importing the definition of the SYS user.

exclude=TABLESPACE:"IN('TEMP')" excludes the temporary tablespace from the import. Most likely there is already a temporary tablespace in the new, target PDB. It is faster to create a TEMP tablespace in advance – and name it the same as in the source database.

A change was made to Spatial in 19c and some Spatial admin users are removed. To avoid errors/noise in the log file you can safely exclude them from the import by specifying exclude=SCHEMA:"IN('SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR')".

transport_datafiles is used to specify the data files that make up the tablespaces you are transporting. Specify the parameter multiple times to list more data files. You can use asmcmd to get the data file paths and names.

Data Pump Ignorable Errors

Oracle Multimedia is desupported in 19c, but the code is still there. You can safely disregard this error:

Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
ORA-39342: Internal error - failed to import internal objects tagged with ORDIM due to ORA-00955: name is already used by an existing object.

The package was removed in 12.2. See ORA-39083 And ORA-04042 Errors On DBMS_DEFER_SYS When Importing Into 12.2 Database (Doc ID 2335846.1):

Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
ORA-39083: Object type PROCACT_SYSTEM failed to create with error:ORA-04042: procedure, function, package, or package body does not exist

Failing sql is:
BEGIN
SYS.DBMS_UTILITY.EXEC_DDL_STATEMENT('GRANT EXECUTE ON DBMS_DEFER_SYS TO "DBA"');COMMIT; END;