XTTS: Introduction – Minimal Downtime Migration with Full Transportable Export Import and Incremental Backups

If you need to migrate a database to the cloud, or anywhere else for that matter, you should consider using cross-platform transportable tablespaces and incremental backups (XTTS). Even really large databases – tens or hundreds of TB – can be migrated with minimal downtime. And it works across different endian formats. In fact, this method is used for the majority of cross-endian projects.

In addition to minimal downtime, XTTS has the following benefits:

  • You can implicitly upgrade the database by migrating directly into a higher release
  • You can migrate from a non-CDB and into a PDB
  • You can keep downtime at a minimum by using frequent incremental backups
  • You can migrate across endianness – e.g. from AIX or Solaris to Oracle Linux

Endian-what?

Endianness is determined by the platform, that is, the combination of hardware and operating system. Simplified, it determines the order in which the bytes of a word are stored in memory:

  • Big endian: stores the most significant byte of a word at the smallest memory address.
  • Little endian: stores the least significant byte of a word at the smallest memory address.
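
If you are curious, here is a quick, illustrative check from a shell, assuming GNU coreutils printf and od (which reads the four bytes back as one 32-bit word in host byte order):

$ printf '\x01\x02\x03\x04' | od -An -tx4
04030201

A little-endian machine, like x86, prints 04030201; a big-endian machine would print 01020304.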

Wikipedia has an article for the interested reader.

Which platform uses which endian format? There is a query for that:

SQL> select platform_name, endian_format from v$transportable_platform;
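
Abridged sample output; the exact platform list depends on your Oracle Database release:

PLATFORM_NAME                    ENDIAN_FORMAT
-------------------------------- --------------
AIX-Based Systems (64-bit)       Big
Solaris[tm] OE (64-bit)          Big
Linux x86 64-bit                 Little
Microsoft Windows x86 64-bit     Little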

If your source and target platforms do not use the same endian format, you need a cross-endian migration.

How Does It Work?

The concept is explained in a video on our YouTube channel.

Basically, you need to migrate two things:

  • Data
  • Metadata

Data

The data itself is stored in data files, and you will be using transportable tablespaces to move it. If the source and target platforms use different endian formats, the data files must be converted to the target format. Only the data files that make up user tablespaces are transported; the system tablespaces, like SYSTEM and SYSAUX, are not.
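
To illustrate what happens under the hood, a data file can be converted on the target host with RMAN. This is only a sketch with hypothetical file locations; the Perl script introduced below runs the equivalent steps for you:

RMAN> CONVERT DATAFILE '/stage/users01.dbf'
      FROM PLATFORM 'AIX-Based Systems (64-bit)'
      FORMAT '/u02/oradata/CDB1/PDB1/users01.dbf';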

If you have a big database, copying the data files takes a long time. Often this is a problem because the downtime window is short. To overcome this, you can use a combination of RMAN full backups (backup sets or image copies) and incremental backups.

There are a few ways to do this, which are covered in the following MOS notes:

The first two methods use version 3 of a Perl script (xttdriver.pl), whereas the last method uses version 4 of the same script. Version 4 offers a much simpler procedure, and I will use that version for this blog post series.

Version 4 of the Perl script has a list of requirements. If your project can’t meet these requirements, check whether the previous version 3 can be used instead.
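
To give you an idea of the workflow, here is a sketch of the version 4 roll-forward loop. The property names, values, and paths are purely illustrative, so verify the exact settings against the MOS note:

$ cat xtt.properties
tablespaces=USERS,APPDATA
platformid=6
src_scratch_location=/stage/src
dest_datafile_location=/u02/oradata/CDB1/PDB1
dest_scratch_location=/stage/dest

# On the source host: the first run creates a level 0 backup,
# subsequent runs create level 1 incrementals
$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

# Transfer res.txt and the new backup pieces, then on the target host:
$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

You repeat this backup/restore cycle until the downtime window, where you set the tablespaces read-only and take the final incremental backup.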

Metadata

Transferring the data files is just one part of the project. Information about what is inside the data files, the metadata, is missing, because the system tablespaces were not transferred. The target database needs that metadata; without it, the data files are useless. The transportable tablespace concept does not work for system tablespaces, so instead we use Data Pump to move the metadata.

You can use either:

  • Traditional transportable tablespace
  • Or, the newer full transportable export/import (FTEX)

For this blog post series, I am only focusing on FTEX. But if you run into problems with FTEX, or if you can’t meet any of the FTEX requirements, you should look into the traditional transportable tablespace method.
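
As an example, a full transportable import over a database link could look like the one below. It is a sketch only; the net service name, database link, directory object, and file paths are hypothetical:

$ impdp system@target_pdb \
    full=y transportable=always \
    network_link=sourcedb_link \
    transport_datafiles='/u02/oradata/CDB1/PDB1/users01.dbf' \
    metrics=y logfile=mydir:ftex_imp.log

With a network link there is no dump file to handle; Data Pump pulls the metadata directly from the source database. If the source database is 11.2.0.4, you must also specify version=12 to enable full transportable mode.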

Here are a few examples of metadata that Data Pump must transfer:

  • Users
  • Privileges
  • Packages, procedures and functions
  • Table and index definitions (the actual rows and index blocks are in the data files)
  • Temporary tables
  • Views
  • Synonyms
  • Directories
  • Database links
  • And so forth

Conclusion

By using a combination of cross-platform transportable tablespaces and incremental backups, you can migrate even huge databases to the cloud. And it even works for cross-endian migrations, like AIX or Solaris to Oracle Linux.

You can also watch the videos on cross-platform transportable tablespaces in our YouTube playlist.


4 thoughts on “XTTS: Introduction – Minimal Downtime Migration with Full Transportable Export Import and Incremental Backups”

  1. A major issue with the xTTS Perl scripts V4 approach is that the process starts with RMAN taking file image copies of the NONCDB – hence the need to have storage space available to copy your NONCDB application data. If an organisation needs to use incremental backups to migrate – i.e. it has a huge, business-critical database that can only afford a brief window of downtime – then it almost certainly is already backing up using the Oracle-recommended incremental strategy (e.g. Level 0 followed by Level 1, usually in weekly cycles).

    To avoid taking and transporting another copy of the application’s data files, it’s much faster and more efficient to RESTORE FOREIGN and RECOVER FOREIGN the existing Level 0 and 1 into the target PDB file directory until the desired migration cut-over.

    Maybe also worth mentioning that if it takes a week to roll-forward a huge database in this way, then in the interim period before migration, some applications like DWH will have added / dropped tablespaces – hence, use of the NETWORK_LINK option for the DATAPUMP FULL=YES TRANSPORTABLE=ALWAYS is preferred since you can then easily generate the required TRANSPORT_DATAFILES clauses.


  2. Hi Mark,

    Thanks for your comments. And you are right about the disk space requirements, and that it is a good idea to use a database link to extract the metadata for the full transportable export import. Once time allows, I will update the post. Thanks.
    Also, the idea of using RESTORE FOREIGN is put on my list for a future blog post.

    Thanks,
    Daniel


  3. Hi Mark,

    I am performing a migration with V4 XTTS from AIX to Linux, and I have encountered an error that I did not expect when running the backup command:

    $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

    Everything went correctly, but when I looked at the res.txt file, it did not match the backup pieces with the data files. Here is an example:

    [oracle@targetserver tempxtt]$ cat res.txt
    #0:::101,6,TS_CIERR_DAT_101.dbf,0,1060603814,0,0,0,TS_CIERR_DAT,TS_CIERR_DAT_101.dbf
    #0:::124,6,TS_CIERR_IDX_124.dbf,0,1060603814,0,0,0,TS_CIERR_IDX,TS_CIERR_IDX_124.dbf
    #0:::125,6,TS_CIERR_LOB_125.dbf,0,1060603814,0,0,0,TS_CIERR_LOB,TS_CIERR_LOB_125.dbf

    Whereas you should see something like the following:

    [oracle@targetserver tempxtt]$ cat res.txt
    #0:::101,6,TS_CIERR_DAT_101_6fvu8qtt_1_1.bkp,0,1060603814,0,0,0,TS_CIERR_DAT,TS_CIERR_DAT_101.dbf
    #0:::124,6,TS_CIERR_IDX_124_6gvu8qu4_1_1.bkp,0,1060603814,0,0,0,TS_CIERR_IDX,TS_CIERR_IDX_124.dbf
    #0:::125,6,TS_CIERR_LOB_125_6hvu8qua_1_1.bkp,0,1060603814,0,0,0,TS_CIERR_LOB,TS_CIERR_LOB_125.dbf

    Any thoughts on this?
    Thank you very much for everything.


  4. Hi Antonio,

    Have you tried to restore the backup? I have never taken a look into that file, so I don’t know what you should expect of the content. Try to restore and post the error to the blog, if any.

    Regards,
    Daniel

