If you need to migrate a database to the cloud, or anywhere else for that matter, you should consider using cross-platform transportable tablespaces and incremental backups (XTTS). Even really large databases – tens or hundreds of TB – can be migrated with minimal downtime. And it works across different endian formats; in fact, this method is used for the majority of cross-endian projects.
In addition to minimal downtime, XTTS has the following benefits:
- You can implicitly upgrade the database by migrating directly into a higher release
- You can migrate from a non-CDB into a PDB
- You can keep downtime at a minimum by using frequent incremental backups
- You can migrate across endianness – e.g. from AIX or Solaris to Oracle Linux
Endian-what?
Endianness is determined by the platform. Simplified, it determines the order in which bytes are stored in memory:
- Big endian: stores the most significant byte of a word at the smallest memory address.
- Little endian: stores the least significant byte of a word at the smallest memory address.
Wikipedia has an article for the interested reader.
Which platform uses which endian format? There is a query for that:
SQL> select platform_name, endian_format from v$transportable_platform;
If your source and target platforms do not use the same endian format, you need a cross-endian migration.
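For example, for a migration from AIX to Linux you can narrow the query down to the two platforms involved. The platform names below are what I would expect in v$transportable_platform; verify them against the full view output in your own database:

```sql
-- Compare the endian format of the source and target platforms.
-- The platform names are assumptions; check the full view output first.
select platform_name, endian_format
from   v$transportable_platform
where  platform_name in ('AIX-Based Systems (64-bit)', 'Linux x86 64-bit');
```

If one row shows Big and the other Little, you are facing a cross-endian migration.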
How Does It Work?
The concept is explained in this video on our YouTube channel:
Basically, you need to migrate two things:
- Data
- Metadata
Data
The data itself is stored in data files and you will be using transportable tablespaces for this. Since the source and target platform are not the same, the data files must be converted to the new format. Only the data files that make up user tablespaces are transported. The system tablespaces, like SYSTEM and SYSAUX, are not transported.
If you have a big database, it will take a lot of time to copy the data files. Often this is a problem because the downtime window is short. To overcome this you can use a combination of RMAN full backups (backup set or image copies) and incremental backups.
There are a few ways to do this, which are covered in the following MOS notes:
- Version 3: 11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
- Version 3: 12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
- Version 4, preferred: Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)
The first two methods use version 3 of a Perl script (xttdriver.pl), whereas the last method uses version 4 of the same script. Version 4 offers a much simpler procedure, and it is the version I will use for this blog post series.
Version 4 of the Perl script has a list of requirements. If your project can’t meet these requirements, check if the previous version 3 can be used.
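As a hedged sketch of the version 4 workflow (the authoritative steps and the full list of xtt.properties settings are in MOS note 2471245.1), the backup/restore loop looks roughly like this:

```sh
# Sketch only - paths and the exact invocation depend on your setup.
# On the source host: level 0 backup on the first run, incremental
# backups on subsequent runs, driven by xtt.properties.
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

# Transfer the backup pieces and the res.txt metadata file to the
# target host, then restore / roll forward the foreign data files:
$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

# Repeat the backup/transfer/restore cycle until cut-over. For the
# final incremental, the source tablespaces are set read-only.
```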
Metadata
Transferring the data files is only part of the project. Information about what is inside the data files – the metadata – is missing, because the system tablespaces were not transferred. The target database needs the metadata; otherwise, the data files are useless. Transportable tablespaces cannot carry the system tablespaces, so instead we use Data Pump for the metadata.
You can use either:
- Traditional transportable tablespace
- Or, the newer full transportable export/import (FTEX)
For this blog post series, I am only focusing on FTEX. But if you run into problems with FTEX, or if you can’t meet any of the FTEX requirements, you should look into the traditional transportable tablespace method.
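To give an idea of what FTEX looks like, here is a hedged sketch of a file-based export and import; the directory object, dump file name, and data file path are made up for illustration:

```sh
# Assumed names throughout: directory object MYDIR, dump file ftex.dmp,
# service targetpdb, and the data file path on the target.
# VERSION=12 is needed when the source is an 11.2.0.4 database.
expdp system full=y transportable=always version=12 \
      directory=mydir dumpfile=ftex.dmp logfile=ftex_exp.log

# On the target PDB, after the converted data files are in place.
impdp system@targetpdb full=y directory=mydir dumpfile=ftex.dmp \
      transport_datafiles='/u02/oradata/TARGETPDB/users01.dbf' \
      logfile=ftex_imp.log
```

With multiple data files you simply repeat the transport_datafiles parameter.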
Here are a few examples of metadata that Data Pump must transfer:
- Users
- Privileges
- Packages, procedures and functions
- Table and index definitions (the actual rows and index blocks are in the data files)
- Temporary tables
- Views
- Synonyms
- Directories
- Database links
- And so forth
Conclusion
By using a combination of cross-platform transportable tablespaces and incremental backups, you can migrate even huge databases to the cloud. And it even works for cross-endian migrations, like AIX or Solaris to Oracle Linux.
You can also visit our YouTube playlist to watch videos on cross-platform transportable tablespaces.
Further Reading
- Transporting Data Across Platforms, Backup and Recovery User’s Guide 19c
- My Oracle Support: Master Note for Transportable Tablespaces (TTS) — Common Questions and Issues (Doc ID 1166564.1)
- My Oracle Support: Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable (Doc ID 1454872.1)
- Mike Dietrich: Different MOS Notes for xTTS PERL scripts – Use V4 scripts
Other Blog Posts in This Series
- Introduction
- Prerequisites
- Full Transportable Export/Import
- What Is a Self-contained Transportable Tablespace Set
- How to Migrate a Database Using Full Transportable Export/Import and Incremental Backups
- Make It Fast
- Backup on Standby Database
- Slow Network Between Source and Target
- Bigfile Tablespaces
- Testing the Procedure On Activated Standby Database
- Testing the Procedure Multiple Times Using Read-Only Tablespaces
- Target Database and Data Guard
- ASM Aliases and Why You Should Get Rid of Them
- How to use Transportable Tablespace with TDE Tablespace Encryption
- Pro Tips
A major issue with the xTTS Perl scripts V4 approach is that the process starts with RMAN taking image copies of the non-CDB data files – hence the need to have storage space available for a copy of your non-CDB application data. If an organisation needs to use incremental backups to migrate – i.e. it has a huge, business-critical database that can only afford a brief window of downtime – then it almost certainly is already backing up using the Oracle-recommended incremental strategy (e.g. level 0 followed by level 1, usually in weekly cycles).
To avoid taking and transporting another copy of the application’s data files, it’s much faster and more efficient to RESTORE FOREIGN and RECOVER FOREIGN the existing Level 0 and 1 into the target PDB file directory until the desired migration cut-over.
Maybe also worth mentioning that if it takes a week to roll forward a huge database in this way, then in the interim period before migration some applications, like a DWH, will have added or dropped tablespaces. Hence, the NETWORK_LINK option for the Data Pump FULL=YES TRANSPORTABLE=ALWAYS import is preferred, since you can then easily generate the required TRANSPORT_DATAFILES clauses.
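A hedged sketch of that network-link variant (the database link name, service name, and data file path below are made up):

```sh
# Import metadata directly over a database link - no dump file needed.
# SOURCEDB, targetpdb, and the data file path are placeholder names.
impdp system@targetpdb full=y transportable=always \
      network_link=SOURCEDB version=12 \
      transport_datafiles='/u02/oradata/TARGETPDB/users01.dbf' \
      logfile=ftex_imp.log
```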
Hi Mark,
Thanks for your comments. You are right about the disk space requirements, and it is a good idea to use a database link to extract the metadata for the full transportable export/import. When time allows, I will update the post. Thanks.
Also, the idea of using RESTORE FOREIGN is put on my list for a future blog post.
Thanks,
Daniel
Hi Mark,
I am performing a migration with V4 XTTS from AIX to linux, and I have encountered an error that I did not expect, when performing the backup command:
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
everything went correctly, but when I looked at the res.txt file, the backup pieces did not match the data files. Here is an example:
[oracle@targetserver tempxtt]$ cat res.txt
#0:::101,6,TS_CIERR_DAT_101.dbf,0,1060603814,0,0,0,TS_CIERR_DAT,TS_CIERR_DAT_101.dbf
#0:::124,6,TS_CIERR_IDX_124.dbf,0,1060603814,0,0,0,TS_CIERR_IDX,TS_CIERR_IDX_124.dbf
#0:::125,6,TS_CIERR_LOB_125.dbf,0,1060603814,0,0,0,TS_CIERR_LOB,TS_CIERR_LOB_125.dbf
And instead you should see something like the following:
[oracle@targetserver tempxtt]$ cat res.txt
#0:::101,6,TS_CIERR_DAT_101_6fvu8qtt_1_1.bkp,0,1060603814,0,0,0,TS_CIERR_DAT,TS_CIERR_DAT_101.dbf
#0:::124,6,TS_CIERR_IDX_124_6gvu8qu4_1_1.bkp,0,1060603814,0,0,0,TS_CIERR_IDX,TS_CIERR_IDX_124.dbf
#0:::125,6,TS_CIERR_LOB_125_6hvu8qua_1_1.bkp,0,1060603814,0,0,0,TS_CIERR_LOB,TS_CIERR_LOB_125.dbf
Any thoughts on this?
Thank you very much for everything.
Hi Antonio,
Have you tried to restore the backup? I have never taken a look into that file, so I don’t know what you should expect of the content. Try to restore and post the error to the blog, if any.
Regards,
Daniel
The restore uses new file names instead of the original file names. For example, user_01.dbf, user_02.dbf, and tool_01.dbf change to user_04.dbf, user_05.dbf, and tool_06.dbf. It will still work, but why change the file names?
Hi Mao,
I actually don’t know the reason for doing this. I never noticed it when I worked with the script. Most of the time I use OMF (usually on ASM), and that way the database is in control of the file names.
Regards,
Daniel
Nice post Daniel.
Your post has helped me migrate a 70 TB database from AIX to Exadata.
I have another requirement to migrate two Oracle databases (10.2.0.5, non-CDB) on AIX to Exadata Cloud@Customer and consolidate both into one CDB with one PDB.
Please, will the following approach work?
1) Create a 19c CDB and a PDB on destination Exadata
2) Use XTTS v3 with incremental backups to migrate source database 1 from the 10g non-CDB on AIX into the 19c PDB on the destination Exadata
3) For source database 2, perform data pump export of the schemas and import into destination PDB
Please, is there an alternative way to consolidate two source databases into the same container database at the destination?
Regards
Wale
Hi Wale,
I’m really glad that you are modernizing your estate. The project is quite a mouthful, but of course it’s doable.
I can’t recall the prerequisites for XTTS v3 but check the MOS note and use it if possible.
I assume there is a good reason why both databases must go into the same PDB? The best solution would be to use two different PDBs.
Nevertheless, theoretically you should be able to migrate two databases into the same PDB using transportable tablespaces. You might have a lot of conflicting objects/schemas but that you can probably handle. XTTS v3 and v4 don’t handle multiple migrations at the same time very well, so you probably need to do them one after the other.
How big are the databases? Have you considered doing Data Pump for both? I think that might be easier. Plus, it allows you to e.g. convert LOBs from BasicFile to SecureFile in both databases.
Regards,
Daniel
Thank you, Daniel.
Yes, there is a business decision to consolidate both source databases into the same database.
Also, I decided to use XTTS v3 because v4 doesn’t work with source databases older than 11.2.0.4.
One database is 45 TB, which is why Data Pump is not an option for it. The second database is 2 TB, so I will use Data Pump to migrate it. Both source databases are 10.2.0.5.
We are really enjoying your blog, very helpful
Regards
Wale
Hi Wale,
45 TB might be a lot for Data Pump, but there are several things you can do to significantly speed up such a migration. If you run into issues with XTTS v3, perhaps consider Data Pump. I might be able to give you some additional pointers.
Thanks for the positive feedback. Much appreciated.
Regards,
Daniel
Hello Daniel
Thank you very much for your great input. Much appreciated.
We are considering using bi-directional GG replication now.
This is because the source database is on AIX with the character set WE8ISO8859P1, while the destination character set will be WE8MSWIN1252.
Since source and destination have different character sets (and different endianness), what would you consider the best way to do the initial load for GoldenGate?
Data Pump has a tendency to replace unrecognized characters with “?”, which our customer does not want.
Hi,
Before completely disregarding cross-platform transportable tablespaces, please pay attention to the character set restrictions. The requirement is that the two character sets must be compatible. WE8ISO8859P1 is a binary subset of WE8MSWIN1252, so it might be possible to use XTTS for the migration.
https://docs.oracle.com/en/database/oracle/oracle-database/19/spmdu/general-limitations-on-transporting-data.html#GUID-28800719-6CB9-4A71-95DD-4B61AA603173
https://docs.oracle.com/en/database/oracle/oracle-database/19/nlspg/appendix-A-locale-data.html#GUID-02A80824-EDC9-4D90-BE9E-BEF67F7B60EE
XTTS would be much easier than OGG with bi-directional replication. Please evaluate the XTTS requirements. It seems as if your case will match it.
Nevertheless, for the initial load you can use Data Pump, even going cross-platform and to a different character set. Data Pump is much better at this than the old export utility. Normally, Data Pump does not import any “?” or inverted “?” unless you’ve made a mistake. Data Pump always exports and imports irrespective of your NLS settings, so it should be safe. It should handle the conversion without issues in your case.
Regards,
Daniel
Thank you, Daniel. This is brilliant and very useful.
Much appreciated.
Wale