When migrating an Oracle Database to a different endian format using transportable tablespaces and incremental backups (XTTS), a number of requirements must be met.
The most important requirements are listed below – for a complete list, check the MOS note:
Windows is not supported.
RMAN on the source system must not have DEVICE TYPE DISK configured to COMPRESSED.
RMAN on the source system must not have default channel configured to type SBT.
For Linux: Minimum version for source and destination is 11.2.0.3.
Other platforms: Minimum version for source and destination is 12.1.0.2.
Disk space for a complete backup of the database on both source and target host. If your data files take up 100 TB, you need an additional 100 TB of free disk space. For 12c databases, and if your data files have a lot of free space, the backup might be smaller due to RMAN unused block compression.
Also worth mentioning: during the roll forward phase (applying level 1 incrementals), the Perl script needs to restart the target database. The incremental backups are applied to the target data files in NOMOUNT mode. Be sure nothing else uses the target database while you roll forward.
Block Change Tracking (BCT) is strongly recommended on the source database. Note that this is an Enterprise Edition feature (in OCI: DBCS EE-EP or ExaCS). If you don’t enable BCT, the incremental backups will be much slower because RMAN has to scan every single data block for changes. With BCT, the database keeps track of changes in a special file, and when RMAN backs up the database, it simply gets a list of data blocks to include from the change tracking file.
What If – V3 Perl Script
If disk space is a problem or if you can’t meet any of the other requirements, check out the below two MOS notes:
They describe a previous version of the Perl script, version 3. The scripts use DBMS_FILE_TRANSFER to perform the conversion of the data files in-flight. That way no extra disk space is needed. However, DBMS_FILE_TRANSFER has a limitation that data files can’t be bigger than 2 TB.
Also, the V3 scripts might be useful for very old databases.
Transportable Tablespaces In General
To get a complete list of limitations on transporting data, you should look in the documentation. The most notable are:
Source and target database must have compatible character sets. If the character sets are not identical, check the documentation for details.
No columns can be encrypted with TDE Column Encryption. The only option is to remove the encryption before migration and re-encrypt afterward.
TDE Tablespace Encryption is supported for same-endian migration if the source database is 12.1.0.2 or newer. If you need to go across endianness, you must decrypt the tablespaces and re-encrypt after migration. Remember, Oracle Database 12.2 can encrypt tablespaces online.
If you are migrating across endianness, you must convert the data files. You must have disk space to hold a copy of all the data files. In addition, you should perform the conversion on the platform that has the best I/O system and most CPUs. Typically, this is the cloud platform, which also offers scaling possibilities.
Requires Enterprise Edition.
The database time zone file version in the target database must be equal to or higher than that of the source database.
The database time zone must match if you have tables with TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ). If you have such tables, and the database time zone does not match, those tables are skipped during import. You can then move the affected tables using a normal Data Pump table mode export and import.
To check the database time zone:
SQL> select dbtimezone from dual;
You can alter the time zone for a database with an ALTER DATABASE statement.
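For example – a sketch; the time zone value is just an illustration, the change requires a restart, and it is not allowed if tables with TSLTZ data already exist:
SQL> alter database set time_zone = 'Europe/Copenhagen';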
Full Transportable Export/Import
FTEX automates the process of importing the metadata. It is simpler to use and automatically includes all the metadata in your database. Compared to a traditional transportable tablespace import, FTEX is a lot easier and removes a lot of manual work from the end user. But there are a few requirements that must be met:
Source database must be 11.2.0.3 or higher.
Target database must be 12.1.0.1 or higher.
Requires Enterprise Edition.
COMPATIBLE must be set to 12.0.0 or higher in both source and target database. If your source database is an Oracle Database 11g, this is not possible. In that case, set version to 12 or higher during Data Pump export instead.
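For example, a sketch of the relevant export parameters in that case (standard Data Pump parameters; adjust to your migration):
full=y
transportable=always
version=12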
If you can’t meet the requirements, check out traditional transportable tablespaces. It has different requirements, and it allows more customization.
When doing the XTTS blog post series, I came across a lot of valuable information. These nuggets were not suitable for a dedicated blog post but are still worth sharing.
Pro Tip 1: The Other Backups
When you are preparing for an XTTS migration, you will be doing a lot of backups of the source database. Will those backups somehow interfere with your existing backups?
The Perl script takes the first backup – the initial level 0 backup – using:
RMAN> backup for transport .... ;
It is a backup created for cross-platform transport and not something to be used to restore the source database. The documentation states about cross-platform backups:
RMAN does not catalog backup sets created for cross-platform transport in the control file. This ensures that backup sets created for cross-platform transport are not used during regular restore operations.
This is good because it ensures that your recovery strategy will not take those backups into account. Most likely, you will be moving the files from the source system pretty soon, and in that case, you don’t want RMAN depending on them.
But after the initial level 0 backup, you will create level 1 incremental backups. The incremental backups are just regular incremental backups:
RMAN> backup incremental from scn ... tablespace ... ;
It is not a cross-platform backup, so it will be recorded in the control file and/or recovery catalog. Once you move those incremental backups away from the source system, be sure to tidy them up in the RMAN catalog:
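For example – a sketch, assuming the backup pieces have already been moved off the source host:
RMAN> crosscheck backup;
RMAN> delete noprompt expired backup;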
To sum up: while preparing for the migration, keep taking your regular backups.
Pro Tip 2: Data Pump Parameters
Use a parameter file for your Data Pump export and import. Especially, the import will be a lot easier because you don’t need to write a very long command line with character escapes and the like:
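For example, a minimal sketch of an import parameter file – the directory, dump file, and data file names are hypothetical and must be adjusted to your environment:
directory=MY_DIR
dumpfile=xtts_exp.dmp
logfile=xtts_imp.log
metrics=yes
logtime=all
transport_datafiles=+DATA/account_25.dbf
transport_datafiles=+DATA/accountidx_26.dbf
Then start the import with the parameter file:
$ impdp system parfile=xtts_imp.par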
Pro Tip 3: Generate Data Files Parameters For Data Pump
I generate the list of files that Data Pump needs with this command. It works if you are using ASM, and transport_datafiles will point to the alias – not the real file:
export ASMDG=+DATA
asmcmd ls -l $ASMDG | grep '^DATAFILE' | sed -n -e 's/ => .*//p' | sed -n -e 's/^.* N \s*/transport_datafiles='$ASMDG'\//p'
Pro Tip 4: Queries
Generate a comma separated list of tablespaces:
select
listagg(tablespace_name, ',') within group (order by tablespace_name)
from
dba_tablespaces
where
contents='PERMANENT'
and tablespace_name not in ('SYSTEM','SYSAUX');
Generate a comma separated list of tablespaces in n batches:
define batches=8
select
batch,
listagg(tablespace_name, ',') within group (order by tablespace_name)
from (
select
mod(rownum, &batches) as batch,
tablespace_name
from (
select
t.tablespace_name,
sum(d.bytes)
from
dba_tablespaces t,
dba_data_files d
where
t.tablespace_name=d.tablespace_name
and t.contents='PERMANENT'
and t.tablespace_name not in ('SYSTEM','SYSAUX')
group by
t.tablespace_name
order by 2 desc
)
)
group by batch;
Generate read-only commands:
select
'ALTER TABLESPACE ' || tablespace_name ||' READ ONLY;'
from
dba_tablespaces
where
contents='PERMANENT'
and tablespace_name not in ('SYSTEM','SYSAUX');
Pro Tip 5: Troubleshooting
Be sure to always run the Perl script with debug option enabled:
$ #Set environment variable
$ export XTTDEBUG=1
$ #Or use --debug flag on commands
$ $ORACLE_HOME/perl/bin/perl xttdriver.pl ... --debug 3
Now comes the interesting part. Say a DBA needs to export a particular schema that contains queues and import it into another database. If, at the end of the impdp operation, the DBA does a simple comparison of the object counts in the source schema and the target schema, there is a big chance the counts will not match – yet there will be no errors or warnings in the expdp/impdp logs.
This happens because the export/import does not consider the queue objects created on the fly on the source side, usually the ones ending in _P and _D. The target database may not have these objects, but they will be created later whenever the queue needs them. This is expected behavior, and the functionality works as intended. A better way to check whether everything has been imported successfully is to compare the total number of QUEUE type objects instead, for example:
SQL> select count(*) from DBA_OBJECTS where owner='&schema' and object_type = 'QUEUE';
Pro Tip 12: ORA-39218 or ORA-39216
If your Data Pump metadata import fails with the below error, you are having problems with evolved types. This blog post tells you what to do:
W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
ORA-39083: Object type TABLE:"APPUSER"."CARS" failed to create with error:
ORA-39218: type check on object type "APPUSER"."CAR_TYPE" failed
ORA-39216: object type "APPUSER"."CAR_TYPE" hashcode or version number mismatch
Failing sql is:
BEGIN SYS.DBMS_METADATA.CHECK_TYPE('APPUSER','CAR_TYPE','4','613350A0E37618A6108C500936D0C6162C',''); END;
In the last three customer cases I worked on, we solved the performance problems by using the latest Release Update plus Data Pump bundle patch. This is my number 1 advice.
Skip Statistics
I recommend that you skip statistics when you export and instead use one of these options:
1. Regather new statistics on the target database.
2. Import statistics from the source database using DBMS_STATS.
3. Import statistics from a test system using DBMS_STATS.
Options 1 and 3 are especially appealing if your target database is very different from the source. Imagine going from AIX to Exadata, from 11.2.0.4 to 19c, and from non-CDB to PDB. The platform itself is very different; Exadata has superior capabilities. In addition, it is a new release with other histogram types and a different architecture. In this case, it makes sense to gather new statistics that better reflect the new environment.
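To skip statistics during the export, you can use the standard Data Pump EXCLUDE parameter; a minimal sketch for the export parameter file:
exclude=statistics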
Data Pump uses Advanced Queueing which uses the streams pool in the SGA. If you have enough memory, I suggest that you allocate a big streams pool right from the beginning:
SQL> alter system set streams_pool_size=2G scope=both;
You don’t want the streams pool to eat too much of your buffer cache and shared pool, so if you are short on memory, find a more suitable value.
Database Complexity
The Data Pump export and import must be done during downtime. The time it takes to perform these two tasks is often critical.
How long will it take? It depends (classic IT answer)!
The complexity of your user data dictionary has the biggest impact. Generally, the more objects, the longer the export and import will take. Certain features, like partitioning, have a big impact as well. Reducing the complexity of the data dictionary is not always possible, but when it is, it can make a big difference. In some situations, I have seen old or obsolete data in the database, partitions that should have been archived, or entire groups of tables used by an application feature that was no longer in use.
Another thing to look at is invalid objects. If you have objects that can’t compile, check the reason and whether the object can be dropped. Often these invalid objects are just remnants from old times.
In a recent case I worked on, the customer found more than 1,000 old tables that could be dropped. This significantly decreased the downtime required for the migration.
Dump File or Network Mode
A thing to test: What works best in your migration: Data Pump in dump file mode or network mode?
Normally, we recommend dump file mode because it has much better parallel capabilities. But metadata export and import for transportable tablespace jobs happen serially anyway (until Oracle Database 21c), so there might be a benefit of using Data Pump in network mode. When using Data Pump in network mode, you just start the Data Pump import without first doing an export. The information is loaded directly into the target database over a database link.
Parallel Metadata Export and Import
Starting with Oracle Database 21c, Data Pump supports parallel export and import of metadata when using transportable tablespaces. Add the following to your Data Pump parameter file, where n is the level of parallelism:
parallel=n
If an export was made in a lower release that didn’t support parallel export, you can still import in parallel. Parallel metadata import works regardless of how the Data Pump export was made.
Auditing
If you are using Traditional Auditing, you can disable it in source and target database during the migration:
SQL> alter system set audit_trail=none scope=spfile;
SQL> alter system set audit_sys_operations=none scope=spfile;
SQL> shutdown immediate
SQL> startup
In at least one case, I’ve seen it make a difference for the customer.
Additionally, if you have a lot of traditional auditing data, I suggest you get rid of it (archive if needed, else delete it).
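One way to purge the traditional audit trail is the DBMS_AUDIT_MGMT package. A sketch that deletes all standard audit trail records regardless of the archive timestamp – adjust to your retention rules, and note that on older releases the cleanup infrastructure may first have to be initialized with DBMS_AUDIT_MGMT.INIT_CLEANUP:
SQL> exec dbms_audit_mgmt.clean_audit_trail(audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std, use_last_arch_timestamp => false)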
I strongly recommend that you apply the most recent Release Update to your source and target Oracle Database. Use the download assistant to find it.
Use Backup Sets
If both source and target databases are Oracle Database 12c or newer, you should set the following in xtt.properties:
usermantransport=1
RMAN will use backup sets using the new backup for transport syntax. Backup sets are better than image copies because RMAN automatically adds unused block compression. Unused block compression can shrink the size of the backup and improve performance.
Block Change Tracking
Enable block change tracking on the source database. Although strictly speaking not required, it is strongly recommended because it dramatically shortens the time it takes to perform incremental backups. It requires Enterprise Edition (on-prem), DBCS EE-EP (cloud), or Exadata:
SQL> select status, filename from v$block_change_tracking;
SQL> alter database enable block change tracking;
If you look in xtt.properties, there is a parameter called parallel. What does it do?
It controls the number of batches in which the backup and restore/recover commands run. The Perl script will split the tablespaces into n batches – n is parallel from xtt.properties. One batch will process all the data files belonging to those tablespaces.
If you have 20 tablespaces and parallel is set to 4, the Perl script runs four batches of five tablespaces each. If each tablespace has three data files, each batch processes 15 data files.
Each batch processes n data files at the same time, n being the default parallelism assigned to the disk channel. To find the current parallelism (here it is two):
RMAN> show all;
...
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
...
If you want to change it to eight:
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
When you restore and convert the data files on the target database, it will also use the RMAN configuration parameter.
To enable parallel backup and restore, be sure to change the default disk parallelism on both source and target database host.
For image copy backups (usermantransport=0), the data file conversion on the target database uses the parallel degree specified in the xtt.properties parameter parallel. Backup sets are converted using the RMAN configuration parameter.
Multiple Perl Scripts
If you really want to squeeze the very last drop of performance out of your system, or if you want to use multiple RAC nodes, you can use multiple Perl scripts.
Normally, you only have one Perl script with corresponding files like xtt.properties:
[oracle@sales12 xtts]$ pwd
/home/oracle/xtts
[oracle@sales12 xtts]$ ls -l
total 260
-rw-r--r-- 1 oracle oinstall 5169 Mar 11 19:30 xtt.newproperties
-rw-r--r-- 1 oracle oinstall 266 Mar 11 19:30 xtt.properties
-rw-r--r-- 1 oracle oinstall 1390 Mar 11 19:30 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle oinstall 71 Mar 11 19:30 xttdbopen.sql
-rw-r--r-- 1 oracle oinstall 180408 Mar 11 19:30 xttdriver.pl
-rw-r--r-- 1 oracle oinstall 11710 Mar 11 19:30 xttprep.tmpl
-rw-r--r-- 1 oracle oinstall 52 Mar 11 19:30 xttstartupnomount.sql
The idea with multiple Perl scripts is that you have many sets of Perl scripts; each set working on a unique batch of tablespaces.
So instead of just one folder, I could have four folders. Each folder is a complete Perl script with all the files. Download rman_xttconvert_VER4.3.zip and extract to four folders:
[oracle@sales12 ~]$ pwd
/home/oracle
[oracle@sales12 ~]$ ls -l
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts1
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts2
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts3
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts4
Each of the xtt.properties files will work on a unique set of tablespaces:
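For example, with hypothetical tablespace names, each xtt.properties lists its own subset:
/home/oracle/xtts1/xtt.properties:
tablespaces=ACCOUNT,ACCOUNTIDX
/home/oracle/xtts2/xtt.properties:
tablespaces=ORDERS,SALES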
You must also ensure that src_scratch_location and dest_scratch_location are set to different locations. Each set of Perl scripts must have dedicated scratch locations.
When you need to back up and restore/recover, you start multiple concurrent sessions. Each session uses one of the Perl scripts and thus processes its batch of tablespaces concurrently with the others.
SSH session 1:
export TMPDIR=/home/oracle/xtts1
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 2 (notice I changed TMPDIR to another directory, xtts2):
export TMPDIR=/home/oracle/xtts2
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 3:
export TMPDIR=/home/oracle/xtts3
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 4:
export TMPDIR=/home/oracle/xtts4
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
In the above example, I used four different Perl scripts and four concurrent sessions. But you can scale up if you have the resources for it. One of our customers ran with 40 concurrent sessions!
You must ensure that all tablespaces are included. Together, the Perl scripts must cover every tablespace in your database – don’t forget any of them.
On RAC, you can run the Perl scripts on all the nodes, utilizing all your resources.
Watch this video to learn how a customer migrated a 230 TB database using multiple Perl scripts.
When you use ASM, there is tight control over the file names. ASM strictly enforces the naming standard dictated by Oracle Managed Files (OMF), and only the database can create file names that comply with OMF.
Sometimes it is handy to create files in other locations in ASM that still refer to a database file. Here you can use aliases. Aliases work like a symbolic link in the file system.
Alias Oracle ASM file names are distinguished from fully qualified file names or numeric file names because they do not end in a dotted pair of numbers. It is an error to attempt to create an alias that ends in a dotted pair of numbers, such as in the format USERS.259.685366091.
When you use ls -l, you can also tell whether a file is an alias. The column SYS (System-generated) is N, meaning this is not a proper OMF file. Also, you can see in the Name column that it is an alias; the => indicates it:
ASMCMD> ls -l +DATA
Type Redund Striped Time Sys Name
DATAFILE UNPROT COARSE MAR 16 08:00:00 N account_25.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855
DATAFILE UNPROT COARSE MAR 16 08:00:00 N accountidx_26.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNTIDX.280.1099469855
You can read about Fully Qualified File Name Form in the ASM documentation, if you are interested.
Why Are the Aliases Created?
When the Perl script restores and recovers the data files on the target system, they do not yet belong to any database. The tablespaces have not been plugged in yet. Hence, it is impossible to figure out the right OMF names of the data files.
As an alternative, ASM names the data files according to the syntax of the source database. For instance, it will use the source database GUID (select guid from v$containers) as part of the name.
In addition, the Perl script creates ASM aliases using the following format: <dest_datafile_location>/<tablespace_name>_<file#>.dbf
When you perform the Data Pump import, you can refer to the aliases in your Data Pump parameter file (transport_datafiles). Using the aliases is especially useful if you plan on having a standby database.
How Do I Get Rid of the Aliases?
After performing the Data Pump import, the tablespaces are plugged into a database, and now the data files belong to a database. But the target database is referring to the data files either via:
An ASM alias
Or directly via the absolute file name. As described earlier, the absolute file path uses the OMF syntax of the source database.
Let me illustrate that. Imagine:
In xtt.properties, dest_datafile_location=+DATA.
My data file is named users01.dbf, belongs to tablespace USERS and has file ID 65.
Target DB_UNIQUE_NAME is SALES2.
Source database GUID is 86D5DC2587337002E0532AB2A8C0A57C.
How will the file be registered in the database?
If I used the aliases, it is known as +DATA/users_65.dbf.
If I used the absolute file name, it is known as +DATA/SALES2/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/users.280.1099469855. ASM generates the last two sets of numbers.
Neither of the two formats is a proper OMF name. What is the real OMF name? Imagine:
Target database GUID is DA495482D68D0220E0530F01000A98DF
The real OMF file name is (notice the change in GUID):
+DATA/SALES2/DA495482D68D0220E0530F01000A98DF/DATAFILE/users.280.1099469855
You can get the GUID of a database by using select guid from v$containers.
In ASM, only the database can store a file in OMF syntax. You must fix this from the target database. The easiest way is to use online data file move. If you don’t specify the target location, the database will generate an OMF name:
SQL> --using file number
SQL> alter database move datafile 65;
SQL> --using full name
SQL> alter database move datafile '+DATA/users_65.dbf';
How does the move work?
It is an entirely online operation.
It is a block-by-block copy.
The database copies the data file. While the copy operation takes place, the two files are kept in sync until the database can switch to the new file. After that, the database removes the original file.
If the data file belongs to a PDB, you must switch your session to that container.
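For example, assuming a hypothetical PDB named SALES_PDB and the file number used above:
SQL> alter session set container=SALES_PDB;
SQL> alter database move datafile 65;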
You can learn more about online data file move in our YouTube video:
Why Bother?
If my database works fine, why should I worry? I can think of at least two reasons:
Comply with naming standard
Avoid problems in other migrations
Comply with naming standard
I highly recommend that you use and comply with a naming standard, including OMF. Data files that are not stored according to OMF should be moved to the correct location.
When I worked out in the real world as a DBA, I remember multiple occasions of data file loss. In many situations, a DBA had found a file that apparently did not belong to a database – at least according to the naming standard. But the file was used by a database; it was just not stored in the correct location. When the file was removed: big problem in the database.
With many databases and many hosts, it is very important that you define standards and stick to them. Otherwise, you will end up in a big mess.
Avoid problems in other migrations
This is especially relevant if you need to perform multiple migrations to the same database host.
The Perl script used for the migration will create the aliases in the location specified by dest_datafile_location. The names of the aliases are very simple, and there is a risk that another migration will try to make the same alias.
Imagine you already did one migration. The database uses the alias +DATA/users_4.dbf. Now you want to make a second migration, and this database also wants to use +DATA/users_4.dbf. The same alias can’t be used for two different files. Big problem!
A user left a comment on my blog telling me this actually led to corruption in the first database. That risk is a very good reason for getting rid of the aliases and using only proper OMF file names.
Conclusion
ASM aliases are created automatically as part of the migration. The aliases are very useful during the migration, but I highly recommend getting rid of the aliases right after the migration.
When you perform a Data Pump import of transportable tablespaces, the data files are plugged into the target database, and the tablespaces are switched to read write mode immediately. It happens in the phase DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK.
Data Pump must turn the tablespaces into read write mode for several reasons:
Suppose you have tables with columns of type TIMESTAMP WITH TIME ZONE (TSTZ) and there is a difference in the database time zone file version. In that case, all TSTZ columns must be updated to reflect the new time zone conventions.
The data file bitmap of free space must be updated in case you have segments in your tablespaces that are not imported.
When the tablespaces switch to read write mode, the data file headers are updated. The data files change in such a way that they now belong to that database (I imagine it has something to do with DBID, SCN, and so forth).
This also means that you can only import the tablespaces one time. If you want to repeat the process, you need to restore and recover the data files. Not even Flashback Database can save you.
TRANSPORTABLE=KEEP_READ_ONLY
A new option was added to Data Pump in Oracle Database 19c to solve this: TRANSPORTABLE=KEEP_READ_ONLY. Here is what the documentation says:
If specified, then data files remain in read-only mode. As a result of this setting, the tables containing TSTZ column data cannot be updated, and are dropped from the import. In addition, data file bitmaps cannot be rebuilt.
Keeping the tablespaces read only and leaving the data files untouched sounds good. But there are some restrictions:
If you have TSTZ columns, they can’t be checked and updated. This means that the entire table with a TSTZ column is skipped during import. You will see this error in the Data Pump log file: ORA-39339: Table "SCHEMA"."TABLE" was skipped due to transportable import and TSTZ issues. To solve it, you need to manually import the table afterward using a Data Pump table mode export and import.
If you have any segments in your data files that no longer belong to an object (e.g. if that specific object was not imported), the free space bitmap can’t be updated, and you have free space that can’t be used. You can solve this later on when the tablespaces are in read write mode using dbms_space_admin.tablespace_rebuild_bitmaps.
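A sketch of that rebuild, assuming a tablespace named ACCOUNT and run only after the tablespace is back in read write mode:
SQL> exec dbms_space_admin.tablespace_rebuild_bitmaps('ACCOUNT');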
If you can live with these restrictions, you can use this feature.
Testing
You can use TRANSPORTABLE=KEEP_READ_ONLY when testing in the following way:
Perform the backups on the source database using $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup.
Transfer res.txt and restore/recover the data files using $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore.
Now you want to test the migration procedure. Set the tablespaces in the source database in read only mode, do an incremental backup and perform the Data Pump export. Optionally, use a temporarily activated standby database to avoid interruptions on the primary production database.
Set a guaranteed restore point in the target database.
After recovering the data files on the target system, you perform the Data Pump import. You must set the Data Pump parameter TRANSPORTABLE=KEEP_READ_ONLY to leave the data files untouched.
When you are done with your tests, you either flash back to the restore point or delete the data in your database. The latter would be a series of DROP commands (schema, roles etc.) followed by DROP TABLESPACE commands. To preserve the data files, be sure to use DROP TABLESPACE ... INCLUDING CONTENTS KEEP DATAFILES. The KEEP clause is especially vital on ASM and with OMF.
When you specify INCLUDING CONTENTS, the KEEP DATAFILES clause lets you instruct the database to leave untouched the associated operating system files, including Oracle Managed Files. You must specify this clause if you are using Oracle Managed Files and you do not want the associated operating system files removed by the INCLUDING CONTENTS clause.
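A minimal sketch of the safety net and the cleanup, assuming a guaranteed restore point named BEFORE_TEST and a tablespace named ACCOUNT:
SQL> create restore point before_test guarantee flashback database;
SQL> -- ... run the test import and your validations ...
SQL> drop tablespace account including contents keep datafiles;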
You can now continue to recover the data files on your target system with new incremental backups from the source database. You can roll forward the data files even though we had them plugged into a database. This is really cool.
This allows you to repeat the test with new fresh production data without the tedious task of completely restoring and recovering the data files.
Repeat the test cycle as many times as you want. If the data files are left untouched using TRANSPORTABLE=KEEP_READ_ONLY and you don’t turn them into read write mode manually, you can repeat the process.
Production Migration
Can you use TRANSPORTABLE=KEEP_READ_ONLY for the production migration? Yes, you can. But eventually, you will need to turn the tablespaces into read write mode.
I would not usually recommend it. But recently, I was involved in a specific case where it was helpful.
During testing, a customer ran into an issue where Data Pump hung during import. They killed the import and tried to repeat the Data Pump import. Unfortunately, Data Pump imports of transportable tablespaces are not resumable until Oracle Database 21c. So they had to start Data Pump all over. But the data files had already been touched by the target database. Now Data Pump refused to progress with the import because the data files were not as expected.
The customer was advised on how to avoid the Data Pump hang. But they were still a little concerned about their upcoming production migration. They would like to keep the tablespaces in read only mode. Just in case something similar would happen. In that case, they could easily start all over because the data files were untouched.
Conclusion
During a migration project, you should use the Data Pump feature TRANSPORTABLE=KEEP_READ_ONLY to ease your testing efforts. In some rare cases, it might also be helpful for production migrations.
It is often impossible to get a full copy of the production database or a comparable test system. If you still want to test the migration procedure, you need to take the backups on the production database. That might have an impact on your production database.
You could offload the backups to a standby database, but you must still periodically set tablespaces to read only mode to perform the Data Pump exports. This means downtime for your application.
But there is a way to do all the work on the standby database. This will allow you to test the migration procedure without affecting the primary production database.
You should use the procedure described in a previous blog post about backups on a standby database. When you reach the final incremental backups you need to follow the information below.
I assume you have conducted a level 0 and a number of incremental backups on the standby database. Now it is time for the final incremental backup. We will do this on the standby database as well. You don’t need to touch the primary database except for a few minor changes (archive current log and defer redo log transport).
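A sketch of those two minor changes on the primary, assuming the standby is served by archive destination 2:
SQL> alter system archive log current;
SQL> alter system set log_archive_dest_state_2=defer;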
Now you have a consistent backup of the data files and a corresponding Data Pump export; both components were taken from a standby database. Transfer the files to your target database and test the procedure.
Production Migration
The heading says you can use this procedure for testing. How about the production migration?
Well, it could work, but I would not recommend it.
When you use this approach, you leave the primary database open for business while you take the final incremental backup. This opens the door to a potential catastrophe: users entering data into the open primary database after you have taken the last incremental backup. That would mean data loss!
Using the other standby database approach, you are sure that no more data is entered into the source database when you perform the final migration. You ensure this by setting the tablespaces read only in the primary database.
This is why I do not recommend that you use the approach with an activated standby database for your production migration.
Conclusion
You should test your migration procedure and get comfortable with your runbook. You can use a temporarily activated standby database if you don’t have a dedicated test environment. This allows you to perform realistic testing without interrupting the primary database and your users.
Often when you migrate an Oracle Database using cross-platform transportable tablespace (XTTS) and incremental backups, it is a big database. Big databases must often be protected by Data Guard.
How do you ensure that Data Guard protects the target database at the end of the migration?
Build After Migration
A simple solution is to build the standby database after the migration. There are some downsides to this approach:
It takes time. In many situations, the business requires that the database does not go live until a standby database is in place. Having to build a standby database will prolong the downtime.
It puts a load on the primary database. Right after the migration, you don’t have your backups in place yet, so you will need to build the standby databases directly from the primary database. That requires a lot of I/O and network traffic. You might want to use those resources for other activities, like taking a level 0 backup or regathering statistics.
It can become even more complicated if you migrate into the multitenant architecture. Rebuilding the standby database of an already running, active CDB might not be an option, because the other PDBs in the CDB are left unprotected while the standby is rebuilt.
Restore Data Files to Standby Host
Here is an approach that requires much less downtime: as part of the migration, restore the data files onto the standby host as well. During the Data Pump import, the plug-in operation propagates to the standby database via redo apply.
When migrating with XTTS and the Perl script, you are familiar with running the backup on the source database (xttdriver.pl --backup). You also know how to restore and recover the data files on the target system (xttdriver.pl --restore).
The idea is that you restore and recover the data files on the primary host and also on the standby host. During the Data Pump import, the data files are plugged into the primary database. The plug-in commands are propagated to the standby database via redo apply. The standby database can then plug in the data files if they are located in the same place as on the primary database. The rest of the Data Pump import propagates as well, and in the end you have a working Data Guard environment.
You must recover the data files on the primary and standby database to the exact same SCN. Be sure to restore all backups to both the primary and standby database.
Prepare the target database and build a standby database before starting the migration.
In xtt.properties set destconnstr to point to the target primary database.
Copy xtt.properties to the target primary host and target standby host.
On target standby host, change xtt.properties and set destconnstr to point to the target standby database.
Create the cross-platform backups on the source database using xttdriver.pl --backup in the usual way.
Now copy the file res.txt to both the target primary and target standby database.
The backups in the scratch location should also be available to both the target primary and standby databases.
Now restore and recover the backups on the target primary and target standby database using xttdriver.pl --restore.
Repeat the backup/restore/recover process as many times as needed. Keep recovering the target primary and target standby database. It is very important that the data files on both the target primary and target standby database are recovered to the exact same SCN.
On the target databases the data files will be restored into ASM. The data file itself is created with an OMF file name:
The disk group is what you specified in xtt.properties in dest_datafile_location.
DB_UNIQUE_NAME differs between the primary and standby database.
To comply with OMF standard the database must restore the data files into a folder corresponding to the PDB it belongs to. However, currently, the data files do not belong to a PDB. We haven’t done the Data Pump import plug-in operation yet. The database will create a folder with the GUID of the source database. If you are interested, you can get the GUID from the source database using select guid from v$containers.
The last part of the OMF file name is the tablespace name plus numbers representing the file ID and a value to ensure uniqueness. This part will differ between the primary and standby database as well.
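Putting the pieces together, a restored data file on the target primary could look like the one below (taken from the asmcmd listing further down; the same file on the standby differs in DB_UNIQUE_NAME and the trailing numbers):
+DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855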
We now know that the data file name is different on the primary and standby database. Previously, it was stated that it is important that the data files are stored in the same location and have the same name. This is a problem! But the Perl script solves that by creating ASM aliases.
The aliases will be created in the location specified by dest_datafile_location in xtt.properties. Use asmcmd to verify it. The column SYS (System-generated) is N, meaning this is not a proper OMF file. Also, we can see in the Name column that it is an alias:
ASMCMD> ls -l +DATA
Type Redund Striped Time Sys Name
DATAFILE UNPROT COARSE MAR 16 08:00:00 N account_25.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855
DATAFILE UNPROT COARSE MAR 16 08:00:00 N accountidx_26.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNTIDX.280.1099469855
...
If you look at the directory where the alias is pointing to, you can see that the files are proper OMF files – real data files. Column SYS is Y, and Name does not contain the alias pointer =>:
ASMCMD> cd +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE
ASMCMD> ls -l
Type Redund Striped Time Sys Name
DATAFILE UNPROT COARSE MAR 16 08:00:00 Y ACCOUNT.282.1099469855
DATAFILE UNPROT COARSE MAR 16 08:00:00 Y ACCOUNTIDX.280.1099469855
...
Thus, the aliases are hiding the fact that the data files have different name on the target primary and target standby database.
When you prepare the parameter file for the Data Pump import, be sure to reference the aliases – not the OMF named data files. The aliases have the same name on both the target primary and target standby database:
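For example, referencing the aliases shown earlier in the import parameter file:
transport_datafiles=+DATA/account_25.dbf
transport_datafiles=+DATA/accountidx_26.dbf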
Then start the Data Pump import. The data files are plugged into the primary database during DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK. When the redo containing that information is applied on the standby database, the standby database will plug in the data files as well.
When the Data Pump import completes, you can verify that the standby database survived and is still applying redo. A switchover is also a good way of testing it.
Conclusion
You can prepare the target standby database in advance. This enables the target database to be protected by Data Guard as soon as the Data Pump import completes.
Restore and recover the data files to the target primary and target standby database. If they are recovered to the exact same SCN the plug-in of the data files propagates to the standby database via redo apply. After the Data Pump import, your target database has a fully functional standby database.
What if the database you want to migrate has bigfile tablespaces? Bigfile tablespaces are very often used in very large databases. Do they influence the migration using cross-platform transportable tablespaces and incremental backups?
A bigfile tablespace is a tablespace with a single, but potentially very large (up to 4G blocks) data file. Traditional smallfile tablespaces, in contrast, can contain multiple data files, but the files cannot be as large.
The benefits of bigfile tablespaces are the following:
A bigfile tablespace with 8K blocks can contain a 32 terabyte data file…
The important point is that a bigfile tablespace contains one big data file, and that file can be huge.
What Do You Normally Do?
When RMAN backs up a database or a tablespace, it parallelizes by using several channels – each channel processes one or more data files.
Imagine a 32 TB data file (from a bigfile tablespace). One RMAN channel will be allocated and needs to process that data file alone. That will take a while!
To solve that, multisection backups were implemented. They allow multiple channels to work on the same data file. You enable a multisection backup by using the section size clause.
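For example, a manual multisection backup could look like this (hypothetical tablespace name and section size):
RMAN> backup section size 64G tablespace account;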
Backup
The initial, level 0 backup of the source database is executed by the Perl script during the migration. The Perl script generates the RMAN backup commands on-the-fly, and it does not specify a section size.
In addition, it is not possible to specify section size as a default channel configuration (see RMAN show all command).
So there is no way to enable multisection backups – they are currently simply not supported by the Perl script.
Now What
If multisection backup is essential to your migration, I suggest you create a Service Request and ask for an enhancement. The more customers request it, the more likely it is that the feature will be added.
If you have bigfile tablespaces, you have only one option: use multiple Perl scripts. It is a good workaround if you have many bigfile tablespaces, but it probably won’t add much value if you only have one.
I have another blog post describing the use of multiple Perl scripts.
When doing a cross-platform migration with transportable tablespaces and incremental backup, is it possible to perform backups on a standby database? If so, you could offload the work from a primary database.
The short answer is yes. You can use a physical standby database but not a snapshot standby database.
Using a standby database for the backups is an advantage in some situations:
The primary database does not have the capacity to perform the extra backups (CPU, disk space, or I/O).
You want the primary database to be completely unaffected by the migration.
Perform the Data Pump export on the primary database:
[oracle@src-pri]$ expdp system ... full=y transportable=always ...
Active Data Guard
If you have a license for Active Data Guard, you can simply keep the physical standby database in OPEN WITH REDO APPLY mode. You don’t need to switch from OPEN mode to MOUNT mode.
Conclusion
Being able to perform the backups on a standby database is a huge advantage. But it makes the process slightly more complicated, so I would recommend it only if really needed. In addition, if you don’t follow the procedure strictly, you might accidentally use a feature that requires Active Data Guard.
When you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, you will need to transfer a large amount of data to the target host. In some situations, especially if the target host is in a remote data center or the cloud, you are restricted by the network throughput, and it can be a limiting factor.
One approach to speed up the data transfer is to compress it before it leaves the source host. Compressing and decompressing takes time and CPU. But if your network connection is slow enough, it may pay off in the end.
First, you start the initial level 0 backup of the source database:
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
The backups will be stored in the directory defined by src_scratch_location. In my case, the directory is /u01/app/oracle/xtts_src_scratch. Now you can simply compress the entire directory:
gzip -r /u01/app/oracle/xtts_src_scratch
This should significantly reduce the size of the files. Transfer the compressed files to your target host, and put them into the directory defined by dest_scratch_location. Then, uncompress the files:
gunzip -r /u01/app/oracle/xtts_dest_scratch
Continue the procedure described in the above MOS note and start the restore:
$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore
Some Figures
I made a little test to give you an idea of how much there is to save.
Data files total size: 197 GB
Backup set size: 197 GB
Size after gzip compression: 12 GB
Because I don’t have any free space in my data files, the backup sets have almost the same size as the actual data files – 197 GB. By gzipping the files I could reduce the file size to 12 GB.
So there is a significant amount to save – from 197 GB to 12 GB. It makes a huge difference whether you have to transfer 197 or 12 GB. But you must consider the time it takes to compress and decompress.
Compression elapsed time: 13 min 52 sec
Decompression elapsed time: 8 min 7 sec
I made the same test with image file backups, and the numbers were almost the same. Generally, you should always try to use backup sets over image file backups. Backup sets use unused block compression, which skips all the unused blocks in your data files. That can make quite a difference for databases with a lot of free space.
What About RMAN Compression
RMAN does have the option of compressing the backup set, but that is currently not supported by the Perl scripts. If the Perl scripts were able to do that, you could get a good compression ratio with RMAN compression as well (provided, of course, you have a license for the Advanced Compression Option) and avoid the gzip trick. Using the example above, RMAN compressed backup sets (medium compression) would bring the backup files down to 0.3 GB, which is very good as well.
But for now, you are "stuck" with the gzip trick.
Final Words
A few things to consider:
You should only consider this approach if you have a slow network.
You will need extra disk space during the compression/decompression operation.
You should only compress the initial, level 0 backup. The subsequent level 1 incremental backups will be much smaller in size, and it is unlikely to pay off.
You should test on your own database and hardware to know the actual compression ratio. Add the network throughput to the equation, and you can do the math to see if it is a good idea for your specific project.