You have various means at your disposal to make a migration using cross-platform transportable tablespaces (XTTS) and incremental backups perform better.
Patches
I strongly recommend that you apply the most recent Release Update to your target Oracle Database. Use the download assistant to find it.
I also strongly recommend that you apply the Data Pump bundle patch. The Data Pump bundle patch is a collection of important Data Pump patches. The MOS note Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1) has more details and links to the patches.
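If you are unsure which Release Update and patches an Oracle home already has, you can list them with OPatch:
[oracle@sales12 ~]$ $ORACLE_HOME/OPatch/opatch lspatches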
Use Backup Sets
If both source and target databases are Oracle Database 12c or newer, you should set the following in xtt.properties:
usermantransport=1
RMAN will use backup sets using the new backup for transport syntax. Backup sets are better than image copies because RMAN automatically adds unused block compression. Unused block compression can shrink the size of the backup and improve performance.
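For reference, a cross-platform backup using this syntax looks roughly like the following. This is just a simplified sketch; xttdriver.pl builds the actual commands for you, and the format path and tablespace name are only examples:
RMAN> BACKUP FOR TRANSPORT ALLOW INCONSISTENT
      INCREMENTAL LEVEL 0
      FORMAT '/u01/app/oracle/xtts_src/%U'
      TABLESPACE ACCOUNT;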
Block Change Tracking
Enable block change tracking on the source database. Strictly speaking, it is not required, but it is strongly recommended because it dramatically shortens the time it takes to perform incremental backups. It requires Enterprise Edition (on-prem), DBCS EE-EP (cloud), or Exadata:
SQL> select status, filename from v$block_change_tracking;
SQL> alter database enable block change tracking;
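If you do not use Oracle Managed Files, you can specify the location of the block change tracking file explicitly. The path below is just an example:
SQL> alter database enable block change tracking using file '/u01/app/oracle/oradata/SALES/bct.f';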
If the source database is on 19.9 or earlier, and you don't get much benefit out of block change tracking, you should look at Bug 29148799 – Performance Issue During Rman Backup When Block Change Tracking Is Enabled (Doc ID 29148799.8).
Parallel Options
If you look in xtt.properties, there is a parameter called parallel. What does it do?
It controls the number of batches in which the backup and restore/recover commands run. The Perl script will split the tablespaces into n batches, where n is parallel from xtt.properties. One batch will process all the data files belonging to those tablespaces.
If you have 20 tablespaces and parallel is 4, the Perl script will run in four batches of five tablespaces each. If each of those tablespaces has three data files, each batch will process 15 data files.
Within a batch, the number of data files processed at the same time is determined by the default parallelism assigned to the disk channel. To find the current parallelism (here it is two):
RMAN> show all;
...
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
...
If you want to change it to eight:
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
When you restore and convert the data files on the target database, RMAN uses the same configuration parameter. To enable parallel backup and restore, be sure to change the default disk parallelism on both the source and target database hosts.
For image file backups (usermantransport=0), the data files are converted on the target database using the parallel degree specified in the xtt.properties parameter parallel. Backup sets are converted using the RMAN configuration parameter.
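To illustrate, the relevant settings in xtt.properties could look like the following sketch. The values and paths are only examples and must be adapted to your environment:
tablespaces=ACCOUNT,SALES,REPORTING,USERS
platformid=2
src_scratch_location=/u01/app/oracle/xtts_src
dest_scratch_location=/u01/app/oracle/xtts_dest
dest_datafile_location=+DATA
parallel=8
usermantransport=1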
Multiple Perl Scripts
If you really want to squeeze the very last drop of performance out of your system, or if you want to use multiple RAC nodes, you can use multiple Perl scripts.
Normally, you only have one Perl script with corresponding files like xtt.properties:
[oracle@sales12 xtts]$ pwd
/home/oracle/xtts
[oracle@sales12 xtts]$ ls -l
total 260
-rw-r--r-- 1 oracle oinstall 5169 Mar 11 19:30 xtt.newproperties
-rw-r--r-- 1 oracle oinstall 266 Mar 11 19:30 xtt.properties
-rw-r--r-- 1 oracle oinstall 1390 Mar 11 19:30 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle oinstall 71 Mar 11 19:30 xttdbopen.sql
-rw-r--r-- 1 oracle oinstall 180408 Mar 11 19:30 xttdriver.pl
-rw-r--r-- 1 oracle oinstall 11710 Mar 11 19:30 xttprep.tmpl
-rw-r--r-- 1 oracle oinstall 52 Mar 11 19:30 xttstartupnomount.sql
That one script will process all the tablespaces:
[oracle@sales12 xtts]$ cat xtt.properties
tablespaces=ACCOUNT,SALES,REPORTING,USERS
...
The idea with multiple Perl scripts is that you have several sets of the Perl scripts, each set working on a unique batch of tablespaces.
So instead of just one folder, I could have four folders. Each folder contains a complete set of the Perl script and all its supporting files. Download rman_xttconvert_VER4.3.zip and extract it to four folders:
[oracle@sales12 ~]$ pwd
/home/oracle
[oracle@sales12 ~]$ ls -l
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts1
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts2
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts3
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts4
Each of the xtt.properties files will work on a unique set of tablespaces:
[oracle@sales12 xtts]$ cat /home/oracle/xtts1/xtt.properties
tablespaces=ACCOUNT
...
[oracle@sales12 xtts]$ cat /home/oracle/xtts2/xtt.properties
tablespaces=SALES
...
[oracle@sales12 xtts]$ cat /home/oracle/xtts3/xtt.properties
tablespaces=REPORTING
...
[oracle@sales12 xtts]$ cat /home/oracle/xtts4/xtt.properties
tablespaces=USERS
...
You must also ensure that src_scratch_location and dest_scratch_location are set to different locations. Each set of Perl scripts must have dedicated scratch locations.
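For example, the scratch locations could be laid out like this (the paths are just examples):
[oracle@sales12 ~]$ grep scratch_location /home/oracle/xtts1/xtt.properties
src_scratch_location=/u01/app/oracle/xtts_scratch1
dest_scratch_location=/u01/app/oracle/xtts_scratch1
[oracle@sales12 ~]$ grep scratch_location /home/oracle/xtts2/xtt.properties
src_scratch_location=/u01/app/oracle/xtts_scratch2
dest_scratch_location=/u01/app/oracle/xtts_scratch2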
When you need to back up and restore/recover, you run multiple concurrent sessions. Each session uses one of the Perl scripts and, thus, processes its tablespaces concurrently.
SSH session 1:
export TMPDIR=/home/oracle/xtts1
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 2 (notice I changed TMPDIR to another directory, xtts2):
export TMPDIR=/home/oracle/xtts2
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 3:
export TMPDIR=/home/oracle/xtts3
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 4:
export TMPDIR=/home/oracle/xtts4
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
In the above example, I used four different Perl scripts and four concurrent sessions. But you can scale up if you have the resources for it. One of our customers ran with 40 concurrent sessions!
You must ensure that all your tablespaces are included. Together, the tablespaces listed in your xtt.properties files must cover every tablespace you are migrating. Don't forget any of them.
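A simple way to double-check is to compare the tablespaces listed in your xtt.properties files against the tablespaces in the source database. Here is a basic query; adjust it to exclude any tablespaces you are not transporting:
SQL> select tablespace_name
     from   dba_tablespaces
     where  contents = 'PERMANENT'
     and    tablespace_name not in ('SYSTEM', 'SYSAUX');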
On RAC, you can run the Perl scripts on all the nodes, utilizing all your resources.
Watch this video to learn how a customer migrated a 230 TB database using multiple Perl scripts.
Database Complexity
The Data Pump export and import must be done during downtime. The time it takes to perform these two tasks is often critical.
How long will it take? It depends (classic IT answer)!
The complexity of your user data dictionary has the biggest impact. Generally, the more objects, the longer the export and import will take. Certain features, like partitioning, have a big impact as well. Reducing the complexity of the user data dictionary might not be possible, but when it is, it can make a big difference. In some situations, I have seen old or obsolete data in the database. Or partitions that should have been archived. Or entire groups of tables used by an application feature that was no longer in use. Getting rid of such data can have an impact.
Another thing to look at is invalid objects. If you have objects that can't compile, check the reason and whether the objects can be dropped. Often, these invalid objects are just remnants from old times. Getting rid of them makes the export faster, the import faster as well, and the database won't have to spend time trying to compile them.
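To get an overview of invalid objects, you can run a simple query like this:
SQL> select owner, object_type, count(*)
     from   dba_objects
     where  status = 'INVALID'
     group  by owner, object_type;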
A thing to test: what works best in your migration, Data Pump in dump file mode or in network mode? Normally, we recommend dump file mode because it has much better parallel capabilities. But metadata export and import for transportable tablespace jobs happen serially anyway, so there might be a benefit to using Data Pump in network mode. In network mode, you just start the Data Pump import without first doing an export. The information is loaded directly into the target database over a database link.
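As a sketch, a full transportable import over a database link could use a parameter file along these lines. The database link name, directory, and data file list are only examples:
network_link=SOURCEDB
full=y
transportable=always
metrics=y
logtime=all
directory=mydir
logfile=ftex_imp.log
transport_datafiles='/u01/app/oracle/oradata/SALES/account01.dbf'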
Skip Statistics
I recommend that you skip statistics when you export:
exclude=table_statistics,index_statistics
Instead, either:
1. Regather new statistics on the target database.
2. Import statistics from the source database using DBMS_STATS.
3. Import statistics from a test system using DBMS_STATS.
Options 1 and 3 are especially appealing if your target database is very different from the source. Imagine going from AIX to Exadata, from 11.2.0.4 to 19c, and from non-CDB to PDB. The platform itself is very different; Exadata has superior capabilities. In addition, it is a new release with other histogram types and a different architecture. In this case, it does make sense to gather new statistics that better reflect the new environment.
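If you go with option 2 or 3, a minimal sketch of moving statistics with DBMS_STATS could look like this. The schema, staging table, and owner names are only examples, and you still need to transfer the staging table to the target database, for instance with Data Pump:
SQL> -- On the source (or test) system
SQL> exec dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'MYSTATS');
SQL> exec dbms_stats.export_schema_stats(ownname => 'SALES', stattab => 'MYSTATS', statown => 'SYSTEM');
SQL> -- On the target system, after transferring the MYSTATS table
SQL> exec dbms_stats.import_schema_stats(ownname => 'SALES', stattab => 'MYSTATS', statown => 'SYSTEM');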
We discuss statistics during migrations in detail in our webinar Performance Stability, Tips and Tricks and Underscores.
Dictionary Statistics
Accurate statistics are always important, and that applies to Data Pump jobs as well. You should gather dictionary statistics:
- Within reasonable time before the Data Pump export.
- Immediately after the Data Pump import.
I usually just gather schema stats on SYS and SYSTEM, but you can use the dedicated procedure as well:
SQL> begin
dbms_stats.gather_schema_stats('SYS');
dbms_stats.gather_schema_stats('SYSTEM');
end;
/
SQL> --Or
SQL> exec dbms_stats.gather_dictionary_stats;
Parallel Metadata Export and Import
Starting with Oracle Database 21c, Data Pump supports parallel export and import of metadata when using transportable tablespaces. Add the following to your Data Pump parameter file, where n is the level of parallelism:
parallel=n
If the export was made in a lower release that didn't support parallel metadata export, you can still import in parallel. Parallel metadata import works regardless of how the Data Pump export was made.
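As an example, a full transportable export parameter file using parallel could look like this sketch. Directory and file names are only examples:
full=y
transportable=always
parallel=8
exclude=table_statistics,index_statistics
directory=mydir
dumpfile=ftex_%L.dmp
logfile=ftex_exp.log
metrics=y
logtime=all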
Other Blog Posts in This Series
- Introduction
- Prerequisites
- Full Transportable Export/Import
- What Is a Self-contained Transportable Tablespace Set
- How to Migrate a Database Using Full Transportable Export/Import and Incremental Backups
- Make It Fast
- Backup on Standby Database
- Slow Network Between Source and Target
- Bigfile Tablespaces
- Testing the Procedure On Activated Standby Database
- Testing the Procedure Multiple Times Using Read-Only Tablespaces
- Target Database and Data Guard
- ASM Aliases and Why You Should Get Rid of Them
- Pro Tips