XTTS: Slow Network Between Source and Target

When you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, you need to transfer a large amount of data to the target host. In some situations, especially if the target host is in a remote data center or in the cloud, the network throughput can become a limiting factor.

One approach to speed up the data transfer is to compress it before it leaves the source host. Compressing and decompressing takes time and CPU. But if your network connection is slow enough, it may pay off in the end.

The Approach

Like all other posts in this series, I am using the Perl scripts found in MOS note V4 PERL Scripts to reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

First, you start the initial level 0 backup of the source database:

$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
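
The scratch locations used throughout this post are defined in xtt.properties. For reference, the two relevant parameters in my configuration look like this (the paths are the ones used in this example):

src_scratch_location=/u01/app/oracle/xtts_src_scratch
dest_scratch_location=/u01/app/oracle/xtts_dest_scratch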

The backups will be stored in the directory defined by src_scratch_location. In my case, the directory is /u01/app/oracle/xtts_src_scratch. Now you can simply compress the entire directory:

gzip -r /u01/app/oracle/xtts_src_scratch
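
As an alternative to the recursive gzip above, you can compress the files in parallel if the source host has spare CPU capacity. A sketch using find and GNU xargs (not part of the MOS procedure; adjust -P to the number of cores you can afford to use):

# Compress every file in the scratch directory, running four gzip processes at a time
find /u01/app/oracle/xtts_src_scratch -type f -print0 | xargs -0 -P 4 -n 1 gzip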

This should significantly reduce the size of the files. Transfer the compressed files to your target host and put them into the directory defined by dest_scratch_location; an example transfer command is shown below. Then, uncompress the files:

gunzip -r /u01/app/oracle/xtts_dest_scratch
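
For the transfer step mentioned above, any file copy tool will do. A minimal sketch using rsync over SSH ("targethost" is a placeholder for your target server):

# Copy the compressed backups into dest_scratch_location on the target host
rsync -av /u01/app/oracle/xtts_src_scratch/ oracle@targethost:/u01/app/oracle/xtts_dest_scratch/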

Continue with the procedure described in the MOS note above and start the restore:

$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Some Figures

I ran a small test to give you an idea of how much there is to save.

Size
Data files total size     197 GB
Backup set size           197 GB
After gzip compression     12 GB

Because I don't have any free space in my data files, the backup sets are almost the same size as the data files themselves: 197 GB. Gzipping brought that down to 12 GB, and it makes a huge difference whether you have to transfer 197 GB or 12 GB. But you must also consider the time it takes to compress and decompress.

Elapsed time
Compression       13 min 52 sec
Decompression      8 min 7 sec
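
To put the numbers into perspective, here is a back-of-the-envelope calculation assuming a 100 Mbit/s link between the hosts (the link speed is purely an assumption; plug in your own):

Transfer of 197 GB uncompressed    197 x 8,000 Mbit / 100 Mbit/s ≈ 15,760 sec ≈ 4 hr 23 min
Transfer of 12 GB compressed        12 x 8,000 Mbit / 100 Mbit/s ≈    960 sec ≈ 16 min
Compression plus decompression     13 min 52 sec + 8 min 7 sec   ≈ 22 min
Total with the gzip trick          ≈ 38 min instead of ≈ 4 hr 23 min

On a 1 Gbit/s link the same math gives roughly 24 minutes with gzip versus 26 minutes without, so the trick only pays off when the network really is the bottleneck.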

I ran the same test with image copy backups, and the numbers were almost the same. Generally, you should prefer backup sets over image copies: backup sets use unused block compression, which skips the unused blocks in your data files. That can make quite a difference for databases with a lot of free space.

What About RMAN Compression

RMAN can compress backup sets, but that is currently not supported by the Perl scripts. If the Perl scripts were able to do that, you could get a good compression ratio with RMAN compression as well (provided you have a license for the Advanced Compression Option) and skip the gzip trick. Using the example above, RMAN compressed backup sets (MEDIUM compression) would bring the backup files down to 0.3 GB, which is very good as well.
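
For reference, this is what a compressed backup set looks like in plain RMAN, outside of the Perl scripts (a hypothetical standalone example; the tablespace name and FORMAT path are placeholders, and MEDIUM requires the Advanced Compression Option):

RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
RMAN> BACKUP AS COMPRESSED BACKUPSET TABLESPACE users FORMAT '/u01/app/oracle/xtts_src_scratch/%U';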

But for now, you are "stuck" with the gzip trick.

Final Words

A few things to consider:

  • You should only consider this approach if you have a slow network.
  • You will need extra disk space during the compression/decompression operation.
  • You should only compress the initial level 0 backup. The subsequent level 1 incremental backups are much smaller, so compressing them is unlikely to pay off.
  • You should test on your own database and hardware to know the actual compression ratio; a quick way to check is shown right after this list. Add the network throughput to the equation, and you can do the math to see if it is a good idea for your specific project.
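
A quick way to get a feel for the compression ratio and the time on your own system is to compress a single backup piece and let gzip report the result (a sketch; the file name is a placeholder, and -k, which keeps the original file, needs a reasonably recent gzip):

# Time the compression of one backup piece, keeping the original file
time gzip -k /u01/app/oracle/xtts_src_scratch/<backup_piece>
# Report uncompressed size, compressed size and ratio
gzip -l /u01/app/oracle/xtts_src_scratch/<backup_piece>.gz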

Other Blog Posts in This Series