How to Migrate a Database Using Full Transportable Export Import and Incremental Backups

These steps will guide you through a migration of a database using Full Transportable Export Import (FTEX) and incremental backups. I covered the concept in a previous blog post, which you should read to understand the basics. Remember that Transportable Tablespaces and Full Transportable Export/Import require Enterprise Edition.

My demo environment looks like this:

[Image: Overview of demo environment for migrating using FTEX and incremental backups]

I have a 12.1.0.2 database that I want to migrate to a PDB in a new CDB that runs 19c.

Check Prerequisites

Create a new PDB called SALES in the target CDB:

TARGET/CDB1 SQL> create pluggable database sales admin user admin identified by admin;
TARGET/CDB1 SQL> alter pluggable database sales open;
TARGET/CDB1 SQL> alter pluggable database sales save state;

Prepare the database to use TDE Tablespace Encryption:

TARGET/CDB1 SQL> alter session set container=sales;
TARGET/CDB1 SQL> administer key management set key force keystore identified by <keystore-pwd> with backup;

Verify SQL*Net connectivity from source host to target PDB:

[oracle@source]$ sqlplus system@<target ip>/<pdb-service-name>

Verify database character set and national character set are the same:

SOURCE/SALES SQL> select property_name, property_value from database_properties where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

TARGET/SALES SQL> select property_name, property_value from database_properties where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

Ensure the source database is in ARCHIVELOG mode:

SOURCE/SALES SQL> select log_mode from v$database;
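
If the source database is not in ARCHIVELOG mode, you must enable it before you start taking backups. A minimal sketch (it requires a short outage of the source database):

SOURCE/SALES SQL> shutdown immediate
SOURCE/SALES SQL> startup mount
SOURCE/SALES SQL> alter database archivelog;
SOURCE/SALES SQL> alter database open;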

Enable block change tracking on the source database. It requires Enterprise Edition (on-prem), DBCS EE-EP (cloud) or Exadata. Although strictly speaking not required, it is strongly recommended:

SOURCE/SALES SQL> select status, filename from v$block_change_tracking;
SOURCE/SALES SQL> alter database enable block change tracking;

Ensure that you can connect from the source to the target host as oracle:

[oracle@source]$ ssh <target ip> date
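
If the connection prompts for a password, the file transfers done by the perl scripts will prompt as well. A minimal sketch of setting up passwordless SSH for the oracle user (assuming the default key location):

[oracle@source]$ ssh-keygen -t rsa
[oracle@source]$ ssh-copy-id oracle@<target ip>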

Identify Tablespaces

Identify all the tablespaces that you will migrate. With FTEX you should transport all the tablespaces, except those that contain Oracle maintained data, like SYSTEM, SYSAUX, UNDO and so forth:

SOURCE/SALES SQL> select tablespace_name from dba_tablespaces;
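
If the database has many tablespaces, a query like the following narrows the list down to candidates for transport. This is just a sketch; extend the exclusion list if there are other Oracle maintained tablespaces in your database:

SOURCE/SALES SQL> select tablespace_name from dba_tablespaces where contents = 'PERMANENT' and tablespace_name not in ('SYSTEM', 'SYSAUX');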

Save the list of tablespaces for later. In my demo, the only tablespace besides the Oracle maintained ones is SALES.

Next, on the target database, ensure that none of the existing tablespaces conflict with the ones you are transporting:

TARGET/SALES SQL> select tablespace_name from dba_tablespaces;

If there is a conflict of names, you have to drop or rename the tablespaces in the target database.
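
As a sketch, assuming a conflicting tablespace named USERS in the target PDB, you can rename it:

TARGET/SALES SQL> alter tablespace USERS rename to USERS_OLD;

Or, if it is not needed at all, drop it:

TARGET/SALES SQL> drop tablespace USERS including contents and datafiles;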

Download and Configure Perl Scripts

Create a folder to hold the perl scripts, download the scripts from MOS doc ID 2471245.1, and unzip:

[oracle@source]$ rm -rf /home/oracle/xtts
[oracle@source]$ mkdir /home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ # Download rman_xttconvert_VER4.3.zip from MOS doc ID 2471245.1
[oracle@source]$ unzip rman_xttconvert_VER4.3.zip

Create a working directory (also known as the scratch location) which will hold the backups. Ensure that you have enough space at this location on both the source and target hosts.

[oracle@source]$ rm -rf /u01/app/oracle/xtts_scratch
[oracle@source]$ mkdir -p /u01/app/oracle/xtts_scratch

Create the same location on the target host:

[oracle@target]$ rm -rf /u01/app/oracle/xtts_scratch
[oracle@target]$ mkdir -p /u01/app/oracle/xtts_scratch

Configure your migration in xtt.properties. In this demo the file looks like this:

tablespaces=SALES
platformid=13
src_scratch_location=/u01/app/oracle/xtts_scratch
dest_scratch_location=/u01/app/oracle/xtts_scratch
dest_datafile_location=+DATA
asm_home=/u01/app/19.0.0.0/grid
asm_sid=+ASM1
parallel=4
rollparallel=4
getfileparallel=4
metatransfer=1
dest_user=oracle
dest_host=<target_ip>
desttmpdir=/u01/app/oracle/xtts_scratch
srcconnstr=sys/<password>@sales
destconnstr=sys/<password>@newsales
usermantransport=1

A little explanation:

  • platformid is set to 13 because this is a Linux migration. You can get the number by querying v$transportable_platform (see the query below).
  • Adjust the parallel options according to the capabilities of the source and target systems.
  • When you are using an ASM disk group in dest_datafile_location, you must also set asm_home and asm_sid.
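
To look up the platform ID of your source platform, a simple query will do:

SOURCE/SALES SQL> select platform_id, platform_name, endian_format from v$transportable_platform order by platform_id;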

Finally, copy the scripts (and the configuration) to your target system:

[oracle@source]$ scp -r /home/oracle/xtts/ <target_ip>:/home/oracle/

Initial Backup and Restore

Now, you can start the initial backup of the database. You take it while the source database is up and running, so it doesn’t matter if the backup/restore cycle takes hours or days to complete:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

The perl script has been configured in such a way that it automatically transfers the backups to the target system. In addition to that, a small text file must be transferred as well:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

Now, on the target system, you can restore the backup that was just taken. If needed, the data files are automatically converted to the proper endian format. If conversion is needed, you need space for a copy of all the data files:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Incremental Backup and Restore

You can – and should – run the incremental backups and restores as many times as possible. The more frequently you run them, the faster they will run, because there will be fewer changes to apply. As the start of the migration downtime window approaches, you should run them often to minimize the time it will take to perform the final backup and restore:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

Transfer res.txt:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

And restore on the target system:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore
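
If you want the incremental backups on the source host to run automatically, a cron entry along these lines could do it. This is only a sketch; the schedule, the log file and sourcing the profile to set ORACLE_HOME and ORACLE_SID are assumptions, and you still have to transfer res.txt and run the restore on the target afterwards:

[oracle@source]$ crontab -e

# Nightly incremental backup at 01:00 (schedule and paths are assumptions)
0 1 * * * . /home/oracle/.bash_profile; export TMPDIR=/home/oracle/xtts; cd /home/oracle/xtts; $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup >> /home/oracle/xtts/backup_cron.log 2>&1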

Final Incremental Backup and Restore

Now downtime starts! Set the tablespaces read-only:

SOURCE/SALES SQL> alter tablespace SALES read only;
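
If you are transporting many tablespaces, you can generate the statements instead of typing them by hand. A sketch, reusing the exclusion list from earlier:

SOURCE/SALES SQL> select 'alter tablespace ' || tablespace_name || ' read only;' from dba_tablespaces where contents = 'PERMANENT' and tablespace_name not in ('SYSTEM', 'SYSAUX');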

Perform the final incremental backup:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

You will receive an error because the tablespace is read-only. This error is ignorable because the tablespace was set read-only on purpose.

Transfer res.txt:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

And restore on the target system:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Import Metadata Using FTEX

Create a directory object that points to the xtts folder:

TARGET/SALES SQL> create directory LOGDIR as '/home/oracle/xtts';

Next, create a database link to the source database that can be used to import the metadata. If the source database is already a PDB, ensure that the database link points directly into the PDB:

TARGET/SALES SQL> create public database link SRCLNK connect to system identified by <password> using '//<source_ip>:1521/<service_name>';

Test that it works:

TARGET/SALES SQL> select * from dual@srclnk;

Next, create a par file (sales_imp.par) that you can use for the Data Pump import (see appendix below for explanation):

network_link=SRCLNK
full=y
transportable=always
metrics=y
logtime=all
exclude=TABLE_STATISTICS,INDEX_STATISTICS
exclude=SYS_USER
exclude=TABLESPACE:"IN('TEMP')"
exclude=SCHEMA:"IN('SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR')"
directory=logdir
logfile=sales_imp.log
transport_datafiles='+DATA/CDB1_FRA2VD/B2D617FCB79B0684E053AF01000A6DCE/DATAFILE/SALES.281.105552609'

Start Data Pump and perform the import. newsales is a TNS alias that points into the SALES PDB in the target CDB. If you have encrypted tablespaces, use the option encryption_pwd_prompt, which allows you to input the TDE keystore password; it can be omitted if there are no encrypted tablespaces.

$ impdp system@newsales parfile=sales_imp.par encryption_pwd_prompt=yes

Once the import has completed, you should examine the Data Pump log file for any critical errors. Check the appendix (see below) for ignorable errors:

[oracle@target]$ vi /home/oracle/xtts/sales_imp.log
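
A quick way to scan the log file for errors (a simple sketch):

[oracle@target]$ grep "ORA-" /home/oracle/xtts/sales_imp.log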

That’s it! Your data has been migrated. Now would be a good time to:

  • Test your application.
  • Start a backup.
  • Gather statistics – they were excluded from the export (see the sketch after this list).
  • Drop the database link that points to the source database.
  • Cleanup the file system:
    • /home/oracle/xtts
    • /u01/app/oracle/xtts_scratch
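
For the statistics, a minimal sketch could look like this; adjust the schema name, degree and scope to your environment:

TARGET/SALES SQL> exec dbms_stats.gather_schema_stats(ownname => 'SALES', degree => 4);
TARGET/SALES SQL> exec dbms_stats.gather_dictionary_stats;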

Conclusion

Even huge, TB-sized databases can be migrated with very little downtime by using incremental backups. Combined with the perl scripts from My Oracle Support, Full Transportable Export/Import makes it a simple process. In addition, you can even migrate to a new endian format, to a higher release and into a PDB in one operation. It requires Enterprise Edition, and you must have plenty of disk space – potentially twice the size of your database.

There is a video on our YouTube channel that demos the entire process. I suggest that you subscribe to our channel to get notified whenever there are new videos.

Thanks to my good colleague, Robert Pastijn, for supplying a runbook that was used as inspiration.

Appendix

If Source Database Is in OCI and Automatic Backup Is Enabled

If the source database is running in OCI and you have enabled automatic backup, you must make a few changes.

In xttprep.tmpl around line 319 change:

cp('backup for transport allow inconsistent ' ||

to

cp('set encryption off for all tablespaces;set compression algorithm "basic";backup for transport allow inconsistent ' ||

In xttdriver.pl around line 4268 change:

my $rman_str1 = "set nocfau;";

to

my $rman_str1 = "set nocfau;".
                "set encryption off for all tablespaces ;".
                "set compression algorithm 'basic' ;" ;

ORA-02085

If you get ORA-02085 when querying over the database link:

TARGET/SALES SQL> alter system set global_names=false;

Data Pump Parameters

Use network_link to specify the name of the database link that points back to the source database.

full=y and transportable=always instructs Data Pump to perform a full transportable export/import.

exclude=TABLE_STATISTICS,INDEX_STATISTICS excludes statistics from the import. It is better and faster to gather new, fresh statistics on the target database. If you insist on importing your statistics, you should use DBMS_STATS.
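
As a sketch, assuming a staging table called MIG_STATS owned by SYSTEM and a schema called SALES, moving the statistics with DBMS_STATS could look like this:

SOURCE/SALES SQL> exec dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'MIG_STATS');
SOURCE/SALES SQL> exec dbms_stats.export_schema_stats(ownname => 'SALES', stattab => 'MIG_STATS', statown => 'SYSTEM');

Transfer the MIG_STATS table to the target database (for example with Data Pump), then:

TARGET/SALES SQL> exec dbms_stats.import_schema_stats(ownname => 'SALES', stattab => 'MIG_STATS', statown => 'SYSTEM');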

exclude=SYS_USER excludes the import of the SYS user. In a PDB that is not even allowed, and most likely you are not interested in importing the definition of the SYS user.

exclude=TABLESPACE:"IN('TEMP')" excludes the temporary tablespace from the import. Most likely there is already a temporary tablespace in the new, target PDB. It is faster to create a TEMP tablespace in advance – and name it the same as in the source database.

A change was made to Spatial in 19c and some Spatial admin users are removed. To avoid errors/noise in the log file you can safely exclude them from the import by specifying exclude=SCHEMA:"IN('SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR')".

transport_datafiles is used to specify the data files that make up the tablespaces you are transporting. Specify the parameter multiple times to list more data files. You can use asmcmd to get the data file paths and names.
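
For example, to list the data files of the transported tablespaces with asmcmd (a sketch; the disk group matches dest_datafile_location in this demo, the search pattern is an assumption, and the environment must point to the Grid Infrastructure home and ASM instance):

[oracle@target]$ asmcmd find +DATA '*SALES*'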

Data Pump Ignorable Errors

Oracle Multimedia is desupported in 19c, but the code is still there. You can safely disregard this error:

Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
ORA-39342: Internal error - failed to import internal objects tagged with ORDIM due to ORA-00955: name is already used by an existing object.

The package was removed in 12.2. See ORA-39083 And ORA-04042 Errors On DBMS_DEFER_SYS When Importing Into 12.2 Database (Doc ID 2335846.1):

Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
ORA-39083: Object type PROCACT_SYSTEM failed to create with error:ORA-04042: procedure, function, package, or package body does not exist

Failing sql is:
BEGIN
SYS.DBMS_UTILITY.EXEC_DDL_STATEMENT('GRANT EXECUTE ON DBMS_DEFER_SYS TO "DBA"');COMMIT; END;

23 thoughts on “How to Migrate a Database Using Full Transportable Export Import and Incremental Backups”

  1. No, you are not missing something. When you use Data Pump (either with transportable tablespaces or a regular export) the upgrade and also PDB conversion happens implicitly when you import into the new target database (that is already running on the new version, and as a PDB). This is one of the biggest advantages of doing it this way.

  2. Thanks. When you do the final incremental restore and impdp as shown, does it import all users and other objects like scheduler jobs, public synonyms, public db links, etc.? I did not see that you created any users in the target before you ran the import.

  3. Hi Sanjay,
    Thank you for your question. When you use FTEX to migrate the data you should not create any objects in the target database. Data Pump will bring over the definitions of users, scheduler jobs, synonyms and so forth. And I am pretty sure it applies to DB links as well.
    Regards,
    Daniel

  4. Thanks Daniel for a really helpful Demo! Can you please clarify some questions below:

    – What do you mean by “Imagefile Backup” in your Video use case? Is it the Image Copy Backup – exact byte-for-byte copy of DB files?
    – Can the backup be compressed before moving it to target server to reduce the transfer time?
    – Can we migrate with the same scenario (full backup + incremental) but using backup sets with compression, as described in MOS note 2013271.1?
    – From the same MOS note I see that we can create the metadata export during the RMAN backup using "DATAPUMP FORMAT '/tmp/xplat_backups/trans_ts_dmp.bck'". Will this type of export work for your migration scenario? Will it maybe be faster compared to the Data Pump Full Transportable Exp/Imp like in your example?

    Thanks

  5. Hi Kirill,

    Those are good questions.

    1. You are right. What I mean is “Image Copy backup”.
    2. You can’t compress the initial image copy backup. Not natively with RMAN at least. You would have to use OS utilities for that.
    There is the option of using compressed backupsets for rolling forward the image copies, but that is not supported by the perl scripts. However, if you perform frequent incremental backups and restores, the size of the backups should not be of much concern.
    3. No, that procedure works only with a full backup. Hence, it is not really usable, I think, because a full backup simply takes too long in most cases.
    4. I can’t recall the exact details, but I am pretty sure that RMAN is doing a full transportable export as part of the backup. So it is the same as I do.

    I hope that helps.

    Thanks,
    Daniel

  6. Hi Daniel, Great video! I’m doing a migration using Oracle 19c from an AIX non-CDB to a Cloud at Customer PDB. In my impdp parfile I have:
    network_link=SRCLNK
    full=y
    job_name=imp_job
    transportable=always
    exclude=SYS_USER
    and the import is complaining with ORA-39038: Object path "SYS_USER" is not supported for FULL jobs.
    What am I missing?
    Thanks in Advance, Sue

  7. Hi Sue,
    Is that your complete par file? There are parameters missing, like “transport_datafiles”. See a complete example in the blog post.
    However, you can actually remove "exclude=SYS_USER" from the par file. I added it to the example because it can remove some of the "noise" from the Data Pump log file. If you don’t use it, you will see some ORA errors in the log file that involve SYS objects. However, in most cases these can be safely ignored, because SYS is not imported anyway.
    Regards,
    Daniel

  8. Hi Daniel,
    can I use this method on ExaCS to upgrade 12.1 to 19 on the same node/machine?
    Are there any better ways ?
    Regards
    Greg

  9. Hi Greg,
    Yes. You can use TTS to import into a higher release database and also into a PDB from a non-CDB. When you stay on the same endian format (Linux to Linux) you don’t need the extra disk space to convert the data files. But with this method you lose a lot of the internal stuff, like AWR, SQL Plan Baselines, SQL Profiles and so forth. Why not do a regular upgrade? You can use Flashback Database to quickly revert the upgrade if something fails.
    Optionally, if you want less downtime, you can do it in a rolling manner with DBMS_ROLLING. I know that my colleagues in the MAA team are working on creating a new procedure for that.
    Regards,
    Daniel

  10. Thank you for the prompt answer. I’m not sure what ‘regular upgrade’ means in ExaCS; this DBMS_ROLLING sounds promising though. My config is just a simple half-rack, 4-node RAC ExaCS already migrated from on-prem.
    Regards
    Greg

  11. Hi Greg,
    Our MAA team has the following recommendations for upgrades on ExaCS:
    Upgrading to 19c Oracle Database on Exadata Cloud Service (ExaCS) and Exadata Cloud at Customer Gen2 (ExaCC) (Doc ID 2628228.1)
    Regards,
    Daniel

  12. Hi Daniel,

    Thanks for sharing this content, this is very helpful.
    Have a question, I did this for cross-platform migration recently using this MOS “V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)” (The doc where you downloaded the xtt scripts). Can we refer to the same doc for migration from EXADATA X7 to X8M??

    Thank you!!

  13. Hi Ramesh,

    You can use this approach to migrate from one Exadata to another.

    However, to migrate between two Exadata systems you should look into using standby databases. This would be my preferred approach as it easily avoids downtime issues because you just need a switchover to complete the migration from one system to the other. If you also need to upgrade the database as part of the migration, then you will have to do a regular database upgrade afterward. I would consider this the easiest solution.
    And with the latest version of Zero Downtime Migration, you can use it to migrate to an Exadata on-prem as well. Very easy!

    Regards,
    Daniel

  14. Hi Daniel,

    Thanks for your reply.
    Yes, I have done a couple of migrations using the ZDM and Data Guard model. We have proposed the same to one of our customers. But they would like to go with this strategy for their Dev/Test boxes.

    Yes, The latest version of ZDM 21c is amazing. Thanks for your response again.

    Regards,
    Ramesh

  15. Hi Daniel,

    Thanks for sharing this content, this is very helpful!
    I did this for cross-platform migration recently using this MOS “V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)” (the doc where you downloaded the xtt scripts). The source was an 11.2.0.4 standalone instance on AIX, and the target is a 19c PDB on Exadata. The level 0 backup and restore were successful, as was the level 1 backup.

    Wonder if you ever saw an error like this while doing the level 1 restore (roll forward):

    ——————————————————————–
    Start rollforward
    ——————————————————————–

    Connected to an idle instance.
    Connected to an idle instance.
    ASMCMD-8102: no connection to Oracle ASM; command requires Oracle ASM to run
    ASMCMD-8102: no connection to Oracle ASM; command requires Oracle ASM to run

    So basically, instead of shutting down and starting up (in NOMOUNT) the TARGET PDB, it was doing it to the CDB instead!

    I checked xtt.properties file over and it looks good to me, and I am able to connect to Target PDB, startup/shutdown etc.

    Can you shed some light here? Let me know if you need any other information!

    Thanks!!

  16. Hi Daniel,

    Thanks for sharing, this is great content!

    I am doing a database migration using the XTTS Version 4 approach by following Doc ID 2471245.1. The source database is version 11.2.0.4 on AIX, and the destination database is version 19c on Exadata. I was able to complete the level 0 backup and restore successfully, and also the level 1 backup. But when I do the level 1 restore on the TARGET database (the PDB name is MKTSVS_DEV), it always tries to bring down the entire CDB (INFDCDB1) instead of the target PDB! The error message was:

    ——————————————————————–
    Start rollforward
    ——————————————————————–

    Connected to an idle instance.
    Connected to an idle instance.
    ASMCMD-8102: no connection to Oracle ASM; command requires Oracle ASM to run
    ASMCMD-8102: no connection to Oracle ASM; command requires Oracle ASM to run

    Wonder if you’ve ever seen this before? FYI, destconnstr was set up in the xtt.properties file along with other parameters. I was able to manually connect to the Target PDB and shutdown/startup etc.

    I tried a few times and always the same issue. Stuck now.. What else shall I check?

    Thanks!!

  17. Hi Elaine,
    I am glad you like the content. Thanks for the feedback.
    I haven’t seen this error before. In my blog post, I also migrate directly into a PDB, so it is odd that you encounter this problem. I suggest that you raise an SR at MOS.
    Regards,
    Daniel
