XTTS: Target Database and Data Guard

When you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, it is often because the database is big. Big databases must often be protected by Data Guard.

How do you ensure that Data Guard protects the target database at the end of the migration?

Build After Migration

A simple solution is to build the standby database after the migration. There are some downsides to this approach:

  • It takes time. In many situations, the business requires that the database not go live until a standby database is in place. Having to build a standby database first prolongs the downtime.
  • It puts a load on the primary database. Right after the migration, you don’t have your backups in place yet, so you must build the standby database directly from the primary database. That requires a lot of I/O and network traffic; you might want those resources for other activities, like taking a level 0 backup or regathering statistics.
  • It can become even more complicated if you migrate into the multitenant architecture. Rebuilding the standby database of an already running, active CDB might not be an option, because the other PDBs in the CDB are left without Data Guard protection while the standby is rebuilt.

Restore Data Files to Standby Host

Here is an approach that offers much less downtime. As part of the migration, restore the data files onto the standby host as well. During the Data Pump import, the plug-in operation propagates to the standby database via redo apply.

When migrating with XTTS and the Perl script, you are already familiar with running the backup on the source database (xttdriver.pl --backup) and with restoring and recovering the data files on the target system (xttdriver.pl --restore).

Now, the idea is that you restore and recover the data files on the primary host and also on the standby host. During the Data Pump import, the data files are plugged into the primary database, and the plug-in commands are propagated to the standby database via redo apply. The standby database can plug in the data files, provided they are located in the same place as on the primary database. The rest of the Data Pump import propagates as well, and in the end you have a working Data Guard environment.

You must recover the data files on the primary and standby databases to the exact same SCN. Be sure to restore all backups to both the primary and standby databases.
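Since both sites must be rolled forward with the same backups, a simple guard is to confirm that the res.txt used for the restore is identical on both target hosts before each recover. The helper below is only an illustrative sketch, not part of the xttdriver.pl tooling; the idea of fetching the standby's copy first (for example via scp) and the file paths are assumptions:

```shell
# check_res_sync: compare the res.txt used on the target primary with the
# copy used on the target standby. If they differ, the two sites were rolled
# forward with different backups and will not end up at the same SCN.
check_res_sync() {
  primary_res="$1"   # res.txt used for the restore on the target primary
  standby_res="$2"   # res.txt fetched from the target standby host
  if diff -q "$primary_res" "$standby_res" >/dev/null 2>&1; then
    echo "in sync"
  else
    echo "MISMATCH: re-run xttdriver.pl --restore with the same res.txt on both sites" >&2
    return 1
  fi
}
```

Run it after every backup/restore cycle; a mismatch caught here is much cheaper to fix than an ORA-19729 during the final import.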

Overview of the process

How To

The following assumes that your target databases use ASM. Also, I am migrating directly into a PDB, but the procedure is similar for migrations into a non-CDB database. Use the procedure described in my previous blog post or in the MOS note V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1), adjusted as follows.
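For orientation, here is a minimal xtt.properties sketch covering the settings this procedure touches. All values are illustrative, not from a real migration; only destconnstr and dest_datafile_location are discussed in this post, the rest are the usual V4 parameters:

```properties
## Illustrative xtt.properties fragment - adjust to your environment
tablespaces=ACCOUNT,ACCOUNTIDX
platformid=2
src_scratch_location=/u01/xtts/src
dest_scratch_location=/u01/xtts/dest
dest_datafile_location=+DATA

## On the target PRIMARY host:
destconnstr=sys/<password>@cdb1-primary
## On the target STANDBY host, change this line to:
## destconnstr=sys/<password>@cdb1-standby
```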

  • Prepare the target database and build a standby database before starting the migration.
  • In xtt.properties set destconnstr to point to the target primary database.
  • Copy xtt.properties to the target primary host and target standby host.
  • On target standby host, change xtt.properties and set destconnstr to point to the target standby database.
  • Create the cross-platform backups on the source database using xttdriver.pl --backup in the usual way.
  • Now copy the file res.txt to both the target primary and target standby hosts.
  • The backups in the scratch location should also be available to both the target primary and standby databases.
  • Now restore and recover the backups on the target primary and target standby databases using xttdriver.pl --restore.
  • Repeat the backup/restore/recover process as many times as needed. Keep recovering the target primary and target standby database. It is very important that the data files on both the target primary and target standby database are recovered to the exact same SCN.
  • On the target databases the data files will be restored into ASM. The data file itself is created with an OMF file name:
    +<disk_group>/<db_unique_name>/<source_guid>/DATAFILE/<omf_name>
    
    It could be something like this:
    +DATA/CDB1_FRA356/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.281.1099469863
    
    • The disk group is the one you specified in dest_datafile_location in xtt.properties.
    • DB_UNIQUE_NAME differs between the primary and standby databases.
    • To comply with the OMF standard, the database must restore the data files into a folder corresponding to the PDB they belong to. However, at this point the data files do not belong to a PDB yet; the Data Pump plug-in operation hasn’t happened. Instead, the database creates a folder named after the GUID of the source database. If you are interested, you can get the GUID from the source database using select guid from v$containers.
    • The last part of the OMF file name is the tablespace name followed by numbers representing the file ID and a uniqueness counter. This part differs between the primary and standby databases as well.
  • We now know that the data file names differ between the primary and standby databases. But it was stated earlier that the data files must be stored in the same location and have the same name. This is a problem! The Perl script solves it by creating ASM aliases.
  • The aliases will be created in the location specified by dest_datafile_location in xtt.properties. Use asmcmd to verify it. The column SYS (System-generated) is N, meaning this is not a proper OMF file. Also, we can see in the Name column that it is an alias:
    ASMCMD> ls -l +DATA
    Type      Redund  Striped  Time             Sys  Name
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    account_25.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    accountidx_26.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNTIDX.280.1099469855
    ...
    
  • If you look at the directory the alias points to, you can see that the files are proper OMF files (real data files). Column SYS is Y, and Name does not contain the alias pointer =>:
    ASMCMD> cd +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE
    ASMCMD> ls -l 
    Type      Redund  Striped  Time             Sys  Name
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  Y    ACCOUNT.282.1099469855
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  Y    ACCOUNTIDX.280.1099469855
    ...
    
  • Thus, the aliases hide the fact that the data files have different names on the target primary and target standby databases.
  • When you prepare the parameter file for the Data Pump import, be sure to reference the aliases, not the OMF-named data files. The aliases have the same name on both the target primary and target standby databases:
    $ cat import.par
    
    transport_datafiles=+DATA/account_25.dbf
    transport_datafiles=+DATA/accountidx_26.dbf
    transport_datafiles=...
    
  • Then start the Data Pump import. The data files are plugged into the primary database during DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK. When the redo containing that information is applied on the standby database, the standby database will plug in the data files as well.
  • When the Data Pump import completes, you can verify that the standby database survived and is still applying redo. A switchover is also a good way of testing it.
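With the parameter file referencing the aliases, the plug-in itself is a single Data Pump import run against the target primary only; redo apply replays the same plug-in on the standby. The following invocation is a hedged sketch; the connect string, directory, and dump/log file names are assumptions, not values from this migration:

```shell
# Run on the target PRIMARY host only. The standby performs the same
# plug-in automatically when it applies the resulting redo.
impdp system@cdb1_pdb1 \
      parfile=import.par \
      directory=DATA_PUMP_DIR \
      dumpfile=xtts_metadata.dmp \
      logfile=xtts_import.log
```

Afterwards, check the standby alert log or the broker status to confirm that redo apply is still running.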

Conclusion

You can prepare the target standby database in advance. This enables the target database to be protected by Data Guard as soon as the Data Pump import completes.

Restore and recover the data files to the target primary and target standby database. If they are recovered to the exact same SCN, the plug-in of the data files propagates to the standby database via redo apply. After the Data Pump import, your target database has a fully functional standby database.


11 thoughts on “XTTS: Target Database and Data Guard”

  1. Hi,
    Thanks for such detailed notes on XTTS.
    I have one question: I am testing XTTS, and my target (PDB) has a standby as well.
    How do I connect to the standby PDB to restore the data files?
    destconnstr=sys/Password@TEST1P01, but this won’t work at the standby.


    1. Hi Mohsi,

      That’s a very good question. I can see that information was missing from the blog post, so I have updated it. I hope that clears it up. Otherwise, let me know, and I will elaborate.
      In short, in “xtt.properties” on the target standby host, you need to change destconnstr to point to the target standby database.

      Regards,
      Daniel


  2. “Restore and recover the data files to the target primary and target standby database. If they are recovered to the exact same SCN” –> How do you ensure the SCN is the same? We perform xttdriver.pl --restore and the SCN becomes different on primary and standby. Then the import breaks the Data Guard setup with ORA-19729.


    1. Hi,

      You ensure that by recovering the same backup on both target primary and target standby database:
      * Take backup1 on source
      * Restore backup1 to target primary
      * Restore backup1 to target standby
      * Take backup2 on source
      * Restore backup2 on target primary
      * Restore backup2 on target standby
      * ….

      As long as you ensure you restore the same backup on the target primary and target standby, you are good to go. The important part here is the SCN of the data files on the target systems. First, you create the target primary; then, you create the target standby. Now they are in sync and have the same SCN. Then you start the backup/restore cycles using the Perl script, and it rolls the data files forward. Since you are using the same backup on both target sites, the SCN of the data files will be the same.

      Regards,
      Daniel


  3. Hi Daniel
    I’ve tested the XTTS solution on a standalone database, and it works fine. But I also need it for a DG setup on an ODA server. Here I get this error:
    RMAN-05169: cannot use format specifier when connected to a pluggable database and recovery file destination parameter is set
    The DG configuration has been done with odacli commands, and yes, db_recovery_file_dest is configured:
    db_recovery_file_dest : +RECO(FG$FILEGROUP_TEMPLATE_MIRROR)
    log_archive_dest_1 : LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR= ….
    I’m not keen on changing the DG configuration made by Oracle, so is there any way I can get around that? Am I doing something wrong?
    I’m trying to do a TTS of a PDB to a PDB.
    Regards Børge


    1. Hi Børge,
      I have never done a migration on ODA myself, so I don’t know if there’s something specific on this platform.
      However, in MOS note “Xtts V4 Migration Phase 2 – Prepare Phase Step 2.3 Fails During restore with RMAN-05169: cannot use format specifier when connected to a pluggable database (Doc ID 2953429.1)” it is stated that this error can occur if the XTTS staging area and the FRA are set to the same location. You can try to change “dest_datafile_location” to a different location.
      If you need confirmation whether your changes are safe on ODA, you need to raise a service request in My Oracle Support.
      Regards,
      Daniel


  4. Hi Daniel
    Great suggestion – I’ll try with a different location for destconnstr.
    Many thanks for prompt reply.
    Regards
    Børge


  5. Hi Daniel
    Just a small follow-up.
    It worked fine with your solution to use another destconnstr, pointing to a dummy DB system. The only additional step was to correct the ownership of the data files in ASM after the restore. But that was it.
    Many thanks for your help
    Regards Børge


  6. Hi Daniel,
    While restoring on the standby site, the disk group and DB unique name are different.
    So I guess we need to update the xtt parameter “dest_datafile_location” for the standby restore.
    Also, once it restores, during the import we pick the data file aliases and execute the import on the primary, but I am wondering how it will resolve the different disk group for the standby?

    Thanks
    Kishore


    1. Hi,

      When you use the XTTS scripts, the data files are not restored to the OMF-compliant location. You need to move the data files into the OMF location after the migration.
      If the standby database has a different disk group, or you want to restore the files in a different place than on the target primary database, you simply specify a different location in the xtt.properties file.
      When Data Pump plugs in the data files on the primary site, the standby database will look for the data files in the same location as on the primary database. If that’s not their location, you need to specify a different location with the init parameter DB_FILE_NAME_CONVERT on the standby database.

      Happy migration,
      Daniel


