XTTS: ASM Aliases and Why You Should Get Rid of Them

If you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, and your target database uses ASM, you should get rid of the aliases after the migration.

What Is an ASM Alias?

When you use ASM, there is tight control over the file names. ASM strictly enforces the naming standard dictated by Oracle Managed Files (OMF), and only the database can create file names that comply with OMF.

Sometimes it is handy to create files in other locations in ASM that still refer to a database file. For that, you can use aliases. Aliases work like symbolic links in a file system.
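
For illustration, this is how you could create such an alias yourself with the asmcmd mkalias command; the file names are illustrative but follow the pattern you will see below:

ASMCMD> mkalias +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855 +DATA/account_25.dbf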

How can you tell if a file is an alias?

Alias Oracle ASM file names are distinguished from fully qualified file names or numeric file names because they do not end in a dotted pair of numbers. It is an error to attempt to create an alias that ends in a dotted pair of numbers, such as in the format USERS.259.685366091.

When you use ls -l, you can also tell whether a file is an alias. The column Sys (system-generated) is N, meaning this is not a proper OMF file. Also, you can see in the Name column that it is an alias. The => indicates it:

ASMCMD> ls -l +DATA
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    account_25.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855
DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    accountidx_26.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNTIDX.280.1099469855

You can read about Fully Qualified File Name Form in the ASM documentation, if you are interested.

Why Are the Aliases Created?

When the Perl script restores and recovers the data files on the target system, they do not belong to any database yet; the tablespaces have not been plugged in. Hence, it is impossible to determine the correct OMF name of the data files. As an alternative, ASM names the data files according to the syntax of the source database. For instance, it uses the source database GUID (select guid from v$containers) as part of the name. In addition, the Perl script creates ASM aliases using the following format: <dest_datafile_location>/<tablespace_name>_<file#>.dbf. For example, tablespace ACCOUNT with file number 25 becomes +DATA/account_25.dbf, matching the listing above.

When you perform the Data Pump import, you can refer to the aliases in your Data Pump parameter file (transport_datafiles). Using the aliases is especially useful if you plan on having a standby database.
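
A minimal sketch of such a parameter file, reusing the aliases from the listing above (all other required Data Pump parameters are omitted):

transport_datafiles=+DATA/account_25.dbf
transport_datafiles=+DATA/accountidx_26.dbf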

How Do I Get Rid of the Aliases?

After performing the Data Pump import, the tablespaces are plugged into a database, and the data files now belong to that database. But the target database refers to the data files either via:

  • An ASM alias
  • Or directly via the absolute file name. As described earlier, the absolute file path uses the OMF syntax of the source database.

Let me illustrate that. Imagine:

  • In xtt.properties dest_datafile_location=+DATA.
  • My data file is named users01.dbf, belongs to tablespace USERS and has file ID 65.
  • Target DB_UNIQUE_NAME is SALES2.
  • Source database GUID is 86D5DC2587337002E0532AB2A8C0A57C.

How will the file be registered in the database?

  • If I used the aliases, it is known as +DATA/users_65.dbf.
  • If I used the absolute file name, it is known as +DATA/SALES2/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/users.280.1099469855. ASM generates the last two sets of numbers.

Neither of the two formats is a proper OMF name. What is the real OMF name? Imagine:

  • Target database GUID is DA495482D68D0220E0530F01000A98DF.
  • The real OMF file name is (notice the change in GUID): +DATA/SALES2/DA495482D68D0220E0530F01000A98DF/DATAFILE/users.280.1099469855

You can get the GUID of a database by using select guid from v$containers.
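
For example (the PDB name is hypothetical):

SQL> select guid from v$containers where name = 'SALES_PDB';

GUID
--------------------------------
DA495482D68D0220E0530F01000A98DF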

In ASM, only the database can store a file in OMF syntax. You must fix this from the target database. The easiest way is to use online data file move. If you don’t specify the target location, the database will generate an OMF name:

SQL> --using file number
SQL> alter database move datafile 65;
SQL> --using full name
SQL> alter database move datafile '+DATA/users_65.dbf';
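
If you have many data files, you can generate a move statement for each of them; a minimal sketch, run in the container that owns the files:

SQL> select 'alter database move datafile ' || file# || ';' from v$datafile;

Spool the output to a script, review it, and run it. You can filter on the NAME column to skip files that already comply with OMF.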

How does the move work?

  • It is an entirely online operation.
  • It is a block-by-block copy.
  • The database copies the data file. While the copy operation takes place, the two files are kept in sync until the database can switch to the new file. After that, the database removes the original file.
  • If the data file belongs to a PDB, you must switch your session to that container.
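
For example, if the tablespace was imported into a PDB named SALES_PDB (a hypothetical name):

SQL> alter session set container=SALES_PDB;
SQL> alter database move datafile 65;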

You can learn more about online data file move in our YouTube video.

Why Bother?

If my database works fine, why should I worry? I can think of at least two reasons:

  • Comply with the naming standard
  • Avoid problems in other migrations

Comply with the naming standard

I highly recommend that you adopt and comply with a naming standard, including OMF. Data files that are not stored according to OMF should be moved to the correct location.

When I worked out in the real world as a DBA, I remember multiple occasions of data files being lost. In many situations, a DBA had found a file apparently not belonging to a database – at least according to the naming standard. But the file was used by a database; it was just not stored in the correct location. When the file was removed, the database was in big trouble.

With many databases and many hosts, it is very important that you define standards and stick to them. Otherwise, you will end up in a big mess.

Avoid problems in other migrations

This is especially relevant if you need to perform multiple migrations to the same database host.

The Perl script used for the migration creates the aliases in the location specified by dest_datafile_location. The names of the aliases are very simple, and there is a risk that another migration will try to create the same alias.

Imagine you already did one migration, and the database uses the alias +DATA/users_4.dbf. Now you perform a second migration, and this database also wants to use +DATA/users_4.dbf. The same alias can't be used for two different files. Big problem!

A user left a comment on my blog telling me this actually led to corruption in the first database. That risk is a very good reason for getting rid of the aliases and using only proper OMF file names.

Conclusion

ASM aliases are created automatically as part of the migration. The aliases are very useful during the migration, but I highly recommend getting rid of the aliases right after the migration.

XTTS: Target Database and Data Guard

Often, when you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, it is a big database. Big databases must often be protected by Data Guard.

How do you ensure that Data Guard protects the target database at the end of the migration?

Build After Migration

A simple solution is to build the standby database after the migration. There are some downsides to this approach:

  • It takes time. In many situations, the business requires that the database not go live until a standby database is in place. Having to build a standby database will prolong the downtime.
  • It puts a load on the primary database. Right after the migration, you don't have your backups in place yet, so you will need to build the standby database directly from the primary database. That requires a lot of I/O and network traffic. You might want to use those resources for other activities, like taking a level 0 backup or regathering statistics.
  • It can become even more complicated if you migrate into the multitenant architecture. Rebuilding the standby database of an already running, active CDB might not be an option, and the other PDBs in the CDB lose their standby protection while it is rebuilt.

Restore Data Files to Standby Host

Here is an approach that offers much less downtime. As part of the migration, restore the data files onto the standby host as well. During the Data Pump import, the plug-in operation propagates to the standby database via redo apply.

When migrating with XTTS and the Perl script, you are familiar with running the backup on the source database (xttdriver.pl --backup). Also, you know how to restore and recover the data files on the target system (xttdriver.pl --restore).

Now, the idea is that you restore and recover the data files on the primary host and also on the standby host. During the Data Pump import, the data files are plugged into the primary database. The plug-in commands are propagated to the standby database via redo apply. The standby database can then plug in the data files, provided they are located in the same place as on the primary database. The rest of the Data Pump import propagates as well, and in the end you have a working Data Guard environment.

You must recover the data files on the primary and standby database to the exact same SCN. Be sure to restore all backups to both the primary and standby database.

How To

The following assumes that your target databases are using ASM. Also, I am migrating directly into a PDB. But the procedure is similar for migrations into a non-CDB database. Use the procedure described in my previous blog post or MOS note V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1). Adjust the procedure according to the following.

  • Prepare the target database and build a standby database before starting the migration.
  • In xtt.properties set destconnstr to point to the target primary database.
  • Copy xtt.properties to the target primary host and target standby host.
  • On target standby host, change xtt.properties and set destconnstr to point to the target standby database.
  • Create the cross-platform backups on the source database using xttdriver.pl --backup in the usual way.
  • Now copy the file res.txt to both the target primary and target standby host.
  • The backups in the scratch location must also be available to both the target primary and target standby host.
  • Now restore and recover the backups on the target primary and target standby database using xttdriver.pl --restore.
  • Repeat the backup/restore/recover process as many times as needed. Keep recovering the target primary and target standby database. It is very important that the data files on both the target primary and target standby database are recovered to the exact same SCN.
  • On the target databases, the data files are restored into ASM. The data file itself is created with an OMF file name:
    +<disk_group>/<db_unique_name>/<source_guid>/DATAFILE/<omf_name>
    
    It could be something like this:
    +DATA/CDB1_FRA356/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.281.1099469863
    
    • The disk group is what you specified in xtt.properties in dest_datafile_location.
    • DB_UNIQUE_NAME differs between the primary and standby database.
    • To comply with the OMF standard, the database must restore the data files into a folder corresponding to the PDB they belong to. However, currently, the data files do not belong to a PDB; we haven't done the Data Pump import plug-in operation yet. Instead, the database creates a folder with the GUID of the source database. If you are interested, you can get the GUID from the source database using select guid from v$containers.
    • The last part of the OMF file name is the tablespace name plus numbers that ASM generates to identify the file and ensure uniqueness. This part will differ on the primary and standby database as well.
  • We now know that the data file names are different on the primary and standby database. Previously, it was stated that it is important that the data files are stored in the same location and have the same name. This is a problem! But the Perl script solves it by creating ASM aliases.
  • The aliases will be created in the location specified by dest_datafile_location in xtt.properties. Use asmcmd to verify it. The column SYS (System-generated) is N, meaning this is not a proper OMF file. Also, we can see in the Name column that it is an alias:
    ASMCMD> ls -l +DATA
    Type      Redund  Striped  Time             Sys  Name
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    account_25.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    accountidx_26.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNTIDX.280.1099469855
    ...
    
  • If you look at the directory where the alias is pointing to, you can see that the files are proper OMF files – real data files. Column SYS is Y, and Name does not contain the alias pointer =>:
    ASMCMD> cd +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE
    ASMCMD> ls -l 
    Type      Redund  Striped  Time             Sys  Name
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  Y    ACCOUNT.282.1099469855
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  Y    ACCOUNTIDX.280.1099469855
    ...
    
  • Thus, the aliases hide the fact that the data files have different names on the target primary and target standby database.
  • When you prepare the parameter file for the Data Pump import, be sure to reference the aliases – not the OMF-named data files. The aliases have the same name on both the target primary and target standby database:
    $ cat import.par
    
    transport_datafiles=+DATA/account_25.dbf
    transport_datafiles=+DATA/accountidx_26.dbf
    transport_datafiles=...
    
  • Then start the Data Pump import (see the sketch after this list). The data files are plugged into the primary database during DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK. When the redo containing that information is applied on the standby database, the standby database will plug in the data files as well.
  • When the Data Pump import completes, you can verify that the standby database survived and is still applying redo. A switchover is also a good way of testing it.
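
For reference, here is a minimal sketch of the final steps. It assumes that the remaining Data Pump parameters (dump file, directory, credentials, and so on) have been added to import.par:

$ impdp system parfile=import.par

Afterwards, you can check the apply lag on the standby database:

SQL> select name, value from v$dataguard_stats where name = 'apply lag';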

Conclusion

You can prepare the target standby database in advance. This enables the target database to be protected by Data Guard as soon as the Data Pump import completes.

Restore and recover the data files to the target primary and target standby database. If they are recovered to the exact same SCN, the plug-in of the data files propagates to the standby database via redo apply. After the Data Pump import, your target database has a fully functional standby database.
