What Happens to Your Oracle Data Guard During Conversion to Multitenant

Oracle Data Guard is an amazing piece of tech. It helps keep your data safe. When you convert to the multitenant architecture, it is crucial that you don’t jeopardize your Data Guard configuration.

Follow the steps below to bring along your standby database.

What’s the Problem?

When you prepare for multitenant conversion, you prepare two things:

  • Data files – you make the data files consistent by opening the non-CDB in read-only mode.
  • Manifest file – you create an XML file which contains information about the non-CDB.

The manifest file contains information about the data files, including the location. However, the manifest file lists only the location on the primary database. There is no information about the standby database.

When you plug in the non-CDB, the plug-in happens without problems on the CDB primary database. It reads the manifest file and finds the data files.

But what about the CDB standby database? Since the manifest file does not list the file location on the standby host, how can the standby database find the corresponding data files?

The Options

There are two options which you control with the standbys clause on the create pluggable database statement:

  • Enabled recovery:
    • You specify standbys=all, or you explicitly list the standby database in the standbys clause.
    • On plug-in, the CDB standby database must find the data files. How the standby database finds the data files depends on the configuration.
    • The new PDB is protected by Data Guard immediately on plug-in.
    • If the standby database fails to find the data files, recovery stops for the entire CDB. All your PDBs are now unprotected unless you use PDB Recovery Isolation (see appendix).
  • Deferred recovery:
    • You specify standbys=none, or you don’t list the standby database in the standbys clause.
    • On plug-in, the CDB standby notes the creation of the PDB but does not attempt to find and recover the data files.
    • The new PDB is not protected by Data Guard until you provide the data files and re-enable recovery as described in Making Use Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1). Typically, this means restoring all data files to the standby system. The other PDBs in the CDB standby are fully protected during the entire process.
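
As a sketch, this is how the two options might look on the create pluggable database statement (the PDB name and manifest path are hypothetical):

SQL> -- Enabled recovery: every standby must find the data files on plug-in
SQL> create pluggable database sales using '/u01/app/oracle/sales.xml'
     nocopy tempfile reuse
     standbys=all;

SQL> -- Deferred recovery: the standby records the new PDB but skips its data files
SQL> create pluggable database sales using '/u01/app/oracle/sales.xml'
     nocopy tempfile reuse
     standbys=none;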

Convert with AutoUpgrade

You must convert with deferred recovery on the CDB standby database. AutoUpgrade uses this approach by default:

upg1.manage_standbys_clause=standbys=none

When AutoUpgrade completes, you must follow the process to restore the data files on the CDB standby database and re-enable recovery.
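
The exact steps are in the MOS note mentioned above. As a rough sketch of the idea, assuming a PDB named SALES, a primary service named CDB1_PRIM, and redo apply not managed by the broker, you would run on the CDB standby:

SQL> -- Stop redo apply
SQL> alter database recover managed standby database cancel;

RMAN> # Restore the PDB's data files over the network from the primary
RMAN> restore pluggable database SALES from service CDB1_PRIM;

SQL> -- Re-enable recovery of the PDB and restart redo apply
SQL> alter pluggable database SALES enable recovery;
SQL> alter database recover managed standby database disconnect from session;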

There is no way to plug in with enabled recovery, not even with the alias trick described under Convert Manually below. That approach requires work on the primary and standby systems, and AutoUpgrade is a fully automated process that does not allow you to intervene midway.

If you set manage_standbys_clause to anything but the default to plug in with enabled recovery, you will most likely run into problems. Either the data files are missing on the standby system, or they are not at the right SCN. This stops the MRP (managed recovery process) in the standby database. Since the MRP is responsible for recovering all the other PDBs as well, you are not only breaking the recently added PDB, but also all other PDBs.

Convert Manually

ASM

You can plug in with enabled recovery and use the data files on the standby. The standby database searches the OMF location for the data files. ASM does not allow you to manually move files into an OMF location. Instead, you can create aliases in the OMF location as described in Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration (Doc ID 2273304.1). The standby database then follows the plug-in operation.
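
Doc ID 2273304.1 has the authoritative steps. Just to illustrate the mechanics, here is a sketch with hypothetical names, where CDB1_STBY is the CDB standby, SALES_STBY is the non-CDB standby, and <GUID> stands for the GUID of the incoming PDB:

ASMCMD> mkdir +DATA/CDB1_STBY/<GUID>
ASMCMD> mkdir +DATA/CDB1_STBY/<GUID>/DATAFILE
ASMCMD> mkalias +DATA/SALES_STBY/DATAFILE/USERS.281.1099469855 +DATA/CDB1_STBY/<GUID>/DATAFILE/users01.dbf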

This option won’t work if you use the as clone clause on the create pluggable database statement. The clause generates a new GUID, and since the GUID is part of the OMF location, you won’t be able to create the aliases upfront.

Alternatively, you can plug in with deferred recovery.

OMF in File System

You can plug in with enabled recovery and use the data files on the standby. The CDB standby database searches the OMF location for the data files. Either (see the sketch below):

  • Move the data files into the OMF location.
  • Create soft links in the OMF location for each data file pointing to the current location.
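
A sketch of the second option, assuming db_create_file_dest is /u01/app/oracle/oradata, the CDB standby’s db_unique_name is CDB1_STBY, and <GUID> stands for the GUID of the incoming PDB (all hypothetical):

$ # The link name must match the file name media recovery looks for
$ mkdir -p /u01/app/oracle/oradata/CDB1_STBY/<GUID>/datafile
$ ln -s /u02/oradata/sales/users01.dbf \
     /u01/app/oracle/oradata/CDB1_STBY/<GUID>/datafile/users01.dbf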

These options won’t work if you want to use the as clone clause. The clause generates a new GUID, and since the GUID is part of the OMF location, you don’t know the OMF location upfront.

If you set standby_pdb_source_file_directory in the CDB standby database, it looks for the data files in that directory. However, it always copies the data files into the OMF location, even if you specify create pluggable database ... nocopy. Setting standby_pdb_source_file_directory is, however, compatible with the as clone clause.

Alternatively, you can plug in with deferred recovery.

Regular Files

The database uses regular files when db_create_file_dest is empty.

If you plug in with enabled recovery, the CDB standby database expects to find the data files in the exact same location (path and file name) as on the primary database. The location is either the full path from the manifest file or the location specified by create pluggable database ... source_file_directory='<data_file_location>'.

If the data files are in a different location on the CDB standby database, you either (see the sketch after this list):

  • Set db_file_name_convert in your CDB standby database. This changes the name of each of the data files accordingly.
  • Set standby_pdb_source_file_directory in your CDB standby database. When media recovery looks for a specific file during plug-in, it searches this directory instead of the full path from the manifest file.
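
A sketch of the two settings on the CDB standby (paths are hypothetical; db_file_name_convert is not a dynamic parameter, so it requires a restart to take effect):

SQL> -- Option 1: translate the primary's paths to the standby's paths
SQL> alter system set db_file_name_convert='/u01/oradata/','/u02/oradata/' scope=spfile;

SQL> -- Option 2: point media recovery at a staging directory
SQL> alter system set standby_pdb_source_file_directory='/u01/stage/sales' scope=both;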

You can plug in using the as clone clause without problems.

Alternatively, you can plug in with deferred recovery.

Refreshable Clone PDBs

When you migrate a non-CDB using refreshable clone PDBs, you are using a clone of the non-CDB database. Thus, there are no existing data files on the standby database that you can use.

You can only create a refreshable clone PDB with deferred recovery (standbys=none). After you transition the refreshable clone PDB into a regular, stand-alone PDB using alter pluggable database ... refresh mode none, you must follow the process to restore the data files and re-enable recovery. If you use AutoUpgrade, you must wait until the entire job completes.
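
A sketch of the statements, assuming a database link named noncdb_link pointing to the non-CDB and a PDB named SALES:

SQL> -- On the CDB primary: create the refreshable clone with deferred recovery
SQL> create pluggable database sales from sales@noncdb_link
     refresh mode every 60 minutes
     standbys=none;

SQL> -- At cutover: stop refreshing and open the PDB
SQL> alter pluggable database sales refresh mode none;
SQL> alter pluggable database sales open;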

Until you have completed the recovery process, the PDB is not protected by Data Guard.

For further information, including how Oracle Cloud Infrastructure makes it easier for you, have a look at Sinan’s blog post.

Important

Whichever method you choose, you must check your Data Guard configuration before going live.

  1. Check the recovery status on all standby databases:

    select name, recovery_status
    from   v$pdbs;
    
  2. Test the Data Guard configuration by performing a switchover (see the sketch below).
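
A sketch using Data Guard broker, assuming a broker configuration with a standby named CDB1_STBY (hypothetical name):

DGMGRL> show configuration;
DGMGRL> validate database 'CDB1_STBY';
DGMGRL> switchover to 'CDB1_STBY';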

Don’t go live without checking your Data Guard configuration!

Appendix

PDB Recovery Isolation

PDB Recovery Isolation is a new feature in Oracle Database 21c.

In an Active Data Guard environment, PDB recovery isolation ensures that media recovery of a CDB on the standby is not impacted when one or more PDBs are not consistent with the rest of the CDB.

Source: About PDB Recovery Isolation

If you plug in a database with standbys=all and the standby database can’t find the data files, PDB recovery isolation kicks in:

  • The standby database disables recovery of the affected PDB.
  • The standby database restores the data files from the primary database.
  • After restore, the standby database re-enables recovery of the PDB.
  • The affected PDB is unprotected until the process is completed.
  • The other PDBs are unaffected by the situation.

PDB Recovery Isolation reduces risk and automates the resolution of the problem.

At the time of writing, it requires a license for Active Data Guard.

Thank You

A big thank you to my valued colleague, Sinan Petrus Toma, for teaching me about PDB recovery isolation.

Can Data Pump Export to ASM Storage?

The other day, I was helping my boss, Roy, with a benchmark for our session Data Pump New Features and Best Practice at Oracle DatabaseWorld at CloudWorld. I needed to find a database with access to very fast storage, and I decided to use a Base Database Service in OCI.

But the fast storage is on ASM. I had never tried exporting with Data Pump to ASM storage. Is it possible?

How to Export to ASM Storage?

First, I create a directory in ASM:

ASMCMD> cd DATA
ASMCMD> mkdir DATA_PUMP

Then, I create a database directory pointing to the ASM directory:

SQL> create directory myasmdir as '+DATA/DATA_PUMP';
SQL> grant read, write on directory myasmdir to ... ;

And start a Data Pump export:

expdp ... \
   directory=myasmdir \
   dumpfile=exp%L.dmp \
   logfile=exp.log

It fails:

Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation: nonexistent file or path [29434]

It turns out that the ASM directory can’t hold the log file. You must store it in a regular file system.

I create an additional directory for the log file:

SQL> create directory mylogdir as '/tmp';

Start the Data Pump export again, now redirecting the log file to regular storage:

expdp ... \
   directory=myasmdir \
   dumpfile=exp%L.dmp \
   logfile=mylogdir:exp.log

Appendix

When you work with Data Pump in Oracle Database 19c, you should always install the Data Pump bundle patch. In 19.20.0, it includes more than 150 Data Pump-specific fixes.
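
If you are in doubt whether the bundle patch is installed, you can check what datapatch has applied. A quick sketch:

SQL> select patch_id, description, status
     from   dba_registry_sqlpatch
     order  by action_time;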

A Word about Zero Downtime Oracle Grid Infrastructure Patching

In previous blog posts, I have shown you how to patch Oracle Grid Infrastructure 19c (GI) in a rolling manner. However, all those methods require a shutdown of the GI stack on each node. The database remains up all the time, but individual nodes will go down for a period.

You can change that with Zero Downtime Oracle Grid Infrastructure Patching (ZDOGIP):

Zero-downtime Oracle Grid Infrastructure patching enables patching of Oracle Grid Infrastructure without interrupting database operations. Patches are applied out-of-place and in a rolling fashion, with one node being patched at a time, while the database instances on this node remain operational. Zero-downtime Oracle Grid Infrastructure patching supports Oracle Real Application Clusters (Oracle RAC) databases on clusters with two or more nodes.

ZDOGIP achieves this with a bit of trickery:

When using Zero Downtime Patching, only the binaries in the Oracle Grid Infrastructure user space are patched. Additional Oracle Grid Infrastructure OS system software, kernel modules and system commands including ACFS, AFD, OLFS, and OKA, are not updated. These commands continue to run the version previous to the patch version.

Questions and Answers

Several questions came to my mind when I read about ZDOGIP. Here’s a summary:

How Can the Database Survive Without the ASM Instance?

ZDOGIP uses out-of-place patching and switches to a new, patched home. The database remains up while the GI stack restarts. The ASM instance restarts as well. How can the database survive that? The answer is Oracle Flex ASM. While the GI stack restarts, the database can connect to an ASM instance on another hub node and access the shared storage directly.

This is an extract of the alert log. It shows how a database switches to a remote ASM instance (+ASM2) during a zero downtime patching session:

2023-02-24T09:18:03.663616+00:00
ALTER SYSTEM RELOCATE CLIENT TO '+ASM2'
2023-02-24T09:18:09.855879+00:00
NOTE: ASMB (9427) relocating from ASM instance +ASM1 to +ASM2 (User initiated)
NOTE: ASMB (index:0) registering with ASM instance as Flex client 0x1409d79b6620c71c (reg:163770215) (startid:1129292367) (reconnect)
NOTE: ASMB (index:0) (9427) connected to ASM instance +ASM2, osid: 53135 (Flex mode; client id 0x1409d79b6620c71c)
NOTE: ASMB (9427) rebuilding ASM server state for all pending groups
NOTE: ASMB (9427) rebuilding ASM server state for group 2 (RECO)

What Are the Minimum Requirements?

Can I Use It with ASM Filter Driver and ASM Cluster File System?

Yes, but if you are using ASM Filter Driver (AFD) or ASM Cluster File System (ACFS) and the patch you are applying updates these components, special attention is needed. You can’t update the kernel drivers while GI is running, yet the kernel drivers must be updated eventually. With ZDOGIP, you can postpone the update of the kernel drivers, and thus postpone the restart of the entire GI stack including the database it manages.

… running with the older version of drivers is not supported for an extended period (e.g. restart should be completed in 24 hours of patching).

In short, if you are using AFD or ACFS, you more or less lose the benefit of ZDOGIP because the database must restart anyway.

This feature is recommended for the configurations that do not have (ACFS/AFD/OKA/OLFS).

You should expect that every Release Update contains patches for AFD and ACFS.

If you use AFD or ACFS, I recommend relying on rolling patch installation instead and investing your time in Application Continuity.

The quotes are from the MOS note Zero-Downtime Oracle Grid Infrastructure Patching (ZDOGIP) (Doc ID 2635015.1).

How Do I Use Zero Downtime Oracle Grid Infrastructure Patching Together with ASM Filter Driver and ASM Cluster File System?

After patching with ZDOGIP, you must restart the entire GI stack, including the local database instance. You must do this shortly after the patch apply.

The procedure involves executing root.sh -updateosfiles. You will find the full details in the documentation.

How Can I Tell Whether ASM Filter Driver or ACFS Is Installed?

To see whether your system uses AFD:

$ORACLE_HOME/bin/asmcmd afd_state

To see whether your system uses ACFS:

$ORACLE_HOME/bin/crsctl query driver activeversion -all

Does It Work for Single Instance Databases as Well?

No, this feature is for Oracle RAC databases only.

Can I Use Zero Downtime Oracle Grid Infrastructure Patching with Oracle Fleet Patching and Provisioning?

Yes, you can. There is a simple command line parameter that you can use, which tells Oracle Fleet Patching and Provisioning (FPP) to use ZDOGIP:

rhpctl move gihome ... -tgip

Patching becomes slightly more complicated when you use ZDOGIP. You can alleviate that complexity by using FPP.

What Do All the Abbreviations Mean?

When you read the documentation and the MOS notes, you will come across several abbreviations. Here’s a handy list of some of them:

Abbreviation  Meaning
ACFS          ASM Cluster File System
ADVM          ASM Dynamic Volume Manager
AFD           ASM Filter Driver
OKA           OS Kernel extensions
OLFS          Oracle Layered File System

XTTS: ASM Aliases and Why You Should Get Rid of Them

If you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups and your target database uses ASM, you should get rid of the aliases after the migration.

What Is an ASM Alias?

When you use ASM, there is tight control over the file names. ASM strictly enforces the naming standard dictated by Oracle Managed Files (OMF), and only the database can create file names that comply with OMF.

Sometimes it is handy to create files in other locations in ASM that still refer to a database file. Here you can use aliases. Aliases work like a symbolic link in the file system.
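
For example, a sketch with hypothetical names; mkalias takes the fully qualified file name followed by the alias:

ASMCMD> mkalias +DATA/CDB1/DATAFILE/USERS.259.685366091 +DATA/users01.dbf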

How can you tell if a file is an alias?

Alias Oracle ASM file names are distinguished from fully qualified file names or numeric file names because they do not end in a dotted pair of numbers. It is an error to attempt to create an alias that ends in a dotted pair of numbers, such as in the format USERS.259.685366091.

When you use ls -l, you can also tell whether a file is an alias. The column SYS (System-generated) is N, meaning this is not a proper OMF file. Also, you can see in the Name column that it is an alias; the => indicates it:

ASMCMD> ls -l +DATA
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    account_25.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855
DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    accountidx_26.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNTIDX.280.1099469855

You can read about Fully Qualified File Name Form in the ASM documentation, if you are interested.

Why Are the Aliases Created?

When the Perl script is restoring and recovering the data files on the target database, they do not belong to any database yet. The tablespaces have not been plugged into any database yet. Hence, it is impossible to figure out the right OMF name of the data files. As an alternative, ASM names the data files according to the syntax of the source database. For instance, it will use the source database GUID (select guid from v$containers) as part of the name. In addition, the Perl script creates ASM aliases using the following format: <dest_datafile_location>/<tablespace_name>_<file#>.dbf

When you perform the Data Pump import, you can refer to the aliases in your Data Pump parameter file (transport_datafiles). Using the aliases is especially useful if you plan on having a standby database.

How Do I Get Rid of the Aliases?

After performing the Data Pump import, the tablespaces are plugged into a database, and now the data files belong to a database. But the target database is referring to the data files either via:

  • An ASM alias
  • Or directly via the absolute file name. As described earlier, the absolute file path uses the OMF syntax of the source database

Let me illustrate that. Imagine:

  • In xtt.properties dest_datafile_location=+DATA.
  • My data file is named users01.dbf, belongs to tablespace USERS and has file ID 65.
  • Target DB_UNIQUE_NAME is SALES2.
  • Source database GUID is 86D5DC2587337002E0532AB2A8C0A57C.

How will the file be registered in the database?

  • If I used the aliases, it is known as +DATA/users_65.dbf.
  • If I used the absolute file name, it is known as +DATA/SALES2/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/users.280.1099469855. ASM generates the last two sets of numbers.

Neither of the two formats is a proper OMF name. What is the real OMF name? Imagine:

  • Target database GUID is DA495482D68D0220E0530F01000A98DF
  • The real OMF file name is (notice the change in GUID): +DATA/SALES2/DA495482D68D0220E0530F01000A98DF/DATAFILE/users.280.1099469855

You can get the GUID of a database by using select guid from v$containers.

In ASM, only the database can store a file in OMF syntax. You must fix this from the target database. The easiest way is to use online data file move. If you don’t specify the target location, the database will generate an OMF name:

SQL> --using file number
SQL> alter database move datafile 65;
SQL> --using full name
SQL> alter database move datafile '+DATA/users_65.dbf';

How does the move work?

  • It is an entirely online operation.
  • It is a block-by-block copy.
  • The database copies the data file. While the copy operation takes place, the two files are kept in sync until the database can switch to the new file. After that, the database removes the original file.
  • If the data file belongs to a PDB, you must switch your session to that container (see the sketch below).
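
For example, a short sketch assuming the data file with file ID 65 belongs to a PDB named SALES:

SQL> alter session set container=SALES;
SQL> alter database move datafile 65;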

You can learn more about online data file move in our YouTube video.

Why Bother?

If my database works fine, why should I worry? I can think of at least two reasons:

  • Comply with the naming standard
  • Avoid problems in other migrations

Comply with the naming standard

I highly recommend that you use and comply with a naming standard, including OMF. Data files that are not stored according to OMF should be moved to the correct location.

When I worked out in the real world as a DBA, I remember multiple occasions of lost data files. In many situations, a DBA had found a file apparently not belonging to any database – at least according to the naming standard. But the file was used by a database; it was just not stored in the correct location. When the file was removed: big problem in the database.

With many databases and many hosts, it is very important that you define standards and stick to them. Otherwise, you will end up in a big mess.

Avoid problems in other migrations

This is especially relevant if you need to perform multiple migrations to the same database host.

The Perl script used for the migration creates the aliases in the location specified by dest_datafile_location. The names of the aliases are very simple, and there is a risk that another migration will try to create the same alias.

Imagine you already did one migration. The database uses the alias +DATA/users_4.dbf. Now you want to make a second migration, and this database also wants to use +DATA/users_4.dbf. The same alias can’t be used for two different files. Big problem!

A user left a comment on my blog telling me this actually led to corruption in the first database. That risk is a very good reason for getting rid of the aliases and using only proper OMF file names.

Conclusion

ASM aliases are created automatically as part of the migration. The aliases are very useful during the migration, but I highly recommend getting rid of the aliases right after the migration.

XTTS: Target Database and Data Guard

Often, when you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, it is a big database. Big databases must often be protected by Data Guard.

How do you ensure that Data Guard protects the target database at the end of the migration?

Build After Migration

A simple solution is to build the standby database after the migration. There are some downsides to this approach:

  • It takes time. In many situations, the business requires that the database can’t go live until a standby database is in place. Having to build a standby database will prolong the downtime.
  • It puts a load on the primary database. Right after the migration, you don’t have your backups in place yet, so you will need to build the standby databases directly from the primary database. That requires a lot of I/O and network traffic. You might want to use those resources for other activities, like taking a level 0 backup or regathering statistics.
  • It can become even more complicated if you migrate into multitenant architecture. Rebuilding the standby database of an already running, active CDB might not be an option. The other PDBs in the CDB are compromised while the standby is rebuilt.

Restore Data Files to Standby Host

Here is an approach that offers much less downtime: as part of the migration, restore the data files onto the standby host as well. During the Data Pump import, the plug-in propagates to the standby database via redo apply.

When migrating with XTTS and the Perl script, you are familiar with running the backup on the source database (xttdriver.pl --backup). Also, you know how to restore and recover the data files on the target system (xttdriver.pl --restore).

Now, the idea is that you restore and recover the data files on the primary host and also on the standby host. During the Data Pump import, the data files are plugged into the primary database. The plug-in commands are propagated to the standby database via redo apply. The standby database can then plug in the data files, provided they are located in the same place as on the primary database. The rest of the Data Pump import propagates as well, and in the end, you have a working Data Guard environment.

You must recover the data files on the primary and standby database to the exact same SCN. Be sure to restore all backups to both the primary and standby database.

Overview of the process

How To

The following assumes that your target databases are using ASM. Also, I am migrating directly into a PDB. But the procedure is similar for migrations into a non-CDB database. Use the procedure described in my previous blog post or MOS note V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1). Adjust the procedure according to the following.

  • Prepare the target database and build a standby database before starting the migration.
  • In xtt.properties set destconnstr to point to the target primary database.
  • Copy xtt.properties to the target primary host and target standby host.
  • On target standby host, change xtt.properties and set destconnstr to point to the target standby database.
  • Create the cross-platform backups on the source database using xttdriver.pl --backup in the usual way.
  • Now copy the file res.txt to both the target primary and target standby database.
  • The backups in the scratch location should also be available to both the target primary and standby databases.
  • Now restore or recover the backups on the target primary and target standby database using xttdriver.pl --restore.
  • Repeat the backup/restore/recover process as many times as needed. Keep recovering the target primary and target standby database. It is very important that the data files on both the target primary and target standby database are recovered to the exact same SCN.
  • On the target databases the data files will be restored into ASM. The data file itself is created with an OMF file name:
    +<disk_group>/<db_unique_name>/<source_guid>/DATAFILE/<omf_name>
    
    It could be something like this:
    +DATA/CDB1_FRA356/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.281.1099469863
    
    • The disk group is what you specified in xtt.properties in dest_datafile_location
    • DB_UNIQUE_NAME differs between the primary and the standby database.
    • To comply with the OMF standard, the database must restore the data files into a folder corresponding to the PDB they belong to. However, at this point, the data files do not belong to any PDB; we haven’t done the Data Pump import plug-in operation yet. Instead, the database creates a folder with the GUID of the source database. If you are interested, you can get the GUID from the source database using select guid from v$containers.
    • The last part of the OMF file name is the tablespace name plus some numbers representing the file ID and a uniqueness suffix. This part differs on the primary and standby database as well.
  • We now know that the data file names are different on the primary and standby database. Previously, it was stated that the data files must be stored in the same location and have the same name. This is a problem! But the Perl script solves it by creating ASM aliases.
  • The aliases will be created in the location specified by dest_datafile_location in xtt.properties. Use asmcmd to verify it. The column SYS (System-generated) is N, meaning this is not a proper OMF file. Also, we can see in the Name column that it is an alias:
    ASMCMD> ls -l +DATA
    Type      Redund  Striped  Time             Sys  Name
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    account_25.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    accountidx_26.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNTIDX.280.1099469855
    ...
    
  • If you look at the directory the alias points to, you can see that the files are proper OMF files – real data files. Column SYS is Y, and Name does not contain the alias pointer =>:
    ASMCMD> cd +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE
    ASMCMD> ls -l 
    Type      Redund  Striped  Time             Sys  Name
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  Y    ACCOUNT.282.1099469855
    DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  Y    ACCOUNTIDX.280.1099469855
    ...
    
  • Thus, the aliases hide the fact that the data files have different names on the target primary and target standby database.
  • When you prepare the parameter file for the Data Pump import, be sure to reference the aliases – not the OMF named data files. The aliases have the same name on both the target primary and target standby database:
    $ cat import.par
    
    transport_datafiles=+DATA/account_25.dbf
    transport_datafiles=+DATA/accountidx_26.dbf
    transport_datafiles=...
    
  • Then start the Data Pump import. The data files are plugged into the primary database during DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK. When the redo containing that information is applied on the standby database, the standby database will plug in the data files as well.
  • When the Data Pump import completes, you can verify that the standby database survived and is still applying redo (see the sketch after this list). A switchover is also a good way of testing it.
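
As a final sanity check, here is a sketch assuming a broker configuration with a standby named CDB1_STBY (hypothetical name):

SQL> -- On the CDB standby: all PDBs should show ENABLED
SQL> select name, recovery_status from v$pdbs;

DGMGRL> show configuration;
DGMGRL> switchover to 'CDB1_STBY';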

Conclusion

You can prepare the target standby database in advance. This enables the target database to be protected by Data Guard as soon as the Data Pump import completes.

Restore and recover the data files to the target primary and target standby database. If they are recovered to the exact same SCN the plug-in of the data files propagates to the standby database via redo apply. After the Data Pump import, your target database has a fully functional standby database.
