XTTS: Testing the Procedure Multiple Times Using Read-Only Tablespaces

You can perform multiple tests of the procedure for cross-platform transportable tablespaces (XTTS) and incremental backups without having to restore and recover the data files every time. That makes your testing effort a lot more efficient.

Default Behaviour

When you perform a Data Pump import of transportable tablespaces, the data files are plugged into the target database, and the tablespaces are set to read write mode immediately. This happens in the phase DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK.

Data Pump must turn the tablespaces into read write mode for several reasons:

  • Suppose you have tables with columns of type TIMESTAMP WITH TIME ZONE (TSTZ) and there is a difference in the database time zone file versions. In that case, all TSTZ columns must be updated to reflect the new time zone conventions.
  • The data file bitmap of free space must be updated in case you have segments in your tablespaces that are not imported.

When the tablespaces switch to read write mode, the data file headers are updated. The data files change in such a way that they now belong to that database (I imagine it has something to do with DBID, SCN, and so forth).

This also means that you can only import the tablespaces one time. If you want to repeat the process, you need to restore and recover the data files. Not even Flashback Database can save you.

TRANSPORTABLE=KEEP_READ_ONLY

A new option was added to Data Pump in Oracle Database 19c to solve this: TRANSPORTABLE=KEEP_READ_ONLY. Here is what the documentation says:

If specified, then data files remain in read-only mode. As a result of this setting, the tables containing TSTZ column data cannot be updated, and are dropped from the import. In addition, data file bitmaps cannot be rebuilt.

Keeping the tablespaces read only and leaving the data files untouched sounds good. But there are some restrictions:

  • If you have TSTZ columns, they can’t be checked and updated. This means that the entire table with a TSTZ column is skipped during import. You will see this error in the Data Pump log file: ORA-39339: Table "SCHEMA"."TABLE" was skipped due to transportable import and TSTZ issues. To solve it, you need to manually import the table afterward using a Data Pump table mode export and import.
  • If you have any segments in your data files that no longer belong to an object (e.g. if that specific object was not imported), the free space bitmap can't be updated, and you have free space that can't be used. You can solve this later on, when the tablespaces are in read write mode, using dbms_space_admin.tablespace_rebuild_bitmaps, as sketched below.
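
A minimal sketch of that later cleanup, once you have eventually set the tablespaces to read write mode (the tablespace name SALES_TS is made up):

alter tablespace sales_ts read write;
exec dbms_space_admin.tablespace_rebuild_bitmaps('SALES_TS')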

If you can live with these restrictions, you can use this feature.

Testing

You can use TRANSPORTABLE=KEEP_READ_ONLY when testing in the following way:

  1. Perform the backups on the source database using $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup.
  2. Transfer res.txt and restore/recover the data files using $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore.
  3. Now you want to test the migration procedure. Set the tablespaces in the source database in read only mode, do an incremental backup and perform the Data Pump export. Optionally, use a temporarily activated standby database to avoid interruptions on the primary production database.
  4. Set a guaranteed restore point in the target database.
  5. After recovering the data files on the target system, you perform the Data Pump import. You must set the Data Pump parameter TRANSPORTABLE=KEEP_READ_ONLY to leave the data files untouched (see the sketch after this list).
  6. When you are done with your tests, you either flash back to the restore point or delete the data in your database. The latter would be a series of DROP commands (schema, roles etc.) followed by DROP TABLESPACE commands. To preserve the data files, be sure to use DROP TABLESPACE ... INCLUDING CONTENTS KEEP DATAFILES. The KEEP clause is especially vital on ASM and with OMF.

    When you specify INCLUDING CONTENTS, the KEEP DATAFILES clause lets you instruct the database to leave untouched the associated operating system files, including Oracle Managed Files. You must specify this clause if you are using Oracle Managed Files and you do not want the associated operating system files removed by the INCLUDING CONTENTS clause.

  7. You can now continue to recover the data files on your target system with new incremental backups from the source database. You can roll forward the data files even though they have been plugged into a database. This is really cool. It allows you to repeat the test with fresh production data without the tedious task of completely restoring and recovering the data files.
  8. Repeat the test cycle as many times as you want. If the data files are left untouched using TRANSPORTABLE=KEEP_READ_ONLY and you don’t turn them into read write mode manually, you can repeat the process.
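
To illustrate steps 4 to 6, here is a minimal sketch. Set the guaranteed restore point in the target database (step 4):

create restore point mig_test guarantee flashback database;

Run the import with the data files left untouched (step 5; the directory, dump file, data file, and tablespace names are all made up):

$ impdp system directory=mig_dir dumpfile=xtts.dmp \
     transport_datafiles='/u02/oradata/TGT/sales_ts01.dbf' \
     transportable=keep_read_only

And clean up after the test while preserving the data files (step 6):

drop tablespace sales_ts including contents keep datafiles;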

Production Migration

Can you use TRANSPORTABLE=KEEP_READ_ONLY for the production migration? Yes, you can. But eventually, you will need to turn the tablespaces into read write mode.

I would not usually recommend it. But recently, I was involved in a specific case where it was helpful.

During testing, a customer ran into an issue where Data Pump hung during import. They killed the import and tried to repeat the Data Pump import. Unfortunately, Data Pump imports of transportable tablespaces are not resumable until Oracle Database 21c, so they had to start Data Pump all over. But the data files had already been touched by the target database, and Data Pump refused to proceed with the import because the data files were not as expected. The customer was advised on how to avoid the Data Pump hang, but they were still a little concerned about their upcoming production migration. They would like to keep the tablespaces in read only mode, just in case something similar happened. In that case, they could easily start all over because the data files would be untouched.

Conclusion

During a migration project, you should use the Data Pump feature TRANSPORTABLE=KEEP_READ_ONLY to ease your testing efforts. In some rare cases, it might also be helpful for production migrations.

XTTS: Testing the Procedure On Activated Standby Database

While preparing for the production migration of your Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, you should test the procedure.

It is often impossible to get a full copy of the production database or a comparable test system. If you still want to test the migration procedure, you need to take the backups on the production database, which might impact it. You could offload the backups to a standby database, but you would still have to periodically set tablespaces in read only mode to perform the Data Pump exports. This means downtime for your application.

But there is a way to do all the work on the standby database. This will allow you to test the migration procedure without affecting the primary production database.

You can also read about the procedure in the MOS note Using XTTs in a Data Guard Environment.

How To

You should use the procedure described in a previous blog post about backups on a standby database. When you reach the final incremental backup, follow the information below.

I assume you have conducted a level 0 backup and a number of incremental backups on the standby database. Now it is time for the final incremental backup. We will do this on the standby database as well. You don't need to touch the primary database except for a few minor changes: archive the current log and defer redo log transport.
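
A minimal sketch of those changes on the primary database (assuming log_archive_dest_2 is the destination that ships redo to the standby):

alter system archive log current;
alter system set log_archive_dest_state_2=defer;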

Now you have a consistent backup of the data files and a corresponding Data Pump export; both components were taken from a standby database. Transfer the files to your target database and test the procedure.

Production Migration

The heading says you can use this procedure for testing. How about the production migration?

Well, it could work, but I would not recommend it.

When you use this approach, you leave the primary database open for business while you take the final incremental backup. This opens the door to a potential catastrophe: users entering data into the open primary database after you have taken the last incremental backup. That would mean data loss!

Using the other standby database approach, you are sure that no more data is entered into the source database when you perform the final migration. You ensure this by setting the tablespaces read only in the primary database. This is why I do not recommend the approach with an activated standby database for your production migration.

Conclusion

You should test your migration procedure and get comfortable with your runbook. You can use a temporarily activated standby database if you don’t have a dedicated test environment. This allows you to perform realistic testing without interrupting the primary database and your users.

Zero Downtime Migration – Logical Online Migration and Testing

How can you test your OCI database before going live? With Logical Online migration it is possible using Flashback Database, restore or cloning. Let’s explore the options.

Flashback Database

Using Flashback Database is the easiest option and it is supported on all target platforms except Autonomous Database (shared and dedicated). In addition, your target database must be Enterprise Edition.

Does Oracle GoldenGate support Flashback Database? You can find the answer in the MOS note Does Goldengate Support Oracle RDBMS Flashback Features? (Doc ID 966212.1):

In a situation where there are only Replicats on a system, then FLASHBACK DATABASE… is fully supported if the Replicats are all using a checkpoint table and the trail files are available that go as far back as the flashback.

When you do Logical Online migrations, you only have Replicat processes in your target database. In addition, checkpoint tables are enabled by default. All good!

Flashback Database – How To

  1. Stop the replicat process by logging on to the GoldenGate hub and navigating to the Target Administration Server: How to stop replicat process in Oracle GoldenGate Microservices Architecture Hub. Alternatively, use the Admin Client as sketched after this list.
  2. Set a guaranteed restore point in the target PDB (named tgtpdb):
alter session set container=tgtpdb;
create restore point grp4test guarantee flashback database;
  3. Do your tests.
  4. Revert the changes by issuing a FLASHBACK PLUGGABLE DATABASE command:
alter session set container=cdb$root;
alter pluggable database tgtpdb close immediate;
flashback pluggable database tgtpdb to restore point grp4test;
alter pluggable database tgtpdb open resetlogs;
  5. Restart replicat process: How to start replicat process in Oracle GoldenGate Microservices Architecture Hub
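
If you prefer the command line over the web interface, you can also stop and start the replicat from the GoldenGate Admin Client. A minimal sketch, where the connection details and the replicat name REP1 are assumptions:

$ adminclient
connect https://gghub.example.com deployment Target as oggadmin password <password>
stop replicat REP1

And after the flashback:

start replicat REP1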

The above procedure uses flashback of the pluggable database. This feature was introduced in Oracle Database 12.2. If your target database is 12.1 or older, you have to flash back the entire CDB. If your target database is a non-CDB, you have to flash back the entire database.

Restore

Backup/restore is also an option. In OCI it is easy to back up and restore; however, it takes longer than using Flashback Database. The good thing is that you get to test your OCI backup and recovery strategy.

For Autonomous Database, you can use the automatic backups, which are enabled by default. In the other database cloud offerings, you must enable automatic backup yourself.

Restore – How To

  1. Stop replicat process (see above)
  2. Create backup (or rely on automatic backup)
  3. Do your tests
  4. Restore backup
  5. Restart replicat process (see above)

Clone

Cloning the database or the entire DB System is also an option. A benefit is that the GoldenGate replication can continue while you are doing the tests. When you use Flashback Database or restore, the replication must be stopped. Trail files will accumulate on disk, and at one point the target database will be very busy catching up with the lag. For very large and active systems, it might be desirable to work on a clone. However, cloning to a new system means that you are no longer testing the specific database that will be your future production database. Also, cloning involves creating new DB Systems, which has a cost.

Clone – How To

  1. Clone database or DB System
  2. Do your tests
  3. Discard database or DB System

Conclusion

Before going live in your new OCI database, you should test it. With the Logical Online method it is easy, and you can use well-known techniques. If you combine it with the recommendations from our webinar Performance Stability, Tips and Tricks and Underscores, you are well underway towards a successful migration.

Zero Downtime Migration – Physical Online Migration and Testing

Testing is an essential part of any migration project of your Oracle Database. With Zero Downtime Migration (ZDM) and the Physical Online method it has become a lot easier. Before going live (i.e. doing the Data Guard switchover) you can test on your production data on your future production system – the OCI database. That’s cool.

Concept

For the duration of your test, convert the OCI target database into a snapshot standby database. A short recap on snapshot standby databases:

  • A snapshot standby database is a type of updatable standby database that provides full data protection for a primary database.
  • A snapshot standby database receives and archives, but does not apply, redo data from its primary database. Redo data received from the primary database is applied when a snapshot standby database is converted back into a physical standby database, after discarding all local updates to the snapshot standby database.
  • A snapshot standby database diverges from its primary database over time because redo data from the primary database is not applied as it is received. Local updates to the snapshot standby database cause additional divergence. The data in the primary database is fully protected however, because a snapshot standby can be converted back into a physical standby database at any time, and the redo data received from the primary is then applied.

The plan is:

  1. Build the standby database in OCI.
  2. The migration is currently paused at ZDM_CONFIGURE_DG_SRC (-pauseafter ZDM_CONFIGURE_DG_SRC; see the sketch after this list).
  3. Convert the OCI target database – which is a physical standby – into a snapshot standby.
  4. Do your test. Do whatever you want. The database is protected by Flashback Database, so you can insert and delete data, truncate, add tablespaces, you name it.
  5. When you are done with your tests, convert the OCI target database back into a physical standby database. Implicitly, the database is flashed back, and now the redo is getting applied again.
  6. After a little while the target database is now back in sync again.
  7. Complete migration at your will.
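
For reference, a minimal sketch of pausing and later resuming the ZDM job (host names, the response file path, and the job ID are made up, and other required options are omitted):

$ zdmcli migrate database -sourcedb SRCDB -sourcenode srchost \
     -targetnode tgthost -rsp /u01/app/zdm/physical.rsp \
     -pauseafter ZDM_CONFIGURE_DG_SRC

$ zdmcli resume job -jobid 42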

How

You find the steps needed to convert to snapshot standby and back again below. You can repeat the process as many times as you want.

Manual

To convert the standby database to snapshot standby:

alter database recover managed standby database cancel;
shutdown immediate
startup mount
alter database convert to snapshot standby;
alter database open;

Now, the database is opened in READ WRITE mode and you can use it for testing. To convert back to a physical standby database:

shutdown immediate
startup mount
alter database convert to physical standby;
shutdown immediate
startup
alter database recover managed standby database disconnect from session;

Manual on RAC

To convert the standby database to snapshot standby:

sqlplus / as sysdba <<EOF
   alter database recover managed standby database cancel;
EOF
srvctl stop database -d $ORACLE_UNQNAME
srvctl start database -d $ORACLE_UNQNAME -o mount
sqlplus / as sysdba <<EOF
   alter database convert to snapshot standby;
   alter database open;
EOF

Now, the database is opened in READ WRITE mode and you can use it for testing. The database is only opened on one node. You can open the other nodes by executing the following on each of the nodes:

alter database open;

To convert back to a physical standby database:

srvctl stop database -d $ORACLE_UNQNAME
sqlplus / as sysdba <<EOF
   startup mount
   alter database convert to physical standby;
   shutdown immediate
EOF
srvctl start database -d $ORACLE_UNQNAME
sqlplus / as sysdba <<EOF
   alter database recover managed standby database disconnect from session;
EOF

Data Guard Broker

To convert the standby database to a snapshot standby database using broker (CDB19_fra3zt is the target DB_UNIQUE_NAME):

convert database 'CDB19_fra3zt' to snapshot standby;

Now, the database is opened in READ WRITE mode and you can use it for testing. To convert back to a physical standby database:

convert database 'CDB19_fra3zt' to physical standby;
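
Both commands are executed in DGMGRL, the Data Guard broker command-line interface. A minimal sketch of a session (the connection details are assumptions):

$ dgmgrl sys@CDB19_fra3zt
DGMGRL> convert database 'CDB19_fra3zt' to snapshot standby;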

Conclusion

One of the really cool features of ZDM is that you can use your standby database for testing in OCI before you decide to do the switchover. You can achieve this by converting to a snapshot standby database.

Finally, a thank you to my colleague Jose Bennani Pareja for helping out with RAC database information.
