New Webinars Coming Your Way

These days, there is an abundance of virtual events, and I feel exhausted by it from time to time. So you might ask yourself: is there really room for more virtual events?

The answer is: Yes, there is 🙂

We have added more upcoming webinars to our Virtual Classroom Series: Upgrade to Oracle Database 19c. If you enjoyed the previous ones, I am sure that you will love these as well.

#    Webinar                                          Date               Sign Up
10   How Low Can You Go? Zero Downtime Operations     October 21, 2021   Link
11   Secure Your Job – Fallback Is Your Insurance     November 11, 2021  Link
12   Migrating Very Large Databases                   December 9, 2021   Link
13   Data Pump Extreme – Deep Dive with Development   January 27, 2022   Link

Of course, we still keep our promise to you: all tech – no marketing, no buzzwords.

If you missed any of the webinars, head over to the Webinar page to find the recording and the slides. All the previous nine webinars are there as well.

I hope to see you there!

Zero Downtime Migration – Logical Offline Migration: How To Minimize Downtime

With Oracle Zero Downtime Migration (ZDM) 21.2, you can now perform logical offline migrations from IBM AIX and Oracle Solaris. Since it is an offline approach, the database is unavailable during the entire migration. Let’s look at how you can minimize downtime.

Before Migration

Statistics

Ensure that dictionary statistics are up-to-date in the source database. Data Pump does a lot of querying in the data dictionary, and accurate statistics are vital for good execution plans. I recommend gathering stats on SYS and SYSTEM:

begin
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

You could also use dbms_stats.gather_dictionary_stats. But it gathers statistics on all internal schemas, and most often, gathering on SYS and SYSTEM is sufficient:

begin
   dbms_stats.gather_dictionary_stats;
end;
/

Data

It goes without saying, but the less data, the faster the migration completes. If you have any junk data in your database, get rid of it.
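If you are unsure where the space goes, a query like this can help you spot the largest segments. It is just a helper sketch and assumes Oracle Database 12c or newer (for the FETCH FIRST clause):

-- List the 20 largest segments to spot candidates for cleanup
select owner, segment_name, segment_type,
       round(sum(bytes)/1024/1024/1024, 1) as size_gb
  from dba_segments
 group by owner, segment_name, segment_type
 order by sum(bytes) desc
 fetch first 20 rows only;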

Migration

Parallel

Apply a proper degree of parallelism. Rule-of-thumb:

  • On-prem: 2 x number of physical cores
  • OCI: Number of OCPUs

When you are importing in OCI, consider scaling up on CPUs. More CPUs, faster import. Bare Metal and Exadata DB Systems scale online, whereas Virtual Machines need around 10 minutes of downtime.

Import processes running in parallel can read from the same dump file. This means that the number of parallel processes for an import is not limited by the number of dump files (which is the case for an export). To illustrate with an example:

  • Export: parallel=8 – 8 dump files were created. Each parallel worker needs exclusive access to a dump file.
  • Import: parallel=16 – still only 8 dump files. Each parallel worker can read from all dump files, no locks required.

You can control Data Pump parallelism in ZDM using the export and import parallelism settings in the response file, as sketched below.
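To the best of my recollection, these are the relevant entries in the logical migration response file. Treat the parameter names as an assumption and verify them against the ZDM documentation for your version:

# Hedged sketch of a ZDM logical migration response file excerpt
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=16
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=16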

Data Pump Mode

Should you go via dump file or directly over a database link? Only your own tests can tell, but I am pretty sure you will find that dump file mode is the quickest.

When you export via a dump file, you also need to figure out how you can transfer it to the target database:

  • Direct Transfer – you can use either scp or rsync. At the risk of starting a religious discussion, I don’t think it makes much of a difference in this use case. If you need rsync on Exadata, you can find guidance in MOS Doc ID 1556257.1.
  • Via Object Store – you can use either curl or OCI CLI. I would expect you can get better performance with OCI CLI. But you need to test it in your environment.

Remember to ensure you have adequate disk space on the source database host to hold the dump file. Use mount points that can deliver good write performance, and ideally, they should be separated from the storage that holds the database data files.
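To size the dump file area up front, you can ask Data Pump for an estimate before the real export. A hedged example with a hypothetical schema name; note that the estimate reflects uncompressed data, so a compressed dump file will end up smaller:

$ expdp ... schemas=SALES estimate_only=yes estimate=blocks nologfile=yes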

SecureFile LOB

ZDM automatically transforms BasicFile LOBs into SecureFile LOBs – which is very good. Don’t turn it off. Imports run faster when SecureFile LOBs are in play.
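For reference, the snippet below shows the standard Data Pump transform that achieves this if you ever run such an import yourself. It is an illustration of the impdp option, not the exact command line ZDM generates:

$ impdp ... transform=lob_storage:securefile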

After Migration

Statistics

As soon as ZDM has completed the migration, gather dictionary stats in the target database. All those new objects that came in with the import have for sure made the statistics stale.

begin
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

Next, you need to figure out how you want to take care of the optimizer statistics. In a previous post, I discussed why this is needed and how to do it. If you decide to import statistics via DBMS_STATS, just ensure that you have gathered dictionary stats before.

Further Reading

I recommend that you also read the following posts:

Conclusion

When migrating from IBM AIX or Oracle Solaris, you use the logical offline approach, which means the database is down for the entire duration of the migration. But you have a few options to make it run as fast as possible.

Other Blog Posts in This Series

How To Upgrade An Encrypted Oracle Database and Convert to PDB

Using AutoUpgrade, you can upgrade your encrypted Oracle Database and convert to a pluggable database. The process is not entirely automated, so you must handle the TDE encryption key manually.

A new parameter called skip_tde_key_import is introduced. Here is what the documentation says:

(Optional) The default is NO. You can use this option for non-CDB-to-PDB and unplug/plug operations. When set to YES, the upgrade is run, but import of the source database KeyStore into the target database is skipped, without raising an error. AutoUpgrade will leave the PDB open in upgrade mode, so that you can import the keys manually yourself. After you import the keys, you must then restart the database in normal mode.

In other words, AutoUpgrade does everything except import of the TDE encryption key.

How To

This is what happens:

Overview of upgrading a database encrypted with TDE to Oracle Database 19c and converting it to a PDB

AutoUpgrade handles the green things – you handle the yellow thing. My environment:

  1. Non-CDB called FTEX running on Oracle Database 11.2.0.4
  2. FTEX is encrypted with TDE and has an auto-login keystore
  3. CDB called CDB2 running on Oracle Database 19c
  4. CDB2 is prepared for TDE and has a keystore defined

First, I ensure that AutoUpgrade is version 21.2 or newer:

java -jar autoupgrade.jar -version

I create a config file for AutoUpgrade. Notice that I have specified that the database is plugged into CDB2 using the parameter target_cdb. Also, notice that I have specified the new parameter skip_tde_key_import and set it to YES:

upg1.source_home=/u01/app/oracle/product/11.2.0.4
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=FTEX
upg1.target_cdb=CDB2
upg1.skip_tde_key_import=YES

Then, I analyze my database (FTEX) for any issues:

java -jar autoupgrade.jar -config FTEX.cfg -mode analyze

No issues are detected, so I proceed with the upgrade:

java -jar autoupgrade.jar -config FTEX.cfg -mode deploy

After the upgrade and conversion to PDB, I log on to CDB2. I want to check the outcome. Has FTEX been upgraded and converted?

SQL> select name, open_mode, restricted from v$pdbs

NAME      OPEN_MODE    RESTRICTED
PDB$SEED  READ ONLY    NO        
FTEX      READ WRITE   YES                

Open – but restricted. I check for plug-in violations:

SQL> select cause, type, message, status, action 
     from pdb_plug_in_violations 
     where name = 'FTEX' AND status not in ('RESOLVED')

CAUSE                TYPE       MESSAGE                                STATUS     ACTION
Wallet Key Needed    ERROR      PDB needs to import keys from source.  PENDING    Import keys from source.    

OK. This is expected. AutoUpgrade does not handle the TDE encryption key – see the yellow part of the arrow above. You have to do that yourself.

I start by merging the two keystores – the keystore of the old non-CDB database (FTEX) and the keystore of the CDB (CDB2). I merge the two keystores into the one of CDB2:

CDB$ROOT SQL> administer key management 
   merge keystore '/u01/app/oracle/admin/FTEX/wallet' identified by "oracle_4U" 
   into existing keystore '/u01/app/oracle/admin/CDB2/wallet/tde' identified by "oracle_4U" 
   with backup using 'merge_keystore';

Notice that I have specified the folders of the keystores and not the actual file names. I have also specified the keystore password for both keystores. And, finally, I tell the database to create a backup of the keystore before the merge (as a precaution – I recommend always doing that).

Now, I switch to the FTEX PDB. I need to configure FTEX to use the TDE encryption key that is merged from the old keystore. Each key in the keystore has a key ID and I tell the PDB to use the right one. I explain later on how to determine the key ID:

CDB$ROOT SQL> alter session set container=FTEX;

FTEX SQL> administer key management 
   use key 'Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' 
   force keystore identified by "oracle_4U" 
   with backup using 'use_key_ftex';

Now, the PDB knows the TDE encryption key and you can actually start to query data from an encrypted tablespace. However, there is still a plug-in violation saying the TDE encryption key has to be imported. The PDB insists that the encryption key is imported – rather than merged into the keystore. So I will export the key from the PDB and import it again:

FTEX SQL> administer key management 
   export keys with secret "secret-passphrase" 
   to '/etc/oracle/exported-keys-ftex' force keystore identified by "oracle_4U";
FTEX SQL> administer key management 
   import keys with secret "secret-passphrase" 
   from '/etc/oracle/exported-keys-ftex' force keystore identified by "oracle_4U" 
   with backup using 'import_key_ftex';
FTEX SQL> host rm /etc/oracle/exported-keys-ftex

The encryption key is saved in an encrypted file in the OS. The file is encrypted using a passphrase (secret-passphrase), and you should pick a better one than I did. Finally – and importantly – I remove the file from the OS when I am done. I don't want my encryption keys lying around in files all over the OS.

I restart the FTEX PDB:

FTEX SQL> alter pluggable database FTEX close immediate;
FTEX SQL> alter pluggable database FTEX open;

And the PDB is now open in READ WRITE mode and unrestricted:

SQL> select name, open_mode, restricted from v$pdbs

NAME      OPEN_MODE    RESTRICTED
PDB$SEED  READ ONLY    NO        
FTEX      READ WRITE   NO

I check for plug-in violations:

SQL> select cause, type, message, status, action 
     from pdb_plug_in_violations 
     where name = 'FTEX' AND status not in ('RESOLVED')

No rows selected.

Job done! Encrypted database upgraded and converted to a PDB.

Determine Key ID

Use the orapki utility to determine the key ID of the TDE encryption key. Specify the location of the old non-CDB keystore:

orapki wallet display -wallet /u01/app/oracle/admin/FTEX/wallet

Oracle PKI Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Requested Certificates: 
User Certificates:
Oracle Secret Store entries: 
ORACLE.SECURITY.DB.ENCRYPTION.Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY
ORACLE.SECURITY.TS.ENCRYPTION.BUCsXzQga4wfJ8gNSuptAPMCAwAAAAAAAAAAAAAAAAAAAAAAAAAA
Trusted Certificates: 
Subject:        OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Secure Server Certification Authority,O=RSA Data Security\, Inc.,C=US
Subject:        CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US

The TDE encryption key entry starts with ORACLE.SECURITY.DB.ENCRYPTION and doesn't end with MASTERKEY. In the above example, the key ID is Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.

If your non-CDB is Oracle Database 12c or later you can also get the key ID using a view:

SQL> select key_id from v$encryption_keys where key_use='TDE';

Why Do I Need to Export and Import?

Above you saw that I had to export and import the encryption keys, even after merging the keystores. You might ask: Why is that even needed? Why is the ADMINISTER KEY MANAGEMENT MERGE KEYSTORE command not enough? That’s a very good question. I have reached out to our security people to get their take on it. It sounds like a bug to me.

The Future

The above solution makes it easier to handle encrypted databases. But we are not satisfied yet. A future version of AutoUpgrade will have even better support for encrypted databases, and everything will be automated.

The biggest issue for us is to find a safe way to handle the TDE keystore password. We need to have a safe way of getting the password from you and storing it in memory until we actually need it.

We are just about to start testing the very first version of AutoUpgrade with proper TDE support. If you are interested in becoming a beta tester, reach out to me to work something out.

Conclusion

Converting encrypted databases to PDB is now possible in AutoUpgrade. AutoUpgrade mainly handles the process, but you have to manage the TDE encryption keys yourself.

Oracle Database 21c Is Here

Last week, Oracle released Oracle Database 21c for additional platforms: Linux and Exadata. Other platforms will follow. Keep an eye on Release Schedule of Current Database Releases (Doc ID 742060.1) for further information.

Things to Notice

In my part of the Oracle Database, there are a few things I want to highlight.

To get all the details, visit the Upgrade and Utilities part of the new features documentation. There are some good examples of how the features can be used.

Behaviour Changes

Read-Only Oracle Home (ROOH) is now the default. Be sure to set the following environment variables to control the location of these directories:

  • ORACLE_BASE_HOME
  • ORACLE_BASE_CONFIG

I like ROOH, but it takes some time to get used to. For instance, network/admin files (tnsnames, sqlnet) and dbs files (pfile, spfile) are now in a new location.
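If you need to check where these directories point on a given system, recent Oracle homes ship two small utilities for it. A hedged example – I believe the binaries are present in 18c and newer homes, but verify in your environment:

$ $ORACLE_HOME/bin/orabasehome      # prints the ORACLE_BASE_HOME location
$ $ORACLE_HOME/bin/orabaseconfig    # prints the ORACLE_BASE_CONFIG location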

The Pre-Upgrade Information Tool or preupgrade.jar is removed and replaced by AutoUpgrade. A few new parameters have been introduced to make the transition easier.

Innovation Release

Remember, 21c is an innovation release, which means a shorter support window than Long Term Releases such as Oracle Database 19c. If you adopt Innovation Releases, you should be prepared to upgrade to the next database release within one year after the next database release ships.

I would not recommend that you upgrade your production systems to Oracle Database 21c due to the limited support period. Not unless you are prepared to upgrade the database soon again – when support runs out. Oracle Database 19c is the current Long Term Support release. I recommend that for production databases.

Different release types for Oracle Database - innovation vs long term support

To learn more about innovation release and our release model, have a look at our slide deck. We discuss it in the first chapter.

New Features

I want to mention a few new features. They haven't attracted as much attention as the marquee features, but they are still cool.

Expression-based init.ora parameters make it possible to base database parameters (init.ora) on calculations over the system's configuration. For example, you can set the database parameter CPU_COUNT to half the number of CPUs (on Windows):

alter system set cpu_count='$NUMBER_OF_PROCESSORS/2';

For more details, check out my video on YouTube.

Placeholders in SQL DDL Statements can improve application security because sensitive information, like passwords, doesn’t need to be hardcoded in SQL DDL. Example: You can make this statement:

CREATE USER :!username IDENTIFIED BY :!password ...

And Oracle Call Interface programs can substitute the placeholders into:

CREATE USER "DANIEL" IDENTIFIED BY "MyS3cr3tP!d" ...

This is similar to data binding but occurs in Oracle Client.

Conclusion

The complete 21c documentation is online, so I suggest that you head on over there and have a look. In the upgrade guide, you can find the list of behavior changes and also deprecated and desupported functionality. And finally, but most interesting perhaps, is Learning Database New Features.

Try it out in Always Free ADB or explore the Oracle LiveLabs.

Upgrade with Less Downtime

You can upgrade your Oracle Database with less downtime. With the release of AutoUpgrade 21.2 we added the fast deploy option. It allows you to run the preupgrade fixups while the database is still online. Only the actual upgrade will now require downtime.

The Usual Upgrade

When you upgrade the usual way, i.e., using AutoUpgrade in deploy mode, then you:

  1. Analyze the database – java -jar autoupgrade.jar -mode analyze. This is non-intrusive and doesn't change anything on the database. You get a report, and you can fix any issue that AutoUpgrade can't fix for you.
  2. Upgrade the database – java -jar autoupgrade.jar -mode deploy. The database is analyzed again. Based on the findings of the analysis, a set of preupgrade fixups are executed. Finally, the database upgrade takes place.

Best practice is to use AutoUpgrade in deploy mode

During step #2, the database is not available – you have downtime. Even though the database was analyzed previously, it will be re-analyzed to catch any new issues. This is the safest approach. From the first analysis (in step #1) to the second analysis (in step #2), potentially new issues can occur.

We talked to a few of our customers who are really trying to reduce the downtime of an upgrade. For them – every second counts and has a significant impact on their business. They asked us to make the upgrade faster.

Fast Deploy Mode

One customer suggested moving the final analysis and preupgrade fixup outside of the downtime window. Shortly before the downtime would start, they would like to run an analysis and the preupgrade fixups. Next, they would wait until the downtime window starts and do only the upgrade.

I must stress: there is a risk that a new issue pops up between the fixups and the upgrade – but this one customer is willing to take the risk. In exchange, they get upgrades with less downtime.

We call this approach fast deploy. The procedure is:

  1. Analyze the database – java -jar autoupgrade.jar -mode analyze
  2. Run the preupgrade fixups – java -jar autoupgrade.jar -mode fixups
  3. Now downtime starts
  4. Upgrade the database – java -jar autoupgrade.jar -mode upgrade

To reduce downtime use the new fast deploy mode - with increased risk

Fast deploy requires AutoUpgrade 21.2 or newer. The target database release can be any of the target releases already supported by AutoUpgrade, meaning 12.2 or higher.

Here are the details of the feature – from the AutoUpgrade change log:

This feature enables DBAs to run the prechecks and prefixups while the database is still online and then run deploy to complete the upgrade. Once the fixups have been run on the source database, the DBA can skip the prechecks and prefixup stages and proceed directly to the phases that follow. For example:

Database can be online:
java -jar autoupgrade.jar -mode fixups -config yourconfig

Upgrade time:
java -jar autoupgrade.jar -mode upgrade -config yourconfig

The -mode upgrade command proceeds directly to the phases that follow the prechecks and prefixups stages during deploy. The main goal of this feature is to reduce database downtime by running the fixups when your database is online and open for business and then skipping the fixups when your database is offline. See Oracle documentation for further details.

Conclusion

If your application is really sensitive to downtime, you can now upgrade with less downtime using fast deploy. It introduces a slight risk that the preupgrade analysis will not detect an issue that causes trouble later on. You have to decide whether that risk is acceptable to you.

Our recommendation is to use the old approach – AutoUpgrade in deploy mode. Consider fast deploy only if downtime really hurts.

If you are interested in seeing which other things got put into AutoUpgrade 21.2, you can check the change log.

Zero Downtime Migration – Logical Online and Sequences

When migrating with Oracle Zero Downtime Migration (ZDM) and the logical online method, you use Data Pump for the initial load and Oracle GoldenGate to keep the databases in sync. When using Oracle GoldenGate, you need to pay attention to your sequences; otherwise, you will probably end up with a lot of ORA-00001 unique constraint violated errors.

The Problem

I think it is best to illustrate with an example:

  • Source database: Sequence s1 currently has a value of 100.
  • Source database: ZDM starts and performs the export for the initial load.
  • Target database: The Data Pump import creates sequence s1 with current value 100.
  • Source database: A user inserts a row and uses the sequence: insert into orders values(s1.nextval ... – s1.nextval takes the next number from the sequence, and the statement now looks like this: insert into orders values(101 ...
  • Target database: GoldenGate replicates the statement. It does not take a new value from the sequence on the target. Doing so would corrupt the data because there is no guarantee that the same number would be taken. Instead, it uses the statement as it was executed on the source: insert into orders values(101 ...
  • Source database: Now it is time to complete the migration. Users are disconnected.
  • Target database: GoldenGate synchronizes the final changes before users connect to the target database.
  • Target database: A user inserts another record into orders: insert into orders values(s1.nextval ... – s1.nextval takes the next number from the sequence on the target database. It was created with current value 100, so the next value is 101: insert into orders values(101 ... But 101 has already been used, and typically, there is a primary key or unique key constraint on such a column. This causes ORA-00001 unique constraint violated.

I talked to our GoldenGate experts, and it is a common pitfall. During migrations, sequences are often forgotten, and soon after the migration they start seeing ORA-00001 unique constraint violated.

The Solution

After the switchover, you must ensure that the sequences have a proper value. The sequences must be forwarded or advanced. You can do that in many ways. Here are some ideas:

Recreate sequences

Right after the migration has been completed and GoldenGate has done the final synchronization, but before users are connecting, you can recreate the sequences.

  • Drop the sequences on the target database.
  • Recreate the sequences either using DBMS_METADATA or Data Pump.
  • Ensure to adjust the privileges on the sequences.

If you can wrap all those changes into a shell script, you can have ZDM execute it automatically as part of the migration flow. Check out Pro Tip 5: Adding Custom Scripts from a previous blog post.

Forward or advance the sequence

Another approach is to forward or advance the sequence. Increment the sequence on the target database until the next value is high enough. What do I mean by high enough?

  • The value of the same sequence on the source database.
  • If you know the sequence is only used to generate numbers for one column (let’s call it t1.c1), then get the highest value from that column SELECT max(c1) FROM t1. Typically, such a column is a primary key column with a unique index underneath, so a SELECT max should go very fast.

Now that you have the desired new value for the sequence, you can calculate how much to increment by. Example: the source database sequence (DBA_SEQUENCES.LAST_NUMBER) is at 1500 and the target database sequence is at 1000, so you need to increment by 500:

alter sequence seq1 increment by 500;
select seq1.nextval from dual;
alter sequence seq1 increment by 1;

If you use the approach of SELECT max then you need to increment by 501. In any case, better increment by too much than too little.

Similar to the former option, you can put the commands into a shell script and have ZDM execute it. A small PL/SQL sketch of the idea is shown below.
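This sketch advances a sequence until its next value exceeds the highest value already used in a column. The sequence seq1 and the column t1.c1 are hypothetical names; adapt it to your own objects and add proper error handling before using it:

declare
   l_max  number;
   l_next number;
   l_diff number;
begin
   -- highest value already used in the column that is fed by the sequence
   select max(c1) into l_max from t1;
   -- consume one value to see where the target sequence currently is
   select seq1.nextval into l_next from dual;
   l_diff := l_max - l_next + 1;
   if l_diff > 0 then
      -- temporarily raise the increment, bump the sequence once, then reset it
      execute immediate 'alter sequence seq1 increment by ' || l_diff;
      execute immediate 'select seq1.nextval from dual' into l_next;
      execute immediate 'alter sequence seq1 increment by 1';
   end if;
end;
/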

Replicate sequence changes

GoldenGate also has the possibility of replicating the changes to the sequences themselves. If you decide to use this with ZDM, it requires additional configuration of GoldenGate, which is why I prefer any of the former options. If you want to know more about it, check out Oracle GoldenGate 19.1 – Using Oracle GoldenGate for Oracle Database, chapter 8.1 Installing Support for Oracle Sequences.

Conclusion

If your database uses sequences and you are using Oracle GoldenGate as part of your migration, make sure to handle the sequences properly. Otherwise, you will fall into a typical pitfall of GoldenGate migrations.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Migration and Statistics

If you have decided to do a logical migration of your Oracle Database with Zero Downtime Migration (ZDM), here is something important about optimizer statistics. You must manually take care of the optimizer statistics on the target database after the Data Pump import, either by regathering the statistics or by transporting them from the source database.

Background

Whenever I talk about migration of Oracle Database with Data Pump, I always mention that it is best practice to exclude optimizer statistics from the Data Pump export. Here is how you do it:

$ expdp ... exclude=statistics

Why is that recommended? Data Pump is not very good at extracting the optimizer statistics. There is nothing wrong with the statistics when they are imported. But it can take a very long time to do the export of statistics.

ZDM and Data Pump

The development team behind ZDM wanted the tool to be easy to use. Also, they wanted ZDM to use all the best practices that come with the various other tools that ZDM uses. When they talked to us about Data Pump, we told them to exclude statistics.

BUT – this also leaves you in a situation where you have a database completely without optimizer statistics. It goes without saying that it is a disaster waiting to happen. You must ensure that optimizer statistics are present on the target database before allowing the users to connect to it.

Regather

One option is to regather optimizer statistics on the target database after the Data Pump import. When the Data Pump import completes, and before you proceed with the switchover, it is time to regather the statistics. Typically, you would start ZDM something like this:

$ zdmcli migrate database .... -pauseafter ZDM_MONITOR_GG_LAG

It will start by performing the initial load of the database with Data Pump. Then it will configure GoldenGate before it pauses – waiting for your signal to complete the migration. At this time, use DBMS_STATS to gather statistics:

SQL> exec dbms_stats.gather_database_stats;

This has some drawbacks:

  • It requires time and resources – which might not be the biggest problem. The source database is still open for business. We haven’t performed the switchover yet.
  • Column usage information (table COL_USAGE$) is not populated, so in some cases, no histograms will be created. This will happen if the database is supposed to automatically determine whether histograms are needed (method_opt includes size auto). In that case, you can merge the column usage information from another database via a database link, which can be a good idea if your application is depending on histograms.
  • The table, schema, or database statistics preferences are not present. It could be degree, method_opt, stale_pct or any other preferences that you can set with DBMS_STATS.SET_TABLE_PREFS (or schema or database-wide preferences). These preferences can be transferred to the new system, which is what I will talk about next.

Transferring Statistics

Another option is to transfer the statistics using DBMS_STATS. We have covered this in detail in a webinar, so I suggest you watch Performance Stability, Tips and Tricks and Underscores for all the details.

In short,

  1. The optimizer statistics in the source database are extracted from the data dictionary and stored in a transportable format in a regular heap table (referred to as a staging table).
  2. Using Data Pump, you move that table to the target database.
  3. Then you put the statistics into the data dictionary of the target database so that optimizer can use them.

Pro tip: You should perform step #1 before you start ZDM and store the staging table in one of the schemas that you are migrating with ZDM. That way, you don’t have to move the table manually. It is moved by ZDM together with the real data.
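A minimal sketch of steps 1 and 3 using DBMS_STATS, assuming a schema called APP and a staging table called STATS_STAGING (both names are hypothetical):

-- On the source database, before starting ZDM
begin
   dbms_stats.create_stat_table(ownname => 'APP', stattab => 'STATS_STAGING');
   dbms_stats.export_schema_stats(ownname => 'APP', stattab => 'STATS_STAGING');
end;
/

-- On the target database, after ZDM has moved the staging table with the initial load
begin
   dbms_stats.import_schema_stats(ownname => 'APP', stattab => 'STATS_STAGING');
end;
/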

One thing to be aware of is that the table, schema, or database statistics preferences are not transferred when you use e.g. DBMS_STATS.EXPORT_TABLE_STATS. There are dedicated DBMS_STATS procedures for transferring the statistics preferences as well.

Conclusion

You must figure out how to move the optimizer statistics into your target database when you use ZDM to perform logical migrations. If not, your target database will be without optimizer statistics, which is a disaster waiting to happen.

Appendix

You might now ask: if Data Pump is bad at exporting statistics, but there are already better ways available in the database, why don't we change the Data Pump code? And you are right – but so far, other things have been prioritized. I would love to see this one day fully embedded in Data Pump.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Online Migration and Testing

How can you test your OCI database before going live? With the Logical Online method, it is possible using Flashback Database, restore, or cloning. Let's explore the options.

Flashback Database

Using Flashback Database is the easiest option and it is supported on all target platforms except Autonomous Database (shared and dedicated). In addition, your target database must be Enterprise Edition.

Does Oracle GoldenGate support Flashback Database? You can find the answer in the MOS note Does Goldengate Support Oracle RDBMS Flashback Features? (Doc ID 966212.1):

In a situation where there are only Replicats on a system, then FLASHBACK DATABASE… is fully supported if the Replicats are all using a checkpoint table and the trail files are available that go as far back as the flashback.

When you do Logical Online migrations, you only have replicat processes on your target database. In addition, checkpoint tables are enabled by default. All good!

Flashback Database – How To

  1. Stop the replicat process by logging on to the GoldenGate hub and navigating to the Target Administration Server: How to stop replicat process in Oracle GoldenGate Microservices Architecture Hub
  2. Set a guaranteed restore point in the target PDB (named tgtpdb):
alter session set container=tgtpdb;
create restore point grp4test guarantee flashback database;
  3. Do your tests.
  4. Revert the changes by issuing a FLASHBACK PLUGGABLE DATABASE command:
alter session set container=cdb$root;
alter pluggable database tgtpdb close immediate;
flashback pluggable database tgtpdb to restore point grp4test;
alter pluggable database tgtpdb open resetlogs;
  5. Restart the replicat process: How to start replicat process in Oracle GoldenGate Microservices Architecture Hub

The above procedure uses flashback of the pluggable database. This feature was introduced in Oracle Database 12.2. If your target database is 12.1 or older, you have to flash back the entire CDB. If your target database is a non-CDB, you have to flash back the entire database.
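One housekeeping step worth adding: when you are done testing, drop the guaranteed restore point again so it does not keep accumulating flashback logs. A short sketch, using the restore point name from above:

alter session set container=tgtpdb;
select name, guarantee_flashback_database from v$restore_point;
drop restore point grp4test;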

Restore

Backup/restore is also an option. In OCI it is easy to back up and restore; however, it takes longer than using Flashback Database. The good thing is that you get to test your OCI backup and recovery strategy.

For Autonomous Database you can use the automatic backups which are enabled by default. In the other Database Cloud Offerings, you must enable automatic backup yourself.

Restore – How To

  1. Stop replicat process (see above)
  2. Create backup (or rely on automatic backup)
  3. Do your tests
  4. Restore backup
  5. Restart replicat process (see above)

Clone

Cloning the database or the entire DB System is also an option. A benefit is that the GoldenGate replication can continue while you are doing the tests. When you use Flashback Database or restore, the replication must be stopped. Trail files will accumulate on disk, and at some point the target database will be very busy catching up with the lag. For very large and active systems, it might be desirable to work on a clone. However, cloning to a new system means that you are no longer testing the specific database that will be your future production database. Also, cloning involves creating new DB Systems, which has a cost.

Clone – How To

  1. Clone database or DB System
  2. Do your tests
  3. Discard database or DB System

Conclusion

Before going live in your new OCI database, you should test it. With the Logical Online method it is easy, and you can use well-known techniques. If you combine it with the recommendations from our webinar Performance Stability, Tips and Tricks and Underscores, you are well underway towards a successful migration.

Other Blog Posts in This Series

Zero Downtime Migration – Physical Online Migration of Very Large Databases

Following the blog post on migrating Very Large Databases (VLDBs) using Logical Online method, let’s touch upon the Physical Online method as well.

Existing Data Guard

VLDBs are very often protected by Data Guard with one or more standby databases. When you start a migration with Zero Downtime Migration (ZDM), you don't want to compromise your existing Data Guard setup. If something happens during the preparation of the migration to OCI, you still want to be able to switch over to an on-prem standby database. Something like this:

Setup with on-prem Oracle Database Data Guard and standby database in OCI

When ZDM configures the OCI standby database, it is added as a manually managed database, i.e., it does not use Data Guard broker. It does not interfere with your Data Guard broker setup, and that setup remains valid. A few things to observe:

  • When ZDM is working, no switchovers are allowed. A switchover will cause the process to error out.
  • By working I mean: from the second you start the ZDM migration until it is paused at ZDM_CONFIGURE_DG_SRC.
  • When ZDM is paused, you can do as many switchovers as you like. Just ensure that the original source database becomes the primary database again as soon as possible.
  • When you need to complete the migration, the source database must be the primary database, and no switchovers are allowed. Which does make sense, because in this last phase, ZDM is switching over to the OCI target database.
  • What about failovers? A failover means loss of data, and to accept that, you need to open the database with RESETLOGS. This causes all sorts of havoc in your Data Guard setup. You are back to square one.

At which phases can you switch over to your on-prem standby database without jeopardizing the ZDM migration?

Important

But you must remove your on-prem Data Guard setup before you complete the migration with ZDM, i.e., before you resume the paused job. ZDM will not be able to finish properly if there is a conflicting Data Guard configuration. To remove your existing Data Guard setup:

  1. The ZDM migration is currently paused at ZDM_CONFIGURE_DG_SRC.
  2. Connect using Data Guard CLI (dgmgrl) to the source on-prem database (primary) and remove the configuration:
remove configuration
  3. Connect using SQL*Plus (sqlplus) to the source on-prem database (primary) and remove the Data Guard broker configuration:
alter system set dg_broker_start=FALSE scope=memory;
alter system reset dg_broker_start;
alter system reset dg_broker_config_file1;
alter system reset dg_broker_config_file2;
  4. Check log_archive_config. It must contain the DB_UNIQUE_NAME of the source on-prem database (primary) and the OCI target database – nothing else. Using the above example, it should be:
alter system set log_archive_config='dg_config=(SALES_1,SALES_oci)' scope=both;
  5. Check the log_archive_dest_n parameters. Ensure that Data Guard broker removed the correct ones. There should only be one remote redo log destination – the OCI target database – and all others should be removed. If you need to remove one, use this command (remember to adjust the suffix):
alter system set log_archive_dest_2='' scope=memory;
alter system reset log_archive_dest_2;
  6. Ensure that log_archive_config is set correctly (like in step 4) in the OCI target database:
alter system set log_archive_config='dg_config=(SALES_1,SALES_oci)' scope=both;
  7. Finally, complete the migration:
$ZDM_HOME/bin/zdmcli resume job -jobid <id>

The Backup

ZDM needs a full backup that can be restored on your target.

  • DBCS: If your target database is one of the OCI Cloud Services (Virtual Machine, Bare Metal or Exadata DB System), ZDM will need to take a new full backup. Existing backups can’t be used.
  • ExaCC or Exadata on-prem: You can either take a new full backup or use an existing backup that is made available on disk. In addition, if you are fortunate enough to have a Zero Data Loss Recovery Appliance (ZDLRA), you don't need to take a backup at all. ZDM can restore directly from the ZDLRA.

If you are targeting DBCS, DATA_TRANSFER_MEDIUM is set to OSS (Object Storage Service). The backup is stored in Object Storage using the Oracle Database Cloud Backup Module for OCI. The backup on the source database and the restore on the target database happen via a special SBT channel that streams the backup directly to and from Object Storage. This means:

  • The backup never hits the disk, so you don’t need additional disk space to hold the backup.
  • The duration of the backup depends on your network speed to OCI. Since the backup is streamed directly to OCI, the network can become a bottleneck. If you have a slow connection to OCI, the backup will run equally slowly. The same applies to the restore; however, the target database is already in OCI and has a good connection to Object Storage.

ZDM will by default use 10 RMAN channels for the restore and the backup. With your knowledge of the source database, you might know better. You can tweak the number of channels in the response file. Look for the parameters SRC_RMAN_CHANNELS and TGT_RMAN_CHANNELS.

Based on your knowledge or testing, you can determine which RMAN compression algorithm gives the best benefit on your database. You can adjust the compression algorithm in the response file using the parameter ZDM_RMAN_COMPRESSION_ALGORITHM. The default is MEDIUM, which normally gives the best balance between compression ratio and CPU time. And remember, RMAN compression normally requires a license for the Advanced Compression Option, but when you migrate with ZDM, you can use it for free.
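As a sketch, the relevant response file entries could look like this. The parameter names are the ones mentioned above; the values are only examples:

SRC_RMAN_CHANNELS=16
TGT_RMAN_CHANNELS=16
ZDM_RMAN_COMPRESSION_ALGORITHM=MEDIUM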

While ZDM is taking a backup of the source database, no other backups should be running. Be sure to put your regular backups, including archive backups, on hold.

Redo Apply

On the source database, you should keep archive logs on disk until the target database has been restored, Data Guard has been configured, and the target database has caught up with redo apply. If you have a slow network connection and a huge database, it can take days until the backup has completed, the restore has completed, and redo apply has caught up.

  • Imagine you start the backup at sequence 100.
  • The restore of the target database finishes two days later. The source database is now at sequence 200.
  • ZDM configures Data Guard and starts redo transport and redo apply. The source database is now at sequence 220.
  • Sequences 100-220 must be available on disk on the source database host, so the source (primary) database can transfer them to the target (standby) database.

It is not uncommon for VLDBs to generate a double-digit number of TB of redo daily. Just the other day, I talked to a customer whose database generated 15 TB of archive logs a day.

First, you must be able to transfer the redo from the source database (primary) to the target database (standby). This is simple math: if you generate 15 TB of redo a day, you need roughly a 1.5 Gbps connection to transfer it (amount of redo / 24 / 60 / 60 * 8). If transferring redo becomes a problem, you can look into redo transport compression. This can reduce the amount of data that must be transferred at the cost of CPU cycles. You can read more about it in the MOS note Redo Transport Compression in a Data Guard Environment (Doc ID 729551.1). I learned from colleagues in the Maximum Availability Architecture (MAA) team that TDE Tablespace Encryption and redo transport compression don't play very well together. If your source database is encrypted, you should not expect much benefit from redo transport compression.
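If you want to do the back-of-the-envelope calculation yourself, here is the arithmetic as a simple query (15 TB per day, binary units assumed):

-- 15 TB/day expressed as a sustained bit rate in Gbps
select round(15 * power(1024,4) * 8 / 86400 / power(10,9), 2) as gbps_needed
  from dual;
-- returns roughly 1.53, i.e. a sustained ~1.5 Gbps link is needed for this example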

Next, redo must be applied on the target database. Is the target database capable of applying redo that fast? On Exadata the answer is most likely: YES – but as always in IT, it depends.

Redo Apply benchmark of Oracle Database Data Guard

The numbers in the above graph come from Redo Apply Best Practices – Oracle Data Guard and Active Data Guard. Based on your database release and the type of workload you have in the database, you can see the amount of redo that can be applied daily (in TB). The last two columns use Multi-Instance Redo Apply (MIRA) with either two or four active RAC nodes. The numbers were generated on an Exadata.

Backup of Target Database

How do you backup your target database in OCI? You want to have a valid backup from the very second that you switch over to the target database.

The target placeholder database that you originally created will be overwritten by ZDM. This means that you can't configure and enable automatic backup in OCI in advance. You must wait until the migration has completed before you enable automatic backup.

You could:

  • Extend the downtime window to allow automatic backup to be configured after the migration. Also, allow enough time for the first backup to complete.
  • Or, do your own backup in OCI. This is more cumbersome but will allow you to open the database for business immediately after the switchover. But you are in charge of the backup now. All the various bits and pieces are available:
    • Original backup is still in object storage.
    • Archive logs are on disk – you can back them up manually.
    • Perform incrementals if needed – put them somewhere safe.
    • In case of emergency – glue it all together

Data Guard on Target Database

Similar to automatic backup, you can't create a Data Guard association until ZDM has completed the migration. The cloud tooling does not support creating the standby database through a cascading standby, so you can't build your OCI standby database until the OCI database has become the primary database. You can start to create the OCI Data Guard association as soon as ZDM is done, but:

  • You must tolerate that the OCI Data Guard is missing
  • Or, take downtime

Conclusion

Even huge Oracle Databases can be migrated to OCI using Zero Downtime Migration. You might need to make a few adjustments to the standard flow, but it is absolutely doable. Automatic backup and Data Guard can't be created in OCI until after the migration, which might force you to take downtime. Besides the actual migration, you should also do your best to ensure performance stability once the database is open for business. For that purpose, have a look at our webinar Performance Stability, Tips and Tricks and Underscores.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Online Migration of Very Large Databases

Things always work on PowerPoint slides. And they almost always work in our lab environments as well. And recorded demos are also pretty bullet-proof.

But what happens when you have a huge database to migrate? A Very Large Database (VLDB). First, things get more exciting. Second, there are more knobs to turn and the smallest thing might become a big thing. Here are some things to consider when you migrate VLDBs using the Logical Online method.

Existing Data Guard

No need to worry about your existing on-prem Oracle Data Guard environment. The Logical Migration uses Oracle Data Pump and Oracle GoldenGate. Both tools can be used on a database in a Data Guard configuration and do not interfere with Data Guard operations.

But no switchovers or failovers are allowed during replication. GoldenGate can be configured to fully support a Data Guard environment and seamlessly reconnect in case of a Data Guard switchover. But it requires advanced configuration of GoldenGate. Zero Downtime Migration (ZDM) does the GoldenGate configuration but does so in a basic way that does not support Data Guard switchovers.

Data Pump

  • Apply a proper degree of parallelism. Rule-of-thumb:

    • On-prem: 2 x number of physical cores
    • OCI: Number of OCPUs
  • When you are importing in OCI, consider scaling up on CPUs. More CPUs, faster import. Bare Metal and Exadata DB Systems scale online, whereas Virtual Machines need around 10 minutes of downtime.

  • ZDM applies Data Pump compression automatically. Remember, you don't need a license for the Advanced Compression Option to use compression when you are migrating with ZDM. Our experience is that MEDIUM is most of the time the best alternative. It provides a good compression ratio at a low CPU footprint. Although HIGH can compress better, it often comes at a much higher CPU footprint. I would recommend HIGH only in situations where the dump file size really matters (lack of disk space or a really slow network connection). Below you can find two benchmarks we did that compare the different compression algorithms: Comparing Data Pump compression algorithms

  • ZDM automatically transforms BasicFile LOBs into SecureFile LOBs – which is very good. SecureFile LOBs are superior in many ways, also when it comes to importing. Based on a customer case, we did a benchmark that measures the time it takes to import LOB data. The customer saw a 3x improvement by transforming to SecureFile LOBs – and got the benefits of SecureFile LOBs afterwards. ZDM transforms to SecureFile LOBs automatically. Don't turn it off. Here are the log entries from the two imports. Notice the elapsed time:

# BasicFile LOBs
10-OCT-20 21:43:21.848: W-3 . .  imported "SCHEMA"."TABLE"      31.83 GB  681025 rows in 804 seconds using direct_path
# SecureFile LOBs
15-OCT-20 18:16:48.663: W-13 . . imported "SCHEMA"."TABLES"     31.83 GB  681025 rows in 261 seconds using external_table
  • Don’t use import over network link. Use dump files. Performance-wise there are some restrictions when you import over network link. These restrictions really hurt on a VLDB.

  • But when you export to dump files, remember to ensure you have adequate disk space on the source database host to hold the dump files. Use mount points that can deliver good write performance, and ideally, they should be separated from the storage that holds the database data files.

  • A Data Pump export does not affect your existing backup operations. But the export and the backup will fight over the same resources, so I recommend suspending major backup operations (level 0+1) until after the export.

  • Speaking of fighting over resources: you should run the export during off-peak hours. That will allow the export to use many more resources. The export is not a fully consistent export. Each table is consistent, but ZDM does not use FLASHBACK_TIME or FLASHBACK_SCN to make the entire export consistent. This helps avoid ORA-01555 snapshot too old during the export. The SCN of each table is recorded in the dump file, and GoldenGate later uses that information to start replication on each table individually from the appropriate SCN.

GoldenGate Replication

How much data will GoldenGate need to transfer from the source database to the GoldenGate hub and from the GoldenGate hub to the target database? GoldenGate stores the replication data in trail files. The trail files are smaller than the redo logs; the size of the trail files is typically 30-40 % of the size of the redo logs. Imagine a database generating 15 TB of redo a day. The size of the trail files will be 4.5-6 TB. If you further apply compression on the trail files, you can typically reduce the size to around 1/8. Using our example, the trail files are now 550-750 GB, which is significantly less than the initial 15 TB.

In the ZDM response file, there is a parameter called GOLDENGATESETTINGS_EXTRACT_PERFORMANCEPROFILE. The default value is MEDIUM, but you have the option of changing it to HIGH. This should increase the performance of the extract process.

ZDM configures GoldenGate to use Parallel Replicat. This gives you a few options. First, you can control the number of mappers using the parameter GOLDENGATESETTINGS_REPLICAT_MAPPARALLELISM (default is 4). Also, apply parallelism is auto-tuned. You can set a minimum and maximum value using GOLDENGATESETTINGS_REPLICAT_MINAPPLYPARALLELISM and GOLDENGATESETTINGS_REPLICAT_MAXAPPLYPARALLELISM. The defaults are 4 and 50, so I don't think it will be necessary to change them. A sketch of how these settings could look in the response file follows below.
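Here is a sketch of those response file entries. The parameter names are the ones mentioned above; the values are only examples:

GOLDENGATESETTINGS_EXTRACT_PERFORMANCEPROFILE=HIGH
GOLDENGATESETTINGS_REPLICAT_MAPPARALLELISM=4
GOLDENGATESETTINGS_REPLICAT_MINAPPLYPARALLELISM=4
GOLDENGATESETTINGS_REPLICAT_MAXAPPLYPARALLELISM=50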

Last, you can find more advice in Administering Oracle GoldenGate 19.1, Tuning the Performance of Oracle GoldenGate.

GoldenGate Health Check

Oracle GoldenGate comes with health check scripts that you can use on the source and target database. This will help you monitor and troubleshoot the process. Here is information on how to install and use the health check scripts. You can also find information in Administering Oracle GoldenGate 19.1, Using Healthcheck Scripts to Monitor and Troubleshoot.

Backup of Target Database

When you use logical online migration in ZDM, you create the target database in advance. In contrast to physical migration, the target database is not overwritten. This means that you can configure automatic backup of your target OCI database before you complete the migration and switch over to OCI. You will even have the time and the possibility to test the backup.

I recommend that you configure automatic backup after the Data Pump initial load. That way, there are fewer archive logs to back up as well.

Data Guard on Target Database

Similar to automatic backup, you can also create your Data Guard association before the switchover. Likewise, wait until after the initial load has completed.

Conclusion

Even huge Oracle Databases can be migrated to OCI using Zero Downtime Migration. You might need to make a few adjustments from the standard flow, but it is absolutely doable. Your database can be protected by backup and Data Guard from the very second you switch over to OCI. Besides the actual migration, you should also do your best to ensure performance stability once the database is open for business. For that purpose, you should have a look at our webinar Performance Stability, Tips and Tricks and Underscores.

Other Blog Posts in This Series