Get Started with AutoUpgrade

If you have never upgraded a database, or it has been a while since you last did, I suggest you get familiar with AutoUpgrade. Other upgrade methods still exist, but AutoUpgrade is the only recommended one!

How To

AutoUpgrade is a tool that ships as a single file named autoupgrade.jar. You find it in your Oracle Home in $ORACLE_HOME/rdbms/admin. You should always download the latest version of AutoUpgrade from My Oracle Support and put it into your Oracle Home, overwriting the existing file.
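As a rough sketch, replacing the shipped file could look like this (the download location and backup name are made up for illustration):

$ cd $ORACLE_HOME/rdbms/admin
$ mv autoupgrade.jar autoupgrade.jar.old      # keep the shipped version, just in case
$ cp /tmp/autoupgrade.jar .                   # the file downloaded from My Oracle Support
$ java -jar autoupgrade.jar -version          # verify the new version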

AutoUpgrade is fully backward compatible; a newer version of AutoUpgrade can upgrade databases to previous releases as well. In this example, AutoUpgrade is version 21.3.211115 – notice the information in build.supported_target_versions:

$ java -jar autoupgrade.jar -version

build.hash 081e3f7
build.version 21.3.211115
build.date 2021/11/15 11:57:54
build.max_target_version 21
build.supported_target_versions 12.2,18,19,21
build.type production

Version 21 of AutoUpgrade can upgrade your database to Oracle Database 21c and previous releases.

Now, you are ready to analyze your Oracle Database for upgrade readiness and eventually upgrade it. This short YouTube video explains the process.

Finally, you can watch a short demo of a database upgrade.

Try It

But the best way to learn is to do it yourself. You can use our Hands-On Lab for this purpose. You can find an overview of the lab and the lab instructions on Mike Dietrich’s blog.

You can run the lab in two ways.

VirtualBox image

The Hands-On Lab comes as a self-contained VirtualBox image that you download and run on your own computer. It requires around 100 GB of disk space and a fairly modern computer. Nothing fancy, but it doesn’t run smoothly on ancient hardware. Get started here.

LiveLabs

You can run the entire lab in just a browser using Oracle LiveLabs. You can do it in our Cloud Free Tier so that it will be completely free. Our workshop on Oracle LiveLabs is called Hitchhiker’s Guide for Upgrading to Oracle Database 19c.

Watch this video and learn how to provision a lab in Oracle LiveLabs.

Guided Tour

If you get stuck in the lab or just want to watch and let Mike Dietrich do all the typing, you can watch this recorded session of the complete hands-on lab.

Further Reading

Once done with the lab, you can start on these additional resources:

Good luck!

P.S. Remember – it is better to fail in our lab than in production…

I Never Met Joel, But I Have Met His Spirit #JoelKallmanDay

I didn’t know Joel Kallman. I had heard his name, and I knew he had something to do with APEX, but we never had the opportunity to meet. After his untimely death, I became aware of his huge impact on the Oracle community. So many people shared so many great stories about him and how he influenced the community that it became apparent how much he meant to it.

This post is about the Oracle community. When I started working with Oracle, I quickly found out how great the community was. People were happily sharing knowledge – on blogs, at conferences, in videos. I was blown away by the amount of help and guidance I could get. For free – and even with a huge smile on top of that. I was a rookie, and all that help made a huge difference. That is what I mean when I say I believe I have met Joel’s spirit. Even now, after many years of working with Oracle, I still use and value the community.

Ever since I joined Oracle, doing a rolling upgrade has been on my to-do list. I had never tried it for real. Last week, I had time to try out rolling upgrades with DBMS_ROLLING. Provision a Data Guard environment, open the documentation, get a cup of coffee, and off we go…

At one point, I was stuck. Like in – really stuck! Friday night – even after trying Connor’s gin/tonic debugging – still stuck. Saturday morning – after a good night’s sleep, still stuck. Errors, errors, errors!

Luckily, I had attended a talk about rolling upgrades at UKOUG Tech Talk in April this year. Clive and Zahid gave a really good presentation on rolling upgrades with a lot of valuable technical knowledge. I flipped through the slides, and then – voila – problem solved. Now I could proceed with the upgrade. Thanks, Clive; thanks, Zahid.

If it hadn’t been for the community – for people happily sharing knowledge and helping each other – for Joel’s spirit, I would probably still be stuck in that rolling upgrade.

Never stop sharing knowledge!

New Webinars Coming Your Way

These days, there is an abundance of virtual events. I feel exhausted by it from time to time. So you might ask yourself: is there really room for more virtual events?

The answer is: Yes, there is 🙂

We have added more upcoming webinars to our Virtual Classroom Series: Upgrade to Oracle Database 19c. If you enjoyed the previous ones, I am sure that you will love these as well.

#    Webinar                                          Date               Sign Up
10   How Low Can You Go? Zero Downtime Operations     October 21, 2021   Link
11   Secure Your Job – Fallback Is Your Insurance     November 11, 2021  Link
12   Migrating Very Large Databases                   December 9, 2021   Link
13   Data Pump Extreme – Deep Dive with Development   January 27, 2022   Link

Of course, we still keep our promise to you: All tech – no marketing, no buzzwords

If you miss any of the webinars, head over to the Webinar page to find the recordings and the slides. All nine previous webinars are there as well.

I hope to see you there!

Zero Downtime Migration – Logical Offline Migration: How To Minimize Downtime

With Oracle Zero Downtime Migration (ZDM) 21.2, you can now perform logical offline migrations from IBM AIX and Oracle Solaris. Since it is an offline approach, the database is unavailable during the entire migration. Let’s look at how you can minimize downtime.

Before Migration

Statistics

Ensure that dictionary statistics are up-to-date in the source database. Data Pump does a lot of querying in the data dictionary, and accurate statistics are vital for good execution plans. I recommend gathering stats on SYS and SYSTEM:

begin
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

You could also use dbms_stats.gather_dictionary_stats. However, it gathers statistics on all internal schemas, and most often, only SYS and SYSTEM are sufficient:

begin
   dbms_stats.gather_dictionary_stats;
end;
/

Data

I think it goes without saying, but the less data, the faster the migration completes. If you have any junk data in your database, get rid of it.

Migration

Parallel

Apply a proper degree of parallelism. Rule-of-thumb:

  • On-prem: 2 x number of physical cores
  • OCI: Number of OCPUs

When you are importing in OCI, consider scaling up on CPUs. More CPUs means a faster import. Bare Metal and Exadata DB Systems scale online, whereas Virtual Machine DB Systems need around 10 minutes of downtime.

Import processes running in parallel can read from the same dump file. This means that the number of parallel processes for an import is not limited by the number of dump files (which is the case for an export). To illustrate with an example:

  • Export: parallel=8 – 8 dump files were created. Each parallel worker needs exclusive access to a dump file.
  • Import: parallel=16 – still only 8 dump files. Each parallel worker can read from all dump files, no locks required.
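Outside of ZDM, a plain Data Pump run illustrating the same principle could look like this sketch (schema, directory, and file names are hypothetical):

$ expdp system schemas=app directory=dp_dir dumpfile=app_%U.dmp parallel=8     # creates 8 dump files
$ impdp system schemas=app directory=dp_dir dumpfile=app_%U.dmp parallel=16    # 16 workers share the 8 files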

You can control Data Pump parallel in ZDM using these parameters:

Data Pump Mode

Should you go via dump file or directly over a database link? Only your tests can tell the answer, but I am pretty sure that you will find that dump file mode is the quickest.

When you export via a dump file, you also need to figure out how you can transfer it to the target database:

  • Direct Transfer – you can use either scp or rsync. At the risk of starting a religious discussion, I don’t think it makes much of a difference in this use case. If you need rsync on Exadata, you can find guidance in MOS Doc ID 1556257.1.
  • Via Object Store – you can use either curl or OCI CLI. I would expect better performance with the OCI CLI, but you need to test it in your environment (see the sketch below).
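As a sketch, uploading a dump file with the OCI CLI could look like this (bucket and file names are hypothetical):

$ oci os object put --bucket-name migration-bucket --file /u01/dumps/app_01.dmp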

Remember to ensure you have adequate disk space on the source database host to hold the dump file. Use mount points that can deliver good write performance, and ideally, they should be separated from the storage that holds the database data files.

SecureFile LOB

ZDM automatically transforms BasicFile LOBs into SecureFile LOBs – which is very good. Don’t turn it off. Imports run faster when SecureFile LOBs are in play.

After Migration

Statistics

As soon as ZDM has completed the migration, gather dictionary stats in the target database. All the new objects that came in with the import have certainly made the dictionary statistics stale.

begin
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

Next, you need to figure out how you want to take care of the optimizer statistics. In a previous post, I discussed why this is needed and how to do it. If you decide to import statistics via DBMS_STATS, just ensure that you have gathered dictionary stats first.

Further Reading

I recommend that you also read the following posts:

Conclusion

When migrating from IBM AIX and Oracle Solaris, you use the logical offline approach, which means considerable downtime. But you have a few options to make it run as fast as possible.

Other Blog Posts in This Series

How To Upgrade An Encrypted Oracle Database and Convert to PDB

Using AutoUpgrade, you can upgrade your encrypted Oracle Database and convert to a pluggable database. The process is not entirely automated, so you must handle the TDE encryption key manually.

A new parameter called skip_tde_key_import is introduced. Here is what the documentation says:

(Optional) The default is NO. You can use this option for non-CDB-to-PDB and unplug/plug operations. When set to YES, the upgrade is run, but import of the source database KeyStore into the target database is skipped, without raising an error. AutoUpgrade will leave the PDB open in upgrade mode, so that you can import the keys manually yourself. After you import the keys, you must then restart the database in normal mode.

In other words, AutoUpgrade does everything except import of the TDE encryption key.

How To

This is what happens:

[Diagram: overview of upgrading a database encrypted with TDE to Oracle Database 19c and converting it to a PDB]

AutoUpgrade handles the green things – you handle the yellow thing. My environment:

  1. Non-CDB called FTEX running on Oracle Database 11.2.0.4
  2. FTEX is encrypted with TDE and has an auto-login keystore
  3. CDB called CDB2 running on Oracle Database 19c
  4. CDB2 is prepared for TDE and has a keystore defined

First, I ensure that AutoUpgrade is version 21.2 or newer:

java -jar autoupgrade.jar -version

I create a config file for AutoUpgrade. Notice that I have specified that the database is plugged into CDB2 using the parameter target_cdb. Also, notice that I have specified the new parameter skip_tde_key_import and set it to YES:

upg1.source_home=/u01/app/oracle/product/11.2.0.4
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=FTEX
upg1.target_cdb=CDB2
upg1.skip_tde_key_import=YES

Then, I analyze my database (FTEX) for any issues:

java -jar autoupgrade.jar -config FTEX.cfg -mode analyze

No issues are detected, so I proceed with the upgrade:

java -jar autoupgrade.jar -config FTEX.cfg -mode deploy

After the upgrade and conversion to PDB, I log on to CDB2. I want to check the outcome. Has FTEX been upgraded and converted?

SQL> select name, open_mode, restricted from v$pdbs

NAME      OPEN_MODE    RESTRICTED
PDB$SEED  READ ONLY    NO        
FTEX      READ WRITE   YES                

Open – but restricted. I check for plug-in violations:

SQL> select cause, type, message, status, action 
     from pdb_plug_in_violations 
     where name = 'FTEX' AND status not in ('RESOLVED')

CAUSE                TYPE       MESSAGE                                STATUS     ACTION
Wallet Key Needed    ERROR      PDB needs to import keys from source.  PENDING    Import keys from source.    

OK. This is expected. AutoUpgrade does not handle the TDE encryption key (see the yellow part of the arrow above) – you have to do that yourself.

I start by merging the two keystores – the keystore of the old non-CDB (FTEX) and the keystore of the CDB (CDB2) – into the keystore of CDB2:

CDB$ROOT SQL> administer key management 
   merge keystore '/u01/app/oracle/admin/FTEX/wallet' identified by "oracle_4U" 
   into existing keystore '/u01/app/oracle/admin/CDB2/wallet/tde' identified by "oracle_4U" 
   with backup using 'merge_keystore';

Notice that I have specified the keystore folders and not the actual file names. I have also specified the keystore password for both keystores. Finally, I tell the database to create a backup of the keystore before the merge (as a precaution – I recommend always doing that).

Now, I switch to the FTEX PDB. I need to configure FTEX to use the TDE encryption key that was merged in from the old keystore. Each key in the keystore has a key ID, and I tell the PDB to use the right one. I explain later how to determine the key ID:

CDB$ROOT SQL> alter session set container=FTEX;

FTEX SQL> administer key management 
   use key 'Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' 
   force keystore identified by "oracle_4U" 
   with backup using 'use_key_ftex';

Now the PDB knows the TDE encryption key, and you can actually query data from an encrypted tablespace. However, there is still a plug-in violation saying that the TDE encryption key has to be imported. The PDB insists that the encryption key be imported – rather than merged into the keystore. So I export the key from the PDB and import it again:

FTEX SQL> administer key management 
   export keys with secret "secret-passphrase" 
   to '/etc/oracle/exported-keys-ftex' force keystore identified by "oracle_4U";
FTEX SQL> administer key management 
   import keys with secret "secret-passphrase" 
   from '/etc/oracle/exported-keys-ftex' force keystore identified by "oracle_4U" 
   with backup using 'import_key_ftex';
FTEX SQL> host rm /etc/oracle/exported-keys-ftex

The encryption key is saved in an encrypted file in the OS. The file is encrypted using a passphrase (secret-passphrase), and you should pick a better one than I did. Finally – and importantly – I remove the file from the OS when I am done. I don’t want my encryption keys lying around in files all over the OS.

I restart the FTEX PDB:

FTEX SQL> alter pluggable database FTEX close immediate;
FTEX SQL> alter pluggable database FTEX open;

And the PDB is now open in READ WRITE mode and unrestricted:

SQL> select name, open_mode, restricted from v$pdbs

NAME      OPEN_MODE    RESTRICTED
PDB$SEED  READ ONLY    NO        
FTEX      READ WRITE   NO

I check for plug-in violations:

SQL> select cause, type, message, status, action 
     from pdb_plug_in_violations 
     where name = 'FTEX' AND status not in ('RESOLVED')

No rows selected.

Job done! Encrypted database upgraded and converted to a PDB.

Determine Key ID

Use the orapki utility to determine the key ID of the TDE encryption key. Specify the location of the old non-CDB keystore:

orapki wallet display -wallet /u01/app/oracle/admin/FTEX/wallet

Oracle PKI Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Requested Certificates: 
User Certificates:
Oracle Secret Store entries: 
ORACLE.SECURITY.DB.ENCRYPTION.Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY
ORACLE.SECURITY.TS.ENCRYPTION.BUCsXzQga4wfJ8gNSuptAPMCAwAAAAAAAAAAAAAAAAAAAAAAAAAA
Trusted Certificates: 
Subject:        OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Secure Server Certification Authority,O=RSA Data Security\, Inc.,C=US
Subject:        CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US

The TDE encryption key entry is the one that starts with ORACLE.SECURITY.DB.ENCRYPTION and does not end in MASTERKEY. In the above example, the key ID is Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.

If your non-CDB is Oracle Database 12c or later, you can also get the key ID from a view:

SQL> select key_id from v$encryption_keys where key_use='TDE';

Why Do I Need to Export and Import?

Above you saw that I had to export and import the encryption keys, even after merging the keystores. You might ask: Why is that even needed? Why is the ADMINISTER KEY MANAGEMENT MERGE KEYSTORE command not enough? That’s a very good question. I have reached out to our security people to get their take on it. It sounds like a bug to me.

The Future

The above solution makes it easier to handle encrypted databases. But we are not satisfied yet. A future version of AutoUpgrade will have even better support for encrypted databases, and everything will be automated.

The biggest issue for us is finding a safe way to handle the TDE keystore password: we need to get the password from you and store it securely in memory until we actually need it.

We are just about to start testing the very first version of AutoUpgrade with proper TDE support. If you are interested in becoming a beta tester, reach out to me to work something out.

Conclusion

Converting encrypted databases to PDBs is now possible with AutoUpgrade. It handles most of the process, but you have to manage the TDE encryption keys yourself.

Oracle Database 21c Is Here

Last week, Oracle released Oracle Database 21c for additional platforms: Linux and Exadata. Other platforms will follow. Keep an eye on Release Schedule of Current Database Releases (Doc ID 742060.1) for further information.

Things to Notice

In my part of the Oracle Database, there are things to notice. I want to highlight:

To get all the details, visit the Upgrade and Utilities part of the new features documentation. There are some good examples of how the features can be used.

Behaviour Changes

Read-Only Oracle Home (ROOH) is now the default. Be sure to set the following environment variables to control the location of these directories:

  • ORACLE_BASE_HOME
  • ORACLE_BASE_CONFIG

I like ROOH, but it takes some time to get used to. For instance, network/admin files (tnsnames.ora, sqlnet.ora) and dbs files (pfile, spfile) are now in a new location.
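If you are in doubt about where the files ended up, the orabasehome and orabaseconfig utilities in the Oracle Home print the two base locations. A quick sketch:

$ $ORACLE_HOME/bin/orabasehome      # base for the network/admin files
$ $ORACLE_HOME/bin/orabaseconfig    # base for the dbs files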

The Pre-Upgrade Information Tool (preupgrade.jar) has been removed and replaced by AutoUpgrade. A few new parameters have been introduced to make the transition easier.

Innovation Release

Remember, 21c is an Innovation Release, which means a shorter support window than Long Term Releases such as Oracle Database 19c. If you adopt Innovation Releases, you should be prepared to upgrade to the next database release within one year of it shipping.

I would not recommend upgrading your production systems to Oracle Database 21c due to the limited support period – not unless you are prepared to upgrade the database again soon, when support runs out. Oracle Database 19c is the current Long Term Support release, and that is what I recommend for production databases.

[Diagram: different release types for Oracle Database – Innovation Release vs. Long Term Support]

To learn more about Innovation Releases and our release model, have a look at our slide deck. We discuss it in the first chapter.

New Features

I want to mention a few new features. They haven’t attracted as much attention as the marquee features, but they are still cool.

Expression-based init.ora parameters make it possible to base database parameters (init.ora) on calculations over the system’s configuration. For example, setting the database parameter CPU_COUNT to half the number of CPUs (on Windows):

alter system set cpu_count='$NUMBER_OF_PROCESSORS/2';

For more details, check out my video on YouTube.

Placeholders in SQL DDL statements can improve application security because sensitive information, like passwords, doesn’t need to be hardcoded in SQL DDL. For example, you can write this statement:

CREATE USER :!username IDENTIFIED BY :!password ...

And Oracle Call Interface programs can substitute the placeholders into:

CREATE USER "DANIEL" IDENTIFIED BY "MyS3cr3tP!d" ...

This is similar to data binding but occurs in Oracle Client.

Conclusion

The complete 21c documentation is online, so I suggest you head over there and have a look. In the upgrade guide, you can find the list of behavior changes as well as deprecated and desupported functionality. And finally – perhaps most interesting – there is Learning Database New Features.

Try it out in Always Free ADB or explore the Oracle LiveLabs.

Upgrade with Less Downtime

You can upgrade your Oracle Database with less downtime. With the release of AutoUpgrade 21.2, we added the fast deploy option. It allows you to run the preupgrade fixups while the database is still online, so only the actual upgrade requires downtime.

The Usual Upgrade

When you upgrade the usual way, i.e., using AutoUpgrade in deploy mode with a config file (sketched below), you:

  1. Analyze the database – java -jar autoupgrade.jar -mode analyze. This is non-intrusive and doesn’t change anything in the database. You get a report, and you can fix any issue that AutoUpgrade can’t fix for you.
  2. Upgrade the database – java -jar autoupgrade.jar -mode deploy. The database is analyzed again, and based on the findings of the analysis, a set of preupgrade fixups is executed. Finally, the database upgrade takes place.
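For reference, a minimal config file for such a run could look like this sketch (paths, SID, and file name are hypothetical; many more parameters are available):

upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB12

$ java -jar autoupgrade.jar -config DB12.cfg -mode analyze
$ java -jar autoupgrade.jar -config DB12.cfg -mode deploy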

Best practice is to use AutoUpgrade in deploy mode

During step #2, the database is not available – you have downtime. Even though the database was analyzed previously, it is re-analyzed to catch any new issues. This is the safest approach: between the first analysis (in step #1) and the second analysis (in step #2), new issues can potentially occur.

We talked to a few of our customers who are really trying to reduce the downtime of an upgrade. For them – every second counts and has a significant impact on their business. They asked us to make the upgrade faster.

Fast Deploy Mode

One customer suggested moving the final analysis and preupgrade fixups outside of the downtime window. Shortly before the downtime starts, they would run the analysis and the preupgrade fixups. Then they would wait until the downtime window starts and do only the upgrade.

I must stress: there is a risk that a new issue pops up between the fixups and the upgrade – but this customer is willing to take that risk. In exchange, they get upgrades with less downtime.

We call this approach fast deploy. The procedure is:

  1. Analyze the database – java -jar autoupgrade.jar -mode analyze
  2. Run the preupgrade fixups – java -jar autoupgrade.jar -mode fixups
  3. Now downtime starts
  4. Upgrade the database – java -jar autoupgrade.jar -mode upgrade

To reduce downtime, use the new fast deploy mode – with increased risk

Fast deploy requires AutoUpgrade 21.2 or newer. The target database release can be any of the target releases already supported by AutoUpgrade, meaning 12.2 or higher.
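Put together with a config file (the name yourconfig.cfg is hypothetical, matching the change log excerpt below), the full sequence could look like this sketch:

$ java -jar autoupgrade.jar -config yourconfig.cfg -mode analyze
$ java -jar autoupgrade.jar -config yourconfig.cfg -mode fixups
$ # downtime starts here
$ java -jar autoupgrade.jar -config yourconfig.cfg -mode upgrade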

Here are the details of the feature – from the AutoUpgrade change log:

This feature enables DBA’s to run the prechecks and prefixups while the database is still online and then run deploy to complete the upgrade. Once the fixups have been run on the source database, DBA can then skip the prechecks and prefixup stages and proceed directly to the phases that follow, for example:

Database can be online: java -jar autoupgrade.jar -mode fixups -config yourconfig
Upgrade time: java -jar autoupgrade.jar -mode upgrade -config yourconfig

The -mode upgrade command proceeds directly to the phases that follow the prechecks and prefixups stages during deploy. The main goal of this feature is to reduce database downtime by running the fixups when your database is online and open for business and then skipping the fixups when your database is offline. See Oracle documentation for further details.

Conclusion

If your application is really sensitive to downtime, you can now upgrade with less downtime using fast deploy. It introduces a slight risk that the preupgrade analysis will not detect an issue that causes trouble later on. You have to decide whether that risk is acceptable to you.

Our recommendation is to use the usual approach – AutoUpgrade in deploy mode. Consider fast deploy only if downtime really hurts.

If you are interested in seeing which other things got put into AutoUpgrade 21.2, you can check the change log.

Zero Downtime Migration – Logical Online and Sequences

When migrating with Oracle Zero Downtime Migration (ZDM) and the Logical Online method, you use Data Pump for the initial load and Oracle GoldenGate to keep the databases in sync. When using Oracle GoldenGate, your sequences need attention; otherwise, you will probably end up with a lot of ORA-00001 unique constraint violated errors.

The Problem

I think it is best to illustrate with an example:

  1. Source database: Sequence s1 currently has a value of 100.
  2. Source database: ZDM starts and performs the export for the initial load.
  3. Target database: The Data Pump import creates sequence s1 with current value 100.
  4. Source database: A user inserts a row and uses the sequence: insert into orders values(s1.nextval ...
  5. Source database: s1.nextval takes the next number from the sequence, and the statement now looks like this: insert into orders values(101 ...
  6. Target database: GoldenGate replicates the statement. It does not take a new value from the sequence on the target – doing so would corrupt the data because there is no guarantee that the same number would be taken. Instead, it applies the statement as it was executed on the source: insert into orders values(101 ...
  7. Now it is time to complete the migration. Users are disconnected, and GoldenGate synchronizes the final changes before users connect to the target database.
  8. Target database: A user inserts another record into orders: insert into orders values(s1.nextval ...
  9. Target database: s1.nextval takes the next number from the sequence on the target database. It was created with current value 100, so the next value is 101: insert into orders values(101 ...
  10. Target database: 101 has already been used, and typically there is a primary key or unique key constraint on such a column. This causes ORA-00001 unique constraint violated.

I talked to our GoldenGate experts, and it is a common pitfall. During migrations, sequences are often forgotten, and soon after the migration, users start seeing ORA-00001 unique constraint violated errors.

The Solution

After the switchover, you must ensure that the sequences have a proper value. The sequences must be forwarded or advanced. You can do that in many ways. Here are some ideas:

Recreate sequences

Right after the migration has been completed and GoldenGate has done the final synchronization, but before users connect, you can recreate the sequences.

  • Drop the sequences on the target database.
  • Recreate the sequences using either DBMS_METADATA or Data Pump (see the sketch below).
  • Ensure to adjust the privileges on the sequences.
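A sketch using DBMS_METADATA (schema and sequence names are hypothetical) – the generated DDL contains a START WITH clause that reflects the current value:

SQL> -- run on the source database right before recreating the sequence on the target
SQL> select dbms_metadata.get_ddl('SEQUENCE', 'S1', 'APP') from dual;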

If you can wrap all those changes into a shell script, you can have ZDM execute it automatically as part of the migration flow. Check out Pro Tip 5: Adding Custom Scripts from a previous blog post.

Forward or advance the sequence

Another approach is to forward or advance the sequence. Increment the sequence on the target database until the next value is high enough. What do I mean by high enough?

  • The value of the same sequence on the source database.
  • If you know the sequence is only used to generate numbers for one column (let’s call it t1.c1), then get the highest value from that column SELECT max(c1) FROM t1. Typically, such a column is a primary key column with a unique index underneath, so a SELECT max should go very fast.

Now that you have the desired new value for the sequence, you can calculate how much to increment by. Example: if the source database sequence (DBA_SEQUENCES.LAST_NUMBER) is at 1500 and the target database sequence is at 1000, you need to increment by 500:

alter sequence seq1 increment by 500;   -- temporarily set the increment to the size of the gap
select seq1.nextval from dual;          -- pull one value to advance the sequence by 500
alter sequence seq1 increment by 1;     -- restore the normal increment

If you use the SELECT max approach, you need to increment by 501. In any case, it is better to increment by too much than too little.
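To find the value on the source database, a query like this sketch can be used (owner and sequence name are hypothetical). Note that LAST_NUMBER accounts for cached values, so it errs on the high side – which is fine, since you would rather increment by too much than too little:

SQL> select last_number from dba_sequences
     where sequence_owner = 'APP' and sequence_name = 'SEQ1';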

Similar to the former option, you can put the commands into a shell script and have ZDM execute it.

Replicate sequence changes

GoldenGate can also replicate the changes to the sequences themselves. If you decide to use this with ZDM, it requires additional configuration of GoldenGate, which is why I prefer either of the former options. If you want to know more about it, check out Oracle GoldenGate 19.1 – Using Oracle GoldenGate for Oracle Database, chapter 8.1 Installing Support for Oracle Sequences.

Conclusion

If your database uses sequences and you are using Oracle GoldenGate as part of your migration, make sure to handle the sequences properly. Otherwise, you will fall into a typical pitfall of GoldenGate migrations.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Migration and Statistics

If you have decided to do a logical migration of your Oracle Database with Zero Downtime Migration (ZDM), here is something important about optimizer statistics. You must manually take care of the optimizer statistics on the target database after the Data Pump import – either by recreating the statistics or by transporting them from the source database.

Background

Whenever I talk about migration of Oracle Database with Data Pump, I always mention that it is best practice to exclude optimizer statistics from the Data Pump export. Here is how you do it:

$ expdp ... exclude=statistics

Why is that recommended? Data Pump is not very good at extracting the optimizer statistics. There is nothing wrong with the statistics once they are imported, but exporting them can take a very long time.

ZDM and Data Pump

The development team behind ZDM wanted the tool to be easy to use. They also wanted ZDM to apply all the best practices of the various other tools that it uses. When they talked to us about Data Pump, we told them to exclude statistics.

BUT – this also leaves you in a situation where you have a database completely without optimizer statistics. It goes without saying that it is a disaster waiting to happen. You must ensure that optimizer statistics are present on the target database before allowing the users to connect to it.

Regather

One option is to regather optimizer statistics on the target database after the Data Pump import. When the Data Pump import completes, and before you proceed with the switchover, it is time to regather the statistics. Typically, you would start ZDM something like this:

$ zdmcli migrate database .... -pauseafter ZDM_MONITOR_GG_LAG

It will start by performing the initial load of the database with Data Pump. Then it will configure GoldenGate before it pauses – waiting for your signal to complete the migration. At this time, use DBMS_STATS to gather statistics:

SQL> exec dbms_stats.gather_database_stats;

This has some drawbacks:

  • It requires time and resources – which might not be the biggest problem. The source database is still open for business. We haven’t performed the switchover yet.
  • Column usage information (table COL_USAGE$) is not populated, so in some cases, no histograms will be created. This will happen if the database is supposed to automatically determine whether histograms are needed (method_opt includes size auto). In that case, you can merge the column usage information from another database via a database link, which can be a good idea if your application depends on histograms.
  • The table, schema, or database statistics preferences are not present. It could be degree, method_opt, stale_percent, or any other preference that you can set with DBMS_STATS.SET_TABLE_PREFS (or schema- or database-wide preferences). These preferences can be transferred to the new system, which is what I will talk about next.

Transferring Statistics

Another option is to transfer the statistics using DBMS_STATS. We have covered this in detail in the webinar Performance Stability, Tips and Tricks and Underscores, so I suggest you watch it for all the details.

In short:

  1. The optimizer statistics in the source database are extracted from the data dictionary and stored in a transportable format in a regular heap table (referred to as a staging table).
  2. Using Data Pump, you move that table to the target database.
  3. Then you put the statistics into the data dictionary of the target database so that the optimizer can use them.

Pro tip: You should perform step #1 before you start ZDM and store the staging table in one of the schemas that you are migrating with ZDM. That way, you don’t have to move the table manually. It is moved by ZDM together with the real data.
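A sketch of steps 1 and 3, assuming a schema named APP and a staging table named MYSTATS (both hypothetical):

-- On the source database (step 1), before starting ZDM
SQL> exec dbms_stats.create_stat_table(ownname => 'APP', stattab => 'MYSTATS');
SQL> exec dbms_stats.export_schema_stats(ownname => 'APP', stattab => 'MYSTATS');

-- On the target database (step 3), after ZDM has moved the table together with the data
SQL> exec dbms_stats.import_schema_stats(ownname => 'APP', stattab => 'MYSTATS');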

One thing to be aware of is that the table, schema, or database statistics preferences are not transferred when you use, e.g., DBMS_STATS.EXPORT_TABLE_STATS. There are dedicated procedures for transferring the statistics preferences:

Conclusion

You must figure out how to move the optimizer statistics into your target database when you use ZDM to perform logical migrations. If not, your target database will be without optimizer statistics, which is a disaster waiting to happen.

Appendix

You might now ask: if Data Pump is bad at exporting statistics, and there are already better ways available in the database, why don’t we change the Data Pump code? And you are right – but so far, other things have been prioritized. I would love to see this fully embedded in Data Pump one day.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Online Migration and Testing

How can you test your OCI database before going live? With the Logical Online method, it is possible using Flashback Database, restore, or cloning. Let’s explore the options.

Flashback Database

Using Flashback Database is the easiest option, and it is supported on all target platforms except Autonomous Database (shared and dedicated). In addition, your target database must be Enterprise Edition.

Does Oracle GoldenGate support Flashback Database? You can find the answer in the MOS note Does Goldengate Support Oracle RDBMS Flashback Features? (Doc ID 966212.1):

In a situation where there are only Replicats on a system, then FLASHBACK DATABASE… is fully supported if the Replicats are all using a checkpoint table and the trail files are available that go as far back as the flashback.

When you do Logical Online migrations, you only have Replicat processes in your target database. In addition, checkpoint tables are enabled by default. All good!

Flashback Database – How To

  1. Stop the replicat process by logging on to the GoldenGate hub and navigating to the Target Administration Server. [Screenshot: how to stop a replicat process in the Oracle GoldenGate Microservices Architecture hub]
  2. Set a guaranteed restore point in the target PDB (named tgtpdb):
     alter session set container=tgtpdb;
     create restore point grp4test guarantee flashback database;
  3. Do your tests.
  4. Revert the changes by issuing a FLASHBACK PLUGGABLE DATABASE command:
     alter session set container=cdb$root;
     alter pluggable database tgtpdb close immediate;
     flashback pluggable database tgtpdb to restore point grp4test;
     alter pluggable database tgtpdb open resetlogs;
  5. Restart the replicat process. [Screenshot: how to start a replicat process in the Oracle GoldenGate Microservices Architecture hub]

The above procedure uses flashback of the pluggable database. This feature was introduced in Oracle Database 12.2. If your target database is 12.1 or older, you have to flash back the entire CDB. If your target database is a non-CDB, you have to flash back the entire database.
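One last thing to keep in mind: a guaranteed restore point keeps flashback logs around until it is dropped. Once you are done testing and the replicat is running again, you can clean it up (a sketch, using the restore point from above):

alter session set container=tgtpdb;
drop restore point grp4test;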

Restore

Backup/restore is also an option. In OCI, it is easy to back up and restore; however, it takes longer than using Flashback Database. The good thing is that you also get to test your OCI backup and recovery strategy.

For Autonomous Database, you can use the automatic backups, which are enabled by default. In the other database cloud offerings, you must enable automatic backup yourself.

Restore – How To

  1. Stop replicat process (see above)
  2. Create backup (or rely on automatic backup)
  3. Do your tests
  4. Restore backup
  5. Restart replicat process (see above)

Clone

Cloning the database or the entire DB System is also an option. A benefit is that the GoldenGate replication can continue while you are doing the tests. When you use Flashback Database or restore, the replication must be stopped; trail files accumulate on disk, and at some point the target database will be very busy catching up with the lag. For very large and active systems, it might be desirable to work on a clone. However, cloning to a new system means that you are no longer testing the specific database that will become your future production database. Also, cloning involves creating new DB Systems, which has a cost.

Clone – How To

  1. Clone database or DB System
  2. Do your tests
  3. Discard database or DB System

Conclusion

Before going live in your new OCI database, you should test it. With the Logical Online method, it is easy, and you can use well-known techniques. If you combine that with the recommendations from our webinar Performance Stability, Tips and Tricks and Underscores, you are well on your way towards a successful migration.

Other Blog Posts in This Series