New Webinars Coming Your Way

These days there is an abundance of virtual events, and from time to time I feel exhausted by it. So you might ask yourself: is there really room for more virtual events?

The answer is: Yes, there is 🙂

We have added more upcoming webinars to our Virtual Classroom Series: Upgrade to Oracle Database 19c. If you enjoyed the previous ones, I am sure that you will love these as well.

Webinar                                          Date                Sign Up
How Low Can You Go? Zero Downtime Operations     October 21, 2021    Link
Secure Your Job – Fallback Is Your Insurance     November 11, 2021   Link
Migrating Very Large Databases                   December 9, 2021    Link
Data Pump Extreme – Deep Dive with Development   January 27, 2022    Link

Of course, we still keep our promise to you: All tech – no marketing, no buzzwords

If you miss any of the webinars, head over to the Webinar page and find the recording and the slides. All the previous nine webinars are there as well.

I hope to see you there!

How To Upgrade An Encrypted Oracle Database and Convert to PDB

Using AutoUpgrade, you can upgrade your encrypted Oracle Database and convert to a pluggable database. The process is not entirely automated, so you must handle the TDE encryption key manually.

A new parameter called skip_tde_key_import is introduced. Here is what the documentation says:

(Optional) The default is NO. You can use this option for non-CDB-to-PDB and unplug/plug operations. When set to YES, the upgrade is run, but import of the source database KeyStore into the target database is skipped, without raising an error. AutoUpgrade will leave the PDB open in upgrade mode, so that you can import the keys manually yourself. After you import the keys, you must then restart the database in normal mode.

In other words, AutoUpgrade does everything except import of the TDE encryption key.

How To

This is what happens:

Overview of upgrading a database encrypted with TDE to Oracle Database 19c and converting it to a PDB

AutoUpgrade handles the green things – you handle the yellow thing. My environment:

  1. Non-CDB called FTEX running on Oracle Database 11.2.0.4
  2. FTEX is encrypted with TDE and has an auto-login keystore
  3. CDB called CDB2 running on Oracle Database 19c
  4. CDB2 is prepared for TDE and has a keystore defined

First, I ensure that AutoUpgrade is version 21.2 or newer:

java -jar autoupgrade.jar -version

I create a config file for AutoUpgrade. Notice that I have specified that the database is plugged into CDB2 using the parameter target_cdb. Also, notice that I have set the new parameter skip_tde_key_import to YES:

upg1.source_home=/u01/app/oracle/product/11.2.0.4
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=FTEX
upg1.target_cdb=CDB2
upg1.skip_tde_key_import=YES

Then, I analyze my database (FTEX) for any issues:

java -jar autoupgrade.jar -config FTEX.cfg -mode analyze

No issues are detected, so I proceed with the upgrade:

java -jar autoupgrade.jar -config FTEX.cfg -mode deploy

After the upgrade and conversion to PDB, I log on to CDB2. I want to check the outcome. Has FTEX been upgraded and converted?

SQL> select name, open_mode, restricted from v$pdbs

NAME      OPEN_MODE    RESTRICTED
PDB$SEED  READ ONLY    NO        
FTEX      READ WRITE   YES                

Open – but restricted. I check for plug-in violations:

SQL> select cause, type, message, status, action 
     from pdb_plug_in_violations 
     where name = 'FTEX' AND status not in ('RESOLVED')

CAUSE                TYPE       MESSAGE                                STATUS     ACTION
Wallet Key Needed    ERROR      PDB needs to import keys from source.  PENDING    Import keys from source.    

OK. This is expected. AutoUpgrade does not handle the TDE encryption key. See the yellow part of the arrow above. You have to do that.

I start by merging the two keystores – the keystore of the old non-CDB database (FTEX) and the keystore of the CDB (CDB2). I merge the two keystores into the one of CDB2:

CDB$ROOT SQL> administer key management 
   merge keystore '/u01/app/oracle/admin/FTEX/wallet' identified by "oracle_4U" 
   into existing keystore '/u01/app/oracle/admin/CDB2/wallet/tde' identified by "oracle_4U" 
   with backup using 'merge_keystore';

Notice that I have specified the folders of the keystores and not the actual file names. I have also specified the keystore password for both keystores. And, finally, I tell the database to create a backup of the keystore before the merge (as a precaution – I recommend always doing that).

Now, I switch to the FTEX PDB. I need to configure FTEX to use the TDE encryption key that is merged from the old keystore. Each key in the keystore has a key ID and I tell the PDB to use the right one. I explain later on how to determine the key ID:

CDB$ROOT SQL> alter session set container=FTEX;

FTEX SQL> administer key management 
   use key 'Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' 
   force keystore identified by "oracle_4U" 
   with backup using 'use_key_ftex';

Now, the PDB knows the TDE encryption key and you can actually start to query data from an encrypted tablespace. However, there is still a plug-in violation saying the TDE encryption key has to be imported. The PDB insists that the encryption key is imported – rather than merged into the keystore. So I will export the key from the PDB and import it again:

FTEX SQL> administer key management 
   export keys with secret "secret-passphrase" 
   to '/etc/oracle/exported-keys-ftex' force keystore identified by "oracle_4U";
FTEX SQL> administer key management 
   import keys with secret "secret-passphrase" 
   from '/etc/oracle/exported-keys-ftex' force keystore identified by "oracle_4U" 
   with backup using 'import_key_ftex';
FTEX SQL> host rm /etc/oracle/exported-keys-ftex

The encryption key is saved in an encrypted file in the OS. The file is encrypted using a passphrase (secret-passphrase), and you should pick a better one than I did. Finally – and this is important – I remove the file from the OS when I am done. I don’t want my encryption keys lying around in files all over the OS.

I restart the FTEX PDB:

FTEX SQL> alter pluggable database FTEX close immediate;
FTEX SQL> alter pluggable database FTEX open;

The PDB is now open in READ WRITE mode and unrestricted:

SQL> select name, open_mode, restricted from v$pdbs

NAME      OPEN_MODE    RESTRICTED
PDB$SEED  READ ONLY    NO        
FTEX      READ WRITE   NO

I check for plug-in violations:

SQL> select cause, type, message, status, action 
     from pdb_plug_in_violations 
     where name = 'FTEX' AND status not in ('RESOLVED')

No rows selected.

Job done! Encrypted database upgraded and converted to a PDB.

Determine Key ID

Use the orapki utility to determine the key ID of the TDE encryption key. Specify the location of the old non-CDB keystore:

orapki wallet display -wallet /u01/app/oracle/admin/FTEX/wallet

Oracle PKI Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Requested Certificates: 
User Certificates:
Oracle Secret Store entries: 
ORACLE.SECURITY.DB.ENCRYPTION.Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY
ORACLE.SECURITY.TS.ENCRYPTION.BUCsXzQga4wfJ8gNSuptAPMCAwAAAAAAAAAAAAAAAAAAAAAAAAAA
Trusted Certificates: 
Subject:        OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Secure Server Certification Authority,O=RSA Data Security\, Inc.,C=US
Subject:        CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US

The TDE encryption key is the entry that starts with ORACLE.SECURITY.DB.ENCRYPTION and doesn’t end in MASTERKEY. In the above example, the key ID is Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.
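If the keystore contains many entries, you can filter the output down to just that entry. This is merely a convenience sketch using standard shell tools on the same command as above:

orapki wallet display -wallet /u01/app/oracle/admin/FTEX/wallet \
   | grep 'ORACLE.SECURITY.DB.ENCRYPTION' | grep -v 'MASTERKEY'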

If your non-CDB is Oracle Database 12c or later you can also get the key ID using a view:

SQL> select key_id from v$encryption_keys where key_use='TDE';

Why Do I Need to Export and Import?

Above you saw that I had to export and import the encryption keys, even after merging the keystores. You might ask: Why is that even needed? Why is the ADMINISTER KEY MANAGEMENT MERGE KEYSTORE command not enough? That’s a very good question. I have reached out to our security people to get their take on it. It sounds like a bug to me.

The Future

The above solution makes it easier to handle encrypted databases. But we are not satisfied yet. A future version of AutoUpgrade will have even better support for encrypted databases, and everything will be automated.

The biggest issue for us is finding a safe way to handle the TDE keystore password. We need a secure way of getting the password from you and storing it in memory until we actually need it.

We are just about to start testing the very first version of AutoUpgrade with proper TDE support. If you are interested in becoming a beta tester, reach out to me to work something out.

Conclusion

Converting encrypted databases to PDB is now possible in AutoUpgrade. AutoUpgrade mainly handles the process, but you have to manage the TDE encryption keys yourself.

Oracle Database 21c Is Here

Last week Oracle released Oracle Database 21c for additional platforms: Linux and Exadata. Other platforms will follow. Keep an eye on Release Schedule of Current Database Releases (Doc ID 742060.1) for further information.

Things to Notice

In my part of the Oracle Database, there are a few things to notice, and I want to highlight them below.

To get all the details, visit the Upgrade and Utilities part of the new features documentation. There are some good examples of how the features can be used.

Behaviour Changes

Read-Only Oracle Home (ROOH) is now the default. Be sure to set the following environment variables to control the location of these directories:

  • ORACLE_BASE_HOME
  • ORACLE_BASE_CONFIG

I like ROOH, but it takes some time to get used to. For instance, network/admin files (tnsnames, sqlnet) and dbs files (pfile, spfile) are now in a new location.
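If you are in doubt where the files ended up, the Oracle home ships with two small helper utilities that print the base locations. A quick check could look like this (the output depends on your system):

$ORACLE_HOME/bin/orabasehome     # base for read-only home files, for example network/admin
$ORACLE_HOME/bin/orabaseconfig   # base for configuration files, for example dbs (pfile/spfile)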

The Pre-Upgrade Information Tool or preupgrade.jar is removed and replaced by AutoUpgrade. A few new parameters have been introduced to make the transition easier.

Innovation Release

Remember, 21c is an innovation release, which means a shorter support window than Long Term Releases such as Oracle Database 19c. If you adopt Innovation Releases, you should be prepared to upgrade to the next database release within one year after it ships.

I would not recommend that you upgrade your production systems to Oracle Database 21c due to the limited support period – not unless you are prepared to upgrade the database again soon, when support runs out. Oracle Database 19c is the current Long Term Support release, and that is what I recommend for production databases.

Different release types for Oracle Database - innovation vs long term support

To learn more about innovation release and our release model, have a look at our slide deck. We discuss it in the first chapter.

New Features

I want to mention a few new features. They haven’t attracted as much attention as the marquee features, but they are still cool.

Expression-based init.ora parameters make it possible to base database parameters on calculations made on the system’s configuration. For example, setting the database parameter CPU_COUNT to half the number of CPUs (on Windows):

alter system set cpu_count='$NUMBER_OF_PROCESSORS/2';

For more details, check out my video on YouTube.

Placeholders in SQL DDL Statements can improve application security because sensitive information, like passwords, doesn’t need to be hardcoded in SQL DDL. Example: You can make this statement:

CREATE USER :!username IDENTIFIED BY :!password ...

And Oracle Call Interface programs can substitute the placeholders into:

CREATE USER "DANIEL" IDENTIFIED BY "MyS3cr3tP!d" ...

This is similar to data binding but occurs in Oracle Client.

Conclusion

The complete 21c documentation is online, so I suggest that you head on over there and have a look. In the upgrade guide, you can find the list of behavior changes and also deprecated and desupported functionality. And finally, but most interesting perhaps, is Learning Database New Features.

Try it out in Always Free ADB or explore the Oracle LiveLabs.

Upgrade with Less Downtime

You can upgrade your Oracle Database with less downtime. With the release of AutoUpgrade 21.2 we added the fast deploy option. It allows you to run the preupgrade fixups while the database is still online. Only the actual upgrade will now require downtime.

The Usual Upgrade

When you upgrade the usual way, i.e., using AutoUpgrade in deploy mode, then you:

  1. Analyze the database – java -jar autoupgrade.jar -mode analyze. This is non-intrusive and doesn’t change anything in the database. You get a report, and you can fix any issues that AutoUpgrade can’t fix for you.
  2. Upgrade the database – java -jar autoupgrade.jar -mode deploy. The database is analyzed again, and based on the findings of the analysis, a set of preupgrade fixups is executed. Finally, the database upgrade takes place.

Best practice is to use AutoUpgrade in deploy mode

During step #2, the database is not available – you have downtime. Even though the database was analyzed previously, it will be re-analyzed to catch any new issues. This is the safest approach. From the first analysis (in step #1) to the second analysis (in step #2), potentially new issues can occur.

We talked to a few of our customers who are really trying to reduce the downtime of an upgrade. For them – every second counts and has a significant impact on their business. They asked us to make the upgrade faster.

Fast Deploy Mode

One customer suggested moving the final analysis and preupgrade fixup outside of the downtime window. Shortly before the downtime would start, they would like to run an analysis and the preupgrade fixups. Next, they would wait until the downtime window starts and do only the upgrade.

I must stress: there is a risk that a new issue pops up between the fixups and the upgrade – but this one customer is willing to take the risk. In exchange, they get upgrades with less downtime.

We call this approach fast deploy. The procedure is:

  1. Analyze the database – java -jar autoupgrade.jar -mode analyze
  2. Run the preupgrade fixups – java -jar autoupgrade.jar -mode fixups
  3. Now downtime starts
  4. Upgrade the database – java -jar autoupgrade.jar -mode upgrade

To reduce downtime use the new fast deploy mode - with increased risk

Fast deploy requires AutoUpgrade 21.2 or newer. The target database release can be any of the target releases already supported by AutoUpgrade, meaning 12.2 or higher.

Here are the details of the feature – from the AutoUpgrade change log:

This feature enables DBAs to run the prechecks and prefixups while the database is still online and then run deploy to complete the upgrade. Once the fixups have been run on the source database, the DBA can then skip the prechecks and prefixups stages and proceed directly to the phases that follow. For example:

Database can be online: java -jar autoupgrade.jar -mode fixups -config yourconfig
Upgrade time: java -jar autoupgrade.jar -mode upgrade -config yourconfig

The -mode upgrade command proceeds directly to the phases that follow the prechecks and prefixups stages during deploy. The main goal of this feature is to reduce database downtime by running the fixups when your database is online and open for business and then skipping the fixups when your database is offline. See Oracle documentation for further details.

Conclusion

If your application is really sensitive to downtime, you can now upgrade with less downtime using fast deploy. It introduces a slight risk that the preupgrade analysis will not detect an issue that causes trouble later on. You have to decide whether that risk is acceptable to you.

Our recommendation is to use the old approach – AutoUpgrade in deploy mode. Consider only fast deploy if downtime really hurts.

If you are interested in seeing which other things got put into AutoUpgrade 21.2, you can check the change log.

Zero Downtime Migration – Logical Online and Sequences

When migrating with Oracle Zero Downtime Migration (ZDM) and the logical online method, you use Data Pump for the initial load and Oracle GoldenGate to keep the target database in sync. When using Oracle GoldenGate, you need to pay attention to your sequences; otherwise, you will probably end up with a lot of ORA-00001 unique constraint violated errors.

The Problem

I think it is best to illustrate with an example:

  1. Source database: Sequence s1 currently has a value of 100.
  2. Source database: ZDM starts and performs the export for the initial load.
  3. Target database: The Data Pump import creates sequence s1 with current value 100.
  4. Source database: A user inserts a row and uses the sequence: insert into orders values(s1.nextval ... – s1.nextval takes the next number from the sequence, and the statement now looks like this: insert into orders values(101 ...
  5. Target database: GoldenGate replicates the statement. It does not take a new value from the sequence on the target – doing so would corrupt the data because there is no guarantee that the same number would be taken. Instead, it uses the statement as it was executed on the source: insert into orders values(101 ...
  6. Now it is time to complete the migration. Users are disconnected.
  7. Target database: GoldenGate synchronizes the final changes before users connect to the target database.
  8. Target database: A user inserts another record into orders: insert into orders values(s1.nextval ... – s1.nextval takes the next number from the sequence on the target database. It was created with current value 100, so the next value is 101: insert into orders values(101 ... – but 101 has already been used, and typically there is a primary key or unique key constraint on such a column. This causes ORA-00001 unique constraint violated.

I talked to our GoldenGate experts, and it is a common pitfall. During migrations, sequences are often forgotten, and soon after the migration they start seeing ORA-00001 unique constraint violated.

The Solution

After the switchover, you must ensure that the sequences have a proper value. The sequences must be forwarded or advanced. You can do that in many ways. Here are some ideas:

Recreate sequences

Right after the migration has been completed and GoldenGate has done the final synchronization, but before users are connecting, you can recreate the sequences.

  • Drop the sequences on the target database.
  • Recreate the sequences either using DBMS_METADATA or Data Pump (see the sketch after this list).
  • Make sure to adjust the privileges on the sequences.
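Here is a minimal sketch of that approach using DBMS_METADATA. The schema SALES, the sequence S1, the START WITH value, and the grantee APP_USER are just placeholders for the example:

-- On the source database: extract the sequence DDL (it contains the current START WITH value)
select dbms_metadata.get_ddl('SEQUENCE', 'S1', 'SALES') from dual;

-- On the target database: drop the stale sequence and recreate it using the DDL from above
drop sequence sales.s1;
create sequence sales.s1 start with 1501 increment by 1 cache 20;

-- Re-apply any privileges that existed on the sequence
grant select on sales.s1 to app_user;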

If you can wrap all those changes into a shell script, you can have ZDM execute it automatically as part of the migration flow. Check out Pro Tip 5: Adding Custom Scripts from a previous blog post.

Forward or advance the sequence

Another approach is to forward or advance the sequence. Increment the sequence on the target database until the next value is high enough. What do I mean by high enough?

  • The value of the same sequence on the source database.
  • If you know the sequence is only used to generate numbers for one column (let’s call it t1.c1), then get the highest value from that column SELECT max(c1) FROM t1. Typically, such a column is a primary key column with a unique index underneath, so a SELECT max should go very fast.

Now that you have the desired new value for the sequence, you can calculate how much to increment by. Example: the source database sequence (DBA_SEQUENCES.LAST_NUMBER) is at 1500 and the target database sequence is at 1000, so you need to increment by 500:

alter sequence seq1 increment by 500;
select seq1.nextval from dual;
alter sequence seq1 increment by 1;

If you use the SELECT max approach, you need to increment by 501. In any case, it is better to increment by too much than by too little.
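If you prefer to script the forwarding, here is a minimal PL/SQL sketch of the SELECT max approach. The sequence SALES.S1 and the column SALES.ORDERS.ORDER_ID are made-up names for the example; adapt them to your own objects:

declare
  l_max_used  number;
  l_current   number;
  l_gap       number;
begin
  -- Highest value already used in the column that the sequence feeds
  select max(order_id) into l_max_used from sales.orders;

  -- Take one value to see where the target sequence currently is
  execute immediate 'select sales.s1.nextval from dual' into l_current;

  -- Round up; better to increment by too much than too little
  l_gap := l_max_used - l_current + 1;

  if l_gap > 0 then
    execute immediate 'alter sequence sales.s1 increment by ' || l_gap;
    execute immediate 'select sales.s1.nextval from dual' into l_current;
    execute immediate 'alter sequence sales.s1 increment by 1';
  end if;
end;
/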

Similar to the former option, you can put the commands into a shell script and have ZDM execute it automatically.

Replicate sequence changes

GoldenGate also has the possibility of replicating the changes to the sequences. If you decide to use this with ZDM, it will require additional configuration of GoldenGate, which is why I prefer any of the former options. If you want to know more about it, check out Oracle GoldenGate 19.1 – Using Oracle GoldenGate for Oracle Database, chapter 8.1 Installing Support for Oracle Sequences.

Conclusion

If your database uses sequences and you are using Oracle GoldenGate as part of your migration, make sure to handle your sequences properly. Otherwise, you will fall into a typical pitfall of GoldenGate migrations.


DataPump Export from Data Guard

Exporting data from your production database can be a pain for several reasons:

  • You probably want a consistent export, so you set the Data Pump parameter FLASHBACK_TIME. That requires a lot of undo space, and if there is heavy activity on the database, you risk running into ORA-01555.
  • You want to export the data quickly using the parallel options in Data Pump. The quicker you export, the less susceptible you are to ORA-01555 – but parallel uses more resources (see the sketch after this list).
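For reference, such an export from the primary database would typically look something like this. It is just a sketch – the directory object, parallel degree, and dump file name are made up for the example:

expdp system schemas=sales directory=mydir flashback_time=systimestamp parallel=4 dumpfile=sales_%U.dmp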

If you have an Active Data Guard, you can export your data from the standby database. It should be straightforward, right? An export is after all just reading data…

But Data Pump is not just a simple export tool. It is quite advanced – and to support all that functionality, it requires a master table to be created, which is used to coordinate the operation. Further, it uses Advanced Queuing (AQ) to communicate between the processes – and AQ also requires a writable database.

But how about DML Redirect that was introduced in Oracle Database 19c? Can it be used to handle those few DMLs? No, because creation of the master table is a DDL – not a DML. If you try, you will fail:

$  expdp system@localhost:1521/pdb1 schemas=SH directory=mydir

Export: Release 19.0.0.0.0 - Production on Mon Apr 12 18:19:22 2021
Version 19.10.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Password:

Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
ORA-31626: job does not exist
ORA-31633: unable to create master table "SYSTEM.SYS_EXPORT_SCHEMA_05"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1163
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.KUPV$FT", line 1056
ORA-06512: at "SYS.KUPV$FT", line 1044

Solution

I turned on my bat signal and got help from my fellow Product Manager, Pieter Van Puymbroeck.

Use a snapshot standby, he said. Of course – Snapshot Standby is part of Enterprise Edition, so if you have Data Guard, you also have Snapshot Standby. If you need a quick recap on what a snapshot standby database is, you should check out this video.

First, I convert my physical standby database (named CDB19_fra3zt) into a snapshot standby database:

DGMGRL> convert database 'CDB19_fra3zt' to snapshot standby;

Next, I create a directory object that can be used with Data Pump:

SQL> create directory mydir as '/tmp';

Finally, I start an export. Notice that I haven’t set FLASHBACK_TIME. When exporting from a snapshot standby, there are probably no other users on the database, so you don’t need to worry about the consistency of the export:

$ expdp system schemas=sales directory=mydir

My data has now been exported, and it is time to convert my snapshot standby database back into a physical standby database to properly protect my precious database:

DGMGRL> convert database 'CDB19_fra3zt' to physical standby;

But

Will this jeopardize my primary database? No, but…

Your primary database is still protected, because log files are still sent to the standby database, but not applied. What will then happen if I need to switch over or fail over?

  1. Convert the snapshot standby database back into a physical standby database
  2. That will implicitly issue a Flashback Database and revert all the changes made
  3. Then catch up with the primary by applying all the log files
  4. Finally, complete the switchover or failover

Bottom line: it will take longer! The flashback operation is really fast and should be done within minutes – after all, Data Pump doesn’t make that many changes, only the master table and some AQ objects. How much time the log apply takes depends entirely on how big the apply lag is.

So, if you choose to go down this path, be prepared that in the event of a switchover or a failover, it will take a little longer until the standby is open for business. Unless you have two standby databases – then you still have one ready to jump in immediately.

Conclusion

You can export from your Data Guard standby if you temporarily convert it to a snapshot standby. Be prepared that a switchover or failover operation will take longer.

If you want to know more about Data Pump, you should check out our webinar Migration Strategies – Insights, Tips and Secrets.

And finally, speaking of Data Pump, remember to get the Data Pump bundle patch (bug 32551008) as Roy mentioned on Twitter.

Future-proof Your Encrypted Database During Upgrade

In Oracle Database 19c, use of sqlnet.ora to define the keystore (or wallet) location has been deprecated. Instead, you should use the database parameter WALLET_ROOT. If you upgrade to Oracle Database 19c with AutoUpgrade, switching has become a lot easier. Let AutoUpgrade do the work for you.

How To

If you instruct AutoUpgrade to use the new encryption parameters it will not only add the parameters to the SPFile but also copy the keystore file to the location defined. This is what you have to do:

  1. Create a text file that contains the definition of WALLET_ROOT and TDE_CONFIGURATION. I call it /tmp/au-pfile-tde. Optionally, change the location of the keystore to fit your organization.
WALLET_ROOT='/etc/oracle/keystores/$ORACLE_SID'
TDE_CONFIGURATION='KEYSTORE_CONFIGURATION=FILE'
  2. Instruct AutoUpgrade to add those parameters during and after the upgrade. Add the following to your AutoUpgrade config file:
upg1.add_during_upgrade_pfile=/tmp/au-pfile-tde
upg1.add_after_upgrade_pfile=/tmp/au-pfile-tde

That’s it! AutoUpgrade will detect that you are changing the keystore location, and it will copy the keystore files to the new location at the appropriate time.

Important: When you use WALLET_ROOT the keystore files should always be stored in a subfolder called tde. This means that the keystore files will end up in /etc/oracle/keystores/$ORACLE_SID/tde. You should not add /tde manually to WALLET_ROOT. The database will do that automatically when it looks up the keystore.
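After the upgrade, you can verify that the database finds the keystore in the new location. A quick sanity check – the STATUS should be OPEN, and the location should reflect the new WALLET_ROOT setting:

SQL> select wrl_type, wrl_parameter, status, wallet_type from v$encryption_wallet;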

Now What

Since you have moved the keystore files to a new location there are some things that you should take care of:

  • You can remove the sqlnet.ora parameter ENCRYPTION_WALLET_LOCATION. It is not used anymore.
  • The keystore files that were stored in the old location (the one defined by ENCRYPTION_WALLET_LOCATION) can be moved manually to a backup location. I would never recommend that you delete keystore files – NEVER! Instead, move the old files to a backup location and keep them there (see the sketch after this list).
  • The keystore files are to be considered critical and contain sensitive information, so ensure that the new location has the same security measures as the old one – like:
    • Restricted file permissions
    • Auditing
    • Backup
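Just as an illustration of those points, the housekeeping on the OS side could look something like this. The backup directory is made up for the example, and the file names are the usual software keystore files:

# Move the old keystore files to a backup location - never delete them
mkdir -p /u01/app/oracle/admin/keystore-backup/$ORACLE_SID
mv /etc/oracle/keystores/$ORACLE_SID/ewallet.p12 /u01/app/oracle/admin/keystore-backup/$ORACLE_SID/
mv /etc/oracle/keystores/$ORACLE_SID/cwallet.sso /u01/app/oracle/admin/keystore-backup/$ORACLE_SID/

# Keep the new location restricted to the oracle user only
chmod -R 700 /etc/oracle/keystores/$ORACLE_SID/tde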

Background

Traditionally, if you have an encrypted database, you need to define the keystore location in sqlnet.ora using the parameter ENCRYPTION_WALLET_LOCATION. You would set it to something like this:

ENCRYPTION_WALLET_LOCATION=
 (SOURCE=
  (METHOD=FILE)
   (METHOD_DATA=
    (DIRECTORY=/etc/oracle/keystores/$ORACLE_SID)))

For many reasons, sqlnet.ora was not a good location for this parameter, and especially with the introduction of isolated keystore mode, a new method was needed.

In came the WALLET_ROOT and TDE_CONFIGURATION database initialization parameters. The former, WALLET_ROOT, defines the location of the keystore. The latter, TDE_CONFIGURATION, defines which kind of keystore is in use. Typically, it is set to FILE – that’s when you use a software keystore (a file in the OS). But it could also be OKV if you are using Oracle Key Vault. For a software keystore, you would set it to something like:

ALTER SYSTEM SET WALLET_ROOT='/etc/oracle/keystores/$ORACLE_SID' SCOPE=SPFILE;
ALTER SYSTEM SET TDE_CONFIGURATION='KEYSTORE_CONFIGURATION=FILE' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP

Now, the database finds the keystore location using the WALLET_ROOT parameter, which is much smoother.

As of Oracle Database 19c, configuring the keystore using sqlnet.ora has been deprecated, and as with any other deprecated functionality, you should move to a fully supported alternative.


Performance Stability after Upgrade and Migration

Yesterday I gave a talk to the Danish Oracle User Group. The topic was Performance stability after upgrade and migration and included something about statistics, a lot about SQL Plan Management and a few recommendations for parameters in Oracle Database 19c.


If you are interested you can download the slides and have a look yourself. The talk itself was not recorded.

You Want More?

The content I presented is part of a revamped webinar that Roy, Mike and I are giving on Thursday, 4 March 2021 at 19:00 CET. We call it

Performance Stability, Tips, Tricks & Underscores

The third webinar in the series addresses performance stability, tips, tricks and underscores. Participants learn how to perform proactive testing before upgrading to Oracle Database 19c.

Topics that will be covered during this session:

  • Testing Principles
  • Ensure Performance Stability
  • SQL Plan Management
  • Real Application Testing
  • Tips, Tricks & Underscores to get the best out of Oracle Database 19c

It is all tech – no buzzwords, no marketing. We have to keep it within two hours, but I think we already have way too much content. I assure you it will be action-packed.

If that sounds interesting, sign up and I will see you next Thursday. It will be so much fun.

Otherwise, you can find the recording a few days later. In addition, most of the content will also be cut into bite-size pieces and uploaded to our YouTube channel – why not subscribe?


Upgrade Internals – Webinar

Here is a little teaser for our upcoming webinar on Database Upgrade Internals – and so much more on Wednesday 10 February 2021 at 10:00 CET. You can still sign up.

Virtual Classroom Series - Upgrade to Oracle Database 19c

We have a special guest star joining, Frederick Alvarez – one of the masterminds behind AutoUpgrade. In addition, Mike Dietrich will be there as usual, and we have prepared a whole lot of new content for this webinar. And – if I may say so myself – it has become quite good and interesting. If you are still not convinced, take a look at this!

What Actually Happens During Upgrade

We will go into the upgrade itself, dissect it, and discover what actually happens. Hopefully, it will no longer be a black box to you.

How are the workers processing the upgrade?

How is a container database upgraded?

What goes on in the phases of the upgrade?

I Need More Power, Scotty

We will also cover how you can make your upgrade faster and show what you can achieve.

What is the benefit of gathering stats before the upgrade?

What Can You Achieve With AutoUpgrade

A deep dive into the checks that are executed by AutoUpgrade. A list of some of the comprehensive checks that are executed by AutoUpgrade.

One-page monitoring

The Old Releases

For those of you on very old releases, we will discuss how you can upgrade to Oracle Database 19c.

What to consider when doing upgrades from very old releases

See You

Once the webinar is over we will publish a recording and make the slides available for download.

See you on Wednesday!

Control Upgrade Parallelism In AutoUpgrade

Under the hood, AutoUpgrade uses the Parallel Upgrade Utility, or catctl.pl, to do the database upgrade. The Parallel Upgrade Utility has a long list of options that you can configure, the parallel options being the most notable. In your AutoUpgrade config file, you can now specify a subset of options to catctl.pl using the parameter catctl_options.

Now, you should not expect a 10x improvement by adding a ton of CPUs to your upgrade. Mike Dietrich posted a really good article that explains what matters to an upgrade when it comes to performance. But if you want to squeeze out the very last resources on your system during upgrade, or fine-tune how the CPU resources are distributed, you can do it with AutoUpgrade.

How

First, you can find a list of catctl parameters in the documentation.

When you have determined the parameters that you want to use, specify them in your config file either globally (for all upgrades):

global.catctl_options=-N 8

or locally for a specific upgrade:

upg1.catctl_options=-N 8

The above examples will run the upgrade of PDBs using eight parallel processes.

You should have a look in the documentation for the minimum and maximum values of the settings. At least for the parallel settings, they are really good to know.

Non-CDB

Support for this parameter with non-CDB databases was added in AutoUpgrade 21.1.2. To ensure that the upgrade runs with as many parallel processes as possible:

upg1.catctl_options=-n 8

CDB and PDB

When you upgrade a CDB, the following happens:

  • First, CDB$ROOT is upgraded using eight parallel processes.
  • Next, a number of PDBs are upgraded concurrently, starting with PDB$SEED. The total number of parallel processes to use is controlled by the parameter n.
  • Each individual PDB is upgraded using a number of SQL processes as well. This is controlled by the parameter N.

This means that the number of PDBs that are upgraded at the same time is: n / N

What is the best value of n?

  • If you are conservative, set n to CPU_COUNT.
  • If you are bold, you could probably raise it further, because some of the phases in the upgrade run serially or don’t use the full parallel capacity. Try setting n to CPU_COUNT + 10 % (see the quick check after this list) and see how loaded your CPUs get. Keep increasing until you find a suitable level.
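A quick way to check the current value before deciding on n (just a convenience; the example below assumes CPU_COUNT=48):

SQL> show parameter cpu_count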

In the documentation there are some really good examples and explanations of using N and n together.

On a system with CPU_COUNT=48, put the following in your AutoUpgrade config file to run 6 PDB upgrades concurrently using 8 parallel processes:

upg1.catctl_options=-n 48 -N 8

Note: regardless of what you specify, AutoUpgrade will always upgrade CDB$ROOT with the maximum number of parallel processes. CDB$ROOT is special; it must be upgraded before any of the others can start, so it makes sense to get it completed as fast as possible.

If you are doing an unplug-plug upgrade of a single PDB it could be a good idea to add more parallel processes to that single upgrade. If you want to use 8 parallel processes:

upg1.catctl_options=-N 8

Conclusion

It can be useful to override the default parallel settings during upgrades. You should not expect a 10x performance improvement, but you might squeeze out the very last resources. What is the best setting? It depends. You should go with the defaults, or test it using your own databases.

History

12 April 2021: Added information about support of non-CDB databases as of AutoUpgrade 21.1.2.