If Importing Statistics Using DBMS_STATS Is Slow

When migrating Oracle Databases, you often transport the statistics using dbms_stats. Our team got involved in an interesting case the other day, and I would like to share the solution with you.

Problem

A customer migrated to Oracle Database 19c and decided to move the statistics using dbms_stats.import_schema_stats. They noticed that the procedure started to burn a lot of CPU, and after 38 hours, they gave up.

A SQL Monitor report showed one statement that accounted for almost all the time: an insert into sys.dbms_stats_id_map_tab with a subselect referencing the staging table. The staging table is the one you specify in the stattab argument in the call to dbms_stats.import_schema_stats. It holds the optimizer statistics in a portable format; from the staging table, the procedure inserts the statistics into the data dictionary in the proper format.
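If you need to produce a similar report yourself, you can pull it with dbms_sqltune, assuming you know the SQL ID of the statement (the SQL ID below is made up):

SQL> select dbms_sqltune.report_sql_monitor(
		sql_id => 'abcd1234efgh5',
		type   => 'TEXT')
	from dual;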

Obviously, they could already see that the procedure would take far too much time.

Diagnosing

The first shot was to look at the code behind dbms_stats. But this specific code hadn’t changed since its introduction many releases ago.

Recursive statements that touch the data dictionary immediately drew our attention to dictionary statistics. But the customer told us that they were not stale.

The customer ran SQL Tuning Advisor on the offending statement, and one of the findings was about stale statistics. SQL Tuning Advisor recommended gathering statistics on the staging table and corresponding indexes.
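For reference, a minimal sketch of running SQL Tuning Advisor against a single statement; the SQL ID and task name are hypothetical:

SQL> set serveroutput on
SQL> declare
		l_task varchar2(128);
	begin
		--Create and execute a tuning task for the offending statement
		l_task := dbms_sqltune.create_tuning_task(sql_id => 'abcd1234efgh5');
		dbms_sqltune.execute_tuning_task(task_name => l_task);
		dbms_output.put_line('Task: ' || l_task);
	end;
	/
SQL> --Display the findings, using the task name printed above
SQL> select dbms_sqltune.report_tuning_task('TASK_1234') from dual;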

One of our performance experts looked at the execution plan and found a pattern he had seen before. He tried to disable Join Predicate Push Down in the session. It helped, but this was just a workaround. We wanted to find the root cause.

Solution

SQL Tuning Advisor had come up with the real problem: stale statistics on the staging table and corresponding indexes. Once the customer gathered statistics on the staging table and indexes, the import of statistics finished in 2 hours and 27 minutes, which was acceptable.

We also discovered that the dictionary statistics were not as accurate as the customer had initially concluded. In fact, they had unintentionally misinformed us. A fresh run of dbms_stats.gather_dictionary_stats gave a slight performance improvement as well.

Recommendations

These recommendations are now part of our best practices for migrations.

  1. Before importing statistics using dbms_stats.import_schema_stats, gather stats on the staging table.

  2. Immediately after importing a lot of data, and before you import statistics with dbms_stats or anything else, you should gather dictionary statistics. This applies to regular Data Pump imports and transportable tablespace metadata imports.

The run book should look like this:

$ impdp parfile=import.par ...
$ sqlplus / as sysdba

SQL> exec dbms_stats.gather_dictionary_stats; 
SQL> exec dbms_stats.gather_table_stats(
		ownname=>'SYSTEM',
		tabname=>'MY_STG_TAB_1',
		cascade=>TRUE);
SQL> exec dbms_stats.import_schema_stats(
		ownname=>'SYSTEM',
		stattab=>'MY_STG_TAB_1',
		statown=>'SYSTEM');

In the above example, the staging table is called SYSTEM.MY_STG_TAB_1.

The recommendation applies as well if you are using the procedures dbms_stats.import_database_stats or dbms_stats.import_table_stats.

Dictionary Statistics

Why is it important to gather dictionary statistics immediately after the import?

When you import data or use transportable tablespaces, you often have a brand new, empty database. Then you import a lot of objects. Those objects are represented as rows in the data dictionary. For instance, the tables you import now appear as rows in SYS.TAB$, the partitions in SYS.TABPART$, the indexes in SYS.IND$, and so forth.
Those internal tables were almost empty before the import; now they hold many rows. This means that their statistics are stale. When you start to use functionality in the database, like importing statistics, recursive queries against the internal tables are executed. With stale statistics on the dictionary, you can get suboptimal execution plans and bad performance. Gathering dictionary statistics fixes this for you.
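A quick spot check of the dictionary statistics is to query DBA_TAB_STATISTICS for a few of the internal tables:

SQL> select table_name, num_rows, last_analyzed, stale_stats
	from dba_tab_statistics
	where owner='SYS'
	and table_name in ('TAB$', 'TABPART$', 'IND$');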

Conclusion

Statistics are always vital, whether optimizer statistics on user data or internally in the data dictionary. Be sure to verify the accuracy of your statistics when you have problems.

Also, SQL Tuning Advisor is a great tool. It can quickly come up with suggestions for fixing problems. Use the recommendations as input to your troubleshooting. SQL Tuning Advisor also works on internal tables.

Additional Information

We have a few videos on our YouTube channel with more information about transporting statistics with dbms_stats.

Do I Need To Disable the Scheduler During Upgrade?

I was asked a question the other day:

When upgrading an Oracle Database, do we need to disable the scheduler (DBMS_SCHEDULER)?

The short answer is: No … or perhaps.

What Happens During Analyze

When you use AutoUpgrade in Analyze mode (java -jar autoupgrade.jar -mode analyze), it will check your database. It is a non-intrusive check, and normal operations can continue, including use of the scheduler.

What Happens During Deploy

When downtime starts, and you are ready to upgrade your database, you start AutoUpgrade in Deploy mode (java -jar autoupgrade.jar -mode deploy).

Analyze And Fixups

First, AutoUpgrade re-analyzes the database, and based on the findings, it runs pre-upgrade fixups. The fixups make changes to the database, like gathering dictionary statistics, emptying the recycle bin, and other administrative tasks. The scheduler remains active during this period, so if you have any jobs that do administrative things on the database, like gathering statistics, there is a chance that they will collide. But typically, this is not a problem.

Upgrade

Then the actual upgrade of the database can start. This happens while the database is started in upgrade mode (STARTUP UPGRADE).

When the database is started in upgrade mode, many things are disabled automatically. The scheduler is one of them.

Examples of other changes that happen in upgrade mode:

  • System triggers are disabled
  • Certain parameters are changed
  • Resource Manager is disabled

You can check the alert log for more information. Here is a snippet:

2022-05-17T11:56:54.585122+02:00
AQ Processes can not start in restrict mode

Post-Upgrade

After the actual upgrade, the database is restarted in normal mode. The scheduler becomes enabled again.

In this phase, AutoUpgrade is recompiling invalid objects and performing post-upgrade fixups. Changes will be made to the database, like re-gathering dictionary statistics. Similar to the pre-upgrade fixups, depending on the nature of your scheduler jobs, there is a risk of things colliding. That can cause waits or concurrency issues.
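If you want to see whether any jobs are active while the fixups run, you can check for running scheduler jobs:

SQL> select owner, job_name, session_id
	from dba_scheduler_running_jobs;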

Finally, the time zone file is upgraded. This process requires the database to be started in upgrade mode again. Again, the scheduler will be automatically disabled.

What Is The Answer?

From a functional point of view, the scheduler is enabled and working during some parts of an upgrade. Only during the most critical parts is it automatically disabled.

So, the answer is: No, you do not need to disable the scheduler during upgrade. The database will automatically disable it when needed.

But the database is restarted multiple times which of course will affect any running scheduler jobs. Depending on the nature of your scheduler jobs, you might decide to disable it completely during the entire database upgrade. For instance, if you have long-running jobs or jobs that are sensitive to being interrupted. On the other hand, if your jobs are short-running, restart easily, or you basically don’t care, then it is perfectly fine to leave it all running during a database upgrade.

Manually Disable The Scheduler

If you decide to disable the scheduler manually, you should temporarily change job_queue_processes:

SQL> alter system set job_queue_processes=0 scope=both;

Don’t forget to set it to the original value after the upgrade.
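For example, note the current value before the upgrade and restore it afterward (4000 is just a sample value; use whatever your database had):

SQL> show parameter job_queue_processes
SQL> alter system set job_queue_processes=0 scope=both;
SQL> --After the upgrade, restore the original value
SQL> alter system set job_queue_processes=4000 scope=both;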

You can find more information in MOS note How to disable the scheduler using SCHEDULER_DISABLED attribute in 10g (Doc ID 1491941.1).

Upgrade Mode

A few more words about upgrade mode:

When you start Oracle Database in upgrade mode, you can only run queries on fixed views. If you attempt to run other views or PL/SQL, then you receive errors.

When the database is started in upgrade mode, only queries on fixed views execute without errors. This restriction applies until you either run the Parallel Upgrade Utility (catctl.pl) directly, or indirectly by using the dbupgrade script. Before running an upgrade script, using PL/SQL on any other view, or running queries on any other view returns an error.

About Starting Oracle Database in Upgrade Mode, Upgrade Guide 19c

Starts the database in OPEN UPGRADE mode and sets system initialization parameters to specific values required to enable database upgrade scripts to be run. UPGRADE should only be used when a database is first started with a new version of the Oracle Database Server.

When run, upgrade scripts transform an installed version or release of an Oracle database into a later version, for example, to upgrade an Oracle9i database to Oracle Database 10g. Once the upgrade completes, the database should be shut down and restarted normally.

12.46 STARTUP, User’s Guide and Reference 19c

Transparent Data Encryption and Multitenant

In a multitenant environment where you want to use Transparent Data Encryption (TDE), you can do it in two ways:

  • United keystore mode. The default option. The CDB has a keystore, and all PDBs use that keystore. The encryption keys belong to each individual PDB, but the one keystore contains all the encryption keys.
  • Isolated keystore mode. Became available with 19.11.0. The CDB still has a keystore that all PDBs can use, but you can configure a PDB to use its own keystore. If a PDB uses TDE in isolated mode, that PDB physically has its own keystore, where only its TDE encryption keys are stored. PDBs that are not configured to use isolated mode put their encryption keys into the keystore of the CDB. Isolated mode is fairly new and not yet fully supported by AutoUpgrade, OCI tooling, and other tools.

United mode is the easy way of doing things. You configure one keystore and then all PDBs can use that keystore.

Isolated mode is suitable when you want to completely isolate the PDBs and even keep the encryption keys separate. Moreover, you can have different passwords protecting the keystores. Isolated mode strengthens security but adds maintenance overhead (more keystores to back up and protect). Additionally, in isolated mode, each PDB can use a different kind of keystore. The CDB can use a software keystore (a file in the OS), PDB1 can use its own software keystore (another file in the OS), and PDB2 can store its encryption keys in Oracle Key Vault. More security and more flexibility.

Regardless of which keystore mode you plan to use, you always start by configuring TDE in united mode in the CDB. Afterward you can enable isolated mode in individual PDBs, if you want that.

How To Configure TDE

This procedure enables TDE in united mode. I will use a software keystore (a file in the OS):

  1. Create a directory where I will place the keystore. You can change $ORA_KEYBASE to another location.

    export ORA_KEYBASE=$ORACLE_BASE/admin/$ORACLE_SID/wallet
    #Don't change ORA_KEYSTORE
    export ORA_KEYSTORE=$ORA_KEYBASE/tde
    mkdir -p $ORA_KEYSTORE
    
  2. Configure WALLET_ROOT to tell the database where I want to create the keystore files, and TDE_CONFIGURATION to tell the database to use a software keystore:

    alter session set container=cdb$root;
    alter system set wallet_root='$ORA_KEYBASE' scope=spfile;
    shutdown immediate
    startup
    alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE' scope=both;
    
  3. Now create the keystore and a TDE encryption key for CDB$ROOT. My TDE keystore password is oracle_4U; you should pick a better password:

    administer key management create keystore '$ORA_KEYSTORE' identified by "oracle_4U";
    administer key management set keystore open force keystore identified by "oracle_4U";
    administer key management set key identified by "oracle_4U" with backup;
    

    You can optionally use the CONTAINERS=ALL clause to set a TDE encryption key in all PDBs. Don’t do this if you plan on using isolated keystore later on:

    administer key management create keystore '$ORA_KEYSTORE' identified by "oracle_4U";
    administer key management set keystore open force keystore identified by "oracle_4U" container=all;
    administer key management set key identified by "oracle_4U" with backup container=all;
    
  4. Optionally, create an auto-login keystore. If you don’t, you must manually input the TDE keystore password every time the database starts.

    administer key management create local auto_login keystore from keystore '$ORA_KEYSTORE' identified by "oracle_4U";
    

That’s it. You can now start to create encrypted tablespaces:

create tablespace ... encryption encrypt;

I have now created the root keystore in the location defined by WALLET_ROOT. The database automatically adds a subfolder called tde. In that folder you find ewallet.p12 which is the actual software keystore of the CDB, and cwallet.sso which is the auto-login keystore:

$ pwd
/u01/app/oracle/admin/CDB2/wallet/tde
$ ll
total 8
-rw-------. 1 oracle dba 4040 May 16 09:35 cwallet.sso
-rw-------. 1 oracle dba 3995 May 16 09:35 ewallet.p12

Configure Isolated Keystore

You can enable isolated mode in a PDB after you configure the CDB for united mode (the above procedure). The following assumes that TDE has not been configured yet in PDB1:

  1. Switch to the PDB and configure TDE_CONFIGURATION:
    ALTER SESSION SET CONTAINER=PDB1;
    ALTER SYSTEM SET TDE_CONFIGURATION='KEYSTORE_CONFIGURATION=FILE' SCOPE=BOTH;
    
  2. Create the keystore and a TDE encryption key for the PDB. Notice I am giving my PDB keystore a different password:
    ADMINISTER KEY MANAGEMENT CREATE KEYSTORE IDENTIFIED BY "oracle_4U2";
    ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN FORCE KEYSTORE IDENTIFIED BY "oracle_4U2";
    ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "oracle_4U2" WITH BACKUP;
    
  3. Optionally, create an auto-login keystore of the PDB keystore. If not, you need to manually input the TDE keystore password in the PDB every time it starts:
    ADMINISTER KEY MANAGEMENT CREATE LOCAL AUTO_LOGIN KEYSTORE FROM KEYSTORE IDENTIFIED BY "oracle_4U2";	
    

The PDB keystore is now placed in a subfolder of WALLET_ROOT matching the PDB GUID (D6A29777EC214B6FE055000000000001). You find similar files, ewallet.p12 and cwallet.sso, in the dedicated folder for the isolated PDB keystore:

$ pwd
/u01/app/oracle/admin/CDB2/wallet/D6A29777EC214B6FE055000000000001/tde
$ ll
total 8
-rw-------. 1 oracle dba 2120 May 16 09:37 cwallet.sso
-rw-------. 1 oracle dba 2059 May 16 09:35 ewallet.p12

To get the GUID of a PDB:

select name, guid from v$containers;

The database will automatically create the directories needed for the PDB keystore.

Migrating Between Keystore Modes

If you need to migrate between the two keystore modes, there are two commands you can use. oracle_4U is the keystore password of the root keystore; oracle_4U2 is the keystore password of the PDB keystore.

To migrate a PDB from united to isolated mode, i.e., to isolate a keystore:

alter session set container=PDB1;
administer key management
   force isolate keystore identified by "oracle_4U2" 
   from root keystore force keystore identified by "oracle_4U"
   with backup;

To migrate a PDB from isolated to united mode, i.e., to unite a PDB keystore into a root keystore:

alter session set container=PDB1;
administer key management
   unite keystore identified by "oracle_4U2" 
   with root keystore force keystore identified by "oracle_4U" 
   with backup;

To determine which keystore mode is in use:

select con_id, wrl_parameter, keystore_mode 
   from v$encryption_wallet;

Final Notes

If you want to use isolated keystore mode in 19.11, 19.12, or 19.13, you must also apply patch 32235513. From 19.14 onwards, this is not needed.

Isolated mode used to be a cloud-only feature. But since 19.11 it has been made available to everyone.


Troubleshooting Rabbit Hole: From Data Guard to Data Integrity Checks

I always fear the worst when I get a TNS error. It’s not my expertise. A TNS error was exactly what I got while I configured a Data Guard environment. Redo Transport didn’t work; the redo logs never made it to the standby database.

The Error

I took a look in the alert log on the primary database and found this error:

2022-05-10T08:25:28.739917+00:00
"alert_SALES2.log" 5136L, 255034C
        TCP/IP NT Protocol Adapter for Linux: Version 12.2.0.1.0 - Production
  Time: 10-MAY-2022 18:09:02
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12650

TNS-12650: No common encryption or data integrity algorithm
    ns secondary err code: 0
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0

A little further in the alert log, I found proof that the primary database could not connect to the standby database:

2022-05-10T18:09:02.991061+00:00
Error 12650 received logging on to the standby
TT04: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (12650)
TT04: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
2022-05-10T18:09:02.991482+00:00
Errors in file /u01/app/oracle/diag/rdbms/sales2_fra3cx/SALES2/trace/SALES2_tt04_75629.trc:
ORA-12650: No common encryption or data integrity algorithm
Error 12650 for archive log file 1 to '...'

The Investigation

As always, google it! Although I have used DuckDuckGo instead of Google for many years for privacy reasons, I still say google it, which is fairly annoying.

The search revealed this MOS note: ORA-12650: No Common Encryption Or Data Integrity Algorithm When Using SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=sha256 (Doc ID 2396891.1). Although it is fairly old, it led me to look for issues with data integrity checks defined in sqlnet.ora.

The primary database had the following defined in sqlnet.ora:

SQLNET.CRYPTO_CHECKSUM_SERVER=REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA1)
SQLNET.CRYPTO_CHECKSUM_CLIENT=REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA1)

The above means that any connection made to or from this database must use data integrity checks; CRYPTO_CHECKSUM_SERVER and CRYPTO_CHECKSUM_CLIENT define that. Also, the database only accepts connections using the SHA1 algorithm.

Then I looked in sqlnet.ora on the standby database:

SQLNET.CRYPTO_CHECKSUM_CLIENT=ACCEPTED
SQLNET.CRYPTO_CHECKSUM_SERVER=ACCEPTED
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA256,SHA384,SHA512,SHA1)
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA256,SHA384,SHA512)

This database does not require data integrity checks. But if the other party requests or requires it, the server is fine with it. That's the meaning of ACCEPTED. But look at the allowed algorithms. When acting as server (i.e., receiving connections from someone else), it does not allow the SHA1 algorithm, which is the only one the counterpart allows.

The Solution

I decided to remove all instances of SHA1 because:

  • It is an old algorithm
  • Any 12c database or client supports newer algorithms
  • In this environment, I don’t have any old 11g servers or clients

I added all the SHA-2 algorithms as supported algorithms. Now, sqlnet.ora in both databases looks like this:

SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA256,SHA384,SHA512)
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA256,SHA384,SHA512)

This solved the problem, and redo transport worked fine.
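To verify that a connection actually negotiates a data integrity algorithm, you can query v$session_connect_info from within the session; the banner lines show the crypto-checksumming service in use:

SQL> select distinct network_service_banner
	from v$session_connect_info
	where sid = sys_context('USERENV', 'SID');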

If I wanted maximum security, I would allow only the SHA512 algorithm in both sqlnet.ora files:

SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA512)
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA512)

And force both databases to always use data integrity checks:

SQLNET.CRYPTO_CHECKSUM_SERVER=REQUIRED
SQLNET.CRYPTO_CHECKSUM_CLIENT=REQUIRED

Security

Some questions I asked myself while reading the Security Guide 19c.

Why do you want data integrity checks on your connections?

To protect against two types of attack:

  1. Data modification attack: An unauthorized party intercepting data in transit, altering it, and retransmitting it is a data modification attack. For example, intercepting a $100 bank deposit, changing the amount to $10,000, and retransmitting the higher amount is a data modification attack.
  2. Replay attack: Repetitively retransmitting an entire set of valid data is a replay attack, such as intercepting a $100 bank withdrawal and retransmitting it ten times, thereby receiving $1,000.

Can I do more to strengthen security in sqlnet.ora?

Yes. You should definitely also take a look at network encryption to protect data in transit. Take a look at Configuring Oracle Database Native Network Encryption and Data Integrity in the Security Guide 19c. These four parameters are of interest:

  • SQLNET.ENCRYPTION_SERVER
  • SQLNET.ENCRYPTION_CLIENT
  • SQLNET.ENCRYPTION_TYPES_SERVER
  • SQLNET.ENCRYPTION_TYPES_CLIENT

Also, reading Securing the Oracle Database – A technical primer can inspire you.

What’s wrong with SHA-1?

It’s old and has been made insecure by computer evolution. From Wikipedia:

In cryptography, SHA-1 (Secure Hash Algorithm 1) is a cryptographically broken but still widely used hash function which takes an input and produces a 160-bit (20-byte) hash value known as a message digest – typically rendered as a hexadecimal number, 40 digits long. It was designed by the United States National Security Agency, and is a U.S. Federal Information Processing Standard.

Since 2005, SHA-1 has not been considered secure against well-funded opponents; as of 2010 many organizations have recommended its replacement. NIST formally deprecated use of SHA-1 in 2011 and disallowed its use for digital signatures in 2013. As of 2020, chosen-prefix attacks against SHA-1 are practical. As such, it is recommended to remove SHA-1 from products as soon as possible and instead use SHA-2 or SHA-3. Replacing SHA-1 is urgent where it is used for digital signatures.

AutoUpgrade and Secure External Password Store Enables Complete Automation

Many commands that involve Transparent Data Encryption (TDE) require inputting the TDE keystore password. Also, when you use AutoUpgrade on an encrypted Oracle Database, you probably need to store the TDE keystore password using the -load_password option.

Manually inputting passwords is unsuitable for an environment with a high degree of automation. In Oracle Database, this is solved by the Secure External Password Store (SEPS) (as of Oracle Database 12.2). In a previous blog post, I showed how you could use it to your advantage.

This blog post is about how to use AutoUpgrade together with SEPS.

Good News

As of version 22.2 AutoUpgrade fully supports Oracle Database with a Secure External Password Store. If SEPS contains the TDE keystore password, you don’t have to input the password using the -load_password option.

If you are using AutoUpgrade in some sort of automation (like Ansible), you should look into SEPS. AutoUpgrade can use SEPS when the TDE keystore password is needed, and you can upgrade and convert completely unattended.
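For reference, this is roughly how you store the TDE keystore password in SEPS on Oracle Database 19c; a sketch assuming WALLET_ROOT is set, and the keystore path (a tde_seps subdirectory of WALLET_ROOT) is hypothetical:

SQL> administer key management
	add secret 'oracle_4U' for client 'TDE_WALLET'
	to local auto_login keystore '/u01/app/oracle/admin/DB12/wallet/tde_seps';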

How To

The Oracle Database DB12 is encrypted and on Oracle Database 12.2. I want to upgrade, convert, and plug it into CDB2 on Oracle Database 19c.

  1. Ensure that your Oracle Databases DB12 and CDB2 are properly configured with a Secure External Password Store and it contains the TDE keystore password.
  2. Ensure that AutoUpgrade is version 22.2 or higher:
    $ java -jar autoupgrade.jar -version
    
  3. Create your AutoUpgrade config file and set global.keystore as specified in a previous blog post:
    global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
    global.keystore=/u01/app/oracle/admin/autoupgrade/keystore
    	
    upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/DB12
    upg1.source_home=/u01/app/oracle/product/12.2.0.1
    upg1.target_home=/u01/app/oracle/product/19
    upg1.sid=DB12
    upg1.target_cdb=CDB2
    
  4. Analyze:
    $ java -jar autoupgrade.jar -config DB12.cfg -mode analyze
    
  5. The summary report tells me everything is fine; just go ahead. I don’t need to input the TDE keystore passwords:
    [Stage Name]    PRECHECKS
    [Status]        SUCCESS
    [Start Time]    2022-03-30 10:28:38
    [Duration]       
    [Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/DB12/DB12/100/prechecks
    [Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/DB12/DB12/100/prechecks/db12_preupgrade.log
    				Check passed and no manual intervention needed
    
  6. Optionally, I can use the -load_password prompt to check the TDE configuration:
    $ java -jar autoupgrade.jar -config DB12.cfg -load_password
    	
    TDE> list
    +----------+---------------+------------------+-----------+------------------+
    |ORACLE_SID|Action Required|      TDE Password|SEPS Status|Active Wallet Type|
    +----------+---------------+------------------+-----------+------------------+
    |      CDB2|               |No password loaded|   Verified|               Any|
    |      DB12|               |No password loaded|    Unknown|        Auto-login|
    +----------+---------------+------------------+-----------+------------------+
    
    Action Required is empty, which verifies that I don't need to input the TDE keystore passwords. AutoUpgrade checked SEPS in CDB2 and found that it works. It is impossible to check SEPS in DB12 because it is on Oracle Database 12.2; the functionality was added in Oracle Database 19c.
  7. Start the upgrade and conversion:
    $ java -jar autoupgrade.jar -config DB12.cfg -mode deploy
    
  8. That’s it!

What Happens

  • You must configure an AutoUpgrade keystore. Even though you are not loading any TDE keystore passwords, it is still required. Some commands require a passphrase (or transport secret), and AutoUpgrade must store it in its keystore.
  • Whenever a database is using SEPS, and a TDE keystore password is required, AutoUpgrade will use the IDENTIFIED BY EXTERNAL STORE clause.
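For example, this is how a keystore is opened when the password comes from SEPS:

SQL> administer key management set keystore open identified by external store;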

What Else

You can mix and match. If only one database uses SEPS, you can input the other TDE keystore password manually using the -load_password option. AutoUpgrade will check your database configuration and ask only for the needed TDE keystore passwords.


Upgrading an Encrypted Non-CDB and Converting To PDB

Converting an encrypted non-CDB to a PDB requires the keystore passwords of the non-CDB and the target CDB. You can do it with AutoUpgrade, and you can upgrade in the same operation.

How To

The Oracle Database DB12 is encrypted and on Oracle Database 12.2. I want to upgrade, convert, and plug it into CDB2 on Oracle Database 19c.

  1. Ensure that AutoUpgrade is version 22.2 or higher:
    $ java -jar autoupgrade.jar -version
    
  2. Create your AutoUpgrade config file and set global.keystore as specified in a previous blog post:
    global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
    global.keystore=/u01/app/oracle/admin/autoupgrade/keystore
    	
    upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/DB12
    upg1.source_home=/u01/app/oracle/product/12.2.0.1
    upg1.target_home=/u01/app/oracle/product/19
    upg1.sid=DB12
    upg1.target_cdb=CDB2
    
  3. Analyze:
    $ java -jar autoupgrade.jar -config DB12.cfg -mode analyze
    
  4. The summary report warns me that TDE keystore passwords are needed:
    [Stage Name]    PRECHECKS
    [Status]        FAILURE
    [Start Time]    2022-03-29 12:42:32
    [Duration]       
    [Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/DB12/DB12/100/prechecks
    [Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/DB12/DB12/100/prechecks/db12_preupgrade.log
                Check failed for DB12, manual intervention needed for the below checks
                [TDE_PASSWORDS_REQUIRED]
    
    There are more details in the preupgrade log file:
    ==============
    BEFORE UPGRADE
    ==============
    
    	REQUIRED ACTIONS
    	================
    		1.  Perform the specified action ...
    		ORACLE_SID                      Action Required
    		------------------------------  ------------------------
    		CDB2                            Add TDE password
    		DB12                            Add TDE password
    
  5. Add the TDE keystore passwords into the AutoUpgrade keystore:
    $ java -jar autoupgrade.jar -config DB12.cfg -load_password
    
    TDE> add DB12
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    TDE> add CDB2
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    
  6. Save the passwords into the AutoUpgrade keystore. I choose to create an auto-login keystore:
    TDE> save
    Convert the keystore to auto-login [YES|NO] ? YES
    TDE> exit
    
  7. Re-analyze the database:
    $ java -jar autoupgrade.jar -config DB12.cfg -mode analyze
    
  8. If AutoUpgrade does not report any other problems, start the upgrade and conversion. Since I chose to create an AutoUpgrade auto-login keystore, I don’t have to provide the password when AutoUpgrade starts:
    $ java -jar autoupgrade.jar -config DB12.cfg -mode deploy
    
  9. That’s it!

What Happens

  • First, AutoUpgrade upgrades the database to Oracle Database 19c. This is a regular non-CDB database upgrade. It requires an auto-login keystore.
  • After the upgrade, AutoUpgrade exports the encryption keys into a file. To avoid writing the encryption keys in clear text in the export file, the database needs a passphrase (transport secret) to encrypt the encryption key. AutoUpgrade generates a passphrase and stores it in the AutoUpgrade keystore. In addition, the database needs the keystore password. This is the WITH SECRET and IDENTIFIED BY clauses of the ADMINISTER KEY MANAGEMENT EXPORT KEYS statement.
  • The encryption keys are imported into CDB$ROOT of the target CDB. To load the encryption keys from the export file, the database needs the passphrase and the keystore password (of the target CDB). AutoUpgrade gets both from the AutoUpgrade keystore. This is the WITH SECRET and IDENTIFIED BY clauses of the ADMINISTER KEY MANAGEMENT IMPORT KEYS statement.
  • The pluggable database is created from the manifest file using the CREATE PLUGGABLE DATABASE statement.
  • AutoUpgrade executes the ADMINISTER KEY MANAGEMENT IMPORT KEYS statement again – this time while connected to the PDB itself.
  • Finally, AutoUpgrade completes the PDB conversion by running noncdb_to_pdb.sql.

The encryption keys are imported two times – first in CDB$ROOT and then in the PDB itself. AutoUpgrade must import into CDB$ROOT if the PDB has any of the system tablespaces (SYSTEM or SYSAUX) or the undo tablespace encrypted.
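For reference, the export and import statements look roughly like this; the file name, passphrase, and keystore passwords are hypothetical:

SQL> administer key management
	export encryption keys with secret "passphrase"
	to '/tmp/db12_keys.exp'
	force keystore identified by "keystore-password";

SQL> administer key management
	import encryption keys with secret "passphrase"
	from '/tmp/db12_keys.exp'
	force keystore identified by "keystore-password"
	with backup;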

Fallback

AutoUpgrade fallback functionality also works for an upgrade and PDB conversion. But there are a few requirements:

  • A target_pdb_copy_option must be specified.
  • The database must be Enterprise Edition.
  • A guaranteed restore point must be created (default behavior).

It is not possible to revert the PDB conversion. To fall back, the data files must be copied as part of the PDB conversion. You specify that the data files are copied by using the config file parameter target_pdb_copy_option. As an example, if I want to copy the data files during plug-in and generate OMF names, I use this parameter:

upg1.target_pdb_copy_option=file_name_convert=NONE

AutoUpgrade automatically creates a guaranteed restore point at the beginning of an upgrade. To revert the upgrade, AutoUpgrade issues a FLASHBACK DATABASE statement. The config file parameter restoration governs the creation of the restore point. The default value is YES, meaning AutoUpgrade creates a guaranteed restore point, and fallback is possible.
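The parameter goes into the config file; for example, to state the default behavior explicitly:

upg1.restoration=YES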

If all prerequisites are met, I can revert the entire operation and return the database to the original state (from 19c PDB back into a 12.2 non-CDB). 103 is the job id of the upgrade/PDB conversion:

$ java -jar autoupgrade.jar -config PDB1.cfg -restore -jobs 103


Upgrading an Encrypted PDB

An unplug-plug upgrade of an encrypted PDB requires the keystore password of the source and target CDB, and you can do it with AutoUpgrade.

How To

The pluggable database PDB1 is encrypted and is plugged into CDB1, which is Oracle Database 12.2. I want to upgrade the PDB to Oracle Database 19c by plugging it into CDB2.

  1. Ensure that AutoUpgrade is version 22.2 or higher:
    $ java -jar autoupgrade.jar -version
    
  2. Create your AutoUpgrade config file and set global.keystore as specified in a previous blog post:
    global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
    global.keystore=/u01/app/oracle/admin/autoupgrade/keystore
    
    upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/PDB1
    upg1.source_home=/u01/app/oracle/product/12.2.0.1
    upg1.target_home=/u01/app/oracle/product/19
    upg1.sid=CDB1
    upg1.target_cdb=CDB2
    upg1.pdbs=PDB1
    
  3. Analyze:
    $ java -jar autoupgrade.jar -config PDB1.cfg -mode analyze
    
  4. The summary report warns me that TDE keystore passwords are needed:
    [Stage Name]    PRECHECKS
    [Status]        FAILURE
    [Start Time]    2022-03-29 07:58:52
    [Duration]       
    [Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/PDB1/CDB1/100/prechecks
    [Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/PDB1/CDB1/100/prechecks/cdb1_preupgrade.log
    		Check failed for PDB1, manual intervention needed for the below checks
    		[TDE_PASSWORDS_REQUIRED]
    
    There are more details in the preupgrade log file:
    ==============
    BEFORE UPGRADE
    ==============
    
    	REQUIRED ACTIONS
    	================
    		1.  Perform the specified action ...
    		ORACLE_SID                      Action Required
    		------------------------------  ------------------------
    		CDB1                            Add TDE password
    		CDB2                            Add TDE password
    
  5. Add the TDE keystore passwords into the AutoUpgrade keystore:
    $ java -jar autoupgrade.jar -config PDB1.cfg -load_password
    
    TDE> add CDB1
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    TDE> add CDB2
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    
  6. Save the passwords into the AutoUpgrade keystore. I choose to create an auto-login keystore:
    TDE> save
    Convert the keystore to auto-login [YES|NO] ? YES
    TDE> exit
    
  7. Re-analyze the PDB:
    $ java -jar autoupgrade.jar -config PDB1.cfg -mode analyze
    
  8. If AutoUpgrade does not report any other problems, start the unplug-plug upgrade. Since I chose to create an AutoUpgrade auto-login keystore, I don’t have to provide the password when AutoUpgrade starts:
    $ java -jar autoupgrade.jar -config PDB1.cfg -mode deploy
    
  9. That’s it!

What Happens

  • When AutoUpgrade needs to unplug the encrypted PDB into a manifest file, the source CDB will need the TDE keystore password. AutoUpgrade can get it from its keystore. This is the IDENTIFIED BY clause of the ALTER PLUGGABLE DATABASE ... UNPLUG INTO statement.
  • The encryption keys of the PDB go into the manifest file. The database doesn’t want to write the encryption keys in clear text in the manifest file and asks for a passphrase that can encrypt the encryption keys. AutoUpgrade generates a passphrase and stores the passphrase in the AutoUpgrade keystore. This is the ENCRYPT USING clause of the ALTER PLUGGABLE DATABASE ... UNPLUG INTO statement.
  • When the PDB plugs into the target CDB, the target CDB will need the TDE keystore password. This is the IDENTIFIED BY clause of the CREATE PLUGGABLE DATABASE ... USING statement.
  • The database must get the encryption keys of the PDB from the manifest file. The encryption keys are encrypted using a passphrase. The database asks AutoUpgrade for the passphrase, which is stored in the AutoUpgrade keystore. This is the DECRYPT USING clause of the CREATE PLUGGABLE DATABASE ... USING statement.
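Pieced together from the clauses above, the statements look roughly like this. This is only a sketch; the passwords, passphrase, and manifest file name are hypothetical, and the exact clause order is in the SQL Language Reference:

SQL> alter pluggable database PDB1
	unplug into '/tmp/pdb1.xml'
	encrypt using "passphrase"
	keystore identified by "keystore-password";

SQL> create pluggable database PDB1
	using '/tmp/pdb1.xml'
	keystore identified by "keystore-password"
	decrypt using "passphrase";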

Fallback

AutoUpgrade fallback functionality also works on an encrypted PDB. When it comes to unplug-plug upgrades and fallback capability, it is a requirement that the data files were copied as part of the upgrade process.

In the above example, a fallback using AutoUpgrade would not be possible. Since I did not specify a target_pdb_copy_option, the data files were re-used. Other means of falling back to the original state are needed.

Had I specified a target_pdb_copy_option in my config file, a fallback would be possible. In the example below, I am specifying a copy option. file_name_convert=none means that the data files are copied and new OMF names are generated:

upg1.target_pdb_copy_option=file_name_convert=NONE

In this case, I can revert the unplug-plug upgrade and return to the original state using AutoUpgrade. 103 is the job id of the upgrade:

$ java -jar autoupgrade.jar -config PDB1.cfg -restore -jobs 103

Isolated Keystore Mode

In CDBs the default way of storing TDE encryption keys is in a united keystore. The CDB has one keystore and all PDBs store their encryption keys in that one keystore.

With Oracle Database 19.14, a new option became available: isolated keystore mode. The CDB still has a keystore that PDBs can use. But you can also configure each individual PDB to use its own keystore.

Currently, AutoUpgrade does not support isolated keystore mode. But we are working on it.


Upgrading an Encrypted Non-CDB or CDB

Upgrading a non-CDB or an entire CDB is straightforward with AutoUpgrade. There is only one requirement:

  • An auto-login keystore must be present.

Upgrade Non-CDB and CDB

The auto-login keystore enables the database to open the TDE keystore without a DBA manually entering the keystore password. During a database upgrade, the database will restart multiple times. The upgrade process embeds the restarts, and there is no way for a DBA to intervene halfway to enter the TDE keystore password. Hence, it is required to use an auto-login keystore.

You can query the database for the type of the TDE keystore:

SQL> select wallet_type from v$encryption_wallet;

AUTOLOGIN

It must be an AUTOLOGIN keystore or a LOCAL_AUTOLOGIN keystore. I like the local auto-login keystore because it adds an additional layer of security.

When a proper keystore is in place, you can start the upgrade.

Keystore Location and WALLET_ROOT

The Oracle Database must know where to find the TDE keystore. It looks in the following locations, in this order:

  • As of Oracle Database 19c, WALLET_ROOT initialization parameter.
  • ENCRYPTION_WALLET_LOCATION sqlnet.ora parameter.
  • $ORACLE_BASE/admin/DB_UNIQUE_NAME/wallet
  • $ORACLE_HOME/admin/DB_UNIQUE_NAME/wallet

Oracle recommends using the parameter WALLET_ROOT when your database is on Oracle Database 19c. The parameter was introduced in Oracle Database 19c, and all other methods have been deprecated.

It is easier to configure the TDE keystore using WALLET_ROOT than sqlnet.ora. AutoUpgrade can implement the changes needed to switch to the WALLET_ROOT parameter as part of an upgrade. I recommend doing that.

TNS_ADMIN

Often, sqlnet.ora defines the TDE keystore configuration. This means that the TNS_ADMIN location is important.

TNS_ADMIN defaults to $ORACLE_HOME/network/admin. But sometimes it is relocated, either via a profile (like .bashrc) or using srvctl setenv database. AutoUpgrade fully supports both methods.
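For example, to set TNS_ADMIN for a single database via Grid Infrastructure (the path is hypothetical):

$ srvctl setenv database -db DB12 -env "TNS_ADMIN=/u01/app/oracle/network/admin/DB12"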

But from time to time, there are issues with the TNS_ADMIN location. I saw it recently at a customer. The customer used a dedicated sqlnet.ora for each database, and the parameter ENCRYPTION_WALLET_LOCATION was unique in each of the sqlnet.ora files. They had issues with their profiles, and AutoUpgrade picked up the wrong sqlnet.ora. This caused AutoUpgrade to report issues with the TDE keystore during the analyze phase.

Luckily, there is functionality in AutoUpgrade to override the TNS_ADMIN location. You can put dedicated TNS_ADMIN parameters into the config file, and AutoUpgrade will set the TNS_ADMIN environment variable before executing any command. That effectively overrides any other TNS_ADMIN setting:

upg1.target_tns_admin_dir=/etc/oracle/keystores/DB12

Usually, I would not recommend using these parameters. In most cases, the correct TNS_ADMIN location is set, and all is good. Use them only when you encounter issues.


AutoUpgrade and Transparent Data Encryption (TDE)

It is now easier to upgrade and convert your encrypted Oracle Database. The latest version of AutoUpgrade adds much better support for Oracle Databases that are encrypted with Transparent Data Encryption (TDE).

You must ensure that you are using the latest version of AutoUpgrade. You can download it from My Oracle Support AutoUpgrade Tool (Doc ID 2485457.1). At the time of writing, the latest version of AutoUpgrade is 22.2:

$ java -jar autoupgrade.jar -version
build.version 22.2.220324

AutoUpgrade Keystore

Dealing with TDE also means dealing with sensitive information. AutoUpgrade must adequately protect the TDE keystore passwords. To do so, AutoUpgrade can have its own keystore to store sensitive information, i.e., TDE keystore passwords. Whenever a TDE keystore password is needed, e.g., during an unplug-plug upgrade of an encrypted PDB, AutoUpgrade can get the password from its keystore.

You need to tell AutoUpgrade where it can create the keystore. You do so in the config file:

global.keystore=/etc/oracle/keystores/autoupgrade/DB12

When you start to use the AutoUpgrade keystore, the following files are created in the directory:

$ pwd
/etc/oracle/keystores/autoupgrade/DB12

$ ll
-rw-------. 1 oracle dba 765 Mar 28 14:56 cwallet.sso
-rw-------. 1 oracle dba 720 Mar 28 14:56 ewallet.p12

It is similar to other keystores that Oracle Database uses. ewallet.p12 is the keystore, and cwallet.sso is an auto-login keystore used to open the real keystore. You don't have to create an auto-login keystore.

You should protect the AutoUpgrade keystore files like you protect any other Oracle Database keystore:

  • Apply restrictive file system permissions.
  • Audit access.
  • Back it up.

Using the Keystore

Create your AutoUpgrade config file and specify global.keystore as described above. Start an interactive prompt that allows you to add the necessary passwords:

$ java -jar autoupgrade.jar -config DB12.cfg -load_password

The first time you use the AutoUpgrade keystore, you must provide a password that protects the AutoUpgrade keystore:

Starting AutoUpgrade Password Loader - Type help for available options
Creating new keystore - Password required
Enter password:       
Enter password again: 
Keystore was successfully created

In the TDE console, the following commands are available:

  • add
  • delete
  • list
  • save
  • help
  • exit

The SID identifies the database. If you want to add a TDE password for the database DB12, use the following command:

TDE> add DB12
Enter your secret/Password:    
Re-enter your secret/Password: 
TDE> add CDB2
Enter your secret/Password:    
Re-enter your secret/Password: 

If you want to delete the TDE password for DB12:

TDE> delete DB12

Keystore Password is required prior to operation
Enter wallet password:   

When you save the passwords into the AutoUpgrade keystore, you must decide whether you want to have an auto-login keystore:

TDE> save
Convert the keystore to auto-login [YES|NO] ? 

I recommend using auto-login keystores. If you do not create an AutoUpgrade auto-login keystore, you will be prompted for the AutoUpgrade keystore password when you start AutoUpgrade. If you want to use AutoUpgrade in noconsole mode (-noconsole), then an auto-login keystore is required.
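For example, a fully unattended run, assuming the AutoUpgrade auto-login keystore is in place:

$ java -jar autoupgrade.jar -config DB12.cfg -mode deploy -noconsole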

I will show how to upgrade and convert encrypted databases in later blog posts.

Loss of AutoUpgrade Keystore

What happens if your AutoUpgrade keystore is lost? The answer is fairly simple: re-create the keystore and load all passwords into it using the -load_password command line option as described above.

Preupgrade Checks

We have added new preupgrade checks to the analyze phase in AutoUpgrade. These checks will help you to provide the needed passwords and ensure your TDE configuration meets certain standards:

  • auto_login_keystore_required
  • keystore_conflict
  • no_keystore_files
  • tde_passwords_required
  • wallet_root
  • tde_in_use
  • oracle_home_keystore

You can read more about these checks in the MOS note Database Preupgrade tool check list (Doc ID 2380601.1).


XTTS: Full Transportable Export/Import

In this blog post series, I use Full Transportable Export/Import (FTEX) to move the metadata during a cross-platform transportable tablespace migration (XTTS). The documentation states:

You can use the full transportable export/import feature to copy an entire database from one Oracle Database instance to another.

Requirements

A different blog post already covers the requirements for FTEX. Below is a supplement to that list:

  • The user performing the export and import must have the roles DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE, respectively. Don’t run the Data Pump jobs as SYS AS SYSDBA!
  • During export, the default tablespace of the user performing the export must not be one of the tablespaces being transported. In addition, the default tablespace of the user performing the export must be writable. Data Pump needs this to create the control table.
  • The target database (non-CDB) or PDB must not contain a tablespace of the same name as one of the tablespaces being transported. Often this is the case with the USERS tablespace. Either use Data Pump remap_tablespace (see the example after this list) or rename the tablespace (alter tablespace users rename to users2).
  • All tablespaces are transported. It is not possible to exclude a tablespace or a user from the operation.
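For example, to rename the USERS tablespace on the fly during a network import (tablespace names taken from the example above):

$ impdp ... transportable=always full=y network_link=... remap_tablespace=USERS:USERS2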

What Is Included?

Generally, you should count on everything being included, except SYS objects and the things specified in the next chapter. Below is a list of things that are included as well; these are examples from questions I have previously been asked.

  • If a user schema has tables in SYSTEM or SYSAUX tablespace, such tables are also transported. But they are not stored in the transported tablespaces. Instead, those tables are exported into the dump file using conventional export. Examples:
    SQL> --Exported into dump file
    SQL> create table app.my_tab1 (...) tablespace system;
    SQL> --Exported via transportable tablespace
    SQL> create table app.my_tab2 (...) tablespace users;
    
  • If you created any new tables as SYSTEM or any other internal schema, except SYS, those tables will also be transported. If such tables are in the SYSTEM or SYSAUX tablespace, then they are exported into the dump file. Examples:
    SQL> --Exported into dump file
    SQL> create table system.my_tab1 (...) tablespace system;
    SQL> --Exported via transportable tablespace
    SQL> create table system.my_tab2 (...) tablespace users;
    
    No need to emphasize that you should never create any objects in Oracle maintained schemas. But we all know it happens…
  • Public and private database links.
  • Private synonyms.
  • Profiles.
  • Directories including the privileges granted on them, although they are owned by SYS.
  • All schema level triggers (CREATE TRIGGER ... ON SCHEMA), including those on system events, except triggers owned by SYS.
  • All database level triggers (CREATE TRIGGER ... ON DATABASE) owned by an internal schema, except SYS.
  • SQL patches.
  • SQL plan baselines.
  • SQL profiles.

What Is Not Included?

The transport does not include any object owned by SYS. Here are some examples:

  • User-created tables in SYS schema are not transported at all. You must re-create such tables (but you should never create such tables in the first place).
    SQL> --Not moved, recreate manually
    SQL> create table sys.my_tab1 (...) tablespace system;
    SQL> --Not moved, recreate manually
    SQL> create table sys.my_tab2 (...) tablespace users;
    
  • Grants on tables or views owned by SYS, like DBA_USERS or v$datafile.
  • Any trigger owned by SYS.

In addition, the following is not included:

  • Public synonyms.
  • AWR data is not included. You can move such data using the script $ORACLE_HOME/rdbms/admin/awrextr.sql.

How Does It Work?

There are two keywords used to start a full transportable job: TRANSPORTABLE and FULL. If you want to start an FTEX import directly over a network link:

$ impdp ... transportable=always full=y network_link ...

If you want to use dump files:

$ expdp ... transportable=always full=y
$ impdp ... full=y
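A slightly fuller sketch of the dump file approach; the directory object, dump file name, and data file path are hypothetical:

$ expdp system directory=DATA_PUMP_DIR dumpfile=ftex.dmp \
	transportable=always full=y
$ impdp system directory=DATA_PUMP_DIR dumpfile=ftex.dmp full=y \
	transport_datafiles='/u02/oradata/CDB2/pdb1/users01.dbf'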

A Few Words of Advice

Practice, practice, practice

  • Start on a small database and work on your runbook.
  • Eventually, prove it works on a production-size database.

Automate

  • To ensure consistency. There are many steps, and it is easy to overlook a step or miss a detail.
  • To avoid human error. Humans make mistakes. Period!

Save logs

  • Data Pump
  • RMAN
  • Terminal output

Automate clean-up procedure

  • To repeat tests and effectively clean up the target environment.
  • In case of failure and rollback during production migration, you should know how to resume operations safely.

Shut down source database

  • Be sure to take the source database offline after the migration. Having users connect to the wrong database after a migration is a disaster.

Data Pump Import

  • Importing directly into the target database using the NETWORK_LINK option is recommended.

Timezone File Version

Check the timezone file version of your source and target database:

SQL> select * from v$timezone_file;

If they differ and the target database's timezone file version is higher than the source database's, Data Pump converts any TIMESTAMP WITH TIME ZONE (TSTZ) columns to the newer timezone conventions. The conversion happens automatically during import.

Since Data Pump must update data during import, it requires that the tablespaces can be turned READ WRITE. Thus, you can't use TRANSPORTABLE=KEEP_READ_ONLY if you have tables with TSTZ columns. Trying to do so results in:

ORA-39339: Table "SCHEMA"."TABLE" was skipped due to transportable import and TSTZ issues resulting from time zone version mismatch.
Source time zone version is ?? and target time zone version is ??.

If your target database has a lower timezone file version, you can't use FTEX. You must first upgrade the timezone file in the target database.
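One way to upgrade the timezone file is with the scripts that ship with recent releases; a sketch, so check the header notes in the scripts first:

SQL> @?/rdbms/admin/utltz_upg_check.sql
SQL> @?/rdbms/admin/utltz_upg_apply.sql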

TDE Tablespace Encryption

If the source database has one or more encrypted tablespaces, you must either:

  • Supply the keystore password on export using the Data Pump option ENCRYPTION_PASSWORD.
  • Specify ENCRYPTION_PWD_PROMPT=YES, and Data Pump will prompt for the keystore password. This approach is safer because the encryption password otherwise ends up in the shell history.
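For example, prompting for the password on export instead of putting it on the command line:

$ expdp ... transportable=always full=y encryption_pwd_prompt=yes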

You can read more about Full Mode and transportable tablespaces in the documentation.

You can only transport encrypted tablespaces if the source and target platform share the same endian format. For example, going from Windows to Linux is fine because they are both little-endian platforms. Going from AIX to Linux will not work; that's big to little endian. When a tablespace is transported to a platform of a different endian format, the data files must be converted. The conversion does not work on encrypted tablespaces. The only option is to decrypt the tablespace before transport.
