AutoUpgrade and Secure External Password Store Enables Complete Automation

Many commands that involve Transparent Data Encryption (TDE) require inputting the TDE keystore password. Also, when you use AutoUpgrade on an encrypted Oracle Database, you probably need to store the TDE keystore password using the -load_password option.

Manually inputting passwords is unsuitable for an environment with a high degree of automation. In Oracle Database, this is solved by the Secure External Password Store (SEPS), which supports TDE keystore passwords as of Oracle Database 12.2. In a previous blog post, I showed how you could use it to your advantage.
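
If you haven't configured SEPS for TDE before, here is a minimal, hedged sketch of what the configuration could look like on Oracle Database 19c with WALLET_ROOT set. The directory and password below are made up; on 12.2, you would point the EXTERNAL_KEYSTORE_CREDENTIAL_LOCATION parameter at the wallet location instead:

SQL> --Store the TDE keystore password in a wallet in <WALLET_ROOT>/tde_seps
SQL> administer key management
   add secret 'my-keystore-password' for client 'TDE_WALLET'
   to local auto_login keystore '/u01/app/oracle/admin/CDB2/wallet/tde_seps';

SQL> --Commands can then reference the external store instead of the password
SQL> administer key management set keystore open identified by external store;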

This blog post is about how to use AutoUpgrade together with SEPS.

Good News

As of version 22.2, AutoUpgrade fully supports Oracle Database with a Secure External Password Store. If SEPS contains the TDE keystore password, you don’t have to input the password using the -load_password option.

If you are using AutoUpgrade in some sort of automation (like Ansible), you should look into SEPS. AutoUpgrade can use SEPS when the TDE keystore password is needed, and you can upgrade and convert completely unattended.

How To

The Oracle Database DB12 is encrypted and on Oracle Database 12.2. I want to upgrade, convert, and plug it into CDB2 on Oracle Database 19c.

  1. Ensure that your Oracle Databases DB12 and CDB2 are properly configured with a Secure External Password Store and it contains the TDE keystore password.
  2. Ensure that AutoUpgrade is version 22.2 or higher:
    $ java -jar autoupgrade.jar -version
    
  3. Create your AutoUpgrade config file and set global.keystore as specified in a previous blog post:
    global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
    global.keystore=/u01/app/oracle/admin/autoupgrade/keystore
    	
    upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/DB12
    upg1.source_home=/u01/app/oracle/product/12.2.0.1
    upg1.target_home=/u01/app/oracle/product/19
    upg1.sid=DB12
    upg1.target_cdb=CDB2
    
  4. Analyze:
    $ java -jar autoupgrade.jar -config DB12.cfg -mode analyze
    
  5. The summary report tells me everything is fine; just go ahead. I don’t need to input the TDE keystore passwords:
    [Stage Name]    PRECHECKS
    [Status]        SUCCESS
    [Start Time]    2022-03-30 10:28:38
    [Duration]       
    [Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/DB12/DB12/100/prechecks
    [Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/DB12/DB12/100/prechecks/db12_preupgrade.log
    				Check passed and no manual intervention needed
    
  6. Optionally, I can use the -load_password prompt to check the TDE configuration:
    $ java -jar autoupgrade.jar -config DB12.cfg -load_password
    	
    TDE> list
    +----------+---------------+------------------+-----------+------------------+
    |ORACLE_SID|Action Required|      TDE Password|SEPS Status|Active Wallet Type|
    +----------+---------------+------------------+-----------+------------------+
    |      CDB2|               |No password loaded|   Verified|               Any|
    |      DB12|               |No password loaded|    Unknown|        Auto-login|
    +----------+---------------+------------------+-----------+------------------+
    
    The Action Required column is empty, which verifies that I don’t need to input the TDE keystore passwords. AutoUpgrade checked SEPS in CDB2 and found that it works. It can’t check SEPS in DB12, because that database is on Oracle Database 12.2; the functionality was added in Oracle Database 19c.
  7. Start the upgrade and conversion:
    $ java -jar autoupgrade.jar -config DB12.cfg -mode deploy
    
  8. That’s it!

What Happens

  • You must configure an AutoUpgrade keystore. Even though you are not loading any TDE keystore passwords, it is still required. Some commands require a passphrase (or transport secret), and AutoUpgrade must store those in its keystore.
  • Whenever a database is using SEPS, and a TDE keystore password is required, AutoUpgrade will use the IDENTIFIED BY EXTERNAL STORE clause.
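
For illustration, this is what the clause looks like in a plain ADMINISTER KEY MANAGEMENT statement – standard syntax, not AutoUpgrade-specific. Instead of receiving the password on the command line, the database reads it from SEPS:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE;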

What Else

You can mix and match. If only one database uses SEPS, you can input the other TDE keystore password manually using the -load_password option. AutoUpgrade will check your database configuration and ask only for the needed TDE keystore passwords.

Upgrading an Encrypted Non-CDB and Converting To PDB

Converting an encrypted non-CDB to a PDB requires the keystore passwords of the non-CDB and the target CDB. You can do it with AutoUpgrade, and you can upgrade in the same operation.

How To

The Oracle Database DB12 is encrypted and on Oracle Database 12.2. I want to upgrade, convert, and plug it into CDB2 on Oracle Database 19c.

  1. Ensure that AutoUpgrade is version 22.2 or higher:
    $ java -jar autoupgrade.jar -version
    
  2. Create your AutoUpgrade config file and set global.keystore as specified in a previous blog post:
    global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
    global.keystore=/u01/app/oracle/admin/autoupgrade/keystore
    	
    upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/DB12
    upg1.source_home=/u01/app/oracle/product/12.2.0.1
    upg1.target_home=/u01/app/oracle/product/19
    upg1.sid=DB12
    upg1.target_cdb=CDB2
    
  3. Analyze:
    $ java -jar autoupgrade.jar -config DB12.cfg -mode analyze
    
  4. The summary report warns me that TDE keystore passwords are needed:
    [Stage Name]    PRECHECKS
    [Status]        FAILURE
    [Start Time]    2022-03-29 12:42:32
    [Duration]       
    [Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/DB12/DB12/100/prechecks
    [Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/DB12/DB12/100/prechecks/db12_preupgrade.log
                Check failed for DB12, manual intervention needed for the below checks
                [TDE_PASSWORDS_REQUIRED]
    
    There are more details in the preupgrade log file:
    ==============
    BEFORE UPGRADE
    ==============
    
    	REQUIRED ACTIONS
    	================
    		1.  Perform the specified action ...
    		ORACLE_SID                      Action Required
    		------------------------------  ------------------------
    		CDB2                            Add TDE password
    		DB12                            Add TDE password
    
  5. Add the TDE keystore passwords into the AutoUpgrade keystore:
    $ java -jar autoupgrade.jar -config DB12.cfg -load_password
    
    TDE> add DB12
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    TDE> add CDB2
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    
  6. Save the passwords into the AutoUpgrade keystore. I choose to create an auto-login keystore:
    TDE> save
    Convert the keystore to auto-login [YES|NO] ? YES
    TDE> exit
    
  7. Re-analyze the database:
    $ java -jar autoupgrade.jar -config DB12.cfg -mode analyze
    
  8. If AutoUpgrade does not report any other problems, start the upgrade and conversion. Since I chose to create an AutoUpgrade auto-login keystore, I don’t have to provide the password when AutoUpgrade starts:
    $ java -jar autoupgrade.jar -config DB12.cfg -mode deploy
    
  9. That’s it!

What Happens

  • First, AutoUpgrade upgrades the database to Oracle Database 19c. This is a regular non-CDB database upgrade. It requires an auto-login keystore.
  • After the upgrade, AutoUpgrade exports the encryption keys into a file. To avoid writing the encryption keys in clear text in the export file, the database needs a passphrase (transport secret) to encrypt the encryption keys. AutoUpgrade generates a passphrase and stores it in the AutoUpgrade keystore. In addition, the database needs the keystore password. These are the WITH SECRET and IDENTIFIED BY clauses of the ADMINISTER KEY MANAGEMENT EXPORT KEYS statement.
  • The encryption keys are imported into CDB$ROOT of the target CDB. To load the encryption keys from the export file, the database needs the passphrase and the keystore password (of the target CDB). AutoUpgrade gets both from the AutoUpgrade keystore. These are the WITH SECRET and IDENTIFIED BY clauses of the ADMINISTER KEY MANAGEMENT IMPORT KEYS statement.
  • The pluggable database is created from the manifest file using the CREATE PLUGGABLE DATABASE statement.
  • AutoUpgrade executes the ADMINISTER KEY MANAGEMENT IMPORT KEYS statement again – this time while connected to the PDB itself.
  • Finally, AutoUpgrade completes the PDB conversion by running noncdb_to_pdb.sql.

The encryption keys are imported twice – first into CDB$ROOT and then into the PDB itself. AutoUpgrade must import into CDB$ROOT if the PDB has any of the system tablespaces (SYSTEM or SYSAUX) or the undo tablespace encrypted.
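
For illustration, here is a hedged sketch of those two statements, with a made-up file name and placeholder secrets – the real values are generated and stored by AutoUpgrade:

SQL> --In the upgraded non-CDB (DB12)
SQL> administer key management
   export encryption keys with secret "<generated-passphrase>"
   to '/tmp/db12_keys.exp'
   force keystore identified by "<DB12-keystore-password>";

SQL> --In CDB$ROOT of the target CDB (CDB2)
SQL> administer key management
   import encryption keys with secret "<generated-passphrase>"
   from '/tmp/db12_keys.exp'
   force keystore identified by "<CDB2-keystore-password>" with backup;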

Fallback

AutoUpgrade fallback functionality also works for an upgrade and PDB conversion. But there are a few requirements:

  • A target_pdb_copy_option must be specified.
  • The database must be Enterprise Edition.
  • A guaranteed restore point must be created (default behavior).

It is not possible to revert the PDB conversion itself. To fall back, the data files must be copied as part of the PDB conversion. You specify that the data files are copied using the config file parameter target_pdb_copy_option. As an example, if I want to copy the data files during plug-in and generate OMF names, I use this parameter:

upg1.target_pdb_copy_option=file_name_convert=NONE

AutoUpgrade automatically creates a guaranteed restore point at the beginning of an upgrade and will issue a FLASHBACK DATABASE statement to revert the upgrade. The parameter restoration governs the creation of the restore point. The default value is YES, meaning AutoUpgrade creates a guaranteed restore point, and fallback is possible.
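
For completeness, this is how the parameter would look if you set it explicitly in the config file (yes is already the default; on Standard Edition, where Flashback Database is not available, you would have to set it to no and rely on other fallback methods):

upg1.restoration=yes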

If all prerequisites are met, I can revert the entire operation and return the database to the original state (from 19c PDB back into a 12.2 non-CDB). 103 is the job id of the upgrade/PDB conversion:

$ java -jar autoupgrade.jar -config PDB1.cfg -restore -jobs 103

Upgrading an Encrypted PDB

An unplug-plug upgrade of an encrypted PDB requires the keystore password of the source and target CDB, and you can do it with AutoUpgrade.

How To

The pluggable database PDB1 is encrypted and is plugged into CDB1, which is Oracle Database 12.2. I want to upgrade the PDB to Oracle Database 19c by plugging it into CDB2.

  1. Ensure that AutoUpgrade is version 22.2 or higher:
    $ java -jar autoupgrade.jar -version
    
  2. Create your AutoUpgrade config file and set global.keystore as specified in a previous blog post:
    global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
    global.keystore=/u01/app/oracle/admin/autoupgrade/keystore
    
    upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/PDB1
    upg1.source_home=/u01/app/oracle/product/12.2.0.1
    upg1.target_home=/u01/app/oracle/product/19
    upg1.sid=CDB1
    upg1.target_cdb=CDB2
    upg1.pdbs=PDB1
    
  3. Analyze:
    $ java -jar autoupgrade.jar -config PDB1.cfg -mode analyze
    
  4. The summary report warns me that TDE keystore passwords are needed:
    [Stage Name]    PRECHECKS
    [Status]        FAILURE
    [Start Time]    2022-03-29 07:58:52
    [Duration]       
    [Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/PDB1/CDB1/100/prechecks
    [Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/PDB1/CDB1/100/prechecks/cdb1_preupgrade.log
    		Check failed for PDB1, manual intervention needed for the below checks
    		[TDE_PASSWORDS_REQUIRED]
    
    There are more details in the preupgrade log file:
    ==============
    BEFORE UPGRADE
    ==============
    
    	REQUIRED ACTIONS
    	================
    		1.  Perform the specified action ...
    		ORACLE_SID                      Action Required
    		------------------------------  ------------------------
    		CDB1                            Add TDE password
    		CDB2                            Add TDE password
    
  5. Add the TDE keystore passwords into the AutoUpgrade keystore:
    $ java -jar autoupgrade.jar -config PDB1.cfg -load_password
    
    TDE> add CDB1
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    TDE> add CDB2
    Enter your secret/Password:    
    Re-enter your secret/Password: 
    
  6. Save the passwords into the AutoUpgrade keystore. I choose to create an auto-login keystore:
    TDE> save
    Convert the keystore to auto-login [YES|NO] ? YES
    TDE> exit
    
  7. Re-analyze the PDB:
    $ java -jar autoupgrade.jar -config PDB1.cfg -mode analyze
    
  8. If AutoUpgrade does not report any other problems, start the unplug-plug upgrade. Since I chose to create an AutoUpgrade auto-login keystore, I don’t have to provide the password when AutoUpgrade starts:
    $ java -jar autoupgrade.jar -config PDB1.cfg -mode deploy
    
  9. That’s it!

What Happens

  • When AutoUpgrade needs to unplug the encrypted PDB into a manifest file, the source CDB will need the TDE keystore password. AutoUpgrade can get it from its keystore. This is the IDENTIFIED BY clause of the ALTER PLUGGABLE DATABASE ... UNPLUG INTO statement.
  • The encryption keys of the PDB go into the manifest file. The database doesn’t want to write the encryption keys in clear text in the manifest file and asks for a passphrase that can encrypt the encryption keys. AutoUpgrade generates a passphrase and stores the passphrase in the AutoUpgrade keystore. This is the ENCRYPT USING clause of the ALTER PLUGGABLE DATABASE ... UNPLUG INTO statement.
  • When the PDB plugs into the target CDB, the target CDB will need the TDE keystore password. This is the IDENTIFIED BY clause of the CREATE PLUGGABLE DATABASE ... USING statement.
  • The database must get the encryption keys of the PDB from the manifest files. The encryption keys are encrypted using a passphrase. The database asks AutoUpgrade about the passphrase which is stored in the AutoUpgrade keystore. This is the DECRYPT USING clause of the CREATE PLUGGABLE DATABASE ... USING statement.
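
Pieced together, the statements look roughly like this sketch – the clause order, file name, and placeholder secrets are illustrative, not taken from an AutoUpgrade trace:

CDB1 SQL> alter pluggable database PDB1
   unplug into '/tmp/pdb1.xml'
   encrypt using "<generated-passphrase>"
   keystore identified by "<CDB1-keystore-password>";

CDB2 SQL> create pluggable database PDB1
   using '/tmp/pdb1.xml'
   keystore identified by "<CDB2-keystore-password>"
   decrypt using "<generated-passphrase>";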

Fallback

AutoUpgrade fallback functionality also works on an encrypted PDB. When it comes to unplug-plug upgrades and fallback capability, it is a requirement that the data files were copied as part of the upgrade process.

In the above example, a fallback using AutoUpgrade would not be possible. Since I did not specify a target_pdb_copy_option, the data files were re-used. Other means of falling back to the original state are needed.

Had I specified a target_pdb_copy_option in my config file, a fallback would be possible. In the below example, I am specifying a copy option. file_name_convert=NONE means that the data files are copied and new OMF names are generated:

upg1.target_pdb_copy_option=file_name_convert=NONE

In this case, I can revert the unplug-plug upgrade and return to the original state using AutoUpgrade. 103 is the job id of the upgrade:

$ java -jar autoupgrade.jar -config PDB1.cfg -restore -jobs 103

Isolated Keystore Mode

In CDBs, the default way of storing TDE encryption keys is in a united keystore. The CDB has one keystore, and all PDBs store their encryption keys in that one keystore.

With Oracle Database 19.14, a new option became available: isolated keystore mode. The CDB still has a keystore that PDBs can use, but you can also configure each individual PDB to use its own keystore.
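
If you are unsure which mode your PDBs use, a quick check on Oracle Database 19c is the KEYSTORE_MODE column of V$ENCRYPTION_WALLET:

SQL> select con_id, wrl_type, keystore_mode from v$encryption_wallet;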

Currently, AutoUpgrade does not support isolated keystore mode. But we are working on it.

How To Upgrade An Encrypted Oracle Database and Convert to PDB

New functionality supersedes the functionality mentioned in this blog post. Please use the method described in Upgrading an Encrypted Non-CDB and Converting To PDB.

Using AutoUpgrade, you can upgrade your encrypted Oracle Database and convert it to a pluggable database. The process is not entirely automated, so you must handle the TDE encryption key manually.

A new parameter called skip_tde_key_import is introduced. Here is what the documentation says:

(Optional) The default is NO. You can use this option for non-CDB-to-PDB and unplug/plug operations. When set to YES, the upgrade is run, but import of the source database KeyStore into the target database is skipped, without raising an error. AutoUpgrade will leave the PDB open in upgrade mode, so that you can import the keys manually yourself. After you import the keys, you must then restart the database in normal mode.

In other words, AutoUpgrade does everything except the import of the TDE encryption key.

How To

This is what happens: AutoUpgrade upgrades the TDE-encrypted database to Oracle Database 19c and converts it to a PDB. AutoUpgrade handles everything except the TDE encryption key import – that part you handle yourself. My environment:

  1. Non-CDB called FTEX running on Oracle Database 11.2.0.4
  2. FTEX is encrypted with TDE and has an auto-login keystore
  3. CDB called CDB2 running on Oracle Database 19c
  4. CDB2 is prepared for TDE and has a keystore defined

First, I ensure that AutoUpgrade is version 21.2 or newer:

java -jar autoupgrade.jar -version

I create a config file for AutoUpgrade. Notice that I have specified that the database is plugged into CDB2 using the parameter target_cdb. Also, notice that I have specified the new parameter skip_tde_key_import and set it to YES:

upg1.source_home=/u01/app/oracle/product/11.2.0.4
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=FTEX
upg1.target_cdb=CDB2
upg1.skip_tde_key_import=YES

Then, I analyze my database (FTEX) for any issues:

java -jar autoupgrade.jar -config FTEX.cfg -mode analyze

No issues are detected, so I proceed with the upgrade:

java -jar autoupgrade.jar -config FTEX.cfg -mode deploy

After the upgrade and conversion to PDB, I log on to CDB2. I want to check the outcome. Has FTEX been upgraded and converted?

SQL> select name, open_mode, restricted from v$pdbs;

NAME      OPEN_MODE    RESTRICTED
PDB$SEED  READ ONLY    NO        
FTEX      READ WRITE   YES                

Open – but restricted. I check for plug-in violations:

SQL> select cause, type, message, status, action 
     from pdb_plug_in_violations 
     where name = 'FTEX' AND status not in ('RESOLVED');

CAUSE                TYPE       MESSAGE                                STATUS     ACTION
Wallet Key Needed    ERROR      PDB needs to import keys from source.  PENDING    Import keys from source.    

OK. This is expected. AutoUpgrade does not handle the import of the TDE encryption key – you have to do that yourself.

I start by merging the two keystores – the keystore of the old non-CDB database (FTEX) and the keystore of the CDB (CDB2). I merge the two keystores into the one of CDB2:

CDB$ROOT SQL> administer key management 
   merge keystore '/u01/app/oracle/admin/FTEX/wallet' identified by "oracle_4U" 
   into existing keystore '/u01/app/oracle/admin/CDB2/wallet/tde' identified by "oracle_4U" 
   with backup using 'merge_keystore';

Notice that I have specified the folders of the keystores and not the actual file names. I have also specified the keystore password for both keystores. And, finally, I tell the database to create a backup of the keystore before the merge (as a precaution – I recommend always doing that).

Now, I switch to the FTEX PDB. I need to configure FTEX to use the TDE encryption key that was merged in from the old keystore. Each key in the keystore has a key ID, and I tell the PDB to use the right one. I explain later on how to determine the key ID:

CDB$ROOT SQL> alter session set container=FTEX;

FTEX SQL> administer key management 
   use key 'Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' 
   force keystore identified by "oracle_4U" 
   with backup using 'use_key_ftex';

Now, the PDB knows the TDE encryption key and you can actually start to query data from an encrypted tablespace. However, there is still a plug-in violation saying the TDE encryption key has to be imported. The PDB insists that the encryption key is imported – rather than merged into the keystore. So I will export the key from the PDB and import it again:

FTEX SQL> administer key management 
   export keys with secret "secret-passphrase" 
   to '/etc/oracle/exported-keys-ftex' force keystore identified by "oracle_4U";
FTEX SQL> administer key management 
   import keys with secret "secret-passphrase" 
   from '/etc/oracle/exported-keys-ftex' force keystore identified by "oracle_4U" 
   with backup using 'import_key_ftex';
FTEX SQL> host rm /etc/oracle/exported-keys-ftex

The encryption keys are saved in an encrypted file on the OS. The file is encrypted using a passphrase (secret-passphrase), and you should pick a better one than I did. Finally – and importantly – I remove the file from the OS when I am done. I don’t want my encryption keys lying around in files all over the OS.

I restart the FTEX PDB:

FTEX SQL> alter pluggable database FTEX close immediate;
FTEX SQL> alter pluggable database FTEX open;

And the PDB is now open in READ WRITE mode and unrestricted:

SQL> select name, open_mode, restricted from v$pdbs;

NAME      OPEN_MODE    RESTRICTED
PDB$SEED  READ ONLY    NO        
FTEX      READ WRITE   NO

I check for plug-in violations:

SQL> select cause, type, message, status, action 
     from pdb_plug_in_violations 
     where name = 'FTEX' AND status not in ('RESOLVED');

No rows selected.

Job done! Encrypted database upgraded and converted to a PDB.

Determine Key ID

Use the orapki utility to determine the key ID of the TDE encryption key. Specify the location of the old non-CDB keystore:

orapki wallet display -wallet /u01/app/oracle/admin/FTEX/wallet

Oracle PKI Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Requested Certificates: 
User Certificates:
Oracle Secret Store entries: 
ORACLE.SECURITY.DB.ENCRYPTION.Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY
ORACLE.SECURITY.TS.ENCRYPTION.BUCsXzQga4wfJ8gNSuptAPMCAwAAAAAAAAAAAAAAAAAAAAAAAAAA
Trusted Certificates: 
Subject:        OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Secure Server Certification Authority,O=RSA Data Security\, Inc.,C=US
Subject:        CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US

The TDE encryption key entry starts with ORACLE.SECURITY.DB.ENCRYPTION and doesn’t end in MASTERKEY. In the above example, the key ID is Ad8l2SYuf0/dv+6Au9tRmAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.

If your non-CDB is Oracle Database 12c or later you can also get the key ID using a view:

SQL> select key_id from v$encryption_keys where key_use='TDE';

Why Do I Need to Export and Import?

Above you saw that I had to export and import the encryption keys, even after merging the keystores. You might ask: Why is that even needed? Why is the ADMINISTER KEY MANAGEMENT MERGE KEYSTORE command not enough? That’s a very good question. I have reached out to our security people to get their take on it. It sounds like a bug to me.

The Future

The above solution makes it easier to handle encrypted databases. But we are not satisfied yet. A future version of AutoUpgrade will have even better support for encrypted databases, and everything will be automated.

The biggest issue for us is to find a safe way to handle the TDE keystore password. We need a safe way of getting the password from you and storing it in memory until we actually need it.

We are just about to start testing the very first version of AutoUpgrade with proper TDE support. If you are interested in becoming a beta tester, reach out to me to work something out.

Conclusion

Converting encrypted databases to PDB is now possible in AutoUpgrade. AutoUpgrade mainly handles the process, but you have to manage the TDE encryption keys yourself.

New functionality supersedes the functionality mentioned in this blog post. Please use the method described in Upgrading an Encrypted Non-CDB and Converting To PDB.

How to Upgrade a Single PDB

AutoUpgrade now supports unplug-plug upgrades. You unplug a PDB from a lower release CDB, and you plug it into a higher release CDB. After plug-in, the PDB is upgraded, and eventually it can be opened in normal, READ WRITE mode.

When it comes to upgrading in the multitenant world, I am a big fan of unplug-plug upgrades. The concept comes with a number of benefits:

  • It is much faster to upgrade an individual PDB using unplug-plug compared to upgrading a CDB with just one PDB in it. When you do an unplug-plug upgrade, the database just needs to upgrade the PDB. Compare that to a CDB, which first upgrades CDB$ROOT, and then PDB$SEED and any user PDBs.
  • You don’t have to arrange downtime for all the PDBs in the CDB. Downtime is just needed for the PDB that you will upgrade.
  • Combine it with refreshable PDBs and you can still have a really good fallback option. You can check out a previous blog post to see how you can use refreshable PDBs.

AutoUpgrade and Unplug-plug Upgrade

Starting from version 21, AutoUpgrade can now perform unplug-plug upgrades. A newer version of AutoUpgrade can upgrade to older database releases as well, so don’t worry if the AutoUpgrade version doesn’t match the Oracle Database release that you are upgrading to.

There are some requirements that must be met in order to perform unplug-plug upgrades, so I suggest that you take a look at the documentation.

You have to create the target CDB yourself. It is by design that AutoUpgrade doesn’t do this for you. First, creating a CDB requires a lot of information, and it can be done in many different ways (ASM? Which components? RAC?). You would need a very long config file to supply all that information. Also, it takes time to create a CDB, and if AutoUpgrade had to do that inside the maintenance window, the window would be prolonged considerably.

During unplug-plug upgrades, AutoUpgrade also allows you to change the names of the PDBs, and you can decide whether you want to reuse the unplugged data files or take a copy.

How to

Imagine the following AutoUpgrade config file:

upg1.sid=CDB1
upg1.target_cdb=CDB2
upg1.pdbs=hr,logistics
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.target_pdb_name.hr=people

AutoUpgrade will unplug PDBs hr and logistics from CDB1 and plug them into CDB2. In addition, it will change the name of hr to people when it is plugged into CDB2. Finally, you must specify the Oracle Home of the two CDBs, so AutoUpgrade can set the environment correctly and connect to the databases.

If you use the lsj command to monitor the progress, it actually does look like you are only upgrading one of the PDBs:

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+------------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME| UPDATED|           MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+------------------+
| 100|   CDB1|DBUPGRADE|EXECUTING|RUNNING|20/12/22 15:25|15:29:03|13%Upgraded PEOPLE|
+----+-------+---------+---------+-------+--------------+--------+------------------+
Total jobs 1

But if you look into the details with status -job 100, you can see that both PDBs are upgraded in parallel:

upg> status -job 100

... (removed a lot of information)

Details:
[Upgrading] is [0%] completed for [cdb1-people] 
                 +---------+-------------+
                 |CONTAINER|   PERCENTAGE|
                 +---------+-------------+
                 |   PEOPLE|UPGRADE [13%]|
                 |LOGISTICS|UPGRADE [13%]|
                 +---------+-------------+

When the upgrade completes, the PDBs are ready to be used. I suggest that you verify that the databases are open in READ WRITE mode and not in restricted mode. Finally, save the state, so the PDBs start automatically together with the CDB:

SQL> select name, open_mode, restricted from v$pdbs where name in ('PEOPLE', 'LOGISTICS');
SQL> --Verify open_mode=read write and restricted=no
SQL> alter pluggable database people save state;
SQL> alter pluggable database logistics save state;

Caution

With unplug-plug upgrades you can’t use Flashback Database as your fallback plan. It doesn’t work across the plug-in operation. You either have to:

  • Instruct AutoUpgrade to copy the unplugged data files before it plugs them into the higher release CDB. That way, you still have the old unplugged data files and can just re-create the PDB in the lower release CDB. But you will have extra downtime because you need to copy the data files.
  • Use Refreshable PDBs to build a copy of your PDB in the higher release, target CDB. When you want to do the upgrade, perform the last refresh and upgrade the refreshable PDB (see the sketch below). You can check out a previous blog post to see how you can use refreshable PDBs.

Both of the above options require additional disk space to hold a copy of the database.

Of course, you can also use your regular backups as fallback.
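
Here is that hedged sketch of the refreshable PDB approach, assuming a database link source_cdb_link pointing at the source CDB and a common user with the required privileges (both names are made up):

CDB2 SQL> create pluggable database PDB1_COPY from PDB1@source_cdb_link refresh mode manual;

CDB2 SQL> --At upgrade time: final refresh, stop refreshing, then open and upgrade
CDB2 SQL> alter pluggable database PDB1_COPY refresh;
CDB2 SQL> alter pluggable database PDB1_COPY refresh mode none;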

What If

Your Target CDB Has a Standby Database?

For now, don’t use AutoUpgrade to make unplug-plug upgrades if the target CDB has standby databases. A plug-in operation with a standby database is a tricky maneuver, at least when you want to re-use the data files. We are still trying to figure out how to implement it in AutoUpgrade.

Having said that, it is absolutely doable. You can read more about it in the relevant MOS notes.

You Are Using TDE Tablespace Encryption?

For now, don’t use AutoUpgrade to perform unplug-plug upgrades if any tablespace in the PDB is encrypted with TDE Tablespace Encryption. We are working on making AutoUpgrade capable of better interacting with the TDE keystore, so keep an eye out for coming versions.

If TDE Tablespace Encryption is enabled in the target CDB, you can still use AutoUpgrade. The PDB will be plugged in as an unencrypted PDB.

Conclusion

Doing unplug-plug upgrades is now supported by AutoUpgrade as of version 21. It includes useful features for renaming PDBs and using copies of unplugged data files.

There is a video on YouTube that shows the procedure. And while you are there, I suggest that you subscribe to our channel.

New Version of AutoUpgrade

Our developers just published a new version of AutoUpgrade. Head over to My Oracle Support and download version 21.1.1.

The version says 21, but it can still upgrade your databases to lower releases. So, you should use this new version to upgrade to Oracle Database 19c as well.

What’s New

A total of 48 enhancements and bug fixes made it into the new release. It has been 35 working days since the last release (and that includes Thanksgiving week), which means that the team has put in – on average – close to 1.5 changes every single working day. I find that quite impressive. And it really supports our message that you should always use the latest version of AutoUpgrade.

In My Oracle Support note 2485457.1 you can find a complete change log. You can also visit the 21c New Feature Guide for additional information.

Unplug/plug Upgrade

The first thing that I would like to highlight is unplug/plug upgrade. Instead of upgrading an entire CDB, you can now choose to unplug one or more PDBs, plug them into a higher release CDB, and upgrade the individual PDBs.

How do you specify to upgrade only one PDB? Use target_cdb and pdbs:

upg1.sid=CDB1
upg1.pdbs=PDB3
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.target_cdb=CDB2

When doing unplug/plug upgrades, you must be aware of the COMPATIBLE setting in the higher release CDB. If COMPATIBLE is higher, it will silently and automatically be raised by the CDB when the PDB plugs in. This means that you lose the capability of doing downgrades and using Flashback Database.
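
It is worth checking the setting in both CDBs before the upgrade, for example:

CDB1 SQL> select value from v$parameter where name = 'compatible';
CDB2 SQL> select value from v$parameter where name = 'compatible';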

You can read more about unplug/plug upgrades in the documentation.

AutoUpgrade and Oracle Data Guard

In addition, AutoUpgrade now detects that you are upgrading a primary database in a Data Guard setup. In that case, it automatically defers redo log transport to the standby databases and disables the Data Guard broker. After the upgrade, the broker is automatically restarted, but you need to manually reconfigure the standby databases and eventually re-enable redo log transport.

You might ask: Why doesn’t AutoUpgrade handle it all for me? This is to preserve your fallback capabilities. Even after a successful upgrade, you might still want to revert to the old version. Let’s say your testing finds a critical issue. You might still have a restore point on the primary database, but what if – for some reason – flashing back fails? Then you still have your standby database, which was disabled right before the upgrade.

You can read more about AutoUpgrade and Data Guard in the documentation.

More

A few hints for the interested reader to explore. Have a look at the config file option catctl_options. You can use it to control the level of parallelism for a specific upgrade.

Also, if you don’t want AutoUpgrade to handle your network files (e.g., sqlnet.ora, tnsnames.ora), you can disable that using manage_network_files.
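
As a hedged sketch of how the two options go into the config file – check the documentation for the exact values your AutoUpgrade version accepts:

upg1.catctl_options=-n 8
upg1.manage_network_files=IGNORE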

You can read more about those options in the documentation.

Conclusion

Get the latest version of AutoUpgrade and benefit from the many new features. Keep an eye out for additional blog posts that will dig deeper into the new features.

Upgrade & Plug In: With ASM, Data Guard, TDE and no Keystore Password

I was helping a customer the other day together with Mike. They were upgrading from 18c to 19c and had to convert the database to a PDB as well. At first glance, it seemed pretty straightforward, but things got complicated because:

  • They have standby databases and want the Data Guard setup to survive the plug-in operation.
  • They are using ASM.
  • They are using TDE Tablespace Encryption and have also encrypted their SYSTEM and SYSAUX tablespaces.
  • The DBA that will carry out the upgrade and plug-in is not allowed to have the TDE Keystore password. They have separation of duties, so only the security admins have the keystore password.

Can you do that? Yes, you can! Let me tell you how.

Upgrade

First, upgrade the database. You can easily maintain the Data Guard setup during an upgrade; I wrote a blog post about it a little while ago. In addition, to upgrade a database with encrypted tablespaces, you don’t need the keystore password. You must configure the database to use an auto-login keystore, and that’s it. If you are concerned about the use of an auto-login keystore, you can simply remove it again after the upgrade.

External Store for a Keystore Password

The plug-in operation will require the keystore password, but the DBA doesn’t know it – so we need to find a solution for that. The solution is to store the keystore password in an external store. I also wrote a blog post about that a while ago. When you have it configured, you can exchange commands that require a keystore password, like:

SQL> ADMINISTER KEY MANAGEMENT ... KEYSTORE IDENTIFIED BY "S3c3tPassw0rd";

With this:

SQL> ADMINISTER KEY MANAGEMENT ... KEYSTORE IDENTIFIED BY EXTERNAL STORE;

The database will get the keystore password from an external store, which is basically a file in the file system encrypted with a password that only the database knows.

The security admins would need to do this in the CDB that will receive the non-CDB. They can do it in advance, so they can relax while the DBA carries out the operation in a maintenance window. If the TDE keystore is already configured using the WALLET_ROOT parameter, you can use the feature right away. Otherwise, you need a database restart to configure it.

Like with the auto-login keystore, if you are concerned about the security, you can simply disable it again after the operation.

Plug In

Now things get complicated. When you plug in your non-CDB, the manifest file contains information on where the data files are located – but only on the primary database. This is an extract of a manifest file (the one you create with DBMS_PDB.DESCRIBE):

<PDB>
  ...
  <tablespace>
    <name>SYSTEM</name>
    ...
    <file>
      <path>+DATA/SALES1/DATAFILE/system.311.1058127529</path>

After plug-in, the CDB can start to use the data files right away. It reads from the manifest file where the data files are located. But there is no information on where the files are located on the standby database. To overcome this, you must create aliases in the ASM instance on the standby host. The aliases point back to the original data files (used by the standby database). So, when the plug-in happens and redo starts to flow to the standby database, it knows which data files to recover. If you are storing data files in a regular file system, you could use soft links to serve the same purpose.
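
As a purely hypothetical sketch – all paths invented – an alias on the standby ASM instance could be created with the mkalias command in ASMCMD, pointing the name the plugged-in PDB will reference at the existing standby data file. Notice the periods replaced by underscores in the alias name; more on that below:

ASMCMD> mkalias +DATA/SALES_STBY/DATAFILE/system.311.1058127529 +DATA/SALES1/DATAFILE/system_311_1058127529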

The procedure is already very well described in My Oracle Support, so I won’t repeat it here. But those articles don’t consider the situation where your SYSTEM and/or SYSAUX tablespace is encrypted.

If that is the case, you must import your encryption keys into CDB$ROOT before you execute the CREATE PLUGGABLE DATABASE command. In Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration (Doc ID 2273304.1) it should happen right before step 17.2.2:

SQL> alter session set container=CDB$ROOT;
SQL> administer key management import keys ... keystore identified by external store ... ;
SQL> --Continue with step 17.2.2
SQL> create pluggable database .... ;

Dots and Underscores

When you follow the MOS notes you might wonder why the dots in the ASM aliases are replaced with underscores. At first glance, I had no idea, but it worked. I later learned the following:

The format for an ASM filename is [filetype|tablespacename].[ASM file number].[file incarnation], but basically it is three pieces of name separated by periods. We can’t create any filename or alias that mimics that format. So the scripts change those periods to underscores (‘_’). That is allowed.

ORA-15032 and ORA-15046

Most likely, you get these errors because there is already an existing alias on the ASM file. Only one alias is allowed per file.

  1. You can verify that by using the ls command in ASMCMD. If it is an alias, the Name column will look similar to this: alias1 => +DATA/......
  2. Ensure the database is not using the alias. If it does, rename the file in the database.
  3. Remove the alias from ASM. It is strongly recommended to use rmalias. Although it is also possible with rm, I consider rmalias much safer (see the sketch below).
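
For illustration, with a hypothetical alias name, the check and the cleanup could look like this in ASMCMD:

ASMCMD> ls -l +DATA/SALES/DATAFILE
ASMCMD> rmalias +DATA/SALES/DATAFILE/alias1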

Conclusion

You can upgrade and convert your database to a PDB without compromising your standby database. In addition to that, you can configure your database in such a way that you don’t even need to type in the TDE keystore password.

Upgrading in the cloud – VM DB Systems – 11.2.0.4 to 19c (minimal downtime)

This blog post is a follow-up to a previous post. The procedure I described earlier was a simple approach that requires downtime while the entire database is moved from one VM DB System to another. If you have strict downtime requirements, you might not be able to use that approach. In this blog post, I will describe an alternative: using incremental backups to significantly lower the required downtime. Instead of doing a full backup when the database is down, my idea is to:

  • Take a level 0 backup while the source database is up and running
  • Restore the database on target system
  • These two steps take time – but I don’t care because the source database is still up
  • Take incremental backup on source database
  • Recover target database using incremental backup
  • Perform final incremental backup/recover after downtime has started

My source environment is the red environment. The DB System is called SRCHOST11 and it has an 11.2.0.4 database that is called SALES. Due to the restrictions of the VM DB System I have to move the database to a new DB System in order to upgrade it. I have created a brand-new target environment – the green environment – on the release that I want to target. I have named the DB System TGTHOST19 and it has a multitenant database called CDB1. When I am done, the target environment – CDB1 – will also contain a PDB named SALES. The SALES PDB will be the original 11.2.0.4 database that has been upgraded and converted.

For a short period of time, I need to spin up a second database instance on the target system. This second – or temporary – instance will be a duplicate of the source database (as a non-CDB), and I will upgrade it to the new release. Then I can plug in the database as a PDB in the precreated CDB database and get rid of the second/temporary instance. You will see how it works later in the blog post.

Backup Database

I need to exchange files between the source and the target systems, and I will use a File Storage service for that. Check out the documentation if you need help creating one – I created one already called upgsales and now I can mount it on my source system:

[opc@srchost11]$ sudo mkdir -p /mnt/upgsales
[opc@srchost11]$ sudo chmod 777 /mnt/upgsales/
[opc@srchost11]$ sudo mount x.x.x.x:/upgsales /mnt/upgsales

While the source database is still open and in use, I will start preparing the backup. First, the password file and wallet:

[oracle@srchost11]$ mkdir -p /mnt/upgsales/backup
[oracle@srchost11]$ cp /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME/ewallet.p12 /mnt/upgsales/
[oracle@srchost11]$ cp $ORACLE_HOME/dbs/orapw$ORACLE_SID /mnt/upgsales/orapw$ORACLE_SID

If you are really concerned about security, you can copy the wallet file directly to the target system – instead of via the File Storage service. The File Storage service itself is secured, but the fewer places you have a copy of the wallet – the better and the safer, I assume. Further, you can also encrypt traffic to and from the File Storage service.

Next, a PFile:

SALES SQL> CREATE PFILE='/mnt/upgsales/init.ora' FROM SPFILE;

And now I start a level 0 backup:

SALES RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/mnt/upgsales/backup/lvl0%U' PLUS ARCHIVELOG FORMAT '/mnt/upgsales/backup/arch%U' ;
SALES RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/mnt/upgsales/backup/controlfile';

Restore Database

On my target system, I need to access the File Storage service as well:

[opc@tgthost19]$ sudo mkdir -p /mnt/upgsales
[opc@tgthost19]$ sudo chmod 777 /mnt/upgsales/
[opc@tgthost19]$ sudo mount x.x.x.x:/upgsales /mnt/upgsales

Next, I will copy the password file and PFile into the target Oracle Home. I need that in order to start the temporary instance. Note, the name of the temporary instance will be SALES – the same as the source database SID:

[oracle@tgthost19]$ cp /mnt/upgsales/init.ora $ORACLE_HOME/dbs/initSALES.ora
[oracle@tgthost19]$ cp /mnt/upgsales/orapwSALES $ORACLE_HOME/dbs/orapwSALES

I also need to copy the wallet:

[oracle@tgthost19]$ mkdir -p /opt/oracle/dcs/commonstore/wallets/tde/SALES
[oracle@tgthost19]$ cp /mnt/upgsales/ewallet.p12 /opt/oracle/dcs/commonstore/wallets/tde/SALES/

And I need to create a directory for audit_file_dest:

[oracle@tgthost19]$ mkdir -p /u01/app/oracle/admin/SALES/adump

Now, I must edit the PFile:

[oracle@tgthost19]$ vi $ORACLE_HOME/dbs/initSALES.ora

And make the following changes:

  • Remove all the double-underscore parameters that contain the memory settings from the last restart. That could for instance be SALES.__db_cache_size.
  • Set audit_file_dest='/u01/app/oracle/admin/SALES/adump'
  • Set control_files='+RECO/sales/controlfile/current.256.1048859635'
  • Set SALES.sga_target=6G
  • Set SALES.pga_aggregate_target=2G
  • Set db_unique_name='SALES'

I don’t have an abundance of memory on this system, so I keep the memory settings low. Strictly speaking, you don’t have to change db_unique_name, but I am doing it so it will be easier to clean up afterwards.

While I work on the temporary instance, I must shut down the other database – the pre-created one that will eventually hold the PDB. Most likely, there is not enough memory on the system to support two databases:

[oracle@tgthost19]$ $ORACLE_HOME/bin/srvctl stop database -db $ORACLE_UNQNAME

Let’s start the temporary instance in NOMOUNT mode. Remember to set the environment:

[oracle@tgthost19]$ export ORACLE_UNQNAME=SALES
[oracle@tgthost19]$ export ORACLE_SID=SALES
[oracle@tgthost19]$ sqlplus / as sysdba

SALES SQL> STARTUP NOMOUNT

And finally, I can start the restore using RMAN. Once the database is mounted, I must open the keystore; otherwise, the database can’t perform recovery. Then, I can use the CATALOG command to find the backup pieces in my staging area, and finally do the restore:

[oracle@tgthost19]$ rman target /

SALES RMAN> RESTORE CONTROLFILE FROM '/mnt/upgsales/backup/controlfile';
SALES RMAN> ALTER DATABASE MOUNT;
SALES RMAN> sql 'ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN FORCE KEYSTORE IDENTIFIED BY <SALES-keystore-password>';
SALES RMAN> sql "ADMINISTER KEY MANAGEMENT CREATE LOCAL AUTO_LOGIN KEYSTORE FROM KEYSTORE ''/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME/'' IDENTIFIED BY <SALES-keystore-password>";
SALES RMAN> CATALOG START WITH '/mnt/upgsales/backup' NOPROMPT;
SALES RMAN> RESTORE DATABASE;

The SALES database is now restored on my target system. I will leave it there – unrecovered and in MOUNT mode so I can apply incremental backups later on.

Incremental Backup/Recover

I can do as many incremental backup/recover cycles as I want. But what matters is that I make one and restore it – as close to the start of the downtime window as possible. This will significantly reduce the time it takes to make the final incremental backup/restore later on.

On my source database, start an incremental backup:

[oracle@srchost11]$ rman target /

SALES RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/mnt/upgsales/backup/lvl1%U' PLUS ARCHIVELOG FORMAT '/mnt/upgsales/backup/arch%U';

Now, switch to the target system and recover using that backup. I use the CATALOG command to instruct RMAN to find the new backups on the shared file storage:

[oracle@tgthost19]$ rman target /

RMAN SALES> CATALOG START WITH '/mnt/upgsales/backup' NOPROMPT;
RMAN SALES> RECOVER DATABASE;

RMAN will complain about a missing log file. But don’t worry – this is expected and will be fixed later on:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/28/2020 09:06:51
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 60 and starting SCN of 793358

Down Time Starts

Now it is time to kick users off the database. Your precious downtime starts now.

Prepare Database For Upgrade

In my database I will create some sample data so I can verify the upgrade:

SALES SQL> CREATE USER UPG19 IDENTIFIED BY <secret-password>;
SALES SQL> ALTER USER UPG19 QUOTA UNLIMITED ON USERS;
SALES SQL> CREATE TABLE UPG19.ORDERS(ID NUMBER, CUSTOMER VARCHAR2(50), AMOUNT NUMBER) TABLESPACE USERS;
SALES SQL> INSERT INTO UPG19.ORDERS VALUES(1, 'John', 500);
SALES SQL> COMMIT;

I must prepare my database for upgrade on the source system. When I open the database on the target system, I can only do that in UPGRADE mode (because the database will be restored using the 19c Oracle Home), and in UPGRADE mode it is impossible to do the pre-upgrade tasks.

I will use the classic preupgrade.jar tool in this demo, but you could also use the newer AutoUpgrade. Always get the latest preupgrade tool from My Oracle Support. Upload the zip file (named preupgrade_19_cbuild_7_lf.zip in my demo) to the source system, extract to $ORACLE_HOME/rdbms/admin and do the pre-upgrade checks:

[oracle@srchost11]$ cp preupgrade_19_cbuild_7_lf.zip $ORACLE_HOME/rdbms/admin
[oracle@srchost11]$ cd $ORACLE_HOME/rdbms/admin
[oracle@srchost11]$ unzip preupgrade_19_cbuild_7_lf.zip

[oracle@srchost11]$ mkdir -p /mnt/upgsales/preupg_logs_SALES
[oracle@srchost11]$ cd /mnt/upgsales/preupg_logs_SALES
[oracle@srchost11]$ $ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/preupgrade.jar FILE TEXT DIR .

You must upload the same version of the preupgrade tool to the target system before you can run the post-upgrade fixups. Hence, save the zip file so you don’t have to download it again.

Next, I will review the report generated by the tool:

[oracle@srchost11]$ more /mnt/upgsales/preupg_logs_SALES/preupgrade.log

And I can execute the pre-upgrade fixups:

SALES SQL> SET SERVEROUT ON
SALES SQL> @/mnt/upgsales/preupg_logs_SALES/preupgrade_fixups.sql

Final Incremental Backup/Recover

I can now make the last incremental backup on my source system. To be absolutely sure nothing else gets into the source database from now on, I restart the database in restricted mode:

[oracle@srchost11]$ sqlplus / as sysdba

SALES SQL> SHUTDOWN IMMEDIATE
SALES SQL> STARTUP RESTRICT

Then I use RMAN to archive the current log file and start the last backup:

[oracle@srchost11]$ rman target /

SALES RMAN> sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
SALES RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/mnt/upgsales/backup/lvl1%U' PLUS ARCHIVELOG FORMAT '/mnt/upgsales/backup/arch%U';

Now, switch to the target system, catalog the new backups:

[oracle@tgthost19]$ rman target /

RMAN SALES> CATALOG START WITH '/mnt/upgsales/backup' NOPROMPT;

By default, RMAN will try to perform complete recovery. But I can’t do that because I don’t have the online redo logs. I must perform incomplete recovery. That sounds dangerous, but it is not: I archived the current log file after I had ensured that no one was using the system (I started in restricted mode, remember). To perform incomplete recovery, I must know at which sequence to stop. I will use the LIST command in RMAN to do that:

RMAN SALES> LIST BACKUP OF ARCHIVELOG ALL;

From the output, I take the last available sequence and add one. In my case, I will recover until sequence 65:

RMAN SALES> run {
   SET UNTIL SEQUENCE 65 THREAD 1;
   RECOVER DATABASE;
}

Finally, you can switch to SQL*Plus and open the database. You could in theory also do that from RMAN, but you will likely hit ORA-04023: Object SYS.STANDARD could not be validated or authorized:

[oracle@tgthost19]$ sqlplus / as sysdba

SALES SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;

Upgrade Database

I must upload the same version of the preupgrade tool to the target Oracle Home, before I can do the post-upgrade fixups:

[oracle@tgthost19]$ cp preupgrade_19_cbuild_7_lf.zip $ORACLE_HOME/rdbms/admin
[oracle@tgthost19]$ cd $ORACLE_HOME/rdbms/admin
[oracle@tgthost19]$ unzip preupgrade_19_cbuild_7_lf.zip

I can now upgrade the database. Ensure to use the same prompt that has the environment set to the SALES database – the temporary instance:

[oracle@tgthost19]$ mkdir -p /mnt/upgsales/upg_logs_SALES
[oracle@tgthost19]$ dbupgrade -l /mnt/upgsales/upg_logs_SALES

Once the upgrade completes, I will finish with the post-upgrade tasks:

SQL> STARTUP

SQL> --Recompile
SQL> @$ORACLE_HOME/rdbms/admin/utlrp
SQL> --Check outcome of upgrade
SQL> @$ORACLE_HOME/rdbms/admin/utlusts.sql
SQL> --Post-upgrade fixups
SQL> @/mnt/upgsales/preupg_logs_SALES/postupgrade_fixups.sql
SQL> --Timezone file upgrade
SQL> SET SERVEROUTPUT ON
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_check.sql
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_apply.sql

Last, have a look in the report generated by preupgrade.jar to see if there are any post-upgrade tasks that you have to execute:

[oracle@tgthost19]$ more /mnt/upgsales/preupg_logs_SALES/preupgrade.log

Plug In Database

Now that the temporary database is upgraded, let’s look at what we need to prepare for the conversion to a PDB. First, I will export the encryption keys:

SALES SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" TO '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <SALES-keystore-password>;

And then I open the database in READ ONLY mode to create a manifest file. After that, I completely shut down the temporary database and, hopefully, it won’t be needed anymore:

SALES SQL> SHUTDOWN IMMEDIATE
SALES SQL> STARTUP MOUNT
SALES SQL> ALTER DATABASE OPEN READ ONLY;
SALES SQL> EXEC DBMS_PDB.DESCRIBE('/mnt/upgsales/manifest_sales.xml');
SALES SQL> SHUTDOWN IMMEDIATE

Now, I will restart CDB1, which I shut down previously. I will work in CDB1 for the rest of the blog post. Notice how I am resetting my environment variables to the original values using the source command. You could also open a new SSH session instead. Anyway, just ensure that your environment is now set to work on the original database, CDB1:

[oracle@tgthost19]$ source ~/.bashrc
[oracle@tgthost19]$ env | grep ORA
[oracle@tgthost19]$ $ORACLE_HOME/bin/srvctl start database -db $ORACLE_UNQNAME

I check for plug in compatibility:

CDB1 SQL> SET SERVEROUT ON
CDB1 SQL> BEGIN 
    IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/mnt/upgsales/manifest_sales.xml', 'SALES') THEN
        DBMS_OUTPUT.PUT_LINE('SUCCESS');
    ELSE
        DBMS_OUTPUT.PUT_LINE('ERROR');
    END IF;
END;
/

Hopefully, it should read out SUCCESS. If not, you can query PDB_PLUG_IN_VIOLATIONS to find out why:

CDB1 SQL> SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

I can plug in the SALES database as a new PDB – which I also will call SALES. I am using the MOVE keyword to have my data files moved to a directory that matches the naming standard:

CDB1 SQL> CREATE PLUGGABLE DATABASE SALES USING '/mnt/upgsales/manifest_sales.xml' MOVE;
CDB1 SQL> ALTER PLUGGABLE DATABASE SALES OPEN;

I could also use the NOCOPY keyword and just use the data files where they are currently placed. Later on, I could move the data files to a proper directory that follows the naming standard, and if I were on Enterprise Edition, I could even use online data file move.
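
An online data file move is a single statement per file. A sketch with a made-up file name:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> --With db_create_file_dest set, omitting the TO clause generates an OMF name
CDB1 SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/SALES/users01.dbf';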

Next, I can switch to the SALES PDB and import my encryption keys from the file I made a little earlier. Note that I must enter the secret that I used in the export, and this time I have to enter the keystore password for CDB1:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" FROM '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <CDB1-keystore-password> WITH BACKUP;

Be aware that if your system tablespaces are encrypted, you might have to import the encryption key into CDB$ROOT as well before you can open the database.
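
If you need to do that, the statement is the same as above – just run in the root container before opening the PDB:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" FROM '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <CDB1-keystore-password> WITH BACKUP;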

Now, it is time to fully convert the database to a PDB:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
CDB1 SQL> SHUTDOWN IMMEDIATE
CDB1 SQL> STARTUP

Now, check and resolve any plug-in violations:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

And finally, ensure that OPEN_MODE=READ WRITE and RESTRICTED=NO. When that is the case, I can save the state of the PDB so it will auto-open whenever the CDB restarts:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> SELECT OPEN_MODE, RESTRICTED FROM V$PDBS WHERE NAME='SALES';
CDB1 SQL> ALTER PLUGGABLE DATABASE SALES SAVE STATE;

Verify that my test data exist:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> SELECT * FROM UPG19.ORDERS;

That’s it. The database is now fully upgraded to 19c and converted to a PDB. Be sure to:

  • Start a backup
  • Test your application
  • Adjust your connection strings
  • And what else your procedure mandates

Wrap-Up

Let’s clean up on the target system! I can remove the files and folders that were created to support the temporary instance:

[oracle@tgthost19]$ #audit dest
[oracle@tgthost19]$ rm -rf /u01/app/oracle/admin/SALES/adump
[oracle@tgthost19]$ #diag dest
[oracle@tgthost19]$ rm -rf /u01/app/oracle/diag/rdbms/sales
[oracle@tgthost19]$ #wallet
[oracle@tgthost19]$ rm -rf /opt/oracle/dcs/commonstore/wallets/tde/SALES
[oracle@tgthost19]$ #instance files
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/initSALES.ora
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/orapwSALES
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/spfileSALES.ora
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/hc_SALES.dat
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/lkSALES
[oracle@tgthost19]$ #exported master key
[oracle@tgthost19]$ rm /mnt/upgsales/key_export_SALES

Also, since I stored data files in ASM I can delete those as well. Note you have to log on as grid to do that:

[grid@tgthost19]$ asmcmd rm -rf +DATA/SALES
[grid@tgthost19]$ asmcmd rm -rf +RECO/SALES

I can also drop the PDB that gets created automatically when you deploy the new DB System. In my case it is named CDB1_PDB1:

SQL> ALTER PLUGGABLE DATABASE CDB1_PDB1 CLOSE;
SQL> DROP PLUGGABLE DATABASE CDB1_PDB1 INCLUDING DATAFILES;

Also, I can remove the File Storage service that I created. If you want to keep log files from the upgrade (or other files) be sure to copy them somewhere else.

Last, when I am convinced that my upgraded and converted database is doing well, I can terminate the entire source DB system.

Tweaks

If you have a license for any of the Enterprise Edition offerings, you might be able to use some of the features below to speed up backup and recovery. Before using any of them, be sure to check the license guide and confirm you have a proper license.

  • Block change tracking – reduces backup time because RMAN doesn’t need to scan the entire database (see the sketch after this list).
  • Parallel backup and recovery – more channels, faster backups and faster restores.
  • Compression – reduces the size of the backups. Since you can apply incremental backups continuously the backup size should be fairly small anyway.
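
For illustration, enabling block change tracking is a one-liner (assuming db_create_file_dest is set so the database picks the location of the tracking file itself; otherwise add USING FILE '...'):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;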

Disclaimer

I am not a backup expert (probably far from it). When writing this post I was struggling a lot with missing archive logs. I even had to call an old mentor for advice. But in the end, I decided just to include them in all backups. Possibly, there is a die-hard RMAN expert out there who can tell me a better way of doing it. But for sure it doesn’t hurt to include them…

If you come up with a better way, please leave a comment. I would love to learn more.

Conclusion

You can upgrade an 11.2.0.4 database to 19c by moving the database to a new VM DB System. You can reduce downtime by using incremental backups. You must convert the database to a pluggable database as well because multitenant is the only supported architecture for VM DB Systems on 19c.

Other Posts in This Series

Upgrading in the cloud – VM DB Systems – 11.2.0.4 to 19c (simple)

In this blog post I will show you how you can upgrade a database on 11.2.0.4 to 19c. It will also include conversion from the non-CDB architecture into a pluggable database. I have to do this because for VM DB Systems the only supported architecture for 19c is multitenant. Finally, I will use a Standard Edition database to show you something that can be used in any edition.

Overview of DB Systems and databases

My source environment is the red environment. The DB System is called SRCHOST11 and it has an 11.2.0.4 database that is called SALES. Due to the restrictions of the VM DB System I have to move the database to a new DB System in order to upgrade it. I have created a brand-new target environment – the green environment – on the release that I want to target. I have named the DB System TGTHOST19 and it has a multitenant database called CDB1. When I am done, the target environment – CDB1 – will also contain a PDB named SALES. The SALES PDB will be the original 11.2.0.4 database that has been upgraded and converted.

The aim of this blog post is to make it as easy as possible. When I have to move the database from the source DB System to the target DB System, I will just make a full backup that I can restore on the target environment. Obviously, this requires downtime and the amount depends on the size of the database and the transfer speed between the two DB Systems.

My high-level plan for the task looks like this:

  • Prepare database for upgrade
  • Backup database
  • Restore database
  • Upgrade database
  • Plug in database
  • Wrap-Up

I will elaborate a little on the Restore database part. On VM DB Systems you are not allowed to create your own databases. You can only use the database that gets created when the system is provisioned. However, for a short period of time I need to spin up a second database instance on the target system. This second – or temporary – instance will be a duplicate of the source database (as a non-CDB), and I will upgrade it to the new release. Then I can plug in the database as a PDB in the precreated CDB database and get rid of the second/temporary instance. You will see how it works later in the blog post.

Prepare Database For Upgrade

I need to exchange files between the source and the target systems and I will use a File Storage service for that. Check out the documentation if you need help creating one – I created one already called upgsales and now I can mount it on my source system:

[opc@srchost11]$ sudo mkdir -p /mnt/upgsales
[opc@srchost11]$ sudo chmod 777 /mnt/upgsales/
[opc@srchost11]$ sudo mount x.x.x.x:/upgsales /mnt/upgsales

In my database I will create some sample data so we can verify the upgrade:

SALES SQL> CREATE USER UPG19 IDENTIFIED BY <secret-password>;
SALES SQL> ALTER USER UPG19 QUOTA UNLIMITED ON USERS;
SALES SQL> CREATE TABLE UPG19.ORDERS(ID NUMBER, CUSTOMER VARCHAR2(50), AMOUNT NUMBER) TABLESPACE USERS;
SALES SQL> INSERT INTO UPG19.ORDERS VALUES(1, 'John', 500);
SALES SQL> COMMIT;

DOWNTIME STARTS NOW – get those users off!

I must prepare my database for upgrade on the source system. When I restore the database on the target system, I can only open the database in UPGRADE mode (because the database will be restored using the 19c Oracle Home). In UPGRADE mode it is impossible to do the pre-upgrade tasks.

I will use the classic preupgrade.jar tool in this demo, but you could also use the newer AutoUpgrade. Always get the latest preupgrade tool from My Oracle Support. Upload the zip file (named preupgrade_19_cbuild_7_lf.zip in my demo) to the source system, extract to $ORACLE_HOME/rdbms/admin and do the pre-upgrade checks:

[oracle@srchost11]$ cp preupgrade_19_cbuild_7_lf.zip $ORACLE_HOME/rdbms/admin
[oracle@srchost11]$ cd $ORACLE_HOME/rdbms/admin
[oracle@srchost11]$ unzip preupgrade_19_cbuild_7_lf.zip

[oracle@srchost11]$ mkdir -p /mnt/upgsales/preupg_logs_SALES
[oracle@srchost11]$ cd /mnt/upgsales/preupg_logs_SALES
[oracle@srchost11]$ $ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/preupgrade.jar FILE TEXT DIR .

You must upload the same version of the preupgrade tool to the target system before you can run the post-upgrade fixups. Hence, save the zip file so you don’t have to download it again.

Next, I will review the report generated by the tool:

[oracle@srchost11]$ more /mnt/upgsales/preupg_logs_SALES/preupgrade.log

And I can execute the pre-upgrade fixups:

SALES SQL> SET SERVEROUT ON
SALES SQL> @/mnt/upgsales/preupg_logs_SALES/preupgrade_fixups.sql

Backup Database

The database is now prepared for upgrade. Next, I will get what I need to move the database. First, a PFile:

SALES SQL> CREATE PFILE='/mnt/upgsales/init.ora' FROM SPFILE;

Now I will shut down the database and restart in MOUNT mode. Then I can take a full backup:

SALES SQL> SHUTDOWN IMMEDIATE
SALES SQL> STARTUP MOUNT
SALES SQL> EXIT

[oracle@srchost11]$ rman target /

SALES RMAN> BACKUP DATABASE FORMAT '/mnt/upgsales/db_%U';
SALES RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/mnt/upgsales/cf_%U';

Now we just need the password file and wallet:

[oracle@srchost11]$ cp /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME/ewallet.p12 /mnt/upgsales/
[oracle@srchost11]$ cp $ORACLE_HOME/dbs/orapw$ORACLE_SID /mnt/upgsales/orapw$ORACLE_SID

If you are really concerned about security you can copy the wallet file directly to the target system – instead of via the File Storage service. The File Storage service itself is secured, but the fewer places you have a copy of the wallet – the better and the safer, I assume. Further, you can also encrypt traffic to and from the File Storage service.

Restore Database

On my target system, I need to access the File Storage service as well:

[opc@tgthost19]$ sudo mkdir -p /mnt/upgsales
[opc@tgthost19]$ sudo chmod 777 /mnt/upgsales/
[opc@tgthost19]$ sudo mount x.x.x.x:/upgsales /mnt/upgsales

Next, I will copy the password file and PFile into the target Oracle Home. I need that in order to start the temporary instance. Note that the name of the temporary instance will be SALES – the same as the source database SID:

[oracle@tgthost19]$ cp /mnt/upgsales/init.ora $ORACLE_HOME/dbs/initSALES.ora
[oracle@tgthost19]$ cp /mnt/upgsales/orapwSALES $ORACLE_HOME/dbs/orapwSALES

I also need to copy the wallet:

[oracle@tgthost19]$ mkdir -p /opt/oracle/dcs/commonstore/wallets/tde/SALES
[oracle@tgthost19]$ cp /mnt/upgsales/ewallet.p12 /opt/oracle/dcs/commonstore/wallets/tde/SALES/

And I need to create a directory for audit_file_dest:

[oracle@tgthost19]$ mkdir -p /u01/app/oracle/admin/SALES/adump

Now, I must edit the PFile:

[oracle@tgthost19]$ vi $ORACLE_HOME/dbs/initSALES.ora

And make the following changes:

  • Remove all the double-underscore parameters that contain the memory settings from the last restart. That could for instance be SALES.__db_cache_size.
  • Set audit_file_dest='/u01/app/oracle/admin/SALES/adump'
  • Set control_files='+RECO/sales/controlfile/current.256.1048859635'
  • Set SALES.sga_target=6G
  • Set SALES.pga_aggregate_target=2G
  • Set db_unique_name='SALES'

I don’t have an abundance of memory on this system, so I keep the memory settings low. Strictly speaking, you don’t have to change db_unique_name, but I am doing it so it will be easier to clean up afterwards.
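
Put together, the edited part of initSALES.ora ends up looking roughly like this (the control file name is specific to my environment):

audit_file_dest='/u01/app/oracle/admin/SALES/adump'
control_files='+RECO/sales/controlfile/current.256.1048859635'
SALES.sga_target=6G
SALES.pga_aggregate_target=2G
db_unique_name='SALES'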

While I work on the temporary instance I must shut down the other database – the pre-created one that eventually will hold the PDB. Most likely there is not enough memory on the system to support two databases:

[oracle@tgthost19]$ sql / as sysdba

CDB1 SQL> SHUTDOWN IMMEDIATE

Let’s start the temporary instance in NOMOUNT mode. Remember to set the environment:

[oracle@tgthost19]$ export ORACLE_UNQNAME=SALES
[oracle@tgthost19]$ export ORACLE_SID=SALES
[oracle@tgthost19]$ sql / as sysdba

SALES SQL> STARTUP NOMOUNT

And finally, I can start the restore using RMAN. Notice how I am using the NOOPEN keyword, which instructs RMAN to keep the database MOUNTED and not attempt to open the database. If you tried to open the database, it would fail because the database must be opened in UPGRADE mode. At this point in time, the database itself is on 11.2.0.4 but running on 19c binaries:

[oracle@tgthost19]$ rman auxiliary /

SALES RMAN> DUPLICATE DATABASE TO SALES NOOPEN BACKUP LOCATION '/mnt/upgsales/';

Upgrade Database

RMAN left the database in MOUNTED mode. Before I can open the database I must open the keystore:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN FORCE KEYSTORE IDENTIFIED BY <source-keystore-password>;

Now, I can open the database and execute the RESETLOGS operation that RMAN didn’t do:

SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;

I must upload the same version of the preupgrade tool to the target Oracle Home before I can do the post-upgrade fixups:

[oracle@tgthost19]$ cp preupgrade_19_cbuild_7_lf.zip $ORACLE_HOME/rdbms/admin
[oracle@tgthost19]$ cd $ORACLE_HOME/rdbms/admin
[oracle@tgthost19]$ unzip preupgrade_19_cbuild_7_lf.zip

And I can now upgrade the database:

[oracle@tgthost19]$ mkdir -p /mnt/upgsales/upg_logs_SALES
[oracle@tgthost19]$ dbupgrade -l /mnt/upgsales/upg_logs_SALES

Once the upgrade completes, I will finish with the post-upgrade tasks:

SQL> STARTUP

SQL> --Recompile
SQL> @$ORACLE_HOME/rdbms/admin/utlrp
SQL> --Check outcome of upgrade
SQL> @$ORACLE_HOME/rdbms/admin/utlusts.sql
SQL> --Post-upgrade fixups
SQL> @/mnt/upgsales/preupg_logs_SALES/postupgrade_fixups.sql
SQL> --Timezone file upgrade
SQL> SET SERVEROUTPUT ON
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_check.sql
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_apply.sql

Last, have a look at the report generated by preupgrade.jar to see if there are any post-upgrade tasks that you have to execute:

[oracle@tgthost19]$ more /mnt/upgsales/preupg_logs_SALES/preupgrade.log

Plug In Database

Now that the temporary database is upgraded let’s look at what we need to prepare for the conversion to a PDB. First, I will export the encryption keys:

SALES SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" TO '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <SALES-keystore-password>;

And then I open the database in READ ONLY mode to create a manifest file. After that, I completely shut down the temporary database and, hopefully, it won’t be needed anymore:

SALES SQL> SHUTDOWN IMMEDIATE
SALES SQL> STARTUP MOUNT
SALES SQL> ALTER DATABASE OPEN READ ONLY;
SALES SQL> EXEC DBMS_PDB.DESCRIBE('/mnt/upgsales/manifest_sales.xml');
SALES SQL> SHUTDOWN IMMEDIATE

Now, I will restart CDB1 which I shut down previously. I will work in CDB1 for the rest of the blog post. Notice how I am resetting my environment variables to the original values using the source command. You could also open a new SSH session instead. Either way, just ensure that your environment is now set to work on the original database, CDB1:

[oracle@tgthost19]$ source ~/.bashrc
[oracle@tgthost19]$ env | grep ORA
[oracle@tgthost19]$ sql / as sysdba

CDB1 SQL> STARTUP

I check for plug-in compatibility:

CDB1 SQL> SET SERVEROUT ON
CDB1 SQL> BEGIN 
    IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/mnt/upgsales/manifest_sales.xml', 'SALES') THEN
        DBMS_OUTPUT.PUT_LINE('SUCCESS');
    ELSE
        DBMS_OUTPUT.PUT_LINE('ERROR');
    END IF;
END;
/

Hopefully, it should read out SUCCESS. If not, you can query PDB_PLUG_IN_VIOLATIONS to find out why:

CDB1 SQL> SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

I can plug in the SALES database as a new PDB – which I will also call SALES. I am using the MOVE keyword to have my data files moved to a directory that matches the naming standard:

CDB1 SQL> CREATE PLUGGABLE DATABASE SALES USING '/mnt/upgsales/manifest_sales.xml' MOVE;
CDB1 SQL> ALTER PLUGGABLE DATABASE SALES OPEN;

I could also use the NOCOPY keyword and just use the data files from where they currently are placed. Later on, I could move the data files to a proper directory that follows the naming standard, and if I were on Enterprise Edition I could even use online datafile move.
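
If I had gone the NOCOPY route, a later online move on Enterprise Edition could be sketched like this (file name and target disk group are hypothetical):

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> ALTER DATABASE MOVE DATAFILE '/mnt/upgsales/sales/users01.dbf' TO '+DATA';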

Next, I can switch to the SALES PDB and import my encryption keys from the file I made a little earlier. Note that I must enter the secret that I used in the export. And now I have to enter the keystore password for CDB1:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" FROM '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <CDB1-keystore-password> WITH BACKUP;

Be aware that if your system tablespaces are encrypted, you might have to import the encryption key into CDB$ROOT as well before you can open the database.

Now, it is time to fully convert the database into a PDB:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
CDB1 SQL> SHUTDOWN IMMEDIATE
CDB1 SQL> STARTUP

Now, check and resolve any plug-in violations:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

And finally, ensure that OPEN_MODE=READ WRITE and RESTRICTED=NO. Once that is the case, I can save the state of the PDB so it will auto-open whenever the CDB restarts:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> SELECT OPEN_MODE, RESTRICTED FROM V$PDBS WHERE NAME='SALES';
CDB1 SQL> ALTER PLUGGABLE DATABASE SALES SAVE STATE;

That’s it. The database is now fully upgraded to 19c and converted to a PDB. Be sure to:

  • Start a backup
  • Test your application
  • Adjust your connection strings
  • And whatever else your procedure mandates

Wrap-Up

Let’s clean up on the target system! I can remove the files and folders that were created to support the temporary instance:

[oracle@tgthost19]$ #audit dest
[oracle@tgthost19]$ rm -rf /u01/app/oracle/admin/SALES/adump
[oracle@tgthost19]$ #diag dest
[oracle@tgthost19]$ rm -rf /u01/app/oracle/diag/rdbms/sales
[oracle@tgthost19]$ #wallet
[oracle@tgthost19]$ rm -rf /opt/oracle/dcs/commonstore/wallets/tde/SALES
[oracle@tgthost19]$ #instance files
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/initSALES.ora
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/orapwSALES
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/spfileSALES.ora
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/hc_SALES.dat
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/lkSALES
[oracle@tgthost19]$ #exported master key
[oracle@tgthost19]$ rm /mnt/upgsales/key_export_SALES

Also, since I stored data files in ASM I can delete those as well. Note you have to log on as grid to do that:

[grid@tgthost19]$ asmcmd rm -rf +DATA/SALES
[grid@tgthost19]$ asmcmd rm -rf +RECO/SALES

I can also drop the PDB that gets created automatically when you deploy the new DB System. In my case it is named CDB1_PDB1:

SQL> ALTER PLUGGABLE DATABASE CDB1_PDB1 CLOSE;
SQL> DROP PLUGGABLE DATABASE CDB1_PDB1 INCLUDING DATAFILES;

Also, I can remove the File Storage service that I created. If you want to keep log files from the upgrade (or other files) be sure to copy them somewhere else.

Last, when I am convinced that my upgraded and converted database is doing well, I can terminate the entire source DB system.

Tweaks

The transfer speed to the File Storage service depends on the number of CPUs on your system (more CPUs, more network bandwidth). If the network is the bottleneck, you can try to temporarily add more CPUs.

If you have a license for any of the Enterprise Edition offerings, you might be able to use some of the features below to speed up backup and recovery. Before using any of them, be sure to check the license guide and confirm you have a proper license.

  • Parallel backup and recovery – more channels, faster backups and faster restores.
  • Compression – reduces the size of the backups which is beneficial when they are transported over the network.
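
A hedged sketch of the two settings above in RMAN (four channels is just an example, and compression beyond the BASIC algorithm requires the Advanced Compression Option):

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;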

Conclusion

You can upgrade an 11.2.0.4 database to 19c by moving the database to a new VM DB System. You must convert the database to a pluggable database as well because multitenant is the only supported architecture for VM DB Systems on 19c.

Other Posts in This Series

Upgrading in the cloud – VM DB Systems – What about downgrade?

In a previous blog post I showed how you could upgrade a 12.2 PDB by plugging it into a 19c CDB. But what about downgrade? Yeah, downgrade. You know, that really cool feature that you never practice, but you know you should.

In the previous blog post, I used the CDB that gets automatically created when you deploy a new 19c VM DB System, and it comes with COMPATIBLE set to 19.0.0 – the default. When you provision a new VM DB System there is no way to control that parameter. Thus, when I plug my old release PDB into the new release CDB, the COMPATIBLE parameter is automatically raised, and I have lost the possibility of doing a downgrade. The only option to get back to the old release would be a Data Pump export.

If you want to preserve the possibility of doing a database downgrade, I strongly recommend you switch to Bare Metal DB Systems or ExaCS, which are much more flexible. But if you insist on using VM DB Systems there is an option – it is cumbersome – but doable. And believe it or not – after working with Oracle Database for so many years it was the first time ever that I had to do a downgrade – not even in a lab or a test environment (which became fairly obvious after I had asked for advice the 100th time that day).

Drop pre-created 19c database

To get a 19c CDB with a non-default COMPATIBLE setting we will drop the pre-created database and replace it with a backup that has the proper COMPATIBLE setting. This is supported and the same approach is used in the whitepaper “Hybrid Data Guard to Oracle Cloud Infrastructure”. It is a good read and it has a lot of script examples that I stole… oh… got inspired by.

Connect to the new release VM DB System and ensure that the environment variable ORACLE_UNQNAME is set to the DB_UNIQUE_NAME of the database:

echo $ORACLE_UNQNAME

If it is not set, you can get it from srvctl:

srvctl config database
export ORACLE_UNQNAME=...

Generate a script that can delete all data, temp, redo log, and control files:

SET HEADING OFF LINESIZE 999 PAGESIZE 999 FEEDBACK OFF TRIMSPOOL ON TIMING OFF
SPOOL /tmp/db_replace_files.lst
SELECT 'asmcmd rm '|| name FROM V$DATAFILE UNION ALL SELECT 'asmcmd rm '|| name FROM V$TEMPFILE UNION ALL SELECT 'asmcmd rm '|| member FROM V$LOGFILE UNION ALL SELECT 'asmcmd rm '|| name FROM V$CONTROLFILE;
SPOOL OFF
host chmod 777 /tmp/db_replace_files.lst
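
The spooled file contains one asmcmd rm line per file and looks something like this (the ASM paths are made up for illustration):

asmcmd rm +DATA/CDB1/DATAFILE/system.256.1048859001
asmcmd rm +DATA/CDB1/DATAFILE/sysaux.257.1048859003
asmcmd rm +RECO/CDB1/CONTROLFILE/current.256.1048859635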

You must edit the script and get rid of the unneeded lines. Next, stop the database:

srvctl stop database -d $ORACLE_UNQNAME -o immediate

And log on as grid and delete the files using the script we just created:

. /tmp/db_replace_files.lst

As oracle, now restart the database instance in NOMOUNT mode (can’t really go further since we nuked the control files) and set COMPATIBLE to the same setting as the old release CDB:

srvctl start database -db $ORACLE_UNQNAME -o NOMOUNT

sql / as sysdba
SQL> ALTER SYSTEM SET COMPATIBLE='12.2.0' SCOPE=SPFILE;
SQL> CREATE PFILE FROM SPFILE;

srvctl stop database -db $ORACLE_UNQNAME -o immediate
srvctl start database -db $ORACLE_UNQNAME -o nomount

sql / as sysdba
SQL> SHOW PARAMETER COMPATIBLE

Now we have a new release instance running with the old COMPATIBLE setting. Obviously, it is of no use – yet.

Create a backup of old release CDB

I will use the old release CDB as the source for my new release CDB and thus preserve my COMPATIBLE setting. Obviously, the old release CDB must be upgraded to the new release, so let us first ensure that the source CDB can actually be upgraded. Use AutoUpgrade in ANALYZE and FIXUPS mode, which is described in a previous blog post. We must execute the ANALYZE and FIXUPS modes on the source system because the target system will only be able to open the database in UPGRADE mode, and these steps must be executed while the database is running in normal mode.
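
As a sketch, with a config file for the source CDB (the config file name here is hypothetical):

java -jar autoupgrade.jar -config CDB1.cfg -mode analyze
java -jar autoupgrade.jar -config CDB1.cfg -mode fixups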

Next, I will create a File Storage Service that I can use to share files between the two VM DB Systems. The File Storage Service is really nice because the transfer speed in and out of the service depends on the network bandwidth of your VM DB System. So, if you add more CPUs to the system, you get more network bandwidth to the service automatically. It is really easy to create a File Storage Service and it is very well documented, so I will skip that part here. After that you can mount the file system in your systems (as opc):

sudo mkdir -p /mnt/db-downgrade-122
sudo mount x.x.x.x:/db-downgrade-122 /mnt/db-downgrade-122
sudo chmod 777 /mnt/db-downgrade-122

I can now take a backup of the CDB and store it directly in my NFS mount point:

rman target / 

RMAN> BACKUP DATABASE ROOT FORMAT '/mnt/db-downgrade-122/cdbroot_%U' PLUGGABLE DATABASE 'PDB$SEED' FORMAT '/mnt/db-downgrade-122/pdbseed_%U' PLUS ARCHIVELOG FORMAT '/mnt/db-downgrade-122/arch_%U';
RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/mnt/db-downgrade-122/cf_%U';

Since the database is encrypted, we also need a copy of the keystore (or wallet). An easy solution could be to put the keystore files into the File Storage Service, but a safer approach is to transfer the files directly using scp. Although the keystore is protected by a password, it is still safer to keep them separated.

At the time of writing, in OCI you can find the location of the keystore in sqlnet.ora:

cat $ORACLE_HOME/network/admin/sqlnet.ora | grep -i encryption_wallet

But that will change at some point in time because as of Oracle Database release 19c the sqlnet.ora parameter ENCRYPTION_WALLET_LOCATION is deprecated. You might have to look at the database parameter WALLET_ROOT instead.
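
A quick sketch of checking that parameter from SQL:

sql / as sysdba
SQL> SHOW PARAMETER WALLET_ROOT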

For now, just note down the location of the keystore.

Restore old release CDB and upgrade

Now we can restore the backup of the 12.2 database using the 19c binaries. This is supported, and in fact newer releases of RMAN can always restore backups from lower releases. However, normally RMAN will try to open the database in normal mode, which we can’t do because of the version mismatch. I will use the NOOPEN keyword, which causes RMAN to leave the database in MOUNT mode so I can manually open the database with RESETLOGS and in UPGRADE mode.

But first we must copy the keystore files from the source database to the new release DB System. In my example I am also copying the auto-login keystore which you shouldn’t do if you are using local auto-login keystores. In that situation the auto-login keystore should be re-created at the new release system:

cd <keystore-location>
scp -i <ssh-key> oracle@<source-host>:<keystore-location>/cwallet.sso cwallet.sso
scp -i <ssh-key> oracle@<source-host>:<keystore-location>/ewallet.p12 ewallet.p12

Now let’s do the restore and open the database in upgrade mode:

rman auxiliary /

RMAN> DUPLICATE DATABASE TO CDB1 AS ENCRYPTED NOOPEN SKIP PLUGGABLE DATABASE SALES BACKUP LOCATION '/mnt/db-downgrade-122/';

And in the end, you can see that RMAN does not open the database – the NOOPEN keyword prevents RMAN from opening it at the end of the restore and recover operation.

Then, you can open the database in UPGRADE mode with the RESETLOGS option to complete the restore. Also, drop the skipped pluggable databases from the data dictionary:

ALTER DATABASE OPEN RESETLOGS UPGRADE;
DROP PLUGGABLE DATABASE SALES INCLUDING DATAFILES;

Finally, you can use AutoUpgrade in UPGRADE mode to upgrade the CDB to the new release. You now have a new target system running on 19c but with a lower COMPATIBLE setting.
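
Again as a sketch, reusing the hypothetical config file from the ANALYZE step:

java -jar autoupgrade.jar -config CDB1.cfg -mode upgrade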

If you want to know more about the UPGRADE mode, have a look in the documentation.

Downgrading a PDB

Previously, we laid the groundwork for being able to do a downgrade. For the following, I am assuming that you have already upgraded your PDB to 19c, and now you have ended up in a big doo-doo and have to downgrade.

Downgrading on VM DB Systems is also slightly more complicated than on other systems. Remember that VM DB Systems support only one Oracle Home (the one that comes deployed automatically), which means that we must move the database back to the source system. For that operation we can’t use Data Guard, RMAN, or any other fancy approach, because they only work towards the same or a newer release. So, we will have to do an old-school cold copy of the database – and that requires additional downtime. But let’s get started!

I will create a file that gives me all the commands I need to copy the data files out of ASM and into my File Storage Service (you could also scp them directly to the old release system):

SET HEADING OFF LINESIZE 999 PAGESIZE 999 FEEDBACK OFF TRIMSPOOL ON TIMING OFF
SPOOL /tmp/db_downgrade_files.lst
SELECT 'asmcmd cp ' || name || ' ''/mnt/dbbackupstaging' || SUBSTR(name, INSTR(name, '/', -1 )) || '''' FROM V$DATAFILE
UNION
SELECT 'chown oracle:oinstall /mnt/dbbackupstaging' || SUBSTR(name, INSTR(name, '/', -1 )) FROM V$DATAFILE ORDER BY 1;
SPOOL OFF

And then I can proceed with the actual downgrade. I need to ensure that the unified audit trail is emptied before the downgrade:

ALTER PLUGGABLE DATABASE SALES CLOSE;
ALTER PLUGGABLE DATABASE SALES OPEN DOWNGRADE;
EXEC DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, FALSE);
SPOOL /tmp/db_downgrade.lst
SET TERMOUT ON TIMING ON SERVEROUT ON ECHO ON
@?/rdbms/admin/catdwgrd.sql
SPOOL OFF

Unplug the PDB. Because the PDB is encrypted, I have to specify a password that can protect the sensitive information inside the manifest file:

ALTER SESSION SET CONTAINER=CDB$ROOT;
ALTER PLUGGABLE DATABASE SALES CLOSE;
ALTER PLUGGABLE DATABASE SALES UNPLUG INTO '/mnt/db-downgrade-122/manifest_sales.xml' ENCRYPT USING [a-secret-password];

Now we can copy the data files from the local storage onto the File Storage Service so we can use them at the source system. Use the commands that we generated previously:

asmcmd ...
chown ...
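
Each generated pair will look something like this (the file name and numbers are made up):

asmcmd cp +DATA/CDB1/DATAFILE/users.259.1048861123 '/mnt/dbbackupstaging/users.259.1048861123'
chown oracle:oinstall /mnt/dbbackupstaging/users.259.1048861123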

Now, switch to the old release system and create the PDB from the manifest file. I will use the data files right off the File Storage Service, and optionally I can move them afterwards – as an online operation (you might not want to do that, but I ran out of disk space):

CREATE PLUGGABLE DATABASE SALES USING '/mnt/db-downgrade-122/manifest_sales.xml' SOURCE_FILE_DIRECTORY='/mnt/db-downgrade-122' NOCOPY KEYSTORE IDENTIFIED BY [keystore-password] DECRYPT USING [a-secret-password];

Start up the database in UPGRADE mode and open the keystore:

ALTER PLUGGABLE DATABASE SALES OPEN UPGRADE;
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE CONTAINER=all;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY [keystore-password] CONTAINER=all;

Switch to the SALES PDB and complete the downgrade:

ALTER SESSION SET CONTAINER=SALES;
SET TERMOUT ON ECHO ON TIMING ON
SPOOL /home/oracle/sales_catreload.log
@?/rdbms/admin/catrelod.sql
SPOOL OFF

Restart, recompile, and gather fresh stats:

ALTER PLUGGABLE DATABASE SALES CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE SALES OPEN;
@?/rdbms/admin/utlrp.sql
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECT_STATS;

Check the state of the Oracle Data Dictionary:

SET SQLFORMAT ANSICONSOLE LINES 300
SELECT COMP_ID, COMP_NAME, VERSION, STATUS FROM DBA_REGISTRY ORDER BY MODIFIED;

And there you have it. Not exactly super easy, which is why I highly recommend looking at Bare Metal DB Systems or Exadata DB Systems if you are required to be able to do downgrades (or be prepared to use Data Pump instead).

Other Posts in This Series