Upgrade and profile scripts

In short: When upgrading an Oracle database to any release, I recommend that you:

  • Remove the glogin.sql file from the target Oracle Home (sqlplus/admin subfolder).
  • Remove login.sql from the current directory, if present.
  • Unset the SQLPATH environment variable.

That’s it. Shortest blog post ever…

If you want a little more detail, read on. This blog post was prompted by a tweet I saw the other day. I vaguely remember from my days outside Oracle (in the real world) that the upgrade run book I used had similar steps.

What If

What might happen if you don’t do it? AutoUpgrade might fail with:

-------------------------------------------------
Errors in database [CDB1]
Stage     [DBUPGRADE]
Operation [STOPPED]
Status    [ERROR]
Info    [
Error: UPG-1400
UPGRADE FAILED [CDB1]
Cause: Database upgrade failed with errors
For further details, see the log file located at /home/oracle/upg_logs/CDB1/CDB1/100/autoupgrade_20200804_user.log]

A command line upgrade might fail with:

Unexpected error encountered in catctlMain; Error Stack Below; exiting
Died at /u01/app/oracle/product/19/rdbms/admin/catctl.pl line 9188.
 at /u01/app/oracle/product/19/rdbms/admin/catctl.pl line 9188.
    main::catctlDie("\x{a}Unexpected error encountered in catconInit; exiting\x{a} No chil"...) called at /u01/app/oracle/product/19/rdbms/admin/catctl.pl line 3727
    main::catctlDBLogon("/u01/app/oracle/product/19/rdbms/admin", "/home/oracle/upg_logs/CDB1/CDB1/100/dbupgrade", "catupgrd20200804112601", "CDB\$ROOT", 0, 4) called at /u01/app/oracle/product/19/rdbms/admin/catctl.pl line 3782
    main::catctlLogon() called at /u01/app/oracle/product/19/rdbms/admin/catctl.pl line 1422
    main::catctlMain() called at /u01/app/oracle/product/19/rdbms/admin/catctl.pl line 1370
    eval {...} called at /u01/app/oracle/product/19/rdbms/admin/catctl.pl line 1368

------------------------------------------------------
CATCTL FATAL ERROR
------------------------------------------------------

And Database Upgrade Assistant might fail like this:

[Screenshot: Error message when DBUA fails due to ORA-04023]

And when you dig a little further into the logs you will find ORA-04023: Object SYS.STANDARD could not be validated or authorized.

Notice that I write that the error might happen. That is because it all depends on what you put in glogin.sql or login.sql. Some things might work – but some certainly won’t. One command that is often used is SET SERVEROUT ON, and that will definitely break the upgrade.

Why

glogin.sql is officially named the site profile script and login.sql is the user profile script. As mentioned in the documentation, they are executed each time you start SQL*Plus and can contain anything that you would type in SQL*Plus. Many of the commands you use in SQL*Plus rely on functionality in the database. Take, for example, SET SERVEROUT ON, which uses the database package DBMS_OUTPUT.
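To illustrate, a profile script could look something like this (a made-up example; your glogin.sql or login.sql will differ):

-- hypothetical glogin.sql / login.sql content
SET SERVEROUT ON SIZE UNLIMITED
SET LINESIZE 200
SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER> "
DEFINE _EDITOR=vi

Every one of these commands runs silently each time SQL*Plus starts – and the first one requires the database package DBMS_OUTPUT to be in a usable state.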

When a database is upgraded, it is started in UPGRADE mode, which changes the behaviour of the database a lot. Further, the upgrade needs to change a lot of functionality in the database. It does so by calling SQL*Plus to execute a script – and it must execute many scripts. Simplified, it looks something like this:

sqlplus / as sysdba @upgrade.sql

If you define a profile script during the upgrade, you are effectively injecting code that is executed before each upgrade script. And if your profile script tries to use functionality that doesn’t work at that point in time, an error occurs. Some scripts are set to exit on SQLERROR, which will break the upgrade. Others ignore the error, only to fail later on when the log file is inspected for unexpected error messages.

Imagine what happens when the upgrade tries to replace the package DBMS_OUTPUT and, at the same time, a profile script issues SET SERVEROUT ON. We are trying to use the very functionality that is being replaced.

How

You can use this little example to avoid the problem:

export TARGET_ORACLE_HOME=/u01/app/oracle/product/19
mv $TARGET_ORACLE_HOME/sqlplus/admin/glogin.sql $TARGET_ORACLE_HOME/sqlplus/admin/glogin.sql.backup
mv login.sql login.sql.backup
export SQLPATH=
# Now start the upgrade using the preferred method
java -jar autoupgrade.jar ....
# Or
cd $TARGET_ORACLE_HOME/bin
dbupgrade
# Or
$TARGET_ORACLE_HOME/bin/dbua
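
Once the upgrade has completed, you can simply put the profile scripts back (a small sketch that just reverses the steps above):

mv $TARGET_ORACLE_HOME/sqlplus/admin/glogin.sql.backup $TARGET_ORACLE_HOME/sqlplus/admin/glogin.sql
mv login.sql.backup login.sql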

What Now

AutoUpgrade already has a simple sanity check on the glogin.sql but it is not bulletproof. If your glogin.sql is way off you might get an error:

[Screenshot: AutoUpgrade error message due to a bad glogin.sql file – Validating glogin.sql file content]

The other options (command line or DBUA) have no checks at all.

We are working on making our documentation clearer on this problem, so expect to see an update coming soon.

Upgrading in the cloud – VM DB Systems – 12.2.0.1 PDB to 19c

In this blog post I will show you how to upgrade a 12.2.0.1 PDB to 19c when it is running in a VM DB System. I have a PDB called SALES, and it is running Standard Edition 2 (yes, this procedure works for Standard Edition 2 as well). In a previous blog post I went over the restrictions that apply to VM DB Systems, and with those in mind I can create a high-level plan for the upgrade:

  1. Check plug-in compatibility in the new release CDB
  2. Use AutoUpgrade to analyze the PDB
  3. Create a refreshable PDB on the new system
  4. Downtime starts
  5. Use AutoUpgrade to execute pre-upgrade fixups
  6. Refresh the new PDB
  7. Upgrade the new PDB
  8. Test and wrap-up

The refreshable PDB feature was introduced in Oracle Database release 12.2, so you can’t use this procedure for lower versions. Your source database must be at least on release 12.2. If not, you must clone the entire database in the traditional manner, which will be discussed in a later blog post.
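
If you are in doubt about the source release, a quick check on the source CDB will tell you (a simple sketch using standard views and parameters):

SELECT version FROM v$instance;
SHOW PARAMETER compatible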

Check plug-in compatibility of the PDB in the new release CDB

I will start by checking whether my PDB can be plugged into the new release CDB. On the source CDB, describe the PDB to generate a manifest file:

EXEC DBMS_PDB.DESCRIBE('/home/oracle/sales.xml', 'SALES');

Transfer the XML file to the target system and check plug-in compatibility. Instead of transferring the file between the two systems you can also use the File Storage Service to create a shared file system that can be accessed via a NFS client.
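
For a simple one-off copy, scp will do (a sketch; the target host name is made up):

scp /home/oracle/sales.xml oracle@target-host:/home/oracle/sales.xml

Then, on the target CDB, run the compatibility check: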

SET SERVEROUT ON

BEGIN
    IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/home/oracle/sales.xml', 'SALES') THEN
        DBMS_OUTPUT.PUT_LINE('SUCCESS');
    ELSE
        DBMS_OUTPUT.PUT_LINE('ERROR');
    END IF;
END;
/

Look at the result:

SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

As expected, I do see some plug-in violations. Obviously, there is a difference in database release and patch level; the database upgrade will take care of those issues. Also, you get a warning about COMPATIBLE being different. This is expected, and the COMPATIBLE setting will be changed automatically once we plug the PDB into the new release CDB. The underscore parameters are added automatically when you create a VM DB System in OCI, and they are there for a good reason.

RECYCLEBIN is on in my new release CDB – I can live with that – and finally I have unencrypted tablespaces. For this test it is not critical, but it should never be so in a real database. Remember, for a plug-in operation to complete (i.e. you can open the PDB in READ WRITE mode and RESTRICTED=NO) there must not be any errors. Warnings are accepted but should ideally be fixed as well.

Use AutoUpgrade to analyze the PDB

I need to create a config file for AutoUpgrade, and I will give it a better name:

cd
java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -create_sample_file config
mv sample_config.cfg upg19_sales.cfg

Edit the config file. See the AutoUpgrade documentation for a description of the parameters. Note that we can’t specify a target_home because it is not present yet; instead, you must specify target_version. Also, I specify that only the SALES PDB should be analyzed, using the pdbs parameter. AutoUpgrade will always check CDB$ROOT and PDB$SEED regardless of the pdbs setting. This is by design. I ended up with this config file:

global.autoupg_log_dir=/home/oracle/upg_logs

upg1.dbname=CDB1
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1
upg1.sid=CDB1
upg1.log_dir=/home/oracle/upg_logs
upg1.target_version=19
upg1.pdbs=SALES

Now, analyze the database:

java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config ~/upg19_sales.cfg -mode analyze

Check the analyze result. You can disregard the checks from CDB$ROOT, PDB$SEED, and all PDBs other than SALES. Also, look only at the issues where STAGE=PRECHECKS and fixup_available=NO:

more ~/upg_logs/$ORACLE_SID/100/prechecks/*preupgrade.log

In OCI I receive a warning from the check TDE_IN_USE. It is expected, and you don’t have to do anything; the newly provisioned target system is properly configured. Also, OCI itself has a habit of setting a lot of underscore parameters. Just let them be.

Create a refreshable PDB on the new system

While I wait for downtime to start, I will create a refreshable PDB. I need to copy the PDB to the target system, and to avoid doing that during downtime, I will use the refreshable PDB feature. When I refresh it, it only needs to apply the most recent changes from the source PDB, which is obviously much faster than a full clone. I need a common user in the source CDB that can be used by the database link over which the cloning will take place:

CREATE USER c##clone_user IDENTIFIED BY FOObar11## CONTAINER=ALL;
GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO c##clone_user CONTAINER=ALL; 

In the target CDB, create a database link that points to the source CDB:

CREATE DATABASE LINK clone_link CONNECT TO c##clone_user IDENTIFIED BY FOObar11## USING '10.0.1.45/CDB1_fra1jf.sub02121342350.daniel.oraclevcn.com';

Check that it works:

SELECT count(*) FROM all_objects@clone_link;

If you get ORA-02085, execute:

ALTER SESSION SET GLOBAL_NAMES=FALSE;

Let’s create the refreshable PDB. I will set it to REFRESH MODE MANUAL, but you could also configure it to refresh automatically at regular intervals, e.g. every 10 minutes, using the REFRESH MODE EVERY 10 MINUTES clause. The keystore password is needed for security reasons. It is the same password as you specified for SYS when you created the system (parameter --admin-password):

CREATE PLUGGABLE DATABASE SALES FROM SALES@CLONE_LINK
PARALLEL 4
REFRESH MODE MANUAL
KEYSTORE IDENTIFIED BY "...";

If you get this error:

CREATE PLUGGABLE DATABASE SALES FROM SALES@CLONE_LINK
 *
ERROR at line 1:
ORA-19505: failed to identify file "+DATA/CDB1_FRA1JF/A4EBB0BCBC427D8FE0532D01000A9AEC/DATAFILE/users.275.1039631039"
ORA-15173: entry 'CDB1_FRA1JF' does not exist in directory '/'

You are missing patch 29469563. Now sit back, relax, and wait for downtime to start. You can periodically refresh the PDB to further minimize the final refresh time:

ALTER PLUGGABLE DATABASE SALES REFRESH;
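
If you want to keep an eye on the clone, you can check the refresh details from the target CDB (a sketch; it assumes your release exposes the refresh columns in DBA_PDBS), and you can even switch to automatic refreshes later on if you change your mind:

SELECT pdb_name, refresh_mode, refresh_interval, last_refresh_scn
FROM   dba_pdbs
WHERE  pdb_name = 'SALES';

ALTER PLUGGABLE DATABASE SALES REFRESH MODE EVERY 10 MINUTES;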

Downtime starts

Now it is time to kick users off. Drain the database of connections – or actually just drain the PDB – and prevent users from accessing the database.
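
How you drain connections depends on your environment; one simple sketch is to put the source PDB in restricted mode and then check which sessions still need to be disconnected:

ALTER SESSION SET CONTAINER=SALES;
ALTER SYSTEM ENABLE RESTRICTED SESSION;

SELECT sid, serial#, username, program
FROM   v$session
WHERE  con_id = SYS_CONTEXT('USERENV', 'CON_ID')
AND    username IS NOT NULL;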

Use AutoUpgrade to execute pre-upgrade fixups

Now we can run the pre-upgrade fixups to prepare the database for the upgrade. I will reuse the config file I created earlier, but change the processing mode to fixups:

java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config ~/upg19_sales.cfg -mode fixups

You should re-check the analyze log file to ensure that no new issues are reported. Note how the path has changed because of a new AutoUpgrade jobid. It increments by one on each run:

more ~/upg_logs/$ORACLE_SID/101/prechecks/*preupgrade.log

Refresh the new PDB

I suggest that you create a tracking table. This way you can ensure that you have all the latest changes in the new PDB:

ALTER SESSION SET CONTAINER=SALES;
CREATE USER UPG_TRACKING IDENTIFIED BY FOObar11##;
ALTER USER UPG_TRACKING QUOTA UNLIMITED ON USERS;
CREATE TABLE UPG_TRACKING.SUCCESS (C1 NUMBER);
INSERT INTO UPG_TRACKING.SUCCESS VALUES (42);
COMMIT;

Shut down the PDB to ensure no one logs on accidentally:

ALTER PLUGGABLE DATABASE SALES CLOSE IMMEDIATE;

And do the final refresh:

ALTER PLUGGABLE DATABASE SALES REFRESH;

Upgrade the new PDB

Now we are done at the source system. We have made the final refresh, and all the data has been transferred to the new PDB. It is time to convert the refreshable PDB into a regular PDB and open it in UPGRADE mode:

ALTER PLUGGABLE DATABASE SALES REFRESH MODE NONE;
ALTER PLUGGABLE DATABASE SALES OPEN UPGRADE;

I will double check that all my changes are in my new PDB:

ALTER SESSION SET CONTAINER=SALES;
SELECT * FROM UPG_TRACKING.SUCCESS;

Right now, you can’t use AutoUpgrade for unplug/plug upgrades when the source and target CDB are not on the same host, so we will do it the old-fashioned way (which is nice, as it refreshes your skills). I have four CPUs in my system and I only need to upgrade this PDB, so let’s ensure that all CPUs are allocated to the upgrade – that’s the option -N 4.

mkdir -p ~/upg_logs/SALES
dbupgrade -c SALES -l ~/upg_logs/SALES -N 4

But it also reminds you how nice AutoUpgrade is. It does so many things automatically. I will only do the essential things. For a real-life upgrade you should consult the upgrade guide to get the full procedure.

Open PDB and set it to auto-start:

ALTER PLUGGABLE DATABASE SALES OPEN;
ALTER PLUGGABLE DATABASE SALES SAVE STATE;

Recompile objects:

ALTER SESSION SET CONTAINER=SALES;
@$ORACLE_HOME/rdbms/admin/utlrp

Check the upgrade with the post-upgrade status tool:

@$ORACLE_HOME/rdbms/admin/utlusts.sql SALES

Check the state of the Oracle Data Dictionary. If you get an error from the SET command, you are probably using SQL*Plus. You should try out SQLcl; it is so much nicer.

SET SQLFORMAT ANSICONSOLE LINES 300
SELECT COMP_ID, COMP_NAME, VERSION, STATUS FROM DBA_REGISTRY ORDER BY MODIFIED;

Because we ran the analyze on the source system, there are no post-upgrade fixups available, and I didn’t use AutoUpgrade for the actual upgrade (which would have figured it out automatically). You need to look in the pre-upgrade analyze log file on the source system. Again, I only need to look at the issues from the SALES PDB:

more ~/upg_logs/$ORACLE_SID/101/prechecks/*preupgrade.log

You could also look at the HTML file placed in the same directory if you need something more readable and a better description of the issues. Otherwise, have a look at the My Oracle Support document “Database Preupgrade tool check list (Doc ID 2380601.1)”. Even though it says that a fixup is available, you must do it manually. In my case it was:

  • Dictionary stats
  • Fixed object stats
  • Time zone file upgrade

So, let’s do it:

EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECT_STATS;

And finally upgrade the time zone file:

SET SERVEROUTPUT ON
@$ORACLE_HOME/rdbms/admin/utltz_upg_check.sql
@$ORACLE_HOME/rdbms/admin/utltz_upg_apply.sql
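
You can verify the result by checking which time zone file the PDB is now using (both queries are standard; the exact version you see depends on your target release):

SELECT * FROM v$timezone_file;
SELECT property_value FROM database_properties WHERE property_name = 'DST_PRIMARY_TT_VERSION';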

Clean up the tracking user and database link:

DROP USER UPG_TRACKING CASCADE;
ALTER SESSION SET CONTAINER=CDB$ROOT;
DROP DATABASE LINK clone_link;

Since we have moved the database to a new host, you must update your connect strings and tnsnames.ora files to point to the new server and service name.
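
If you are unsure which service name to use in the new connect strings, you can list the services registered for the PDB (a quick sketch):

ALTER SESSION SET CONTAINER=SALES;
SELECT name, network_name FROM v$services;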

Test and wrap-up

Now it is also time to let in the application testers, start a level 0 backup, and do whatever else is on your runbook. Finally, I can now delete the source VM DB System:

oci db system terminate --db-system-id "..."

That should be it. Should something happen during the upgrade, it is really easy to fall back: just re-open the source PDB and you are back in business. Speaking of fallback, one thing that you must keep in mind is that once you plug your PDB into a higher release CDB, the COMPATIBLE parameter is automatically raised – no questions asked. And that prevents you from downgrading the database, should it ever be necessary. If you need to go back to the previous release, you must use Data Pump and do a regular export/import.
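
For the simple fallback (as long as you have not yet terminated the source VM DB System), re-opening the source PDB is all it takes:

ALTER PLUGGABLE DATABASE SALES OPEN;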

Upgrading in the cloud – VM DB Systems

This is the first post in a series on the entry-level database system in OCI – the Virtual Machine DB System (VM DB System). You don’t get the same specs as with Exadata DB Systems and Bare Metal DB Systems, but it is much more affordable and still a very good platform. There is good tooling that allows most operations to be fully automated. Upgrades can be made in two ways:

  1. Automated
  2. Manual

Automated

Since November 2020 it is possible to upgrade a database to Oracle Database 19c using the cloud tooling. However, your DB System must meet these requirements:

  • Oracle Linux 7
  • Grid Infrastructure (GI) 19 (only for systems with ASM)

To check the OS version:

[oracle@host]$ cat /etc/os-release

To check GI version:

[grid@host]$ crsctl query crs activeversion

If your system doesn’t meet these requirements, either:

  • Move the database to a new DB System that meets these requirements via cloning or backup/restore, or
  • Use the manual approach

At the time of writing, if you must upgrade to any version other than Oracle Database 19c, you have to do it manually.

Manual

There are some limitations that you must be aware of that make manual upgrading slightly different.

  • It is not supported to install a second database Oracle Home. You must use the one that is supplied when the system is provisioned. If you need a new database Oracle Home, you must provision a new system.
  • It is not supported to upgrade the Grid Infrastructure.
  • It is not supported to upgrade the operating system.
  • It is not supported to drop the existing CDB and create your own. It is, however, supported to drop the pre-created CDB and replace it with a backup (and we will discuss this option later).
  • You can only provision systems with multitenant architecture.

Having said that, there are some advantages to doing manual upgrades:

  • You are in full control
  • You can decide exactly how to perform the upgrade
  • It can be faster because you can customize the upgrade, and you can avoid some of the tasks that the tooling does.

But the downside is that you will need to move the database to a new VM DB System.

Choose a Method

Automated upgrade is easy; manual upgrade is faster and gives you more control. When you have decided on a method, read the other blog posts in the series for much more information and demos.
