Zero Downtime Migration – Preparations

In the following posts I will demo how to migrate a database into OCI. To make the demo as simple as possible my source database will already be running in OCI and my target will be a VM DB System.

Prepare source environment

I will create a VM DB System that can act as my source database. I will call it srchost and the database is called CDB1. Using OCI CLI it can be done like this:

oci db system launch \
  --compartment-id "..." \
  --availability-domain "..." \
  --subnet-id "..." \
  --shape "VM.Standard2.4" \
  --cpu-core-count 4 \
  --database-edition "ENTERPRISE_EDITION" \
  --admin-password "..." \
  --ssh-authorized-keys-file "/path/to/key-file.pub" \
  --license-model "BRING_YOUR_OWN_LICENSE" \
  --db-name "CDB1" \
  --pdb-name "SALES" \
  --storage-management "ASM" \
  --node-count 1 \
  --initial-data-storage-size-in-gb 2048 \
  --display-name "SRCHOST" \
  --hostname "SRCHOST" \
  --db-version "19.0.0.0"

Once completed, I can log on as opc and switch to root and copy the authorized_keys file to allow SSH connection as oracle:

[opc@srchost]$ sudo su -
[root@srchost]$ rm -f /home/oracle/.ssh/authorized_keys
[root@srchost]$ cp /home/opc/.ssh/authorized_keys /home/oracle/.ssh/authorized_keys
[root@srchost]$ chown -R oracle:oinstall /home/oracle/.ssh

Now, I will ensure that the database is in ARCHIVELOG mode; it is a requirement for using Data Guard:

SELECT log_mode FROM v$database;
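
If the query returns NOARCHIVELOG, you must enable it before you can proceed. A minimal sketch of doing that (it requires a short outage):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;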

Since my database is running 12.2 or higher I have to ensure that there is a TDE Keystore. This applies even if the database is not encrypted. You don’t need a license to create a TDE Keystore – not until you start encrypting stuff. The keystore must be OPEN and the type is either AUTOLOGIN, LOCAL_AUTOLOGIN or PASSWORD. In a CDB, this applies to CDB$ROOT and all PDBs:

SELECT con_id, status, wallet_type FROM v$encryption_wallet;

If status is OPEN_NO_MASTER_KEY it means that no TDE master encryption key has been created and you will need to create one. You can also find instructions on how to create a TDE keystore in the ZDM documentation.
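
As an illustration only, creating a master key in a password-based keystore that is already OPEN could look like the statement below; the keystore password placeholder is mine, and you should follow the ZDM documentation for your exact configuration:

ADMINISTER KEY MANAGEMENT SET KEY
   IDENTIFIED BY "[keystore-password]"
   WITH BACKUP
   CONTAINER = ALL;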

Prepare target environment

In my demo I will be targeting a VM DB System which I will call tgthost. For the migration to work, I have to create a placeholder database on the target host. During migration the placeholder database is overwritten by the source database. On a VM DB System this is easy; I just use the database that gets created automatically as placeholder.

DB_NAME must be the same in the two databases, and hence in my demo I must set db-name in the OCI CLI command below to CDB1 (the name of my source database). In contrast, DB_UNIQUE_NAME must be different, but this will likely not be a problem because OCI automatically generates a semi-unique DB_UNIQUE_NAME (if you are a strong believer in Murphy’s Law you should double-check it, as shown below).
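
A quick sanity check that shows both values on either database:

SELECT name, db_unique_name FROM v$database;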

The target database patch level can be the same or higher than the source. If higher, you just have to manually execute datapatch after the switch-over (I will put this information into a later blog post). By setting db-version to 19.0.0.0 I automatically get the latest available Release Update.
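
As a sketch of that post-switchover step (run from the target Oracle Home against the new primary):

[oracle@tgthost]$ export ORACLE_SID=CDB1
[oracle@tgthost]$ $ORACLE_HOME/OPatch/datapatch -verbose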

Also, the SYS password has to match in the two databases, so be sure to set admin-password to your source database SYS password. Finally, you should set storage-management to ASM. This will cause OCI to install Grid Infrastructure as well and set up a SCAN listener.

In my demo I end up with this OCI CLI command:

oci db system launch \
  --compartment-id  "..." \
  --availability-domain "..." \
  --subnet-id "..." \
  --shape "VM.Standard2.4" \
  --cpu-core-count 4 \
  --database-edition "ENTERPRISE_EDITION" \
  --admin-password "..." \
  --ssh-authorized-keys-file /path/to/key-file.pub \
  --license-model "BRING_YOUR_OWN_LICENSE" \
  --db-name "CDB1" \
  --pdb-name "SALES" \
  --storage-management "ASM" \
  --node-count 1 \
  --initial-data-storage-size-in-gb 2048 \
  --display-name "tgthost" \
  --hostname "tgthost" \
  --db-version "19.0.0.0"

As opc, switch to root and enable oracle to log on via SSH:

[opc@tgthost]$ sudo su -
[root@tgthost]$ rm -f /home/oracle/.ssh/authorized_keys
[root@tgthost]$ cp /home/opc/.ssh/authorized_keys /home/oracle/.ssh/authorized_keys
[root@tgthost]$ chown -R oracle:oinstall /home/oracle/.ssh

The requirements for the TDE keystore apply to the target environment as well, but since I am using an OCI DB System, it will be set up correctly when the system is created.

Configure Connectivity

I need to ensure that the source host can resolve the network name of the target host. It is important to add two entries – one with the host name and one with the SCAN name (they should both point to the target host):

[root@srchost]$ echo -e "[ip address] tgthost" >> /etc/hosts
[root@srchost]$ echo -e "[ip address] tgthost-scan" >> /etc/hosts

Also, I need to ensure that I can connect to the target database from the source host over SQL*Net. Just use the default service of the target CDB, and you can get it using lsnrctl:

[oracle@tgthost]$ lsnrctl status
[oracle@srchost]$ sqlplus system@tgthost-scan/[target-cdb-service-name]

Now, I do it the other way around. Ensure that the target host can resolve the network name of the source host:

[root@tgthost]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@tgthost]$ echo -e "[ip address] srchost-scan" >> /etc/hosts

Ensure that I can connect to the source database from the target host:

[oracle@srchost]$ lsnrctl status
[oracle@tgthost]$ sqlplus system@srchost-scan/[source-cdb-service-name]

Create Sample Data

I will connect to the source database:

[oracle@srchost]$ export ORACLE_SID=CDB1
[oracle@srchost]$ export ORACLE_PDB_SID=SALES
[oracle@srchost]$ sqlplus / as sysdba

Note that ORACLE_PDB_SID works from 18.5.0 onwards. For lower versions you can just use ALTER SESSION SET CONTAINER=SALES instead. Now I create some sample data, so I can verify that the migration actually works:

CREATE USER zdmtest IDENTIFIED BY [your-password];
GRANT CREATE TABLE, CONNECT TO zdmtest;
GRANT SELECT ON v_$instance TO zdmtest;
GRANT SELECT ON v_$database TO zdmtest;
ALTER USER zdmtest DEFAULT TABLESPACE users;
ALTER USER zdmtest QUOTA UNLIMITED ON users;

CREATE TABLE zdmtest.items (
   id      NUMBER,
   item    VARCHAR2(20),
   price   NUMBER
);
INSERT INTO zdmtest.items VALUES (1, 'Apple', 2);
INSERT INTO zdmtest.items VALUES (2, 'Banana', 1);
INSERT INTO zdmtest.items VALUES (3, 'Orange', 4);
COMMIT;
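
After the migration I can run a quick check like this against the target (connected as zdmtest) to confirm which host I am on and that the rows made it across:

SELECT host_name FROM v$instance;
SELECT COUNT(*) FROM zdmtest.items;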

That’s It

My source database is now ready to be migrated to OCI. It will be migrated to a new VM DB System that I just created. In the next blog post I will install and configure ZDM. Stay tuned!

Other Blog Posts In This Series

Zero Downtime Migration

When you need to migrate into OCI we have a cool – and free – tool that you can use: Zero Downtime Migration (ZDM).

You can use Zero Downtime Migration to easily migrate to Oracle Cloud Infrastructure - OCI

In short: ZDM builds a copy of your database in OCI. It keeps the OCI database in sync with the on-prem database until you are ready to complete the migration. Then connections are simply switched over to the OCI database. ZDM will take care of all the steps and ensure nothing is lost in the process.

In this blog post series, I will take you through the entire process using version 21 (the latest at time of writing). In the end you will know all there is to know – and you can start migrating your databases into OCI.

Source Database

Your source database can be located:

  • On-prem
  • OCI Classic (you know, our old cloud)
  • OCI (the new cloud, useful when you want to migrate between regions or locations or between system types)

Requirements:

  • The source database must be running 11.2.0.4 or newer
  • Source platform must be Linux

Options

You can migrate your database in two ways – each of them in an online and offline manner:

  • Physical Online The entire database is migrated by restoring a backup of the database and instantiating it as a standby database. The standby database is kept in sync with redo apply. It is online because the only downtime needed is to perform a regular switchover.
  • Physical Offline The entire database is migrated by restoring a backup of the database. It is offline because there is downtime while the backup is created, transferred and restored in OCI.
  • Logical Online One or more schemas are migrated using Data Pump. In addition, Oracle GoldenGate is used to keep the OCI database in sync to avoid downtime. It is online because the only downtime needed is to switch over the users to the new database.
  • Logical Offline Like the online option but without Oracle GoldenGate on top. It is offline because the database is unavailable during the export and import operation.

Free? Easy?

I know what you are thinking right now. First, you say it is a free tool, and now you mention Oracle GoldenGate. We don’t have a license for Oracle GoldenGate. Possibly, you are also thinking that Oracle GoldenGate is complex. Let me address that:

Free

If you migrate your database into OCI (currently ExaCC and Exadata on-prem are excluded), you can use Oracle GoldenGate (for migration purposes only) without paying a separate license – provided you use the OCI Marketplace image. It says:

Oracle GoldenGate for Oracle – Database Migrations can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using the following tools:

  • Oracle Zero Downtime Migration
  • Oracle Cloud Infrastructure Database Migration

So no license for Oracle GoldenGate to handle the migration. You still have to pay for the underlying compute instance – but no license.

Complex

To install Oracle GoldenGate simply follow the wizard to deploy your installation. I will show this in a later blog post. Afterwards, you need to configure the Extract and Replicat processes, but ZDM takes care of that for you. If all goes well, you won’t even have to log into the Oracle GoldenGate Hub.

Yes, Oracle GoldenGate is a complex product, but for this purpose you should not worry. I promise you. It is super easy.

Comparison

Physical Online Physical Offline Logical Online Logical Offline
TARGET PLATFORMS
ATP-D x x
ATP-S x x
ADW-D x x
ADW-S x x
ExaCS x x x x
ExaCC x x x x
Exadata (on-prem) x x x x
Bare Metal DBCS x x x x
Virtual Machine DBCS x x x x
RELEASE AND EDITIONS
SE2 x x x
EE x x x x
Migrate from SE2 to EE x x
Migrate to same version x x x x
Migrate to same version, higher patch level x x x x
Migrate to higher version (see note 1) x x
ARCHITECTURE
Non-CDB x x x x
CDB (see note 2) x x x x
Migrate into PDB – no extra downtime x x
Migrate into PDB – with extra downtime (see note 3) x x
Single instance x x x x
RAC One Node (see note 4) x x x x
RAC x x x x
Migrate from single instance to RAC x x x x
Migrate from RAC to single instance x x
ENCRYPTION
Unencrypted (see note 5) x x x x
Encrypted x x x x
Encrypt data-in-transit during migration (see note 6) x x x x

Note 1: When doing physical migrations, you can’t migrate directly into a higher release. However, you are free to upgrade the database afterwards. But that will incur additional downtime.

Note 2: When migrating databases using the physical option all PDBs are migrated. If you use the logical approach, you migrate each PDB individually and you can choose which you want.

Note 3: When you migrate databases using the physical option, you can optionally convert the database into a PDB afterwards. However, that will incur additional downtime while the noncdb_to_pdb.sql script is executed.

Note 4: RAC One Node is always migrated into RAC.

Note 5: All databases must be encrypted in OCI. An unencrypted database is always encrypted when it is created in OCI.

Note 6: A combination of techniques is in play here (dump file encryption, SQL*Net encryption, HTTPS, SSH/rsync) and it depends a lot on how you choose to carry out the migration. As an example, when doing a physical online migration of an unencrypted source database, the data files are encrypted on-the-fly as they are created in OCI. The initial backup of the source database is sent over an encrypted connection to OCI Object Storage, and redo is transferred over encrypted SQL*Net.

Network Connectivity

  • The ZDM service host needs SSH access (22) to the source database host. Unless the target database is an Autonomous Database, it also needs SSH access to the target database host.
  • If you plan on using OCI Object Storage as a staging area, the source database needs access to OCI Object Storage over HTTPS (443). Unless the target database is an Autonomous Database, the same applies to the target database host.
  • A SQL*Net connection (1521) is needed between the two database hosts.
  • If you will be using Oracle GoldenGate as well, you need a SQL*Net connection (1521) from the GoldenGate Hub to both the source and target databases. In addition, you need HTTPS (443) from the ZDM service host to the GoldenGate Hub.

And then there are all the special cases with proxy and Bastion host which I will not cover here.


Other Blog Posts in This Series

This is the introduction blog post in this series. Over the next days the other blog posts will follow. Stay tuned!

Upgrading in the cloud – VM DB Systems – What about downgrade?

In a previous blog post I showed how you could upgrade a 12.2 PDB by plugging it into a 19c CDB. But what about downgrade? Yeah, downgrade. You know, that really cool feature that you never practice, but you know you should.

In the previous blog post, I used the CDB that gets automatically created when you deploy a new 19c VM DB System and it comes with COMPATIBLE set to 19.0.0 – the default. When you provision a new VM DB System there is no way to control that parameter. Thus, when I plug my old release PDB into the new release CDB, the COMPATIBLE parameter is automatically raised, and I have lost the possibility of doing a downgrade. The only option to get back to the old release would be a Data Pump export.

If you want to preserve the possibility of doing a database downgrade, I strongly recommend you switch to Bare Metal DB Systems or ExaCS which are much more flexible. But if you insist on using VM DB Systems there is an option – it is cumbersome – but doable. And believe it or not – after working with Oracle Database for so many years it was the first time ever that I had to do a downgrade – not even in a lab or a test environment (which became fairly obvious after I had asked for advice the 100th time that day).

Drop pre-created 19c database

To get a 19c CDB with a non-default COMPATIBLE setting we will drop the pre-created database and replace it with a backup that has the proper COMPATIBLE setting. This is supported and the same approach is used in the whitepaper “Hybrid Data Guard to Oracle Cloud Infrastructure”. It is a good read and it has a lot of script examples that I stole… oh… got inspired by.

Connect to the new release VM DB System and ensure that the environment variable ORACLE_UNQNAME is set to the DB_UNIQUE_NAME of the database:

echo $ORACLE_UNQNAME

If it is not set, you can get it from srvctl:

srvctl config database
export ORACLE_UNQNAME=...

Generate a script that can delete all data, temp, redo log and control files:

SET HEADING OFF LINESIZE 999 PAGESIZE 999 FEEDBACK OFF TRIMSPOOL ON TIMING OFF
SPOOL /tmp/db_replace_files.lst
SELECT 'asmcmd rm ' || name FROM V$DATAFILE
UNION ALL
SELECT 'asmcmd rm ' || name FROM V$TEMPFILE
UNION ALL
SELECT 'asmcmd rm ' || member FROM V$LOGFILE
UNION ALL
SELECT 'asmcmd rm ' || name FROM V$CONTROLFILE;
SPOOL OFF
host chmod 777 /tmp/db_replace_files.lst
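
The spooled file will contain lines similar to the ones below (the ASM paths are made up for illustration; yours will reflect your own DB_UNIQUE_NAME):

asmcmd rm +DATA/CDB1_XXX/DATAFILE/system.262.1039629549
asmcmd rm +DATA/CDB1_XXX/TEMPFILE/temp.263.1039629561
asmcmd rm +RECO/CDB1_XXX/ONLINELOG/group_1.257.1039629493
asmcmd rm +RECO/CDB1_XXX/CONTROLFILE/current.256.1039629479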

You must edit the script and get rid of the unneeded lines. Next, stop the database:

srvctl stop database -d $ORACLE_UNQNAME -o immediate

And log on as grid and delete the files using the script we just created:

. /tmp/db_replace_files.lst

As oracle, now restart the database instance in NOMOUNT mode (can’t really go further since we nuked the control files) and set the COMPATIBLE to the same setting as the old release CDB.

srvctl start database -db $ORACLE_UNQNAME -o NOMOUNT

sql / as sysdba
SQL> ALTER SYSTEM SET COMPATIBLE='12.2.0' SCOPE=SPFILE;
SQL> CREATE PFILE FROM SPFILE;

srvctl stop database -db $ORACLE_UNQNAME -o immediate
srvctl start database -db $ORACLE_UNQNAME -o nomount

sql / as sysdba
SQL> SHOW PARAMETER COMPATIBLE

Now we have a new release instance running with the old COMPATIBLE setting. Obviously, it is of no use – yet.

Starting a 19c instance with a lower compatible setting

Create a backup of old release CDB

I will use the old release CDB as the source for my new release CDB and thus preserve my COMPATIBLE setting. Obviously, the old release CDB must be upgraded to the new release, so let us first ensure that the source CDB can actually be upgraded. Use AutoUpgrade in ANALYZE and FIXUPS mode which is described in a previous blog post. We must execute the ANALYZE and FIXUPS mode on the source system because the target system will only be able to open the database in UPGRADE mode, and these steps must be executed while the database is running in normal mode.

Next, I will create a File Storage Service that I can use to share files between the two VM DB Systems. The File Storage Service is really nice because the transfer speed in and out of the service depends on the network bandwidth of your VM DB System. So, if you add more CPUs to the system, you get more network bandwidth to the service automatically. It is really easy to create a File Storage Service and it is very well documented, so I will skip that part here. After that you can mount the file system in your systems (as opc):

sudo mkdir -p /mnt/db-downgrade-122
sudo mount x.x.x.x:/db-downgrade-122 /mnt/db-downgrade-122
sudo chmod 777 /mnt/db-downgrade-122

I can now take a backup of the CDB and store it directly in my NFS mount point:

rman target / 

RMAN> BACKUP DATABASE ROOT FORMAT '/mnt/db-downgrade-122/cdbroot_%U' PLUGGABLE DATABASE 'PDB$SEED' FORMAT '/mnt/db-downgrade-122/pdbseed_%U' PLUS ARCHIVELOG FORMAT '/mnt/db-downgrade-122/arch_%U';
RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/mnt/db-downgrade-122/cf_%U';

Since the database is encrypted, we also need a copy of the keystore (or wallet). An easy solution could be to put the keystore files into the File Storage Service but a safer approach is to transfer the files directly using scp. Although the keystore is protected by a password, it is still safer to keep them separate.

At time of writing, in OCI you can find the location of the keystore in sqlnet.ora.

cat $ORACLE_HOME/network/admin/sqlnet.ora | grep -i encryption_wallet

But that will change at some point in time because as of Oracle Database release 19c the sqlnet.ora parameter ENCRYPTION_WALLET_LOCATION is deprecated. You might have to look at the database parameter WALLET_ROOT instead.
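
Either way, the database itself can also tell you where the keystore currently is:

SQL> SHOW PARAMETER WALLET_ROOT
SQL> SELECT wrl_type, wrl_parameter FROM v$encryption_wallet;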

For now, just note down the location of the keystore.

Restore old release CDB and upgrade

Now we can restore the backup of the 12.2 database using the 19c binaries. This is supported and in fact newer releases of RMAN can always restore lower release backups. However, normally RMAN will try to open the database in normal mode, which we can’t do because of the version mismatch. I will use the NOOPEN keyword, which causes RMAN to leave the database in MOUNT mode so that I can manually open the database with RESETLOGS and in UPGRADE mode.

But first we must copy the keystore files from the source database to the new release DB System. In my example I am also copying the auto-login keystore which you shouldn’t do if you are using local auto-login keystores. In that situation the auto-login keystore should be re-created at the new release system:

cd [target-keystore-location]
scp -i [ssh-key] oracle@[source-host]:[source-keystore-location]/cwallet.sso cwallet.sso
scp -i [ssh-key] oracle@[source-host]:[source-keystore-location]/ewallet.p12 ewallet.p12

Now let’s do the restore and open the database in upgrade mode:

rman auxiliary /

RMAN> DUPLICATE DATABASE TO CDB1 AS ENCRYPTED NOOPEN SKIP PLUGGABLE DATABASE SALES BACKUP LOCATION '/mnt/db-downgrade-122/';

And in the end, you can see that RMAN does not open the database:

Using the NOOPEN keyword you can prevent RMAN from opening a database at the end of the restore and recover operation

Then, you can open the database in UPGRADE mode and with RESETLOGS option to complete the restore. Also, drop the skipped pluggable databases from the data dictionary:

ALTER DATABASE OPEN RESETLOGS UPGRADE;
DROP PLUGGABLE DATABASE SALES INCLUDING DATAFILES;

Finally, you can use AutoUpgrade in UPGRADE mode to upgrade the CDB to the new release. You now have a new target system running on 19c but with a lower COMPATIBLE setting.
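
A sketch of that final step (the config file is one you build for the CDB the same way as described in the previous blog post; the file name here is just an example):

java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config ~/upg19_cdb1.cfg -mode upgrade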

If you want to know more about the UPGRADE mode, have a look in the documentation.

Downgrading a PDB

Previously, we laid the groundwork for being able to do a downgrade. For the following, I am assuming that you have already upgraded your PDB to 19c and now you have ended up in a big doo doo and have to downgrade.

Downgrading in VM DB Systems is also slightly more complicated than on other systems. Remember that VM DB Systems only support one Oracle Home (the one that comes deployed automatically), which means that we must move the database back to the source system. For that operation we can’t use Data Guard, RMAN, or any other fancy approach because they only work when going to the same or a newer release. So, we will have to do an old-school cold copy of the database – and that requires additional downtime. But let’s get started!

I will create a file that gives me all the commands I need to copy the data files out of ASM and into my File Storage Service (you could also scp them directly to the old release system):

SET HEADING OFF LINESIZE 999 PAGESIZE 999 FEEDBACK OFF TRIMSPOOL ON TIMING OFF
SPOOL /tmp/db_downgrade_files.lst
SELECT 'asmcmd cp ' || name || ' ''/mnt/dbbackupstaging' || SUBSTR(name, INSTR(name, '/', -1 )) || '''' FROM V$DATAFILE
UNION
SELECT 'chown oracle:oinstall /mnt/dbbackupstaging' || SUBSTR(name, INSTR(name, '/', -1 )) FROM V$DATAFILE ORDER BY 1;
SPOOL OFF

And then I can proceed with the actual downgrade. I need to ensure that the unified audit trail is emptied before the downgrade:

ALTER PLUGGABLE DATABASE SALES CLOSE;
ALTER PLUGGABLE DATABASE SALES OPEN DOWNGRADE;
EXEC DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, FALSE);
SPOOL /tmp/db_downgrade.lst
SET TERMOUT ON TIMING ON SERVEROUT ON ECHO ON
@?/rdbms/admin/catdwgrd.sql
SPOOL OFF

Unplug the PDB. Because the PDB is encrypted, I have to specify a password that protects the sensitive information inside the manifest file:

ALTER SESSION SET CONTAINER=CDB$ROOT;
ALTER PLUGGABLE DATABASE SALES CLOSE;
ALTER PLUGGABLE DATABASE SALES UNPLUG INTO '/mnt/db-downgrade-122/manifest_sales.xml' ENCRYPT USING [a-secret-password];

Now we can copy the data files from the local storage onto the File Storage Service so we can use them at the source system. Use the commands that we generated previously:

asmcmd ...
chown ...

Now switch to the old release system and create the PDB from the manifest file. I will use the data files right off the File Storage Service and optionally I can move them afterwards – as an online operation (you might not want to do that, but I ran out of disk space):

CREATE PLUGGABLE DATABASE SALES USING '/mnt/db-downgrade-122/manifest_sales.xml' SOURCE_FILE_DIRECTORY='/mnt/db-downgrade-122' NOCOPY KEYSTORE IDENTIFIED BY [keystore-password] DECRYPT USING [a-secret-password];

Open the PDB in UPGRADE mode and open the keystore:

ALTER PLUGGABLE DATABASE SALES OPEN UPGRADE;
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE CONTAINER=all;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY [keystore-password] CONTAINER=all;

Switch to the SALES PDB and complete the downgrade:

ALTER SESSION SET CONTAINER=SALES;
SET TERMOUT ON ECHO ON TIMING ON
SPOOL /home/oracle/sales_catreload.log
@?/rdbms/admin/catrelod.sql
SPOOL OFF

Restart, recompile and gather fresh stats:

ALTER PLUGGABLE DATABASE SALES CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE SALES OPEN;
@?/rdbms/admin/utlrp.sql
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECT_STATS;

Check the state of the Oracle Data Dictionary:

SET SQLFORMAT ANSICONSOLE LINES 300
SELECT COMP_ID, COMP_NAME, VERSION, STATUS FROM DBA_REGISTRY ORDER BY MODIFIED;

And there you have it. Not exactly super easy, which is why I highly recommend that you look at Bare Metal DB Systems or Exadata DB Systems if you are required to be able to do downgrades (or be prepared to use Data Pump instead).

Other Posts in This Series

Upgrading in the cloud – VM DB Systems – 12.2.0.1 PDB to 19c

In this blog post I will show you how to upgrade a 12.2.0.1 PDB to 19c when it is running in a VM DB System. I have a PDB called SALES and it is running Standard Edition 2 (yes, this procedure works for Standard Edition 2 as well). In a previous blog post I went over the restrictions that apply to VM DB System and having those in mind I can create a high-level plan for the upgrade:

  1. Check plug-in compatibility in the new release CDB
  2. Use AutoUpgrade to analyze the PDB
  3. Create a refreshable PDB on the new system
  4. Downtime starts
  5. Use AutoUpgrade to execute pre-upgrade fixups
  6. Refresh the new PDB
  7. Upgrade the new PDB
  8. Test and wrap-up

The refreshable PDB feature was introduced in Oracle Database release 12.2, so you can’t use this procedure for lower versions. Your source database must be at least on release 12.2. If not, you must clone the entire database in the traditional manner, which will be discussed in a later blog post.

Check plug-in compatibility of the PDB in the new release CDB

I will start by checking whether my PDB can be plugged into the new release CDB. You should describe the PDB to generate a manifest file:

EXEC DBMS_PDB.DESCRIBE('/home/oracle/sales.xml', 'SALES');

Transfer the XML file to the target system and check plug-in compatibility. Instead of transferring the file between the two systems you can also use the File Storage Service to create a shared file system that can be accessed via an NFS client.

BEGIN 
    IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/home/oracle/sales.xml', 'SALES') THEN
        DBMS_OUTPUT.PUT_LINE('SUCCESS');
    ELSE
        DBMS_OUTPUT.PUT_LINE('ERROR');
    END IF;
END;
/

Look at the result:

SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

As expected, I do see some plug-in violations. Obviously, there is a difference in database release and patch level; the database upgrade will take care of those issues. Also, you get a warning about COMPATIBLE being different. This is expected, and the COMPATIBLE setting will be automatically changed once we plug the PDB into the new release CDB. The underscore parameters are added automatically when you create a VM DB System in OCI and they are there for a good reason. RECYCLEBIN is on in my new release CDB – I can live with that, and finally I have unencrypted tablespaces. For this test it is not critical, but it should never be so in a real database. Remember, for a plug-in operation to complete (i.e. you can open the PDB in READ WRITE mode and RESTRICTED=NO) there must not be any errors. Warnings are accepted but should ideally be fixed as well.

Use AutoUpgrade to analyze the PDB

I need to create a config file for AutoUpgrade, and I will give it a better name:

cd
java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -create_sample_file config
mv sample_config.cfg upg19_sales.cfg

Edit the config file. See the AutoUpgrade documentation for a description of the parameters. Note that we can’t specify a target_home because it is not present; instead, you must specify the target_version. Also, I specify that only one of the PDBs should be analyzed using the pdbs parameter. AutoUpgrade will always check CDB$ROOT and PDB$SEED regardless of the pdbs setting. This is by design. I ended up with this config file:

global.autoupg_log_dir=/home/oracle/upg_logs

upg1.dbname=CDB1
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1
upg1.sid=CDB1
upg1.log_dir=/home/oracle/upg_logs
upg1.target_version=19
upg1.pdbs=SALES

Now, analyze the database:

java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config ~/upg19_sales.cfg -mode analyze

Check the analyze result. You can disregard the checks from containers CDB$ROOT, PDB$SEED and all other PDBs than SALES. Also, only look at the issues where STAGE=PRECHECKS and fixup_available=NO:

more ~/upg_logs/$ORACLE_SID/100/prechecks/*preupgrade.log

In OCI I receive a warning from the check TDE_IN_USE. It is expected and you don’t have to do anything. The newly provisioned target system is properly configured. Also, OCI itself has a habit of setting a lot of underscore parameters. Just let them be.

Create a refreshable PDB on the new system

While I wait for downtime to start, I will create a refreshable PDB. I need to copy the PDB to the target system and to avoid doing that during downtime, I will use the refreshable PDB feature. When I refresh it, it will only need to apply the most recent changes from the source PDB, which is much faster than a full clone, obviously. I need a common user in the source CDB that can be used by the database link over which the cloning will take place:

CREATE USER c##clone_user IDENTIFIED BY FOObar11## CONTAINER=ALL;
GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO c##clone_user CONTAINER=ALL; 

In the target CDB, create a database link that points to the source CDB:

CREATE DATABASE LINK clone_link CONNECT TO c##clone_user IDENTIFIED BY FOObar11## USING '10.0.1.45/CDB1_fra1jf.sub02121342350.daniel.oraclevcn.com';

Check that it works:

SELECT count(*) FROM all_objects@clone_link;

If you get ORA-02085, execute:

ALTER SESSION SET GLOBAL_NAMES=FALSE;

Let’s create the refreshable PDB. I will set it to REFRESH MODE MANUAL but you could also configure it to refresh automatically at regular intervals, e.g. every 10 minutes, using the REFRESH MODE EVERY 10 MINUTES clause. The keystore password is needed for security reasons. It is the same password as you specified for SYS when you created the system (parameter --admin-password):

CREATE PLUGGABLE DATABASE SALES FROM SALES@CLONE_LINK
PARALLEL 4
REFRESH MODE MANUAL
KEYSTORE IDENTIFIED BY "...";

If you get this error:

CREATE PLUGGABLE DATABASE SALES FROM SALES@CLONE_LINK
 *
ERROR at line 1:
ORA-19505: failed to identify file "+DATA/CDB1_FRA1JF/A4EBB0BCBC427D8FE0532D01000A9AEC/DATAFILE/users.275.1039631039"
ORA-15173: entry 'CDB1_FRA1JF' does not exist in directory '/'

You are missing patch 29469563. Now, sit back and relax and wait for downtime to start. You can periodically refresh the PDB to further minimize the final refresh time:

ALTER PLUGGABLE DATABASE SALES REFRESH;
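
If you would rather not refresh manually, a variant of the create statement (same placeholders as above) sets up automatic refresh instead:

CREATE PLUGGABLE DATABASE SALES FROM SALES@CLONE_LINK
PARALLEL 4
REFRESH MODE EVERY 10 MINUTES
KEYSTORE IDENTIFIED BY "...";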

Downtime starts

Now it is time to kick users off. Drain the database of connections – or actually just drain the PDB – and prevent users from accessing the database.
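
How you do that depends on your setup; one simple approach (a sketch; adjust it to your own draining procedure) is to reopen the source PDB in restricted mode so only privileged users can get in:

ALTER PLUGGABLE DATABASE SALES CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE SALES OPEN RESTRICTED;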

Use AutoUpgrade to execute pre-upgrade fixups

Now we can run the preupgrade fixups to prepare the database for upgrade. I will re-use the config file I created earlier, but change the processing mode to fixups:

java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config ~/upg19_sales.cfg -mode fixups

You should re-check the analyze log file to ensure that no new issues are reported. Note how the path has changed because of a new AutoUpgrade jobid. It increments by one on each run:

more ~/upg_logs/$ORACLE_SID/101/prechecks/*preupgrade.log

Refresh the new PDB

I suggest that you create a tracking table. This way you can ensure that you have all the latest changes in the new PDB:

ALTER SESSION SET CONTAINER=SALES;
CREATE USER UPG_TRACKING IDENTIFIED BY FOObar11##;
ALTER USER UPG_TRACKING QUOTA UNLIMITED ON USERS;
CREATE TABLE UPG_TRACKING.SUCCESS (C1 NUMBER);
INSERT INTO UPG_TRACKING.SUCCESS VALUES (42);
COMMIT;

Shut down the PDB to ensure no one logs on accidentally:

ALTER PLUGGABLE DATABASE SALES CLOSE IMMEDIATE;

And do the final refresh:

ALTER PLUGGABLE DATABASE SALES REFRESH;

Upgrade the new PDB

Now we are done at the source system. The final refresh is made and all my data is transferred to the new PDB; it is time to convert the refreshable PDB into a regular PDB and open it in UPGRADE mode:

ALTER PLUGGABLE DATABASE SALES REFRESH MODE NONE;
ALTER PLUGGABLE DATABASE SALES OPEN UPGRADE;

I will double check that all my changes are in my new PDB:

ALTER SESSION SET CONTAINER=SALES;
SELECT * FROM UPG_TRACKING.SUCCESS;

Right now, you can’t use AutoUpgrade for unplug/plug upgrades when the source and target CDB are not on the same host, so we will do it the old-fashioned way (which is nice as it refreshes your skills). I have four CPUs in my system and I only need to upgrade this PDB, so let’s ensure that all CPUs are allocated to the upgrade – that’s the -N 4 option.

mkdir -p ~/upg_logs/SALES
dbupgrade -c SALES -l ~/upg_logs/SALES -N 4

But it also reminds you how nice AutoUpgrade is. It does so many things automatically. I will only do the essential things. For a real-life upgrade you should consult the upgrade guide to get the full procedure.

Open PDB and set it to auto-start:

ALTER PLUGGABLE DATABASE SALES OPEN;
ALTER PLUGGABLE DATABASE SALES SAVE STATE;

Recompile objects:

ALTER SESSION SET CONTAINER=SALES;
@$ORACLE_HOME/rdbms/admin/utlrp

Check the upgrade with post-upgrade status tool

@$ORACLE_HOME/rdbms/admin/utlusts.sql SALES

Check the state of the Oracle Data Dictionary. If you get an error from the SET command, you are probably using SQL*Plus. You should try out SQLcl. It is so much nicer.

SET SQLFORMAT ANSICONSOLE LINES 300
SELECT COMP_ID, COMP_NAME, VERSION, STATUS FROM DBA_REGISTRY ORDER BY MODIFIED;

Because we ran the analyze on the source system, there are no post-upgrade fixups available, and I didn’t use AutoUpgrade for the actual upgrade (which would have figured it out automatically). You need to look in the pre-upgrade analyze log file on the source system. Again, I only need to look at the issues from the SALES PDB:

more ~/upg_logs/$ORACLE_SID/101/prechecks/*preupgrade.log

You could also look at the HTML files that are placed in the same directory, if you need something more readable and a better description of the issues. Otherwise, have a look at the My Oracle Support document “Database Preupgrade tool check list. (Doc ID 2380601.1)”. Even though it says that a fixup is available, you must do it manually. In my case it was:

  • Dictionary stats
  • Fixed object stats
  • Time zone file upgrade

So, let’s do it:

EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECT_STATS;

And finally upgrade the time zone file:

SET SERVEROUTPUT ON
@$ORACLE_HOME/rdbms/admin/utltz_upg_check.sql
@$ORACLE_HOME/rdbms/admin/utltz_upg_apply.sql

Clean up the tracking user and database link:

DROP USER UPG_TRACKING CASCADE;
ALTER SESSION SET CONTAINER=CDB$ROOT;
DROP DATABASE LINK clone_link;

Since we have moved the database to a new host, you must update your connect strings and tnsnames.ora files to point to the new server and service name.
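
For example, a tnsnames.ora entry could end up looking like this (the host and service name are placeholders for your new system):

SALES =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = [new-host-scan])(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = [new-sales-service-name]))
  )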

Test and wrap-up

Now it is also time to let in the application testers, start a level 0 backup and whatever else is on your runbook. Finally, I can now delete the source VM DB System:

oci db system terminate --db-system-id "..."

That should be it. Should something happen during the upgrade, it is really easy to fall back. Just re-open the source PDB and you are back in business. Speaking of fallback, one thing that you must keep in mind is that once you plug your PDB into a higher release CDB, the COMPATIBLE parameter is automatically raised – no questions asked. And that does prevent you from doing a database downgrade, should it be necessary. If you need to go back to the previous release you must use Data Pump and do a regular export/import.

Other Posts in This Series

Upgrading in the cloud – VM DB Systems

This is the first post in a series on the entry-level database system in OCI – the Virtual Machine DB System (VM DB System). You don’t get the same specs as with Exadata DB Systems and Bare Metal DB Systems, but it is much more affordable and still a very good platform. There is good tooling that allows most operations to be fully automated. Upgrades can be made in two ways:

  1. Automated
  2. Manual

Automated

Since November 2020 it is possible to upgrade a database to Oracle Database 19c using the cloud tooling. However, your DB System must meet these requirements:

  • Oracle Linux 7
  • Grid Infrastructure (GI) 19 (only for systems with ASM)

To check the OS version:

[oracle@host]$ cat /etc/os-release

To check GI version:

[grid@host]$ crsctl query crs activeversion

If your system doesn’t meet these requirements, either:

  • Move the database to a new DB System that meets these requirements via cloning or backup/restore, or
  • Use the manual approach

At time of writing, if you must upgrade to any other version than Oracle Database 19c you have to do it manually.

Manual

There are some limitations that you must be aware of that make manual upgrading slightly different.

  • It is not supported to install a second database Oracle Home. You must use the one that is supplied when the system is provisioned. If you need a new database Oracle Home, you must provision a new system.
  • It is not supported to upgrade the Grid Infrastructure.
  • It is not supported to upgrade the operating system.
  • It is not supported to drop the existing CDB and create your own. It is, however, supported to drop the pre-created CDB and replace it with a backup (and we will discuss this option later).
  • You can only provision systems with multitenant architecture.

Having said that, there are some advantages to doing manual upgrades:

  • You are in full control
  • You can decide exactly how to perform the upgrade
  • It can be faster because you can customize the upgrade, and you can avoid some of the tasks that the tooling does.

But the downside is that you will need to move the database to a new VM DB System.

Choose a Method

Automated upgrade is easy – manual upgrade is faster and gives more control. When you have decided on a method, read the other blog posts in the series for much more information and demos.

Other Posts in This Series