Upgrading in the Cloud – VM DB Systems – Automated Upgrade to 19c – The Details

Following a previous blog post, here are all the details on automated upgrades in OCI, and (possibly) the answers to your questions.

Precheck

The precheck ensures the database is ready to upgrade. It uses DBUA, which in turn uses preupgrade.jar, to execute the checks. It is similar to running AutoUpgrade in analyze mode. The check is non-intrusive and can be executed while the database is in use.

Normally, when you use preupgrade.jar, we recommend that you always download the latest version from My Oracle Support. However, this is not possible when you use the tooling. The new, target Oracle Home is always deployed as part of the precheck process – and deleted again after the precheck. There is no way you can replace the preupgrade.jar package. You must use the version of preupgrade.jar that comes with the Oracle Home.

If there are no issues that prevent you from upgrading, you will see this message: The precheck completed and found no errors - the database is ready to upgrade

However, it could also be that there is an error in the database that must be fixed: If a critical issue prevents you from upgrading, a message will be displayed in the console

If you want to see the output from the precheck you must log on to the database host and find the file:

vi $ORACLE_BASE/cfgtoollogs/dbua/upgrade<timestamp>/$ORACLE_UNQNAME/upgrade.xml

Only the XML output is available, which might be a little hard to read. If you prefer you can also download AutoUpgrade to the server and run it in analyze mode. It can produce a much better output, and it works even if the target Oracle Home is not present. Create a simple config file:

upg1.sid=DB11204
upg1.source_home=/u01/app/oracle/product/11.2.0.4/dbhome_1
upg1.target_version=19

And now start AutoUpgrade in analyze mode:

java -jar autoupgrade.jar -config DB11204.cfg -mode analyze

You can use the preupgrade report to determine which issues prevent the upgrade from starting.

The database must be in ARCHIVELOG mode, and the size of your Fast Recovery Area (FRA) must be at least 15 GB (parameter db_recovery_file_dest_size). In addition, you must have 15 GB of free space on the mount point that hosts the FRA.
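
A quick way to verify these requirements is to query the database directly. A simple sketch, run as SYSDBA (v$recovery_file_dest shows the size and usage of the FRA):

SQL> select log_mode from v$database;
SQL> show parameter db_recovery_file_dest_size
SQL> select space_limit/1024/1024/1024 as size_gb, space_used/1024/1024/1024 as used_gb from v$recovery_file_dest;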

Upgrade

When you upgrade your database, all PDBs in the database are upgraded as well. There is no way to change that. If a PDB is closed when the upgrade starts, it is opened and upgraded. After the upgrade, the PDB is left open and in READ WRITE state. But this state is not saved, so after a CDB restart, the PDB will start in whatever state was previously saved.
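
If you want the PDBs to open automatically after the next CDB restart, you can save the state yourself once the upgrade has completed. A minimal sketch, run in CDB$ROOT:

SQL> alter pluggable database all open;
SQL> alter pluggable database all save state;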

The first version of the tooling does not support standby databases. If your database is a primary database, you must remove the standby database, upgrade, and then recreate the standby database. Streamlining this is planned as a future enhancement.

Fallback

Enterprise Edition databases are protected by a guaranteed restore point (GRP) and Flashback Database. The tooling automatically creates the GRP before it starts to work on the database. If an error occurs during the upgrade, you can use the OCI console to initiate a roll back.

If the database upgrade fails on an Enterprise Edition database it is possible to roll back to a guaranteed restore point using Flashback Database

After a successful upgrade the GRP is dropped again. The GRP only protects the database during the upgrade, so you can't rely on the GRP as a fallback mechanism if you decide to fall back after the upgrade. If your testing reveals a critical problem after the upgrade, your only fallback mechanism is to restore a backup.
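
To check whether the GRP is still present, for example while the upgrade runs or after a failed attempt, you can query the restore points. A simple sketch:

SQL> select name, guarantee_flashback_database, time from v$restore_point;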

Since Flashback Database is an Enterprise Edition feature, this fallback mechanism is not available on Standard Edition databases.

In addition, it is strongly recommended that you perform a manual backup of the database before you start the upgrade. The console also gives you this warning before you can start the upgrade.

Monitoring and Troubleshooting

When you have started the upgrade, you can't monitor it from the console. You must log on to the host. When you do so, be aware that the timestamps shown in the OCI console are UTC, but the timestamps in the log files on the host are in local time (depending on your region).
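
To work out the offset you need when correlating console entries with the log files, compare local time and UTC directly on the host:

[oracle@host]$ date
[oracle@host]$ date -u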

Using dbcli

Log on as root and use the dbcli tool to monitor the progress. First, list jobs:

[root@host]$ dbcli list-jobs

Which should produce a list like this: Use dbcli list-jobs to list the jobs - including the upgrade - that run on the host. Next, you can get additional information about the job using the ID:

[root@host]$ dbcli describe-job -i <id>

Which gives you more details: dbcli describe-job can give you more detailed information about the upgrade

Using DBUA Log Files

But you can get even better information by looking in the log files from DBUA. Use the job id from the dbcli command to find the log file:

[oracle@host]$ export ORACLE_BASE=/u01/app/oracle
[oracle@host]$ export DBCLI_JOBID=f4b2597f-990f-4442-a774-153f3713fb7a
[oracle@host]$ tail -f -n 10 $ORACLE_BASE/cfgtoollogs/dbua/$DBCLI_JOBID/silent.log

And for really detailed information look in this directory:

[oracle@host]$ export ORACLE_BASE=/u01/app/oracle
[oracle@host]$ export DBCLI_JOBID=f4b2597f-990f-4442-a774-153f3713fb7a
[oracle@host]$ cd $ORACLE_BASE/cfgtoollogs/dbua/$DBCLI_JOBID/$ORACLE_UNQNAME

Using DCS Agent

The OCI control plane communicates with your DB System using an agent, and sometimes it can be useful to look in those logs:

[root@host]$ cd /opt/oracle/dcs/log
[root@host]$ vi dcs-agent.log

To find the log entries that are related to a specific upgrade, search for the job ID:

[root@host]$ cat dcs-agent.log | grep "<job-id>" | more

Q&A

Which version and release update can I upgrade to?

The tooling only allows upgrades to Database 19c. If you need to upgrade to any other version, you must do it manually.

You can decide to upgrade to an Oracle provided image or a custom image: When you upgrade you can choose an Oracle provided image, or your own custom database software image. However, for both types of images, the Release Update (or patch level) must be the latest or one of the two previous Release Updates. Even if you have a custom database software image that is older, it can't be used; you must upgrade to one of the recent Release Updates. If you select 19.0.0.0 you will not get the base release, but the latest Release Update. If you use the APIs, this is a smart way of specifying that you always want the latest Release Update.
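
If you use the OCI CLI, you can list the database versions currently offered. A hedged sketch; I am assuming here that the db version list command is available in your CLI version and that you supply your own compartment OCID:

$ oci db version list --compartment-id "..."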

Where are my log files?

The output from the precheck is stored here:

  • $ORACLE_BASE/cfgtoollogs/dbua/upgrade<timestamp>

The output from the actual upgrade is stored here:

  • $ORACLE_BASE/cfgtoollogs/dbua/<job-id>
  • $ORACLE_BASE/cfgtoollogs/dbua/<job-id>/$ORACLE_UNQNAME

In addition, you can get details about the upgrade using dbcli:

[root@host]$ dbcli list-jobs
[root@host]$ dbcli describe-job -i <job-id>

Why is it taking so long to perform a precheck?

The precheck job consists of three phases:

  1. Deploy new Oracle Home to the VM DB System
  2. Precheck of the database
  3. Removing the new Oracle Home

The precheck itself (phase 2) is really fast – just as fast as running AutoUpgrade in analyze mode or using preupgrade.jar. The extra time is needed to deploy and then remove the Oracle Home. For each execution of the precheck the procedure repeats, and a new Oracle Home is deployed; it is never re-used.

Why is the upgrade slower than if I do it manually?

Typically, when you upgrade a database you have already – outside of the maintenance window – deployed a new Oracle Home. When you use the tooling, this happens inside the maintenance window. The tooling can’t deploy an Oracle Home prior to the upgrade. In addition, the upgrade is executed with DBUA using default options, which for instance means that the time zone file is upgraded as well. If you are sensitive to downtime and would like to complete the upgrade faster, you must perform the upgrade manually.

Will my 11.2.0.4 database get converted to a PDB?

No, the database is upgraded as-is and there is no PDB conversion. We are working on making it possible to perform the non-CDB conversion as well. If you must convert the non-CDB to a PDB, you must move the database to a new VM DB System that already has a CDB provisioned. In that case, I would recommend that you use the manual upgrade and plug-in approach as described in another blog post.

Can I perform an automated upgrade using dbcli?

No, although the command line help of dbcli suggests that such an option exists, it can't be used.

Other Posts in This Series

How to Migrate a Database Using Full Transportable Export Import and Incremental Backups

These steps will guide you through a migration of a database using Full Transportable Export Import (FTEX) and incremental backups. I covered the concept in a previous blog post, which you should read to understand the basics. Remember that Transportable Tablespaces and Full Transportable Export/Import require Enterprise Edition.

My demo environment looks like this: Overview of demo environment for migrating using FTEX and incremental backups

I have a 12.1.0.2 database that I want to migrate to a PDB in a new CDB that runs 19c.

Check Prerequisites

Create a new PDB called SALES in the target CDB:

TARGET/CDB1 SQL> create pluggable database sales admin user admin identified by admin;
TARGET/CDB1 SQL> alter pluggable database sales open;
TARGET/CDB1 SQL> alter pluggable database sales save state;

Prepare the database to use TDE Tablespace Encryption:

TARGET/CDB1 SQL> alter session set container=sales;
TARGET/CDB1 SQL> administer key management set key force keystore identified by <keystore-pwd> with backup;

Verify SQL*Net connectivity from source host to target PDB:

[oracle@source]$ sqlplus system@<target ip>/<pdb-service-name>

Verify database character set and national character set are the same:

SOURCE/SALES SQL> select property_name, property_value from database_properties where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

TARGET/SALES SQL> select property_name, property_value from database_properties where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

Ensure the source database is in ARCHIVELOG mode:

SOURCE/SALES SQL> select log_mode from v$database;

Enable block change tracking on the source database. It requires Enterprise Edition (on-prem), DBCS EE-EP (cloud) or Exadata. Although strictly speaking not required, it is strongly recommended:

SOURCE/SALES SQL> select status, filename from v$block_change_tracking;
SOURCE/SALES SQL> alter database enable block change tracking;

Ensure that you can connect from the source to the target host as oracle:

[oracle@source]$ ssh <target ip> date

Identify Tablespaces

Identify all the tablespaces that you will migrate. With FTEX you should transport all the tablespaces, except those that contain Oracle maintained data, like SYSTEM, SYSAUX, UNDO and so forth:

SOURCE/SALES SQL> select tablespace_name from dba_tablespaces;

Save the list of tablespaces for later. In my demo, the only tablespace besides the Oracle maintained ones is SALES.

Next, on the target database, ensure that none of the existing tablespaces conflict with the ones you are transporting:

TARGET/SALES SQL> select tablespace_name from dba_tablespaces;

If there is a conflict of names, you have to drop or rename the tablespaces in the target database.
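
As a sketch, assuming a hypothetical conflicting tablespace named APPDATA in the target database, you could rename it, or drop it if its contents are no longer needed:

TARGET/SALES SQL> alter tablespace APPDATA rename to APPDATA_OLD;
-- or, if the tablespace and its contents are no longer needed:
TARGET/SALES SQL> drop tablespace APPDATA including contents and datafiles;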

Download and Configure Perl Scripts

Create a folder to hold the perl scripts, download the scripts from MOS doc ID 2471245.1, and unzip:

[oracle@source]$ rm -rf /home/oracle/xtts
[oracle@source]$ mkdir /home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ # Download the file from MOS
[oracle@source]$ unzip rman_xttconvert_VER4.3.zip

Create a working directory (also known as the scratch location) which will hold the backups. Ensure that you have enough space at this location on both the source and target hosts.

[oracle@source]$ rm -rf /u01/app/oracle/xtts_scratch
[oracle@source]$ mkdir -p /u01/app/oracle/xtts_scratch

Create the same location on the target host:

[oracle@target]$ rm -rf /u01/app/oracle/xtts_scratch
[oracle@target]$ mkdir -p /u01/app/oracle/xtts_scratch
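
The backups of the tablespaces you transport must fit in the scratch location, so it is worth checking the available space on both hosts. A simple check:

[oracle@source]$ df -h /u01/app/oracle/xtts_scratch
[oracle@target]$ df -h /u01/app/oracle/xtts_scratch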

Configure your migration in xtt.properties. In this demo the file looks like this:

tablespaces=SALES
platformid=13
src_scratch_location=/u01/app/oracle/xtts_scratch
dest_scratch_location=/u01/app/oracle/xtts_scratch
dest_datafile_location=+DATA
asm_home=/u01/app/19.0.0.0/grid
asm_sid=+ASM1
parallel=4
rollparallel=4
getfileparallel=4
metatransfer=1
dest_user=oracle
dest_host=<target_ip>
desttmpdir=/u01/app/oracle/xtts_scratch
srcconnstr=sys/<password>@sales
destconnstr=sys/<password>@newsales
usermantransport=1

A little explanation:

  • platformid is set to 13 because this is a Linux migration. You can get the number by querying v$transportable_platform (see the query sketch after this list).
  • Adjust the parallel options according to the capabilities of the source and target system.
  • When you are using ASM disk group in dest_datafile_location you must also set asm_home and asm_sid.
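
Here is the query mentioned in the first bullet; it lists the platform IDs and their endian format (run it on the source database):

SOURCE/SALES SQL> select platform_id, platform_name, endian_format from v$transportable_platform order by platform_id;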

Finally, copy the scripts (and the configuration) to your target system:

[oracle@source]$ scp -r /home/oracle/xtts/ <target_ip>:/home/oracle/

Initial Backup and Restore

Now, you can start the initial backup of the database. You take it while the source database is up and running, so it doesn't matter if the backup/restore cycle takes hours or days to complete:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

The perl script has been configured in such a way that it automatically transfers the backups to the target system. In addition to that, a small text file must be transferred as well:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

Now, on the target system, you can restore the backup that was just taken. If needed, the data files are automatically converted to the proper endian format. If conversion is needed, you need space for a copy of all the data files:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Incremental Backup and Restore

You can – and should – run the incremental backup and restores as many times as possible. The more frequently you run them, the faster they will run, because there will be fewer changes to transfer. At the very least, as the start of the migration downtime window approaches, you should run them often to minimize the time it will take to perform the final backup and restore:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

Transfer res.txt:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

And restore on the target system:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Final Incremental Backup and Restore

Now downtime starts! Set the tablespaces read-only:

SOURCE/SALES SQL> alter tablespace SALES read only;

Perform the final incremental backup:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

You will receive an error because the tablespace is read-only. This error is ignorable because the tablespace was set read-only on purpose.

Transfer res.txt:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

And restore on the target system:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Import Metadata Using FTEX

Create a directory object that points to the xtts folder:

TARGET/SALES SQL> create directory LOGDIR as '/home/oracle/xtts';

Next, create a database link to the source database that can be used to import the metadata. If the source database is already a PDB, ensure that the database link points directly into the PDB:

TARGET/SALES SQL> create public database link SRCLNK connect to system identified by <password> using '//<source_ip>:1521/<service_name>';

Test that it works:

TARGET/SALES SQL> select * from dual@srclnk;

Next, create a par file (sales_imp.par) that you can use for the Data Pump import (see appendix below for explanation):

network_link=SRCLNK
full=y
transportable=always
metrics=y
logtime=all
exclude=TABLE_STATISTICS,INDEX_STATISTICS
exclude=SYS_USER
exclude=TABLESPACE:"IN('TEMP')"
exclude=SCHEMA:"IN('SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR')"
directory=logdir
logfile=sales_imp.log
transport_datafiles='+DATA/CDB1_FRA2VD/B2D617FCB79B0684E053AF01000A6DCE/DATAFILE/SALES.281.105552609'

Start Data Pump and perform the import. newsales is a TNS alias that points into the SALES PDB in the target CDB. If you have encrypted tablespaces, you should use the option encryption_pwd_prompt. It allows you to input the TDE password. It can be omitted if there are no encrypted tablespaces.

$ impdp system@newsales parfile=sales_imp.par encryption_pwd_prompt=yes

Once the import has completed, you should examine the Data Pump log file for any critical errors. Check the appendix (see below) for ignorable errors:

[oracle@target]$ vi /home/oracle/xtts/sales_imp.log

That’s it! Your data has been migrated. Now would be a good time to:

  • Test your application.
  • Start a backup.
  • Gather statistics – they were excluded from the export.
  • Drop the database link that points to the source database.
  • Cleanup the file system:
    • /home/oracle/xtts
    • /u01/app/oracle/xtts_scratch

Conclusion

Even huge, TB-sized databases can be migrated with very little downtime by using incremental backups. Combining the perl scripts from My Oracle Support with Full Transportable Export/Import makes it a simple process. In addition, you can even migrate to a new endian format, to a higher release, and into a PDB in one operation. It requires Enterprise Edition, and you must have plenty of disk space – potentially twice the size of your database.

There is a video on our YouTube channel that you can watch. It demos the entire process. I suggest that you subscribe to our channel and get notified whenever there are new videos.

Thanks to my good colleague, Robert Pastijn, for supplying a runbook that was used as inspiration.

Appendix

If Source Database Is in OCI and Automatic Backup Is Enabled

If the source database is running in OCI and you have enabled automatic backup, you must make a few changes.

In xttprep.tmpl around line 319 change:

cp('backup for transport allow inconsistent ' ||

to

cp('set encryption off for all tablespaces;set compression algorithm "basic";backup for transport allow inconsistent ' ||

In xttdriver.pl around line 4268 change:

my $rman_str1 = "set nocfau;";

to

my $rman_str1 = "set nocfau;".
                "set encryption off for all tablespaces ;".
                "set compression algorithm 'basic' ;" ;

ORA-02085

If you get ORA-02085 when querying over the database link:

TARGET/SALES SQL> alter system set global_names=false;

Data Pump Parameters

Use network_link to specify the name of the database link that points back to the source database.

full=y and transportable=always instructs Data Pump to perform a full transportable export/import.

exclude=TABLE_STATISTICS,INDEX_STATISTICS excludes statistics from the import. It is better and faster to gather new, fresh statistics on the target database. If you insist on importing your statistics, you should use DBMS_STATS instead.
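
If you do decide to transfer the statistics, a hedged sketch using a DBMS_STATS staging table could look like this. The schema name SALES matches the demo, the staging table name XTTS_STATS is just an example, and you must move the staging table to the target (for instance with Data Pump) between the export and the import:

SOURCE/SALES SQL> exec dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'XTTS_STATS');
SOURCE/SALES SQL> exec dbms_stats.export_schema_stats(ownname => 'SALES', stattab => 'XTTS_STATS', statown => 'SYSTEM');
TARGET/SALES SQL> exec dbms_stats.import_schema_stats(ownname => 'SALES', stattab => 'XTTS_STATS', statown => 'SYSTEM');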

exclude=SYS_USER excludes the import of the SYS user. In a PDB that is not even allowed, and most likely you are not interested in importing the definition of the SYS user.

exclude=TABLESPACE:"IN('TEMP')" excludes the temporary tablespace from the import. Most likely there is already a temporary tablespace in the new, target PDB. It is faster to create a TEMP tablespace in advance – and name it the same as in the source database.

A change was made to Spatial in 19c and some Spatial admin users are removed. To avoid errors/noise in the log file you can safely exclude them from the import by specifying exclude=SCHEMA:"IN('SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR')".

transport_datafiles is used to specify the data files that make up the tablespaces you are transporting. Specify the parameter multiple times to specify more data files. You can use asmcmd to get the data file paths and names.
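
For example, as the Grid Infrastructure owner with the ASM environment set, you can list the transported data files in the disk group. The path below follows the demo; adjust the disk group and GUID to your environment:

[grid@target]$ asmcmd ls -l +DATA/CDB1_FRA2VD/B2D617FCB79B0684E053AF01000A6DCE/DATAFILE/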

Data Pump Ignorable Errors

Multimedia is desupported in 19c, but the code is still there. You can safely disregard this error:

Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
ORA-39342: Internal error - failed to import internal objects tagged with ORDIM due to ORA-00955: name is already used by an existing object.

The package was removed in 12.2. See ORA-39083 And ORA-04042 Errors On DBMS_DEFER_SYS When Importing Into 12.2 Database (Doc ID 2335846.1):

Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
ORA-39083: Object type PROCACT_SYSTEM failed to create with error:ORA-04042: procedure, function, package, or package body does not exist

Failing sql is:
BEGIN
SYS.DBMS_UTILITY.EXEC_DDL_STATEMENT('GRANT EXECUTE ON DBMS_DEFER_SYS TO "DBA"');COMMIT; END;

Move to the Cloud – Webinar

Yesterday, Mike Dietrich and I gave the final webinar in the Oracle Database 19c Upgrade Virtual Classroom series. It was about Move to the Cloud – not only for techies. Now, I say final – but we all know you should never say never. And in this case, it applies to final as well. We are already talking about subjects for a seventh webinar. If you have an interesting topic that you think we should cover, get in touch with me.

Oracle Database 19c Upgrade Virtual Classroom

Unfortunately, due to a technical glitch we skipped the part about migrating using transportable tablespaces and full transportable export/import. We uploaded the missing part to YouTube, so you can watch it.

For those interested, you can now download the slides. We had a lot of information to share, so browse through the deck to find the many hidden slides. Typically, there are references and links to more information about a specific topic.

Within a week it should be possible to watch a recording of the webinar.

The Demos and Videos

The presentation we gave was a brand new one. We used as many demos and videos as we could – or rather, as we had time to prepare. We will post them on our YouTube channel as soon as possible. I suggest that you subscribe to it, so you get notified as soon as new content arrives. Further, we want to enhance the presentation even more, so we will be adding more videos and demos. Let me know if there is a topic that could be improved with a video or demo.

Thank You

Thanks to everyone who participated yesterday. Happy migrating!

Debut on the Big Stage

Mike Dietrich and I will give two webinars in mid-October on database migrations. One of them will focus solely on migration to Oracle Cloud Infrastructure. The webinars are part of the Oracle Database 19c Upgrade Virtual Classroom series. If you are interested, you have to register.

Oracle Database 19c Upgrade Virtual Classroom

Migration Strategies: Insights, Tips and Secrets

Date: Tuesday 13 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Now it’s time for us to dig deeper. We’d like to offer you further insights and a deep dive from a technical point of view, starting with Data Pump and Transportable Tablespaces, then Full Transportable Export Import, and adding RMAN Incremental Backups to decrease the downtime. Best practices and real-world experience will round up this two-hour webinar.

Move to the Cloud (for techies)

Date: Thursday 15 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Whether you have databases in a cloud environment, or you plan to lift databases soon, this webinar is for you. We won't cover cloud solution benefits but will show you how you can migrate your database(s) into the Oracle Cloud. We'll start with Autonomous and also cover migration into VMs, Bare Metal, OCI, ExaCS and ExaCC. And we'll look at strategies for minimizing downtime, where ZDM can help. This two-hour webinar is not strictly for technical geeks ‒ but our focus will be on practical migration approaches.

Debut on the Big Stage

These two webinars will be my debut on the big stage. The previous webinars attracted huge interest and, unfortunately, some people couldn't join because the webinar had reached its maximum number of attendees. If you register, I recommend joining early to get your seat.

I have been presenting in person and virtually for some years now. Also, since I joined Oracle in January, I have been doing quite a few presentations. However, this is my first time in front of such a big crowd. I am excited and looking forward to it – but also a little intimidated. Luckily, I have super-star Mike Dietrich there to (virtually) hold my hand.

And, finally, I promise you: No marketing slides – just demos and details.

I hope to see you there!

Zero Downtime Migration – The Pro Tips

In this final (for now, at least) blog post I will show the pro tips that might come in handy. It is a little mix and match of all my notes that didn't make it into the previous blog posts, but are still too good to leave out.

Pro Tip 1: Converting To Snapshot Standby For Testing

This is a really cool feature of ZDM (or in fact any migration that uses a standby database). Once the standby database has been built in OCI and while you are waiting to do the switch-over, you can use the database in OCI for testing. So you can do realistic testing on the database you are about to switch over to. To convert the standby database to a snapshot standby database:

alter database recover managed standby database cancel;
shutdown immediate
startup mount
alter database convert to snapshot standby;
alter database open;

Now, the database is opened in READ WRITE mode and you can use it for testing. Don’t worry, the database is fully protected by flashback logs so you can always rewind any changes made, and resync with the primary database. To convert back to a physical standby database:

shutdown immediate
startup mount
alter database convert to physical standby;
shutdown immediate
startup
alter database recover managed standby database disconnect from session;

You can even do this multiple times if you want several test runs on your OCI database. If you want, you can read more about the different types of standby databases, or snapshot standby databases in particular.

Pro Tip 2: Monitoring Queries

When you have created the standby database in OCI and are waiting for the switch-over you can use these commands for monitoring. On the source/primary database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log GROUP BY thread#;

On the target/standby database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log WHERE applied='YES' GROUP BY thread#;
--MRP process should be 'APPLYING_LOG'
SELECT process, status, sequence# FROM v$managed_standby;
SELECT * FROM v$archive_gap;

Pro Tip 3: Log Files

If something goes south where can you find the log files? On the ZDM service host:

  • $ZDM_BASE/chkbase/scheduled
  • $ZDM_BASE/crsdata/[hostname]/rhp

On the source and target hosts you can also find additional log files containing all the commands that are executed by ZDM:

  • /tmp/zdm-[some number]/zdm/log

Pro Tip 4: Troubleshooting

When you are troubleshooting, it is sometimes useful to get rid of all the log files and have ZDM start over. Some of the log files get really big and are hard to read, so I usually stop the ZDM service, delete all the log files, and restart ZDM and my troubleshooting. But only do this if there are no other jobs running than the one you are troubleshooting:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice stop
[zdmuser@zdm]$ rm $ZDM_BASE/crsdata/*/rhp/rhpserver.log*
[zdmuser@zdm]$ rm $ZDM_BASE/chkbase/scheduled/*
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice start

There is also a chapter about troubleshooting in the documentation but it more or less says the same.

Pro Tip 5: Aborting A Job

Sometimes it is useful to completely restart a migration. If a database migration is already registered in ZDM, you are not allowed to specify another migration job. First, you have to abort the existing job before you can enter a new migration job.

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli abort job -jobid n

Now, you can use the zdmcli migrate database command again. By the way, the abort job command is missing from the CLI reference, but it is a valid and fully supported command.

Pro Tip 6: Show All Phases

A ZDM migration is split into phases, and you can have ZDM pause after each of the phases. The documentation has a list of all phases but you can also get it directly from the ZDM tool itself for a specific migration job:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp ~/migrate.rsp \
   ... \
   ... \
   ... \
   -listphases

Pro Tip 7: Adding Custom Scripts

You can add your own custom scripts to run before or after a phase in the migration job. You can use the -listphases command (described above) to get a list of all the phases. Then decide whether your script should run before or after that phase. This is called an action plug-in. You can bundle those together in a template to make it easier to re-use. If this is something you need, you should dive into the documentation.

Pro Tip 8: Remember To Patch On-Premises Oracle Home

If your OCI Oracle Home is running on a newer Release Update (i.e. a higher patch level), then you have to patch your on-premises Oracle Home after the switch-over – and before you execute datapatch. The two patch levels should be identical after the switch-over. Release Updates are always Standby-First Installable. That means that it is allowed to have a standby database running on an Oracle Home with a newer patch level – but not an older one. This concept is widely used to reduce downtime during database patching, and it is basically the same concept that applies to ZDM.
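
As a sketch, once both Oracle Homes are on the same Release Update and the switch-over has completed, you run datapatch from the new primary. This is a standard invocation, nothing ZDM specific; the SID matches the demo:

[oracle@tgthost]$ export ORACLE_SID=CDB1
[oracle@tgthost]$ $ORACLE_HOME/OPatch/datapatch -verbose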

Pro Tip 9: Fallback To On-Premises Database

It is possible to configure ZDM to keep the on-premises database after the switch-over. It will then become a physical standby database. If something happens with the OCI database, you can do an additional switch-over and run off the original on-premises source database. This is a very nice and handy fallback method. If you want to read about that procedure, I suggest that you visit the MOS note MAA Practices for Cloud Migration Using ZDM (Doc ID 2562063.1). Be aware, falling back to the source database requires a license for the Advanced Security Option. The target database in OCI is encrypted using TDE Tablespace Encryption; you get that as part of any OCI DB System offering. Once the OCI database is the primary database it will generate encrypted redo, and if the source database has to apply that, it must have a license for the Advanced Security Option.

Pro Tip 10: Convert From Single Instance To RAC

A useful feature of ZDM is that it can convert a single instance database to a RAC database in OCI. And it is super simple to do that. The only thing you have to do is to create the target placeholder database as a RAC database. ZDM will detect that and take care of the rest.

Finally, let me mention that if the source database is RAC One Node or RAC, then the target database must be a RAC database. Be sure to create the target placeholder database as RAC.

Other Blog Posts In This Series

Zero Downtime Migration – Migrate Your Database

In the previous blog posts we configured the environments and installed and configured the ZDM service host. Now we can start working on the actual migration.

I will use Object Storage as the staging area between the source and target host, and, hence, I need to create a bucket that I can use for that purpose:

$ oci os bucket create \
  --compartment-id "..." \
  --name "zdm-staging"

Connect to the ZDM service host as zdmuser and have a look at a template response file. It contains a description of each of the parameters that you can use:

[zdmuser@zdm]$ more $ZDM_HOME/rhp/zdm/template/zdm_template.rsp

Now, I can create my own response file:

[zdmuser@zdm]$ vi ~/migrate.rsp

In my demo it contains the following:

#Migration method: DG_OSS - DataGuard using object storage for standby initialization
MIGRATION_METHOD=DG_OSS
#This is DB_UNIQUE_NAME of the target database, connect to the target database and execute: SELECT db_unique_name FROM v$database;
TGT_DB_UNIQUE_NAME=CDB1_fra3kw
#Name of the ASM diskgroups that I will use. To get a list of disk groups and free space, connect to target database and execute: SELECT name, free_mb, total_mb FROM v$asm_diskgroup;
TGT_DATADG=+DATA
TGT_REDODG=+RECO
TGT_RECODG=+RECO
#BACKUP_PATH should be left blank when MIGRATION_METHOD=DG_OSS
BACKUP_PATH=
#URL to OCI object storage
HOST=https://swiftobjectstorage.[region].oraclecloud.com/v1/[your-tenancy-object-storage-namespace]
#The Object Storage Bucket that will be used as a staging area
OPC_CONTAINER=zdm-staging
#In my simple demo I will skip configuration of fallback, and just shutdown the source database after the switchover
SKIP_FALLBACK=TRUE
SHUTDOWN_SRC=TRUE

If you need help figuring out what HOST should be set to, you can look in the documentation. In my demo I use the Frankfurt data center, and, thus, the region is set to eu-frankfurt-1. You can visit the API documentation for a list of regions, but often the region string is listed in many places. To figure out what your tenancy object storage namespace is, you can use oci os ns get. Alternatively, on the OCI web page open the Profile menu and click Tenancy: <your tenancy name>; the namespace string is listed under Object Storage Settings. To read more, have a look at the OCI documentation. Thanks to Bartlomiej Sowa for putting in a comment with this information – much appreciated!
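
Putting the pieces together, here is a sketch of finding the namespace and building the HOST value. The namespace shown is made up, and eu-frankfurt-1 matches my demo region:

$ oci os ns get
{
  "data": "mytenancynamespace"
}
HOST=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/mytenancynamespace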

Now, let’s start a migration evaluation. Most of the parameters are self-explanatory, but you can also look in the documentation. Normally, you should leave srcauth and tgtauth at zdmauth. srcarg2 and tgtarg2 identity_file refer to the private key files that are needed to SSH to the source and target hosts. The backupuser is the user name that you want to use to connect to OCI object storage:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli migrate database \
  -rsp ~/migrate.rsp \
  -sourcesid CDB1 \
  -sourcenode srchost \
  -srcauth zdmauth \
  -srcarg1 user:opc \
  -srcarg2 identity_file:/home/zdmuser/.ssh/srchost \
  -srcarg3 sudo_location:/usr/bin/sudo \
  -targetnode tgthost \
  -tgtauth zdmauth \
  -tgtarg1 user:opc \
  -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost \
  -tgtarg3 sudo_location:/usr/bin/sudo \
  -targethome /u01/app/oracle/product/19.0.0.0/dbhome_1 \
  -backupuser "daniel.overby.hansen@oracle.com" \
  -eval

Shortly after, you will be prompted for the SYS password of the source database and also the password for the backupuser (which is your OCI user). For the latter, please note that this password is not your user password; it is an auth token: When using OCI object storage this is NOT your user password, but an auth token. Also, from the output you can see the ID of your job. Use it to query the status of the job:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli query job -jobid 1

Hopefully, you will end up with a successful evaluation: When status of an evaluation is SUCCEEDED

Before we start the actual migration, let me say a few words about backups during a ZDM migration. You should keep your regular backup strategy during the migration, but avoid having ZDM backups and regular backups run at the same time. Also, if you are dealing with a RAC database, be sure to put the snapshot controlfile on shared storage; otherwise, you might get ORA-00245 errors during backups.
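
A sketch of moving the snapshot controlfile to shared storage on a RAC source; the disk group and file name are just examples:

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+RECO/CDB1/snapcf_CDB1.f';
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;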

Back on track! Start the migration, but – VERY IMPORTANT – specify that ZDM should pause after the standby has been built and redo apply has started, using the -pauseafter option. If you fail to do so, the switch-over will be executed as soon as the standby has been built:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli migrate database \
  -rsp ~/migrate.rsp \
  -sourcesid CDB1 -sourcenode srchost \
  -srcauth zdmauth \
  -srcarg1 user:opc \
  -srcarg2 identity_file:/home/zdmuser/.ssh/srchost \
  -srcarg3 sudo_location:/usr/bin/sudo \
  -targetnode tgthost \
  -targethome /u01/app/oracle/product/19.0.0.0/dbhome_1 \
  -backupuser "daniel.overby.hansen@oracle.com" \
  -tgtauth zdmauth \
  -tgtarg1 user:opc \
  -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost \
  -tgtarg3 sudo_location:/usr/bin/sudo \
  -pauseafter ZDM_CONFIGURE_DG_SRC

I can use the same command (zdmcli query job) to query the progress of the migration. After a while the migration will pause after ZDM_CONFIGURE_DG_SRC: Standby database is created - waiting for the final "Go"

Now the standby database has been built in OCI. Redo gets transferred from my source database to the target database in OCI and applied. If you want to verify it, you can use these queries on the source/primary database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log GROUP BY thread#;

And on the target/standby database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log WHERE applied='YES' GROUP BY thread#;

One of the really cool features of ZDM is that I can now use my standby database for testing in OCI – before I decide to do the switch-over. You can achieve this by converting to a snapshot standby database. I will explain this in a later blog post.

Now I can just sit back and wait. All the changes from my source environment are immediately replicated to my target database in OCI. When it is time to complete the migration and perform the switch-over, I can simply let ZDM finalize the job. You shouldn't worry – ZDM won't do the switch-over until it has verified that all changes are applied on the target database.

However, to prove that it really works, let's add another piece of fruit to our SALES PDB:

INSERT INTO zdmtest.items VALUES (4, 'Lemon', 4);
COMMIT;

And instruct ZDM to resume the job:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli resume job -jobid 2
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli query job -jobid 2

In the end you will have output similar to this:

Job execution elapsed time: 3 minutes 37 seconds
ZDM_GET_SRC_INFO .............. COMPLETED
ZDM_GET_TGT_INFO .............. COMPLETED
ZDM_SETUP_SRC ................. COMPLETED
ZDM_SETUP_TGT ................. COMPLETED
ZDM_GEN_RMAN_PASSWD ........... COMPLETED
ZDM_PREUSERACTIONS ............ COMPLETED
ZDM_PREUSERACTIONS_TGT ........ COMPLETED
ZDM_VALIDATE_SRC .............. COMPLETED
ZDM_VALIDATE_TGT .............. COMPLETED
ZDM_OBC_INST_SRC .............. COMPLETED
ZDM_OBC_INST_TGT .............. COMPLETED
ZDM_BACKUP_FULL_SRC ........... COMPLETED
ZDM_BACKUP_INCREMENTAL_SRC .... COMPLETED
ZDM_DISCOVER_SRC .............. COMPLETED
ZDM_COPYFILES ................. COMPLETED
ZDM_PREPARE_TGT ............... COMPLETED
ZDM_SETUP_TDE_TGT ............. COMPLETED
ZDM_CLONE_TGT ................. COMPLETED
ZDM_FINALIZE_TGT .............. COMPLETED
ZDM_CONFIGURE_DG_SRC .......... COMPLETED
ZDM_SWITCHOVER_SRC ............ COMPLETED
ZDM_SWITCHOVER_TGT ............ COMPLETED
ZDM_SHUTDOWN_SRC .............. COMPLETED
ZDM_NONCDBTOPDB_PRECHECK ...... COMPLETED
ZDM_NONCDBTOPDB_CONVERSION .... COMPLETED
ZDM_POSTUSERACTIONS ........... COMPLETED
ZDM_POSTUSERACTIONS_TGT ....... COMPLETED
ZDM_CLEANUP_SRC ............... COMPLETED
ZDM_CLEANUP_TGT ............... COMPLETED

And now my database is migrated to the cloud. Let’s query the target database:

SELECT
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;

And check that all four pieces of fruit are in our SALES PDB:

SELECT * FROM zdmtest.items;

And that should be it: Welcome to OCI. Personally, I would also take a quick peek at the alert log, just to ensure things are running smoothly.

Exadata DB System

If your target environment is an Exadata DB System (or ExaCS) it is a good idea to ensure that the cloud tooling is fully up-to-date with the new database.

[root@zdm]$ dbaascli registerdb prereqs --dbname [db_name] --db_unique_name [db_unique_name]
[root@zdm]$ dbaascli registerdb begin --dbname [db_name] --db_unique_name [db_unique_name]

You can read more about it in the ZDM run book which also has a sample output from the commands.

Troubleshooting

PRCZ-4001

If you run into this error:

PRCZ-4001 : failed to execute command "/bin/uname" using the privileged execution plugin "zdmauth" on nodes "doverbyh-zdm-tgt" within 120 seconds
PRCZ-2006 : Unable to establish SSH connection to node "doverbyh-zdm-tgt" to execute command "/bin/uname"
No more authentication methods available

Check your key files. They must be in RSA/PEM format (the private key must start with -----BEGIN RSA PRIVATE KEY-----).
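
If your key is in a different format, for example the newer OpenSSH format, you can usually convert it in place with ssh-keygen. A sketch, assuming the key has no passphrase (press Enter when prompted for the new passphrase):

[zdmuser@zdm]$ ssh-keygen -p -m PEM -f /home/zdmuser/.ssh/srchost
[zdmuser@zdm]$ head -1 /home/zdmuser/.ssh/srchost
-----BEGIN RSA PRIVATE KEY-----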

ZDM_OBC_INSTALL_CREDENTIALS_INVALID

If you run into this error:

srchost: 07:08:00.000: Validating object store credentials..
srchost: <ERR_FILE><Facility>PRGO</Facility><ID>ZDM_OBC_INSTALL_CREDENTIALS_INVALID</ID></ERR_FILE>

Check your credentials to OCI. Remember when prompted for the password of your OCI account it is an auth token, not your password (even though the prompt text is misleading).

ZDM_GET_SRC_INFO

If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the source database host is added to the known_hosts file on the ZDM service host. Also, you can verify connectivity by trying to log on via SSH:

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-source-host>
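
You can add the host key non-interactively with ssh-keyscan, or simply accept it once during a manual SSH login. A sketch:

[zdmuser@zdm]$ ssh-keyscan <name-of-source-host> >> ~/.ssh/known_hosts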

ZDM_GET_TGT_INFO

If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the target database host is added to the known_hosts file on the ZDM service host.

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-target-host>

Other Blog Posts In This Series

Zero Downtime Migration – Install And Configure ZDM

To use Zero Downtime Migration (ZDM) I must install a Zero Downtime Migration service host. It is the piece of software that will control the entire process of migrating my database into Oracle Cloud Infrastructure (OCI). The requirements are:

  • Must be running Oracle Linux 7 or newer.
  • 100 GB disk space according to the documentation. I could do with way less – basically there should be a few GBs for the binaries and then space for configuration and log files.
  • SSH access (port 22) to each of the database hosts.
  • Recommended to install it on a separate server (although technically possible to use one of the database hosts).

Create and configure server

In my example, I will install the ZDM service host on a compute instance in OCI. There are no specific requirements for CPU or memory, and ZDM only acts as a coordinator – all the work is done by the database hosts – so I can use the smallest compute shape available. I am using OCI CLI* to create the compute instance, which you can install on your own computer or use in a Cloud Shell. But I could just as well use the web interface or REST APIs.

First, I will define a few variables that you have to change to your needs. DISPLAYNAME is the hostname of my compute instance – and also the name I see in the OCI webpage. AVAILDOM is the availability domain into which the compute instance is created. SHAPE is the compute shape:

DISPLAYNAME=zdm
AVAILDOM=OUGC:EU-FRANKFURT-1-AD-1
SHAPE=VM.Standard2.1

When I create a compute instance using the webpage these are the values: Screenshot of OCI webpage where display name, availability domain and shape are shown

In addition, I will define the OCID of my compartment, and also the OCID of the subnet that I will use. I am making sure to select a subnet that I can reach via SSH from my own computer. Last, I have the public key file:

COMPARTMENTID="..."
SUBNETID="..."
PUBKEYFILE="/path/to/key-file.pub"

Because I want to use the latest Oracle Linux image I will query for the OCID of that and store it in a variable:

IMAGEID=`oci compute image list \
   --compartment-id $COMPARTMENTID \
   --operating-system "Oracle Linux" \
   --sort-by TIMECREATED \
   --query "data[?contains(\"display-name\", 'GPU')==\\\`false\\\`].{DisplayName:\"display-name\", OCID:\"id\"} | [0]" \
   | grep OCID \
   | awk -F'[\"|\"]' '{print $4}'`

And now I can create the compute instance:

oci compute instance launch \
 --compartment-id $COMPARTMENTID \
 --display-name $DISPLAYNAME \
 --availability-domain $AVAILDOM \
 --subnet-id $SUBNETID \
 --image-id $IMAGEID \
 --shape $SHAPE \
 --ssh-authorized-keys-file $PUBKEYFILE \
 --wait-for-state RUNNING

The command will wait until the compute instance is up and running because I used the wait-for-state RUNNING option. Now, I can get the public IP address so I can connect to the instance:

VMID=`oci compute instance list \
  --compartment-id $COMPARTMENTID \
  --display-name $DISPLAYNAME \
  --lifecycle-state RUNNING \
  | grep \"id\" \
  | awk -F'[\"|\"]' '{print $4}'`
oci compute instance list-vnics \
 --instance-id $VMID \
 | grep public-ip \
 | awk -F'[\"|\"]' '{print $4}'

Prepare Host

The installation process is described in the documentation, which you should visit to get the latest changes. Log on to the ZDM service host as opc using the public IP address. By using -o ServerAliveInterval=300 I can avoid getting kicked off all the time:

ssh -o ServerAliveInterval=300 -i [key-file] opc@[ip-address]

Now, switch to root and install required packages:

[opc@zdm]$ sudo su -
[root@zdm]$ yum -y install \
  gcc \
  kernel-devel \
  kernel-headers \
  dkms \
  make \
  bzip2 \
  perl \
  glibc-devel \
  expect \
  zip \
  unzip \
  kernel-uek-devel-$(uname -r)

Create a ZDM group and user:

[root@zdm]$ groupadd zdm ; useradd -g zdm zdmuser

Make it possible to SSH to the box as zdmuser. I will just reuse the SSH keys from opc:

[root@zdm]$ cp -r /home/opc/.ssh /home/zdmuser/.ssh ; chown -R zdmuser:zdm /home/zdmuser/.ssh

Create directory for Oracle software and change permissions:

[root@zdm]$ mkdir /u01 ; chown zdmuser:zdm /u01

Edit the hosts file, and ensure name resolution works to the source host (srchost) and target host (tgthost):

[root@zdm]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@zdm]$ echo -e "[ip address] tgthost" >> /etc/hosts

Install And Configure ZDM

Now, to install ZDM I will log on as zdmuser and set the environment in my .bashrc file:

[root@zdm]$ su - zdmuser
[zdmuser@zdm]$ echo "INVENTORY_LOCATION=/u01/app/oraInventory; export INVENTORY_LOCATION" >> ~/.bashrc
[zdmuser@zdm]$ echo "ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_BASE=\$ORACLE_BASE; export ZDM_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_HOME=\$ZDM_BASE/zdm19; export ZDM_HOME" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_INSTALL_LOC=/u01/zdm19-inst; export ZDM_INSTALL_LOC" >> ~/.bashrc
[zdmuser@zdm]$ source ~/.bashrc

Create directories:

[zdmuser@zdm]$ mkdir -p $ORACLE_BASE $ZDM_BASE $ZDM_HOME $ZDM_INSTALL_LOC

Next, download the ZDM software into $ZDM_INSTALL_LOC.

Once downloaded, start the installation:

[zdmuser@zdm]$ $ZDM_INSTALL_LOC/zdminstall.sh setup \
  oraclehome=$ZDM_HOME \
  oraclebase=$ZDM_BASE \
  ziploc=$ZDM_INSTALL_LOC/zdm_home.zip -zdm

And it should look something similar to this: Screenshot of a successful ZDM installation

The installation process will output some warnings which are ignorable:

[WARNING] [INS-42505] The installer has detected that the Oracle Grid Infrastructure home software at (/u01/app/oracle/zdm19) is not complete.
[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
[WARNING] [INS-41875] Oracle ASM Administrator (OSASM) Group specified is same as the users primary group.
[WARNING] [INS-32022] Grid infrastructure software for a cluster installation must not be under an Oracle base directory.
[WARNING] [INS-32055] The Central Inventory is located in the Oracle base.
[WARNING] [INS-13014] Target environment does not meet some optional requirements.

And there are references to scripts that I should execute, but this is not needed, as mentioned in the documentation. ZDM piggybacks on Grid Infrastructure components, which is why the messages are displayed: ZDM incorrectly informs you to run root scripts - don't run the scripts. Start the ZDM service:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice start

Which should produce something like this: Sample output when starting ZDM service (jwcctl debug jwc). And, optionally, I can verify the status of the ZDM service:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice status

The ZDM service is running

Configure Network Connectivity

The ZDM service host must communicate with the source and target hosts via SSH. For that purpose I need private key files for each of the hosts. The private key files must be without a passphrase, in RSA/PEM format, and I have to put them at /home/zdmuser/.ssh/[host name]. In my demo, the files are to be named:

  • /home/zdmuser/.ssh/srchost
  • /home/zdmuser/.ssh/tgthost

Ensure that only zdmuser can read them:

[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/srchost
[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/tgthost

Now, I will verify the connection. In my example I will connect to opc on both database hosts, but you can change it if you like:

[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/srchost opc@srchost
[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/tgthost opc@tgthost

If you get an error when connecting ensure the following:

  • The public key is added to /home/opc/.ssh/authorized_keys on both database hosts (change opc if you are connecting as another user)
  • The key files are in RSA/PEM format (the private key file should start with -----BEGIN RSA PRIVATE KEY-----)
  • The key files are without a passphrase

That’s It

Now, I have a working ZDM service host. Previously, I have prepared my source and target environments which means that I am ready to start the migration process. Stay tuned!

References

* I found this blog post by Michał very useful when figuring out how to use OCI CLI to create a compute instance.

Other Blog Posts In This Series

Zero Downtime Migration – Preparations

In the following posts I will demo how to migrate a database into OCI. To make the demo as simple as possible my source database will already be running in OCI and my target will be a VM DB System.

Prepare source environment

I will create a VM DB System that can act as my source database. I will call it srchost and the database is called CDB1. Using OCI CLI it can be done like this:

oci db system launch \
  --compartment-id "..." \
  --availability-domain "..." \
  --subnet-id "..." \
  --shape "VM.Standard2.4" \
  --cpu-core-count 4 \
  --database-edition "ENTERPRISE_EDITION" \
  --admin-password "..." \
  --ssh-authorized-keys-file "/path/to/key-file.pub" \
  --license-model "BRING_YOUR_OWN_LICENSE" \
  --db-name "CDB1" \
  --pdb-name "SALES" \
  --storage-management "ASM" \
  --node-count 1 \
  --initial-data-storage-size-in-gb 2048 \
  --display-name "SRCHOST" \
  --hostname "SRCHOST" \
  --db-version "19.0.0.0"

Once completed, I can log on as opc and switch to root and copy the authorized_keys file to allow SSH connection as oracle:

[opc@srchost]$ sudo su -
[root@srchost]$ rm -f /home/oracle/.ssh/authorized_keys
[root@srchost]$ cp /home/opc/.ssh/authorized_keys /home/oracle/.ssh/authorized_keys
[root@srchost]$ chown -R oracle:oinstall /home/oracle/.ssh

Now, I will ensure that the database is in ARCHIVELOG mode; it is a requirement for using Data Guard:

SELECT log_mode FROM v$database;

Since my database is running 12.2 or higher, I have to ensure that there is a TDE Keystore. This applies even if the database is not encrypted. You don't need a license to create a TDE Keystore – not until you start encrypting stuff. The keystore must be OPEN, and the type must be either AUTOLOGIN, LOCAL_AUTOLOGIN or PASSWORD. In a CDB, this applies to CDB$ROOT and all PDBs:

SELECT con_id, status, wallet_type FROM v$encryption_wallet;

If status is OPEN_NO_MASTER_KEY it means that no TDE master encryption key has been created and you will need to create one. You can also find instructions on how to create a TDE keystore in the ZDM documentation.
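
As a sketch, if the keystore is open but no master key exists yet, you can create one in CDB$ROOT for all containers. The statement below is the standard ADMINISTER KEY MANAGEMENT command; replace the keystore password placeholder:

SQL> administer key management set key force keystore identified by <keystore-pwd> with backup container = all;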

Prepare target environment

In my demo I will be targeting a VM DB System which I will call tgthost. For the migration to work, I have to create a placeholder database on the target host. During migration the placeholder database is overwritten by the source database. On a VM DB System this is easy; I just use the database that gets created automatically as placeholder.

DB_NAME must be the same in the two databases, and hence in my demo I must set db-name in the OCI CLI command below to CDB1 (the name of my source database). In contrast, DB_UNIQUE_NAME must be different, but this will likely not be a problem because OCI automatically generates a semi-unique DB_UNIQUE_NAME (if you are a strong believer in Murphy's Law, you should double-check it).

The target database patch level can be the same or higher than the source. If higher, you just have to manually execute datapatch after the switch-over (I will put this information into a later blog post). By setting db-version to 19.0.0.0 I automatically get the latest available Release Update.

Also, the SYS password has to match in the two databases, so be sure to set admin-password to your source database SYS password. Finally, you should set storage-management to ASM. This will cause OCI to install Grid Infrastructure as well and set up a SCAN listener.

In my demo I end up with this OCI CLI command:

oci db system launch \
  --compartment-id  "..." \
  --availability-domain "..." \
  --subnet-id "..." \
  --shape "VM.Standard2.4" \
  --cpu-core-count 4 \
  --database-edition "ENTERPRISE_EDITION" \
  --admin-password "..." \
  --ssh-authorized-keys-file /path/to/key-file.pub \
  --license-model "BRING_YOUR_OWN_LICENSE" \
  --db-name "CDB1" \
  --pdb-name "SALES" \
  --storage-management "ASM" \
  --node-count 1 \
  --initial-data-storage-size-in-gb 2048 \
  --display-name "tgthost" \
  --hostname "tgthost" \
  --db-version "19.0.0.0"

As opc, switch to root and enable oracle to log on via SSH:

[opc@tgthost]$ sudo su -
[root@tgthost]$ rm -f /home/oracle/.ssh/authorized_keys
[root@tgthost]$ cp /home/opc/.ssh/authorized_keys /home/oracle/.ssh/authorized_keys
[root@tgthost]$ chown -R oracle:oinstall /home/oracle/.ssh

The requirements to the TDE Keystore apply to the target environment as well, but since I am using an OCI DB System, that will be set up correctly when the system is created.

Configure Connectivity

I need to ensure that the source host can resolve the network name of the target host. It is important to add two entries – one with the host name and one with the SCAN name (they should both point to the target host):

[root@srchost]$ echo -e "[ip address] tgthost" >> /etc/hosts
[root@srchost]$ echo -e "[ip address] tgthost-scan" >> /etc/hosts

Also, I need to ensure that I can connect to the target database from the source host over SQL*Net. Just use the default service of the target CDB, and you can get it using lsnrctl:

[oracle@tgthost]$ lsnrctl status
[oracle@srchost]$ sqlplus system@tgthost-scan/[target-cdb-service-name]

Now, I do it the other way around. Ensure that the target host can resolve the network name of the source host:

[root@tgthost]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@tgthost]$ echo -e "[ip address] srchost-scan" >> /etc/hosts

Ensure that I can connect to the source database from the target host:

[oracle@srchost]$ lsnrctl status
[oracle@tgthost]$ sqlplus system@srchost-scan/[source-cdb-service-name]

Create Sample Data

I will connect to the source database:

[oracle@srchost]$ export ORACLE_SID=CDB1
[oracle@srchost]$ export ORACLE_PDB_SID=SALES
[oracle@srchost]$ sqlplus / as sysdba

Note that ORACLE_PDB_SID works in 18.5.0 and later. For lower versions you can just use ALTER SESSION SET CONTAINER=SALES instead. Now, create some sample data, so I can verify that the migration actually works:

CREATE USER zdmtest IDENTIFIED BY [your-password];
GRANT CREATE TABLE, CONNECT TO zdmtest;
GRANT SELECT ON v_$instance TO zdmtest;
GRANT SELECT ON v_$database TO zdmtest;
ALTER USER zdmtest DEFAULT TABLESPACE users;
ALTER USER zdmtest QUOTA UNLIMITED ON users;

CREATE TABLE zdmtest.items (
   id      NUMBER,
   item    VARCHAR2(20),
   price   NUMBER
);
INSERT INTO zdmtest.items VALUES (1, 'Apple', 2);
INSERT INTO zdmtest.items VALUES (2, 'Banana', 1);
INSERT INTO zdmtest.items VALUES (3, 'Orange', 4);
COMMIT;
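The grants on v_$instance and v_$database make it easy to check, after the migration, which database and host the user is connected to. A minimal verification sketch, run against the target once the switch-over is done ([target-pdb-service-name] is a placeholder):

[oracle@tgthost]$ sqlplus zdmtest@tgthost-scan/[target-pdb-service-name]
SQL> SELECT host_name FROM v$instance;
SQL> SELECT name, db_unique_name FROM v$database;
SQL> SELECT COUNT(*) FROM items;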

That’s It

My source database is now ready to be migrated to the new VM DB System I just created in OCI. In the next blog post I will install and configure ZDM. Stay tuned!

Other Blog Posts In This Series

Zero Downtime Migration

When you need to migrate into OCI we have a cool – and free – tool that you can use: Zero Downtime Migration (ZDM). In short: ZDM builds a standby database in OCI and when you are ready to migrate, all it takes is a simple switch-over. My fellow Product Manager, Ricardo Gonzalez, has made a good video that gives you a perfect introduction in less than two minutes.

In this blog post series I will take you through the entire process using version 19.2 (the latest at time of writing). In the end you will know all there is to know – and you can start migrating your databases into OCI. Granted, in this first blog post there will be a lot of text but I want you to have all the details. Take a deep breath – here we go!

You can use Zero Downtime Migration to easily migrate to Oracle Cloud Infrastructure - OCI

Platforms

Your source database can be located on-premises, in OCI Classic (you know, our old cloud) or already in OCI. The latter scenario is useful when you want to migrate between regions or locations or between system types (like from Virtual Machine to Bare Metal).

The operating system must be Linux. No other platforms are supported. And you can migrate into one of these platforms:

  • Virtual Machine DB System
  • Bare Metal DB System
  • Exadata DB System
  • Exadata Cloud at Customer

Release And Edition

You can use ZDM if the source database is 11.2.0.4 or newer, and the target release must be the same. If you need to migrate into a newer database release, you must upgrade the database manually afterwards, or use another approach. However, it is possible to migrate to a higher patch level; you just have to run datapatch manually afterwards.

If your source database is Enterprise Edition the database will be migrated with zero downtime. For Standard Edition the source database will be offline during the entire migration. The reason is that Data Guard is used behind the scenes, and it is not licensed on Standard Edition.

Last, you can’t migrate between editions, e.g. from Standard Edition to Enterprise Edition.

Architecture

Your source database can be non-CDB or CDB.

  • Non-CDB databases will be migrated to non-CDB. In a future release of ZDM we plan to include PDB conversion as well.
  • CDBs will be migrated as-is, meaning with all PDBs.

Your source database can be single instance, RAC One Node or RAC.

  • Single instance databases are migrated to single instance or, optionally, to a RAC database.
  • RAC One Node and RAC databases are always migrated into a RAC database.

Encryption

Your source database can be encrypted, but it is not required. Unencrypted databases will be encrypted on-the-fly once they are created in OCI. The initial backup of the source database is sent over an encrypted connection to OCI Object Storage, and redo is transferred over encrypted SQL*Net.

Be aware that for unencrypted databases (or, in fact, databases that are not licensed for the Advanced Security Option) you are not allowed to fall back to the source environment after the switch-over. The target environment will be encrypted using TDE tablespace encryption, and thus the redo will be encrypted. To decrypt the redo at the source environment, you need a license for the Advanced Security Option.

Network Connectivity

  • The ZDM service host needs SSH access (port 22) to both the source and target database hosts.
  • The source and target database hosts need access to OCI Object Storage over HTTPS (port 443).
  • SQL*Net connections (port 1521) are needed between the two database hosts. Strictly speaking, the connection from the target database host back to the source is only needed if you want the option of falling back to the source database after the switch-over (a quick sanity check is sketched below).
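If you want to verify the paths up front, something like this could work. This is just a sketch; nc and curl are assumed to be available, the host names follow the examples used in this series, and [region] is a placeholder:

nc -zv srchost 22                                          # from the ZDM service host
nc -zv tgthost 22                                          # from the ZDM service host
curl -sI https://objectstorage.[region].oraclecloud.com    # from the source and target database hosts
nc -zv tgthost-scan 1521                                   # from the source database host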

Other Blog Posts In This Series

This is the introduction blog post in this series. Over the next few days the other blog posts will follow. Stay tuned!

Upgrading in the cloud – VM DB Systems – What about downgrade?

In a previous blog post I showed how you could upgrade a 12.2 PDB by plugging it into a 19c CDB. But what about downgrade? Yeah, downgrade. You know, that really cool feature that you never practice, but you know you should.

In the previous blog post, I used the CDB that gets automatically created when you deploy a new 19c VM DB System, and it comes with COMPATIBLE set to 19.0.0 – the default. When you provision a new VM DB System there is no way to control that parameter. Thus, when I plug my old release PDB into the new release CDB, the COMPATIBLE parameter is automatically raised, and I have lost the possibility of doing a downgrade. The only option to get back to the old release would be a Data Pump export.
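To see where a given CDB stands, a minimal check of the current setting:

sqlplus / as sysdba
SQL> SELECT name, value FROM v$parameter WHERE name = 'compatible';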

If you want to preserve the possibility of doing a database downgrade, I strongly recommend you switch to Bare Metal DB Systems or ExaCS which are much more flexible. But if you insist on using VM DB Systems there is an option – it is cumbersome – but doable. And believe it or not – after working with Oracle Database for so many years it was the first time ever that I had to do a downgrade – not even in a lab or a test environment (which became fairly obvious after I had asked for advice for the 100th time that day).

Drop pre-created 19c database

To get a 19c CDB with a non-default COMPATIBLE setting we will drop the pre-created database and replace it with a backup that has the proper COMPATIBLE setting. This is supported and the same approach is used in the whitepaper “Hybrid Data Guard to Oracle Cloud Infrastructure”. It is a good read and it has a lot of script examples that I stole… oh… got inspired by.

Connect to the new release VM DB System and ensure that the environment variable ORACLE_UNQNAME is set to the DB_UNIQUE_NAME of the database:

echo $ORACLE_UNQNAME

If it is not set, you can get it from srvctl:

srvctl config database
export ORACLE_UNQNAME=...

Generate a script that can delete all data files, temp files, redo logs, and control files:

SET HEADING OFF LINESIZE 999 PAGESIZE 999 FEEDBACK OFF TRIMSPOOL ON TIMING OFF
SPOOL /tmp/db_replace_files.lst
SELECT 'asmcmd rm ' || name   FROM V$DATAFILE
UNION ALL
SELECT 'asmcmd rm ' || name   FROM V$TEMPFILE
UNION ALL
SELECT 'asmcmd rm ' || member FROM V$LOGFILE
UNION ALL
SELECT 'asmcmd rm ' || name   FROM V$CONTROLFILE;
SPOOL OFF
host chmod 777 /tmp/db_replace_files.lst
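For illustration only (the ASM file names below are made up), the spool file will contain lines like these, along with some SQL*Plus output that is not needed:

asmcmd rm +DATA/CDB1_XXX/DATAFILE/system.262.1034321123
asmcmd rm +DATA/CDB1_XXX/TEMPFILE/temp.263.1034321125
asmcmd rm +RECO/CDB1_XXX/ONLINELOG/group_1.257.1034321103
asmcmd rm +RECO/CDB1_XXX/CONTROLFILE/current.256.1034321101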

You must edit the script and remove everything that is not an asmcmd command. Next, stop the database:

srvctl stop database -d $ORACLE_UNQNAME -o immediate

And log on as grid and delete the files using the script we just created:

. /tmp/db_replace_files.lst

As oracle, now restart the database instance in NOMOUNT mode (we can’t really go further since we nuked the control files) and set COMPATIBLE to the same setting as in the old release CDB.

srvctl start database -db $ORACLE_UNQNAME -o NOMOUNT

sql / as sysdba
SQL> ALTER SYSTEM SET COMPATIBLE='12.2.0' SCOPE=SPFILE;
SQL> CREATE PFILE FROM SPFILE;

srvctl stop database -db $ORACLE_UNQNAME -o immediate
srvctl start database -db $ORACLE_UNQNAME -o nomount

sql / as sysdba
SQL> SHOW PARAMETER COMPATIBLE

Now we have a new release instance running with the old COMPATIBLE setting. Obviously, it is of no use – yet.

Starting a 19c instance with a lower COMPATIBLE setting

Create a backup of old release CDB

I will use the old release CDB as the source for my new release CDB and thus preserve my COMPATIBLE setting. Obviously, the old release CDB must be upgraded to the new release, so let us first ensure that the source CDB can actually be upgraded. Use AutoUpgrade in ANALYZE and FIXUPS mode as described in a previous blog post. The ANALYZE and FIXUPS modes must be executed on the source system, because the target system will only be able to open the database in UPGRADE mode, and these steps must be executed while the database is running in normal mode.
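Assuming a config file for the source CDB along the lines of the one shown earlier in this series (here hypothetically named CDB1.cfg), the two runs on the source system would be:

java -jar autoupgrade.jar -config CDB1.cfg -mode analyze
java -jar autoupgrade.jar -config CDB1.cfg -mode fixups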

Next, I will create a File Storage Service that I can use to share files between the two VM DB Systems. The File Storage Service is really nice because the transfer speed in and out of the service depends on the network bandwidth of your VM DB System. So, if you add more CPUs to the system, you get more network bandwidth to the service automatically. It is really easy to create a File Storage Service and it is very well documented, so I will skip that part here. After that you can mount the file system in your systems (as opc):

sudo mkdir -p /mnt/db-downgrade-122
sudo mount x.x.x.x:/db-downgrade-122 /mnt/db-downgrade-122
sudo chmod 777 /mnt/db-downgrade-122

I can now take a backup of the CDB and store it directly in my NFS mount point:

rman target / 

RMAN> BACKUP DATABASE ROOT FORMAT '/mnt/db-downgrade-122/cdbroot_%U' PLUGGABLE DATABASE 'PDB$SEED' FORMAT '/mnt/db-downgrade-122/pdbseed_%U' PLUS ARCHIVELOG FORMAT '/mnt/db-downgrade-122/arch_%U';
RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/mnt/db-downgrade-122/cf_%U';

Since the database is encrypted, we also need a copy of the keystore (or wallet). An easy solution would be to put the keystore files into the File Storage Service, but a safer approach is to transfer the files directly using scp. Although the keystore is protected by a password, it is still safer to keep them separated.

At time of writing, in OCI you can find the location of the keystore in sqlnet.ora.

cat $ORACLE_HOME/network/admin/sqlnet.ora | grep -i encryption_wallet

But that will change at some point in time because as of Oracle Database release 19c the sqlnet.ora parameter ENCRYPTION_WALLET_LOCATION is deprecated. You might have to look at the database parameter WALLET_ROOT instead.
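A minimal way to check both places from SQL*Plus (on the old release, WALLET_ROOT simply comes back empty):

sqlplus / as sysdba
SQL> SHOW PARAMETER wallet_root
SQL> SELECT wrl_parameter FROM v$encryption_wallet;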

For now, just note down the location of the keystore.

Restore old release CDB and upgrade

Now we can restore the backup of the 12.2 database using the 19c binaries. This is supported and in fact newer releases of RMAN can always restore lower release backups. However, normally RMAN will try to open the database in normal mode which we can’t do because of the version mismatch. I will use the NOOPEN keyword which causes RMAN to leave the database in MOUNT mode and I can manually open the database with RESETLOGS and in UPGRADE mode.

But first we must copy the keystore files from the source database to the new release DB System. In my example I am also copying the auto-login keystore which you shouldn’t do if you are using local auto-login keystores. In that situation the auto-login keystore should be re-created at the new release system:

cd [keystore-location-on-target]
scp -i [ssh-key] oracle@[source-host]:[keystore-location-on-source]/cwallet.sso cwallet.sso
scp -i [ssh-key] oracle@[source-host]:[keystore-location-on-source]/ewallet.p12 ewallet.p12

Now let’s do the restore and open the database in upgrade mode:

rman auxiliary /

RMAN> DUPLICATE DATABASE TO CDB1 AS ENCRYPTED NOOPEN SKIP PLUGGABLE DATABASE SALES BACKUP LOCATION '/mnt/db-downgrade-122/';

And in the end, you can see that RMAN does not open the database:

Using the NOOPEN keyword you can prevent RMAN from opening the database at the end of the restore and recover operation

Then, you can open the database in UPGRADE mode and with RESETLOGS option to complete the restore. Also, drop the skipped pluggable databases from the data dictionary:

ALTER DATABASE OPEN RESETLOGS UPGRADE;
DROP PLUGGABLE DATABASE SALES INCLUDING DATAFILES;

Finally, you can use AutoUpgrade in UPGRADE mode to upgrade the CDB to the new release. You now have a new target system running on 19c but with a lower COMPATIBLE setting.
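As a sketch, reusing the hypothetical CDB1.cfg config file from the analyze and fixups runs (with the 19c target Oracle Home filled in):

java -jar autoupgrade.jar -config CDB1.cfg -mode upgrade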

If you want to know more about the UPGRADE mode, have a look in the documentation.

Downgrading a PDB

Previously, we laid the groundwork for being able to do a downgrade. For the following, I am assuming that you have already upgraded your PDB to 19c and now you have ended up in a big doo doo and have to downgrade.

Downgrading on VM DB Systems is also slightly more complicated than on other systems. Remember that VM DB Systems only support one Oracle Home (the one that comes deployed automatically), which means that we must move the database back to the source system. For that operation we can’t use Data Guard, RMAN, or any other fancy approach, because they only work to the same or a newer release. So, we will have to do an old-school cold copy of the database – and that requires additional downtime. But let’s get started!

I will create a file that gives me all the commands I need to copy the data files out of ASM and into my File Storage Service (you could also scp them directly to the old release system):

SET HEADING OFF LINESIZE 999 PAGESIZE 999 FEEDBACK OFF TRIMSPOOL ON TIMING OFF
SPOOL /tmp/db_downgrade_files.lst
SELECT 'asmcmd cp ' || name || ' ''/mnt/dbbackupstaging' || SUBSTR(name, INSTR(name, '/', -1 )) || '''' FROM V$DATAFILE
UNION
SELECT 'chown oracle:oinstall /mnt/dbbackupstaging' || SUBSTR(name, INSTR(name, '/', -1 )) FROM V$DATAFILE ORDER BY 1;
SPOOL OFF

And then I can proceed with the actual downgrade. I need to ensure that the unified audit trail is emptied before the downgrade:

ALTER PLUGGABLE DATABASE SALES CLOSE;
ALTER PLUGGABLE DATABASE SALES OPEN DOWNGRADE;
EXEC DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, FALSE);
SPOOL /tmp/db_downgrade.lst
SET TERMOUT ON TIMING ON SERVEROUT ON ECHO ON
@?/rdbms/admin/catdwgrd.sql
SPOOL OFF

Unplug the PDB. Because the PDB is encrypted, I have to specify a password that protects the sensitive information inside the manifest file:

ALTER SESSION SET CONTAINER=CDB$ROOT;
ALTER PLUGGABLE DATABASE SALES CLOSE;
ALTER PLUGGABLE DATABASE SALES UNPLUG INTO '/mnt/db-downgrade-122/manifest_sales.xml' ENCRYPT USING [a-secret-password];

Now we can copy the data files from the local storage onto the File Storage Service, so we can use them at the source system. Use the commands that we generated previously:

asmcmd ...
chown ...
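With a made-up file name, each pair of generated commands looks like this:

asmcmd cp +DATA/CDB1_XXX/DATAFILE/sales_users.264.1034321130 '/mnt/dbbackupstaging/sales_users.264.1034321130'
chown oracle:oinstall /mnt/dbbackupstaging/sales_users.264.1034321130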

Now switch to the old release system and create the PDB from the manifest file. I will use the data files right off the File Storage Service, and optionally I can move them afterwards as an online operation (you might not want to do that, but I ran out of disk space):

CREATE PLUGGABLE DATABASE SALES USING '/mnt/db-downgrade-122/manifest_sales.xml' SOURCE_FILE_DIRECTORY='/mnt/db-downgrade-122' NOCOPY KEYSTORE IDENTIFIED BY [keystore-password] DECRYPT USING [a-secret-password];

Open the PDB in UPGRADE mode and open the keystore:

ALTER PLUGGABLE DATABASE SALES OPEN UPGRADE;
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE CONTAINER=all;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY [keystore-password] CONTAINER=all;

Switch to the SALES PDB and complete the downgrade:

ALTER SESSION SET CONTAINER=SALES;
SET TERMOUT ON ECHO ON TIMING ON
SPOOL /home/oracle/sales_catreload.log
@?/rdbms/admin/catrelod.sql
SPOOL OFF

Restart the PDB, recompile invalid objects, and gather fresh statistics:

ALTER PLUGGABLE DATABASE SALES CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE SALES OPEN;
@?/rdbms/admin/utlrp.sql
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECT_STATS;

Check the state of the Oracle Data Dictionary:

SET SQLFORMAT ANSICONSOLE LINES 300
SELECT COMP_ID, COMP_NAME, VERSION, STATUS FROM DBA_REGISTRY ORDER BY MODIFIED;

And there you have it. Not exactly super easy, which is why I highly recommend that you look at Bare Metal DB Systems or Exadata DB Systems if you need to be able to do downgrades (or be prepared to use Data Pump instead).

Other Posts in This Series