Minimal Downtime Migration with Full Transportable Export Import and Incremental Backups

If you need to migrate a database to the cloud – or anywhere else, for that matter – you should consider using Full Transportable Export Import (FTEX) and incremental backups. Even for really large databases – tens or hundreds of TB – you can still migrate with minimal downtime. And it works across different endian formats.

FTEX uses transportable tablespaces and the solution has these benefits:

  • You can implicitly upgrade the database as part of the migration
  • You can migrate from a non-CDB and into a PDB
  • You can keep downtime at a minimum by using frequent incremental backups
  • You can migrate across endianness – e.g. from AIX or Solaris to Oracle Linux

How Does It Work?

The concept is explained in this video on our YouTube Channel, which also includes a demo.

To make the backup and convert process really easy, Oracle provides Perl scripts that automate the entire process. You can download the scripts from My Oracle Support: V4 PERL Scripts to reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

What You Need to Be Aware Of

Transportable Tablespaces

For a complete list of limitations on transporting data, have a look in the documentation. In addition, there are some limitations specific to transportable tablespaces. The most notable are:

  • The character set and national character set should be the same. If not, there are a few options available, however.
  • No columns can be encrypted with TDE Column Encryption. The only option is to remove the encryption before migration and re-encrypt afterwards.
  • TDE Tablespace Encryption is supported for same-endian migration. If you need to go across endianness, you must decrypt the tablespaces and re-encrypt after migration. Remember, in Oracle Database 12.2 you can encrypt tablespaces online (see the sketch after this list).
  • If you are migrating across endianness, you must convert the data files, and you must have disk space to hold a copy of all the data files. In addition, you should perform the convert on the platform that has the best I/O system and the most CPUs. Typically, this is the cloud platform, which also offers scaling possibilities.
  • The database timezone and timezone file version must be identical.
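
A minimal sketch of the online decrypt/re-encrypt approach mentioned above, assuming Oracle Database 12.2 or later, Oracle Managed Files (so no FILE_NAME_CONVERT clause is needed), and a tablespace named USERS:

SQL> -- On the source, before a cross-endian migration
SQL> ALTER TABLESPACE users ENCRYPTION ONLINE DECRYPT;
SQL> -- On the target, after the migration
SQL> ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES256' ENCRYPT;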

Full Transportable Export Import

  • Source database must be 11.2.0.3 or higher
  • Target database must be 12.1.0.1 or higher

If you can’t meet these requirements, you can still use this solution. But instead of doing an FTEX, you need to use Data Pump in another way – for instance, a traditional transportable tablespace export/import.

Incremental Backups Using Perl Scripts

  • Source database must be 10.2.0.3 or higher
  • Target database must be 11.2.0.4 or higher

In addition, it is strongly recommended to use Block Change Tracking (BCT) on the source database. Note that this is an Enterprise Edition feature. If you don’t enable BCT, the incremental backups will be much slower, because RMAN has to scan every single data block for changes. With BCT the database keeps track of changes in a special file. When RMAN backs up the database, it simply gets a list of data blocks to include from the change tracking file.
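
Enabling BCT is a one-off operation. A minimal sketch – the file location is just an example, and with Oracle Managed Files you can omit the USING FILE clause:

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct/change_tracking.f';
SQL> -- Verify that change tracking is now enabled
SQL> SELECT status, filename FROM v$block_change_tracking;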

Conclusion

By using a combination of Full Transportable Export Import and incremental backups, you can migrate even huge databases to the cloud. And it even works for cross-endian migrations, like AIX or Solaris to Oracle Linux.

If you want to learn more about endianness and transportable tablespaces, you should watch this video on our YouTube Channel.

Move to the Cloud – Webinar

Yesterday, Mike Dietrich and I gave the final webinar in the Oracle Database 19c Upgrade Virtual Classroom series. It was about Move to the Cloud – not only for techies. Now, I say final – but we all know you should never say never. And in this case, it applies to final as well. We are already talking about subjects for a seventh webinar. If you have an interesting topic that you think we should cover, get in touch with me.

Oracle Database 19c Upgrade Virtual Classroom

Unfortunately, due to a technical glitch we skipped the part about migrating using transportable tablespaces and full transportable export/import. We uploaded the missing part to YouTube, so you can watch it.

For those interested, you can now download the slides. We had a lot of information to share, so browse through the deck to find the many hidden slides. Typically, they contain references and links to more information about a specific topic.

Within a week it should be possible to watch a recording of the webinar.

The Demos and Videos

The presentation we gave was a brand new one. We used as many demos and videos as we could – or rather, had time to prepare. We will post them on our YouTube channel as soon as possible. I suggest that you subscribe to it, so you receive word as soon as new content arrives. Further, we want to enhance the presentation even more, so we will be putting in more videos and demos. Let me know if there is a topic that could improve with a video or demo.

Thank You

Thanks to everyone who participated yesterday. Happy migrating!

Debut on the Big Stage

Mike Dietrich and I will give two webinars in mid-October on database migrations. One of them will focus solely on migration to Oracle Cloud Infrastructure. The webinars are part of the Oracle Database 19c Upgrade Virtual Classroom series. If you are interested, you have to register.

Oracle Database 19c Upgrade Virtual Classroom

Migration Strategies: Insights, Tips and Secrets

Date: Tuesday 13 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Now it’s time for us to dig deeper. We’d like to offer you further insights and a deep dive from a technical point of view, starting with Data Pump and Transportable Tablespaces, then Full Transportable Export Import, and adding RMAN Incremental Backups to decrease the downtime. Best practices and real-world experience will round up this two-hour webinar.

Move to the Cloud (for techies)

Date: Thursday 15 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Whether you have databases in a cloud environment, or you plan to lift databases soon, this webinar is for you. We won’t cover cloud solution benefits but will show you how you can migrate your database(s) into the Oracle Cloud. We’ll start with Autonomous and also cover migration into VMs, Bare Metal, OCI, ExaCS and ExaCC. And we’ll look at strategies for minimizing downtime, where ZDM can help. This two-hour webinar is not strictly for technical geeks ‒ but our focus will be on practical migration approaches.

Debut on the Big Stage

These two webinars will be my debut on the big stage. The previous webinars drew huge interest and, unfortunately, some people couldn’t join because the webinar had reached its maximum number of attendees. If you register, I recommend joining early to get your seat.

I have been presenting in person and virtually for some years now. Also, since I joined Oracle in January, I have been doing quite a few presentations. However, this is my first time in front of such a big crowd. I am excited and look forward to it – but also a little intimidated. Luckily, I have super-star Mike Dietrich there to (virtually) hold my hand.

And, finally, I promise you: No marketing slides – just demos and details.

I hope to see you there!

Upgrading in the cloud – VM DB Systems – 11.2.0.4 to 19c (minimal downtime)

This blog post is a follow-up to a previous post. The procedure I described earlier was a simple approach that requires downtime while the entire database is moved from one VM DB System to another. If you have strict downtime requirements, you might not be able to use that approach. In this blog post I will describe an alternative: using incremental backups to significantly lower the downtime required. Instead of doing a full backup while the database is down, my idea is to:

  • Take a level 0 backup while the source database is up and running
  • Restore the database on target system
  • These two steps take time – but I don’t care because the source database is still up
  • Take incremental backup on source database
  • Recover target database using incremental backup
  • Perform final incremental backup/recover after downtime has started

Overview of DB Systems and databases

My source environment is the red environment. The DB System is called SRCHOST11 and it has an 11.2.0.4 database that is called SALES. Due to the restrictions of the VM DB System I have to move the database to a new DB System in order to upgrade it. I have created a brand-new target environment – the green environment – on the release that I want to target. I have named the DB System TGTHOST19 and it has a multitenant database called CDB1. When I am done, the target environment – CDB1 – will also contain a PDB named SALES. The SALES PDB will be the original 11.2.0.4 database that has been upgraded and converted.

For a short period of time I need to spin up a second database instance on the target system. This second – or temporary – instance will be a duplicate of the source database (as non-CDB database) and I will upgrade it to the new release. Then I can plug in the database as a PDB in the precreated CDB database and get rid of the second/temporary instance. You will see how it works later in the blog post.

Backup Database

I need to exchange files between the source and the target systems, and I will use a File Storage service for that. Check out the documentation if you need help creating one – I created one already called upgsales and now I can mount it on my source system:

[opc@srchost11]$ sudo mkdir -p /mnt/upgsales
[opc@srchost11]$ sudo chmod 777 /mnt/upgsales/
[opc@srchost11]$ sudo mount x.x.x.x:/upgsales /mnt/upgsales

While the source database is still open and in use, I will start preparing the backup. First, the password file and wallet:

[oracle@srchost11]$ mkdir -p /mnt/upgsales/backup
[oracle@srchost11]$ cp /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME/ewallet.p12 /mnt/upgsales/
[oracle@srchost11]$ cp $ORACLE_HOME/dbs/orapw$ORACLE_SID /mnt/upgsales/orapw$ORACLE_SID

If you are really concerned about security, you can copy the wallet file directly to the target system – instead of via the File Storage service. The File Storage service itself is secured, but the fewer places you have a copy of the wallet – the better and the safer, I assume. Further, you can also encrypt traffic to and from the File Storage service.
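
If you go for the direct copy, something like this would do – the destination directory is just an example:

[oracle@srchost11]$ scp /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME/ewallet.p12 oracle@tgthost19:/tmp/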

Next, a PFile:

SALES SQL> CREATE PFILE='/mnt/upgsales/init.ora' FROM SPFILE;

And now I start a level 0 backup:

SALES RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/mnt/upgsales/backup/lvl0%U' PLUS ARCHIVELOG FORMAT '/mnt/upgsales/backup/arch%U' ;
SALES RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/mnt/upgsales/backup/controlfile';

Restore Database

On my target system, I need to access the File Storage service as well:

[opc@tgthost19]$ sudo mkdir -p /mnt/upgsales
[opc@tgthost19]$ sudo chmod 777 /mnt/upgsales/
[opc@tgthost19]$ sudo mount x.x.x.x:/upgsales /mnt/upgsales

Next, I will copy the password file and PFile into the target Oracle Home. I need that in order to start the temporary instance. Note, the name of the temporary instance will be SALES – the same as the source database SID:

[oracle@tgthost19]$ cp /mnt/upgsales/init.ora $ORACLE_HOME/dbs/initSALES.ora
[oracle@tgthost19]$ cp /mnt/upgsales/orapwSALES $ORACLE_HOME/dbs/orapwSALES

I also need to copy the wallet:

[oracle@tgthost19]$ mkdir -p /opt/oracle/dcs/commonstore/wallets/tde/SALES
[oracle@tgthost19]$ cp /mnt/upgsales/ewallet.p12 /opt/oracle/dcs/commonstore/wallets/tde/SALES/

And I need to create a directory for audit_file_dest:

[oracle@tgthost19]$ mkdir -p /u01/app/oracle/admin/SALES/adump

Now, I must edit the PFile:

[oracle@tgthost19]$ vi $ORACLE_HOME/dbs/initSALES.ora

And make the following changes:

  • Remove all the double-underscore parameters that contain the memory settings from the last restart. That could for instance be SALES.__db_cache_size.
  • Set audit_file_dest='/u01/app/oracle/admin/SALES/adump'
  • Set control_files='+RECO/sales/controlfile/current.256.1048859635'
  • Set SALES.sga_target=6G
  • Set SALES.pga_aggregate_target=2G
  • Set db_unique_name='SALES'

I don’t have an abundance of memory on this system, so I keep the memory settings modest. Strictly speaking, you don’t have to change db_unique_name, but I am doing it so it will be easier to clean up afterwards.

While I work on the temporary instance, I must shut down the other database – the pre-created one that eventually will hold the PDB. Most likely there is not enough memory on the system to support two databases:

[oracle@tgthost19]$ $ORACLE_HOME/bin/srvctl stop database -db $ORACLE_UNQNAME

Let’s start the temporary instance in NOMOUNT mode. Remember to set the environment:

[oracle@tgthost19]$ export ORACLE_UNQNAME=SALES
[oracle@tgthost19]$ export ORACLE_SID=SALES
[oracle@tgthost19]$ sql / as sysdba

SALES SQL> STARTUP NOMOUNT

And finally, I can start the restore using RMAN. Once the database is mounted, I must open the keystore; otherwise, the database can’t perform recovery. Then, I can use the catalog command to find the backup pieces in my staging area. And finally, do the restore:

[oracle@tgthost19]$ rman target /

SALES RMAN> RESTORE CONTROLFILE FROM '/mnt/upgsales/backup/controlfile';
SALES RMAN> ALTER DATABASE MOUNT;
SALES RMAN> sql 'ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN FORCE KEYSTORE IDENTIFIED BY <SALES-keystore-password>';
SALES RMAN> sql "ADMINISTER KEY MANAGEMENT CREATE LOCAL AUTO_LOGIN KEYSTORE FROM KEYSTORE ''/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME/'' IDENTIFIED BY <SALES-keystore-password>";
SALES RMAN> CATALOG START WITH '/mnt/upgsales/backup' NOPROMPT;
SALES RMAN> RESTORE DATABASE;

The SALES database is now restored on my target system. I will leave it there – unrecovered and in MOUNT mode so I can apply incremental backups later on.

Incremental Backup/Recover

I can do as many incremental backup/recover cycles as I want. What matters is that I make one and apply it as close to the start of the downtime window as possible. This will significantly reduce the time it takes to make the final incremental backup/recover later on.

On my source database, start an incremental backup:

[oracle@srchost11]$ rman target /

SALES RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/mnt/upgsales/backup/lvl1%U' PLUS ARCHIVELOG FORMAT '/mnt/upgsales/backup/arch%U';

Now, switch to the target system and recover using that backup. I use the CATALOG command to instruct RMAN to find new backups at the shared file storage.

[oracle@tgthost19]$ rman target /

SALES RMAN> CATALOG START WITH '/mnt/upgsales/backup' NOPROMPT;
SALES RMAN> RECOVER DATABASE;

RMAN will complain about a missing log file. But don’t worry – this is expected and will be fixed later on:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/28/2020 09:06:51
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 60 and starting SCN of 793358

Down Time Starts

Now it is time to kick users off the database. Your precious downtime starts now.

Prepare Database For Upgrade

In my database I will create some sample data so I can verify the upgrade:

SALES SQL> CREATE USER UPG19 IDENTIFIED BY <secret-password>;
SALES SQL> ALTER USER UPG19 QUOTA UNLIMITED ON USERS;
SALES SQL> CREATE TABLE UPG19.ORDERS(ID NUMBER, CUSTOMER VARCHAR2(50), AMOUNT NUMBER) TABLESPACE USERS;
SALES SQL> INSERT INTO UPG19.ORDERS VALUES(1, 'John', 500);
SALES SQL> COMMIT;

I must prepare my database for upgrade on the source system. When I open the database on the target system, I can only do that in UPGRADE mode (because the database will be restored using 19c Oracle Home). In UPGRADE mode it is impossible to do the pre-upgrade tasks.

I will use the classic preupgrade.jar tool in this demo, but you could also use the newer AutoUpgrade. Always get the latest preupgrade tool from My Oracle Support. Upload the zip file (named preupgrade_19_cbuild_7_lf.zip in my demo) to the source system, extract to $ORACLE_HOME/rdbms/admin and do the pre-upgrade checks:

[oracle@srchost11]$ cp preupgrade_19_cbuild_7_lf.zip $ORACLE_HOME/rdbms/admin
[oracle@srchost11]$ cd $ORACLE_HOME/rdbms/admin
[oracle@srchost11]$ unzip preupgrade_19_cbuild_7_lf.zip

[oracle@srchost11]$ mkdir -p /mnt/upgsales/preupg_logs_SALES
[oracle@srchost11]$ cd /mnt/upgsales/preupg_logs_SALES
[oracle@srchost11]$ $ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/preupgrade.jar FILE TEXT DIR .

You must upload the same version of the preupgrade tool to the target system before you can run the post-upgrade fixups. Hence, save the zip file so you don’t have to download it again.

Next, I will review the report generated by the tool:

[oracle@srchost11]$ more /mnt/upgsales/preupg_logs_SALES/preupgrade.log

And I can execute the pre-upgrade fixups:

SALES SQL> SET SERVEROUT ON
SALES SQL> @/mnt/upgsales/preupg_logs_SALES/preupgrade_fixups.sql

Final Incremental Backup/Recover

I can now make the last incremental backup on my source system. To be absolutely sure nothing else gets into the source database from now on, I restart the database in restricted mode:

[oracle@srchost11]$ sqlplus / as sysdba

SALES SQL> SHUTDOWN IMMEDIATE
SALES SQL> STARTUP RESTRICT

Then I use RMAN to archive the current log file and start the last backup:

[oracle@srchost11]$ rman target /

SALES RMAN> sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
SALES RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/mnt/upgsales/backup/lvl1%U' PLUS ARCHIVELOG FORMAT '/mnt/upgsales/backup/arch%U';

Now, switch to the target system, catalog the new backups:

[oracle@tgthost19]$ rman target /

SALES RMAN> CATALOG START WITH '/mnt/upgsales/backup' NOPROMPT;

By default, RMAN will try to perform complete recovery. But I can’t do that because I don’t have the online redo logs. I must perform incomplete recovery. That sounds dangerous, but it is not. I archived the current log file after I had ensured that no one was using the system (I started in restricted mode, remember). To perform incomplete recovery, I must know at which sequence to stop. I will use the LIST command in RMAN to do that:

SALES RMAN> LIST BACKUP OF ARCHIVELOG ALL;

The output shows which archived log sequences are available in the backups. I take the last available sequence and add one. In my case, I will recover until sequence 65:

SALES RMAN> run {
   SET UNTIL SEQUENCE 65 THREAD 1;
   RECOVER DATABASE;
}

Finally, you can switch to SQL*Plus and open the database. In theory, you could also do that from RMAN, but you will likely hit ORA-04023: Object SYS.STANDARD could not be validated or authorized:

[oracle@tgthost19]$ sqlplus / as sysdba

SALES SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;

Upgrade Database

I must upload the same version of the preupgrade tool to the target Oracle Home, before I can do the post-upgrade fixups:

[oracle@tgthost19]$ cp preupgrade_19_cbuild_7_lf.zip $ORACLE_HOME/rdbms/admin
[oracle@tgthost19]$ cd $ORACLE_HOME/rdbms/admin
[oracle@tgthost19]$ unzip preupgrade_19_cbuild_7_lf.zip

I can now upgrade the database. Ensure that you use a prompt that has the environment set to the SALES database – the temporary instance:

[oracle@tgthost19]$ mkdir -p /mnt/upgsales/upg_logs_SALES
[oracle@tgthost19]$ dbupgrade -l /mnt/upgsales/upg_logs_SALES

Once the upgrade completes, I will finish with the post-upgrade tasks:

SQL> STARTUP

SQL> --Recompile
SQL> @$ORACLE_HOME/rdbms/admin/utlrp
SQL> --Check outcome of upgrade
SQL> @$ORACLE_HOME/rdbms/admin/utlusts.sql
SQL> --Post-upgrade fixups
SQL> @/mnt/upgsales/preupg_logs_$SOURCE_SID/postupgrade_fixups.sql
SQL> --Timezone file upgrade
SQL> SET SERVEROUTPUT ON
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_check.sql
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_apply.sql

Last, have a look in the report generated by preupgrade.jar to see if there are any post-upgrade tasks that you have to execute:

[oracle@tgthost19]$ more /mnt/upgsales/preupg_logs_SALES/preupgrade.log

Plug In Database

Now that the temporary database is upgraded let’s look at what we need to prepare for the conversion to a PDB. First, I will export the encryption keys:

SALES SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" TO '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <SALES-keystore-password>;

And then I open the database in READ ONLY mode to create a manifest file. After that, I completely shut down the temporary database and, hopefully, it won’t be needed anymore:

SALES SQL> SHUTDOWN IMMEDIATE
SALES SQL> STARTUP MOUNT
SALES SQL> ALTER DATABASE OPEN READ ONLY;
SALES SQL> EXEC DBMS_PDB.DESCRIBE('/mnt/upgsales/manifest_sales.xml');
SALES SQL> SHUTDOWN IMMEDIATE

Now, I will restart CDB1 which I shut down previously. I will work in CDB1 for the rest of the blog post. Notice, how I am resetting my environment variables to the original values using the source command. You could also open a new SSH session instead. Anyway, just ensure that your environment is now set to work on the original database, CDB1:

[oracle@tgthost19]$ source ~/.bashrc
[oracle@tgthost19]$ env | grep ORA
[oracle@tgthost19]$ $ORACLE_HOME/bin/srvctl start database -db $ORACLE_UNQNAME

I check for plug-in compatibility:

CDB1 SQL> SET SERVEROUT ON
CDB1 SQL> BEGIN 
    IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/mnt/upgsales/manifest_sales.xml', 'SALES') THEN
        DBMS_OUTPUT.PUT_LINE('SUCCESS');
    ELSE
        DBMS_OUTPUT.PUT_LINE('ERROR');
    END IF;
END;
/

Hopefully, it should read out SUCCESS. If not, you can query PDB_PLUG_IN_VIOLATIONS to find out why:

CDB1 SQL> SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

I can plug in the SALES database as a new PDB – which I will also call SALES. I am using the MOVE keyword to have my data files moved to a directory that matches the naming standard:

CDB1 SQL> CREATE PLUGGABLE DATABASE SALES USING '/mnt/upgsales/manifest_sales.xml' MOVE;
CDB1 SQL> ALTER PLUGGABLE DATABASE SALES OPEN;

I could also use the NOCOPY keyword and just use the data files from where they currently are placed. Later on, I could move the data files to a proper directory that follows the naming standard, and if I were on Enterprise Edition, I could even use online datafile move.
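
For reference, an online datafile move is a single statement – a sketch with a hypothetical file name (Enterprise Edition only):

CDB1 SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/SALES/sales01.dbf' TO '+DATA';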

Next, I can switch to the SALES PDB and import my encryption keys from the file I made a little earlier. Note that I must enter the secret that I used in the export. But this time I have to enter the keystore password for CDB1:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" FROM '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <CDB1-keystore-password> WITH BACKUP;

Be aware that if your system tablespaces are encrypted, you might have to import the encryption key into CDB$ROOT as well before you can open the database.
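
If that is the case, a sketch of the import into CDB$ROOT – using the same export file and secret as above:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" FROM '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <CDB1-keystore-password> WITH BACKUP;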

Now, it is time to fully convert the database to a PDB:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
CDB1 SQL> SHUTDOWN IMMEDIATE
CDB1 SQL> STARTUP

Now, check and resolve any plug-in violations:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

And finally, ensure that OPEN_MODE=READ WRITE and RESTRICTED=NO. When so, I can save the state of the PDB so it will auto-open whenever the CDB restarts:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> SELECT OPEN_MODE, RESTRICTED FROM V$PDBS WHERE NAME='SALES';
CDB1 SQL> ALTER PLUGGABLE DATABASE SALES SAVE STATE;

Verify that my test data exists:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> SELECT * FROM UPG19.ORDERS;

That’s it. The database is now fully upgraded to 19c and converted to a PDB. Be sure to:

  • Start a backup (a sketch follows after this list)
  • Test your application
  • Adjust your connection strings
  • And whatever else your procedure mandates
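
For the first item, a minimal sketch of a fresh level 0 backup of the whole CDB – adjust format, channels and destination to your own backup strategy:

CDB1 RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;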

Wrap-Up

Let’s clean up on the target system! I can remove the files and folders that were created to support the temporary instance:

[oracle@tgthost19]$ #audit dest
[oracle@tgthost19]$ rm -rf /u01/app/oracle/admin/SALES/adump
[oracle@tgthost19]$ #diag dest
[oracle@tgthost19]$ rm -rf /u01/app/oracle/diag/rdbms/sales
[oracle@tgthost19]$ #wallet
[oracle@tgthost19]$ rm -rf /opt/oracle/dcs/commonstore/wallets/tde/SALES
[oracle@tgthost19]$ #instance files
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/initSALES.ora
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/orapwSALES
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/spfileSALES.ora
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/hc_SALES.dat
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/lkSALES
[oracle@tgthost19]$ #exported master key
[oracle@tgthost19]$ rm /mnt/upgsales/key_export_SALES

Also, since I stored data files in ASM, I can delete those as well. Note that you have to log on as grid to do that:

[grid@tgthost19]$ asmcmd rm -rf +DATA/SALES
[grid@tgthost19]$ asmcmd rm -rf +RECO/SALES

I can also drop the PDB that gets created automatically when you deploy the new DB System. In my case it is named CDB1_PDB1:

SQL> ALTER PLUGGABLE DATABASE CDB1_PDB1 CLOSE;
SQL> DROP PLUGGABLE DATABASE CDB1_PDB1 INCLUDING DATAFILES;

Also, I can remove the File Storage service that I created. If you want to keep log files from the upgrade (or other files) be sure to copy them somewhere else.

Last, when I am convinced that my upgraded and converted database is doing good, I can terminate the entire source DB system.

Tweaks

If you have a license for any of the Enterprise Edition offerings, you might be able to use some of the features below to speed up backup and recovery (see the sketch after this list). Before using any of them, be sure to check the license guide and confirm you have a proper license.

  • Block change tracking – reduces backup time because RMAN doesn’t need to scan the entire database.
  • Parallel backup and recovery – more channels, faster backups and faster restores.
  • Compression – reduces the size of the backups. Since you can apply incremental backups continuously the backup size should be fairly small anyway.
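
A sketch of how the last two items could be configured in RMAN – the degree of parallelism is just an example:

SALES RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;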

Disclaimer

I am not a backup expert (probably far from it). When writing this post I struggled a lot with missing archive logs. I even had to call an old mentor for advice. In the end, I decided just to include them in all backups. Possibly, there is a die-hard RMAN expert out there who can tell me a better way of doing it. But for sure it doesn’t hurt to include them…

If you come up with a better way, please leave a comment. I would love to learn more.

Conclusion

You can upgrade an 11.2.0.4 database to 19c by moving the database to a new VM DB System. You can reduce downtime by using incremental backups. You must also convert the database to a pluggable database, because multitenant is the only supported architecture for VM DB Systems on 19c.

Upgrading in the cloud – VM DB Systems – 11.2.0.4 to 19c (simple)

In this blog post I will show you how you can upgrade an 11.2.0.4 database to 19c. It will also include conversion from the non-CDB architecture into a pluggable database. I have to do this because multitenant is the only supported architecture for 19c on VM DB Systems. Finally, I will use a Standard Edition database to show you something that can be used in any edition.

Overview of DB Systems and databases

My source environment is the red environment. The DB System is called SRCHOST11 and it has an 11.2.0.4 database that is called SALES. Due to the restrictions of the VM DB System I have to move the database to a new DB System in order to upgrade it. I have created a brand-new target environment – the green environment – on the release that I want to target. I have named the DB System TGTHOST19 and it has a multitenant database called CDB1. When I am done, the target environment – CDB1 – will also contain a PDB named SALES. The SALES PDB will be the original 11.2.0.4 database that has been upgraded and converted.

The aim of this blog post is to make it as easy as possible. When I have to move the database from the source DB System to the target DB System, I will just make a full backup that I can restore on the target environment. Obviously, this requires downtime and the amount depends on the size of the database and the transfer speed between the two DB Systems.

My high-level plan for the task looks like this:

  • Prepare database for upgrade
  • Backup database
  • Restore database
  • Upgrade database
  • Plug in database
  • Wrap-Up

I will elaborate a little on the Restore database part. On VM DB Systems you are not allowed to create your own databases. You can only use the database that gets created when the system is provisioned. However, for a short period of time I need to spin up a second database instance on the target system. This second – or temporary – instance will be a duplicate of the source database (as non-CDB database) and I will upgrade it to the new release. Then I can plug in the database as a PDB in the precreated CDB database, and get rid of the second/temporary instance. You will see how it works later in the blog post.

Prepare Database For Upgrade

I need to exchange files between the source and the target systems and I will use a File Storage service for that. Check out the documentation if you need help creating one – I created one already called upgsales and now I can mount it on my source system:

[opc@srchost11]$ sudo mkdir -p /mnt/upgsales
[opc@srchost11]$ sudo chmod 777 /mnt/upgsales/
[opc@srchost11]$ sudo mount x.x.x.x:/upgsales /mnt/upgsales

In my database I will create some sample data so we can verify the upgrade:

SALES SQL> CREATE USER UPG19 IDENTIFIED BY <secret-password>;
SALES SQL> ALTER USER UPG19 QUOTA UNLIMITED ON USERS;
SALES SQL> CREATE TABLE UPG19.ORDERS(ID NUMBER, CUSTOMER VARCHAR2(50), AMOUNT NUMBER) TABLESPACE USERS;
SALES SQL> INSERT INTO UPG19.ORDERS VALUES(1, 'John', 500);
SALES SQL> COMMIT;

DOWN TIME STARTS NOW – get those users off!

I must prepare my database for upgrade on the source system. When I restore the database on the target system I can only open the database in UPGRADE mode (because the database will be restored using 19c Oracle Home). In UPGRADE mode it is impossible to do the pre-upgrade tasks.

I will use the classic preupgrade.jar tool in this demo, but you could also use the newer AutoUpgrade. Always get the latest preupgrade tool from My Oracle Support. Upload the zip file (named preupgrade_19_cbuild_7_lf.zip in my demo) to the source system, extract to $ORACLE_HOME/rdbms/admin and do the pre-upgrade checks:

[oracle@srchost11]$ cp preupgrade_19_cbuild_7_lf.zip $ORACLE_HOME/rdbms/admin
[oracle@srchost11]$ cd $ORACLE_HOME/rdbms/admin
[oracle@srchost11]$ unzip preupgrade_19_cbuild_7_lf.zip

[oracle@srchost11]$ mkdir -p /mnt/upgsales/preupg_logs_SALES
[oracle@srchost11]$ cd /mnt/upgsales/preupg_logs_SALES
[oracle@srchost11]$ $ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/preupgrade.jar FILE TEXT DIR .

You must upload the same version of the preupgrade tool to the target system before you can run the post-upgrade fixups. Hence, save the zip file so you don’t have to download it again.

Next, I will review the report generated by the tool:

[oracle@srchost11]$ more /mnt/upgsales/preupg_logs_SALES/preupgrade.log

And I can execute the pre-upgrade fixups:

SALES SQL> SET SERVEROUT ON
SALES SQL> @/mnt/upgsales/preupg_logs_SALES/preupgrade_fixups.sql

Backup Database

The database is now prepared for upgrade. Next, I will get what I need to move the database. First, a PFile:

SALES SQL> CREATE PFILE='/mnt/upgsales/init.ora' FROM SPFILE;

Now I will shut down the database and restart in MOUNT mode. Then I can start a level 0 backup:

SALES SQL> SHUTDOWN IMMEDIATE
SALES SQL> STARTUP MOUNT
SALES SQL> EXIT

[oracle@srchost11]$ rman target /

SALES RMAN> BACKUP DATABASE FORMAT '/mnt/upgsales/db_%U';
SALES RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/mnt/upgsales/cf_%U';

Now we just need the password file and wallet:

[oracle@srchost11]$ cp /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME/ewallet.p12 /mnt/upgsales/
[oracle@srchost11]$ cp $ORACLE_HOME/dbs/orapw$ORACLE_SID /mnt/upgsales/orapw$ORACLE_SID

If you are really concerned about security you can copy the wallet file directly to the target system – instead of via the File Storage service. The File Storage service itself is secured, but the fewer places you have a copy of the wallet – the better and the safer, I assume. Further, you can also encrypt traffic to and from the File Storage service.

Restore Database

On my target system, I need to access the File Storage service as well:

[opc@tgthost19]$ sudo mkdir -p /mnt/upgsales
[opc@tgthost19]$ sudo chmod 777 /mnt/upgsales/
[opc@tgthost19]$ sudo mount x.x.x.x:/upgsales /mnt/upgsales

Next, I will copy the password file and PFile into the target Oracle Home. I need that in order to start the temporary instance. Note, the name of the temporary instance will be SALES – the same as the source database SID:

[oracle@tgthost19]$ cp /mnt/upgsales/init.ora $ORACLE_HOME/dbs/initSALES.ora
[oracle@tgthost19]$ cp /mnt/upgsales/orapwSALES $ORACLE_HOME/dbs/orapwSALES

I also need to copy the wallet:

[oracle@tgthost19]$ mkdir -p /opt/oracle/dcs/commonstore/wallets/tde/SALES
[oracle@tgthost19]$ cp /mnt/upgsales/ewallet.p12 /opt/oracle/dcs/commonstore/wallets/tde/SALES/

And I need to create a directory for audit_file_dest:

[oracle@tgthost19]$ mkdir -p /u01/app/oracle/admin/SALES/adump

Now, I must edit the PFile:

[oracle@tgthost19]$ vi $ORACLE_HOME/dbs/initSALES.ora

And make the following changes:

  • Remove all the double-underscore parameters that contain the memory settings from the last restart. That could for instance be SALES.__db_cache_size.
  • Set audit_file_dest='/u01/app/oracle/admin/SALES/adump'
  • Set control_files='+RECO/sales/controlfile/current.256.1048859635'
  • Set SALES.sga_target=6G
  • Set SALES.pga_aggregate_target=2G
  • Set db_unique_name='SALES'

I don’t have an abundance of memory on this system, so I keep the memory settings modest. Strictly speaking, you don’t have to change db_unique_name, but I am doing it so it will be easier to clean up afterwards.

While I work on the temporary instance I must shut down the other database – the pre-created one that eventually will hold the PDB. Most likely there is not enough memory on the system to support two databases:

[oracle@tgthost19]$ sql / as sysdba

CDB1 SQL> SHUTDOWN IMMEDIATE

Let’s start the temporary instance in NOMOUNT mode. Remember to set the environment:

[oracle@tgthost19]$ export ORACLE_UNQNAME=SALES
[oracle@tgthost19]$ export ORACLE_SID=SALES
[oracle@tgthost19]$ sql / as sysdba

SALES SQL> STARTUP NOMOUNT

And finally, I can start the restore using RMAN. Notice how I am using the NOOPEN keyword, which instructs RMAN to keep the database MOUNTED and not attempt to open it. If you tried to open the database normally, it would fail, because the database must be opened in UPGRADE mode. At this point in time, the database itself is on 11.2.0.4 but running on 19c binaries:

[oracle@tgthost19]$ rman auxiliary /

SALES RMAN> DUPLICATE DATABASE TO SALES NOOPEN BACKUP LOCATION '/mnt/upgsales/';

Upgrade Database

RMAN left the database in MOUNTED mode. Before I can open the database I must open the keystore:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN FORCE KEYSTORE IDENTIFIED BY <source-keystore-password>;

Now, I can open the database and execute the RESETLOGS operation that RMAN didn’t do:

SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;

I must upload the same version of the preupgrade tool to the target Oracle Home, before I can do the post-upgrade fixups:

[oracle@tgthost19]$ cp preupgrade_19_cbuild_7_lf.zip $ORACLE_HOME/rdbms/admin
[oracle@tgthost19]$ cd $ORACLE_HOME/rdbms/admin
[oracle@tgthost19]$ unzip preupgrade_19_cbuild_7_lf.zip

And I can now upgrade the database:

[oracle@tgthost19]$ mkdir -p /mnt/upgsales/upg_logs_SALES
[oracle@tgthost19]$ dbupgrade -l /mnt/upgsales/upg_logs_SALES

Once the upgrade completes, I will finish with the post-upgrade tasks:

SQL> STARTUP

SQL> --Recompile
SQL> @$ORACLE_HOME/rdbms/admin/utlrp
SQL> --Check outcome of upgrade
SQL> @$ORACLE_HOME/rdbms/admin/utlusts.sql
SQL> --Post-upgrade fixups
SQL> @/mnt/upgsales/preupg_logs_$SOURCE_SID/postupgrade_fixups.sql
SQL> --Timezone file upgrade
SQL> SET SERVEROUTPUT ON
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_check.sql
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_apply.sql

Last, have a look in the report generated by preupgrade.jar to see if there are any post-upgrade tasks that you have to execute:

[oracle@tgthost19]$ more /mnt/upgsales/preupg_logs_SALES/preupgrade.log

Plug In Database

Now that the temporary database is upgraded let’s look at what we need to prepare for the conversion to a PDB. First, I will export the encryption keys:

SALES SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" TO '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <SALES-keystore-password>;

And then I open the database in READ ONLY mode to create a manifest file. After that, I completely shut down the temporary database and, hopefully, it won’t be needed anymore:

SALES SQL> SHUTDOWN IMMEDIATE
SALES SQL> STARTUP MOUNT
SALES SQL> ALTER DATABASE OPEN READ ONLY;
SALES SQL> EXEC DBMS_PDB.DESCRIBE('/mnt/upgsales/manifest_sales.xml');
SALES SQL> SHUTDOWN IMMEDIATE

Now, I will restart CDB1 which I shut down previously. I will work in CDB1 for the rest of the blog post. Notice, how I am resetting my environment variables to the original values using the source command. You could also open a new SSH session instead. Anyway, just ensure that your environment is now set to work on the original database, CDB1:

[oracle@tgthost19]$ source ~/.bashrc
[oracle@tgthost19]$ env | grep ORA
[oracle@tgthost19]$ sql / as sysdba

CDB1 SQL> STARTUP

I check for plug-in compatibility:

CDB1 SQL> SET SERVEROUT ON
CDB1 SQL> BEGIN 
    IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/mnt/upgsales/manifest_sales.xml', 'SALES') THEN
        DBMS_OUTPUT.PUT_LINE('SUCCESS');
    ELSE
        DBMS_OUTPUT.PUT_LINE('ERROR');
    END IF;
END;
/

Hopefully, it should read out SUCCESS. If not, you can query PDB_PLUG_IN_VIOLATIONS to find out why:

CDB1 SQL> SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

I can plug in the SALES database as a new PDB – which I will also call SALES. I am using the MOVE keyword to have my data files moved to a directory that matches the naming standard:

CDB1 SQL> CREATE PLUGGABLE DATABASE SALES USING '/mnt/upgsales/manifest_sales.xml' MOVE;
CDB1 SQL> ALTER PLUGGABLE DATABASE SALES OPEN;

I could also use the NOCOPY keyword and just use the data files from where they currently are placed. Later on, I could move the data files to a proper directory that follows the naming standard, and if I were on Enterprise Edition I could even use online datafile move.

Next, I can switch to the SALES PDB and import my encryption keys from the file I made a little earlier. Note that I must enter the secret that I used in the export. But this time I have to enter the keystore password for CDB1:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "<a-secret-password>" FROM '/mnt/upgsales/key_export_SALES' FORCE KEYSTORE IDENTIFIED BY <CDB1-keystore-password> WITH BACKUP;

Be aware that if your system tablespaces are encrypted, you might have to import the encryption key into CDB$ROOT as well before you can open the database.

Now, it is time to fully convert the database into a PDB:

CDB1 SQL> ALTER SESSION SET CONTAINER=SALES;
CDB1 SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
CDB1 SQL> SHUTDOWN IMMEDIATE
CDB1 SQL> STARTUP

Now, check and resolve any plug-in violations:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> SELECT type, message, action FROM pdb_plug_in_violations WHERE name='SALES' and status='PENDING';

And finally, ensure that OPEN_MODE=READ WRITE and RESTRICTED=NO. When so, I can save the state of the PDB so it will auto-open whenever the CDB restarts:

CDB1 SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
CDB1 SQL> SELECT OPEN_MODE, RESTRICTED FROM V$PDBS WHERE NAME='SALES';
CDB1 SQL> ALTER PLUGGABLE DATABASE SALES SAVE STATE;

That’s it. The database is now fully upgraded to 19c and converted to a PDB. Be sure to:

  • Start a backup
  • Test your application
  • Adjust your connection strings
  • And whatever else your procedure mandates

Wrap-Up

Let’s clean up on the target system! I can remove the files and folders that were created to support the temporary instance:

[oracle@tgthost19]$ #audit dest
[oracle@tgthost19]$ rm -rf /u01/app/oracle/admin/SALES/adump
[oracle@tgthost19]$ #diag dest
[oracle@tgthost19]$ rm -rf /u01/app/oracle/diag/rdbms/sales
[oracle@tgthost19]$ #wallet
[oracle@tgthost19]$ rm -rf /opt/oracle/dcs/commonstore/wallets/tde/SALES
[oracle@tgthost19]$ #instance files
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/initSALES.ora
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/orapwSALES
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/spfileSALES.ora
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/hc_SALES.dat
[oracle@tgthost19]$ rm $ORACLE_HOME/dbs/lkSALES
[oracle@tgthost19]$ #exported master key
[oracle@tgthost19]$ rm /mnt/upgsales/key_export_SALES

Also, since I stored data files in ASM, I can delete those as well. Note that you have to log on as grid to do that:

[grid@tgthost19]$ asmcmd rm -rf +DATA/SALES
[grid@tgthost19]$ asmcmd rm -rf +RECO/SALES

I can also drop the PDB that gets created automatically when you deploy the new DB System. In my case it is named CDB1_PDB1:

SQL> ALTER PLUGGABLE DATABASE CDB1_PDB1 CLOSE;
SQL> DROP PLUGGABLE DATABASE CDB1_PDB1 INCLUDING DATAFILES;

Also, I can remove the File Storage service that I created. If you want to keep log files from the upgrade (or other files) be sure to copy them somewhere else.

Last, when I am convinced that my upgraded and converted database is doing good, I can terminate the entire source DB system.

Tweaks

The transfer speed to the File Storage service depends on the number of CPUs on your system (more CPUs, more network speed). If the bottleneck is the network, you can try to temporarily add more CPUs.

If you have a license for any of the Enterprise Edition offerings, you might be able to use some of the features below to speed up backup and recovery. Before using any of them, be sure to check the license guide and confirm you have a proper license.

  • Parallel backup and recovery – more channels, faster backups and faster restores.
  • Compression – reduces the size of the backups which is beneficial when they are transported over the network.

Conclusion

You can upgrade an 11.2.0.4 database to 19c by moving the database to a new VM DB System. You must also convert the database to a pluggable database, because multitenant is the only supported architecture for VM DB Systems on 19c.

Upgrading in the cloud – VM DB Systems – Transfer Speed

Actually, this blog post doesn’t have anything to do with upgrades – and yet it does. I will be talking about how fast you can transfer data from one VM DB System to another. How does that relate to upgrading? In some situations, it will have a direct impact on the downtime needed to upgrade a VM DB System. Allow me to explain.

For now – when you need to upgrade a VM DB System you have to provision a brand-new system, move the database, and then upgrade. The move part is where the transfer speed comes into play. Whether you are transferring a cold copy of the database or doing RMAN backups, you will eventually end up with a bottleneck in terms of getting the data from one system to the other. Other, more elegant approaches (like having a standby on the new system) can’t be used because of the limitations of VM DB Systems. You are – for instance – not allowed to install any other Oracle Homes on the system than the one that comes when the system is provisioned.

Now back to transfer speed. There are four factors that come into play:

  • How fast can the source system read the data from disk
  • How fast can the source system send the data over the network
  • How fast can the target system receive the data from the network
  • How fast can the target system write the data to disk

That now boils down to two things:

  • I/O throughput
  • Network speed

I/O Throughput

A VM DB System uses block storage that is allocated when the system is created. The speed of the storage depends on the amount of storage you provision. The more storage, the faster the disks.

Storage (GB)    Throughput (MB/s)
256             120
512             240
1024            480
2048            960
4096            1280
6144            1280
8192            1280
10240           1600
12288           1920
14336           2240
16384           2560
18432           2880
20480           3200

You can scale up on storage and get more throughput, but be aware that you can’t scale down. Once you have allocated storage, there is no way to get rid of it again. So it is not really suitable for a one-time operation. The good news is that storage scales online, so there is no need for downtime to make changes.

Network Speed

The network speed of your VM DB System depends on the number of OCPUs that you allocate. The more OCPUs, the more network speed.

Shape               Throughput (MB/s)
VM.Standard.2.1     128
VM.Standard.2.2     256
VM.Standard.2.4     512
VM.Standard.2.8     1024
VM.Standard.2.16    2048
VM.Standard.2.24    3200

You can scale a VM DB System up and down. So if you need more network throughput for a period of time, you can just scale up. You only need $$$ and downtime. I did some wristwatch measurements, and it takes roughly 10 minutes to do a scale operation (either up or down).

Conclusion

If you want to increase the transfer speed between two VM DB Systems, you have two options:

  • Add more OCPUs
  • Add more storage

Adding OCPUs is easy and can be reverted once there is no longer a need for the increased throughput. Storage, however, can only scale up. Obviously, you have to consider the limits of both the sending system and the receiving system.

If you end up in a situation where downtime matters to you and you need to move data between two VM DB Systems, you can increase transfer speeds by scaling up.

Zero Downtime Migration – The Pro Tips

In this final (for now, at least) blog post I will show the pro tips that might come in handy. It is a little mix and match of all my notes that didn’t make it into the previous blog posts but are still too good to leave out.

Pro Tip 1: Converting To Snapshot Standby For Testing

This is a really cool feature of ZDM (or in fact any migration that uses a standby database). Once the standby database has been built in OCI and while you are waiting to do the switch-over, you can use the database in OCI for testing. So you can do realistic testing on the database you are about to switch over to. To convert the standby database to a snapshot standby database:

alter database recover managed standby database cancel;
shutdown immediate
startup mount
alter database convert to snapshot standby;
alter database open;

Now, the database is opened in READ WRITE mode and you can use it for testing. Don’t worry, the database is fully protected by flashback logs so you can always rewind any changes made, and resync with the primary database. To convert back to a physical standby database:

shutdown immediate
startup mount
alter database convert to physical standby;
shutdown immediate
startup
alter database recover managed standby database disconnect from session;

You can even do this multiple times if you want several test runs on your OCI database. If you want, you can read more about the different types of standby databases, or about snapshot standby databases in particular.

Pro Tip 2: Monitoring Queries

When you have created the standby database in OCI and are waiting for the switch-over you can use these commands for monitoring. On the source/primary database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log GROUP BY thread#;

On the target/standby database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log WHERE applied='YES' GROUP BY thread#;
--MRP process should be 'APPLYING_LOG'
SELECT process, status, sequence# FROM v$managed_standby;
SELECT * FROM v$archive_gap;

Pro Tip 3: Log Files

If something goes south, where can you find the log files? On the ZDM service host:

  • $ZDM_BASE/chkbase/scheduled
  • $ZDM_BASE/crsdata/[hostname]/rhp

On the source and target hosts, you can also find additional log files containing all the commands that are executed by ZDM:

  • /tmp/zdm-[some number]/zdm/log

Pro Tip 4: Troubleshooting

When you are troubleshooting, it is sometimes useful to get rid of all the log files and have ZDM start all over. Some of the log files get really big and are hard to read, so I usually stop the ZDM service, delete all the log files, and restart ZDM and my troubleshooting. But only do this if no other jobs are running than the one you are troubleshooting:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice stop
[zdmuser@zdm]$ rm $ZDM_BASE/crsdata/*/rhp/rhpserver.log*
[zdmuser@zdm]$ rm $ZDM_BASE/chkbase/scheduled/*
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice start

There is also a chapter about troubleshooting in the documentation, but it more or less says the same.

Pro Tip 5: Aborting A Job

Sometimes it is useful to completely restart a migration. If a database migration is already registered in ZDM, you are not allowed to specify another migration job. First, you have to abort the existing job before you can enter a new migration job.

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli abort job -jobid n

Now, you can use the zdmcli migrate database command again. By the way, the abort job command is missing from the CLI reference, but it is a valid and fully supported command.

Pro Tip 6: Show All Phases

A ZDM migration is split into phases, and you can have ZDM pause after each of the phases. The documentation has a list of all phases but you can also get it directly from the ZDM tool itself for a specific migration job:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp ~/migrate.rsp \
   ... \
   ... \
   ... \
   -listphases

Pro Tip 7: Adding Custom Scripts

You can add your own custom scripts to run before or after a phase in the migration job. You can use the -listphases command (described above) to get a list of all the phases. Then decide whether your script should run before or after that phase. This is called an action plug-in. You can bundle those together in a template to make it easier to re-use. If this is something you need, you should dive into the documentation.

Pro Tip 8: Remember To Patch On-Premises Oracle Home

If your OCI Oracle Home is running on a newer Release Update (i.e. a higher patch level), then you have to patch your on-premises Oracle Home after the switch-over – and before you execute datapatch. The two patch levels should be identical after the switch-over. Release Updates are always Standby-First Installable, which means that a standby database is allowed to run on an Oracle Home with a newer patch level – but not an older one. This concept is widely used to reduce downtime during database patching, and it is basically the same concept that applies to ZDM.
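
Once the patch levels match, datapatch is run as usual – a sketch, executed from the Oracle Home of the new primary database (the host prompt is hypothetical):

[oracle@tgthost]$ $ORACLE_HOME/OPatch/datapatch -verbose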

Pro Tip 9: Fallback To On-Premises Database

It is possible to configure ZDM to keep the on-premises database after the switch-over. It will then become a physical standby database. If something happens with the OCI database, you can do an additional switch-over and run off the original on-premises source database. This is a very nice and handy fallback method. If you want to read about that procedure, I suggest that you visit the MOS note MAA Practices for Cloud Migration Using ZDM (Doc ID 2562063.1). Be aware that falling back to the source database requires a license for the Advanced Security Option. The target database in OCI is encrypted using TDE Tablespace Encryption, which you get as part of any OCI DB System offering. Once the OCI database is the primary database, it will generate encrypted redo – and if the source database has to apply that, it must have a license for the Advanced Security Option.

Pro Tip 10: Convert From Single Instance To RAC

A useful feature of ZDM is that it can convert a single instance database to a RAC database in OCI. And it is super simple to do that. The only thing you have to do is to create the target placeholder database as a RAC database. ZDM will detect that and take care of the rest.

Finally, let me mention that if the source database is RAC One Node or RAC, then the target database must be a RAC database. Be sure to create the target placeholder database as RAC.

Zero Downtime Migration – Migrate Your Database

In the previous blog posts we configured the environments and installed and configured the ZDM service host. Now we can start working on the actual migration.

I will use Object Storage as the staging area between the source and target host, and, hence, I need to create a bucket that I can use for that purpose:

$ oci os bucket create \
  --compartment-id "..." \
  --name "zdm-staging"

Connect to the ZDM service host as zdmuser and have a look at a template response file. It contains a description of each of the parameters that you can use:

[zdmuser@zdm]$ more $ZDM_HOME/rhp/zdm/template/zdm_template.rsp

Now, I can create my own response file:

[zdmuser@zdm]$ vi ~/migrate.rsp

In my demo it contains the following:

#Migration method: DG_OSS - DataGuard using object storage for standby initialization
MIGRATION_METHOD=DG_OSS
#This is DB_UNIQUE_NAME of the target database, connect to the target database and execute: SELECT db_unique_name FROM v$database;
TGT_DB_UNIQUE_NAME=CDB1_fra3kw
#Name of the ASM diskgroups that I will use. To get a list of disk groups and free space, connect to target database and execute: SELECT name, free_mb, total_mb FROM v$asm_diskgroup;
TGT_DATADG=+DATA
TGT_REDODG=+RECO
TGT_RECODG=+RECO
#BACKUP_PATH should be left blank when MIGRATION_METHOD=DG_OSS
BACKUP_PATH=
#URL to OCI object storage
HOST=https://swiftobjectstorage.[region].oraclecloud.com/v1/[your-tenancy-object-storage-namespace]
#The Object Storage Bucket that will be used as a staging area
OPC_CONTAINER=zdm-staging
#In my simple demo I will skip configuration of fallback and just shut down the source database after the switchover
SKIP_FALLBACK=TRUE
SHUTDOWN_SRC=TRUE

If you need help figuring out what HOST should be set to, you can look in the documentation. In my demo I use the Frankfurt data center, and, thus, the region is set to eu-frankfurt-1. You can visit the API documentation for a list of regions, but often the region string is listed in many places. To figure out your tenancy object storage namespace, you can use oci os ns get (see the sketch below). Alternatively, on the OCI web page open the Profile menu and click Tenancy: <your tenancy name>. The namespace string is listed under Object Storage Settings. To read more, have a look at the OCI documentation. Thanks to Bartlomiej Sowa for putting in a comment with this information – much appreciated!
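
A sketch of the namespace lookup with the OCI CLI – the returned value is of course specific to your tenancy:

$ oci os ns get
{
  "data": "<your-tenancy-object-storage-namespace>"
}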

Now, let’s start a migration evaluation. Most of the parameters are self-explanatory, but you can also look in the documentation. Normally, you should leave srcauth and tgtauth set to zdmauth. srcarg2 identity_file refers to the private key files that are needed to SSH to the source and target hosts. The backupuser is the user name that you want to use to connect to OCI Object Storage:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli migrate database \
  -rsp ~/migrate.rsp \
  -sourcesid CDB1 \
  -sourcenode srchost \
  -srcauth zdmauth \
  -srcarg1 user:opc \
  -srcarg2 identity_file:/home/zdmuser/.ssh/srchost \
  -srcarg3 sudo_location:/usr/bin/sudo \
  -targetnode tgthost \
  -tgtauth zdmauth \
  -tgtarg1 user:opc \
  -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost \
  -tgtarg3 sudo_location:/usr/bin/sudo \
  -targethome /u01/app/oracle/product/19.0.0.0/dbhome_1 \
  -backupuser "daniel.overby.hansen@oracle.com" \
  -eval

Shortly after, you will be prompted for the SYS password of the source database and for the password of the backupuser (which is your OCI user). For the latter, please note that this is not your user password but an auth token. Also, from the output you can see the ID of your job. Use it to query the status of the job:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli query job -jobid 1

Hopefully, you will end up with a successful evaluation and a job status of SUCCEEDED.

Before we start the actual migration, let me say a few words about backups during a ZDM migration. You should keep your regular backup strategy running during the migration, but avoid having ZDM backups and regular backups run at the same time. Also, if you are dealing with a RAC database, be sure to put the snapshot control file on shared storage; otherwise, you might get ORA-00245 errors during backups.
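
A minimal sketch of moving the snapshot control file to shared storage with RMAN – the disk group and file name are examples, so adjust them to your environment:

[oracle@srchost]$ rman target /

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+RECO/CDB1/snapcf_CDB1.f';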

Back on track! Start the migration, but – VERY IMPORTANT – use the -pauseafter option to specify that ZDM should pause after the standby database has been built and redo apply has started. If you fail to do so, the switch-over will be executed as soon as the standby has been built:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli migrate database \
  -rsp ~/migrate.rsp \
  -sourcesid CDB1 -sourcenode srchost \
  -srcauth zdmauth \
  -srcarg1 user:opc \
  -srcarg2 identity_file:/home/zdmuser/.ssh/srchost \
  -srcarg3 sudo_location:/usr/bin/sudo \
  -targetnode tgthost \
  -targethome /u01/app/oracle/product/19.0.0.0/dbhome_1 \
  -backupuser "daniel.overby.hansen@oracle.com" \
  -tgtauth zdmauth \
  -tgtarg1 user:opc \
  -tgtarg2 identity_file:/home/zdmuser/.ssh/tgthost \
  -tgtarg3 sudo_location:/usr/bin/sudo \
  -pauseafter ZDM_CONFIGURE_DG_SRC

I can use the same command (zdmcli query job) to query the progress of the migration. After a while the migration pauses after ZDM_CONFIGURE_DG_SRC: the standby database is created, and ZDM waits for the final go.

Now the standby database has been built in OCI. Redo gets transferred from my source database to the target database in OCI and applied. If you want to verify it, you can use these queries on the source/primary database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log GROUP BY thread#;

And on the target/standby database:

SELECT 
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;
SELECT thread#, max(sequence#) FROM v$archived_log WHERE applied='YES' GROUP BY thread#;

One of the really cool features of ZDM is that I can now use my standby database for testing in OCI – before I decide to do the switch-over. You can achieve this by converting it to a snapshot standby database. I will explain this in a later blog post.
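
Just to sketch the idea until then – and since ZDM manages the Data Guard configuration, treat this as the bare SQL concept rather than a complete procedure:

-- On the standby: stop redo apply and convert (the database must be mounted)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;

-- After testing: restart in mount mode and convert back
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;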

Now I can just sit back and wait. All the changes from my source environment are immediately replicated to my target database in OCI. When it is time to complete the migration and perform the switch-over, I can simply let ZDM finalize the job. You shouldn’t worry – ZDM won’t do the switch-over until it has verified that all changes are applied on the target database.
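
If you want to verify that yourself, one way (assuming a standard Data Guard configuration) is to check the lag figures on the standby:

SELECT name, value, time_computed 
FROM v$dataguard_stats 
WHERE name IN ('transport lag', 'apply lag');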

However, to prove that it really works, let’s add another piece of fruit to our SALES PDB:

INSERT INTO zdmtest.items VALUES (4, 'Lemon', 4);
COMMIT;

And instruct ZDM to resume the job:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli resume job -jobid 2
[zdmuser@zdm]$ $ZDM_HOME/bin/zdmcli query job -jobid 2

In the end you will have output similar to this:

Job execution elapsed time: 3 minutes 37 seconds
ZDM_GET_SRC_INFO .............. COMPLETED
ZDM_GET_TGT_INFO .............. COMPLETED
ZDM_SETUP_SRC ................. COMPLETED
ZDM_SETUP_TGT ................. COMPLETED
ZDM_GEN_RMAN_PASSWD ........... COMPLETED
ZDM_PREUSERACTIONS ............ COMPLETED
ZDM_PREUSERACTIONS_TGT ........ COMPLETED
ZDM_VALIDATE_SRC .............. COMPLETED
ZDM_VALIDATE_TGT .............. COMPLETED
ZDM_OBC_INST_SRC .............. COMPLETED
ZDM_OBC_INST_TGT .............. COMPLETED
ZDM_BACKUP_FULL_SRC ........... COMPLETED
ZDM_BACKUP_INCREMENTAL_SRC .... COMPLETED
ZDM_DISCOVER_SRC .............. COMPLETED
ZDM_COPYFILES ................. COMPLETED
ZDM_PREPARE_TGT ............... COMPLETED
ZDM_SETUP_TDE_TGT ............. COMPLETED
ZDM_CLONE_TGT ................. COMPLETED
ZDM_FINALIZE_TGT .............. COMPLETED
ZDM_CONFIGURE_DG_SRC .......... COMPLETED
ZDM_SWITCHOVER_SRC ............ COMPLETED
ZDM_SWITCHOVER_TGT ............ COMPLETED
ZDM_SHUTDOWN_SRC .............. COMPLETED
ZDM_NONCDBTOPDB_PRECHECK ...... COMPLETED
ZDM_NONCDBTOPDB_CONVERSION .... COMPLETED
ZDM_POSTUSERACTIONS ........... COMPLETED
ZDM_POSTUSERACTIONS_TGT ....... COMPLETED
ZDM_CLEANUP_SRC ............... COMPLETED
ZDM_CLEANUP_TGT ............... COMPLETED

And now my database is migrated to the cloud. Let’s query the target database:

SELECT
   host_name, instance_name, db_unique_name, status, database_role, open_mode 
FROM 
   v$database, v$instance;

And check that all four pieces of fruit are in our SALES PDB:

SELECT * FROM zdmtest.items;

And that should be it: Welcome to OCI. Personally, I would also take a quick peek at the alert log, just to ensure things are running smoothly.

Exadata DB System

If your target environment is an Exadata DB System (or ExaCS), it is a good idea to ensure that the cloud tooling knows about the new database:

[root@zdm]$ dbaascli registerdb prereqs --dbname [db_name] --db_unique_name [db_unique_name]
[root@zdm]$ dbaascli registerdb begin --dbname [db_name] --db_unique_name [db_unique_name]

You can read more about it in the ZDM run book which also has a sample output from the commands.

Troubleshooting

PRCZ-4001

If you run into this error:

PRCZ-4001 : failed to execute command "/bin/uname" using the privileged execution plugin "zdmauth" on nodes "doverbyh-zdm-tgt" within 120 seconds
PRCZ-2006 : Unable to establish SSH connection to node "doverbyh-zdm-tgt" to execute command "/bin/uname"
No more authentication methods available

Check your key files. They must be in RSA/PEM format (the private key must start with -----BEGIN RSA PRIVATE KEY-----).
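
If a key is in a different format (for example, the newer OpenSSH default), ssh-keygen can convert it in place – a sketch; note that -p rewrites the file, so keep a backup:

[zdmuser@zdm]$ ssh-keygen -p -m PEM -f /home/zdmuser/.ssh/srchost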

ZDM_OBC_INSTALL_CREDENTIALS_INVALID

If you run into this error:

srchost: 07:08:00.000: Validating object store credentials..
srchost: <ERR_FILE><Facility>PRGO</Facility><ID>ZDM_OBC_INSTALL_CREDENTIALS_INVALID</ID></ERR_FILE>

Check your credentials to OCI. Remember, when prompted for the password of your OCI account, it is an auth token, not your password (even though the prompt text is misleading).

ZDM_GET_SRC_INFO

If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the source database host is added to the known_hosts file on the ZDM service host. Also, you can verify connectivity by trying to log on via SSH:

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-source-host>
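
To add the host key to known_hosts without an interactive logon, ssh-keyscan is one option (a sketch):

[zdmuser@zdm]$ ssh-keyscan srchost >> ~/.ssh/known_hosts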

ZDM_GET_TGT_INFO

If the phase ends in PRECHECK_FAILED and there is no real clue about the error, ensure that the target database host is added to the known_hosts file on the ZDM service host. Again, you can verify connectivity by trying to log on via SSH:

[zdmuser@zdm]$ ssh -i <specified-key-file> opc@<name-of-target-host>

Zero Downtime Migration – Install And Configure ZDM

To use Zero Downtime Migration (ZDM) I must install a Zero Downtime Migration service host. It is the piece of software that will control the entire process of migrating my database into Oracle Cloud Infrastructure (OCI). The requirements are:

  • Must be running Oracle Linux 7 or newer.
  • 100 GB disk space according to the documentation. I could do with way less – basically there should be a few GBs for the binaries and then space for configuration and log files.
  • SSH access (port 22) to each of the database hosts.
  • Recommended to install it on a separate server (although technically possible to use one of the database hosts).

Create and configure server

In my example, I will install the ZDM service host on a compute instance in OCI. There are no specific requirements for CPU or memory – ZDM only acts as a coordinator; all the work is done by the database hosts – so I can use the smallest compute shape available. I am using OCI CLI* to create the compute instance, which you can install on your own computer, or you can use Cloud Shell. But I could just as well use the web interface or the REST APIs.

First, I will define a few variables that you should change to suit your needs. DISPLAYNAME is the hostname of my compute instance – and also the name I see in the OCI web page. AVAILDOM is the availability domain into which the compute instance is created. SHAPE is the compute shape:

DISPLAYNAME=zdm
AVAILDOM=OUGC:EU-FRANKFURT-1-AD-1
SHAPE=VM.Standard2.1

When I create a compute instance using the web page, these are the corresponding values (screenshot: display name, availability domain and shape in the OCI web page).

In addition, I will define the OCID of my compartment, and also the OCID of the subnet that I will use. I am making sure to select a subnet that I can reach via SSH from my own computer. Last, I have the public key file:

COMPARTMENTID="..."
SUBNETID="..."
PUBKEYFILE="/path/to/key-file.pub"

Because I want to use the latest Oracle Linux image I will query for the OCID of that and store it in a variable:

IMAGEID=`oci compute image list \
   --compartment-id $COMPARTMENTID \
   --operating-system "Oracle Linux" \
   --sort-by TIMECREATED \
   --query "data[?contains(\"display-name\", 'GPU')==\\\`false\\\`].{DisplayName:\"display-name\", OCID:\"id\"} | [0]" \
   | grep OCID \
   | awk -F'[\"|\"]' '{print $4}'`

And now I can create the compute instance:

oci compute instance launch \
 --compartment-id $COMPARTMENTID \
 --display-name $DISPLAYNAME \
 --availability-domain $AVAILDOM \
 --subnet-id $SUBNETID \
 --image-id $IMAGEID \
 --shape $SHAPE \
 --ssh-authorized-keys-file $PUBKEYFILE \
 --wait-for-state RUNNING

The command will wait until the compute instance is up and running because I used the --wait-for-state RUNNING option. Now, I can get the public IP address so I can connect to the instance:

VMID=`oci compute instance list \
  --compartment-id $COMPARTMENTID \
  --display-name $DISPLAYNAME \
  --lifecycle-state RUNNING \
  | grep \"id\" \
  | awk -F'[\"|\"]' '{print $4}'`
oci compute instance list-vnics \
 --instance-id $VMID \
 | grep public-ip \
 | awk -F'[\"|\"]' '{print $4}'

Prepare Host

The installation process is described in the documentation, which you should visit to get the latest changes. Log on to the ZDM service host as opc using the public IP address. By using -o ServerAliveInterval=300 I can avoid getting kicked off all the time:

ssh -o ServerAliveInterval=300 -i [key-file] opc@[ip-address]

Now, switch to root and install required packages:

[opc@zdm]$ sudo su -
[root@zdm]$ yum -y install \
  gcc \
  kernel-devel \
  kernel-headers \
  dkms \
  make \
  bzip2 \
  perl \
  glibc-devel \
  expect \
  zip \
  unzip \
  kernel-uek-devel-$(uname -r)

Create a ZDM group and user:

[root@zdm]$ groupadd zdm ; useradd -g zdm zdmuser

Make it possible to SSH to the box as zdmuser. I will just reuse the SSH keys from opc:

[root@zdm]$ cp -r /home/opc/.ssh /home/zdmuser/.ssh ; chown -R zdmuser:zdm /home/zdmuser/.ssh

Create directory for Oracle software and change permissions:

[root@zdm]$ mkdir /u01 ; chown zdmuser:zdm /u01

Edit the hosts file, and ensure name resolution works to the source host (srchost) and the target host (tgthost):

[root@zdm]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@zdm]$ echo -e "[ip address] tgthost" >> /etc/hosts

Install And Configure ZDM

Now, to install ZDM I will log on as zdmuser and set the environment in my .bashrc file:

[root@zdm]$ su - zdmuser
[zdmuser@zdm]$ echo "INVENTORY_LOCATION=/u01/app/oraInventory; export INVENTORY_LOCATION" >> ~/.bashrc
[zdmuser@zdm]$ echo "ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_BASE=\$ORACLE_BASE; export ZDM_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_HOME=\$ZDM_BASE/zdm19; export ZDM_HOME" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_INSTALL_LOC=/u01/zdm19-inst; export ZDM_INSTALL_LOC" >> ~/.bashrc
[zdmuser@zdm]$ source ~/.bashrc

Create directories:

[zdmuser@zdm]$ mkdir -p $ORACLE_BASE $ZDM_BASE $ZDM_HOME $ZDM_INSTALL_LOC

Next, download the ZDM software into $ZDM_INSTALL_LOC.

Once downloaded, start the installation:

[zdmuser@zdm]$ $ZDM_INSTALL_LOC/zdminstall.sh setup \
  oraclehome=$ZDM_HOME \
  oraclebase=$ZDM_BASE \
  ziploc=$ZDM_INSTALL_LOC/zdm_home.zip -zdm

And it should look something similar to this (screenshot: a successful ZDM installation).

The installation process will output some warnings which can be safely ignored:

[WARNING] [INS-42505] The installer has detected that the Oracle Grid Infrastructure home software at (/u01/app/oracle/zdm19) is not complete.
[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
[WARNING] [INS-41875] Oracle ASM Administrator (OSASM) Group specified is same as the users primary group.
[WARNING] [INS-32022] Grid infrastructure software for a cluster installation must not be under an Oracle base directory.
[WARNING] [INS-32055] The Central Inventory is located in the Oracle base.
[WARNING] [INS-13014] Target environment does not meet some optional requirements.

Also, there are references to root scripts that I should execute, but as mentioned in the documentation this is not needed – don’t run the scripts. ZDM piggybacks on Grid Infrastructure components, which is why the messages are displayed. Start the ZDM service:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice start

This should produce output showing that the service started (screenshot: sample output when starting the ZDM service). And, optionally, I can verify the status of the ZDM service:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice status

The ZDM service is running

Configure Network Connectivity

The ZDM service host must communicate with the source and target hosts via SSH. For that purpose I need a private key file for each of the hosts. The private key files must be without a passphrase, in RSA/PEM format, and placed at /home/zdmuser/.ssh/[host name]. In my demo, the files are to be named:

  • /home/zdmuser/.ssh/srchost
  • /home/zdmuser/.ssh/tgthost

Ensure that only zdmuser can read them:

[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/srchost
[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/tgthost

Now, I will verify the connection. In my example I will connect to opc on both database hosts, but you can change it if you like:

[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/srchost opc@srchost
[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/tgthost opc@tgthost

If you get an error when connecting, ensure the following:

  • The public key is added to /home/opc/.ssh/authorized_keys on both database hosts (change opc if you are connecting as another user)
  • The key files are in RSA/PEM format (the private key file should start with -----BEGIN RSA PRIVATE KEY-----)
  • The key files are without a passphrase

That’s It

Now, I have a working ZDM service host. Previously, I prepared my source and target environments, which means that I am ready to start the migration process. Stay tuned!

References

* I found this blog post by Michał very useful when figuring out how to use OCI CLI to create a compute instance.

Zero Downtime Migration – Preparations

In the following posts I will demo how to migrate a database into OCI. To make the demo as simple as possible my source database will already be running in OCI and my target will be a VM DB System.

Prepare source environment

I will create a VM DB System that can act as my source database. I will call it srchost and the database is called CDB1. Using OCI CLI it can be done like this:

oci db system launch \
  --compartment-id "..." \
  --availability-domain "..." \
  --subnet-id "..." \
  --shape "VM.Standard2.4" \
  --cpu-core-count 4 \
  --database-edition "ENTERPRISE_EDITION" \
  --admin-password "..." \
  --ssh-authorized-keys-file "/path/to/key-file.pub" \
  --license-model "BRING_YOUR_OWN_LICENSE" \
  --db-name "CDB1" \
  --pdb-name "SALES" \
  --storage-management "ASM" \
  --node-count 1 \
  --initial-data-storage-size-in-gb 2048 \
  --display-name "SRCHOST" \
  --hostname "SRCHOST" \
  --db-version "19.0.0.0"

Once completed, I can log on as opc, switch to root, and copy the authorized_keys file to allow SSH connections as oracle:

[opc@srchost]$ sudo su -
[root@srchost]$ rm -f /home/oracle/.ssh/authorized_keys
[root@srchost]$ cp /home/opc/.ssh/authorized_keys /home/oracle/.ssh/authorized_keys
[root@srchost]$ chown -R oracle:oinstall /home/oracle/.ssh

Now, I will ensure that the database is in ARCHIVELOG mode; it is a requirement for using Data Guard:

SELECT log_mode FROM v$database;
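
If the query returns NOARCHIVELOG, you can enable it with a short outage – a minimal sketch, run as SYSDBA:

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;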

Since my database is running 12.2 or higher, I have to ensure that there is a TDE keystore. This applies even if the database is not encrypted. You don’t need a license to create a TDE keystore – not until you start encrypting stuff. The keystore must be OPEN and of type AUTOLOGIN, LOCAL_AUTOLOGIN or PASSWORD. In a CDB, this applies to CDB$ROOT and all PDBs:

SELECT con_id, status, wallet_type FROM v$encryption_wallet;

If status is OPEN_NO_MASTER_KEY it means that no TDE master encryption key has been created and you will need to create one. You can also find instructions on how to create a TDE keystore in the ZDM documentation.
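
To sketch what that involves – the keystore location and password below are placeholders, the location must match your sqlnet.ora or WALLET_ROOT configuration, and you should follow the ZDM documentation for the complete procedure:

-- Placeholder path and password
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/CDB1/wallet' IDENTIFIED BY "MySecretPwd";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "MySecretPwd" CONTAINER=ALL;
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "MySecretPwd" WITH BACKUP CONTAINER=ALL;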

Prepare target environment

In my demo I will be targeting a VM DB System which I will call tgthost. For the migration to work, I have to create a placeholder database on the target host. During migration the placeholder database is overwritten by the source database. On a VM DB System this is easy; I just use the database that gets created automatically as the placeholder.

DB_NAME must be the same in the two databases, and hence in my demo I must set db-name in the OCI CLI command below to CDB1 (the name of my source database). In contrast, DB_UNIQUE_NAME must be different, but this will likely not be a problem because OCI automatically generates a semi-unique DB_UNIQUE_NAME (if you are a strong believer in Murphy’s Law, you should double-check it).
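
Double-checking is a quick query on both the source and the target database:

SELECT name, db_unique_name FROM v$database;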

The target database patch level can be the same as or higher than the source. If higher, you just have to manually execute datapatch after the switch-over (I will put this information into a later blog post). By setting db-version to 19.0.0.0 I automatically get the latest available Release Update.
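
For reference, running datapatch after the switch-over is, in its simplest form (with the environment set to the target database):

[oracle@tgthost]$ $ORACLE_HOME/OPatch/datapatch -verbose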

Also, the SYS password has to match in the two databases, so be sure to set admin-password to your source database SYS password. Finally, you should set storage-management to ASM. This will cause OCI to install Grid Infrastructure as well and set up a SCAN listener.

In my demo I end up with this OCI CLI command:

oci db system launch \
  --compartment-id  "..." \
  --availability-domain "..." \
  --subnet-id "..." \
  --shape "VM.Standard2.4" \
  --cpu-core-count 4 \
  --database-edition "ENTERPRISE_EDITION" \
  --admin-password "..." \
  --ssh-authorized-keys-file /path/to/key-file.pub \
  --license-model "BRING_YOUR_OWN_LICENSE" \
  --db-name "CDB1" \
  --pdb-name "SALES" \
  --storage-management "ASM" \
  --node-count 1 \
  --initial-data-storage-size-in-gb 2048 \
  --display-name "tgthost" \
  --hostname "tgthost" \
  --db-version "19.0.0.0"

As opc, switch to root and enable oracle to log on via SSH:

[opc@tgthost]$ sudo su -
[root@tgthost]$ rm -f /home/oracle/.ssh/authorized_keys
[root@tgthost]$ cp /home/opc/.ssh/authorized_keys /home/oracle/.ssh/authorized_keys
[root@tgthost]$ chown -R oracle:oinstall /home/oracle/.ssh

The requirements for the TDE keystore apply to the target environment as well, but since I am using an OCI DB System, that is set up correctly when the system is created.

Configure Connectivity

I need to ensure that the source host can resolve the network name of the target host. It is important to add two entries – one with the host name and one with the SCAN name (they should both point to the target host):

[root@srchost]$ echo -e "[ip address] tgthost" >> /etc/hosts
[root@srchost]$ echo -e "[ip address] tgthost-scan" >> /etc/hosts

Also, I need to ensure that I can connect to the target database from the source host over SQL*Net. Just use the default service of the target CDB, and you can get it using lsnrctl:

[oracle@tgthost]$ lsnrctl status
[oracle@srchost]$ sqlplus system@tgthost-scan/[target-cdb-service-name]

Now, I do it the other way around. Ensure that the target host can resolve the network name of the source host:

[root@tgthost]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@tgthost]$ echo -e "[ip address] srchost-scan" >> /etc/hosts

Ensure that I can connect to the source database from the target host:

[oracle@srchost]$ lsnrctl status
[oracle@tgthost]$ sqlplus system@srchost-scan/[source-cdb-service-name]

Create Sample Data

I will connect to the source database:

[oracle@srchost]$ export ORACLE_SID=CDB1
[oracle@srchost]$ export ORACLE_PDB_SID=SALES
[oracle@srchost]$ sqlplus / as sysdba

Note that ORACLE_PDB_SID works in 18.5.0 and later; for lower versions you can just use ALTER SESSION SET CONTAINER=SALES instead. Now, create some sample data so I can verify the migration actually works:

CREATE USER zdmtest IDENTIFIED BY [your-password];
GRANT CREATE TABLE, CONNECT TO zdmtest;
GRANT SELECT ON v_$instance TO zdmtest;
GRANT SELECT ON v_$database TO zdmtest;
ALTER USER zdmtest DEFAULT TABLESPACE users;
ALTER USER zdmtest QUOTA UNLIMITED ON users;

CREATE TABLE zdmtest.items (
   id      NUMBER,
   item    VARCHAR2(20),
   price   NUMBER
);
INSERT INTO zdmtest.items VALUES (1, 'Apple', 2);
INSERT INTO zdmtest.items VALUES (2, 'Banana', 1);
INSERT INTO zdmtest.items VALUES (3, 'Orange', 4);
COMMIT;

That’s It

My source database is now ready to be migrated to OCI. It will be migrated to a new VM DB System that I just created. In the next blog post I will install and configure ZDM. Stay tuned!
