Zero Downtime Migration – Monitor GoldenGate Replication

When you are migrating your Oracle Database with Zero Downtime Migration (ZDM) and the Logical Online method, you are using Oracle GoldenGate to handle the replication. The replication keeps the OCI target database in sync until you can perform the switchover. Thus, it is a vital part of the process. Let’s see how you can monitor the replication.

I have already started a migration which is currently paused at the phase ZDM_MONITOR_GG_LAG.

Monitor Services

Log on to the Oracle GoldenGate Hub via HTTPS. Use the host name, private or public IP address or whatever your network allows.

You are prompted for credentials. Log on as oggadmin – in lowercase. You can find the password in the file ogg-credentials.json on the GoldenGate Hub. The login prompt of Oracle GoldenGate
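
If you want to look up the password from the command line first, you can SSH to the hub and print the file. A minimal sketch – I assume here that you connect as opc and that the file sits in the opc home directory:

$ ssh -i <private-key-file> opc@<goldengate-hub-ip>
[opc@gghub]$ cat ogg-credentials.json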

Once logged in, you can ensure that all the services are up and running. Most important are the Administration Server services – on source and target – because they handle the replication. Ensure all services are running. You can start and stop the services from the Action drop-down menu.

By clicking on the port number of each of the services, you can go to the specific console for that service. Again, the Administration Servers are the ones of interest. Click on the port number to go to the specific console

Monitor Extract

If you followed the link to the Source Administration Server (by clicking on the port number) you end up here: Administration Server overview

Ensure that the extract is running. Further down the page you see a list of events. From the Action drop-down menu, you can start and stop the extract, and you can get into the Details as well: Follow the Details link

You can get a lot of details, like extract statistics. Here you can see that the extract has caught one insert into the table locations: Extract statistics
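
If you prefer the command line, the same information is available through the Admin Client on the GoldenGate Hub. This is only a sketch – the port, extract group name and password are placeholders (ZDM generates the group names), and I assume the adminclient binary is on your PATH; the Source deployment name comes with the marketplace image:

[opc@gghub]$ adminclient
OGG> connect https://<hub-host>:<port> deployment Source as oggadmin password <password>
OGG> info extract <extract-group>
OGG> lag extract <extract-group>
OGG> stats extract <extract-group>, total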

Monitor Replicat

Now that you have monitored the extract, you should also monitor the apply of changes, which is handled by the replicat process. Basically, you do the same as with the extract, but use the link on the target Administration Server: Follow the link to the target administration server to monitor the replicat process

Here you should look at the replicat process and ensure that it is running: Ensure replicat process is running

You can start and stop the services and get into the details as described above.
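
The Admin Client works for the replicat as well – just connect to the Target deployment instead (again, port, group name and password are placeholders):

[opc@gghub]$ adminclient
OGG> connect https://<hub-host>:<port> deployment Target as oggadmin password <password>
OGG> info replicat <replicat-group>
OGG> lag replicat <replicat-group>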

Troubleshooting

Here is an example of an error. I have created a table in my source database that does not exist in the target. This breaks the replication. I can use the list of critical events to figure out what went wrong: Replicat failed

Also, if you go into the details of the replicat process, and switch to the report tab, you can find even more details:

You can also find more logs on the GoldenGate host in the directories:

  • /u02/deployments/Source/etc/conf/ogg/
  • /u02/deployments/Target/etc/conf/ogg/

Conclusion

With Oracle GoldenGate Microservices Architecture it is easy to get an overview of the replication process. There are nice graphical overviews and you can even get detailed information and statistics.

Other Blog Posts in This Series

Zero Downtime Migration – Logical Online Migration

You can migrate a database with Zero Downtime Migration (ZDM) using the Logical Online method. The migration will happen with a combination of Data Pump and Oracle GoldenGate. This way even very large databases can be migrated with very little downtime.

In the introduction blog post I covered the basic characteristics of this approach and explained that Oracle GoldenGate can be used at no extra license cost and it is really simple to use.

Migrate your Oracle Database with Zero Downtime Migration ZDM using Data Pump and GoldenGate

With the Logical Online method, you can even target Autonomous Databases – both Oracle Autonomous Transaction Processing (ATP) and Oracle Autonomous Data Warehouse (ADW), shared as well as dedicated. When you target an Autonomous Database, you don’t have access to the underlying database host, so instead of SSH access, ZDM will require SQL*Net connectivity.

Benefits

  • First and most important, you don’t need a license for Oracle GoldenGate to use this approach. Yes, that’s right! You are allowed to use a marketplace edition of Oracle GoldenGate 19c for 183 days to migrate your database with ZDM.

Oracle GoldenGate for Oracle – Database Migrations can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using the following tools: Oracle Zero Downtime Migration and Oracle Cloud Infrastructure Database Migration

  • Even if you don’t have any GoldenGate experience, you can still use this approach. For example, you don’t need to install and configure GoldenGate yourself. The marketplace image creates the GoldenGate installation for you. Afterwards, ZDM configures GoldenGate and will eventually clean up as well.
  • You can migrate directly into a higher release. For example, if your source database is 12.1.0.2, you can migrate into 19c directly. This avoids the extra downtime normally needed to perform an upgrade.
  • You can migrate directly into a PDB. For example, if your source database is a non-CDB, you can migrate directly into a PDB. This avoids the extra downtime normally needed to convert the database into a PDB using the script noncdb_to_pdb.sql.
  • You can use the full power of Data Pump and GoldenGate to transform your data. By default, ZDM will convert any old BasicFile LOBs into SecureFile LOBs during import. But you can apply other transformations as well. You can partition the table in the target database, you can change the tablespace layout, you can apply compression and anything else that is possible with Data Pump.
  • You can migrate into a different character set (Data Pump and GoldenGate restrictions apply).
  • Your Data Pump export is automatically compressed if the source database is Enterprise Edition. Normally, Data Pump compression would require a license for the Advanced Compression Option. But when you migrate with ZDM to OCI, Oracle allows you to use Data Pump compression without having the Advanced Compression Option.
  • Your Data Pump export is automatically encrypted if the source database is Enterprise Edition. Normally, Data Pump encryption would require a license for the Advanced Security Option. But when you migrate with ZDM to OCI, Oracle allows you to use Data Pump encryption without having the Advanced Security Option.
  • You get recommendations on the migration before it is actually started. ZDM includes the Cloud Premigration Advisor Tool (CPAT), which analyzes your database and gives you specific recommendations that match the target database. For example, if you target an Autonomous Database, CPAT will warn you if you have unsupported objects.
  • You can migrate a SE2 database into OCI using this approach as well.
  • You can build your target database in advance and configure backup upfront as well. You can test your backup/recovery strategy before you complete the migration.
  • You can build a standby database in advance as well. Your new OCI target database can be protected by Data Guard from the very second you perform the switchover. I would recommend doing the initial Data Pump load first and then building the standby database. This ensures that a minimum of changes has to be applied via redo.

Considerations

  • The time zone file in the OCI target database must be the same as or higher than that in the source database. Typically, this is not a problem if you provision a brand-new DB System. But if you plan to use an existing CDB in OCI, do check the time zone file upfront:
SQL> select * from v$timezone_file;
  • In My Oracle Support you can find a list of recommended patches to apply on the source database when using Oracle GoldenGate. There is one for 11g and one for 12c and newer. You don’t have to apply those patches, but it is a recommendation. However, if you run into issues, you potentially have to patch your source database.
  • Most databases can be migrated with Oracle GoldenGate. Only very exotic data types are not supported. Be sure to check the documentation.
  • When you use Data Pump to migrate, you will lose a lot of the information that is stored in the data dictionary, like
    • AWR
    • SQL Plan Baselines
    • SQL Profiles
    • SQL Patches
  • You can still transfer that information, but you need to use other tools (a small example follows after this list). If this is relevant to you, I suggest that you watch our webinar Performance Stability, Tips and Tricks and Underscores where it is covered in detail.
  • Depending on whether you use a full or schema mode export, you need to take care of your public objects afterwards, such as synonyms or database links.
  • For a huge database you should be prepared for the time it takes to perform the Data Pump export and import. No need to worry too much because GoldenGate will keep track of the changes. But for a 100 TB database you should not start the job the day before the switchover is supposed to take place.
  • It is not recommended to execute DDL statements while GoldenGate is replicating changes. The extract and replicat processes that are created by GoldenGate do not replicate DDL statements. If you, for example, add a table after the initial Data Pump load, it will not get replicated by GoldenGate. However, GoldenGate is a very powerful tool, and you can configure it to replicate DDL as well. But you will need to adjust the GoldenGate user privileges and change the parameter files. In the documentation you can read more about the possibilities of DDL replication and how to configure it.
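
To give one example of what “other tools” could look like: SQL plan baselines can be moved via a DBMS_SPM staging table that you then transport with Data Pump. This is just a sketch – the staging table name and owner are made up, and SQL profiles and SQL patches have similar pack/unpack routines in their own packages:

SQL> exec dbms_spm.create_stgtab_baseline(table_name => 'SPM_STGTAB', table_owner => 'SYSTEM');
SQL> select dbms_spm.pack_stgtab_baseline(table_name => 'SPM_STGTAB', table_owner => 'SYSTEM') from dual;
-- Move SYSTEM.SPM_STGTAB to the target database with Data Pump, then unpack:
SQL> select dbms_spm.unpack_stgtab_baseline(table_name => 'SPM_STGTAB', table_owner => 'SYSTEM') from dual;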

Fallback

Using this approach there is no straightforward way of falling back to the source on-prem database. You could use GoldenGate to reverse the replication when you switch over to the OCI target database. However, it is not configured automatically by ZDM. So, this is where you would need to know GoldenGate and do the required configuration yourself.

Conclusion

You get a lot more options when you use the Logical Online method to migrate your database. Especially, the possibility of migrating directly into a higher release or into a PDB is very attractive. Plus, you can use GoldenGate during the migration at no extra license cost.

However, the flexibility comes at a cost. There is a little more configuration, and it is not as straightforward as the Physical Online method. But it is not complicated.

Want to Know More

If you want to know more about using Data Pump and GoldenGate for database migrations, I suggest that you take a look at our webinar Migration Strategies – Insights, Tips and Secrets.

In addition, these links contain additional useful information:

Other Blog Posts in This Series

Zero Downtime Migration – Create GoldenGate Hub

The Logical Online migration method in Zero Downtime Migration (ZDM) uses Oracle GoldenGate to keep the Oracle Database in OCI in sync until you perform the switchover. You need an Oracle GoldenGate Hub to do that and you should use the deployment from the OCI Marketplace. Let’s do it!

  1. First, go to the OCI Marketplace and search for goldengate for oracle. You should use the image called Oracle GoldenGate for Oracle – Database Migrations. Use the OCI Marketplace to find the Oracle GoldenGate application

  2. On the application page you can verify that the software is free but you still have to pay for the underlying infrastructure, like the compute instance. In addition, certain terms apply:

Oracle GoldenGate for Oracle – Database Migrations can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using the following tools: Oracle Zero Downtime Migration and Oracle Cloud Infrastructure Database Migration

  3. When you hit the Get App button, you need to log in to OCI. Be sure to select the right region and compartment. Again, you can verify that the software is free – but you pay for the compute instance. Hit Launch Stack. Launching an Oracle GoldenGate stack using OCI Marketplace This version of Oracle GoldenGate is available for free but you have to pay for the underlying infrastructure

  4. On the next page, Create Stack – Stack Information, simply give it a name. I call it OGG4DEMO.

  5. On the next page, Create Stack – Configure Variables, you fill in additional information like network, your SSH public key, etc. But most important – you also need to specify the Oracle Database version of the source and target database, and whether the target database is an Autonomous Database. Don’t worry about the deployment names; they are good as they are. Even if the target attributes are listed as optional – fill them out. Pick the right database versions for the source and target database

  6. Review the settings and hit Create. The stack is now being deployed. The stack is deploying and very soon it will be ready

  7. Now, head over to Compute and Instances to find the newly created compute instance with Oracle GoldenGate already installed. Depending on your network settings, use either the public or private IP address or the computer name (DNS). When the stack has been created, you can find it in Compute > Instances

  8. Use the IP address (or computer name) and your SSH key to connect to the server as opc. There is a file called ogg-credentials.json which contains the username and password needed to connect to the Oracle GoldenGate Hub. In ogg-credentials.json you find the username and password required to connect to the Oracle GoldenGate Hub

  9. Now you can connect via HTTPS (https://193.122.52.34) to the Oracle GoldenGate Hub. Use the credentials from ogg-credentials.json and log in. That’s it – your very own Oracle GoldenGate deployment. The landing page when you log in to the Oracle GoldenGate Hub

Other Blog Posts in This Series

Appendix

A. The very last thing is to apply the latest patches to Oracle GoldenGate. But that will be a topic for a future post.

B. When you connect to Oracle GoldenGate via HTTPS, you will probably get a security warning in your browser. The initial deployment comes with a self-signed certificate which alerts many browsers. Follow the procedures in your organization to deploy a proper certificate. Read more about Securing the Microservices Architecture.

C. The credentials for Oracle GoldenGate are case sensitive. It applies to the password but also the username (oggadmin, not OGGADMIN).

How to Migrate a Database Using Full Transportable Export Import and Incremental Backups

These steps will guide you through a migration of a database using Full Transportable Export Import (FTEX) and incremental backups. I covered the concept in a previous blog post, which you should read to understand the basics. Remember that Transportable Tablespaces and Full Transportable Export/Import require Enterprise Edition.

My demo environment looks like this: Overview of demo environment for migrating using FTEX and incremental backups

I have a 12.1.0.2 database that I want to migrate to a PDB in a new CDB that runs 19c.

Check Prerequisites

Create a new PDB called SALES in the target CDB:

TARGET/CDB1 SQL> create pluggable database sales admin user admin identified by admin;
TARGET/CDB1 SQL> alter pluggable database sales open;
TARGET/CDB1 SQL> alter pluggable database sales save state;

Prepare the database to use TDE Tablespace Encryption:

TARGET/CDB1 SQL> alter session set container=sales;
TARGET/CDB1 SQL> administer key management set key force keystore identified by <keystore-pwd> with backup;

Verify SQL*Net connectivity from source host to target PDB:

[oracle@source]$ sqlplus system@<target ip>/<pdb-service-name>

Verify database character set and national character set are the same:

SOURCE/SALES SQL> select property_name, property_value from database_properties where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

TARGET/SALES SQL> select property_name, property_value from database_properties where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

Ensure the source database is in ARCHIVELOG mode:

SOURCE/SALES SQL> select log_mode from v$database;

Enable block change tracking on source database. Requires Enterprise Edition (on-prem), DBCS EE-EP (cloud) or Exadata. Although strictly speaking not required, it is strongly recommended:

SOURCE/SALES SQL> select status, filename from v$block_change_tracking;
SOURCE/SALES SQL> alter database enable block change tracking;

Ensure that you can connect from the source to the target host as oracle:

[oracle@source]$ ssh <target ip> date

Identify Tablespaces

Identify all the tablespaces that you will migrate. With FTEX you should transport all the tablespaces, except those that contain Oracle maintained data, like SYSTEM, SYSAUX, UNDO and so forth:

SOURCE/SALES SQL> select tablespace_name from dba_tablespaces;

Save the list of tablespaces for later. In my demo, I only have the tablespace SALES besides the Oracle maintained ones.
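
If you have many tablespaces, a query like this can help build the list – just a sketch, adjust it to your environment:

SOURCE/SALES SQL> select tablespace_name
                  from dba_tablespaces
                  where contents = 'PERMANENT'
                  and tablespace_name not in ('SYSTEM', 'SYSAUX');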

Next, on the target database, ensure that none of the existing tablespaces conflict with the ones you are transporting:

TARGET/SALES SQL> select tablespace_name from dba_tablespaces;

If there is a conflict of names, you have to drop or rename the tablespaces in the target database.

Download and Configure Perl Scripts

Create a folder to hold the perl scripts, download the scripts from MOS doc ID 2471245.1, and unzip:

[oracle@source]$ rm -rf /home/oracle/xtts
[oracle@source]$ mkdir /home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ # Download rman_xttconvert_VER4.3.zip from MOS Doc ID 2471245.1
[oracle@source]$ unzip rman_xttconvert_VER4.3.zip

Create a working directory (also known as the scratch location) which will hold the backups. Ensure that you have enough space at this location on both the source and target hosts.

[oracle@source]$ rm -rf /u01/app/oracle/xtts_scratch
[oracle@source]$ mkdir -p /u01/app/oracle/xtts_scratch

Create the same location on the target host:

[oracle@target]$ rm -rf /u01/app/oracle/xtts_scratch
[oracle@target]$ mkdir -p /u01/app/oracle/xtts_scratch

Configure your migration in xtt.properties. In this demo the file looks like this:

tablespaces=SALES
platformid=13
src_scratch_location=/u01/app/oracle/xtts_scratch
dest_scratch_location=/u01/app/oracle/xtts_scratch
dest_datafile_location=+DATA
asm_home=/u01/app/19.0.0.0/grid
asm_sid=+ASM1
parallel=4
rollparallel=4
getfileparallel=4
metatransfer=1
dest_user=oracle
dest_host=<target_ip>
desttmpdir=/u01/app/oracle/xtts_scratch
srcconnstr=sys/<password>@sales
destconnstr=sys/<password>@newsales
usermantransport=1

A little explanation:

  • platformid is set to 13 because this is a Linux migration. You can get the number by querying v$transportable_platform (see the query after this list).
  • Adjust the parallel options according to the capabilities of the source and target system.
  • When you are using ASM disk group in dest_datafile_location you must also set asm_home and asm_sid.
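
One simple way to look up the platform ID of your source platform is to join v$transportable_platform with v$database:

SOURCE/SALES SQL> select tp.platform_id, tp.platform_name, tp.endian_format
                  from v$transportable_platform tp
                  join v$database d on d.platform_name = tp.platform_name;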

Finally, copy the scripts (and the configuration) to your target system:

[oracle@source]$ scp -r /home/oracle/xtts/ <target_ip>:/home/oracle/

Initial Backup and Restore

Now, you can start the first initial backup of the database. You take it while the source database is up and running, so it doesn’t matter if the backup/restore cycle takes hours or days to complete:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

The perl script has been configured in such a way that it automatically transfers the backups to the target system. In addition to that, a small text file must be transferred as well:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

Now, on the target system, you can restore the backup that was just taken. If needed, the data files are automatically converted to the proper endian format. If conversion is needed, you need space for a copy of all the data files:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Incremental Backup and Restore

You can – and should – run the incremental backup and restore as many times as possible. The more frequently you run them, the faster they complete, because there will be fewer changes to apply. Especially close to the start of the migration downtime window, you should run them often to minimize the time it will take to perform the final backup and restore:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

Transfer res.txt:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

And restore on the target system:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Final Incremental Backup and Restore

Now downtime starts! Set the tablespaces read-only:

SOURCE/SALES SQL> alter tablespace SALES read only;

Perform the final incremental backup:

[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

You will receive an error because the tablespace is read-only. This is ignorable: This error is ignorable because the tablespace was set read-only on purpose

Transfer res.txt:

[oracle@source]$ scp res.txt oracle@<target_ip>:/home/oracle/xtts

And restore on the target system:

[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Import Metadata Using FTEX

Create a directory object that points to the xtts folder:

TARGET/SALES SQL> create directory LOGDIR as '/home/oracle/xtts';

Next, create a database link to the source database that can be used to import the metadata. If the source database is already a PDB, ensure that the database link points directly into the PDB:

TARGET/SALES SQL> create public database link SRCLNK connect to system identified by <password> using '//<source_ip>:1521/<service_name>';

Test that it works:

TARGET/SALES SQL> select * from dual@srclnk;

Next, create a par file (sales_imp.par) that you can use for the Data Pump import (see appendix below for explanation):

network_link=SRCLNK
full=y
transportable=always
metrics=y
logtime=all
exclude=TABLE_STATISTICS,INDEX_STATISTICS
exclude=SYS_USER
exclude=TABLESPACE:"IN('TEMP')"
exclude=SCHEMA:"IN('SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR')"
directory=logdir
logfile=sales_imp.log
transport_datafiles='+DATA/CDB1_FRA2VD/B2D617FCB79B0684E053AF01000A6DCE/DATAFILE/SALES.281.105552609'

Start Data Pump and perform the import. newsales is a TNS alias that points into the SALES PDB in the target CDB. If you have encrypted tablespaces, you should use the option encryption_pwd_prompt, which allows you to input the TDE password. It can be omitted if there are no encrypted tablespaces.

$ impdp system@newsales parfile=sales_imp.par encryption_pwd_prompt=yes

Once the import has completed, you should examine the Data Pump log file for any critical errors. Check the appendix (see below) for ignorable errors:

[oracle@target]$ vi /home/oracle/xtts/sales_imp.log

That’s it! Your data has been migrated. Now would be a good time to:

  • Test your application.
  • Start a backup.
  • Gather statistics – they were excluded from the export (see the example after this list).
  • Drop the database link that points to the source database.
  • Cleanup the file system:
    • /home/oracle/xtts
    • /u01/app/oracle/xtts_scratch
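
For the statistics, something as simple as this will do as a starting point – a sketch only; tailor it to your own statistics strategy:

TARGET/SALES SQL> exec dbms_stats.gather_dictionary_stats;
TARGET/SALES SQL> exec dbms_stats.gather_database_stats;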

Conclusion

Even huge, TB-sized databases can be migrated with very little downtime by using incremental backups. Combining the perl scripts from My Oracle Support with Full Transportable Export/Import makes it a simple process. In addition, you can even migrate to a new endian format, to a higher release and into a PDB in one operation. It requires Enterprise Edition, and you must have plenty of disk space – potentially twice the size of your database.

There is a video on our YouTube channel that you can watch. It demos the entire process. I suggest that you subscribe to our channel and get notified whenever there are new videos.

Thanks to my good colleague, Robert Pastijn, for supplying a runbook that was used as inspiration.

Appendix

If Source Database Is in OCI and Automatic Backup Is Enabled

If the source database is running in OCI and you have enabled automatic backup, you must make a few changes.

In xttprep.tmpl around line 319 change:

cp('backup for transport allow inconsistent ' ||

to

cp('set encryption off for all tablespaces;set compression algorithm "basic";backup for transport allow inconsistent ' ||

In xttdriver.pl around line 4268 change:

my $rman_str1 = "set nocfau;";

to

my $rman_str1 = "set nocfau;".
                "set encryption off for all tablespaces ;".
                "set compression algorithm 'basic' ;" ;

ORA-02085

If you get ORA-02085 when querying over the database link:

TARGET/SALES SQL> alter system set global_names=false;

Data Pump Parameters

Use network_link to specify the name of the database link that points back to the source database.

full=y and transportable=always instructs Data Pump to perform a full transportable export/import.

exclude=TABLE_STATISTICS,INDEX_STATISTICS excludes statistics from the import. It is better and faster to gather new, fresh statistics on the target database. If you insist on importing your statistics, you should use DBMS_STATS.
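
If you do decide to carry the statistics over, the usual DBMS_STATS pattern is a staging table that you move with Data Pump. A sketch – the schema name and staging table are made up:

SOURCE/SALES SQL> exec dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'MYSTATS');
SOURCE/SALES SQL> exec dbms_stats.export_schema_stats(ownname => 'SALES', stattab => 'MYSTATS', statown => 'SYSTEM');
-- Move SYSTEM.MYSTATS to the target database, then:
TARGET/SALES SQL> exec dbms_stats.import_schema_stats(ownname => 'SALES', stattab => 'MYSTATS', statown => 'SYSTEM');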

exclude=SYS_USER excludes the import of the SYS user. In a PDB that is not even allowed, and most likely you are not interested in importing the definition of the SYS user.

exclude=TABLESPACE:"IN('TEMP')" excludes the temporary tablespace from the import. Most likely there is already a temporary tablespace in the new, target PDB. It is faster to create a TEMP tablespace in advance – and name it the same as in the source database.

A change was made to Spatial in 19c, and some Spatial admin users were removed. To avoid errors/noise in the log file, you can safely exclude them from the import by specifying exclude=SCHEMA:"IN('SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR')".

transport_datafiles is used to specify the data files that make up the tablespaces you are transporting. Specify the parameter multiple times to specify more data files. You can use asmcmd to get the data file paths and names.

Data Pump Ignorable Errors

Multimedia is desupported in 19c, but the code is still there. You can safely disregard this error:

Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
ORA-39342: Internal error - failed to import internal objects tagged with ORDIM due to ORA-00955: name is already used by an existing object.

The package was removed in 12.2. See ORA-39083 And ORA-04042 Errors On DBMS_DEFER_SYS When Importing Into 12.2 Database (Doc ID 2335846.1):

Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
ORA-39083: Object type PROCACT_SYSTEM failed to create with error:ORA-04042: procedure, function, package, or package body does not exist

Failing sql is:
BEGIN
SYS.DBMS_UTILITY.EXEC_DDL_STATEMENT('GRANT EXECUTE ON DBMS_DEFER_SYS TO "DBA"');COMMIT; END;

Minimal Downtime Migration with Full Transportable Export Import and Incremental Backups

If you need to migrate a database to the cloud or anywhere else for that matter, you should consider using Full Transportable Export Import (FTEX) and incremental backups. Even for really large databases – 10s or 100s of TB – you can still migrate with minimal downtime. And it works across different endian formats.

FTEX uses transportable tablespaces and the solution has these benefits:

  • You can implicitly upgrade the database as part of the migration
  • You can migrate from a non-CDB and into a PDB
  • You can keep downtime at a minimum by using frequent incremental backups
  • You can migrate across endianness – e.g. from AIX or Solaris to Oracle Linux

How Does It Work

The concept is explained in this video on our YouTube channel, and it includes a demo:

To make the backup and convert process really easy, Oracle provides a perl script that can automate the entire process. You download the scripts from My Oracle Support: V4 PERL Scripts to reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

What You Need to Be Aware Of

Transportable Tablespaces

To get a complete list of limitations on transporting data, you should have a look in the documentation. Also, there are some limitations specific to transportable tablespaces. The most notable are:

  • Character set and national character set should be the same. If not, there are a few options available, however.
  • No columns can be encrypted with TDE Column Encryption. The only option is to remove the encryption before migration and re-encrypt afterwards.
  • TDE Tablespace Encryption is supported for same-endian migration if the source database is 12.1.0.2 or newer. If you need to go across endianness, you must decrypt the tablespaces, and re-encrypt after migration. Remember in Oracle Database 12.2 you can encrypt tablespaces online.
  • If you are migrating across endianness, you must convert the data files. You must have disk space to hold a copy of all the data files. In addition, you should perform the convert on the platform that has the best I/O system and most CPUs. Typically, this is the cloud platform, which also offers scaling possibilities.
  • The database timezone and timezone file version must be identical (a quick check is shown after this list).
  • Requires Enterprise Edition.
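
One way to compare is to run these two queries on both the source and target databases:

SQL> select version from v$timezone_file;
SQL> select dbtimezone from dual;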

Full Transportable Export Import

  • Source database must be 11.2.0.3 or higher
  • Target database must be 12.1.0.1 or higher
  • It is recommended to import directly into the target database using the NETWORK_LINK option.
  • Requires Enterprise Edition.

If you can’t meet these requirements, you can still use this solution. But instead of doing a FTEX, you need to use Data Pump in another way.

Incremental Backups Using Perl Scripts

  • Source database must be 10.2.0.3 or higher
  • Target database must be 11.2.0.4 or higher

In addition, it is strongly recommended to use Block Change Tracking (BCT) on the source database. Note that this is an Enterprise Edition feature (in OCI: DBCS EE-EP or ExaCS). If you don’t enable BCT, the incremental backups will be much slower, because RMAN has to scan every single data block for changes. With BCT the database keeps track of changes in a special file. When RMAN backs up the database, it will just get a list of data blocks to include from the change tracking file.
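
Once BCT is enabled, you can verify that the incremental backups actually use it – and how few blocks RMAN had to read – with a query along these lines:

SQL> select file#, incremental_level, used_change_tracking, blocks_read, datafile_blocks
     from v$backup_datafile
     where incremental_level > 0
     order by completion_time desc;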

The scripts will create a level 0 image file backup, and you must have room to accommodate this on your file system.

Conclusion

By using a combination of Full Transportable Export Import and incremental backups, you can migrate even huge databases to the cloud. And it even works for cross-endian migrations, like AIX or Solaris to Oracle Linux.

If you want to learn more about endianness and transportable tablespaces, you should watch this video on our YouTube Channel:

Update 30 November 2020

Thanks to Mark for his comment (see below). I have added some additional useful information.

Further Reading

Move to the Cloud – Webinar

Yesterday, Mike Dietrich and I gave the final webinar in the Oracle Database 19c Upgrade Virtual Classroom series. It was about Move to the Cloud – not only for techies. Now, I say final – but we all know you should never say never. And in this case, it applies to final as well. We are already talking about subjects for a seventh webinar. If you have an interesting topic that you think we should cover, get in touch with me.

Oracle Database 19c Upgrade Virtual Classroom

Unfortunately, due to a technical glitch we skipped the part about migrating using transportable tablespaces and full transportable export/import. We uploaded the missing part to YouTube, so you can watch it.

For those interested, you can now download the slides. We had a lot of information to share, so browse through the deck to find a lot of hidden slides. Typically, there are references and links to more information about a specific topic.

Within a week it should be possible to watch a recording of the webinar.

The Demos and Videos

The presentation we gave was a brand new one. We used as many demos and videos as we could – or rather, had time to prepare. We will post them on our YouTube channel as soon as possible. I suggest that you subscribe to it, so you can receive word as soon as new content arrives. Further, we want to enhance the presentation even more, so we will be putting in more videos and demos. Let me know if there was a topic that could be improved with a video or demo.

Thank You

Thanks to everyone who participated yesterday. Happy migrating!

Migration Strategies

Mike Dietrich and I gave a presentation on Migration Strategies – Insights, Tips and Secrets today. The webinar is part of the Oracle Database 19c Upgrade Virtual Classroom series.

Oracle Database 19c Upgrade Virtual Classroom

The slide deck contains more information than we presented at the webinar. We had hidden a lot of slides that contain additional information and useful links. The slide deck is ready to download.

Soon you will be able to watch a recording as well.

Move to the Cloud (for techies)

If you want to know even more about migrations, you should register for the next webinar. It’s all about cloud migrations.

Date: Thursday 15 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Whether you have databases in a cloud environment, or you plan to lift databases soon, this webinar is for you. We won’t cover cloud solution benefits but will show you how you can migrate your database(s) into the Oracle Cloud. We’ll start with Autonomous and also cover migration into VMs, Bare Metal, OCI, ExaCS and ExaCC. And we’ll look at strategies for minimizing downtime, where ZDM can help. This two-hour webinar is not strictly for technical geeks ‒ but our focus will be on practical migration approaches.

Thank You

Thanks to everyone who participated today. We had a lot of fun and look forward to seeing you on Thursday. Happy migrating!

Debut on the Big Stage

Mike Dietrich and I will give two webinars in mid-October on database migrations. One of them will focus solely on migration to Oracle Cloud Infrastructure. The webinars are part of the Oracle Database 19c Upgrade Virtual Classroom series. If you are interested, you have to register.

Oracle Database 19c Upgrade Virtual Classroom

Migration Strategies: Insights, Tips and Secrets

Date: Tuesday 13 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Now it’s time for us to dig deeper. We’d like to offer you further insights and a deep dive from a technical point of view, starting with Data Pump and Transportable Tablespaces, then Full Transportable Export Import, and adding RMAN Incremental Backups to decrease the downtime. Best practices and real-world experience will round up this two-hour webinar.

Move to the Cloud (for techies)

Date: Thursday 15 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Whether you have databases in a cloud environment, or you plan to lift databases soon, this webinar is for you. We won’t cover cloud solution benefits but will show you how you can migrate your database(s) into the Oracle Cloud. We’ll start with Autonomous and also cover migration into VMs, Bare Metal, OCI, ExaCS and ExaCC. And we’ll look at strategies for minimizing downtime, where ZDM can help. This two-hour webinar is not strictly for technical geeks ‒ but our focus will be on practical migration approaches.

Debut on the Big Stage

These two webinars will be my debut on the big stage. The previous webinars had a huge interest and, unfortunately, some people couldn’t join because the webinar had reached its maximum number of attendees. If you register, I recommend joining early to get your seat.

I have been presenting in person and virtually for some years now. Also, since I joined Oracle in January, I have been doing quite a few presentations. However, this is my first time for such a big crowd. I am excited and look forward to it – but also a little intimidated. Luckily, I have super-star Mike Dietrich there to (virtually) hold my hand.

And, finally, I promise you: No marketing slides – just demos and details.

I hope to see you there!

Zero Downtime Migration – The Pro Tips

Here are my pro tips. It is a little mix and match of all my notes that didn’t make it into the previous blog posts but are still too good to leave out.

Pro Tip 1: Log Files

If something goes south, where can you find the log files? On the ZDM service host:

  • $ZDM_BASE/chkbase/scheduled
  • $ZDM_BASE/crsdata/<hostname>/rhp

On the source and target hosts you can also find additional log files containing all the commands that are executed by ZDM:

  • $ORACLE_BASE/zdm/zdm_<db_unique_name>_<zdm_job_id>/zdm/log

Other sources:

  • Alert log
  • Data Pump process trace file DM00
  • Data Pump log file, which you can find in:
    • The directory referenced by the Data Pump directory object
    • $ORACLE_HOME/rdbms/log/<PDB GUID>

Pro Tip 2: Troubleshooting

When you are troubleshooting, it is sometimes useful to get rid of all the log files and have ZDM start all over. Some of the log files get really big and are hard to read, so I usually stop the ZDM service, delete all the log files, and restart ZDM and my troubleshooting. But only do this if there are no other jobs running than the one you are troubleshooting:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmservice stop
[zdmuser@zdmhost]$ rm $ZDM_BASE/crsdata/*/rhp/rhpserver.log*
[zdmuser@zdmhost]$ rm $ZDM_BASE/chkbase/scheduled/*
[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmservice start

The documentation also has several chapters on troubleshooting.

Pro Tip 3: Aborting A Job

Sometimes it is useful to completely restart a migration. If a database migration is already registered in ZDM, you are not allowed to specify another migration job for it. First, you have to abort the existing job before you can submit a new migration job.

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli abort job -jobid n

Now, you can use zdmcli migrate database command again.

Pro Tip 4: Show All Phases

A ZDM migration is split into phases, and you can have ZDM pause after each of the phases. The documentation has a list of all phases but you can also get it directly from the ZDM tool itself for a specific migration job:

[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp ~/migrate.rsp \
   ... \
   ... \
   ... \
   -listphases

Pro Tip 5: Adding Custom Scripts

You can add your own custom scripts to run before or after a phase in the migration job. You can use the -listphases command (described above) to get a list of all the phases. Then decide whether your script should run before or after that phase. This is called an action plug-in. You can bundle those together in a template to make it easier to re-use. If this is something you need, you should dive into the documentation.

If you target an Autonomous Database, you are not allowed to execute scripts on the target database host. Instead, you can run .sql scripts in the database.

The environment in which the script starts has some environment variables that you can use (see the sketch after the list), like:

  • Database (ZDM_SRCDB)
  • Oracle Home (ZDM_SRCDBHOME)
  • ZDM Phase (RHP_PHASE)
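
As an illustration only, a custom action script could be as simple as this sketch, which just logs when a phase is reached for the source database:

#!/bin/bash
# Hypothetical ZDM action plug-in script
# Logs the phase and the source database it was invoked for
echo "$(date): phase ${RHP_PHASE} reached for database ${ZDM_SRCDB} (Oracle Home ${ZDM_SRCDBHOME})" >> /tmp/zdm_custom_action.log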

Pro Tip 6: GoldenGate Health Check

You can use the healthcheck script on the source and target databases – where the extract and replicat processes are working. It will give you invaluable information for your troubleshooting, and it is a good idea to run and attach a health check if you need to contact My Oracle Support. It is like an AWR report but with information specific to Oracle GoldenGate replication.

Generate the report by:

  • Installing objects in the database: ogghc_install.sql
  • Executing the health check: ogghc_run.sql
  • Optionally, cleaning up the objects: ogghc_uninstall.sql

For GoldenGate Microservices Architecture, find the scripts on the GoldenGate hub:

  • /u01/app/ogg/oraclenn/lib/sql/healthcheck

And run the scripts in the source and target databases.
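
Running them is plain SQL*Plus work once you have copied the scripts from the hub to the database host. A sketch – the exact file names in your healthcheck directory may carry version suffixes:

[oracle@source]$ sqlplus / as sysdba
SQL> @ogghc_install.sql
SQL> @ogghc_run.sql
SQL> @ogghc_uninstall.sql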

Pro Tip 7: Convert From Single Instance To RAC

A useful feature of ZDM is that it can convert a single instance database to a RAC database in OCI. And it is super simple to do that. The only thing you have to do is to create the target placeholder database as a RAC database. ZDM will detect that and take care of the rest.

Finally, let me mention that if the source database is RAC One Node or RAC, then the target database must be a RAC database. Be sure to create the target placeholder database as RAC.

Pro Tip 8: Get Data Pump Log File in Autonomous Database

When you are importing into Autonomous Database, the Data Pump log file is stored in the directory DATA_PUMP_DIR. But in Autonomous Database you don’t have access to the underlying file system, so how do you get the log file? One approach is to upload the log file into Object Storage.

  1. ZDM will create a set of credentials as part of the migration workflow. Find the name of the credentials (or create new ones using DBMS_CLOUD):
select owner, credential_name, username, enabled from dba_credentials;
  2. Find the name of the Data Pump log file:
select * from dbms_cloud.list_files('DATA_PUMP_DIR');
  3. Upload it. If you need help generating the URI, check the documentation:
begin
    DBMS_CLOUD.PUT_OBJECT (
       credential_name => '<your credential>',
       object_uri      => 'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/',
       directory_name  => 'DATA_PUMP_DIR',
       file_name       => '<file name>');
end;
/
  4. Use the OCI Console to download the Data Pump log file.

Other Blog Posts in This Series

Zero Downtime Migration – Install And Configure ZDM

To use Zero Downtime Migration (ZDM) I must install a Zero Downtime Migration service host. It is the piece of software that will control the entire process of migrating my database into Oracle Cloud Infrastructure (OCI). This post describes the process for ZDM version 21. The requirements are:

  • Must be running Oracle Linux 7 or newer.
  • 100 GB of disk space according to the documentation. I could do with way less – basically, there should be a few GBs for the binaries and then space for configuration and log files.
  • SSH access (port 22) to each of the database hosts.
  • Recommended to install it on a separate server (although technically possible to use one of the database hosts).

Create and configure server

In my example, I will install the ZDM service host on a compute instance in OCI. There are no requirements for CPU or memory – ZDM only acts as a coordinator, and all the work is done by the database hosts – so I can use the smallest compute shape available. I was inspired to use OCI CLI after reading a blog post by Michał. I will use that approach to create the compute instance. But I could just as well use the web interface or REST APIs.

First, I will define a few variables that you have to change to your needs. DISPLAYNAME is the hostname of my compute instance – and also the name I see in the OCI webpage. AVAILDOM is the availability domain into which the compute instance is created. SHAPE is the compute shape:

DISPLAYNAME=zdm
AVAILDOM=OUGC:EU-FRANKFURT-1-AD-1
SHAPE=VM.Standard2.1

When I create a compute instance using the webpage these are the values: Screenshot of OCI webpage where display name, availability domain and shape are shown

In addition, I will define the OCID of my compartment, and also the OCID of the subnet that I will use. I am making sure to select a subnet that I can reach via SSH from my own computer. Last, I have the public key file:

COMPARTMENTID="..."
SUBNETID="..."
PUBKEYFILE="/path/to/key-file.pub"

Because I want to use the latest Oracle Linux 7 image I will query for the OCID of that and store it in a variable:

IMAGEID=`oci compute image list \
   --compartment-id $COMPARTMENTID \
   --operating-system "Oracle Linux" \
   --sort-by TIMECREATED \
   --query "data[?contains(\"display-name\", 'GPU')==\\\`false\\\` && contains(\"display-name\", 'Oracle-Linux-7')==\\\`true\\\`].{DisplayName:\"display-name\", OCID:\"id\"} | [0]" \
   | grep OCID \
   | awk -F'[\"|\"]' '{print $4}'`

And now I can create the compute instance:

oci compute instance launch \
 --compartment-id $COMPARTMENTID \
 --display-name $DISPLAYNAME \
 --availability-domain $AVAILDOM \
 --subnet-id $SUBNETID \
 --image-id $IMAGEID \
 --shape $SHAPE \
 --ssh-authorized-keys-file $PUBKEYFILE \
 --wait-for-state RUNNING

The command will wait until the compute instance is up and running because I used the --wait-for-state RUNNING option. Now, I can get the public IP address so I can connect to the instance:

VMID=`oci compute instance list \
  --compartment-id $COMPARTMENTID \
  --display-name $DISPLAYNAME \
  --lifecycle-state RUNNING \
  | grep \"id\" \
  | awk -F'[\"|\"]' '{print $4}'`
oci compute instance list-vnics \
 --instance-id $VMID \
 | grep public-ip \
 | awk -F'[\"|\"]' '{print $4}'

Prepare Host

The installation process is described in the documentation, which you should visit to get the latest changes. Log on to the ZDM service host and install the required packages (python36 is needed for OCI CLI):

[root@zdm]$ yum -y install \
  glibc-devel \
  expect \
  unzip \
  libaio \
  kernel-uek-devel-$(uname -r) \
  python36

Create a ZDM group and user:

[root@zdm]$ groupadd zdm ; useradd -g zdm zdmuser

Make it possible to SSH to the box as zdmuser. I will just reuse the SSH keys from opc:

[root@zdm]$ cp -r /home/opc/.ssh /home/zdmuser/.ssh ; chown -R zdmuser:zdm /home/zdmuser/.ssh

Create directory for Oracle software and change permissions:

[root@zdm]$ mkdir /u01 ; chown zdmuser:zdm /u01

Edit the hosts file, and ensure name resolution works to the source host (srchost) and target host (tgthost):

[root@zdm]$ echo -e "[ip address] srchost" >> /etc/hosts
[root@zdm]$ echo -e "[ip address] tgthost" >> /etc/hosts

Install And Configure ZDM

Now, to install ZDM I will log on as zdmuser and set the environment in my .bashrc file:

[zdmuser@zdm]$ echo "INVENTORY_LOCATION=/u01/app/oraInventory; export INVENTORY_LOCATION" >> ~/.bashrc
[zdmuser@zdm]$ echo "ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_BASE=\$ORACLE_BASE; export ZDM_BASE" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_HOME=\$ZDM_BASE/zdm21; export ZDM_HOME" >> ~/.bashrc
[zdmuser@zdm]$ echo "ZDM_INSTALL_LOC=/u01/zdm21-inst; export ZDM_INSTALL_LOC" >> ~/.bashrc
[zdmuser@zdm]$ source ~/.bashrc

Create directories:

[zdmuser@zdm]$ mkdir -p $ORACLE_BASE $ZDM_BASE $ZDM_HOME $ZDM_INSTALL_LOC

Next, download the ZDM software into $ZDM_INSTALL_LOC.

Once downloaded, start the installation:

[zdmuser@zdm]$ ./zdminstall.sh setup \
  oraclehome=$ZDM_HOME \
  oraclebase=$ZDM_BASE \
  ziploc=./zdm_home.zip -zdm

And it should look something similar to this: Screenshot of a successful ZDM installation

Start the ZDM service:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice start

Which should produce something like this: Sample output when starting ZDM service (jwcctl debug jwc). And, optionally, I can verify the status of the ZDM service:

[zdmuser@zdm]$ $ZDM_HOME/bin/zdmservice status

The ZDM service is running

Install OCI CLI

You might need OCI CLI as part of the migration. It is simple to install, so I always do it:

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
oci setup config

You find further instructions here.

Configure Network Connectivity

The ZDM service host must communicate with the source and target hosts via SSH. For that purpose, I need private key files for each of the hosts. The private key files must be without a passphrase, in RSA/PEM format, and I have to put them at /home/zdmuser/.ssh/[host name]. In my demo, the files are to be named:

  • /home/zdmuser/.ssh/srchost
  • /home/zdmuser/.ssh/tgthost
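
If you need to generate new key pairs in that format, something like this should work – a sketch; remember that the matching public keys must be added to authorized_keys on the database hosts:

[zdmuser@zdm]$ ssh-keygen -t rsa -m PEM -N "" -f /home/zdmuser/.ssh/srchost
[zdmuser@zdm]$ ssh-keygen -t rsa -m PEM -N "" -f /home/zdmuser/.ssh/tgthost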

Ensure that only zdmuser can read them:

[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/srchost
[zdmuser@zdm]$ chmod 400 /home/zdmuser/.ssh/tgthost

Now, I will verify the connection. In my example I will connect as opc on both database hosts, but you can change it if you like:

[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/srchost opc@srchost
[zdmuser@zdm]$ ssh -i /home/zdmuser/.ssh/tgthost opc@tgthost

If you get an error when connecting, ensure the following:

  • The public key is added to /home/opc/.ssh/authorized_keys on both database hosts (change opc if you are connecting as another user)
  • The key files are in RSA/PEM format (the private key file should start with -----BEGIN RSA PRIVATE KEY-----)
  • The key files are without a passphrase

That’s It

Now, I have a working ZDM service host. I am ready to start the migration process.

It is probably also a good idea to find a way to start the ZDM service automatically, if the server restarts.
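
One very simple approach – a sketch; a proper systemd service would be nicer – is a cron entry for zdmuser:

[zdmuser@zdm]$ (crontab -l 2>/dev/null; echo "@reboot $ZDM_HOME/bin/zdmservice start") | crontab -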

There is also a community marketplace image that comes with ZDM already installed. You can read about it here; evaluate it and see if it is something for you.

Other Blog Posts in This Series