Here are my pro tips. It is a little mix and match of all the notes that didn't make it into the previous blog posts but are still too good to throw away.
Pro Tip 1: Log Files
If something goes south, where can you find the log files? On the ZDM service host:
- $ZDM_BASE/crsdata/<zdm host>/rhp/rhpserver.log*
- $ZDM_BASE/chkbase/scheduled
On the source and target hosts you can also find additional log files containing all the commands that are executed by ZDM:
- Alert log
- Data Pump process trace file DM00
- Data Pump log file
- Directory referenced by directory object
Pro Tip 2: Troubleshooting
When you are troubleshooting, it is sometimes useful to get rid of all the log files and have ZDM start all over. Some of the log files get really big and are hard to read, so I usually stop the ZDM service, delete all the log files, and restart ZDM and my troubleshooting. But only do this if there are no other jobs running than the one you are troubleshooting:
[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmservice stop
[zdmuser@zdmhost]$ rm $ZDM_BASE/crsdata/*/rhp/rhpserver.log*
[zdmuser@zdmhost]$ rm $ZDM_BASE/chkbase/scheduled/*
[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmservice start
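Since this deletes logs for all jobs, you can wrap the steps above in a small guarded script. This is only a sketch of my own, not part of ZDM: the ZDM_HOME and ZDM_BASE defaults and the DRY_RUN guard are illustrative assumptions, and the script only echoes the commands until you explicitly disable the guard.

```shell
#!/bin/sh
# Sketch: guarded cleanup of ZDM log files (illustrative, not an official tool).
# Defaults below are assumptions; point them at your actual ZDM install.
ZDM_HOME=${ZDM_HOME:-/u01/app/zdmhome}
ZDM_BASE=${ZDM_BASE:-/u01/app/zdmbase}
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 only when no other jobs are running

# Echo the command in dry-run mode; execute it otherwise.
run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "DRY RUN: $*"
    else
        "$@"
    fi
}

run "$ZDM_HOME/bin/zdmservice" stop
run rm -f "$ZDM_BASE"/crsdata/*/rhp/rhpserver.log*
run rm -f "$ZDM_BASE"/chkbase/scheduled/*
run "$ZDM_HOME/bin/zdmservice" start
```

With the default DRY_RUN=1 it only prints what it would do, so you can review the paths before letting it loose.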
There are also several chapters on troubleshooting:
- Oracle Zero Downtime Migration 21.1 Release Notes – Troubleshooting
- Oracle Zero Downtime Migration 21.1 Release Notes – Known Issues
- Move to Oracle Cloud Using Zero Downtime Migration – Troubleshooting
Pro Tip 3: Aborting A Job
Sometimes it is useful to completely restart a migration. If a database migration is already registered in ZDM, you are not allowed to specify another migration job. First, you have to abort the existing job before you can enter a new migration job.
[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli abort job -jobid n
Now, you can use the zdmcli migrate database command again.
Pro Tip 4: Show All Phases
A ZDM migration is split into phases, and you can have ZDM pause after each of the phases. The documentation has a list of all phases but you can also get it directly from the ZDM tool itself for a specific migration job:
[zdmuser@zdmhost]$ $ZDM_HOME/bin/zdmcli migrate database \
   -rsp ~/migrate.rsp ... \
   ... \
   ... \
   -listphases
Pro Tip 5: Adding Custom Scripts
You can add your own custom scripts to run before or after a phase in the migration job. You can use the
-listphases command (described above) to get a list of all the phases. Then decide whether your script should run before or after that phase. This is called an action plug-in. You can bundle those together in a template to make it easier to re-use. If this is something you need, you should dive into the documentation.
If you target an Autonomous Database, you are not allowed to execute scripts on the target database host. Instead, you can run .sql scripts.
The environment in which the script starts has some environment variables that you can use, like:
- Database (ZDM_SRCDB)
- Oracle Home (ZDM_SRCDBHOME)
- ZDM Phase (RHP_PHASE)
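To make this concrete, here is a minimal sketch of what an action plug-in script could look like. Only the three environment variables above come from ZDM; the log file location and the phase value checked are my own illustrative assumptions.

```shell
#!/bin/sh
# Sketch of a ZDM action plug-in script (illustrative, not an official template).
# ZDM populates RHP_PHASE, ZDM_SRCDB, and ZDM_SRCDBHOME when it invokes the script.
LOGFILE=${LOGFILE:-/tmp/zdm_custom_action.log}

# Record what ZDM handed us, falling back to "unknown" outside of ZDM.
{
    echo "Timestamp:   $(date)"
    echo "Phase:       ${RHP_PHASE:-unknown}"
    echo "Database:    ${ZDM_SRCDB:-unknown}"
    echo "Oracle Home: ${ZDM_SRCDBHOME:-unknown}"
} >> "$LOGFILE"
```

Logging the phase and database like this is a cheap way to verify that your plug-in fires at the phase you expect before you add any real logic to it.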
Pro Tip 6: GoldenGate Health Check
You can use the healthcheck script on the source and target databases – where the extract and replicat processes are running. It will give you invaluable information for your troubleshooting, and it is a good idea to run a health check and attach it if you need to contact My Oracle Support. It is like an AWR report but with information specific to Oracle GoldenGate replication.
Generate the report by:
- Installing the objects in the database:
- Executing the health check:
- Optionally, cleaning up the objects:
For GoldenGate Microservices Architecture, find the scripts on the GoldenGate hub:
Then run the scripts in the source and target databases.
Pro Tip 7: Convert From Single Instance To RAC
A useful feature of ZDM is that it can convert a single instance database to a RAC database in OCI. And it is super simple to do that. The only thing you have to do is to create the target placeholder database as a RAC database. ZDM will detect that and take care of the rest.
Finally, let me mention that if the source database is RAC One Node or RAC, then the target database must be a RAC database. Be sure to create the target placeholder database as RAC.
Pro Tip 8: Get Data Pump Log File in Autonomous Database
When you are importing into Autonomous Database, the Data Pump log file is stored in the directory DATA_PUMP_DIR. But in Autonomous Database you don’t have access to the underlying file system, so how do you get the log file? One approach is to upload the log file into Object Storage.
- ZDM will create a set of credentials as part of the migration workflow. Find the name of the credentials (or create new ones using DBMS_CLOUD.CREATE_CREDENTIAL):
select owner, credential_name, username, enabled from dba_credentials;
- Find the name of the Data Pump log file:
select * from dbms_cloud.list_files('DATA_PUMP_DIR');
- Upload it. If you need help generating the URI, check the documentation:
begin
   DBMS_CLOUD.PUT_OBJECT (
      credential_name => '<your credential>',
      object_uri      => 'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/',
      directory_name  => 'DATA_PUMP_DIR',
      file_name       => '<file name>');
end;
/
- Use the OCI Console to download the Data Pump log file.
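As a side note, the object_uri used in the PUT_OBJECT call follows a fixed pattern, so you can sketch it in shell before pasting it into the PL/SQL block. The region, namespace, and bucket values below are placeholders I made up; substitute your own tenancy details.

```shell
#!/bin/sh
# Build the Object Storage URI used as object_uri in DBMS_CLOUD.PUT_OBJECT.
# All three values are illustrative placeholders, not real tenancy details.
REGION=${REGION:-eu-frankfurt-1}
NAMESPACE=${NAMESPACE:-mytenancynamespace}
BUCKET=${BUCKET:-zdm-logs}

OBJECT_URI="https://objectstorage.${REGION}.oraclecloud.com/n/${NAMESPACE}/b/${BUCKET}/"
echo "$OBJECT_URI"
```

Printing the URI first makes it easy to spot a wrong region or namespace before the upload fails inside the database.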
Other Blog Posts in This Series
- Install And Configure ZDM
- Physical Online Migration
- Physical Online Migration to DBCS
- Physical Online Migration to ExaCS
- Physical Online Migration and Testing
- Physical Online Migration of Very Large Databases
- Logical Online Migration
- Logical Online Migration to DBCS
- Logical Offline Migration to Autonomous Database
- Logical Online Migration and Testing
- Logical Online Migration of Very Large Databases
- Logical Online and Sequences
- Logical Migration and Statistics
- Logical Migration and the Final Touches
- Create GoldenGate Hub
- Monitor GoldenGate Replication
- The Pro Tips (this post)