While writing the XTTS blog post series, I came across a lot of valuable information. These nuggets were not suitable for a dedicated blog post but are still worth sharing.
Pro Tip 1: The Other Backups
When you are preparing for an XTTS migration, you will be doing a lot of backups of the source database. Will those backups somehow interfere with your existing backups?
The Perl script takes the first backup – the initial level 0 backup – using:
RMAN> backup for transport .... ;
It is a backup created for cross-platform transport and not something to be used to restore the source database. The documentation states about cross-platform backups:
RMAN does not catalog backup sets created for cross-platform transport in the control file. This ensures that backup sets created for cross-platform transport are not used during regular restore operations.
This is good because it ensures that your recovery strategy will not take those backups into account. Most likely, you will be moving the files from the source system pretty soon, and in that case, you don’t want RMAN to depend on them.
But after the initial level 0 backup, you will create level 1 incremental backups. The incremental backups are just regular incremental backups:
RMAN> backup incremental from scn ... tablespace ... ;
It is not a cross-platform backup, so it will be recorded in the control file and/or recovery catalog. Once you move those incremental backups away from the source system, be sure to tidy them up in the RMAN catalog:
RMAN> crosscheck backupset;
RMAN> #Optionally, remove them
RMAN> delete expired backupset;
To sum up: while preparing for the migration, keep taking your regular backups.
Pro Tip 2: Data Pump Parameters
Use a parameter file for your Data Pump export and import. Especially the import will be a lot easier, because you don’t need to write a very long command line with character escapes and the like:
$ cat export.par
directory=mydir
full=y
transportable=always
...
$ expdp ... parfile=export.par
Use multiple dump files and split the files at a reasonable size:
dumpfile=xtts%L.dmp
filesize=5g
Add diagnostic information to the log file (Oracle Database 12c and beyond):
metrics=y
logtime=all
Exclude statistics and check my blog post on how to deal with statistics:
exclude=table_statistics,index_statistics
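Putting the pieces together, a complete export parameter file could look like the sketch below. It simply combines the settings above; the directory name and dump file pattern are the examples from this post, not requirements:

```shell
# Sketch: one parameter file combining the Data Pump settings above.
# Directory name and dump file pattern are examples from this post.
cat > export.par <<'EOF'
directory=mydir
full=y
transportable=always
dumpfile=xtts%L.dmp
filesize=5g
metrics=y
logtime=all
exclude=table_statistics,index_statistics
EOF
# Then run the export with:  expdp system parfile=export.par
cat export.par
```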
Pro Tip 3: Generate Data Files Parameters For Data Pump
I generate the list of data files that Data Pump needs with this command. It works if you are using ASM, and transport_datafiles will point to the alias, not the real file:
export ASMDG=+DATA
asmcmd ls -l $ASMDG | grep '^DATAFILE' | sed -n -e 's/ => .*//p' | sed -n -e 's/^.* N \s*/transport_datafiles='$ASMDG'\//p'
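To illustrate what the pipeline does, here is a sketch that feeds a simulated asmcmd ls -l line through it. The exact column layout of the sample line is an assumption; the important parts are the N in the Sys column (marking an alias) and the alias => real-file mapping:

```shell
# Sketch: run a simulated 'asmcmd ls -l' alias line through the pipeline
# from above; the sample line's column layout is an assumption.
ASMDG=+DATA
sample='DATAFILE  UNPROT  COARSE   MAR 20 10:00:00  N    users.dbf => +DATA/MYDB/DATAFILE/USERS.259.1099'
# grep keeps data file entries, the first sed keeps only alias lines and
# strips the real file name, the second sed prefixes the parameter name.
out=$(echo "$sample" | grep '^DATAFILE' | sed -n -e 's/ => .*//p' | sed -n -e 's/^.* N \s*/transport_datafiles='$ASMDG'\//p')
echo "$out"
# → transport_datafiles=+DATA/users.dbf
```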
Pro Tip 4: Queries
Generate a comma-separated list of tablespaces:
select
listagg(tablespace_name, ',') within group (order by tablespace_name)
from
dba_tablespaces
where
contents='PERMANENT'
and tablespace_name not in ('SYSTEM','SYSAUX');
Generate a comma-separated list of tablespaces in n batches:
define batches=8
select
batch,
listagg(tablespace_name, ',') within group (order by tablespace_name)
from (
select
mod(rownum, &batches) as batch,
tablespace_name
from (
select
t.tablespace_name,
sum(d.bytes)
from
dba_tablespaces t,
dba_data_files d
where
t.tablespace_name=d.tablespace_name
and t.contents='PERMANENT'
and t.tablespace_name not in ('SYSTEM','SYSAUX')
group by
t.tablespace_name
order by 2 desc
)
)
group by batch;
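The inner query orders the tablespaces by size, largest first, and mod(rownum, &batches) then deals them out round-robin, so each batch gets a similar mix of large and small tablespaces. A small shell sketch of that distribution (the tablespace names are made up):

```shell
# Sketch: round-robin distribution of size-ordered tablespaces into
# batches, mimicking mod(rownum, &batches); names are made up.
batches=3
rownum=1
assigned=""
# TS_HUGE is the biggest tablespace, TS_TINY the smallest.
for ts in TS_HUGE TS_BIG TS_MEDIUM TS_SMALL1 TS_SMALL2 TS_TINY; do
  assigned="$assigned batch$((rownum % batches)):$ts"
  rownum=$((rownum + 1))
done
echo "$assigned"
# Each batch ends up with one large and one small tablespace.
```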
Generate read-only commands:
select
'ALTER TABLESPACE ' || tablespace_name ||' READ ONLY;'
from
dba_tablespaces
where
contents='PERMANENT'
and tablespace_name not in ('SYSTEM','SYSAUX');
Pro Tip 5: Troubleshooting
Be sure to always run the Perl script with the debug option enabled:
$ #Set environment variable
$ export XTTDEBUG=1
$ #Or use --debug flag on commands
$ $ORACLE_HOME/perl/bin/perl xttdriver.pl ... --debug 3
MOS notes:
- Known Issues for Cross Platform Transportable Tablespaces XTTS (Doc ID 2311677.1)
- XTTS Creates Alias on Destination when Source and Destination use ASM (Doc ID 2351123.1)
Here’s a video on how to troubleshoot Data Pump:
Pro Tip 6: Update Perl
More than once, I have seen issues in the Perl script caused by an old, outdated version of Perl. Depending on which Perl installation you are using:
- Update Perl in the operating system
- Update Perl in your Oracle Home: JDK and PERL Patches for Oracle Database Home and Grid Home (Doc ID 2584628.1)
Pro Tip 7: Additional Information
These webinars might be of interest to you:
- Migrating Very Large Databases
- Data Pump Extreme – Deep Dive with Development
- Performance Stability, Tips and Tricks and Underscores
You can also flip the slides of all the webinars.
Pro Tip 8: Zero Data Loss Recovery Appliance (ZDLRA)
If you have a ZDLRA you can use it to make your migration easier:
- MOS note: Cross Platform Database Migration using ZDLRA (Doc ID 2460552.1)
- MOS note: ZDLRA: Download new sbt library
- Office Hours: Accelerate Cross-Platform Database Migration with Recovery Appliance
Pro Tip 9: E-Business Suite (EBS)
Pro Tip 10: ORA-39032: function TRANSPORTABLE is not supported in FULL jobs
If you get this error stack when exporting from an Oracle Database 11g:
ORA-39005: inconsistent arguments
ORA-39032: function TRANSPORTABLE is not supported in FULL jobs
You most likely forgot to add the version parameter to your Data Pump export. On Oracle Database 11.2.0.3 and later, version=12 is what enables the full transportable job:
$ expdp ... version=12
Pro Tip 11: Advanced Queues (AQ)
A few good MOS notes to read:
- Understanding How Advanced Queueing (AQ) Objects Are Exported And Imported. (Doc ID 2291530.1)
- What Objects Are Created When Creating a Queue Table ? (Doc ID 224027.1)
The first MOS note is especially interesting:
Now it comes the interesting part, let say DBA needs to export a particular schema that contains queues, and import into another database, so if at the end of the impdp operation the DBA does a simple comparison of the number objects from the origin database schema with the target database schema, there is a big chance these counts will not match, but there will be no errors or warnings at the expdp/impdp logs. This happens exactly because during the export/import we will not consider the queue objects created on the fly on the source side, usually the ones ending by _P and _D, thus the target database may not have these objects, but of course, they may get created later on whenever required during the use of the queue. This is an expected behavior and the functionally is working as expected. A suggested way to check whether everything has been imported successfully is to use a simple query to check the total number of "QUEUE" type objects instead, for example: SQL> select count(*) from DBA_OBJECTS where owner='&schema' and object_type = 'QUEUE';
Other Blog Posts in This Series
- Introduction
- Prerequisites
- Full Transportable Export/Import
- What Is a Self-contained Transportable Tablespace Set
- How to Migrate a Database Using Full Transportable Export/Import and Incremental Backups
- Make It Fast
- Backup on Standby Database
- Slow Network Between Source and Target
- Bigfile Tablespaces
- Testing the Procedure On Activated Standby Database
- Testing the Procedure Multiple Times Using Read-Only Tablespaces
- Target Database and Data Guard
- ASM Aliases and Why You Should Get Rid of Them
- Pro Tips
Thank you!!