I always fear the worst when I get a TNS error. It’s not my expertise. A TNS error was exactly what I got while I configured a Data Guard environment. Redo Transport didn’t work; the redo logs never made it to the standby database.
The Error
I took a look in the alert log on the primary database and found this error:
2022-05-10T08:25:28.739917+00:00
"alert_SALES2.log" 5136L, 255034C
TCP/IP NT Protocol Adapter for Linux: Version 12.2.0.1.0 - Production
Time: 10-MAY-2022 18:09:02
Tracing not turned on.
Tns error struct:
ns main err code: 12650
TNS-12650: No common encryption or data integrity algorithm
ns secondary err code: 0
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
A little further in the alert log, I found proof that the primary database could not connect to the standby database:
2022-05-10T18:09:02.991061+00:00
Error 12650 received logging on to the standby
TT04: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (12650)
TT04: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
2022-05-10T18:09:02.991482+00:00
Errors in file /u01/app/oracle/diag/rdbms/sales2_fra3cx/SALES2/trace/SALES2_tt04_75629.trc:
ORA-12650: No common encryption or data integrity algorithm
Error 12650 for archive log file 1 to '...'
The Investigation
As always, Google it! Although I have used DuckDuckGo for privacy reasons instead of Google for many years, I still say google it, which is fairly annoying.
First, I looked in sqlnet.ora on the primary database. It showed that any connection made to or from this database must use data integrity checks; CRYPTO_CHECKSUM_SERVER and CRYPTO_CHECKSUM_CLIENT define that. Also, the database will only accept connections using the SHA1 algorithm.
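A configuration with that effect looks something like this (the exact file isn't reproduced here; values are illustrative):

SQLNET.CRYPTO_CHECKSUM_SERVER = required
SQLNET.CRYPTO_CHECKSUM_CLIENT = required
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA1)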
Then I looked in sqlnet.ora on the standby database:
This database does not require data integrity checks, but if the other party requests or requires it, the server is fine with it. That's the meaning of ACCEPTED. But look at the allowed algorithms: when acting as server (i.e., receiving connections from someone else), it does not allow the SHA1 algorithm, the only one allowed by the counterpart.
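Illustrated with example values, the relevant standby settings were along these lines:

SQLNET.CRYPTO_CHECKSUM_SERVER = accepted
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256, SHA384, SHA512)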
The Solution
I decided to remove all instances of SHA1 because it's old and has been made insecure by computer evolution. From Wikipedia:
In cryptography, SHA-1 (Secure Hash Algorithm 1) is a cryptographically broken but still widely used hash function which takes an input and produces a 160-bit (20-byte) hash value known as a message digest – typically rendered as a hexadecimal number, 40 digits long. It was designed by the United States National Security Agency, and is a U.S. Federal Information Processing Standard.
Since 2005, SHA-1 has not been considered secure against well-funded opponents; as of 2010 many organizations have recommended its replacement. NIST formally deprecated use of SHA-1 in 2011 and disallowed its use for digital signatures in 2013. As of 2020, chosen-prefix attacks against SHA-1 are practical. As such, it is recommended to remove SHA-1 from products as soon as possible and instead use SHA-2 or SHA-3. Replacing SHA-1 is urgent where it is used for digital signatures.
But why require data integrity checks at all? They protect against attacks such as these:
Data modification attack
An unauthorized party intercepting data in transit, altering it, and retransmitting it is a data modification attack. For example, intercepting a $100 bank deposit, changing the amount to $10,000, and retransmitting the higher amount is a data modification attack.
Replay attack
Repetitively retransmitting an entire set of valid data is a replay attack, such as intercepting a $100 bank withdrawal and retransmitting it ten times, thereby receiving $1,000.
Can I do more to strengthen security in sqlnet.ora?
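Yes. A natural next step, shown here only as a sketch and not necessarily the exact configuration I ended up with, is to require both data integrity and encryption on all connections and to allow only modern algorithms:

SQLNET.CRYPTO_CHECKSUM_SERVER = required
SQLNET.CRYPTO_CHECKSUM_CLIENT = required
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA512, SHA384, SHA256)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA512, SHA384, SHA256)
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)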
Many commands that involve Transparent Data Encryption (TDE) require inputting the TDE keystore password. Also, when you use AutoUpgrade on an encrypted Oracle Database, you probably need to store the TDE keystore password using the -load_password option.
Manually inputting passwords is unsuitable for an environment with a high degree of automation. In Oracle Database it is solved by Secure External Password Store (SEPS) (as of Oracle Database 12.2). In a previous blog post, I showed how you could use it to your advantage.
This blog post is about how to use AutoUpgrade together with SEPS.
Good News
As of version 22.2 AutoUpgrade fully supports Oracle Database with a Secure External Password Store. If SEPS contains the TDE keystore password, you don’t have to input the password using the -load_password option.
If you are using AutoUpgrade in some sort of automation (like Ansible), you should look into SEPS. AutoUpgrade can use SEPS when the TDE keystore password is needed, and you can upgrade and convert completely unattended.
How To
The Oracle Database DB12 is encrypted and on Oracle Database 12.2. I want to upgrade, convert, and plug it into CDB2 on Oracle Database 19c.
Ensure that your Oracle Databases DB12 and CDB2 are properly configured with a Secure External Password Store and that it contains the TDE keystore password.
Ensure that AutoUpgrade is version 22.2 or higher:
$ java -jar autoupgrade.jar -version
Create your AutoUpgrade config file and set global.keystore as specified in a previous blog post:
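Here is a minimal sketch of such a config file. The log directory, Oracle homes, and keystore location are just examples; adjust them to your environment:

global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
global.keystore=/u01/app/oracle/cfgtoollogs/autoupgrade/keystore
upg1.sid=DB12
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.target_cdb=CDB2

Then run the analyze phase:

$ java -jar autoupgrade.jar -config DB12.cfg -mode analyze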
Action Required is empty, which verifies that I don't need to input the TDE keystore passwords. AutoUpgrade checked SEPS in CDB2 and found that it works. It is impossible to check SEPS in DB12 because it is on Oracle Database 12.2; the functionality was added in Oracle Database 19c.
You must configure an AutoUpgrade keystore. Even though you are not loading any TDE keystore passwords, it is still required. Some commands require a passphrase (or transport secret), and AutoUpgrade must store it in its keystore.
Whenever a database is using SEPS, and a TDE keystore password is required, AutoUpgrade will use the IDENTIFIED BY EXTERNAL STORE clause.
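For illustration, this is the kind of statement the database then runs; no password appears in the command because it is fetched from SEPS:

SQL> administer key management set keystore open identified by external store;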
What Else
You can mix and match. If only one database uses SEPS, you can input the other TDE keystore password manually using the -load_password option. AutoUpgrade will check your database configuration and ask only for the needed TDE keystore passwords.
Converting an encrypted non-CDB to a PDB requires the keystore passwords of the non-CDB and the target CDB. You can do it with AutoUpgrade, and you can upgrade in the same operation.
How To
The Oracle Database DB12 is encrypted and on Oracle Database 12.2. I want to upgrade, convert, and plug it into CDB2 on Oracle Database 19c.
Ensure that AutoUpgrade is version 22.2 or higher:
$ java -jar autoupgrade.jar -version
Create your AutoUpgrade config file and set global.keystore as specified in a previous blog post:
Add the TDE keystore passwords into the AutoUpgrade keystore:
$ java -jar autoupgrade.jar -config DB12.cfg -load_password
TDE> add DB12
Enter your secret/Password:
Re-enter your secret/Password:
TDE> add CDB2
Enter your secret/Password:
Re-enter your secret/Password:
Save the passwords into the AutoUpgrade keystore. I choose to create an auto-login keystore:
TDE> save
Convert the keystore to auto-login [YES|NO] ? YES
TDE> exit
If AutoUpgrade does not report any other problems, start the upgrade and conversion. Since I chose to create an AutoUpgrade auto-login keystore, I don’t have to provide the password when AutoUpgrade starts:
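For example, using the same config file as above:

$ java -jar autoupgrade.jar -config DB12.cfg -mode deploy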
First, AutoUpgrade upgrades the database to Oracle Database 19c. This is a regular non-CDB database upgrade. It requires an auto-login keystore.
After the upgrade, AutoUpgrade exports the encryption keys into a file. To avoid writing the encryption keys in clear text in the export file, the database needs a passphrase (transport secret) to encrypt the encryption key. AutoUpgrade generates a passphrase and stores it in the AutoUpgrade keystore. In addition, the database needs the keystore password. This is the WITH SECRET and IDENTIFIED BY clauses of the ADMINISTER KEY MANAGEMENT EXPORT KEYS statement.
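As a sketch, the statement looks like this; the secret, file name, and keystore password are placeholders:

administer key management
  export encryption keys with secret "my-secret"
  to '/u01/app/oracle/keys_db12.p12'
  identified by "keystore-password";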
The encryption keys are imported into CDB$ROOT of the target CDB. To load the encryption keys from the export file, the database needs the passphrase and the keystore password (of the target CDB). AutoUpgrade gets both from the AutoUpgrade keystore. These are the WITH SECRET and IDENTIFIED BY clauses of the ADMINISTER KEY MANAGEMENT IMPORT KEYS statement.
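Again as a sketch, with placeholders:

administer key management
  import encryption keys with secret "my-secret"
  from '/u01/app/oracle/keys_db12.p12'
  identified by "keystore-password" with backup;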
The pluggable database is created from the manifest file using CREATE PLUGGABLE DATABASE statement.
AutoUpgrade executes the ADMINISTER KEY MANAGEMENT IMPORT KEYS statement again – this time while connected to the PDB itself.
Finally, AutoUpgrade completes the PDB conversion by running noncdb_to_pdb.sql.
The encryption keys are imported two times – first in CDB$ROOT and then in the PDB itself. AutoUpgrade must import into CDB$ROOT if the PDB has any of the system tablespaces (SYSTEM or SYSAUX) or the undo tablespace encrypted.
Fallback
AutoUpgrade fallback functionality also works for an upgrade and PDB conversion. But there are a few requirements:
A target_pdb_copy_option must be specified.
The database must be Enterprise Edition.
A guaranteed restore point must be created (default behavior).
It is not possible to revert the PDB conversion. To fall back, the data files must be copied as part of the PDB conversion. You specify that the data files are copied by using the config file parameter target_pdb_copy_option.
As an example, if I want to copy the data files during plug-in and generate OMF names, I use this parameter:
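In my config file that would be (the upg1 prefix matches the entry used above):

upg1.target_pdb_copy_option=file_name_convert=none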
AutoUpgrade automatically creates a guaranteed restore point at the beginning of an upgrade. To revert the upgrade, AutoUpgrade issues a FLASHBACK DATABASE statement. The config file parameter restoration governs the creation of the restore point. The default value is YES, meaning AutoUpgrade creates a guaranteed restore point, and fallback is possible.
If all prerequisites are met, I can revert the entire operation and return the database to the original state (from 19c PDB back into a 12.2 non-CDB). 103 is the job id of the upgrade/PDB conversion:
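Using the config file from before, the command looks like this:

$ java -jar autoupgrade.jar -config DB12.cfg -restore -jobs 103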
An unplug-plug upgrade of an encrypted PDB requires the keystore password of the source and target CDB, and you can do it with AutoUpgrade.
How To
The pluggable database PDB1 is encrypted and is plugged into CDB1, which is Oracle Database 12.2. I want to upgrade the PDB to Oracle Database 19c by plugging it into CDB2.
Ensure that AutoUpgrade is version 22.2 or higher:
$ java -jar autoupgrade.jar -version
Create your AutoUpgrade config file and set global.keystore as specified in a previous blog post:
Add the TDE keystore passwords into the AutoUpgrade keystore:
$ java -jar autoupgrade.jar -config PDB1.cfg -load_password
TDE> add CDB1
Enter your secret/Password:
Re-enter your secret/Password:
TDE> add CDB2
Enter your secret/Password:
Re-enter your secret/Password:
Save the passwords into the AutoUpgrade keystore. I choose to create an auto-login keystore:
TDE> save
Convert the keystore to auto-login [YES|NO] ? YES
TDE> exit
If AutoUpgrade does not report any other problems, start the unplug-plug upgrade. Since I chose to create an AutoUpgrade auto-login keystore, I don’t have to provide the password when AutoUpgrade starts:
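For example:

$ java -jar autoupgrade.jar -config PDB1.cfg -mode deploy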
When AutoUpgrade needs to unplug the encrypted PDB into a manifest file, the source CDB will need the TDE keystore password. AutoUpgrade can get it from its keystore. This is the IDENTIFIED BY clause of the ALTER PLUGGABLE DATABASE ... UNPLUG INTO statement.
The encryption keys of the PDB go into the manifest file. The database doesn’t want to write the encryption keys in clear text in the manifest file and asks for a passphrase that can encrypt the encryption keys. AutoUpgrade generates a passphrase and stores the passphrase in the AutoUpgrade keystore. This is the ENCRYPT USING clause of the ALTER PLUGGABLE DATABASE ... UNPLUG INTO statement.
When the PDB plugs into the target CDB, the target CDB will need the TDE keystore password. This is the IDENTIFIED BY clause of the CREATE PLUGGABLE DATABASE ... USING statement.
The database must get the encryption keys of the PDB from the manifest file. The encryption keys are encrypted using a passphrase. The database asks AutoUpgrade for the passphrase, which is stored in the AutoUpgrade keystore. This is the DECRYPT USING clause of the CREATE PLUGGABLE DATABASE ... USING statement.
Fallback
AutoUpgrade fallback functionality also works on an encrypted PDB. When it comes to unplug-plug upgrades and fallback capability, it is a requirement that the data files were copied as part of the upgrade process.
In the above example, a fallback using AutoUpgrade would not be possible. Since I did not specify a target_pdb_copy_option, the data files were reused. Other means of falling back to the original state are needed.
Had I specified a target_pdb_copy_option in my config file, a fallback would be possible. In the below example, I am specifying a copy option. file_name_convert=none means that the data files are copied and new OMF names are generated:
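For example (the upg1 prefix depends on your config file):

upg1.target_pdb_copy_option=file_name_convert=none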
In CDBs the default way of storing TDE encryption keys is in a united keystore. The CDB has one keystore and all PDBs store their encryption keys in that one keystore.
With Oracle Database 19.14 a new option became possible: isolated keystore. The CDB still has a keystore that PDBs can use. But you can also configure each individual PDB to use its own keystore.
You can load a password for an isolated keystore using:
Upgrade Non-CDB and CDB
Upgrading a non-CDB or an entire CDB is straightforward with AutoUpgrade. There is only one requirement:
An auto-login keystore must be present.
The auto-login keystore enables the database to open the TDE keystore without a DBA manually entering the keystore password. During a database upgrade, the database will restart multiple times. The upgrade process embeds the restarts, and there is no way for a DBA to intervene halfway to enter the TDE keystore password. Hence, it is required to use an auto-login keystore.
You can query the database for the type of the TDE keystore:
SQL> select wallet_type from v$encryption_wallet;
AUTOLOGIN
It must be an AUTOLOGIN keystore or a LOCAL_AUTOLOGIN keystore. I like the local auto-login keystore because it adds an additional layer of security.
When a proper keystore is in place, you can start the upgrade.
The database can find the TDE keystore configuration in one of several places:
The WALLET_ROOT initialization parameter (as of Oracle Database 19c).
The ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora.
$ORACLE_BASE/admin/DB_UNIQUE_NAME/wallet
$ORACLE_HOME/admin/DB_UNIQUE_NAME/wallet
Oracle recommends using the parameter WALLET_ROOT when your database is on Oracle Database 19c. The parameter was introduced in Oracle Database 19c, and all other methods have been deprecated.
It is easier to configure the TDE keystore using WALLET_ROOT than sqlnet.ora.
AutoUpgrade can implement the changes needed to switch to the WALLET_ROOT parameter as part of an upgrade. I recommend doing that.
TNS_ADMIN
Often, sqlnet.ora defines the TDE keystore configuration. This means that the TNS_ADMIN location is important.
TNS_ADMIN defaults to $ORACLE_HOME/network/admin. But sometimes, it is relocated either via a profile (like .bashrc) or using srvctl setenv database. AutoUpgrade fully supports both methods.
But it does happen from time to time that there are issues with the TNS_ADMIN location. Recently, I saw it at a customer. The customer used a dedicated sqlnet.ora for each database. The parameter ENCRYPTION_WALLET_LOCATION was unique in each of the sqlnet.ora files. They had issues with their profiles, and AutoUpgrade picked up the wrong sqlnet.ora. This caused AutoUpgrade to report issues with the TDE keystore during the analyze phase.
Luckily, AutoUpgrade has config file parameters that override the TNS_ADMIN location.
You put them into the config file, and AutoUpgrade will set the TNS_ADMIN environment variable before executing any command. That effectively overrides any other TNS_ADMIN setting:
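A sketch of what that could look like; to the best of my knowledge the parameters are called source_tns_admin_dir and target_tns_admin_dir, but verify the exact names in the AutoUpgrade documentation:

upg1.source_tns_admin_dir=/u01/app/oracle/admin/SALES2/network
upg1.target_tns_admin_dir=/u01/app/oracle/admin/SALES2/network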
Usually, I would not recommend using these parameters. In most cases, the correct TNS_ADMIN location is set, and all is good. Use them only when you encounter issues.
It is now easier to upgrade and convert your encrypted Oracle Database. The latest version of AutoUpgrade adds much better support for Oracle Databases that are encrypted with Transparent Data Encryption (TDE).
You must ensure that you are using the latest version of AutoUpgrade. You can download it from My Oracle Support AutoUpgrade Tool (Doc ID 2485457.1). At the time of writing, the latest version of AutoUpgrade is 22.2.
Dealing with TDE, also means dealing with sensitive information. AutoUpgrade must adequately protect the TDE keystore passwords. To do so, AutoUpgrade can have its own keystore to store sensitive information, i.e., TDE keystore passwords. Whenever a TDE keystore password is needed, e.g., during an unplug-plug upgrade of an encrypted PDB, it can get the password from the AutoUpgrade keystore.
You need to tell AutoUpgrade where it can create the keystore. You do so in the config file:
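For example (the directory is just a suggestion):

global.keystore=/u01/app/oracle/cfgtoollogs/autoupgrade/keystore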
It is similar to other keystores that Oracle Database uses. ewallet.p12 is the keystore, and cwallet.sso is an auto-login keystore used to open the real keystore. You don't have to create an auto-login keystore.
You should protect the AutoUpgrade keystore files like you protect any other Oracle Database keystore:
Apply restrictive file system permissions.
Audit access.
Back it up.
Using the Keystore
Create your AutoUpgrade config file and specify global.keystore as described above. Start an interactive prompt that allows you to add the necessary passwords:
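For example:

$ java -jar autoupgrade.jar -config DB12.cfg -load_password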
The first time you use the AutoUpgrade keystore, you must provide a password that protects the AutoUpgrade keystore:
Starting AutoUpgrade Password Loader - Type help for available options
Creating new keystore - Password required
Enter password:
Enter password again:
Keystore was successfully created
In the TDE console, the following commands are available:
add
delete
list
save
help
exit
You reference each database by its SID. If you want to add a TDE password for the database DB12, use the following command:
TDE> add DB12
Enter your secret/Password:
Re-enter your secret/Password:
TDE> add CDB2
Enter your secret/Password:
Re-enter your secret/Password:
If you want to delete the TDE password for DB12:
TDE> delete DB12
Keystore Password is required prior to operation
Enter wallet password:
When you save the passwords into the AutoUpgrade keystore, you must decide whether you want to have an auto-login keystore:
TDE> save
Convert the keystore to auto-login [YES|NO] ?
I recommend using auto-login keystores. If you do not create an AutoUpgrade auto-login keystore, you will be prompted for the AutoUpgrade keystore password when you start AutoUpgrade.
If you want to use AutoUpgrade in noconsole mode (-noconsole), then an auto-login keystore is required.
I will show how to upgrade and convert encrypted databases in later blog posts.
Loss of AutoUpgrade Keystore
What happens if your AutoUpgrade keystore is lost? This is fairly simple. You can re-create the keystore and load all passwords into it using the load_password command line option as described above.
Preupgrade Checks
We have added new preupgrade checks to the analyze phase in AutoUpgrade. These checks will help you provide the needed passwords and ensure your TDE configuration meets certain standards.
In this blog post series, I use Full Transportable Export/Import (FTEX) to move the metadata during a cross-platform transportable tablespace migration (XTTS). The documentation states:
You can use the full transportable export/import feature to copy an entire database from one Oracle Database instance to another.
Requirements
A different blog post already covers the requirements for FTEX. Below is a supplement to that list:
The user performing the export and import must have the roles DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE, respectively. Don’t run the Data Pump jobs as SYS AS SYSDBA!
During export, the default tablespace of the user performing the export must not be one of the tablespaces being transported. In addition, the default tablespace of the user performing the export must be writable. Data Pump needs this to create the control table.
The target database (non-CDB) or PDB must not contain a tablespace of the same name as one of the tablespaces being transported. Often this is the case with the USERS tablespace. Either use Data Pump remap_tablespace or rename the tablespace (alter tablespace users rename to users2).
All tablespaces are transported. It is not possible to exclude a tablespace or a user from the operation.
What Is Included?
Generally, you should count on everything being included, except SYS objects and the things listed in the next chapter. Below is a list of things that are included as well; these are examples from questions I have previously been asked.
If a user schema has tables in the SYSTEM or SYSAUX tablespace, such tables are also transported. But they are not stored in the transported tablespaces. Instead, those tables are exported into the dump file using a conventional export. Examples:
If you created any new tables as SYSTEM or any other internal schema (except SYS), those tables will also be transported. If such tables are in the SYSTEM or SYSAUX tablespace, they are exported into the dump file. Examples:
No need to emphasize that you should never create any objects in Oracle-maintained schemas. But we all know it happens…
Public and private database links.
Private synonyms.
Profiles.
Directories, including the privileges granted on them, although they are owned by SYS. The contents stored in the directory location on the file system must be moved manually.
External table definitions, but the underlying external files must be moved manually.
BFILE LOBs, but the underlying external files must be moved manually.
All schema-level triggers (CREATE TRIGGER ... ON SCHEMA), including those on system events, except those owned by SYS.
All database-level triggers (CREATE TRIGGER ... ON DATABASE) owned by an internal schema, except SYS.
SQL patches.
SQL plan baselines.
SQL profiles.
SQL plan directives.
User-owned scheduler objects.
Unified auditing policies and audit records.
What Is Not Included?
The transport does not include any object owned by SYS. Here are some examples:
User-created tables in SYS schema are not transported at all. You must re-create such tables (but you should never create such tables in the first place).
Grants on tables or views owned by SYS, like DBA_USERS or v$datafile.
Any trigger owned by SYS.
SYS-owned scheduler objects.
In addition, the following is not included:
Index monitoring (ALTER INDEX ... MONITORING USAGE).
Public synonyms.
AWR data is not included. You can move such data using the script $ORACLE_HOME/rdbms/admin/awrextr.sql.
How Does It Work?
There are two keywords used to start a full transportable job: TRANSPORTABLE and FULL. If you want to start an FTEX import directly over a network link:
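As a sketch, a network-mode FTEX import could be started like this. The database link, directory object, file names, and paths are examples only:

$ impdp system parfile=ftex_imp.par

$ cat ftex_imp.par
network_link=SOURCEDB
full=y
transportable=always
metrics=y
logtime=all
directory=DATA_PUMP_DIR
logfile=ftex_imp.log
transport_datafiles=/u02/oradata/CDB2/pdb1/users01.dbf
transport_datafiles=/u02/oradata/CDB2/pdb1/sales01.dbf
# add version=12 (or higher) if the source database is an 11g release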
Start on a small database and work on your runbook.
Eventually, prove it works on a production-size database.
Automate
To ensure consistency. There are many steps, and it is easy to overlook a step or miss a detail.
To avoid human error. Humans make mistakes. Period!
Save logs
Data Pump
RMAN
Terminal output
Automate clean-up procedure
To repeat tests and effectively clean up the target environment.
In case of failure and rollback during production migration, you should know how to resume operations safely.
Shut down source database
Be sure to take the source database offline after the migration. Having users connect to the wrong database after a migration is a disaster.
Data Pump Import
Importing directly into the target database using the NETWORK_LINK option is recommended.
Timezone File Version
Check the timezone file version of your source and target database:
SQL> select * from v$timezone_file;
If they differ and the target timezone file version is higher than the source database, Data Pump will convert any TIMESTAMP WITH TIME ZONE (TSTZ) column to the newer timezone conventions. The conversion happens automatically during import.
Since Data Pump must update data during the import, it must be able to turn the tablespaces READ WRITE. Thus, you can't use TRANSPORTABLE=KEEP_READ_ONLY if you have tables with TSTZ columns. Trying to do so will result in:
ORA-39339: Table "SCHEMA"."TABLE" was skipped due to transportable import and TSTZ issues resulting from time zone version mismatch.
Source time zone version is ?? and target time zone version is ??.
If your target database has a lower timezone file version, you can’t use FTEX. You must upgrade the timezone file in your database.
TDE Tablespace Encryption
If the source database has one or more encrypted tablespaces, you must either:
Supply the keystore password on export using the Data Pump option ENCRYPTION_PASSWORD.
Specify ENCRYPTION_PWD_PROMPT=YES and Data Pump will prompt for the keystore password. This approach is safer because the encryption password would otherwise end up in the shell history.
You can read more about Full Mode and transportable tablespaces in the documentation.
You can only transport encrypted tablespaces if the source and target platform share the same endian format. For example, going from Windows to Linux is fine because they are both little-endian platforms. Going from AIX to Linux will not work; that's big endian to little endian.
When a tablespace is transported to a platform of a different Endian format, the data files must be converted. The conversion does not work on encrypted tablespaces. The only option is to decrypt the tablespace before transport.
When migrating Oracle Database to a different endian format using transportable tablespaces and incremental backups (XTTS), a list of requirements must be met.
Below are the most important requirements when using the Perl scripts; for the complete list, check the MOS note:
Windows is not supported.
RMAN on the source system must not have DEVICE TYPE DISK configured to COMPRESSED.
RMAN on the source system must not have default channel configured to type SBT.
For Linux: Minimum version for source and destination is 11.2.0.3.
Other platforms: Minimum version for source and destination is 12.1.0.2.
Disk space for a complete backup of the database on both source and target host. If your data files take up 100 TB, you need an additional 100 TB of free disk space. For 12c databases, and if your data files have a lot of free space, the backup might be smaller due to RMAN unused block compression.
Also worth mentioning is that the Perl script during the roll forward phase (applying level 1 incremental) will need to restart the target database. Applying the incremental backups on the target data files happens in NOMOUNT mode. Be sure nothing else uses the target database while you roll forward.
Block Change Tracking (BCT) is strongly recommended on the source database. Note that this is an Enterprise Edition feature (in OCI: DBCS EE-EP or ExaCS). If you don't enable BCT, the incremental backups will be much slower because RMAN has to scan every single data block for changes. With BCT, the database keeps track of changes in a special file. When RMAN backs up the database, it just gets a list of data blocks to include from the change tracking file.
What If – V3 Perl Script
If disk space is a problem or if you can’t meet any of the other requirements, check out the below two MOS notes:
They describe a previous version of the Perl script, version 3. The scripts use DBMS_FILE_TRANSFER to perform the conversion of the data files in-flight. That way no extra disk space is needed. However, DBMS_FILE_TRANSFER has a limitation that data files can’t be bigger than 2 TB.
Also, the V3 scripts might be useful for very old databases.
Transportable Tablespaces In General
To get a complete list of limitations on transporting data, you should look in the documentation. The most notable are:
Source and target database must have compatible character sets. If the character sets in both databases are not the same, check documentation for details.
No columns can be encrypted with TDE Column Encryption. The only option is to remove the encryption before migration and re-encrypt afterward.
TDE Tablespace Encryption is supported for same-endian migration if the source database is 12.1.0.2 or newer. If you need to go across endianness, you must decrypt the tablespaces and re-encrypt after migration. Remember, Oracle Database 12.2 can encrypt tablespaces online.
If you are migrating across endianness, you must convert the data files. You must have disk space to hold a copy of all the data files. In addition, you should perform the conversion on the platform that has the best I/O system and most CPUs. Typically, this is the cloud platform, which also offers scaling possibilities.
Requires Enterprise Edition.
The database timezone file version in the target database must be equal to or higher than that of the source database.
The database time zone must match if you have tables with TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ). If you have such tables, and the database time zone does not match, those tables are skipped during import. You can then move the affected tables using a normal Data Pump table mode export and import.
To check the database time zone:
SQL> select dbtimezone from dual;
You can alter the time zone for a database with an ALTER DATABASE statement.
Full Transportable Export/Import
FTEX automates the process of importing the metadata. It is simpler to use and automatically includes all the metadata in your database. Compared to a traditional transportable tablespace import, FTEX is a lot easier and removes a lot of manual work from the end user. But there are a few requirements that must be met:
Source database must be 11.2.0.3 or higher.
Target database must be 12.1.0.1 or higher.
Requires Enterprise Edition.
COMPATIBLE must be set to 12.0.0 or higher in both source and target database. If your source database is an Oracle Database 11g, this is not possible. In that case, set version to 12 or higher during Data Pump export instead.
If you can't meet the requirements, check out traditional transportable tablespaces. It has different requirements, and it allows more customization.
When doing the XTTS blog post series, I came across a lot of valuable information. These nuggets were not suitable for a dedicated blog post but are still worth sharing.
Pro Tip 1: The Other Backups
When you are preparing for an XTTS migration, you will be doing a lot of backups of the source database. Will those backups somehow interfere with your existing backups?
The Perl script takes the first backup – the initial level 0 backup – using:
RMAN> backup for transport .... ;
It is a backup created for cross-platform transport and not something to be used to restore the source database. The documentation states about cross-platform backups:
RMAN does not catalog backup sets created for cross-platform transport in the control file. This ensures that backup sets created for cross-platform transport are not used during regular restore operations.
This is good because it ensures that your recovery strategy will not take those backups into account. Most likely, you will be moving the files from the source system pretty soon, and in that case, you don’t want RMAN depending on them.
But after the initial level 0 backup, you will create level 1 incremental backups. The incremental backups are just regular incremental backups:
RMAN> backup incremental from scn ... tablespace ... ;
It is not a cross-platform backup, so it will be recorded in the control file and/or recovery catalog. Once you move those incremental backups away from the source system, be sure to tidy them up in the RMAN catalog:
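One way to do that: crosscheck marks the pieces that are no longer on disk as EXPIRED, and delete expired removes them from the catalog:

RMAN> crosscheck backup;
RMAN> delete noprompt expired backup;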
Sum up: While preparing for the migration, keep taking your regular backups.
Pro Tip 2: Data Pump Parameters
Use a parameter file for your Data Pump export and import. Especially the import will be a lot easier because you don't need to write a very long command line with character escapes and the like:
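As a sketch, with file names, directory object, and paths as examples:

$ expdp system parfile=xtts_exp.par
$ cat xtts_exp.par
directory=DATA_PUMP_DIR
dumpfile=xtts.dmp
logfile=xtts_exp.log
full=y
transportable=always
exclude=statistics
metrics=y
logtime=all

$ impdp system parfile=xtts_imp.par
$ cat xtts_imp.par
directory=DATA_PUMP_DIR
dumpfile=xtts.dmp
logfile=xtts_imp.log
metrics=y
logtime=all
transport_datafiles=/u02/oradata/CDB2/pdb1/users01.dbf
transport_datafiles=/u02/oradata/CDB2/pdb1/sales01.dbf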
Pro Tip 3: Generate Data Files Parameters For Data Pump
I generate the list of files that Data Pump needs with the following command. It works if you are using ASM, and transport_datafiles will point to the alias, not the real file:
export ASMDG=+DATA
# List the data file aliases in the disk group and turn each one into a transport_datafiles= line
asmcmd ls -l $ASMDG | grep '^DATAFILE' | sed -n -e 's/ => .*//p' | sed -n -e 's/^.* N \s*/transport_datafiles='$ASMDG'\//p'
Pro Tip 4: Queries
Generate a comma separated list of tablespaces:
select
listagg(tablespace_name, ',') within group (order by tablespace_name)
from
dba_tablespaces
where
contents='PERMANENT'
and tablespace_name not in ('SYSTEM','SYSAUX');
Generate a comma separated list of tablespaces in n batches:
define batches=8
select
   batch,
   listagg(tablespace_name, ',') within group (order by tablespace_name)
from (
   select
      mod(rownum, &batches) as batch,
      tablespace_name
   from (
      select
         t.tablespace_name,
         sum(d.bytes)
      from
         dba_tablespaces t,
         dba_data_files d
      where
         t.tablespace_name=d.tablespace_name
         and t.contents='PERMANENT'
         and t.tablespace_name not in ('SYSTEM','SYSAUX')
      group by
         t.tablespace_name
      order by 2 desc
   )
)
group by batch;
Generate read-only commands:
select
'ALTER TABLESPACE ' || tablespace_name ||' READ ONLY;'
from
dba_tablespaces
where
contents='PERMANENT'
and tablespace_name not in ('SYSTEM','SYSAUX');
Pro Tip 5: Troubleshooting
Be sure to always run the Perl script with debug option enabled:
$ #Set environment variable
$ export XTTDEBUG=1
$ #Or use --debug flag on commands
$ $ORACLE_HOME/perl/bin/perl xttdriver.pl ... --debug 3
Now comes the interesting part. Say a DBA needs to export a particular schema that contains queues and import it into another database. If, at the end of the impdp operation, the DBA does a simple comparison of the number of objects in the source schema with the target schema, there is a big chance the counts will not match, even though there are no errors or warnings in the expdp/impdp logs.
This happens because the export/import does not consider the queue objects created on the fly on the source side, usually the ones ending in _P and _D. The target database may not have these objects, but they will be created later whenever the queue needs them. This is expected behavior, and the functionality works as intended. A suggested way to check whether everything has been imported successfully is to compare the total number of objects of type QUEUE instead, for example:
SQL> select count(*) from DBA_OBJECTS where owner='&schema' and object_type='QUEUE';
Pro Tip 12: ORA-39218 or ORA-39216
If your Data Pump metadata import fails with the below error, you are having problems with evolved types. This blog post tells you what to do:
W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
ORA-39083: Object type TABLE:"APPUSER"."CARS" failed to create with error:
ORA-39218: type check on object type "APPUSER"."CAR_TYPE" failed
ORA-39216: object type "APPUSER"."CAR_TYPE" hashcode or version number mismatch
Failing sql is:
BEGIN SYS.DBMS_METADATA.CHECK_TYPE('APPUSER','CAR_TYPE','4','613350A0E37618A6108C500936D0C6162C',''); END;
In the last three customer cases I worked on, we solved the performance problems by using the latest Release Update plus the Data Pump bundle patch. This is my number one piece of advice.
Skip Statistics
I recommend that you skip statistics when you export (exclude=statistics) and handle statistics in one of these ways instead:
1. Gather new statistics on the target database using DBMS_STATS.
2. Import statistics from the source database using DBMS_STATS.
3. Import statistics from a test system using DBMS_STATS.
Options 1 and 3 are especially appealing if your target database is very different from the source. Imagine going from AIX to Exadata, from 11.2.0.4 to 19c, and from non-CDB to PDB. The platform itself is very different; Exadata has superior capabilities. In addition, it is a new version with other histogram types and a different architecture. In this case, it does make sense to get new statistics that better reflect the new environment.
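If you go with option 2 or 3, a minimal sketch of the DBMS_STATS workflow looks like this; the schema name and staging table are examples:

SQL> -- On the source (or test) database: stage the schema statistics in a table
SQL> exec dbms_stats.create_stat_table(ownname => 'APPUSER', stattab => 'MYSTATS')
SQL> exec dbms_stats.export_schema_stats(ownname => 'APPUSER', stattab => 'MYSTATS')
SQL> -- Move the MYSTATS table to the target database, for example with Data Pump, then:
SQL> exec dbms_stats.import_schema_stats(ownname => 'APPUSER', stattab => 'MYSTATS')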
Data Pump uses Advanced Queueing which uses the streams pool in the SGA. If you have enough memory, I suggest that you allocate a big streams pool right from the beginning:
SQL> alter system set streams_pool_size=2G scope=both;
You don’t want the streams pool to eat too much from your buffer cache and shared pool, so if you are short on memory find a more suitable value.
Database Complexity
The Data Pump export and import must be done during downtime. The time it takes to perform these two tasks is often critical.
How long will it take? It depends (classic IT answer)!
The complexity of your user data dictionary has the biggest impact. The more objects, the longer the export and import will generally take. Certain features, like partitioning, have a big impact as well. It might not always be possible to reduce the complexity of the user data dictionary, but when you can, it has a big impact. In some situations, I have seen old or obsolete data in the database. Or partitions that had to be archived. Or entire groups of tables that were used by a feature in the application that was no longer in use.
Another thing to look at is invalid objects. If you have objects that can’t compile, check the reason and whether the object can be dropped. Often these invalid objects are just remnants from old times.
In a recent case I worked on, the customer found more than 1,000 old tables that could be dropped. This significantly decreased the downtime required for the migration.
Dump File or Network Mode
A thing to test: What works best in your migration: Data Pump in dump file mode or network mode?
Normally, we recommend dump file mode because it has much better parallel capabilities. But metadata export and import for transportable tablespace jobs happen serially anyway (until Oracle Database 21c), so there might be a benefit of using Data Pump in network mode. When using Data Pump in network mode, you just start the Data Pump import without first doing an export. The information is loaded directly into the target database over a database link.
Parallel Metadata Export and Import
Starting with Oracle Database 21c, Data Pump supports parallel export and import of metadata when using transportable tablespaces. Add the following to your Data Pump parameter file, where n is the level of parallelism:
parallel=n
If an export was made in a lower release that didn’t support parallel export, you can still import in parallel. Parallel Metadata import works regardless of how the Data Pump export was made.
Auditing
If you are using Traditional Auditing, you can disable it in source and target database during the migration:
SQL> alter system set audit_trail=none scope=spfile;
SQL> alter system set audit_sys_operations=none scope=spfile;
SQL> shutdown immediate
SQL> startup
In at least one case, I’ve seen it make a difference for the customer.
Additionally, if you have a lot of traditional auditing data, I suggest you get rid of it (archive if needed, else delete it).
I strongly recommend that you apply the most recent Release Update to your source and target Oracle Database. Use the download assistant to find it.
Use Backup Sets
If both source and target databases are Oracle Database 12c or newer, you should set the following in xtt.properties:
usermantransport=1
RMAN will use backup sets using the new backup for transport syntax. Backup sets are better than image copies because RMAN automatically adds unused block compression. Unused block compression can shrink the size of the backup and improve performance.
Block Change Tracking
Enable block change tracking on source database. Although strictly speaking not required, it is strongly recommended, because it will shorten the time it takes to perform incremental backups dramatically. Requires Enterprise Edition (on-prem), DBCS EE-EP (cloud) or Exadata:
SQL> select status, filename from v$block_change_tracking;
SQL> alter database enable block change tracking;
If you look in xtt.properties, there is a parameter called parallel. What does it do?
It controls the number of batches in which the backup and restore/recover commands run. The Perl script will split the tablespaces into n batches, n being parallel from xtt.properties. Each batch will process all the data files belonging to its tablespaces.
If you have 20 tablespaces and parallel is set to 4, the Perl script will run four batches of five tablespaces each. If each tablespace has three data files, each batch will process 15 data files.
Each batch will process n data files at the same time, where n is the default parallelism assigned to the disk channel. To find the current parallelism (here it is two):
RMAN> show all;
...
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
...
If you want to change it to eight:
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
When you restore and convert the data files on the target database, it will also use the RMAN configuration parameter.
To enable parallel backup and restore, be sure to change the default disk parallelism on both source and target database host.
For image copy backups (usermantransport=0), the data file conversion on the target database uses the parallel degree specified in the xtt.properties parameter parallel. Backup sets are converted using the RMAN configuration parameter.
Multiple Perl Scripts
If you really want to squeeze the very last drop of performance out of your system, or if you want to use multiple RAC nodes, you can use multiple Perl scripts.
Normally, you only have one Perl script with corresponding files like xtt.properties:
[oracle@sales12 xtts]$ pwd
/home/oracle/xtts
[oracle@sales12 xtts]$ ls -l
total 260
-rw-r--r-- 1 oracle oinstall 5169 Mar 11 19:30 xtt.newproperties
-rw-r--r-- 1 oracle oinstall 266 Mar 11 19:30 xtt.properties
-rw-r--r-- 1 oracle oinstall 1390 Mar 11 19:30 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle oinstall 71 Mar 11 19:30 xttdbopen.sql
-rw-r--r-- 1 oracle oinstall 180408 Mar 11 19:30 xttdriver.pl
-rw-r--r-- 1 oracle oinstall 11710 Mar 11 19:30 xttprep.tmpl
-rw-r--r-- 1 oracle oinstall 52 Mar 11 19:30 xttstartupnomount.sql
The idea with multiple Perl scripts is that you have many sets of Perl scripts; each set working on a unique batch of tablespaces.
So instead of just one folder, I could have four folders. Each folder is a complete Perl script with all the files. Download rman_xttconvert_VER4.3.zip and extract to four folders:
[oracle@sales12 ~]$ pwd
/home/oracle
[oracle@sales12 ~]$ ls -l
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts1
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts2
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts3
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts4
Each of the xtt.properties files will work on a unique set of tablespaces:
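For example (tablespace names are made up):

# /home/oracle/xtts1/xtt.properties
tablespaces=ACCOUNT1,ACCOUNT2
# /home/oracle/xtts2/xtt.properties
tablespaces=SALES1,SALES2
# /home/oracle/xtts3/xtt.properties
tablespaces=HR1,HR2
# /home/oracle/xtts4/xtt.properties
tablespaces=REPORTING1,REPORTING2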
You must also ensure that src_scratch_location and dest_scratch_location are set to different locations. Each set of Perl scripts must have dedicated scratch locations.
You then run multiple concurrent sessions when you need to back up and restore/recover. Each session uses one of the Perl scripts and thus processes its tablespaces concurrently with the others.
SSH session 1:
export TMPDIR=/home/oracle/xtts1
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 2 (notice I changed TMPDIR to another directory, xtts2):
export TMPDIR=/home/oracle/xtts2
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 3:
export TMPDIR=/home/oracle/xtts3
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 4:
export TMPDIR=/home/oracle/xtts4
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
In the above example, I used four different Perl scripts and four concurrent sessions. But you can scale up if you have the resources for it. One of our customers ran with 40 concurrent sessions!
You must ensure that all your tablespaces are included. Together, the xtt.properties files must cover every tablespace in your database. Don't forget any of them.
On RAC, you can run the Perl scripts on all the nodes, utilizing all your resources.
Watch this video to learn how a customer migrated a 230 TB database using multiple Perl scripts.