More often than you think, I get mails about migration options from ancient Oracle Database releases. Typically, it is from Oracle 8i or 9i. My high score is an inquiry about Oracle 7.2.
My good colleague, Rodrigo, calls it migrations from Jurassic Park data centers.
At one point, our team got so many questions about it that we decided to create a presentation. It premiered at Oracle DatabaseWorld at CloudWorld 2023. We named it:
In Oracle Database 10g, Oracle introduced Data Pump – replacing the original export and import utilities. If possible, use Data Pump.
In older versions, you must use original export, exp.
In recent versions of Oracle Database, Oracle desupported the original export utility, but still supports the original import utility, imp, for migrations from really old Oracle Database releases.
Oracle released the original export utility back in Oracle v5. You can export data from Oracle v5 (or newer) and import it directly into an Oracle Database 23c PDB. That is 40 years of backward compatibility, which is rather impressive!
From Oracle 8i Database, you can use same-platform transportable tablespaces. From Oracle Database 10g, you can even do cross-platform transportable tablespaces.
To Which Version Should You Migrate?
If you have such old databases in your environment, I strongly recommend migrating to Oracle Autonomous Database. In Autonomous Database, Oracle will take care of the database for you. Oracle will patch it, maintain it, and upgrade it. You will never end up with a legacy database again.
Note that the original import utility does not meet the security standards of Autonomous Database, so it is impossible to import directly into Autonomous Database. In such a situation, you first import into a staging database that supports Data Pump, and then into Autonomous Database.
Otherwise, I recommend going to the latest long-term support release. Also, you should migrate to the multitenant architecture and import directly into a PDB.
Client/Server Interoperability
Before you migrate the database, you must come up with a plan for your clients and applications. Older database clients can’t connect to modern databases.
With each Oracle Database release, the authentication mechanism changes to become more secure and support newer features. This means that clients must be of a certain version to communicate with the database.
Here’s a video showing you such a migration step-by-step.
The biggest pitfall in such migration is the loss of characters due to incorrect character set conversion. You must set the NLS_LANG environment variable to the character set of the database. Otherwise, you’ll use the default language setting of your operating system and that might lead to character loss.
Only in specific situations do you set the NLS_LANG to something else.
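For example, here is a sketch of an export with the original exp utility, assuming the database character set is WE8ISO8859P1 and an American English session (adjust the value to your own environment):
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
exp system/&lt;password&gt; full=y file=full.dmp log=full_exp.log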
Appendix
Fun Fact
Here’s a list of Oracle Database releases, including their year of release, marquee features, and a mobile phone of the same year.
At the recent Oracle DatabaseWorld at CloudWorld I spoke to several customers that had to upgrade to Oracle Database 19c and convert their non-CDB into the multitenant architecture.
The conversion to multitenant does not offer the same rollback options as an upgrade. Normally, when you upgrade a database, you rely on Flashback Database as the primary rollback option. However, that does not work for conversion to multitenant.
When you plug your non-CDB into a CDB, the CDB makes changes to the data file headers. Those changes are irreversible and prevent you from ever using those data files in a non-CDB. Not even Flashback Database can revert the changes.
So, what are your rollback options?
- Restore a backup. It might take longer than your organization can accept.
- Make a copy of the data files before conversion. It requires disk space and a longer downtime window to copy the data files.
This is where Refreshable Clone PDBs come into play.
Refreshable Clone PDBs
Here is an overview of what AutoUpgrade does for you:
1. AutoUpgrade creates a PDB in the target CDB as a refreshable clone PDB of the source non-CDB.
2. The target CDB starts to copy the data files from the source non-CDB.
3. The target CDB refreshes the PDB. In other words, it rolls forward the data files using the redo from the source non-CDB.
4. Now, downtime starts. AutoUpgrade issues a final refresh to bring over the latest changes.
5. AutoUpgrade disconnects the refreshable clone PDB from its source. Now, the PDB is a real, stand-alone PDB. AutoUpgrade upgrades the PDB and converts it into a proper PDB.
If something happens during the upgrade or conversion and you want to roll back, simply start the original non-CDB. It is left completely untouched.
You can learn about the concept in detail in our AutoUpgrade 2.0 webinar:
Refreshable clone PDBs do not work for cross-endian migrations (like AIX to Linux), but same-endian cross-platform migrations (like Windows to Linux) should work fine.
How To
In the source non-CDB, I create a user:
create user dblinkuser identified by ... ;
grant create session,
create pluggable database,
select_catalog_role to dblinkuser;
grant read on sys.enc$ to dblinkuser;
In my target CDB, I create a database link connecting to my source non-CDB:
create database link clonepdb
connect to dblinkuser identified by ...
using 'source-db-alias';
source_home and target_home are the Oracle Homes of the source non-CDB and the target CDB, respectively.
sid is the source non-CDB that I want to upgrade and convert.
target_cdb is the CDB into which I want to plug in the non-CDB. You must create the CDB in advance or use an existing one.
source_dblink has the name of the database link (CLONEPDB) and the rate at which the target CDB brings over redo and rolls forward the copy (600 seconds or 10 minutes).
target_pdb_name specifies that I want to rename the non-CDB to PDB1 when I plug it in. You can leave this out if you want to keep the name.
start_time specifies when downtime starts. At this point, AutoUpgrade refreshes the PDB for the last time and then moves on with upgrade and PDB conversion.
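Putting the parameters together, a config file might look like the sketch below. The SID DB12, the CDB name CDB2, the Oracle Home paths, and the log directory are hypothetical; check the AutoUpgrade documentation for the exact parameter syntax in your version:
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
upg1.sid=DB12
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_cdb=CDB2
upg1.target_home=/u01/app/oracle/product/19
upg1.source_dblink.DB12=CLONEPDB 600
upg1.target_pdb_name.DB12=PDB1
upg1.start_time=25/09/2023 06:30:00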
Start AutoUpgrade in analyze mode on the source system:
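A sketch of the command, assuming the hypothetical config file above is saved as DB12.cfg:
java -jar autoupgrade.jar -config DB12.cfg -mode analyze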
Next, run AutoUpgrade in fixups mode on the source system. This runs the fixups identified by AutoUpgrade in analyze mode. You can run this task even after you start AutoUpgrade in deploy mode. Just ensure that the fixups complete before the final refresh (as specified in the start_time parameter).
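Again a sketch, with the same hypothetical config file:
java -jar autoupgrade.jar -config DB12.cfg -mode fixups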
If there are no errors found in the analysis, I start AutoUpgrade in deploy mode:
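A sketch, still using the hypothetical config file:
java -jar autoupgrade.jar -config DB12.cfg -mode deploy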
AutoUpgrade copies the data files over the database link.
Rolls the copies of the data files forward with redo from the source non-CDB.
At one point, issues a final refresh and disconnects the PDB from the source non-CDB.
Upgrades and converts the database to a PDB.
Here’s a demo of it:
Words of Caution
Disconnect Users from Source Database
Right before the upgrade and conversion starts, AutoUpgrade executes a final refresh. The last redo from the source non-CDB is applied to ensure no data is lost. You must ensure that no users are connected to the source non-CDB after this time. Otherwise, that data will be lost.
AutoUpgrade starts the final refresh at the start time specified in the config file:
upg1.start_time=25/09/2023 06:30:00
You must be careful about disconnecting users from the source non-CDB. Remember, AutoUpgrade connects to the source non-CDB over a database link as a regular user (not SYS). This means the listener must be available, and you can’t enable restricted session or similar means.
Data Guard
If the target CDB is protected by Data Guard, special attention is needed to handle the standby databases. I explain the details in our AutoUpgrade 2.0 webinar:
Redo
The procedure relies on redo from the source non-CDB. Ensure that redo is kept in the Fast Recovery Area of the source non-CDB until it has been applied on the target PDB. Either postpone your archive backups or change the archive log deletion policy so the archive logs remain on disk.
Final Refresh
Check this blog post if you want to be in control over when the final refresh happens.
In a recent migration, a customer noticed that the size of the database decreased dramatically after the import. The size of the data files went from 500 GB to only 300 GB. Data Pump didn’t report any errors. Nevertheless, the customer was afraid that data was lost during migration.
The customer asked:
Am I missing data?
Why Would a Database Shrink During Import?
First, the following applies to Data Pump imports. If you migrate with transportable tablespaces or Data Guard, things are different.
Fragmentation
If you perform a lot of DML on a table, it becomes fragmented over time. A fragmented table uses much more space, because the database only re-uses a data block once its free space reaches a certain limit (PCTUSED). Often, this leads to blocks that are never completely filled and wasted space.
When you import, the database neatly packs all blocks, and the table is completely defragmented.
Depending on the nature of your DML statements, you can see a dramatic impact on space usage for a defragmented table.
Indexes
After importing the data, Data Pump rebuilds the indexes. In many cases, an index has some level of fragmentation, but a rebuild removes all that, resulting in a neatly packed and efficient index. Most often, an index rebuild ends up using less space.
PCTFREE
If you change PCTFREE for an existing object, it applies to new blocks only. The existing blocks are left untouched. Any space saving from the lower setting applies to new data blocks only.
However, during an import (or index rebuild) the database builds the object from scratch and applies the setting to all the blocks. Now, all blocks have the new settings.
Connor’s Words
Connor McDonald was part of the original conversation with the customer. He phrased it like this:
Create a 100 GB table, and a 1 GB table. Your database is now (at least) 101 GB. Drop the 100 GB table and do an export/import. You now have a 1 GB database.
How Can You Know No Data Is Lost?
If you can’t use the database’s size to indicate whether data is lost, what do you do?
Then, can the database become bigger as part of an import? Yes, it may.
If you move from a single-byte character set to Unicode, then characters stored in your database may take up more space. In particular, LOBs now take up much more space.
A higher PCTFREE setting on your segments may also cause them to increase in size, the opposite of the case described above.
In a recent migration, a customer had millions of SQL plan baselines. The customer performed a Full Transportable Export/Import, but the export of the SQL plan baselines took a lot of time. I suggested moving the SQL plan baselines the day before the downtime window.
But can you move SQL plan baselines into an empty database, before you move the data? Do SQL plan baselines care about the underlying schema objects the SQL accesses?
SQL Plan Baselines and Underlying Schema Objects
The optimizer matches a SQL statement with a SQL plan baseline using the signature of the SQL (see appendix). Originally, I thought that it was the SQL ID, but I was wrong:
SQL statements are matched to SQL plan baselines using the signature of the SQL statement. A signature is a unique SQL identifier generated from the normalized SQL text (uncased and with whitespaces removed).
SQL Plan Management uses the signature only to match SQLs to SQL plan baselines. There is no reference to the underlying schema objects, which is evident since the same SQL from different schemas shares the same signature.
The conclusion is that importing SQL plan baselines into an empty database is safe before you move the data.
Empty Database and Plan Evolution
But there is a catch. You don’t want plan evolution to happen in a database with no data.
Plan evolution is the process that validates whether a plan performs better than existing ones and, if so, marks the plan as accepted. This can have undesired side effects in an empty database, so disable automatic plan acceptance for the auto evolve task until your data is in place:
BEGIN
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
      parameter => 'ACCEPT_PLANS',
      value     => 'FALSE');
END;
/
How to Migrate SQL Plan Baselines
DBMS_SPM
The easiest way you can move SQL plan baselines is using DBMS_SPM. You can find a good example of using it in the documentation.
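A minimal sketch of that approach, assuming a hypothetical staging table SYSTEM.SPM_STGTAB; you move the staging table from source to target with Data Pump or over a database link in between:
-- On the source database: create and pack the staging table
DECLARE
   l_plans PLS_INTEGER;
BEGIN
   DBMS_SPM.CREATE_STGTAB_BASELINE(table_name => 'SPM_STGTAB', table_owner => 'SYSTEM');
   l_plans := DBMS_SPM.PACK_STGTAB_BASELINE(table_name => 'SPM_STGTAB', table_owner => 'SYSTEM');
END;
/
-- On the target database, after moving SYSTEM.SPM_STGTAB: unpack the baselines
DECLARE
   l_plans PLS_INTEGER;
BEGIN
   l_plans := DBMS_SPM.UNPACK_STGTAB_BASELINE(table_name => 'SPM_STGTAB', table_owner => 'SYSTEM');
END;
/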
Data Pump
Data Pump exports SQL plan baselines as part of the SQL Management Base (SMB). Data Pump includes the SMB as part of a full export only. To export the SMB only:
expdp ... full=y include=SMB
Please note that the SMB includes the following as well:
SQL Profiles
SQL Patches
SQL Plan Directives
How to Exclude SQL Plan Baselines
If you migrate SQL plan baselines in advance, you should exclude them from the production migration. As described above, the SQL plan baselines are part of the SQL Management Base. You exclude it using:
expdp ... exclude=SMB
That will exclude the items below. Depending on how you move the SQL plan baselines and if you need the other items in the target database, you might need to move these items manually:
SQL Profiles
You can move SQL Profiles using DBMS_SQLTUNE (see the sketch after this list).
SQL Patches
You can move SQL Patches using DBMS_SQLDIAG.
SQL Plan Directives
You can move SQL plan directives using DBMS_SPD. Only relevant if you use Adaptive Statistics.
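For SQL Profiles, here is a similar staging-table sketch, assuming a hypothetical staging table SYSTEM.PROF_STGTAB:
-- On the source database: create and pack the staging table
BEGIN
   DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'PROF_STGTAB', schema_name => 'SYSTEM');
   DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(staging_table_name => 'PROF_STGTAB', staging_schema_owner => 'SYSTEM');
END;
/
-- On the target database, after moving SYSTEM.PROF_STGTAB: unpack the profiles
BEGIN
   DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(replace => TRUE, staging_table_name => 'PROF_STGTAB', staging_schema_owner => 'SYSTEM');
END;
/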
The documentation defines a signature as follows:
A numeric hash value computed using a SQL statement text that has been normalized for case insensitivity and white space. It uniquely identifies a SQL statement. The database uses this signature as a key to maintain SQL management objects such as SQL profiles, SQL plan baselines, and SQL patches.
Let me use an example. If you have the following SQL text:
select * from dual
The signature is:
14103420975540283355
There is a function that calculates the signature from the SQL text:
col signature format 99999999999999999999999
select dbms_sqltune.sqltext_to_signature ('select * from dual') as signature from dual;
Acknowledgment
Thanks to Nigel Bayliss, optimizer product manager, for valuable assistance.
The TEK is stored in the tablespace. When you use transportable tablespaces, you copy the data files, and thus the TEK remains the same. The data in the tablespace stays encrypted with the same TEK during the entire migration.
But what about the MEK? It is required to get access to the TEK.
Option 1: Use ENCRYPTION_PASSWORD parameter
During Data Pump transportable tablespace export, you specify an ENCRYPTION_PASSWORD:
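A sketch of such an export, with a hypothetical tablespace APPS, directory object MYDIR, and dump file name:
expdp system/&lt;password&gt; \
  directory=mydir dumpfile=apps.dmp logfile=apps_exp.log \
  transport_tablespaces=APPS \
  encryption_password=&lt;a_strong_and_secure_password&gt;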
A benefit of this option is that the source and target databases are encrypted using different MEKs. You can query the database and verify that no new MEK is added to the target database. The target database continues to use its own MEK:
SQL> select * from v$encryption_keys;
According to the documentation, this is the recommended option.
Option 2: Import Master Encryption Key
You start the Data Pump transportable tablespace export.
In the source database, you export the source database MEK:
SQL> administer key management export keys
with secret "<another_strong_and_secure_password>"
to '/home/oracle/secure_location/exported-keys'
force keystore identified by "<source_database_MEK>";
You import the source database MEK into the target database:
SQL> administer key management import keys
with secret "<another_strong_and_secure_password>"
from '/home/oracle/secure_location/exported-keys'
force keystore identified by "<target_database_MEK>"
with backup;
You start the Data Pump transportable tablespace import.
By querying v$encryption_keys, you can see that another key has been added to the database.
You can read more about export and import of MEKs in the documentation.
Option 3: Oracle Key Vault
If you are using Oracle Key Vault, it’s very easy to allow the target database to access the source database master encryption key.
When you perform the Data Pump transportable tablespace import in the target database, it will already have access to the encryption keys that protect the tablespaces. Nothing further is needed.
What About Rekeying?
If you make the source database encryption key available to the target database, consider whether you also want to perform a rekey operation. This applies to options 2 and 3.
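A rekey in the target database could look like the sketch below, using the same ADMINISTER KEY MANAGEMENT syntax as above and a placeholder keystore password:
SQL> administer key management set key
force keystore identified by "<target_database_keystore_password>"
with backup;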
ORA-39396
If you use option 2 or 3, you will receive the below warning during Data Pump transportable tablespace export:
ORA-39396: Warning: exporting encrypted data using transportable option without password
This warning points out that in order to successfully import such a transportable tablespace job, the target database wallet must contain a copy of the same database master key used in the source database when performing the export. Using the ENCRYPTION_PASSWORD parameter during the export and import eliminates this requirement.
Cross-endian Migration of TDE Encrypted Tablespaces
If your database is encrypted, you must use the newer M5 migration method. It supports encrypted tablespaces.
The older XTTS v4 doesn’t support encrypted tablespaces.
If you have an encrypted tablespace and you want to use transportable tablespace:
Decrypt the tablespace
Migrate the tablespace using transportable tablespace
Advanced Queueing offers great queueing functionality built into Oracle Database. When you want to create a queue, Oracle Database will create several supporting objects. Depending on how you use Advanced Queueing and Oracle Database, these objects might change.
Let’s investigate a case that came up during a project.
The Situation
After I recreated a queue table, some of the underlying objects changed definition. Specifically, columns that apparently contained user information changed from a NUMBER to a VARCHAR2.
| Object Name | Column Name | Data Type Before | Data Type After |
|---|---|---|---|
| AQ$<queue_table> | ENQ_USER_ID | NUMBER | VARCHAR2 |
| AQ$<queue_table> | DEQ_USER_ID | NUMBER | VARCHAR2 |
| AQ$_<queue_table>_F | ENQ_UID | NUMBER | VARCHAR2 |
| AQ$_<queue_table>_F | DEQ_UID | NUMBER | VARCHAR2 |
| AQ$_<queue_table>_H | DEQUEUE_USER | NUMBER | VARCHAR2 |
| AQ$_<queue_table>_L | DEQUEUE_USER | NUMBER | VARCHAR2 |
The column data type changed from NUMBER to VARCHAR2. I created the queue tables using DBMS_AQADM, and Oracle Database created the AQ$ objects recursively. The difference came down to the compatible setting of the queue tables, which you can check in USER_QUEUE_TABLES:
SQL> select queue_table, compatible from user_queue_tables;
In the documentation, you can find information on which functionality gets enabled by which compatible setting. In this case, the following is of interest:
Mixed case (upper and lower case together) queue names, queue table names, and subscriber names are supported if database compatibility is 10.0
When you set compatible on the queue table to 10.0, there is better support for certain user names (subscriber names), and that requires a different data type on the underlying objects.
How to Solve the Problem
There are two options:
1. Recreate the queue tables using the same compatible setting. Start by querying USER_QUEUE_TABLES to find the compatible setting. Then, use DBMS_AQADM.CREATE_QUEUE_TABLE to recreate the queue and remember to specify the correct compatible setting.
2. Adopt the newest compatible setting on your queues. The underlying objects change, and you can use all the features of Advanced Queueing. You should test your application and ensure it works with the new setting.
I recommend option 2. It uses the default setting for compatible. The default setting has been around for many years, so it is thoroughly tested, and I assume that most customers use this configuration.
You Can Migrate Old Queue Tables
You can migrate old queues in your Oracle Database. Any queue that doesn’t have compatible set to 10.0.0 can be migrated to the newest compatible setting:
SQL> begin
dbms_aqadm.migrate_queue_table(..., compatible => '10.0.0');
end;
/
Now you can start to use all the features in Advanced Queueing.
You can query the data dictionary to find old queues in your Oracle Database:
SQL> select queue_table, compatible
from user_queue_tables
where compatible != '10.0.0';
Appendix
Thanks to oracle-base.com for supplying the starting point for my test case.
Test Case
conn / as sysdba
--create user and grant privileges
drop user appuser cascade;
create user appuser identified by appuser;
grant dba to appuser;
conn appuser/appuser
--type used for queue payload
create type car_type as object (
name varchar2(20),
color varchar2(10)
);
/
--get the database instance compatible setting
--used to derive the queue table compatible setting
--if not specified
select value from v$parameter where name='compatible';
begin
--create queue table without explicit 'compatible'
--compatible should be 10.0.0
dbms_aqadm.create_queue_table (
queue_table => 'APPUSER.CAR_QUEUE_TAB',
queue_payload_type => 'APPUSER.CAR_TYPE');
--create new queue table with lower compatible setting
dbms_aqadm.create_queue_table (
queue_table => 'APPUSER.CAR_QUEUE_TAB8',
queue_payload_type => 'APPUSER.CAR_TYPE',
compatible => '8.0');
end;
/
--verify queue table compatible setting
select queue_table, compatible from user_queue_tables;
Transportable Tablespaces is a great way of migrating data from one Oracle Database to another. However, if your database uses object types and those object types have evolved over time, you might hit an error during import:
W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
ORA-39083: Object type TABLE:"APPUSER"."CARS" failed to create with error:
ORA-39218: type check on object type "APPUSER"."CAR_TYPE" failed
ORA-39216: object type "APPUSER"."CAR_TYPE" hashcode or version number mismatch
Failing sql is:
BEGIN SYS.DBMS_METADATA.CHECK_TYPE('APPUSER','CAR_TYPE','4','613350A0E37618A6108C500936D0C6162C',''); END;
Typically, there will be a number of dependent objects that fail with ORA-39112.
Let me explain the problem and what you can do about it.
Object Types and Evolution
The term object types refers to database objects that you create using the CREATE TYPE statement.
Object types are also known as user-defined types (UDT) and abstract data types (ADT).
When Oracle Database introduced object types many releases ago, they could not evolve. In simple words, you only had CREATE TYPE available. If you needed to change the type, you had to drop and recreate it. Later on, Oracle Database allowed type evolution, and you could also do ALTER TYPE.
An Example of Evolution
Let me illustrate object types and evolution using an example:
I create a supertype called VEHICLE_TYPE
I create another type called CAR_INFO_TYPE
I create a subtype called CAR_TYPE which inherits VEHICLE_TYPE and includes an attribute based on CAR_INFO_TYPE
I add the three types:
CREATE TYPE "APPUSER"."VEHICLE_TYPE"
IS OBJECT (
make VARCHAR2 (40)
) NOT FINAL;
/
CREATE TYPE "APPUSER"."CAR_INFO_TYPE"
IS OBJECT (
model VARCHAR2(40)
);
/
CREATE TYPE "APPUSER"."CAR_TYPE"
UNDER VEHICLE_TYPE (
car_info CAR_INFO_TYPE
);
/
I query the data dictionary to find the types and verify that their version is 1. I expect this because the types are brand-new.
select o.name, t.version#
from obj$ o, user$ u,type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
NAME VERSION#
-------------------------
CAR_INFO_TYPE 1
CAR_TYPE 1
VEHICLE_TYPE 1
Now I want to add two additional attributes to CAR_INFO_TYPE:
horsepower
color
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE horsepower NUMBER CASCADE
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE color VARCHAR2(20) CASCADE
/
Now, I check the data dictionary again. CAR_INFO_TYPE has now advanced two versions to version 3. Two ALTER TYPE statements mean two new versions. But notice CAR_TYPE. It advances as well.
select o.name, t.version#
from obj$ o, user$ u,type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
NAME VERSION#
-------------------------
CAR_INFO_TYPE 1
CAR_INFO_TYPE 2
CAR_INFO_TYPE 3
CAR_TYPE 1
CAR_TYPE 2
CAR_TYPE 3
VEHICLE_TYPE 1
If you alter a type that another type uses, both types evolve. No ALTER TYPE was issued on CAR_TYPE, yet it still evolves. This becomes a problem later on.
Same Objects but Different Evolution
If I use the example above but change the order of statements, the types will evolve in a different way. They will still be the same objects, but they evolve differently.
Let me illustrate how the types can evolve differently. I execute the ALTER TYPE statements on CAR_INFO_TYPE immediately after its CREATE TYPE statement:
CREATE TYPE "APPUSER"."VEHICLE_TYPE"
IS OBJECT (
make VARCHAR2 (40)
) NOT FINAL;
/
CREATE TYPE "APPUSER"."CAR_INFO_TYPE"
IS OBJECT (
model VARCHAR2(40)
);
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE horsepower NUMBER CASCADE
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE color VARCHAR2(20) CASCADE
/
CREATE TYPE "APPUSER"."CAR_TYPE"
UNDER VEHICLE_TYPE (
car_info CAR_INFO_TYPE
);
/
Here you can see the two approaches compared to each other:

| First approach | Second approach |
|---|---|
| CREATE TYPE "APPUSER"."VEHICLE_TYPE" | CREATE TYPE "APPUSER"."VEHICLE_TYPE" |
| CREATE TYPE "APPUSER"."CAR_INFO_TYPE" | CREATE TYPE "APPUSER"."CAR_INFO_TYPE" |
| CREATE TYPE "APPUSER"."CAR_TYPE" | ALTER TYPE "APPUSER"."CAR_INFO_TYPE" |
| ALTER TYPE "APPUSER"."CAR_INFO_TYPE" | ALTER TYPE "APPUSER"."CAR_INFO_TYPE" |
| ALTER TYPE "APPUSER"."CAR_INFO_TYPE" | CREATE TYPE "APPUSER"."CAR_TYPE" |
When I check the version of the types, I see that CAR_TYPE doesn’t evolve together with CAR_INFO_TYPE:
select o.name, t.version#
from obj$ o, user$ u,type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
NAME VERSION#
-------------------------
CAR_INFO_TYPE 1
CAR_INFO_TYPE 2
CAR_INFO_TYPE 3
CAR_TYPE 1
VEHICLE_TYPE 1
In contrast to the previous example, because I changed the order of the statements, CAR_TYPE is now still on version 1 (not 3 as previously).
When you alter a type that a table uses (ALTER TYPE ... CASCADE), you choose whether to update the corresponding table data as well. If you choose to update the table data, it can take quite some time on a big table.
If you choose to skip the update, the alteration is fast. However, when the database later needs to read the data, it must use the evolution information to determine how to read each row.
What Data Pump Does
Data Pump can recreate the types during a regular import and transportable tablespaces. But it will handle the types one-by-one:
Create VEHICLE_TYPE (and all alterations if there were any)
Then create CAR_INFO_TYPE and all alterations
Then create CAR_TYPE (and all alterations if there were any)
Now, the definition of the types is correct (they have the correct attributes and so forth). But the evolution of the types does not necessarily match that of the source database. Unless the types in the source database were created and altered in exactly that order, the versions of the types might be different.
What Happens During a Regular Import
During a regular (not transportable) export, Data Pump always uses the external table access method when an evolved type is involved. This access method causes a CREATE TABLE <external table> ... AS SELECT, which converts each row’s version of a type to the highest version. All rows in the dump file are now of the highest version of the type.
On import, Data Pump creates the types, and perhaps the versions of the types are different. When Data Pump loads the data into the table, all rows are on the same version because they were converted during export. The data access layer underneath Data Pump is now more lenient. If one type version is found with a matching hash code, then data can be imported safely.
Data Pump then checks for a matching object type using a hash code only. The version does not have to match. Here’s a query to get the hash code:
select o.name, t.version#, t.hashcode
from obj$ o, user$ u, type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
What Happens During a Transportable Tablespace Import
In transportable tablespace mode, things are different. Data Pump does not load the data into a dump file. Instead, it is kept in the data files. To read the data correctly, you must have the correct type definitions and information on how they evolve.
If the type evolution doesn’t match that of the source database, then Data Pump can’t be sure the data is accurate and will issue an error. Using the above query, the type evolution must match on both version and hash code.
Solution
This is not a bug in Data Pump. Data Pump ensures that your data is not corrupted during import. Due to a restriction in the implementation of type evolution, Data Pump is not able to completely recreate a type’s evolution and this might cause problems.
These are your options:
Option 1: Use Regular Data Pump Import
As described above, during a regular import Data Pump handles the types and dependent objects differently.
Perform transportable tablespace import.
Find types and dependent objects that Data Pump can’t import.
Import the affected types and objects using a regular Data Pump import.
Option 2: Recreate Types and Dependent Objects Without Data
In one case I worked on, a queue table used the evolved type. Before migration, the application team would ensure that the application processed all messages in the queue. Thus, at the time of migration, the queue was empty.
The transportable tablespace import failed to create the queue table due to evolved type.
After import, the DBA recreated the queue using DBMS_AQADM.
Also, the DBA granted permissions on the queue.
If a regular table used the evolved type, you would need to recreate the table using CREATE TABLE syntax instead.
Option 3: Recreate Types Without Evolution Before Export
Recreate the types in the source database in the same way as Data Pump will create them on the target database. One type at a time. First, the CREATE TYPE statement and any ALTER TYPE statements on the same type. Then the next type, and the next, and the next…
If a dependent table or queue uses the evolved type, it can be quite cumbersome and time consuming to recreate the type in the source database.
Option 4: Create Type in Target Using Same Evolution Before Import
If you know the evolution history of the types in the source database (i.e., in which order the CREATE TYPE and ALTER TYPE statements were issued), then you can create the types manually in the target database and ensure they evolve in the same way. You must create the types in exactly the same way (including the number of attributes, the order in which they were added, and so forth).
In the target database, you create and alter the types in the correct order. To do that, you must also create the schema first.
You verify that types are of the same version in source and target database. The hash code must be identical as well:
select o.name, t.version#, t.hashcode
from obj$ o, user$ u, type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
You perform the transportable tablespace import.
Data Pump reports an error for the types that already exist. You ignore those errors. You can also exclude types from the import (previous step) using EXCLUDE=TYPES.
Ensure that dependent objects that use the affected types are valid. It could be other types, tables, or queues.
Conclusion
Evolved types can hit ORA-39218 or ORA-39216 during a transportable tablespace import. Implementation restrictions in object types cause these errors. Data Pump can’t evolve the types in the same way, and issues an error. You must find a different way of moving the types and dependent objects.
export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
CONN / AS SYSDBA
DROP USER APPUSER CASCADE;
DROP TABLESPACE APPS INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE APPS;
CREATE USER APPUSER IDENTIFIED BY APPUSER DEFAULT TABLESPACE APPS;
ALTER USER APPUSER QUOTA UNLIMITED ON APPS;
GRANT CONNECT TO APPUSER;
GRANT CREATE TYPE TO APPUSER;
GRANT CREATE TABLE TO APPUSER;
GRANT AQ_ADMINISTRATOR_ROLE TO APPUSER;
CONN APPUSER/APPUSER
CREATE TYPE "APPUSER"."VEHICLE_TYPE"
IS OBJECT (
make VARCHAR2 (40)
) NOT FINAL;
/
CREATE TYPE "APPUSER"."CAR_INFO_TYPE"
IS OBJECT (
model VARCHAR2(40)
);
/
CREATE TYPE "APPUSER"."CAR_TYPE"
UNDER VEHICLE_TYPE (
car_info CAR_INFO_TYPE
);
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE horsepower NUMBER CASCADE
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE color VARCHAR2(20) CASCADE
/
CREATE TABLE "APPUSER"."CARS" (
id number,
car_info car_type
);
INSERT INTO "APPUSER"."CARS"
VALUES (1, APPUSER.CAR_TYPE('FORD', APPUSER.CAR_INFO_TYPE('FIESTA', 200, 'BLUE')));
COMMIT;
BEGIN
DBMS_AQADM.CREATE_QUEUE_TABLE (
queue_table => 'APPUSER.CAR_QUEUE_TABLE',
queue_payload_type => 'APPUSER.CAR_TYPE');
DBMS_AQADM.CREATE_QUEUE (
queue_name => 'APPUSER.CAR_QUEUE',
queue_table => 'APPUSER.CAR_QUEUE_TABLE');
DBMS_AQADM.START_QUEUE (
queue_name => 'APPUSER.CAR_QUEUE',
enqueue => TRUE);
END;
/
DECLARE
l_enqueue_options DBMS_AQ.enqueue_options_t;
l_message_properties DBMS_AQ.message_properties_t;
l_message_handle RAW(16);
l_event_msg APPUSER.CAR_TYPE;
BEGIN
l_event_msg := APPUSER.CAR_TYPE('VOLVO', APPUSER.CAR_INFO_TYPE('V90', 400, 'GREY'));
DBMS_AQ.enqueue(queue_name => 'APPUSER.CAR_QUEUE',
enqueue_options => l_enqueue_options,
message_properties => l_message_properties,
payload => l_event_msg,
msgid => l_message_handle);
COMMIT;
END;
/
conn / as sysdba
select o.name, o.oid$, t.hashcode, t.version#
from obj$ o, user$ u,type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
2. Transportable Tablespace Using FTEX
mkdir /tmp/dpdir
rm /tmp/dpdir/*
mkdir /u02/oradata/CDB2/pdb2
sqlplus / as sysdba<<EOF
ALTER TABLESPACE USERS READ ONLY;
ALTER TABLESPACE APPS READ ONLY;
create or replace directory dpdir as '/tmp/dpdir';
grant read, write on directory dpdir to system;
EOF
export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba<<EOF
startup
alter pluggable database PDB2 close;
drop pluggable database PDB2 including datafiles;
create pluggable database PDB2 admin user adm identified by adm file_name_convert=('pdbseed', 'pdb2');
alter pluggable database PDB2 open;
alter pluggable database PDB2 save state;
alter session set container=PDB2;
create directory mydir as '/tmp/dpdir';
grant read, write on directory mydir to system;
create public database link SOURCEDB connect to system identified by oracle using 'UP19';
EOF
export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba<<EOF
set line 300
select 'cp ' || file_name || ' /u02/oradata/CDB2/pdb2/df' || file_id as c1 from dba_data_files where tablespace_name in ('USERS', 'APPS');
EOF
#Copy the files
export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
impdp system/oracle@pdb2 network_link=sourcedb full=y \
transportable=always metrics=y exclude=statistics directory=mydir \
logfile=pdb2.log \
transport_datafiles='/u02/oradata/CDB2/pdb2/df4' \
transport_datafiles='/u02/oradata/CDB2/pdb2/df5'
sqlplus appuser/APPUSER@PDB2<<EOF
select * from appuser.cars;
select * from APPUSER.CAR_QUEUE_TABLE;
EOF
I have already created a Vault. DMS needs one to store sensitive information.
How to
I must execute all commands in the same shell. I need to specify a lot of information that I use later on:
#Specify a base name for the migration. All migration-related objects are prefixed with the name
export MIGRNAME=SALES
#Specify the OCID of the compartment where all the resources are running. This procedure assumes all resources are placed in the same compartment
export COMPARTMENTOCID="ocid1.compartment.oc1...."
#Vault details
export VAULTOCID="ocid1.vault.oc1...."
export VAULTKEYOCID="ocid1.key.oc1...."
Next, I specify information about the source database:
#Source database OCID
export SRCDBOCID="ocid1.database.oc1...."
#Network stuff
#Private IP address of the source DB System
export SRCHOSTIP=10.0.1.186
#Subnet OCID that the source DB System uses
export SRCSUBNETOCID="ocid1.subnet.oc1...."
#VCN OCID that the DB System uses
export SRCVCNOCID="ocid1.vcn.oc1...."
#Location of the private key file that can be used to communicate over SSH to the source host
export SRCHOSTKEYFILE=/Users/daniel/Documents/ssh/my-private-key
#Name and path of the database directory object. Remember to create the directory in the file system
export SRCDBDIRNAME=EXPDIR
export SRCDBDIRPATH=/u01/app/oracle/$SRCDBDIRNAME
#Connection details
#Details for non-CDB or PDB
export SRCPDBUSERNAME=SYSTEM
export SRCPDBPASSWORD=*****
export SRCPDBSVCNAME=SALESDB_fra1b4....oraclevcn.com
#If source is a PDB, fill in details for CDB. For non-CDB leave them blank
export SRCCDBUSERNAME=
export SRCCDBPASSWORD=
export SRCCDBSVCNAME=
Finally, I specify information about the target database – the autonomous database:
#Target ADB OCID
export TGTADBOCID="ocid1.autonomousdatabase.oc1...."
#Username and password - typically the ADMIN user
export TGTDBUSERNAME=ADMIN
export TGTDBPASSWORD=*****
Now, let the fun begin. I first create an object storage bucket which DMS uses to store dump files, log files, CPAT output and the like:
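A sketch of the bucket creation, assuming the bucket is simply named after the migration:
oci os bucket create \
  --compartment-id $COMPARTMENTOCID \
  --name "${MIGRNAME}BUCKET"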
I can check the evaluation outcome, including the Cloud Premigration Advisor Tool (CPAT) report. You can find this information in the object storage bucket as well. You can run the evaluation as many times as needed:
#Get the last job and the details about it
export MIGRLASTJOBOCID=$(oci database-migration job list \
--migration-id $MIGROBJOCID \
--limit 1 \
--sort-by timeCreated \
--sort-order desc \
--query "data.items[0].id" \
--raw-output
)
oci database-migration job get-job-output-content \
--job-id $MIGRLASTJOBOCID \
--file -
#Get the CPAT report
oci database-migration job get-advisor-report \
--job-id $MIGRLASTJOBOCID
Once I have cleared any issues preventing the migration, I can start the actual migration. The command will return control immediately when the migration is started. Optionally, I use the parameters --wait-for-state and --max-wait-seconds to keep it running until the command completes:
#Start the real migration
export MIGRSTARTJOBID=$(oci database-migration migration start \
--migration-id $MIGROBJOCID \
--query "data.id" \
--raw-output
)
I use the below two commands to monitor the migration. The first command gives me an overall status. The second command returns a log file with additional details.
#Get the current status
oci database-migration job get \
--job-id $MIGRSTARTJOBID
oci database-migration job get-job-output-content \
--job-id $MIGRSTARTJOBID \
--file -
That’s it!
I have migrated my database to an autonomous database.
Although the OCI CLI commands are documented, it can be hard to figure out exactly which parameters to add and the exact syntax. My colleague, Alex Kotopoulis, gave me rock-star advice.
Switch to the GUI and configure the migration as you want. Turn on "Web developer tools" (might have a different name in your browser) and investigate the network traffic. You can see the REST API calls made by the GUI and easily translate those into OCI CLI parameters.
When migrating Oracle Databases, you often transport the statistics using dbms_stats. Our team got involved in an interesting case the other day, and I would like to share the solution with you.
Problem
A customer migrated to Oracle Database 19c and decided to move the statistics using dbms_stats.import_schema_stats. They noticed that the procedure started to burn a lot of CPU, and after 38 hours, they gave up.
A SQL Monitor report showed one statement that spent almost all the time: an insert into sys.dbms_stats_id_map_tab with a subselect referencing the staging table. The staging table is the one you specify in the stattab argument in the call to dbms_stats.import_schema_stats. The staging table holds the optimizer statistics in a portable format, and from there the procedure inserts them into the data dictionary in the proper format.
It was already obvious that the procedure would take far too much time.
Diagnosing
The first shot was to look at the code behind dbms_stats. But this specific code hadn’t changed since its introduction many releases ago.
Recursive statements that touch the data dictionary immediately brought our attention to dictionary statistics. But the customer told us that they were not stale.
The customer ran SQL Tuning Advisor on the offending statement, and one of the findings was about stale statistics. SQL Tuning Advisor recommended gathering statistics on the staging table and corresponding indexes.
One of our performance experts looked at the execution plan and found a pattern he had seen before. He tried to disable Join Predicate Push Down in the session. It helped, but this was just a workaround. We wanted to find the root cause.
Solution
SQL Tuning Advisor had come up with the real problem: stale statistics on the staging table and corresponding indexes. Once the customer gathered statistics on the staging table and its indexes, the import of statistics finished in 2 hours and 27 minutes, which was acceptable.
We also discovered that the dictionary statistics were not as accurate as the customer had initially concluded. In fact, by mistake, they had misinformed us. A fresh run of dbms_stats.gather_dictionary_stats gave a slight performance improvement as well.
Recommendations
These recommendations are now part of our best practices for migrations.
Before importing statistics using dbms_stats.import_schema_stats, gather stats on the staging table.
Immediately after importing a lot of data, and before you import statistics with dbms_stats or anything else, you should gather dictionary statistics. This applies to regular Data Pump imports and transportable tablespace metadata imports.
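A sketch of both calls, assuming the staging table is SYSTEM.MY_STG_TAB_1 (cascade => TRUE also gathers statistics on the corresponding indexes):
exec dbms_stats.gather_table_stats(ownname => 'SYSTEM', tabname => 'MY_STG_TAB_1', cascade => TRUE)
exec dbms_stats.gather_dictionary_stats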
In the above example, the staging table is called SYSTEM.MY_STG_TAB_1.
The recommendation applies as well if you are using the procedures dbms_stats.import_database_stats or dbms_stats.import_table_stats.
Dictionary Statistics
Why is it important to gather dictionary statistics immediately after the import?
When you import data or use transportable tablespaces, you often have a brand new, empty database. Then you import a lot of objects. Those objects are represented as rows in the data dictionary.
For instance, the tables you import now appear as rows in SYS.TAB$, the partitions in SYS.TABPART$, the indexes in SYS.IND$, and so forth.
Those internal tables were almost empty before – there were not a lot of tables. Now they have a lot of rows. This means that the statistics are stale. When you start to use functionality in the database, like importing statistics, recursive queries using the internal tables will be executed. With stale statistics on the dictionary, you can have suboptimal execution plans and bad performance. Gathering dictionary statistics can fix this for you.
Conclusion
Statistics are always vital, whether optimizer statistics on user data or internally in the data dictionary. Be sure to verify the accuracy of your statistics when you have problems.
Also, SQL Tuning Advisor is a great tool. It can quickly come up with suggestions for fixing problems. Use the recommendations as input to your troubleshooting. SQL Tuning Advisor also works on internal tables.
Additional Information
We have a few videos on our YouTube channel which have more information about transporting statistics with dbms_stats.
In this blog post series, I use Full Transportable Export/Import (FTEX) to move the metadata during a cross-platform transportable tablespace migration (XTTS). The documentation states:
You can use the full transportable export/import feature to copy an entire database from one Oracle Database instance to another.
Requirements
A different blog post already covers the requirements for FTEX. Below is a supplement to that list:
The user performing the export and import must have the roles DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE, respectively. Don’t run the Data Pump jobs as SYS AS SYSDBA!
During export, the default tablespace of the user performing the export must not be one of the tablespaces being transported. In addition, the default tablespace of the user performing the export must be writable. Data Pump needs this to create the control table.
The target database (non-CDB) or PDB must not contain a tablespace of the same name as one of the tablespaces being transported. Often this is the case with the USERS tablespace. Either use Data Pump remap_tablespace or rename the tablespace (alter tablespace users rename to users2).
All tablespaces are transported. It is not possible to exclude a tablespace or a user from the operation.
What Is Included?
Generally, you should count on everything being included, except SYS objects and the things specified in the next chapter. Below is a list of things that are included as well, based on questions I have previously been asked.
If a user schema has tables in SYSTEM or SYSAUX tablespace, such tables are also transported. But they are not stored in the transported tablespaces. Instead, those tables are exported into the dump file using conventional export. Examples:
If you created any new tables as SYSTEM or any other internal schema, except SYS, those tables will also be transported. If such tables are in the SYSTEM or SYSAUX tablespace, then they are exported into the dump file. Examples:
No need to emphasize that you should never create any objects in Oracle maintained schemas. But we all know it happens…
Public and private database links.
Private synonyms.
Profiles.
Directories including the privileges granted on them, although they are owned by SYS. The contents stored in the directory in the file system must be moved manually.
External tables definition, but the underlying external files must be moved manually.
BFILE LOBs, but the underlying external files must be moved manually.
All schema level triggers (CREATE TRIGGER ... ON SCHEMA), including on system events, except those owned by SYS
All database level triggers (CREATE TRIGGER ... ON DATABASE) owned by an internal schema, except SYS.
SQL patches.
SQL plan baselines.
SQL profiles.
SQL plan directives.
User-owned scheduler objects.
Unified auditing policies and audit records.
What Is Not Included?
The transport does not include any object owned by SYS. Here are some examples:
User-created tables in SYS schema are not transported at all. You must re-create such tables (but you should never create such tables in the first place).
Grants on tables or views owned by SYS, like DBA_USERS or v$datafile.
Any trigger owned by SYS.
SYS-owned scheduler objects.
In addition, the following is not included:
Index monitoring (ALTER INDEX ... MONITORING USAGE).
Public synonyms.
AWR data is not included. You can move such data using the script $ORACLE_HOME/rdbms/admin/awrextr.sql.
How Does It Work?
There are two keywords used to start a full transportable job: TRANSPORTABLE and FULL. If you want to start an FTEX import directly over a network link:
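A sketch of such an import, assuming a database link named SOURCEDB pointing to the source database and hypothetical data file locations on the target host:
impdp system@pdb1 network_link=sourcedb full=y transportable=always \
  metrics=y exclude=statistics directory=mydir logfile=ftex_imp.log \
  transport_datafiles='/u02/oradata/CDB2/pdb1/users01.dbf'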
- Start on a small database and work on your runbook.
  - Eventually, prove it works on a production-size database.
- Automate
  - To ensure consistency. There are many steps, and it is easy to overlook a step or miss a detail.
  - To avoid human error. Humans make mistakes. Period!
- Save logs
  - Data Pump
  - RMAN
  - Terminal output
- Automate the clean-up procedure
  - To repeat tests and effectively clean up the target environment.
  - In case of failure and rollback during the production migration, you should know how to resume operations safely.
- Shut down the source database
  - Be sure to take the source database offline after migration. Having users connect to the wrong database after a migration is a disaster.
Data Pump Import
Importing directly into the target database using the NETWORK_LINK option is recommended.
Timezone File Version
Check the timezone file version of your source and target database:
SQL> select * from v$timezone_file;
If they differ and the target timezone file version is higher than the source database, Data Pump will convert any TIMESTAMP WITH TIME ZONE (TSTZ) column to the newer timezone conventions. The conversion happens automatically during import.
Since Data Pump must update data during import, it requires that the tablespaces can be turned READ WRITE. Thus, you can’t use TRANSPORTABLE=KEEP_READ_ONLY if you have tables with TSTZ columns. Trying to do so will result in:
ORA-39339: Table "SCHEMA"."TABLE" was skipped due to transportable import and TSTZ issues resulting from time zone version mismatch.
Source time zone version is ?? and target time zone version is ??.
If your target database has a lower timezone file version than the source, you can’t use FTEX until you upgrade the timezone file in the target database.
TDE Tablespace Encryption
If the source database has one or more encrypted tablespaces, you must either:
Supply the keystore password on export using the Data Pump option ENCRYPTION_PASSWORD.
Specify ENCRYPTION_PWD_PROMPT=YES and Data Pump will prompt for the keystore password. This approach is safer because the encryption password otherwise ends up in the shell history (see the sketch below).
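A sketch of a full transportable export using the prompt, with hypothetical directory and dump file names:
expdp system/&lt;password&gt; full=y transportable=always \
  directory=mydir dumpfile=ftex.dmp logfile=ftex_exp.log \
  encryption_pwd_prompt=yes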
You can read more about Full Mode and transportable tablespaces in the documentation.
You can only transport encrypted tablespaces if the source and target platforms share the same endian format. For example, going from Windows to Linux is fine, because they are both little-endian platforms. Going from AIX to Linux will not work; that’s big to little endian.
When a tablespace is transported to a platform of a different endian format, the data files must be converted. The conversion does not work on encrypted tablespaces. The only option is to decrypt the tablespace before transport.