The TEK is stored in the tablespace. When you use transportable tablespaces, you copy the data files, and thus the TEK remains the same. The data in the tablespace stays encrypted with the same TEK during the entire migration.
But what about the MEK? It is required to get access to the TEK.
Option 1: Use ENCRYPTION_PASSWORD parameter
During Data Pump transportable tablespace export, you specify an ENCRYPTION_PASSWORD:
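A sketch of such an export; the directory, dump file, and tablespace names are placeholders, not from an original command:

$ expdp system/<password> directory=dpdir dumpfile=tts.dmp \
    transport_tablespaces=APPS \
    encryption_password=<a_strong_and_secure_password>

On import, you specify the same ENCRYPTION_PASSWORD.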
A benefit of this option is that the source and target databases remain encrypted using different MEKs. You can query the target database and verify that no new MEK is added; it continues to use its own MEK:
SQL> select * from v$encryption_keys;
According to the documentation, this is the recommended option.
Option 2: Import Master Encryption Key
You start the Data Pump transportable tablespace export.
In the source database, you export the source database MEK:
SQL> administer key management export keys
with secret "<another_strong_and_secure_password>"
to '/home/oracle/secure_location/exported-keys'
force keystore identified by "<source_database_MEK>";
You import the source database MEK into the target database:
SQL> administer key management import keys
with secret "<another_strong_and_secure_password>"
from '/home/oracle/secure_location/exported-keys'
force keystore identified by "<target_database_MEK>"
with backup;
You start the Data Pump transportable tablespace import.
By querying v$encryption_keys, you can see that another key has been added to the database.
You can read more about export and import of MEKs in the documentation.
Option 3: Oracle Key Vault
If you are using Oracle Key Vault, it’s very easy to allow the target database to access the source database master encryption key.
When you perform the Data Pump transportable tablespace import in the target database, it will already have access to the encryption keys that protect the tablespaces. Nothing further is needed.
What About Rekeying?
If you make the source database encryption key available to the target database, consider whether you also want to perform a rekey operation. This applies to options 2 and 3.
ORA-39396
If you use options 2 or 3, you will receive the below warning during Data Pump transportable tablespace export:
ORA-39396: Warning: exporting encrypted data using transportable option without password
This warning points out that in order to successfully import such a transportable tablespace job, the target database wallet must contain a copy of the same database master key used in the source database when performing the export. Using the ENCRYPTION_PASSWORD parameter during the export and import eliminates this requirement.
Cross-endian Migration of TDE Encrypted Tablespaces
If your database is encrypted, you must use the newer M5 migration method. It supports encrypted tablespaces.
The older XTTS v4 doesn't support encrypted tablespaces.
If you have an encrypted tablespace and still want to use XTTS v4:
Decrypt the tablespace
Migrate the tablespace using transportable tablespace
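In Oracle Database 12.2 and later, you can decrypt a tablespace online before the migration; a sketch, assuming a tablespace named APPS:

SQL> alter tablespace apps encryption online decrypt;

After the migration, you can encrypt the tablespace again in the target database.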
Cloning Oracle Homes is a convenient way of getting a new Oracle Home. It’s particularly helpful when you need to patch out-of-place.
A popular method for cloning Oracle Homes is to use clone.pl. However, in Oracle Database 18c, it is deprecated.
[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image.
You must use runInstaller script available to perform the Software Only install. For more details on image based installation, refer to help documentation.
This Is How You Should Clone Oracle Home
You should use runInstaller to create golden images instead of clone.pl. Golden image is just another word for the zip file containing the Oracle Home.
How to Create a Golden Image
First, only create a golden image from a freshly installed Oracle Home. Never use an Oracle Home that is already in use. As soon as you start to use an Oracle Home you taint it with various files and you don’t want to carry those files around in your golden image. The golden image must be completely clean.
Then, you create a directory where you can store the golden image:
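For example (the destination path is made up):

$ mkdir -p /u01/app/oracle/goldimages
$ $ORACLE_HOME/runInstaller -silent -createGoldImage \
    -destinationLocation /u01/app/oracle/goldimages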
If you need to exclude files, you can use -exclFiles. It accepts a wildcard, so for example you can specify -exclFiles network/admin* to exclude all files and subdirectories in a directory.
The installer creates the golden image as a zip file in the specified directory. The name of the zip file is unique and printed on the console.
One of the differences between clone.pl and runInstaller is that the latter does not include the file $ORACLE_HOME/oraInst.loc.
This is intentional because the file is not needed for golden image deployment. runInstaller recreates the file when you install the golden image.
One of the things listed in oraInst.loc is the location of the Oracle inventory. Either runInstaller finds the value itself, or you can specify it on the command line using INVENTORY_LOCATION=<path-to-inventory>.
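Deploying a golden image is then a matter of unzipping it into a new home and running the installer from there; a sketch with made-up paths, with the remaining installer options omitted:

$ mkdir -p /u01/app/oracle/product/19.0.0/dbhome_2
$ unzip -oq /u01/app/oracle/goldimages/db_home_image.zip \
    -d /u01/app/oracle/product/19.0.0/dbhome_2
$ cd /u01/app/oracle/product/19.0.0/dbhome_2
$ ./runInstaller -silent ... INVENTORY_LOCATION=/u01/app/oraInventory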
Previously, many tools existed to do the same – clone an Oracle Home. Now, we have consolidated our resources into one tool.
From now on, there is one method for cloning Oracle Home. That is easier for everyone.
In addition, runInstaller has some extra features that clone.pl doesn’t. For instance:
Better error reporting
Precheck run
Multimode awareness
Ability to apply patches during installation
When Will It Be Desupported?
I don’t know. Keep an eye out on the Upgrade Guide, which contains information about desupported features.
However, I can see in the Oracle Database 23c documentation that clone.pl is still listed. But that’s subject to change until Oracle Database 23c is released.
If you clone Oracle Homes because you are doing out-of-place patching, you are on the right track. I strongly recommend always using out-of-place patching. Also, when you patch out-of-place, remember to move all the database configuration files.
If you clone Oracle Homes, you keep adding patches to the same Oracle Home, and the more patches you install over time, the more the Oracle Home grows in size. OPatch has functionality to clean up inactive patches from an Oracle Home. Consider running it from time to time using opatch util deleteinactivepatches. Mike Dietrich has a really good blog post about it. I also describe it in one of our previous webinars:
Appendix
Thanks to Anil Nair for pointing me in the right direction.
Advanced Queueing offers great queueing functionality built into Oracle Database. When you want to create a queue, Oracle Database will create several supporting objects. Depending on how you use Advanced Queueing and Oracle Database, these objects might change.
Let’s investigate a case that came up during a project.
The Situation
After I recreated a queue table, some of the underlying objects changed definition. Specifically, columns that apparently contained user information changed from a NUMBER to a VARCHAR2.
Object Name            Column Name     Data Type Before   Data Type After
AQ$<queue_table>       ENQ_USER_ID     NUMBER             VARCHAR2
AQ$<queue_table>       DEQ_USER_ID     NUMBER             VARCHAR2
AQ$_<queue_table>_F    ENQ_UID         NUMBER             VARCHAR2
AQ$_<queue_table>_F    DEQ_UID         NUMBER             VARCHAR2
AQ$_<queue_table>_H    DEQUEUE_USER    NUMBER             VARCHAR2
AQ$_<queue_table>_L    DEQUEUE_USER    NUMBER             VARCHAR2
The column data type changed from NUMBER to VARCHAR2. I created the queue tables using DBMS_AQADM, and Oracle Database created the AQ$ objects recursively. The cause turned out to be the queue table's compatible setting, which you can check in USER_QUEUE_TABLES:
SQL> select queue_table, compatible from user_queue_tables;
In the documentation, you can find information on which functionality gets enabled by which compatible setting. In this case, the following is of interest:
Mixed case (upper and lower case together) queue names, queue table names, and subscriber names are supported if database compatibility is 10.0
When you set compatible on the queue table to 10.0 there is better support for certain user names (subscriber names), and that requires a different data type on the underlying objects.
How to Solve the Problem
There are two options:
You can recreate the queue tables using the same compatible setting. You start by querying USER_QUEUE_TABLES to find the compatible setting. Then, use DBMS_AQADM.CREATE_QUEUE_TABLE to recreate the queue and remember to specify the correct compatible setting.
You can adopt the newest compatible setting for your queues. The underlying objects change, and you can use all the features of Advanced Queueing. You should test your application and ensure it works with the new setting.
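Option 1 could look like this, reusing the names from the test case in the appendix and assuming the original queue table was created with compatible 8.0:

SQL> begin
        dbms_aqadm.create_queue_table (
           queue_table        => 'APPUSER.CAR_QUEUE_TAB8',
           queue_payload_type => 'APPUSER.CAR_TYPE',
           compatible         => '8.0');
     end;
     /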
I recommend option 2. It uses the default setting for compatible. The default setting has been around for many years, so it is thoroughly tested, and I assume most customers use this configuration.
You Can Migrate Old Queue Tables
You can migrate old queues in your Oracle Database. Any queues that don’t have compatible set to 10.0, you can migrate to the newest compatible setting:
SQL> begin
        dbms_aqadm.migrate_queue_table(..., compatible => '10.0.0');
     end;
     /
Now you can start to use all the features in Advanced Queueing.
You can query the data dictionary to find old queues in your Oracle Database:
SQL> select queue_table, compatible
from user_queue_tables
where compatible != '10.0.0';
Appendix
Thanks to oracle-base.com for supplying the starting point for my test case.
Test Case
conn / as sysdba
--create user and grant privileges
drop user appuser cascade;
create user appuser identified by appuser;
grant dba to appuser;
conn appuser/appuser
--type used for queue payload
create type car_type as object (
name varchar2(20),
color varchar2(10)
);
/
--get the database instance compatible setting
--used to derive the queue table compatible setting
--if not specified
select value from v$parameter where name='compatible';
begin
--create queue table without explicit 'compatible'
--compatible should be 10.0.0
dbms_aqadm.create_queue_table (
queue_table => 'APPUSER.CAR_QUEUE_TAB',
queue_payload_type => 'APPUSER.CAR_TYPE');
--create new queue table with lower compatible setting
dbms_aqadm.create_queue_table (
queue_table => 'APPUSER.CAR_QUEUE_TAB8',
queue_payload_type => 'APPUSER.CAR_TYPE',
compatible => '8.0');
end;
/
--verify queue table compatible setting
select queue_table, compatible from user_queue_tables;
Earlier this month, the team and I presented our webinar Data Pump – Best Practices and Real World Scenarios. Over the years, we have accumulated information from many different customer projects, and we wanted to compile all that information into a webinar. You can watch the result on YouTube or flip through the slides.
On YouTube, the recording is divided into pieces, so you can easily dive right into the subject that has your particular interest.
Next month, in May, we are hosting another webinar about Release and Patching Strategies for Oracle Database 23c. Sign up now and secure your seat. It is, by the way, the 16th webinar in our series. If you want, you can watch the previous ones on demand.
Transportable Tablespaces is a great way of migrating data from one Oracle Database to another. However, if your database uses object types and those object types have evolved over time, you might hit an error during import:
W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
ORA-39083: Object type TABLE:"APPUSER"."CARS" failed to create with error:
ORA-39218: type check on object type "APPUSER"."CAR_TYPE" failed
ORA-39216: object type "APPUSER"."CAR_TYPE" hashcode or version number mismatch
Failing sql is:
BEGIN SYS.DBMS_METADATA.CHECK_TYPE('APPUSER','CAR_TYPE','4','613350A0E37618A6108C500936D0C6162C',''); END;
Typically, a number of dependent objects fail with ORA-39112 as well.
Let me explain the problem and what you can do about it.
Object Types and Evolution
The term object types refers to database objects that you create using the CREATE TYPE statement.
Object types are also known as user-defined types (UDT) and abstract data types (ADT).
When Oracle Database introduced object types many releases ago, they could not evolve. In simple words, you only had CREATE TYPE available. If you needed to change the type, you had to drop and recreate it. Later on, Oracle Database allowed type evolution, and you could also do ALTER TYPE.
An Example of Evolution
Let me illustrate object types and evolution using an example:
I create a supertype called VEHICLE_TYPE
I create another type called CAR_INFO_TYPE
I create a subtype called CAR_TYPE which inherits VEHICLE_TYPE and includes an attribute based on CAR_INFO_TYPE
I add the three types:
CREATE TYPE "APPUSER"."VEHICLE_TYPE"
IS OBJECT (
make VARCHAR2 (40)
) NOT FINAL;
/
CREATE TYPE "APPUSER"."CAR_INFO_TYPE"
IS OBJECT (
model VARCHAR2(40)
);
/
CREATE TYPE "APPUSER"."CAR_TYPE"
UNDER VEHICLE_TYPE (
car_info CAR_INFO_TYPE
);
/
I query the data dictionary to find the types and verify that their version is 1. I expect this because the types are brand-new.
select o.name, t.version#
from obj$ o, user$ u,type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
NAME VERSION#
-------------------------
CAR_INFO_TYPE 1
CAR_TYPE 1
VEHICLE_TYPE 1
Now I want to add two additional attributes to CAR_INFO_TYPE:
horsepower
color
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE horsepower NUMBER CASCADE
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE color VARCHAR2(20) CASCADE
/
Now, I check the data dictionary again. CAR_INFO_TYPE has now advanced two versions to version 3. Two ALTER TYPE statements mean two new versions. But notice CAR_TYPE. It advances as well.
select o.name, t.version#
from obj$ o, user$ u,type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
NAME VERSION#
-------------------------
CAR_INFO_TYPE 1
CAR_INFO_TYPE 2
CAR_INFO_TYPE 3
CAR_TYPE 1
CAR_TYPE 2
CAR_TYPE 3
VEHICLE_TYPE 1
If you alter a type that another type uses, both types evolve. No ALTER TYPE was issued on CAR_TYPE, yet it still evolved. This becomes a problem later on.
Same Objects but Different Evolution
If I use the example above but change the order of statements, the types will evolve in a different way. They will still be the same objects, but they evolve differently.
Let me illustrate how the types can evolve differently. I execute the ALTER TYPE statements on CAR_INFO_TYPE immediately after its CREATE TYPE statement:
CREATE TYPE "APPUSER"."VEHICLE_TYPE"
IS OBJECT (
make VARCHAR2 (40)
) NOT FINAL;
/
CREATE TYPE "APPUSER"."CAR_INFO_TYPE"
IS OBJECT (
model VARCHAR2(40)
);
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE horsepower NUMBER CASCADE
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE color VARCHAR2(20) CASCADE
/
CREATE TYPE "APPUSER"."CAR_TYPE"
UNDER VEHICLE_TYPE (
car_info CAR_INFO_TYPE
);
/
Here you can see the two approaches compared to each other:

First approach                          Second approach
CREATE TYPE "APPUSER"."VEHICLE_TYPE"    CREATE TYPE "APPUSER"."VEHICLE_TYPE"
CREATE TYPE "APPUSER"."CAR_INFO_TYPE"   CREATE TYPE "APPUSER"."CAR_INFO_TYPE"
CREATE TYPE "APPUSER"."CAR_TYPE"        ALTER TYPE "APPUSER"."CAR_INFO_TYPE"
ALTER TYPE "APPUSER"."CAR_INFO_TYPE"    ALTER TYPE "APPUSER"."CAR_INFO_TYPE"
ALTER TYPE "APPUSER"."CAR_INFO_TYPE"    CREATE TYPE "APPUSER"."CAR_TYPE"
When I check the version of the types, I see that CAR_TYPE doesn’t evolve together with CAR_INFO_TYPE:
select o.name, t.version#
from obj$ o, user$ u,type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
NAME VERSION#
-------------------------
CAR_INFO_TYPE 1
CAR_INFO_TYPE 2
CAR_INFO_TYPE 3
CAR_TYPE 1
VEHICLE_TYPE 1
In contrast to the previous example, because I changed the order of the statements, CAR_TYPE is now still on version 1 (not 3 as previously).
When you alter a type with CASCADE, you can choose whether to convert the existing table data immediately or defer the conversion. If you choose to update the corresponding table data, it can take quite some time on a big table.
If you choose to skip the update, the alteration will be fast. However, when the database later reads the data, it must use the evolution information to determine how to interpret each row.
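A sketch of the two variants using the example types; the added attributes are made up, and INCLUDING TABLE DATA is the default:

SQL> -- convert existing rows immediately (can be slow on big tables)
     alter type "APPUSER"."CAR_INFO_TYPE"
     add attribute weight number
     cascade including table data;

SQL> -- defer conversion; rows are converted when they are read
     alter type "APPUSER"."CAR_INFO_TYPE"
     add attribute length number
     cascade not including table data;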
What Data Pump Does
Data Pump can recreate the types during a regular import and transportable tablespaces. But it will handle the types one-by-one:
Create VEHICLE_TYPE (and all alterations if there were any)
Then create CAR_INFO_TYPE and all alterations
Then create CAR_TYPE (and all alterations if there were any)
Now, the definition of the types is correct (it has the correct attributes and stuff). But the evolution of the types does not necessarily match that of the source database. Unless the types in the source database were created in the same order as Data Pump does, then the versions of the types might be different.
What Happens During a Regular Import
During a regular (not transportable) export, Data Pump always uses the external table access method when an evolved type is involved. This access method performs a CREATE TABLE <external table> ... AS SELECT, which converts each row's version of a type to the highest version. All rows in the dump file are then of the highest version of the type.
On import, Data Pump creates the types, and perhaps the versions of the types are different. When Data Pump loads the data into the table, all rows are on the same version because they were converted during export. The data access layer underneath Data Pump is now more lenient. If one type version is found with a matching hash code, then data can be imported safely.
Data Pump then checks for a matching object type using a hash code only. The version does not have to match. Here’s a query to get the hash code:
select o.name, t.version#, t.hashcode
from obj$ o, user$ u, type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
What Happens During a Transportable Tablespace Import
In transportable tablespace mode, things are different. Data Pump does not load the data into a dump file. Instead, it is kept in the data files. To read the data correctly, you must have the correct type definitions and information on how they evolve.
If the type evolution doesn't match that of the source database, then Data Pump can't be sure the data is accurate and will issue an error. Using the above query, the type evolution must match on both version and hash code.
Solution
This is not a bug in Data Pump. Data Pump ensures that your data is not corrupted during import. Due to a restriction in the implementation of type evolution, Data Pump is not able to completely recreate a type’s evolution and this might cause problems.
These are your options:
Option 1: Use Regular Data Pump Import
As described above, during a regular import Data Pump handles the types and dependent objects differently.
Perform transportable tablespace import.
Find types and dependent objects that Data Pump can’t import.
Import the affected types and objects using a regular Data Pump import.
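Step 3 could be sketched like this; the dump file and directory names are made up, and you would add further INCLUDE filters for the dependent tables or queues that failed:

$ impdp system/<password> directory=dpdir dumpfile=appuser.dmp \
    schemas=appuser include=TYPE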
Option 2: Recreate Types and Dependent Objects Without Data
In one case I worked on, a queue table used the evolved type. Before migration, the application team would ensure that the application processed all messages in the queue. Thus, at the time of migration, the queue was empty.
The transportable tablespace import failed to create the queue table due to the evolved type.
After import, the DBA recreated the queue using DBMS_AQADM.
Also, the DBA granted permissions on the queue.
If a regular table used the evolved type, you would need to recreate the table using CREATE TABLE syntax instead.
Option 3: Recreate Types Without Evolution Before Export
Recreate the types in the source database in the same way as Data Pump will create it on the target database. One type at a time. First, the CREATE TYPE statement and any ALTER TYPE statements on the same type. Then next type, and the next, and the next…
If a dependent table or queue uses the evolved type, it can be quite cumbersome and time consuming to recreate the type in the source database.
Option 4: Create Type in Target Using Same Evolution Before Import
If you know the evolution history of the types in the source database (i.e., in which order the CREATE TYPE and ALTER TYPE statements were issued), then you can create the types manually in the target database and ensure they evolve in the same way. You must create the types in exactly the same way (including the number of attributes, the order in which they were added, and so forth).
In the target database, you create and alter the types in the correct order. To do that, you must also create the schema.
You verify that types are of the same version in source and target database. The hash code must be identical as well:
select o.name, t.version#, t.hashcode
from obj$ o, user$ u, type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
You perform the transportable tablespace import.
Data Pump reports an error for the types that already exist. You ignore those errors. You can also exclude types from the import (previous step) using EXCLUDE=TYPES.
Ensure that dependent objects that use the affected types are valid. It could be other types, tables, or queues.
Conclusion
Evolved types can hit ORA-39218 or ORA-39216 during a transportable tablespace import. Implementation restrictions in object types cause these errors. Data Pump can’t evolve the types in the same way, and issues an error. You must find a different way of moving the types and dependent objects.
export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
CONN / AS SYSDBA
DROP USER APPUSER CASCADE;
DROP TABLESPACE APPS INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE APPS;
CREATE USER APPUSER IDENTIFIED BY APPUSER DEFAULT TABLESPACE APPS;
ALTER USER APPUSER QUOTA UNLIMITED ON APPS;
GRANT CONNECT TO APPUSER;
GRANT CREATE TYPE TO APPUSER;
GRANT CREATE TABLE TO APPUSER;
GRANT AQ_ADMINISTRATOR_ROLE TO APPUSER;
CONN APPUSER/APPUSER
CREATE TYPE "APPUSER"."VEHICLE_TYPE"
IS OBJECT (
make VARCHAR2 (40)
) NOT FINAL;
/
CREATE TYPE "APPUSER"."CAR_INFO_TYPE"
IS OBJECT (
model VARCHAR2(40)
);
/
CREATE TYPE "APPUSER"."CAR_TYPE"
UNDER VEHICLE_TYPE (
car_info CAR_INFO_TYPE
);
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE horsepower NUMBER CASCADE
/
ALTER TYPE "APPUSER"."CAR_INFO_TYPE" ADD ATTRIBUTE color VARCHAR2(20) CASCADE
/
CREATE TABLE "APPUSER"."CARS" (
id number,
car_info car_type
);
INSERT INTO "APPUSER"."CARS"
VALUES (1, APPUSER.CAR_TYPE('FORD', APPUSER.CAR_INFO_TYPE('FIESTA', 200, 'BLUE')));
COMMIT;
BEGIN
DBMS_AQADM.CREATE_QUEUE_TABLE (
queue_table => 'APPUSER.CAR_QUEUE_TABLE',
queue_payload_type => 'APPUSER.CAR_TYPE');
DBMS_AQADM.CREATE_QUEUE (
queue_name => 'APPUSER.CAR_QUEUE',
queue_table => 'APPUSER.CAR_QUEUE_TABLE');
DBMS_AQADM.START_QUEUE (
queue_name => 'APPUSER.CAR_QUEUE',
enqueue => TRUE);
END;
/
DECLARE
l_enqueue_options DBMS_AQ.enqueue_options_t;
l_message_properties DBMS_AQ.message_properties_t;
l_message_handle RAW(16);
l_event_msg APPUSER.CAR_TYPE;
BEGIN
l_event_msg := APPUSER.CAR_TYPE('VOLVO', APPUSER.CAR_INFO_TYPE('V90', 400, 'GREY'));
DBMS_AQ.enqueue(queue_name => 'APPUSER.CAR_QUEUE',
enqueue_options => l_enqueue_options,
message_properties => l_message_properties,
payload => l_event_msg,
msgid => l_message_handle);
COMMIT;
END;
/
conn / as sysdba
select o.name, o.oid$, t.hashcode, t.version#
from obj$ o, user$ u,type$ t
where o.owner# = u.user# and o.oid$ = t.toid and u.name='APPUSER'
order by o.name, t.version#;
2. Transportable Tablespace Using FTEX
mkdir /tmp/dpdir
rm /tmp/dpdir/*
mkdir /u02/oradata/CDB2/pdb2
sqlplus / as sysdba<<EOF
ALTER TABLESPACE USERS READ ONLY;
ALTER TABLESPACE APPS READ ONLY;
create or replace directory dpdir as '/tmp/dpdir';
grant read, write on directory dpdir to system;
EOF
export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba<<EOF
startup
alter pluggable database PDB2 close;
drop pluggable database PDB2 including datafiles;
create pluggable database PDB2 admin user adm identified by adm file_name_convert=('pdbseed', 'pdb2');
alter pluggable database PDB2 open;
alter pluggable database PDB2 save state;
alter session set container=PDB2;
create directory mydir as '/tmp/dpdir';
grant read, write on directory mydir to system;
create public database link SOURCEDB connect to system identified by oracle using 'UP19';
EOF
export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba<<EOF
set line 300
select 'cp ' || file_name || ' /u02/oradata/CDB2/pdb2/df' || file_id as c1 from dba_data_files where tablespace_name in ('USERS', 'APPS');
EOF
#Copy the files
export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
impdp system/oracle@pdb2 network_link=sourcedb full=y \
transportable=always metrics=y exclude=statistics directory=mydir \
logfile=pdb2.log \
transport_datafiles='/u02/oradata/CDB2/pdb2/df4' \
transport_datafiles='/u02/oradata/CDB2/pdb2/df5'
sqlplus appuser/APPUSER@PDB2<<EOF
select * from appuser.cars;
select * from APPUSER.CAR_QUEUE_TABLE;
EOF
When you migrate your data using Oracle Data Pump, you can transform your data as part of the import. One of the popular transformations is applying compression.
Check the License Guide and ensure your licenses are in place. Most compression options require a separate license.
$ impdp ... transform=table_compression_clause:\"compress for oltp\"
It works for table partitions and subpartitions as well.
How about Tablespace Default Compression Clause?
You can create a tablespace with a default compression clause. Any new tables, table partitions or table subpartitions will inherit this compression clause:
SQL> create tablespace demo1 ... default table compress for oltp;
If you import your tables into such a tablespace, the import will not change the compression type of the tables. The tables preserve the compression type from the source database.
That is, unless you instruct Data Pump to remove the compression clause from the DDL. Then the tables inherit the tablespace compression clause:
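For example, to strip the compression clause so the tablespace default takes effect:

$ impdp ... transform=table_compression_clause:\"none\"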
Using the SQLFILE command line parameter, you can extract the DDL from a dump file. If you have uncompressed tables, you can see that the CREATE TABLE statements have a dedicated NOCOMPRESS clause. This will override the tablespace setting unless you instruct Data Pump to remove it.
Index Compression
In Oracle Database 21c, you can use the TRANSFORM parameter to apply compression on indexes during import:
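The syntax follows the same pattern as the table transform; the clause value here is just an example:

$ impdp ... transform=index_compression_clause:\"compress advanced low\"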
sqlplus / as sysdba<<EOF
create or replace directory dpdir as '/u01/app/oracle/admin/UP19/dpdump';
drop tablespace apps including contents and datafiles;
drop user appuser cascade;
create tablespace apps;
create user appuser identified by APPUSER;
grant dba to APPUSER;
alter user APPUSER default tablespace APPS;
connect appuser/APPUSER
create table t1 as select * from dba_objects;
ALTER TABLE t1 MODIFY
PARTITION BY HASH (object_id)
PARTITIONS 2
STORE IN (apps, apps, apps, apps);
CREATE INDEX i1local ON t1 (last_ddl_time) LOCAL;
CREATE INDEX i1global ON t1 (owner) global;
create table t3 as select * from dba_objects;
ALTER TABLE t3 MODIFY
PARTITION BY HASH (object_id)
SUBPARTITION BY HASH (object_name)
SUBPARTITIONS 2 PARTITIONS 2;
CREATE INDEX i3local ON t3 (last_ddl_time) LOCAL;
CREATE INDEX i3global ON t3 (owner) global;
CREATE INDEX i3globalpart ON t3 (object_id) global partition by hash (object_id) partitions 2;
EOF
rm /u01/app/oracle/admin/UP19/dpdump/*
expdp system/oracle schemas=appuser directory=dpdir
sqlplus / as sysdba<<EOF
drop user appuser cascade;
drop tablespace apps including contents and datafiles;
create tablespace apps default table compress for oltp index compress advanced low;
alter system set db_index_compression_inheritance=tablespace scope=both;
EOF
impdp system/oracle directory=dpdir transform=table_compression_clause:\"none\"
sqlplus / as sysdba<<EOF
prompt "TABLES"
select table_name, compression, compress_for from all_tables where owner='APPUSER';
prompt "TAB PARTITIONS"
select table_name,partition_name, compression, compress_for from all_tab_partitions where table_owner='APPUSER';
prompt "TAB SUBPARTITIONS"
select table_name,subpartition_name, compression, compress_for from all_tab_subpartitions where table_owner='APPUSER';
prompt "INDEXES"
select index_name, compression from all_indexes where table_owner='APPUSER';
prompt "IND PARTITIONS"
select index_name, compression from all_ind_partitions where index_owner='APPUSER';
prompt "IND SUBPARTITIONS"
select index_name, compression from all_ind_subpartitions where index_owner='APPUSER';
EOF
BFILEs are data objects stored in operating system files, outside the database tablespaces. Data stored in a table column of type BFILE is physically located in an operating system file, not in the database. The BFILE column stores a reference to the operating system file.
BFILEs are read-only data types. The database allows read-only byte stream access to data stored in BFILEs. You cannot write to or update a BFILE from within your application.
They are sometimes referred to as external LOBs.
You can store a BFILE locator in the database and use the locator to access the external data:
To associate an operating system file to a BFILE, first create a DIRECTORY object that is an alias for the full path name to the operating system file. Then, you can initialize an instance of BFILE type, using the BFILENAME function in SQL or PL/SQL …
In short, it is stuff stored outside the database that you can access from inside the database. Clearly, this requires special attention when you want to move your data.
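A minimal sketch of storing a BFILE locator; the directory path, table, and file names are made up:

SQL> create directory docs_dir as '/u01/app/external_files';
SQL> create table documents (id number, doc bfile);
SQL> insert into documents values (1, bfilename('DOCS_DIR', 'invoice.pdf'));

The row stores only the locator (a directory alias plus a file name), while invoice.pdf stays in the operating system.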
How Do I Move It?
There are three things to consider:
The file outside the database – in the operating system.
The directory object.
The BFILE locator stored in the table.
Table and Schema Mode Export
You must copy the file in the operating system. Since a BFILE is read-only, you can copy the file before you perform the actual export.
You must create the directory object. Directory objects are system-owned objects and not part of a table or schema mode export.
Data Pump exports a BFILE locator together with the table. It exports the BFILE locator just like any other column. On import, Data Pump inserts the BFILE locator but performs no sanity checking. The database will not throw an error if the file is missing in the OS or if the directory is missing or erroneous.
Full Export
Like table and schema mode, you must copy the file.
Directory objects are part of a full export. On import, Data Pump creates a directory object with the same definition. If you place the external files in a different location in the target system, you must update the directory object.
Like table and schema mode, Data Pump exports the BFILE locator as part of the table.
Do I Have BFILEs in My Database?
You can query the data dictionary and check if there are any BFILEs:
SQL> select owner, table_name
from dba_tab_cols
where data_type='BFILE';
If you use Active Data Guard and have read-only sessions in your standby database, you should ensure that instances are properly drained before restarting the GI stack (via root.sh).
Later, when you want to patch the database, you can follow the standby-first method described in Oracle Patch Assurance – Data Guard Standby-First Patch Apply (Doc ID 1265700.1). If the database patches you install are RAC Rolling Installable (like Release Updates), you should choose option 1 in phase 3 to avoid any downtime or brownout.
Alternative Approach
If you have many nodes in your cluster and an application that doesn’t behave well during draining, consider switching over to the standby site instead of patching the primary site in a rolling manner.
When you switch over, there is only one interruption, whereas a rolling patch apply causes many interruptions.
Patch standby site, aarhus.
Switch over to aarhus.
Patch former primary, copenhagen.
What If You Want to Patch the Database At the Same Time?
Out-of-place SwitchGridHome
You get complete control over the process with Out-of-place SwitchGridHome. It is my preferred method. There are more commands to execute, but it doesn’t matter if you automate it.
Here is an overview of the process. You can use many of the commands from this blog post:
Prepare new GI homes using gridSetup. Be sure to apply the needed patches. Do it on one node in both the primary (copenhagen) and standby site (aarhus). The process copies the new GI home to all other nodes in the cluster. Do not execute root.sh.
Prepare new database homes. Be sure to apply the needed patches. Here is an example. Do it on one node in both primary (copenhagen) and standby site (aarhus). The process will copy the new database home to all other nodes in the cluster. Remember to execute root.sh.
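As a sketch, preparing a new GI home with the Release Update already applied could look like the following. The paths and the patch location are placeholders; adjust them to your environment:

```shell
# As the grid owner: unzip the base release into the new GI home
$ unzip -oq grid_home.zip -d /u01/app/19.17.0/grid

# Install the new home and apply the Release Update in one operation
$ /u01/app/19.17.0/grid/gridSetup.sh -applyRU /u01/patches/<GI_RU_patch_directory>
```

The -applyRU option lets gridSetup patch the home before configuration, so the new home is fully patched before you switch to it.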
Out-of-place OPatchAuto
Out-of-place OPatchAuto is a convenient way of patching because it also automates the database operations. However, I still recommend using the Out-of-place SwitchGridHome method because it gives you more control over draining.
Here is an overview of the process:
Deploy new GI and database homes using opatchauto apply ... -prepare-clone. Do it on all nodes in both primary (copenhagen) and standby site (aarhus). Since you want to patch GI and database homes, you should omit the -oh parameter.
Complete patching of all nodes in the standby site (aarhus) using opatchauto apply -switch-clone.
When OPatchAuto completes the switch on a node, it takes down the entire GI stack on that node, including the database instances.
GI restarts using the new GI home, but the database instances still run on the old database home.
On the last node, after the GI stack has been restarted, all database instances restart again to switch to the new database home. This means each database instance restarts twice.
Proceed with the primary site, copenhagen.
Complete patching of all nodes in the primary site (copenhagen) using opatchauto apply -switch-clone.
The procedure is the same as on the standby site.
In addition, OPatchAuto executes Datapatch to complete the database patching.
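The process above boils down to two OPatchAuto commands, run as root. The patch location is a placeholder:

```shell
# On all nodes in both sites: deploy patched clones of the homes
# (omit -oh so both GI and database homes are cloned and patched)
$ opatchauto apply /u01/patches/<GI_RU_patch_directory> -prepare-clone

# Later, node by node - first the standby site, then the primary:
$ opatchauto apply -switch-clone
```

The prepare-clone phase is non-intrusive and can run while the databases are in use; only the switch-clone phase restarts the stack.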
When you perform maintenance operations, like patching, consider what to do about Fast-Start Failover (FSFO).
If you have one standby database
Single instance standby
I recommend disabling FSFO. If something happens to the primary database while you are patching the standby site, you don’t want to switch over or fail over automatically. Since the standby site is being patched, the standby database might restart shortly. You should evaluate the situation and determine what to do rather than relying on FSFO handling it.
RAC standby
I recommend disabling FSFO for the same reasons as above. Now, you could argue that the standby database is up all the time if you perform rolling patching. That's correct, but nodes are being restarted as part of the patching process, and services are being relocated. Having sessions switch over or fail over while you are in the middle of a rolling patch apply is a delicate situation. Technically, it works; the Oracle stack can handle it. But I prefer to evaluate the situation before switching or failing over, unless you have a super-cool application that can handle it transparently.
Nevertheless, leaving FSFO enabled when you patch GI or a database is fully supported.
If you have multiple standby databases
I recommend keeping FSFO enabled if you have multiple standby databases.
When you patch one standby database, you can set FastStartFailoverTarget to the other standby database. When patching completes, you can set FastStartFailoverTarget to the first standby database and continue patching the second standby database. This keeps your primary database protected at all times.
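In DGMGRL, disabling FSFO or retargeting it to another standby could look like this. The database names are placeholders, and FastStartFailoverTarget is a per-database property set on the primary:

```
DGMGRL> show fast_start failover;
DGMGRL> disable fast_start failover;
DGMGRL> edit database copenhagen set property FastStartFailoverTarget='<second_standby>';
DGMGRL> enable fast_start failover;
```

Use the disable/enable pair for the single-standby scenarios above, and the edit database command to move the target while patching one of multiple standbys.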
The Easy Way
As shown above, you can patch Oracle Grid Infrastructure even when you have Oracle Data Guard configured. But why not take the easy way and use Oracle Fleet Patching and Provisioning (FPP)?
FPP automatically detects the presence of Data Guard and executes the commands in the appropriate order, including invoking Datapatch when needed.
If you need to know more, you can reach out to Philippe Fierens, product manager for FPP. He is always willing to get you started.
The inventory registers the GI Release Updates as OCW RELEASE UPDATE. In this example, GI is running on 19.17.0.
Sometimes critical one-off patches are delivered as merge patches with the GI Release Update. It can mess up the patch description. This example is from a Base Database Service in OCI:
The patch description no longer contains the name of the Release Update. In this case, you can trawl through MOS to find the individual patches in the merge patch to identify which Release Update it contains. Or, you can often look at the ACFS patch instead:
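To see what is actually installed in a home, you can list the patches registered in the inventory; the output includes the patch descriptions discussed above:

```shell
$ $ORACLE_HOME/OPatch/opatch lspatches
```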
I have shown you a few ways to patch Oracle Grid Infrastructure 19c (GI). Which one should you choose? Here’s an overview of the pros and cons of each method.
Just Grid Infrastructure Or Also The Database
You can patch:
Just GI and later on the database
Or GI and the database at the same time
If possible, I recommend patching GI and the database in separate maintenance operations. Proceed with the database when you are confident the new GI runs fine. If you do it a week apart, you should have enough time to kick the tires on the new GI.
I like to keep things separate. If there is a problem, I can quickly identify whether the GI or database patches are causing problems. The more patches you put in simultaneously, the more changes come in, and the harder it is to keep things apart.
The downside is that you now have two maintenance operations: one for GI and one for the database. But if your draining strategy works and/or you are using Application Continuity, you can completely hide the outage from your end users.
If you have a legacy application or draining is a nightmare for you, then it does make sense to consider patching GI and database at the same time.
I recommend out-of-place patching. There are multiple ways of doing that. Choose the one that suits you best. My personal favorite is the SwitchGridHome method.
Happy Patching
There are even more methods than I have shown in this blog post series. I have demonstrated the methods that most people would consider. Evaluate the pros and cons yourself and choose what works best for you.
What’s your favorite? Why did you choose a specific method? Leave a comment and let me know.
If you decide to patch GI and the database at the same time, be aware of the following. Each database instance will need to restart twice. First, each instance goes down to switch to the new GI. The second restart happens when you switch on the last node; then all database instances are brought down again in a rolling manner and restarted in the new Oracle Home.
If you want to control draining yourself, don't use this method. The second round of database restarts happens completely automatically, one instance after the other, without any possibility for you to intervene and control draining.