A Few Details about Using Refreshable Clone PDB for Non-CDB to PDB Migration

Our team has been advocating the use of refreshable clone PDB for non-CDB to PDB migrations using AutoUpgrade. It is a great feature and our entire team loves it – so do many of the customers we work with.

However, in a recent non-CDB to PDB migration, we encountered some issues with refreshable clone PDB and AutoUpgrade.

Can My Target Container Database Be a RAC Database?

Yes, this works perfectly fine.

Be aware that the CREATE PLUGGABLE DATABASE statement scales out on all nodes in your cluster. By default, the database also uses parallel processes, so this can put quite a load on the source non-CDB. Consider restricting the use of parallel processes using the AutoUpgrade config file parameter:

upg1.parallel_pdb_creation_clause=4

Since the creation scales out on all nodes, all nodes must be able to resolve the connect identifier to the source non-CDB. If you use an alias from tnsnames.ora, be sure to add that on all nodes. Failure to do so will lead to an error during the CREATE PLUGGABLE DATABASE command:

ERROR at line 1:
ORA-65169: error encountered while attempting to copy file
+DATAC1/SRCDB/DATAFILE/system.262.1178083869
ORA-17627: ORA-12154: TNS:could not resolve the connect identifier specified
ORA-17629: Cannot connect to the remote database server
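
For reference, here is a minimal sketch of a tnsnames.ora entry that must resolve on every node in the cluster. The host name, port, and service name are assumptions; use the values of your source non-CDB:

SRCDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = srcdb-host.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = SRCDB))
  )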

What Happens If the Source Database Extends a Data File?

If the source database extends a data file – either through AUTOEXTEND ON NEXT or manually by a user – the target database extends the matching data file as well. Here is an extract from the target alert log when it extends a data file:

2024-08-27T07:01:26.671975+00:00
PDB1(4):Media Recovery Log +RECOC1/SRCDB/partial_archivelog/2024_08_27/thread_2_seq_4.276.1178089277
2024-08-27T07:01:32.773191+00:00
PDB1(4):Resize operation completed for file# 26, fname +DATA/TGTCDB_HBZ_FRA/20A568D1FD5DB0A6E0633D01000AC89B/DATAFILE/srctbs02.290.1178089287, old size 10240K, new size 1058816K

It works with smallfile and bigfile tablespaces.

What Happens If I Create a Tablespace on the Source Database?

The target database attempts to create the same tablespace.

For this to work, one of the following must be true:

  • The target CDB uses Oracle Managed Files (db_create_file_dest is set), so it creates the new data file in the OMF location.
  • The data file location from the source database also exists on the target, so the target creates the data file in the same path.

If neither of the above is true, you’ll receive an error during ALTER PLUGGABLE DATABASE ... REFRESH:

ORA-00283: recovery session canceled due to errors
ORA-01274: cannot add data file that was originally created as
'+DATAC1/SRCDB/DATAFILE/srctbs04.282.1178091655'
You can use PDB_FILE_NAME_CONVERT instead.

It works with smallfile and bigfile tablespaces.

What Happens If I Add a Data File to an Existing Tablespace?

The target database attempts to add a matching data file.

The target database must be able to translate the data file location according to the section above.

2024-08-27T06:51:19.294612+00:00
PDB1(4):Media Recovery Log +RECOC1/SRCDB/partial_archivelog/2024_08_27/thread_2_seq_4.276.1178088679
2024-08-27T06:51:20.268208+00:00
PDB1(4):Successfully added datafile 25 to media recovery
PDB1(4):Datafile #25: '+DATA/TGTCDB_HBZ_FRA/20A568D1FD5DB0A6E0633D01000AC89B/DATAFILE/srctbs01.289.1178088681'

What Happens If I Set a Tablespace Read-Only?

Refreshable clone PDB does not support this. Nor does it support going the other way: setting a tablespace read-write.

If you do so, the database reports an error:

alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB

From the alert log:

2024-08-28T05:23:02.893946+00:00
PDB2(6):Error! unsupported source PDB operation: 21
2024-08-28T05:23:02.994035+00:00
PDB2(6):Media Recovery failed with error 65339

Operation 21 is setting a tablespace read-only. If you set a tablespace read-write, the database reports operation 20 instead.

PDB2(7):Error! unsupported source PDB operation: 20

You will not be able to refresh the PDB anymore. You must re-create the refreshable clone PDB.

What Happens If I Restart the Source Database?

Refreshable clone PDB does not support restarting the source database.

When you restart the source database, the source database places a special marker in the redo stream. This even happens for a clean shutdown (SHUTDOWN NORMAL). The target CDB does not understand how to recover beyond this marker.

alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB

From the alert log:

2024-08-28T05:27:00.451985+00:00
PDB2(4):Error! unsupported source PDB operation: 3
2024-08-28T05:27:00.710236+00:00
PDB2(4):Media Recovery failed with error 65339

Operation 3 is the source database restart.

You will not be able to refresh the PDB anymore. You must re-create the refreshable clone PDB.

How Do I Drain My Source Database Before Migration?

Right before the migration, when you cut the connection from the source non-CDB to the target PDB, it could be useful to restart the database. But that’s not possible.

I suggest that you:

  • Ensure that the target CDB connects to the source non-CDB using a dedicated service. This applies to the database link that you establish between the two databases.
  • Stop all other services and specify a drain timeout (see the sketch after this list).
  • Shut down the application that connects to the source non-CDB.
  • Kill sessions manually.
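
Here is a hedged sketch of stopping a service with a drain timeout using srvctl. The database name, service name, and timeout are assumptions:

srvctl stop service -db SRCDB -service app_svc -drain_timeout 120 -stopoption immediate

The drain timeout gives existing sessions time to disconnect before the stop option kicks in.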

Remember that the target database connects to the source database via a database link, so stopping the database listener is not an option. Nor is enabling RESTRICTED SESSION.

Update: Armando managed to perform the migration using restricted session. Check his comment (see below) for details.

What Happens If I Restart the Target Container Database?

You can safely restart the target CDB while you have a refreshable clone PDB. This works fine.

What About NOLOGGING Operations?

You can’t perform NOLOGGING operations on the source database.

Since refreshable clone PDB relies on redo, a NOLOGGING operation on the source prevents that data from reaching the target. When you try to query the NOLOGGING table on the target database after the migration, you will receive an error:

SQL> select count(*) from t1
       *
ERROR at line 1:
ORA-28304: Oracle encrypted block is corrupt (file # 186, block # 131)
ORA-01110: data file 186:
'+DATA/TGTCDB_HBZ_FRA/20CF181D4A925E06E0633D01000ACB50/DATAFILE/srctbs01.297.117
8266961'
ORA-26040: Data block was loaded using the NOLOGGING option

Thanks to Marcelo for leaving a comment. He suggests that you set the source non-CDB in FORCE LOGGING mode. This is a good idea to avoid this potential nightmare:

alter database force logging;
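
You can verify the setting afterward:

select force_logging from v$database;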

You can read more about NOLOGGING operations in The Gains and Pains of Nologging Operations (Doc ID 290161.1).

What About Hot Backups?

You can’t perform hot backup operations on the source database.

If you do so, you’ll run into the following error:

2025-11-21T14:31:06.845676+00:00
SALES(4):Error! unsupported source PDB operation: 1
2025-11-21T14:31:07.845923+00:00
SALES(4):Media Recovery failed with error 65339

Please note that I’m not referring to RMAN online backups. I’m talking about the old-school ALTER DATABASE BEGIN BACKUP and ALTER DATABASE END BACKUP commands.

Any Restrictions on Data Types or Object Types?

No. The refreshable clone is a physical copy of the database, so there are no restrictions on data types or object types.

Services

You must recreate your services after the migration. Neither database-managed nor Clusterware-managed services survive the migration.
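
As a minimal sketch, you can recreate a Clusterware-managed service in the target CDB with srvctl; the database, PDB, and service names below are assumptions:

srvctl add service -db TGTCDB -pdb PDB1 -service app_svc
srvctl start service -db TGTCDB -service app_svc

For database-managed services, the DBMS_SERVICE package provides the equivalent functionality.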

Further Reading

Summary

Despite these minor restrictions, migration from non-CDB to PDB using refreshable clone PDB and AutoUpgrade is still a very handy method. Knowing the restrictions upfront ensures that you can successfully migrate the database.

Happy migrating!

Does Data Pump Export Automatic Data Optimization Policies and Heat Maps?

During our recent webinar on cross-platform migration, an attendee asked a good question:

Does Data Pump in a Full Transportable Export/Import move ADO policies and heat map information?

Let’s find out.

What Is It?

ADO means Automatic Data Optimization and is a feature that:

… automates the compression and movement of data between different tiers of storage within the database.

ADO uses the Heat Map feature, which:

… provides data access tracking at the segment-level and data modification tracking at the segment and row level.

In other words,

  • You might have slower and faster storage.
  • You define ADO policies on tables and indexes (including partitions and subpartitions).
  • The policies define in which storage the database stores the rows.
  • The heat maps tell which objects you access.
  • Based on the heat maps, a job automatically moves rows between storage tiers depending on usage and/or compresses them.

What Happens in Data Pump

There are three pieces of essential information:

  1. ADO policies. Defined on objects, they govern how the database should store data.
  2. Heat map information. The database records your usage of the data and stores the information in heat maps. Later on, the database uses the heat maps to determine what to do with your data.
  3. ILM settings. Tells the database how to perform the ILM (Information Lifecycle Management) maintenance. For example, you can define the parallel degree that the database uses during maintenance or the number of concurrent maintenance jobs.

You can find a test case at the end of the blog post. I’m using test data and queries from oracle-base.com.

ADO Policies

Data Pump transfers the ADO policies in all modes:

  • Full transportable
  • Full
  • Schema
  • Tablespace
  • Table

You can verify it:

SELECT   policy_name,
         object_owner,
         object_name,
         object_type,
         inherited_from,
         enabled,
         deleted
FROM     dba_ilmobjects
ORDER BY 1;

Heat Map Information

Data Pump does not transfer any heat map information. I didn’t find any way to move the heat map information to the new database.
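
For reference, this is the query from the test case below that I use to look for heat map information after the import:

SELECT track_time,
       owner,
       object_name,
       segment_write,
       full_scan,
       lookup_scan
FROM   dba_heat_map_seg_histogram
ORDER BY 1, 2, 3;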

ILM Settings

Data Pump does not transfer ILM settings – not even in full modes. You must manually move the settings.

  • You can find the current ILM settings:

    SELECT * FROM DBA_ILMPARAMETERS;
    
  • And change the settings in the target database:

    EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.DEG_PARALLEL , 4);
    
    • You must translate the name of the setting to the corresponding PL/SQL constant. You change the setting DEGREE OF PARALLELISM by using DBMS_ILM_ADMIN.DEG_PARALLEL.
  • The documentation holds a complete list of ILM settings.

Appendix

Further Reading

Test Data

I am using our Hands-On Lab, which you can find on Oracle LiveLabs. Again, thanks to oracle-base.com for test data and queries.

CONN / AS SYSDBA
--Turn on heat map tracking
ALTER SYSTEM SET HEAT_MAP=ON SCOPE=BOTH;

--Customize ILM settings
BEGIN
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.RETENTION_TIME, 42);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.JOBLIMIT, 42);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.ABS_JOBLIMIT, 42);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.DEG_PARALLEL , 4);
END;
/
SELECT * FROM DBA_ILMPARAMETERS;

--Create tablespaces for ILM policy
CREATE TABLESPACE FAST_STORAGE_TS DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE MEDIUM_STORAGE_TS DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE SLOW_STORAGE_TS DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE USER TEST IDENTIFIED BY TEST QUOTA UNLIMITED ON USERS;
GRANT CREATE SESSION, CREATE TABLE TO TEST;
ALTER USER TEST QUOTA UNLIMITED ON FAST_STORAGE_TS;
ALTER USER TEST QUOTA UNLIMITED ON MEDIUM_STORAGE_TS;
ALTER USER TEST QUOTA UNLIMITED ON SLOW_STORAGE_TS;

CONN TEST/TEST

--Create table where we can track usage
CREATE TABLE t1 (
  id          NUMBER,
  description VARCHAR2(50),
  CONSTRAINT t1_pk PRIMARY KEY (id)
);
INSERT INTO t1
SELECT level,
       'Description for ' || level
FROM   dual
CONNECT BY level <= 10;
COMMIT;

--Generate "usage"
SELECT *
FROM   t1;
SELECT *
FROM   t1
WHERE  id = 1;

--Create table for ILM policy
CREATE TABLE invoices (
  invoice_no    NUMBER NOT NULL,
  invoice_date  DATE   NOT NULL,
  comments      VARCHAR2(500)
)
PARTITION BY RANGE (invoice_date)
(
  PARTITION invoices_2016_q1 VALUES LESS THAN (TO_DATE('01/04/2016', 'DD/MM/YYYY')) TABLESPACE slow_storage_ts,
  PARTITION invoices_2016_q2 VALUES LESS THAN (TO_DATE('01/07/2016', 'DD/MM/YYYY')) TABLESPACE slow_storage_ts,
  PARTITION invoices_2016_q3 VALUES LESS THAN (TO_DATE('01/09/2016', 'DD/MM/YYYY')) TABLESPACE medium_storage_ts
    ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS,
  PARTITION invoices_2016_q4 VALUES LESS THAN (TO_DATE('01/01/2017', 'DD/MM/YYYY')) TABLESPACE medium_storage_ts
    ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS,
  PARTITION invoices_2017_q1 VALUES LESS THAN (TO_DATE('01/04/2017', 'DD/MM/YYYY')) TABLESPACE fast_storage_ts
    ILM ADD POLICY TIER TO medium_storage_ts READ ONLY SEGMENT AFTER 3 MONTHS OF NO ACCESS,
  PARTITION invoices_2017_q2 VALUES LESS THAN (TO_DATE('01/07/2017', 'DD/MM/YYYY')) TABLESPACE fast_storage_ts
    ILM ADD POLICY TIER TO medium_storage_ts READ ONLY SEGMENT AFTER 3 MONTHS OF NO ACCESS
)
ILM ADD POLICY ROW STORE COMPRESS BASIC SEGMENT AFTER 3 MONTHS OF NO ACCESS;

Test Case

###############
# FULL EXPORT #
###############

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
rm /tmp/expdat.dmp
sql / as sysdba<<EOF
   create or replace directory expdir as '/tmp';
EOF
expdp system/oracle full=y directory=expdir job_name=fullexp

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sql / as sysdba<<EOF
    startup
    alter pluggable database pdb1 close immediate;
    drop pluggable database pdb1 including datafiles;
    create pluggable database pdb1 admin user admin identified by admin;
    alter pluggable database all open;
    alter session set container=pdb1;
    create tablespace users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    create or replace directory expdir as '/tmp';
EOF

impdp system/oracle@pdb1 directory=expdir

sql / as sysdba<<EOF
    alter session set container=pdb1;
    select * from DBA_ILMPARAMETERS;
    SELECT policy_name,
       object_owner,
       object_name,
       object_type,
       inherited_from,
       enabled,
       deleted
    FROM   dba_ilmobjects
    ORDER BY 1;
    SELECT track_time,
        owner,
        object_name,
        segment_write,
        full_scan,
        lookup_scan
    FROM   dba_heat_map_seg_histogram
    ORDER BY 1, 2, 3;
EOF

#################
# SCHEMA EXPORT #
#################

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
rm /tmp/expdat.dmp
sql / as sysdba<<EOF
   create or replace directory expdir as '/tmp';
EOF
expdp system/oracle schemas=test directory=expdir job_name=schemaexp

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sql / as sysdba<<EOF
    startup
    alter pluggable database pdb1 close immediate;
    drop pluggable database pdb1 including datafiles;
    create pluggable database pdb1 admin user admin identified by admin;
    alter pluggable database all open;
    alter session set container=pdb1;
    create tablespace users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    create or replace directory expdir as '/tmp';
EOF

impdp system/oracle@pdb1 directory=expdir

sql / as sysdba<<EOF
    alter session set container=pdb1;
    select * from DBA_ILMPARAMETERS;
    SELECT policy_name,
       object_owner,
       object_name,
       object_type,
       inherited_from,
       enabled,
       deleted
    FROM   dba_ilmobjects
    ORDER BY 1;
    SELECT track_time,
        owner,
        object_name,
        segment_write,
        full_scan,
        lookup_scan
    FROM   dba_heat_map_seg_histogram
    ORDER BY 1, 2, 3;
EOF

################
# TABLE EXPORT #
################

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
rm /tmp/expdat.dmp
sql / as sysdba<<EOF
   create or replace directory expdir as '/tmp';
EOF
expdp system/oracle tables=test.t1,test.invoices directory=expdir job_name=tabexp

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sql / as sysdba<<EOF
    startup
    alter pluggable database pdb1 close immediate;
    drop pluggable database pdb1 including datafiles;
    create pluggable database pdb1 admin user admin identified by admin;
    alter pluggable database all open;
    alter session set container=pdb1;
    create tablespace users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    create or replace directory expdir as '/tmp';
    CREATE USER TEST IDENTIFIED BY TEST QUOTA UNLIMITED ON USERS;
    GRANT CREATE SESSION, CREATE TABLE TO TEST;
    ALTER USER TEST QUOTA UNLIMITED ON FAST_STORAGE_TS;
    ALTER USER TEST QUOTA UNLIMITED ON MEDIUM_STORAGE_TS;
    ALTER USER TEST QUOTA UNLIMITED ON SLOW_STORAGE_TS;
EOF

impdp system/oracle@pdb1 directory=expdir

sql / as sysdba<<EOF
    alter session set container=pdb1;
    select * from DBA_ILMPARAMETERS;
    SELECT policy_name,
       object_owner,
       object_name,
       object_type,
       inherited_from,
       enabled,
       deleted
    FROM   dba_ilmobjects
    ORDER BY 1;
    SELECT track_time,
        owner,
        object_name,
        segment_write,
        full_scan,
        lookup_scan
    FROM   dba_heat_map_seg_histogram
    ORDER BY 1, 2, 3;
EOF

It’s a Wrap – Transportable Tablespaces to the Extreme

Recently, we hosted our webinar Cross Platform Migration – Transportable Tablespaces to the Extreme. You can now watch the recording on our YouTube channel.

The audience at our webinar was very active and asked many good questions. To know all the details, you can read the Q&A and the slides.

Some of the attendees asked questions we were not able to answer during the webinar. Those questions – and answers – are included in the Q&A.

The New Procedure

For cross-platform and cross-endian migrations, we have a new procedure called M5. It replaces the former XTTS v4 Perl script that has been used for many migrations. We could see the need for change as Oracle Databases worldwide grow in size and complexity. M5 implements the latest RMAN and Data Pump technology to deliver the fastest possible migrations.

You can download the M5 scripts and read the procedure on My Oracle Support (Doc ID 2999157.1).

Next Webinar

Mark your calendar for our next webinar:

Move to Oracle Database 23c – Everything you need to know about Oracle Multitenant

Oracle Database 23c supports only the CDB architecture. If you haven’t migrated to Oracle Multitenant yet, you will with your upgrade to 23c. How do you approach it in the most efficient way? What are the other options? And why is this a migration unless you already have PDBs? We’ll cover all this and much more: how to work with Multitenant, how AutoUpgrade automates the entire move for you end-to-end, plus best practices, tips, and tricks. We’ll guide you, and you will be ready to move to Oracle Database 23c right away.

Sign up now and secure your seat.

All tech – no marketing!

Happy Migrating!

The Next-generation Cross-platform Migration for Oracle Database

I am very pleased to share that Oracle has officially launched a new method for cross-platform migrations of Oracle Database.

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)

The M5 script is the next-generation cross-platform transportable tablespace migration procedure for Oracle Database.

You can also use the script for cross-endian migrations, so this is perfect for AIX, HP-UX or SPARC migrations to Exadata Database Machine or any other little endian platform.

Next-generation

Before we launched M5, you would use the XTTS v4 Perl script for such migrations.

Timeline of methods for cross-endian migrations

We launched the latest version of the XTTS v4 Perl script many years ago. Over the years, as databases grew bigger and bigger, we saw multiple issues with the XTTS v4 Perl script, including:

  • No multisection backups for bigfile tablespaces
  • No encrypted tablespaces
  • Inefficient parallelism
  • Incomplete multitenant support

We wanted to solve all those issues with M5 – and we did! M5 uses newer RMAN functionality, and we made the procedure much simpler. It relies entirely on out-of-the-box RMAN functionality. On the source:

BACKUP ... TABLESPACE ... ;

On the target, we use:

RESTORE ALL FOREIGN DATAFILES ... ;

The latter command was introduced in Oracle Database 18c and enhanced in Oracle Database 19c. This determines the release requirements for the source and target databases; you can find the exact requirements in the My Oracle Support note.

Want to Know More?

We have a webinar later today about this new method. If you are interested, we still have open seats.

Cross Platform Migration – Transportable Tablespaces to the Extreme, February 22, 16:00 CET

Next week, we will add the recording to our YouTube channel. Be sure to subscribe so you don’t miss out on anything.

If you want a sneak peek at the slides, go ahead.

I have an elaborate blog post series about cross-platform migrations. I will soon update it with more information about the M5 migration method.

Happy Migrating!

How to Export and Import Statistics Faster Using DBMS_STATS in Parallel

When you migrate Oracle Database, you often need to transport statistics using DBMS_STATS. Also, during migrations, you want to minimize the downtime. How can you transfer statistics as quickly as possible?

Export and Import

The easiest solution is to use dbms_stats.export_database_stats and dbms_stats.import_database_stats. But the procedures have no parallel capabilities. If you have an Oracle Database with many objects, it will take a long time.

Parallel Export and Import

dbms_stats also allows you to export and import statistics on schema or table level. You can use this to your advantage to run multiple export or import commands at the same time.

Imagine a database with four schemas.

If you export database stats, the database creates a list of statistics from all four schemas. Then, it exports them one by one.

  • Session 1: exec dbms_stats.export_database_stats( ... );

In contrast, if you start four different sessions, you can export one schema in each session and thus finish much faster.

  • Session 1: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA1');
  • Session 2: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA2');
  • Session 3: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA3');
  • Session 4: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA4');

Benchmark

This benchmark gives you an idea of how much time you can save.

It is an Oracle E-Business Suite (EBS) database with 1,400 schemas and 150,000 tables/indexes.

I compare exporting and importing statistics on database level and on schema level. For schema level, I divide the schemas into chunks and process them from separate sessions simultaneously.

Method               Time to export   Time to import   Total
Database             4m 5s            21m 5s           25m 10s
Schema, parallel 8   57s              3m 58s           4m 55s
Schema, parallel 16  53s              3m 45s           4m 38s

I can save more than 20 minutes in this benchmark by doing the work in parallel. Between parallel 8 and parallel 16, there is not much to gain, probably because of contention.

How to Export and Import in Parallel Using DBMS_STATS

Here is some simple code you can use to export and import in parallel.

I haven’t coded for years, so use at your own risk :)

  1. Create a dedicated user for the statistics staging table and create a control table with the list of schemas to process.

    drop user statuser cascade;
    grant dba to statuser identified by statuser;
    alter user statuser quota unlimited on users;
    --Create the DBMS_STATS staging table
    exec dbms_stats.create_stat_table('STATUSER', 'DBSTATS', 'USERS');
    --Populate control table
    declare
       l_stmt VARCHAR2(1000);
       l_ctl_tab_name ALL_TABLES.TABLE_NAME%TYPE := 'DOH$STATXFERCTL';
       l_stat_tab_owner ALL_USERS.USERNAME%TYPE := 'STATUSER';
    begin
       --Create control table and populate it
       --Order schemas by how many tables they have
       l_stmt := 'CREATE TABLE ' || l_stat_tab_owner || '.' || l_ctl_tab_name; 
       l_stmt :=l_stmt || ' as select o.owner stat_id, count(o.table_name) as cnt, 0 exp_status, 0 imp_status from dba_tables o, dba_users u where u.username not in (''SYS'', ''SYSTEM'', ''' || l_stat_tab_owner || ''') and u.oracle_maintained=''N'' and o.owner=u.username group by owner order by 2 desc';
       execute immediate l_stmt;
    end;
    /
    
  2. Start any number of database sessions and run the code below in each. The code selects a schema from the control table and performs a schema-level export of statistics into the staging table.

    declare
       l_cur_stat_id ALL_USERS.USERNAME%TYPE;
       l_statown ALL_USERS.USERNAME%TYPE := 'STATUSER';
       l_stattab ALL_TABLES.TABLE_NAME%TYPE := 'DBSTATS';
       l_ctl_tab_name ALL_TABLES.TABLE_NAME%TYPE := 'DOH$STATXFERCTL';
       l_stmt VARCHAR2(1000);
    begin
       begin
          --Loop until you reach "no_data_found exception"
          while true loop
             --Select a schema from the control table
             --Order by count (cnt) to export largest schemas first
             --Select for update to ensure only one session processes a schema
             l_stmt := 'select stat_id from ' || l_statown || '.' || l_ctl_tab_name || ' where exp_status = 0 and rownum=1 order by cnt desc for update';
             execute immediate l_stmt into l_cur_stat_id;
             
             --Mark the schemas as "in progress"
             --Commit to release lock on control table			
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set exp_status=1 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
    
             --Perform the schema level export into DBMS_STATS staging table
             dbms_stats.export_schema_stats (ownname=>l_cur_stat_id, stattab=>l_stattab,statid=>l_cur_stat_id, statown=>l_statown);
    
             --Mark the schema as completed in the control table
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set exp_status=2 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
          end loop;
       exception when no_data_found then
          --"No_data_found" exception occurs when there are no more rows/schemas to process	  
          null;
       end;
    end;
    /
    
  3. Now move the STATUSER schema – including the DBSTATS staging table and the control table – to the target database and run the import. Start any number of database sessions and run the code below in each. The code selects a schema from the control table and performs a schema-level import of statistics from the staging table.

    declare
       l_cur_stat_id ALL_USERS.USERNAME%TYPE;
       l_statown ALL_USERS.USERNAME%TYPE := 'STATUSER';
       l_stattab ALL_TABLES.TABLE_NAME%TYPE := 'DBSTATS';
       l_ctl_tab_name ALL_TABLES.TABLE_NAME%TYPE := 'DOH$STATXFERCTL';
       l_stmt VARCHAR2(1000);
       no_stats_imp EXCEPTION;
       PRAGMA EXCEPTION_INIT(no_stats_imp, -20000);
    begin
       begin
            --Loop until you reach "no_data_found exception"
          while true loop
             --Select a schema from the control table
             --Order by count (cnt) to import largest schemas first
             --Select for update to ensure only one session processes a schema
             l_stmt := 'select stat_id from ' || l_statown || '.' || l_ctl_tab_name || ' where imp_status = 0 and rownum=1 order by cnt desc for update';
             execute immediate l_stmt into l_cur_stat_id;
     		
             --Mark the schemas as "in progress"
             --Commit to release lock on control table					
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set imp_status=1 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
    
             begin
                --Perform the schema level import from the DBMS_STATS staging table
                dbms_stats.import_schema_stats (ownname=>l_cur_stat_id, stattab=>l_stattab,statid=>l_cur_stat_id, statown=>l_statown);
             exception when no_stats_imp THEN
                --An empty schema, i.e., no tables or indexes			
                null;
             end;
     		
             --Mark the schema as completed in the control table             
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set imp_status=2 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
          end loop;
       exception when no_data_found then
          --"No_data_found" exception occurs when there are no more rows/schemas to process	  	  
          null;
       end;
    end;
    /   
    

Does Exporting Database Statistics Include the Dictionary Statistics?

I recently worked on a migration of Oracle Database to a new platform. The customer was sensitive to downtime. We looked at each item in the migration run book and asked: “Can we make it faster?”

We discussed exporting database statistics, and the question: Does dbms_stats.export_database_stats include dictionary statistics?

The documentation states:

This procedure exports statistics for all objects in the database and stores them in the user statistics tables identified by statown.stattab.

What exactly does “all objects” mean?

Let’s Find Out

Here is a little test case.

I gather dictionary statistics. All but 77 dictionary tables now have statistics (it’s intentional that some are left un-analyzed):

exec dbms_stats.gather_dictionary_stats;
select count(*) cnt 
from   dba_tables 
where  owner in ('SYS', 'SYSTEM') 
       and last_analyzed is null;
	   
       CNT
----------
        77

I export database statistics into a staging table:

grant dba to statuser identified by statuser;
begin
   dbms_stats.create_stat_table(
      'STATUSER', 
      'DBSTATS');
   dbms_stats.export_database_stats(
      stattab=>'DBSTATS',
      statown=>'STATUSER');
   dbms_stats.gather_table_stats(
      'STATUSER', 
      'DBSTATS');
end;
/

I delete dictionary statistics. Now, 1674 dictionary tables are without statistics:

exec dbms_stats.delete_dictionary_stats(force=>true);
select count(*) cnt 
from   dba_tables 
where  owner in ('SYS', 'SYSTEM') 
       and last_analyzed is null;

       CNT
----------
      1674

I import statistics from the staging table. I am back at 77 dictionary tables without statistics.

begin
   dbms_stats.import_database_stats(
      stattab=>'DBSTATS',
      statown=>'STATUSER', 
      force=>TRUE);
end;
/
select count(*) cnt 
from   dba_tables 
where  owner in ('SYS', 'SYSTEM') 
       and last_analyzed is null;

       CNT
----------
        77

Conclusion: dbms_stats.export_database_stats includes dictionary statistics.

What About Fixed Objects Statistics?

I can use the same test case as above (more or less). Here is the query I use to check for missing statistics on fixed objects:

select count(*) cnt
from   dba_tab_statistics 
where  object_type='FIXED TABLE'
       and last_analyzed is null;

The answer is: dbms_stats.export_database_stats does not include fixed objects statistics.

To transfer fixed object statistics, you must use the dedicated export and import functions.
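
Here is a minimal sketch using the dedicated procedures, reusing the staging table from the test case above; adjust the names to your environment:

--On the source database
exec dbms_stats.export_fixed_objects_stats(stattab=>'DBSTATS', statown=>'STATUSER');

--On the target database, after moving the staging table
exec dbms_stats.import_fixed_objects_stats(stattab=>'DBSTATS', statown=>'STATUSER');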

Is It a Good Idea to Transfer Dictionary Statistics?

Well… It depends.

If time allows, I recommend you always gather dictionary statistics in the target database at the appropriate time.

If you want to transfer statistics, this is my recommendation:

Type                      Command                        Recommendation
Full export               full=y                         Should be acceptable
Full transportable        full=y transportable=always    Should be acceptable
Transportable tablespace  transport_tablespaces          Hmm, not sure it’s a good idea
Tablespace export         tablespaces=...                Hmm, not sure it’s a good idea
Schema export             schemas=...                    Hmm, probably a bad idea
Table export              tables=...                     No, most likely a bad idea

How to Upgrade Oracle Database and Replace the Operating System

A reader asked me for advice on upgrading Oracle Database and replacing the underlying operating system.

  • Currently on Oracle Database 12.1.0.2
  • Currently on Windows Server 2012
  • Upgrade to Oracle Database 19c
  • Move to new servers with Microsoft Windows Server 2022

What’s the recommended approach for transitioning to Oracle 19c on Windows 2022?

Oracle Data Guard

My first option is always Oracle Data Guard. It is often a superior option. You move the entire database, and the only interruption is a Data Guard switchover.

In this case, where the reader needs to replace the operating system, the first thing to check is platform certifications. Always check platform certifications on My Oracle Support. It has the most up-to-date information.

Here is an overview of the platform certification for Oracle Database 12.1.0.2 and 19c.

Oracle Database 12.1.0.2    Oracle Database 19c
Windows Server 2008
Windows Server 2008 R2
Windows Server 2012
Windows Server 2012 R2      Windows Server 2012 R2
                            Windows Server 2016
                            Windows Server 2019
                            Windows Server 2022

Oracle Database 19c does not support the current platform, Windows Server 2012. Thus, the reader can’t set up a standby database on the new servers and transition via a regular switchover.

Windows Server 2012 R2

Let’s imagine the current servers were using Windows Server 2012 R2. Both database releases support this platform. I would recommend this approach:

  1. Upgrade to Oracle Database 19c on current servers.
  2. Set up new servers with Windows Server 2022.
  3. Create standby database on new server.
  4. Transition to new servers with a regular Data Guard switchover.

This approach requires two maintenance windows. Yet, it is still my favorite because it is very simple.

RMAN Backups

You could also use RMAN and incremental backups. You don’t need much downtime – just the time necessary for a final incremental backup and restore. Like with Data Guard, you bring over the entire database.

RMAN can restore backups from a previous version, and you can use that to your advantage (a sketch follows the steps below).

  1. Provision new servers with just Oracle Database 19c.
  2. Backup on 12.1.0.2.
  3. Restore and recover the database on the new servers with Oracle Database 19c binaries.
  4. After the final incremental backup, open the new database in upgrade mode and perform the upgrade.
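
Here is a heavily condensed sketch of the idea. It leaves out many steps – restoring the control file, cataloging backup pieces, and applying the final incremental backup – and the backup location is an assumption:

# On the source server (12.1.0.2)
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT 'D:\backup\%U';
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT 'D:\backup\%U';

# On the new server, using the Oracle Database 19c binaries
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;

# After the final incremental backup has been applied
SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;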

We covered this approach in one of our webinars; you can also find details in this blog post.

Move Storage

You can also unmount the storage from the old server, and attach it to the new server.

  1. Run AutoUpgrade in analyze mode to determine upgrade readiness.
  2. Downtime starts.
  3. Run AutoUpgrade in fixup mode to fix any issues preventing the upgrade from starting.
  4. Cleanly shut down the source database.
  5. Move the storage to the new server.
  6. Start the database on the new server in upgrade mode.
  7. Start AutoUpgrade in upgrade mode to complete the upgrade.

This is just a high-level overview. For a real move, there are many more intermediate steps.
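
The sketch below shows the AutoUpgrade calls in steps 1, 3, and 7. The config file name, Oracle home paths, and SID are assumptions:

# upg.cfg
global.autoupg_log_dir=C:\app\oracle\autoupgrade
upg1.source_home=C:\app\oracle\product\12.1.0.2\dbhome_1
upg1.target_home=C:\app\oracle\product\19.0.0\dbhome_1
upg1.sid=DB121

# Step 1 - on the old server
java -jar autoupgrade.jar -config upg.cfg -mode analyze

# Step 3 - on the old server, when downtime starts
java -jar autoupgrade.jar -config upg.cfg -mode fixups

# Step 7 - on the new server, after the database is started in upgrade mode
java -jar autoupgrade.jar -config upg.cfg -mode upgrade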

Be sure to have a proper rollback plan. You are re-using the data files, and AutoUpgrade in upgrade mode does not create a guaranteed restore point.

Data Pump

Data Pump is also a viable option, especially for smaller, less complex databases. It also enables you to restructure your database, for example:

  • Transform old BasicFile LOBs to SecureFile
  • Implement partitioning
  • Exclude data (for archival)
  • You can import directly into a higher release and even directly into a PDB.

But – the larger the database, the longer the downtime (generally speaking).

When you use Data Pump for upgrades, I recommend using a full database export.

Full Transportable Export/Import

You can also use transportable tablespaces for upgrades. You can even migrate directly into a PDB on Oracle Database 19c.

The downside of transportable tablespace is that you must copy the data files to the new system.

But often, you can unmount the storage and mount the storage on the new servers. This avoids the cumbersome process of copying the data files to the new system.

Another approach is to combine transportable tablespaces with incremental backups, if you want to lower the downtime needed. This approach leaves the original database untouched, leaving you with a perfect rollback option.

Oracle GoldenGate

You could also use Oracle GoldenGate. But for most upgrades, it is overkill, partly because of the restrictions and considerations. I see this as a sensible option only if you have very strict downtime or fallback requirements.

Conclusion

What is the best option?

It depends…

This post helps you make the best decision for your organization.

What Happens to Your Oracle Data Guard During Conversion to Multitenant

Oracle Data Guard is an amazing piece of tech. It helps keep your data safe. When you convert to the multitenant architecture, it is crucial that you don’t jeopardize your Data Guard configuration.

Follow the steps below to bring along your standby database.

What’s the Problem?

When you prepare for multitenant conversion, you prepare two things:

  • Data files – you make the data files consistent by opening the non-CDB in read-only mode.
  • Manifest file – you create an XML file which contains information about the non-CDB.

The manifest file contains information about the data files, including the location. However, the manifest file lists only the location on the primary database. There is no information about the standby database.

When you plug in the non-CDB, the plug-in happens without problems on the CDB primary database. It reads the manifest file and finds the data files.

But what about the CDB standby database? Since the manifest file does not list the file location on the standby host, how can the standby database find the corresponding data files?

The Options

There are two options, which you control with the standbys clause on the create pluggable database statement (see the sketch after the list):

  • Enabled recovery:
    • You specify standbys=all, or you explicitly list the standby database in the standbys clause.
    • On plug-in, the CDB standby database must find the data files. How the standby database finds the data files depends on the configuration.
    • The new PDB is protected by Data Guard immediately on plug-in.
    • If the standby database fails to find the data files, recovery stops for the entire CDB. All your PDBs are now unprotected unless you use PDB Recovery Isolation (see appendix).
  • Deferred recovery:
    • You specify standbys=none, or you don’t list the standby database in the standbys clause.
    • On plug-in, the CDB standby notes the creation of the PDB but does not attempt to find and recover the data files.
    • The new PDB is not protected by Data Guard until you provide the data files and re-enable recovery as described in Making Use Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1). Typically, this means restoring all data files to the standby system. The other PDBs in the CDB standby are fully protected during the entire process.
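
As a minimal sketch, this is where the standbys clause goes; the PDB name, manifest file location, and remaining clauses are assumptions:

create pluggable database pdb1
   using '/u01/stage/noncdb.xml'
   copy
   standbys=none;

Use standbys=all, or list specific standby databases, to plug in with enabled recovery.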

Convert with AutoUpgrade

You must convert with deferred recovery on the CDB standby database. AutoUpgrade uses this approach by default:

upg1.manage_standbys_clause=standbys=none

When AutoUpgrade completes, you must follow the process to restore the data files on the CDB standby database and re-enable recovery.

With AutoUpgrade, there is no way to plug in with enabled recovery. This includes the alias trick described below, which requires work on the primary and standby systems before the plug-in. AutoUpgrade is a fully automated process that does not allow you to intervene midway.

If you set manage_standbys_clause to anything but the default in an attempt to plug in with enabled recovery, you will most likely run into problems. Either the data files are missing on the standby system, or they are not at the right SCN. This stops the MRP process in the standby database. Since the MRP process recovers all the other PDBs as well, you break not only the recently added PDB but also all the other PDBs.

Convert Manually

ASM

You can plug in with enabled recovery and reuse the data files already on the standby. The standby database searches the OMF location for the data files. ASM does not allow you to manually move files into an OMF location. Instead, you can create aliases in the OMF location as described in Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration (Doc ID 2273304.1). The standby database then follows the plug-in operation.

This option won’t work if you use the as clone clause on the create pluggable database statement. The clause generates a new GUID, and since the GUID is part of the OMF location, you won’t be able to create the aliases upfront.

Alternatively, you can plug in with deferred recovery.

OMF in File System

You can plug in with enabled recovery and use the data files on the standby. The CDB standby database searches the OMF location for the data files. Either:

  • Move the data files into the OMF location.
  • Create soft links in the OMF location for each data file pointing to the current location.

These options won’t work if you want to use the as clone clause. The clause generates a new GUID, and since the GUID is part of the OMF location, you don’t know the OMF location upfront.

If you set standby_pdb_source_file_directory in the CDB standby database, it looks for the data files in that directory. However, it always copies the data files into the OMF location, even if you specify create pluggable database ... nocopy. On the other hand, standby_pdb_source_file_directory is compatible with the as clone clause.

Alternatively, you can plug in with deferred recovery.

Regular Files

The database uses regular files when db_create_file_dest is empty.

If you plug in with enabled recovery, the CDB standby database expects to find the data files in the exact same location (path and file name) as on the primary database. The location is either the full path from the manifest file or the location specified by create pluggable database ... source_file_directory='<data_file_location>'.

If the data files are in a different location on the CDB standby database, you can do either of the following (see the sketch after the list):

  • Set db_file_name_convert in your CDB standby database. This changes the name of each of the data files accordingly.
  • Set standby_pdb_source_file_directory in your CDB standby database. When media recovery looks for a specific file during plug-in, it searches this directory instead of the full path from the manifest file.
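
Here is a hedged sketch of both options; the paths are assumptions, and db_file_name_convert is a static parameter, so it requires a restart of the CDB standby:

-- Option 1: translate the primary data file paths to the standby paths
alter system set db_file_name_convert='/u01/oradata/SRCDB','/u02/oradata/STDBY' scope=spfile;

-- Option 2: let media recovery search a staging directory for the data files
alter system set standby_pdb_source_file_directory='/u01/stage/pdb1';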

You can plug in using the as clone clause without problems.

Alternatively, you can plug in with deferred recovery.

Refreshable Clone PDBs

When you migrate a non-CDB using refreshable clone PDBs, you are using a clone of the non-CDB database. Thus, there are no existing data files on the standby database that you can use.

You can only create a refreshable clone PDB with deferred recovery (standbys=none). After you transition the refreshable clone PDB into a regular, stand-alone PDB using alter pluggable database ... refresh mode none, you must follow the process to restore the data files and re-enable recovery. If you use AutoUpgrade, you must wait until the entire job completes.

Until you have completed the recovery process, the PDB is not protected by Data Guard.

For further information, including how Oracle Cloud Infrastructure makes it easier for you, have a look at Sinan’s blog post.

Important

Whichever method you choose, you must check your Data Guard configuration before going live.

  1. Check the recovery status on all standby databases:

    select name, recovery_status
    from   v$pdbs;
    
  2. Test the Data Guard configuration by performing a switchover.

Don’t go live without checking your Data Guard configuration!

Appendix

PDB Recovery Isolation

PDB Recovery Isolation is a new feature in Oracle Database 21c.

In an Active Data Guard environment, PDB recovery isolation ensures that media recovery of a CDB on the standby is not impacted when one or more PDBs are not consistent with the rest of the CDB.

Source: About PDB Recovery Isolation

If you plug in a database with standbys=all (via a refreshable clone PDB) and the standby database can’t find the data files, PDB recovery isolation kicks in:

  • The standby database disables recovery of the affected PDB.
  • The standby database restores the data files from the primary database.
  • After restore, the standby database re-enables recovery of the PDB.
  • The affected PDB is unprotected until the process is completed.
  • The other PDBs are unaffected by the situation.

PDB Recovery Isolation reduces risk and automates the resolution of the problem.

At the time of writing, it requires a license for Active Data Guard.

Further Reading

Thank You

A big thank you to my valued colleague, Sinan Petrus Toma, for teaching me about PDB recovery isolation.

Data Pump and Parallel Transportable Jobs

In migrations, you often use transportable tablespaces or Full Transportable Export/Import (FTEX). Downtime is always of concern when you migrate, so having Data Pump perform the transportable job in parallel is a huge benefit.

How much benefit? Let’s find out with a little benchmark.

The Results

Let’s start with the interesting part: how much time can you save using parallel transportable jobs in Data Pump?

The following table lists four different scenarios. Each scenario consists of an export and an import. The total is the time Data Pump needs to finish the migration – the sum of export and import. In a real migration, many other things are in play, but here, we are looking solely at Data Pump performance.

Export              Time     Import              Time     Total
19c, no parallel    2h 2m    19c, no parallel    6h 44m   8h 46m
23ai, parallel 4    1h 48m   23ai, parallel 4    2h 33m   4h 21m
19c, no parallel    2h 2m    23ai, parallel 16   1h 23m   3h 25m
23ai, parallel 16   1h 8m    23ai, parallel 16   1h 23m   2h 31m

The first row is what you can do in Oracle Database 19c: almost 9 hours. Compare that to the last row, which is what you can do with parallel 16 in Oracle Database 23ai: almost a 3.5x reduction.

If you migrate from Oracle Database 19c to Oracle Database 23ai (3rd row), you can still benefit from parallel import and gain a significant benefit.

The Benchmark

I used an Oracle Database 19c home and an Oracle Database 23ai home for the benchmark.

My test database:

  • E-Business Suite database
  • 300 GB physical size
  • 630,000 database objects, including
    • 89,000 tables
    • 66,000 indexes
    • 60,000 packages
  • CPU_COUNT = 16
  • SGA_TARGET = 64G

My test machine:

  • OCI shape VM.Standard.E4.Flex
  • 8 CPUs
  • 128 GB mem
  • Fast disks (max. IOPS 128,000)

How to

It’s very easy to enable parallel transportable jobs. If you want to use 16 parallel workers, on export:

expdp ... parallel=16 dumpfile=ftex%L.dmp

On import:

impdp ... parallel=16
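
For context, here is a hedged sketch of a complete parameter set for a full transportable export/import with parallel 16. The connect strings, directory object, and data file list are assumptions:

expdp system@srcdb full=y transportable=always parallel=16 \
      directory=mig_dir dumpfile=ftex%L.dmp metrics=y logtime=all

impdp system@tgtpdb parallel=16 \
      directory=mig_dir dumpfile=ftex%L.dmp metrics=y logtime=all \
      transport_datafiles='/u02/oradata/TGTPDB/users01.dbf'

On import, transport_datafiles must list every data file you copied to the target system.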

What Happens

A Data Pump job consists of several object paths that Data Pump must process. An object path could be tables, indexes, or package bodies.

Parallel Export

Each worker takes an object path and starts to process it. The worker works alone on this object path. You get parallelism by multiple workers processing multiple object paths simultaneously.

Parallel Import

During import, Data Pump must process each object path in a certain order. Data Pump can only import constraints once it has imported tables, for example.

Data Pump processes each object path in the designated order, then splits the work in one object path to many workers. You get parallelism by multiple workers processing one object path in parallel.

The Fine Print

  • Parallel transportable jobs work in Oracle Database 21c and later. In Oracle Database 19c, a transportable job has no parallel capabilities.

  • Data Pump can use parallel only on transportable jobs via a dump file. Network mode is not an option for parallel transportable jobs.

  • If you export in Oracle Database 19c (which does not support parallel transportable jobs), you can still perform a parallel import into Oracle Database 23ai.

  • The export parallel degree and import parallel degree do not have to match. You can export with parallel degree 1 and import with parallel degree 16.

  • When you enable parallel jobs, Data Pump starts more workers. How many extra resources do they use?

    • I didn’t notice any significant difference in undo or temp tablespace use.
    • I didn’t notice any extra pressure on the streams pool, either. I had the streams pool set to 256M, and the database didn’t perform any SGA resize operation during my benchmark.

Conclusion

For migrations, parallel transportable jobs in Data Pump are a huge benefit. Every minute of downtime often counts, and this has a massive impact.

Understand How a Change of Database Time Zone Affects Transportable Tablespaces

In a recent migration using full transportable export/import, I noticed a lot of time spent on the following SQL:

SELECT NVL((SELECT 2
            FROM   sys.ku$_all_tsltz_tables 
            WHERE  owner = :1 AND table_name = :2), 0) 
FROM   sys.dual

The SQL ID was g3qu7py3g0yg0. Each execution of the SQL was a few seconds, but Data Pump executed the statement many times.

Data Pump also wrote in the log file that the database timezone differed:

01-NOV-23 07:43:22.152: W-1 Source time zone is +00:00 and target time zone is -07:00.

The following applies to full transportable export/imports using the following Data Pump parameters:

full=y
transportable=always

What Happens?

In a full transportable import, if the database time zone differs, Data Pump translates any data of type TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ) to the new database time zone.

On startup, Data Pump detects the difference between the source and target database time zones. For each table, Data Pump checks whether it must convert data. If needed, the data is converted from the source database time zone and stored in the target database time zone.

The check and conversion take time. This is the price you must pay to change the database time zone on import.

Alternatively, you can import into a database with the same time zone. Then Data Pump skips the check and conversion completely.

In the migration, we could save 22 minutes on import by importing into the same database time zone.

Changing the Database Timezone

You can find the database timezone using:

select dbtimezone from dual;

If you don’t have any tables with TSLTZ columns, you can change the database timezone:

alter database set time_zone='+00:00';
shutdown immediate
startup
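
To check whether the database has any TSLTZ columns before changing the time zone, a query like this sketch works; you may also want to exclude Oracle-maintained schemas:

select owner, table_name, column_name
from   dba_tab_columns
where  data_type like 'TIMESTAMP%WITH LOCAL TIME ZONE';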

The database timezone affects only:

  • TSLTZ columns
  • Function CURRENT_DATE
  • Function CURRENT_TIMESTAMP
  • Function LOCALTIMESTAMP

If you don’t use any of the above, it should be safe to change the database timezone.

Columns of type Timestamp with timezone (TSTZ) and the database timezone file (v$timezone_file) are totally unrelated to the database timezone.

Full Transportable vs. Traditional Transportable

In a traditional transportable import, Data Pump does not import tables with columns of type TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ). You must move such tables using a regular Data Pump import.

As mentioned above, in a full transportable export/import, Data Pump translates the data to match the new database time zone.
