Copy Data Pump Files Before the End of the Export

In Oracle Database 23ai, you can copy the dump files even before the export completes.

This saves time during your migration because you can start moving files to the target host while the export is in progress. Additionally, it potentially saves disk space because you can move the files away from the source host.

Which Files Can You Move?

  1. After starting the Data Pump export, you must connect to the database using:

    • The same user who started the export
    • A user with the DATAPUMP_EXP_FULL_DATABASE role
  2. Replace the job owner and name, and execute:

    set serverout on
    declare
       l_job_owner    varchar2(30) := 'DPUSER';
       l_job_name     varchar2(30) := 'SYS_EXPORT_FULL_01';
       l_handle       number;
       l_stsmask      integer := dbms_datapump.ku$_status_job_status;
       l_job_state    varchar2(30);
       l_status       ku$_status;
       l_dump_file    ku$_dumpfile;
    begin
       -- Attach to the running export job and get its current status
       l_handle := dbms_datapump.attach(l_job_name, l_job_owner);
       dbms_datapump.get_status(l_handle, l_stsmask, NULL, l_job_state, l_status);
       dbms_datapump.detach(l_handle);

       -- Loop over all dump files known to the job
       for i in l_status.job_status.files.first .. l_status.job_status.files.last loop

          l_dump_file := l_status.job_status.files(i);

          -- Skip the dump file template (the %L specification itself)
          if l_dump_file.file_type = dbms_datapump.ku$_dumpfile_type_template then
             continue;
          end if;

          -- A file is complete when Data Pump has written it to its full size
          if l_dump_file.file_bytes_written = l_dump_file.file_size then
             dbms_output.put_line('DONE: ' || l_dump_file.file_name);
          end if;

       end loop;
    end;
    /
    
    
  3. The code lists all the files that Data Pump has marked as completed. Data Pump no longer writes to those files, and you can safely start moving them to the target system.

Prerequisites

  • You must use multiple dump files. You do that by including %L in your DUMPFILE specification.
    DUMPFILE=exp%L.dmp
    
  • You must specify a file size that allows Data Pump to rotate into multiple files when they are full. I suggest using 5G, which is also a good setting for most cloud migrations.
    FILESIZE=5G
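
Putting the prerequisites together, the relevant part of a Data Pump parameter file could look like this (the directory, parallel, and export mode settings are just illustrative):

    DIRECTORY=dump_dir
    DUMPFILE=exp%L.dmp
    FILESIZE=5G
    PARALLEL=4
    FULL=Y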
    

What About

  • The code compares the maximum file size with the bytes written to determine whether a file is full. This is not the same as the physical size of the file in the operating system, so you can’t rely on the file size reported by the file system.

  • What about using rsync or similar tools? I guess that could work; however, we didn’t test that.

  • You could query the Data Pump control table for the information:

    SELECT file_name
    FROM <control_table>
    WHERE process_order = -21
    AND file_max_size = completed_bytes
    AND file_max_size <> 0
    ORDER BY file_name;
    
    • However, querying the control table is not a supported approach. Use the above PL/SQL.

Happy exporting!

Get the Slides and Recording of “Migration to Oracle Autonomous Database – Part 1: Planning”

Last week, we aired our first webinar in our 4-part series on migration to Oracle Autonomous Database.

Migration to Oracle Autonomous Database – Part 1: Planning

The slides are available for download, and you can read the Q&A.

The Next Webinars

To complete your journey to Autonomous Database, be sure to sign up for the other webinars:

Often, you need to make changes to the database or application to fit into an Autonomous Database. In part 2, we show you how to deal with the findings from the Cloud Premigration Advisor Tool (CPAT).

Part 3 is all about demos—demos on how to migrate to Autonomous Database using a number of different techniques.

Finally, how is the life of a DBA after migrating to an Autonomous Database? Is there anything left to do? We think there is, and we’ll show you the cool life of a DBA on an Autonomous Database.

Happy Migrating

We’re looking for customers to collaborate with. If you’re already working on a migration project, contact us today. You might be our next reference customer.

As a last remark, subscribe to our YouTube channel so you never miss out on our tech videos.

Fast Refresh Materialized Views and Migrations Using Transportable Tablespaces

In some databases, fast refresh materialized views are important for good performance. During a migration using transportable tablespaces, how do you ensure the materialized views are kept up-to-date and avoid a costly complete refresh or stale data?

The Basics

The database populates a materialized view with data from a master table. To enable fast refresh on a materialized view, you must first create a materialized view log on the master table. The materialized view log captures changes on the master table. Then, the database applies the changes to the materialized view, thus avoiding a complete refresh.

When a user changes data in the master table (SALES), those changes are recorded in the materialized view log (MLOG$_SALES) and then used to refresh the materialized view (SALES_MV).

The master table and the materialized view log must reside in the same database (known as the master database). The materialized view is often in a different database and uses a database link to fetch the changes.

You only need materialized view logs if the materialized view is a fast refresh type.

A fast refresh materialized view needs to perform a complete refresh the first time, before it can move on with fast refreshes (thanks to my good friend, Klaus, for leaving a comment).
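
As a minimal sketch (table, column, and database link names are just for illustration), the setup could look like this:

    -- In the master database
    create materialized view log on sales with primary key;

    -- In the database holding the materialized view
    create materialized view sales_mv
       refresh fast on demand
       as select * from sales@master_db_link;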

Remote Master Table, Local Materialized View

In this example, you are migrating a database from source to target. This database holds the materialized view, and a remote database acts as the master database, where the master table and materialized view log reside.

Migrating a database which holds a materialized view that uses a remote master database

The materialized view and the underlying segment are stored in a tablespace in the source database. That segment is used to recreate the materialized view in the target database without any refresh needed.

  1. You must perform the following in the migration downtime window.
  2. In the source database, stop any periodic refresh of the materialized view.
  3. Optionally, perform a fast refresh of the materialized view:
    exec dbms_mview.refresh ('sales_mv','f');
    
  4. Start the migration using transportable tablespaces.
    • Set the tablespaces read-only.
    • Perform a final incremental backup.
    • Data Pump sets the tablespaces read-write in the target database.
    • Shut down the source database. Otherwise, you might risk it interfering with the target database’s refresh.
  5. In the target database, you can now perform a fast refresh.
    • Data Pump has already recreated the database link to the remote master database.
    • Although the materialized view is now in a different database, it can fetch the recent changes from the master database starting with the last refresh in the source database.
  6. In the master database, both materialized views from the source and target database are now registered:
    select * from dba_registered_mviews where name='SALES_MV';
    
  7. Purge materialized view log entries that are related to the materialized view in the source database:
    exec dbms_mview.purge_mview_from_log('<mview-owner>', 'SALES_MV', '<source-db>');
    
  8. Unregister the materialized view in the source database:
    exec dbms_mview.unregister_mview('<mview-owner>', 'SALES_MV', '<source-db>');
    
  9. In the target database, re-enable periodic refresh of the materialized view.

Local Master Table, Remote Materialized View

In this example, you are migrating a database from source to target. This database holds the master table and materialized view log, while a remote database contains the materialized view.

Migrating a database which acts as master database holding a master table and materialized view log

The master table and materialized view log are stored in a tablespace in the source database, and the migration moves them to the target database. The materialized view stays in the remote database and accesses the master database over a database link.

  1. You must perform the following in the migration downtime window.
  2. In the remote database, stop any periodic refresh of the materialized view.
  3. Optionally, perform a fast refresh of the materialized view:
    exec dbms_mview.refresh ('sales_mv','f');
    
  4. Start the migration using transportable tablespaces.
    • Set the tablespaces read-only.
    • Perform a final incremental backup.
    • Data Pump sets the tablespaces read-write in the target database.
    • Shut down the source database. Otherwise, you might risk it interfering with the target database’s refresh.
  5. In the remote database, ensure that the database link now points to the new target database.
    • If the database link uses a TNS alias, you can update it.
    • Or recreate the database link with a new connect descriptor (see the sketch after this list).
  6. Perform a fast refresh.
    exec dbms_mview.refresh ('sales_mv','f');
    
    • If you hit ORA-04062: timestamp of package "SYS.DBMS_SNAPSHOT_UTL" has been changed, run the refresh again.
  7. Re-enable periodic refresh of the materialized view.
  8. In the target database, ensure that the materialized view log is now empty.
    select * from mlog$_sales;
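
If you need to recreate the database link from step 5, a sketch could look like this (the link name, user, password, and TNS alias are placeholders):

    drop database link sales_master;
    create database link sales_master
       connect to mview_reader identified by "MyPassword123"
       using 'TARGETDB';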
    

Local Master Table, Local Materialized View

In this example, you are migrating a database from source to target. This database holds the master table, the materialized view log, and the materialized view. There is no remote database involved.

Migrating a database which acts as master database holding a master table and materialized view log

The master table and materialized view log are stored in a tablespace in the source database. The migration moves the data to the target database. The materialized view is in the same database, so there is no remote database or database link involved.

  1. You must perform the following in the migration downtime window.
  2. In the source database, stop any periodic refresh of the materialized view.
  3. Perform a fast refresh of the materialized view:
    exec dbms_mview.refresh ('sales_mv','f');
    
  4. Ensure the materialized view log is empty, i.e., all rows have been refreshed into the materialized view.
    select count(*) from mlog$_sales;
    
    • The count must be 0. If not, perform an additional fast refresh.
    • If a remote materialized view also uses the materialized view log, it is acceptable to move on, provided you are sure the local materialized view is completely up-to-date.
  5. Start the migration using transportable tablespaces.
    • Set the tablespaces read-only.
    • Perform a final incremental backup.
    • Data Pump sets the tablespaces read-write in the target database.
    • Shut down the source database. Otherwise, you might risk it interfering with the target database’s refresh.
  6. In the target database, perform a fast refresh.
    exec dbms_mview.refresh ('sales_mv','f');
    
  7. Re-enable periodic refresh of the materialized view.
  8. Ensure that the materialized view log is now empty.
    select * from mlog$_sales;
    

A word of caution here. The materialized view must be completely up-to-date in the source database before the migration. After the migration, the same materialized view won’t be able to refresh the pre-migration rows. That is why you are checking for rows in mlog$_sales.

Any new changes made in the target database will sync fine.
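
In all scenarios, you can verify the state of the materialized view in the database that holds it; for example:

    select mview_name, last_refresh_type, last_refresh_date, staleness
    from   dba_mviews
    where  mview_name = 'SALES_MV';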


A Few Details about Using Refreshable Clone PDB for Non-CDB to PDB Migration

Our team has been advocating the use of refreshable clone PDB for non-CDB to PDB migrations using AutoUpgrade. It is a great feature, and our entire team loves it – so do many of the customers we work with.

However, in a recent non-CDB to PDB migration, we encountered some issues with refreshable clone PDB and AutoUpgrade.

Can My Target Container Database Be a RAC Database?

Yes, this works perfectly fine.

Be aware that the CREATE PLUGGABLE DATABASE statement scales out on all nodes in your cluster. By default, the database also uses parallel processes, so this can put quite a load on the source non-CDB. Consider restricting the number of parallel processes using the AutoUpgrade config file parameter:

upg1.parallel_pdb_creation_clause=4

Since the creation scales out on all nodes, all nodes must be able to resolve the connect identifier to the source non-CDB. If you use an alias from tnsnames.ora, be sure to add that on all nodes. Failure to do so will lead to an error during the CREATE PLUGGABLE DATABASE command:

ERROR at line 1:
ORA-65169: error encountered while attempting to copy file
+DATAC1/SRCDB/DATAFILE/system.262.1178083869
ORA-17627: ORA-12154: TNS:could not resolve the connect identifier specified
ORA-17629: Cannot connect to the remote database server
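
A minimal tnsnames.ora entry for the source non-CDB, which must exist on every node of the target cluster, could look like this (host, port, and service name are placeholders):

SRCDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = srcdb.example.com))
  )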

What Happens If the Source Database Extends a Data File?

If the source database extends a data file – either through AUTOEXTEND ON NEXT or manually by a user – the target database extends the matching data file as well. Here is an extract from the target alert log when it extends a data file:

2024-08-27T07:01:26.671975+00:00
PDB1(4):Media Recovery Log +RECOC1/SRCDB/partial_archivelog/2024_08_27/thread_2_seq_4.276.1178089277
2024-08-27T07:01:32.773191+00:00
PDB1(4):Resize operation completed for file# 26, fname +DATA/TGTCDB_HBZ_FRA/20A568D1FD5DB0A6E0633D01000AC89B/DATAFILE/srctbs02.290.1178089287, old size 10240K, new size 1058816K

It works with smallfile and bigfile tablespaces.

What Happens If I Create a Tablespace on the Source Database?

The target database attempts to create the same tablespace.

For this to work, one of the following must be true:

  • The target CDB uses Oracle Managed Files (DB_CREATE_FILE_DEST is set), so the database can generate a new file name on its own.
  • PDB_FILE_NAME_CONVERT is set in the target CDB, so the database can translate the source file name to a target location.

If neither of the above is true, you’ll receive an error during ALTER PLUGGABLE DATABASE ... REFRESH:

ORA-00283: recovery session canceled due to errors
ORA-01274: cannot add data file that was originally created as
'+DATAC1/SRCDB/DATAFILE/srctbs04.282.1178091655'
You can use PDB_FILE_NAME_CONVERT instead.

It works with smallfile and bigfile tablespaces.

What Happens If I Add a Data File to an Existing Tablespace?

The target database attempts to add a matching data file.

The target database must be able to translate the data file location according to the section above.

2024-08-27T06:51:19.294612+00:00
PDB1(4):Media Recovery Log +RECOC1/SRCDB/partial_archivelog/2024_08_27/thread_2_seq_4.276.1178088679
2024-08-27T06:51:20.268208+00:00
PDB1(4):Successfully added datafile 25 to media recovery
PDB1(4):Datafile #25: '+DATA/TGTCDB_HBZ_FRA/20A568D1FD5DB0A6E0633D01000AC89B/DATAFILE/srctbs01.289.1178088681'

What Happens If I Set a Tablespace Read-Only?

A refreshable clone PDB does not support this, nor does it support going the other way: setting a tablespace read-write.

If you do so, the database reports an error:

alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB

From the alert log:

2024-08-28T05:23:02.893946+00:00
PDB2(6):Error! unsupported source PDB operation: 21
2024-08-28T05:23:02.994035+00:00
PDB2(6):Media Recovery failed with error 65339

Operation 21 is setting a tablespace read-only. If you set a tablespace read-write, the database reports operation 20 instead.

PDB2(7):Error! unsupported source PDB operation: 20

You will not be able to refresh the PDB anymore. You must re-create the refreshable clone PDB.

What Happens If I Restart the Source Database?

Refreshable clone PDB does not support restarting the source database.

When you restart the source database, the source database places a special marker in the redo stream. This even happens for a clean shutdown (SHUTDOWN NORMAL). The target CDB does not understand how to recover beyond this marker.

alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-65339: unsupported operation on the source PDB

From the alert log:

2024-08-28T05:27:00.451985+00:00
PDB2(4):Error! unsupported source PDB operation: 3
2024-08-28T05:27:00.710236+00:00
PDB2(4):Media Recovery failed with error 65339

Operation 3 is the source database restart.

You will not be able to refresh the PDB anymore. You must re-create the refreshable clone PDB.

How Do I Drain My Source Database Before Migration?

Right before the migration, when you cut the connection from the source non-CDB to the target PDB, it could be useful to restart the database. But that’s not possible.

I suggest that you:

  • Ensure that the target CDB connects to the source non-CDB using a dedicated service. This applies to the database link that you establish between the two databases.
  • Stop all other services and specify a drain timeout.
  • Shut down the application that connects to the source non-CDB.
  • Kill sessions manually.

Remember that the target database connects to the source database via a database link, so stopping the database listener is not an option. Nor is enabling RESTRICTED SESSION.
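
As a sketch of the manual part, assuming the database link from the target CDB uses a dedicated service called SRCDB_MIG, you could generate kill statements for every other user session:

select 'alter system kill session ''' || sid || ',' || serial# || ''' immediate;'
from   v$session
where  type = 'USER'
and    service_name <> 'SRCDB_MIG';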

Update: Armando managed to perform the migration using restricted session. Check his comment (see below) for details.

What Happens If I Restart the Target Container Database?

You can safely restart the target CDB while you have a refreshable clone PDB. This works fine.

What About NOLOGGING Operations?

You can’t perform NOLOGGING operations on the source database.

Since refreshable clone PDB relies on redo, a NOLOGGING operation on the source prevents that data from reaching the target. When you try to query the NOLOGGING table on the target database after the migration, you will receive an error:

SQL> select count(*) from t1
       *
ERROR at line 1:
ORA-28304: Oracle encrypted block is corrupt (file # 186, block # 131)
ORA-01110: data file 186:
'+DATA/TGTCDB_HBZ_FRA/20CF181D4A925E06E0633D01000ACB50/DATAFILE/srctbs01.297.117
8266961'
ORA-26040: Data block was loaded using the NOLOGGING option

Thanks to Marcelo for leaving a comment. He suggests that you set the source non-CDB in FORCE LOGGING mode. This is a good idea to avoid this potential nightmare:

alter database force logging;
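
You can verify the setting afterward:

select force_logging from v$database;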

You can read more about NOLOGGING operations in The Gains and Pains of Nologging Operations (Doc ID 290161.1).

What About Hot Backups?

You can’t perform hot backup operations on the source database.

If you do so, you’ll run into the following error:

2025-11-21T14:31:06.845676+00:00
SALES(4):Error! unsupported source PDB operation: 1
2025-11-21T14:31:07.845923+00:00
SALES(4):Media Recovery failed with error 65339

Please note that I’m not referring to RMAN online backups. I’m talking about the old-school ALTER DATABASE BEGIN BACKUP and ALTER DATABASE END BACKUP commands.

Any Restrictions on Data Types or Object Types?

No. The refreshable clone is a physical copy of the database, so there are no restrictions on data types or object types.

Services

You must recreate your services after the migration. Neither database-managed services nor Clusterware-managed services survive the migration.
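
As a sketch (database, PDB, and service names are placeholders), recreating a Clusterware-managed service could look like this:

$ srvctl add service -db TGTCDB -service SALES_APP -pdb PDB1
$ srvctl start service -db TGTCDB -service SALES_APP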


Summary

Despite these minor restrictions, migration from non-CDB to PDB using refreshable clone PDB and AutoUpgrade is still a very handy method. Knowing the restrictions upfront ensures that you can successfully migrate the database.

Happy migrating!

Oracle Database 23ai Is Here – Time to Sharpen Multitenant Skills

Last week Oracle announced the release of Oracle Database 23ai with many significant enhancements. It is available in Oracle Cloud Infrastructure, and according to Release Schedule of Current Database Releases (Doc ID 742060.1), other platforms will follow soon.

One important thing about Oracle Database 23ai is that it only supports the multitenant architecture. Once you upgrade beyond Oracle Database 19c, you must also convert your database to a pluggable database.

To give you the best possible starting point for the multitenant migration, our team has prepared two webinars about the multitenant architecture. Actually, we planned on just one. Still, we have so many things to share that we decided to make two webinars. When product managers get a chance to talk, they talk a lot!

Move to Oracle Database 23ai – Everything you need to know about Oracle Multitenant

Part 1

May 16, 14:00 CEST, 2024

  • Multitenant architecture
    • Introduction
    • Consolidation strategies
    • Benefits
  • Creation of container database
    • Recommendations
    • Parameters
  • Migration methods
    • Best practices
    • Data Guard
    • RAC
    • Transparent Data Encryption (TDE)
    • What to remember after migration
    • Rollback and fallback
    • Customer case

Part 2

June 27, 14:00 CEST, 2024

  • Operations
    • Cloning
    • Connecting
    • Running scripts
    • Resource Manager
    • Tips and tricks
    • Recommendations
    • Customer case
  • Patching
    • Entire CDB
    • Individual PDB
    • Recommendations
    • Datapatch
  • Upgrading
    • CDB upgrades
    • PDB upgrades (unplug-plug)
    • Replay upgrade
    • Best practices
    • Downgrade
    • Customer case

Sign Up

You can sign up here. My team (Mike, Rodrigo, Roy and Alex) will be there to answer all your questions. I promise we won’t end the webinar until there are no more questions.

If you miss the webinar, you can watch a recording later on our YouTube channel. Be sure to subscribe so you don’t miss out.

Even if you already use Oracle Database on multitenant architecture, I guarantee there are still new things to learn.

As always: All tech, no marketing!

How to Migrate from Oracle v5 to Oracle Database 23c

More often than you think, I get emails asking about migration options from ancient Oracle Database releases. Typically, it is from Oracle 8i or 9i. My high score is an inquiry about Oracle 7.2.

My good colleague, Rodrigo, calls it migrations from Jurassic Park data centers.

At one point, our team got so many questions about it that we decided to create a presentation. It premiered at Oracle DatabaseWorld at CloudWorld 2023. We named it:

Help! My Database Is Still On 8i

Migration Options From Oracle v5 and Newer

In Oracle Database 10g, Oracle introduced Data Pump – replacing the original export and import utilities. If possible, use Data Pump.

In older versions, you must use the original export utility, exp. In recent versions of Oracle Database, Oracle desupported original export, but the original import utility, imp, is still supported for migrations from really old Oracle Database releases.

Oracle released original export back in Oracle v5. You can export data from Oracle v5 (or newer) and import it directly into an Oracle Database 23c PDB. That is 40 years of backward compatibility. Rather impressive!

From Oracle 8i Database, you can use same-platform transportable tablespaces. From Oracle Database 10g, you can even do cross-platform transportable tablespaces.

To Which Version Should You Migrate?

If you have such old databases in your environment, I strongly recommend migrating to Oracle Autonomous Database. In Autonomous Database, Oracle will take care of the database for you. Oracle will patch it, maintain it, and upgrade it. You will never end up with a legacy database again. Note that the original import utility does not meet the security standards of Autonomous Database, so it is impossible to import directly into Autonomous Database. In such a situation, you first import into a staging database that supports Data Pump, and then into Autonomous Database.

Otherwise, I recommend going to the latest long-term support release. Also, you should migrate to the multitenant architecture and import directly into a PDB.

Client/Server Interoperability

Before you migrate the database, you must come up with a plan for your clients and applications. Older database clients can’t connect to modern databases.

With each Oracle Database release, the authentication mechanism changes to become more secure and support newer features. This means that clients must be of a certain version to communicate with the database.

If you migrate to Oracle Database 23c, your clients must be on 19c or newer. Check Client / Server Interoperability Support Matrix for Different Oracle Versions (Doc ID 207303.1) for details.

How to Migrate From 8i to Autonomous Database

Here’s a video showing you such a migration step-by-step.

The biggest pitfall in such a migration is the loss of characters due to incorrect character set conversion. You must set the NLS_LANG environment variable to the character set of the database. Otherwise, you’ll use the default language setting of your operating system, and that might lead to character loss.

Only in specific situations do you set the NLS_LANG to something else.
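
As a sketch, assuming the source 8i database uses the WE8ISO8859P1 character set (file names and connect strings are illustrative):

$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
$ exp system/<password> full=y file=full_exp.dmp log=full_exp.log
$ # transfer the dump file, then on the import side:
$ imp system/<password>@staging_pdb full=y file=full_exp.dmp log=full_imp.log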

Appendix

Fun Fact

Here’s a list of Oracle Database releases, including their year of release, marque features, and a mobile phone of the same year.

Year of release | Release | Feature | Mobile phone
1985 | Oracle v5 | Support for OS/2 | Motorola DynaTAC – retail price in today’s prices: $12,000
1988 | Oracle v6 | Oracle Parallel Server (OPS) | Motorola MicroTAC
1996 | Oracle 7.3 | Cost-based optimizer | Motorola StarTAC – popular among Trekkies because it looks like a communicator
1998 | Oracle 8i Database | RMAN | Nokia 5110 – you could change the cover (Xpress-on covers)
2001 | Oracle 9i Database | Real Application Clusters (RAC) | Sony Ericsson T68 – first phone with a color screen
2003 | Oracle Database 10g | Data Pump | Nokia 6600 – still one of the most sold units
2007 | Oracle Database 11g | Exadata | iPhone
2013 | Oracle Database 12c | Multitenant | BlackBerry Q10
2019 | Oracle Database 19c | Automatic Indexing | Huawei Mate X – foldable screen

Further Reading

Things to Consider When Importing Advanced Queues using Oracle Data Pump

Oracle Data Pump supports moving Advanced Queues (AQ) using export/import and full transportable export/import. But there are some things to be aware of.

Data Pump Does Not Start Queues

Data Pump creates the queues during import but does not start the queues. At the end of the import, you must manually start the queues:

exec dbms_aqadm.start_queue(queue_name => ... );

Be sure to start all queues:

select owner, name, queue_table, enqueue_enabled, dequeue_enabled 
from   dba_queues;

This behavior is intentional. During a migration, you typically don’t want to use the queues; depending on your use case, using them could have unintended side effects. After the migration, when you have confirmed a successful migration and are ready to go live, you can manually start the queues.

If you forget to start the queues, your application will start to receive the following error:

ORA-25207: enqueue failed, queue string.string is disabled from enqueueing
ORA-25226: dequeue failed, queue string.string is not enabled for dequeue
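
If you have many queues, a small PL/SQL loop can start them all; here is a sketch, assuming the application schema is called APPUSER:

set serverout on
begin
   for q in (select owner, name
             from   dba_queues
             where  owner = 'APPUSER'
             and    queue_type = 'NORMAL_QUEUE'
             and    trim(enqueue_enabled) = 'NO') loop
      dbms_output.put_line('Starting ' || q.owner || '.' || q.name);
      dbms_aqadm.start_queue(queue_name => q.owner || '.' || q.name);
   end loop;
end;
/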

The Database Does Not Create All Queue Objects

Data Pump creates the queues using the Advanced Queuing administration API DBMS_AQADM:

exec dbms_aqadm.create_queue_table( ...

The database then creates the essential parts of the underlying queue infrastructure: tables, views, IOTs, etc. This should include:

  • <queue_table_name>
  • AQ$_<queue_table_name>_E
  • AQ$_<queue_table_name>_I
  • AQ$_<queue_table_name>_T
  • AQ$_<queue_table_name>_F

In the source database, depending on your configuration and use of Advanced Queueing, you might also see the following objects:

  • AQ$_<queue_table_name>_C
  • AQ$_<queue_table_name>_D
  • AQ$_<queue_table_name>_G
  • AQ$_<queue_table_name>_H
  • AQ$_<queue_table_name>_L
  • AQ$_<queue_table_name>_P
  • AQ$_<queue_table_name>_S
  • AQ$_<queue_table_name>_V

This is intentional. The database creates those objects when needed. You should never create them or otherwise touch the underlying queue objects.

Typically, as part of a migration, you compare the count of objects in the source and target database to ensure nothing is lost. This comparison must consider the above.

For the queues themselves, it is sufficient to compare using the following query:

SQL> select count(*) 
     from dba_objects
     where owner='<schema>' 
           and object_type = 'QUEUE';

Definition Of Queue Tables Might Change

As stated, you should not worry about the underlying queue objects. However, if you compare the underlying queue objects, you might see that the definition of the object changes. I describe this situation in a different blog post.

Migrate Old Queues

If the queues are REALLY old, the queue compatibility setting might also be old. If so, I strongly recommend migrating the old queues to the newest version. You can learn more about that in this blog post.
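
A sketch of such a migration (the queue table name is a placeholder; check the documentation for the compatible values available in your release):

exec dbms_aqadm.migrate_queue_table(queue_table => 'APPUSER.ORDERS_QT', compatible => '10.0');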


If Importing Statistics Using DBMS_STATS Is Slow

When migrating Oracle Databases, you often transport the statistics using dbms_stats. Our team got involved in an interesting case the other day, and I would like to share the solution with you.

Problem

A customer migrated to Oracle Database 19c and decided to move the statistics using dbms_stats.import_schema_stats. They noticed that the procedure started to burn a lot of CPU, and after 38 hours, they gave up.

A SQL Monitor report showed one statement that spent almost all of the time: an insert into sys.dbms_stats_id_map_tab with a subselect referencing the staging table. The staging table is the one you specify in the stattab argument in the call to dbms_stats.import_schema_stats. It holds the optimizer statistics in a portable format, from which the procedure inserts them into the data dictionary in the proper format.

Obviously, they could already see that the procedure would take far too much time.

Diagnosing

The first shot was to look at the code behind dbms_stats. But this specific code hadn’t changed since its introduction many releases ago.

Recursive statements that touch the data dictionary immediately brought our attention to dictionary statistics. But the customer told us that they were not stale.

The customer ran SQL Tuning Advisor on the offending statement, and one of the findings was about stale statistics. SQL Tuning Advisor recommended gathering statistics on the staging table and corresponding indexes.

One of our performance experts looked at the execution plan and found a pattern he had seen before. He tried to disable Join Predicate Push Down in the session. It helped, but this was just a workaround. We wanted to find the root cause.

Solution

SQL Tuning Advisor came up with the real problem: stale statistics on the staging table and corresponding indexes. Once the customer gathered statistics on the staging table and indexes, the import of statistics finished in 2 hours and 27 minutes, which was acceptable.

We also discovered that the dictionary statistics were not as accurate as the customer had initially concluded. In fact, by mistake, they had misinformed us. A fresh run of dbms_stats.gather_dictionary_stats gave a slight performance improvement as well.

Recommendations

These recommendations are now part of our best practices for migrations.

  1. Before importing statistics using dbms_stats.import_schema_stats, gather stats on the staging table.

  2. Immediately after importing a lot of data, and before you import statistics with dbms_stats or anything else, you should gather dictionary statistics. This applies to regular Data Pump imports and transportable tablespace metadata imports.

The run book should look like this:

$ impdp parfile=import.par ...
$ sqlplus / as sysdba

SQL> exec dbms_stats.gather_dictionary_stats;
SQL> exec dbms_stats.gather_table_stats( -
		ownname=>'SYSTEM', -
		tabname=>'MY_STG_TAB_1', -
		cascade=>TRUE);
SQL> exec dbms_stats.import_schema_stats( -
		ownname=>'SYSTEM', -
		stattab=>'MY_STG_TAB_1', -
		statown=>'SYSTEM');

In the above example, the staging table is called SYSTEM.MY_STG_TAB_1.

The recommendation applies as well if you are using the procedures dbms_stats.import_database_stats or dbms_stats.import_table_stats.

Dictionary Statistics

Why is it important to gather dictionary statistics immediately after the import?

When you import data or use transportable tablespaces, you often have a brand new, empty database. Then you import a lot of objects. Those objects are represented as rows in the data dictionary. For instance, the tables you import now appear as rows in SYS.TAB$, the partitions in SYS.TABPART$, the indexes in SYS.IND$, and so forth.
Before the import, those internal tables were almost empty; now they contain a lot of rows, which means their statistics are stale. When you start to use functionality in the database, like importing statistics, the database runs recursive queries against those internal tables. With stale dictionary statistics, you can get suboptimal execution plans and bad performance. Gathering dictionary statistics fixes this for you.

Conclusion

Statistics are always vital, whether optimizer statistics on user data or internally in the data dictionary. Be sure to verify the accuracy of your statistics when you have problems.

Also, SQL Tuning Advisor is a great tool. It can quickly come up with suggestions for fixing problems. Use the recommendations as input to your troubleshooting. SQL Tuning Advisor also works on internal tables.

Additional Information

We have a few videos on our YouTube channel which have more information about transporting statistics with dbms_stats.

XTTS: Prerequisites

When migrating an Oracle Database to a different endian format using transportable tablespaces and incremental backups (XTTS), a number of requirements must be met. The following requirements apply when using:

V4 Perl Script

The most important requirements – for a complete list check MOS note:

  • Windows is not supported.
  • RMAN on the source system must not have DEVICE TYPE DISK configured to COMPRESSED.
  • RMAN on the source system must not have default channel configured to type SBT.
  • For Linux: Minimum version for source and destination is 11.2.0.3.
  • Other platforms: Minimum version for source and destination is 12.1.0.2.
  • Disk space for a complete backup of the database on both source and target host. If your data files take up 100 TB, you need an additional 100 TB of free disk space. For 12c databases, and if your data files have a lot of free space, the backup might be smaller due to RMAN unused block compression.

Also worth mentioning: during the roll forward phase (applying the level 1 incremental backups), the Perl script needs to restart the target database. Applying the incremental backups to the target data files happens in NOMOUNT mode. Be sure nothing else uses the target database while you roll forward.

Block Change Tracking (BCT) is strongly recommended on the source database. Note that this is an Enterprise Edition feature (in OCI: DBCS EE-EP or ExaCS). If you don’t enable BCT, the incremental backups will be much slower because RMAN has to scan every single data block for changes. With BCT, the database keeps track of changes in a special file, and RMAN simply reads the change tracking file to get the list of data blocks to include in the backup.
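
You can check and enable it like this (if the database does not use OMF, add a USING FILE clause with a file destination):

SQL> select status, filename from v$block_change_tracking;
SQL> alter database enable block change tracking;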

What If – V3 Perl Script

If disk space is a problem or if you can’t meet any of the other requirements, check out the below two MOS notes:

They describe a previous version of the Perl script, version 3. The scripts use DBMS_FILE_TRANSFER to perform the conversion of the data files in-flight. That way no extra disk space is needed. However, DBMS_FILE_TRANSFER has a limitation that data files can’t be bigger than 2 TB.

Also, the V3 scripts might be useful for very old databases.

Transportable Tablespaces In General

To get a complete list of limitations on transporting data, you should look in the documentation. The most notable are:

  • Source and target databases must have compatible character sets. If the character sets are not identical, check the documentation for details.
  • No columns can be encrypted with TDE Column Encryption. The only option is to remove the encryption before migration and re-encrypt afterward.
  • TDE Tablespace Encryption is supported for same-endian migration if the source database is 12.1.0.2 or newer. If you need to go across endianness, you must decrypt the tablespaces and re-encrypt after migration. Remember, Oracle Database 12.2 can encrypt tablespaces online.
  • If you are migrating across endianness, you must convert the data files. You must have disk space to hold a copy of all the data files. In addition, you should perform the conversion on the platform that has the best I/O system and most CPUs. Typically, this is the cloud platform, which also offers scaling possibilities.
  • Requires Enterprise Edition.
  • The database timezone file version in the target database must be equal to or higher than the source database.
  • The database time zone must match if you have tables with TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ). If you have such tables, and the database time zone does not match, those tables are skipped during import. You can then move the affected tables using a normal Data Pump table mode export and import. To check the database time zone:
    SQL> select dbtimezone from dual;
    
    You can alter the time zone for a database with an ALTER DATABASE statement.
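
For example (the time zone value is illustrative; the database must not contain TSLTZ data when you change it, and the change takes effect after a restart):

SQL> alter database set time_zone = 'Europe/Copenhagen';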

Full Transportable Export/Import

FTEX automates the process of importing the metadata and automatically includes all the metadata in your database. Compared to a traditional transportable tablespace import, FTEX is a lot easier and removes a lot of manual work from the end user. But there are a few requirements that must be met:

  • Source database must be 11.2.0.3 or higher.
  • Target database must be 12.1.0.1 or higher.
  • Requires Enterprise Edition.
  • COMPATIBLE must be set to 12.0.0 or higher in both the source and target database. If your source database is an Oracle Database 11g, this is not possible. In that case, set the Data Pump parameter VERSION=12 (or higher) during export instead.

If you can’t meet the requirements, check out traditional transportable tablespaces. They have different requirements and allow more customization.


XTTS: ASM Aliases and Why You Should Get Rid of Them

If you migrate an Oracle Database using cross-platform transportable tablespaces (XTTS) and incremental backups, and your target database uses ASM, you should get rid of the ASM aliases after the migration.

What Is an ASM Alias?

When you use ASM, there is tight control over the file names. ASM strictly enforces the naming standard dictated by Oracle Managed Files (OMF), and only the database can create file names that comply with OMF.

Sometimes it is handy to create files in other locations in ASM that still refer to a database file. Here you can use aliases. Aliases work like a symbolic link in the file system.

How can you tell if a file is an alias?

Alias Oracle ASM file names are distinguished from fully qualified file names or numeric file names because they do not end in a dotted pair of numbers. It is an error to attempt to create an alias that ends in a dotted pair of numbers, such as in the format USERS.259.685366091.

When you use ls -l, you can also tell whether a file is an alias. The Sys (system-generated) column is N, meaning this is not a proper OMF file. Also, you can see in the Name column that it is an alias; the => indicates it:

ASMCMD> ls -l +DATA
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    account_25.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNT.282.1099469855
DATAFILE  UNPROT  COARSE   MAR 16 08:00:00  N    accountidx_26.dbf => +DATA/CDB1_FRA2KR/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/ACCOUNTIDX.280.1099469855

You can read about Fully Qualified File Name Form in the ASM documentation, if you are interested.

Why Are the Aliases Created?

When the Perl script is restoring and recovering the data files on the target database, they do not belong to any database yet. The tablespaces have not been plugged into any database yet. Hence, it is impossible to figure out the right OMF name of the data files. As an alternative, ASM names the data files according to the syntax of the source database. For instance, it will use the source database GUID (select guid from v$containers) as part of the name. In addition, the Perl script creates ASM aliases using the following format: <dest_datafile_location>/<tablespace_name>_<file#>.dbf

When you perform the Data Pump import, you can refer to the aliases in your Data Pump parameter file (TRANSPORT_DATAFILES). Using the aliases is especially useful if you plan on having a standby database.

How Do I Get Rid of the Aliases?

After performing the Data Pump import, the tablespaces are plugged into a database, and now the data files belong to a database. But the target database is referring to the data files either via:

  • An ASM alias
  • Or directly via the absolute file name. As described earlier, the absolute file path uses the OMF syntax of the source database.

Let me illustrate that. Imagine:

  • In xtt.properties dest_datafile_location=+DATA.
  • My data file is named users01.dbf, belongs to tablespace USERS and has file ID 65.
  • Target DB_UNIQUE_NAME is SALES2.
  • Source database GUID is 86D5DC2587337002E0532AB2A8C0A57C.

How will the file be registered in the database?

  • If I used the aliases, it is known as +DATA/users_65.dbf.
  • If I used the absolute file name, it is known as +DATA/SALES2/86D5DC2587337002E0532AB2A8C0A57C/DATAFILE/users.280.1099469855. ASM generates the last two sets of numbers.

Neither of the two formats is a proper OMF name. What is the real OMF name? Imagine:

  • Target database GUID is DA495482D68D0220E0530F01000A98DF
  • The real OMF file name is (notice the change in GUID): +DATA/SALES2/DA495482D68D0220E0530F01000A98DF/DATAFILE/users.280.1099469855

You can get the GUID of a database by using select guid from v$containers.

In ASM, only the database can store a file in OMF syntax. You must fix this from the target database. The easiest way is to use online data file move. If you don’t specify the target location, the database will generate an OMF name:

SQL> --using file number
SQL> alter database move datafile 65;
SQL> --using full name
SQL> alter database move datafile '+DATA/users_65.dbf';

How does the move work?

  • It is an entirely online operation.
  • It is a block-by-block copy.
  • The database copies the data file. While the copy operation takes place, the two files are kept in sync until the database can switch to the new file. After that, the database removes the original file.
  • If the data file belongs to a PDB, you must switch your session to that container.
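
For a data file in a PDB, a sketch could look like this (the PDB name is a placeholder):

SQL> alter session set container = SALESPDB;
SQL> alter database move datafile 65;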

You can learn more about online data file move in our YouTube video:

Why Bother?

If my database works fine, why should I worry? I can think of at least two reasons:

  • Comply with the naming standard
  • Avoid problems in other migrations

Comply with the naming standard

I highly recommend that you follow a naming standard, including OMF. Data files that are not stored according to OMF should be moved to the correct location.

When I worked outside in the real world as a DBA, I remember multiple occasions where data files were lost. In many situations, a DBA had found a file apparently not belonging to a database – at least according to the naming standard. But the file was used by a database; it was just not stored in the correct location. When the file was removed, it caused big problems in the database.

With many databases and many hosts, it is very important that you establish standards and stick to them. Otherwise, you will end up in a big mess.

Avoid problems in other migrations

This is especially relevant if you need to perform multiple migrations to the same database host.

The Perl script used for the migration will create the aliases in the location specified by dest_datafile_location. The names of the aliases are very simple, and there is a risk that another migration will try to make the same alias.

Imagine you already did one migration. The database uses the alias +DATA/users_4.dbf. Now you want to make a second migration, and this database also wants to use +DATA/users_4.dbf. The same alias can’t be used for two different files. Big problem!

A user left a comment on my blog telling me this actually led to corruption in the first database. That risk is a very good reason for getting rid of the aliases and using only proper OMF file names.

Conclusion

The migration creates ASM aliases automatically. They are very useful during the migration, but I highly recommend getting rid of them right afterward.
