Recreate Database Services After Moving An Oracle Database

Oracle recommends that you connect to the database via custom services. In your connect string, don’t connect:

  • Directly to the SID
  • To the database’s default service (the service with the same name as the database)

In some situations, the database does not retain these services when you move it, for instance, when you:

  • Migrate a non-CDB to PDB using refreshable clone PDB
  • Upgrade a PDB using refreshable clone PDB
  • Move a PDB to a different CDB using refreshable clone PDB
  • Migrate a database using Full Transportable Export/Import or transportable tablespaces

The services are important because your application and clients connect to the database through them. Also, a service might define important properties for features like Application Continuity or set a default drain timeout.

Here’s how to recreate such services.

Database Managed Services

A database-managed service is one that you create directly in the database using dbms_service:

begin
   dbms_service.create_service(
      service_name=>'SALESGOLD',
      network_name=>'SALESGOLD');
   dbms_service.start_service('SALESGOLD');   
end;
/

After the migration, you must manually recreate the service in the target database.

dbms_metadata does not support services, so you must query v$services in the source database to find the service’s definition. Then, construct a call to dbms_service.create_service and dbms_service.start_service.
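For example, here is a sketch of that flow. The SALESGOLD service name matches the example above; the v$services columns and the parameter_array attribute names vary by release, so verify them against the dbms_service documentation for your database before relying on them.

-- In the source database: inspect the service definition.
-- Add columns for any attributes you rely on (drain timeout, Application
-- Continuity settings, and so on), if available in your release.
select name, network_name, failover_method, failover_type, goal
from   v$services
where  name = 'SALESGOLD';

-- In the target database: recreate and start the service.
-- The attribute names and values below are examples only.
declare
   params dbms_service.svc_parameter_array;
begin
   params('FAILOVER_TYPE')  := 'AUTO';
   params('COMMIT_OUTCOME') := 'true';
   params('DRAIN_TIMEOUT')  := '60';
   dbms_service.create_service(
      service_name    => 'SALESGOLD',
      network_name    => 'SALESGOLD',
      parameter_array => params);
   dbms_service.start_service('SALESGOLD');
end;
/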

Clusterware Managed Services

I recommend defining services in Grid Infrastructure if you are using Oracle RAC or Oracle Restart to manage your single-instance database. Luckily, Grid Infrastructure supports exporting and importing service definitions.

  • You export all the services defined in the source database:

    srvctl config service \
       -db $ORACLE_UNQNAME \
       -exportfile my_services.json \
       -S 2
    
  • You edit the JSON file.

    1. Remove the default services. Keep only your custom services.
    2. Remove the dbunique_name attribute for all services.
    3. If you are renaming the PDB, you must update the pluggable_database attribute.
    4. Update the res_name attribute so it matches the resource name of the target database. Usually, you just need to exchange the db_unique_name part of the resource name. You can find the resource name by executing crsctl stat resource -t as the grid user.
  • You can now import the services into the target database:

    srvctl add service \
       -db $ORACLE_UNQNAME \
       -importfile my_services.json
    
  • Finally, you start the service(s):

    export ORACLE_SERVICE_NAME=SALESGOLD
    srvctl start service \
       -db $ORACLE_UNQNAME \
       -service $ORACLE_SERVICE_NAME
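  • You can verify the configuration and status of the imported service:

    srvctl config service \
       -db $ORACLE_UNQNAME \
       -service $ORACLE_SERVICE_NAME
    srvctl status service \
       -db $ORACLE_UNQNAME \
       -service $ORACLE_SERVICE_NAME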
    

Additional Information

  • The export/import features work from Oracle Database 19c, Release Update 19 and beyond.
  • You can also export/import the definition of:
    • Database: srvctl config database -db $ORACLE_UNQNAME -S 2 -exportfile my_db.json
    • PDB: srvctl config pdb -db $ORACLE_UNQNAME -S 2 -exportfile my_pdb.json
    • ACFS filesystem: srvctl config filesystem -S 2 -exportfile /tmp/my_filesystem.json
  • At the time of writing, this functionality hasn’t made it into the documentation yet. Consider yourself lucky knowing this little hidden gem.

Final Words

Remember to recreate your custom services after a migration. Your application needs the service to connect in a proper way.

Fast Refresh Materialized Views and Migrations Using Transportable Tablespaces

In some databases, fast refresh materialized views are important for good performance. During a migration using transportable tablespaces, how do you ensure the materialized views are kept up-to-date and avoid a costly complete refresh or stale data?

The Basics

The database populates a materialized view with data from a master table. To enable fast refresh on a materialized view, you must first create a materialized view log on the master table. The materialized view log captures changes on the master table. Then, the database applies the changes to the materialized view, thus avoiding a complete refresh.

When a user changes data in the master table (SALES), those changes are recorded in the materialized view log (MLOG$_SALES) and then used to refresh the materialized view (SALES_MV).

The master table and the materialized view log must reside in the same database (known as the master database). The materialized view is often in a different database and then uses a database link to get the changes.

You only need materialized view logs if the materialized view is a fast refresh type.

A fast refresh materialized view needs to perform a complete refresh the first time, before it can move on with fast refreshes (thanks to my good friend, Klaus, for leaving a comment).
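As a minimal sketch, using the SALES and SALES_MV names from the example above (for a remote master, the materialized view would instead select from SALES over a database link):

create materialized view log on sales with primary key;

create materialized view sales_mv
   refresh fast on demand
   as select * from sales;

-- The first refresh must be complete; after that, fast refreshes work.
exec dbms_mview.refresh('SALES_MV', 'C');
exec dbms_mview.refresh('SALES_MV', 'F');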

Remote Master Table, Local Materialized View

In this example, you are migrating a database from source to target. This database holds the materialized view, and a remote database acts as the master database, where the master table and materialized view log reside.

Migrating a database which holds a materialized view that uses a remote master database

The materialized view and the underlying segment are stored in a tablespace in the source database. That segment is used to recreate the materialized view in the target database without any refresh needed.

  1. You must perform the following in the migration downtime window.
  2. In the source database, stop any periodic refresh of the materialized view.
  3. Optionally, perform a fast refresh of the materialized view:
    exec dbms_mview.refresh ('sales_mv','f');
    
  4. Start the migration using transportable tablespaces.
    • Set the tablespaces read-only.
    • Perform a final incremental backup.
    • Data Pump sets the tablespaces read-write in the target database.
    • Shut down the source database. Otherwise, it might interfere with the target database’s refresh.
  5. In the target database, you can now perform a fast refresh.
    • Data Pump has already recreated the database link to the remote master database.
    • Although the materialized view is now in a different database, it can fetch the recent changes from the master database starting with the last refresh in the source database.
  6. In the master database, both materialized views from the source and target database are now registered:
    select * from dba_registered_mviews where name='SALES_MV';
    
  7. Purge materialized view log entries that are related to the materialized view in the source database:
    exec dbms_mview.purge_mview_from_log('<mview-owner>', 'SALES_MV', '<source-db>');
    
  8. Unregister the materialized view in the source database:
    exec dbms_mview.unregister_mview('<mview-owner>', 'SALES_MV', '<source-db>');
    
  9. In the target database, re-enable periodic refresh of the materialized view.

Local Master Table, Remote Materialized View

In this example, you are migrating a database from source to target. This database holds the master table and materialized view log, while a remote database contains the materialized view.

Migrating a database which acts as master database holding a master table and materialized view log

The master table and materialized view log are stored in a tablespace in the source database. The migration moves them to the target database. The materialized view remains in the remote database and uses a database link to reach the master database.

  1. You must perform the following in the migration downtime window.
  2. In the remote database, stop any periodic refresh of the materialized view.
  3. Optionally, perform a fast refresh of the materialized view:
    exec dbms_mview.refresh ('sales_mv','f');
    
  4. Start the migration using transportable tablespaces.
    • Set the tablespaces read-only.
    • Perform a final incremental backup.
    • Data Pump sets the tablespaces read-write in the target database.
    • Shut down the source database. Otherwise, it might interfere with the target database’s refresh.
  5. In the remote database, ensure that the database link now points to the new target database.
    • If the database link uses a TNS alias, you can update it.
    • Or recreate the database link with a new connect descriptor (see the sketch after this list).
  6. Perform a fast refresh.
    exec dbms_mview.refresh ('sales_mv','f');
    
    • If you hit ORA-04062: timestamp of package "SYS.DBMS_SNAPSHOT_UTL" has been changed, run the refresh again.
  7. Re-enable periodic refresh of the materialized view.
  8. In the target database, ensure that the materialized view log is now empty.
    select * from mlog$_sales;
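A minimal sketch of recreating the database link mentioned in step 5; the link name, user, and connect descriptor are examples only:

-- In the remote database holding the materialized view.
drop database link sales_master;

create database link sales_master
   connect to mview_owner identified by "<password>"
   using '//new-target-host:1521/<service_name>';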
    

Local Master Table, Local Materialized View

In this example, you are migrating a database from source to target. This database holds the master table, the materialized view log, and the materialized view. There is no remote database involved.

Migrating a database holding the master table, materialized view log, and materialized view

The master table and materialized view log are stored in a tablespace in the source database. The migration moves the data to the target database. The materialized view is in the same database, so there is no remote database or database link involved.

  1. You must perform the following in the migration downtime window.
  2. In the source database, stop any periodic refresh of the materialized view.
  3. Perform a fast refresh of the materialized view:
    exec dbms_mview.refresh ('sales_mv','f');
    
  4. Ensure the materialized view log is empty, i.e., all rows have been refreshed into the materialized view.
    select count(*) from mlog$_sales;
    
    • The query must return 0. If it returns a higher count, perform an additional fast refresh.
    • If a remote materialized view also uses the materialized view log, it is acceptable to move on, as long as you are sure the local materialized view is completely up to date.
  5. Start the migration using transportable tablespaces.
    • Set the tablespaces read-only.
    • Perform a final incremental backup.
    • Data Pump sets the tablespaces read-write in the target database.
    • Shut down the source database. Otherwise, it might interfere with the target database’s refresh.
  6. In the target database, perform a fast refresh.
    exec dbms_mview.refresh ('sales_mv','f');
    
  7. Re-enable periodic refresh of the materialized view.
  8. Ensure that the materialized view log is now empty.
    select * from mlog$_sales;
    

A word of caution here. The materialized view must be completely up-to-date in the source database before the migration. After the migration, the same materialized view won’t be able to refresh the pre-migration rows. That is why you are checking for rows in mlog$_sales.

Any new changes made in the target database will sync fine.

Does Data Pump Export Automatic Data Optimization Policies and Heat Maps?

During our recent webinar on cross-platform migration, an attendee asked a good question:

Does Data Pump in a Full Transportable Export/Import move ADO policies and heat map information?

Let’s find out.

What Is It?

ADO means Automatic Data Optimization and is a feature that:

… automate the compression and movement of data between different tiers of storage within the database.

ADO uses heat maps that:

… provides data access tracking at the segment-level and data modification tracking at the segment and row level.

In other words,

  • You might have slower and faster storage.
  • You define ADO policies on tables and indexes (including partitions and subpartitions).
  • The policies define in which storage the database stores the rows.
  • The heat maps tell which objects you access.
  • Based on the heat maps, a job automatically moves rows between storage tiers depending on usage and/or compresses them.

What Happens in Data Pump

There are three pieces of essential information:

  1. ADO policies. Defined on objects, they govern how the database should store data.
  2. Heat map information. The database records your usage of the data and stores the information in heat maps. Later on, the database uses the heat maps to determine what to do with your data.
  3. ILM settings. Tells the database how to perform the ILM (Information Lifecycle Management) maintenance. For example, you can define the parallel degree that the database uses during maintenance or the number of concurrent maintenance jobs.

You can find a test case at the end of the blog post. I’m using test data and queries from oracle-base.com.

ADO Policies

Data Pump transfers the ADO policies in all modes:

  • Full transportable
  • Full
  • Schema
  • Tablespace
  • Table

You can verify it:

SELECT   policy_name,
         object_owner,
         object_name,
         object_type,
         inherited_from,
         enabled,
         deleted
FROM     dba_ilmobjects
ORDER BY 1;

Heat Map Information

Data Pump does not transfer any heat map information. I didn’t find any way to move the heat map information to the new database.
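You can see what gets left behind by querying the heat map views in the source database, for example:

SELECT track_time,
       owner,
       object_name,
       segment_write,
       full_scan,
       lookup_scan
FROM   dba_heat_map_seg_histogram
ORDER BY 1, 2, 3;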

ILM Settings

Data Pump does not transfer ILM settings, not even in full mode. You must move the settings manually.

  • You can find the current ILM settings:

    SELECT * FROM DBA_ILMPARAMETERS;
    
  • And change the settings in the target database:

    EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.DEG_PARALLEL , 4);
    
    • You must translate the name of the setting to the corresponding PL/SQL constant. You change the setting DEGREE OF PARALLELISM by using DBMS_ILM_ADMIN.DEG_PARALLEL.
  • The documentation holds a complete list of ILM settings.

Appendix

Test Data

I am using our hands-on lab, which you can find on Oracle LiveLabs. Again, thanks to oracle-base.com for test data and queries.

CONN / AS SYSDBA
--Turn on heat map tracking
ALTER SYSTEM SET HEAT_MAP=ON SCOPE=BOTH;

--Customize ILM settings
BEGIN
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.RETENTION_TIME, 42);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.JOBLIMIT, 42);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.ABS_JOBLIMIT, 42);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.DEG_PARALLEL , 4);
END;
/
SELECT * FROM DBA_ILMPARAMETERS;

--Create tablespaces for ILM policy
CREATE TABLESPACE FAST_STORAGE_TS DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE MEDIUM_STORAGE_TS DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE SLOW_STORAGE_TS DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE USER TEST IDENTIFIED BY TEST QUOTA UNLIMITED ON USERS;
GRANT CREATE SESSION, CREATE TABLE TO TEST;
ALTER USER TEST QUOTA UNLIMITED ON FAST_STORAGE_TS;
ALTER USER TEST QUOTA UNLIMITED ON MEDIUM_STORAGE_TS;
ALTER USER TEST QUOTA UNLIMITED ON SLOW_STORAGE_TS;

CONN TEST/TEST

--Create table where we can track usage
CREATE TABLE t1 (
  id          NUMBER,
  description VARCHAR2(50),
  CONSTRAINT t1_pk PRIMARY KEY (id)
);
INSERT INTO t1
SELECT level,
       'Description for ' || level
FROM   dual
CONNECT BY level <= 10;
COMMIT;

--Generate "usage"
SELECT *
FROM   t1;
SELECT *
FROM   t1
WHERE  id = 1;

--Create table for ILM policy
CREATE TABLE invoices (
  invoice_no    NUMBER NOT NULL,
  invoice_date  DATE   NOT NULL,
  comments      VARCHAR2(500)
)
PARTITION BY RANGE (invoice_date)
(
  PARTITION invoices_2016_q1 VALUES LESS THAN (TO_DATE('01/04/2016', 'DD/MM/YYYY')) TABLESPACE slow_storage_ts,
  PARTITION invoices_2016_q2 VALUES LESS THAN (TO_DATE('01/07/2016', 'DD/MM/YYYY')) TABLESPACE slow_storage_ts,
  PARTITION invoices_2016_q3 VALUES LESS THAN (TO_DATE('01/10/2016', 'DD/MM/YYYY')) TABLESPACE medium_storage_ts
    ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS,
  PARTITION invoices_2016_q4 VALUES LESS THAN (TO_DATE('01/01/2017', 'DD/MM/YYYY')) TABLESPACE medium_storage_ts
    ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS,
  PARTITION invoices_2017_q1 VALUES LESS THAN (TO_DATE('01/04/2017', 'DD/MM/YYYY')) TABLESPACE fast_storage_ts
    ILM ADD POLICY TIER TO medium_storage_ts READ ONLY SEGMENT AFTER 3 MONTHS OF NO ACCESS,
  PARTITION invoices_2017_q2 VALUES LESS THAN (TO_DATE('01/07/2017', 'DD/MM/YYYY')) TABLESPACE fast_storage_ts
    ILM ADD POLICY TIER TO medium_storage_ts READ ONLY SEGMENT AFTER 3 MONTHS OF NO ACCESS
)
ILM ADD POLICY ROW STORE COMPRESS BASIC SEGMENT AFTER 3 MONTHS OF NO ACCESS;

Test Case

###############
# FULL EXPORT #
###############

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
rm /tmp/expdat.dmp
sql / as sysdba<<EOF
   create or replace directory expdir as '/tmp';
EOF
expdp system/oracle full=y directory=expdir job_name=fullexp

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sql / as sysdba<<EOF
    startup
    alter pluggable database pdb1 close immediate;
    drop pluggable database pdb1 including datafiles;
    create pluggable database pdb1 admin user admin identified by admin;
    alter pluggable database all open;
    alter session set container=pdb1;
    create tablespace users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    create or replace directory expdir as '/tmp';
EOF

impdp system/oracle@pdb1 directory=expdir

sql / as sysdba<<EOF
    alter session set container=pdb1;
    select * from DBA_ILMPARAMETERS;
    SELECT policy_name,
       object_owner,
       object_name,
       object_type,
       inherited_from,
       enabled,
       deleted
    FROM   dba_ilmobjects
    ORDER BY 1;
    SELECT track_time,
        owner,
        object_name,
        segment_write,
        full_scan,
        lookup_scan
    FROM   dba_heat_map_seg_histogram
    ORDER BY 1, 2, 3;
EOF

#################
# SCHEMA EXPORT #
#################

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
rm /tmp/expdat.dmp
sql / as sysdba<<EOF
   create or replace directory expdir as '/tmp';
EOF
expdp system/oracle schemas=test directory=expdir job_name=schemaexp

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sql / as sysdba<<EOF
    startup
    alter pluggable database pdb1 close immediate;
    drop pluggable database pdb1 including datafiles;
    create pluggable database pdb1 admin user admin identified by admin;
    alter pluggable database all open;
    alter session set container=pdb1;
    create tablespace users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    create or replace directory expdir as '/tmp';
EOF

impdp system/oracle@pdb1 directory=expdir

sql / as sysdba<<EOF
    alter session set container=pdb1;
    select * from DBA_ILMPARAMETERS;
    SELECT policy_name,
       object_owner,
       object_name,
       object_type,
       inherited_from,
       enabled,
       deleted
    FROM   dba_ilmobjects
    ORDER BY 1;
    SELECT track_time,
        owner,
        object_name,
        segment_write,
        full_scan,
        lookup_scan
    FROM   dba_heat_map_seg_histogram
    ORDER BY 1, 2, 3;
EOF

################
# TABLE EXPORT #
################

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
rm /tmp/expdat.dmp
sql / as sysdba<<EOF
   create or replace directory expdir as '/tmp';
EOF
expdp system/oracle tables=test.t1,test.invoices directory=expdir job_name=tabexp

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sql / as sysdba<<EOF
    startup
    alter pluggable database pdb1 close immediate;
    drop pluggable database pdb1 including datafiles;
    create pluggable database pdb1 admin user admin identified by admin;
    alter pluggable database all open;
    alter session set container=pdb1;
    create tablespace users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    create or replace directory expdir as '/tmp';
    CREATE USER TEST IDENTIFIED BY TEST QUOTA UNLIMITED ON USERS;
    GRANT CREATE SESSION, CREATE TABLE TO TEST;
    ALTER USER TEST QUOTA UNLIMITED ON FAST_STORAGE_TS;
    ALTER USER TEST QUOTA UNLIMITED ON MEDIUM_STORAGE_TS;
    ALTER USER TEST QUOTA UNLIMITED ON SLOW_STORAGE_TS;
EOF

impdp system/oracle@pdb1 directory=expdir

sql / as sysdba<<EOF
    alter session set container=pdb1;
    select * from DBA_ILMPARAMETERS;
    SELECT policy_name,
       object_owner,
       object_name,
       object_type,
       inherited_from,
       enabled,
       deleted
    FROM   dba_ilmobjects
    ORDER BY 1;
    SELECT track_time,
        owner,
        object_name,
        segment_write,
        full_scan,
        lookup_scan
    FROM   dba_heat_map_seg_histogram
    ORDER BY 1, 2, 3;
EOF

It’s a Wrap – Transportable Tablespaces to the Extreme

Recently, we hosted our webinar Cross Platform Migration – Transportable Tablespaces to the Extreme. You can now watch the recording on our YouTube channel.

The audience at our webinar was very active and asked many good questions. To know all the details, you can read the Q&A and the slides.

Some of the attendees asked questions we were not able to answer during the webinar. Those questions – and answers – are included in the Q&A.

The New Procedure

For cross-platform and cross-endian migrations, we have a new procedure called M5. It replaces the former XTTS v4 Perl script that has been used for many migrations. We could see the need for change as Oracle Databases worldwide grow in size and complexity. M5 implements the latest RMAN and Data Pump technology to deliver the fastest possible migrations.

You can download the M5 scripts and read the procedure on My Oracle Support (Doc ID 2999157.1).

Next Webinar

Mark your calendar for our next webinar:

Move to Oracle Database 23c – Everything you need to know about Oracle Multitenant

Oracle Database 23c supports only the CDB architecture. If you haven’t migrated to Oracle Multitenant yet, you will when you upgrade to 23c. How do you approach it in the most efficient way? What are the other options? And why is this a migration unless you already have PDBs? We’ll cover all this and much more about working with Multitenant, how AutoUpgrade automates the entire move for you end-to-end, plus best practices, tips, and tricks. We’ll guide you, and you will be ready to move to Oracle Database 23c right away.

Sign up now and secure your seat.

All tech – no marketing!

Happy Migrating!

The Next-generation Cross-platform Migration for Oracle Database

I am very pleased to share that Oracle has officially launched a new method for cross-platform migrations of Oracle Database.

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)

The M5 script is the next-generation cross-platform transportable tablespace migration procedure for Oracle Database.

You can also use the script for cross-endian migrations, so this is perfect for AIX, HP-UX or SPARC migrations to Exadata Database Machine or any other little endian platform.

Next-generation

Before we launched M5, you would use the XTTS v4 Perl script for such migrations.

Timeline of methods for cross-endian migrations

We launched the latest version of the XTTS v4 Perl script many years ago. Over the years, as databases grew bigger and bigger, we saw multiple issues with the XTTS v4 Perl script, including:

  • No multisection backups for bigfile tablespaces
  • No encrypted tablespaces
  • Inefficient parallelism
  • Incomplete multitenant support

We wanted to solve all those issues with M5 – and we did! M5 uses newer RMAN functionality, and we made the procedure much simpler. It relies entirely on out-of-the-box RMAN functionality. On the source:

BACKUP ... TABLESPACE ... ;

On target we use:

RESTORE ALL FOREIGN DATAFILES ... ;

The latter command was introduced in Oracle Database 18c and enhanced in Oracle Database 19c. This sets the baseline release requirements for the source and target databases; see the My Oracle Support note for the exact requirements.

Want to Know More?

We have a webinar later today about this new method. If you are interested, we still have open seats.

Cross Platform Migration – Transportable Tablespaces to the Extreme, February 22, 16:00 CET

Next week, we will add the recording to our YouTube channel. Be sure to subscribe so you don’t miss out on anything.

If you want a sneak peek at the slides, go ahead.

I have an elaborate blog post series about cross-platform migrations. I will soon update it with more information about the M5 migration method.

Happy Migrating!

How to Upgrade Oracle Database and Replace the Operating System

A reader asked me for advice on upgrading Oracle Database and replacing the underlying operating system.

  • Currently on Oracle Database 12.1.0.2
  • Currently on Windows Server 2012
  • Upgrade to Oracle Database 19c
  • Move to new servers with Microsoft Windows Server 2022

What’s the recommended approach for transitioning to Oracle 19c on Windows 2022?

Oracle Data Guard

My first option is always Oracle Data Guard. It is often a superior option. You move the entire database, and the only interruption is a Data Guard switchover.

In this case, where the reader needs to replace the operating system, the first thing to check is platform certifications. Always check platform certifications on My Oracle Support. It has the most up-to-date information.

Here is an overview of the platform certification for Oracle Database 12.1.0.2 and 19c.

Oracle Database 12.1.0.2    Oracle Database 19c
Windows Server 2008
Windows Server 2008 R2
Windows Server 2012
Windows Server 2012 R2      Windows Server 2012 R2
                            Windows Server 2016
                            Windows Server 2019
                            Windows Server 2022

Oracle Database 19c does not support the current platform, Windows Server 2012. Thus, the reader can’t set up a standby database on the new servers and transition via a regular switchover.

Windows Server 2012 R2

Let’s imagine the current servers were using Windows Server 2012 R2. Both database releases support this platform. I would recommend this approach:

  1. Upgrade to Oracle Database 19c on current servers.
  2. Set up new servers with Windows Server 2022.
  3. Create standby database on new server.
  4. Transition to new servers with a regular Data Guard switchover.

This approach requires two maintenance windows. Yet, it is still my favorite because it is very simple.

RMAN Backups

You could also use RMAN and incremental backups. You don’t need much downtime – just the time necessary for a final incremental backup and restore. Like with Data Guard, you bring over the entire database.

RMAN can restore backups from a previous version, and you can use that to your advantage.

  1. Provision new servers with just Oracle Database 19c.
  2. Back up the database on Oracle Database 12.1.0.2.
  3. Restore and recover the database on the new servers with Oracle Database 19c binaries.
  4. After the final incremental backup, open the new database in upgrade mode and perform the upgrade.

We covered this approach in one of our webinars; you can also find details in this blog post.
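At a very high level, the RMAN part could look like the following sketch. It is not the complete procedure; a real migration also needs spfile and control file handling, cataloging of backups, and repeated incremental backups. The backup piece name is a placeholder.

# On the new server, using the Oracle Database 19c home and the 12.1.0.2 backups
rman target /

RMAN> startup nomount;
RMAN> restore controlfile from '<controlfile_backup_piece>';
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;

# After applying the final incremental backup, in SQL*Plus:
SQL> alter database open resetlogs upgrade;
# Then run the upgrade, for example with AutoUpgrade.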

Move Storage

You can also unmount the storage from the old server, and attach it to the new server.

  1. Run AutoUpgrade in analyze mode to determine upgrade readiness.
  2. Downtime starts.
  3. Run AutoUpgrade in fixup mode to fix any issues preventing the upgrade from starting.
  4. Cleanly shut down the source database.
  5. Move the storage to the new server.
  6. Start the database on the new server in upgrade mode.
  7. Start AutoUpgrade in upgrade mode to complete the upgrade.

This is just a high-level overview. For a real move, there are many more intermediate steps.
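The AutoUpgrade steps above map to commands like these (a sketch; the config file name is an example):

java -jar autoupgrade.jar -config upg.cfg -mode analyze
java -jar autoupgrade.jar -config upg.cfg -mode fixups
java -jar autoupgrade.jar -config upg.cfg -mode upgrade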

Be sure to have a proper rollback plan. You are reusing the data files, and AutoUpgrade in upgrade mode does not create a guaranteed restore point.

Data Pump

Data Pump is also a viable option, especially for smaller, less complex databases. It also enables you to restructure your database, for example:

  • Transform old BasicFile LOBs to SecureFile
  • Implement partitioning
  • Exclude data (for archival)
  • You can import directly into a higher release and even directly into a PDB.

But, generally speaking, the larger the database, the longer the downtime.

When you use Data Pump for upgrades, I recommend using a full database export.

Full Transportable Export/Import

You can also use transportable tablespaces for upgrades. You can even migrate directly into a PDB on Oracle Database 19c.

The downside of transportable tablespace is that you must copy the data files to the new system.

But often, you can unmount the storage and mount the storage on the new servers. This avoids the cumbersome process of copying the data files to the new system.

Another approach is to combine transportable tablespaces with incremental backups, if you want to lower the downtime needed. This approach leaves the original database untouched, leaving you with a perfect rollback option.

Oracle GoldenGate

You could also use Oracle GoldenGate. But for most upgrades, it is overkill, partly because of the restrictions and considerations. I see this as a sensible option only if you have very strict downtime or fallback requirements.

Conclusion

What is the best option?

It depends…

This post helps you make the best decision for your organization.

Data Pump and Parallel Transportable Jobs

In migrations, you often use transportable tablespaces or Full Transportable Export/Import (FTEX). Downtime is always of concern when you migrate, so having Data Pump perform the transportable job in parallel is a huge benefit.

How much benefit? Let’s find out with a little benchmark.

The Results

Let’s start with the interesting part: how much time can you save using parallel transportable jobs in Data Pump?

The following table lists four different scenarios. Each scenario consists of an export and an import. The total is the time Data Pump needs to finish the migration – the sum of export and import. In a real migration, many other things are in play, but here, we are looking solely at Data Pump performance.

Export               Time    Import               Time    Total
19c, no parallel     2h 2m   19c, no parallel     6h 44m  8h 46m
23ai, parallel 4     1h 48m  23ai, parallel 4     2h 33m  4h 21m
19c, no parallel     2h 2m   23ai, parallel 16    1h 23m  3h 25m
23ai, parallel 16    1h 8m   23ai, parallel 16    1h 23m  2h 31m

The first row shows what you can do in Oracle Database 19c: almost 9 hours. Compare that to the last row, using parallel 16 in Oracle Database 23ai: almost a 3.5x reduction.

If you migrate from Oracle Database 19c to Oracle Database 23ai (3rd row), you can still gain a significant benefit from parallel import.

The Benchmark

I used the following Oracle homes:

My test database:

  • E-Business Suite database
  • 300 GB physical size
  • 630,000 database objects, including
    • 89,000 tables
    • 66,000 indexes
    • 60,000 packages
  • CPU_COUNT = 16
  • SGA_TARGET = 64G

My test machine:

  • OCI shape VM.Standard.E4.Flex
  • 8 CPUs
  • 128 GB mem
  • Fast disks (max. IOPS 128,000)

How to

It’s very easy to enable parallel transportable jobs. If you want to use 16 parallel workers, on export:

expdp ... parallel=16 dumpfile=ftex%L.dmp

On import:

impdp ... parallel=16

What Happens

A Data Pump job consists of several object paths that Data Pump must process. An object path could be tables, indexes, or package bodies.

Parallel Export

Each worker takes an object path and starts to process it. The worker works alone on this object path. You get parallelism by multiple workers processing multiple object paths simultaneously.

Parallel Import

During import, Data Pump must process each object path in a certain order. Data Pump can only import constraints once it has imported tables, for example.

Data Pump processes each object path in the designated order, then splits the work in one object path to many workers. You get parallelism by multiple workers processing one object path in parallel.

The Fine Print

  • Parallel transportable jobs work in Oracle Database 21c and later. In Oracle Database 19c, a transportable job has no parallel capabilities.

  • Data Pump can use parallel only on transportable jobs via a dump file. Network mode is not an option for parallel transportable jobs.

  • If you export in Oracle Database 19c (which does not support parallel transportable jobs), you can still perform a parallel import into Oracle Database 23ai.

  • The export parallel degree and import parallel degree do not have to match. You can export with parallel degree 1 and import with parallel degree 16.

  • When you enable parallel jobs, Data Pump starts more workers. How many extra resources do they use?

    • I didn’t notice any significant difference in undo or temp tablespace use.
    • I didn’t notice any extra pressure on the streams pool, either. I had the streams pool set to 256M, and the database didn’t perform any SGA resize operation during my benchmark.

Conclusion

For migrations, parallel transportable jobs in Data Pump are a huge benefit. Every minute of downtime often counts, and this has a massive impact.

Understand How a Change of Database Time Zone Affects Transportable Tablespaces

In a recent migration using full transportable export/import, I noticed a lot of time spent on the following SQL:

SELECT NVL((SELECT 2
            FROM   sys.ku$_all_tsltz_tables 
            WHERE  owner = :1 AND table_name = :2), 0) 
FROM   sys.dual

The SQL ID was g3qu7py3g0yg0. Each execution of the SQL took only a few seconds, but Data Pump executed the statement many times.

Data Pump also wrote in the log file that the database timezone differed:

01-NOV-23 07:43:22.152: W-1 Source time zone is +00:00 and target time zone is -07:00.

The following applies to full transportable export/imports using the following Data Pump parameters:

full=y
transportable=always

What Happens?

In a full transportable import, if the database time zone is different, Data Pump translates any data of type Timestamp with local timezone (TSLTZ) to the new database timezone.

On startup, Data Pump detects the difference between the source and target database timezone. For each table, Data Pump checks whether it must convert data. If needed, the data is converted from the source database timezone and stored in the target database timezone.

The check and conversion take time. This is the price you must pay to change the database timezone on import.

The alternative is to import into a database with the same timezone. Then Data Pump skips the check and conversion completely.

In the migration, we could save 22 minutes on import by importing into the same database time zone.

Changing the Database Timezone

You can find the database timezone using:

select dbtimezone from dual;

If you don’t have any tables with TSLTZ columns, you can change the database timezone:

alter database set time_zone='+00:00';
shutdown immediate
startup

The database timezone affects only:

  • TSLTZ columns
  • Function CURRENT_DATE
  • Function CURRENT_TIMESTAMP
  • Function LOCALTIMESTAMP

If you don’t use any of the above, it should be safe to change the database timezone.
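A quick way to check for TSLTZ columns before changing the database timezone (a sketch; filter out Oracle-maintained schemas as needed):

select owner, table_name, column_name
from   dba_tab_columns
where  data_type like 'TIMESTAMP%WITH LOCAL TIME ZONE'
order by owner, table_name, column_name;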

Columns of type Timestamp with timezone (TSTZ) and the database timezone file (v$timezone_file) are totally unrelated to the database timezone.

Full Transportable vs. Traditional Transportable

In a traditional transportable import, Data Pump does not import tables with columns of type TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ). You must move such tables using a regular Data Pump import.

As mentioned above, in a full transportable export/import, Data Pump translates the data to match the new database timezone.

Is It Possible to Migrate SQL Plan Baselines Before You Migrate Data

In a recent migration, a customer had millions of SQL plan baselines. The customer performed a Full Transportable Export/Import, but the export of the SQL plan baselines took a lot of time. I suggested moving the SQL plan baselines the day before the downtime window.

But can you move SQL plan baselines into an empty database before you move the data? Do SQL plan baselines care about the underlying schema objects the SQL accesses?

SQL Plan Baselines and Underlying Schema Objects

The optimizer matches a SQL statement with a SQL plan baseline using the signature of the SQL (see appendix). Originally, I thought that it was the SQL ID, but I was wrong:

SQL statements are matched to SQL plan baselines using the signature of the SQL statement. A signature is a unique SQL identifier generated from the normalized SQL text (uncased and with whitespaces removed).

SQL Plan Management uses the signature only to match SQLs to SQL plan baselines. There is no reference to the underlying schema objects, which is evident since the same SQL from different schemas shares the same signature.

The conclusion is that importing SQL plan baselines into an empty database is safe before you move the data.

Empty Database and Plan Evolution

But there is a catch. You don’t want plan evolution to happen in a database with no data.

Plan evolution is the process that validates whether a plan performs better than existing ones and, if so, marks the plans as accepted. This can have undesired side effects in an empty database.

There are two ways to avoid this:

  • Disable the SPM Evolve Advisor Task:

    BEGIN
       DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
          task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
          parameter => 'ALTERNATE_PLAN_BASELINE',
          value => '');
       DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
          task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
          parameter => 'ALTERNATE_PLAN_SOURCE',
          value => '');
    END;
    /   
    
  • Don’t allow plans to auto-evolve using the accept_plans parameter:

    BEGIN
       DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
          task_name => 'SYS_AUTO_SPM_EVOLVE_TASK' ,
          parameter => 'ACCEPT_PLANS',
          value => 'FALSE');
    END;
    /
    

How to Migrate SQL Plan Baselines

DBMS_SPM

The easiest way you can move SQL plan baselines is using DBMS_SPM. You can find a good example of using it in the documentation.
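A minimal sketch of the staging-table approach with DBMS_SPM; the staging table name and owner are examples:

-- In the source database: create a staging table and pack the baselines into it.
begin
   dbms_spm.create_stgtab_baseline(
      table_name  => 'SPM_STAGE',
      table_owner => 'SYSTEM');
end;
/
declare
   n pls_integer;
begin
   n := dbms_spm.pack_stgtab_baseline(
           table_name  => 'SPM_STAGE',
           table_owner => 'SYSTEM');
end;
/

-- Move the staging table to the target database (for example with Data Pump),
-- then unpack the baselines there.
declare
   n pls_integer;
begin
   n := dbms_spm.unpack_stgtab_baseline(
           table_name  => 'SPM_STAGE',
           table_owner => 'SYSTEM');
end;
/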

Data Pump

Data Pump exports SQL plan baselines as part of the SQL Management Base (SMB). Data Pump includes the SMB as part of a full export only. To export the SMB only:

expdp ... full=y include=SMB

Please note that the SMB includes the following as well:

  • SQL Profiles
  • SQL Patches
  • SQL Plan Directives

How to Exclude SQL Plan Baselines

If you migrate SQL plan baselines in advance, you should exclude them from the production migration. As described above, the SQL plan baselines are part of the SQL Management Base. You exclude it using:

expdp ... exclude=SMB

That will exclude the items below. Depending on how you move the SQL plan baselines and whether you need the other items in the target database, you might need to move these items manually:

  • SQL Profiles: You can move SQL Profiles using DBMS_SQLTUNE (see the sketch after this list).
  • SQL Patches: You can move SQL Patches using DBMS_SQLDIAG.
  • SQL Plan Directives: You can move SQL plan directives using DBMS_SPD. Only relevant if you use Adaptive Statistics.
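For example, a sketch of moving SQL Profiles with the DBMS_SQLTUNE staging procedures; the staging table name and owner are examples:

-- In the source database.
begin
   dbms_sqltune.create_stgtab_sqlprof(
      table_name  => 'PROF_STAGE',
      schema_name => 'SYSTEM');
   dbms_sqltune.pack_stgtab_sqlprof(
      staging_table_name   => 'PROF_STAGE',
      staging_schema_owner => 'SYSTEM');
end;
/

-- Move the staging table to the target database, then unpack the profiles.
begin
   dbms_sqltune.unpack_stgtab_sqlprof(
      replace              => true,
      staging_table_name   => 'PROF_STAGE',
      staging_schema_owner => 'SYSTEM');
end;
/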

Appendix

SQL Signature

The signature of a SQL statement is:

A numeric hash value computed using a SQL statement text that has been normalized for case insensitivity and white space. It uniquely identifies a SQL statement. The database uses this signature as a key to maintain SQL management objects such as SQL profiles, SQL plan baselines, and SQL patches.

Let me use an example. If you have the following SQL text:

select * from dual

The signature is:

14103420975540283355

There is a function that calculates the signature from the SQL text:

col signature format 99999999999999999999999
select dbms_sqltune.sqltext_to_signature ('select * from dual') as signature from dual;

Acknowledgment

Thanks to Nigel Bayliss, optimizer product manager, for valuable assistance.

How to use Transportable Tablespace with TDE Tablespace Encryption

You can use transportable tablespace to migrate your data between databases. It is a great way of moving your data. How does it work together with TDE Tablespace Encryption?

It depends on which kind of migration you are planning. The endian format of the source and target platforms (or operating system) plays a vital role.

First, you determine the endian format of the source and target database:

SQL> select platform_name, endian_format from v$transportable_platform;
  • If the endian formats are identical, you will perform a same-endian migration.
  • If the endian formats differ, you will perform a cross-endian migration.

Same-endian Migration of TDE Encrypted Tablespaces

Using transportable tablespaces is supported when you migrate to a different platform, as long as the endian format does not change.

Oracle Database uses a two-tier key architecture which consists of two encryption keys:

  • Master Encryption Key (MEK)
  • Tablespace Encryption Key (TEK)

The TEK is stored in the tablespace itself. When you use transportable tablespace, you copy the data files, and thus the TEK remains the same. The data in the tablespace stays encrypted with the same TEK during the entire migration.

But what about the MEK? It is required to get access to the TEK.

Option 1: Use ENCRYPTION_PASSWORD parameter

  1. During Data Pump transportable tablespace export, you specify an ENCRYPTION_PASSWORD:

    expdp ... encryption_password=<a_strong_and_secure_password>
    

    The encryption password is not the MEK of the source but a special password you choose for the migration only.

  2. On import, you specify the encryption password:

    impdp ... encryption_password=<a_strong_and_secure_password>
    

A benefit of this option is that the source and target databases are encrypted using different MEKs. You can query the database and verify that no new MEK is added to the target database. The target database continues to use its own MEK:

SQL> select * from v$encryption_keys;

According to the documentation, this is the recommended option.

Option 2: Import Master Encryption Key

  1. You start the Data Pump transportable tablespace export.
  2. In the source database, you export the source database MEK:
    SQL> administer key management export keys 
         with secret "<another_strong_and_secure_password>"
         to '/home/oracle/secure_location/exported-keys'
         force keystore identified by "<source_database_MEK>";
    
  3. You import the source database MEK into the target database:
    SQL> administer key management import keys 
         with secret "<another_strong_and_secure_password>"
         from '/home/oracle/secure_location/exported-keys'
         force keystore identified by "<target_database_MEK>"
         with backup;
    
  4. You start the Data Pump transportable tablespace import.

By querying v$encryption_keys, you can see that another key has been added to the database.

You can read more about export and import of MEKs in the documentation.

Option 3: Oracle Key Vault

If you are using Oracle Key Vault, it’s very easy to allow the target database to access the source database master encryption key.

When you perform the Data Pump transportable tablespace import in the target database, it will already have access to the encryption keys that protect the tablespaces. Nothing further is needed.

What About Rekeying?

If you make the source database encryption key available to the target database, consider whether you also want to perform a rekey operation. This applies to options 2 and 3.

ORA-39396

If you use options 2 or 3, you will receive the below warning during Data Pump transportable tablespace export:

ORA-39396: Warning: exporting encrypted data using transportable option without password

This is expected behavior:

This warning points out that in order to successfully import such a transportable tablespace job, the target database wallet must contain a copy of the same database master key used in the source database when performing the export. Using the ENCRYPTION_PASSWORD parameter during the export and import eliminates this requirement.

Cross-endian Migration of TDE Encrypted Tablespaces

If your database is encrypted, you must use the newer M5 migration method. It supports encrypted tablespaces.

The older XTTS v4 procedure doesn’t support encrypted tablespaces.

If your tablespaces are encrypted and you must use the older XTTS v4 procedure, you can work around the limitation (see the sketch after the list):

  1. Decrypt the tablespace
  2. Migrate the tablespace using transportable tablespace
  3. Re-encrypt the tablespace
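A sketch of the decrypt and re-encrypt steps; the tablespace name is an example, and online conversion requires Oracle Database 12.2 or later:

-- Before the migration, in the source database:
alter tablespace sales_data encryption online decrypt;

-- Migrate the tablespace with transportable tablespaces, then in the target database:
alter tablespace sales_data encryption online using 'AES256' encrypt;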
