Are Your Oracle Database Clients Ready for the Next Database Upgrade?

Each new Oracle Database release changes the client/server communication protocol. A newer protocol supports new features, strengthens security, and so forth. Before upgrading your Oracle Database, you must ensure your clients can still connect afterward. If not, they may encounter errors such as:

ORA-28040: No matching authentication protocol
ORA-03134: Connections to this server version are no longer supported

Oracle lists the supported clients in the client/server interoperability support matrix (Doc ID 207303.1). Take a look – it is probably the most colorful MOS note.

For Oracle Database 23ai, your clients must be on one of these versions:

  • 23ai
  • 21c
  • 19c

How do you know if your clients are ready for the next release?

A Solution

The view V$SESSION_CONNECT_INFO lists the client driver and version of each connected session. You can join it to V$SESSION for more details. Use the GV$ views to cover all instances in a RAC database:

SELECT PROGRAM, CLIENT_DRIVER, CLIENT_VERSION
FROM   GV$SESSION S, GV$SESSION_CONNECT_INFO CI
WHERE  S.INST_ID=CI.INST_ID
AND    S.SID=CI.SID
AND    S.SERIAL#=CI.SERIAL#;

Here’s an example of how that data might look:

PROGRAM        CLIENT_DRIVER           CLIENT_VERSION
-------------  ----------------------  --------------
oracle         SERVER                  19.20.0.0.0
MyApp.exe      ODPM.NET : 19.1.0.0.0   19.1.0.0.0
MyApp.exe      ODPU.NET : 19.21.0.0.0  19.21.0.0.0
MyApp2.exe     (null)                  19.21.0.0.0
emagent        jdbcthin                12.1.0.2.0
SQL Developer  jdbcthin : 21.4.0.0.0   21.4.0.0.0
java           jdbcoci : 19.22.0.0.0   19.22.0.0.0
sqlplus        SQL*PLUS                19.22.0.0.0

  • If I want to upgrade to Oracle Database 23ai, it looks good except for the emagent with a 12.1.0.2 driver.
  • When CLIENT_DRIVER is SERVER, it is the database itself making connections; in this case, the scheduler was running jobs. You can disregard those entries.
  • The entry with client driver (null) was a thick OCI client used by a C++ program.

The above query gives an overview of the situation right now. But we need to persist the data to check all clients over time. We can solve that with a scheduler job.

If you monitor your database for weeks or a month, you should know which clients connect to your Oracle Database.

A high-five to my friend Frank for helping with some sample connection data.

The Code

Here’s a piece of code that creates a schema, a few objects, and a scheduler job.

The scheduler runs the job every 5 minutes. The job finds new connections, notes the client’s driver, and stores that info in a table.

Run the following code as SYS AS SYSDBA:

--Create a schema
DROP USER ORA_CLIENT_CHECK CASCADE;
CREATE USER ORA_CLIENT_CHECK NO AUTHENTICATION;

--Required privileges
GRANT SELECT ON GV_$SESSION TO ORA_CLIENT_CHECK;
GRANT SELECT ON GV_$SESSION_CONNECT_INFO TO ORA_CLIENT_CHECK;
ALTER USER ORA_CLIENT_CHECK DEFAULT TABLESPACE USERS;
ALTER USER ORA_CLIENT_CHECK QUOTA UNLIMITED ON USERS;

--View contains a current list of clients connected
CREATE VIEW ORA_CLIENT_CHECK.CURRENT_SESSIONS AS
SELECT DISTINCT S.MACHINE, S.OSUSER, S.PROGRAM, S.MODULE,
       CI.CLIENT_DRIVER, CI.CLIENT_VERSION
FROM   GV$SESSION S, GV$SESSION_CONNECT_INFO CI
WHERE  S.INST_ID=CI.INST_ID
AND    S.SID=CI.SID
AND    S.SERIAL#=CI.SERIAL#
--Keep rows with a NULL driver (e.g. thick OCI clients); exclude only SERVER
AND    (CI.CLIENT_DRIVER != 'SERVER' OR CI.CLIENT_DRIVER IS NULL);

--Create a table to hold the result
CREATE TABLE ORA_CLIENT_CHECK.CONNECTED_SESSIONS AS SELECT * FROM ORA_CLIENT_CHECK.CURRENT_SESSIONS WHERE 1=0;

--Create a scheduler job that runs every 5 minutes
BEGIN
   DBMS_SCHEDULER.create_job (
      job_name        => '"ORA_CLIENT_CHECK"."GET_NEW_CONNECTIONS"',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN MERGE INTO ORA_CLIENT_CHECK.CONNECTED_SESSIONS OLD USING (SELECT * FROM ORA_CLIENT_CHECK.CURRENT_SESSIONS) NEW ON (OLD.MACHINE=NEW.MACHINE AND OLD.OSUSER=NEW.OSUSER AND OLD.PROGRAM=NEW.PROGRAM AND NVL(OLD.MODULE,''-'')=NVL(NEW.MODULE,''-'') AND NVL(OLD.CLIENT_DRIVER,''-'')=NVL(NEW.CLIENT_DRIVER,''-'') AND OLD.CLIENT_VERSION=NEW.CLIENT_VERSION) WHEN NOT MATCHED THEN INSERT (MACHINE, OSUSER, PROGRAM, MODULE, CLIENT_DRIVER, CLIENT_VERSION) VALUES(NEW.MACHINE, NEW.OSUSER, NEW.PROGRAM, NEW.MODULE, NEW.CLIENT_DRIVER, NEW.CLIENT_VERSION); COMMIT; END;',
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'freq=minutely; interval=5; bysecond=0;',
      end_date        => NULL,
      enabled         => TRUE,
      comments        => 'Checks for new connections and stores the client version/driver in a table.');
END;
/

Query the table to find clients that you must upgrade:

SELECT * 
FROM   ORA_CLIENT_CHECK.CONNECTED_SESSIONS;
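To narrow the result to clients that block an upgrade, you can filter on the version string. A sketch, assuming the major version is the leading digits of CLIENT_VERSION and that 19c is the minimum supported client for Oracle Database 23ai:

```sql
--Clients below major version 19 must be upgraded before
--the database moves to 23ai
SELECT MACHINE, PROGRAM, CLIENT_DRIVER, CLIENT_VERSION
FROM   ORA_CLIENT_CHECK.CONNECTED_SESSIONS
WHERE  TO_NUMBER(REGEXP_SUBSTR(CLIENT_VERSION, '^\d+')) < 19;
```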

The Limitations

  • The code monitors a single database only.

  • If you use Active Data Guard, you must also consider users that connect only to the standby. You can use DML redirect to send the data back to the primary database.

  • If you have a job that connects very seldom or only for a very short time, there is a risk that the code won’t capture it.

Other Ideas

Unified Auditing captures much information, but the client driver and version are not part of it. Still, if you can’t sample from V$SESSION_CONNECT_INFO, it can be a viable fallback.

You can use the query below to find a unique list of sessions and the programs they used to connect. Then, connect to each host and manually verify that the client driver is up to date:

SELECT DISTINCT userhost, program, client_program_name 
FROM   unified_audit_trail;

Of course, this requires you to have configured Unified Auditing.

Real-World Database Upgrade and Migration 19c and 23c

Mike Dietrich and I are hosting two in-person workshops in Sweden and Denmark in early May. Yes, that’s right: It’s in-person.

Mike Dietrich and Daniel Overby Hansen will guide you through several different examples of upgrade, migration and consolidation techniques and strategies – shown with real world customer cases.

  • How to take full advantage of the new features and options in Oracle Database 19c and 23c
  • The smoothest and fully unattended migration to the CDB architecture
  • Real World Best Practices and Customer Cases
  • Performance Stability Prescription
  • What’s new, what’s coming (especially for DBAs) in Oracle Database 23c

Oracle Real-World Database Upgrade and Migration 19c and 23c with Mike Dietrich and Daniel Overby Hansen

Sweden

  • Oracle office, Solna
  • Tuesday 7 May 2024, 09:00 – 16:30
  • Sign up

Denmark

  • Oracle office, Hellerup
  • Wednesday 8 May 2024, 08:00 – 15:00
  • Sign up

Secure your seat as quickly as possible. Seats are limited!

Agenda

To whet your appetite, here is the agenda:

  • Release Strategy with Oracle Database 23c
  • Oracle Database Patching – Welcome to the Jungle
  • Upgrade to Oracle Database 23c
  • Migration to the CDB architecture – smoothly and unattended
  • Data Pump: The universal tool, for ADB migrations and more
  • Cloud Migration Advisor
  • Performance Stability Prescription
  • Secret underscores
  • Insights into the Oracle Database Development process
  • Oracle Database 23c – What’s new, what’s coming

Remember our mantra: All tech, no marketing!

I hope to see you at our workshops.

A Different Day at the Office

Yesterday, I had quite a different day at the office. Instead of typing on the keyboard, I spent the day loading a truck bound for Ukraine with humanitarian aid.

I had a fantastic day with the volunteers at Bevar Ukraine and colleagues from Oracle. We loaded a truck with:

  • 50 kW diesel generator
  • Medical supplies
  • Hospital beds
  • Clothes
  • Other necessities

Loading a truck at Bevar Ukraine with humanitarian aid

Bevar Ukraine

What is Bevar Ukraine?

Bevar Ukraine is an independent Danish humanitarian non-profit organization whose purpose is to provide humanitarian aid in Ukraine and to support refugees. The organization works to combat poverty and strengthen civil society. Bevar Ukraine supports the UN’s global goals for sustainable development.

It is run by volunteers who spend much time and effort helping the victims in Ukraine.

I am glad I could spend a day with these wonderful people and help them in their mission. It was touching to listen to their stories. Thank you, Bevar Ukraine!

I encourage you to donate to their organization and help those in need.

Oracle Volunteering

As an employee of Oracle, I can spend my work time on voluntary work. We call this Oracle Volunteering:

Oracle Volunteers lead and participate in virtual and in-person projects with hundreds of nonprofits and public institutions year-round. They support students and educators, protect nature and wildlife, and strengthen communities by helping people in need.

Every year, I can spend a whole week on voluntary work. I feel proud of my employer when they take action and help make the world a better place. Thank you, Oracle!

Reflections

I am grateful to live in a peaceful part of the world. But I feel for those less fortunate. We could do much more together if we all made friends.

In my previous job, I worked together with many Ukrainians. They were all young and eager to sharpen their skills in software development. They worked hard to make a good living for themselves and Ukraine.

Now, much of that is lost due to the unjust occupation and war. Perhaps some of my former colleagues are now dead because they defended their country. How tragic…

Stop the war and free Ukraine!

Does Data Pump Export Automatic Data Optimization Policies and Heat Maps?

During our recent webinar on cross-platform migration, an attendee asked a good question:

Does Data Pump in a Full Transportable Export/Import move ADO policies and heat map information?

Let’s find out.

What Is It?

ADO stands for Automatic Data Optimization, a feature that:

… automate the compression and movement of data between different tiers of storage within the database.

ADO uses heat maps that:

… provides data access tracking at the segment-level and data modification tracking at the segment and row level.

In other words,

  • You might have slower and faster storage.
  • You define ADO policies on tables and indexes (including partitions and subpartitions).
  • The policies define which storage tier the database uses for the data.
  • The heat maps track which objects you access.
  • Based on the heat maps, a job automatically moves rows between storage tiers or compresses them, depending on usage.
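As a minimal illustration, here is what an ADO policy can look like (the syntax matches the test case in the appendix; the table name is just an example):

```sql
--Compress the segment after 3 months of no access
ALTER TABLE invoices ILM ADD POLICY
  ROW STORE COMPRESS BASIC
  SEGMENT AFTER 3 MONTHS OF NO ACCESS;
```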

What Happens in Data Pump

There are three pieces of essential information:

  1. ADO policies. Defined on objects, they govern how the database should store data.
  2. Heat map information. The database records your usage of the data and stores the information in heat maps. Later on, the database uses the heat maps to determine what to do with your data.
  3. ILM settings. Tells the database how to perform the ILM (Information Lifecycle Management) maintenance. For example, you can define the parallel degree that the database uses during maintenance or the number of concurrent maintenance jobs.

You can find a test case at the end of the blog post. I’m using test data and queries from oracle-base.com.

ADO Policies

Data Pump transfers the ADO policies in all modes:

  • Full transportable
  • Full
  • Schema
  • Tablespace
  • Table

You can verify it:

SELECT   policy_name,
         object_owner,
         object_name,
         object_type,
         inherited_from,
         enabled,
         deleted
FROM     dba_ilmobjects
ORDER BY 1;

Heat Map Information

Data Pump does not transfer any heat map information. I didn’t find any way to move the heat map information to the new database.

ILM Settings

Data Pump does not transfer ILM settings – not even in full modes. You must manually move the settings.

  • You can find the current ILM settings:

    SELECT * FROM DBA_ILMPARAMETERS;
    
  • And change the settings in the target database:

    EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.DEG_PARALLEL , 4);
    
    • You must translate the name of the setting to the corresponding PL/SQL constant. You change the setting DEGREE OF PARALLELISM by using DBMS_ILM_ADMIN.DEG_PARALLEL.
  • The documentation holds a complete list of ILM settings.

Appendix

Further Reading

Test Data

I am using our Hands-On lab that you can use on Oracle LiveLabs. Again, thanks to oracle-base.com for test data and queries.

CONN / AS SYSDBA
--Turn on heat map tracking
ALTER SYSTEM SET HEAT_MAP=ON SCOPE=BOTH;

--Customize ILM settings
BEGIN
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.RETENTION_TIME, 42);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.JOBLIMIT, 42);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.ABS_JOBLIMIT, 42);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.DEG_PARALLEL , 4);
END;
/
SELECT * FROM DBA_ILMPARAMETERS;

--Create tablespaces for ILM policy
CREATE TABLESPACE FAST_STORAGE_TS DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE MEDIUM_STORAGE_TS DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE SLOW_STORAGE_TS DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE USER TEST IDENTIFIED BY TEST QUOTA UNLIMITED ON USERS;
GRANT CREATE SESSION, CREATE TABLE TO TEST;
ALTER USER TEST QUOTA UNLIMITED ON FAST_STORAGE_TS;
ALTER USER TEST QUOTA UNLIMITED ON MEDIUM_STORAGE_TS;
ALTER USER TEST QUOTA UNLIMITED ON SLOW_STORAGE_TS;

CONN TEST/TEST

--Create table where we can track usage
CREATE TABLE t1 (
  id          NUMBER,
  description VARCHAR2(50),
  CONSTRAINT t1_pk PRIMARY KEY (id)
);
INSERT INTO t1
SELECT level,
       'Description for ' || level
FROM   dual
CONNECT BY level <= 10;
COMMIT;

--Generate "usage"
SELECT *
FROM   t1;
SELECT *
FROM   t1
WHERE  id = 1;

--Create table for ILM policy
CREATE TABLE invoices (
  invoice_no    NUMBER NOT NULL,
  invoice_date  DATE   NOT NULL,
  comments      VARCHAR2(500)
)
PARTITION BY RANGE (invoice_date)
(
  PARTITION invoices_2016_q1 VALUES LESS THAN (TO_DATE('01/04/2016', 'DD/MM/YYYY')) TABLESPACE slow_storage_ts,
  PARTITION invoices_2016_q2 VALUES LESS THAN (TO_DATE('01/07/2016', 'DD/MM/YYYY')) TABLESPACE slow_storage_ts,
  PARTITION invoices_2016_q3 VALUES LESS THAN (TO_DATE('01/10/2016', 'DD/MM/YYYY')) TABLESPACE medium_storage_ts
    ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS,
  PARTITION invoices_2016_q4 VALUES LESS THAN (TO_DATE('01/01/2017', 'DD/MM/YYYY')) TABLESPACE medium_storage_ts
    ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS,
  PARTITION invoices_2017_q1 VALUES LESS THAN (TO_DATE('01/04/2017', 'DD/MM/YYYY')) TABLESPACE fast_storage_ts
    ILM ADD POLICY TIER TO medium_storage_ts READ ONLY SEGMENT AFTER 3 MONTHS OF NO ACCESS,
  PARTITION invoices_2017_q2 VALUES LESS THAN (TO_DATE('01/07/2017', 'DD/MM/YYYY')) TABLESPACE fast_storage_ts
    ILM ADD POLICY TIER TO medium_storage_ts READ ONLY SEGMENT AFTER 3 MONTHS OF NO ACCESS
)
ILM ADD POLICY ROW STORE COMPRESS BASIC SEGMENT AFTER 3 MONTHS OF NO ACCESS;

Test Case

###############
# FULL EXPORT #
###############

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
rm /tmp/expdat.dmp
sql / as sysdba<<EOF
   create or replace directory expdir as '/tmp';
EOF
expdp system/oracle full=y directory=expdir job_name=fullexp

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sql / as sysdba<<EOF
    startup
    alter pluggable database pdb1 close immediate;
    drop pluggable database pdb1 including datafiles;
    create pluggable database pdb1 admin user admin identified by admin;
    alter pluggable database all open;
    alter session set container=pdb1;
    create tablespace users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    create or replace directory expdir as '/tmp';
EOF

impdp system/oracle@pdb1 directory=expdir

sql / as sysdba<<EOF
    alter session set container=pdb1;
    select * from DBA_ILMPARAMETERS;
    SELECT policy_name,
       object_owner,
       object_name,
       object_type,
       inherited_from,
       enabled,
       deleted
    FROM   dba_ilmobjects
    ORDER BY 1;
    SELECT track_time,
        owner,
        object_name,
        segment_write,
        full_scan,
        lookup_scan
    FROM   dba_heat_map_seg_histogram
    ORDER BY 1, 2, 3;
EOF

#################
# SCHEMA EXPORT #
#################

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
rm /tmp/expdat.dmp
sql / as sysdba<<EOF
   create or replace directory expdir as '/tmp';
EOF
expdp system/oracle schemas=test directory=expdir job_name=schemaexp

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sql / as sysdba<<EOF
    startup
    alter pluggable database pdb1 close immediate;
    drop pluggable database pdb1 including datafiles;
    create pluggable database pdb1 admin user admin identified by admin;
    alter pluggable database all open;
    alter session set container=pdb1;
    create tablespace users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    create or replace directory expdir as '/tmp';
EOF

impdp system/oracle@pdb1 directory=expdir

sql / as sysdba<<EOF
    alter session set container=pdb1;
    select * from DBA_ILMPARAMETERS;
    SELECT policy_name,
       object_owner,
       object_name,
       object_type,
       inherited_from,
       enabled,
       deleted
    FROM   dba_ilmobjects
    ORDER BY 1;
    SELECT track_time,
        owner,
        object_name,
        segment_write,
        full_scan,
        lookup_scan
    FROM   dba_heat_map_seg_histogram
    ORDER BY 1, 2, 3;
EOF

################
# TABLE EXPORT #
################

export ORAENV_ASK=NO
export ORACLE_SID=UP19
. oraenv
export ORAENV_ASK=YES
rm /tmp/expdat.dmp
sql / as sysdba<<EOF
   create or replace directory expdir as '/tmp';
EOF
expdp system/oracle tables=test.t1,test.invoices directory=expdir job_name=tabexp

export ORAENV_ASK=NO
export ORACLE_SID=CDB2
. oraenv
export ORAENV_ASK=YES
sql / as sysdba<<EOF
    startup
    alter pluggable database pdb1 close immediate;
    drop pluggable database pdb1 including datafiles;
    create pluggable database pdb1 admin user admin identified by admin;
    alter pluggable database all open;
    alter session set container=pdb1;
    create tablespace users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
    create or replace directory expdir as '/tmp';
    CREATE USER TEST IDENTIFIED BY TEST QUOTA UNLIMITED ON USERS;
    GRANT CREATE SESSION, CREATE TABLE TO TEST;
    ALTER USER TEST QUOTA UNLIMITED ON FAST_STORAGE_TS;
    ALTER USER TEST QUOTA UNLIMITED ON MEDIUM_STORAGE_TS;
    ALTER USER TEST QUOTA UNLIMITED ON SLOW_STORAGE_TS;
EOF

impdp system/oracle@pdb1 directory=expdir

sql / as sysdba<<EOF
    alter session set container=pdb1;
    select * from DBA_ILMPARAMETERS;
    SELECT policy_name,
       object_owner,
       object_name,
       object_type,
       inherited_from,
       enabled,
       deleted
    FROM   dba_ilmobjects
    ORDER BY 1;
    SELECT track_time,
        owner,
        object_name,
        segment_write,
        full_scan,
        lookup_scan
    FROM   dba_heat_map_seg_histogram
    ORDER BY 1, 2, 3;
EOF

How to Detect and Repair Corruption in Your Oracle Database

Over time, corruptions can sneak into your Oracle Database. Such corruptions may surface at the most inconvenient times, like upgrades, migrations, or PDB conversions. Here is a quick way of checking your Oracle Database and increasing your chance of success.

Oracle Database Dictionary Check

The documentation states:

DBMS_DICTIONARY_CHECK is a read-only and lightweight PL/SQL package procedure that helps you identify Oracle Database dictionary inconsistencies that are manifested in unexpected entries in the Oracle Database dictionary tables or invalid references between dictionary tables. Oracle Database dictionary inconsistencies can cause process failures and, in some cases, instance crash.

How to Execute a Dictionary Check

  • First, the database must be on Oracle Database 19.22.0 or higher.
  • Start the check:
    set serverout on
    exec dbms_dictionary_check.full;
    
  • The database spools to the console and a trace file:
    dbms_dictionary_check on 02-MAR-2024 13:31:56
    ----------------------------------------------
    Catalog Version 19.0.0.0.0 (1900000000)
    db_name: UP19
    Is CDB?: NO
    Trace File:
    /u01/app/oracle/diag/rdbms/up19/UP19/trace/UP19_ora_3343_DICTCHECK.trc
    
                                    Catalog       Fixed
    Procedure Name                  Version    Vs Release    Timestamp      Result
    --------------------------- ... ---------- -- ---------- -------------- ------
    .- OIDOnObjCol              ... 1900000000 <=  *All Rel* 03/02 13:31:56 PASS
    .- LobNotInObj              ... 1900000000 <=  *All Rel* 03/02 13:31:56 PASS
    .- SourceNotInObj           ... 1900000000 <=  *All Rel* 03/02 13:31:56 PASS
    
    ...
    
    .- LobSeg                   ... 1900000000 <=  *All Rel* 03/02 13:31:56 PASS
    ---------------------------------------
    02-MAR-2024 13:31:56  Elapsed: 0 secs
    ---------------------------------------
    Found 0 potential problem(s) and 0 warning(s)
    Trace File:
    /u01/app/oracle/diag/rdbms/up19/UP19/trace/UP19_ora_3343_DICTCHECK.trc
    
    PL/SQL procedure successfully completed.
    
    • In this case, all checks pass.
  • If there are any warnings or errors, you can find a detailed description of the check in the documentation.
  • In multitenant, you run the check in the root container and all PDBs. The PDBs are most important.
  • In Oracle Database 23ai, the full check even has a fix mode which can correct certain known issues.

Words of Advice

  • Run the check in due time before a major maintenance operation. Although many checks have a fix option, you sometimes need to engage with Oracle Support. This can take time, so don’t do it the day before an important upgrade.

  • AutoUpgrade makes your life easier. Of course, it can run the dictionary check for you. Add the following to your config file:

    <prefix>.run_dictionary_health=full
    

    During the pre-upgrade analysis, AutoUpgrade executes the dictionary check and stores the full report in the precheck folder in the AutoUpgrade log directory.

The Artist Formerly Known As

If you think this sounds very similar to the health check script hcheck.sql, you are on to something. Oracle moved the health check code into the database. There’s no need to download the script separately; it’s always there.

Weed out those corruptions!

It’s a Wrap – Transportable Tablespaces to the Extreme

Recently, we hosted our webinar Cross Platform Migration – Transportable Tablespaces to the Extreme. You can now watch the recording on our YouTube channel.

The audience at our webinar was very active and asked many good questions. For all the details, you can read the Q&A and the slides.

Some of the attendees asked questions we were not able to answer during the webinar. Those questions – and answers – are included in the Q&A.

The New Procedure

For cross-platform and cross-endian migrations, we have a new procedure called M5. It replaces the former XTTS v4 Perl script that has been used for many migrations. We saw the need for change as Oracle Databases worldwide grow in size and complexity. M5 uses the latest RMAN and Data Pump technology to deliver the fastest possible migrations.

You can download the M5 scripts and read the procedure on My Oracle Support (Doc ID 2999157.1).

Next Webinar

Mark your calendar for our next webinar:

Move to Oracle Database 23c – Everything you need to know about Oracle Multitenant

Oracle Database 23c supports only the CDB architecture. If you haven’t migrated to Oracle Multitenant yet, you will with your upgrade to 23c. How do you approach it in the most efficient way? What are the other options? And why is this a migration unless you already have PDBs? All this and much more about how to work with Multitenant, how AutoUpgrade automates the entire move for you end-to-end, plus best practices, tips, and tricks. We’ll guide you, and you will be ready to move to Oracle Database 23c right away.

Sign up now and secure your seat.

All tech – no marketing!

Happy Migrating!

The Next-generation Cross-platform Migration for Oracle Database

I am very pleased to share that Oracle has officially launched a new method for cross-platform migrations of Oracle Database.

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)

The M5 script is the next-generation cross-platform transportable tablespace migration procedure for Oracle Database.

You can also use the script for cross-endian migrations, so it is perfect for migrations from AIX, HP-UX, or SPARC to Exadata Database Machine or any other little-endian platform.

Next-generation

Before we launched M5, you would use the XTTS v4 Perl script for such migrations.

Timeline of methods for cross-endian migrations

We launched the latest version of the XTTS v4 Perl script many years ago. Over the years, as databases grew bigger and bigger, we saw multiple issues with it, including:

  • No multisection backups for bigfile tablespaces
  • No encrypted tablespaces
  • Inefficient parallelism
  • Incomplete multitenant support

We wanted to solve all those issues with M5 – and we did! M5 uses newer RMAN functionality, and we made the procedure much simpler. It relies entirely on out-of-the-box RMAN functionality. On the source:

BACKUP ... TABLESPACE ... ;

On target we use:

RESTORE ALL FOREIGN DATAFILES ... ;

The latter command was introduced in Oracle Database 18c and enhanced in Oracle Database 19c, which sets the minimum release requirements for the source and target databases.
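As a rough sketch of the command pair (tablespace name, paths, and backup piece name are hypothetical; the real M5 procedure in Doc ID 2999157.1 wraps the commands in driver scripts and adds incremental backups):

```
# On the source (e.g. a big-endian platform):
BACKUP INCREMENTAL LEVEL 0 ALLOW INCONSISTENT
  FORMAT '/stage/%U'
  TABLESPACE users;

# On the target (little endian), after transferring the backup pieces:
RESTORE ALL FOREIGN DATAFILES TO NEW
  FROM BACKUPSET '/stage/backup_piece_1';
```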

Want to Know More?

We have a webinar later today about this new method. If you are interested, we still have open seats.

Cross Platform Migration – Transportable Tablespaces to the Extreme, February 22, 16:00 CET

Next week, we will add the recording to our YouTube channel. Be sure to subscribe so you don’t miss out on anything.

If you want a sneak peek at the slides, go ahead.

I have an elaborate blog post series about cross-platform migrations. I will soon update it with more information about the M5 migration method.

Happy Migrating!

Kickstart Learning in 2024

What better way to start the new Year of the Dragon than by learning something new? If that sounds like a good idea, you’ll be pleased to know that we have two new webinars coming your way shortly.

Like the previous webinars, it’s all tech, no marketing!

Cross Platform Migration – Transportable Tablespaces to the Extreme

Recently, the team and I worked with two big customers who had to migrate cross-endian to Exadata. Both customers had huge and very active databases and needed to migrate with as little downtime as possible.

In this webinar, you can hear details on how one of the customers performed the migration using a new procedure developed by Oracle. In addition, we will share the lessons learned and our new best practices for such migrations.

> Transportable Tablespaces and Full Transportable Export/Import aren’t a secret feature. But there are many things to take care of when you migrate cross-platform, and especially cross-endian. Today we will give you insights and show you how we worked together to migrate a 200TB database writing up to 15TB redo per day from Oracle SPARC SuperCluster to Oracle Exadata with a database downtime of less than 6 hours. Follow us into the extreme. Extreme load. Extreme complexity. And an extremely skilled team working together. Such projects aren’t done in 4 weeks. And we will tell you the entire story, from start to finish.

Sign up here.

Move to Oracle Database 23c – Everything you need to know about Oracle Multitenant

Oracle Database 23c is on its way. Hopefully, it doesn’t come as a surprise to you that this release supports the multitenant architecture only. If you haven’t migrated your Oracle Database yet, you’ll need to do so as part of the upgrade to Oracle Database 23c.

This webinar shares all the things you need to know to migrate a database to the multitenant architecture successfully, including essential parts like Data Guard, backup, and your rollback plans.

> Oracle Database 23c supports only the CDB architecture. If you haven’t migrated to Oracle Multitenant yet, you will with your upgrade to 23c. How do you approach it in the most efficient way? What are the other options? And why is this a migration unless you already have PDBs? All this and much more about how to work with Multitenant, how AutoUpgrade automates the entire move for you end-to-end, plus best practices, tips, and tricks. We’ll guide you, and you will be ready to move to Oracle Database 23c right away.

Sign up here.

Registration

The webinars are free, of course. You just need to sign up.

Our entire team will be present during the webinar to answer all your questions. I promise we won’t stop the webinar until we have answered all the questions. If you cannot participate, you can find the recording shortly after the webinar.

If you can’t wait, you can start learning already by watching the previous webinars.

Register your seat now.

How to Export and Import Statistics Faster Using DBMS_STATS in Parallel

When you migrate Oracle Database, you often need to transport statistics using DBMS_STATS. Also, during migrations, you want to minimize the downtime. How can you transfer statistics as quickly as possible?

Export and Import

The easiest solution is to use dbms_stats.export_database_stats and dbms_stats.import_database_stats. But the procedures have no parallel capabilities, so if your Oracle Database has many objects, they take a long time.

Parallel Export and Import

dbms_stats also allows you to export and import statistics at the schema or table level. You can use this to your advantage and run multiple export or import commands at the same time.

Imagine a database with four schemas.

If you export database stats, the database creates a list of statistics from all four schemas. Then, it exports them one by one.

  • Session 1: exec dbms_stats.export_database_stats( ... );

In contrast, if you start four different sessions, you can export one schema in each session and thus finish much faster.

  • Session 1: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA1');
  • Session 2: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA2');
  • Session 3: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA3');
  • Session 4: exec dbms_stats.export_schema_stats(ownname=>'SCHEMA4');

Benchmark

This benchmark gives you an idea of how much time you can save.

It is an Oracle E-Business Suite (EBS) database with 1,400 schemas and 150,000 tables and indexes.

I compare exporting and importing statistics at the database level and at the schema level. For the schema level, I divide the schemas into chunks and process them simultaneously from separate sessions.

Method               Time to export  Time to import  Total
-------------------  --------------  --------------  -------
Database             4m 5s           21m 5s          25m 10s
Schema, parallel 8   57s             3m 58s          4m 55s
Schema, parallel 16  53s             3m 45s          4m 38s

I can save more than 20 minutes in this benchmark by doing the work in parallel. Between parallel 8 and parallel 16, there is not much to gain, probably because of contention.
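The schema chunking can be sketched directly in SQL. A hedged sketch (the chunk count of 8 is arbitrary): NTILE splits the schemas, ordered by table count, into equally sized buckets, and each bucket goes to its own session:

```sql
--Assign each schema to one of 8 chunks of roughly equal table count,
--largest schemas first; each chunk is exported by a separate session
SELECT owner,
       NTILE(8) OVER (ORDER BY COUNT(*) DESC) AS chunk_no
FROM   dba_tables
GROUP  BY owner;
```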

How to Export and Import in Parallel Using DBMS_STATS

Here is some simple code you can use to export and import in parallel.

I haven’t coded for years, so use at your own risk :)

  1. Create a dedicated user for the statistics staging table and create a control table with the list of schemas to process.

    drop user statuser cascade;
    grant dba to statuser identified by statuser;
    alter user statuser quota unlimited on users;
    --Create the DBMS_STATS staging table
    exec dbms_stats.create_stat_table('STATUSER', 'DBSTATS', 'USERS');
    --Populate control table
    declare
       l_stmt VARCHAR2(1000);
       l_ctl_tab_name ALL_TABLES.TABLE_NAME%TYPE := 'DOH$STATXFERCTL';
       l_stat_tab_owner ALL_USERS.USERNAME%TYPE := 'STATUSER';
    begin
       --Create control table and populate it
       --Order schemas by how many tables they have
       l_stmt := 'CREATE TABLE ' || l_stat_tab_owner || '.' || l_ctl_tab_name; 
       l_stmt :=l_stmt || ' as select o.owner stat_id, count(o.table_name) as cnt, 0 exp_status, 0 imp_status from dba_tables o, dba_users u where u.username not in (''SYS'', ''SYSTEM'', ''' || l_stat_tab_owner || ''') and u.oracle_maintained=''N'' and o.owner=u.username group by owner order by 2 desc';
       execute immediate l_stmt;
    end;
    /
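Before starting the workers, it may be worth verifying what ended up in the control table. This query lists the schemas queued for export, using the table and column names created above:

```sql
--List schemas queued for export, largest first
--exp_status: 0 = pending, 1 = in progress, 2 = done
select stat_id, cnt, exp_status
from   statuser.doh$statxferctl
order  by cnt desc;
```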
    
  2. Start any number of database sessions and run the code below in each. The code selects a schema from the control table and performs a schema-level export of statistics into the staging table.

    declare
       l_cur_stat_id ALL_USERS.USERNAME%TYPE;
       l_statown ALL_USERS.USERNAME%TYPE := 'STATUSER';
       l_stattab ALL_TABLES.TABLE_NAME%TYPE := 'DBSTATS';
       l_ctl_tab_name ALL_TABLES.TABLE_NAME%TYPE := 'DOH$STATXFERCTL';
       l_stmt VARCHAR2(1000);
    begin
       begin
          --Loop until you reach "no_data_found exception"
          while true loop
             --Select a schema from the control table
             --Select for update to ensure only one session processes a schema
             --Note: ROWNUM is applied before ORDER BY, so ordering the query
             --would have no effect; the control table was created ordered by
             --table count, so large schemas tend to be picked first
             l_stmt := 'select stat_id from ' || l_statown || '.' || l_ctl_tab_name || ' where exp_status = 0 and rownum=1 for update';
             execute immediate l_stmt into l_cur_stat_id;
             
             --Mark the schema as "in progress"
             --Commit to release the lock on the control table
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set exp_status=1 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
    
             --Perform the schema level export into DBMS_STATS staging table
             dbms_stats.export_schema_stats (ownname=>l_cur_stat_id, stattab=>l_stattab,statid=>l_cur_stat_id, statown=>l_statown);
    
             --Mark the schema as completed in the control table
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set exp_status=2 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
          end loop;
       exception when no_data_found then
          --"No_data_found" exception occurs when there are no more rows/schemas to process	  
          null;
       end;
    end;
    /
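Opening the sessions by hand works fine, but you can also let the database run the workers for you. Here is a sketch using DBMS_SCHEDULER; it assumes you have wrapped the worker block above in a stored procedure (the name exp_stats_worker is my own, not from the original code):

```sql
--Launch four background workers, each running the export worker block
--(assumed to be wrapped in a procedure called STATUSER.EXP_STATS_WORKER)
begin
   for i in 1 .. 4 loop
      dbms_scheduler.create_job(
         job_name   => 'STATUSER.EXP_STATS_WORKER_' || i,
         job_type   => 'STORED_PROCEDURE',
         job_action => 'STATUSER.EXP_STATS_WORKER',
         enabled    => true,
         auto_drop  => true);
   end loop;
end;
/
```

Each job picks schemas from the control table until none are left, exactly like a manually started session would.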
    
  3. Now move the staging table to the target database (for example, with a Data Pump export of the STATUSER schema, which also brings along the control table). Start any number of database sessions and run the code below in each. The code selects a schema from the control table and performs a schema-level import of statistics from the staging table.

    declare
       l_cur_stat_id ALL_USERS.USERNAME%TYPE;
       l_statown ALL_USERS.USERNAME%TYPE := 'STATUSER';
       l_stattab ALL_TABLES.TABLE_NAME%TYPE := 'DBSTATS';
       l_ctl_tab_name ALL_TABLES.TABLE_NAME%TYPE := 'DOH$STATXFERCTL';
       l_stmt VARCHAR2(1000);
       no_stats_imp EXCEPTION;
       PRAGMA EXCEPTION_INIT(no_stats_imp, -20000);
    begin
       begin
            --Loop until you reach "no_data_found exception"
          while true loop
             --Select a schema from the control table
             --Select for update to ensure only one session processes a schema
             l_stmt := 'select stat_id from ' || l_statown || '.' || l_ctl_tab_name || ' where imp_status = 0 and rownum=1 for update';
             execute immediate l_stmt into l_cur_stat_id;

             --Mark the schema as "in progress"
             --Commit to release the lock on the control table
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set imp_status=1 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
    
             begin
                --Perform the schema level import from the DBMS_STATS staging table
                dbms_stats.import_schema_stats (ownname=>l_cur_stat_id, stattab=>l_stattab,statid=>l_cur_stat_id, statown=>l_statown);
             exception when no_stats_imp THEN
                --An empty schema, i.e., no tables or indexes			
                null;
             end;
     		
             --Mark the schema as completed in the control table             
             l_stmt := 'update ' || l_statown || '.' || l_ctl_tab_name || ' set imp_status=2 where stat_id=:x1';
             execute immediate l_stmt using l_cur_stat_id;
             commit;
          end loop;
       exception when no_data_found then
          --"No_data_found" exception occurs when there are no more rows/schemas to process	  	  
          null;
       end;
    end;
    /   
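While the workers are running, you can follow the progress from another session with a simple status query against the control table created earlier:

```sql
--imp_status: 0 = pending, 1 = in progress, 2 = done
select imp_status, count(*) as schemas
from   statuser.doh$statxferctl
group  by imp_status
order  by imp_status;
```

When all rows show status 2, the import is complete.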
    

Does Exporting Database Statistics Include the Dictionary Statistics?

I recently worked on a migration of Oracle Database to a new platform. The customer was sensitive to downtime. We looked at each item in the migration run book and asked: “Can we make it faster?”

We discussed exporting database statistics, and the question came up: Does dbms_stats.export_database_stats include dictionary statistics?

The documentation states:

This procedure exports statistics for all objects in the database and stores them in the user statistics tables identified by statown.stattab.

What exactly does “all objects” mean?

Let’s Find Out

Here is a little test case.

I gather dictionary statistics. All but 77 dictionary tables now have statistics (it’s intentional that some are left un-analyzed):

exec dbms_stats.gather_dictionary_stats;
select count(*) cnt 
from   dba_tables 
where  owner in ('SYS', 'SYSTEM') 
       and last_analyzed is null;
	   
       CNT
----------
        77

I export database statistics into a staging table:

grant dba to statuser identified by statuser;
begin
   dbms_stats.create_stat_table(
      'STATUSER', 
      'DBSTATS');
   dbms_stats.export_database_stats(
      stattab=>'DBSTATS',
      statown=>'STATUSER');
   dbms_stats.gather_table_stats(
      'STATUSER', 
      'DBSTATS');
end;
/

I delete dictionary statistics. Now, 1674 dictionary tables are without statistics:

exec dbms_stats.delete_dictionary_stats(force=>true);
select count(*) cnt 
from   dba_tables 
where  owner in ('SYS', 'SYSTEM') 
       and last_analyzed is null;

       CNT
----------
      1674

I import statistics from the staging table. I am back at 77 dictionary tables without statistics.

begin
   dbms_stats.import_database_stats(
      stattab=>'DBSTATS',
      statown=>'STATUSER', 
      force=>TRUE);
end;
/
select count(*) cnt 
from   dba_tables 
where  owner in ('SYS', 'SYSTEM') 
       and last_analyzed is null;

       CNT
----------
        77

Conclusion: dbms_stats.export_database_stats includes dictionary statistics.

What about Fixed Objects Statistics?

I can use the same test case as above (more or less). Here is the query I use to check for missing statistics on fixed objects:

select count(*) cnt
from   dba_tab_statistics 
where  object_type='FIXED TABLE'
       and last_analyzed is null;

The answer is: dbms_stats.export_database_stats does not include fixed objects statistics.

To transfer fixed object statistics, you must use the dedicated procedures: dbms_stats.export_fixed_objects_stats and dbms_stats.import_fixed_objects_stats.
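A minimal sketch of transferring fixed object statistics through the same staging table used earlier (the statid value FIXED is my own choice, used simply to tag the rows):

```sql
--On the source database: export fixed objects statistics
--into the existing staging table
begin
   dbms_stats.export_fixed_objects_stats(
      stattab => 'DBSTATS',
      statid  => 'FIXED',
      statown => 'STATUSER');
end;
/

--On the target database: import them from the staging table
begin
   dbms_stats.import_fixed_objects_stats(
      stattab => 'DBSTATS',
      statid  => 'FIXED',
      statown => 'STATUSER');
end;
/
```

Because fixed object statistics depend heavily on workload and hardware, gathering them fresh on the target system at a representative time is usually preferable to transferring them.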

Is It a Good Idea to Transfer Dictionary Statistics?

Well… It depends.

If time allows, I recommend you always gather dictionary statistics in the target database at the appropriate time.

If you want to transfer statistics, this is my recommendation:

Type                      Command                        Recommendation
Full export               full=y                         Should be acceptable
Full transportable        full=y transportable=always    Should be acceptable
Transportable tablespace  transport_tablespaces          Hmm, not sure it’s a good idea
Tablespace export         tablespaces=...                Hmm, not sure it’s a good idea
Schema export             schemas=...                    Hmm, probably a bad idea
Table export              tables=...                     No, most likely a bad idea