My Query Performs Horrible, Fix It!

Many years ago, I used to work as an operational DBA, and I remember colleagues asking the following:

My query runs very slow. The other day it all ran fine, but now it’s hanging. What’s going on? Fix it!

They might even start the occasional rant about how relational databases don't scale, and all the other blah blah blah.

Sound familiar? What if you could solve the problem by just running:

EXEC DBMS_SPM.ADD_VERIFIED_SQL_PLAN_BASELINE('<sql_id>');

The query runs at warp speed again, and the users are happy.

In Oracle Database 19.22.0, that’s possible.

What’s Going On

The procedure uses SQL Plan Management: the best solution to ensure plan stability.

It will:

  • Search your SQL tuning sets (including the automatic SQL tuning set), AWR, and the cursor cache to find other plans for that SQL.
  • Test each of the plans to find the best one.
  • Create a SQL plan baseline with the best plan.

During the next execution, the database will use the best plan.

Problem solved!
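
If you want to verify the result, the new baseline shows up in DBA_SQL_PLAN_BASELINES:

select sql_handle, plan_name, origin, enabled, accepted
from   dba_sql_plan_baselines
order  by created desc
fetch  first 5 rows only;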

SQL Plan Management

I’m a huge fan of SQL Plan Management. It is one of the most underrated features in Oracle Database.

What is your biggest fear when you upgrade? Often, the answer is changing plans. SQL Plan Management effectively solves that problem, and we recommend it as part of our performance stability prescription.
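
If you want a head start before the upgrade, a minimal sketch is to capture the current plan of a critical statement straight from the cursor cache ('&sql_id' is a placeholder for the statement you care about):

declare
   l_plans pls_integer;
begin
   -- Creates an accepted SQL plan baseline from the plan currently in the cursor cache
   l_plans := dbms_spm.load_plans_from_cursor_cache(sql_id => '&sql_id');
   dbms_output.put_line(l_plans || ' plan(s) loaded');
end;
/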

Afterword

Truth be told, I've never been a performance expert. When I was tasked with a performance issue, I knew it would take a lot of time.

Over time, I found a better way of solving these issues. I would turn my chair and look directly into the wrinkled eyes of Frank, my trusted grey-haired colleague.

Using my cutest little puppy face and soft voice, young me would ask for advice and smoothly hand over the task to him.

But not everyone has a Frank in their office, so DBMS_SPM.ADD_VERIFIED_SQL_PLAN_BASELINE is very useful.

Experience and grey hair rock!

Further Reading

Is It Possible to Migrate SQL Plan Baselines Before You Migrate Data?

In a recent migration, a customer had millions of SQL plan baselines. The customer performed a Full Transportable Export/Import, but the export of the SQL plan baselines took a lot of time. I suggested moving the SQL plan baselines the day before the downtime window.

But can you move SQL plan baselines into an empty database before you move the data? Do SQL plan baselines care about the underlying schema objects the SQL accesses?

SQL Plan Baselines and Underlying Schema Objects

The optimizer matches a SQL statement with a SQL plan baseline using the signature of the SQL (see appendix). Originally, I thought that it was the SQL ID, but I was wrong:

SQL statements are matched to SQL plan baselines using the signature of the SQL statement. A signature is a unique SQL identifier generated from the normalized SQL text (uncased and with whitespaces removed).

SQL Plan Management uses the signature only to match SQLs to SQL plan baselines. There is no reference to the underlying schema objects, which is evident since the same SQL from different schemas shares the same signature.
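
You can see the normalization at work with DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE (the same function used in the appendix); the same statement with different casing and whitespace produces the same signature:

select dbms_sqltune.sqltext_to_signature('select * from dual')     as sig_1,
       dbms_sqltune.sqltext_to_signature('SELECT  *  FROM   DUAL') as sig_2
from   dual;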

The conclusion is that it is safe to import SQL plan baselines into an empty database before you move the data.

Empty Database and Plan Evolution

But there is a catch. You don’t want plan evolution to happen in a database with no data.

Plan evolution is the process that validates whether a plan performs better than existing ones and, if so, marks the plans as accepted. This can have undesired side effects in an empty database.

There are two ways to avoid this:

  • Stop the SPM Evolve Advisor task from searching for alternative plans:

    BEGIN
       DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
          task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
          parameter => 'ALTERNATE_PLAN_BASELINE',
          value => '');
       DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
          task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
          parameter => 'ALTERNATE_PLAN_SOURCE',
          value => '');
    END;
    /   
    
  • Don’t allow plans to auto-evolve using the accept_plans parameter:

    BEGIN
       DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
          task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
          parameter => 'ACCEPT_PLANS',
          value => 'FALSE');
    END;
    /
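
Either way, you can verify the current settings of the task in the advisor views:

select parameter_name, parameter_value
from   dba_advisor_parameters
where  task_name = 'SYS_AUTO_SPM_EVOLVE_TASK'
and    parameter_name in ('ALTERNATE_PLAN_BASELINE',
                          'ALTERNATE_PLAN_SOURCE',
                          'ACCEPT_PLANS');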
    

How to Migrate SQL Plan Baselines

DBMS_SPM

The easiest way to move SQL plan baselines is with DBMS_SPM. You can find a good example of using it in the documentation.
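
In outline, the flow looks like this (a sketch; MIGR_SPB and SH are placeholder names, and you move the staging table with Data Pump or over a database link):

-- In the source database: create a staging table and pack the baselines into it
begin
   dbms_spm.create_stgtab_baseline (
      table_name  => 'MIGR_SPB',
      table_owner => 'SH');
end;
/
declare
   l_count number;
begin
   -- pack_stgtab_baseline is a function; it returns the number of packed baselines
   l_count := dbms_spm.pack_stgtab_baseline (
      table_name  => 'MIGR_SPB',
      table_owner => 'SH');
end;
/
-- In the target database, after moving the staging table: unpack the baselines
declare
   l_count number;
begin
   l_count := dbms_spm.unpack_stgtab_baseline (
      table_name  => 'MIGR_SPB',
      table_owner => 'SH');
end;
/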

Data Pump

Data Pump exports SQL plan baselines as part of the SQL Management Base (SMB), but only in a full export. To export just the SMB:

expdp ... full=y include=SMB

Please note that the SMB includes the following as well:

  • SQL Profiles
  • SQL Patches
  • SQL Plan Directives
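
The import side works the same way; the SMB can only be imported as part of a full import:

impdp ... full=y include=SMB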

How to Exclude SQL Plan Baselines

If you migrate SQL plan baselines in advance, you should exclude them from the production migration. As described above, the SQL plan baselines are part of the SQL Management Base. You exclude it using:

expdp ... exclude=SMB

That will exclude the items below. Depending on how you move the SQL plan baselines, and whether you need the other items in the target database, you might need to move them manually:

  • SQL Profiles – you can move SQL Profiles using DBMS_SQLTUNE.
  • SQL Patches – you can move SQL Patches using DBMS_SQLDIAG (see the sketch after this list).
  • SQL Plan Directives – you can move SQL plan directives using DBMS_SPD. Only relevant if you use Adaptive Statistics.
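
For SQL Patches, the DBMS_SQLDIAG staging procedures mirror the SQL profile procedures shown later in this post. A minimal sketch (MIGR_SQLPATCH_1 is a placeholder name; verify the parameter names in the documentation for your release):

-- In the source database: create a staging table and pack the SQL patches
begin
   dbms_sqldiag.create_stgtab_sqlpatch (
      table_name  => 'MIGR_SQLPATCH_1',
      schema_name => 'SH');
   dbms_sqldiag.pack_stgtab_sqlpatch (
      staging_table_name   => 'MIGR_SQLPATCH_1',
      staging_schema_owner => 'SH');
end;
/
-- In the target database, after moving the staging table: unpack the SQL patches
begin
   dbms_sqldiag.unpack_stgtab_sqlpatch (
      replace              => true,
      staging_table_name   => 'MIGR_SQLPATCH_1',
      staging_schema_owner => 'SH');
end;
/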

Appendix

SQL Signature

The signature of an SQL is:

A numeric hash value computed using a SQL statement text that has been normalized for case insensitivity and white space. It uniquely identifies a SQL statement. The database uses this signature as a key to maintain SQL management objects such as SQL profiles, SQL plan baselines, and SQL patches.

Let me use an example. If you have the following SQL text:

select * from dual

The signature is:

14103420975540283355

There is a function that calculates the signature from the SQL text:

col signature format 99999999999999999999999
select dbms_sqltune.sqltext_to_signature ('select * from dual') as signature from dual;

Acknowledgment

Thanks to Nigel Bayliss, optimizer product manager, for valuable assistance.

Zero Downtime Migration – Logical Migration and the Final Touches

Logical migration has many benefits, but there is also a catch. It does not transfer some of the nice metadata that makes your life easier – such as:

  • AWR
  • SQL Plan Baselines
  • SQL Profiles

Further, we can make a few tweaks to avoid performance problems after the migration. This post applies to migrations into any database – except Autonomous Database. In an Autonomous Database, you don't need to worry about such things; the database fixes the issues for you.

AWR

If you are licensed to use AWR in your source database, you can transport the AWR data into your target database.

  1. In your source database, execute the script $ORACLE_HOME/rdbms/admin/awrextr.sql. It extracts the AWR data and loads it into a Data Pump dump file.
  2. Transfer the dump file to your target database.
  3. In your target database, import the AWR data using the script $ORACLE_HOME/rdbms/admin/awrload.sql.

When you look at the old AWR data in your target database, use it with caution. The AWR data was collected on a completely different system (CPU, memory, settings, database release, initialization parameters, etc.). Your new system is very different. Keep that in mind when you compare data from before and after the migration.

SQL Plan Baselines

If you are using SQL Plan Management to ensure plan stability, you want to transport your baselines as well.

First, in the source database, create a staging table; it is used to transport the baselines. Create the staging table in a schema that you will be migrating.

begin
   dbms_spm.create_stgtab_baseline (
      table_name   => 'MIGR_SPB_1',
      table_owner  => 'SH');
end;
/

Next, extract the baselines into the staging table. The enabled and fixed parameters act as filters, so this example packs the plans that are enabled and fixed. The staging table is then migrated together with my data:

declare
   l_count number;
begin
   l_count := dbms_spm.pack_stgtab_baseline (
      table_name   => 'MIGR_SPB_1',
      table_owner  => 'SH',
      enabled      => 'YES',
      fixed        => 'YES');
end;
/

After the migration, in my target database, I can unpack the baselines from the staging table into the SQL Management Base:

declare
   l_count number;
begin
   l_count := dbms_spm.unpack_stgtab_baseline (
      table_name   => 'MIGR_SPB_1',
      table_owner  => 'SH');
end;
/

That’s it. Now SQL Plan Management will ensure that the same plans are used for the statements that are in the baselines.

If you are not using SQL Plan Management, you should look into it. In my opinion, it is one of the most underrated features in the Oracle Database. Many issues are caused by plan changes, which is exactly what you can avoid with SQL Plan Management.

SQL Profiles

If you have SQL Profiles, you can transport them in a way very similar to transporting SQL Plan Baselines.

First, in the source database, create a staging table in a schema that you will be migrating:

begin
   dbms_sqltune.create_stgtab_sqlprof (
      table_name   => 'MIGR_SQLP_1',
      schema_name  => 'SH');
end;
/

Next, put the SQL Profiles into the staging table. This example will load all profiles from the DEFAULT category, but there are options to cherry-pick those of interest:

begin
   dbms_sqltune.pack_stgtab_sqlprof (
      staging_table_name   => 'MIGR_SQLP_1',
      staging_schema_owner => 'SH');
end;
/

Finally, in my target database, I can unpack the profiles from the staging table into the data dictionary:

begin
   dbms_sqltune.unpack_stgtab_sqlprof (
      staging_table_name   => 'MIGR_SQLP_1',
      staging_schema_owner => 'SH',
      replace              => TRUE);
end;
/

Statistics

Should you do anything about object statistics?

  • If you have migrated into the same release, the answer is no.
  • If you have migrated into a higher release, the answer is it depends. It is not required, but if you come from a very old release, you might consider it.

Imagine a migration from 11.2.0.4 to 19c. Significant changes have been made to histograms in that period. If you want to benefit from the new, improved histogram types, you should re-gather statistics. If you don’t do it, the histograms will gradually change over time when the statistics become stale. Some people prefer taking such a change immediately because they don’t like the idea of things happening over the course of time.

If you want to refresh statistics:

exec dbms_stats.delete_database_stats;
exec dbms_stats.gather_database_stats;

To make it a little faster:

exec dbms_stats.set_global_prefs('CONCURRENT','AUTOMATIC');

To make it even faster – requires CPU:

exec dbms_stats.set_global_prefs('DEGREE', DBMS_STATS.AUTO_DEGREE);

To make it as fast as possible – requires a lot of CPU:

exec dbms_stats.set_global_prefs('CONCURRENT','AUTOMATIC');
exec dbms_stats.set_global_prefs('DEGREE', DBMS_STATS.AUTO_DEGREE);

Be sure to set the preferences back to their original values when you are done gathering statistics. Nigel Bayliss has an interesting blog post on How to gather optimizer statistics fast.
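
Capture the current values before you change anything, so you know what to set them back to. A sketch ('OFF' and NULL are the usual defaults; use whatever GET_PREFS returned in your database):

-- Before: note the current values
select dbms_stats.get_prefs('CONCURRENT') as concurrent_pref,
       dbms_stats.get_prefs('DEGREE')     as degree_pref
from   dual;

-- After gathering: restore the preferences (a NULL value resets a preference to its default)
exec dbms_stats.set_global_prefs('CONCURRENT', 'OFF');
exec dbms_stats.set_global_prefs('DEGREE', NULL);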

Dictionary Statistics

After the migration, the target database has many more tables, indexes, views and so on. This is represented in the data dictionary as additional rows in e.g. OBJ$. All that extra data probably means that the dictionary statistics are stale. Stale dictionary statistics can lead to all sorts of troubles.

Immediately after ZDM has completed the migration, you should re-gather dictionary statistics to avoid problems caused by stale statistics:

exec dbms_stats.gather_dictionary_stats;

Or even better:

begin
   dbms_stats.gather_schema_stats('SYS');
   dbms_stats.gather_schema_stats('SYSTEM');
end;
/

I prefer gathering statistics on SYS and SYSTEM independently. My colleagues and I have seen issues from time to time with dbms_stats.gather_dictionary_stats.

Fixed Object Statistics

After the migration, and once the system has been properly warmed up, you should re-gather fixed objects statistics. It is important that you don't gather fixed objects statistics right after the migration. You must wait until you have run a representative workload. If you wait a few days, or until the next weekend, you are most likely on the safe side:

exec dbms_stats.gather_fixed_objects_stats;

This recommendation is similar to the one we give for after an upgrade. Read the blog post to learn more about fixed objects statistics in general. Also, you can find a very useful piece of code that gathers fixed objects statistics after seven days via a scheduler job.
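
The idea behind that piece of code is a one-off scheduler job. A minimal sketch (GATHER_FIXED_STATS_LATER is a name I made up):

begin
   dbms_scheduler.create_job (
      job_name   => 'GATHER_FIXED_STATS_LATER',
      job_type   => 'PLSQL_BLOCK',
      job_action => 'begin dbms_stats.gather_fixed_objects_stats; end;',
      start_date => systimestamp + interval '7' day,
      enabled    => true,
      auto_drop  => true,
      comments   => 'One-off: gather fixed objects statistics a week after migration');
end;
/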

SQL Tuning Sets

This is something that you should do before you start the migration. Capture your workload from the source database into SQL Tuning Sets.

What is a SQL Tuning Set? It is an object that contains information about a number of SQLs. For each SQL the following is collected:

  • SQL Statement – the SQL text itself
  • Context – executing schema, binds, and environment (like NLS settings)
  • Statistics – execution statistics like elapsed time, CPU time, buffer gets, rows processed
  • Plan – last, but not least, the actual execution plan

You can then transport the SQL Tuning Set to the target database. If an SQL is now running slowly, you can use the information from the SQL Tuning Set to identify why. For an SQL, you can see the old and new plans, compare the number of buffer gets and CPU time, and so forth. You can also bring back the old plan if the new plan is less effective.

First, in the source you create a SQL Tuning Set:

exec dbms_sqlset.create_sqlset (sqlset_name => 'MIGR_STS_1', description => 'For migration - from source');

Next, capture statements from AWR. This will take the top 5000 statements from your entire AWR ordered by elapsed time:

declare
   begin_id number;
   end_id   number;
   cur      sys_refcursor;
begin
   select min(snap_id), max(snap_id)
   into   begin_id, end_id
   from   dba_hist_snapshot;

   open cur for
      select value(p)
      from table(dbms_sqltune.select_workload_repository(
         begin_snap       => begin_id,
         end_snap         => end_id,
         basic_filter     => 'parsing_schema_name not in (''SYS'')',
         ranking_measure1 => 'elapsed_time',
         result_limit     => 5000,
         attribute_list   => 'ALL')) p;

   dbms_sqltune.load_sqlset('MIGR_STS_1', cur);

   close cur;
end;
/

You can also sample directly from cursor cache. Start the sampling and then run your workload. This example will sample every minute for 15 minutes:

begin
   dbms_sqlset.capture_cursor_cache_sqlset(
      sqlset_name     => 'MIGR_STS_1',
      time_limit      => 900,
      repeat_interval => 60,
      capture_option  => 'MERGE',
      capture_mode    => DBMS_SQLTUNE.MODE_ACCUMULATE_STATS,
      basic_filter    => 'parsing_schema_name not in (''SYS'')',
      sqlset_owner    => NULL,
      recursive_sql   => 'HAS_RECURSIVE_SQL');
end;
/

Finally, put all that information into a staging table, so it can be transported to the target database. Put the staging table into a schema that you plan to migrate. This way, you ensure that the SQL Tuning Set is migrated together with your data. In my example, it is the schema SH:

begin
   dbms_sqltune.create_stgtab_sqlset (
      schema_name          => 'SH',
      table_name           => 'MIGR_STGTAB_1');
   dbms_sqltune.pack_stgtab_sqlset (
      sqlset_name          => 'MIGR_STS_1',
      staging_schema_owner => 'SH',
      staging_table_name   => 'MIGR_STGTAB_1');
end;
/

After the migration, in my target database, I can unpack the SQL Tuning Set from the staging table into the data dictionary:

begin
   dbms_sqltune.unpack_stgtab_sqlset (
      sqlset_name          => '%',
      replace              => true,
      staging_schema_owner => 'SH',
      staging_table_name   => 'MIGR_STGTAB_1');
end;
/

Now – if you get into trouble with a misbehaving SQL – you can use the information in the SQL Tuning Set to investigate the problem.
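
For instance, you can pull the captured statistics and plans for a misbehaving statement straight from the dictionary (replace '&sql_id' with the statement in question):

select sql_id, plan_hash_value, elapsed_time, cpu_time, buffer_gets
from   dba_sqlset_statements
where  sqlset_name = 'MIGR_STS_1'
and    sql_id = '&sql_id';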

Conclusion

When you perform a logical migration, there are some additional tasks that you should consider at least for your mission-critical databases. These steps will help you avoid performance issues after the migration.

In addition to the topics discussed here, I recommend that you also watch our webinar Performance Stability, Tips and Tricks and Underscores (slides). It has the complete prescription to avoid performance problems after upgrade and migration.
