During the next execution, the database will use the best plan.
Problem solved!
SQL Plan Management
I’m a huge fan of SQL Plan Management. It is one of the most underrated features in Oracle Database.
What is your biggest fear when you upgrade? Often, the answer is changing plans. SQL Plan Management effectively solves that problem, and we recommend it as part of our performance stability prescription.
Afterword
In fact, I’ve never been a performance expert. When I was tasked with a performance issue, I knew it would take a lot of time.
Over time, I found a better way of solving these issues. I would turn my chair and look directly into the wrinkled eyes of Frank, my trusted grey-haired colleague.
Using my cutest little puppy face and soft voice, young me would ask for advice and smoothly hand over the task to him.
But not everyone has a Frank in their office, so DBMS_SPM.ADD_VERIFIED_SQL_PLAN_BASELINE is very useful.
In the last three customer cases I worked on, we solved the performance problems by using the latest Release Update plus the Data Pump bundle patch. This is my number one piece of advice.
Skip Statistics
I recommend that you skip statistics when you export. Instead, do one of the following:
1. Regather new statistics on the target database.
2. Import statistics from the source database using DBMS_STATS.
3. Import statistics from a test system using DBMS_STATS.
Options 1 and 3 are especially appealing if your target database is very different from the source. Imagine going from AIX to Exadata, from 11.2.0.4 to 19c, and from non-CDB to PDB. The platform itself is very different; Exadata has superior capabilities. In addition, it is a new version with other histogram types and a different architecture. In this case, it does make sense to get new statistics that better reflect the new environment.
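For reference, skipping statistics during export is done with the Data Pump exclude parameter, and options 2 and 3 use a DBMS_STATS staging table. Here is a minimal sketch; the schema SH and the staging table MY_STATS are example names:
exclude=statistics
Then, on the source (or test) system:
exec dbms_stats.create_stat_table(ownname => 'SH', stattab => 'MY_STATS');
exec dbms_stats.export_schema_stats(ownname => 'SH', stattab => 'MY_STATS');
After the import on the target database, load the statistics with dbms_stats.import_schema_stats using the same staging table.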
Data Pump uses Advanced Queueing which uses the streams pool in the SGA. If you have enough memory, I suggest that you allocate a big streams pool right from the beginning:
SQL> alter system set streams_pool_size=2G scope=both;
You don’t want the streams pool to eat too much from your buffer cache and shared pool, so if you are short on memory, find a more suitable value.
Database Complexity
The Data Pump export and import must be done during downtime. The time it takes to perform these two tasks is often critical.
How long will it take? It depends (classic IT answer)!
The complexity of your user data dictionary has the biggest impact. Generally, the more objects, the longer the export and import will take. Certain features, like partitioning, also have a big impact. Reducing the complexity of the user data dictionary is not always possible, but when it is, it can make a big difference. In some situations, I have seen old or obsolete data in the database. Or partitions that had to be archived. Or entire groups of tables that were used by a feature in the application that was no longer in use.
Another thing to look at is invalid objects. If you have objects that can’t compile, check the reason and whether the object can be dropped. Often these invalid objects are just remnants from old times.
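To get an overview, a simple dictionary query like the one below can help. This is just a sketch; recompile first (for example with $ORACLE_HOME/rdbms/admin/utlrp.sql) so only genuinely broken objects remain:
select owner, object_type, object_name
from   dba_objects
where  status = 'INVALID'
order  by owner, object_type, object_name;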
In a recent case I worked on, the customer found more than 1,000 old tables that could be dropped. This significantly decreased the downtime required for the migration.
Dump File or Network Mode
One thing to test is what works best in your migration: Data Pump in dump file mode or network mode?
Normally, we recommend dump file mode because it has much better parallel capabilities. But metadata export and import for transportable tablespace jobs happen serially anyway (until Oracle Database 21c), so there might be a benefit to using Data Pump in network mode. In network mode, you just start the Data Pump import without first doing an export; the information is loaded directly into the target database over a database link.
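As an illustration, a full transportable import in network mode could look like the sketch below. The database link name SOURCEDB, the parameter file name, and the data file path are examples only; check the documentation for the full set of parameters required in your configuration:
impdp system parfile=xtts_imp.par
where xtts_imp.par contains:
network_link=SOURCEDB
full=yes
transportable=always
transport_datafiles='/u02/oradata/CDB1/sales01.dbf'
metrics=yes
logtime=all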
Parallel Metadata Export and Import
Starting with Oracle Database 21c, Data Pump supports parallel export and import of metadata when using transportable tablespaces. Add the following to your Data Pump parameter file, where n is the level of parallelism:
parallel=n
If an export was made in a lower release that didn’t support parallel export, you can still import in parallel. Parallel Metadata import works regardless of how the Data Pump export was made.
Auditing
If you are using Traditional Auditing, you can disable it in the source and target databases during the migration:
SQL> alter system set audit_trail=none scope=spfile;
SQL> alter system set audit_sys_operations=none scope=spfile;
SQL> shutdown immediate
SQL> startup
In at least one case, I’ve seen it make a difference for the customer.
Additionally, if you have a lot of traditional auditing data, I suggest you get rid of it (archive it if needed, otherwise delete it).
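If you choose to delete it, DBMS_AUDIT_MGMT can purge the traditional audit trail. This is only a minimal sketch; it assumes the audit management infrastructure is already initialized and that you really want to remove all records:
begin
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_aud_std,
    use_last_arch_timestamp => false);
end;
/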
I strongly recommend that you apply the most recent Release Update to your source and target Oracle Database. Use the download assistant to find it.
Use Backup Sets
If both source and target databases are Oracle Database 12c or newer, you should set the following in xtt.properties:
usermantransport=1
RMAN will then create backup sets using the backup for transport syntax. Backup sets are better than image copies because RMAN automatically applies unused block compression, which can shrink the size of the backup and improve performance.
Block Change Tracking
Enable block change tracking on the source database. Although strictly speaking not required, it is strongly recommended because it dramatically shortens the time it takes to perform incremental backups. It requires Enterprise Edition (on-prem), DBCS EE-EP (cloud), or Exadata:
SQL> select status, filename from v$block_change_tracking;
SQL> alter database enable block change tracking;
If you look in xtt.properties, there is a parameter called parallel. What does it do?
It controls the number of batches in which the backup and restore/recover commands run. The Perl script splits the tablespaces into n batches, where n is parallel from xtt.properties. One batch processes all the data files belonging to those tablespaces.
For example, with parallel set to 4 and 20 tablespaces, the Perl script runs four batches of five tablespaces each. If each tablespace has three data files, each batch processes 15 data files.
Within each batch, RMAN works on n data files at the same time, where n is the default parallelism assigned to the disk channel. To find the current parallelism (here it is two):
RMAN> show all;
...
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
...
If you want to change it to eight:
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
When you restore and convert the data files on the target database, RMAN also uses this configuration parameter.
To enable parallel backup and restore, be sure to change the default disk parallelism on both the source and target database hosts.
For image copy backups (usermantransport=0), the data file conversion on the target database uses the parallel degree specified in the xtt.properties parameter parallel. Backup sets are converted using the RMAN configuration parameter.
Multiple Perl Scripts
If you really want to squeeze the very last drop of performance out of your system, or if you want to use multiple RAC nodes, you can use multiple Perl scripts.
Normally, you only have one Perl script with corresponding files like xtt.properties:
[oracle@sales12 xtts]$ pwd
/home/oracle/xtts
[oracle@sales12 xtts]$ ls -l
total 260
-rw-r--r-- 1 oracle oinstall 5169 Mar 11 19:30 xtt.newproperties
-rw-r--r-- 1 oracle oinstall 266 Mar 11 19:30 xtt.properties
-rw-r--r-- 1 oracle oinstall 1390 Mar 11 19:30 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle oinstall 71 Mar 11 19:30 xttdbopen.sql
-rw-r--r-- 1 oracle oinstall 180408 Mar 11 19:30 xttdriver.pl
-rw-r--r-- 1 oracle oinstall 11710 Mar 11 19:30 xttprep.tmpl
-rw-r--r-- 1 oracle oinstall 52 Mar 11 19:30 xttstartupnomount.sql
The idea with multiple Perl scripts is that you have several sets of Perl scripts, each set working on a unique batch of tablespaces.
So instead of just one folder, I could have four folders. Each folder contains a complete set of the Perl script and its supporting files. Download rman_xttconvert_VER4.3.zip and extract it into four folders:
[oracle@sales12 ~]$ pwd
/home/oracle
[oracle@sales12 ~]$ ls -l
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts1
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts2
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts3
drwxr-xr-x 2 oracle oinstall 4096 Mar 11 19:30 xtts4
Each of the xtt.properties files will work on a unique set of tablespaces:
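For example, the tablespaces parameter could be split across the four xtt.properties files like this (the tablespace names are placeholders):
# xtts1/xtt.properties
tablespaces=SALES1,SALES2,SALES3
# xtts2/xtt.properties
tablespaces=SALES4,SALES5,SALES6
# xtts3/xtt.properties
tablespaces=SALES7,SALES8,SALES9
# xtts4/xtt.properties
tablespaces=SALES10,SALES11,SALES12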
You must also ensure that src_scratch_location and dest_scratch_location are set to different locations. Each set of Perl scripts must have dedicated scratch locations.
When you need to back up and restore/recover, you run multiple concurrent sessions. Each session uses one of the Perl scripts and thus processes its tablespaces concurrently.
SSH session 1:
export TMPDIR=/home/oracle/xtts1
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 2 (notice I changed TMPDIR to another directory, xtts2):
export TMPDIR=/home/oracle/xtts2
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 3:
export TMPDIR=/home/oracle/xtts3
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
SSH session 4:
export TMPDIR=/home/oracle/xtts4
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
In the above example, I used four different Perl scripts and four concurrent sessions. But you can scale up if you have the resources for it. One of our customers ran with 40 concurrent sessions!
You must ensure that all tablespaces are included. Together, the Perl scripts must cover every tablespace in your database; don’t forget any of them.
On RAC, you can run the Perl scripts on all the nodes, utilizing all your resources.
Watch this video to learn how a customer migrated a 230 TB database using multiple Perl scripts.
Logical migration has many benefits, but there is also a catch. It does not transfer some of the nice metadata that makes your life easier, such as:
AWR
SQL Plan Baselines
SQL Profiles
Further, we can make a few tweaks to avoid performance problems after the migration. This post applies to migrations into any database except Autonomous Database. In an Autonomous Database, you don’t need to worry about such things; the database will fix the issues for you.
In your source database, execute the script $ORACLE_HOME/rdbms/admin/awrextr.sql. It extracts the AWR data and loads it into a Data Pump dump file.
Transfer the dump file to your target database.
In your target database, import the AWR data using the script $ORACLE_HOME/rdbms/admin/awrload.sql.
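Both scripts run interactively in SQL*Plus and prompt for details such as the snapshot range, a directory object, and the dump file name. For example:
SQL> @?/rdbms/admin/awrextr.sql
And on the target database:
SQL> @?/rdbms/admin/awrload.sql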
When you look at the old AWR data in your target database, use it with caution. The AWR data was collected on a completely different system (CPU, memory, settings, database release, initialization parameters, etc.). Your new system is very different. Keep that in mind when you compare data from before and after the migration.
SQL Plan Baselines
If you are using SQL Plan Management to ensure plan stability, you want to transport your baselines as well.
First, in the source database, create a staging table. It is used to transport the baselines. Create the staging table in a schema that you will be migrating.
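A minimal sketch, assuming the staging table SPM_STGTAB in the schema SH (both names are examples):
begin
  dbms_spm.create_stgtab_baseline(
    table_name  => 'SPM_STGTAB',
    table_owner => 'SH');
end;
/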
Next, extract all the baselines, including the fixed or accepted plans, into the staging table. They are then stored in the staging table and will be migrated together with your data:
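Again, only a sketch; optional parameters such as enabled, accepted, and fixed let you narrow the selection if you don’t want everything:
declare
  n number;
begin
  -- pack all baselines into the staging table; returns the number of plans packed
  n := dbms_spm.pack_stgtab_baseline(
         table_name  => 'SPM_STGTAB',
         table_owner => 'SH');
  dbms_output.put_line(n || ' plans packed');
end;
/
After the import, the baselines can be unpacked on the target database with dbms_spm.unpack_stgtab_baseline using the same table name and owner.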
That’s it. Now SQL Plan Management will ensure that the same plans are used for the statements that are in the baselines.
If you are not using SQL Plan Management, you should look into it. In my opinion it is one of the most underrated features in the Oracle Database. Many issues are caused by plan changes which is exactly what you can avoid with SQL Plan Management.
SQL Profiles
If you have SQL Profiles you can transport them in a way very similar to transporting SQL Plan Baselines.
First, in the source database, create a staging table in a schema that you will be migrating:
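A minimal sketch, using PROF_STGTAB as an example staging table name in the SH schema:
begin
  dbms_sqltune.create_stgtab_sqlprof(
    table_name  => 'PROF_STGTAB',
    schema_name => 'SH');
end;
/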
Next, put the SQL Profiles into the staging table. This example will load all profiles from the DEFAULT category, but there are options to cherry-pick those of interest:
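A sketch of packing the DEFAULT category into the staging table; the profile_name and profile_category parameters let you cherry-pick:
begin
  dbms_sqltune.pack_stgtab_sqlprof(
    profile_category     => 'DEFAULT',
    staging_table_name   => 'PROF_STGTAB',
    staging_schema_owner => 'SH');
end;
/
On the target database, after the import, dbms_sqltune.unpack_stgtab_sqlprof loads the profiles from the staging table.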
Object Statistics
Should you re-gather object statistics after the migration?
If you have migrated into the same release, the answer is no.
If you have migrated into a higher release, the answer is it depends. It is not required, but if you come from a very old release, you might consider it.
Imagine a migration from 11.2.0.4 to 19c. Significant changes have been made to histograms in that period. If you want to benefit from the new, improved histogram types, you should re-gather statistics. If you don’t, the histograms will gradually change over time as the statistics become stale. Some people prefer to take such a change immediately rather than have it happen gradually over time.
If you change statistics preferences to speed up gathering, be sure to set them back to their original values when you are done. Nigel Bayliss has an interesting blog post on How to gather optimizer statistics fast.
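As an illustration only, re-gathering for an application schema might look like this. SH and the degree of 16 are examples; with default preferences, the database can create the newer histogram types:
-- note the current value, so you can restore it afterwards
select dbms_stats.get_prefs('DEGREE') from dual;
exec dbms_stats.set_global_prefs('DEGREE', '16');
exec dbms_stats.gather_schema_stats('SH');
-- when done, set the preference back to the value noted above
exec dbms_stats.set_global_prefs('DEGREE', '<original value>');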
Dictionary Statistics
After the migration, the target database has many more tables, indexes, views and so on. This is represented in the data dictionary as additional rows in e.g. OBJ$. All that extra data probably means that the dictionary statistics are stale. Stale dictionary statistics can lead to all sorts of troubles.
Immediately after the migration is completed you should re-gather dictionary statistics to avoid problems caused by stale statistics. Do this right after ZDM has completed the migration:
exec dbms_stats.gather_dictionary_stats;
Or even better:
begin
dbms_stats.gather_schema_stats('SYS');
dbms_stats.gather_schema_stats('SYSTEM');
end;
/
I prefer gathering statistics on SYS and SYSTEM independently. My colleagues and I have seen issues from time to time with dbms_stats.gather_dictionary_stats.
Fixed Object Statistics
After the migration, and once the system has been properly warmed up, you should re-gather fixed objects statistics. It is important that you don’t gather fixed objects statistics right after the migration. You must wait until you have run a representative workload.
If you wait a few days or until the next weekend, then you are most likely on the safe side:
exec dbms_stats.gather_fixed_objects_stats;
This recommendation is similar to the one we give about upgrades. Read the blog post to learn more about fixed objects statistics in general. There, you can also find a very useful piece of code that gathers fixed objects statistics seven days later via a scheduler job.
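The referenced post has the complete version; below is only a minimal sketch of such a job, with a made-up job name, that gathers fixed objects statistics seven days from now:
begin
  dbms_scheduler.create_job(
    job_name   => 'GATHER_FIXED_STATS_LATER',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'begin dbms_stats.gather_fixed_objects_stats; end;',
    start_date => systimestamp + interval '7' day,
    enabled    => true);
end;
/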
SQL Tuning Sets
This is something that you should do before you start the migration. Capture your workload from the source database into SQL Tuning Sets.
What is a SQL Tuning Set? It is an object that contains information about a number of SQLs. For each SQL the following is collected:
SQL Statement – the SQL text itself
Context – executing schema, binds, and environment (like NLS settings)
Statistics – execution statistics like elapsed time, CPU time, buffer gets, rows processed
Plan – last, but not least, the actual execution plan
You can then transport the SQL Tuning Set into the target database. If a SQL is now running slowly, you can use the information from the SQL Tuning Set to identify why. For a given SQL, you can see the old and new plan, compare the number of buffer gets and CPU time, and so forth. You can also bring back the old plan if the new plan is less effective.
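Before you can load any statements, the SQL Tuning Set itself must be created. A small sketch, using the name MIGR_STS_1 that the blocks below refer to:
exec dbms_sqltune.create_sqlset(sqlset_name => 'MIGR_STS_1');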
Next, capture statements from AWR. This will take the top 5000 statements from your entire AWR ordered by elapsed time:
declare
begin_id number;
end_id number;
cur sys_refcursor;
begin
select min(snap_id), max(snap_id) into begin_id, end_id
from dba_hist_snapshot;
open cur for
select value(p) from table(dbms_sqltune.select_workload_repository(
begin_snap => begin_id,
end_snap => end_id,
basic_filter => 'parsing_schema_name not in (''SYS'')',
ranking_measure1 => 'elapsed_time',
result_limit => 5000,
attribute_list => 'ALL')) p;
dbms_sqltune.load_sqlset('MIGR_STS_1', cur);
close cur;
end;
/
You can also sample directly from the cursor cache. Start the sampling and then run your workload. This example will sample every minute for 15 minutes:
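A sketch of such a sampling run, loading into the same MIGR_STS_1 set (time_limit and repeat_interval are in seconds):
begin
  dbms_sqltune.capture_cursor_cache_sqlset(
    sqlset_name     => 'MIGR_STS_1',
    time_limit      => 900,
    repeat_interval => 60);
end;
/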
Finally, put all that information into a staging table, so it can be transported to the target database. Put the staging table into a schema that you plan to migrate. This way you ensure that the SQL Tuning Set is migrated together with your data. In my example, it is the schema SH:
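A minimal sketch, with STS_STGTAB as an example staging table name:
begin
  dbms_sqltune.create_stgtab_sqlset(
    table_name  => 'STS_STGTAB',
    schema_name => 'SH');
  dbms_sqltune.pack_stgtab_sqlset(
    sqlset_name          => 'MIGR_STS_1',
    staging_table_name   => 'STS_STGTAB',
    staging_schema_owner => 'SH');
end;
/
On the target database, dbms_sqltune.unpack_stgtab_sqlset loads the SQL Tuning Set from the staging table after the import.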
Now – if you get into trouble with a misbehaving SQL – you can use the information in the SQL Tuning Set to investigate the problem.
Conclusion
When you perform a logical migration, there are some additional tasks that you should consider at least for your mission-critical databases. These steps will help you avoid performance issues after the migration.
In addition to the topics discussed here, I recommend that you also watch our webinar Performance Stability, Tips and Tricks and Underscores (slides). It has the complete prescription to avoid performance problems after upgrade and migration.