XTTS: Introduction – Minimal Downtime Migration with Full Transportable Export Import and Incremental Backups

If you need to migrate a database to the cloud, or anywhere else for that matter, you should consider using cross-platform transportable tablespaces and incremental backups (XTTS). Even for really large databases – 10s or 100s of TB – you can still migrate with minimal downtime. And it works across different endian formats. In fact, this method is used for the majority of cross-endian projects.

In addition to minimal downtime, XTTS has the following benefits:

  • You can implicitly upgrade the database by migrating directly into a higher release
  • You can migrate from a non-CDB into a PDB
  • You can keep downtime at a minimum by using frequent incremental backups
  • You can migrate across endianness – e.g. from AIX or Solaris to Oracle Linux

Endian-what?

Endianness is determined by the operating system. Put simply, it determines the order in which bytes are stored in memory:

  • Big endian: stores the most significant byte of a word at the smallest memory address.
  • Little endian: stores the least-significant byte at the smallest address.

Wikipedia has an article for the interested reader.

Which platform uses which endian format? There is a query for that:

SQL> select platform_name, endian_format from v$transportable_platform;

If your source and target platforms do not use the same endian format, you need a cross-endian migration.
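
To see only the endian format of the database you are currently connected to, you can join with v$database. A small sketch:

SQL> select tp.platform_name, tp.endian_format
     from v$transportable_platform tp, v$database d
     where tp.platform_name = d.platform_name;

Run it on both the source and the target to compare the two endian formats.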

How Does It Work

The concept is explained in this video on our YouTube channel:

Basically, you need to migrate two things:

  • Data
  • Metadata

Data

The data itself is stored in data files and you will be using transportable tablespaces for this. Since the source and target platform are not the same, the data files must be converted to the new format. Only the data files that make up user tablespaces are transported. The system tablespaces, like SYSTEM and SYSAUX, are not transported.
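
Before transporting, the set of user tablespaces must be self-contained, meaning no objects in the set reference segments outside the set. A quick sketch of that check, using made-up tablespace names:

SQL> exec dbms_tts.transport_set_check('USERS,SALES_DATA', true);
SQL> select * from transport_set_violations;

If the second query returns no rows, the tablespace set is self-contained.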

If you have a big database, it will take a lot of time to copy the data files. Often this is a problem because the downtime window is short. To overcome this you can use a combination of RMAN full backups (backup set or image copies) and incremental backups.
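
Conceptually, the backup part looks like the sketch below. The tablespace name and backup location are made up, and in practice the Perl script described below generates and runs the actual commands, including the cross-platform conversion:

RMAN> backup incremental level 0 format '/u01/backup/%U' tablespace users;

Later, repeated as often as needed to keep the copy on the target current:

RMAN> backup incremental level 1 format '/u01/backup/%U' tablespace users;

Because the incremental backups are applied to the copy on the target ahead of time, only the final, small incremental backup falls inside the downtime window.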

There are a few ways to do this, which are covered in the following MOS notes:

The first two methods use version 3 of a Perl script (xttdriver.pl), whereas the last method uses version 4 of the same script. Version 4 offers a much simpler procedure, and I will use that version for this blog post series.

Version 4 of the Perl script has a list of requirements. If your project can’t meet these requirements, check if the previous version 3 can be used.
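
To give you an idea of how simple version 4 is, here is a hypothetical sketch. The exact property names and command line options are listed in the MOS note:

# xtt.properties (excerpt, made-up values)
tablespaces=USERS,SALES_DATA
platformid=6
src_scratch_location=/u01/xtts/src
dest_scratch_location=/u01/xtts/dest
dest_datafile_location=+DATA

On the source, take the initial backup and, later, the incrementals:

$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

On the target, restore, convert and roll forward:

$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore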

Metadata

Transferring the data files is just part of the project. Information on what is inside the data files, the metadata, is missing because the system tablespaces were not transferred. The target database needs the metadata; otherwise, the data files are useless. The Transportable Tablespace concept as a whole does not work for system tablespaces, but instead we can use Data Pump.

You can use either:

  • Traditional transportable tablespace
  • Or, the newer full transportable export/import (FTEX)

For this blog post series, I am only focusing on FTEX. But if you run into problems with FTEX, or if you can't meet the FTEX requirements, you should look into the traditional transportable tablespace method.
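
To give you an idea of what a FTEX job looks like, here is a sketch. The directory, dump file and data file names are made up, and the version parameter is only needed when the source database is 11.2.0.3 or 11.2.0.4:

On the source, with the user tablespaces set read only:

expdp system full=y transportable=always version=12 directory=dp_dir dumpfile=ftex.dmp logfile=ftex_exp.log

On the target PDB, pointing to the transported and converted data files:

impdp system directory=dp_dir dumpfile=ftex.dmp transport_datafiles='+DATA/CDB1/PDB1/DATAFILE/users.dbf' logfile=ftex_imp.log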

Here are a few examples of metadata that Data Pump must transfer:

  • Users
  • Privileges
  • Packages, procedures and functions
  • Table and index definitions (the actual rows and index blocks are in the data files)
  • Temporary tables
  • Views
  • Synonyms
  • Directories
  • Database links
  • And so forth

Conclusion

By using a combination of cross-platform transportable tablespaces and incremental backups, you can migrate even huge databases to the cloud. And it even works for cross-endian migrations, like AIX or Solaris to Oracle Linux.

You can watch videos on cross-platform transportable tablespaces in our YouTube playlist.

Further Reading

Other Blog Posts in This Series

How to Stop Hardcoding Your TDE Keystore Password

I was helping a customer the other day together with Mike. They were upgrading from 18c to 19c and had to convert the database to a PDB as well. At first glance, it seemed pretty straightforward, but things got complicated because:

  • They have standby databases and want the Data Guard setup to survive the plug-in operation.
  • They are using ASM.
  • They are using TDE Tablespace Encryption and have also encrypted their SYSTEM and SYSAUX tablespace.
  • The DBA that will carry out the upgrade and plug-in is not allowed to have the TDE Keystore password. They have separation of duties, so only the security admins have the keystore password.

Can you do that? Yes, you can! Let me tell you how.

Upgrade

First, upgrade the database. You can easily maintain the Data Guard setup during an upgrade; I wrote a blog post about that a little while ago. In addition, you don't need the keystore password to upgrade a database with encrypted tablespaces. You must configure the database to use an auto-login keystore, and that's it. If you are concerned about the use of an auto-login keystore, you can simply remove it again after the upgrade.
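
If the database does not already have one, the security admins can create the auto-login keystore from the existing password-protected keystore. Roughly like this; the keystore location is made up:

SQL> administer key management create auto_login keystore
     from keystore '/etc/oracle/wallet'
     identified by "S3c3tPassw0rd";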

External Store for a Keystore Password

The plug-in operation will require the keystore password. But the DBA doesn't know it – so we need to find a solution for that. The solution is to store the keystore password in an external store. I also wrote a blog post about that a while ago. Once it is configured, you can replace the commands that require a keystore password, like:

SQL> ADMINISTER KEY MANAGEMENT ... KEYSTORE IDENTIFIED BY "S3c3tPassw0rd";

With this:

SQL> ADMINISTER KEY MANAGEMENT ... KEYSTORE IDENTIFIED BY EXTERNAL STORE;

The database will get the keystore password from an external store, which is basically a file in the file system, encrypted with a password that only the database knows.

The security admins would need to do this in the CDB that will receive the non-CDB database. They can do it in advance, so they can relax while the DBA carries out the operation in a maintenance window. If the TDE keystore is already configured using the WALLET_ROOT parameter, you can use the feature right away. Otherwise, you need a database restart to configure it.
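
For reference, adding the keystore password to the external store is roughly a one-liner. The keystore location is made up, and the exact steps (including the tde_seps location under WALLET_ROOT) are in the blog post I referenced above:

SQL> administer key management add secret 'S3c3tPassw0rd' for client 'TDE_WALLET'
     to local auto_login keystore '/etc/oracle/wallet/tde_seps';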

Like with the auto-login keystore, if you are concerned about the security, you can simply disable it again after the operation.

Plug In

Now things get complicated. When you plug in your non-CDB database, the manifest file contains information on where the data files are located – but only on the primary database. This is an extract of a manifest file (the one you create with DBMS_PDB.DESCRIBE):

<PDB>
  ...
  <tablespace>
    <name>SYSTEM</name>
    ...
    <file>
      <path>+DATA/SALES1/DATAFILE/system.311.1058127529</path>

After plug-in, the CDB can start to use the data files right away. It reads from the manifest file where the data files are located. But there is no information on where the files are located on the standby database. To overcome this, you must create aliases in the ASM instance on the standby host. The aliases will point back to the original data files (used by the standby database). So, when the plug-in happens and redo starts to flow to the standby database, it knows which data files to recover. If you are storing data files in a regular file system, you could use soft links to serve the same purpose.
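
You create the aliases with ASMCMD on the standby host. A small sketch with made-up disk group, directory and file names; the alias points back to the existing standby data file, and the articles referenced below describe exactly which names to use:

ASMCMD> mkdir +DATA/migration
ASMCMD> mkalias +DATA/SALES1_STBY/DATAFILE/system.278.1058127531 +DATA/migration/system_311_1058127529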

The procedure is already very well described:

I won’t repeat the procedure as the above articles are really good. But these articles don’t consider the situation where your SYSTEM and/or SYSAUX tablespace is encrypted.

If that is the case, you must import your encryption keys into CDB$ROOT before you execute the CREATE PLUGGABLE DATABASE command. In Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration (Doc ID 2273304.1) it should happen right before step 17.2.2:

SQL> alter session set container=CDB$ROOT;
SQL> administer key management import keys ... keystore identified by external store ... ;
SQL> --Continue with step 17.2.2
SQL> create pluggable database .... ;

Dots and Underscores

When you follow the MOS notes you might wonder why the dots in the ASM aliases are replaced with underscores. At first glance, I had no idea, but it worked. I later learned the following:

The format for an ASM filename is [filetype|tablespacename].[ASM file number].[file incarnation], but basically it is three name components separated by periods. We can't create any filename or alias that mimics that format. So the scripts change those periods to underscores ('_'). That is allowed.

ORA-15032 and ORA-15046

Most likely, you get these errors because an alias already exists on the ASM file. Only one alias is allowed per file.

  1. You can verify that by using the ls command in ASMCMD (see the sketch after this list). If it is an alias, the Name column will look similar to this: alias1 => +DATA/......
  2. Ensure the database is not using the alias. If it does, rename the file in the database.
  3. Remove the alias from ASM. It is strongly recommended to use rmalias. Although it is also possible with rm, I consider rmalias much safer.
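
A minimal ASMCMD sketch of those steps, with made-up paths and alias name:

ASMCMD> ls +DATA/migration
ASMCMD> rmalias +DATA/migration/old_alias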

Conclusion

You can upgrade and convert your database to a PDB without compromising your standby database. In addition to that, you can configure your database in such a way that you don't even need to type in the TDE keystore password.

Enabling Local Undo With AutoUpgrade

If you have databases that run in shared undo mode, you should switch to local undo mode. Starting from 12.2, this is the default and recommended undo mode, and it offers a lot of cool functionality, obviously at the expense of having multiple undo tablespaces. There are already many good blog posts out there that describe the benefits of local undo and how to enable it.

AutoUpgrade To The Rescue

If you are planning an upgrade with AutoUpgrade, you can also enable local undo during the upgrade. You simply configure it in the config file:

upg1.enable_local_undo=YES

And AutoUpgrade will take care of the rest for you. It will even create undo tablespaces in PDB$SEED and all of the PDBs. Once again AutoUpgrade can make your life easier.
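
For context, a minimal config file could look like this; the paths and SID are made up:

global.autoupg_log_dir=/home/oracle/autoupgrade
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB1
upg1.enable_local_undo=YES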

What Happens

Before creating the new undo tablespaces, the database determines the attributes of each tablespace. By default, it uses a ratio of 30% compared to CDB$ROOT:

  • Initial data file size (calculated from current file size)
  • Maximum data files size (MAXBYTES)
  • Grow by (INCREMENT_BY)

Example (default ratio of 30%):

Attribute                  CDB$ROOT                   PDB
Initial data file size     current file size, 500M    150M
Maximum data file size     32G                        9.6G
Grow by                    5M                         1.5M

You can control the ratio using the parameter _seed_root_undo_ratio in CDB$ROOT. To set the percentage to 10% use the following command before the upgrade:

ALTER SYSTEM SET "_seed_root_undo_ratio"=10 SCOPE=SPFILE;

With this knowledge, you can calculate the space needed for all those extra undo tablespaces. Thus, you can avoid running out of disk space or ending up with tablespaces of an undesirable size.

After the upgrade, I would advise you to review the undo tablespace size in each PDB. Different workloads require different amounts of undo. Especially the increment_by attribute can become so low that it leads to overly frequent data file grow operations.
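
A sketch of how you could review the sizes from CDB$ROOT; note that increment_by is measured in database blocks, not bytes:

SQL> select con_id, tablespace_name, file_name,
            bytes/1024/1024 as mb,
            maxbytes/1024/1024 as max_mb,
            increment_by
     from cdb_data_files
     where tablespace_name like 'UNDO%'
     order by con_id;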

If your database is already in local undo mode then the parameter _seed_root_undo_ratio has no effect at all. The database will not start a grow or shrink operation to meet the ratio defined by the parameter. Once you have switched to local undo mode, you are in full control yourself.

Conclusion

Be sure to enable local undo for all CDBs when you upgrade to 12.2 or a later release. If you have many PDBs and use AutoUpgrade to enable it, be aware of the disk space needed for all those undo tablespaces. And review the settings afterwards.

Mike Dietrich wrote about a similar situation when you create CDBs using DBCA.

Resources

Move to the Cloud – Webinar

Yesterday, Mike Dietrich and I gave the final webinar in the Oracle Database 19c Upgrade Virtual Classroom series. It was about Move to the Cloud – not only for techies. Now, I say final, but we all know you should never say never. And in this case, it applies to final as well. We are already talking about subjects for a seventh webinar. If you have an interesting topic that you think we should cover, get in touch with me.

Oracle Database 19c Upgrade Virtual Classroom

Unfortunately, due to a technical glitch we skipped the part about migrating using transportable tablespaces and full transportable export/import. We uploaded the missing part to YouTube, so you can watch it.

For those interested, you can now download the slides. We had a lot of information to share, so browse through the deck to find a lot of hidden slides. Typically, they contain references and links to more information about a specific topic.

Within a week it should be possible to watch a recording of the webinar.

The Demos and Videos

The presentation we gave was a brand new one. We used as many demos and videos as we could – or rather had time to prepare. We will post them on our YouTube channel as soon as possible. I suggest that you subscribe to it, so you receive word as soon as new content arrives. Further, we want to enhance the presentation even more, so we will be adding more videos and demos. Let me know if there is a topic that could be improved with a video or demo.

Thank You

Thanks to everyone that participated yesterday. Happy migrating!

Migration Strategies

Mike Dietrich and I gave a presentation on Migration Strategies – Tips and Tricks and Insights and Secrets today. The webinar is part of the Oracle Database 19c Upgrade Virtual Classroom series.

Oracle Database 19c Upgrade Virtual Classroom

The slide deck contains more information than we presented at the webinar. We had hidden a lot of slides that contain additional information and useful links. The slide deck is ready to download.

Soon you will be able to watch a recording as well.

Move to the Cloud (for techies)

If you want to know even more about migrations, you should register for the next webinar. It’s all about cloud migrations.

Date: Thursday 15 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Whether you have databases in a cloud environment, or you plan to lift databases soon, this webinar is for you. We won’t cover cloud solution benefits but will show you how you can migrate your database(s) into the Oracle Cloud. We’ll start with Autonomous and also cover migration into VMs, Bare Metal, OCI, ExaCS and ExaCC. And we’ll look at strategies for minimizing downtime, where ZDM can help. This two-hour webinar is not strictly for technical geeks ‒ but our focus will be on practical migration approaches.

Thank You

Thanks to everyone that participated today. We had a lot of fun and look forward to seeing you on Thursday. Happy migrating!

Debut on the Big Stage

Mike Dietrich and I will give two webinars in mid-October on database migrations. One of them will focus solely on migration to Oracle Cloud Infrastructure. The webinars are part of the Oracle Database 19c Upgrade Virtual Classroom series. If you are interested, you have to register.

Oracle Database 19c Upgrade Virtual Classroom

Migration Strategies: Insights, Tips and Secrets

Date: Tuesday 13 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Now it’s time for us to dig deeper. We’d like to offer you further insights and a deep dive from a technical point of view, starting with Data Pump and Transportable Tablespaces, then Full Transportable Export Import, and adding RMAN Incremental Backups to decrease the downtime. Best practices and real-world experience will round up this two-hour webinar.

Move to the Cloud (for techies)

Date: Thursday 15 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Whether you have databases in a cloud environment, or you plan to lift databases soon, this webinar is for you. We won’t cover cloud solution benefits but will show you how you can migrate your database(s) into the Oracle Cloud. We’ll start with Autonomous and also cover migration into VMs, Bare Metal, OCI, ExaCS and ExaCC. And we’ll look at strategies for minimizing downtime, where ZDM can help. This two-hour webinar is not strictly for technical geeks ‒ but our focus will be on practical migration approaches.

Debut on the Big Stage

These two webinars will be my debut on the big stage. The previous webinars attracted huge interest and, unfortunately, some people couldn’t join because the webinar had reached its maximum number of attendees. If you register, I recommend joining early to get your seat.

I have been presenting in person and virtually for some years now. Also, since I joined Oracle in January, I have been doing quite a few presentations. However, this is my first time in front of such a big crowd. I am excited and look forward to it – but also a little intimidated. Luckily, I have super-star Mike Dietrich there to (virtually) hold my hand.

And, finally, I promise you: No marketing slides – just demos and details.

I hope to see you there!

Classic Preupgrade Tool and Change of Server

In short: If you are using the classic Preupgrade tool (preupgrade.jar), you should ensure that the tool is present in the target Oracle Home. The MOS note How to Download and Run Oracle’s Database Pre-Upgrade Utility (Doc ID 884522.1) has been updated to highlight the following: > If the upgrade-to version is 12.2 or higher, then save the file to your target $ORACLE_HOME/rdbms/admin directory and then unzip the file.

You can use this example:

cp preupgrade_19_cbuild_7_lf.zip /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/
cd /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/
unzip preupgrade_19_cbuild_7_lf.zip

If you are moving the database to a new server as part of the upgrade, ensure that the same version of the Preupgrade tool is used on both the source and target database host.

Background

Lately, I have been involved in a few cases, where customers reported that the classic Preupgrade tool failed during post-upgrade fixups:

@postupgrade_fixups.sql 

DECLARE
*
ERROR at line 1:
ORA-20000: In run_check, Pre-Upgrade Package Requested Check "post_disable_bct_upg" does not exist 
ORA-06512: at line 1 
ORA-06512: at "SYS.DBMS_PREUP", line 293 
ORA-06512: at "SYS.DBMS_PREUP", line 5227 
ORA-06512: at "SYS.DBMS_PREUP", line 3239 
ORA-06512: at line 139 

After guidance from our developers, I learnt that you must extract the classic Preupgrade tool (all the files) into the target Oracle Home before you execute the post-upgrade fixups. Specifically, the files must go into $ORACLE_HOME/rdbms/admin. And you can safely overwrite the existing files. Or, back them up first if you are cautious.

This is especially relevant when you are upgrading the database and moving the database to a new server as well. I have a blog post series on upgrading on VM DB Systems in OCI and they all involve moving the database to a new server. And my initial editions of the posts didn’t have this information.

When you run the classic Preupgrade tool on the source database, you should be using the latest version of the tool. You can download it from My Oracle Support. You run the tool on the source database before you shut it down and move the database to another server. When the upgrade is completed and you execute the post-upgrade fixups, they use auxiliary packages from the target Oracle Home to perform some of the fixups. If the two versions of the auxiliary packages are out of sync, you might run into problems.
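
For reference, this is roughly how the tool is run on the source database host, with the environment pointing at the source database. The unzip location is made up, and the MOS note has the full syntax and options:

$ORACLE_HOME/jdk/bin/java -jar /tmp/preupgrade/preupgrade.jar TERMINAL TEXT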

One of the developers wrote:

> When postupgrade_fixups.sql is executed, preupgrade_package.sql is executed again, but it is taken from the $ORACLE_HOME/rdbms/admin. As it has a different version, then it is not able to execute the postupgrade fixup

Conclusion

I have been upgrading databases for many years, and I haven’t been aware of this. That I haven’t run into problems before is just pure luck, I assume.

And remember, always use the latest version of the tool from My Oracle Support.

Further Reading

  • Using the Pre-Upgrade Information Tool for Oracle Database
  • How to Download and Run Oracle’s Database Pre-Upgrade Utility (Doc ID 884522.1)
  • Database Preupgrade tool check list (Doc ID 2380601.1)

TDE from a Non-Security Guy

TDE Tablespace Encryption is a must-have in the cloud and something that most of us have to learn. I recently gave a presentation on the basics and what you will need to learn to survive in the cloud.

If you are interested in the recording, it is uploaded to our YouTube channel. It is 43 minutes long, and probably no one will watch it from start to end. But you should really have a look at the Secure External Password Store. It is a really awesome feature. There is a demo right here.

Danish Oracle User Group

The presentation was organized together with the Danish Oracle User Group and I have also published the slides. Please note that some of the content is now out-of-date. With the introduction of isolated keystore mode, using SEPS is now different.

I ran a quick poll during the presentation to ask how many years the attendees had worked with databases without worrying about encryption. The answers ranged from eight to 35 (or even 42 – but was that a joke?). The majority were above 20. But with the evolution of the cloud this is about to change. You must learn about encryption these days.

I also mentioned some indications that AES256 is not necessarily more secure than AES128 (the latter being the default encryption algorithm in Oracle Database 19c). I can’t tell you why – that’s way beyond my knowledge – but here are the references:

Also, there was a comment from Asger stating that an encrypted database is hard to compress with your storage system. That is true – normally encryption is bad for your compression ratio. That is, unless you use the compression options that are built into the database, which compress the data before it is encrypted. You can have your cake – and eat it!

And just a few final words of advice:

  • Always back up your keystore
  • Never delete a keystore
  • Never forget your keystore password

I had much fun talking about TDE. The more I explore, the more I like it. Thanks to those that attended.

How To Gather Fixed Object Statistics After Upgrade With AutoUpgrade

In a previous blog post I wrote about the importance of gathering fixed objects statistics at the right time.

I argued that you could create a scheduler job that re-gathers fixed objects statistics after a number of days (once the database is warmed up). If you like that idea, here is how you can implement it with AutoUpgrade.

Schedule Fixed Objects Statistics Gathering

CDB

First, create a file named /home/oracle/sched_gfos/sched_gfos.sql:

BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name   => '"SYS"."GATHER_FIXED_OBJECTS_STATS_ONE_TIME"',
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN DBMS_STATS.GATHER_FIXED_OBJECTS_STATS; END;',
      start_date => SYSDATE+7,
      auto_drop  => TRUE,
      comments   => 'Gather fixed objects stats after upgrade - one time'
   );
   DBMS_SCHEDULER.ENABLE (
      name => '"SYS"."GATHER_FIXED_OBJECTS_STATS_ONE_TIME"'
   );
END;
/

Next, create a script named /home/oracle/sched_gfos/sched_gfos.sh. It runs the SQL script in all PDBs, including CDB$ROOT, but excluding PDB$SEED. I don’t want to run it in PDB$SEED because that is normally in READ ONLY state:

export ORACLE_SID=CDB1
export ORACLE_HOME=/u01/app/oracle/product/19
export PATH=$ORACLE_HOME/bin:$PATH
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 4 -C 'PDB$SEED' -e -b sched_gfos -d /home/oracle/sched_gfos/ sched_gfos.sql

And then I add the following line to my config file:

upg1.after_action=/home/oracle/sched_gfos/sched_gfos.sh

Non-CDB

First, create a file named /home/oracle/sched_gfos/sched_gfos.sql:

BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name   => '"SYS"."GATHER_FIXED_OBJECTS_STATS_ONE_TIME"',
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN DBMS_STATS.GATHER_FIXED_OBJECTS_STATS; END;',
      start_date => SYSDATE+7,
      auto_drop  => TRUE,
      comments   => 'Gather fixed objects stats after upgrade - one time'
   );
   DBMS_SCHEDULER.ENABLE (
      name => '"SYS"."GATHER_FIXED_OBJECTS_STATS_ONE_TIME"'
   );
END;
/
EXIT

Next, create a script named /home/oracle/sched_gfos/sched_gfos.sh:

export ORACLE_SID=DB1
export ORACLE_HOME=/u01/app/oracle/product/19
export PATH=$ORACLE_HOME/bin:$PATH
sqlplus / as sysdba @/home/oracle/sched_gfos/sched_gfos.sql

And then I add the following line to my config file:

upg1.after_action=/home/oracle/sched_gfos/sched_gfos.sh

Appendix

A previous version of AutoUpgrade gathered fixed objects stats immediately after upgrade. This was not in line with our own recommendations. This has been fixed in AutoUpgrade 19.9.2. Thanks to a keen reader of the blog, Robert Ortel, who made a comment on this.

For reference, here is the workaround I shared to fix the wrong behaviour.

Prevent AutoUpgrade From Gathering Fixed Objects Statistics

AutoUpgrade is by design very flexible and there is a way to override the default fixups. It is very well described in our documentation and Mike Dietrich has a really good blog post about it.

The fixup that you want to override and disable is:

[checkname]          POST_FIXED_OBJECTS
[stage]              POSTCHECKS
[fixup_available]    YES
[runfix]             YES <== Change to NO 
[severity]           RECOMMEND

And you should change runfix to NO. Save the checklist file and reference it from the AutoUpgrade config file using the checklist parameter.
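
In the config file, that reference could look like this; the file name and path are made up:

upg1.checklist=/home/oracle/upg1_checklist.cfg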

How To Gather Fixed Objects Statistics After Upgrade

Often, I see that re-gathering fixed objects statistics is one of those post-upgrade tasks that is forgotten – or done the wrong way. Here’s a quick way to avoid that.

TL;DR

To ensure that fixed objects statistics are gathered after upgrade, run this in your database after upgrade:

BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name   => '"SYS"."GATHER_FIXED_OBJECTS_STATS_ONE_TIME"',
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN DBMS_STATS.GATHER_FIXED_OBJECTS_STATS; END;',
      start_date => SYSDATE+7,
      auto_drop  => TRUE,
      comments   => 'Gather fixed objects stats after upgrade - one time'
   );
   DBMS_SCHEDULER.ENABLE (
      name => '"SYS"."GATHER_FIXED_OBJECTS_STATS_ONE_TIME"'
   );
END;
/

If you upgrade a CDB, run this in all your PDBs and in CDB$ROOT. If you upgrade a single PDB, just run it in that specific PDB.

It will create a scheduler job that fires in seven days and gathers fixed objects statistics. Once the job has run, the job definition is automatically dropped.
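
If you want to verify the job before it fires, here is a quick sketch:

SQL> select job_name, state, next_run_date
     from dba_scheduler_jobs
     where job_name = 'GATHER_FIXED_OBJECTS_STATS_ONE_TIME';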

What Are Fixed Objects Statistics

It is important that fixed objects statistics are representative, and especially after an upgrade they must be re-gathered. A long time ago (in this galaxy, not one far, far away), Maria Colgan wrote a really good blog post about it. If you want to know more, I suggest that you read her blog post.

How Is That Related To Upgrade?

After you have completed the actual database upgrade, there is a significant number of post-upgrade tasks that you have to carry out. There is a whole chapter about it in the Database Upgrade documentation. One of the chapters specifically mentions that you have to gather statistics on fixed objects. And the documentation is really clear:

… Oracle strongly recommends that you regather fixed object statistics after you have run representative workloads on Oracle Database.

Normally, this is not a problem because the DBA can just do it right after the upgrade. But …

… Oracle recommends that you do it after the system is in a runtime state, and the most important types of fixed object tables are populated.

So, you should not do this right after the upgrade. The system won’t be in a runtime state or in a warmed-up state. Actually, the database is in the opposite state. It has just been restarted multiple times and there are no representative workloads in the memory structures. Chances are that you could make things even worse.

You must wait until there is a representative workload on the system!

The Solution

The optimal solution is of course that the DBA ensures that these statistics are re-gathered during peak hours or right after. However, this will require some sort of intelligent automation or a DBA that really knows his or her database. These days – with hundreds or thousands of databases under management – that’s hard. And manual tasks have a risk of being forgotten – DBAs are humans after all.

A good solution could be to create a scheduler job that fires a number of days after the upgrade. That should allow the database to warm up and the memory structures to fill with representative data. It’s not optimal – but it is certainly better than forgetting it.