How to Upgrade with Data Guard

You can upgrade your database to a new release and keep the Data Guard setup intact. The standby database(s) are upgraded implicitly via the redo from the primary database, and there is no need to rebuild the standby database after the upgrade.

The process: Overview of upgrade with Data Guard

In the following I will be using this setup: Overview of the environment that is used for this procedure

In advance, you should install the new Oracle Home on both the primary and standby hosts. The two Oracle Homes should have the same patches applied, and I recommend that you always apply the latest Release Update.
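
To confirm that the two new Oracle Homes really are at the same patch level, you can compare the patch inventories. This is just a quick sanity check; the paths and hostnames are from my environment, and the output of the two commands should match:

[oracle@bm1]$ /u01/app/oracle/product/19.0.0.0/dbhome_1/OPatch/opatch lspatches
[oracle@bm2]$ /u01/app/oracle/product/19.0.0.0/dbhome_1/OPatch/opatch lspatches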

Before Upgrade

The procedure starts right before you run the preupgrade fixups. Downtime has started and users are no longer connected to the database.

Disable Data Guard Broker

If you don’t use Data Guard Broker, you can skip this chapter and go to the chapter Stop Data Guard. Connect to the broker and disable Fast Start Failover:

DGMGRL SYS@PROD1> disable fast_start failover

Next, you disable the broker configuration:

DGMGRL SYS@PROD1> disable configuration

Then, you can shut down the broker on the primary. Make a copy of the broker configuration files. Use the SQL below to generate the copy commands, and remember to execute the generated commands:

PROD1 SQL> alter system set dg_broker_start=false scope=both;
PROD1 SQL> select 'host cp ' || value || ' /tmp' as cmd from v$parameter where name like 'dg_broker_config_file%';
PROD1 SQL> --Now, execute the commands
PROD1 SQL> host ls /tmp/dr*.dat

Finally, you do the same for the standby database:

PROD2 SQL> alter system set dg_broker_start=false scope=both;
PROD2 SQL> select 'host cp ' || value || ' /tmp' as cmd from v$parameter where name like 'dg_broker_config_file%';
PROD2 SQL> --Now, execute the commands
PROD2 SQL> host ls /tmp/dr*.dat

Stop Data Guard

On the primary database, defer redo log transport to the standby database. Strictly speaking, this is not necessary, but I do it as a "better-safe-than-sorry" measure. Be sure to verify that log_archive_dest_2 is the archive destination that actually points to your standby database:

PROD1 SQL> show parameter log_archive_dest_2
PROD1 SQL> alter system set log_archive_dest_state_2='defer' scope=both;

Next, you cancel redo apply on the standby database:

PROD2 SQL> alter database recover managed standby database cancel;
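
If you want to confirm that the standby had caught up with the primary before you stop it, a quick look at v$dataguard_stats on the standby (while redo apply is still running) will tell you. A simple sketch:

PROD2 SQL> select name, value from v$dataguard_stats where name in ('transport lag', 'apply lag');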

Finally, you shut down the database:

PROD2 SQL> shutdown immediate

If you are using Grid Infrastructure (GI) to manage the database, you should stop and disable the database. Disabling the database is strictly speaking not necessary, but again a "better-safe-than-sorry" approach:

[oracle@bm2]$ $ORACLE_HOME/bin/srvctl stop database -d PROD2
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl disable database -d PROD2
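
If you want to double-check, you can ask GI for the status and configuration of the database; at this point it should be stopped and disabled:

[oracle@bm2]$ $ORACLE_HOME/bin/srvctl status database -d PROD2
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl config database -d PROD2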

Upgrade

Now you can upgrade the primary database using the method you prefer. Complete all the post-upgrade tasks and perform the necessary tests to validate the new database release.

If something happens during the upgrade and you want to revert, you can flash back the database (or restore it on Standard Edition) and simply undo the before-upgrade steps (re-enable the database, start it, restart redo apply, and so forth).
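
Flashing back requires a restore point created before the upgrade started. If you want that safety net, a guaranteed restore point created on the primary after downtime has started could look like this. This is just a sketch; the restore point name is an example, and it assumes you have not raised COMPATIBLE after the upgrade:

PROD1 SQL> create restore point before_upgrade guarantee flashback database;
PROD1 SQL> --And if you need to fall back:
PROD1 SQL> shutdown immediate
PROD1 SQL> startup mount
PROD1 SQL> flashback database to restore point before_upgrade;
PROD1 SQL> alter database open resetlogs;

Remember to drop the restore point once you are satisfied with the upgrade.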

Remember that the standby database was left behind before we started touching anything, so if all else fails, simply restart the standby database and connect your users to it.

After Upgrade

Restart Data Guard

When you are happy with the upgrade, and your tests validate the new database release, you can proceed.

Update the listener on the standby host. Be sure to update the Oracle Home information in the listener.ora entry. Note that your listener.ora might be stored in a non-default location, so use lsnrctl status to find it. Finally, reload the listener:

[grid@bm2]$ $GRID_HOME/bin/lsnrctl status
[grid@bm2]$ vi $GRID_HOME/network/admin/listener.ora
[grid@bm2]$ $GRID_HOME/bin/lsnrctl reload
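
If you use static registration for the standby database (often the case in a Data Guard setup), the attribute to change is ORACLE_HOME in the SID_DESC entry. This is just a sketch of how such an entry could look after the change; the names and paths are from my environment:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = PROD2)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0.0/dbhome_1)
      (SID_NAME = PROD)
    )
  )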

For the next commands, I will be using the same prompt, and I will need the following environment variables:

[oracle@bm2]$ export OLD_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
[oracle@bm2]$ export NEW_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
[oracle@bm2]$ export ORACLE_HOME=$NEW_HOME
[oracle@bm2]$ export ORACLE_SID=PROD
[oracle@bm2]$ #Set ORACLE_UNQNAME to DB_UNIQUE_NAME
[oracle@bm2]$ export ORACLE_UNQNAME=PROD2 

Next, if the standby database is using TNS_ADMIN in the default location ($ORACLE_HOME/network/admin), then be sure to copy the relevant TNS aliases into the new tnsnames.ora. There should be TNS aliases to the primary and standby database. Or, if there are no other databases in the same Oracle Home, you can simply copy the files:

[oracle@bm2]$ #Back up files
[oracle@bm2]$ cp $NEW_HOME/network/admin/sqlnet.ora $NEW_HOME/network/admin/sqlnet.ora.backup
[oracle@bm2]$ cp $NEW_HOME/network/admin/tnsnames.ora $NEW_HOME/network/admin/tnsnames.ora.backup
[oracle@bm2]$ #Copy from old to new home
[oracle@bm2]$ cp $OLD_HOME/network/admin/sqlnet.ora $NEW_HOME/network/admin
[oracle@bm2]$ cp $OLD_HOME/network/admin/tnsnames.ora $NEW_HOME/network/admin

Now, you can edit /etc/oratab and update the information about the Oracle Home to match the new Oracle Home:

[oracle@bm2]$ vi /etc/oratab
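
In my environment, the change is just the Oracle Home path in the existing entry:

#Before
PROD:/u01/app/oracle/product/18.0.0.0/dbhome_1:N
#After
PROD:/u01/app/oracle/product/19.0.0.0/dbhome_1:N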

Copy SPFile and password file to the new Oracle Home:

[oracle@bm2]$ cp $OLD_HOME/dbs/orapw$ORACLE_SID $ORACLE_HOME/dbs
[oracle@bm2]$ cp $OLD_HOME/dbs/spfile$ORACLE_SID.ora $ORACLE_HOME/dbs

If you are using GI to manage the database, you must upgrade the database in the GI configuration, which updates the Oracle Home information so that GI starts the database from the correct Oracle Home. Next, re-enable and start the database:

[oracle@bm2]$ $ORACLE_HOME/bin/srvctl upgrade database -db $ORACLE_UNQNAME -oraclehome $ORACLE_HOME
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl enable database -d $ORACLE_UNQNAME
[oracle@bm2]$ $ORACLE_HOME/bin/srvctl start database -d $ORACLE_UNQNAME

Or, if you are not using GI, simply start the database:

PROD2 SQL> startup mount

Re-enable Redo Log Transport and Apply

On the primary database, re-enable redo log transport to the standby database:

PROD1 SQL> alter system set log_archive_dest_state_2='enable' scope=both;

On the standby database, restart redo apply:

PROD2 SQL> alter database recover managed standby database disconnect from session;
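
To verify that redo is shipping and being applied again, you can check the archive destination on the primary and the recovery processes on the standby. A couple of simple checks (v$managed_standby is still available, although newer releases also offer v$dataguard_process):

PROD1 SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;
PROD2 SQL> select process, status, sequence# from v$managed_standby;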

Re-enable Data Guard Broker

First, we need to copy the broker config files into the new Oracle Home. If you store your broker config files outside of the Oracle Home, this might not be necessary for you:

[oracle@bm1]$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
[oracle@bm1]$ export ORACLE_UNQNAME=PROD1
[oracle@bm1]$ cp /tmp/dr1$ORACLE_UNQNAME.dat $ORACLE_HOME/dbs
[oracle@bm1]$ cp /tmp/dr2$ORACLE_UNQNAME.dat $ORACLE_HOME/dbs

Do the same on the standby database host:

[oracle@bm2]$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
[oracle@bm2]$ export ORACLE_UNQNAME=PROD2 
[oracle@bm2]$ cp /tmp/dr1$ORACLE_UNQNAME.dat $ORACLE_HOME/dbs
[oracle@bm2]$ cp /tmp/dr2$ORACLE_UNQNAME.dat $ORACLE_HOME/dbs

Now, you can restart the Data Guard Broker on both primary and standby database:

PROD1 SQL> alter system set dg_broker_start=true scope=both;

PROD2 SQL> alter system set dg_broker_start=true scope=both;

Finally, enable the broker configuration and fast start failover:

DGMGRL SYS@PROD1> show configuration
DGMGRL SYS@PROD1> enable configuration
DGMGRL SYS@PROD1> enable fast_start failover

Test

Use the broker to ensure everything is fine:

DGMGRL SYS@PROD1> show configuration
DGMGRL SYS@PROD1> show database prod1
DGMGRL SYS@PROD1> show database prod2

You should have SUCCESS listed for both databases.

Let’s try to make a switchover:

DGMGRL SYS@PROD1> switchover to prod2
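
After the switchover, PROD2 is the new primary. If you want to return to the original roles, connect to the new primary, validate the old primary, and switch back. validate database gives a readiness report before any role transition. A sketch:

DGMGRL SYS@PROD2> validate database prod1
DGMGRL SYS@PROD2> switchover to prod1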

If you don’t use Data Guard Broker, you use regular SQL and SQL*Plus to verify the Data Guard environment.
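
A minimal set of checks without the broker could look like this; the idea is simply to confirm the database roles and that the standby is keeping up:

PROD1 SQL> select database_role, open_mode from v$database;
PROD2 SQL> select database_role, open_mode from v$database;
PROD2 SQL> select name, value from v$dataguard_stats where name in ('transport lag', 'apply lag');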

Conclusion

It is actually not that complicated to upgrade your database, even if it is part of a Data Guard setup. A little extra legwork is needed to take care of the standby database. But the good thing is that your DR setup is maintained throughout the process.

I made a video on YouTube that shows the procedure. And while you are there, I suggest that you subscribe to our channel.

Keep an eye out for coming versions of AutoUpgrade. At the time of writing, our developers are working on streamlining the process. We want upgrade with Data Guard to be 100% automated (or as close to that as possible).

Further Reading

Webinar: Upgrade Your Databases to 19c

Danish Oracle User Group - DOUG

The Danish Oracle User Group is hosting a talk tomorrow on Upgrade Your Databases to 19c. I will be presenting and will talk about:

Oracle Database 19c is the long-term release, and this talk is about how to get there. The emphasis is on the AutoUpgrade tool, which is the new, recommended way to upgrade. There will be demos that show how you can get started and the options you have. In addition, we will cover upgrade best practices and deep insights into the process.

You need to upgrade to Oracle Database 19c and this talk will prepare you.

Time: 25 November 2020 15:00 CET/Copenhagen

The session is open to anyone, so if you are interested, simply go to their webpage and get the connection details. There is an English version of the text a little further down the page. I will give the talk in English.

If you want a sneak peek at the slides, they are already published.

I hope to see you tomorrow.

Minimal Downtime Migration with Full Transportable Export Import and Incremental Backups

If you need to migrate a database to the cloud or anywhere else for that matter, you should consider using Full Transportable Export Import (FTEX) and incremental backups. Even for really large databases – 10s or 100s of TB – you can still migrate with minimal downtime. And it works across different endian formats.

FTEX uses transportable tablespaces and the solution has these benefits:

  • You can implicitly upgrade the database as part of the migration
  • You can migrate from a non-CDB and into a PDB
  • You can keep downtime at a minimum by using frequent incremental backups
  • You can migrate across endianness – e.g. from AIX or Solaris to Oracle Linux

How Does It Work

The concept is explained in this video on our YouTube channel, and it includes a demo:

To make the backup and convert process really easy, Oracle provides a Perl script that automates the entire process. You download the scripts from My Oracle Support: V4 PERL Scripts to reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

What You Need to Be Aware Of

Transportable Tablespaces

To get a complete list of limitations on transporting data, you should have a look at the documentation. Also, there are some limitations specific to transportable tablespaces. The most notable are:

  • Character set and national character set should be the same. If not, there are a few options available; see also the checks after this list.
  • No columns can be encrypted with TDE Column Encryption. The only option is to remove the encryption before migration and re-encrypt afterwards.
  • TDE Tablespace Encryption is supported for same-endian migration. If you need to go across endianness, you must decrypt the tablespaces, and re-encrypt after migration. Remember in Oracle Database 12.2 you can encrypt tablespaces online.
  • If you are migrating across endianness, you must convert the data files. You must have disk space to hold a copy of all the data files. In addition, you should perform the convert on the platform that has the best I/O system and most CPUs. Typically, this is the cloud platform, which also offers scaling possibilities.
  • The database timezone and timezone file version must be identical.
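
The character set and timezone requirements from the list above are easy to check up front, on both the source and the target database. A couple of simple queries:

SELECT PROPERTY_NAME, PROPERTY_VALUE
FROM   DATABASE_PROPERTIES
WHERE  PROPERTY_NAME IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

SELECT VERSION FROM V$TIMEZONE_FILE;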

Full Transportable Export Import

  • Source database must be 11.2.0.3 or higher
  • Target database must be 12.1.0.1 or higher

If you can’t meet these requirements, you can still use this solution. But instead of doing a FTEX, you need to use Data Pump in another way.

Incremental Backups Using Perl Scripts

  • Source database must be 10.2.0.3 or higher
  • Target database must be 11.2.0.4 or higher

In addition, it is strongly recommended to use Block Change Tracking (BCT) on the source database. Note that this is an Enterprise Edition feature. If you don’t enable BCT, the incremental backups will be much slower, because RMAN has to scan every single data block for changes. With BCT, the database keeps track of changes in a special file. When RMAN backs up the database, it just gets a list of data blocks to include from the change tracking file.
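
Enabling BCT on the source database is a single command, and you can verify that it is active afterwards. The file location below is just an example; you can also let the database place the file via db_create_file_dest:

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/oradata/SRCDB/bct.chg';

SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;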

Conclusion

By using a combination of Full Transportable Export Import and incremental backups, you can migrate even huge databases to the cloud. And it even works for cross-endian migrations, like AIX or Solaris to Oracle Linux.

If you want to learn more about endianness and transportable tablespaces, you should watch this video on our YouTube Channel:

Further Reading

How to Stop Hardcoding Your TDE Keystore Password

When you encrypt your databases, you will often end up needing the keystore password to perform certain operations. For instance, cloning an encrypted PDB will require the keystore password:

CREATE PLUGGABLE DATABASE ... KEYSTORE IDENTIFIED BY S3cr3t;

This is not very secure because the keystore password is now visible in clear text. Further, if you have the command in a script, an Ansible playbook, a cron job, or the like, you will also have it there in clear text. And if your organization has implemented separation of duties, the operational DBAs might not even have access to the keystore password.

Enter Secure External Password Store (SEPS)

Using the example above, what if you could just write the following:

CREATE PLUGGABLE DATABASE ... KEYSTORE IDENTIFIED BY EXTERNAL STORE;

And the database would get the secret keystore password without involving you! Your problems would be solved.

That is what you can do with Secure External Password Store. Whenever you need to specify the keystore password using KEYSTORE IDENTIFIED BY you can use the EXTERNAL STORE clause, and avoid typing the password.

SEPS was introduced in 10g; however, since Oracle Database 12.2 you can store keystore credentials in it. SEPS is similar to an auto-login keystore. It is a file stored in the file system, encrypted with a password that only the database knows. You as a DBA can’t extract the information from it.

Configuration

First, tell the database where SEPS is located:

SQL> ALTER SYSTEM 
     SET EXTERNAL_KEYSTORE_CREDENTIAL_LOCATION = '$ORACLE_BASE/admin/$ORACLE_SID/wallet/tde_seps'
     SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

Next, add the keystore password (in this case S3cr3t) into SEPS as a secret for the client TDE_WALLET:

SQL> ADMINISTER KEY MANAGEMENT
     ADD SECRET 'S3cr3t' FOR CLIENT 'TDE_WALLET'
     USING TAG 'TDE keystore password' 
     TO LOCAL AUTO_LOGIN KEYSTORE '$ORACLE_BASE/admin/$ORACLE_SID/wallet/tde_seps';

Finally, replace the keystore password in your commands with SEPS using the EXTERNAL STORE clause:

SQL> --No longer need this
SQL> CREATE PLUGGABLE DATABASE ... KEYSTORE IDENTIFIED BY S3cr3t;
SQL> --Now you can do this
SQL> CREATE PLUGGABLE DATABASE ... KEYSTORE IDENTIFIED BY EXTERNAL STORE;

You can use KEYSTORE IDENTIFIED BY EXTERNAL STORE on most ADMINISTER KEY MANAGEMENT commands, like exporting and importing encryption keys, but for security reasons some ADMINISTER KEY MANAGEMENT commands still require that you type in the keystore password.
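For example, opening and closing the keystore and rekeying the master key can typically use the external store instead of the password. A sketch, assuming a software keystore and that your release permits it for these operations:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE;
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY EXTERNAL STORE WITH BACKUP;
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY EXTERNAL STORE;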

When you specify the location of SEPS, I strongly recommend that you stick to the default location, $ORACLE_BASE/admin/$ORACLE_SID/wallet/tde_seps. If you are also using the WALLET_ROOT parameter, then SEPS must be stored in the default location.

If you are using Oracle Key Vault or a dedicated Hardware Security Module instead of a software keystore, you can store those credentials in SEPS as well. Use a CLIENT of OKV_PASSWORD or HSM_PASSWORD, respectively.

Conclusion

You can simplify commands that involve the keystore by using SEPS. In addition, it will be more secure because you can avoid typing or hardcoding the keystore password in cleartext.

I have produced a small demo that you can watch on YouTube:

If you like the video and want more, remember to hit the subscribe button on our YouTube channel.

References

Enabling Local Undo With AutoUpgrade

If you have databases that run in shared undo mode, you should switch to local undo mode. Starting from 12.2, this is the default and recommended undo mode, and it offers a lot of cool functionality, obviously at the expense of having multiple undo tablespaces. There are already many good blog posts out there that describe the benefits of local undo and how to enable it.
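
If you are unsure which undo mode a CDB is currently running in, you can check database_properties; TRUE means local undo is already enabled:

SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';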

AutoUpgrade To The Rescue

If you are planning an upgrade with AutoUpgrade, you can also enable local undo during the upgrade. You simply configure it in the config file:

upg1.enable_local_undo=YES

And AutoUpgrade will take care of the rest for you. It will even create undo tablespaces in PDB$SEED and all of the PDBs. Once again AutoUpgrade can make your life easier.
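
For context, the entry sits in an ordinary AutoUpgrade config file together with the usual parameters. A minimal sketch; the paths and SID are just examples from my environment:

global.autoupg_log_dir=/u01/app/oracle/autoupgrade
upg1.sid=PROD
upg1.source_home=/u01/app/oracle/product/18.0.0.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
upg1.enable_local_undo=YES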

What Happens

Before creating the new undo tablespaces, the database determines the attributes of each tablespace. By default, it uses a ratio of 30% compared to CDB$ROOT for:

  • Initial data file size (calculated from current file size)
  • Maximum data files size (MAXBYTES)
  • Grow by (INCREMENT_BY)

Example (default ratio 30%):

Attribute                 CDB$ROOT                   PDB
Initial data file size    Current file size, 500M    150M
Maximum data file size    32G                        9.6G
Grow by                   5M                         1.5M

You can control the ratio using the parameter _seed_root_undo_ratio in CDB$ROOT. To set the percentage to 10% use the following command before the upgrade:

ALTER SYSTEM SET "_seed_root_undo_ratio"=10 SCOPE=SPFILE;

With this knowledge, you can now calculate the space you need for all those extra undo tablespaces. Thus, you can avoid running out of disk space or having the tablespaces created at an undesirable size.

After the upgrade, I would advise you to review the undo tablespace size per PDB. Different workloads require different amounts of undo. Especially the INCREMENT_BY attribute can become so low that it leads to too frequent data file grow operations.

If your database is already in local undo mode then the parameter _seed_root_undo_ratio has no effect at all. The database will not start a grow or shrink operation to meet the ratio defined by the parameter. Once you have switched to local undo mode, you are in full control yourself.

Conclusion

Be sure to enable local undo for all CDBs when you upgrade to 12.2 or a later release. If you have many PDBs and you use AutoUpgrade to enable it, be aware of the disk space needed for all those tablespaces. And review the settings afterwards.

Mike Dietrich wrote about a similar situation when you create CDBs using DBCA.

Resources

Move to the Cloud – Webinar

Yesterday, Mike Dietrich and I gave the final webinar in the Oracle Database 19c Upgrade Virtual Classroom series. It was about Move to the Cloud – not only for techies. Now, I say final, but we all know you should never say never. And in this case, that applies to final as well. We are already talking about subjects for a seventh webinar. If you have an interesting topic that you think we should cover, get in touch with me.

Oracle Database 19c Upgrade Virtual Classroom

Unfortunately, due to a technical glitch we skipped the part about migrating using transportable tablespaces and full transportable export/import. We uploaded the missing part to YouTube, so you can watch it.

For those interested, you can now download the slides. We had a lot of information to share, so browse through the deck to find the many hidden slides. Typically, they contain references and links to more information about a specific topic.

Within a week it should be possible to watch a recording of the webinar.

The Demos and Videos

The presentation we gave was a brand new one. We used as many demos and videos as we could – or rather, had time to prepare. We will post them on our YouTube channel as soon as possible. I suggest that you subscribe to it, so you receive word as soon as new content arrives. Further, we want to enhance the presentation even more, so we will be putting in more videos and demos. Let me know if there was a topic that could improve with a video or demo.

Thank You

Thanks to everyone that participated yesterday. Happy migrating!

Migration Strategies

Mike Dietrich and I gave a presentation on Migration Strategies – Tips and Tricks and Insights and Secrets today. The webinar is part of the Oracle Database 19c Upgrade Virtual Classroom series.

Oracle Database 19c Upgrade Virtual Classroom

The slide deck contains more information than we presented at the webinar. We had hidden a lot of slides that contain additional information and useful links. The slide deck is ready to download.

Soon you will be able to watch a recording as well.

Move to the Cloud (for techies)

If you want to know even more about migrations, you should register for the next webinar. It’s all about cloud migrations.

Date: Thursday 15 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Whether you have databases in a cloud environment, or you plan to lift databases soon, this webinar is for you. We won’t cover cloud solution benefits but will show you how you can migrate your database(s) into the Oracle Cloud. We’ll start with Autonomous and also cover migration into VMs, Bare Metal, OCI, ExaCS and ExaCC. And we’ll look at minimizing downtime strategy, where ZDM can help. This two-hour webinar is not strictly for technical geeks ‒ but our focus will be on practical migration approaches.

Thank You

Thanks to everyone that participated today. We had a lot of fun and look forward to seeing you on Thursday. Happy migrating!

Debut on the Big Stage

Mike Dietrich and I will give two webinars in mid-October on database migrations. One of them will focus solely on migration to Oracle Cloud Infrastructure. The webinars are part of the Oracle Database 19c Upgrade Virtual Classroom series. If you are interested, you have to register.

Oracle Database 19c Upgrade Virtual Classroom

Migration Strategies: Insights, Tips and Secrets

Date: Tuesday 13 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Now it’s time for us to dig deeper. We’d like to offer you further insights and a deep dive from a technical point of view, starting with Data Pump and Transportable Tablespaces, then Full Transportable Export Import, and adding RMAN Incremental Backups to decrease the downtime. Best practices and real-world experience will round up this two-hour webinar.

Move to the Cloud (for techies)

Date: Thursday 15 October 2020
Start Time: 13:00 GST – 12:00 EEST – 11:00 CEST – 10:00 BST
Duration: 120 mins

Whether you have databases in a cloud environment, or you plan to lift databases soon, this webinar is for you. We won’t cover cloud solution benefits but will show you how you can migrate your database(s) into the Oracle Cloud. We’ll start with Autonomous and also cover migration into VMs, Bare Metal, OCI, ExaCS and ExaCC. And we’ll look at minimizing downtime strategy, where ZDM can help. This two-hour webinar is not strictly for technical geeks ‒ but our focus will be on practical migration approaches.

Debut on the Big Stage

These two webinars will be my debut on the big stage. The previous webinars attracted huge interest and, unfortunately, some people couldn’t join because the webinar had reached its maximum number of attendees. If you register, I recommend joining early to get your seat.

I have been presenting in person and virtually for some years now. Also, since I joined Oracle in January, I have been doing quite a few presentations. However, this is my first time in front of such a big crowd. I am excited and look forward to it – but also a little intimidated. Luckily, I have super-star Mike Dietrich there to (virtually) hold my hand.

And, finally, I promise you: No marketing slides – just demos and details.

I hope to see you there!

Classic Preupgrade Tool and Change of Server

In short: If you are using the classic Preupgrade tool (preupgrade.jar), you should ensure that the tool is present in the target Oracle Home. The MOS note How to Download and Run Oracle’s Database Pre-Upgrade Utility (Doc ID 884522.1) has been updated to highlight the following:

If the upgrade-to version is 12.2 or higher, then save the file to your target $ORACLE_HOME/rdbms/admin directory and then unzip the file.

You can use this example:

cp preupgrade_19_cbuild_7_lf.zip /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/
cd /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/
unzip preupgrade_19_cbuild_7_lf.zip

If you are moving the database to a new server as part of the upgrade, ensure that the same version of the Preupgrade tool is used on both the source and target database host.
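
For reference, this is roughly how you run the tool against the source database, with the environment pointing at the source database and the jar file taken from the new Oracle Home. The SID and path are examples; the exact options are described in the MOS note:

export ORACLE_SID=PROD
$ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT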

Background

Lately, I have been involved in a few cases where customers reported that the classic Preupgrade tool failed during the post-upgrade fixups:

@postupgrade_fixups.sql 

DECLARE
*
ERROR at line 1:
ORA-20000: In run_check, Pre-Upgrade Package Requested Check "post_disable_bct_upg" does not exist 
ORA-06512: at line 1 
ORA-06512: at "SYS.DBMS_PREUP", line 293 
ORA-06512: at "SYS.DBMS_PREUP", line 5227 
ORA-06512: at "SYS.DBMS_PREUP", line 3239 
ORA-06512: at line 139 

After guidance from our developers, I learnt that you must extract the classic Preupgrade tool (all the files) into the target Oracle Home before you execute the post-upgrade fixups. Specifically, the files must go into $ORACLE_HOME/rdbms/admin. You can safely overwrite the existing files, or back them up first if you are cautious.

This is especially relevant when you are upgrading the database and moving the database to a new server as well. I have a blog post series on upgrading on VM DB Systems in OCI and they all involve moving the database to a new server. And my initial editions of the posts didn’t have this information.

When you run the classic Preupgrade tool on the source database, you should be using the latest version of the tool. You can download it from My Oracle Support. You run the tool on the source database before you shut it down and move the database to another server. When the upgrade is completed and you execute the post-upgrade fixups, they will use auxiliary packages from the target Oracle Home to perform some of the fixups. If the two versions of the auxiliary packages are out of sync, you might run into problems.

One of the developers wrote:

When postupgrade_fixups.sql is executed, preupgrade_package.sql is executed again, but it is taken from the $ORACLE_HOME/rdbms/admin. As it has a different version, then it is not able to execute the postupgrade fixup

Conclusion

I have been upgrading databases for many years, and I haven’t been aware of this. That I haven’t run into problems before is just pure luck, I assume.

And remember, always use the latest version of the tool from My Oracle Support.

Further Reading

  • Using the Pre-Upgrade Information Tool for Oracle Database
  • How to Download and Run Oracle’s Database Pre-Upgrade Utility (Doc ID 884522.1)
  • Database Preupgrade tool check list (Doc ID 2380601.1)

TDE from a Non-Security Guy

TDE Tablespace Encryption is a must-have in the cloud and something that most of us have to learn. I recently gave a presentation on the basics and what you will need to learn to survive in the cloud.

If you are interested in the recording, it is uploaded to our YouTube channel. It is 43 minutes long, and probably no one will watch it from start to finish. But you should really have a look at the part about the Secure External Password Store. It is a really awesome feature. There is a demo right here.

Danish Oracle User Group

The presentation was organized together with the Danish Oracle User Group and I have also published the slides.

I made a quick poll during the presentation to ask how many years the attendees had worked with databases without worrying about encryption. The answers ranged from eight to 35 (or even 42 – but was that a joke?). The majority were above 20. But with the evolution of the cloud, this is about to change. You must learn about encryption these days.

I also mentioned some indications that AES256 is not necessarily more secure than AES128 (the latter being the default encryption algorithm in Oracle Database 19c). I can’t tell you why – that’s way beyond my knowledge – but here are the references:

Also, there was a comment from Asger stating that an encrypted database is hard to compress by your storage system. That is true; normally encryption is bad for your compression ratio, unless you use the compression options that are built into the database. The database compresses the data before it is encrypted. You can have your cake – and eat it!

And just a few final words of advice:

  • Always back up your keystore
  • Never delete a keystore
  • Never forget your keystore password
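
Acting on the first piece of advice is straightforward for a software keystore; you can back it up with a single command. This is just a sketch; the backup identifier and password below are examples:

ADMINISTER KEY MANAGEMENT BACKUP KEYSTORE USING 'before_change' IDENTIFIED BY S3cr3t;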

I had much fun talking about TDE. The more I explore, the more I like it. Thanks to those that attended.