Oracle Data Pump and BFILEs

If you need to move data using Oracle Data Pump and have BFILEs in your database, what do you need to be aware of?

What Is a BFILE?

From the documentation:

BFILEs are data objects stored in operating system files, outside the database tablespaces. Data stored in a table column of type BFILE is physically located in an operating system file, not in the database. The BFILE column stores a reference to the operating system file. BFILEs are read-only data types. The database allows read-only byte stream access to data stored in BFILEs. You cannot write to or update a BFILE from within your application.

They are sometimes referred to as external LOBs.

You can store a BFILE locator in the database and use the locator to access the external data:

To associate an operating system file to a BFILE, first create a DIRECTORY object that is an alias for the full path name to the operating system file. Then, you can initialize an instance of BFILE type, using the BFILENAME function in SQL or PL/SQL …

In short, it is stuff stored outside the database that you can access from inside the database. Clearly, this requires special attention when you want to move your data.
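As a minimal sketch – assuming a hypothetical directory path /u01/app/docs and a hypothetical table called documents – this is how you associate an operating system file with a BFILE column:

SQL> create directory doc_dir as '/u01/app/docs';

SQL> create table documents (id number, doc bfile);

SQL> insert into documents 
     values (1, bfilename('DOC_DIR', 'invoice1.pdf'));

Only the locator (directory name plus file name) goes into the table; the file itself stays in the operating system. Note that the directory name passed to BFILENAME must match the directory object exactly – DOC_DIR in uppercase, since unquoted identifiers are stored in uppercase.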

How Do I Move It?

There are three things to consider:

  1. The file outside the database – in the operating system.
  2. The directory object.
  3. The BFILE locator stored in the table.

Table and Schema Mode Export

  1. You must copy the file in the operating system. Since a BFILE is read-only, you can copy the file before you perform the actual export.
  2. You must create the directory object. Directory objects are system-owned objects and not part of a table or schema mode export.
  3. Data Pump exports a BFILE locator together with the table. It exports the BFILE locator just like any other column. On import, Data Pump inserts the BFILE locator but performs no sanity checking. The database will not throw an error if the file is missing in the OS or if the directory is missing or erroneous.
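If you want to run such a sanity check yourself after the import, DBMS_LOB can tell you whether the file behind each locator is reachable. A minimal sketch, assuming the hypothetical documents table from the sketch above:

SQL> set serveroutput on
SQL> begin
       for r in (select id, doc from documents) loop
         --fileexists returns 1 if the OS file is reachable
         if dbms_lob.fileexists(r.doc) = 0 then
           dbms_output.put_line('File missing for id ' || r.id);
         end if;
       end loop;
     end;
     /

Be aware that DBMS_LOB.FILEEXISTS raises an error, rather than returning 0, if the directory object itself is missing or invalid.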

Full Export

  1. Like table and schema mode, you must copy the file.
  2. Directory objects are part of a full export. On import, Data Pump creates a directory object with the same definition. If you place the external files in a different location in the target system, you must update the directory object.
  3. Like table and schema mode, Data Pump exports the BFILE locator as part of the table.
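Updating the directory object on the target is a one-liner; a sketch with a hypothetical directory name and path:

SQL> create or replace directory doc_dir as '/u01/app/target/docs';

The BFILE locators in the tables don't change; they simply resolve through the updated directory path the next time the files are read.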

Do I Have BFILEs in My Database?

You can query the data dictionary and check if there are any BFILEs:

SQL> select owner, table_name 
     from dba_tab_cols 
     where data_type='BFILE';
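Since the directory objects must be handled separately, it is also worth listing them together with their paths:

SQL> select directory_name, directory_path 
     from dba_directories;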

Pro Tips

Here’s a collection of good tips and tricks I found while writing this series of blog posts.

Pro Tip #1: How Do You Determine Grid Infrastructure Patch Level?

To determine the GI patch level:

[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch lspatches | grep "OCW"

34444834;OCW RELEASE UPDATE 19.17.0.0.0 (34444834)

The inventory registers the GI Release Updates as OCW RELEASE UPDATE. In this example, GI is running on 19.17.0.

Sometimes critical one-off patches are delivered as merge patches with the GI Release Update. It can mess up the patch description. This example is from a Base Database Service in OCI:

[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch lspatches | grep "OCW"

34122773;OCW Interim patch for 34122773

The patch description no longer contains the name of the Release Update. In this case, you can trawl through MOS to find the individual patches in the merge patch to identify which Release Update it contains. Or, you can often look at the ACFS patch instead:

[grid@copenhagen1]$ $ORACLE_HOME/OPatch/opatch lspatches | grep "ACFS"

34139601;ACFS RELEASE UPDATE 19.16.0.0.0 (34139601)

Pro Tip #2: Where Can You Find the Log Files?

Logging happens in different places depending on which method you use. Here are a few locations to browse when there are problems:

  • $GRID_HOME/install
  • $GRID_HOME/cfgtoollogs
  • $GRID_BASE/crsdata/<node>/crsconfig
  • /u01/app/oraInventory/logs

Pro Tip #3: Where Can You Find Information On Troubleshooting?

OPatchAuto enables you to control the logging granularity. If you run into problems, increase the logging level to get more information:

$ORACLE_HOME/OPatch/opatchauto ... -logLevel FINEST

In addition, OPatchAuto can resume a broken session. Fix the issue and restart OPatchAuto. It will pick up from where it left off:

$ORACLE_HOME/OPatch/opatchauto resume

Pro Tip #4: How Can I Install Patches Manually?

If you don’t want to use the automation tools (like OPatchAuto), you can install the patches manually using OPatch.

The details are in Supplemental Readme – Grid Infrastructure Release Update 12.2.0.1.x / 18c /19c (Doc ID 2246888.1).

The GI patch bundle contains several sub-patches that must be installed in the correct order using opatch apply.

Pro Tip #5: How Do You Roll Back A Patch?

In-place OPatchAuto

You can find patch rollback (or deinstallation) instructions in the patch readme file. In short, you execute the following command:

$ORACLE_HOME/OPatch/opatchauto \
   rollback <unzipped_patch_location>/<patch_dir>

Note you might need to reboot the server.

Out-of-place OPatchAuto

You find rollback instructions in MOS note Grid Infrastructure Out of Place (OOP) Patching using opatchauto (Doc ID 2419319.1). In short, you execute the following command:

$NEW_ORACLE_HOME/OPatch/opatchauto \
   rollback \
   -switch-clone

Out-of-place SwitchGridHome

You find rollback instructions in MOS note Step by Step Zero Downtime Oracle Grid Infrastructure Patching in Silent Mode (Doc ID 2865083.1). The procedure is the same as ZDOGIP. You need to execute a few commands. Check the MOS note for details.

Zero Downtime Oracle Grid Infrastructure Patching

The procedure is the same as Out-of-place SwitchGridHome.

Pro Tip #6: The FAQ

On My Oracle Support there is an extensive FAQ. Bookmark it: RAC: Frequently Asked Questions (RAC FAQ) (Doc ID 220970.1)


New Webinars Coming Up

I am really excited to announce two new webinars:

  • Data Pump Best Practices and Real World Scenarios – April 5, 2023, 16:00 CET
  • Release and Patching Strategies for Oracle Database 23c – May 10, 2023, 16:00 CET

Oracle Database 19c Upgrade Virtual Classroom

You can sign up here.

The entire team (Roy, Mike, Bill, Rodrigo, and I) is working hard to polish all the details.

Data Pump Best Practices and Real World Scenarios

In short: It’s all the stuff we couldn’t fit into our last Data Pump webinar.

Here's the full abstract:

We promised to share more information in our last Data Pump Deep Dive With Development seminar. And here we are, back again. Data Pump best practices is the topic we would like to emphasize today. This will include some common tips and tricks but especially target parallel optimizations and transformations. It is quite common that you restructure objects and types when you use Data Pump for a migration, so we will give a detailed overview of the most common scenarios. This leads us directly to real-world scenarios where we'll demonstrate several of those best practices used by customers.

Release and Patching Strategies for Oracle Database 23c

This is a revamped version of our very first webinar, Release and Patching Strategy. It’s updated to reflect the latest changes, and we have included even more details and demos.

Last time, the interest in this webinar was huge, and we ended up maxing out our Zoom capacity. A lot of you couldn't get in. So be ready on time, or you might miss your seat.

The full abstract:

This is a session every Oracle customer needs to attend. Oracle Database 23c, the next long-term support release, will be available sometime this year. Now is the time to refresh your knowledge about the best and most efficient strategies for your future release planning. Are there changes to the release numbering? Are there important changes regarding database patching? We will give you a complete overview of the available patch bundles and recent and future changes. We'll discuss and showcase why a proper patching strategy is of vital importance – and how you can automate and optimize certain essential tasks.

But I Can’t Make It

Don't worry. As usual, we will publish the recording on our YouTube channel and share the slides with you. Keep an eye on my Webinars page.

But it's better to watch it live. The entire team will be there, and we will answer all your questions. I promise you: we won't leave until all questions have been answered.

All Tech, No Marketing

Remember, our mantra is: All tech, no marketing.

These webinars are technical. This is the place for you if you want all the gory details and cool demos.

I hope to see you there!

Oracle CloudWorld 2022 On-Demand

Get the last bits out of Oracle CloudWorld 2022 by watching some of the recordings. There are some really great sessions available on demand – all free on YouTube.

Upgrade to Oracle Database 19c

My session shows how to upgrade to Oracle Database 19c using AutoUpgrade. A short demo to get you started, and then I talk about more advanced scenarios, like upgrading with Data Guard and RAC.

AutoUpgrade 2.0: internals and new features

In this session, I talk about some of the newest features in AutoUpgrade. A lot of customers have asked us to enhance AutoUpgrade with patching capabilities. We heard you – and that is one of the new features I present in this session.

What Else

The playlist has close to 60 videos, so there is much to dig into, and there are so many more exciting sessions to watch.

Oracle CloudWorld 2023

The next CloudWorld takes place in Las Vegas on September 18-21.

Registration opens on April 11 with attractive early bird prices.

I hope to see you in Las Vegas.

Installing Oracle Database 19c and All the Things to Put on Top

When you prepare for patching or upgrading Oracle Database 19c, you must also prepare an Oracle Home. Installing the Oracle Home is easy, but there is more to it.

Out-of-place Installation

I always use out-of-place installation. I install a new, fresh Oracle Home. I will move the databases into that Oracle Home as I upgrade or patch.

The alternative, in-place installation, leads to more downtime, is more error-prone, and makes fallback more complicated. In addition, in-place installation gradually slows down patching, as Mike Dietrich describes in Binary patching is slow because of the inventory.

Download and Prepare Oracle Home

First, I download the base release from Oracle Software Delivery Cloud, aka e-delivery.

Find REL: Oracle Database 19.3.0.0.0 – Long Term Release, the right platform, and download.

Extract the zip file into a new Oracle Home location:

export NEW_ORACLE_HOME=<path>
mkdir -p $NEW_ORACLE_HOME
cd $NEW_ORACLE_HOME
unzip -oq /tmp/LINUX.X64_193000_db_home.zip
rm /tmp/LINUX.X64_193000_db_home.zip

Don’t run the installer yet.

Clone Existing Oracle Home

I could clone an existing Oracle Home and then just apply the new patches. But that would make me susceptible to the same inventory issue described above for in-place patching.

Update OPatch

OPatch is needed later on to apply patches to the new Oracle Home. Get the latest version and install it into the new Oracle Home:

rm -rf $NEW_ORACLE_HOME/OPatch
cd $NEW_ORACLE_HOME
unzip -oq /tmp/<opatch_zip_file>
rm /tmp/<opatch_zip_file>

Patches

Now, I will determine which patches to apply to the Oracle Home.

  • Start by getting the latest Release Update. I really mean the latest. I have helped too many customers with issues, only to find out the issue is already solved in a later Release Update. If your database has JAVAVM installed, then get the combo patch.
  • Review the list of important one-off patches for the specific Release Update. The list contains important fixes that haven't made it into a Release Update yet. I don't need to get all of them, but based on my knowledge of my database, I can cherry-pick those that could be relevant.
  • If I am using Data Pump, I get the Data Pump bundle patch. Data Pump fixes rarely make it into Release Updates because they are not RAC-rolling installable, which is a clear requirement for inclusion in a Release Update.
  • If my databases use time zone files that are newer than version 32, then I must also apply the matching time zone patch. The default time zone file in Oracle Database 19c is version 32, and the base release contains all previous versions. To check the time zone file version in a database:
    SQL> select * from v$timezone_file;
    
  • If I am using GoldenGate, I get the GoldenGate bundle patch.
  • If my database uses OJVM (see appendix), I get the OJVM patch that matches the Release Update I am using. I can also get the OJVM patch as a combo patch together with the Release Update.

Unzip

Now that I have downloaded a number of zip files, I go ahead and unzip the files into separate directories. In the example below, I am using the 19.16 Release Update, the Data Pump bundle patch, and a time zone patch:

#Release Update 19.16.0
mkdir -p $NEW_ORACLE_HOME/patch/p34133642
cd $NEW_ORACLE_HOME/patch/p34133642
unzip -oq /tmp/p34133642_190000_Linux-x86-64.zip
rm /tmp/p34133642_190000_Linux-x86-64.zip

#Data Pump bundle patch
mkdir -p $NEW_ORACLE_HOME/patch/p34294932
cd $NEW_ORACLE_HOME/patch/p34294932
unzip -oq /tmp/p34294932_1916000DBRU_Generic.zip
rm /tmp/p34294932_1916000DBRU_Generic.zip

#Time zone patch version 37
mkdir -p $NEW_ORACLE_HOME/patch/p33613829
cd $NEW_ORACLE_HOME/patch/p33613829
unzip -oq /tmp/DSTV37_p33613829_190000_Linux-x86-64.zip
rm /tmp/DSTV37_p33613829_190000_Linux-x86-64.zip

Install

Now, I can install the Oracle Home and apply all the patches in one operation. Mike has a really good description of the functionality and a demo.

I do a silent installation using a response file. Notice how I am applying the patches during the installation using -applyRU and -applyOneOffs:

export ORACLE_BASE=<path_to_oracle_base>
export ORACLE_HOME=<path_to_oracle_home>
#Path to inventory is most likely /u01/app/oraInventory
export ORA_INVENTORY=<path_to_inventory>
cd $ORACLE_HOME
./runInstaller -ignorePrereqFailure -waitforcompletion -silent \
   -responseFile $ORACLE_HOME/install/response/db_install.rsp \
   -applyRU patch/p34133642/34133642 \
   -applyOneOffs patch/p34294932/34294932,patch/p33613829/33613829 \
   oracle.install.option=INSTALL_DB_SWONLY \
   UNIX_GROUP_NAME=oinstall \
   INVENTORY_LOCATION=$ORA_INVENTORY \
   SELECTED_LANGUAGES=en,en_GB \
   ORACLE_HOME=$ORACLE_HOME \
   ORACLE_BASE=$ORACLE_BASE \
   oracle.install.db.InstallEdition=EE \
   oracle.install.db.OSDBA_GROUP=dba \
   oracle.install.db.OSBACKUPDBA_GROUP=dba \
   oracle.install.db.OSDGDBA_GROUP=dba \
   oracle.install.db.OSKMDBA_GROUP=dba \
   oracle.install.db.OSRACDBA_GROUP=dba \
   oracle.install.db.isRACOneInstall=false \
   oracle.install.db.rac.serverpoolCardinality=0 \
   oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \
   oracle.install.db.ConfigureAsContainerDB=false \
   SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
   DECLINE_SECURITY_UPDATES=true

You can read more about silent installation on oracle-base.com. That's where I got the inspiration. The response file db_install.rsp is the default one that comes with the Oracle Home. I don't change anything in it.

Finally, I execute root.sh as root:

$ORACLE_HOME/root.sh

AutoUpgrade

Download the latest version of AutoUpgrade, and put it into $ORACLE_HOME/rdbms/admin.
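To verify that the new version is picked up, you can ask the jar file directly:

java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -version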

Et Voilà

That’s it. I can now use the Oracle Home to upgrade or patch my Oracle Database 19c.

Appendix

Patches

As if the list of patches to apply wasn’t long enough. There are even more MOS notes!

The good news is that you don't have to go through them as long as you stay on the latest Release Update. If you check the notes, you will see that almost all bugs are already included in a Release Update. That's a pretty strong argument for always using the latest Release Update.

  • Things to Consider to Avoid Prominent Wrong Result Problems on 19C Proactively (Doc ID 2606585.1)
  • Things to Consider to Avoid Database Performance Problems on 19c (Doc ID 2773012.1)
  • Things to Consider to Avoid SQL Performance Problems on 19c (Doc ID 2773715.1)
  • Things to Consider to Avoid SQL Plan Management (SPM) Related Problems on 19c (Doc ID 2774029.1)

Grid Infrastructure

If Grid Infrastructure manages my database, I must remember to keep GI and database patch level in sync.

It Looks Complicated

It is a little too cumbersome. We know, and that's why there are several initiatives to make it easier.

You could also look at Oracle Fleet Patching & Provisioning (FPP). Philippe Fierens is product manager for FPP. You can read his blog posts or reach out to him (he is a nice guy who takes every opportunity to talk about FPP).

OJVM

If your database is using OJVM, then you must also apply the OJVM patch to your Oracle Home. You can check it using:

SQL> select version, status 
     from dba_registry 
     where comp_id='JAVAVM';

I have seen many databases that had OJVM installed, but it was never used. In that case, you can remove the component from your database. Then you no longer need to apply the OJVM patch to your Oracle Home. Plus, it has the added benefit of making your upgrades faster.

Mike Dietrich has a good blog – the OJVM Patching Saga. Catchy title!

Troubleshooting Rabbit Hole: From Data Guard to Data Integrity Checks

I always fear the worst when I get a TNS error. It's not my area of expertise. A TNS error was exactly what I got while configuring a Data Guard environment. Redo transport didn't work; the redo logs never made it to the standby database.

The Error

I took a look in the alert log on the primary database and found this error:

2022-05-10T08:25:28.739917+00:00
"alert_SALES2.log" 5136L, 255034C
        TCP/IP NT Protocol Adapter for Linux: Version 12.2.0.1.0 - Production
  Time: 10-MAY-2022 18:09:02
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12650

TNS-12650: No common encryption or data integrity algorithm
    ns secondary err code: 0
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0

A little further in the alert log, I found proof that the primary database could not connect to the standby database:

2022-05-10T18:09:02.991061+00:00
Error 12650 received logging on to the standby
TT04: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (12650)
TT04: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
2022-05-10T18:09:02.991482+00:00
Errors in file /u01/app/oracle/diag/rdbms/sales2_fra3cx/SALES2/trace/SALES2_tt04_75629.trc:
ORA-12650: No common encryption or data integrity algorithm
Error 12650 for archive log file 1 to '...'

The Investigation

As always, Google it! Although I have used DuckDuckGo for privacy reasons instead of Google for many years, I still say google it, which is fairly annoying.

The search revealed this MOS note: ORA-12650: No Common Encryption Or Data Integrity Algorithm When Using SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=sha256 (Doc ID 2396891.1). Although it is fairly old, it led me to look for issues with the data integrity checks defined in sqlnet.ora.

The primary database had the following defined in sqlnet.ora:

SQLNET.CRYPTO_CHECKSUM_SERVER=REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA1)
SQLNET.CRYPTO_CHECKSUM_CLIENT=REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA1)

The above means that any connection made to or from this database must use data integrity checks. CRYPTO_CHECKSUM_SERVER and CRYPTO_CHECKSUM_CLIENT define that. Also, the database will only accept connections using the SHA1 algorithm.

Then I looked in sqlnet.ora on the standby database:

SQLNET.CRYPTO_CHECKSUM_CLIENT=ACCEPTED
SQLNET.CRYPTO_CHECKSUM_SERVER=ACCEPTED
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA256,SHA384,SHA512,SHA1)
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA256,SHA384,SHA512)

This database does not require data integrity checks. But if the other party requests or requires it, the server is fine with it. That's the meaning of ACCEPTED. But look at the allowed algorithms. When acting as server (i.e., receiving connections from someone else), it does not allow the SHA1 algorithm – the only one allowed by the counterpart.

The Solution

I decided to remove all instances of SHA1 because:

  • It is an old algorithm
  • Any 12c database or client supports newer algorithms
  • In this environment, I don’t have any old 11g servers or clients

I added all the SHA-2 algorithms as supported algorithms. Now, sqlnet.ora in both databases looks like this:

SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA256,SHA384,SHA512)
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA256,SHA384,SHA512)

This solved the problem, and redo transport now worked fine.
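If you want to verify what a session actually negotiated, the banner information in V$SESSION_CONNECT_INFO shows the active crypto-checksumming (and encryption) services. A quick check for your own session:

SQL> select network_service_banner 
     from v$session_connect_info 
     where sid = sys_context('userenv', 'sid');

Look for the banner line mentioning the crypto-checksumming service; it names the algorithm in use.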

If I wanted maximum security, I would allow only the SHA512 algorithm in both sqlnet.ora files:

SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA512)
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA512)

And force both databases to always use data integrity checks:

SQLNET.CRYPTO_CHECKSUM_SERVER=REQUIRED
SQLNET.CRYPTO_CHECKSUM_CLIENT=REQUIRED

Security

Some questions I asked myself while reading the Security Guide 19c.

Why do you want data integrity checks in your connections?

To protect against two types of attack:

  1. Data modification attack: An unauthorized party intercepting data in transit, altering it, and retransmitting it is a data modification attack. For example, intercepting a $100 bank deposit, changing the amount to $10,000, and retransmitting the higher amount is a data modification attack.
  2. Replay attack: Repetitively retransmitting an entire set of valid data is a replay attack, such as intercepting a $100 bank withdrawal and retransmitting it ten times, thereby receiving $1,000.

Can I do more to strengthen security in sqlnet.ora?

Yes. You should definitely also take a look at network encryption to protect data-in-transit. Take a look at Configuring Oracle Database Native Network Encryption and Data Integrity in the Security Guide 19c. These four parameters are of interest:

  • SQLNET.ENCRYPTION_SERVER
  • SQLNET.ENCRYPTION_CLIENT
  • SQLNET.ENCRYPTION_TYPES_SERVER
  • SQLNET.ENCRYPTION_TYPES_CLIENT

Also, reading Securing the Oracle Database – A technical primer can inspire you.

What’s wrong with SHA-1?

It's old, and the evolution of computing power has made it insecure. From Wikipedia:

In cryptography, SHA-1 (Secure Hash Algorithm 1) is a cryptographically broken but still widely used hash function which takes an input and produces a 160-bit (20-byte) hash value known as a message digest – typically rendered as a hexadecimal number, 40 digits long. It was designed by the United States National Security Agency, and is a U.S. Federal Information Processing Standard.

Since 2005, SHA-1 has not been considered secure against well-funded opponents; as of 2010 many organizations have recommended its replacement. NIST formally deprecated use of SHA-1 in 2011 and disallowed its use for digital signatures in 2013. As of 2020, chosen-prefix attacks against SHA-1 are practical. As such, it is recommended to remove SHA-1 from products as soon as possible and instead use SHA-2 or SHA-3. Replacing SHA-1 is urgent where it is used for digital signatures.

I Never Meet Joel, But I Have Met His Spirit #JoelKallmanDay

I didn't know Joel Kallman. I had heard his name, and I knew he had something to do with APEX. But we never had the opportunity to meet. After his untimely death, I became aware of his huge impact on the Oracle community. So many people shared so many great stories about him and how he influenced the community. It became apparent to me how much he meant to it.

This post is about the Oracle community. When I started working with Oracle, I quickly found out how great the community was. People were happily sharing knowledge – on blogs, at conferences, in videos. I was blown away by the amount of help and guidance I could get. For free – and even with a huge smile on top of that. I was a rookie, and all that help made a huge difference. This is what I mean when I say I believe I have met Joel's spirit. And still, after many years of working with Oracle, I use and value the community.

Since I joined Oracle, it has been on my to-do list to do a rolling upgrade. I never tried that for real. Last week, I had time to try out rolling upgrades with DBMS_ROLLING. Provision a Data Guard environment, open the documentation, get a cup of coffee, and off we go…

At one point, I was stuck. Like in – really stuck! Friday night – even after trying Connor's gin/tonic debugging – still stuck. Saturday morning – after a good night's sleep – still stuck. Errors, errors, errors!

Luckily, I attended a talk at UKOUG Tech Talk in April this year about rolling upgrades. Clive and Zahid did a really good presentation on rolling upgrades with a lot of valuable technical knowledge. I flipped through the slides, and then – voilà – problem solved. Now I could proceed with the upgrade. Thanks Clive, thanks Zahid.

If it hadn’t been for the community – for people happily sharing knowledge and helping each other – for Joel’s spirit, I would probably still be stuck in that rolling upgrade.

Never stop sharing knowledge!

New Webinars Coming Your Way

These days there is an abundance of virtual events. I feel exhausted by it from time to time. So you might ask yourself: is there really room for more virtual events?

The answer is: Yes, there is 🙂

We have added more upcoming webinars to our Virtual Classroom Series: Upgrade to Oracle Database 19c. If you enjoyed the previous ones, I am sure that you will love these as well.

Webinar                                             Date
10  How Low Can You Go? Zero Downtime Operations   October 21, 2021
11  Secure Your Job – Fallback Is Your Insurance   November 11, 2021
12  Migrating Very Large Databases                 December 9, 2021
13  Data Pump Extreme – Deep Dive with Development January 27, 2022

Of course, we still keep our promise to you: All tech – no marketing, no buzzwords

If you miss any of the webinars, head over to the Webinar page and find the recording and the slides. All the previous nine webinars are there as well.

I hope to see you there!

Oracle Database 21c Is Here

Last week Oracle released Oracle Database 21c for additional platforms: Linux and Exadata. Other platforms will follow. Keep an eye on Release Schedule of Current Database Releases (Doc ID 742060.1) for further information.

Things to Notice

In my part of the Oracle Database, there are things to notice.

To get all the details, visit the Upgrade and Utilities part of the new features documentation. There are some good examples of how the features can be used.

Behaviour Changes

Read-Only Oracle Home (ROOH) is now the default. Be sure to set the following environment variables to control the location of these directories:

  • ORACLE_BASE_HOME
  • ORACLE_BASE_CONFIG

I like ROOH, but it takes some time to get used to. For instance, network/admin files (tnsnames, sqlnet) and dbs files (pfile, spfile) are now in a new location.
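If you are ever in doubt where a read-only home keeps its files, the Oracle Home ships two small utilities that print the locations:

#Prints ORACLE_BASE_HOME - network/admin lives under here
$ORACLE_HOME/bin/orabasehome

#Prints ORACLE_BASE_CONFIG - dbs (pfile/spfile) lives under here
$ORACLE_HOME/bin/orabaseconfig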

The Pre-Upgrade Information Tool or preupgrade.jar is removed and replaced by AutoUpgrade. A few new parameters have been introduced to make the transition easier.
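For reference, the AutoUpgrade equivalent of the old pre-upgrade check is analyze mode. A minimal sketch, assuming a config file created from the built-in sample:

#Create a sample config file (sample_config.cfg) and adjust it to your database
java -jar autoupgrade.jar -create_sample_file config
#Run the pre-upgrade checks only - nothing is changed
java -jar autoupgrade.jar -config sample_config.cfg -mode analyze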

Innovation Release

Remember, 21c is an innovation release, which means a shorter support window than Long Term Releases such as Oracle Database 19c. If you adopt innovation releases, you should be prepared to upgrade again within one year after the next release ships.

I would not recommend that you upgrade your production systems to Oracle Database 21c due to the limited support period. Not unless you are prepared to upgrade the database soon again – when support runs out. Oracle Database 19c is the current Long Term Support release. I recommend that for production databases.

Different release types for Oracle Database - innovation vs long term support

To learn more about innovation release and our release model, have a look at our slide deck. We discuss it in the first chapter.

New Features

I want to mention a few new features. They haven't attracted as much attention as the marquee features, but they are still cool.

Expression-based init.ora parameters make it possible to base database parameters on calculations of the system's configuration. For example, setting the database parameter CPU_COUNT to half the number of CPUs (on Windows):

alter system set cpu_count='$NUMBER_OF_PROCESSORS/2';

For more details, check out my video on YouTube.

Placeholders in SQL DDL Statements can improve application security because sensitive information, like passwords, doesn't need to be hardcoded in SQL DDL. For example, you can write this statement:

CREATE USER :!username IDENTIFIED BY :!password ...

And Oracle Call Interface programs can substitute the placeholders into:

CREATE USER "DANIEL" IDENTIFIED BY "MyS3cr3tP!d" ...

This is similar to data binding but occurs in Oracle Client.

Conclusion

The complete 21c documentation is online, so I suggest that you head on over there and have a look. In the upgrade guide, you can find the list of behavior changes and also deprecated and desupported functionality. And finally, but most interesting perhaps, is Learning Database New Features.

Try it out in Always Free ADB or explore the Oracle LiveLabs.

Follow-up On Cool Features Webinar – Jan 2021

Wow! Mike and I gave a webinar yesterday: Cool Features – not only for DBAs. We showed a lot of cool features – and the audience responded by asking really cool questions. We had more than 100 questions to answer live, and below you can find the answers to the questions we had to investigate further.


Expression Based Parameters

Ahh – one of my new favourites. A viewer asked whether there is any syntax check when you create an expression-based parameter. So let's try. The first command should work:

SQL> alter session set resumable_timeout='3000/2';

Session altered.

Now, let’s make a syntax error and see what happens:

SQL> alter session set resumable_timeout='3000//2';

ORA-32005: error while parsing size specification [3000//2]

SQL> alter system set resumable_timeout='3000//2';

ORA-32005: error while parsing size specification [3000//2]

SQL> alter system set resumable_timeout='3000//2' scope=both;

ORA-32005: error while parsing size specification [3000//2]

SQL> alter system set resumable_timeout='3000//2' scope=memory;

ORA-32005: error while parsing size specification [3000//2]

SQL> alter system set resumable_timeout='3000//2' scope=spfile;

ORA-32005: error while parsing size specification [3000//2]

Conclusion: Your expressions are checked when you issue the ALTER SESSION or ALTER SYSTEM command.

But take care when using environment variables in your expressions. If the environment variable is available when you issue the statement, the command will succeed. However, if the environment variable is not present when the database restarts, then you will have a problem. This could be the case if you forget to add the environment variable to your profile or the environment in Grid Infrastructure (srvctl setenv). If this happens, the startup will error out.

And finally, the expressions you put into a pfile are not checked before the database starts using that pfile. If there are invalid expressions in your pfile, the database startup will error out as well.

Online Table Move

Indexes and LOBs

When you move a table online, the indexes remain valid during and after the move. Optionally, you can specify the UPDATE INDEXES clause to change the index storage attributes as well (like moving an index to a new tablespace):

SQL> alter table t1 move online tablespace data update indexes(i1 tablespace data);

The initial version of this blog post claimed that indexes become unusable during an online table move unless you specify the UPDATE INDEXES clause. This is not true! Kudos to Olaf Nowatzki for informing me. Thanks!

The LOB segments that are created to support the LOB columns remain in the same tablespace, even if you move the table to a new tablespace. But you can move the LOB segments as well:

SQL> alter table t1 move online tablespace data lob(c1) store as (tablespace data);

A viewer was concerned whether he could move a table online and also update indexes, because one of the indexes was really wide. Apparently, in a previous version this had been an issue for them. I tested this with an index on 10 columns defined as VARCHAR2(128 BYTE), and it was not a problem. If your index is even wider, test it yourself – or consider whether you really need such an index.

Moving an IOT online is supported as well – but not partitioned IOTs.

The Rest

If the table has unused columns when you move it, it will still have unused columns. They are not affected by a move operation. Actually, Connor McDonald made a really good video where he talks about unused columns.

Also, an attendee wanted to know whether the online command generates more UNDO or TEMP usage. Let's see what happens with my 125 MB table (no indexes):

Metric                     Online move    Regular move
undo change vector size    96504          37988
session pga memory         5750416        4243088

As you can see, it does require more UNDO to move the table online. The numbers might change on a busy system with many indexes and LOBs. TEMP appears to be relatively unaffected – that might change if there are unique indexes that must be rebuilt. Lesson learned: expect more resource usage, and test before trying in production.
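For reference, the metrics in the table are ordinary session statistics, so you can sample them from V$MYSTAT before and after each move and compare; a sketch:

SQL> select sn.name, ms.value
     from v$mystat ms, v$statname sn
     where sn.statistic# = ms.statistic#
     and sn.name in ('undo change vector size', 'session pga memory');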

And finally – does online table move work on tables with OLS (Label Security) policies? I don't know – so far I haven't been able to get confirmation. My immediate answer is no. Take, for instance, DBMS_REDEFINITION – it is not supported when the table has OLS policies, so I assume the same applies to online table move.

Online Convert To Partitioned Table

This feature only works on a non-partitioned (or regular) table. If your table is already partitioned and you want to change to a different partitioning method, you must use another method (like DBMS_REDEFINITION).
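For completeness, a minimal sketch of the online conversion itself, assuming a hypothetical sales table with a sale_date column:

SQL> alter table sales 
     modify partition by range (sale_date) (
        partition p2021 values less than (date '2022-01-01'),
        partition pmax values less than (maxvalue))
     online update indexes;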

Online Data File Move

When you move a data file online, the database creates an exact copy of the data file in the new location. It is identical, bit by bit.

This also means that any free space in the data file is not reclaimed, nor is the High Water Mark affected.
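The move itself is a single statement; a sketch with hypothetical paths:

SQL> alter database move datafile '/u01/oradata/users01.dbf' 
     to '/u02/oradata/users01.dbf';

Add the KEEP clause at the end if you want to retain the original file after the copy.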

Standby Database Operations

I demoed how the standby database in 19c can automatically flash back when the primary database flashes back. A viewer asked whether you can flash back to the same restore point multiple times without the standby database complaining.

My immediate answer was: yes, you can flash back to the same restore point multiple times. The documentation doesn’t mention such a restriction, so I was eager to try it out.

I made a Data Guard setup in OCI, and I was able to flash back to the same restore point three times – and the standby database followed each time.
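The test itself boils down to a few commands on the primary. A sketch, assuming a guaranteed restore point named rp1, Flashback Database enabled, and the standby mounted with redo apply running:

SQL> create restore point rp1 guarantee flashback database;

--do some work, then flash back:
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point rp1;
SQL> alter database open resetlogs;

In 19c, the standby detects the flashback of the primary and follows along automatically.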

Gradual Password Rollover

With gradual password rollover, a user can temporarily have two passwords. A viewer asked whether you can identify the sessions that are still connecting with the old password. And you can, using the unified audit trail, provided you are auditing logons. In that case, simply look at the AUTHENTICATION_TYPE column of the audit entries for LOGON actions:

SQL> select   authentication_type, event_timestamp 
     from     unified_audit_trail 
     where    action_name='LOGON' and dbusername='APP_USER' 
     order by event_timestamp;

Your result will be something like:

authentication_type
(TYPE=(DATABASE));(CLIENT ADDRESS=((PROTOCOL=tcp)(HOST=10.0.1.225)(PORT=24974)));(LOGON_INFO=((VERIFIER=12C-NEW) (CLIENT_CAPABILITIES=O5L_NP,O7L_MR,O8L_LI)));
(TYPE=(DATABASE));(CLIENT ADDRESS=((PROTOCOL=tcp)(HOST=10.0.1.225)(PORT=24983)));(LOGON_INFO=((VERIFIER=12C-OLD) (CLIENT_CAPABILITIES=O5L_NP,O7L_MR,O8L_LI)));

Look at the VERIFIER information. If a user is using the old password in a rollover period, the verifier is suffixed -OLD. Similarly, the suffix is -NEW when the new password is used.

This is a very useful addition to gradual password rollover.
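For reference, gradual password rollover is enabled through a profile limit; a sketch with hypothetical names:

SQL> alter profile app_profile limit password_rollover_time 1;

SQL> alter user app_user identified by "MyN3wP!d";

During the rollover period – here one day – APP_USER can log in with both the old and the new password.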

Conclusion

Thanks to everyone who participated today. We had so much fun preparing the webinar for you. And really – thanks for all the cool questions. Doing stuff remotely is really hard, and you feel disconnected from the audience. Good questions help a lot.

See you next time.