How to Use NOLOGGING to Make Imports Faster

You can use NOLOGGING operations to speed up data loads. Imports are all about loading data.

When Data Pump loads the rows into your database and creates indexes, you can avoid generating redo.

You save:

  • Time – because a NOLOGGING operation is faster.
  • Space – because no redo means no archive logs.

But, there are serious implications! If you need to restore the database, those tables and indexes will not be there.

How To

You can toggle the logging clause of a table or index using the Data Pump import parameter transform. To use NOLOGGING:

impdp ... transform=disable_archive_logging:y
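
A complete command could look something like this (just a sketch – the connect string, directory, and file names are placeholders for your own values):

impdp system@pdb1 \
   directory=dpdir \
   dumpfile=exp%L.dmp \
   logfile=imp.log \
   parallel=4 \
   transform=disable_archive_logging:y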

The Effect

I ran a small test with a simple schema containing four identical tables. Each table is 3 GB.

Setting      Time to Import    Redo, GB
LOGGING      4m 22s            12.2
NOLOGGING    1m 45s            0.047

Many factors affect the numbers, so your result might be very different. Check for yourself.
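
One way to check is to compare the redo size statistic before and after the import (a quick sketch; it reports the redo generated by the instance since startup):

select value / 1024 / 1024 as redo_mb
from   v$sysstat
where  name = 'redo size';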

Underneath the Hood

The LOGGING information is only changed temporarily during the import. At the end of the import, the LOGGING information is reset to its original state.

I enabled Data Pump trace:

impdp ... \
   transform=disable_archive_logging:y \
   trace=1FF0300

And by grepping in the database trace directory, I could find these events:

FTEX_dw00_854409.trc:KUPW:09:38:39.351: 1:       ALTER TABLE "CONSTR_VALIDATE"."T1"  NOLOGGING
FTEX_dm00_854407.trc:KUPM:09:40:20.527: W-3 . . imported "CONSTR_VALIDATE"."T1"                      381.6 MB 23312384 rows in 101 seconds using direct_path
FTEX_dw01_854413.trc:KUPW:09:40:20.619: 2:       ALTER TABLE "CONSTR_VALIDATE"."T1" LOGGING
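
If you want to verify afterwards that the LOGGING attribute really was restored, you can check the data dictionary (a quick sketch; adjust the owner to your schema):

select table_name, logging
from   dba_tables
where  owner = 'CONSTR_VALIDATE';

select index_name, logging
from   dba_indexes
where  owner = 'CONSTR_VALIDATE';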

Words of Caution

I strongly recommend that you only use this feature when:

  1. You fully understand the implications of NOLOGGING operations.
  2. You install the Data Pump Bundle Patch. There are a handful of bugs related to this feature, including one where Data Pump fails to restore the original LOGGING value, leaving the object in NOLOGGING mode.
  3. You have a plan for ensuring the recoverability of your database. Either you don’t care about the data at all, or you start a level 0 backup right away.
  4. You have a plan for your standby databases if you use Automatic Correction of Non-logged Blocks at a Data Guard Standby Database.

Notes

  • This feature doesn’t work if you have FORCE LOGGING enabled. The database silently ignores the NOLOGGING clause.

    select force_logging from v$database;
    
  • In a Data Guard configuration, you most likely use FORCE LOGGING, so don’t expect it to work here. Unless you’re using Automatic Correction of Non-logged Blocks at a Data Guard Standby Database.

  • This feature doesn’t work on Oracle Autonomous Database (ADB) Serverless and Dedicated. ADB protects your data at any cost and, thus, completely ignores the NOLOGGING clause.

  • A Data Pump import always generates redo, even when you are using NOLOGGING operations. The database logs all actions on the data dictionary and UNDO. Furthermore, DML operations on the Data Pump control table are also logged.

Migrations

You can safely use this feature during a migration if you plan to perform a level 0 backup and build your standby databases within the maintenance window.

However, if you’re tight on time, you often take a level 0 backup before the import and rely on archive logs for recoverability. In such a situation, you must not use this feature. The same applies if you build your standby database before the import.

Just the Indexes

A compromise is to use NOLOGGING on the indexes only:

transform=disable_archive_logging:y:index
transform=disable_archive_logging:n:table

The rationale is that you can always rebuild your indexes. Keep your tables safe; if something happens, simply recreate the indexes.

In such a situation, it’s good to have the index definitions ready. You can use the SQLFILE option to extract the index DDLs from the dump file:

impdp ... sqlfile=indexes.sql include=index

Conclusion

You can speed up imports by using NOLOGGING operations. It also reduces the amount of redo generated.

Appendix

Give It a Try

You can use our Data Pump hands-on lab to try it yourself. Provision a lab and use the commands below:

# Create directory and copy dump file
mkdir /home/oracle/dpdir
cp /home/oracle/scripts/faster-import-constraints.dmp /home/oracle/dpdir

# Create parameter file
# Change the transform parameter accordingly
cd
cat > imp.par <<EOF
directory=dpdir
dumpfile=faster-import-constraints.dmp
parallel=4
logtime=all
metrics=yes
transform=constraint_novalidate:y
transform=disable_archive_logging:y
trace=1FF0300
EOF

# Get rid of previous archived logs
. ftex
rman target /<<EOF
   delete noprompt archivelog all;
EOF
rm -rf /u02/fast_recovery_area/FTEX/archivelog/*

# Data Pump prereqs and a restart to reset metrics
sqlplus / as sysdba<<EOF
   drop user constr_validate cascade;
   grant datapump_exp_full_database, datapump_imp_full_database to dpuser identified by oracle;
   alter user dpuser default tablespace users;
   alter user dpuser quota unlimited on users;
   create or replace directory dpdir as '/home/oracle/dpdir';
   alter system set streams_pool_size=128m scope=spfile;
   shutdown immediate
   startup
EOF

# Remove existing trace files
rm -rf /u01/app/oracle/diag/rdbms/ftex/FTEX/trace/*

# Start import
impdp dpuser/oracle parfile=imp.par

# Measure redo via DB and file system
du -h /u02/fast_recovery_area/FTEX/archivelog
sqlplus / as sysdba<<EOF
   SELECT VALUE / 1024 / 1024 AS redo_generated_mb FROM v\$sysstat WHERE name = 'redo size';
EOF

AutoUpgrade New Features: Get Latest JDK Patches

In an Oracle home, you find a Java installation – a full JDK.

cd $ORACLE_HOME
ls -l jdk

Some of the tools related to Oracle AI Database, like AutoUpgrade, are Java-based, so it’s convenient to have a JDK right there.

How do you patch the JDK in your Oracle home?

Updating JDK

When you apply a Release Update to your database, you’re also updating your JDK. Those patches are part of the Release Update.

Here are two Oracle homes with different Release Updates:

cd /u01/app/oracle/product/dbhome_1927
jdk/bin/java -version

java version "1.8.0_441"
Java(TM) SE Runtime Environment (build 1.8.0_441-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.441-b07, mixed mode)

cd /u01/app/oracle/product/dbhome_1929
jdk/bin/java -version

java version "1.8.0_461"
Java(TM) SE Runtime Environment (build 1.8.0_461-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.461-b11, mixed mode)

  • Notice how the JDK is newer in the later Release Update.

So, all’s good, right?

The Latest Ones

The Java team builds its patches around the same time as the database team builds the Release Update.

This means that the JDK patches you find in a Release Update are from the previous quarter. There’s simply not enough time to include the very latest JDK patches in the Release Update. We also need time for regression testing and other tasks.

So, the very latest JDK patches are made available as a one-off patch.

You can find the patch number of the latest JDK patch in JDK and PERL Patches for Oracle Database Home and Grid Home (Doc ID 2584628.1). Apply that and you’re fully up to date.

AutoUpgrade

You can instruct AutoUpgrade to get the latest available JDK patches using the jdk keyword in your patch configuration.

patch1.patch=RU,OPATCH,JDK
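
For context, a patching config file could look something like this. This is only a sketch – apart from the patch keyword shown above, the parameter names, paths, and command line are assumptions you should verify against the AutoUpgrade documentation:

# Hypothetical config file (patch.cfg)
global.global_log_dir=/u01/app/oracle/autoupgrade-patching
patch1.source_home=/u01/app/oracle/product/dbhome_1927
patch1.target_home=/u01/app/oracle/product/dbhome_1929jdk
patch1.sid=FTEX
patch1.folder=/u01/app/oracle/patch-repo
patch1.download=yes
patch1.patch=RU,OPATCH,JDK

# Then run AutoUpgrade in patching mode
java -jar autoupgrade.jar -config patch.cfg -patch -mode deploy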

Here’s an example of two 19.29 Oracle homes – one without and one with the JDK patch:

cd /u01/app/oracle/product/dbhome_1929
jdk/bin/java -version

java version "1.8.0_461"
Java(TM) SE Runtime Environment (build 1.8.0_461-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.461-b11, mixed mode)

cd /u01/app/oracle/product/dbhome_1929jdk
jdk/bin/java -version

java version "1.8.0_471"
Java(TM) SE Runtime Environment (build 1.8.0_471-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.471-b11, mixed mode)

  • Notice how the version changes from 461 to 471.

Daniel’s Recommendation

Now that it has become so easy to update the JDK, I recommend doing it.

Security is paramount!

I see minimal risk in doing that.

  • The Java installation in Oracle Database 19c is Java 8, which is very mature.
  • In Oracle AI Database 26ai, it is Java 11, which is also very mature.
  • Both Java releases are at a point in their lifecycle where only a few things change from patch to patch.
  • I recall no major issues caused by a JDK patch.

But what do you think?

  • Are you already updating the JDK, or do you plan to do so in the future?
  • Have you experienced issues with JDK in your Oracle home? Let me know in the comments below.

Happy patching!

New Webinars

Now that I’ve finished with Oracle AI World and the EMEA Community Tour is coming to a close, I have some free time on my hands. What better way to spend that time than to create webinars?

  • Webinar 26: Best Practices for Upgrading to Oracle AI Database 26ai
  • Webinar 27: Database Patching for DBAs – Patch smarter, not harder
  • Webinar 28: Patch smarter, not harder – MS Windows Special Edition

When and How

We’ll be airing the first webinar in December, and the other two will follow early next year.

Webinar                                                     Date
Best Practices for Upgrading to Oracle AI Database 26ai     December 11, 2025, 14:00 CET
Database Patching for DBAs – Patch smarter, not harder      February 12, 2026, 14:00 CET
Patch smarter, not harder – MS Windows Special Edition      March 12, 2026, 14:00 CET

All our webinars are free. Just sign up and we’ll send you the details.

Why

First of all, these webinars are all tech, no marketing!

  • You can learn how to tackle your next upgrade project to Oracle AI Database 26ai. Although it’s not yet available for all platforms, there are still important preparations that you should start today.
  • Furthermore, we’ll share how other customers have upgraded their mission-critical systems to the latest release and whether there are any secret underscores (spoiler: there are…).
  • Over the last year, we’ve made significant enhancements to AutoUpgrade that make it so much easier to patch your databases. After watching this webinar, you’ll never download patches from My Oracle Support again.
  • Oracle on Windows is slightly different – but don’t worry, AutoUpgrade has you covered. Use the latest enhancements to streamline your operation of Oracle AI Database on Windows.

See You

If you can’t wait, remember that you can watch all our previous webinars on demand.

I hope to see you there!

How To Export To ASM Storage – The Full Picture

At Oracle AI World, I spoke to a customer who used Data Pump as part of their CI/CD pipeline. Exporting and importing a 7 TB database took more than 36 hours.

That’s far too much, I said.

A meme showing a man and a cat shouting 36 hours is too much

A few details:

  • They had the Data Pump bundle patch installed.
  • Unloading and loading rows were generally just slow.
  • They were using NFS storage for the dump files.

I’ve seen far too many cases where misconfigured NFS caused slowness, so I suggested using ASM storage instead. At least, that could rule out NFS as the issue.

Here’s how to use Data Pump and ASM storage.

Export

It turned out that I had already blogged about this topic. You can visit that post for step-by-step instructions.

My feeling when I saw I blogged about this already
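
In short, the moving parts on the source side look roughly like this (a minimal sketch; the disk group, directory objects, and file names are placeholders). Note that the Data Pump log file must go to a regular file system directory, because Data Pump can’t write its text log file into ASM:

ASMCMD> mkdir +DATA/DPDIR

SQL> create directory myasmdir as '+DATA/DPDIR';
SQL> create directory mylogdir as '/home/oracle/logs';

expdp ... directory=myasmdir dumpfile=exp01.dmp logfile=mylogdir:exp01.log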

Move File

We now have a dump file on our source database. But we need to move it to the target host.

DBMS_FILE_TRANSFER

On the remote database:

  1. I create a directory in ASM where I can store the dump file:
    ASMCMD> mkdir +DATA/DMPDIR
    
  2. I create a user and grant privileges to connect. I’ll use this user to connect via a database link:
    SQL> create user transportuser identified by ... ;
    SQL> grant connect to transportuser;
    
  3. I create a directory and allow my user to write to it:
    SQL> create directory dmpdir as '+DATA/DMPDIR';
    SQL> grant write on directory dmpdir to transportuser;
    

On the source database:

  1. I create a user with the right privileges:
    SQL> create user transportuser identified by ... ;
    SQL> grant connect to transportuser;
    SQL> grant create database link to transportuser;
    SQL> grant read on directory myasmdir to transportuser;
    SQL> grant execute on dbms_file_transfer to transportuser;
    
  2. I connect as transportuser and create a database link to the remote database/PDB:
    SQL> create database link transportlink
         connect to transportuser identified by ...
         using '<connect-string-to-remote-pdb>';
    
  3. I copy the file:
    begin
    dbms_file_transfer.put_file(
       source_directory_object      => 'MYASMDIR',
       source_file_name             => 'exp01.dmp',
       destination_directory_object => 'DMPDIR',
       destination_file_name        => 'exp01.dmp',
       destination_database         => 'TRANSPORTLINK');
    end;
    /
    

A closing remark: DBMS_FILE_TRANSFER is an effective means of copying the file. My wristwatch measurements show it’s slightly faster than using scp in the operating system.

ASMCMD

You could also transfer the file directly from one ASM instance to a remote one. Check out the cp command.

But I don’t recommend this approach, because you need the SYSDBA or SYSASM privilege to connect to asmcmd. Since the issue was related to a CI/CD pipeline, it’s a fully automated flow. If at all possible, I’d avoid using such a powerful privilege in my automation.

Import

Now that the dump file is on the target system, we can perform a Data Pump import. Use the same approach as described above.
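
A minimal sketch, assuming the DMPDIR directory object created earlier and a hypothetical file system directory object (mylogdir) for the log file:

impdp ... directory=dmpdir dumpfile=exp01.dmp logfile=mylogdir:imp01.log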

Outcome

If the NFS storage indeed caused the slowness, you should see much faster times.

If that’s not the case, here are some other ideas to explore.

Happy Data Pumping!

How to Fix Errors in AutoUpgrade Download Mode

If you’re a frequent visitor to my blog, you’ll know I’m a big AutoUpgrade fanboy – especially the download mode, which can save you hours of downloading patches on My Oracle Support.

Recently, I was preaching to another customer, and when they finally gave in, here’s what happened:

We gave it a shot, but we keep running into errors when we try the download mode.

So, let’s see how we can solve the most common problems.

My Oracle Support

  1. You must have an Oracle SSO account.

  2. That account must be linked to a Customer Support Identifier (CSI) that has an active support agreement.

  3. In that CSI, you must have the privilege to download patches. The MOS administrator in your organization can help you with that.

    • In User Details, select the appropriate Support Identifier and set the Patches drop-down to Download.
  4. If AutoUpgrade tells you that your username or password is incorrect:

    *Connection Failed - You entered an incorrect user name or password.* 
    
    • You should ensure that you’re using the latest version of AutoUpgrade. We recently fixed a bug for passwords with certain special characters.

Network Connectivity

  1. AutoUpgrade must be able to connect to a few Oracle servers on the internet.
  2. Those servers are part of a Content Delivery Network (CDN), so expect changing IP addresses. Base your firewall rules on DNS names rather than IP addresses.
  3. You can connect via a proxy:
    export https_proxy=https://proxy.weyland-yutani.net:8080
    java -jar autoupgrade.jar ... -mode download
    

Common Errors

*Request to https://login-ext.identity.oraclecloud.com failed with response code [403]*
  • Open your firewall. Allow traffic based on DNS names, not IP addresses.
  • Optionally, connect via a proxy server.

*unable to find valid certification path to requested target*

*SSL peer shut down incorrectly*
  • Check firewall settings and security groups to ensure they permit SSL traffic on the required ports.
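
To quickly check whether the identity endpoint is reachable from your server, you can run something like this (just a sketch; curl and the proxy settings are assumptions you should adapt to your environment):

# Optionally route the check through your proxy
export https_proxy=https://proxy.weyland-yutani.net:8080
# Any HTTP response code means the server is reachable and TLS works;
# a connection error points to firewall, proxy, or certificate issues
curl -sS -o /dev/null -w "%{http_code}\n" https://login-ext.identity.oraclecloud.com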

Windows

If AutoUpgrade requires that you use an elevated command prompt (the run as administrator thing) when you use download mode, you should update to the latest version of AutoUpgrade. It no longer requires this.

Happy downloading!

Using Data Pump Exports As Long-Term Storage

I recently helped a customer get the most out of Data Pump. Here’s how it all started.

For legal reasons, we must keep a backup of our data for five years. We want to use Data Pump exports. Do you have any recommendations?

Before someone flames me, I know that an export is not a backup, but we’re not talking about disaster recovery here.

Meme depicting Captain Picard saying an export is not a backup

Data Pump Export

Here are some of the Data Pump export parameters I would use.

Log File

logtime=all
metrics=yes
logfile=dataexport.log

Whenever you store data for the long term, I think it’s a good idea to store relevant metadata too. So, be sure to save the log file together with the dump files, and include diagnostic information using logtime and metrics.

Dump Files

dumpfile=dataexport_%T_%L.dmp
filesize=10000M
reuse_dumpfiles=yes

The dumpfile specification contains the %T wildcard, which translates into YYYYMMDD at runtime. It’s handy to have the date in the file names. Using %L allows the creation of multiple dump files.

I recommend breaking the dump files into smaller pieces using filesize. Smaller files are easier to handle and transport, and avoid some issues I’ve seen with object storage and huge files.

Content

schemas=myapp
tables=mytab
exclude=statistics

I would advise against doing a full export. It contains far too much information that you won’t need, like tablespace definitions, SQL management objects, and the audit trail. A schema or table export is more suitable.

Be sure to exclude statistics. They take time to export and add little value in a long-term archive – you can gather fresh statistics if you ever import the data. To narrow down the objects you export, you can also use the include parameter.

The parameters include and exclude are mutually exclusive in Oracle Database 19c. That’s no longer the case in later releases.

Compression

compression=all
compression_algorithm=high

Use compression to make the dump files smaller. I think it’s worth spending a few more CPU cycles on export and compressing as much as possible to reduce the dump file size, so use the high algorithm.

Creating a compressed export requires the Advanced Compression Option. However, importing a compressed dump file does not require that option. Always check the license guide for up-to-date information.

If you don’t have a license for the Advanced Compression Option, you can compress the dump file using OS utilities – unless you also encrypt the dump file; compressing an encrypted dump file gives no benefit.

Integrity

data_options=verify_stream_format
checksum=yes

Since you are storing the data for the long term, it’s good to double-check that it is free of corruption. Data Pump writes to the dump file in a streaming manner, and you can verify the stream format along the way using data_options.

I’d also recommend generating a checksum and storing that in the dump file using the checksum parameter.

You can periodically check your dump for corruption by using impdp ... verify_only=yes. It will re-calculate the dump file checksum and match that against what is stored in the dump file.
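
For example (a sketch – the date in the file name is whatever %T expanded to at export time):

impdp dpuser directory=dpdir \
   dumpfile=dataexport_20250101_%L.dmp \
   verify_only=yes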

Note that the checksum parameter was introduced in Oracle Database 21c.

Usually, using the default checksum_algorithm is fine for checking data integrity. If you want to guard against someone tampering with the dump file, it’s better to use checksum_algorithm=sha512. But for proper protection of your data, use encryption.

Encryption

encryption=all
encryption_mode=password

You can encrypt the data in the dump using the encryption parameter.

I prefer using a separate passphrase to protect the dump file, instead of using the database master encryption key. If you use the latter, you also need to have a backup of the database keystore.

Using Data Pump encryption requires a license for Advanced Security Option. Always check the license guide for up-to-date information.

Miscellaneous

flashback_time=systimestamp
parallel=<n>

Remember to make a consistent export using flashback_time. Alternatively, export from a quiesced database or a snapshot standby.

Use whatever parallel degree you can afford. Normally, on x86-64 architectures, the optimal setting is twice the number of physical cores.
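
Putting it all together, a parameter file could look something like this (a sketch based on the settings above; the directory object, schema name, file size, and parallel degree are placeholders you should adjust):

# expdp dpuser parfile=longterm.par   (hypothetical user and file name)
schemas=myapp
exclude=statistics
directory=dpdir
dumpfile=dataexport_%T_%L.dmp
filesize=10000M
reuse_dumpfiles=yes
logfile=dataexport.log
logtime=all
metrics=yes
compression=all
compression_algorithm=high
data_options=verify_stream_format
checksum=yes
encryption=all
encryption_mode=password
flashback_time=systimestamp
parallel=8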

Using OCI Object Storage

The customer also stated:

We plan on storing the dumps in OCI object storage.

To reduce the storage costs, use either:

  • the Infrequent Access storage tier, or
  • the Archive storage tier.

Your Opinion

How would you store data for the long term? Do you have any experiences with the above Data Pump parameters?

Drop a comment and let me know.

Happy exporting!

How I Think You Should Create Your Container Databases

In the multitenant architecture in Oracle AI Database, the container database is essential. If you create it the right way, you can avoid some difficulties later on.

Here’s my recommendation for creating container databases.

3xC

The three most important things all start with “C”. Makes it easier to remember.

Character Set

The first one is straightforward: stick with the default, AL32UTF8.

If the root container is AL32UTF8, you can plug in PDBs with any character set. You can have a mix of the classic US7ASCII, old-school Western European without the Euro sign (WE8ISO8859P1), Japanese JA16SJIS, the original Unicode character set UTF8, and so on.

But as soon as you set the root character set to anything but AL32UTF8, all PDBs are bound to the same character set.

Remain flexible and use AL32UTF8.
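
To check what an existing CDB uses, run this in the root container (a quick sketch):

select value
from   nls_database_parameters
where  parameter = 'NLS_CHARACTERSET';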

Components

The number of components has a direct influence on the time it takes to upgrade and patch a database.

Any component used by a PDB must also be installed in the root container. So, install as many components as you need in root – and no more than that (selectively tick the checkboxes in the blue box).

Also, from a least-privilege perspective, don’t install more than you need.

When you decide on a set of components to install, do that in root only. Ensure that new PDBs have none of the components installed (untick all checkboxes in the red box). This gives you the leanest PDBs. If a PDB requires a component, it’s easy to install afterward.
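
You can check which components a container already has with a quick dictionary query (a sketch; the corresponding CDB_REGISTRY view shows all containers at once):

select comp_id, comp_name, status
from   dba_registry
order  by comp_id;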

Installing components in DBCA

Compatible

The most important thing is that you have a uniform compatible setting in all your CDBs. This gives you the most flexibility.

When you plug in a PDB, it must have the same compatible setting as the root container.

  • If the PDB has a lower compatible setting, it will automatically and silently raise the setting to that of the root container.
  • If the PDB has a higher compatible setting, you won’t be able to plug it in.

Maintain a uniform compatible setting in your PDBs
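
To verify the setting in a CDB or PDB (a quick sketch):

select name, value
from   v$parameter
where  name = 'compatible';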

Important Parameters

Once you have created a CDB, I recommend setting these parameters.

max_pdbs

You can restrict the number of user-created PDBs in a CDB. This is useful if you don’t have a license for the Multitenant option.

alter system set max_pdbs=3 scope=both;

The database now throws an error if you try to create a fourth PDB.

_exclude_seed_cdb_view

By default, the CDB views do not show information about PDB$SEED. I’m not too fond of this behavior. I want to know if there is a problem with it.

alter system set "_exclude_seed_cdb_view"=false scope=both;

Records from PDB$SEED now show up in the CDB views with CON_ID=2.
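
For example, a simple per-container count now includes the seed (a sketch):

select con_id, count(*) as datafiles
from   cdb_data_files
group  by con_id
order  by con_id;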

Create The CDB Using DBCA

You need to use the advanced configuration in Database Configuration Assistant (DBCA) to create a CDB that matches the above recommendations.

  • Select Create a database.
  • Select Advanced configuration.
  • Create a Custom Database.
  • On the following few pages, you can make the changes you like until you reach the Database Options page.
  • When you reach the Database Options page, select the components you need in the root container (blue box). In the Include in PDBs list, select only the components you need in all your PDBs. Otherwise, leave them out so you get a lean PDB. You can easily add components later on.
  • On the Configuration Options page, on the Character sets tab, ensure that you select Use Unicode (AL32UTF8).
  • On the Creation Option page, click on All Initialization Parameters, and select the compatible setting you use. I recommend using the default.
  • On the Summary page, you should Save Response File so you can create additional CDBs in silent mode.

Create Manually

After creating the first database using DBCA, you can create another using the response file and DBCA in silent mode.

  1. Make the required changes to the response file.
    • Update the name of the database.
    • Verify the list of parameters.
    • Other settings.
    • For reference, in the appendix, you can find a sample response file that we use in our hands-on labs.
  2. If you need to create the database with a non-default timezone file version, set the following parameter:
    export ORA_TZFILE=$ORACLE_HOME/oracore/zoneinfo/timezone_42.dat
    
    • Check the directory for a complete list of available timezone files.
  3. Start DBCA in silent mode.
    $ORACLE_HOME/bin/dbca \
       -silent \
       -createDatabase \
       -responseFile /home/oracle/my_response_file.rsp
    

Check the other blog posts related to upgrading to Oracle AI Database 26ai.

Happy upgrading!

Appendix

Sample DBCA Response File

This is the DBCA response file that we use when we create our hands-on labs. It creates the CDB23 database.

responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v23.0.0
gdbName=cdb23
sid=CDB23
databaseConfigType=SI
policyManaged=false
managementPolicy=AUTOMATIC
createServerPool=false
force=false
createAsContainerDatabase=true
numberOfPDBs=0
pdbName=
useLocalUndoForPDBs=true
pdbAdminPassword=
templateName=/u01/app/oracle/product/23/assistants/dbca/templates/New_Database.dbt
sysPassword=oracle
systemPassword=oracle
emConfiguration=NONE
runCVUChecks=FALSE
dvConfiguration=false
olsConfiguration=false
datafileDestination=/u02/oradata/{DB_UNIQUE_NAME}
recoveryAreaDestination=
recoveryAreaSize=
configureWithOID=
pdbOptions=JSERVER:false,ORACLE_TEXT:false,IMEDIA:false,CWMLITE:false,SPATIAL:false,OMS:false,SAMPLE_SCHEMA:false,DV:false
dbOptions=JSERVER:false,ORACLE_TEXT:false,IMEDIA:false,CWMLITE:false,SPATIAL:false,OMS:false,SAMPLE_SCHEMA:false,DV:false
storageType=FS
characterSet=AL32UTF8
nationalCharacterSet=AL16UTF16
registerWithDirService=false
skipListenerRegistration=true
variables=ORACLE_BASE_HOME=/u01/app/oracle/product/23,DB_UNIQUE_NAME=cdb23,ORACLE_BASE=/u01/app/oracle,PDB_NAME=,DB_NAME=cdb23,ORACLE_HOME=/u01/app/oracle/product/23,SID=CDB23
initParams=undo_tablespace=UNDOTBS1,enable_pluggable_database=true,sga_target=4GB,db_block_size=8192BYTES,nls_language=AMERICAN,dispatchers=(PROTOCOL=TCP) (SERVICE=CDB23XDB),diagnostic_dest={ORACLE_BASE},remote_login_passwordfile=EXCLUSIVE,db_create_file_dest=/u02/oradata/{DB_UNIQUE_NAME},processes=300,pga_aggregate_target=1GB,nls_territory=AMERICA,open_cursors=300,log_archive_format=%t_%s_%r.arc,compatible=23.0.0,db_name=cdb23
enableArchive=false
useOMF=true
memoryPercentage=40
databaseType=MULTIPURPOSE
automaticMemoryManagement=false
totalMemory=0

That’s a Wrap! Oracle AI World 2025

Back from Oracle AI World in Las Vegas – the conference formerly known as CloudWorld, formerly known as OpenWorld. My badge now joins the others on my wall. Putting my lanyard on the wall of fame

Slides / Labs

Here are the slides from our presentations:

You can try our hands-on labs for free in Oracle LiveLabs:

Impressions

  • The conference included many visionary keynotes, all available on YouTube for further viewing.

  • Release of Oracle AI Database 26ai. Check Mike’s blog post for details.

  • Oracle AI World is special, especially for someone like me who usually works from home. In a few days, I reunited with colleagues, customers, and friends—a very intense social event for someone used to meeting people on Zoom. It’s also the time of the year when I get to spend time with my team. My team at Oracle AI World 25

  • Long days left me with sore feet from extended standing and walking. I took many steps during Oracle AI World 25

  • The motto of the conference: AI is changing everything! I only had time to play with AI generated pictures. AI generated portraits

  • Nine-hour time difference usually causes me jetlag, but not this time. I used the Timeshifter app, drank lots of water with electrolytes (no caffeine), got sunshine and darkness at the right times, and took Melatonin (this is not a medical endorsement, always check with your doctor). I recovered quickly. If you suffer from jetlag too, give it a shot.

  • Fruit and a protein bar in my bag served as emergency fuel when meals were delayed—a tip for future conferences.

  • First Air Canada flight—a pleasure. Flew a Boeing Dreamliner, my second favorite after the Airbus A350. As an aviation geek, I love watching planes. What’s your favorite aircraft? Boeing Dreamliner 787

  • While making a layover in Toronto, I saw Canada’s vast autumn forests from above on approach—a magnificent sight. I’d love to visit Canada in autumn. Any recommendations for must-see sights?

  • Saw a remake of The Wizard of Oz at The Sphere. What a wild ride—unbelievable. If you visit Las Vegas, catch a show there. That’s a true big screen. The opening screen of The Wizard of Oz So many people were doing selfies, that we had to make a selfie-with-a-selfie. Selfie with a selfie

Job Done

  • That’s it! Celebrated an amazing conference with the team and said our goodbyes. Celebratory beers at the end of the conference

  • Moving forward, stay tuned for news on the upcoming Oracle AI World Tour 2026—coming soon to a city near you. Oracle AI World Tour 2026 banner

  • Next year’s conference returns to The Venetian in late October. Hope to see you there! If you have a story to share, reach out—perhaps you can present it with us.

  • THANKS to everyone who took part in making Oracle AI World a big success!

Oracle Influencers

At the recent Oracle AI World 2025, I spoke to a gentleman about Oracle influencers.

He wanted to expand his knowledge on Oracle database tech and wanted to follow relevant people from Oracle.

I wanted to share the list with you in case you’d like to follow some of them as well.

Blogs

Here’s a list of personal blogs from people at Oracle from my own blog feed:

Am I missing someone? Please drop a comment, so that I can update the list (and my own blog roll 😎).

You can also find a long list of official blogs at Oracle.

YouTube

Here are the channels that I subscribe to:

What other channels am I missing?

Sharing

There are many influencers who are not affiliated with Oracle. They are certainly worth following as well. However, the assignment was to identify Oracle employees only.

Thanks to all of you out there who share knowledge with the Oracle community.

#SharingIsCaring

Copy Data Pump Files Before the End of the Export

In Oracle Database 23ai, you can copy the dump files even before the export completes.

This saves time during your migration because you can start moving files to the target host while the export is in progress. Additionally, it potentially saves disk space because you can move the files away from the source host.

Which Files Can You Move

  1. After starting the Data Pump export, you must connect to the database as either:

    • The same user who started the export, or
    • A user with the DATAPUMP_EXP_FULL_DATABASE role
  2. Replace the job owner and name, and execute:

    set serverout on
    declare
       l_job_owner    varchar2(30) := 'DPUSER';
       l_job_name     varchar2(30) := 'SYS_EXPORT_FULL_01';
       l_handle       number;
       l_stsmask      integer := dbms_datapump.ku$_status_job_status;
       l_job_state    varchar2(30);
       l_status       ku$_status;
       l_dump_file    ku$_dumpfile;
    begin
       l_handle := dbms_datapump.attach(l_job_name, l_job_owner);
       dbms_datapump.get_status(l_handle, l_stsmask, NULL, l_job_state, l_status);
       dbms_datapump.detach(l_handle);
       
       for i in l_status.job_status.files.first..l_status.job_status.files.last() loop
          
          l_dump_file := l_status.job_status.files(i);
    
          if l_dump_file.file_type = dbms_datapump.ku$_dumpfile_type_template then
            continue;
          end if;
    
          if (l_dump_file.file_bytes_written = l_dump_file.file_size) then
             dbms_output.put_line('DONE: ' || l_dump_file.file_name);
          end if;
    
        end loop; 
    end;
    /
    
    
  3. The code lists all the files that Data Pump has marked as completed. Data Pump no longer writes to those files, and you can safely start moving them to the target system.
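
For example, you could wrap the PL/SQL in a small shell loop and copy each completed file as soon as it’s reported (a rough sketch – the script name, host, paths, and credentials are hypothetical, and it assumes passwordless SSH to the target host):

#!/bin/bash
# list_done.sql holds the PL/SQL block above plus an "exit" at the end (hypothetical file name)
for f in $(sqlplus -s dpuser/oracle @list_done.sql | awk '/^DONE:/ {print $2}'); do
   marker=/tmp/sent_$(basename $f)
   # Copy each completed file once, then remember it
   if [ ! -f $marker ]; then
      scp $f oracle@targethost:/u01/dpdir/ && touch $marker
   fi
done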

Prerequisites

  • You must use multiple dump files. You do that by including %L in your DUMPFILE specification.
    DUMPFILE=exp%L.dmp
    
  • You must specify a file size that allows Data Pump to rotate into multiple files when they are full. I suggest using 5G, which is also a good setting for most cloud migrations.
    FILESIZE=5G
    

What About

  • The code examines the max file size and bytes written to determine if the file is full. This is not the same as the physical size of the file in the operating system. You can’t use the file size information from the file system.

  • What about using rsync or similar tools? I guess that could work; however, we didn’t test that.

  • You could query the Data Pump control table for the information:

    SELECT file_name
    FROM <control_table>
    WHERE process_order = -21
    AND file_max_size = completed_bytes
    AND file_max_size <> 0
    ORDER BY file_name;
    
    • However, querying the control table is not a supported approach. Use the above PL/SQL.

Happy exporting!