You can use NOLOGGING operations to speed up data loads. Imports are all about loading data.
When Data Pump loads the rows into your database and creates indexes, you can avoid generating redo.
You save:
- Time – because a NOLOGGING operation is faster.
- Space – because no redo means no archive logs.
But there are serious implications! If you need to restore the database, the data in those tables and indexes will not be there, because it was never protected by redo.
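To make the implication concrete: if you restore a backup taken before the import and recover past the NOLOGGING load, queries hitting the affected blocks fail with corruption errors. A sketch of what that typically looks like (the table is from this post's lab, and the file and block numbers are placeholders):
select count(*) from constr_validate.t1;
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 7, block # 130)
ORA-26040: Data block was loaded using the NOLOGGING option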
How To
You can toggle the LOGGING clause of tables and indexes using the Data Pump import parameter TRANSFORM. To use NOLOGGING:
impdp ... transform=disable_archive_logging:y
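For context, a complete command could look like the sketch below. It reuses the directory, dump file, and user from the lab in the appendix, so adjust the names to your environment:
impdp dpuser/oracle directory=dpdir dumpfile=faster-import-constraints.dmp \
   logtime=all metrics=yes \
   transform=disable_archive_logging:y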
The Effect
I ran a small test with a simple schema containing four identical tables. Each table is 3 GB.
| Setting | Time to Import | Redo, GB |
|---|---|---|
| LOGGING | 4m 22s | 12.2 |
| NOLOGGING | 1m 45s | 0.047 |
Many factors affect the numbers, so your result might be very different. Check for yourself.
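To check for yourself, you can measure how much redo an import generates. One simple approach is the query below; it is a system-wide statistic, so run the test on an otherwise idle database and note the value before and after the import:
select value / 1024 / 1024 / 1024 as redo_generated_gb
from   v$sysstat
where  name = 'redo size';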
Underneath the Hood
The LOGGING attribute is only changed temporarily during the import. At the end of the import, it is reset to its original state.
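You can see the final state in the data dictionary after the import. For example, using the schema from this post's lab:
select table_name, logging from dba_tables where owner = 'CONSTR_VALIDATE';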
I enabled Data Pump trace:
impdp ... \
transform=disable_archive_logging:y \
trace=1FF0300
And by grepping in the database trace directory, I could find these events:
FTEX_dw00_854409.trc:KUPW:09:38:39.351: 1: ALTER TABLE "CONSTR_VALIDATE"."T1" NOLOGGING
FTEX_dm00_854407.trc:KUPM:09:40:20.527: W-3 . . imported "CONSTR_VALIDATE"."T1" 381.6 MB 23312384 rows in 101 seconds using direct_path
FTEX_dw01_854413.trc:KUPW:09:40:20.619: 2: ALTER TABLE "CONSTR_VALIDATE"."T1" LOGGING
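If you want to find the same events in your own environment, grep for the table in the trace directory. The path below is the one used in the lab in the appendix; adjust it to your diag destination:
cd /u01/app/oracle/diag/rdbms/ftex/FTEX/trace
grep '"CONSTR_VALIDATE"."T1"' *.trc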
Words of Caution
I strongly recommend that you only use this feature when:
- You fully understand the implications of NOLOGGING operations.
- You install the Data Pump Bundle Patch. There are a handful of bugs related to this feature, including one where Data Pump fails to restore the original LOGGING value, leaving the object in NOLOGGING mode (see the check after this list).
- You have a plan for ensuring the recoverability of your database. Either you don’t care about the data at all, or you start a level 0 backup right away.
- You have a plan for your standby databases if you use Automatic Correction of Non-logged Blocks at a Data Guard Standby Database.
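If you can't install the bundle patch right away, a defensive check after the import is a reasonable workaround. This is a minimal sketch, using the lab schema as an example, that generates ALTER statements for anything left in NOLOGGING mode:
select 'alter table "' || owner || '"."' || table_name || '" logging;'
from   dba_tables
where  owner = 'CONSTR_VALIDATE'
and    logging = 'NO';
select 'alter index "' || owner || '"."' || index_name || '" logging;'
from   dba_indexes
where  owner = 'CONSTR_VALIDATE'
and    logging = 'NO';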
Notes
- This feature doesn't work if you have FORCE LOGGING enabled. The database silently ignores the NOLOGGING clause (see the check after this list).
- In a Data Guard configuration, you most likely use FORCE LOGGING, so don't expect it to work there, unless you're using Automatic Correction of Non-logged Blocks at a Data Guard Standby Database.
- This feature doesn't work on Oracle Autonomous Database (ADB) Serverless and Dedicated. ADB protects your data at any cost and, thus, completely ignores the NOLOGGING clause.
- A Data Pump import always generates redo, even when you are using NOLOGGING operations. The database logs all actions on the data dictionary and UNDO. Furthermore, DML operations on the Data Pump control table are also logged.
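A quick check of the database-level setting mentioned in the first note:
select force_logging from v$database;
-- NO  : NOLOGGING operations are honored
-- YES : the database forces redo logging and silently ignores the NOLOGGING clause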
Migrations
You can safely use this feature during a migration if you plan to perform a level 0 backup and build your standby databases within the maintenance window.
However, if you’re tight on time, you often take a level 0 backup before the import and rely on archive logs for recoverability. In such a situation, you must not use this feature. The same applies if you build your standby database before the import.
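If you do have room in the maintenance window, the post-import level 0 backup is a single RMAN command; a minimal sketch:
rman target /<<EOF
backup incremental level 0 database;
EOF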
Just the Indexes
A compromise is to use NOLOGGING on the indexes only:
transform=disable_archive_logging:y:index
transform=disable_archive_logging:n:table
The rationale is that you can always rebuild your indexes. Keep your tables safe; if something happens to the indexes, simply recreate them.
In such a situation, it’s good to have the index definitions ready. You can use the SQLFILE option to extract the index DDLs from the dump file:
impdp ... sqlfile=indexes.sql include=index
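A slightly fuller sketch, reusing the directory and dump file from the lab in the appendix. In SQLFILE mode, Data Pump imports nothing and only writes the DDL to indexes.sql in the directory object, so you can keep the file around in case you ever need to recreate the indexes:
impdp dpuser/oracle directory=dpdir dumpfile=faster-import-constraints.dmp \
   sqlfile=indexes.sql include=index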
Conclusion
You can speed up imports by using NOLOGGING operations. They also reduce the amount of redo generated.
Appendix
Give It a Try
You can use our Data Pump hands-on lab to try it yourself. Provision a lab and use the commands below:
# Create directory and copy dump file
mkdir /home/oracle/dpdir
cp /home/oracle/scripts/faster-import-constraints.dmp /home/oracle/dpdir
# Create parameter file
# Change the transform parameter accordingly
cd
cat > imp.par <<EOF
directory=dpdir
dumpfile=faster-import-constraints.dmp
parallel=4
logtime=all
metrics=yes
transform=constraint_novalidate:y
transform=disable_archive_logging:y
trace=1FF0300
EOF
# Get rid of previous archived logs
. ftex
rman target /<<EOF
delete noprompt archivelog all;
EOF
rm -rf /u02/fast_recovery_area/FTEX/archivelog/*
# Data Pump prereqs and a restart to reset metrics
sqlplus / as sysdba<<EOF
drop user constr_validate cascade;
grant datapump_exp_full_database, datapump_imp_full_database to dpuser identified by oracle;
alter user dpuser default tablespace users;
alter user dpuser quota unlimited on users;
create or replace directory dpdir as '/home/oracle/dpdir';
alter system set streams_pool_size=128m scope=spfile;
shutdown immediate
startup
EOF
# Remove existing trace files
rm -rf /u01/app/oracle/diag/rdbms/ftex/FTEX/trace/*
# Start import
impdp dpuser/oracle parfile=imp.par
# Measure redo via DB and file system
du -h /u02/fast_recovery_area/FTEX/archivelog
sqlplus / as sysdba<<EOF
SELECT VALUE / 1024 / 1024 AS redo_generated_mb FROM v\$sysstat WHERE name = 'redo size';
EOF
