At Oracle AI World, I spoke to a customer who used Data Pump as part of their CI/CD pipeline. Exporting and importing a 7 TB database took more than 36 hours.
That’s far too long, I said.
A few details:
- They had the Data Pump bundle patch installed.
- Unloading and loading rows were generally just slow.
- They were using NFS storage for the dump files.
I’ve seen far too many cases where misconfigured NFS caused slowness, so I suggested using ASM storage instead. At least, that could rule out NFS as the issue.
Here’s how to use Data Pump and ASM storage.
Export
It turned out that I had already blogged about this topic. You can visit that post for step-by-step instructions.
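In short, and only as a minimal sketch (the directory names and paths below are examples; the post has the full details): the dump files can be written to an ASM directory object, but the log file cannot live in ASM, so it needs a second directory object on a regular file system.

  SQL> create directory myasmdir as '+DATA/DMPDIR';
  SQL> create directory logdir as '/u01/app/oracle/dpdump';

  $ expdp ... directory=myasmdir dumpfile=exp01.dmp logfile=logdir:exp01.log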

Move File
We now have a dump file on our source database. But we need to move it to the target host.
DBMS_FILE_TRANSFER
On the remote database:
- I create a directory in ASM where I can store the dump file:
  ASMCMD> mkdir +DATA/DMPDIR

- I create a user and grant privileges to connect. I’ll use this user to connect via a database link:

  SQL> create user transportuser identified by ... ;
  SQL> grant connect to transportuser;

- I create a directory object and allow my user to write to it:

  SQL> create directory dmpdir as '+DATA/DMPDIR';
  SQL> grant write on directory dmpdir to transportuser;
On the source database:
- I create a user with the right privileges:
  SQL> create user transportuser identified by ... ;
  SQL> grant connect to transportuser;
  SQL> grant create database link to transportuser;
  SQL> grant read on directory myasmdir to transportuser;
  SQL> grant execute on dbms_file_transfer to transportuser;

- I connect as transportuser and create a database link to the remote database/PDB:

  SQL> create database link transportlink
          connect to transportuser identified by ...
          using '<connect-string-to-remote-pdb>';

- I copy the file:

  begin
     dbms_file_transfer.put_file(
        source_directory_object      => 'MYASMDIR',
        source_file_name             => 'exp01.dmp',
        destination_directory_object => 'DMPDIR',
        destination_file_name        => 'exp01.dmp',
        destination_database         => 'TRANSPORTLINK');
  end;
  /
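If you want to confirm the file arrived on the remote side, a quick check with asmcmd on the remote host works (assuming the same +DATA/DMPDIR path as above):

  ASMCMD> ls -l +DATA/DMPDIR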
A closing remark: DBMS_FILE_TRANSFER is an effective means of copying the file. My wristwatch measurements show it’s slightly faster than using scp in the operating system.
ASMCMD
You could also transfer the file directly from one ASM instance to a remote one. Check out the cp command.
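For reference, a remote copy looks roughly like this; the host name, ASM SID, and paths are examples, so check the cp documentation for your release:

  ASMCMD> cp +DATA/DMPDIR/exp01.dmp sys@targethost.+ASM:+DATA/DMPDIR/exp01.dmp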
But I don’t recommend this approach, because you need the SYSDBA or SYSASM privilege to connect to asmcmd. Since the issue was related to a CI/CD pipeline, it’s a fully automated flow. If at all possible, I’d avoid using such a powerful privilege in my automation.
Import
Now that the dump file is on the target system, we can perform a Data Pump import. Use the same approach as described above: create a directory object that points to the ASM location and point Data Pump at it.
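A minimal sketch of the import call, reusing the DMPDIR directory object created on the target earlier; the log directory is an example and must point to a regular file system:

  $ impdp ... directory=dmpdir dumpfile=exp01.dmp logfile=logdir:imp01.log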
Outcome
If the NFS storage indeed caused the slowness, you should see much faster times.
If that’s not the case, here are some other ideas to explore.
- Use the Data Pump bundle patch. The customer told me they had already installed it, but I’ve seen far too many cases where that turned out not to be true. Trust, but verify.
- Use my troubleshooting recipe.
- Use compression. This requires a license for the Advanced Compression option on the exporting system (see the sketch after this list).
- Split the dump files into pieces for easier handling.
- Transfer each dump file as soon as it is full, instead of waiting for the entire export to finish.
- Detect corruptions before you import.
- Gather dictionary statistics before you start the export, and again before and after the import.
- Allocate memory to the streams pool upfront.
- Pre-create the data files at their full size to avoid the time spent on auto-extension during the import.
- Skip export of statistics. Gather new statistics or transfer statistics instead.
- Skip validation of check constraints during import.
- Use the new way to create indexes available in Release Update 19.27.
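To make a few of the parameter-based ideas above more concrete (compression, splitting the dump files, and skipping statistics), here is a hedged sketch of an export parameter file; all names and sizes are examples only:

  # expdp parameter file - values are examples only
  directory=myasmdir
  # %L lets Data Pump create as many dump files as needed
  dumpfile=exp%L.dmp
  # split the dump into 32 GB pieces for easier handling
  filesize=32g
  parallel=8
  # requires the Advanced Compression option
  compression=all
  # skip statistics; gather or transfer them separately
  exclude=statistics
  # the log file must go to a regular file system directory
  logfile=logdir:exp.log

And to give the streams pool its memory up front rather than letting it grow during the import (2 GB is just an example; size it for your system):

  SQL> alter system set streams_pool_size=2g scope=both;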
Happy Data Pumping!

Cool, how long was the runtime after the changes?
Sorry, I didn’t get the numbers. Let me see if I can get the customer to share it.
Regards,
Daniel