When you use Oracle Data Pump, you usually want it to finish fast, and applying parallelism is the obvious choice. Here are a few things you might not be aware of.
Parallel and Export
During export, a worker must hold an exclusive lock on a dump file before it can write to it. If your dump file specification does not allow the creation of multiple files, the workers will queue up waiting for the lock on that single dump file.
You should always allow Data Pump to create as many dump files as it wants:
expdp ... dumpfile=my_exp_%L.dmp
The substitution variable %L allows the creation of many files by appending a number to the file name. Previously, you would use %U, which is limited to 99 files; %L, introduced in Oracle Database 12.2, expands the number as needed and allows far more files. Check the documentation for other substitution variables.
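For example, here is a minimal sketch of a parallel schema export using %L; the connect string, schema, and directory object are assumptions:

expdp system@orclpdb1 schemas=hr directory=DATA_PUMP_DIR dumpfile=my_exp_%L.dmp logfile=my_exp.log parallel=8

With parallel=8, up to eight workers can write at the same time, and each gets a dump file of its own.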
Import From a Single Dump File
Even if you made the export without parallelism, or you just have one dump file, you can still import in parallel.
During an import, Data Pump only needs to read from the dump file, and multiple workers can read from the same file without problems. The only concern is how much I/O a single file can sustain when you import a large database.
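A minimal sketch, assuming a single dump file called full_exp.dmp in a directory object named DATA_PUMP_DIR (both names are assumptions):

impdp system@orclpdb1 directory=DATA_PUMP_DIR dumpfile=full_exp.dmp logfile=full_imp.log parallel=8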
Does import parallelism have to match export parallelism? No, the two are independent. This would work fine:
expdp ... parallel=4
impdp ... parallel=64
Import From a Previous Release
Oracle Database 12.2 introduced support for parallel metadata import during a regular import. Even if the dump file was made by a previous version (one that does not support it), you can still import the metadata in parallel on Oracle Database 12.2 and newer. The dump file has no specific requirements for parallel operations.
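As a sketch, assume legacy_exp.dmp was created by expdp on Oracle Database 11.2 (file and directory names are assumptions). Run on Oracle Database 12.2 or newer, the metadata is imported in parallel even though the exporting release could not do so:

impdp system@orclpdb1 directory=DATA_PUMP_DIR dumpfile=legacy_exp.dmp logfile=legacy_imp.log parallel=16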
The same applies to transportable tablespace jobs, which you can run in parallel starting with Oracle Database 21c.
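A minimal sketch of a parallel transportable tablespace import on Oracle Database 21c; the dump file name and data file path are assumptions:

impdp system@orclpdb1 directory=DATA_PUMP_DIR dumpfile=tts_exp_%L.dmp transport_datafiles='/u02/oradata/CDB1/pdb1/users01.dbf' parallel=8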

