
Applying Parallelism to Data Pump Exports and Imports

When you use Oracle Data Pump, you often want it to run as fast as possible. Applying parallelism is the obvious choice. Here are a few things you might not be aware of.

Parallel and Export

During export, a worker must hold an exclusive lock on a dump file before writing to it. If your dump file specification does not allow Data Pump to create multiple files, the workers queue up waiting for the lock on that single dump file.

You should always allow Data Pump to create as many dump files as it wants:

expdp ... dumpfile=my_exp_%L.dmp

The substitution variable %L allows Data Pump to create many files by appending a number to each file name. Previously, you would use %U, which is limited to 99 files; %L allows far more. Check the documentation for other substitution variables.
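
For illustration, a complete export command might look like the sketch below; the connect string, schema, and directory are placeholders, not from the original post:

expdp system@mypdb \
   schemas=appuser \
   directory=DATA_PUMP_DIR \
   dumpfile=my_exp_%L.dmp \
   logfile=my_exp.log \
   parallel=8

With parallel=8 and a %L template, Data Pump can spread the writes across many files, so no worker has to wait for a dump file lock.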

Import From a Single Dump File

Even if you made the export without parallelism, or you only have one dump file, you can still import in parallel.

During an import, Data Pump only needs to read from the dump file, and multiple workers can read from the same file without problems. The only concern is how much I/O you can drive against a single file when importing a large database.
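
As a minimal sketch, assuming a single dump file called my_exp_01.dmp (a placeholder name), all workers simply share the one file:

impdp system@mypdb \
   directory=DATA_PUMP_DIR \
   dumpfile=my_exp_01.dmp \
   logfile=my_imp.log \
   parallel=16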

Does import parallelism have to match export parallelism? No, the two are independent. This works fine:

expdp ... parallel=4
impdp ... parallel=64

Import From a Previous Release

Starting with Oracle Database 12.2, Data Pump supports parallel metadata import during a regular import. Even if the dump file comes from a previous version that did not support it, you can still import the metadata in parallel in Oracle Database 12.2 and newer. There are no specific requirements on the dump file for parallel operations.

The same applies to transportable tablespace jobs, which you can run in parallel starting with Oracle Database 21c.


Daniel Overby Hansen | Data Pump | October 19, 2023

How to Export Database Scheduler Jobs Using Data Pump

Which options do you have to move scheduler jobs (DBMS_SCHEDULER) to a different Oracle Database?

Data Pump

In Data Pump, scheduler jobs are referred to as PROCOBJ (procedural objects). You can export and import just the scheduler jobs using:

expdp ... include=procobj
impdp ... include=procobj

If you need just a subset of the scheduler jobs, add a name filter (this works for impdp as well):

expdp ... include=procobj:" IN ('JOB1', 'JOB2')"
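
On most shells, the quotes in that filter need escaping, so it is often easier to put the clause in a parameter file instead; a small sketch with placeholder job names:

# my_exp.par
include=procobj:"IN ('JOB1', 'JOB2')"

expdp ... parfile=my_exp.par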

Data Pump and SQLFILE

If you have a Data Pump dump file, you can extract the DDL of the scheduler jobs using the sqlfile option:

impdp ... include=procobj \
   sqlfile=my_scheduler_jobs.sql

The DDL is written to the text file specified by sqlfile; Data Pump does not execute it. This allows you to review the job definitions and, optionally, change them. Then you execute the statements yourself.

DBMS_METADATA

If you need to export just a few jobs, it might be easier to simply get the definitions using the metadata API:

select
   dbms_metadata.get_ddl('PROCOBJ','<name>','<owner>')
from 
   dual;
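
If you plan to run the generated DDL as a script, you can ask DBMS_METADATA to append statement terminators first; a brief sketch using SET_TRANSFORM_PARAM:

-- append a SQL terminator to each generated statement
exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);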

SYS Jobs

In older releases of Oracle Database, you could not get the definition of a job owned by SYS. A simple workaround is to copy the job into another schema and then generate the DDL:

exec dbms_scheduler.copy_job('SYS.MY_JOB','APPUSER.MY_JOB');
select
   dbms_metadata.get_ddl('PROCOBJ','MY_JOB','APPUSER')
from 
   dual;
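
Once you have generated the DDL, you can remove the copy again; a hedged cleanup step, assuming the copied job is no longer needed in APPUSER:

exec dbms_scheduler.drop_job('APPUSER.MY_JOB');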

Appendix

Credits

Thanks to my colleague, Klaus Gronau. He has done more migrations than I’ll ever do, and he maintains a remarkable set of notes, from which he dug out many good pointers for this post.

Daniel Overby Hansen | Data Pump | October 17, 2023
