How to Apply Compression During Data Pump Import

When you migrate data using Oracle Data Pump, you can transform it as part of the import. One popular transformation is applying compression.

Check the License Guide and ensure your licenses are in place. Most compression options require a separate license.

Table Compression

Data Pump has a dedicated TRANSFORM clause that allows you to apply table compression on import:

$ impdp ... transform=table_compression_clause:\"compress for oltp\"

It works for table partitions and subpartitions as well.
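
For example, here is a sketch of a full command line; the schema and directory names (appuser, dpdir) are taken from the hands-on lab in the appendix:

$ impdp system/oracle schemas=appuser directory=dpdir transform=table_compression_clause:\"compress for oltp\"

The transform accepts any valid table compression clause, for example compress basic.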

What About the Tablespace Default Compression Clause?

You can create a tablespace with a default compression clause. Any new tables, table partitions or table subpartitions will inherit this compression clause:

SQL> create tablespace demo1 ... default table compress for oltp;

If you import your tables into such a tablespace, it does not change the compression type of the tables. The tables preserve the compression type from the source database.

That changes if you instruct Data Pump to remove any compression clause from the DDL. The tables then inherit the tablespace compression clause:

$ impdp ... transform=table_compression_clause:\"none\"

Using the SQLFILE command-line parameter, you can extract the DDL from a dump file. If you have uncompressed tables, you can see that the CREATE TABLE statements contain a dedicated NOCOMPRESS clause. This clause overrides the tablespace setting unless you instruct Data Pump to remove it.
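
A minimal sketch of such a SQLFILE run, using the dump file and directory from the appendix lab; it writes the DDL into ddl.sql in the dump directory without importing anything:

$ impdp system/oracle directory=dpdir sqlfile=ddl.sql include=table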

Index Compression

In Oracle Database 21c, you can use the TRANSFORM parameter to apply compression to indexes during import:

$ impdp ... transform=index_compression_clause:\"...\"
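
For example, a sketch that applies advanced low compression to all imported indexes; the schema and directory names are taken from the appendix lab:

$ impdp system/oracle schemas=appuser directory=dpdir transform=index_compression_clause:\"compress advanced low\"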

Oracle Database 19c

No transformation clause allows you to change the compression type of your indexes.

In contrast to default table compression, the default index compression type for a tablespace is honored during import.

If you want to compress your indexes:

  1. Create a tablespace with a default index compression clause:
    SQL> create tablespace tbs_index_1 ... default index compress advanced high;
    
  2. Instruct the database to inherit the tablespace compression clause when you create new indexes:
    SQL> alter system set db_index_compression_inheritance=tablespace;
    
  3. Import the data and remap to the new tablespace:
    $ impdp ... remap_tablespace=users:tbs_index_1
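
After the import, a quick dictionary check verifies the result; the schema name is taken from the appendix lab:

SQL> select index_name, compression from dba_indexes where owner = 'APPUSER';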
    

The db_index_compression_inheritance parameter works for global indexes on partitioned tables as well.

Partitioned Indexes

However, the db_index_compression_inheritance parameter does not work for

  • Global partitioned indexes
  • Local indexes

There is an enhancement request to get that fixed, but it is yet to be implemented.

You can’t apply compression to such objects during a Data Pump import. You have two options:

  1. Compress the index after the import using the ALTER INDEX command. You can rebuild the index online with no interruption; see the sketch after this list.
  2. Create the index manually.
    • Exclude the index from the Data Pump import using the EXCLUDE parameter
    • Extract the DDL for the index from a Data Pump SQL file using the SQLFILE parameter
    • Change the DDL to apply the proper compression and execute it after the Data Pump import
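
Here is a sketch of option 1 for the local index i1local from the appendix lab. The partition names are system-generated, so look them up in dba_ind_partitions first; <partition_name> is a placeholder:

SQL> alter index appuser.i1local rebuild partition <partition_name> compress advanced low online;

Each partition of a partitioned index must be rebuilt individually, so repeat the command for every partition.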

LOB Compression

No transformation clause allows you to change the compression type of your LOBs, but it is on our to-do list.

If you want to apply Advanced LOB Compression:

  1. First, import the metadata only:

    $ impdp system/oracle ... content=metadata_only
    
  2. Then change the LOB storage to enable compression:

    SQL> alter table ... modify lob (<column_name>) (compress high);
    
  3. Finally, import the data only:

    $ impdp system/oracle ... content=data_only
    
  4. Be aware that importing with content=metadata_only will lock statistics. Be sure to unlock them after the import:

    SQL> exec dbms_stats.unlock_schema_stats(...)
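
Putting the steps together, here is a sketch; the table t2 and CLOB column c1 are hypothetical names, and compress high assumes the LOB is a SecureFile:

$ impdp system/oracle schemas=appuser directory=dpdir content=metadata_only

SQL> -- t2 and c1 are hypothetical; requires a SecureFile LOB
SQL> alter table appuser.t2 modify lob (c1) (compress high);

$ impdp system/oracle schemas=appuser directory=dpdir content=data_only

SQL> exec dbms_stats.unlock_schema_stats('APPUSER')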
    

Another option is to create the LOB tables in advance with the desired LOB compression type. Then instruct Data Pump to accept existing tables and load data anyway:

$ impdp ... table_exists_action=append
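
A sketch of that approach, again with the hypothetical table t2 and CLOB column c1; Advanced LOB Compression requires a SecureFile LOB:

SQL> create table appuser.t2 (id number, c1 clob) lob (c1) store as securefile (compress high);

$ impdp system/oracle schemas=appuser directory=dpdir table_exists_action=append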

Appendix

Test Case

You can use the test case in our hands-on lab:

# drop and recreate the lab user and tablespace, then build partitioned test tables with indexes
sqlplus / as sysdba<<EOF
create or replace directory dpdir as '/u01/app/oracle/admin/UP19/dpdump';
drop tablespace apps including contents and datafiles;
drop user appuser cascade;

create tablespace apps;
create user appuser identified by APPUSER;
grant dba to APPUSER;
alter user APPUSER default tablespace APPS;
connect appuser/APPUSER
create table t1 as select * from dba_objects;

ALTER TABLE t1 MODIFY
PARTITION BY HASH (object_id)
PARTITIONS 2
STORE IN (apps, apps, apps, apps);  

CREATE INDEX i1local ON t1 (last_ddl_time) LOCAL;
CREATE INDEX i1global ON t1 (owner) global;

create table t3 as select * from dba_objects;
ALTER TABLE t3 MODIFY
PARTITION BY HASH (object_id)
SUBPARTITION BY HASH (object_name)
SUBPARTITIONS 2 PARTITIONS 2;  
CREATE INDEX i3local ON t3 (last_ddl_time) LOCAL;
CREATE INDEX i3global ON t3 (owner) global;
CREATE INDEX i3globalpart ON t3 (object_id) global partition by hash (object_id) partitions 2;

EOF

# clean the dump directory and export the source schema
rm /u01/app/oracle/admin/UP19/dpdump/*
expdp system/oracle schemas=appuser directory=dpdir

# recreate the target: compressing tablespace defaults plus index compression inheritance
sqlplus / as sysdba<<EOF
drop user appuser cascade;
drop tablespace apps including contents and datafiles;
create tablespace apps default table compress for oltp index compress advanced low;
alter system set db_index_compression_inheritance=tablespace scope=both;
EOF

# import; removing the table compression clauses lets tables inherit the tablespace default
impdp system/oracle directory=dpdir transform=table_compression_clause:\"none\"

# check the compression of tables, partitions, subpartitions, and indexes
sqlplus / as sysdba<<EOF
prompt "TABLES"
select table_name, compression, compress_for from all_tables where owner='APPUSER';
prompt "TAB PARTITIONS"
select table_name,partition_name, compression, compress_for from all_tab_partitions where table_owner='APPUSER';
prompt "TAB SUBPARTITIONS"
select table_name,subpartition_name, compression, compress_for from all_tab_subpartitions where table_owner='APPUSER';

prompt "INDEXES"
select index_name, compression from all_indexes where table_owner='APPUSER';
prompt "IND PARTITIONS"
select index_name, compression from all_ind_partitions where index_owner='APPUSER';
prompt "IND SUBPARTITIONS"
select index_name, compression from all_ind_subpartitions where index_owner='APPUSER';
EOF

6 thoughts on “How to Apply Compression During Data Pump Import”

  1. Thanks for the article.
    According to the Oracle documentation:
    “If NONE is specified, then the table compression clause is omitted (and the table is given the default compression for the tablespace).”
    If the clause is omitted, the table compression from the source database should be honored, but the doc says “the table is given the default compression for the tablespace”.

    Did you test this?
    Is the NONE value a way to override the default behaviour?

    Regards


    1. Hi,

      You’re absolutely right. You can use transform=table_compression_clause:none to allow tablespace inheritance. I have updated the blog post to make it clear.

      Good catch!

      Thanks for the feedback. Much appreciated.

      Regards,
      Daniel


  2. I had to learn to be very careful with forced “compress for oltp”.
    It worked for many years with Oracle 12.1.0.2 and 19c until (I guess) RU14/15 without any problems.
    Some time ago, I started getting strange ORA-600 errors on export after I had imported a database from 12.1.0.2 to 19.17.
    This system copy (Data Pump, forced compression) had worked for several years without any issues.
    Research showed a LONG column in the problematic table.
    (I know, I know, I know! Never use LONG, it has been deprecated for 20 years! But some applications coming from the past were never changed.)
    I tracked it down to (at least) one row in the table that raised the error.
    Support could/would not help, as the combination of a LONG column and advanced compression is not supported.
    So I went back to an uncompressed import.
    Another database got upgraded from 12c to 19c with Data Pump and forced compression, and again there are ORA-600s (after several weeks of working fine!) with a table.
    Again, a LONG column is showing up.
    There is no way to change the table definition to uncompressed, as the LONG column kills all modern types of handling.
    The only way to get back to uncompressed is with Data Pump and NOCOMPRESS.
    I learned that there is no check within Data Pump to prevent the unsupported compressed import of tables with LONG columns.
    I learned that compressed LONG columns worked fine for years, but nowadays there will be errors, sometimes after weeks of working fine.
    I learned that LONG columns hurt much more than I remembered, as most of your tools will not work on tables containing some of them.
    Finally, I learned that I have to check every time I do a Data Pump export/import that there is no LONG column hidden in the application schema, no matter how often export/import worked fine in the past.


  3. Hi Daniel!

    Maybe you could give a hint to the Data Pump team that it would be great if they implemented a check to not import tables with LONG columns with compression, as this is not supported.
    I have no problem with unsupported combinations being rejected, but an import that succeeds and then throws errors much later is the worst case.
    Thank you very much!
    I will try to talk to Mike at the DOAG Conference in Nuremberg next month.
    Don’t bother to warn him, we had some mail traffic regarding this issue.

    Regards
    Andrew

