We have all been using Data Pump, but below are a few less-known features that can make our lives easier.
#######################
METRICS=YES LOGTIME=ALL
#######################
During a migration it is critical to know how long each step takes so that you can estimate the total migration time.
The LOGTIME parameter adds a timestamp, down to the millisecond, in front of every line in the logfile, which can be helpful when diagnosing I/O issues. Combined with METRICS=YES, each worker also logs a message about every task it completes:
expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr METRICS=YES LOGTIME=ALL
15-NOV-17 11:48:32.305: W-5 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
15-NOV-17 11:48:34.439: W-13 Completed 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.439: W-13 Completed by worker 1 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.440: W-13 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
15-NOV-17 11:48:35.472: W-17 Startup took 70 seconds
15-NOV-17 11:48:35.596: W-18 Startup took 70 seconds
15-NOV-17 11:48:35.719: W-20 Startup took 70 seconds
15-NOV-17 11:48:35.841: W-19 Startup took 70 seconds
15-NOV-17 11:48:43.520: W-5 Completed 1714 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 1 39 TABLE objects in 2 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 2 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 3 85 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 4 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 5 25 TABLE objects in 1 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 6 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 7 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 8 111 TABLE objects in 6 seconds
15-NOV-17 11:48:44.266: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
#######################
KEEP_MASTER
#######################
KEEP_MASTER prevents the Data Pump master table from being deleted after an export/import job completes.
There are cases where we want to keep the master table so it can be analyzed later.
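For example (the directory and dump file names are just placeholders):

expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr KEEP_MASTER=YES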
SQL> select owner,segment_name,segment_type,tablespace_name,(bytes/1024/1024)MB from dba_segments where segment_name='SYS_EXPORT_TABLE_01';
SQL> select OBJECT_TYPE,OBJECT_NAME,OBJECT_SCHEMA,ORIGINAL_OBJECT_SCHEMA,ORIGINAL_OBJECT_NAME,OBJECT_TABLESPACE,SIZE_ESTIMATE,OBJECT_ROW from SYS_SQL_FILE_FULL_01 where ORIGINAL_OBJECT_SCHEMA is not null;
OBJECT_TYPE –> Shows the object type.
OBJECT_SCHEMA –> Contains the schema name into which the object will be imported.
ORIGINAL_OBJECT_SCHEMA –> Contains the original object's schema name.
OBJECT_TABLESPACE –> Shows the tablespace into which the object will be imported.
SIZE_ESTIMATE –> Estimated size of the table in bytes.
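Note that when KEEP_MASTER=YES is used, the master table is not cleaned up automatically; drop it manually once the analysis is done. A minimal sketch (the owning schema is an assumption, the table belongs to the user who ran the job):

SQL> drop table hr.SYS_EXPORT_TABLE_01 purge;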
#######################
Increasing parallelism for an ongoing job
#######################
We have all been in situations where we want to increase or decrease the parallelism of a running Data Pump job. To change the level of parallelism of a running job, do the following (a full session is sketched after these steps):
- identify the job name: select owner_name, job_name from dba_datapump_jobs;
- attach to it: impdp attach=schema.job_name
- change the parallelism: parallel=n (n = number of parallel workers)
- wait a bit for the change to take effect
- confirm the change: status
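A minimal interactive session illustrating these steps (the user, job name and degree of parallelism are assumptions):

SQL> select owner_name, job_name, state from dba_datapump_jobs;

$ impdp system ATTACH=SYSTEM.SYS_IMPORT_SCHEMA_01

Import> PARALLEL=8
Import> STATUS
Import> CONTINUE_CLIENT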
#######################
Patch 21539301 : Index creation serial in impdp
#######################
Previously, Data Pump created indexes one at a time, specifying the PARALLEL keyword on each CREATE INDEX statement so that parallel query was used for index creation. The recommended workaround to build indexes in parallel was a three-step process: import without indexes, generate a SQLFILE containing the CREATE INDEX statements, and split that file into multiple scripts run in separate sessions.
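A rough sketch of that old workaround (the directory, dump file and script names are assumptions):

# step 1: import everything except the indexes
impdp hr DIRECTORY=dp_dir DUMPFILE=expdat.dmp EXCLUDE=INDEX
# step 2: write the CREATE INDEX statements to a SQL file instead of executing them
impdp hr DIRECTORY=dp_dir DUMPFILE=expdat.dmp INCLUDE=INDEX SQLFILE=create_indexes.sql
# step 3: split create_indexes.sql into several scripts and run them concurrently in separate sessions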
Through extensive performance testing it was found that it is faster to create multiple indexes in parallel, each with a parallel degree of 1, than to create a single index using parallel query processes. This behaviour is enabled by the patch for bug 18793090, superseded by patch 21539301.
Need to check if a similar patch is available for 19c.
#######################
Other Useful Options :
#######################
Export Filtering Parameters
/* EXCLUDE & INCLUDE */
EXCLUDE=INDEX
EXCLUDE=PROCEDURE
EXCLUDE=TABLE:"LIKE 'EMP%'"
EXCLUDE=SCHEMA:"='HR'"
INCLUDE=TABLE:"IN ('EMP', 'DEPT')"
# When used on the command line, use backslashes to escape single and double quotation marks:
expdp .. schemas=SCOTT EXCLUDE=TABLE:\"=\'EMP\'\"
/* QUERY */
QUERY=OE.ORDERS: "WHERE order_id > 100000 ORDER BY order_date desc"
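Because of shell quoting, QUERY is often easiest to pass through a parameter file, where the quotation marks need no escaping. A minimal sketch (the parfile and directory names are assumptions):

expdp oe PARFILE=exp_orders.par

# exp_orders.par
DIRECTORY=dp_dir
DUMPFILE=orders.dmp
TABLES=oe.orders
QUERY=oe.orders:"WHERE order_id > 100000 ORDER BY order_date desc"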
Export Remapping Parameters
/* REMAP_DATA (11g) */
-- the remapping function must not issue COMMIT or ROLLBACK
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
expdp hr/passwd DIRECTORY=dp_dir DUMPFILE=remap.dmp
TABLES=hr.employees REMAP_DATA=hr.employees.last_name:hr.remap_pckg.modifychar
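A minimal sketch of what such a remapping package could look like (the package body below is an assumption; Data Pump only requires a function that accepts and returns the column's datatype):

CREATE OR REPLACE PACKAGE hr.remap_pckg AS
  FUNCTION modifychar (p_in VARCHAR2) RETURN VARCHAR2;
END remap_pckg;
/
CREATE OR REPLACE PACKAGE BODY hr.remap_pckg AS
  FUNCTION modifychar (p_in VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    -- mask the value; remember the function must not issue COMMIT or ROLLBACK
    RETURN 'XXXXXX';
  END modifychar;
END remap_pckg;
/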
Export Encryption Parameters
(11g): To secure the exported dump file, Oracle 11g Data Pump introduced the following new parameters: ENCRYPTION, ENCRYPTION_PASSWORD and ENCRYPTION_ALGORITHM. To enable encryption, you must specify either the ENCRYPTION or the ENCRYPTION_PASSWORD parameter, or both.
ENCRYPTION = {all | data_only | encrypted_columns_only | metadata_only | none}
ENCRYPTION_ALGORITHM = { AES128 | AES192 | AES256 }
ENCRYPTION_MODE = { DUAL | PASSWORD | TRANSPARENT }
expdp hr DIRECTORY=dp_dir DUMPFILE=hr_enc.dmp JOB_NAME=enc ENCRYPTION=data_only
ENCRYPTION_PASSWORD=mypassword
expdp hr DIRECTORY=dp_dir DUMPFILE=hr_enc.dmp
ENCRYPTION=all ENCRYPTION_PASSWORD=mypassword
ENCRYPTION_ALGORITHM=AES256 ENCRYPTION_MODE=dual
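When the dump file is encrypted with a password, the same password must be supplied on import, for example:

impdp hr DIRECTORY=dp_dir DUMPFILE=hr_enc.dmp ENCRYPTION_PASSWORD=mypassword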
Export Estimating Parameters
ESTIMATE={BLOCKS | STATISTICS}
ESTIMATE_ONLY=y
expdp system/pswd estimate_only=y
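For example, to estimate the size of the HR schema from optimizer statistics without writing a dump file (the directory name is an assumption):

expdp system DIRECTORY=dp_dir SCHEMAS=hr ESTIMATE=STATISTICS ESTIMATE_ONLY=YES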
Export Network Link Parameter
You can initiate an export job from your server and have Data Pump export data from a remote
database to dump files located on the instance from which you initiate the Data Pump export job.
A read-only database can still be used as the source over the network link.
expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link DUMPFILE=network_export.dmp
-- more detailed steps:
-- scenario: take a Data Pump export of database ORCL,
-- with the dump file written on the TIGER database server
sqlplus sa/a@tiger
create database link orcl.net using 'ORCL';
OR
Create database link orcl.net connect to sa identified by a
using '(DESCRIPTION=(ADDRESS = (PROTOCOL = TCP)(HOST =10.4.x.x) (PORT=1521))
(connect_data=(service_name=orcl)))';
select * from dual@orcl.net;
$expdp arju/a@tiger directory=d schemas=arju dumpfile=arju_dump_from_orcl.dmp network_link=orcl.net
Expdp:
FLASHBACK_TIME=SYSTIMESTAMP, PARALLEL=4, FILESIZE=5G, EXCLUDE=STATISTICS, DUMPFILE=TEST_%U.dmp, COMPRESSION=ALL, COMPRESSION_ALGORITHM=MEDIUM, EXCLUDE=CLUSTER,DB_LINK
Impdp:
TRANSFORM=LOB_STORAGE:SECUREFILE, REMAP_TABLESPACE=%:DATA
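Put together, a typical pair of commands using the options above might look like this (the directory, schema and file names are assumptions, mirroring the parameters listed):

expdp system DIRECTORY=dp_dir SCHEMAS=test DUMPFILE=TEST_%U.dmp FILESIZE=5G PARALLEL=4 FLASHBACK_TIME=SYSTIMESTAMP EXCLUDE=STATISTICS EXCLUDE=CLUSTER,DB_LINK COMPRESSION=ALL COMPRESSION_ALGORITHM=MEDIUM

impdp system DIRECTORY=dp_dir DUMPFILE=TEST_%U.dmp PARALLEL=4 TRANSFORM=LOB_STORAGE:SECUREFILE REMAP_TABLESPACE=%:DATA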
#######################
References :
#######################
https://docs.oracle.com/database/121/SUTIL/GUID-56B6B4EA-5A2B-45B8-9B41-8966378C2C3D.htm#SUTIL4281