This article explains the PARALLEL (degree of parallelism) parameter in EXPDP/IMPDP. The content is fairly detailed, and readers who are interested can use it as a reference; hopefully it will be helpful.
If you set EXPDP parallel=4, you must also provide 4 dump files; otherwise PARALLEL will not work as expected. EXPDP uses one worker process to export the metadata while the other worker processes export data concurrently. The number of workers also depends on the estimated size of the job: below about 250 MB only one worker process is started, around 500 MB two are started, and at 1000 MB four worker processes are started. In general, use the %U substitution variable in the dump file name to generate multiple files.
IMPDP behaves differently: it first starts a single worker process to import the metadata, and only then starts multiple worker processes to import the data. So early in the job you will only see one worker importing metadata. Likewise, if IMPDP runs with PARALLEL=4 it needs >= 4 dump files, and %U can be used for the import as well.
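The size thresholds above follow the documented rule quoted later in this article: Data Pump divides the estimated size of a table data object by 250 MB and rounds down. A quick illustration of that arithmetic (sizes are hypothetical; this sketches the rule, not Oracle's actual code):

```shell
# Illustration only: Data Pump uses floor(estimated_size / 250MB)
# parallel execution (PX) processes per table data object.
# A result of 0 or 1 means PX processes are not used for that table.
for size_mb in 200 500 1000; do
  px=$((size_mb / 250))   # shell integer division rounds down
  echo "estimated ${size_mb}MB -> ${px} PX process(es)"
done
```

This matches the behaviour described in the text: roughly 500 MB yields 2 processes and 1000 MB yields 4.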
nohup expdp system/**** PARALLEL=2 JOB_NAME=full_bak_job full=y dumpfile=exptest:back_%U.dmp logfile=exptest:back.log &
impdp system/*** PARALLEL=2 EXCLUDE=STATISTICS JOB_NAME=full_imp cluster=no full=y dumpfile=test:back_%U.dmp logfile=test:back_imp.log;
而在11GR2后EXPDP 和 IMDP的WORKER進程會在多個INSTANCE啟動,所以DIRECTORY必須在共享磁盤上,如果沒有設置共享磁盤還是指定cluster=no 來防止報錯。
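To observe the workers, you can attach to a running Data Pump job and query its status interactively. A sketch, assuming the job name FULL_IMP from the earlier impdp command (credentials are placeholders):

```shell
# Attach to the running Data Pump import job by its JOB_NAME
impdp system/*** attach=FULL_IMP
# Then, at the interactive Import> prompt, enter:
#   Import> status
```

This produces output like the listing below.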
When you check the status of the EXPDP/IMPDP workers, the output looks like this:
Import> status
Job: FULL_IMP
Operation: IMPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 150,300,713,536
Percent Done: 80
Current Parallelism: 6
Job Error Count: 0
Dump File: /expdp/back_%u.dmp
Dump File: /expdp/back_01.dmp
Dump File: /expdp/back_02.dmp
Dump File: /expdp/back_03.dmp
Dump File: /expdp/back_04.dmp
Dump File: /expdp/back_05.dmp
Dump File: /expdp/back_06.dmp
Dump File: /expdp/back_07.dmp
Dump File: /expdp/back_08.dmp
Worker 1 Status:
Process Name: DW00
State: EXECUTING
Object Schema: ACRUN
Object Name: T_PLY_UNDRMSG
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Completed Objects: 3
Completed Rows: 3,856,891
Completed Bytes: 1,134,168,200
Percent Done: 83
Worker Parallelism: 1
Worker 2 Status:
Process Name: DW01
State: EXECUTING
Object Schema: ACRUN
Object Name: T_FIN_PAYDUE
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Completed Objects: 5
Completed Rows: 2,646,941
Completed Bytes: 1,012,233,224
Percent Done: 93
Worker Parallelism: 1
Worker 3 Status:
Process Name: DW02
State: EXECUTING
Object Schema: ACRUN
Object Name: MLOG$_T_FIN_CLMDUE
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Completed Objects: 6
Completed Bytes: 382,792,584
Worker Parallelism: 1
Worker 4 Status:
Process Name: DW03
State: EXECUTING
Object Schema: ACRUN
Object Name: T_PAY_CONFIRM_INFO
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Completed Objects: 5
Completed Rows: 2,443,790
Completed Bytes: 943,310,104
Percent Done: 83
Worker Parallelism: 1
Worker 5 Status:
Process Name: DW04
State: EXECUTING
Object Schema: ACRUN
Object Name: T_PLY_TGT
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Completed Objects: 6
Completed Rows: 2,285,353
Completed Bytes: 822,501,496
Percent Done: 64
Worker Parallelism: 1
Worker 6 Status:
Process Name: DW05
State: EXECUTING
Object Schema: ACRUN
Object Name: T_FIN_PREINDRCT_CLMFEE
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Completed Objects: 5
Completed Rows: 6,042,384
Completed Bytes: 989,435,088
Percent Done: 79
Worker Parallelism: 1
The original English documentation reads:
For Data Pump Export, the value that is specified for the parallel parameter should be less than or equal to the number of files in the dump file set. Each worker or Parallel Execution Process requires exclusive access to the dump file, so having fewer dump files than the degree of parallelism will mean that some workers or PX processes will be unable to write the information they are exporting. If this occurs, the worker processes go into an idle state and will not be doing any work until more files are added to the job. See the explanation of the DUMPFILE parameter in the Database Utilities guide for details on how to specify multiple dump files for a Data Pump export job.
For Data Pump Import, the workers and PX processes can all read from the same files. However, if there are not enough dump files, the performance may not be optimal because multiple threads of execution will be trying to access the same dump file. The performance impact of multiple processes sharing the dump files depends on the I/O subsystem containing the dump files. For this reason, Data Pump Import should not have a value for the PARALLEL parameter that is significantly larger than the number of files in the dump file set.
In a typical export that includes both data and metadata, the first worker process will unload the metadata: tablespaces, schemas, grants, roles, tables, indexes, and so on. This single worker unloads the metadata, and all the rest unload the data, all at the same time. If the metadata worker finishes and there are still data objects to unload, it will start unloading the data too. The examples in this document assume that there is always one worker busy unloading metadata while the rest of the workers are busy unloading table data objects.
If the external tables method is chosen, Data Pump will determine the maximum number of PX processes that can work on a table data object. It does this by dividing the estimated size of the table data object by 250 MB and rounding the result down. If the result is zero or one, then PX processes are not used to unload the table.
The PARALLEL parameter works a bit differently in Import than Export. Because there are various dependencies that exist when creating objects during import, everything must be done in order. For Import, no data loading can occur until the tables are created, because data cannot be loaded into tables that do not yet exist.
That concludes this introduction to the PARALLEL (degree of parallelism) parameter in EXPDP/IMPDP. Hopefully the content above is of some help; if you found the article useful, feel free to share it.