This article walks through "Analysis of a Hadoop Example Run". The explanation is simple and clear and easy to follow, so let's work through running and analyzing a Hadoop example job step by step.
1. Locate the examples jar that ships with the Hadoop distribution
2. Create the input and output directories
3. Upload the files to be counted to the wc_input directory
4. Check the uploaded files
5. Run the wordcount example: hadoop jar /hadoop_soft/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /wc_input/* /wc_output/
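The preparatory steps above might look like the following shell session. The HDFS paths are taken from the job invocation; the local file names are hypothetical stand-ins, and note that (unlike step 2 as written) /wc_output should not be created in advance, since the job creates it itself and fails if it already exists:

```shell
# 1. Locate the examples jar shipped with the Hadoop distribution
ls /hadoop_soft/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar

# 2. Create the HDFS input directory (the job creates /wc_output itself,
#    and will fail with FileAlreadyExistsException if it already exists)
hadoop fs -mkdir /wc_input

# 3. Upload the files to be counted (file1.txt / file2.txt are placeholder names)
hadoop fs -put file1.txt file2.txt /wc_input

# 4. Verify the upload
hadoop fs -ls /wc_input
```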
[root@hadoop input]# hadoop jar /hadoop_soft/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /wc_input/* /wc_output/
17/08/15 10:25:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/08/15 10:25:25 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.120:18040
17/08/15 10:25:27 INFO input.FileInputFormat: Total input paths to process : 2
17/08/15 10:25:27 INFO mapreduce.JobSubmitter: number of splits:2
17/08/15 10:25:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1502762082449_0001
17/08/15 10:25:28 INFO impl.YarnClientImpl: Submitted application application_1502762082449_0001
17/08/15 10:25:29 INFO mapreduce.Job: The url to track the job: http://hadoop:18088/proxy/application_1502762082449_0001/
17/08/15 10:25:29 INFO mapreduce.Job: Running job: job_1502762082449_0001
17/08/15 10:25:48 INFO mapreduce.Job: Job job_1502762082449_0001 running in uber mode : true
17/08/15 10:25:48 INFO mapreduce.Job: map 0% reduce 0%
17/08/15 10:25:50 INFO mapreduce.Job: map 100% reduce 0%
17/08/15 10:25:51 INFO mapreduce.Job: map 100% reduce 100%
17/08/15 10:25:51 INFO mapreduce.Job: Job job_1502762082449_0001 completed successfully
17/08/15 10:25:52 INFO mapreduce.Job: Counters: 52
File System Counters
    FILE: Number of bytes read=276
    FILE: Number of bytes written=545
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=798
    HDFS: Number of bytes written=398613
    HDFS: Number of read operations=66
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=23
Job Counters
    Launched map tasks=2
    Launched reduce tasks=1
    Other local map tasks=2
    Total time spent by all maps in occupied slots (ms)=1972
    Total time spent by all reduces in occupied slots (ms)=803
    TOTAL_LAUNCHED_UBERTASKS=3
    NUM_UBER_SUBMAPS=2
    NUM_UBER_SUBREDUCES=1
    Total time spent by all map tasks (ms)=1972
    Total time spent by all reduce tasks (ms)=803
    Total vcore-milliseconds taken by all map tasks=1972
    Total vcore-milliseconds taken by all reduce tasks=803
    Total megabyte-milliseconds taken by all map tasks=2019328
    Total megabyte-milliseconds taken by all reduce tasks=822272
Map-Reduce Framework
    Map input records=5
    Map output records=11
    Map output bytes=111
    Map output materialized bytes=109
    Input split bytes=210
    Combine input records=11
    Combine output records=8
    Reduce input groups=7
    Reduce shuffle bytes=109
    Reduce input records=8
    Reduce output records=7
    Spilled Records=16
    Shuffled Maps =2
    Failed Shuffles=0
    Merged Map outputs=2
    GC time elapsed (ms)=637
    CPU time spent (ms)=1820
    Physical memory (bytes) snapshot=830070784
    Virtual memory (bytes) snapshot=8998096896
    Total committed heap usage (bytes)=500510720
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters
    Bytes Read=70
File Output Format Counters
    Bytes Written=57
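The Map-Reduce Framework counters trace the WordCount data flow: 5 input lines produce 11 map output records (one per word), the per-map combiner collapses those to 8 partial counts, and the reducer merges them into 7 distinct words. A minimal local sketch of that map → combine → reduce pipeline, using hypothetical input splits (the actual uploaded file contents are not shown in the article):

```python
from collections import Counter

# Two hypothetical input splits, standing in for the two files in /wc_input:
# 5 lines in total, matching "Map input records=5".
splits = [
    ["hello hadoop hello", "hadoop world"],
    ["map reduce shuffle", "sort hadoop hadoop"],
]

# Map phase: emit one (word, 1) pair per word -> 11 records in total,
# matching "Map output records=11".
map_output = [[(w, 1) for line in split for w in line.split()]
              for split in splits]

# Combine phase: merge duplicate words within each map's output
# -> 3 + 5 = 8 partial counts, matching "Combine output records=8".
combined = [Counter(w for w, _ in recs) for recs in map_output]

# Reduce phase: merge the partial counts from all maps
# -> 7 distinct words, matching "Reduce output records=7".
totals = sum(combined, Counter())

print(dict(totals))
```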
6. View the job output
7. Check the result data
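Steps 6 and 7 can be done from the command line. The part-r-00000 file name is the standard output of a single-reducer job; an empty _SUCCESS marker file indicates the job completed:

```shell
# 6. List the job output directory
hadoop fs -ls /wc_output

# 7. Print the word counts produced by the single reducer
hadoop fs -cat /wc_output/part-r-00000
```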
Thanks for reading. That concludes "Analysis of a Hadoop Example Run". After working through this article you should have a clearer picture of how a Hadoop example job runs; the specifics are best confirmed by trying it in practice yourself.
Article title: Analysis of a Hadoop Example Run
Article location: http://vcdvsql.cn/article22/pdiccc.html