hadoop: how can I display the execution time of the put command? Or how can I show how long it takes to load a file into HDFS?

r7s23pms · published 2021-06-02 · in Hadoop
Follow (0) | Answers (1) | Views (441)

How can I configure the put command to display its execution time?
This command:

hadoop fs -put table.txt /tables/table

only returns this:

16/04/04 01:44:47 WARN util.NativeCodeLoader: 
Unable to load native-hadoop library for your platform... using    
builtin-java classes where applicable

The command works, but it does not display any execution time. Is there a way to make the command show its execution time, or is there another way to get this information?


snvhrwxg1#

As far as I know, the hadoop fs commands do not print any debug information such as execution time, but you can get the execution time in two ways:

1. The Bash way: wrap the command with date, e.g.

   start=$(date +'%s') && hadoop fs -put visit-sequences.csv /user/hadoop/temp && echo "It took $(($(date +'%s') - $start)) seconds"

2. From the log files: check the namenode log file, which records all the details related to the executed command, such as time taken, file size, replication, etc.
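The epoch-seconds trick above can be sketched as a small runnable script; here sleep 1 stands in for the actual hadoop fs -put command, which requires a running cluster:

```shell
# Minimal sketch of timing a command with epoch seconds.
# `sleep 1` is a stand-in for, e.g.:
#   hadoop fs -put visit-sequences.csv /user/hadoop/temp
start=$(date +'%s')
sleep 1
end=$(date +'%s')
elapsed=$((end - start))
echo "It took $elapsed seconds"
```

Note that date +'%s' only gives whole-second resolution; for short transfers the shell's built-in time in front of the hadoop command gives finer-grained real/user/sys figures.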
For example, I ran the command hadoop fs -put visit-sequences.csv /user/hadoop/temp and found the following entries, specific to the put operation, in the log file:

2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 38
2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 75 
2016-04-04 20:30:00,118 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 95 
2016-04-04 20:30:00,120 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /data/misc/hadoop/store/hdfs/namenode/current/edits_inprogress_0000000000000000038 -> /data/misc/hadoop/store/hdfs/namenode/current/edits_0000000000000000038-0000000000000000039
2016-04-04 20:30:00,120 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 40
2016-04-04 20:30:01,781 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.06s at 15.63 KB/s
2016-04-04 20:30:01,781 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000039 size 1177 bytes.
2016-04-04 20:30:01,830 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 0
2016-04-04 20:30:56,252 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1b928386-65b9-4438-a781-b154cdb9a579:NORMAL:127.0.0.1:50010|RBW]]} for /user/hadoop/temp/visit-sequences.csv._COPYING_
2016-04-04 20:30:56,532 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1b928386-65b9-4438-a781-b154cdb9a579:NORMAL:127.0.0.1:50010|RBW]]} is not COMPLETE (ucState = COMMITTED, replication# = 0 <  minimum = 1) in file /user/hadoop/temp/visit-sequences.csv._COPYING_
2016-04-04 20:30:56,533 INFO org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream: Nothing to flush
2016-04-04 20:30:56,548 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1b928386-65b9-4438-a781-b154cdb9a579:NORMAL:127.0.0.1:50010|RBW]]} size 742875
2016-04-04 20:30:56,957 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hadoop/temp/visit-sequences.csv._COPYING_ is closed by DFSClient_NONMAPREDUCE_1242172231_1
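As a hypothetical illustration of the log-file approach, the duration of a put can be derived from the timestamps of the block-allocation and completeFile lines shown above. The log file name and the two sample lines below are stand-ins for a real namenode log:

```shell
# Hypothetical sketch: extract start/end timestamps of a put from a
# namenode log. The file and its contents are sample data mirroring the
# log excerpt above; on a real cluster, grep the actual namenode log.
LOG=namenode-sample.log
cat > "$LOG" <<'EOF'
2016-04-04 20:30:56,252 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741829_1005 for /user/hadoop/temp/visit-sequences.csv._COPYING_
2016-04-04 20:30:56,957 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hadoop/temp/visit-sequences.csv._COPYING_ is closed by DFSClient_NONMAPREDUCE_1242172231_1
EOF
# The first block allocation marks the start; completeFile marks the end.
start_ts=$(grep 'BLOCK\* allocate' "$LOG" | head -1 | awk '{print $2}')
end_ts=$(grep 'completeFile' "$LOG" | tail -1 | awk '{print $2}')
echo "put ran from $start_ts to $end_ts"
```

In the excerpt above this gives a start of 20:30:56,252 and an end of 20:30:56,957, i.e. roughly 0.7 seconds for the 742875-byte file.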
