Hadoop map stuck in word count tutorial - Unable to load realm info from SCDynamicStore

lh80um4z · posted 2021-06-03 in Hadoop

I am trying to run the word count tutorial http://hadoop.apache.org/docs/stable/mapred_tutorial.html on a single-node setup.
Here is my terminal output:

```
> hadoop jar wordcount.jar org.myorg.WordCount input output
13/08/13 16:26:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/08/13 16:26:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/08/13 16:26:59 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/13 16:26:59 INFO mapred.FileInputFormat: Total input paths to process : 2
13/08/13 16:26:59 INFO mapred.JobClient: Running job: job_local955318185_0001
13/08/13 16:26:59 INFO mapred.LocalJobRunner: Waiting for map tasks
13/08/13 16:26:59 INFO mapred.LocalJobRunner: Starting task: attempt_local955318185_0001_m_000000_0
13/08/13 16:26:59 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
13/08/13 16:26:59 INFO mapred.MapTask: Processing split: file:/Users/jfk/work/hadoop/2_word/input/file02:0+24
13/08/13 16:26:59 INFO mapred.MapTask: numReduceTasks: 1
13/08/13 16:26:59 INFO mapred.MapTask: io.sort.mb = 100
13/08/13 16:27:00 INFO mapred.MapTask: data buffer = 79691776/99614720
13/08/13 16:27:00 INFO mapred.MapTask: record buffer = 262144/327680
13/08/13 16:27:00 INFO mapred.MapTask: Starting flush of map output
13/08/13 16:27:00 INFO mapred.JobClient:  map 0% reduce 0%
13/08/13 16:27:05 INFO mapred.LocalJobRunner: file:/Users/jfk/work/hadoop/2_word/input/file02:0+24
13/08/13 16:27:06 INFO mapred.JobClient:  map 50% reduce 0%
```

It just hangs there. All I can do is press Ctrl-C. How can I debug this?
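One generic way to see where a hung local job is blocked is a JVM thread dump; this is just a sketch, assuming the JDK's `jps` and `jstack` tools are on the PATH:

```
# Find the JVM running the local job; for "hadoop jar" the main class
# is usually org.apache.hadoop.util.RunJar, but the name may differ.
jps -l

# Take a thread dump of that process to see what the map task is blocked on.
# Replace <pid> with the process id printed by jps.
jstack <pid> > stuck-job-threads.txt
```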
Here is the contents of `logs/userlogs`:

```
-> ls -l
total 0
drwx--x--- 16 jfk admin 544 Aug 7 21:51 job_201308072147_0001
drwx--x--- 16 jfk admin 544 Aug 9 10:18 job_201308091015_0001
drwx--x--- 9 jfk admin 306 Aug 13 14:59 job_201308131457_0001
drwx--x--- 7 jfk admin 238 Aug 13 14:59 job_201308131457_0002
drwx--x--- 9 jfk admin 306 Aug 13 15:02 job_201308131457_0003
drwx--x--- 9 jfk admin 306 Aug 13 15:04 job_201308131457_0005
drwx--x--- 9 jfk admin 306 Aug 13 15:13 job_201308131457_0007
drwx--x--- 9 jfk admin 306 Aug 13 15:14 job_201308131457_0009
drwx--x--- 9 jfk admin 306 Aug 13 15:15 job_201308131457_0011
drwx--x--- 7 jfk admin 238 Aug 13 15:16 job_201308131457_0012
drwx--x--- 15 jfk admin 510 Aug 13 15:28 job_201308131457_0014
drwx--x--- 7 jfk admin 238 Aug 13 15:28 job_201308131457_0015
drwx--x--- 15 jfk admin 510 Aug 13 16:20 job_201308131549_0001
drwx--x--- 11 jfk admin 374 Aug 13 16:20 job_201308131549_0002
drwx--x--- 4 jfk admin 136 Aug 13 16:13 job_201308131549_0004
```

The contents of `job_201308131549_0004/attempt_201308131549_0004_r_000002_0/stderr`:

```
2013-08-13 16:13:10.401 java[7378:1203] Unable to load realm info from SCDynamicStore
```

== UPDATE ==
Searching for the error message `Unable to load realm info from SCDynamicStore` turns up several other people who hit the same problem running Hadoop on OS X. The following solution seems to have worked for some of them, but unfortunately not for me: Hadoop on OSX "Unable to load realm info from SCDynamicStore"
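For reference, the workaround those posts usually suggest (it did not help in my case) is to pass empty Kerberos realm/KDC properties to the JVM through `HADOOP_OPTS` in `conf/hadoop-env.sh`; the exact values vary between answers, so treat this as a sketch:

```
# conf/hadoop-env.sh -- commonly quoted workaround for the
# "Unable to load realm info from SCDynamicStore" warning on OS X
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="
```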

jtjikinw1#

Go to http://localhost:50030/jobtracker.jsp, select the stuck running job, and find which map task is stuck. Open that map task and click through all the links on the right; they will point you to the exact error. Not sure whether this helps you.
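If the web UI is not reachable, roughly the same information can be pulled from the Hadoop 1.x command line; a sketch (use whatever job id the console or JobTracker reports):

```
# List the jobs the cluster knows about, with their ids and states.
hadoop job -list

# Show the map/reduce completion and state of one job.
hadoop job -status <job-id>
```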
