I recently started looking into NoSQL and big data and decided to pursue them further. For the past few days I have been trying to install and configure Hadoop and HBase on a Windows Server 2008 R2 64-bit machine. Unfortunately I have had no success, and I keep running into different errors at every stage of the installation. These are the tutorials I followed:
For Hadoop: http://blog.sqltrainer.com/2012/01/installing-and-configuring-apache.html
For HBase: http://ics.upjs.sk/~novotnyr/blog/334/setting-up-hbase-on-windows
First of all, when I run the jps command in the /usr/local/hadoop directory, I don't see a DataNode listed, only these entries:
$ jps
3984 NameNode
6864 Jps
5972 JobTracker
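For comparison, I believe a working pseudo-distributed setup would normally also show the DataNode, SecondaryNameNode and TaskTracker daemons, something like this (the process ids here are just placeholders):
$ jps
3984 NameNode
4120 DataNode
4388 SecondaryNameNode
5972 JobTracker
6200 TaskTracker
6864 Jps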
However, when I navigate to 127.0.0.1:50070 it works fine. But when I try to run the WordCount example job as a test, it stays stuck at the point shown below for a very long time and I have to restart the Cygwin terminal:
11/06/13 13:43:01 INFO mapred.JobClient: Running job: job_201005081732_0001
11/06/13 13:43:02 INFO mapred.JobClient:  map 0% reduce 0%
Apart from that, I simply ignored it and moved on to installing and configuring HBase on top of Hadoop. The installation itself went fine, but now when I run various commands in the HBase shell I get different errors. For example, if I run the 'list' command I get:
org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times
and if I run the scan 'test' command I get:
org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for test,,999999999999 after 7 tries.
I really don't know what to do any more; I have been searching for days but still cannot find an exact solution to these errors.
I would really appreciate help from you experts in getting Hadoop and HBase configured successfully.
Here is my DataNode log:
2013-06-11 14:21:16,703 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_3811235227329042813_1246 src: /127.0.0.1:51511 dest: /127.0.0.1:50010
2013-06-11 14:21:16,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51511, dest: /127.0.0.1:50010, bytes: 142452, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_3811235227329042813_1246, duration: 8188439
2013-06-11 14:21:16,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_3811235227329042813_1246 terminating
2013-06-11 14:21:17,024 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-7864325777801075696_1247 src: /127.0.0.1:51512 dest: /127.0.0.1:50010
2013-06-11 14:21:17,034 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51512, dest: /127.0.0.1:50010, bytes: 368, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_-7864325777801075696_1247, duration: 1775491
2013-06-11 14:21:17,035 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-7864325777801075696_1247 terminating
2013-06-11 14:21:17,135 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_8363548489446884759_1248 src: /127.0.0.1:51513 dest: /127.0.0.1:50010
2013-06-11 14:21:17,145 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51513, dest: /127.0.0.1:50010, bytes: 77, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_8363548489446884759_1248, duration: 1461072
2013-06-11 14:21:17,146 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_8363548489446884759_1248 terminating
2013-06-11 14:21:17,481 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_2254833662532666780_1249 src: /127.0.0.1:51514 dest: /127.0.0.1:50010
2013-06-11 14:21:17,493 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51514, dest: /127.0.0.1:50010, bytes: 20596, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_2254833662532666780_1249, duration: 2206535
2013-06-11 14:21:17,494 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_2254833662532666780_1249 terminating
2013-06-11 14:21:17,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51516, bytes: 20760, op: HDFS_READ, cliID: DFSClient_-1869746926, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_2254833662532666780_1249, duration: 3906454
2013-06-11 14:21:18,234 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-2949992568769351385_1250 src: /127.0.0.1:51518 dest: /127.0.0.1:50010
2013-06-11 14:21:18,244 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51518, dest: /127.0.0.1:50010, bytes: 106, op: HDFS_WRITE, cliID: DFSClient_-163790033, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_-2949992568769351385_1250, duration: 1404625
2013-06-11 14:21:18,245 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-2949992568769351385_1250 terminating
2013-06-11 14:21:18,290 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51519, bytes: 81, op: HDFS_READ, cliID: DFSClient_-1869746926, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_8363548489446884759_1248, duration: 694149
2013-06-11 14:22:00,557 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_3811235227329042813_1246
TaskTracker log:
2013-06-11 12:33:27,223 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting TaskTracker
STARTUP_MSG: host = WIN-UHHLG0L1912/192.168.168.63
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2013-06-11 12:33:27,676 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-06-11 12:33:27,812 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-06-11 12:33:27,815 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-06-11 12:33:27,815 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: TaskTracker metrics system started
2013-06-11 12:33:28,402 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-06-11 12:33:28,411 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-06-11 12:33:28,697 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-06-11 12:33:28,852 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-06-11 12:33:28,954 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-06-11 12:33:28,963 INFO org.apache.hadoop.mapred.TaskTracker: Starting tasktracker with owner as cyg_server
2013-06-11 12:33:28,965 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /tmp/hadoop-cyg_server/mapred/local
2013-06-11 12:33:28,982 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-06-11 12:33:28,984 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Failed to set permissions of path: \tmp\hadoop-cyg_server\mapred\local\taskTracker to 0755
at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689)
at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:670)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:723)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1459)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3742)
2013-06-11 12:33:28,986 INFO org.apache.hadoop.mapred.TaskTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down TaskTracker at WIN-UHHLG0L1912/192.168.168.63
************************************************************/
In core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
In hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/workspace/name_dir</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/workspace/data_dir</value>
</property>
In mapred-site.xml:
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
Thanks in advance,
Regards,
Salman
1 Answer
Create a directory, say /home/hadoop/workspace/temp_dir, and add the property hadoop.tmp.dir with that directory as its value to your core-site.xml file. Then change the permissions of /home/hadoop/workspace/data_dir and /home/hadoop/workspace/temp_dir to 755 and restart Hadoop; a sketch of the resulting configuration and commands is shown below.
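A minimal sketch of what core-site.xml could look like afterwards, assuming the example temp_dir path suggested above:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/workspace/temp_dir</value>
</property>
Then, from the Cygwin shell in the Hadoop installation directory, something along these lines should adjust the permissions and restart the daemons:
$ chmod -R 755 /home/hadoop/workspace/data_dir /home/hadoop/workspace/temp_dir
$ bin/stop-all.sh
$ bin/start-all.sh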