Exception in doCheckpoint: Operation category JOURNAL is not supported in state standby

bxgwgixi posted on 2021-06-02 in Hadoop

I created an HA cluster with one DataNode, an active NameNode, a standby NameNode, and three JournalNodes. When I put a file into HDFS, I get the following error:

put: Operation category READ is not supported in state standby

The put command:

./hadoop fs -put golnaz.txt /user/input

NameNode log:

at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1407)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy15.rollEditLog(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:148)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:273)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:315)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2016-09-15 02:07:23,961 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000, call org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 103.41.177.161:45797 Call#11403 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category JOURNAL is not supported in state standby
2016-09-15 02:07:30,547 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000, call org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 103.41.177.160:39200 Call#11404 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category JOURNAL is not supported in state standby

Error in the SecondaryNameNode log:

ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category JOURNAL is not supported in state standby

Here is the hdfs-site.xml:

<configuration>
<property>
    <name>dfs.data.dir</name>
    <value>/root/hadoopstorage/data</value>
    <final>true</final>
</property>
<property>
    <name>dfs.name.dir</name>
    <value>/root/hadoopstorage/name</value>
    <final>true</final>
</property>
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.nameservices</name>
    <value>ha-cluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.ha-cluster</name>
    <value>NameNode,Standby</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.ha-cluster.NameNode</name>
    <value>103.41.177.161:9000</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.ha-cluster.Standby</name>
    <value>103.41.177.162:9000</value>
</property>
<property>
    <name>dfs.namenode.http-address.ha-cluster.NameNode</name>
    <value>103.41.177.161:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.ha-cluster.Standby</name>
    <value>103.41.177.162:50070</value>
</property>
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://103.41.177.161:8485;103.41.177.162:8485;103.41.177.160:8485/ha-cluster</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.ha-cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>3000</value>
</property>
</configuration>

e4eetjau1#

You haven't mentioned anything about automatic failover (ZKFC & ZooKeeper). Without it, HDFS will not fail over automatically.
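
For reference, enabling automatic failover is a two-part configuration change. The sketch below assumes the ha-cluster nameservice from your config; the ZooKeeper hosts shown (the three JournalNode machines) are an assumption, not taken from your setup:

<!-- hdfs-site.xml: turn on automatic failover for the nameservice -->
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>

<!-- core-site.xml: ZooKeeper quorum watched by the ZKFCs (hosts assumed) -->
<property>
    <name>ha.zookeeper.quorum</name>
    <value>103.41.177.160:2181,103.41.177.161:2181,103.41.177.162:2181</value>
</property>

After adding these, initialize the HA state in ZooKeeper once with hdfs zkfc -formatZK, then start a ZKFC process on each NameNode host.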
You can try the following: verify whether both NameNodes are in standby state by checking the NameNode web consoles (or with the getServiceState admin command). If so, manually trigger a transition with the -transitionToActive command while tailing the NameNode logs at the same time. If the transition fails, update the post with the NameNode logs.
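
For example (a sketch, assuming the NameNode IDs NameNode and Standby from your dfs.ha.namenodes.ha-cluster setting, and that commands are run from the Hadoop bin directory as in your put command):

# Check the HA state of each NameNode
./hdfs haadmin -getServiceState NameNode
./hdfs haadmin -getServiceState Standby

# If both report "standby", promote one manually and watch the NameNode logs
./hdfs haadmin -transitionToActive NameNode

Note that once automatic failover is enabled, haadmin refuses manual transitions unless you add --forcemanual, so run this manual test before wiring up ZKFC.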
