Apache Tez job fails with java.lang.NumberFormatException for input string "30s"

Posted by 2mbi3lxu on 2021-05-31 in Hadoop

I am trying to run a query on Apache Hive on Tez, but it fails with the error below and I cannot figure out how to resolve it.
Apache Hadoop 3.1.1
Apache Hive 3.1.0
Apache Tez 0.9.1
My tez-site.xml:

<configuration>
        <property>
            <name>tez.lib.uris</name>
            <value>hdfs://localhost:8020/apps/apache-tez-0.9.1-bin/share/tez.tar.gz</value>
        </property>
        <property>
            <name>tez.staging-dir</name>
            <value>/tmp/${user.name}/staging</value>
        </property>
    </configuration>
2020-04-22 21:08:55,530 [INFO] [main] |shim.HadoopShimsLoader|: Trying to locate HadoopShimProvider for hadoopVersion=2.7.0, majorVersion=2, minorVersion=7
2020-04-22 21:08:55,531 [INFO] [main] |shim.HadoopShimsLoader|: Picked HadoopShim org.apache.tez.hadoop.shim.HadoopShim27, providerName=org.apache.tez.hadoop.shim.HadoopShim25_26_27Provider, overrideProviderViaConfig=null, hadoopVersion=2.7.0, majorVersion=2, minorVersion=7
2020-04-22 21:08:55,551 [INFO] [main] |app.DAGAppMaster|: AM Level configured TaskSchedulers: [0:TezYarn:null],[1:TezUber:null]
2020-04-22 21:08:55,551 [INFO] [main] |app.DAGAppMaster|: AM Level configured ContainerLaunchers: [0:TezYarn:null],[1:TezUber:null]
2020-04-22 21:08:55,551 [INFO] [main] |app.DAGAppMaster|: AM Level configured TaskCommunicators: [0:TezYarn:null],[1:TezUber:null]
2020-04-22 21:08:55,551 [INFO] [main] |app.DAGAppMaster|: Comparing client version with AM version, clientVersion=0.9.1, AMVersion=0.9.1
2020-04-22 21:08:55,633 [INFO] [main] |service.AbstractService|: Service org.apache.tez.dag.app.DAGAppMaster failed in state INITED; cause: java.lang.NumberFormatException: For input string: "30s"
java.lang.NumberFormatException: For input string: "30s"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Long.parseLong(Long.java:589)
    at java.lang.Long.parseLong(Long.java:631)
    at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1311)
    at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:502)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:637)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.tez.common.TezCommonUtils.getTezBaseStagingPath(TezCommonUtils.java:87)
    at org.apache.tez.common.TezCommonUtils.getTezSystemStagingPath(TezCommonUtils.java:146)
    at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:492)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2662)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2659)
    at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2464)
2020-04-22 21:08:55,636 [WARN] [main] |service.AbstractService|: When stopping the service org.apache.tez.dag.app.DAGAppMaster : java.lang.NullPointerException
java.lang.NullPointerException
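
The AM log above shows the Tez AM picking the Hadoop 2.7 shim (hadoopVersion=2.7.0), which suggests the tez.tar.gz referenced by tez.lib.uris bundles Hadoop 2.7 client jars even though the cluster itself runs Hadoop 3.1.1. A rough way to check this (a sketch only; the path is the one from tez-site.xml above, and the exact jar names inside the tarball may differ):

hdfs dfs -get /apps/apache-tez-0.9.1-bin/share/tez.tar.gz /tmp/tez.tar.gz
tar -tzf /tmp/tez.tar.gz | grep 'hadoop.*\.jar'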

Answer 1 (zi8p0yeb):

Thank you for the reply. I have checked my hdfs-site.xml and there is no setting that specifies a value of "30s".

<property>
        <name>dfs.replication</name>
        <value>1</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///hadoopdata/hdfs/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///hadoopdata/hdfs/datanode</value>
</property>
<property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
</property>
<property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
</property>
<property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
    <description>The name of the group of super-users.</description>
</property>
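
Even though hdfs-site.xml never sets "30s", the value comes from the hdfs-default.xml bundled inside the Hadoop 3.x jars, where dfs.client.datanode-restart.timeout defaults to 30s. A quick way to see the value the client actually resolves (a sketch using the standard getconf tool; the output shown is what I would expect on Hadoop 3.1.1):

hdfs getconf -confKey dfs.client.datanode-restart.timeout
# expected output: 30s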

Answer 2 (2nc8po8w):

The default value of the property dfs.client.datanode-restart.timeout is 30 seconds. This issue is related to that. A workaround is mentioned there, and it worked for me.
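
For reference, a minimal sketch of that kind of workaround: override the property with a plain numeric value (no time-unit suffix) in the hdfs-site.xml visible to Hive/Tez, so the older Hadoop 2.7 parser used by the Tez AM (Configuration.getLong, as in the stack trace above) can read it. The value 30 is an assumption that preserves the 30-second default; more generally, using a tez.tar.gz whose Hadoop client jars match the 3.1.1 cluster should avoid the parse mismatch altogether.

<property>
    <name>dfs.client.datanode-restart.timeout</name>
    <value>30</value>
</property>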
