Timed out waiting for SASL negotiation to complete between HiveServer2 and the Remote Spark Driver

jv4diomz  posted on 2021-06-24 in Hive

I am learning CDH 6.3.0 with Hive and Spark, and I have been stuck on one problem for a week. I installed the cluster from scratch without hitting any issue.
Whenever I try to SELECT from a table, a timeout occurs.
Given this:

DROP TABLE dashboard.top10;
CREATE TABLE dashboard.top10 (id VARCHAR(100), floatVal DOUBLE)
STORED AS ORC tblproperties("compress.mode"="SNAPPY");
INSERT into dashboard.top10 SELECT * from analysis.total_raw  order by floatVal DESC limit 10;

Error while processing statement: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session faf8afcb-0e43-4097-8dcb-44f3f1445005_0: java.util.concurrent.TimeoutException: Client 'faf8afcb-0e43-4097-8dcb-44f3f1445005_0' timed out waiting for connection from the Remote Spark Driver
My guess is that my timeout settings are not being taken into account; in my test environment latency can easily exceed 1 s.
Warning: Ignoring non-spark config property: hive.spark.client.server.connect.timeout=90000
Warning: Ignoring non-spark config property: hive.spark.client.connect.timeout
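Those "Ignoring non-spark config property" warnings suggest the hive.spark.* values were handed to Spark (e.g. via spark-defaults.conf) instead of to Hive. They are Hive-side properties, so one way to check whether they take effect at all is to set them in the Hive session itself. A sketch (property names are from the question; the values are only illustrative):

```sql
-- Run in beeline / Hive CLI before the failing INSERT.
-- These are Hive client/server properties, not Spark ones,
-- so they must be visible to HiveServer2, not to spark-submit.
SET hive.spark.client.connect.timeout=300000ms;
SET hive.spark.client.server.connect.timeout=300000ms;
SET hive.spark.job.monitor.timeout=180s;
```

If SET is rejected for these keys (CDH can restrict which properties a session may override), they belong in the HiveServer2 configuration instead and require a service restart.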
The container is exiting because of an error:

exception: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timed out waiting to connect to HiveServer2.
    at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41)
    at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:155)
    at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:559)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:673)
Caused by: java.util.concurrent.TimeoutException: Timed out waiting to connect to HiveServer2.
    at org.apache.hive.spark.client.rpc.Rpc$2.run(Rpc.java:120)
    at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
    at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at java.lang.Thread.run(Thread.java:748)
)
19/08/26 17:15:11 ERROR yarn.ApplicationMaster: Uncaught exception: 
org.apache.spark.SparkException: Exception thrown in awaitResult: 
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:447)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:275)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:805)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:804)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:804)
    at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timed out waiting to connect to HiveServer2.
    at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41)
    at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:155)
    at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:559)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:673)
Caused by: java.util.concurrent.TimeoutException: Timed out waiting to connect to HiveServer2.
    at org.apache.hive.spark.client.rpc.Rpc$2.run(Rpc.java:120)
    at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
    at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at java.lang.Thread.run(Thread.java:748)
19/08/26 17:15:11 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://masternode.vm:8020/user/root/.sparkStaging/application_1566847834444_0003
19/08/26 17:15:16 INFO util.ShutdownHookManager: Shutdown hook called

I raised the timeouts (as a test), without success:

hive.metastore.client.socket.timeout=360s
hive.spark.client.connect.timeout=360000ms
hive.spark.client.server.connect.timeout=360000ms
hive.spark.job.monitor.timeout=180s
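For the server-side values to reach HiveServer2, they cannot merely be passed on the command line; a hive-site.xml fragment is one place for them. A sketch (on CDH this would normally go through the Cloudera Manager "safety valve" for HiveServer2, followed by a restart):

```xml
<!-- hive-site.xml (sketch): timeouts from the list above, applied on the
     HiveServer2 side so the Remote Spark Driver handshake can use them. -->
<property>
  <name>hive.spark.client.connect.timeout</name>
  <value>360000ms</value>
</property>
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <value>360000ms</value>
</property>
```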

I also double-checked name resolution on every node and everything works; I am not using DNS but hosts files.
Cluster VMs: CentOS 7
Apache Spark version: 2.4.0-cdh6.3.0
Cloudera version: CDH 6.3
Hive version: 2.1.1-cdh6.3.0, re1e06dfe7de385554f2ec553009ef8452c5fd25a
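Since resolution goes through hosts files rather than DNS, every node needs identical, complete entries: the YARN container running the Remote Spark Driver must be able to resolve the HiveServer2 host, and a single missing or stale line produces exactly this kind of connect timeout. A hypothetical layout (addresses and hostnames are placeholders, not from the question):

```
# /etc/hosts, identical on every node (hypothetical addresses)
192.168.56.10  masternode.vm  masternode
192.168.56.11  worker1.vm     worker1
192.168.56.12  worker2.vm     worker2
```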


yuvru6vn1#

In CDH 6.2: set hive.spark.client.future.timeout=360; and, after a cluster restart, it is solved!
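Applying that fix in a Hive session would look roughly like this (hive.spark.client.future.timeout is the full property name; a bare number is interpreted in the property's default time unit):

```sql
-- Raise the future timeout for the Hive-on-Spark client,
-- then restart the cluster as the answer suggests.
SET hive.spark.client.future.timeout=360;
```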
