Timed out waiting for SASL negotiation to complete between HiveServer2 and the Remote Spark Driver

jv4diomz · posted 2021-06-24 · in Hive
Follow (0) | Answers (1) | Views (650)

I am learning CDH 6.3.0 with Hive and Spark, and I am facing a problem that has been bothering me for a week. I installed everything from scratch and have not been able to resolve it.
When I try to SELECT from a table, a timeout occurs.
Given this:

  DROP TABLE dashboard.top10;
  CREATE TABLE dashboard.top10 (id VARCHAR(100), floatVal DOUBLE)
  STORED AS ORC tblproperties("compress.mode"="SNAPPY");
  INSERT into dashboard.top10 SELECT * from analysis.total_raw order by floatVal DESC limit 10;
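As an aside (unrelated to the timeout itself), the standard ORC table property for compression is `orc.compress`; a `compress.mode` key is not read by Hive's ORC writer and is silently ignored. A sketch of the DDL with the standard property:

```sql
-- Sketch: same table, using the standard ORC compression property.
-- "orc.compress" is the key the ORC writer reads; "compress.mode" is ignored.
DROP TABLE IF EXISTS dashboard.top10;
CREATE TABLE dashboard.top10 (id VARCHAR(100), floatVal DOUBLE)
STORED AS ORC
TBLPROPERTIES ("orc.compress"="SNAPPY");
```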

Error while processing statement: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session faf8afcb-0e43-4097-8dcb-44f3f1445005_0: java.util.concurrent.TimeoutException: Client 'faf8afcb-0e43-4097-8dcb-44f3f1445005_0' timed out waiting for connection from the Remote Spark Driver
My guess is that the timeout settings are not being taken into account. Since this is my test environment, latency can exceed 1 s.

Warning: Ignoring non-Spark config property: hive.spark.client.server.connect.timeout=90000
Warning: Ignoring non-Spark config property: hive.spark.client.connect.timeout

The container is exiting with an error:

  exception: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timed out waiting to connect to HiveServer2.
      at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41)
      at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:155)
      at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:559)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:673)
  Caused by: java.util.concurrent.TimeoutException: Timed out waiting to connect to HiveServer2.
      at org.apache.hive.spark.client.rpc.Rpc$2.run(Rpc.java:120)
      at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
      at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
      at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
      at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
      at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
      at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
      at java.lang.Thread.run(Thread.java:748)
  )
  19/08/26 17:15:11 ERROR yarn.ApplicationMaster: Uncaught exception:
  org.apache.spark.SparkException: Exception thrown in awaitResult:
      at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
      at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:447)
      at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:275)
      at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:805)
      at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:804)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:422)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
      at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:804)
      at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
  Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timed out waiting to connect to HiveServer2.
      at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41)
      at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:155)
      at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:559)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:673)
  Caused by: java.util.concurrent.TimeoutException: Timed out waiting to connect to HiveServer2.
      at org.apache.hive.spark.client.rpc.Rpc$2.run(Rpc.java:120)
      at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
      at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
      at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
      at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
      at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
      at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
      at java.lang.Thread.run(Thread.java:748)
  19/08/26 17:15:11 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://masternode.vm:8020/user/root/.sparkStaging/application_1566847834444_0003
  19/08/26 17:15:16 INFO util.ShutdownHookManager: Shutdown hook called

I raised the timeouts (as a test), without success:

  hive.metastore.client.socket.timeout=360s
  hive.spark.client.connect.timeout=360000ms
  hive.spark.client.server.connect.timeout=360000ms
  hive.spark.job.monitor.timeout=180s
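For reference, these properties can also be tried per-session from beeline before submitting the query (a sketch; the values are examples, not recommendations, and in some Hive versions `hive.spark.client.server.connect.timeout` is on HiveServer2's restricted list and must be set in hive-site.xml instead):

```sql
-- Session-level overrides (sketch): raise the Hive-on-Spark client timeouts
-- before running the query that launches the Spark session.
SET hive.spark.client.connect.timeout=360000ms;
SET hive.spark.client.server.connect.timeout=360000ms;
SET hive.spark.job.monitor.timeout=180s;

INSERT INTO dashboard.top10
SELECT * FROM analysis.total_raw ORDER BY floatVal DESC LIMIT 10;
```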

I also double-checked name resolution on every node and everything works; I am not using DNS, but the hosts file instead.
VMs in the cluster: CentOS 7
Apache Spark version: 2.4.0-cdh6.3.0
Cloudera version: CDH 6.3
Hive version: 2.1.1-cdh6.3.0, re1e06dfe7de385554f2ec553009ef8452c5fd25a


yuvru6vn · Answer 1

In CDH 6.2: `set hive.spark.client.future.timeout=360;` and, after a cluster restart, it was solved!
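If that session-level `set` needs to survive restarts, the equivalent hive-site.xml entry (or the corresponding safety valve in Cloudera Manager) would look like this sketch; the property name is the one from the answer, and the value is in seconds:

```xml
<!-- Sketch: making the fix permanent in hive-site.xml
     (restart HiveServer2 afterwards). -->
<property>
  <name>hive.spark.client.future.timeout</name>
  <value>360</value>
</property>
```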
