Cannot create hive connection jdbc:hive2://localhost:10000 with spark-submit in cluster mode

ulydmbyx posted on 2021-07-13 in Spark

I am running an Apache Hudi application on Apache Spark. When I submit the application in client mode it works fine, but when I submit it in cluster mode I get the following error (a hypothetical sketch of the kind of write that triggers it follows the trace):

  py4j.protocol.Py4JJavaError: An error occurred while calling o196.save.
  : org.apache.hudi.hive.HoodieHiveSyncException: Cannot create hive connection jdbc:hive2://localhost:10000/
    at org.apache.hudi.hive.HoodieHiveClient.createHiveConnection(HoodieHiveClient.java:422)
    at org.apache.hudi.hive.HoodieHiveClient.<init>(HoodieHiveClient.java:95)
    at org.apache.hudi.hive.HiveSyncTool.<init>(HiveSyncTool.java:66)
    at org.apache.hudi.HoodieSparkSqlWriter$.org$apache$hudi$HoodieSparkSqlWriter$$syncHive(HoodieSparkSqlWriter.scala:321)
    at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$metaSync$2.apply(HoodieSparkSqlWriter.scala:363)
    at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$metaSync$2.apply(HoodieSparkSqlWriter.scala:359)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
    at org.apache.hudi.HoodieSparkSqlWriter$.metaSync(HoodieSparkSqlWriter.scala:359)
    at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:417)
    at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:205)
    at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:125)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:173)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:169)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:197)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:194)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:169)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:114)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:112)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:696)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:696)
    at org.apache.spark.sql.execution.SQLExecution$.org$apache$spark$sql$execution$SQLExecution$$executeQuery$1(SQLExecution.scala:83)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1$$anonfun$apply$1.apply(SQLExecution.scala:94)
    at org.apache.spark.sql.execution.QueryExecutionMetrics$.withMetrics(QueryExecutionMetrics.scala:141)
    at org.apache.spark.sql.execution.SQLExecution$.org$apache$spark$sql$execution$SQLExecution$$withMetrics(SQLExecution.scala:178)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:93)
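
For context, here is a minimal, hypothetical sketch of the kind of PySpark write that reaches the Hive sync step shown in the trace; the original job code is not given, so the table name, fields, and path below are made up. Hive sync is enabled but hoodie.datasource.hive_sync.jdbcurl is left unset, so Hudi falls back to its default of jdbc:hive2://localhost:10000:

  from pyspark.sql import SparkSession

  # Hypothetical reconstruction of the failing write; all names are made up.
  spark = SparkSession.builder.appName("hudi-write").getOrCreate()
  df = spark.createDataFrame([(1, "a", 1626134400)], ["id", "name", "ts"])

  hudi_options = {
      "hoodie.table.name": "my_table",
      "hoodie.datasource.write.recordkey.field": "id",
      "hoodie.datasource.write.precombine.field": "ts",
      # Hive sync is on, but no JDBC URL is set, so Hudi uses its
      # default of jdbc:hive2://localhost:10000 during metaSync.
      "hoodie.datasource.hive_sync.enable": "true",
  }

  df.write.format("hudi").options(**hudi_options).mode("append").save("/tmp/hudi/my_table")
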
Answer 1 (yebdmbv4):

It started working after I changed the Hudi config hoodie.datasource.hive_sync.jdbcurl. In cluster mode the driver, which runs the Hive sync, is launched on an arbitrary cluster node, so the default jdbc:hive2://localhost:10000 no longer points at the machine running HiveServer2; setting the JDBC URL to the actual HiveServer2 host fixes the sync.
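
A minimal sketch of that change, assuming HiveServer2 runs on a host called hiveserver2.example.com (hypothetical; substitute the real host) and reusing the hypothetical write from the question:

  # Same hypothetical write as in the question, with the sync URL set explicitly.
  hudi_options = {
      "hoodie.table.name": "my_table",
      "hoodie.datasource.write.recordkey.field": "id",
      "hoodie.datasource.write.precombine.field": "ts",
      "hoodie.datasource.hive_sync.enable": "true",
      # Point Hive sync at the node that actually runs HiveServer2
      # instead of relying on the localhost default.
      "hoodie.datasource.hive_sync.jdbcurl": "jdbc:hive2://hiveserver2.example.com:10000",
  }

  df.write.format("hudi").options(**hudi_options).mode("append").save("/tmp/hudi/my_table")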
