Spark-on-Hive SQL query error NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT

mccptt67 · posted 2021-06-02 in Hadoop

I get an error when submitting a Spark 1.6.0 SQL application against Hive 2.1.0:

Exception in thread "main" java.lang.NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT
 at org.apache.spark.sql.hive.HiveContext.configure(HiveContext.scala:512)
 at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:252)
 at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:239)
 at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:443)
 at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:272)
 at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:271)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
 at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
 at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:271)
 at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
 at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
 at my.package.AbstractProcess$class.prepareContexts(AbstractProcess.scala:33)
 at my.package.PdfTextExtractor$.prepareContexts(PdfTextExtractor.scala:11)
 at my.package.AbstractProcess$class.main(AbstractProcess.scala:21)
 at my.package.PdfTextExtractor$.main(PdfTextExtractor.scala:11)
 at my.package.PdfTextExtractor.main(PdfTextExtractor.scala)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

It occurs when I call:

hiveContext.sql(sqlString)
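
A `NoSuchFieldError` here means the `HiveConf` class that wins on the runtime classpath no longer declares a field that Spark 1.6 was compiled against. As a diagnostic sketch (not a fix), one can probe a class for a field reflectively; the class and field names to pass on a real driver are taken from the stack trace above, while the `java.lang.Math` / `PI` defaults are stand-ins so the snippet runs anywhere:

```java
// Diagnostic sketch: check whether the class resolved on the current
// classpath declares a given field. On the actual driver classpath you would
// pass "org.apache.hadoop.hive.conf.HiveConf$ConfVars" and
// "HIVE_STATS_JDBC_TIMEOUT" (both names come from the stack trace); the
// java.lang.Math / "PI" defaults below are stand-ins so the sketch runs anywhere.
public class FieldProbe {
    static boolean hasField(String className, String fieldName) {
        try {
            // Enum constants (like the ConfVars entries) are declared fields,
            // so getDeclaredField covers them too.
            Class.forName(className).getDeclaredField(fieldName);
            return true;
        } catch (ClassNotFoundException | NoSuchFieldException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String cls = args.length > 0 ? args[0] : "java.lang.Math";
        String fld = args.length > 1 ? args[1] : "PI";
        System.out.println(cls
                + (hasField(cls, fld) ? " declares field " : " does NOT declare field ")
                + fld);
    }
}
```

Running it with the `HiveConf$ConfVars` arguments on the same classpath the driver uses would show whether the field is actually present in the Hive jars being loaded.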

I submit the application with the spark-submit tool:

appJar="${script_dir}/../../lib/application.jar"
jars="/usr/lib/spark/lib/spark-assembly.jar,/usr/lib/spark/lib/spark-examples.jar,/usr/share/java/scala-library.jar,/usr/lib/hive-exec.jar,/usr/lib/hive-2.1.0/lib/hive-metastore-2.1.0.jar,/usr/lib/hive-2.1.0/jdbc/hive-jdbc-2.1.0-standalone.jar,/usr/lib/hive-2.1.0/lib/hive-jdbc-2.1.0.jar"
CLASSPATH=`yarn classpath`
exec spark-submit --verbose \
--master 'yarn' \
--deploy-mode 'client' \
--name 'extract_text_from_krs_pdf' \
--jars ${jars} \
--executor-memory 3g \
--driver-cores 2 \
--driver-class-path "${CLASSPATH}:/usr/lib/spark/lib/spark-assembly.jar:/usr/lib/spark/lib/spark-examples.jar:/usr/share/java/scala-library.jar:/usr/lib/hive-exec.jar:/usr/lib/hive-2.1.0/lib/*:/usr/lib/hive-2.1.0/jdbc/*" \
--class 'my.package.PdfTextExtractor' \
"$appJar" "$dt" "$db"

I followed the instructions in the Apache Spark documentation ("Interacting with Different Versions of Hive Metastore") to fix the mismatch between Spark's default metastore version and my Hive metastore, so my /etc/spark/conf/spark-defaults.conf looks like this:

spark.sql.hive.metastore.version 2.1.0
spark.sql.hive.metastore.jars /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hive-2.1.0/lib/*:/usr/lib/hive-2.1.0/jdbc/*:/usr/lib/spark/lib/*
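
With both spark-assembly.jar (which bundles its own older Hive classes in Spark 1.6) and the Hive 2.1.0 jars on the driver classpath, `HiveConf` can resolve from either jar. A small sketch for listing every classpath entry that provides a given class, to spot such duplicates; the `HiveConf.class` resource path mentioned in the comment is what you would pass on the real driver, while the default argument is this class itself so the snippet runs anywhere:

```java
import java.net.URL;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

// Sketch: list every classpath entry that provides a given class resource,
// to spot version conflicts (e.g. HiveConf coming from both
// spark-assembly.jar and the hive-2.1.0 jars). On the driver you would pass
// "org/apache/hadoop/hive/conf/HiveConf.class"; the default below is this
// class's own .class file so the sketch runs anywhere.
public class DuplicateClassFinder {
    public static List<String> locate(String resourcePath) throws Exception {
        List<String> hits = new ArrayList<>();
        Enumeration<URL> urls =
                DuplicateClassFinder.class.getClassLoader().getResources(resourcePath);
        while (urls.hasMoreElements()) {
            hits.add(urls.nextElement().toString());
        }
        return hits;
    }

    public static void main(String[] args) throws Exception {
        String res = args.length > 0 ? args[0] : "DuplicateClassFinder.class";
        for (String hit : locate(res)) {
            System.out.println(hit);  // one line per classpath entry that has it
        }
    }
}
```

If the class shows up in more than one jar, whichever entry comes first on the classpath determines which version the JVM actually loads.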

But it didn't help at all. I'm really out of ideas.

No answers yet.
