This happened to me while running a Spark SQL job, but when I ran the same job again later it worked. I don't understand what is going on, because the error then came back again. If you have any ideas, please help me. The executor stack trace is:
at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:71)
at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:81)
at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:96)
at com.joinf.hbase.exportData.toMySQL.SparkOnSQLReplaceToHbase$$anonfun$main$1.apply(SparkOnSQLReplaceToHbase.scala:98)
at com.joinf.hbase.exportData.toMySQL.SparkOnSQLReplaceToHbase$$anonfun$main$1.apply(SparkOnSQLReplaceToHbase.scala:96)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2118)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2118)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
1 answer
In my case, the job ran smoothly after I increased the driver memory, so I suspect the problem is memory-related.
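As a sketch of that fix: driver memory can be raised at submit time with the standard `--driver-memory` flag (or the `spark.driver.memory` property). The jar path, class name, and the 4g value below are placeholders for illustration; the class name is taken from the stack trace.

```shell
# Resubmit the job with more driver memory (4g here is an example value;
# tune it for your workload). Executor memory can be raised the same way
# if the failure happens on executors, as this stack trace suggests.
spark-submit \
  --class com.joinf.hbase.exportData.toMySQL.SparkOnSQLReplaceToHbase \
  --driver-memory 4g \
  --executor-memory 4g \
  /path/to/your-job.jar
```

Equivalently, `spark.driver.memory` and `spark.executor.memory` can be set via `--conf` or in `spark-defaults.conf`; note that driver memory must be set before the JVM starts, so setting it programmatically on an already-running `SparkContext` has no effect.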