How do I resolve org.apache.spark.SparkException in sparklyr?

bq8i3lrv · posted 2021-05-27 in Spark

I am a beginner. When I try to use the copy_to() function I get an error. How can I fix it?
My code:

library(sparklyr)
library(babynames)

# Connect to a local Spark instance, requesting Spark version 2.0.1
sc <- spark_connect(master = "local", version = "2.0.1")

# Copy the babynames data frame into Spark as a table named "babynames"
babynames_tbl <- copy_to(sc, babynames, "babynames")

The error:

Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873)
    at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853)
    at org.apache.spark.util.Utils$.fetchFile(Utils.scala:407)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:430)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:422)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike
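A NullPointerException thrown from java.lang.ProcessBuilder.start via org.apache.hadoop.util.Shell.runCommand and FileUtil.chmod is a pattern commonly reported when Spark runs on Windows without Hadoop's winutils.exe available. As a hedged sketch only (not a confirmed diagnosis for this trace; the HADOOP_HOME variable and the winutils.exe path are assumptions), one could check the environment before calling spark_connect():

```shell
# Sketch: on Windows, Spark's Hadoop shim needs winutils.exe.
# HADOOP_HOME and the bin/winutils.exe layout below are assumptions.
if [ -x "$HADOOP_HOME/bin/winutils.exe" ]; then
  echo "winutils.exe found"
else
  echo "winutils.exe missing"
fi
```

If the check reports the file missing, a frequently suggested remedy is to install winutils.exe and set HADOOP_HOME to its parent directory before starting the Spark session.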
