SparkSQL/DataFrames do not work in spark-shell or in an application

ar5n3qh5 · posted 2021-06-10 · in HBase

When I try to follow the example from the HBase reference guide, it does not work and fails with a java.lang.NullPointerException; the full output is below. I guess some object is null and a method is being called on it, but I cannot tell which object it is. Can anyone help? Thanks.

Update: thanks everyone, the problem is solved. I debugged this code in an application with the IntelliJ IDE and found that no HBaseContext had been instantiated; once the HBaseContext object is created, everything works fine, like this: val hbaseContext = new HBaseContext(sc, config, null). A minimal sketch of the fix is shown after the quote below. Reference: https://hbase.apache.org/book.html#_basic_spark
"The root of all Spark and HBase integration is the HBaseContext. The HBaseContext takes in HBase configurations and pushes them to the Spark Executors. This allows us to have an HBase Connection per Spark Executor in a static location."
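For reference, a minimal sketch of the fix in spark-shell, assuming hbase-site.xml is on the classpath (otherwise set "hbase.zookeeper.quorum" and the other connection properties on config by hand). Creating the HBaseContext up front appears to register it in a static location that the org.apache.hadoop.hbase.spark data source later reads, which is why the write no longer hits the NullPointerException in HBaseRelation:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.spark.HBaseContext
import org.apache.spark.sql.datasources.hbase.HBaseTableCatalog

// Assumption: hbase-site.xml is on the classpath; otherwise configure
// "hbase.zookeeper.quorum" etc. on `config` explicitly.
val config = HBaseConfiguration.create()

// Constructing the HBaseContext before the write is the actual fix;
// the third argument (a temp HDFS config file) can be null here.
val hbaseContext = new HBaseContext(sc, config, null)

// The original write from the question now succeeds.
sc.parallelize(data).toDF.write
  .options(Map(HBaseTableCatalog.tableCatalog -> catalog,
               HBaseTableCatalog.newTable -> "5"))
  .format("org.apache.hadoop.hbase.spark")
  .save()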

hadoop@master:~$ spark-1.6.0-bin-hadoop2.4/bin/spark-shell --jars /home/yang/Downloads/hbase-spark-2.0.0-20160316.173537-2.jar 
...
SQL context available as sqlContext.
scala> def catalog = s"""{
     |        |"table":{"namespace":"default", "name":"table1"},
     |        |"rowkey":"key",
     |        |"columns":{
     |          |"col0":{"cf":"rowkey", "col":"key", "type":"string"},
     |          |"col1":{"cf":"cf1", "col":"col1", "type":"string"}
     |        |}
     |      |}""".stripMargin
catalog: String

scala> case class HBaseRecord(
     |    col0: String,
     |    col1: String)
defined class HBaseRecord

scala> val data = (0 to 255).map { i =>  HBaseRecord(i.toString, "extra")}
data: scala.collection.immutable.IndexedSeq[HBaseRecord] = Vector(HBaseRecord(0,extra), HBaseRecord(1,extra), HBaseRecord(2,extra), HBaseRecord(3,extra), HBaseRecord(4,extra), HBaseRecord(5,extra), HBaseRecord(6,extra), HBaseRecord(7,extra), HBaseRecord(8,extra), HBaseRecord(9,extra), HBaseRecord(10,extra), HBaseRecord(11,extra), HBaseRecord(12,extra), HBaseRecord(13,extra), HBaseRecord(14,extra), HBaseRecord(15,extra), HBaseRecord(16,extra), HBaseRecord(17,extra), HBaseRecord(18,extra), HBaseRecord(19,extra), HBaseRecord(20,extra), HBaseRecord(21,extra), HBaseRecord(22,extra), HBaseRecord(23,extra), HBaseRecord(24,extra), HBaseRecord(25,extra), HBaseRecord(26,extra), HBaseRecord(27,extra), HBaseRecord(28,extra), HBaseRecord(29,extra), HBaseRecord(30,extra), HBaseRecord(31,extra), HBase...
scala> import org.apache.spark.sql.datasources.hbase
import org.apache.spark.sql.datasources.hbase

scala> sc.parallelize(data).toDF.write.options(Map(HBaseTableCatalog.tableCatalog -> catalog, HBaseTableCatalog.newTable -> "5")).format("org.apache.hadoop.hbase.spark").save()
<console>:35: error: not found: value HBaseTableCatalog
              sc.parallelize(data).toDF.write.options(Map(HBaseTableCatalog.tableCatalog -> catalog, HBaseTableCatalog.newTable -> "5")).format("org.apache.hadoop.hbase.spark").save()
                                                          ^ 

scala> import org.apache.spark.sql.datasources.hbase.{HBaseTableCatalog}
import org.apache.spark.sql.datasources.hbase.HBaseTableCatalog

scala> sc.parallelize(data).toDF.write.options(Map(HBaseTableCatalog.tableCatalog -> catalog, HBaseTableCatalog.newTable -> "5")).format("org.apache.hadoop.hbase.spark").save()
java.lang.NullPointerException
    at org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:125)
    at org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:74)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
    at $iwC$$iwC$$iwC.<init>(<console>:49)
    at $iwC$$iwC.<init>(<console>:51)
    at $iwC.<init>(<console>:53)
    at <init>(<console>:55)
    at .<init>(<console>:59)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

scala>

By the way, here is my environment:
Spark: version 1.6.0
Scala: version 2.10.5 (OpenJDK 64-Bit Server VM, Java 1.8.0_91)
HBase: version 1.2.2
Hadoop: version 2.4.0
