I am writing a Spark application that takes transaction data from Hive and joins it with location data from an HBase table. Basically, the end goal is to tell where each transaction took place by joining the lat and long from the HBase table onto the transaction data from Hive. However, when I convert the joined dataset into a DataFrame, I keep getting a NullPointerException.
The exception appears whenever I use any of the following:
.toDF()
.createDataFrame()
.parallelize(.toSeq)
At first I thought some columns contained nulls, so I used Option().toString to make sure there were no null values, but the error still appears when I call any of the three methods above.
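For clarity, this is the kind of guard I mean (a minimal generic snippet, not my actual code; note that Option(x).getOrElse("") yields the raw value or an empty default, while Option(x).toString yields the rendered text "Some(x)" or "None"):

// Generic illustration of the null guards, unrelated to the project tables.
val missing: String = null
Option(missing).getOrElse("")   // ""            - empty default, never null
Option(missing).toString        // "None"        - printable marker text
Option("ATM01").getOrElse("")   // "ATM01"       - the raw value
Option("ATM01").toString        // "Some(ATM01)" - the wrapped rendering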
I can also confirm that placeholder_Iterator.toStream is not empty when I print out its data.
I have to use foreachPartition because getATMLocation() connects to the HBase table to fetch the lat and long; without foreachPartition I get a serialization error. Here is the code for the function:
def getATMLocation(colFamily: String, search_item: String, table: Table) = {
  // Scan only the key, latitude and longitude columns of the ATM dictionary.
  val scanner = new Scan()
  scanner
    .addColumn(colFamily.getBytes(), atm_dict_key.getBytes())
    .addColumn(colFamily.getBytes(), atm_dict_lat.getBytes())
    .addColumn(colFamily.getBytes(), atm_dict_long.getBytes())
  // Keep only rows whose key column equals the (null-guarded) search item.
  val filter = new SingleColumnValueFilter(colFamily.getBytes, atm_dict_key.getBytes(), CompareOp.EQUAL, Option(search_item).getOrElse("").toString.getBytes())
  scanner.setFilter(filter)
  val atm_locations = table.getScanner(scanner)
  // Only the first match is read; both coordinates stay null if nothing matched.
  val location = atm_locations.next()
  val longitude = location match {
    case null => null
    case _ => Option(Bytes.toString(location.getValue(colFamily.getBytes(), atm_dict_long.getBytes()))).getOrElse("")
  }
  val latitude = location match {
    case null => null
    case _ => Option(Bytes.toString(location.getValue(colFamily.getBytes(), atm_dict_lat.getBytes()))).getOrElse("")
  }
  atm_locations.close()
  (longitude, latitude)
}
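On its own the function is used like this (a hypothetical standalone call; the real arguments come from the driver loop below):

// Hypothetical invocation; "cf" and "ATM01" are placeholder values.
val (longitude, latitude) = getATMLocation("cf", "ATM01", atm_dict_table)
println(s"long=$longitude lat=$latitude") // both are null when no row matches the filter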
Here is the problematic code for reference:
val max_records = sql(hive_query_1 + " " + period_clause.replace("|date|", "01-11-2018")).select("transac_count").as[String].collect()(0).toInt
val max_page = math.ceil(max_records.toDouble / page_limit.toDouble).toInt
val start_row = 0
val end_row = page_limit.toInt
if (max_records > 0) {
  for (page <- 0 to max_page - 1) {
    // Page through the Hive transactions for the period.
    val hiveDF = sql("SELECT " + hive_columns + " FROM (" + (hive_query_2 + " " + period_clause.replace("|date|", "01-11-2018")
      ) + ") as trans_data WHERE rowid BETWEEN " + (start_row + (page * page_limit.toInt)).toString + " AND " + ((end_row + (page * page_limit.toInt)) - 1).toString)
      .withColumn("uuid", timeUUID())
      .withColumn("created_dt", current_timestamp())
    hiveDF.show()
    hiveDF.rdd.foreachPartition { iter =>
      // One HBase connection per partition, created on the executor.
      val hbaseconfig = HBaseConfiguration.create()
      hbaseconfig.set("keytab.file", keytab)
      val hbase_connection = ConnectionFactory.createConnection(hbaseconfig)
      val table = hbase_connection.getTable(TableName.valueOf(hbase_table))
      val regionLoc = hbase_connection.getRegionLocator(table.getName)
      val admin = hbase_connection.getAdmin
      val atm_dict_table = hbase_connection.getTable(TableName.valueOf(atm_dict_tbl))
      val placeholder_Iterator = iter.map(r => {
        val location = Query.getATMLocation(atm_dict_col_family, Option(r.get(14)).getOrElse("").toString, atm_dict_table)
        (Option(r.get(0)).toString, Option(r.get(1)).toString, Option(r.get(2)).toString, Option(r.get(3)).toString, Option(r.get(4)).toString, Option(r.get(5)).toString, Option(r.get(6)).toString, Option(r.get(7)).toString, Option(r.get(8)).toString, Option(r.get(9)).toString, Option(r.get(10)).toString, Option(r.get(11)).toString, Option(r.get(12)).toString, Option(r.get(13)).toString, Option(r.get(14)).toString, Option(r.get(15)).toString, Option(r.get(16)).toString, Option(location._1).toString, Option(location._2).toString)
      })
      // The NPE is thrown here: toDF is being called on an executor.
      val test = placeholder_Iterator.toStream.toDF(new_column_names: _*)
      test.foreach(x => println(x))
    }
  }
}
Here is the error that gets returned:
java.lang.NullPointerException
at org.apache.spark.sql.SQLImplicits.localSeqToDatasetHolder(SQLImplicits.scala:228)
at TransactionData$$anonfun$main$2$$anonfun$apply$1$$anonfun$apply$mcVI$sp$1.apply(TransactionData.scala:109)
at TransactionData$$anonfun$main$2$$anonfun$apply$1$$anonfun$apply$mcVI$sp$1.apply(TransactionData.scala:94)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:935)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:935)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:381)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I am really hoping the joined data can be converted into a DataFrame so that I can write it to an HFile and bulk load it into HBase.
1 Answer
I found the answer. The NullPointerException happens because a DataFrame, RDD, or Dataset can only exist on the driver. This post explains it:
Spark: How to create a local DataFrame in each executor
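Concretely, the fix that follows from this is to do the per-partition HBase lookup with mapPartitions, return plain tuples from the executors, and only call toDF back on the driver. A minimal sketch of that idea (untested; only four of the columns are shown, and the column names in the final toDF are placeholders):

// Enrich rows on the executors, but defer DataFrame creation to the driver.
val enrichedRDD = hiveDF.rdd.mapPartitions { iter =>
  val hbaseconfig = HBaseConfiguration.create()
  hbaseconfig.set("keytab.file", keytab)
  val hbase_connection = ConnectionFactory.createConnection(hbaseconfig)
  val atm_dict_table = hbase_connection.getTable(TableName.valueOf(atm_dict_tbl))
  // Materialize before closing the connection: the mapped iterator is lazy
  // and would otherwise try to read from a closed table.
  val rows = iter.map { r =>
    val location = Query.getATMLocation(atm_dict_col_family, Option(r.get(14)).getOrElse("").toString, atm_dict_table)
    (Option(r.get(0)).getOrElse("").toString,
      Option(r.get(14)).getOrElse("").toString,
      Option(location._1).getOrElse(""),
      Option(location._2).getOrElse(""))
  }.toList
  hbase_connection.close()
  rows.iterator
}
// Back on the driver, converting to a DataFrame is safe.
import spark.implicits._ // assumes a SparkSession named spark is in scope
val joinedDF = enrichedRDD.toDF("txn_col0", "atm_id", "longitude", "latitude")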