Spark HBase: how to convert a DataFrame to an HBase org.apache.hadoop.hbase.client.Result

nwnhqdif · posted 2021-06-09 in Hbase

I have a method to test, and one of its parameters is an HBase Result (org.apache.hadoop.hbase.client.Result). I have some HBase result data that I saved to a file, created a DataFrame for it, and loaded it.
I want to pass this DataFrame data to my method to test some functionality. The problem is that the method expects it as a Result.
I need help converting a Spark DataFrame to an HBase org.apache.hadoop.hbase.client.Result.
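
To illustrate the setup (the actual method under test is not shown in the question, so the signature below is purely hypothetical), the goal is roughly:

import org.apache.hadoop.hbase.client.Result
import org.apache.spark.sql.DataFrame

// hypothetical method under test: it consumes an HBase Result
def methodUnderTest(result: Result): Unit = ???

// desired helper: turn DataFrame rows into Results so they can be passed in
// (this is what the answer below builds via an RDD)
def toResults(df: DataFrame): Array[Result] = ???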

kqqjbcuj #1

I took a DataFrame and tried to extract org.apache.hadoop.hbase.client.Result objects from it. This can be done by going through an RDD.

import scala.collection.JavaConversions._
import scala.collection.mutable.ListBuffer
import scala.math.BigInt

import org.apache.spark._
import org.apache.spark.rdd._
import org.apache.spark.sql._

import org.apache.hadoop.hbase.{Cell, CellUtil}
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable

object HbaseDFToResult extends App {

  val config = new SparkConf().setAppName("test").setMaster("local[*]")

  // org.apache.hadoop.hbase.client.Result is not Java-serializable,
  // so switch to the Kryo serializer and register the class with Kryo.
  config.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  config.registerKryoClasses(Array(classOf[org.apache.hadoop.hbase.client.Result]))
  val spark = SparkSession.builder().config(config).getOrCreate()

  // sample data standing in for the DataFrame loaded from the file
  val mytests = Seq((1, "test1"), (2, "test2"), (3, "test3"), (4, "test4"))

  import spark.implicits._

  val df = mytests.toDF("col1", "col2")

  // map each Row to an (ImmutableBytesWritable, Result) pair via the underlying RDD
  val counts: RDD[(ImmutableBytesWritable, Result)] = df.rdd.map { row =>
    // use col1 as the row key
    val key = row.getAs[Int]("col1")
    val keyByteArray = BigInt(key).toByteArray
    val ibw = new ImmutableBytesWritable()
    ibw.set(keyByteArray)

    // build a single Cell from col2; note that CellUtil.createCell(byte[])
    // treats its argument as the cell's row, which is why the strings show up
    // in the row position of the printed Results below
    val value = row.getAs[String]("col2")
    val valueByteArray = value.getBytes()
    val cellList = List(CellUtil.createCell(valueByteArray))
    val cells: java.util.List[Cell] = ListBuffer(cellList: _*)
    val result = Result.create(cells)

    (ibw, result)
  }

  // collect the Results back to the driver and print them
  val results: Array[Result] = counts.map(_._2).collect()
  results.foreach(println)
}
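
Once collected, the Result objects can be handed to the method under test. A minimal sketch of reading the data back out of one Result (rawCells, CellUtil.cloneRow and Bytes.toString are standard HBase client APIs; the dumpResult helper itself is just for illustration):

import org.apache.hadoop.hbase.CellUtil
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.util.Bytes

// Sketch: inspect a Result produced above. Because the cells were created with
// CellUtil.createCell(byte[]), the original string lives in the cell's row bytes.
def dumpResult(result: Result): Unit = {
  result.rawCells().foreach { cell =>
    println(Bytes.toString(CellUtil.cloneRow(cell))) // prints e.g. "test1"
  }
}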

Log output:

/Library/Java/JavaVirtualMachines/jdk1.8.0_191.jdk/Contents/Home/bin/java "-javaagent:/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar=60498:/Applications/IntelliJ IDEA 
....

2019-05-01 15:41:21 INFO  DAGScheduler:54 - Job 0 finished: collect at HbaseDFToResult.scala:41, took 0.568670 s
keyvalues={test1//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0}
keyvalues={test2//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0}
keyvalues={test3//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0}
keyvalues={test4//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0}
2019-05-01 15:41:21 INFO  SparkContext:54 - Invoking stop() from shutdown hook
2019-05-01 15:41:21 INFO  AbstractConnector:310 - Stopped Spark@4215838f{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-05-01 15:41:21 INFO  SparkUI:54 - Stopped Spark web UI at http://10.219.20.238:4040
2019-05-01 15:41:21 INFO  MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2019-05-01 15:41:21 INFO  MemoryStore:54 - MemoryStore cleared
2019-05-01 15:41:21 INFO  BlockManager:54 - BlockManager stopped
2019-05-01 15:41:21 INFO  BlockManagerMaster:54 - BlockManagerMaster stopped
2019-05-01 15:41:21 INFO  OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2019-05-01 15:41:21 INFO  SparkContext:54 - Successfully stopped SparkContext
2019-05-01 15:41:21 INFO  ShutdownHookManager:54 - Shutdown hook called
2019-05-01 15:41:21 INFO  ShutdownHookManager:54 - Deleting directory /private/var/folders/mp/xydn5gdj4b51qgc7lsqzrft40000gp/T/spark-a9d46422-f21a-4f2b-98b0-a73238d20dee
Process finished with exit code 0
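
As the printed Results show, each cell carries only a row key (vlen=0), because CellUtil.createCell(byte[]) sets nothing but the row. If the method under test also needs a column family, qualifier and value, the cell can be built as a KeyValue instead; a sketch, where the "cf" family and "col2" qualifier are placeholder names, not anything taken from the original question:

import org.apache.hadoop.hbase.{Cell, KeyValue}
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.util.Bytes

import scala.collection.JavaConversions._
import scala.collection.mutable.ListBuffer

// Sketch: build a Result whose single cell has row, family, qualifier and value set.
def toFullResult(rowKey: Array[Byte], value: String): Result = {
  val kv = new KeyValue(rowKey, Bytes.toBytes("cf"), Bytes.toBytes("col2"), Bytes.toBytes(value))
  val cells: java.util.List[Cell] = ListBuffer[Cell](kv)
  Result.create(cells)
}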
