Reading gz.parquet files

hrysbysz asked on 2021-06-07, in Kafka

Hi, I need to read data from gz.parquet files but don't know how. I tried Impala, but I get the same result as with parquet-tools cat: the content comes back without the table structure.
P.S.: Any suggestions for improving the Spark code are very welcome.

I have the following gz.parquet files as the result of a data pipeline built as Twitter => Flume => Kafka => Spark Streaming => Hive. The Flume agent is configured with agent1.sources.twitter-data.type = org.apache.flume.source.twitter.TwitterSource. The Spark code pulls the data out of Kafka and stores it in Hive as follows:

import kafka.serializer.StringDecoder

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val sparkConf = new SparkConf().setAppName("KafkaTweet2Hive")
val sc = new SparkContext(sparkConf)
val ssc = new StreamingContext(sc, Seconds(2))
// A HiveContext is needed so that saveAsTable writes into the Hive warehouse
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

// Create a direct Kafka stream with the given brokers and topics
// (brokers and topics come from the application arguments)
val topicsSet = topics.split(",").toSet
val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topicsSet)

// Get the data (tweets) from Kafka; each message is a (key, value) pair
val tweets = messages.map(_._2)

// Append the tweets to Hive
tweets.foreachRDD { rdd =>
  // Reuse the existing context (returns the HiveContext created above)
  val hiveContext = SQLContext.getOrCreate(rdd.sparkContext)
  import hiveContext.implicits._ // was sqlContext.implicits._, which referenced the wrong instance

  // Name the column explicitly so the Parquet schema shows "tweet"
  val tweetsDF = rdd.toDF("tweet")
  tweetsDF.write.mode("append").saveAsTable("tweets")
}

ssc.start()
ssc.awaitTermination()
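For reference, such a job is launched with spark-submit like any other Spark Streaming application; a hypothetical command, where the jar name and the broker/topic arguments are assumptions (local[2] because a streaming job needs at least two cores):

spark-submit --class KafkaTweet2Hive --master local[2] kafkatweet2hive.jar broker1:9092 tweets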

When I run the Spark Streaming application, it stores the data as gz.parquet files under the hdfs:/user/hive/warehouse directory, like this:

[root@quickstart /]# hdfs dfs -ls /user/hive/warehouse/tweets
Found 469 items
-rw-r--r--   1 root supergroup          0 2016-03-30 08:36 /user/hive/warehouse/tweets/_SUCCESS
-rw-r--r--   1 root supergroup        241 2016-03-30 08:36 /user/hive/warehouse/tweets/_common_metadata
-rw-r--r--   1 root supergroup      35750 2016-03-30 08:36 /user/hive/warehouse/tweets/_metadata
-rw-r--r--   1 root supergroup      23518 2016-03-30 08:33 /user/hive/warehouse/tweets/part-r-00000-0133fcd1-f529-4dd1-9371-36bf5c3e5df3.gz.parquet
-rw-r--r--   1 root supergroup       9552 2016-03-30 08:33 /user/hive/warehouse/tweets/part-r-00000-02c44f98-bfc3-47e3-a8e7-62486a1a45e7.gz.parquet
-rw-r--r--   1 root supergroup      19228 2016-03-30 08:25 /user/hive/warehouse/tweets/part-r-00000-0321ce99-9d2b-4c52-82ab-a9ed5f7d5036.gz.parquet
-rw-r--r--   1 root supergroup        241 2016-03-30 08:25 /user/hive/warehouse/tweets/part-r-00000-03415df3-c719-4a3a-90c6-462c43cfef54.gz.parquet

The schema in the _metadata file looks as follows:

[root@quickstart /]# parquet-tools meta hdfs://quickstart.cloudera:8020/user/hive/warehouse/tweets/_metadata
creator:       parquet-mr version 1.5.0-cdh5.5.0 (build ${buildNumber}) 
extra:         org.apache.spark.sql.parquet.row.metadata = {"type":"struct","fields":[{"name":"tweet","type":"string","nullable":true,"metadata":{}}]} 

file schema:   root 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
tweet:         OPTIONAL BINARY O:UTF8 R:0 D:1
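As a sanity check, the same schema can be read back through Spark itself rather than parquet-tools; a minimal spark-shell sketch, assuming the sqlContext and HDFS path from above:

sqlContext.read.parquet("hdfs://quickstart.cloudera:8020/user/hive/warehouse/tweets").printSchema()
// Expected output, matching the _metadata above:
// root
//  |-- tweet: string (nullable = true)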

Furthermore, if I load the data into a DataFrame in Spark, the output of `df.show` is the following:

+--------------------+
|               tweet|
+--------------------+
|��Objavro.sc...|
|��Objavro.sc...|
|��Objavro.sc...|
|ڕObjavro.sch...|
|��Objavro.sc...|
|ֲObjavro.sch...|
|��Objavro.sc...|
|��Objavro.sc...|
|֕Objavro.sch...|
|��Objavro.sc...|
|��Objavro.sc...|
|��Objavro.sc...|
|��Objavro.sc...|
|��Objavro.sc...|
|��Objavro.sc...|
|��Objavro.sc...|
|��Objavro.sc...|
|��Objavro.sc...|
|��Objavro.sc...|
|��Objavro.sc...|
+--------------------+
only showing top 20 rows

How can I get these tweets as plain text?

yks3o0rb answered:

sqlContext.read.parquet("/user/hive/warehouse/tweets").show()
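Expanding on that one-liner: df.show truncates each cell to 20 characters by default, so passing the truncate flag prints the full column values; a small sketch, assuming the same warehouse path (the show(truncate) overload is available since Spark 1.5):

val df = sqlContext.read.parquet("/user/hive/warehouse/tweets")
df.select("tweet").show(false) // print full cell contents, without the 20-character truncation

Note that if the values still begin with "Objavro..." as in the df.show output above, that prefix is the magic header of an Avro container file: the Flume TwitterSource serializes events as Avro, so the tweet column holds Avro bytes rather than plain text and would need to be deserialized to get at the raw tweets.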
