Reading JSON key/values with Hive SQL and Spark

xtfmy6hx, asked on 2021-06-02 in Hadoop
1 answer | 457 views

I am trying to read this JSON file into a Hive table; the top-level keys, i.e. "1", "2", ..., are not consistent across records.

{
    "1":"{\"time\":1421169633384,\"reading1\":130.875969,\"reading2\":227.138275}",
    "2":"{\"time\":1421169646476,\"reading1\":131.240628,\"reading2\":226.810211}",
    "position": 0
}

I only need time, reading1 and reading2 as columns in my Hive table, ignoring position. A combination of Hive queries and Spark map/reduce code would also be fine. Thanks for any help.
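
For reference, the target layout is roughly the following (just a sketch; the table name sensor_readings and the column types are guesses based on the sample values, and hqlContext is the HiveContext created in the update below):

// Hypothetical target table: one row per nested reading, "position" is dropped
hqlContext.sql("""
  CREATE TABLE IF NOT EXISTS sensor_readings (
    time     BIGINT,
    reading1 DOUBLE,
    reading2 DOUBLE
  )
""")
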
Update: this is what I am trying:

import org.apache.spark.sql.hive.HiveContext

val hqlContext = new HiveContext(sc)

// data_loc points at the JSON input
val rdd = sc.textFile(data_loc)

// infer a schema from the JSON lines and expose them as a temp table
val json_rdd = hqlContext.jsonRDD(rdd)
json_rdd.registerTempTable("table123")
json_rdd.printSchema()

hqlContext.sql("SELECT json_val from table123 lateral view explode_map( json_map(*, 'int,string')) x as json_key, json_val ").foreach(println)

It throws the following error:

Exception in thread "main" org.apache.spark.sql.hive.HiveQl$ParseException: Failed to parse: SELECT json_val from temp_hum_table lateral view explode_map( json_map(*, 'int,string')) x as json_key, json_val
    at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:239)
    at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
    at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
mpgws1up (answer 1)

This will work if you rename the keys "1" and "2" to "x1" and "x2" (either in the JSON file or in the RDD):

val resultrdd = sqlContext.sql("SELECT x1.time, x1.reading1, x1.reading2, x2.time, x2.reading1, x2.reading2 from table123")
resultrdd.flatMap(row => Array( (row(0), row(1), row(2)), (row(3), row(4), row(5)) ))
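
The renaming itself can be done on the raw text RDD before schema inference. A minimal sketch, assuming rdd is the text RDD from the question, each JSON record sits on a single line, and the numeric keys only occur as top-level key names; since the nested readings in the sample are escaped JSON strings, the sketch also un-escapes them so that jsonRDD infers x1 and x2 as structs rather than plain strings:

// Hypothetical pre-processing of the raw text before calling jsonRDD
val renamedRdd = rdd.map { line =>
  line
    .replace("\"1\":", "\"x1\":")   // rename top-level key "1" -> "x1"
    .replace("\"2\":", "\"x2\":")   // rename top-level key "2" -> "x2"
    .replace(":\"{", ":{")          // drop the quote that opens the nested object
    .replace("}\"", "}")            // drop the quote that closes it
    .replace("\\\"", "\"")          // un-escape the inner quotes
}
val json_rdd = sqlContext.jsonRDD(renamedRdd)
json_rdd.registerTempTable("table123")
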

This gives you an RDD of tuples containing time, reading1 and reading2. If you need a SchemaRDD, map the rows to a case class inside the flatMap transformation, like this:

case class Record(time: Long, reading1: Double, reading2: Double)

val recordRdd = resultrdd.flatMap(row => Array(
  Record(row.getLong(0), row.getDouble(1), row.getDouble(2)),
  Record(row.getLong(3), row.getDouble(4), row.getDouble(5))
))
val schrdd = sqlContext.createSchemaRDD(recordRdd)
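
From there the typed records can be queried with plain SQL again; a short usage sketch (the table name readings is hypothetical):

// Register the typed SchemaRDD and query it
schrdd.registerTempTable("readings")
sqlContext.sql("SELECT time, reading1, reading2 FROM readings WHERE reading1 > 131.0")
  .collect()
  .foreach(println)
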

Update:
With many nested keys, you can parse the rows generically like this:

val allrdd = sqlContext.sql("SELECT * from table123")
allrdd.flatMap(row => {
    var recs = Array[Record]()
    for (col <- 0 to row.length - 1) {
        row(col) match {
            // each nested struct column becomes one Record; scalar columns
            // such as "position" fall through to the default case and are skipped
            case r: Row => recs = recs :+ Record(r.getLong(2), r.getDouble(0), r.getDouble(1))
            case _ =>
        }
    }
    recs
})
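
One thing to double-check before relying on the hard-coded indexes above: the Record construction assumes the inferred nested struct fields arrive in the order (reading1, reading2, time), which is why time is read with getLong(2) and the readings with getDouble(0) and getDouble(1). Printing the inferred schema confirms the ordering for a given input:

// Verify the field order that Record(r.getLong(2), r.getDouble(0), r.getDouble(1)) relies on
allrdd.printSchema()
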
