I am trying to read a Kafka topic and stream it into my Flume. To read the data, I wrote the code below.
The topic data, in JSON:
{
  "HiveData": {
    "Tablename": "HiveTablename1",
    "Rowcount": "3213423",
    "lastupdateddate": "2021-02-24 13:04:14"
  },
  "HbaseData": [
    {
      "Tablename": "HbaseTablename1",
      "Rowcount": "23543",
      "lastupdateddate": "2021-02-23 12:03:11"
    }
  ],
  "PostgresData": [
    {
      "Tablename": "PostgresTablename1",
      "Rowcount": "23454345",
      "lastupdateddate": "2021-02-23 12:03:11"
    }
  ]
}
Below is the code I wrote to parse the topic:
def streamData(): DataFrame = {
  val kafkaDF = spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "server:port")
    .option("subscribe", "topic_name")
    .load()

  // Return the parsed HiveData fields
  kafkaDF.select(from_json(col("HiveData"), topic_schema).as("HiveData"))
    .selectExpr(
      "HiveData.tablename as table",
      "HiveData.Rowcount as rowcount",
      "HiveData.lastupdateddate as lastupdateddate")
}
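(topic_schema is not defined above; a schema matching the nested sample might look roughly like the sketch below. The field names come from the JSON; everything else is an assumption.)

import org.apache.spark.sql.types._

// Hypothetical schema for the nested payload above; the actual topic_schema is not shown
val tableEntry = StructType(Seq(
  StructField("Tablename", StringType),
  StructField("Rowcount", StringType),
  StructField("lastupdateddate", StringType)))

val topic_schema = StructType(Seq(
  StructField("HiveData", tableEntry),
  StructField("HbaseData", ArrayType(tableEntry)),
  StructField("PostgresData", ArrayType(tableEntry))))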
But this only works if the JSON is in this format:
{"Tablename": "HiveTablename1","Rowcount": "3213423","lastupdateddate": "2021-02-24 13:04:14"}
I want to parse the JSON and turn HiveData into one DataFrame, with separate DataFrames for HbaseData and PostgresData. The code I wrote works if the JSON data is on a single line. Can anyone tell me how to parse it into multiple DataFrames when the data is in the nested format shown at the beginning of this question? Any help is much appreciated.
1 Answer
Try adding:
option("multiline", "true")
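A minimal sketch of where that option could go, assuming the Kafka value column carries the full nested JSON and a topic_schema covering all three sections: from_json accepts an options map, and each section can then be selected into its own DataFrame (the variable names are illustrative):

import org.apache.spark.sql.functions.{col, explode, from_json}

// Parse the Kafka value (cast from binary to string) against the full nested schema;
// the options map is where the suggested "multiline" setting goes.
val parsed = kafkaDF
  .select(from_json(col("value").cast("string"), topic_schema,
    Map("multiline" -> "true")).as("data"))

// One DataFrame per section of the payload
val hiveDF     = parsed.select("data.HiveData.*")
val hbaseDF    = parsed.select(explode(col("data.HbaseData")).as("h")).select("h.*")
val postgresDF = parsed.select(explode(col("data.PostgresData")).as("p")).select("p.*")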