scala - Spark 2.2.0 standalone mode: error when writing a DataFrame to a local single-node Kafka

Asked by xzv2uavs on 2021-06-05 in Kafka

The data comes from the Databricks notebook demo "Five Spark SQL Utility Functions to Extract and Explore Complex Data Types".
But when I try the same code on my own laptop, it always fails.
First, I load the JSON data as a DataFrame:

res2: org.apache.spark.sql.DataFrame = [battery_level: string, c02_level: string]

scala> res2.show
+-------------+---------+
|battery_level|c02_level|
+-------------+---------+
|            7|      886|
|            5|     1378|
|            8|      917|
|            8|     1504|
|            8|      831|
|            9|     1304|
|            8|     1574|
|            9|     1208|
+-------------+---------+
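
For reference, one minimal way to produce a DataFrame of this shape in spark-shell (the question elides the load step, so the file path below is hypothetical):

// Hypothetical input file; each line holds one JSON record such as
// {"battery_level": "7", "c02_level": "886"}
val res2 = spark.read.json("/tmp/iot_devices.json")
  .select("battery_level", "c02_level")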

Second, I write the data to Kafka:

res2.write 
  .format("kafka") 
  .option("kafka.bootstrap.servers", "localhost:9092") 
  .option("topic", "test") 
  .save()

All of this follows the notebook demo above and the official steps,
but I get this error:

scala> res2.write 
         .format("kafka") 
         .option("kafka.bootstrap.servers", "localhost:9092") 
         .option("topic", "iot-devices") 
         .save()
org.apache.spark.sql.AnalysisException: Required attribute 'value' not found;
  at org.apache.spark.sql.kafka010.KafkaWriter$$anonfun$6.apply(KafkaWriter.scala:72)
  at org.apache.spark.sql.kafka010.KafkaWriter$$anonfun$6.apply(KafkaWriter.scala:72)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.kafka010.KafkaWriter$.validateQuery(KafkaWriter.scala:71)
  at org.apache.spark.sql.kafka010.KafkaWriter$.write(KafkaWriter.scala:87)
  at org.apache.spark.sql.kafka010.KafkaSourceProvider.createRelation(KafkaSourceProvider.scala:165)
  at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
  ... 52 elided

I assumed it might be a Kafka problem, so I tested reading a DataFrame from Kafka to confirm connectivity:

scala> val kaDF = spark.read
         .format("kafka") 
         .option("kafka.bootstrap.servers", "localhost:9092") 
         .option("subscribe", "iot-devices") 
         .load()
kaDF: org.apache.spark.sql.DataFrame = [key: binary, value: binary ... 5 more fields]

scala> kaDF.show
+----+--------------------+-----------+---------+------+--------------------+-------------+
| key|               value|      topic|partition|offset|           timestamp|timestampType|
+----+--------------------+-----------+---------+------+--------------------+-------------+
|null|    [73 73 73 73 73]|iot-devices|        0|     0|2017-09-27 11:11:...|            0|
|null|[64 69 63 6B 20 3...|iot-devices|        0|     1|2017-09-27 11:29:...|            0|
|null|       [78 69 78 69]|iot-devices|        0|     2|2017-09-27 11:29:...|            0|
|null|[31 20 32 20 33 2...|iot-devices|        0|     3|2017-09-27 11:30:...|            0|
+----+--------------------+-----------+---------+------+--------------------+-------------+
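
As an aside, key and value come back from Kafka as raw bytes; casting them to strings is the standard Spark idiom for making the payloads legible (shown here only for readability, it is unrelated to the write problem):

kaDF.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)").show(false)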

So the result shows that reading data from topic "iot-devices" via kafka.bootstrap.servers localhost:9092 definitely works.
I have searched a lot online but still cannot solve this.
Can anyone with Spark SQL experience tell me what is wrong with my command?
Thanks!


Answer 1 (tjjdgumg)

The error message shows exactly where the problem is:

org.apache.spark.sql.AnalysisException: Required attribute 'value' not found;

The Dataset being written must contain at least a value column (key and topic are optional), and res2 only has battery_level and c02_level.
For example, you can do:

import org.apache.spark.sql.functions._

res2.select(to_json(struct($"battery_level", $"c02_level")).alias("value"))
  .write
  ...
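
Filling in the elided part, a complete batch write against the asker's setup might look like this (a sketch; the broker localhost:9092 and topic iot-devices are taken from the question, and the $-interpolator needs import spark.implicits._ outside spark-shell):

import org.apache.spark.sql.functions.{struct, to_json}

// Serialize the two columns into one JSON string and expose it as the
// required `value` column, then write it with the batch Kafka sink.
res2.select(to_json(struct($"battery_level", $"c02_level")).alias("value"))
  .write
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "iot-devices")
  .save()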
