Unable to stream Avro-formatted data from a Kafka Debezium connector

mnemlml8 · posted 2021-06-07 in Kafka

I am streaming MongoDB oplog data through Kafka, using the Debezium CDC Kafka connector to tail the Mongo oplog.
Keys and values are serialized with the Confluent AvroConverter backed by the Schema Registry. The Kafka Connect worker configuration is:
bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
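The Debezium connector registration itself is not shown in the question. Purely for context, a minimal standalone-mode MongoDB connector properties file might look roughly like the sketch below; the connector name, replica-set address, and collection filter are assumptions inferred from the topic name prodCollection.inventory.Prod rather than taken from the question, and the exact property names vary across Debezium versions:

# hypothetical connector config, not from the question
name=mongo-oplog-source
connector.class=io.debezium.connector.mongodb.MongoDbConnector
mongodb.hosts=rs0/localhost:27017
mongodb.name=prodCollection
collection.whitelist=inventory.Prod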
The following code streams the Kafka data and deserializes it using Confluent's KafkaAvroDeserializer:

import io.confluent.kafka.schemaregistry.client.rest.RestService
import io.confluent.kafka.serializers.KafkaAvroDeserializer
import org.apache.avro.Schema
import org.apache.spark.sql.SparkSession
import scala.collection.JavaConverters._

object KafkaStream {
  case class DeserializedFromKafkaRecord(key: String, value: String)

  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession
      .builder
      .master("local[*]")
      .appName("kafka")
      .getOrCreate()

    //sparkSession.sparkContext.setLogLevel("ERROR")
    import sparkSession.implicits._

    val schemaRegistryURL = "http://127.0.0.1:8081"
    val topicName = "prodCollection.inventory.Prod"
    val subjectValueName = topicName + "-value"

    //create RestService object
    val restService = new RestService(schemaRegistryURL)

    //.getLatestVersion returns io.confluent.kafka.schemaregistry.client.rest.entities.Schema object.
    val valueRestResponseSchema = restService.getLatestVersion(subjectValueName)

    //Use Avro parsing classes to get Avro Schema
    val parser = new Schema.Parser
    val topicValueAvroSchema: Schema = parser.parse(valueRestResponseSchema.getSchema)

    //key schema is typically just string but you can do the same process for the key as the value
    val keySchemaString = "\"string\""
    val keySchema = parser.parse(keySchemaString)

    //Create a map with the Schema Registry url.
    //This is the only required configuration for Confluent's KafkaAvroDeserializer.
    val props = Map("schema.registry.url" -> schemaRegistryURL)

    //Declare SerDe vars before using Spark structured streaming map. Avoids non-serializable class exception.
    var keyDeserializer: KafkaAvroDeserializer = null
    var valueDeserializer: KafkaAvroDeserializer = null

    //Create structured streaming DF to read from the topic.
    val rawTopicMessageDF = sparkSession.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", topicName)
      .option("startingOffsets", "earliest")
      .option("key.deserializer", "KafkaAvroDeserializer")
      .option("value.deserializer", "KafkaAvroDeserializer")
      //.option("maxOffsetsPerTrigger", 20) //remove for prod
      .load()

    rawTopicMessageDF.printSchema()

    //Instantiate the SerDe classes if not already, then deserialize!
    val deserializedTopicMessageDS = rawTopicMessageDF.map { row =>
      if (keyDeserializer == null) {
        keyDeserializer = new KafkaAvroDeserializer
        keyDeserializer.configure(props.asJava, true) //isKey = true
      }
      if (valueDeserializer == null) {
        valueDeserializer = new KafkaAvroDeserializer
        valueDeserializer.configure(props.asJava, false) //isKey = false
      }

      //Pass the Avro schema.
      val deserializedKeyString = keyDeserializer.deserialize(topicName, row.getAs[Array[Byte]]("key"), keySchema).toString //topic name is actually unused in the source code, just required by the signature. Weird right?
      val deserializedValueJsonString = valueDeserializer.deserialize(topicName, row.getAs[Array[Byte]]("value"), topicValueAvroSchema).toString

      DeserializedFromKafkaRecord(deserializedKeyString, deserializedValueJsonString)
    }

    deserializedTopicMessageDS.printSchema()

    deserializedTopicMessageDS.writeStream
      .outputMode("append")
      .format("console")
      .option("truncate", false)
      .start()
  }
}
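Note that main returns immediately after .start() without keeping a reference to the StreamingQuery it returns. As a hedged sketch only (not necessarily related to the behaviour reported below), the usual pattern is to hold the handle and block the driver on it:

    val query = deserializedTopicMessageDS.writeStream
      .outputMode("append")
      .format("console")
      .option("truncate", false)
      .start()

    // Block the driver until the streaming query terminates or fails,
    // so the JVM does not exit as soon as main returns.
    query.awaitTermination()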

The schema of the deserializedTopicMessageDS dataset is converted as required, but the stream is stopped with the following messages:

root
 |-- key: binary (nullable = true)
 |-- value: binary (nullable = true)
 |-- topic: string (nullable = true)
 |-- partition: integer (nullable = true)
 |-- offset: long (nullable = true)
 |-- timestamp: timestamp (nullable = true)
 |-- timestampType: integer (nullable = true)

root
 |-- key: string (nullable = true)
 |-- value: string (nullable = true)

18/08/13 22:53:54 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
18/08/13 22:53:54 INFO StreamExecution: Starting [id = b1fb3ce2-08d0-4d87-b031-af129432d91a, runId = 38b66e4a-040f-42c8-abbe-bc27fa3b9462]. Use /private/var/folders/zf/6dh44_fx1sn2dp2w7d_54wg80000gn/T/temporary-ae7a93f6-0307-4f39-ba44-93d5d3d7c0ab to store the query checkpoint.
18/08/13 22:53:54 INFO SparkContext: Invoking stop() from shutdown hook
18/08/13 22:53:54 INFO SparkUI: Stopped Spark web UI at http://192.168.0.100:4040
18/08/13 22:53:54 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/08/13 22:53:54 INFO MemoryStore: MemoryStore cleared
18/08/13 22:53:54 INFO BlockManager: BlockManager stopped
18/08/13 22:53:54 INFO BlockManagerMaster: BlockManagerMaster stopped
18/08/13 22:53:54 INFO ConsumerConfig: ConsumerConfig values:
	auto.commit.interval.ms = 5000
	auto.offset.reset = earliest
	bootstrap.servers = [localhost:9092]
	check.crcs = true
	client.id =
	connections.max.idle.ms = 540000
	default.api.timeout.ms = 60000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = spark-kafka-source-b9f0f64b-952d-4733-ba3e-aa753954b2ef--1115279952-driver-0
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 1
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
18/08/13 22:53:54 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/08/13 22:53:54 INFO SparkContext: Successfully stopped SparkContext
18/08/13 22:53:54 INFO ShutdownHookManager: Shutdown hook called
18/08/13 22:53:54 INFO ShutdownHookManager: Deleting directory /private/var/folders/zf/6dh44_fx1sn2dp2w7d_54wg80000gn/T/spark-e1c2b259-39f2-4d65-9919-74ab1ad6acae
18/08/13 22:53:54 INFO ShutdownHookManager: Deleting directory /private/var/folders/zf/6dh44_fx1sn2dp2w7d_54wg80000gn/T/temporary-ae7a93f6-0307-4f39-ba44-93d5d3d7c0ab
Process finished with exit code 0
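In the log above, "SparkContext: Invoking stop() from shutdown hook" appears immediately after "StreamExecution: Starting ..." and the process exits with code 0, which suggests the driver JVM is shutting down normally rather than the query failing. When a query does keep running but produces nothing, its state can be inspected through the handle returned by .start(); a small sketch, where query is the hypothetical handle from the snippet shown after the code above:

    // Diagnostics on the StreamingQuery handle (Spark structured streaming API)
    println(query.status)        // whether a trigger is active and data is available
    println(query.lastProgress)  // most recent micro-batch progress; null before the first batch
    query.exception.foreach(_.printStackTrace()) // set only if the query terminated with an error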
