Environment (docker):
# 5.5.1
image: confluentinc/cp-zookeeper:latest
# 2.13-2.6.0
image: wurstmeister/kafka:latest
# 5.5.1
image: confluentinc/cp-schema-registry:latest
# 5.5.1
image: confluentinc/cp-kafka-connect:latest
# 0.11.0
image: confluentinc/ksqldb-server:latest
The content of the Kafka topic comes from Kafka Connect (using Debezium).
When I run the query ( select * from user emit changes ), most of the content is shown, but some records are missing.
I checked the ksqlDB server logs and found some error messages:
ksqldb-server | [2020-08-29 12:44:23,008] ERROR {"type":0,"deserializationError":{"errorMessage":"Error deserializing DELIMITED message from topic: pa.new_pa.user","recordB64":null,"cause":["Size of data received by LongDeserializer is not 8"],"topic":"pa.new_pa.user"},"recordProcessingError":null,"productionError":null} (processing.CTAS_USER2_0.KsqlTopic.Source.deserializer:44)
ksqldb-server | [2020-08-29 12:44:23,008] WARN Exception caught during Deserialization, taskId: 0_0, topic: pa.new_pa.user, partition: 0, offset: 23095 (org.apache.kafka.streams.processor.internals.StreamThread:36)
ksqldb-server | org.apache.kafka.common.errors.SerializationException: Error deserializing DELIMITED message from topic: pa.new_pa.user
ksqldb-server | Caused by: org.apache.kafka.common.errors.SerializationException: Size of data received by LongDeserializer is not 8
ksqldb-server | [2020-08-29 12:44:23,009] WARN stream-thread [_confluent-ksql-default_query_CTAS_USER2_0-6637e2a8-c417-49fa-bb65-d0d1a5205af1-StreamThread-1] task [0_0] Skipping record due to deserialization error. topic=[pa.new_pa.user] partition=[0] offset=[23095] (org.apache.kafka.streams.processor.internals.RecordDeserializer:88)
ksqldb-server | org.apache.kafka.common.errors.SerializationException: Error deserializing DELIMITED message from topic: pa.new_pa.user
ksqldb-server | Caused by: org.apache.kafka.common.errors.SerializationException: Size of data received by LongDeserializer is not 8
I tried consuming the message at offset 23095, and it looks fine:
[2020-08-29 13:24:12,021] INFO [Consumer clientId=consumer-console-consumer-37294-1, groupId=console-consumer-37294] Subscribed to partition(s): pa.new_pa.user-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2020-08-29 13:24:12,026] INFO [Consumer clientId=consumer-console-consumer-37294-1, groupId=console-consumer-37294] Seeking to offset 23095 for partition pa.new_pa.user-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2020-08-29 13:24:12,570] INFO [Consumer clientId=consumer-console-consumer-37294-1, groupId=console-consumer-37294] Cluster ID: rdsgvpoESzer6IAxQDlLUA (org.apache.kafka.clients.Metadata)
{"id":8191,"parent_id":{"long":8184},"upper_id":0,"username":"app0623c","domain":43,"role":1,"modified_at":1598733553000,"blacklist_modified_at":{"long":1598733768000},"tied_at":{"long":1598733771000},"name":"test","enable":1,"is_default":0,"bankrupt":0,"locked":0,"tied":0,"checked":0,"failed":0,"last_login":{"long":1598733526000},"last_online":{"long":1598733532000},"last_ip":{"bytes":"ÿÿ\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000"},"last_country":{"string":"MY"},"last_city_id":0}
Here is my source connector configuration, along with the table definition:
CREATE SOURCE CONNECTOR `pa_source_unwrap` WITH(
"connector.class" = 'io.debezium.connector.mysql.MySqlConnector',
"tasks.max" = '1',
"database.hostname" = 'docker.for.mac.host.internal',
"database.port" = '3306',
"database.user" = 'root',
"database.password" = 'xxxxxxx',
"database.service.id" = '10001',
"database.server.name" = 'pa',
"database.whitelist" = 'new_pa',
"table.whitelist" = 'new_pa.user, new_pa.user_created, new_pa.cash',
"database.history.kafka.bootstrap.servers" = 'kafka:9092',
"database.history.kafka.topic" = 'schema-changes.pa',
"transforms" = 'unwrap',
"transforms.unwrap.type" = 'io.debezium.transforms.ExtractNewRecordState',
"transforms.unwrap.delete.handling.mode" = 'drop',
"transforms.unwrap.drop.tombstones" = 'true',
"key.converter" = 'io.confluent.connect.avro.AvroConverter',
"value.converter" = 'io.confluent.connect.avro.AvroConverter',
"key.converter.schema.registry.url" = 'http://schema-registry:8081',
"value.converter.schema.registry.url" = 'http://schema-registry:8081',
"key.converter.schemas.enable" = 'true',
"value.converter.schemas.enable" = 'true'
);
CREATE TABLE user (`id` BIGINT PRIMARY KEY) WITH (
KAFKA_TOPIC = 'pa.new_pa.user',
VALUE_FORMAT = 'AVRO'
);
Topic schemas (auto-generated):
Key:
{
"connect.name": "pa.new_pa.user.Key",
"fields": [
{
"name": "id",
"type": "long"
}
],
"name": "Key",
"namespace": "pa.new_pa.user",
"type": "record"
}
Value:
{
"connect.name": "pa.new_pa.user.Value",
"fields": [
{
"name": "id",
"type": "long"
},
{
"default": null,
"name": "parent_id",
"type": [
"null",
"long"
]
},
{
"default": 0,
"name": "upper_id",
"type": {
"connect.default": 0,
"type": "long"
}
},
{
"name": "username",
"type": "string"
},
{
"name": "domain",
"type": "int"
},
{
"name": "role",
"type": {
"connect.type": "int16",
"type": "int"
}
},
{
"name": "modified_at",
"type": {
"connect.name": "io.debezium.time.Timestamp",
"connect.version": 1,
"type": "long"
}
},
{
"default": null,
"name": "blacklist_modified_at",
"type": [
"null",
{
"connect.name": "io.debezium.time.Timestamp",
"connect.version": 1,
"type": "long"
}
]
},
{
"default": null,
"name": "tied_at",
"type": [
"null",
{
"connect.name": "io.debezium.time.Timestamp",
"connect.version": 1,
"type": "long"
}
]
},
{
"name": "name",
"type": "string"
},
{
"name": "enable",
"type": {
"connect.type": "int16",
"type": "int"
}
},
{
"name": "is_default",
"type": {
"connect.type": "int16",
"type": "int"
}
},
{
"name": "bankrupt",
"type": {
"connect.type": "int16",
"type": "int"
}
},
{
"name": "locked",
"type": {
"connect.type": "int16",
"type": "int"
}
},
{
"name": "tied",
"type": {
"connect.type": "int16",
"type": "int"
}
},
{
"name": "checked",
"type": {
"connect.type": "int16",
"type": "int"
}
},
{
"name": "failed",
"type": {
"connect.type": "int16",
"type": "int"
}
},
{
"default": null,
"name": "last_login",
"type": [
"null",
{
"connect.name": "io.debezium.time.Timestamp",
"connect.version": 1,
"type": "long"
}
]
},
{
"default": null,
"name": "last_online",
"type": [
"null",
{
"connect.name": "io.debezium.time.Timestamp",
"connect.version": 1,
"type": "long"
}
]
},
{
"default": null,
"name": "last_ip",
"type": [
"null",
"bytes"
]
},
{
"default": null,
"name": "last_country",
"type": [
"null",
"string"
]
},
{
"name": "last_city_id",
"type": "long"
}
],
"name": "Value",
"namespace": "pa.new_pa.user",
"type": "record"
}
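The schemas above can also be fetched directly from the Schema Registry (a sketch, assuming the default <topic>-key / <topic>-value subject naming and the schema-registry:8081 address from the connector config):
# Latest registered key and value schemas for the topic
curl -s http://schema-registry:8081/subjects/pa.new_pa.user-key/versions/latest
curl -s http://schema-registry:8081/subjects/pa.new_pa.user-value/versions/latest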
2 Answers
Answer 1:
The id field type is BIGINT. I tried changing the config to key.converter = org.apache.kafka.connect.storage.LongConverter (reference: the ksqlDB microsite) and got the error: LongConverter could not be found. Setting key.converter = org.apache.kafka.connect.converters.LongConverter (ref) also produced an error. I then used a JSON key converter together with the Avro value converter and recreated the table: I can get all the data from MySQL, but the row_id I get is different. The cause is probably the KAFKA format.
Answer 2:
The problem here is that your key is in Avro, and ksqlDB currently only supports keys in the KAFKA format (as of version 0.12).
Avro keys are under active development: #4461 adds support for Avro primitives, and #4997 extends this to support a single key column inside an Avro record (as you have here).
You are setting the key format to Avro with the following configuration:
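(Presumably the settings being referred to are the key-converter lines from the source connector statement in the question.)
"key.converter" = 'io.confluent.connect.avro.AvroConverter',
"key.converter.schema.registry.url" = 'http://schema-registry:8081',
"key.converter.schemas.enable" = 'true',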
Your SQL sets VALUE_FORMAT to AVRO, but the key format is currently KAFKA. So you can use … to convert the key to the correct format (see the sketch below). More information about the correct converters to use for the KAFKA format is available on the ksqlDB microsite.