Kafka JDBC sink connector creates data types that don't match the source data types

n53p2ov0 · posted 2022-11-21 in Apache

I'm replicating an MS SQL Server database to MySQL using Kafka and Kafka Connect, with the Debezium SQL Server CDC source connector and the Confluent JDBC sink connector. "auto.create" is set to true and the sink connector does create the tables, but some of the data types don't match. In SQL Server I have:

CREATE TABLE employees (
  id INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY,
  first_name VARCHAR(255) NOT NULL,
  last_name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL UNIQUE,
  start_date DATE,
  salary INT,
  secret FLOAT,
  create_time TIME
);

But in MySQL it created the following:

mysql> desc employees;
+-------------+-------------+------+-----+---------+-------+
| Field       | Type        | Null | Key | Default | Extra |
+-------------+-------------+------+-----+---------+-------+
| id          | int         | NO   | PRI | NULL    |       |
| first_name  | text        | NO   |     | NULL    |       |
| last_name   | text        | NO   |     | NULL    |       |
| email       | text        | NO   |     | NULL    |       |
| start_date  | int         | YES  |     | NULL    |       |
| salary      | int         | YES  |     | NULL    |       |
| secret      | double      | YES  |     | NULL    |       |
| create_time | bigint      | YES  |     | NULL    |       |
| messageTS   | datetime(3) | YES  |     | NULL    |       |
+-------------+-------------+------+-----+---------+-------+

Ignore messageTS; that's an extra field I add in an SMT.
The data types of first_name, last_name, email, start_date, and create_time don't match: VARCHAR(255) is turned into text, DATE into int, and TIME into bigint.
Is there something wrong in my configuration?
I'm running SQL Server 2019 and MySQL 8.0.28 in Docker.
I also tried the suggestion of disabling auto-create and auto-evolve and pre-creating the tables with the proper data types:

mysql> desc employees;
+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| id          | int          | NO   | PRI | NULL    | auto_increment |
| first_name  | varchar(255) | NO   |     | NULL    |                |
| last_name   | varchar(255) | NO   |     | NULL    |                |
| email       | varchar(255) | NO   |     | NULL    |                |
| start_date  | date         | NO   |     | NULL    |                |
| salary      | int          | NO   |     | NULL    |                |
| secret      | double       | NO   |     | NULL    |                |
| create_time | datetime     | NO   |     | NULL    |                |
| messageTS   | datetime     | NO   |     | NULL    |                |
+-------------+--------------+------+-----+---------+----------------+

But then the following exception is thrown when it tries to insert into the database:

kafka-connect  | [2022-03-04 19:55:07,331] INFO Setting metadata for table "employees" to Table{name='"employees"', type=TABLE columns=[Column{'first_name', isPrimaryKey=false, allowsNull=false, sqlType=VARCHAR}, Column{'secret', isPrimaryKey=false, allowsNull=false, sqlType=DOUBLE}, Column{'salary', isPrimaryKey=false, allowsNull=false, sqlType=INT}, Column{'start_date', isPrimaryKey=false, allowsNull=false, sqlType=DATE}, Column{'email', isPrimaryKey=false, allowsNull=false, sqlType=VARCHAR}, Column{'id', isPrimaryKey=true, allowsNull=false, sqlType=INT}, Column{'last_name', isPrimaryKey=false, allowsNull=false, sqlType=VARCHAR}, Column{'messageTS', isPrimaryKey=false, allowsNull=false, sqlType=DATETIME}, Column{'create_time', isPrimaryKey=false, allowsNull=false, sqlType=DATETIME}]} (io.confluent.connect.jdbc.util.TableDefinitions)
kafka-connect  | [2022-03-04 19:55:07,382] WARN Write of 4 records failed, remainingRetries=0 (io.confluent.connect.jdbc.sink.JdbcSinkTask)
kafka-connect  | java.sql.BatchUpdateException: Data truncation: Incorrect date value: '19055' for column 'start_date' at row 1

The value of the message is:

{"id":1002,"first_name":"George","last_name":"Bailey","email":"george.bailey@acme.com","start_date":{"int":19055},"salary":{"int":100000},"secret":{"double":0.867153569942739},"create_time":{"long":1646421476477}}

The message schema for the start_date field is:

{
  "name": "start_date",
  "type": [
    "null",
    {
      "type": "int",
      "connect.version": 1,
      "connect.name": "io.debezium.time.Date"
    }
  ],
  "default": null
}

It looks like the sink doesn't know how to convert io.debezium.time.Date to a DATE and treats it as a plain int.
Any pointers on this would be greatly appreciated.
Source connector config:

{
    "name": "SimpleSQLServerCDC",
    "config":{
      "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
      "tasks.max":1,
      "key.converter": "io.confluent.connect.avro.AvroConverter",
      "key.converter.schema.registry.url": "http://schema-registry:8081",
      "value.converter": "io.confluent.connect.avro.AvroConverter",
      "value.converter.schema.registry.url": "http://schema-registry:8081",
      "confluent.topic.bootstrap.servers":"kafka:29092",
      "database.hostname" : "sqlserver",
      "database.port" : "1433",
      "database.user" : "sa",
      "database.password" : "",
      "database.dbname" : "testDB",
      "database.server.name" : "corporation",

      "database.history.kafka.topic": "dbhistory.corporation",
      "database.history.kafka.bootstrap.servers" : "kafka:29092",

      "topic.creation.default.replication.factor": 1,
      "topic.creation.default.partitions": 10,
      "topic.creation.default.cleanup.policy": "delete"
    }
  }

Sink connector config:

{
  "name": "SimpleMySQLJDBC",
  "config": {
          "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
          "connection.url": "jdbc:mysql://mysql:3306/sinkdb",
          "connection.user": "user",
          "connection.password": "",
          "tasks.max": "2",
          "topics.regex": "corporation.dbo.*",
          "auto.create": "true",
          "auto.evolve": "true",
          "dialect.name": "MySqlDatabaseDialect",
          "insert.mode": "upsert",
          "pk.mode": "record_key",
          "pk.fields":"id",
          "delete.enabled": "true",
          "batch.size": 1,
          "key.converter":"io.confluent.connect.avro.AvroConverter",
          "key.converter.schema.registry.url": "http://schema-registry:8081",
          "value.converter": "io.confluent.connect.avro.AvroConverter",
          "value.converter.schema.registry.url": "http://schema-registry:8081",

          "transforms":"unwrap,dropPrefix,insertTS",

          "transforms.dropPrefix.type":"org.apache.kafka.connect.transforms.RegexRouter",
          "transforms.dropPrefix.regex":"corporation.dbo.(.*)",
          "transforms.dropPrefix.replacement":"$1",

          "transforms.unwrap.type":"io.debezium.transforms.ExtractNewRecordState",
          "transforms.unwrap.drop.tombstones":"false",
          "transforms.unwrap.delete.handling.mode":"drop",

          "transforms.insertTS.type": "org.apache.kafka.connect.transforms.InsertField$Value",
          "transforms.insertTS.timestamp.field": "messageTS",

          "errors.log.enable": "true",
          "errors.log.include.messages": "true",
          "errors.tolerance":"all",
          "errors.deadletterqueue.topic.name":"dlq-mysql",
          "errors.deadletterqueue.context.headers.enable": "true",
          "errors.deadletterqueue.topic.replication.factor":"1"
      }
}

yftpprvb1#

Converts VARCHAR(255) to text
Character-length limits on fields do not carry through the Connect API data types, so any string-like data becomes a TEXT column type.
Converts DATE to int and TIME to bigint
By default, I believe, date/time values are converted to Unix epoch values.
Overall, if you want the types preserved exactly, disable table auto-creation in the sink connector and pre-create the tables with the types you want.
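
For reference, that advice maps onto two options already present in the sink config above; a minimal sketch:

"auto.create": "false",
"auto.evolve": "false",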


wbrvyc0a2#

You need to make two changes:
Add "time.precision.mode": "connect" to the source connector.
Add the following to the sink connector:

"transforms": "TimestampConverter",
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.target.type": "Timestamp",
"transforms.TimestampConverter.field": "dob",

4smxwvx53#

I just wrote an SMT that converts all the timestamp fields to strings. Hope it helps.
https://github.com/FX-HAO/kafka-connect-debezium-tranforms
