My current test configuration is as follows:
version: '3.7'
services:
  postgres:
    image: debezium/postgres
    restart: always
    ports:
      - "5432:5432"
  zookeeper:
    image: debezium/zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
  kafka:
    image: debezium/kafka
    restart: always
    ports:
      - "9092:9092"
    links:
      - zookeeper
    depends_on:
      - zookeeper
    environment:
      - ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_GROUP_MIN_SESSION_TIMEOUT_MS=250
  connect:
    image: debezium/connect
    restart: always
    ports:
      - "8083:8083"
    links:
      - zookeeper
      - postgres
      - kafka
    depends_on:
      - zookeeper
      - postgres
      - kafka
    environment:
      - BOOTSTRAP_SERVERS=kafka:9092
      - GROUP_ID=1
      - CONFIG_STORAGE_TOPIC=my_connect_configs
      - OFFSET_STORAGE_TOPIC=my_connect_offsets
      - STATUS_STORAGE_TOPIC=my_source_connect_statuses
I run it with Docker Compose like this:
$ docker-compose up
I see no error messages and everything seems to start fine. If I run docker ps, I can see all the services running.
To check that Kafka is working, I created a Kafka producer and a Kafka consumer in Python:
# producer. I run it in one console window
from kafka import KafkaProducer
from json import dumps
from time import sleep

producer = KafkaProducer(bootstrap_servers=['localhost:9092'],
                         value_serializer=lambda x: dumps(x).encode('utf-8'))
for e in range(1000):
    data = {'number': e}
    producer.send('numtest', value=data)
    sleep(5)
# consumer. I run it in the other console window
from kafka import KafkaConsumer
from json import loads

consumer = KafkaConsumer(
    'numtest',
    bootstrap_servers=['localhost:9092'],
    auto_offset_reset='earliest',
    enable_auto_commit=True,
    group_id='my-group',
    value_deserializer=lambda x: loads(x.decode('utf-8')))
for message in consumer:
    print(message)
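As a side note, the value_serializer and value_deserializer used above just round-trip each dict through JSON-encoded UTF-8 bytes; the pair can be sanity-checked without a broker:

```python
from json import dumps, loads

# The same lambdas passed to KafkaProducer/KafkaConsumer above
serialize = lambda x: dumps(x).encode('utf-8')
deserialize = lambda x: loads(x.decode('utf-8'))

payload = {'number': 42}
raw = serialize(payload)      # bytes as sent on the wire: b'{"number": 42}'
assert deserialize(raw) == payload
```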
And it works great: I can see the messages my producer publishes, and I can see them being consumed in the consumer window.
Now I want to get CDC (change data capture) working. First, inside the postgres container, I set a password for the postgres role:
$ su postgres
$ psql
psql> \password postgres
Enter new password: postgres
Then I created a new database test:
psql> CREATE DATABASE test;
I created a table:
psql> \c test;
test=# create table mytable (id serial, name varchar(128), primary key(id));
Finally, I created a connector for the Debezium CDC stack:
$ curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{
  "name": "test-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "plugin.name": "pgoutput",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "test",
    "database.server.name": "postgres",
    "database.whitelist": "public.mytable",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "public.some_topic"
  }
}'
{"name":"test-connector","config":{"connector.class":"io.debezium.connector.postgresql.PostgresConnector","tasks.max":"1","plugin.name":"pgoutput","database.hostname":"postgres","database.port":"5432","database.user":"postgres","database.password":"postgres","database.dbname":"test","database.server.name":"postgres","database.whitelist":"public.mytable","database.history.kafka.bootstrap.servers":"localhost:9092","database.history.kafka.topic":"public.some_topic","name":"test-connector"},"tasks":[],"type":"source"}
As you can see, the connector was created without any errors. Now I expect Debezium CDC to publish every change to the Kafka topic public.some_topic. To verify this, I created a new Kafka consumer:
from kafka import KafkaConsumer
from json import loads

consumer = KafkaConsumer(
    'public.some_topic',
    bootstrap_servers=['localhost:9092'],
    auto_offset_reset='earliest',
    enable_auto_commit=True,
    group_id='my-group',
    value_deserializer=lambda x: loads(x.decode('utf-8')))
for message in consumer:
    print(message)
The only difference from the first example is that I am watching public.some_topic. Then I went to the database console and ran an insert:
test=# insert into mytable (name) values ('Tom Cat');
INSERT 0 1
test=#
So a new row was inserted, but nothing happens in the consumer window. In other words, Debezium publishes no events to the Kafka topic public.some_topic. What is wrong, and how can I fix it?
1 Answer
Running your Docker Compose, I see the following error in the Kafka Connect worker log when the connector is created:
The failure is also reflected in the task's status if you query it with the Kafka Connect REST API:
The version of Postgres that you are running is the problem: the pgoutput plugin is only available on Postgres versions >= 10. I changed the Docker Compose to use version 10:
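A minimal sketch of that change, assuming the debezium/postgres image publishes a 10 tag (only the image line of the postgres service needs to change):

```yaml
postgres:
  image: debezium/postgres:10   # pin to Postgres 10+ so pgoutput is available
  restart: always
  ports:
    - "5432:5432"
```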
After bouncing the stack for a clean start and following your steps, I get a running connector:
And data in the Kafka topic:
I added kafkacat to the Docker Compose:
Edit: keeping the previous answer here, since it is still useful and relevant:
Debezium writes messages to a topic derived from the table name. In your case that is postgres.public.mytable (the pattern is serverName.schemaName.tableName). This is why kafkacat is useful: you can run it to see the list of all topics and partitions, and once you have found the topic, read from it.
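That naming convention can be captured in a tiny hypothetical helper (not a Debezium API, just an illustration of which topic to subscribe to; the values come from the connector config in the question, with the table living in Postgres's default public schema):

```python
# Hypothetical helper: Debezium's PostgreSQL connector publishes changes to
# a topic named <database.server.name>.<schema>.<table> by default.
def debezium_topic(server_name: str, schema: str, table: str) -> str:
    return f"{server_name}.{schema}.{table}"

# database.server.name=postgres, table mytable in the public schema:
print(debezium_topic("postgres", "public", "mytable"))  # postgres.public.mytable
```

So the consumer above should subscribe to that topic rather than to public.some_topic, which is only the database-history topic named in the connector config.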
See here for details of kafkacat, including how to run it with Docker.
There is also a Docker Compose demo here.