Source: Oracle database. Target: Kafka.
Data is moved from source to target via the Oracle GoldenGate Adapter for Big Data. The data movement itself works fine, but when I insert 5 records they land in the topic as a single message.
I want them split up: if I do 5 inserts, I need five separate entries in the topic (Kafka).
Kafka handler, GoldenGate for Big Data version 12.3.1
I inserted five records at the source, and in Kafka I get all the operations together, as shown below:
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"I","op_ts":"2017-10-24 08:52:01.000000","current_ts":"2017-10-24T12:52:04.960000","pos":"00000000030000001263","after":{"TEST_ID":2,"TEST_NAME":"Francis","TEST_NAME_AR":"Francis"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"I","op_ts":"2017-10-24 08:52:01.000000","current_ts":"2017-10-24T12:52:04.961000","pos":"00000000030000001437","after":{"TEST_ID":3,"TEST_NAME":"Ashfak","TEST_NAME_AR":"Ashfak"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"U","op_ts":"2017-10-24 08:55:04.000000","current_ts":"2017-10-24T12:55:07.252000","pos":"00000000030000001734","before":{"TEST_ID":null,"TEST_NAME":"Francis"},"after":{"TEST_ID":null,"TEST_NAME":"updatefrancis"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"D","op_ts":"2017-10-24 08:56:11.000000","current_ts":"2017-10-24T12:56:14.365000","pos":"00000000030000001865","before":{"TEST_ID":2}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"U","op_ts":"2017-10-24 08:57:43.000000","current_ts":"2017-10-24T12:57:45.817000","pos":"00000000030000002152","before":{"TEST_ID":3},"after":{"TEST_ID":4}}
2 Answers
juzqafwq1#
Set the following in the .props file:
gg.handler.kafkahandler.mode=op
It worked!!
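For context, a minimal sketch of the Kafka handler section of the .props file with that setting in place. The surrounding property names follow the GoldenGate for Big Data Kafka handler documentation; the topic name and producer config file name are placeholders, not taken from the question:

```properties
# Register the Kafka handler
gg.handlerlist=kafkahandler
gg.handler.kafkahandler.type=kafka

# Placeholder producer config and topic mapping
gg.handler.kafkahandler.kafkaProducerConfigFile=custom_kafka_producer.properties
gg.handler.kafkahandler.topicMappingTemplate=MYSCHEMATOPIC.ELASTIC_TEST
gg.handler.kafkahandler.format=json

# 'op' emits one Kafka message per captured operation
# (insert/update/delete); 'tx' instead groups all operations
# of a source transaction into a single message
gg.handler.kafkahandler.mode=op
```

With `mode=tx`, the five inserts committed in one transaction arrive as one message, which matches the behavior described in the question; `mode=op` produces five separate messages.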
bfhwhh0e2#
I would recommend using the Kafka Connect handler instead, since it registers the schema of the data with the Confluent Schema Registry, which makes it much easier to stream the data onward to targets such as Elasticsearch (using Kafka Connect).
In Kafka, each record from Oracle will be one Kafka message.
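A sketch of what the corresponding Kafka Connect handler configuration might look like, assuming a Confluent stack with a Schema Registry available; property names follow the GoldenGate for Big Data 12.3 Kafka Connect handler documentation, and the handler alias and file name are placeholders:

```properties
# Use the Kafka Connect handler instead of the plain Kafka handler
gg.handlerlist=kafkaconnect
gg.handler.kafkaconnect.type=kafkaconnect
gg.handler.kafkaconnect.kafkaProducerConfigFile=kafkaconnect.properties
gg.handler.kafkaconnect.mode=op
```

The referenced producer config file would then point the key/value converters at the Schema Registry (for example Confluent's Avro converter), so that downstream Kafka Connect sinks such as the Elasticsearch connector can resolve the schema automatically.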