Java embedded Kafka starts with the wrong number of partitions

dl5txlt9 · asked 2021-06-04 · tagged Kafka

I started an embedded Kafka instance in a JUnit test. My application reads the records pushed onto the stream correctly, but I noticed that every topic has only one partition. Can someone explain why?
In my application I have the following:

List<PartitionInfo> partitionInfos = consumer.partitionsFor(topic);

This returns a list with one entry. When run against a local Kafka whose topic has 3 partitions, it returns a list with 3 entries, as expected.
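
For reference, a minimal sketch of how that check can be run against the embedded broker (the consumer setup via KafkaTestUtils is my assumption, not code from the actual application):

Map<String, Object> checkProps = KafkaTestUtils.consumerProps("partitionCheck", "false", embeddedKafkaBroker);
try (Consumer<String, String> checkConsumer = new DefaultKafkaConsumerFactory<>(
        checkProps, new StringDeserializer(), new StringDeserializer()).createConsumer()) {
    // "topic" is the topic under test; against the embedded broker this prints 1,
    // against a local 3-partition topic it prints 3
    List<PartitionInfo> partitionInfos = checkConsumer.partitionsFor(topic);
    System.out.println(partitionInfos.size());
}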
My test looks like this:

@RunWith(SpringRunner.class)
@SpringBootTest
@EmbeddedKafka(partitions = 3)
@ActiveProfiles("inmemory")
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@TestPropertySource(
                locations = "classpath:application-test.properties",
                properties = {"app.onlyMonitorIfDataUpdated=true"})
public class MonitorRestKafkaIntegrationTest {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafkaBroker;

    @Value("${spring.embedded.kafka.brokers}")
    private String embeddedBrokers;

    @Autowired
    private WebApplicationContext wac;

    @Autowired
    private JsonUtility jsonUtility;

    private MockMvc mockMvc;

    @Before
    public void setup() {
        mockMvc = webAppContextSetup(wac).build();
        UserGroupInformation.setLoginUser(UserGroupInformation.createRemoteUser("dummyUser"));
    }

    private ResultActions interactiveMonitoringREST(String eggID, String monitoringParams) throws Exception {
        return mockMvc.perform(post(String.format("/eggs/%s/interactive", eggID))
                .contentType(MediaType.APPLICATION_JSON_VALUE)
                .content(monitoringParams));
    }

    @Test
    @WithMockUser("super_user")
    public void testEmbeddedKafka() throws Exception {
        Producer<String, String> producer = getKafkaProducer();
        sendRecords(producer, 3);

        updateConn();

        interactiveMonitoringREST(EGG_KAFKA, monitoringParams)
                .andExpect(status().isOk())
                .andDo(print())
                .andExpect(jsonPath("$.taskResults[0].resultDetails.numberOfRecordsProcessed").value(3))
                .andExpect(jsonPath("$.taskResults[0].resultDetails.numberOfRecordsSkipped").value(0));
    }

    private void sendRecords(Producer<String, String> producer, int records) {
        for (int i = 0; i < records; i++) {
            String val = "{\"auto_age\":" + String.valueOf(i + 10) + "}";
            producer.send(new ProducerRecord<>(testTopic, String.valueOf(i), val));
        }
        producer.flush();
    }

    private Producer<String, String> getKafkaProducer() {
        Map<String, Object> prodConfigs = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafkaBroker));
        return new DefaultKafkaProducerFactory<>(prodConfigs, new StringSerializer(), new StringSerializer()).createProducer();
    }

    private void updateConn() throws Exception {
        String conn = getConnectionREST(CONN_KAFKA).andReturn().getResponse().getContentAsString();
        ConnectionDetail connectionDetail = jsonUtility.fromJson(conn, ConnectionDetail.class);
        connectionDetail.getDetails().put(ConnectionDetailConstants.CONNECTION_SERVER, embeddedBrokers);
        String updatedConn = jsonUtility.toJson(connectionDetail);
        updateConnectionREST(CONN_KAFKA, updatedConn).andExpect(status().isOk());
    }
}
p5fdfcr1 1#

You need to tell the broker to pre-create the topics...

@SpringBootTest
@EmbeddedKafka(topics = "foo", partitions = 3)
class So57481979ApplicationTests {

    @Test
    void testPartitions(@Autowired KafkaAdmin admin) throws InterruptedException, ExecutionException {
        // describe the pre-created topic and verify its partition count; this prints 3
        AdminClient client = AdminClient.create(admin.getConfig());
        Map<String, TopicDescription> map = client.describeTopics(Collections.singletonList("foo")).all().get();
        System.out.println(map.values().iterator().next().partitions().size());
    }

}

Or set the num.partitions broker property if you want the broker to auto-create topics for you on first use.
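
With spring-kafka-test, that broker property can be passed through the annotation itself; a minimal fragment using the brokerProperties attribute of @EmbeddedKafka:

// broker-level default: topics auto-created on first use get 3 partitions
@EmbeddedKafka(brokerProperties = "num.partitions=3")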
We should probably do that automatically, based on the partitions annotation property.
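
A related option, if topics are created on the fly rather than declared on the annotation: the EmbeddedKafkaBroker bean can create them programmatically with an explicit partition count (a sketch; the topic name "foo" is illustrative):

// creates "foo" with 3 partitions and replication factor 1 on the embedded broker
embeddedKafkaBroker.addTopics(new NewTopic("foo", 3, (short) 1));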
