Usage of the kafka.message.Message.key() method, with code examples

Reposted by x33g5p2x on 2022-01-25, under "Other"

This article collects Java code examples for the kafka.message.Message.key() method, showing how Message.key() is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Message.key() follow:

Package path: kafka.message.Message
Class name: Message
Method name: key

About Message.key

The original article gives no description. As the examples below show, key() returns the message key as a java.nio.ByteBuffer, or null when the message has no key (Message.hasKey() can be checked first).
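The examples below share one pattern: key() hands back the key as a java.nio.ByteBuffer, and the caller copies its readable bytes into a byte[] after a null check. A minimal stdlib-only sketch of that pattern (the helper name extractKey is ours, not part of the Kafka API):

```java
import java.nio.ByteBuffer;

public class KeyExtraction {
    // Copies the remaining bytes of a key buffer into a byte[],
    // returning null for keyless messages -- the same null check
    // most of the examples below perform after calling key().
    static byte[] extractKey(ByteBuffer key) {
        if (key == null) {
            return null;
        }
        byte[] keyBytes = new byte[key.remaining()];
        key.get(keyBytes);
        return keyBytes;
    }

    public static void main(String[] args) {
        byte[] k = extractKey(ByteBuffer.wrap(new byte[] {1, 2, 3}));
        System.out.println(k.length); // 3
        System.out.println(extractKey(null)); // null
    }
}
```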

Code examples

Code example from: apache/incubator-gobblin

    @Override
    public byte[] getKeyBytes() {
        return getBytes(this.messageAndOffset.message().key());
    }

Code example from: apache/flink

    ByteBuffer keyPayload = msg.message().key();
    keyBytes = new byte[keySize];
    keyPayload.get(keyBytes);

Code example from: prestodb/presto

    ByteBuffer key = messageAndOffset.message().key();
    if (key != null) {
        keyData = new byte[key.remaining()];

Code example from: Graylog2/graylog2-server

    final byte[] keyBytes = ByteBufferUtils.readBytes(messageAndOffset.message().key());
    LOG.trace("Read message {} contains {}", bytesToHex(keyBytes), bytesToHex(payloadBytes));
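The Graylog examples delegate the buffer-to-array copy to a readBytes helper. A stdlib-only sketch of what such a helper plausibly does (the null handling and the use of duplicate() here are assumptions for illustration, not Graylog's actual implementation):

```java
import java.nio.ByteBuffer;

public class ReadBytesSketch {
    // Copies a buffer's readable bytes into a byte[] without
    // disturbing the source buffer's position: duplicate() shares
    // the content but keeps an independent position and limit.
    static byte[] readBytes(ByteBuffer buffer) {
        if (buffer == null) {
            return null; // keyless message
        }
        ByteBuffer copy = buffer.duplicate();
        byte[] bytes = new byte[copy.remaining()];
        copy.get(bytes);
        return bytes;
    }

    public static void main(String[] args) {
        ByteBuffer b = ByteBuffer.wrap(new byte[] {1, 2});
        System.out.println(readBytes(b).length); // 2
        System.out.println(b.remaining());       // still 2: source untouched
    }
}
```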

Code example from: pinterest/secor

    byte[] keyBytes = null;
    if (messageAndOffset.message().hasKey()) {
        ByteBuffer key = messageAndOffset.message().key();
        keyBytes = new byte[key.limit()];
        key.get(keyBytes);
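Note that this example, like several others below, sizes the array with key.limit(). That equals the number of readable bytes only while the buffer's position is still 0; key.remaining() is the safer general choice. A small sketch of the difference:

```java
import java.nio.ByteBuffer;

public class BufferSizing {
    // Returns {limit, remaining} after one byte has been consumed,
    // illustrating why new byte[key.limit()] is only correct when
    // the buffer's position is still 0.
    static int[] sizesAfterPartialRead() {
        ByteBuffer key = ByteBuffer.wrap(new byte[] {10, 20, 30, 40});
        key.get(); // advance position past the first byte
        return new int[] {key.limit(), key.remaining()};
    }

    public static void main(String[] args) {
        int[] s = sizesAfterPartialRead();
        System.out.println(s[0]); // 4: limit is unchanged
        System.out.println(s[1]); // 3: only 3 bytes remain readable
    }
}
```

The excerpts here get away with limit() because they call key() on a freshly returned buffer whose position is 0, so limit() and remaining() coincide.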

Code example from: linkedin/camus

    /**
     * Fetches the next Kafka message and stuffs the results into the key and
     * value.
     *
     * @param etlKey the key object to populate with offset metadata
     * @return the next KafkaMessage, or null if there are no more events
     * @throws IOException
     */
    public KafkaMessage getNext(EtlKey etlKey) throws IOException {
        if (hasNext()) {
            MessageAndOffset msgAndOffset = messageIter.next();
            Message message = msgAndOffset.message();
            byte[] payload = getBytes(message.payload());
            byte[] key = getBytes(message.key());
            if (payload == null) {
                log.warn("Received message with null message.payload(): " + msgAndOffset);
            }
            etlKey.clear();
            etlKey.set(kafkaRequest.getTopic(), kafkaRequest.getLeaderId(), kafkaRequest.getPartition(), currentOffset,
                msgAndOffset.offset() + 1, message.checksum());
            etlKey.setMessageSize(msgAndOffset.message().size());
            currentOffset = msgAndOffset.offset() + 1; // increase offset
            currentCount++; // increase count
            return new KafkaMessage(payload, key, kafkaRequest.getTopic(), kafkaRequest.getPartition(),
                msgAndOffset.offset(), message.checksum());
        } else {
            return null;
        }
    }


Code example from: org.apache.storm/storm-kafka

    public static Iterable<List<Object>> generateTuples(KafkaConfig kafkaConfig, Message msg, String topic) {
        Iterable<List<Object>> tups;
        ByteBuffer payload = msg.payload();
        if (payload == null) {
            return null;
        }
        ByteBuffer key = msg.key();
        if (key != null && kafkaConfig.scheme instanceof KeyValueSchemeAsMultiScheme) {
            tups = ((KeyValueSchemeAsMultiScheme) kafkaConfig.scheme).deserializeKeyAndValue(key, payload);
        } else {
            if (kafkaConfig.scheme instanceof StringMultiSchemeWithTopic) {
                tups = ((StringMultiSchemeWithTopic) kafkaConfig.scheme).deserializeWithTopic(topic, payload);
            } else {
                tups = kafkaConfig.scheme.deserialize(payload);
            }
        }
        return tups;
    }


Code example from: wurstmeister/storm-kafka-0.8-plus

    public static Iterable<List<Object>> generateTuples(KafkaConfig kafkaConfig, Message msg) {
        Iterable<List<Object>> tups;
        ByteBuffer payload = msg.payload();
        ByteBuffer key = msg.key();
        if (key != null && kafkaConfig.scheme instanceof KeyValueSchemeAsMultiScheme) {
            tups = ((KeyValueSchemeAsMultiScheme) kafkaConfig.scheme).deserializeKeyAndValue(Utils.toByteArray(key), Utils.toByteArray(payload));
        } else {
            tups = kafkaConfig.scheme.deserialize(Utils.toByteArray(payload));
        }
        return tups;
    }

Code example from: org.graylog2/graylog2-shared

    final byte[] keyBytes = Utils.readBytes(messageAndOffset.message().key());
    LOG.trace("Read message {} contains {}", bytesToHex(keyBytes), bytesToHex(payloadBytes));


Code example from: apache/apex-malhar

    private void initializeLastProcessingOffset()
    {
        // read last received kafka message
        TopicMetadata tm = KafkaMetadataUtil.getTopicMetadata(Sets.newHashSet((String) getConfigProperties().get(KafkaMetadataUtil.PRODUCER_PROP_BROKERLIST)), this.getTopic());
        if (tm == null) {
            throw new RuntimeException("Failed to retrieve topic metadata");
        }
        partitionNum = tm.partitionsMetadata().size();
        lastMsgs = new HashMap<Integer, Pair<byte[], byte[]>>(partitionNum);
        for (PartitionMetadata pm : tm.partitionsMetadata()) {
            String leadBroker = pm.leader().host();
            int port = pm.leader().port();
            String clientName = this.getClass().getName().replace('$', '.') + "_Client_" + tm.topic() + "_" + pm.partitionId();
            SimpleConsumer consumer = new SimpleConsumer(leadBroker, port, 100000, 64 * 1024, clientName);
            long readOffset = KafkaMetadataUtil.getLastOffset(consumer, tm.topic(), pm.partitionId(), kafka.api.OffsetRequest.LatestTime(), clientName);
            FetchRequest req = new FetchRequestBuilder().clientId(clientName).addFetch(tm.topic(), pm.partitionId(), readOffset - 1, 100000).build();
            FetchResponse fetchResponse = consumer.fetch(req);
            for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(tm.topic(), pm.partitionId())) {
                Message m = messageAndOffset.message();
                ByteBuffer payload = m.payload();
                ByteBuffer key = m.key();
                byte[] valueBytes = new byte[payload.limit()];
                byte[] keyBytes = new byte[key.limit()];
                payload.get(valueBytes);
                key.get(keyBytes);
                lastMsgs.put(pm.partitionId(), new Pair<byte[], byte[]>(keyBytes, valueBytes));
            }
        }
    }


Code example from: HiveKa/HiveKa

    payload.set(bytes, 0, origSize);
    buf = message.key();
    if (buf != null) {
        origSize = buf.remaining();

Code example from: com.github.hackerwin7/jlib-utils

    ByteBuffer keyBuffer = messageAndOffset.message().key();
    byte[] keyBytes = new byte[keyBuffer.limit()];
    keyBuffer.get(keyBytes);

Code example from: com.ebay.jetstream/jetstream-messaging

    ByteBuffer k = message.key();
    ByteBuffer p = message.payload();
    byte[] key = new byte[k.limit()];


Code example from: HomeAdvisor/Kafdrop

    private MessageVO createMessage(Message message, MessageDeserializer deserializer)
    {
        MessageVO vo = new MessageVO();
        if (message.hasKey())
        {
            vo.setKey(ByteUtils.readString(message.key()));
        }
        if (!message.isNull())
        {
            final String messageString = deserializer.deserializeMessage(message.payload());
            vo.setMessage(messageString);
        }
        vo.setValid(message.isValid());
        vo.setCompressionCodec(message.compressionCodec().name());
        vo.setChecksum(message.checksum());
        vo.setComputedChecksum(message.computeChecksum());
        return vo;
    }
