kafka-log4j-appender 0.9 not working

Asked by 62o28rlo on 2021-06-07, tagged Kafka

I added a Kafka log4j appender to my log4j.properties, but it does not work as I expected.
Before posting this question I checked my log4j.properties against a similar Stack Overflow question about version 0.8, but no luck.
Here is my log4j.properties:

log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=my-topic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

When I start my application, I can see that the Kafka producer starts:

[main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread.

But the appender does not work, and an exception is thrown:

[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

I have also checked my Kafka + ZooKeeper environment, and the broker address in my log4j.properties is correct. At this point I am out of ideas and hope someone can give me a hand. Here is the full output:

[main] INFO  org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
        compression.type = none
        metric.reporters = []
        metadata.max.age.ms = 300000
        metadata.fetch.timeout.ms = 60000
        reconnect.backoff.ms = 50
        sasl.kerberos.ticket.renew.window.factor = 0.8
        bootstrap.servers = [localhost:9092]
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        buffer.memory = 33554432
        timeout.ms = 30000
        key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.keystore.type = JKS
        ssl.trustmanager.algorithm = PKIX
        block.on.buffer.full = false
        ssl.key.password = null
        max.block.ms = 60000
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        ssl.truststore.password = null
        max.in.flight.requests.per.connection = 5
        metrics.num.samples = 2
        client.id =
        ssl.endpoint.identification.algorithm = null
        ssl.protocol = TLS
        request.timeout.ms = 30000
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        acks = 1
        batch.size = 16384
        ssl.keystore.location = null
        receive.buffer.bytes = 32768
        ssl.cipher.suites = null
        ssl.truststore.type = JKS
        security.protocol = PLAINTEXT
        retries = 0
        max.request.size = 1048576
        value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        ssl.truststore.location = null
        ssl.keystore.password = null
        ssl.keymanager.algorithm = SunX509
        metrics.sample.window.ms = 30000
        partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
        send.buffer.bytes = 131072
        linger.ms = 0

[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records
[main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = [])
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-size
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name compression-rate
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name queue-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name request-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-per-request
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-retries
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name errors
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-size-max
[main] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.9.0.0
[main] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fc7243c2af4b2b4a
[main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread.
...
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

Thanks.

ql3eal8s 1#

I finally fixed it. Here is my new log4j.properties:

log4j.rootLogger=DEBUG, Console

log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

log4j.category.foxgem=DEBUG, Kafka
log4j.additivity.foxgem=false

log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=logTopic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

log4j.logger.io.vertx=WARN
log4j.logger.io.netty=WARN

I also created a sample on GitHub that demonstrates how to use this appender.
The changes I made:
Removed the Kafka appender from the root logger. My previous log4j.properties had:

log4j.rootLogger=DEBUG, Console, Kafka

Added a log category for the packages whose log output should go to Kafka:

log4j.category.foxgem=DEBUG, Kafka
log4j.additivity.foxgem=false

I think the reason is this: with the old root logger, the Kafka client's own log output also went to Kafka, which caused the timeout.

uurv41yg 2#

I had this problem with both log4j and logback.
When the appender level was INFO everything worked fine, but when I changed the level to DEBUG, after a while I got this error: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The problem is that the KafkaProducer itself emits TRACE and DEBUG logs and tries to append those logs to Kafka as well, so it ends up in a loop.
Changing the log level of the org.apache.kafka package to INFO, or pointing its appender at a file or standard output instead, solved the problem; a minimal sketch of that change is shown below.
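For illustration, in log4j.properties that fix could look roughly like this (a sketch, assuming a console appender named Console is already defined, as in the first answer's configuration):

# keep the Kafka client's own logging at INFO and route it to the console only,
# so the producer's DEBUG output never feeds back into the Kafka appender
log4j.logger.org.apache.kafka=INFO, Console
log4j.additivity.org.apache.kafka=false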
