I am trying to run Kafka on Windows (on an Azure cloud VM), but every couple of days it fails with about 100 "/ by zero" exceptions and one IOException:
[2018-06-12 09:00:23,457] ERROR Error while accepting connection (kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
at kafka.network.Acceptor.run(SocketServer.scala:354)
at java.lang.Thread.run(Unknown Source)
[2018-06-12 09:00:23,457] ERROR Error while accepting connection (kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
at kafka.network.Acceptor.run(SocketServer.scala:354)
at java.lang.Thread.run(Unknown Source)
[2018-06-12 09:00:23,457] ERROR Error while accepting connection (kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
at kafka.network.Acceptor.run(SocketServer.scala:354)
at java.lang.Thread.run(Unknown Source)
...........
...........
[2018-06-12 09:00:23,457] ERROR Failed to clean up log for __consumer_offsets-41 in dir C:\kafka\logs due to IOException (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: C:\kafka\logs\__consumer_offsets-41\00000000000000000000.log.cleaned: The process cannot access the file because it is being used by another process.
at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsFileSystemProvider.implDelete(Unknown Source)
at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(Unknown Source)
at java.nio.file.Files.deleteIfExists(Unknown Source)
at kafka.log.Cleaner.deleteCleanedFileIfExists$1(LogCleaner.scala:488)
at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:493)
at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:462)
at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:461)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
at kafka.log.Cleaner.clean(LogCleaner.scala:438)
at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Deleting all the logs and restarting Kafka fixes the problem, but only for a day or two.
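The delete-and-restart workaround can be sketched as a small Windows batch script. This is a hypothetical sketch, assuming Kafka is installed under C:\kafka with the standard `bin\windows` scripts, and that `log.dirs` points at C:\kafka\logs as in the configuration dump; note that wiping `log.dirs` destroys every partition stored on this broker, which is only tolerable on a single-node setup like this one.

```shell
:: Hypothetical workaround sketch: stop the broker, wipe its data dir, restart.
:: Assumes Kafka is installed at C:\kafka and log.dirs=c:/kafka/logs.
cd /d C:\kafka
bin\windows\kafka-server-stop.bat

:: WARNING: this deletes all topic data held by this broker.
rmdir /s /q C:\kafka\logs

bin\windows\kafka-server-start.bat config\server.properties
```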
Configuration:
advertised.host.name=null
advertised.listeners=null
advertised.port=null
alter.config.policy.class.name=null
alter.log.dirs.replication.quota.window.num=11
alter.log.dirs.replication.quota.window.size.seconds=1
authorizer.class.name=
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
background.threads=10
broker.id=0
broker.id.generation.enable=true
broker.rack=null
compression.type=producer
connections.max.idle.ms=600000
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.socket.timeout.ms=30000
create.topic.policy.class.name=null
default.replication.factor=1
delegation.token.expiry.check.interval.ms=3600000
delegation.token.expiry.time.ms=86400000
delegation.token.master.key=null
delegation.token.max.lifetime.ms=604800000
delete.records.purgatory.purge.interval.requests=1
delete.topic.enable=true
fetch.purgatory.purge.interval.requests=1000
group.initial.rebalance.delay.ms=0
group.max.session.timeout.ms=300000
group.min.session.timeout.ms=6000
host.name=
inter.broker.listener.name=null
inter.broker.protocol.version=1.1-IV0
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners=null
log.cleaner.backoff.ms=15000
log.cleaner.dedupe.buffer.size=134217728
log.cleaner.delete.retention.ms=86400000
log.cleaner.enable=true
log.cleaner.io.buffer.load.factor=0.9
log.cleaner.io.buffer.size=524288
log.cleaner.io.max.bytes.per.second=1.7976931348623157E308
log.cleaner.min.cleanable.ratio=0.5
log.cleaner.min.compaction.lag.ms=0
log.cleaner.threads=1
log.cleanup.policy=[delete]
log.dir=/tmp/kafka-logs
log.dirs=c:/kafka/logs
log.flush.interval.messages=9223372036854775807
log.flush.interval.ms=null
log.flush.offset.checkpoint.interval.ms=60000
log.flush.scheduler.interval.ms=9223372036854775807
log.flush.start.offset.checkpoint.interval.ms=60000
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.message.format.version=1.1-IV0
log.message.timestamp.difference.max.ms=9223372036854775807
log.message.timestamp.type=CreateTime
log.preallocate=false
log.retention.bytes=-1
log.retention.check.interval.ms=300000
log.retention.hours=72
log.retention.minutes=null
log.retention.ms=null
log.roll.hours=168
log.roll.jitter.hours=0
log.roll.jitter.ms=null
log.roll.ms=null
log.segment.bytes=1073741824
log.segment.delete.delay.ms=60000
max.connections.per.ip=2147483647
max.connections.per.ip.overrides=
max.incremental.fetch.session.cache.slots=1000
message.max.bytes=1000012
metric.reporters=[]
metrics.num.samples=2
metrics.recording.level=INFO
metrics.sample.window.ms=30000
min.insync.replicas=1
num.io.threads=12
num.network.threads=64
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.alter.log.dirs.threads=null
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=1440
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=1
offsets.topic.segment.bytes=104857600
password.encoder.cipher.algorithm=AES/CBC/PKCS5Padding
password.encoder.iterations=4096
password.encoder.key.length=128
password.encoder.keyfactory.algorithm=null
password.encoder.old.secret=null
password.encoder.secret=null
port=9092
principal.builder.class=null
producer.purgatory.purge.interval.requests=1000
queued.max.request.bytes=-1
queued.max.requests=500
quota.consumer.default=9223372036854775807
quota.producer.default=9223372036854775807
quota.window.num=11
quota.window.size.seconds=1
replica.fetch.backoff.ms=1000
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.response.max.bytes=10485760
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
replication.quota.window.num=11
replication.quota.window.size.seconds=1
request.timeout.ms=30000
reserved.broker.max.id=1000
sasl.enabled.mechanisms=[GSSAPI]
sasl.jaas.config=null
sasl.kerberos.kinit.cmd=/usr/bin/kinit
sasl.kerberos.min.time.before.relogin=60000
sasl.kerberos.principal.to.local.rules=[DEFAULT]
sasl.kerberos.service.name=null
sasl.kerberos.ticket.renew.jitter=0.05
sasl.kerberos.ticket.renew.window.factor=0.8
sasl.mechanism.inter.broker.protocol=GSSAPI
security.inter.broker.protocol=PLAINTEXT
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
ssl.cipher.suites=[]
ssl.client.auth=none
ssl.enabled.protocols=[TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm=null
ssl.key.password=null
ssl.keymanager.algorithm=SunX509
ssl.keystore.location=null
ssl.keystore.password=null
ssl.keystore.type=JKS
ssl.protocol=TLS
ssl.provider=null
ssl.secure.random.implementation=null
ssl.trustmanager.algorithm=PKIX
ssl.truststore.location=null
ssl.truststore.password=null
ssl.truststore.type=JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms=60000
transaction.max.timeout.ms=900000
transaction.remove.expired.transaction.cleanup.interval.ms=3600000
transaction.state.log.load.buffer.size=5242880
transaction.state.log.min.isr=1
transaction.state.log.num.partitions=50
transaction.state.log.replication.factor=1
transaction.state.log.segment.bytes=104857600
transactional.id.expiration.ms=604800000
unclean.leader.election.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
zookeeper.max.in.flight.requests=10
zookeeper.session.timeout.ms=6000
zookeeper.set.acl=false
zookeeper.sync.time.ms=2000
1 Answer
Depending on the version of Windows you are running, Windows Defender may be enabled by default. If so, try adding an exclusion for the C:\kafka directory.
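On recent Windows versions that exclusion can be added with the built-in Defender cmdlets from an elevated PowerShell session. A minimal sketch, assuming Kafka lives under C:\kafka (the path from the question) and that Windows Defender is the active antivirus:

```shell
# Run in an elevated PowerShell session (requires Windows Defender).
# Exclude the Kafka install/data directory from real-time scanning.
Add-MpPreference -ExclusionPath "C:\kafka"

# Verify the exclusion was registered.
Get-MpPreference | Select-Object -ExpandProperty ExclusionPath
```

If a different antivirus is in use, it will need an equivalent exclusion for whatever directory `log.dirs` points at, since any process that holds a handle on a segment file will trigger the `FileSystemException` seen in the log.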