Every 18–20 hours, the Kafka service keeps failing with log errors.
I have seen many posts suggesting either using double backslashes (i.e. `\\`) in the log path, or deleting the previous logs and starting the Kafka service again.
That does get Kafka started, but it keeps running into the same problem again and again.
How can we fix this permanently so it is production-ready?
Also, is there any way to have a fallback mechanism so that whenever Kafka fails for any reason, it restarts automatically?
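Right now the only fallback I can think of is a crude wrapper loop around the broker start command, roughly like the sketch below (`start_kafka` is just a stand-in for the real start command, e.g. `kafka-server-start.bat server.properties` on Windows; here it simulates two crashes before a successful start):

```shell
#!/bin/sh
# Crude restart-on-failure wrapper (sketch only).
# start_kafka is a placeholder for the real broker start command;
# here it fakes two failed starts followed by a successful one.
attempts=0
start_kafka() {
  attempts=$((attempts + 1))
  # Return failure (non-zero) for the first two attempts, success after that.
  [ "$attempts" -ge 3 ]
}

# Keep retrying until the broker start command succeeds.
until start_kafka; do
  echo "broker exited; restarting (attempt $attempts)" >&2
  sleep 1
done
echo "broker running after $attempts attempts"
```

But this feels like a workaround rather than a proper supervision mechanism, which is why I am asking whether there is a better way.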
Below is the relevant portion of my server.properties:
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=c:\\kafka\\kafka-logs-cos10
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=1
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
# log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000