Ignoring repeated exception messages from a third-party lib in logging

yvfmudvl · published 2021-06-07 in Kafka
Follow (0) | Answers (2) | Views (412)

I need to deal with duplicates of a specific exception in my logs.
I use slf4j with logback for logging in my application. The application uses several external services (a database, Apache Kafka, third-party libs, etc.). When the connection to such a service is lost, I get an exception such as:

[kafka-producer-network-thread | producer-1] WARN  o.a.kafka.common.network.Selector - Error in I/O with localhost/127.0.0.1
java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_45]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_45]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:238) ~[kafka-clients-0.8.2.0.jar:na]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [kafka-clients-0.8.2.0.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [kafka-clients-0.8.2.0.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [kafka-clients-0.8.2.0.jar:na]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]

The problem is that I receive this message every second. These exception messages flood my log file, so within n hours the log grows by several GB.
I would like a log message about this exception only once every 1-5 minutes. Is there a way to ignore duplicates of an exception in the log file?
Possible solutions:
Ignore all logging for the specific package and class [bad, because I might skip important messages]
Use http://logback.qos.ch/manual/filters.html#duplicatemessagefilter [bad, because I can only set the allowed-repetitions and cache-size properties; it matches all messages, while I need to match only a specific exception]
Write a custom filter
Or maybe you know of an already implemented solution?
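For reference, the built-in DuplicateMessageFilter from the second option is configured roughly as follows (per the logback manual, AllowedRepetitions and CacheSize are its only properties, which is exactly the limitation described above):

```xml
<configuration>
  <!-- Suppresses ANY message repeated more than AllowedRepetitions times,
       regardless of logger or exception type. -->
  <turboFilter class="ch.qos.logback.classic.turbo.DuplicateMessageFilter">
    <AllowedRepetitions>2</AllowedRepetitions>
    <CacheSize>100</CacheSize>
  </turboFilter>
</configuration>
```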

s8vozzvw1#

It is quite easy to write a new TurboFilter and implement whatever logic you need to reject specific logging events.
I registered the new filter with the following logback.xml configuration:

<turboFilter class="package.DuplicationTimeoutTurboFilter">
    <MinutesToBlock>3</MinutesToBlock>
    <KeyPattern>
        <loggerClass>org.apache.kafka.common.network.Selector</loggerClass>
        <message>java.net.ConnectException: Connection refused: no further information</message>
    </KeyPattern>
</turboFilter>

Design and implementation:

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.turbo.TurboFilter;
import ch.qos.logback.core.spi.FilterReply;
import org.slf4j.Marker;

import java.time.LocalDateTime;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class DuplicationTimeoutTurboFilter extends TurboFilter {

    private static final int CLEAN_UP_THRESHOLD = 1000;
    private ConcurrentHashMap<KeyPattern, LocalDateTime> recentlyMatchedPatterns = new ConcurrentHashMap<>();

    private Set<KeyPattern> ignoringPatterns = new HashSet<>();

    private long minutesToBlock = 3L;

    @Override
    public FilterReply decide(Marker marker, Logger logger, Level level, String format, Object[] params, Throwable t) {
        String rawLogMessage = format + Arrays.toString(params) + Objects.toString(t);  // sometimes the throwable is passed inside the params array

        Set<KeyPattern> matchedIgnoringSet = ignoringPatterns.stream()
                .filter(key -> match(key, logger, rawLogMessage))
                .collect(Collectors.toSet());

        if (!matchedIgnoringSet.isEmpty() && isLoggedRecently(matchedIgnoringSet)) {
            return FilterReply.DENY;
        }

        return FilterReply.NEUTRAL;
    }

    private boolean match(KeyPattern keyPattern, Logger logger, String rawText) {
        String loggerClass = keyPattern.getLoggerClass();
        String messagePattern = keyPattern.getMessage();
        return loggerClass.equals(logger.getName()) && rawText.contains(messagePattern);
    }

    private boolean isLoggedRecently(Set<KeyPattern> matchedIgnoredList) {
        for (KeyPattern pattern : matchedIgnoredList) {
            LocalDateTime now = LocalDateTime.now();

            LocalDateTime lastLogTime = recentlyMatchedPatterns.putIfAbsent(pattern, now);
            if (lastLogTime == null) {
                return false;
            }

            LocalDateTime blockedTillTime = lastLogTime.plusMinutes(minutesToBlock);
            if (blockedTillTime.isAfter(now)) {
                return true;
            } else {
                // block window has expired: record the new timestamp and let the event through
                recentlyMatchedPatterns.put(pattern, now);
                cleanupIfNeeded();
                return false;
            }
        }
        return true;
    }

    private void cleanupIfNeeded() {
        if (recentlyMatchedPatterns.size() > CLEAN_UP_THRESHOLD) {
            LocalDateTime cutoff = LocalDateTime.now().minusMinutes(minutesToBlock * 2);
            // evict entries that have not matched recently
            recentlyMatchedPatterns.values().removeIf(lastLogTime -> lastLogTime.isBefore(cutoff));
        }
    }

    public long getMinutesToBlock() {
        return minutesToBlock;
    }

    public void setMinutesToBlock(long minutesToBlock) {
        this.minutesToBlock = minutesToBlock;
    }

    public void addKeyPattern(KeyPattern keyPattern) {
        ignoringPatterns.add(keyPattern);
    }

    public static class KeyPattern {
        private String loggerClass;
        private String message;

        public String getLoggerClass() {
            return loggerClass;
        }

        // setters are required so logback's Joran configurator can
        // populate the nested <loggerClass> and <message> elements
        public void setLoggerClass(String loggerClass) {
            this.loggerClass = loggerClass;
        }

        public String getMessage() {
            return message;
        }

        public void setMessage(String message) {
            this.message = message;
        }

        // equals/hashCode are required because instances are used as
        // keys in the ConcurrentHashMap above
        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof KeyPattern)) return false;
            KeyPattern that = (KeyPattern) o;
            return Objects.equals(loggerClass, that.loggerClass)
                    && Objects.equals(message, that.message);
        }

        @Override
        public int hashCode() {
            return Objects.hash(loggerClass, message);
        }
    }
}
k4emjkb12#

I think your best option is to extend the DuplicateMessageFilter you already found. It is not final, and it would be easy to:
implement a new TurboFilter with a filtering method based on the class name, exception type, or whatever else you want to base the initial decision on
then delegate to the parent class for the duplication check
The available parameters:

public FilterReply decide(Marker marker, Logger logger, Level level,
  String format, Object[] params, Throwable t) {

include both the Throwable and the Logger.
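A minimal sketch of this approach might look like the following. Only DuplicateMessageFilter and the decide signature come from logback; the class name SelectiveDuplicateFilter, the throwableClassName property, and the cause-chain walk are my own assumptions:

```java
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.turbo.DuplicateMessageFilter;
import ch.qos.logback.core.spi.FilterReply;
import org.slf4j.Marker;

public class SelectiveDuplicateFilter extends DuplicateMessageFilter {

    // Fully qualified name of the exception to rate-limit; settable from logback.xml.
    private String throwableClassName = "java.net.ConnectException";

    // Pure helper: does the throwable, or any of its causes, match the configured class?
    static boolean matchesThrowable(Throwable t, String className) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (cur.getClass().getName().equals(className)) {
                return true;
            }
        }
        return false;
    }

    @Override
    public FilterReply decide(Marker marker, Logger logger, Level level,
                              String format, Object[] params, Throwable t) {
        // Only the targeted exception goes through the duplicate check;
        // everything else is logged normally.
        if (!matchesThrowable(t, throwableClassName)) {
            return FilterReply.NEUTRAL;
        }
        return super.decide(marker, logger, level, format, params, t);
    }

    public void setThrowableClassName(String throwableClassName) {
        this.throwableClassName = throwableClassName;
    }
}
```

One caveat with delegating to DuplicateMessageFilter: it counts repetitions against an LRU cache rather than a time window, so a message that keeps repeating may stay suppressed until its cache entry is evicted, not just for a few minutes.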
