Ingesting a JSON log file with Filebeat

mznpcxlj · published 2021-06-15 in ElasticSearch

I have a log file in which every line is a JSON object. I'd like to ship this log file directly to Elasticsearch and have Elasticsearch ingest the data.
I'm fairly sure I need to declare a specific template for this, but I don't know how, and would appreciate some guidance on doing it right.


jgovgodb #1


# Filebeat Configuration

filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -

      paths:
        #- /var/log/*.log
        - ${applicationLogsPath}
      document_type: application_logs

      # Multiline can be used for log messages spanning multiple lines.
      multiline:

        # The regexp pattern that has to be matched. Filebeat's multiline
        # option takes a plain regular expression; Grok patterns such as
        # %{TIMESTAMP_ISO8601} are not supported here, so match an
        # ISO8601-style timestamp explicitly:
        pattern: '^\d{4}-\d{2}-\d{2}'

        # Defines if the pattern set under pattern should be negated or not. Default is false.
        negate: true

        # Match can be set to "after" or "before". It defines whether lines
        # should be appended to a line that did (or, with negate, did not)
        # match the pattern.
        # Note: "after" is the equivalent of "previous" and "before" is the
        # equivalent of "next" in Logstash.
        match: after

    # Additional prospector
    -
      paths:
        - ${iisLogsPath}
      document_type: iis_logs

# Configure what outputs to use when sending the data collected by the beat.

# Multiple outputs may be used.

output:

  ### Elasticsearch as output
  elasticsearch:
    # The elasticsearch hosts
    hosts: ["${elasticsearchHost}:9200"]

    # Number of workers per Elasticsearch host.
    #worker: 1

    # The maximum number of events to bulk into a single batch window. The
    # default is 2048.
    #bulk_max_size: 2048

This is the default configuration template I use to ingest logs into Elasticsearch via Filebeat. You can also send the logs to Logstash, filter them there to capture the necessary information, and then have Logstash forward them on to Elasticsearch.
Let me know if you need anything else.
Thanks,
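Since every line in the question's log file is already a JSON object, Filebeat can decode it directly instead of treating it as plain text. A minimal sketch, assuming Filebeat 5.x or later (where prospectors gained `json` options); the path is a placeholder to adjust:

```yaml
filebeat.prospectors:
  - paths:
      - /var/log/app/*.json   # hypothetical path; point at your JSON log file
    json:
      # Put the decoded JSON fields at the top level of the event
      # instead of nesting them under a "json" key.
      keys_under_root: true
      # If a line is not valid JSON, record the parse error in an
      # error field instead of silently dropping information.
      add_error_key: true
      # Let fields from the JSON document overwrite Filebeat's own
      # fields (e.g. "message") when names collide.
      overwrite_keys: true

output.elasticsearch:
  hosts: ["localhost:9200"]
```

With this in place the multiline settings above are usually unnecessary, since each JSON object occupies a single line.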
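On the template the question asks about: Elasticsearch will infer mappings automatically, but for explicit control you can register an index template before shipping data, e.g. with `PUT _template/filebeat-json`. A sketch in the legacy (pre-7.x) template syntax, matching the era of the config above; the field names (`level`, `message`) are assumptions to replace with your actual JSON keys:

```json
{
  "template": "filebeat-*",
  "mappings": {
    "application_logs": {
      "properties": {
        "@timestamp": { "type": "date" },
        "level":      { "type": "keyword" },
        "message":    { "type": "text" }
      }
    }
  }
}
```

Any index whose name matches `filebeat-*` will then pick up these mappings at creation time.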
