What is wrong with this Logstash configuration?

nr7wwzry  asked on 2021-06-08  in Kafka

We use Logstash (2.3.3) with the newer Kafka input plugin (3.0.2) to listen to several Kafka topics, and then route each topic's data to a specific folder in an S3 bucket based on the topic name (added as metadata). However, with the current configuration only the data for the first S3 output actually ends up in its S3 bucket/folder.
Can anyone tell me what is going wrong here? I'm fairly sure there is a better way to write this configuration that would meet our requirements!

input
{
 kafka
 {
  bootstrap_servers => "10.0.0.5:9093,10.0.1.5:9093"
  topics => "topic"
  codec => "json"
  ssl => true
  ssl_keystore_location => "/opt/logstash/ssl/server.keystore.jks"
  ssl_keystore_password => "<snipped>"
  ssl_truststore_location => "/opt/logstash/ssl/server.truststore.jks"
  ssl_truststore_password => "<snipped>"
  add_field => { "[@metadata][topic]" => "topic" }
 }
 kafka
 {
  bootstrap_servers => "10.0.0.5:9093,10.0.1.5:9093"
  topics => "topic-test"
  codec => "json"
  ssl => true
  ssl_keystore_location => "/opt/logstash/ssl/server.keystore.jks"
  ssl_keystore_password => "<snipped>"
  ssl_truststore_location => "/opt/logstash/ssl/server.truststore.jks"
  ssl_truststore_password => "<snipped>"
  add_field => { "[@metadata][topic]" => "topic-test" }
 }
 kafka
 {
  bootstrap_servers => "10.0.0.5:9093,10.0.1.5:9093"
  topics => "daily_batch"  
  ssl => true
  ssl_keystore_location => "/opt/logstash/ssl/server.keystore.jks"
  ssl_keystore_password => "<snipped>"
  ssl_truststore_location => "/opt/logstash/ssl/server.truststore.jks"
  ssl_truststore_password => "<snipped>"
  add_field => { "[@metadata][topic]" => "daily_batch" }
 }
}

output
{
 if [@metadata][topic] == "topic"
 {
  s3
    {
     region => "us-east-1"
     bucket => "our-s3-storage/topic"
     size_file => 20971520
     temporary_directory => "/logstash"
     use_ssl => "true"
     codec => json_lines     
    }
 }
 if [@metadata][topic] == "topic-test"
 {
  s3
    {
     region => "us-east-1"
     bucket => "our-s3-storage/topic-test"
     size_file => 2097152
     temporary_directory => "/logstash"
     use_ssl => "true"
     codec => json_lines     
    }
 }
 if [@metadata][topic] == "daily_batch"
 {
  s3
    {
     region => "us-east-1"
     bucket => "our-s3-storage/daily_batch"
     size_file => 41943
     temporary_directory => "/logstash"
     use_ssl => "true"
    }
 }
}
erhoui1w1#

With Logstash 5.0 you can handle all of the topics with a single Kafka input, using the topics setting to pass an array of topic names:

topics => ["topic", "topic-test", "daily_batch"]

However, this cannot be done with Logstash 2.3, which does not have a topics setting.
You can definitely condense the outputs by using Logstash's ability to interpolate field values into configuration strings on a per-event basis. To make sure bad data does not end up creating odd one-off bucket paths, first check that the topic is in a known list:

output
{
 if [@metadata][topic] in ["topic", "topic-test", "daily_batch"]
 {
  s3
    {
     region => "us-east-1"
     bucket => "our-s3-storage/%{[@metadata][topic]}"
     size_file => 41943
     temporary_directory => "/logstash"
     use_ssl => "true"
    }
 }
}
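For reference, here is a minimal sketch of what the matching single Kafka input could look like on Logstash 5.0, assuming the topics array and the decorate_events option of the newer Kafka input plugin (option names, in particular the SSL ones, may differ between plugin versions; the broker addresses and keystore paths are copied from the question):

input
{
 kafka
 {
  bootstrap_servers => "10.0.0.5:9093,10.0.1.5:9093"
  topics => ["topic", "topic-test", "daily_batch"]
  codec => "json"
  ssl => true
  ssl_keystore_location => "/opt/logstash/ssl/server.keystore.jks"
  ssl_keystore_password => "<snipped>"
  ssl_truststore_location => "/opt/logstash/ssl/server.truststore.jks"
  ssl_truststore_password => "<snipped>"
  # decorate_events exposes the Kafka metadata for each event, including
  # the topic name, at [@metadata][kafka][topic]
  decorate_events => true
 }
}

With a single input like this there is no per-input add_field, so the output condition and bucket interpolation would reference [@metadata][kafka][topic] instead of the hand-added [@metadata][topic].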
