I want Filebeat to consume messages from a Kafka topic and then send them to Elasticsearch. Is there a way to set all of this up in docker-compose? I can register Logstash as a Kafka consumer, so I figured Filebeat should work too. In the logs I can see that it loads the configuration file, but Filebeat never shows up in Kafka's consumer list, and it doesn't consume any messages even though they are in Kafka. Can anyone point me in the right direction? I assume this should be a fairly basic configuration, but I'm missing something.
docker-compose.yml:
version: '3'
services:
  postgresql:
    image: postgres:latest
    environment:
      - POSTGRES_DB=distillery
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=secret
    ports:
      - "5434:5432"
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - "22181:2181"
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_RETENTION_MS: 10000
      KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 5000
  elasticsearch:
    image: elasticsearch:7.9.2
    depends_on:
      - kafka
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.9.2
    container_name: filebeat
    depends_on:
      - kafka
    volumes:
      - "./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
Contents of filebeat.yml:
filebeat.inputs:
  - type: kafka
    hosts:
      - kafka:29092
    topics: ["progress-raspberry"]
    client_id: "filebeat"
    group_id: "filebeat"
output.elasticsearch:
  hosts: ["localhost:9200"]
1 Answer
The Kafka advertised listener (the address clients connect to) for clients inside the Docker network is PLAINTEXT://kafka:9092, so that is the host/port your Filebeat container needs to connect to. If you were running Filebeat outside a container, you would instead use localhost:29092, the other advertised listener. For more details, see "Connecting to Kafka running in Docker".