Kubernetes: how do I correctly configure OpenSearch as a Logstash output? I am getting a host-unreachable error

z9smfwbn · posted 2022-12-03 in Kubernetes

I am currently connected to a Contabo-hosted Kubernetes cluster, where I am running Kafka and OpenSearch/OpenSearch Dashboards deployments. I am trying to run Logstash so I can move data from a Kafka topic into OpenSearch. This is the image I use for Logstash: https://hub.docker.com/r/opensearchproject/logstash-oss-with-opensearch-output-plugin. Below are my Logstash configuration and my OpenSearch configuration. When I deploy Logstash I successfully consume data from the Kafka topic, so my input plugin works fine, but the output does not: I cannot get data from Logstash into OpenSearch. Here are the logs from the Logstash pod: https://justpaste.it/620g4
This is the output of `kubectl get services`:

NAME                                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
dashboards-opensearch-dashboards      ClusterIP      10.96.114.252    <none>        5601/TCP                              5d20h
grafana                               ClusterIP      10.107.83.28     <none>        3000/TCP                              44h
logstash-service                      LoadBalancer   10.102.132.114   <pending>     5044:31333/TCP                        28m
loki                                  ClusterIP      10.99.30.246     <none>        3100/TCP                              43h
loki-headless                         ClusterIP      None             <none>        3100/TCP                              43h
my-cluster-kafka-0                    NodePort       10.101.196.50    <none>        9094:32000/TCP                        53m
my-cluster-kafka-1                    NodePort       10.96.247.75     <none>        9094:32001/TCP                        53m
my-cluster-kafka-2                    NodePort       10.98.203.5      <none>        9094:32002/TCP                        53m
my-cluster-kafka-bootstrap            ClusterIP      10.111.178.24    <none>        9091/TCP,9092/TCP,9093/TCP            53m
my-cluster-kafka-brokers              ClusterIP      None             <none>        9090/TCP,9091/TCP,9092/TCP,9093/TCP   53m
my-cluster-kafka-external-bootstrap   NodePort       10.109.134.74    <none>        9094:32100/TCP                        53m
my-cluster-zookeeper-client           ClusterIP      10.98.157.173    <none>        2181/TCP                              54m
my-cluster-zookeeper-nodes            ClusterIP      None             <none>        2181/TCP,2888/TCP,3888/TCP            54m
opensearch-cluster-master             ClusterIP      10.98.55.121     <none>        9200/TCP,9300/TCP                     19h
opensearch-cluster-master-headless    ClusterIP      None             <none>        9200/TCP,9300/TCP                     19h
prometheus-operated                   ClusterIP      None             <none>        9090/TCP                              25m
prometheus-operator                   ClusterIP      None             <none>        8080/TCP                              50m

What am I doing wrong? How do I establish this connection?

txu3uszq1#

I figured it out. I believe it expects an SSL certificate, which is why the connection was being refused. The way I "fixed" it (since I don't need an SSL certificate for this project right now) was to change my Logstash configuration as follows:

logstash.conf: |
    input {
      kafka {
        codec => json
        bootstrap_servers => "10.111.178.24:9092"
        topics => ["t_events"]
      }
    }
    output {
      opensearch {
        hosts       => ["https://10.102.102.109:9200"]
        ssl_certificate_verification => false
        user        => "admin"
        password    => "admin"
        index       => "logstash-logs-%{+YYYY.MM.dd}"
      }
    }

So I added the `ssl_certificate_verification => false` line to the config file, which let me connect from Logstash to OpenSearch and send data. I still get the encryption side of things via HTTPS, but without SSL certificate verification, and I'm fine with that for this project.
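One caveat worth noting: a hard-coded ClusterIP in `hosts` breaks if the Service is ever recreated, since the cluster assigns a new IP. A more robust variant is to address OpenSearch by its Service DNS name; a minimal sketch, assuming the `opensearch-cluster-master` Service on port 9200 from the `kubectl get services` output above:

```
output {
  opensearch {
    # The Service name resolves via cluster DNS and survives Service re-creation,
    # unlike a hard-coded ClusterIP.
    hosts       => ["https://opensearch-cluster-master:9200"]
    ssl_certificate_verification => false
    user        => "admin"
    password    => "admin"
    index       => "logstash-logs-%{+YYYY.MM.dd}"
  }
}
```

If Logstash runs in a different namespace than OpenSearch, the fully qualified form `opensearch-cluster-master.<namespace>.svc.cluster.local` (with your actual namespace substituted) should be used instead.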
