VerneMQ on a Kubernetes cluster

lx0bsm1f  posted 2023-11-17 in Kubernetes

I am trying to install VerneMQ on a Kubernetes cluster on Oracle OCI using a Helm chart.
The Kubernetes infrastructure seems to be up and running, and I can deploy my custom microservices without any problems.
I am following the instructions at https://github.com/vernemq/docker-vernemq
The steps so far:

  • helm install --name="broker" ./ from the helm/vernemq directory

The output is:

NAME:   broker
LAST DEPLOYED: Fri Mar  1 11:07:37 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/RoleBinding
NAME            AGE
broker-vernemq  1s

==> v1/Service
NAME                     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
broker-vernemq-headless  ClusterIP  None          <none>       4369/TCP  1s
broker-vernemq           ClusterIP  10.96.120.32  <none>       1883/TCP  1s

==> v1/StatefulSet
NAME            DESIRED  CURRENT  AGE
broker-vernemq  3        1        1s

==> v1/Pod(related)
NAME              READY  STATUS             RESTARTS  AGE
broker-vernemq-0  0/1    ContainerCreating  0         1s

==> v1/ServiceAccount
NAME            SECRETS  AGE
broker-vernemq  1        1s

==> v1/Role
NAME            AGE
broker-vernemq  1s

NOTES:
1. Check your VerneMQ cluster status:
  kubectl exec --namespace default broker-vernemq-0 /usr/sbin/vmq-admin cluster show

2. Get VerneMQ MQTT port
  echo "Subscribe/publish MQTT messages there: 127.0.0.1:1883"
  kubectl port-forward svc/broker-vernemq 1883:1883
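
As a quick smoke test (not part of the chart output), once the port-forward is running an MQTT message can be published with the mosquitto clients; the user, password and topic below are placeholders and only needed if anonymous access is disabled:

# publish a test message through the forwarded port (requires mosquitto-clients)
mosquitto_pub -h 127.0.0.1 -p 1883 -u testuser -P testpassword -t test/topic -m "hello"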

But when I check with
kubectl exec --namespace default broker-vernemq-0 vmq-admin cluster show
I get

Node '[email protected]' not responding to pings.
command terminated with exit code 1


I think there is a problem with the subdomain (there is nothing between the two dots).
With this command

kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns


the last log line is

I0301 10:07:38.366826       1 dns.go:552] Could not find endpoints for service "broker-vernemq-headless" in namespace "default". DNS records will be created once endpoints show up.
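
For reference, whether the headless service ever gets endpoints (and whether the pod is Ready, which is what those endpoints depend on) can be checked with standard kubectl commands; a diagnostic sketch using the names from the Helm output above:

# the kube-dns message above means this list is still empty
kubectl get endpoints broker-vernemq-headless --namespace default

# a pod that never becomes Ready is never added to the endpoints
kubectl describe pod broker-vernemq-0 --namespace default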


I also tried this custom YAML:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: default
  name: vernemq
  labels:
    app: vernemq
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:latest
        imagePullPolicy: Always
        ports:
          - containerPort: 1883
            name: mqtt
          - containerPort: 8883
            name: mqtts
          - containerPort: 4369
            name: epmd
        env:
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
          value: "off"
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
          value: "/etc/vernemq-passwd/vmq.passwd"
        volumeMounts:
          - name: vernemq-passwd
            mountPath: /etc/vernemq-passwd
            readOnly: true

      volumes:
      - name: vernemq-passwd
        secret:
          secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: epmd
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: ClusterIP
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 8883
    name: mqtts
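
Note that the vernemq-passwd secret referenced by this StatefulSet has to exist before the pods can start; a minimal sketch of creating it from a local vmq.passwd file, where the secret key must match the path set in DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE:

# the secret key vmq.passwd becomes the file /etc/vernemq-passwd/vmq.passwd inside the pod
kubectl create secret generic vernemq-passwd --from-file=vmq.passwd=./vmq.passwd --namespace default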


Any suggestions?
Thanks a lot
Jack


6l7fqoea1#

This seems to be a bug in the Docker image. The suggestion on GitHub is to build your own image or use a later VerneMQ image (after 1.6.x), where it is fixed.
Suggestion: https://github.com/vernemq/docker-vernemq/pull/92
Pull request with a possible fix: https://github.com/vernemq/docker-vernemq/pull/97
Edit:
I just got it working without Helm, using kubectl create -f ./cluster.yaml, where cluster.yaml is the following:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vernemq
  namespace: default
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:latest
        ports:
        - containerPort: 1883
          name: mqttlb
        - containerPort: 1883
          name: mqtt
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100
        - containerPort: 9101
        - containerPort: 9102
        - containerPort: 9103
        - containerPort: 9104
        - containerPort: 9105
        - containerPort: 9106
        - containerPort: 9107
        - containerPort: 9108
        - containerPort: 9109
        env:
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
           fieldRef:
             fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
          value: "1"
        # only allow anonymous access for development / testing purposes!
        # - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
        #   value: "on"
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: epmd
  - port: 44053
    name: vmq
---
apiVersion: v1
kind: Service
metadata:
  name: mqttlb
  labels:
    app: mqttlb
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqttlb
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: NodePort
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["endpoints", "deployments", "replicasets", "pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
subjects:
- kind: ServiceAccount
  name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader

It takes a few seconds for the pods to become ready.
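
Once all three pods are Running, cluster membership can be verified the same way as in the question (a quick sketch, assuming the default namespace):

kubectl exec vernemq-0 -- vmq-admin cluster show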


sshcrbum2#

Try setting the environment variables "DOCKER_VERNEMQ_KUBERNETES_APP_LABEL" and "DOCKER_VERNEMQ_KUBERNETES_NAMESPACE".
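
These are the same variables that appear in the StatefulSets above; a minimal sketch of the corresponding container env entries, with the namespace read from the downward API:

        env:
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace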


vhmi4jdf3#

The default name is vernemq.
You can override it with the environment variable DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR, passing the value as app=name:

DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR="app={Name}"

For example:

DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR="app=demo"
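
In a StatefulSet like the ones above this is just another container env entry; a sketch in which the value has to match the pods' app label:

        - name: DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR
          value: "app=vernemq"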


Reference:
Dockerfile from VerneMQ
