kubernetes How do I correctly set up health and liveness probes in Helm?

lf5gs5x2 · published 2022-11-02 in Kubernetes

As a stepping stone to a more complicated problem, I have been following this example step by step: https://blog.gopheracademy.com/advent-2017/kubernetes-ready-service/. The next step I have been trying to learn is how to deploy the Golang service using a Helm chart instead of a makefile.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .ServiceName }}
  labels:
    app: {{ .ServiceName }}
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: {{ .ServiceName }}
    spec:
      containers:
      - name: {{ .ServiceName }}
        image: docker.io/<my Dockerhub name>/{{ .ServiceName }}:{{ .Release }}
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8000
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8000
        resources:
          limits:
            cpu: 10m
            memory: 30Mi
          requests:
            cpu: 10m
            memory: 30Mi
      terminationGracePeriodSeconds: 30
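
For reference, when the probe timing fields are omitted like this, the kubelet falls back to its defaults, so the very first check fires as soon as the container starts. Written out explicitly, the liveness probe above is equivalent to the following (all timing values shown are the kubelet defaults):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 0   # first probe fires immediately after the container starts
  periodSeconds: 10        # probe every 10 seconds
  timeoutSeconds: 1        # each probe must respond within 1 second
  failureThreshold: 3      # restart the container after 3 consecutive failures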

Moving over to Helm, the deployment yaml looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "mychart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8000
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8000
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

However, when I deploy the Helm chart, the probes (which work fine when not using Helm) fail. Specifically, describing the pod gives the error "Warning Unhealthy 16s (x3 over 24s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503". I have clearly set the probes up incorrectly in the Helm chart. How do I translate these probes from one system to the other?
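
One way to narrow this kind of problem down is to render the chart locally and compare the probe spec Helm actually produces against the plain manifest; a quick sketch, assuming the chart lives at ./mychart and uses the standard scaffold labels:

# Render the templates without installing anything and inspect the probes:
helm template mychart ./mychart | grep -B 1 -A 4 -E '(liveness|readiness)Probe'

# After installing, ask the kubelet why it marked the pod unhealthy:
kubectl describe pods -l app.kubernetes.io/name=mychart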

aiqt4smr1#

Solution: what I found was that the problem with the probes in the Helm chart was the initial delay. When I replaced this:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000

with this:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  initialDelaySeconds: 15

the probes stopped failing. Without the delay, the probes were running before the container had fully started up, so they automatically concluded that it was failing; the initial delay gives the service time to come up before the first check fires.
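
As a side note, recent helm create scaffolds avoid hard-coding probes in the template entirely and pull them from values.yaml instead, which makes the initial delay tunable per environment without touching the template. A minimal sketch of that pattern, reusing this question's ports and paths:

values.yaml:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  initialDelaySeconds: 15

templates/deployment.yaml (inside the container spec):

          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}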
