Kubernetes livenessProbe: why do some containers stop as failed while others succeed?

jdzmm42g posted on 2022-12-03 in Kubernetes

Following up on this question: I have a scheduled cron job and a never-ending container in the same pod. To end the never-ending container once the cron job has finished its work, I use a liveness probe.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: pod-failed
spec:
  schedule: "*/10 * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 300
      activeDeadlineSeconds: 300
      backoffLimit: 4
      template:
        spec:
          containers:
          - name: docker-http-server
            image: katacoda/docker-http-server:latest
            ports:
            - containerPort: 80
            volumeMounts:
            - mountPath: /cache
              name: cache-volume
            livenessProbe:
              exec:
                command:
                - sh
                - -c
                - if test -f "/cache/stop"; then exit 1; fi;
              initialDelaySeconds: 5
              periodSeconds: 5
          - name: busy
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - sh
            - -c
            args:
            - echo start > /cache/start; sleep 15; echo stop >  /cache/stop; 
            volumeMounts:
            - mountPath: /cache
              name: cache-volume
          restartPolicy: Never
          volumes:
          - name: cache-volume
            emptyDir:
              sizeLimit: 10Mi

As you can see, the cron job writes the /cache/stop file and the never-ending container gets stopped. The problem is that with some images the never-ending container ends up as failed when it is stopped. Is there a way to make every container stop with a success status?

Name:                     pod-failed-27827190
Namespace:                default
Selector:                 controller-uid=608efa7c-53cf-4978-9136-9fec772c1c6d
Labels:                   controller-uid=608efa7c-53cf-4978-9136-9fec772c1c6d
                          job-name=pod-failed-27827190
Annotations:              batch.kubernetes.io/job-tracking: 
Controlled By:            CronJob/pod-failed
Parallelism:              1
Completions:              1
Completion Mode:          NonIndexed
Start Time:               Mon, 28 Nov 2022 11:30:00 +0100
Active Deadline Seconds:  300s
Pods Statuses:            0 Active (0 Ready) / 0 Succeeded / 5 Failed
Pod Template:
  Labels:  controller-uid=608efa7c-53cf-4978-9136-9fec772c1c6d
           job-name=pod-failed-27827190
  Containers:
   docker-http-server:
    Image:        katacoda/docker-http-server:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Liveness:     exec [sh -c if test -f "/cache/stop"; then exit 1; fi;] delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /cache from cache-volume (rw)
   busy:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
    Args:
      echo start > /cache/start; sleep 15; echo stop >  /cache/stop;
    Environment:  <none>
    Mounts:
      /cache from cache-volume (rw)
  Volumes:
   cache-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  10Mi
Events:
  Type     Reason                Age   From            Message
  ----     ------                ----  ----            -------
  Normal   SuccessfulCreate      2m5s  job-controller  Created pod: pod-failed-27827190-8tqxk
  Normal   SuccessfulCreate      102s  job-controller  Created pod: pod-failed-27827190-4gj2s
  Normal   SuccessfulCreate      79s   job-controller  Created pod: pod-failed-27827190-5wgfg
  Normal   SuccessfulCreate      56s   job-controller  Created pod: pod-failed-27827190-lzv8k
  Normal   SuccessfulCreate      33s   job-controller  Created pod: pod-failed-27827190-fr8v5
  Warning  BackoffLimitExceeded  9s    job-controller  Job has reached the specified backoff limit

As shown above, katacoda/docker-http-server:latest ends up failed when the liveness probe kills it. This does not happen with nginx, for example:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: pod-failed
spec:
  schedule: "*/10 * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 300
      activeDeadlineSeconds: 300
      backoffLimit: 4
      template:
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
            volumeMounts:
            - mountPath: /cache
              name: cache-volume
            livenessProbe:
              exec:
                command:
                - sh
                - -c
                - if test -f "/cache/stop"; then exit 1; fi;
              initialDelaySeconds: 5
              periodSeconds: 5
          - name: busy
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - sh
            - -c
            args:
            - echo start > /cache/start; sleep 15; echo stop >  /cache/stop; 
            volumeMounts:
            - mountPath: /cache
              name: cache-volume
          restartPolicy: Never
          volumes:
          - name: cache-volume
            emptyDir:
              sizeLimit: 10Mi

Of course, the actual never-ending image I am pulling ends up failed, and I have no control over that image. Is there a way to force a success status for the job/pod?


dy2hfwbg1#

It depends on the exit code of the container's main process. When Kubernetes wants to stop a container, the container receives a TERM signal, giving it a chance to terminate gracefully. This also applies when the reason is a failing liveness probe. My guess is that nginx exits with code 0 when it receives TERM, while your katacoda http server returns a non-zero code. Looking at the documentation of Go's ListenAndServe method, it clearly states that it ends with a non-nil error: https://pkg.go.dev/net/http#Server.ListenAndServe
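To illustrate the point, here is a minimal sketch (assuming a busybox image and the same cache-volume mount as in your manifest; the container name is made up) of a sidecar whose shell traps TERM and exits 0, so that the kill triggered by the failing liveness probe is recorded as a success rather than a failure:

          - name: term-friendly
            image: busybox
            command:
            - sh
            - -c
            args:
            # exit 0 as soon as TERM arrives, so the liveness-probe kill ends this container successfully
            - trap "exit 0" TERM; while true; do sleep 1; done
            livenessProbe:
              exec:
                command:
                - sh
                - -c
                - if test -f "/cache/stop"; then exit 1; fi;
              initialDelaySeconds: 5
              periodSeconds: 5
            volumeMounts:
            - mountPath: /cache
              name: cache-volume

If you do not control the image's signal handling, though, the cleaner option is to wrap its start command, as described next.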
You can override the container's default command with a shell script that starts the application and then waits until the stop file is written:

containers:
  - name: docker-http-server
    image: katacoda/docker-http-server:latest
    command:
      - "sh"
      - "-c"
      - "/app & while true; do if [ -f /cache/stop ]; then exit 0; fi; sleep 1; done;"

Here, "/app" is the start command of the katacoda http server container.
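For completeness, a sketch of how the whole container entry could look with this wrapper. The path /app is taken from the snippet above and is an assumption; check the image's original ENTRYPOINT/CMD (for example with docker inspect) to confirm the real start command, and note that this only works if the image ships a shell. Since the wrapper itself exits 0 once /cache/stop appears, the livenessProbe on this container is no longer needed:

containers:
  - name: docker-http-server
    image: katacoda/docker-http-server:latest
    ports:
      - containerPort: 80
    command:
      - "sh"
      - "-c"
      # start the server in the background, then poll for the stop file and exit 0
      - "/app & while true; do if [ -f /cache/stop ]; then exit 0; fi; sleep 1; done;"
    volumeMounts:
      - mountPath: /cache
        name: cache-volume

With this wrapper both containers terminate with exit code 0, the pod phase becomes Succeeded, and the job no longer hits its backoffLimit.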
