Kubernetes application pod stays in Pending state

pn9klfpd posted on 2023-01-20 in Kubernetes

I have deployed an application, but the pod stays in Pending state.

$ kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
server1       Ready    control-plane   8d    v1.24.9
server2       Ready    worker1         8d    v1.24.9
server3       Ready    worker2         8d    v1.24.9
server4       Ready    worker3         8d    v1.24.9
$ kubectl get all -n jenkins
NAME                          READY   STATUS    RESTARTS   AGE
pod/jenkins-6dc9f97c7-ttp64   0/1     Pending   0          7m42s
$ kubectl describe pods jenkins-6dc9f97c7-ttp64 -n jenkins

Events:
Type     Reason            Age    From               Message
----     ------            ----   ----               -------
Warning  FailedScheduling  5m42s  default-scheduler  0/4 nodes are available: 3 node(s) had volume node affinity conflict, 4 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.

The event history confirms that the FailedScheduling error is the cause.
My deployment.yml forces the pod onto the master node:

spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists

Since node-role.kubernetes.io/master has been deprecated in favor of node-role.kubernetes.io/control-plane since Kubernetes 1.20+, I updated it as shown below, but the pod still shows as Pending.

spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
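
For context, these scheduling fields sit under the pod template of the Deployment; a minimal sketch of the full manifest, where the labels, image and ports are placeholders added for illustration (not from my actual file):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins                # name/namespace taken from the pod shown above
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins             # labels are assumptions
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts   # placeholder image
        ports:
        - containerPort: 8080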

My PersistentVolume.yml contains the following:

...
.....
..........
  local:
    path: /ksdata/apps/nodejs/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - server1
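
For context, the full local PV is roughly shaped like the sketch below (the name, capacity, access mode and storageClassName are assumptions, not copied from my file); because it is a local volume pinned to server1, a pod that claims it can only be scheduled on server1:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv                    # name is an assumption
spec:
  capacity:
    storage: 10Gi                     # capacity is an assumption
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage     # assumption; must match the PVC
  local:
    path: /ksdata/apps/nodejs/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - server1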
Label details:
$ kubectl get nodes --show-labels
NAME      STATUS   ROLES           AGE   VERSION   LABELS
server1   Ready    control-plane   9d    v1.24.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
server2   Ready    worker1         9d    v1.24.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/worker1=worker
server3   Ready    worker2         9d    v1.24.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux,node-role.kubernetes.io/worker2=worker
server4   Ready    worker3         9d    v1.24.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux,node-role.kubernetes.io/worker3=worker
$ kubectl describe node | egrep -i taint
Taints:             key=value:NoSchedule
Taints:             <none>
Taints:             <none>
Taints:             <none>
kadbb4591

You have 4 nodes in the cluster. One of them is normally the master node, on which application pods are not scheduled, which leaves 3 worker nodes.
On the worker nodes, your deployment sets a node affinity/selector that they don't satisfy, so the pod cannot be scheduled on them and stays Pending.
Also check the PVC; in most cases it is failing to be created/bound.
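
For completeness, the PVC can only bind if its storageClassName matches the PV and the requested size fits within the PV's capacity; a minimal sketch (the claim name, class and size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim              # claim name is an assumption
  namespace: jenkins
spec:
  storageClassName: local-storage     # must match the PV's storageClassName (assumption)
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                   # must not exceed the PV's capacity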

Update:

Remove the taint from the control-plane node:

kubectl taint node server1 key=value:NoSchedule-
Then set this in the main deployment file:
spec:
    nodeSelector:
        kubernetes.io/hostname: "server1"

If the taint is kept and not removed, try a toleration; otherwise adjust using the node selector:

tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
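
Note that the taint actually shown on server1 above is key=value:NoSchedule, so if you keep that taint, the toleration needs to match that key/value; a sketch combining it with the hostname nodeSelector:

spec:
  nodeSelector:
    kubernetes.io/hostname: "server1"
  tolerations:
  - key: "key"                        # the key=value pair seen in `kubectl describe node`
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"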
