Which jobs are flaking?
- ci-kubernetes-e2e-gci-gce-flaky
Which tests are flaking?
3 different tests, related to SIG Storage and SIG API Machinery.
See:
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-flaky/1661222001538240512
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-flaky/1660677945099816960
Since when has it been flaking?
05-09 23:58 CST
Testgrid link
https://testgrid.k8s.io/sig-storage-kubernetes#gce-flaky
Reason for failure (if possible)
Storage:
May 24 04:35:44.167: INFO: Failed inside E2E framework:
    k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodRunningInNamespace({0x7f28c44ff9b8, 0xc004034ba0}, {0x72d13d0?, 0xc0005604e0?}, {0xc002fb8cf0, 0x10}, {0xc0019cf410, 0x12}, 0x0?)
        test/e2e/framework/pod/wait.go:459 +0x1a4
    k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodNameRunningInNamespace(...)
        test/e2e/framework/pod/wait.go:443
    k8s.io/kubernetes/test/e2e/framework/pod.CreatePod({0x7f28c44ff9b8, 0xc004034ba0}, {0x72d13d0?, 0xc0005604e0}, {0xc0019cf410, 0x12}, 0x0?, {0xc003a3fcf0, 0x2, 0x2}, ...)
        test/e2e/framework/pod/create.go:87 +0x1c5
    k8s.io/kubernetes/test/e2e/storage.glob..func19.2.1({0x7f28c44ff9b8, 0xc004034ba0})
        test/e2e/storage/nfs_persistent_volume-disruptive.go:181 +0x8a5
[FAILED] pod "pvc-tester-2hw7x" is not Running: Timed out after 300.000s.
Expected Pod to be in <v1.PodPhase>: "Running"
Got instead:
<*v1.Pod | 0xc003c07b00>:
    metadata:
      creationTimestamp: "2023-05-24T04:30:44Z"
      generateName: pvc-tester-
      managedFields:
      - apiVersion: v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:generateName: {}
          f:spec:
            f:containers:
              k:{"name":"write-pod"}:
                .: {}
                f:command: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:securityContext:
                  .: {}
                  f:privileged: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
                f:volumeMounts:
                  .: {}
                  k:{"mountPath":"/mnt/volume1"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
                  k:{"mountPath":"/mnt/volume2"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
            f:dnsPolicy: {}
            f:enableServiceLinks: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
            f:volumes:
              .: {}
              k:{"name":"volume1"}:
                .: {}
                f:name: {}
                f:persistentVolumeClaim:
                  .: {}
                  f:claimName: {}
              k:{"name":"volume2"}:
                .: {}
                f:name: {}
                f:persistentVolumeClaim:
                  .: {}
                  f:claimName: {}
        manager: e2e.test
        operation: Update
        time: "2023-05-24T04:30:44Z"
      - apiVersion: v1
        fieldsType: FieldsV1
        fieldsV1:
          f:status:
            f:conditions:
              .: {}
              k:{"type":"PodScheduled"}:
                .: {}
                f:lastProbeTime: {}
                f:lastTransitionTime: {}
                f:message: {}
                f:reason: {}
                f:status: {}
                f:type: {}
        manager: kube-scheduler
        operation: Update
        subresource: status
        time: "2023-05-24T04:30:44Z"
      name: pvc-tester-2hw7x
      namespace: disruptive-pv-5536
      resourceVersion: "3111"
      uid: 866f2c7f-b379-46ba-8af1-77c49895e96c
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - trap exit TERM; while true; do sleep 1; done
        image: registry.k8s.io/e2e-test-images/busybox:1.29-4
        imagePullPolicy: IfNotPresent
        name: write-pod
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /mnt/volume1
          name: volume1
        - mountPath: /mnt/volume2
          name: volume2
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-4tsnz
          readOnly: true
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      preemptionPolicy: PreemptLowerPriority
      priority: 0
      restartPolicy: OnFailure
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 300
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 300
      volumes:
      - name: volume1
        persistentVolumeClaim:
          claimName: pvc-prm4b
      - name: volume2
        persistentVolumeClaim:
          claimName: pvc-z8dxl
      - name: kube-api-access-4tsnz
        projected:
          defaultMode: 420
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2023-05-24T04:30:44Z"
        message: '0/4 nodes are available: 1 node(s) were unschedulable, 3 node(s) had
          volume node affinity conflict. preemption: 0/4 nodes are available: 4 Preemption
          is not helpful for scheduling..'
        reason: Unschedulable
        status: "False"
        type: PodScheduled
      phase: Pending
      qosClass: BestEffort
In [BeforeEach] at: test/e2e/storage/nfs_persistent_volume-disruptive.go:182 @ 05/24/23 04:35:44.167
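The PodScheduled condition above shows why the pod never left Pending: 1 node was unschedulable and the other 3 reported a volume node affinity conflict, i.e. the pre-provisioned PVs' nodeAffinity did not match the labels of any schedulable node. Below is a minimal client-go sketch (a diagnostic aid, not part of the e2e test itself) for dumping each PV's nodeAffinity next to the node labels; the kubeconfig path is taken from the job's kubectl invocation and may need adjusting.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; adjust for the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Each PV's nodeAffinity restricts which nodes a pod using it can land on;
	// a mismatch with node labels is what the scheduler reports as a
	// "volume node affinity conflict".
	pvs, err := cs.CoreV1().PersistentVolumes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pv := range pvs.Items {
		fmt.Printf("PV %s nodeAffinity: %v\n", pv.Name, pv.Spec.NodeAffinity)
	}

	// Node labels (e.g. topology.kubernetes.io/zone) are what the affinity
	// terms are matched against.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("Node %s labels: %v\n", n.Name, n.Labels)
	}
}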
API Machinery:
[FAILED] failed to explain ksvc-1684903113.spec: error running /workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://34.145.75.255 --kubeconfig=/workspace/.kube/config --namespace=crd-publish-openapi-5175 explain ksvc-1684903113.spec:
Command stdout:
stderr:
the server doesn't have a resource type "ksvc-1684903113"
error:
exit status 1
In [It] at: test/e2e/apimachinery/crd_publish_openapi.go:501 @ 05/24/23 04:38:39.602
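Here kubectl explain fails because the API server is not serving the custom resource type at all, which suggests the CRD's discovery/OpenAPI data had not been published yet when the command ran. Below is a minimal sketch, not the test's own code, of waiting for a resource to show up in discovery before explaining it; the group/version string is a hypothetical placeholder, while the resource name is taken from the failure above.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; adjust for the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	groupVersion := "mygroup.example.com/v1" // hypothetical CRD group/version
	resource := "ksvc-1684903113"            // resource name from the failure above

	// Poll discovery until the resource type is served (or time out), which is
	// the kind of wait the flake suggests was missing or too short.
	err = wait.PollUntilContextTimeout(context.TODO(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			list, err := dc.ServerResourcesForGroupVersion(groupVersion)
			if err != nil {
				return false, nil // group/version not served yet; keep polling
			}
			for _, r := range list.APIResources {
				if r.Name == resource {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("resource is discoverable; kubectl explain should now succeed")
}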
Anything else we need to know?
_No response_
Relevant SIG(s)
/sig storage
/sig api-machinery
5 Answers
mqxuamgl1#
/assign
b5lpy0ml2#
/triage accepted
wfsdck303#
/cc
o75abkj44#
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
xkftehaa5#
/remove-sig api-machinery