Failed to start network-related pods in a Kubernetes cluster set up with kubeadm as a non-root user: which permissions are missing? [closed]

unhi4e5o · asked 2023-05-28 · Kubernetes

I have set up a Kubernetes cluster on my local machine using kubeadm init; the machine sits behind a corporate proxy. The initialization succeeds when executed as root. However, when I switch to a non-root user (named "k8s") and try to start the network-related pods, such as kube-proxy, they fail to start, and kubeadm init fails as a result.
I believe the problem may be insufficient permissions for the non-root user. Can someone explain which specific permissions I need to grant so that these pods start successfully under a non-root user?
I would appreciate any insight or suggestions on how to resolve this. Thanks in advance for your help.
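For context, these are the root-owned resources I assume are involved (my own guess, not something taken from the error output): kubeadm writes under /etc/kubernetes, and the kubelet talks to the container runtime socket, both owned by root by default. They can be inspected like this:

# hypothetical permission checks: who can reach the containerd socket
# and the directories kubeadm/kubelet write to (paths assume containerd)
ls -l /var/run/containerd/containerd.sock
ls -ld /etc/kubernetes /var/lib/kubelet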
Here is my configuration:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: docker-registry.***.lan/k8s
kubernetesVersion: "v1.27.2"
apiServer:
  certSANs:
    - "server-ip"
networking:
  podSubnet: "10.10.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntime: remote
containerRuntimeOptions:
  remote:
    endpoint: "unix:///var/run/containerd/containerd.sock"

The apiVersion: kubelet.config.k8s.io/v1beta1 block shows up as a warning, but when running as root it can simply be omitted without any problem.
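My assumption (not something kubeadm told me explicitly) is that the warning comes from containerRuntime and containerRuntimeOptions, which are not actual KubeletConfiguration fields; since v1.27 the runtime endpoint can instead be set with the containerRuntimeEndpoint field. A sketch of that variant:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
# v1.27+: configure the CRI endpoint directly in KubeletConfiguration
containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock"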
I need to be able to run kubeadm init successfully as my non-root user. I tried the same setup on an Azure VM and it worked seamlessly there.
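For reference, the pattern I understand from the kubeadm documentation (my reading; the config file name below is just a placeholder) is to run the init phase with elevated privileges and then hand the non-root user a kubeconfig:

# kubeadm init itself requires root; run it with sudo
sudo kubeadm init --config kubeadm-config.yaml

# then let the non-root user "k8s" use the cluster via kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config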
The error is:

[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0526 06:55:43.205677   12773 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.


Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Answer 1 · 0sgqnhkj

Your error log indicates a problem with the kubelet. Check its logs by running:

sudo journalctl -u kubelet

Also, please confirm that the kubelet is actually installed; installing the kubelet is a separate step from installing kubeadm.
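A minimal troubleshooting sketch along those lines, assuming containerd as the runtime and the socket path shown in your log:

# confirm the kubelet binary exists and the service is running
which kubelet
sudo systemctl status kubelet

# inspect recent kubelet logs for the real failure reason
sudo journalctl -xeu kubelet --no-pager | tail -n 50

# the runtime socket is root-owned by default, which is one reason
# a non-root user cannot start these pods directly
ls -l /var/run/containerd/containerd.sock

# list the control-plane containers the runtime actually started
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a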
