After downloading the source code, compiling kubeadm, kubectl, and kubelet myself, and putting the binaries under /usr/bin, I get the error below. I want to bring the cluster up with this self-compiled kubeadm.
Previously, with the binaries installed via yum -y install kubelet kubeadm kubectl, the same kubeadm init shown below did not hit this problem, and I am not sure why it happens now.
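For context, the build and install steps were roughly the following (quoting from memory, so the exact make target and output path are my best reconstruction rather than the literal commands I ran):
# cd /root/kubernetes-1.19.0
# make all WHAT="cmd/kubeadm cmd/kubectl cmd/kubelet"
# cp _output/local/bin/linux/amd64/kubeadm /usr/bin/   (likewise for kubectl and kubelet; the output directory can vary with the build setup)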
# kubeadm init --config=/opt/pass/kubeQ/kubeQ/init.yaml --upload-certs
W0406 15:52:28.811786 19897 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "certSANs"
W0406 15:52:28.814902 19897 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
W0406 15:52:28.853130 19897 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
unexpected fault address 0x0
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x4607bf]
goroutine 1 [running]:
runtime.throw({0x17a2046?, 0xc0003aa3c0?})
/usr/local/go/src/runtime/panic.go:992 +0x71 fp=0xc000806eb8 sp=0xc000806e88 pc=0x433f51
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:825 +0x305 fp=0xc000806f08 sp=0xc000806eb8 pc=0x4493c5
aeshashbody()
/usr/local/go/src/runtime/asm_amd64.s:1343 +0x39f fp=0xc000806f10 sp=0xc000806f08 pc=0x4607bf
runtime.mapiternext(0xc00049ba80)
/usr/local/go/src/runtime/map.go:934 +0x2cb fp=0xc000806f80 sp=0xc000806f10 pc=0x40fb0b
runtime.mapiterinit(0x811082?, 0xc0003aa3c0?, 0xc00062ae28?)
/usr/local/go/src/runtime/map.go:861 +0x228 fp=0xc000806fa0 sp=0xc000806f80 pc=0x40f7e8
reflect.mapiterinit(0x8259b6?, 0x15d2920?, 0xc00049ba80?)
/usr/local/go/src/runtime/map.go:1373 +0x19 fp=0xc000806fc8 sp=0xc000806fa0 pc=0x45d659
k8s.io/kubernetes/vendor/github.com/modern-go/reflect2.(*UnsafeMapType).UnsafeIterate(...)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/modern-go/reflect2/unsafe_map.go:112
k8s.io/kubernetes/vendor/github.com/json-iterator/go.(*sortKeysMapEncoder).IsEmpty(0x13?, 0x150f0e0?)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/json-iterator/go/reflect_map.go:333 +0x28 fp=0xc000807008 sp=0xc000806fc8 pc=0x818e68
k8s.io/kubernetes/vendor/github.com/json-iterator/go.(*placeholderEncoder).IsEmpty(0xc0003424b0?, 0xc000636028?)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/json-iterator/go/reflect.go:336 +0x22 fp=0xc000807028 sp=0xc000807008 pc=0x8110e2
k8s.io/kubernetes/vendor/github.com/json-iterator/go.(*structFieldEncoder).IsEmpty(0xc0007853b0, 0x1521476?)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/json-iterator/go/reflect_struct_encoder.go:118 +0x42 fp=0xc000807048 sp=0xc000807028 pc=0x825b62
k8s.io/kubernetes/vendor/github.com/json-iterator/go.(*structEncoder).Encode(0xc000785980, 0x0?, 0xc0003aa3c0)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/json-iterator/go/reflect_struct_encoder.go:148 +0x565 fp=0xc000807130 sp=0xc000807048 pc=0x8261c5
k8s.io/kubernetes/vendor/github.com/json-iterator/go.(*OptionalEncoder).Encode(0xc000197040?, 0x0?, 0x0?)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/json-iterator/go/reflect_optional.go:70 +0xa4 fp=0xc000807180 sp=0xc000807130 pc=0x81d5c4
k8s.io/kubernetes/vendor/github.com/json-iterator/go.(*onePtrEncoder).Encode(0xc000713f00, 0xc000636028, 0xc000632cc0?)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/json-iterator/go/reflect.go:219 +0x82 fp=0xc0008071b8 sp=0xc000807180 pc=0x810642
k8s.io/kubernetes/vendor/github.com/json-iterator/go.(*Stream).WriteVal(0xc0003aa3c0, {0x173a0c0, 0xc000636028})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/json-iterator/go/reflect.go:98 +0x158 fp=0xc000807228 sp=0xc0008071b8 pc=0x80f958
k8s.io/kubernetes/vendor/github.com/json-iterator/go.(*frozenConfig).Marshal(0xc000197040, {0x173a0c0, 0xc000636028})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/json-iterator/go/config.go:299 +0xc9 fp=0xc0008072c0 sp=0xc000807228 pc=0x806c09
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0x14cf4e1?, {0x1a0a398?, 0xc000636028?}, {0x1a01760, 0xc000342210})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:305 +0x6d fp=0xc000807358 sp=0xc0008072c0 pc=0x9624ad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc000596aa0, {0x1a0a398, 0xc000636028}, {0x1a01760, 0xc000342210})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:300 +0xfc fp=0xc0008073b8 sp=0xc000807358 pc=0x9623dc
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc00053d680, {0x1a0a398?, 0xc000636028}, {0x1a01760, 0xc000342210})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:244 +0x8fa fp=0xc000807710 sp=0xc0008073b8 pc=0x9708da
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc00053d680, {0x1a0a398, 0xc000636028}, {0x1a01760, 0xc000342210})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:184 +0x106 fp=0xc000807770 sp=0xc000807710 pc=0x96ff86
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime.Encode({0x7fef6b9160d8, 0xc00053d680}, {0x1a0a398, 0xc000636028})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/codec.go:50 +0x64 fp=0xc0008077b0 sp=0xc000807770 pc=0x863424
k8s.io/kubernetes/cmd/kubeadm/app/util.MarshalToYamlForCodecs({0x1a0a398, 0xc000636028}, {{0x17b7a13?, 0x1552b20?}, {0x17a5597?, 0xc000610d80?}}, {0xc00059ee00, {0xc0004fe480, 0x3, 0x3}, ...})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/marshal.go:53 +0x289 fp=0xc000807a90 sp=0xc0008077b0 pc=0x10e5f49
k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs.(*configBase).Marshal(...)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs/configset.go:156
k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs.(*kubeletConfig).Marshal(0x16225a0?)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs/kubelet.go:100 +0x71 fp=0xc000807b28 sp=0xc000807a90 pc=0x11e5571
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubelet.WriteConfigToDisk(0x7fef6b94f898?, {0x17af936, 0x10})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubelet/config.go:46 +0x56 fp=0xc000807b68 sp=0xc000807b28 pc=0x11f8e76
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runKubeletStart({0x172fca0?, 0xc0000e2a50?})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/kubelet.go:75 +0x172 fp=0xc000807bf8 sp=0xc000807b68 pc=0x1451272
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1(0xc000038900)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234 +0x154 fp=0xc000807c88 sp=0xc000807bf8 pc=0x1425a34
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll(...)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run(0xc0002c0360, {0xc00049e9c0, 0x0, 0x2})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207 +0x142 fp=0xc000807d10 sp=0xc000807c88 pc=0x1425882
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1(0xc0005698c0?, {0xc00049e9c0, 0x0, 0x2})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:149 +0xed fp=0xc000807d88 sp=0xc000807d10 pc=0x1480b2d
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0005698c0, {0xc00049e9a0, 0x2, 0x2})
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:842 +0x67c fp=0xc000807e60 sp=0xc000807d88 pc=0x6e8b5c
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000281080)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x39c fp=0xc000807f18 sp=0xc000807e60 pc=0x6e913c
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
k8s.io/kubernetes/cmd/kubeadm/app.Run()
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50 +0x15d fp=0xc000807f58 sp=0xc000807f18 pc=0x148acbd
main.main()
_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25 +0x19 fp=0xc000807f80 sp=0xc000807f58 pc=0x148acf9
runtime.main()
/usr/local/go/src/runtime/proc.go:250 +0x212 fp=0xc000807fe0 sp=0xc000807f80 pc=0x436672
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000807fe8 sp=0xc000807fe0 pc=0x463281
goroutine 6 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x0?)
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1131 +0x6a
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
/root/kubernetes-1.19.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:416 +0xef
Does anyone know why this happens? Here is my init file:
# cat /opt/pass/kubeQ/kubeQ/init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: o07ftt.1k2k5dagbgypo863
  ttl: 876000h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  advertiseAddress: 10.102.30.14
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s1
  taints: null
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.19.0
imageRepository: 10.102.30.16/google_containers
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
networking:
  dnsDomain: cluster.local
  podSubnet: "172.90.0.0/16"
  serviceSubnet: "10.96.0.0/16"
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      election-timeout: "5000"
      heartbeat-interval: "500"
certSANs:
- 127.0.0.1
- cloudybase.com
extraArgs:
  enable-admission-plugins: 'NodeRestriction,DefaultTolerationSeconds'
  max-requests-inflight: "1000"
  max-mutating-requests-inflight: "500"
  default-watch-cache-size: "500"
  kubelet-timeout: "5s"
  event-ttl: "1h0m0s"
  default-not-ready-toleration-seconds: "60"
  default-unreachable-toleration-seconds: "60"
timeoutForControlPlane: 4m0s
controllerManager:
  extraArgs:
    bind-address: 0.0.0.0
    deployment-controller-sync-period: "50s"
    concurrent-deployment-syncs: "5"
    concurrent-endpoint-syncs: "5"
    concurrent-gc-syncs: "20"
    concurrent-namespace-syncs: "10"
    concurrent-replicaset-syncs: "5"
    concurrent-service-syncs: "1"
    concurrent-serviceaccount-token-syncs: "5"
    experimental-cluster-signing-duration: "87600h0m0s"
    feature-gates: "RotateKubeletServerCertificate=true"
    pvclaimbinder-sync-period: "15s"
    node-monitor-period: "5s"
    node-monitor-grace-period: "20s"
    node-startup-grace-period: "30s"
    pod-eviction-timeout: "1m"
scheduler:
  extraArgs:
    bind-address: 0.0.0.0
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clientConnection:
  burst: 20
  qps: 120
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 169.254.20.10
clusterDomain: cluster.local
cgroupsPerQOS: true
cgroupDriver: systemd
systemReserved:
  cpu: "0.25"
  memory: "200Mi"
kubeReserved:
  cpu: "0.25"
  memory: "1500Mi"
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "5%"
  nodefs.inodesFree: "3%"
  imagefs.available: "8%"
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 20
kubeAPIQPS: 10
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 10s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
Thanks.
1 Answer
I tried compiling it with Go 1.15.4. Separately, building Kubernetes 1.23.4 with the Go 1.18.1 I use locally works without any problem, so in the end I switched to Go 1.15.4 to compile 1.19.0. The fault in the stack trace happens inside the vendored github.com/modern-go/reflect2 package during map iteration; the reflect2 version vendored in Kubernetes 1.19.0 predates Go 1.18 and is not compatible with its runtime internals (newer releases such as 1.23.4 vendor a fixed version), so building 1.19.0 with a Go release from its own era, such as 1.15.4, avoids the SIGSEGV.
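If it helps, you can check which Go toolchain an existing binary was built with and then rebuild with the older Go. Something along these lines should work (the paths are examples from my environment, adjust as needed):
# go version /usr/bin/kubeadm    (prints the Go version the binary was compiled with)
# go version    (should report go1.15.4 before rebuilding)
# cd /root/kubernetes-1.19.0
# make all WHAT=cmd/kubeadm
# cp _output/local/bin/linux/amd64/kubeadm /usr/bin/kubeadm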