I am trying to deploy a cluster with self-managed node groups. No matter which configuration options I use, I always end up with the following error:
Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused
  with kubernetes_config_map.aws_auth[0],
  on .terraform/modules/eks-ssp/aws-auth-configmap.tf line 19, in resource "kubernetes_config_map" "aws_auth":
  resource "kubernetes_config_map" "aws_auth" {
The .tf file looks like this:
module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
tenant = "DevOpsLabs2"
environment = "dev-test"
zone = ""
terraform_version = "Terraform v1.1.4"
# EKS Cluster VPC and Subnet mandatory config
vpc_id = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]
# EKS CONTROL PLANE VARIABLES
create_eks = true
kubernetes_version = "1.19"
# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name = "DevOpsLabs2"
subnet_ids = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
custom_ami_id = "xxx"
public_ip = true # Enable only for public subnets
pre_userdata = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT
disk_size = 20
instance_type = "t2.small"
desired_size = 2
max_size = 10
min_size = 2
capacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"
k8s_labels = {
Environment = "dev-test"
Zone = ""
WorkerType = "SELF_MANAGED_ON_DEMAND"
}
additional_tags = {
ExtraTag = "t2x-on-demand"
Name = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}
module "eks-ssp-kubernetes-addons" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"
eks_cluster_id = module.eks-ssp.eks_cluster_id
# EKS Addons
enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true
enable_amazon_eks_aws_ebs_csi_driver = true
#K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_for_fluentbit = true
enable_argocd = true
enable_ingress_nginx = true
depends_on = [module.eks-ssp.self_managed_node_groups]
}
And the providers configuration:
terraform {
  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.6.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}
5 Answers
30byixjq1#
Based on the example provided in the GitHub repo [1], my guess is that the provider configuration blocks are missing, which is why this does not work as expected. Looking at the code provided in the question, it seems something like the following needs to be added:
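A minimal sketch of that provider wiring, following the referenced example [1]; the data source names are illustrative and the output module.eks-ssp.eks_cluster_id is assumed to match the module block in the question:

# Sketch only: data source names are illustrative; adjust to your setup.
data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

provider "kubernetes" {
  # Point the provider at the EKS API server instead of the default localhost.
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}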
If helm is needed as well, I think a similar block would also have to be added [2]:
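A hedged sketch of the corresponding helm provider block, reusing the same (assumed) data sources from above:

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}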
The provider argument references for kubernetes and helm are in [3] and [4], respectively.

[1] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-self-managed-node-groups/main.tf#L23-L47
[2] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L49-L55
[3] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference
[4] https://registry.terraform.io/providers/hashicorp/helm/latest/docs#argument-reference
tyky79it2#
The answer above from Marko E seems to have fixed it / just got me past this error. After applying that code in a separate providers.tf file, Terraform now gets past the error. I will post later whether the deployment goes through completely. For reference, before hitting this error the run would get to roughly 42 of the 65 resources created. This was with the exact best-practice / example configuration suggested at the top of the README: https://github.com/aws-samples/aws-eks-accelerator-for-terraform
bz4sfanl3#
In my case, I was trying to deploy to a Kubernetes cluster (GKE) with Terraform. I fixed it by replacing the relative kubeconfig path in the provider configuration with the absolute path to the kubeconfig file.
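The original before/after snippets were lost from this answer; a hedged illustration of the change (the paths below are made-up examples, the point is relative vs. absolute):

# Before: relative path (hypothetical example)
provider "kubernetes" {
  config_path = "kubeconfig.yaml"
}

# After: absolute path (hypothetical example)
provider "kubernetes" {
  config_path = "/home/me/project/kubeconfig.yaml"
}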
8hhllhi24#
Look at the examples folder of the EKS module on GitHub. You should not use data sources in the kubernetes provider configuration; that does not work when you create the resources from scratch for the first time. The provider configuration has to look something like this:
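A sketch of that style of configuration, wiring the provider directly to module outputs instead of data sources. The module name ("eks") and output names below follow the terraform-aws-modules/eks examples and are assumptions; check the outputs of the module you actually use (older versions expose cluster_id instead of cluster_name):

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  # Fetch a fresh token at plan/apply time rather than reading it from a data source.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}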
ldxq2e6h5#
This issue can happen for several different reasons, so I am adding another solution.
It is based on the issue
Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused #911
in the terraform-aws-eks module. Running the following in Terraform helped resolve it for me:
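The exact command was stripped from this answer. As an assumption (a workaround commonly mentioned in that issue thread, not necessarily this answerer's exact command), removing the stale aws-auth ConfigMap from state so Terraform recreates it against the correct cluster endpoint looks like this:

# Assumption: the resource address depends on your module structure; adjust it to your state.
terraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]'
terraform apply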