I am creating an Elasticsearch cluster on AKS with 3 master nodes, 2 data nodes, and 1 ingest node, using Elasticsearch version 7.9.1.
The cluster is created successfully, but I am running into a problem with master election.
Problem: if I delete the active master node, the cluster automatically elects a dedicated data node as the new active master, and sometimes it even elects the ingest node.
I want a dedicated master node to be elected as the active master.
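For clarity, this is the per-role setup I am aiming for, assuming the NODE_MASTER/NODE_DATA/NODE_INGEST variables in my manifests end up as the corresponding elasticsearch.yml settings (a sketch of the intent, not taken from the running pods):

# Intended elasticsearch.yml role settings per node type (Elasticsearch 7.9 syntax)
---
# master pods
node.master: true
node.data: false
node.ingest: false
---
# data pods
node.master: false
node.data: true
node.ingest: false
---
# ingest pods
node.master: false
node.data: false
node.ingest: true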
I thought discovery.seed_hosts might be the reason, so I removed

- name: discovery.seed_hosts
  value: "elasticsearch-discovery"

and added

- name: discovery.seed_hosts
  value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"

With this change the master nodes come up fine, but when I apply the data node yaml it throws an error:
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }
{"type": "server", "timestamp": "2020-10-08T18:49:54,640Z", "level": "WARN", "component": "o.e.d.SeedHostsResolver", "cluster.name": "docker-cluster", "node.name": "elasticsearch-data-0", "message": "failed to resolve host [elasticsearch-master-0]",
"stacktrace": ["java.net.UnknownHostException: elasticsearch-master-0",
"at java.net.InetAddress$CachedAddresses.get(InetAddress.java:800) ~[?:?]",
"at java.net.InetAddress.getAllByName0(InetAddress.java:1495) ~[?:?]",
"at java.net.InetAddress.getAllByName(InetAddress.java:1354) ~[?:?]",
"at java.net.InetAddress.getAllByName(InetAddress.java:1288) ~[?:?]",
"at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:548) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:490) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:855) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.discovery.SeedHostsResolver.lambda$resolveHostsLists$0(SeedHostsResolver.java:144) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:651) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }
So I suspect something in my configuration; my manifests are below.
elasticsearch-discovery Service:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: master
spec:
  selector:
    app: elasticsearch
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP
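For comparison with the data node Service further down (which is headless via clusterIP: None), a headless variant of this discovery Service would look like the sketch below; I have not applied it, it is only to illustrate the difference I am wondering about:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: master
spec:
  clusterIP: None   # headless: each pod selected by the governing service gets its own DNS record
  selector:
    app: elasticsearch
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP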
Master node yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: master
spec:
  serviceName: elasticsearch-discovery
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 3
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      terminationGracePeriodSeconds: 30
      # Use the stork scheduler to enable more efficient placement of the pods
      #schedulerName: stork
      initContainers:
      - name: increase-the-vm-max-map-count
        image: busybox
        #imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-poc-master-pod
        image: XXXXXXXX/elasticsearch-oss:7.9.1-amd64
        #imagePullPolicy: Always
        env:
        - name: network.host
          value: "0.0.0.0"
        - name: discovery.seed_hosts
          value: "elasticsearch-discovery"
        - name: cluster.initial_master_nodes
          value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "CLUSTER_NAME"
          value: "XXXXXXXXX"
        - name: "NUMBER_OF_MASTERS"
          value: "3"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_INGEST
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 2Gi
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-master
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: azurefile
      resources:
        requests:
          storage: 5Gi
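Since discovery.seed_hosts and cluster.initial_master_nodes are already passed as dotted environment variables above, I assume the same style would also work for the role settings; this is a sketch of what I mean for the master pods (I have not confirmed how this image handles the NODE_MASTER-style variables):

        # hypothetical alternative to NODE_MASTER / NODE_DATA / NODE_INGEST,
        # using the dotted setting names directly as env vars
        - name: node.master
          value: "true"
        - name: node.data
          value: "false"
        - name: node.ingest
          value: "false"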
Data node Service:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-data
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  clusterIP: None
  selector:
    app: elasticsearch
    role: data
Data node yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 2
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      terminationGracePeriodSeconds: 30
      # Use the stork scheduler to enable more efficient placement of the pods
      #schedulerName: stork
      initContainers:
      - name: increase-the-vm-max-map-count
        image: busybox
        #imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-poc-data-pod
        image: XXXXXXXXXXXXXXXXXXX/elasticsearch-oss:7.9.1-amd64
        #imagePullPolicy: Always
        env:
        - name: DISCOVERY_SERVICE
          value: elasticsearch-discovery
        - name: discovery.seed_hosts
          value: "elasticsearch-discovery"
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "CLUSTER_NAME"
          value: "docker-cluster"
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "false"
        - name: NODE_DATA
          value: "true"
        - name: HTTP_ENABLE
          value: "true"
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: azurefile
      resources:
        requests:
          storage: 5Gi
Ingest node yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-ingest
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: ingest
spec:
  serviceName: elasticsearch-ingest
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 1
  template:
    metadata:
      labels:
        app: elasticsearch
        role: ingest
    spec:
      terminationGracePeriodSeconds: 30
      # Use the stork scheduler to enable more efficient placement of the pods
      #schedulerName: stork
      initContainers:
      - name: increase-the-vm-max-map-count
        image: busybox
        #imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-poc-ingest-pod
        image: XXXXXXXXXXXXXXXXXXXXXXX/elasticsearch-oss:7.9.1-amd64
        #imagePullPolicy: Always
        env:
        - name: network.host
          value: "0.0.0.0"
        - name: DISCOVERY_SERVICE
          value: elasticsearch-discovery
        - name: discovery.seed_hosts
          value: "elasticsearch-discovery"
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "CLUSTER_NAME"
          value: "docker-cluster"
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "true"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
(screenshot of the node list attached)
I also see something strange: all of my nodes show the role "dimr". I don't know whether that is right or wrong; I expected 3 dedicated master nodes, 2 data nodes, and 1 ingest node.