A high-availability (HA) etcd cluster is essential for ensuring Kubernetes remains operational even during failures. etcd acts as the brain of Kubernetes, storing all cluster data, including Pods, Nodes, ConfigMaps, and Secrets. If etcd fails, Kubernetes cannot function properly.
1. In a 3-node etcd cluster, if one node fails, the remaining two nodes keep the cluster running.
2. If etcd is highly available, Kubernetes API requests (kubectl commands, deployments, scaling, etc.) continue to work without disruption.
3. One node acts as the leader and the others as followers. If the leader fails, a new leader is elected automatically.
4. etcd ensures strong consistency: every etcd node serves the same data, and writes are replicated across all nodes in the cluster.
5. A large cluster (1000+ nodes) needs an HA etcd cluster to avoid API slowdowns and failures.
1. Create the certificates on one master node.
2. Install and configure etcd on the master nodes.
3. Install and configure HAProxy on one master node.
4. Install and configure Kubernetes on the master nodes.
5. Install and configure the worker node.
Nodes   | IP address
--------|---------------
ETCD01  | 192.168.56.15
ETCD02  | 192.168.56.16
ETCD03  | 192.168.56.17
VIP     | 192.168.56.18
# yum update -y
Disable SELinux and firewalld.
# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# vim /etc/sysconfig/selinux
SELINUX=disabled
Update the hostname and the /etc/hosts file on each node.
# vim /etc/hostname
ETCD01
# hostname ETCD01
# vim /etc/hosts
192.168.56.15 ETCD01
192.168.56.16 ETCD02
192.168.56.17 ETCD03
1. Download the required binaries for TLS certificates.
# mkdir -p tls_certificate
# cd tls_certificate
# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# chmod +x cfssl_linux-amd64
# chmod +x cfssljson_linux-amd64
# sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
# sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
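A quick sanity check that cfssl landed on the PATH before generating anything:
# cfssl version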
2. Create a Certificate Authority (CA).
# cd tls_certificate
# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "etcd": {
        "expiry": "8760h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
# cat > ca-csr.json <<EOF
{
  "CN": "etcd cluster",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "GB",
      "L": "England",
      "O": "Kubernetes",
      "OU": "ETCD-CA",
      "ST": "Cambridge"
    }
  ]
}
EOF
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
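cfssljson writes out ca.pem and ca-key.pem. If you want to confirm the CA's subject and validity period, openssl can inspect it:
# openssl x509 -in ca.pem -noout -subject -dates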
3. Create TLS certificates.
# cd tls_certificate
# cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "localhost",
    "127.0.0.1",
    "192.168.56.15",
    "192.168.56.16",
    "192.168.56.17",
    "192.168.56.18"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "GB",
      "L": "England",
      "O": "Kubernetes",
      "OU": "etcd",
      "ST": "Cambridge"
    }
  ]
}
EOF
# cd tls_certificate
# cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=etcd etcd-csr.json | \
cfssljson -bare etcd
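This produces etcd.pem and etcd-key.pem. It is worth confirming that all node IPs and the VIP made it into the certificate's Subject Alternative Names, since peers and clients connect by those addresses:
# openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"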
4. Create two directories on all master nodes and copy the certificates to /etc/etcd.
# mkdir -p /etc/etcd
# mkdir -p /var/lib/etcd
# cp -rvf ca-key.pem ca.pem etcd-key.pem etcd.pem /etc/etcd
# scp -r ca-key.pem ca.pem etcd-key.pem etcd.pem ETCD02:/etc/etcd
# scp -r ca-key.pem ca.pem etcd-key.pem etcd.pem ETCD03:/etc/etcd
5. Download the etcd and etcdctl binaries from GitHub on all master nodes.
Ref :- https://github.com/etcd-io/etcd/releases/
# wget https://github.com/etcd-io/etcd/releases/download/v3.5.13/etcd-v3.5.13-linux-amd64.tar.gz
# tar xvf etcd-v3.5.13-linux-amd64.tar.gz
# cd etcd-v3.5.13-linux-amd64
# mv etcd* /usr/bin
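Confirm the binaries are in place and at the expected version:
# etcd --version
# etcdctl version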
6. Create the systemd unit file for the etcd service on all master nodes.
# vim /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/bin/etcd \
--name 192.168.56.15 \
--cert-file=/etc/etcd/etcd.pem \
--key-file=/etc/etcd/etcd-key.pem \
--peer-cert-file=/etc/etcd/etcd.pem \
--peer-key-file=/etc/etcd/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls https://192.168.56.15:2380 \
--listen-peer-urls https://192.168.56.15:2380 \
--listen-client-urls https://192.168.56.15:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.56.15:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster 192.168.56.15=https://192.168.56.15:2380,192.168.56.16=https://192.168.56.16:2380,192.168.56.17=https://192.168.56.17:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Note:- set --initial-cluster-state to new when bootstrapping a new cluster; use existing when the node is joining an etcd cluster that is already running.
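The unit file above is for ETCD01. On the other nodes, --name and the peer/client URLs must carry that node's own address; for example, on ETCD02 the corresponding lines would be:
--name 192.168.56.16 \
--initial-advertise-peer-urls https://192.168.56.16:2380 \
--listen-peer-urls https://192.168.56.16:2380 \
--listen-client-urls https://192.168.56.16:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.56.16:2379 \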
# systemctl daemon-reload
# systemctl enable etcd.service
# systemctl start etcd.service
# systemctl status etcd
# etcdctl member list
(or: ETCDCTL_API=3 etcdctl member list; etcd v3.5 uses the v3 API by default, so the prefix is optional.)
# ETCDCTL_API=3 etcdctl endpoint status
# ETCDCTL_API=3 etcdctl endpoint health
# ETCDCTL_API=3 etcdctl endpoint status --write-out=table
# ETCDCTL_API=3 etcdctl put name2 test_k8s
# ETCDCTL_API=3 etcdctl get name2
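Because etcd was started with --client-cert-auth, the plain commands above only work against the http://127.0.0.1:2379 listener. To query the HTTPS endpoints (or another node), etcdctl needs the CA and client certificate created earlier, for example:
# ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.56.15:2379,https://192.168.56.16:2379,https://192.168.56.17:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/etcd.pem \
  --key=/etc/etcd/etcd-key.pem \
  endpoint status --write-out=table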
1. Install HAProxy on one master node.
# yum install haproxy -y
Set up the VIP as a secondary IP address on the interface.
# cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
IPADDR1=192.168.56.15
IPADDR2=192.168.56.18
PREFIX1=24
PREFIX2=24
GATEWAY=192.168.56.1
# vim /etc/haproxy/haproxy.cfg
frontend k8s_VIP
    bind 192.168.56.18:6444
    option tcplog
    mode tcp
    default_backend k8s_APP

backend k8s_APP
    mode tcp
    balance roundrobin
    option tcp-check
    server ETCD01 192.168.56.15:6443 check fall 5 rise 3
    server ETCD02 192.168.56.16:6443 check fall 5 rise 3
    server ETCD03 192.168.56.17:6443 check fall 5 rise 3
# haproxy -c -f /etc/haproxy/haproxy.cfg
# systemctl start haproxy
# systemctl enable haproxy
# systemctl status haproxy
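HAProxy should now be listening on the VIP even though the backends stay down until the API servers come up; a quick check:
# netstat -ntlp | grep 6444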
Install Kubernetes on all master nodes.
1. Manually load the required kernel modules.
overlay: provides overlay filesystem support, which the container runtime (containerd) uses for container image layers.
br_netfilter: enables bridge netfilter support in the Linux kernel, so that traffic crossing the pod bridge is visible to iptables; this is required for Kubernetes networking and network policy.
# sudo modprobe overlay
# sudo modprobe br_netfilter
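You can verify that both modules are loaded:
# lsmod | grep -E 'overlay|br_netfilter'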
2. Make sure the kernel modules are loaded automatically at boot time.
# cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
3. Set the sysctl parameters required for Kubernetes networking.
# cat <<EOF | sudo tee /etc/sysctl.d/kube.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
EOF
Reload the sysctl settings.
# sudo sysctl --system
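A quick check that the values took effect (br_netfilter is already loaded from step 1):
# sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables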
4. Disable swap.
# sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# swapoff -a
# free -m
5. Install yum-utils, which provides the yum-config-manager tool.
# yum install -y yum-utils
6. Add the repo for containerd (the containerd.io package is shipped in the Docker CE repo).
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install containerd.io
7. Add the repo for Kubernetes.
# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
EOF
8. Install the kubelet, kubectl, and kubeadm packages.
# yum install kubelet kubectl kubeadm
9. Generate the default containerd configuration file.
# sudo containerd config default | sudo tee /etc/containerd/config.toml
Note:- SystemdCgroup has to be set to "true" in the runc options section of the file.
SystemdCgroup = true
# systemctl restart containerd
# systemctl enable containerd
# systemctl status containerd
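The ctr client ships with containerd, so a quick way to confirm the daemon is answering on its socket:
# sudo ctr version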
10. Make sure the kubelet, kubeadm, and kubectl packages are installed on the master and enable the kubelet service.
# yum install kubelet kubeadm kubectl
# systemctl enable kubelet
# vim ClusterConfiguration.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.0
controlPlaneEndpoint: "192.168.56.18:6444"
etcd:
  external:
    endpoints:
      - https://192.168.56.15:2379
      - https://192.168.56.16:2379
      - https://192.168.56.17:2379
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/etcd.pem
    keyFile: /etc/etcd/etcd-key.pem
networking:
  podSubnet: 10.30.0.0/24
apiServer:
  certSANs:
    - "192.168.56.18"
  extraArgs:
    apiserver-count: "3"
# kubeadm init --config=ClusterConfiguration.yaml --v=5
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one
of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:

kubeadm join 192.168.56.18:6444 --token ewa7om.7pv5tumd4a99r5qq \
    --discovery-token-ca-cert-hash sha256:e2baff69f0df3ace226b5f7a1c89dff4422e1fde503f50ab42541a46015872bf \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.18:6444 --token ewa7om.7pv5tumd4a99r5qq \
    --discovery-token-ca-cert-hash sha256:e2baff69f0df3ace226b5f7a1c89dff4422e1fde503f50ab42541a46015872bf
11. Run the commands below on the master node.
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml
# kubectl apply -f calico.yaml
# kubectl get po -A
# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
etcd01   Ready    control-plane   35h   v1.29.3
# cd tls_certificate
# scp -r etcd-key.pem etcd.pem ca.pem etcd02:/etc/kubernetes/pki/
# scp -r etcd-key.pem etcd.pem ca.pem etcd03:/etc/kubernetes/pki/
# cd /etc/kubernetes/pki/
# scp -r ca.crt ca.key front-proxy-ca.crt front-proxy-ca.key front-proxy-client.crt front-proxy-client.key sa.key sa.pub etcd02:/etc/kubernetes/pki/
# scp -r ca.crt ca.key front-proxy-ca.crt front-proxy-ca.key front-proxy-client.crt front-proxy-client.key sa.key sa.pub etcd03:/etc/kubernetes/pki/
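With the certificates in place, join ETCD02 and ETCD03 as additional control-plane nodes by running the control-plane join command printed by kubeadm init (the token and hash shown here are from the example output above; use the values from your own init run):
# kubeadm join 192.168.56.18:6444 --token ewa7om.7pv5tumd4a99r5qq \
    --discovery-token-ca-cert-hash sha256:e2baff69f0df3ace226b5f7a1c89dff4422e1fde503f50ab42541a46015872bf \
    --control-plane
Then repeat the kubeconfig setup (mkdir/cp/chown) on those nodes if you want to run kubectl from them.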
12. Check the API server port on the master node.
# netstat -ntlp | grep "6443"
tcp6 1 0 :::6443 :::* LISTEN 4257/kube-apiserver
# ps -elf | grep "4257"
# kubectl get nodes