Shashikant shah

Tuesday 30 April 2024

What is a LoadBalancer service in Kubernetes ?

Load balancer:-

1. It exposes the service both inside and outside the cluster.

2. It exposes the service externally using a cloud provider's load balancer.

NodePort and ClusterIP services will be created automatically whenever the LoadBalancer service is created.

3. The LoadBalancer service redirects traffic to the node port across all the nodes.

4. External clients connect to the service through the load balancer IP.

5. This is the most preferred approach for exposing a service outside the cluster.


Types of load balancers used with Kubernetes :-

1. AWS load balancer (ELB).

2. MetalLB, MicroK8s, and Traefik.

3. HAProxy.

4. NGINX reverse proxy LB.

i) Configuring an AWS load balancer for k8s.

1. ELB setup.


2. Add a security group.

3. Configure the health check.


4. Verify that all nodes are InService.



5. Check the LB URL :-



ii) Configuring MetalLB for k8s.

1. Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP addresses.

2. Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by responding to ARP requests on your local network directly, to give the machine’s MAC address to clients.

 

1. Controller pod :- It assigns an IP address to the service.

2. Speaker pods :- A speaker pod runs on every node and maps the service IP to the node's MAC address.

 

If you’re using kube-proxy in IPVS mode, since Kubernetes v1.14.2 you have to enable strict ARP mode. Note that you don’t need this if you’re using kube-router as the service proxy, because it enables strict ARP by default.

#  kubectl edit configmap -n kube-system kube-proxy

 

apiVersion: kubeproxy.config.k8s.io/v1alpha1

kind: KubeProxyConfiguration

mode: "ipvs"

ipvs:

  strictARP: true

 

# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml

 

# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml

 

# kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

 

# vim metallb-configmap.yaml

apiVersion: v1

kind: ConfigMap

metadata:

  namespace: metallb-system

  name: config

data:

  config: |

    address-pools:

    - name: default

      protocol: layer2

      addresses:

      - 172.31.24.220-172.31.24.250  # IP range reserved for the load balancers


Note :-
Since I am using the CIDR 172.31.24.0/24 for the cluster's internal Calico networking, I have reserved a range of IPs from it for the load balancers.

# kubectl get svc

# kubectl apply -f metallb-configmap.yaml

# kubectl describe configmap -n metallb-system

# kubectl get all -n metallb-system

1. Controller pod :- It assigns an IP address to the service.

2. Speaker pods :- A speaker pod runs on every node.


Create a load balancer.

# kubectl expose deploy nginx-deploy --port 80 --type LoadBalancer

OR 

# vim nginx-service.yaml

apiVersion: v1

kind: Service

metadata:

  name: nginx

spec:

  type: LoadBalancer

  selector:

    env: dev

  ports:

  - port: 80

    name: http

# kubectl apply -f nginx-service.yaml

# kubectl get svc


For the describe command :-

# kubectl describe pod/controller-58f55bbb6c-scrbw -n metallb-system

For the logs command :-

# kubectl  logs  pod/controller-58f55bbb6c-scrbw  -n  metallb-system

# kubectl describe service <service_name>


The speaker-jzjcm pod is running on worker2.

# kubectl  logs  speaker-jzjcm  -n  metallb-system  


This IP will be mapped to the MAC address on worker2.

# ifconfig




# iptables -L


BGP MetalLB :-

There is no concept of ARP in BGP mode. The switch device must be aware of every interface of the node; the switch and the node interfaces communicate using the BGP protocol.
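As a rough sketch, BGP mode in the same legacy ConfigMap format used above for layer 2 might look like this; the peer address, AS numbers, and address range are placeholders, not values from this cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1     # IP of the BGP router (placeholder)
      peer-asn: 64501            # router's AS number (placeholder)
      my-asn: 64500              # AS number MetalLB advertises as (placeholder)
    address-pools:
    - name: default
      protocol: bgp              # advertise pool routes over BGP instead of ARP
      addresses:
      - 192.168.10.0/24          # placeholder range to assign to services
```

With `protocol: bgp`, each speaker pod establishes a BGP session with the configured peer and advertises the service IPs as routes, instead of answering ARP requests.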






Monday 29 April 2024

What is a NodePort service in Kubernetes ?

 NodePort:-

1. A NodePort service exposes the service on the IP of each node at a static port. A ClusterIP service is created automatically to route the traffic to the NodePort service. Users can communicate with the service from the outside by requesting <NodeIP>:<NodePort>

2. You can only use ports 30000–32767.

3. You can only have one service per port.

4. If you create a NodePort service, the corresponding Endpoints object is also created.


i) The node IP listens on nodePort 30010, and traffic to port 30010 is redirected to the service port 8081.

ii) Port 8081 redirects to port 80 on the endpoint pods (the targetPort).

# kubectl create deployment nodeport-deployment --image=nginx --replicas=2

# kubectl get deploy -o wide


# vim nodeport.yaml

apiVersion: v1

kind: Service

metadata:

  name: nginx-service

spec:

  type: NodePort

  selector:

    app: nodeport-deployment

  ports:

    - protocol: TCP

      port: 8081

      targetPort: 80

      nodePort: 30010

 

For the svc :-

# kubectl get svc nginx-service -o wide

For the pods :-

# kubectl get pods --show-labels

# kubectl get pods -o wide

Test NodePort with NodeIP.


For ClusterIP with endpoint Port.




Sunday 28 April 2024

What is an Endpoints resource in a k8s cluster ?

 Endpoint Services :

When you create a service in Kubernetes, it automatically creates an endpoint associated with that service. This endpoint is essentially a list of IP addresses and ports of the pods that the service is directing traffic to. So, when one application wants to talk to another application within the cluster, it looks up the endpoint associated with the target service to find out where to send its requests.
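As a sketch of that relationship, a Service with a selector and the Endpoints object Kubernetes maintains for it might look like this; the names, labels, and pod IPs are illustrative, not from this cluster:

```yaml
# A Service with a selector (illustrative names).
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
---
# The Endpoints object Kubernetes creates and updates automatically
# for the Service above; it carries the same name, and its addresses
# are the IPs of the pods currently matching the selector.
apiVersion: v1
kind: Endpoints
metadata:
  name: web
subsets:
- addresses:
  - ip: 10.244.1.10   # placeholder pod IPs
  - ip: 10.244.2.11
  ports:
  - port: 8080
```

Because Kubernetes rewrites the addresses list whenever matching pods come and go, clients only ever need the stable service name.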


Why are required endpoints?

There is a front-end pod and a back-end application pod, and all requests from the front end go to the back-end pod. If the back-end pod is redeployed, its IP changes and the front-end pod can no longer send requests to the back end. This problem is solved with endpoints.



# kubectl run frontend-pod --image=curlimages/curl --command -- sleep 3600

# kubectl run backend-pod --image=nginx

# kubectl get pods -o wide

# kubectl exec  -it frontend-pod -- sh

Curl the back-end pod from the front-end pod.

After a redeployment, the IP changes.


For the declarative way :-

# vim service.yaml

apiVersion: v1

kind: Service

metadata:

   name: clusterip-service

spec:

   ports:

   - port: 8080

     targetPort: 80

# kubectl apply -f service.yaml

# kubectl describe svc clusterip-service


Add the backend IP in the endpoint. But it’s a manual task.

# vim endpoint.yaml

apiVersion: v1

kind: Endpoints

metadata:

  name: clusterip-service

subsets:

  - addresses:

      - ip: 10.244.1.55

    ports:

      - port: 80

# kubectl apply -f endpoint.yaml

# kubectl get ep

# kubectl describe svc clusterip-service

# kubectl exec  -it frontend-pod -- sh

Hit the endpoint 10.103.190.232:8080.



ii) Using selectors with endpoints in a service :

Use case : if there are 500 pods, we would have to manually define each IP in the endpoint.

Kubernetes allows us to define the list of labels of PODS that need to be added as part of Endpoints.

All Pods that match those labels will be added.



1.     Create deployment.




# vim demo-deployment.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: nginx-deployment

  labels:

    app: nginx

spec:

  replicas: 3

  selector:

    matchLabels:

      app: nginx

  template:

    metadata:

      labels:

        app: nginx

    spec:

      containers:

      - name: nginx

        image: nginx:1.14.2

        ports:

        - containerPort: 80

# kubectl apply -f demo-deployment.yaml

For pods status :-

# kubectl get pods --show-labels


For deployment status :-

# kubectl get deployments -o wide


For replicaset status :-

# kubectl get rs -o wide


2. Creating Service.

# vim service-selector.yaml

apiVersion: v1

kind: Service

metadata:

   name: service-selector

spec:

   selector:

     app: nginx

   ports:

   - port: 80

     targetPort: 80

# kubectl apply -f service-selector.yaml

# kubectl get svc


# kubectl describe service service-selector


# kubectl get endpoints service-selector

# kubectl scale deployment/nginx-deployment --replicas=10

# kubectl describe service service-selector

# kubectl describe endpoints service-selector

From nodes :-

# curl 10.96.248.224:80


For the imperative way :-

Port forwarding of pods.

# kubectl port-forward --address 0.0.0.0 pod/firstpod 8091:80

Port forwarding of services.

# kubectl port-forward --address 0.0.0.0 service/myfistservice 8090:8001

What is a ClusterIP service ?

 Cluster IP Setup:

ClusterIP is the default ServiceType. A Service gets its Virtual IP address using the ClusterIP. That IP address is used for communicating with the Service and is accessible only within the cluster.


# vim mysql.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: mysql

spec:

  replicas: 1

  selector:

    matchLabels:

      app: mysql

  template:

    metadata:

      labels:

        app: mysql

    spec:

      containers:

      - name: mysql

        image: mysql:latest

        ports:

        - containerPort: 3306

        env:

        - name: MYSQL_ROOT_PASSWORD

          value: shashi123

        - name: MYSQL_DATABASE

          value: test

        - name: MYSQL_USER

          value: test

        - name: MYSQL_PASSWORD

          value: test123

# kubectl apply -f mysql.yaml

# vim mysql-service.yaml

apiVersion: v1

kind: Service

metadata:

  name: mysql-service

spec:

  selector:

    app: mysql

  ports:

    - protocol: TCP

      port: 3306

      targetPort: 3306

  type: ClusterIP

# kubectl get all

How to check container IP.

#  kubectl describe pods   mysql-77b47f887-8tjgl

# kubectl apply -f  mysql-service.yaml

# kubectl get pods 


# kubectl exec -it mysql-77b47f887-8tjgl -- bash

# echo "default_authentication_plugin=mysql_native_password" >> /etc/my.cnf

bash-4.4# mysql -u root -p

ALTER USER 'test'@'%' IDENTIFIED WITH mysql_native_password BY 'test123';

# kubectl get svc mysql-service

# kubectl get endpoints


Test DB

# mysql -h 10.104.214.185 -P 3306 -u test -p


What is a services in kubernetes ?

 Services:-           

A Kubernetes service can easily expose an application deployed on a set of pods using a single endpoint.



1.There are many types of Services:

i) ClusterIP (default) :- Exposes a service which is only accessible from within the cluster.

ii) NodePort :- Exposes a service via a static port on each node’s IP.

iii) LoadBalancer :- It uses cloud providers’ load balancer. NodePort and ClusterIP services are created automatically to which the external load balancer will route.

iv) Ingress :- Ingress is actually not a type of service. It sits in front of multiple services and performs smart routing between them, providing access to your cluster. Several types of ingress controllers have different routing capabilities. In GKE, the ingress controller creates an HTTP Load Balancer, which can route traffic to services in the Kubernetes cluster based on path or subdomain.

v) ExternalName :-  Maps a service to a predefined externalName field by returning a value for the CNAME record.

vi) Headless :- Services that do not need load balancing or a single service IP can be made "headless" by specifying "None" as the clusterIP; DNS then returns the IPs of the individual pods.

vii) External IPs :- If there are external IPs that route to one or more cluster nodes, a service can also be exposed on those IPs via the externalIPs field.

viii) Endpoint :- An endpoint is a resource that gets the IP addresses of one or more pods dynamically assigned to it, along with a port.

ix) KubeDNS or Kubernetes DNS:- is a component within the Kubernetes ecosystem that provides Domain Name System (DNS) resolution services for applications running on Kubernetes clusters. It essentially enables the mapping of service names to their corresponding network endpoints within the Kubernetes environment.
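To make types v), vi), and vii) above concrete, here are minimal sketches of each; every name, hostname, and IP below is a placeholder assumption, not something from this cluster:

```yaml
# v) ExternalName: DNS queries for this service return a CNAME
# pointing at the external host (placeholder name).
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
---
# vi) Headless: clusterIP "None" means no virtual IP is allocated;
# cluster DNS returns the matching pod IPs directly.
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
  - port: 80
---
# vii) External IPs: the service is also reachable on this address,
# provided it actually routes to a cluster node (placeholder IP).
apiVersion: v1
kind: Service
metadata:
  name: external-ip-svc
spec:
  selector:
    app: nginx
  ports:
  - port: 80
  externalIPs:
  - 192.0.2.10
```

Each manifest could be applied with `kubectl apply -f <file>` and inspected with `kubectl get svc`, the same workflow used throughout this post.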

2.Two CIDRs are available on a k8s cluster.



1. Pods CIDR :- This specifies the CIDR range allocated for pod IP addresses in the Kubernetes cluster. Pods in the cluster will be assigned IP addresses from this range.

2. Services CIDR :- This specifies the CIDR range allocated for Kubernetes service IP addresses. Services in the cluster will be assigned virtual IP addresses from this range.

--cluster-cidr=192.169.0.0/16

--service-cluster-ip-range=10.96.0.0/12

# kubectl describe pod kube-controller-manager-master -n kube-system


3.Each node in a Kubernetes cluster typically has its own CIDR block assigned for pod IP addresses.

# kubectl get nodes -o custom-columns=NAME:.metadata.name,CIDR:.spec.podCIDR

NAME     CIDR

master   192.169.0.0/24

node-1   192.169.2.0/24

node-2   192.169.1.0/24


4. How to check container IP.

#  kubectl describe pods   <Pods_name>


5. How to update the pods subnet CIDR.

# kubectl get ippool -o wide

# curl -L https://github.com/projectcalico/calico/releases/download/v3.27.3/calicoctl-linux-amd64 -o calicoctl

# mv calicoctl /usr/bin/

# chmod +x /usr/bin/calicoctl

# vim ip-pool_change.yaml

apiVersion: projectcalico.org/v3

kind: IPPool

metadata:

  name: new-pool

spec:

  cidr: 172.17.0.0/20

  ipipMode: Always

  natOutgoing: true

# calicoctl apply -f ip-pool_change.yaml

# calicoctl get ippool -o wide

# calicoctl get ippool -o yaml > ippool_new.yaml

# vim  ippool_new.yaml

   disabled: true

# calicoctl apply -f  ippool_new.yaml

# calicoctl get ippool -o wide

# kubectl -n kube-system edit cm kubeadm-config

# x=$(kubectl get pods -n kube-system --no-headers | awk '{print $1}')

# kubectl delete pods $x -n kube-system

Restart all worker nodes

# init 6

# kubectl get pods -A -o wide