How to Install the CLI Tools and Create an EKS Setup in AWS
Prerequisites
- AWS account
- IAM user with AdministratorAccess (or specific EKS permissions)
- AWS CLI installed
- eksctl CLI installed
- kubectl installed
Install AWS CLI on Linux
# sudo dnf install -y unzip curl
# curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
# unzip awscliv2.zip
# sudo ./aws/install
# /usr/local/bin/aws --version
# aws --version
To install the AWS CLI on Windows:
Download the installer: https://awscli.amazonaws.com/AWSCLIV2.msi
Run the downloaded MSI (standard Windows install), then verify:
# aws --version
# aws-cli/2.0.8 Python/3.7.5 Windows/10 botocore/2.0.0dev12
Configure AWS Command Line using Security Credentials
- Go to AWS Management Console --> Services --> IAM
- Select the IAM user: eks_cli_user
- Important note: use only an IAM user to generate security credentials. Never use the root user (highly not recommended).
- Click on the Security credentials tab
- Click on Create access key
- Copy the Access key ID and Secret access key
- Go to the command line and provide the required details
https://891377184357.signin.aws.amazon.com/console
Username: eks_cli_user
Password: uE43w1(i
# aws configure
AWS Access Key ID [None]: AKIA47CRYNZS7IG3VGGL (replace with your own credentials when prompted)
AWS Secret Access Key [None]: dRPGiOSdkGYM1j9kFxo+8jfLVZVd2FVYxwlLgTNl (replace with your own credentials when prompted)
Default region name [None]: us-east-1
Default output format [None]: json
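As a side note, `aws configure` stores these values in plain-text files, which is a quick way to confirm what was saved (default-profile paths shown for Linux/macOS):

```shell
# Credentials and settings are persisted in INI-style files:
cat ~/.aws/credentials   # aws_access_key_id / aws_secret_access_key
cat ~/.aws/config        # region and output format

# Or query a single setting without opening the files:
aws configure get region
```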
# aws ec2 describe-vpcs
Test with STS
- If this fails: the credentials are invalid.
- If this works but DescribeVpcs fails: it is a permissions issue.
# aws sts get-caller-identity
{
    "UserId": "AIDA47CRYNZSYDGPZGY4F",
    "Account": "891377184357",
    "Arn": "arn:aws:iam::891377184357:user/eks_cli_user"
}
Possible error:
An error occurred (AuthFailure) when calling the DescribeVpcs operation: AWS was not able to validate the provided access credentials.
Solution:
Update the system date & time (a large clock skew breaks AWS request signing):
# date -s " Jun 4 01:08:52 PM IST 2025"
Install kubectl CLI
Install and configure kubectl
# Download the Package
mkdir kubectlbinary
cd kubectlbinary
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.33.0/2025-05-01/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bash_profile
# kubectl version
or
# kubectl version --client
Install eksctl CLI
sudo dnf install -y curl tar
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" -o eksctl.tar.gz
tar -xzf eksctl.tar.gz
sudo mv eksctl /usr/local/bin/
eksctl version
# Preview what would be created without creating it
# eksctl create cluster --name=eksdemo1 --region=us-east-1 --zones=us-east-1a,us-east-1b --without-nodegroup --dry-run
With these flags, eksctl will create an EKS control plane named eksdemo1 in region us-east-1 across two availability zones, without any node group (i.e., no worker nodes yet).
Default addons installed:
- metrics-server
- vpc-cni
- kube-proxy
- coredns
# Create Cluster
# eksctl create cluster --name=eksdemo1 --region=us-east-1 --zones=us-east-1a,us-east-1b --without-nodegroup
# Get List of clusters
# eksctl get cluster
# kubectl get pods -A
# kubectl get svc
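Once the cluster is up, the default add-ons listed above can be confirmed from the command line (cluster name eksdemo1 as created in this walkthrough):

```shell
# Managed add-ons registered with the cluster
eksctl get addon --cluster eksdemo1 --region us-east-1

# The same components also appear as workloads in kube-system:
# coredns and metrics-server run as Deployments,
# aws-node (vpc-cni) and kube-proxy run as DaemonSets
kubectl get deployments,daemonsets -n kube-system
```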
Step-02: Create & Associate IAM OIDC Provider for our EKS Cluster
OIDC Provider in EKS (OpenID Connect provider) is a way to connect AWS IAM roles with Kubernetes service accounts.
It works like a bridge that helps EKS (Kubernetes) know which AWS permissions to give to your pods or applications, in a secure way.
# eksctl utils associate-iam-oidc-provider --region us-east-1 --cluster eksdemo1 --approve
You can verify the OIDC provider:
AWS Console:
- Go to IAM > Identity providers
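The provider can also be verified from the CLI; a quick sketch (cluster name eksdemo1 assumed):

```shell
# Print the cluster's OIDC issuer URL
aws eks describe-cluster --name eksdemo1 --region us-east-1 \
  --query "cluster.identity.oidc.issuer" --output text

# The issuer's trailing ID should appear among the account's IAM OIDC providers
aws iam list-open-id-connect-providers
```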
Step-03: Create EC2 Keypair
Create a new EC2 Keypair with the name kube-demo.
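The keypair can be created in the console or, equivalently, from the CLI; a sketch using the same key name kube-demo:

```shell
# Create the keypair and save the private key locally
aws ec2 create-key-pair --key-name kube-demo --region us-east-1 \
  --query "KeyMaterial" --output text > kube-demo.pem

# Restrict file permissions so ssh will accept the key
chmod 400 kube-demo.pem
```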
- We will use this keypair when creating the EKS NodeGroup.
- It will let us log in to the EKS worker nodes from a terminal.
Step-04: Create Node Group with additional Add-Ons in Public Subnets
# Create Public Node Group
eksctl create nodegroup --cluster=eksdemo1 --region=us-east-1 --name=eksdemo1-ng-public1 --node-type=t3.medium --nodes=2 --nodes-min=2 --nodes-max=4 --node-volume-size=20 --ssh-access --ssh-public-key=kube-demo --managed --asg-access --external-dns-access --full-ecr-access --appmesh-access --alb-ingress-access
Flag | Description
--cluster=eksdemo1 | Attach node group to this EKS cluster
--region=us-east-1 | AWS region
--name=eksdemo1-ng-public1 | Name of the node group
--node-type=t3.medium | EC2 instance type for nodes
--nodes=2 | Desired number of nodes
--nodes-min=2 | Minimum nodes in Auto Scaling Group
--nodes-max=4 | Maximum nodes in Auto Scaling Group
--node-volume-size=20 | Disk size in GB
--ssh-access | Enable SSH access to the nodes
--ssh-public-key=kube-demo | Name of EC2 Key Pair for SSH
--managed | Use AWS-managed node group
--asg-access | Grant permissions to access Auto Scaling groups
--external-dns-access | Allow ExternalDNS access
--full-ecr-access | Allow pulling from private ECR
--appmesh-access | Allow access to AWS App Mesh
--alb-ingress-access | Grant permissions for AWS ALB Ingress Controller
EKS VPC Architecture Summary
1 VPC:-
vpc-0dccea06a79ddd613
4 subnets:-
subnet-0cca4006ecf153099 / eksctl-eksdemo1-cluster/SubnetPrivateUSEAST1A
subnet-0fb0b893dbbfd2f07 / eksctl-eksdemo1-cluster/SubnetPrivateUSEAST1B
subnet-0a96011b8050812f3 / eksctl-eksdemo1-cluster/SubnetPublicUSEAST1A
subnet-0a59c7c5420a06450 / eksctl-eksdemo1-cluster/SubnetPublicUSEAST1B
4 Route table:-
rtb-0c8ca76df9c6d548d / eksctl-eksdemo1-cluster/PublicRouteTable
rtb-015b4acfbefe82fdf / eksctl-eksdemo1-cluster/PrivateRouteTableUSEAST1A
rtb-03a34b8cd4ab4024f / eksctl-eksdemo1-cluster/PrivateRouteTableUSEAST1B
2 network connections
nat-0eefd9cc701314025 / eksctl-eksdemo1-cluster/NATGateway
igw-060f4dbebfe63ca83 / eksctl-eksdemo1-cluster/InternetGateway
2 nodes, created in the public subnets:
subnet-0a96011b8050812f3 / eksctl-eksdemo1-cluster/SubnetPublicUSEAST1A
subnet-0a59c7c5420a06450 / eksctl-eksdemo1-cluster/SubnetPublicUSEAST1B
i) Node -> VPC -> Public subnet -> InternetGateway
ii) VPC -> Private subnet -> NATGateway
Elastic IP
eksctl-eksdemo1-cluster/NATIP :- 52.200.147.88
Internet Gateway (IGW): igw-060f4dbebfe63ca83
➤ It gives the public subnets direct internet access.
NAT Gateway: nat-0eefd9cc701314025
➤ It provides outbound internet access for the private subnets (e.g., image pulls, updates).
Two public subnets:-
subnet-0a96011b8050812f3 / eksctl-eksdemo1-cluster/SubnetPublicUSEAST1A
subnet-0a59c7c5420a06450 / eksctl-eksdemo1-cluster/SubnetPublicUSEAST1B
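These two subnets can be checked from the CLI to confirm they auto-assign public IPs (the subnet IDs below are the ones from this walkthrough; substitute your own):

```shell
aws ec2 describe-subnets \
  --subnet-ids subnet-0a96011b8050812f3 subnet-0a59c7c5420a06450 \
  --query "Subnets[].{ID:SubnetId,AZ:AvailabilityZone,PublicIP:MapPublicIpOnLaunch}" \
  --output table
```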
Verify NodeGroup VPC subnets to confirm EC2 Instances are in the Public Subnet
- Verify the node group subnets to ensure they were created in the public subnets
- Go to Services -> EKS -> eksdemo1 -> eksdemo1-ng-public1
- Click on the associated subnet in the Details tab
- Click on the Route Table tab
- We should see the internet route via the Internet Gateway (0.0.0.0/0 -> igw-xxxxxxxx)
Two Private subnets:-
subnet-0cca4006ecf153099 / eksctl-eksdemo1-cluster/SubnetPrivateUSEAST1A
subnet-0fb0b893dbbfd2f07 / eksctl-eksdemo1-cluster/SubnetPrivateUSEAST1B
The private subnets do not host any worker nodes yet.
Verify Worker Node IAM Role and list of Policies
- Go to Services -> EC2 -> Worker Nodes
- Click on the IAM Role associated with the EC2 Worker Nodes
The following flags caused IAM policies to be added to the EC2 worker node role:
--asg-access | Grant permissions to access Auto Scaling groups
--external-dns-access | Allow ExternalDNS access
--full-ecr-access | Allow pulling from private ECR
--appmesh-access | Allow access to AWS App Mesh
--alb-ingress-access | Grant permissions for AWS ALB Ingress Controller
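The attached policies can also be listed from the CLI; a sketch, where the role name is a placeholder to be copied from the worker node's IAM role in the EC2 console (eksctl generates a name containing NodeInstanceRole):

```shell
ROLE_NAME="<worker-node-role-name>"   # placeholder: copy from the EC2 console
aws iam list-attached-role-policies --role-name "$ROLE_NAME" \
  --query "AttachedPolicies[].PolicyName" --output table
```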
Verify CloudFormation Stacks
- CloudFormation creates the EKS cluster and stores all events.
- Verify Control Plane Stack & Events
- Verify NodeGroup Stack & Events
Verify the Security Group Associated to Worker Nodes
- Go to Services -> EC2 -> Worker Nodes
- Click on the Security Group associated with the EC2 instance that contains "remote" in its name.
- If we expose a NodePort service in EKS, the corresponding inbound rule must be allowed on the worker nodes.
Remote security group -> Inbound rules -> Add:
All traffic - Anywhere - 0.0.0.0/0
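The same inbound rule can be added from the CLI; a sketch with a placeholder security group ID (note: all traffic from 0.0.0.0/0 is fine for a demo but far too permissive for production):

```shell
SG_ID="<remote-access-sg-id>"   # placeholder: the group with "remote" in its name
# IpProtocol=-1 means all protocols and ports
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
```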
Verify Cluster, NodeGroup in EKS Management Console
# List EKS clusters
# eksctl get cluster
# List NodeGroups in a cluster
eksctl get nodegroup --cluster=<clusterName>
# eksctl get nodegroup --cluster=eksdemo1
# List Nodes in current kubernetes cluster
# kubectl get nodes
# kubectl get nodes -o wide
# kubectl get pods -A
# kubectl get daemonset aws-node -n kube-system
# Our kubectl context should be automatically switched to the new cluster
kubectl config view --minify
Login to Worker Node using Keypair kube-demo
- Log in to a worker node
# For Mac, Linux, or Windows 10
ssh -i kube-demo.pem ec2-user@<Public-IP-of-Worker-Node>
- Verify that internet access is working from the node.
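Once logged in, a couple of quick commands can confirm outbound internet access from the node; a sketch:

```shell
# Run these on the worker node after ssh-ing in:
curl -sS -o /dev/null -w "%{http_code}\n" https://aws.amazon.com   # prints an HTTP status if outbound HTTPS works
ping -c 2 8.8.8.8   # basic connectivity check (ICMP may be blocked in some networks)
```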
Delete EKS Cluster & Node Groups
Step-01: Delete Node Group
# List EKS Clusters
eksctl get clusters
# Capture Node Group name
eksctl get nodegroup --cluster=<clusterName>
eksctl get nodegroup --cluster=eksdemo1
# Delete Node Group
eksctl delete nodegroup --cluster=<clusterName> --name=<nodegroupName>
eksctl delete nodegroup --cluster=eksdemo1 --name=eksdemo1-ng-public1
Step-02: Delete Cluster
# Delete Cluster
eksctl delete cluster <clusterName>
eksctl delete cluster eksdemo1
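After deletion, it is worth confirming nothing was left behind. eksctl drives everything through CloudFormation, so both can be checked (stack names follow eksctl's eksctl-<cluster>-* convention):

```shell
# Should no longer list eksdemo1
eksctl get cluster --region us-east-1

# Confirm the eksctl-managed stacks are gone (or in DELETE_COMPLETE)
aws cloudformation list-stacks --region us-east-1 \
  --query "StackSummaries[?starts_with(StackName, 'eksctl-eksdemo1')].[StackName,StackStatus]" \
  --output table
```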