Shashikant shah

Sunday 22 August 2021

Set up ACL and sudoers on CentOS 7

1    How to create a user and set a password.
# useradd <username>
# passwd <username>
 

2    How to change the primary and secondary groups of a user.
2.1    Create a new group.
# groupadd  <groupname>
 

2.2    Change the primary group of the user.
# id <username>
# usermod -g <groupname> <user_name>
 

2.3    Change the secondary group of the user.
# sudo usermod -a -G  <groupname> <user_name>
# grep <groupname> /etc/group
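For example, with a hypothetical user john and group devops (placeholder names, not from the original note):

# groupadd devops
# usermod -g devops john        # set primary group
# usermod -a -G devops john     # append secondary group
# id john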


3    How to remove a user from a group
# gpasswd -d  <user_name>  <groupname>


4    How to add users and groups to sudoers on Centos 7.
4.1    Aliases
Before we get into adding user permission entries to the sudoers template, it is important to understand how aliases work. Sudoers aliases come in four forms:

    User_Alias — specifies a group of users by username

    Runas_Alias — specifies the users (or UIDs) that commands may be run as

    Host_Alias — specifies a list of hostnames

    Cmnd_Alias — specifies a list of commands and directories

4.2   Create a file /etc/sudoers.d/<date>_optus_template

#vim /etc/sudoers.d/<date>_optus_template
############# group ime_group ##############
User_Alias OPTUS_IME_GROUP = %ime_group
Runas_Alias ROOT = root
Host_Alias ALL_HOST = ALL
Cmnd_Alias SYSADMIN = ALL
Cmnd_Alias NONROOT = /usr/bin/su,/usr/bin/sh,/usr/bin/bash,/usr/sbin/visudo,/usr/bin/passwd,/usr/bin/rm -rf /etc/sudoers.d/*,/usr/bin/rm /etc/sudoers.d/*,/bin/ls -[A-Za-z] /data/*
###########Sudo Stanza######
OPTUS_IME_GROUP ALL_HOST = (ROOT) NOPASSWD:SYSADMIN,!NONROOT

4.3 Check the sudoers file syntax.
#visudo -c
or
#visudo -cf /var/tmp/sudoers.new  
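The same syntax check can be run against just the new template (keeping the <date> placeholder from above):

# visudo -cf /etc/sudoers.d/<date>_optus_template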

ref URL :-
https://www.linux.com/topic/networking/how-wrestle-control-sudo-sudoers/


5    How to apply ACLs on the feed directory.

5.1    User home directory: read and execute permission only.
Note 1 :- do not give full permissions on the user home directory.
Note 2 :- do not apply an ACL on the .ssh directory.

# setfacl -m g:<group_name>:rx  <user_home_dir>

5.2    Data directory: read, write and execute permission.

# cd /home/user_name
# setfacl -R -d -m g:<group_name>:rwx  <dir_name>
# setfacl -R -m g:<group_name>:rwx  <dir_name>

5.3    How to verify and delete ACLs.

Verify ACLs :-
# getfacl  <dir_name>

Delete all ACLs :-
# setfacl -R -b  <dir_name>
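getfacl lists the effective entries; a sketch of typical output for a data directory with a group ACL (the devops group and path are placeholders):

# getfacl /data/feed
# file: data/feed
# owner: root
# group: root
user::rwx
group::r-x
group:devops:rwx
mask::rwx
other::r-x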

############### How to create a service in CentOS 7 ########

# cd /usr/lib/systemd/system

# vim adaptor.service

[Unit]
Description=Jboss adaptor service

[Service]
User=root
Group=root
Type=oneshot
RemainAfterExit=true
ExecStart=/data/cis/admin/JBossEAP7.2/jboss-eap-7.2-adaptor/jboss-Adapter.sh start
ExecStop=/data/cis/admin/JBossEAP7.2/jboss-eap-7.2-adaptor/jboss-Adapter.sh stop

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload

# systemctl enable adaptor.service

# systemctl start adaptor.service

# systemctl status adaptor.service -l

# ps -elf | grep "adaptor"

# systemctl stop adaptor.service

# systemctl status adaptor.service -l

# ps -elf | grep "adaptor"
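Because the unit is Type=oneshot with RemainAfterExit=true, systemd keeps it in the active (exited) state after the start script finishes; a quick check:

# systemctl is-active adaptor.service
active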

Sunday 17 January 2021

VPC EFS OpenVPN RDS

 



Web server → S3 bucket (VPC endpoint)

Subnets          IPs            Zone
VPC myvpc        10.0.0.0/16    NA
Public-sub01     10.0.1.0/24    ap-south-1a
Public-sub02     10.0.3.0/24    ap-south-1b
Private-sub01    10.0.2.0/24    ap-south-1a
Private-sub02    10.0.4.0/24    ap-south-1b

VPC :-
1) Create a VPC :-
# name :- myvpc (10.0.0.0/16)
2) Create public and private subnets.
# name :- Public-sub01 → select myvpc → 10.0.1.0/24
# name :- Public-sub02 → select myvpc → 10.0.3.0/24
# name :- Private-sub01 → select myvpc → 10.0.2.0/24
# name :- Private-sub02 → select myvpc → 10.0.4.0/24

3) Create an internet gateway.
# name :- my-internet-gateway → attached → myvpc
4) Create a NAT gateway.
# name :- my-NAT-gateway → subnet "public-sub" → Elastic IP.
5) Create route tables.
 i) name :- Private-RT → myvpc
   Routes → 0.0.0.0/0 → NAT (my-NAT-gateway)
   Subnet associations → Private-sub01, Private-sub02
ii) name :- public-route → myvpc
   Routes → 0.0.0.0/0 → IG (my-internet-gateway)
   Subnet associations → Public-sub01, Public-sub02
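The same layout can also be scripted with the AWS CLI; a minimal sketch (the IDs returned by each call are placeholders):

# aws ec2 create-vpc --cidr-block 10.0.0.0/16
# aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.1.0/24 --availability-zone ap-south-1a
# aws ec2 create-internet-gateway
# aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>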
 
 
EFS :-
EFS is accessible only from the selected AZs (a, b, c).
i) Create file system → EFS_group
ii) Select VPC → myVPC
iii) Network (details)
iv) Security group: allow the NFS port (2049).
v) Go to the attach option :-
Client-side install :-
# yum install -y amazon-efs-utils
# mkdir efs
# sudo mount -t efs -o tls fs-fa68122b:/ efs
# sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-fa68122b.efs.ap-south-1.amazonaws.com:/ efs

Create Multiple EFS :-
Access points :
File system :-  EFS_group
Name :- nfs-store
Root directory path :- /nfs-store
User ID :- 1001
Group ID :- 1001
Owner user ID :- 1001
Owner group ID :- 1001
OK
# Client Side
# mkdir efs-store
# sudo mount -t efs -o tls,accesspoint=fsap-00d06dfe8f1c61fb3 fs-fa68122b:/ efs-store
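To confirm both mounts on the client (output will vary):

# df -hT | grep -E 'efs|nfs4'
# mount | grep nfs4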

##### Create a new instance and add the EFS mount path #####





OpenVPN server to client :-

Launch an OpenVPN instance from the public OpenVPN AMI (console screenshots omitted):
Select t2.micro
Network select :- myVPC
Subnet select :- subnet-public
Auto-assign Public IP :- enable

SSH to the OpenVPN server
Username :- openvpnas
# answer Yes to all prompts

Reset the password :-

# sudo passwd openvpn

Configure settings in the VPN web UI:

Admin UI: https://65.1.3.195:943/admin

Download the client software from the link.

Client UI: https://65.1.3.195:943/

Login :- username :- openvpn
         password :- password@123

RDS :-

1. Create a subnet group for RDS.
# Subnet groups → name :- rdssubnet → select :- myvpc
# Availability Zones :-
ap-south-1a
ap-south-1b
# Subnets select :-
Private-sub01
Private-sub02

2. Create the database :-
# MySQL → Connectivity → myvpc → rdssubnet
# Public access → No
# VPC security group → RDS-SG
# OK
 
Create LB
1. Create RDS -- ok
2. Nginx with PHP setup -- ok
3. Check the connection from code to RDS -- ok
4. Change the code's insert query -- ok
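A quick way to test connectivity from the web server to RDS before wiring up the code (the endpoint and user below are placeholders):

# mysql -h <rds-endpoint>.ap-south-1.rds.amazonaws.com -u admin -p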


Tuesday 12 January 2021

Loki with Promtail and Grafana

Promtail (push)
Promtail helps monitor applications by shipping container logs to Loki or Grafana Cloud. This primarily involves discovering targets, attaching labels to log streams from both log files and the systemd journal, and shipping them to Loki. Promtail's service discovery is based on Prometheus' service discovery mechanism.

Loki
In its creators' words, Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. Loki uses the same service discovery mechanism as Prometheus and attaches labels to the log stream instead of indexing the log contents. As a result, logs received from Promtail carry the same set of labels as the application metrics, which makes switching context between logs and metrics easier and avoids the cost of full-text indexing.

Grafana
Grafana is an open-source platform for monitoring and observability. It operates on time-series data from sources like Prometheus and Loki, and lets you query, visualize, and alert on metrics regardless of where they are stored. It helps you create, explore, and share dashboards and encourages a data-driven culture.

Promtail --> Loki (logQL) --> Grafana

Install Loki
# cd /usr/local/bin
# curl -fSL -o loki.gz "https://github.com/grafana/loki/releases/download/v1.6.1/loki-linux-amd64.zip"
# gunzip loki.gz
# chmod a+x loki
# mkdir -p /etc/loki
# cd /etc/loki
# vim config-loki.yml
auth_enabled: false
server:
  http_listen_port: 3100
ingester:
  lifecycler:
    address: 127.0.0.1 # private IP loki server
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
  max_transfer_retries: 0
schema_config:
  configs:
    - from: 2018-04-15
      store: boltdb
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 168h
storage_config:
  boltdb:
    directory: /tmp/loki/index
  filesystem:
    directory: /tmp/loki/chunks
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
chunk_store_config:
  max_look_back_period: 0s
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

# useradd --system loki

# vim /etc/systemd/system/loki.service
[Unit]
Description=Loki service
After=network.target
[Service]
Type=simple
User=loki
ExecStart=/usr/local/bin/loki -config.file /etc/loki/config-loki.yml
[Install]
WantedBy=multi-user.target
# systemctl daemon-reload
# systemctl start loki
# systemctl enable loki
# systemctl status loki

curl "127.0.0.1:3100/metrics"

#### Worker node ####
# cd /usr/local/bin
# curl -fSL -o promtail.gz "https://github.com/grafana/loki/releases/download/v1.6.1/promtail-linux-amd64.zip"
# gunzip promtail.gz
# chmod a+x promtail
# mkdir -p /etc/promtail
# cd /etc/promtail
# vim config-promtail.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki_private_IP:3100/loki/api/v1/push
scrape_configs:
- job_name: system
  static_configs:
  - targets:
      - node.example.com
    labels:
      job: varlogs
      __path__: /var/log/*log

# vim /etc/systemd/system/promtail.service

[Unit]
Description=Promtail service
After=network.target
[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/promtail -config.file /etc/promtail/config-promtail.yml
[Install]
WantedBy=multi-user.target
# systemctl daemon-reload
# systemctl start promtail.service
# systemctl enable promtail.service
# systemctl status promtail.service 
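Promtail serves its own metrics on the HTTP port configured above; a quick check that it is up (assuming the config above):

# curl -s localhost:9080/metrics | grep -i promtail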

Configure Loki Data Source
1. Log in to the Grafana web interface and select 'Explore'. You will be prompted to create a data source.

2. Click on 'Add data source', then select Loki from the available options (screenshots omitted).

3. Input the following values for Loki (screenshot omitted; the URL is the Loki server address, e.g. http://localhost:3100):

Visualize Logs on Grafana with Loki

Click on Explore then select Loki at the Data source

Find root value in logs (screenshot omitted).

Alternatively, you can write a stream selector into the query field:
{job="default/prometheus"}
Here are some example streams from your logs:
{job="varlogs"}
 
Combine stream selectors
{app="cassandra",namespace="prod"}
 
Filtering for search terms.
{app="cassandra"} |~ "(duration|latency)s*(=|is|of)s*[d.]+"
{app="cassandra"} |= "exact match"
{app="cassandra"} != "do not match"
 
Count over time
count_over_time({job="mysql"}[5m])
 
Rate
rate(({job="mysql"} |= "error" != "timeout")[10s])
This query gets the per-second rate of all non-timeout errors within the last ten seconds for the MySQL job.
 
Aggregate, count, and group
sum(count_over_time({job="mysql"}[5m])) by (level)
Get the count of logs during the last five minutes, grouping by level.
 
Some query for log count
count_over_time({filename="/var/log/syslog"} !="ERROR"[5m])
count_over_time({job="varlogs"} !="ERROR"[5m])
count_over_time({job="varlogs"} [2h])
 
Create a Loki dashboard
Create dashboard → select Loki → add query
 
Prometheus and Grafana

 

 

Two servers:

1. Server – install Prometheus, Grafana, Alertmanager, Pushgateway.

2. Worker node – install node_exporter, nginx_exporter, nginxlog-exporter, blackbox_exporter.

Server Node :-

exporter --> prometheus(promQL) --> grafana

Prometheus :-
Prometheus is a monitoring tool designed for recording real-time metrics in a time-series database. It is an open-source software project, written in Go. The Prometheus metrics are collected using HTTP pulls, allowing for higher performance and scalability.
 
Other tools which make Prometheus complete monitoring tool are:

Exporters :- These are libraries that help with exporting metrics from third-party systems as Prometheus metrics.
 
1. Node exporter :- Node Exporter is the 'official' exporter that collects technical information from Linux nodes, such as CPU, disk, and memory statistics.
 
Pushgateway :- we will push some custom metrics to the Pushgateway and configure Prometheus to scrape those metrics from it.
 
Alertmanager :- we usually want to alert on certain metric conditions; that is where Alertmanager fits in. We set up targets and rules, and when a rule fires for a target, Alertmanager sends the alarm to destinations such as Slack, email, etc.
 

Blackbox exporter :- used to monitor websites with Prometheus. The Blackbox Exporter by Prometheus allows probing endpoints over HTTP, HTTPS, ICMP, TCP and DNS.

metrics:
i) Targets (Linux, Windows, applications) → CPU status, memory/disk usage, request count → each unit is called a metric, and metrics are saved in the Prometheus DB.
ii) Metrics format – human-readable, text-based.
HELP :- description of what the metric is.
 
TYPE :- there are 4 metric types.
1) counter :- how many times X happened (the value only ever increases, it never decreases).
              i) number of requests served.
              ii) tasks completed or errors.
2) gauge :- what is the current value of X now? (the value can go both up and down, e.g. CPU load now, disk space now.)
3) summary :- how long something took, or how big something was.
              i) count shows the number of times the event was observed.
              ii) sum shows the total time taken by that event.
4) histogram :- how long or how big, in buckets.
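The human-readable text format mentioned above looks like this; a generic sample (the metric name and labels are illustrative, not from this setup):

# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027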
 
 
PromQL: the Prometheus query language, which allows you to filter multi-dimensional time-series data.
 
 
Grafana is a tool commonly used to visualize data polled by Prometheus, for monitoring, and analysis. It is used to create dashboards with panels representing specific metrics over a set period of time.
1. Create the Prometheus system user and group
sudo groupadd --system prometheus
sudo useradd -s /sbin/nologin --system -g prometheus prometheus
 
2.Prometheus needs a directory to store its data.
sudo mkdir /var/lib/prometheus
for i in rules rules.d files_sd; do sudo mkdir -p /etc/prometheus/${i}; done
sudo apt update
sudo apt -y install wget curl vim
 
3.Download Prometheus
mkdir -p /tmp/prometheus && cd /tmp/prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.23.0/prometheus-2.23.0.linux-amd64.tar.gz
tar xvf prometheus*.tar.gz
cd prometheus*/
sudo mv prometheus promtool /usr/local/bin/
 
prometheus --version
promtool --version
 
sudo mv prometheus.yml /etc/prometheus/prometheus.yml
sudo mv consoles/ console_libraries/ /etc/prometheus/
 
 
4.Configure Prometheus
sudo vim /etc/prometheus/prometheus.yml
- job_name: 'prometheus'
 
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
 
    static_configs:
    - targets: ['localhost:9090']
 
 
How to verify the Prometheus configuration file :-
# promtool check config /etc/prometheus/prometheus.yml
 
 
5.Create a Prometheus systemd Service unit file
sudo vim /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus
Documentation=https://prometheus.io/docs/introduction/overview/
Wants=network-online.target
After=network-online.target
 
[Service]
Type=simple
User=prometheus
Group=prometheus
ExecReload=/bin/kill -HUP $MAINPID
# bind --web.listen-address to the server's private IP instead of 0.0.0.0 for security
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090

SyslogIdentifier=prometheus
Restart=always
 
[Install]
WantedBy=multi-user.target
 
OR (a variant that runs as root and enables the admin API and lifecycle endpoint):
##########################
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.enable-admin-api \
  --web.enable-lifecycle

SyslogIdentifier=prometheus
Restart=always

[Install]
WantedBy=multi-user.target
######################
 
6.Change directory permissions.
for i in rules rules.d files_sd; do sudo chown -R prometheus:prometheus /etc/prometheus/${i}; done
for i in rules rules.d files_sd; do sudo chmod -R 775 /etc/prometheus/${i}; done
sudo chown -R prometheus:prometheus /var/lib/prometheus/
 
7.Reload systemd daemon and start the service:
sudo systemctl daemon-reload
sudo systemctl start prometheus
sudo systemctl enable prometheus
sudo systemctl status prometheus
 
Alternatively, put the Prometheus UI behind Nginx with basic auth:

# htpasswd -c /etc/nginx/.htpasswd admin
 
#vim /etc/nginx/sites-enabled/prometheus.conf
server {
    listen 80 default_server;
 
    location / {
            auth_basic "Prometheus Auth";
            auth_basic_user_file /etc/nginx/.htpasswd;
            proxy_pass http://localhost:9090;
        }
}
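Validate and apply the Nginx configuration (standard nginx commands, assuming the paths above):

# nginx -t
# systemctl reload nginx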
 
http://13.127.100.171/
 
Grafana side :-
1. Add the source URL.
2. Enable basic auth.
3. Add the username and password.
 
http://13.127.100.171:9090/

Note :-

To reload Prometheus from the client side without a restart (requires the --web.enable-lifecycle flag shown in the variant unit file above):
# curl -X POST http://localhost:9090/-/reload

Install Grafana on Ubuntu 20.04

wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install grafana
 

sudo systemctl start grafana-server

sudo systemctl enable grafana-server

sudo systemctl status grafana-server

Default logins are:

Username: admin
Password: admin


Grafana Package details:

Installs binary to /usr/sbin/grafana-server

Installs Init.d script to /etc/init.d/grafana-server

Creates default file (environment vars) to /etc/default/grafana-server

Installs configuration file to /etc/grafana/grafana.ini

Installs a systemd service (if systemd is available) named grafana-server.service

The default configuration sets the log file at /var/log/grafana/grafana.log

The default configuration specifies a sqlite3 db at /var/lib/grafana/grafana.db

Installs HTML/JS/CSS and other Grafana files at /usr/share/grafana

Install a plugin via the CLI
# grafana-cli plugins install grafana-image-renderer
 
http://13.127.100.171:3000/login

Go to “data source” – add data source – select Prometheus

Add Prometheus URL http://13.127.100.171:9090

Worker Node :-
Node exporter
 
# wget https://github.com/prometheus/node_exporter/releases/download/v0.17.0/node_exporter-0.17.0.linux-amd64.tar.gz
# tar -xf node_exporter-0.17.0.linux-amd64.tar.gz
# cp node_exporter-0.17.0.linux-amd64/node_exporter /usr/local/bin
# chown root:root /usr/local/bin/node_exporter
# rm -rf node_exporter-0.17.0.linux-amd64*
 
Node Exporter's default port is 9100; here we change it to 9501.
 
$ vim /etc/systemd/system/node_exporter.service
 
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
 
[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/local/bin/node_exporter --web.listen-address=:9501
 
[Install]
WantedBy=multi-user.target
 
$ systemctl daemon-reload
$ systemctl start node_exporter
$ systemctl enable node_exporter
$ systemctl status node_exporter
 
http://clientIP:9501/metrics

Server node :-
Add the node exporter target in prometheus.yml

# vim /etc/prometheus/prometheus.yml
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'node_example_com'
    scrape_interval: 5s
    static_configs:
    - targets: ['172.31.39.204:9501']

# systemctl restart prometheus
# systemctl status prometheus

Grafana :-

Nginx connections
Enable the NGINX status page. First check that nginx was built with the stub_status module:
# nginx -V 2>&1 | grep -o with-http_stub_status_module
 
server {
 
  listen 80 default_server;
  # remove the escape char if you are going to use this config
  server_name \_;
 
  root /var/www/html;
  index index.html index.htm index.nginx-debian.html;
 
  location /nginx_status {
        stub_status;
       # allow 127.0.0.1;   # only allow requests from localhost
       # deny all;          # deny all other hosts
  }
 
  location / {
    try_files $uri $uri/ =404;
  }
 
}
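After reloading nginx, the stub_status page can be tested directly; typical output (the numbers will differ):

# curl http://127.0.0.1/nginx_status
Active connections: 1
server accepts handled requests
 10 10 10
Reading: 0 Writing: 1 Waiting: 0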
 
#cd /tmp
 
#wget https://github.com/nginxinc/nginx-prometheus-exporter/releases/download/v0.7.0/nginx-prometheus-exporter-0.7.0-linux-amd64.tar.gz
#tar -xf nginx-prometheus-exporter-0.7.0-linux-amd64.tar.gz
#mv nginx-prometheus-exporter /usr/local/bin
#useradd -r nginx_exporter
# Create Systemd Service File
 
#vim /etc/systemd/system/nginx_prometheus_exporter.service
[Unit]
Description=NGINX Prometheus Exporter
After=network.target
 
[Service]
Type=simple
User=nginx_exporter
Group=nginx_exporter
ExecStart=/usr/local/bin/nginx-prometheus-exporter -web.listen-address=":9113" -nginx.scrape-uri http://127.0.0.1/nginx_status
 
SyslogIdentifier=nginx_prometheus_exporter
Restart=always
 
[Install]
WantedBy=multi-user.target
 
#systemctl daemon-reload
#service nginx_prometheus_exporter start
#service nginx_prometheus_exporter status

Prometheus side :-
 
# vim /etc/prometheus/prometheus.yml
  - job_name: 'nginx'
    scrape_interval: 7s
    static_configs:
    - targets: ['172.31.39.204:9113']

Add Query and save

Change Visualization :-

Use a community dashboard in Grafana for the nginx service :

Dashboard ID :- 12708

https://grafana.com/grafana/dashboards/12708

Stop Nginx on the worker node (the target shows as down; screenshot omitted) :-

Monitoring Nginx status-code counts (200, 300, 404, ...) from different logs.

1)/var/log/nginx/access_shashi.log

2) /var/log/nginx/access.log

Worker node :-

# vim /etc/nginx/nginx.conf

# logging config
          log_format custom   '$remote_addr - $remote_user [$time_local] '
                              '"$request" $status $body_bytes_sent '
                              '"$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
 
# rm -rf /etc/nginx/sites-enabled/default
 
# cat /etc/nginx/conf.d/myapp.conf
 
server {
 
  listen 80 default_server;
  # remove the escape char if you are going to use this config
  server_name \_;
 
  root /var/www/html;
  index index.html index.htm index.nginx-debian.html;
 
  location / {
    try_files $uri $uri/ =404;
  }
 
}
 
# cat /etc/nginx/conf.d/shashi.conf
server {
 
  listen 81 default_server;
  # remove the escape char if you are going to use this config
  server_name \_;
 
  root /var/www/html;
  index index.html index.htm index.nginx-debian.html;
 
   access_log /var/log/nginx/access_shashi.log custom;
   error_log /var/log/nginx/error_shashi.log;
  location / {
    try_files $uri $uri/ =404;
  }
 
}
 
# systemctl status nginx
# systemctl restart nginx
 
Download Nginx Log Exporter
 
# wget https://github.com/martin-helmich/prometheus-nginxlog-exporter/releases/download/v1.4.0/prometheus-nginxlog-exporter
 
# chmod +x prometheus-nginxlog-exporter
# mv prometheus-nginxlog-exporter /usr/bin/prometheus-nginxlog-exporter
 
# mkdir /etc/prometheus
 
# vim /etc/prometheus/nginxlog_exporter.yml
 
listen:
  port: 4040
  address: "0.0.0.0"
 
consul:
  enable: false
 
namespaces:
  - name: shashi_log
    format: "$remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" \"$http_x_forwarded_for\""
    source:
      files:
        - /var/log/nginx/access_shashi.log
 
    labels:
      service: "shashi_log"
      environment: "production"
      hostname: "shashi_log.example.com"
    histogram_buckets: [.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10]
 
  - name: myapp_log
    format: "$remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" \"$http_x_forwarded_for\""
    source:
      files:
        - /var/log/nginx/access.log
 
    labels:
      service: "myapp"
      environment: "production"
      hostname: "myapp.example.com"
    histogram_buckets: [.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10]

# vim /etc/systemd/system/nginxlog_exporter.service

[Unit]
Description=Prometheus Log Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/bin/prometheus-nginxlog-exporter -config-file /etc/prometheus/nginxlog_exporter.yml

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload
# systemctl enable nginxlog_exporter
# systemctl restart nginxlog_exporter
# systemctl status nginxlog_exporter

# curl http://localhost:4040/metrics

Server side :-

# vim /etc/prometheus/prometheus.yml

  - job_name: 'log_nginx'
    scrape_interval: 10s
    static_configs:
    - targets: ['172.31.39.204:4040']

# systemctl restart prometheus
# systemctl status prometheus

e.g. :- <namespace>_http_response_count_total

Execute :- shashi_log_http_response_count_total

Execute :- myapp_log_http_response_count_total

Grafana :-

Configuring Grafana and Prometheus Alertmanager

Custom rules

1. How much memory is free (in percent) on a node?

1. Create the rule file.
# /etc/prometheus/rules/prometheus_rules.yml
groups:
  - name: custom_rules
    rules:
      - record: node_memory_MemFree_percent
        expr: 100 * node_memory_MemFree_bytes / node_memory_MemTotal_bytes
 
2. Check the rule file.
# promtool check rules prometheus_rules.yml

3. Add prometheus_rules.yml to /etc/prometheus/prometheus.yml

# vim /etc/prometheus/prometheus.yml

rule_files:

  - rules/prometheus_rules.yml

# systemctl  daemon-reload

# systemctl restart prometheus

# systemctl status prometheus

4. Go to Prometheus URL

# select Status → Configuration

# select → Rules

# execute query – node_memory_MemFree_percent

Example 2 :-
 
Free disk space in percent
 
# vim /etc/prometheus/rules/prometheus_rules.yml
 
      - record: node_filesystem_free_percent
        expr: 100 * node_filesystem_free_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}

# promtool check rules prometheus_rules.yml
# systemctl restart prometheus
# systemctl status prometheus

Alert rules :-
1. Rule for instance down.
2. Rule for 10 percent or less free disk space.
 
# vim /etc/prometheus/rules/prometheus_alert_rules.yml
groups:
  - name: alert_rules
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Instance [{{ $labels.instance }}] down"
          description: "[{{ $labels.instance }}] of job [{{ $labels.job }}] has been down for more than 1 minute."
 
      - alert: DiskSpaceFree10Percent
        expr: node_filesystem_free_percent <= 10
        labels:
          severity: warning
        annotations:
          summary: "Instance [{{ $labels.instance }}] has 10% or less Free disk space"
          description: "[{{ $labels.instance }}] has only {{ $value }}% or less free."

# promtool check rules prometheus_alert_rules.yml

# vim /etc/prometheus/prometheus.yml

rule_files:

  - rules/prometheus_rules.yml

  - rules/prometheus_alert_rules.yml

# systemctl daemon-reload

# systemctl restart prometheus

# systemctl status prometheus

Select Status → Rules

Alert Manager Setup

 

# wget https://github.com/prometheus/alertmanager/releases/download/v0.21.0/alertmanager-0.21.0.linux-amd64.tar.gz
# tar xvf alertmanager-0.21.0.linux-amd64.tar.gz
# cd alertmanager-0.21.0.linux-amd64
# cp -rvf alertmanager /usr/local/bin/
# cp -rvf amtool /usr/local/bin/
# cp -rvf alertmanager.yml /etc/prometheus/

# vim /etc/systemd/system/alertmanager.service
[Unit]
Description=Prometheus Alert Manager Service
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/alertmanager \
        --config.file=/etc/prometheus/alertmanager.yml

[Install]
WantedBy=multi-user.target


Change alertmanager.yml

global:
  resolve_timeout: 5m

route:
  group_by: ['alertname']
  receiver: 'email-me'
receivers:
- name: 'email-me'
  email_configs:
  - send_resolved: true
    to: devopstest11@gmail.com
    from: devopstest11@gmail.com
    smarthost: smtp.gmail.com:587
    auth_username: "devopstest11@gmail.com"
    auth_identity: "devopstest11@gmail.com"
    auth_password: "pass@123"
 
# amtool check-config alertmanager.yml
# service alertmanager start
# service alertmanager status
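Once Alertmanager is running, amtool can also query the currently firing alerts (assuming the default port 9093):

# amtool alert query --alertmanager.url=http://localhost:9093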
 
#vim /etc/prometheus/prometheus.yml
 
# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
       - localhost:9093
# systemctl restart prometheus
# systemctl status prometheus
http://13.127.100.171:9090/status
select Status → Runtime & build information.

Worker node :-
# systemctl stop node_exporter.service

Server node :-
Logs :
# tail -f /var/log/syslog

In the Gmail account, go to Settings → Security
NOTE :- Less secure app access :- ON

Worker node :-

# systemctl start node_exporter.service

(a 'resolved' notification mail is received).

1. Inspect option for the data coming from Prometheus, and rename the panel title from JSON.
   Inspect – (Data, Stats, JSON, Query)
2. How to restore an old dashboard.
   Settings – Versions
3. Manually add metrics.
   Add panel → (panel name) edit → Metrics

Pushgateway :-

In this tutorial, we will set up Pushgateway on a Linux machine, push some custom metrics to it, and configure Prometheus to scrape those metrics from the Pushgateway.

1.Install Pushgateway Exporter.

# wget https://github.com/prometheus/pushgateway/releases/download/v0.8.0/pushgateway-0.8.0.linux-amd64.tar.gz
# tar -xvf pushgateway-0.8.0.linux-amd64.tar.gz
# cp pushgateway-0.8.0.linux-amd64/pushgateway /usr/local/bin/pushgateway
# chown root:root /usr/local/bin/pushgateway

# vim /etc/systemd/system/pushgateway.service
[Unit]
Description=Pushgateway
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/local/bin/pushgateway

[Install]
WantedBy=multi-user.target

 

# systemctl daemon-reload
# systemctl restart pushgateway
# systemctl status pushgateway

# vim /etc/prometheus/prometheus.yml
  - job_name: 'pushgateway'
    honor_labels: true
    static_configs:
      - targets: ['localhost:9091']

# systemctl restart prometheus

Run the below command from the client side :-

# echo "cpu_utilization 20.25" | curl --data-binary @- http://localhost:9091/metrics/job/my_custom_metrics/instance/client_host/cpu/load

Take a look at the metrics endpoint of the pushgateway:

# curl -L  http://172.31.5.171:9091/metrics/  2>&1| grep "cpu_utilization"
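Several metrics can also be pushed at once with a type hint; a sketch against the same Pushgateway (the metric name is illustrative):

# cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/my_custom_metrics/instance/client_host
# TYPE disk_usage_percent gauge
disk_usage_percent 78.5
EOF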

## Pushgateway URL (screenshot omitted)

## Go to the Prometheus URL (screenshot omitted)

BlackBox Exporter :-

Client-side configuration of the Blackbox exporter.

# cd /opt
# wget https://github.com/prometheus/blackbox_exporter/releases/download/v0.14.0/blackbox_exporter-0.14.0.linux-amd64.tar.gz
# tar -xvf blackbox_exporter-0.14.0.linux-amd64.tar.gz
# cp blackbox_exporter-0.14.0.linux-amd64/blackbox_exporter /usr/local/bin/blackbox_exporter
# rm -rf blackbox_exporter-0.14.0.linux-amd64*
# mkdir /etc/blackbox_exporter

# vim /etc/blackbox_exporter/blackbox.yml
modules:
  http_2xx:
    prober: http
    timeout: 5s
    http:
      valid_status_codes: []
      method: GET

# vim /etc/systemd/system/blackbox_exporter.service
[Unit]
Description=Blackbox Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/local/bin/blackbox_exporter --config.file /etc/blackbox_exporter/blackbox.yml

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload
# systemctl start blackbox_exporter
# systemctl status blackbox_exporter
# systemctl enable blackbox_exporter

Note :- on the client side, nginx is running on port 8281 and not running on port 8282, so one probe will succeed and one will fail.

Prometheus server side :-

# vim /etc/prometheus/prometheus.yml
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
        - http://172.31.42.127:8281
        - http://172.31.42.127:8282
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 172.31.42.127:9115

# systemctl restart prometheus
# systemctl status prometheus

# Verify the Blackbox exporter :-

# http://52.66.196.119:9115/metrics
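A probe can also be triggered by hand to inspect the result (same module and target as configured above):

# curl "http://172.31.42.127:9115/probe?target=http://172.31.42.127:8281&module=http_2xx"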

# Verify the Blackbox status from Prometheus.