Shashikant Shah

Sunday, 3 January 2021

GlusterFS (Gluster File System)

What is GlusterFS?

GlusterFS (Gluster File System) is an open source distributed file system that can scale out in building-block fashion to store multiple petabytes of data. Scale-out storage systems based on GlusterFS are suitable for unstructured data such as documents, images, audio and video files, and log files.

GlusterFS uses TCP by default. The relevant ports are:
1) rpcbind :- 111
2) glusterd :- 24007
3) client/brick connections :- 24009 - 24108
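If a firewall is active on the nodes, these ports have to be opened. A minimal sketch using ufw (assuming the stock Ubuntu firewall; adjust the brick port range to match your GlusterFS version):

# ufw allow 111
# ufw allow 24007/tcp
# ufw allow 24009:24108/tcp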

Different Types of Volumes :-

1) Replicated volume.
2) Distributed volume.
3) Striped volume.

1) Replicated volume.

Data is mirrored: every file is written to all bricks in the volume, so the volume survives the loss of a brick at the cost of usable capacity.

2) Distributed volume.

Files are spread across the bricks (each file lives on exactly one brick), which aggregates capacity but provides no redundancy.

3) Striped volume.

Individual files are split into chunks that are spread across the bricks, which can help with very large files. Note that striped volumes are deprecated and were removed upstream (around GlusterFS 6.0), so the stripe option may fail on newer versions; dispersed volumes are the usual replacement.

Node1 :- 172.31.39.204 (node1.example.com) 
Node2 :- 172.31.38.176 (node2.example.com)
Client :- 172.31.13.233 (client.example.com)
OS      :- Ubuntu 20.04
 
# vim /etc/hosts
172.31.39.204 node1.example.com node1
172.31.38.176 node2.example.com node2
172.31.13.233 client.example.com client
 
Install the server packages on node1 and node2 :-
# apt-get update
# apt -y install glusterfs-server
# systemctl enable --now glusterd
# systemctl status glusterd
# gluster --version
glusterfs 7.2
 
Install the client package on the client node :-
# apt-get -y install glusterfs-client
 
NFS (v3) is also supported, so the volume can also be mounted with NFS :-
# apt-get -y install nfs-common
# systemctl status rpcbind
 
Gluster logs :-
 
# cd /var/log/glusterfs/
bricks
cli.log
cmd_history.log
geo-replication
geo-replication-slaves
gfproxy
glusterd.log
glustershd.log
quota-mount-volume1.log
quota_crawl
quotad.log
snaps
 
1. Create a storage pool :-

On node1 :-
# gluster peer probe node2
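If the probe succeeds, the CLI prints a short confirmation (representative output, not captured from this setup):

peer probe: success.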

# gluster peer status
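The peer should show up as connected; representative output (the UUID will differ per installation):

Number of Peers: 1

Hostname: node2
Uuid: 8b9116a8-3d6a-4ac3-afba-fb79c3beb2ff
State: Peer in Cluster (Connected)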

Run the command on node2 :-

# gluster peer status

2. Create a Gluster volume :-
On both node1 and node2 servers, create the brick directory :-
# mkdir -p /gluster_data

Create a replicated volume on node1 :-
# gluster volume create volume1 replica 2 node1:/gluster_data/data node2:/gluster_data/data force
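Recent releases warn that replica 2 volumes are prone to split-brain and suggest replica 3 or an arbiter brick; answering y to the prompt proceeds anyway. On success the CLI prints something like:

volume create: volume1: success: please start the volume to access data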
# gluster volume list
volume1
#############################
Create a distributed volume :-
# gluster volume create volume2 node1:/gluster_data/data node2:/gluster_data/data force
                        OR
# gluster volume create volume1 node1:/gluster_data/data
Create a striped volume :-
# gluster volume create volume2 stripe 2 transport tcp node1:/gluster_data/data node2:/gluster_data/data force
##############################

# gluster volume info


# gluster volume list
# gluster volume start volume1
# gluster volume status



# gluster volume info

Volume Name: volume1
Type: Replicate
Volume ID: 8feae0a4-0061-4e01-8f8a-90679a34b5cc
Status: Started

3. Mount on the client node :-

# mkdir -p /mnt/volume1
# mount -t glusterfs node1:volume1 /mnt/volume1

# cd /mnt/volume1/

# touch jk sk file test

OR

# mount -t nfs -o mountvers=3 node1:/volume1 /mnt/volume1
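Note that recent GlusterFS releases ship with the built-in NFS server disabled (nfs.disable is on by default), so the NFS mount may first require: gluster volume set volume1 nfs.disable off.

To make the GlusterFS mount survive reboots, an fstab entry along these lines should work (a sketch; _netdev delays the mount until the network is up):

node1:/volume1 /mnt/volume1 glusterfs defaults,_netdev 0 0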

4. Check the brick path on both nodes :-

# ls /gluster_data/data/

file  jk  sk  test

5. How to delete a volume :-

# umount /mnt/volume1/

# gluster volume stop volume1

# gluster volume delete volume1

Helpful gluster commands :-

# gluster --help

 peer help                - display help for peer commands
 volume help              - display help for volume commands
 volume bitrot help       - display help for volume bitrot commands
 volume quota help        - display help for volume quota commands
 snapshot help            - display help for snapshot commands
 global help              - list global commands

# gluster peer help

gluster peer commands
======================
peer detach { <HOSTNAME> | <IP-address> } [force] - detach peer specified by <HOSTNAME>
peer help - display help for peer commands
peer probe { <HOSTNAME> | <IP-address> } - probe peer specified by <HOSTNAME>
peer status - list status of peers
pool list - list all the nodes in the pool (including localhost)

Usage: gluster [options] <help|peer|pool|volume>

# gluster pool list

UUID                                    Hostname        State
8b9116a8-3d6a-4ac3-afba-fb79c3beb2ff    node2           Connected
39c6d1c1-3c01-498b-a536-731e9c7e78d4    localhost       Connected

2. How to use quotas on GlusterFS volumes :-

# cd /mnt/volume1

# df -hP .

Filesystem      Size  Used Avail Use% Mounted on

node1:volume1   7.7G  3.7G  4.1G  48% /mnt/volume1

1. Check quotas :-

# gluster volume list

# gluster volume quota volume1 list

2. Enable quotas :-

# gluster volume quota volume1 enable

volume quota : success

3. Create a quota for the whole volume :-

# gluster volume quota volume1 limit-usage / 20MB

4. List quotas :-

# gluster volume quota volume1 list
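Representative output with the 20MB limit in place (columns trimmed to fit; the soft limit defaults to 80% of the hard limit):

Path    Hard-limit  Soft-limit    Used    Available  Soft-limit exceeded?  Hard-limit exceeded?
/       20.0MB      80%(16.0MB)   0Bytes  20.0MB     No                    No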

How to apply a quota to a directory :-
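The target directories must already exist inside the volume, so create them first via the client mount (assuming the /mnt/volume1 mount from earlier):

# mkdir /mnt/volume1/data1 /mnt/volume1/data2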
# gluster volume quota volume1 limit-usage /data1 10MB 
# gluster volume quota volume1 limit-usage /data2 10MB
# gluster volume quota volume1 list

Client side :-
# df -hP .
Filesystem      Size  Used Avail Use% Mounted on
node1:volume1    20M     0   20M   0% /mnt/volume1
 
# cd /mnt/volume1/data1
# df -hP .
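Inside data1, df should now report the 10MB directory quota rather than the volume size; representative output (assuming the quota-deem-statfs behaviour is active):

Filesystem      Size  Used Avail Use% Mounted on
node1:volume1    10M     0   10M   0% /mnt/volume1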

Test disk space :-
# cd /mnt/volume1/data1
# dd if=/dev/urandom of=myfile1 bs=10M count=1
# du -sh myfile1
10M     myfile1
# df -hP .
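The 10MB file exhausts the directory quota, so df should now show the limit fully used; representative output:

Filesystem      Size  Used Avail Use% Mounted on
node1:volume1    10M   10M     0 100% /mnt/volume1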

Disable quotas :-
# gluster volume quota volume1 disable

3. How to extend or shrink a GlusterFS volume :-
 
i) Add a brick :-
# gluster volume add-brick volume1 node2:/gluster_data/data force
Remove a brick (the data is migrated first, then the brick is removed) :-
# gluster volume remove-brick volume1 node2:/gluster_data/data start
# gluster volume remove-brick volume1 node2:/gluster_data/data commit
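Between start and commit, the migration progress can be checked; commit only once the status shows the rebalance has completed:

# gluster volume remove-brick volume1 node2:/gluster_data/data status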

ii) Rebalance the data across both servers. This command applies only to distributed volumes :-
# gluster volume rebalance volume1 start

4. Restricting client connections to GlusterFS volumes :-

# gluster volume create volume1 node1:/gluster_data/data node2:/gluster_data/data force
# gluster volume start volume1
# gluster volume info
# gluster volume get volume1 auth.allow
OR
# gluster volume get volume1 all | less
Block by IP, hostname, domain, or IP range :-
# gluster volume set volume1 auth.reject client.example.com
# gluster volume get volume1 auth.reject
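Representative output of the get command after setting the option:

Option                                   Value
------                                   -----
auth.reject                              client.example.com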

root@client:~# mount -t glusterfs node1:volume1 /mnt/volume1
root@client:~# mount | grep "gluster"
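If auth.reject matches the client, the mount fails and nothing appears in the mount table; the FUSE client typically prints a message along the lines of:

Mount failed. Please check the log file for more details.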
Check the brick logs on node1 (/var/log/glusterfs/bricks/) for the rejected connection :-

Remove the client from the reject list :-
# gluster volume reset volume1 auth.reject
# gluster volume get volume1 all | grep "auth."
