This article demonstrates how to install GlusterFS on a CentOS or Red Hat server. I’ve skipped some things such as SELinux, iptables, and fstab mount points; this is a tutorial of the basics. You will have GlusterFS up in no time, but you should spend time hardening it afterwards.
Run these commands on both/all of your Gluster hosts. We’re configuring these from scratch so any existing settings may be lost.
rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
yum -y install glusterfs glusterfs-fuse glusterfs-server
systemctl start glusterd.service
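Optionally, on both hosts you can also enable the service so it survives a reboot and confirm which version you ended up with:

systemctl enable glusterd.service
gluster --version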
We need to deal with the firewalls before we continue. Either stop firewalld with “systemctl stop firewalld” or allow the machines to communicate with each other by IP, as shown below.
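If you’d rather keep firewalld running, a minimal sketch is to whitelist the other Gluster host by source address. The 10.0.0.0/24 subnet below is an assumption based on the addresses used later in this tutorial; adjust it to your own network (run on both hosts):

firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" accept'
firewall-cmd --reload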
Now we need to run the following on the primary (first) Gluster server:
# where 10.0.0.12 is the IP address of the second server.
gluster peer probe 10.0.0.12
Doing the same on the second server should result in the following:
# where 10.0.0.11 is the IP address of the first server.
gluster peer probe 10.0.0.11
# expected output: ...already in peer list
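Either way, you can confirm the two servers see each other by checking the peer status on each host; both should list the other peer as connected:

gluster peer status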
Now we need to add some storage (called a brick). We’re going to use a dummy file-backed filesystem, but you’ll obviously use something more appropriate – this is just a tutorial.
Do this on both servers:
# 50MB filesystem in a file
dd if=/dev/zero of=/brick count=50000 bs=1024
mkfs.xfs /brick
mkdir /mnt/brick
mount -o loop /brick /mnt/brick
mkdir /mnt/brick/export
In the above we created a file, formatted it as an XFS filesystem, mounted it (via a loop device, since it’s a file rather than a real block device) and created a directory within that filesystem. The directory we created, “/mnt/brick/export”, is important because Gluster won’t let us use the root of a filesystem as a brick.
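Since fstab was skipped at the start, here is a hedged sketch of what an entry for this loop-mounted brick could look like if you want it to come back after a reboot; the options are assumptions for this dummy setup, and in production you would point at a real block device instead:

# /etc/fstab
/brick    /mnt/brick    xfs    loop,defaults    0 0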
Now we tell Gluster about the local and remote bricks.
On the primary server:
gluster volume create gvol0 replica 2 10.0.0.11:/mnt/brick/export 10.0.0.12:/mnt/brick/export
In the above we created a volume called “gvol0”, but you could call it anything. Next we chose a Replica volume (it replicates the data) and asked for the data to be replicated to 2 locations (we only have two bricks in this case). Finally we specified the servers and paths on which the bricks exist.
By the way, a volume is made up of multiple bricks.
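To illustrate, here is a hedged sketch of the same command with four bricks across four hypothetical servers (10.0.0.13 and 10.0.0.14 are made up for this example). This would give a distributed-replicated volume: each pair of bricks holds one replica set, and files are distributed across the pairs:

gluster volume create gvol0 replica 2 \
  10.0.0.11:/mnt/brick/export 10.0.0.12:/mnt/brick/export \
  10.0.0.13:/mnt/brick/export 10.0.0.14:/mnt/brick/export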
Run the following to see the current status and to start the volume:
gluster volume info
gluster volume status
gluster volume start gvol0
We’re now ready to mount the GlusterFS volume from another server/workstation:
# note that we mount the volume name (gvol0), not the brick path
mkdir -p /mnt/gluster
mount -t glusterfs 10.0.0.11:/gvol0 /mnt/gluster
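If the mount fails, check that the client machine actually has the Gluster client packages installed; a minimal sketch, assuming the same EPEL and Gluster repositories from earlier are also configured on the client:

yum -y install glusterfs glusterfs-fuse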
You can see what’s happening by viewing the files on both servers in their “/mnt/brick/export” directories.
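For example, a quick way to watch the replication happen (the filename here is just an example):

# on the client
echo "hello gluster" > /mnt/gluster/test.txt
# on each server
ls -l /mnt/brick/export/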
If you have trouble and need to start again from scratch, remember to do this on each server that holds a brick or you will get errors saying “…is already part of a volume”.
systemctl stop glusterd
setfattr -x trusted.glusterfs.volume-id /mnt/brick/export/
setfattr -x trusted.gfid /mnt/brick/export/
rm -rf /mnt/brick/export/.glusterfs
systemctl start glusterd
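If the volume itself still exists as far as Gluster is concerned, you may also need to stop and delete it before cleaning up the bricks (run this on one server):

gluster volume stop gvol0
gluster volume delete gvol0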
More information:
http://www.gluster.org/community/documentation/index.php/Getting_started_configure