Deploying a replicated NAS solution using Red Hat Storage Server 2.0

Let's jump onto our Gluster client system and mount the new Gluster file system.

The Gluster client is in the ‘rhel-x86_64-server-rhsclient-6’ Red Hat software channel in RHN. Be sure to add this channel to your system so you can install the necessary packages.

Install the necessary packages as follows.

[root@glusterclient01 ~]# yum install -y glusterfs glusterfs-fuse


Now let's mount our volume.
Notice that the remote source is MYDATA in capitals. You need to use the Gluster volume name, not the underlying mount point on the server. If you created your volume in lower case, you will need to use lower case here as well.

[root@glusterclient01 ~]# mount -t glusterfs gluster01:/MYDATA /media/


Let's verify that the mount was successful. Use either mount or df to do this.

[root@glusterclient01 ~]# mount
/dev/mapper/vg_base-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
gluster01:/MYDATA on /media type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@glusterclient01 ~]#
[root@glusterclient01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_base-lv_root
                      7.5G  1.3G  5.8G  18% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/vda1             485M   33M  428M   8% /boot
gluster01:/MYDATA
                       50G   33M   50G   1% /media
[root@glusterclient01 ~]#


As you can see above, the Gluster volume has been mounted successfully.
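If you want this mount to survive a reboot, you can also add it to /etc/fstab on the client. A minimal sketch, assuming the gluster01 server and MYDATA volume from this walkthrough:

```shell
# /etc/fstab entry (sketch) - mount the MYDATA volume at /media on boot.
# _netdev tells the init scripts to wait for the network before mounting,
# which matters for any network file system.
gluster01:/MYDATA  /media  glusterfs  defaults,_netdev  0 0
```

After adding the line, a quick `mount -a` will tell you whether the entry parses and mounts cleanly before you trust it to a reboot.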

Just to verify that I can save to my new mount point, I have echoed some text to the volume.

[root@glusterclient01 ~]# echo "My first write to Gluster" > /media/testwrite.txt

Now, just for curiosity's sake, go and have a look at each of your Gluster nodes. You should see that the same file has appeared in both locations.

[root@gluster01 ~]# ls /mydata/
testwrite.txt
[root@gluster01 ~]# cat /mydata/testwrite.txt
My first write to Gluster
[root@gluster01 ~]#
[root@gluster02 ~]# ls /mydata/
testwrite.txt
[root@gluster02 ~]# cat /mydata/testwrite.txt
My first write to Gluster
[root@gluster02 ~]#


That's basically it for the setup, folks. However, there is one last really cool feature that I love about Gluster.

Right now we have a replicated volume. Let's say, for example, that the two nodes are housed in two different buildings and we have set this up to give us an element of Disaster Recovery.
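You can confirm that the volume really is replicated by asking Gluster itself from either node (assuming the MYDATA volume name used in this walkthrough):

```shell
# Show the volume layout; for a two-way replica you should see
# "Type: Replicate", a brick count of "1 x 2 = 2", and both bricks
# (gluster01 and gluster02) listed.
gluster volume info MYDATA
```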

What happens in a disaster? I have the volume mounted on my client. What will happen if the NAS node it mounted from loses power or otherwise becomes unavailable?


You can act out this type of failure however you choose, but I have simply pulled the power cable out of my server whilst my client still has the volume mounted.

I've just performed another write to the mount point, and the best part is that it has not failed.

[root@glusterclient01 ~]# echo "My second write to Gluster" > /media/testwrite2.txt

No I/O errors at all and I can still read my existing data.



Why? Because the native Gluster client receives a list of all the Gluster nodes that serve the volume you have mounted. In the event of a failure, the client simply sends and receives the data to and from one of the other participating nodes.

Impressed yet?

Please note: this failover behaviour is only available with the native glusterfs client. CIFS and NFS client mounts will not behave in this way.
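One caveat with the native client: the server named in the mount command is only contacted at mount time to fetch the volume layout; after that, the client talks to all bricks directly. If that first server happens to be down when you mount, the mount itself will fail, so it is worth naming a fallback. A sketch, assuming the mount option spelling from this era of Gluster (newer releases spell it backup-volfile-servers):

```shell
# Fetch the volume layout from gluster01, falling back to gluster02
# if gluster01 is unreachable at mount time.
mount -t glusterfs -o backupvolfile-server=gluster02 gluster01:/MYDATA /media/
```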


Once the failed server comes back online, all the missing data will be synchronized back and replicated from the other active nodes.
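If you want to watch that resynchronisation happen rather than take it on faith, Gluster releases of this vintage provide a heal subcommand that reports files still waiting to be replicated (again assuming the MYDATA volume name):

```shell
# List files pending self-heal on each brick; an empty list for every
# brick means the replicas are back in sync.
gluster volume heal MYDATA info
```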


I hope you all find this article useful. If you would like any assistance with Gluster or any other areas of infrastructure, feel free to contact me directly or leave a comment.
