Deploying a replicated NAS solution using Red Hat Storage Server 2.0

This article covers how to scale out Red Hat Storage Server across multiple replicas, giving your solution High Availability capabilities.

Red Hat Storage Server leverages the Gluster File System. Unlike a normal file system, which you format directly onto a Logical Volume or partition, GlusterFS acts as a network-based file system, similar to NFS or CIFS.
One fantastic thing about Gluster is that Red Hat Enterprise Virtualization will soon support storage domains that reside on Gluster implementations. This will allow greater storage scalability in future versions of RHEV.

In this example, I will be using the following system details:

Node 1 (running Red Hat Storage Server 2.0)
OS Disk: /dev/sda
Data Disk: /dev/sdb
Data Mount Point: /mydata

Node 2 (running Red Hat Storage Server 2.0)
OS Disk: /dev/sda
Data Disk: /dev/sdb
Data Mount Point: /mydata

Client (running Red Hat Enterprise Linux 6.3)


As the Red Hat Storage Server appliance is based on Red Hat Enterprise Linux, I won't be covering network adapter configuration in this article. Please ensure that all nodes can perform forward and reverse lookups of each other, and can ping each other successfully.
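As a quick sanity check before going further, a sketch like the following can confirm name resolution and reachability from each node. The hostnames used here are assumptions for illustration; substitute your own node names.

```shell
# Sketch: confirm each peer resolves (forward lookup) and answers ping.
# 'gluster01'/'gluster02' are assumed names -- substitute your own.
check_peer() {
    # forward lookup via NSS (honours /etc/hosts and DNS)
    if getent hosts "$1" >/dev/null; then
        echo "$1: resolves"
    else
        echo "$1: DNS lookup FAILED"
    fi
    # basic reachability
    if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then
        echo "$1: reachable"
    else
        echo "$1: ping FAILED"
    fi
}

check_peer gluster01
check_peer gluster02
```

Run it on every node. Reverse lookups can be checked the same way by passing the peer's IP address to getent hosts.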

Perform a default install of Red Hat Storage Server and, when selecting which disks to use, ensure you leave the "Data" disks you wish to use for storage untouched. We will configure them later.

The installation of Red Hat Storage Server is a very simple appliance-like process. If you would like more information on the installation process, please read the Installation Guide available here.

Setting up a Gluster replicated Volume

Now it's time to set up our two-node replica using Gluster.

On both nodes you will have unused disks (I currently only have one, which is sdb). Create a new volume group using your spare disk(s); I have called mine 'vg_rhs'.
Perform the following on both nodes.

Create your Volume group

[root@gluster01 ~]# pvcreate /dev/sdb
Writing physical volume data to disk "/dev/sdb"
Physical volume "/dev/sdb" successfully created
[root@gluster01 ~]# vgcreate vg_rhs /dev/sdb
Volume group "vg_rhs" successfully created
[root@gluster01 ~]#


Create your Logical Volume for your data

[root@gluster01 ~]# lvcreate -L 50G /dev/vg_rhs --name lv_mydata
Logical volume "lv_mydata" created
[root@gluster01 ~]#


Create an XFS file system on your new Logical Volume
Note: Please use the '-i size=512' switch, as a 512-byte inode size is Red Hat's recommended practice for Gluster bricks. See documentation here

[root@gluster01 ~]# mkfs.xfs -i size=512 /dev/vg_rhs/lv_mydata
meta-data=/dev/vg_rhs/lv_mydata  isize=512    agcount=4, agsize=3276544 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=13106176, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=6399, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@gluster01 ~]#


Mount your XFS file system locally and add it to /etc/fstab

[root@gluster01 ~]# mkdir /mydata
[root@gluster01 ~]# echo "/dev/vg_rhs/lv_mydata   /mydata                 xfs     defaults,allocsize=4096 0 0" >> /etc/fstab
[root@gluster01 ~]# mount -a
[root@gluster01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
                      7.5G  1.4G  5.7G  20% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/sda1             485M   31M  429M   7% /boot
/dev/mapper/vg_rhs-lv_mydata
                       50G   33M   50G   1% /mydata
[root@gluster01 ~]#
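Since the 512-byte inode size matters for Gluster, it's worth confirming it actually took effect. A minimal sketch: a filter you can pipe mkfs.xfs or xfs_info output through (on a live node, `xfs_info /mydata | check_isize`; below it is demonstrated against the sample mkfs.xfs output from above).

```shell
# Sketch: verify the brick file system was created with isize=512.
# On a live node, feed it real output: xfs_info /mydata | check_isize
check_isize() {
    if grep -q 'isize=512'; then
        echo "inode size OK"
    else
        echo "inode size WRONG -- recreate the file system with -i size=512"
    fi
}

# Demonstrated against the meta-data line from the mkfs.xfs output above:
echo 'meta-data=/dev/vg_rhs/lv_mydata  isize=512    agcount=4' | check_isize
# -> inode size OK
```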


Once again, the above steps MUST be performed on both nodes. Please ensure that the output of 'df -h' is the same on both hosts. If the outputs do not match, please go back and correct the issue.
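One way to make that comparison mechanical is a small helper that diffs the relevant 'df' line from each host. How you gather the two lines is up to you (over ssh, or copy and paste); the helper itself is just a string comparison.

```shell
# Sketch: flag a mismatch between the /mydata line of 'df -h' on each node.
# Collect each node's line however you prefer (e.g. over ssh) and pass both in.
compare_df() {
    if [ "$1" = "$2" ]; then
        echo "MATCH"
    else
        echo "MISMATCH:"
        echo "  node1: $1"
        echo "  node2: $2"
    fi
}

# Example with the output from above (second node's values are hypothetical):
compare_df "50G   33M   50G   1% /mydata" "50G   33M   50G   1% /mydata"
# -> MATCH
```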
