1. Replicated storage using Red Hat Storage Server
If you’re a regular reader of my articles, you may recall that I have documented Red Hat Storage Server in the past. Only minor changes have been made since then, but I’ll still cover the full process here.
Once you have your 3 Red Hat Storage Server nodes built, do the following on all 3 nodes.
Note: I have a 100GB disk on each server appearing as /dev/vdb.
Create local volume
It is entirely up to you how you choose to prepare your underlying storage; in this example, I am using LVM on top of my block devices.
[root@rhs01 ~]# pvcreate /dev/vdb
  Writing physical volume data to disk "/dev/vdb"
  Physical volume "/dev/vdb" successfully created
[root@rhs01 ~]# vgcreate vg_rhs /dev/vdb
  Volume group "vg_rhs" successfully created
[root@rhs01 ~]# lvcreate -L 100G /dev/vg_rhs --name lv_rhsvol1
  Logical volume "lv_rhsvol1" created
Format local volume
The "-i size=512" option sets a 512-byte inode size, which gives Gluster room to store its extended attributes within the inode itself.

[root@rhs01 ~]# mkfs.xfs -i size=512 /dev/vg_rhs/lv_rhsvol1
meta-data=/dev/vg_rhs/lv_rhsvol1 isize=512    agcount=4, agsize=3276544 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=13106176, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=6399, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Mount local volume
[root@rhs01 ~]# mkdir -p /rhs/vol1
[root@rhs01 ~]# echo "/dev/vg_rhs/lv_rhsvol1 /rhs/vol1 xfs defaults,allocsize=4096 0 0" >> /etc/fstab
[root@rhs01 ~]# mount -a
[root@rhs01 ~]# df -mhP
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_base-lv_root    8.7G  1.9G  6.4G  23% /
tmpfs                          499M     0  499M   0% /dev/shm
/dev/vda1                      194M   51M  134M  28% /boot
/dev/mapper/vg_rhs-lv_rhsvol1  100G   33M  100G   1% /rhs/vol1
[root@rhs01 ~]#

Note: "mkdir -p" is used because the parent directory "/rhs" does not exist yet.
Lastly, create a local folder inside your new volume to be used for our web content. Here I am creating "www" inside "/rhs/vol1".
[root@rhs01 ~]# mkdir /rhs/vol1/www
You should now have your local storage created, formatted and presented on all your Red Hat Storage Server nodes; now it’s time to combine them into a single “Gluster” volume. In this example, we will be creating a “replicated” volume as opposed to a “distributed” volume.
Note: Run the following from only one of the Red Hat Storage Server nodes.
Probe other servers
You will need to probe the other servers prior to creating a volume.
gluster peer probe rhs02.example.com
gluster peer probe rhs03.example.com
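With more nodes this is easily scripted. A minimal sketch, assuming the same host names as this article (echo is used here so the commands can be reviewed first; drop it to actually run them):

```shell
# Sketch: probe each additional node in a loop (host names assumed).
# echo prints the commands for review; remove it to execute them.
for peer in rhs02.example.com rhs03.example.com; do
    echo gluster peer probe "$peer"
done
```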
Once you have probed all the servers you intend to use, you can list them with the following.
Be aware that if you see any IP addresses instead of hostnames, you have not configured your DNS resolution for those hosts correctly.
[root@rhs01 ~]# gluster peer status
Number of Peers: 2

Hostname: rhs02.example.com
Uuid: 26f2c32e-63b2-4d31-a6f7-1819bb849e74
State: Peer in Cluster (Connected)

Hostname: rhs03.example.com
Uuid: 3b3fefb5-561d-4da3-85eb-52f2bb679c86
State: Peer in Cluster (Connected)
[root@rhs01 ~]#
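If you want to sanity-check the peers from a script, the connected count can be pulled out of that output with grep. A sketch, fed here from a saved copy of the output format shown above rather than a live cluster (on a real node you would use STATUS="$(gluster peer status)"):

```shell
# Sketch: count connected peers in `gluster peer status` output.
# STATUS holds sample output in the format shown above.
STATUS='Number of Peers: 2

Hostname: rhs02.example.com
State: Peer in Cluster (Connected)

Hostname: rhs03.example.com
State: Peer in Cluster (Connected)'

# Count lines reporting a connected peer in the cluster.
connected=$(printf '%s\n' "$STATUS" | grep -c 'State: Peer in Cluster (Connected)')
echo "connected peers: $connected"
```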
Create Gluster Volume
Now let’s bring all the above together to form a replicated Gluster volume.
To do this, run the following. Notice that we are forming a volume with 3 replicas and pointing it at the 3 server names.
[root@rhs01 ~]# gluster volume create WWW replica 3 transport tcp rhs01.example.com:/rhs/vol1/www rhs02.example.com:/rhs/vol1/www rhs03.example.com:/rhs/vol1/www
volume create: WWW: success: please start the volume to access data
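The brick list grows with the node count, so for larger clusters it can help to assemble the command in a loop rather than type every brick by hand. A sketch, assuming the same host names and brick path used above (echo prints the finished command for review; remove it to execute):

```shell
# Sketch: build the volume-create command from a host list (hosts assumed).
BRICKS=""
for host in rhs01 rhs02 rhs03; do
    BRICKS="$BRICKS ${host}.example.com:/rhs/vol1/www"
done

# Print the assembled command for review before running it.
echo "gluster volume create WWW replica 3 transport tcp$BRICKS"
```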
Start Gluster Volume
The last step here is to start the volume; although the volume has been created, it is not accessible for use until it is started.
[root@rhs01 ~]# gluster volume start WWW
volume start: WWW: success
Once you have started the volume, you can verify its details with the following.
[root@rhs01 ~]# gluster volume info WWW

Volume Name: WWW
Type: Replicate
Volume ID: 36b3f37e-19fb-4d97-99e3-f287f70d098d
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: rhs01.example.com:/rhs/vol1/www
Brick2: rhs02.example.com:/rhs/vol1/www
Brick3: rhs03.example.com:/rhs/vol1/www
[root@rhs01 ~]#