1. Replicated storage using Red Hat Storage Server
If you’re a regular reader of my articles, you may recall that I have documented Red Hat Storage Server in the past; only minor changes have been made since then.
That being said, I’ll still cover the process.
Once you have your 3 Red Hat Storage Server nodes built, do the following on all 3 nodes.
Note: I have a 100GB disk on each server appearing as /dev/vdb.
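Before doing anything to the disk, it is worth confirming it is actually present under the name you expect. A quick sanity check (the device name /dev/vdb is from this example and may differ on your hardware):

```shell
# List the block device and confirm its size matches the 100GB disk
lsblk /dev/vdb

# Confirm it carries no existing filesystem signature before using it;
# blkid exits non-zero when no signature is found
blkid /dev/vdb || echo "no existing signature found"
```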
Create local volume
It is entirely up to you how you choose to prepare your underlying storage; in this example, I am using LVM on top of my block devices.
[root@rhs01 ~]# pvcreate /dev/vdb
  Writing physical volume data to disk "/dev/vdb"
  Physical volume "/dev/vdb" successfully created
[root@rhs01 ~]# vgcreate vg_rhs /dev/vdb
  Volume group "vg_rhs" successfully created
[root@rhs01 ~]# lvcreate -L 100G /dev/vg_rhs --name lv_rhsvol1
  Logical volume "lv_rhsvol1" created
Format local volume
[root@rhs01 ~]# mkfs.xfs -i size=512 /dev/vg_rhs/lv_rhsvol1
meta-data=/dev/vg_rhs/lv_rhsvol1 isize=512    agcount=4, agsize=3276544 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=13106176, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=6399, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Mount local volume
[root@rhs01 ~]# mkdir /rhs/vol1
[root@rhs01 ~]# echo "/dev/vg_rhs/lv_rhsvol1 /rhs/vol1 xfs defaults,allocsize=4096 0 0" >> /etc/fstab
[root@rhs01 ~]# mount -a
[root@rhs01 ~]# df -mhP
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_base-lv_root    8.7G  1.9G  6.4G  23% /
tmpfs                          499M     0  499M   0% /dev/shm
/dev/vda1                      194M   51M  134M  28% /boot
/dev/mapper/vg_rhs-lv_rhsvol1  100G   33M  100G   1% /rhs/vol1
[root@rhs01 ~]#
Lastly, create a local folder inside your new volume to be used for our web content. Here I am creating “www” inside “/rhs/vol1”.
[root@rhs01 ~]# mkdir /rhs/vol1/www
You should now have your local storage created, formatted and mounted on all of your Red Hat Storage Server nodes; it’s time to combine them into a single “Gluster” volume. In this example, we will be creating a “replicated” volume as opposed to a “distributed” volume.
Note: From only one of the Red Hat Storage Server nodes, run the following.
Probe other servers
You will need to probe the other servers prior to creating a volume.
gluster peer probe rhs02.example.com
gluster peer probe rhs03.example.com
Once you have probed all the servers you intend to use, you can list them with the following.
Be aware that if you see any IP addresses instead of hostnames, you have not configured your DNS resolution for those hosts correctly.
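A quick way to check name resolution up front is to look each peer up before probing; the hostnames below are the ones used in this example:

```shell
# Each host should print an IP address followed by its hostname;
# a missing entry means DNS (or /etc/hosts) needs fixing first
for host in rhs01.example.com rhs02.example.com rhs03.example.com; do
    getent hosts "$host" || echo "WARNING: $host does not resolve"
done
```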
[root@rhs01 ~]# gluster peer status
Number of Peers: 2

Hostname: rhs02.example.com
Uuid: 26f2c32e-63b2-4d31-a6f7-1819bb849e74
State: Peer in Cluster (Connected)

Hostname: rhs03.example.com
Uuid: 3b3fefb5-561d-4da3-85eb-52f2bb679c86
State: Peer in Cluster (Connected)
[root@rhs01 ~]#
Create Gluster Volume
Now let’s bring all of the above together to form a replicated Gluster volume.
To do this, run the following. Notice that we are creating a volume with 3 replicas and pointing it at 3 server names.
[root@rhs01 ~]# gluster volume create WWW replica 3 transport tcp rhs01.example.com:/rhs/vol1/www rhs02.example.com:/rhs/vol1/www rhs03.example.com:/rhs/vol1/www
volume create: WWW: success: please start the volume to access data
Start Gluster Volume
The last step is to start the volume. Although the volume is created, it is not accessible for use until it is started.
[root@rhs01 ~]# gluster volume start WWW
volume start: WWW: success
Once you have started the volume, you can verify its details with the below.
[root@rhs01 ~]# gluster volume info WWW

Volume Name: WWW
Type: Replicate
Volume ID: 36b3f37e-19fb-4d97-99e3-f287f70d098d
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: rhs01.example.com:/rhs/vol1/www
Brick2: rhs02.example.com:/rhs/vol1/www
Brick3: rhs03.example.com:/rhs/vol1/www
[root@rhs01 ~]#
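To confirm replication is actually working, you can mount the new volume from a client and write a test file through it; the file should then appear in the brick directory on all three nodes. A minimal sketch, assuming a client with the GlusterFS FUSE client installed and the volume name WWW from above (the mount point is an arbitrary choice):

```shell
# Mount the Gluster volume over the native FUSE client;
# any of the three servers can be named in the mount source
mkdir -p /mnt/www
mount -t glusterfs rhs01.example.com:/WWW /mnt/www

# Write a test file through the client mount...
echo "hello" > /mnt/www/test.txt

# ...then, on each storage node, the file should be visible in the brick:
#   ls /rhs/vol1/www/test.txt
```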
Thanks for this excellent article!
Dale: thanks for this great article. The BZ you mentioned got “CLOSED NEXTRELEASE” ages ago, which suggests that the fix could be part of the latest selinux-policy by now. Yet no “Fixed in version” is mentioned, while Miroslav usually adds the selinux-policy release in which it is fixed. Puzzling.
It seems the selinux-policy in RHEL6.4 has a fix:
$ getsebool -a | grep httpd_use_fusefs
httpd_use_fusefs --> off
So it should be just a matter of:
$ sudo setsebool -P httpd_use_fusefs on
to give Apache the ability to use GlusterFS storage.
Thanks Patrick, I was hoping for some good news like that.
I’ve just updated the article to reflect the changes.
great article! thnx !
After configuring Keepalived and starting it, I don’t see the listening port (3306) on either server:
[root@lb1 ~]# netstat -natp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1033/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1109/master
tcp 0 0 192.168.56.103:22 192.168.56.1:54793 ESTABLISHED 1201/sshd
tcp 0 0 :::22 :::* LISTEN 1033/sshd
tcp 0 0 ::1:25 :::* LISTEN 1109/master
Thanks for the article!
But in my build, the ownCloud website is very slow.
I find that the gluster processes on the RHS and web servers use high CPU
(about 75-90%) when loading the website.
Oh, after using NFS instead of Gluster to mount the Gluster volume on the web servers, everything is OK!
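For anyone wanting to try the same workaround, Gluster volumes can also be mounted over NFS rather than the FUSE client. A sketch, assuming the volume name WWW from the article and /var/www as the web root (adjust both for your setup):

```shell
# Mount the Gluster volume via NFS instead of the native FUSE client;
# Gluster's built-in NFS server speaks NFSv3, hence vers=3
mount -t nfs -o vers=3,mountproto=tcp rhs01.example.com:/WWW /var/www
```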
Great tutorial,
I am trying to implement this topology, but I am facing a problem and also have one question.
Should I install ownCloud on all 3 servers (the Apache web farm) so they can replicate each other?
Hi Dale,
Nice tutorial.
I tried your steps but I am facing one problem. Since I am using only two Apache web servers with ownCloud, should ownCloud be installed on both servers?
Even after installing it, I am unable to use the two servers at the same time for fail-over or load balancing.
Can you please advise me on this matter?
Great, great article Dale !
My system works fine …
Very nice article, but it needs so many servers.
Since I am just training on this kind of setup, I have one question.
My setup is 2 load balancers with HAProxy and 2 web servers, and it is working very well, but how can I use HTTPS instead of HTTP?
Please help me with this matter.
Thanks again for the nice tutorial.