Scaling OwnCloud with Red Hat Storage

2. Load Balancing with Red Hat Load Balancing Add-On (LVS)

I’ve covered load balancing with LVS in the past, both in NAT’d configurations with keepalived and in direct routing configurations with Piranha. In this implementation, I have used direct routing with keepalived.

IP Forwarding

One key requirement of load balancing is that your system must essentially act as a router. Unlike a regular system, which only receives and responds to traffic addressed to it, the load balancer must forward traffic through to other hosts.

To enable IP forwarding, edit /etc/sysctl.conf and ensure the following is set to “1”:

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

Once you have made the change, reload the kernel parameters with the following command.

sysctl -p
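If you want to script this check (for a kickstart or audit script, say), a small helper can confirm the persisted setting; this is just a sketch, and `check_ip_forward` is a hypothetical name:

```shell
# Hypothetical helper: read sysctl.conf-style text on stdin and report
# whether IP forwarding is persistently enabled.
check_ip_forward() {
    if grep -Eq '^[[:space:]]*net\.ipv4\.ip_forward[[:space:]]*=[[:space:]]*1[[:space:]]*$'; then
        echo enabled
    else
        echo disabled
    fi
}

# Example: check_ip_forward < /etc/sysctl.conf
check_ip_forward <<'EOF'
net.ipv4.ip_forward = 1
EOF
# prints "enabled"
```

Comparing this against `sysctl -n net.ipv4.ip_forward` will also tell you whether the running kernel matches the persisted configuration.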

Install required software

You’ll need to install two packages on both of your load balancer servers. Be aware that both systems must be subscribed to the “RHEL Server Load Balancer” software channel for these packages to be available.

yum install -y keepalived ipvsadm

Firewall

As your load balancers will be the front end to whatever platform you choose to run behind them, I’d highly recommend leaving iptables in place to filter your traffic.

We will be load balancing traffic for HTTP (tcp/80), HTTPS (tcp/443) and MariaDB (tcp/3306), so we need to open those ports. We also need to ensure that both load balancers can communicate over the VRRP protocol so that failover works successfully.

To open the required ports, run the following on both load balancers. Don’t forget to save your config.

iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 443 -j ACCEPT
iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
iptables -I INPUT -p vrrp -j ACCEPT
service iptables save
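Since the three TCP rules follow an identical pattern, you could generate them from a list. This is just a sketch (`lb_firewall_rules` is a hypothetical name); it prints the commands rather than applying them, so you can review before running:

```shell
# Hypothetical helper: emit the iptables commands for a set of
# load-balanced TCP ports, plus the VRRP rule needed for failover.
lb_firewall_rules() {
    for port in "$@"; do
        echo "iptables -I INPUT -p tcp --dport $port -j ACCEPT"
    done
    echo "iptables -I INPUT -p vrrp -j ACCEPT"
}

# Review the output, then pipe it to `sh` (as root) to apply.
lb_firewall_rules 80 443 3306
```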

Configuration

Similar to my previous articles on keepalived, I’ll explain keepalived in two parts.

IP Failover

Our virtual IP addresses reside on our load balancers but are active on one system at a time. When the active load balancer becomes unavailable, the other will assume the role and host those same IP addresses.

Before we begin, move or delete the original /etc/keepalived/keepalived.conf file, as we will be starting with a completely blank slate.

mv /etc/keepalived/keepalived.conf{,.bak}

Now, on your master (primary) load balancer (here I will use lb01.example.com), create a new /etc/keepalived/keepalived.conf file with the following contents.

vrrp_instance PUBLIC_V1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.1.10/24
        10.0.1.20/24
    }
}

You can see the two addresses that will be used as the virtual IPs at the bottom. The above config also sets this load balancer as the master and assigns the IP addresses to “eth0”.

On your backup (secondary) load balancer (here I will use lb02.example.com), create a new /etc/keepalived/keepalived.conf file with the following contents.

vrrp_instance PUBLIC_V1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.1.10/24
        10.0.1.20/24
    }
}

You’ll notice that the only difference between these files is the “state” directive, which sets the system as either the master or the backup. (Many setups also give the backup a lower priority, e.g. 90, so the roles are deterministic.)

Load Balancing

Now that we have our virtual IPs residing on the load balancers, we can add the load balancing configuration, which directs incoming traffic to the real backend servers for each role.

Add the following to the end of each of your load balancer’s keepalived.conf files.

virtual_server 10.0.1.20 3306 {
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 10.0.1.21 3306 {
        weight 100
        TCP_CHECK {
            connect_port 3306
            connect_timeout 5
        }
    }
    real_server 10.0.1.22 3306 {
        weight 100
        TCP_CHECK {
            connect_port 3306
            connect_timeout 5
        }
    }
    real_server 10.0.1.23 3306 {
        weight 100
        TCP_CHECK {
            connect_port 3306
            connect_timeout 5
        }
    }
}

virtual_server 10.0.1.10 80 {
    lb_algo rr
    lb_kind DR
    protocol TCP
    persistence_timeout 30
    virtualhost owncloud.example.com

    real_server 10.0.1.11 80 {
        weight 100
        TCP_CHECK {
            connect_port 80
            connect_timeout 5
        }
    }
    real_server 10.0.1.12 80 {
        weight 100
        TCP_CHECK {
            connect_port 80
            connect_timeout 5
        }
    }
    real_server 10.0.1.13 80 {
        weight 100
        TCP_CHECK {
            connect_port 80
            connect_timeout 5
        }
    }
}

virtual_server 10.0.1.10 443 {
    lb_algo rr
    lb_kind DR
    protocol TCP
    persistence_timeout 30
    virtualhost owncloud.example.com

    real_server 10.0.1.11 443 {
        weight 100
        TCP_CHECK {
            connect_port 443
            connect_timeout 5
        }
    }
    real_server 10.0.1.12 443 {
        weight 100
        TCP_CHECK {
            connect_port 443
            connect_timeout 5
        }
    }
    real_server 10.0.1.13 443 {
        weight 100
        TCP_CHECK {
            connect_port 443
            connect_timeout 5
        }
    }
}

If you read through the details of the above configuration, you’ll see that each VIP is used to balance a particular port number, and that each virtual_server block lists the real backend IP addresses of the servers that will handle those requests.

Once you have saved your config files, go ahead and start keepalived on both servers.

service keepalived start
chkconfig keepalived on

If you see no error output and everything worked successfully, your master load balancer will have its original host IP address as well as two new addresses: the two VIPs.

You can verify this with the ip command.

[root@lb01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1a:4a:00:00:a1 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.1/24 brd 10.0.1.255 scope global eth0
    inet 10.0.1.10/24 scope global secondary eth0
    inet 10.0.1.20/24 scope global secondary eth0
    inet6 fe80::21a:4aff:fe00:a1/64 scope link
       valid_lft forever preferred_lft forever
[root@lb01 ~]#
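If you’d rather script that check (for monitoring which node is currently active, say), a small helper can parse the `ip` output. This is just a sketch, and `holds_vip` is a hypothetical name:

```shell
# Hypothetical helper: given `ip -4 addr` output on stdin, report whether
# the VIP passed as $1 is currently assigned to this node.
holds_vip() {
    if grep -Fq "inet $1/"; then
        echo active
    else
        echo standby
    fi
}

# Example: ip -4 addr show dev eth0 | holds_vip 10.0.1.10
```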

This has only verified the IP addressing, however; now let’s verify the load balancing configuration. You can do this with the ipvsadm command.

Note: As I have already set up reverse lookups for the virtual IP addresses to resolve to their service names (e.g. galera.example.com and owncloud.example.com), the ipvsadm output shows those names in place of the IP addresses used in the configuration.

[root@lb01 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP galera.example.com:mysql rr
TCP owncloud.example.com:http rr persistent 30
TCP owncloud.example.com:https rr persistent 30
[root@lb01 ~]#

You’ll notice that the above output looks a little bland at the moment. More specifically, it doesn’t report any real servers as currently online. This is simply because we haven’t configured them yet, so don’t fret too much; we will do that next.
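Once real servers do come online, you can script a quick health summary by counting the real-server lines in `ipvsadm -Ln` output (with numeric output, those lines begin with `->` followed by an IP address). A sketch; `count_real_servers` is a hypothetical name:

```shell
# Hypothetical helper: count real-server entries in `ipvsadm -Ln` output
# read from stdin. With -Ln, real-server lines look like "-> <ip>:<port> ...",
# while the column-header line reads "-> RemoteAddress:Port" and is skipped.
count_real_servers() {
    grep -c '^[[:space:]]*-> [0-9]'
}

# Example: ipvsadm -Ln | count_real_servers
```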

12 comments on “Scaling OwnCloud with Red Hat Storage”

  1. Jan Dam November 26, 2013 14:39

    Thanks for this excellent article!

  2. Patrick November 27, 2013 17:19

    Dale: thanks for this great article. The BZ you mentioned got “CLOSED NEXTRELEASE ” ages ago which suggests that the fix could be part of the latest selinux-policy by now. Yet no “Fixed in version” is mentioned while Miroslav usually adds the selinux-policy release in which it is fixed. Puzzling.

  3. Patrick November 27, 2013 17:27

    It seems the selinux-policy in RHEL6.4 has a fix:
    $ getsebool -a | grep httpd_use_fusefs
    httpd_use_fusefs --> off

    So it should be just a matter of:
    $ sudo setsebool -P httpd_use_fusefs on
    to give Apache the ability to use GlusterFS storage.

    • Dale Macartney November 27, 2013 21:41

      Thanks Patrick, I was hoping for some good news like that.

      I’ve just updated the article to reflect the changes.

  4. tquang April 24, 2014 19:12

    After configuring keepalived and starting it, I don’t see a listening port (3306) on either server:

    [root@lb1 ~]# netstat -natp
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1033/sshd
    tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1109/master
    tcp 0 0 192.168.56.103:22 192.168.56.1:54793 ESTABLISHED 1201/sshd
    tcp 0 0 :::22 :::* LISTEN 1033/sshd
    tcp 0 0 ::1:25 :::* LISTEN 1109/master

  5. hoangvu August 8, 2014 18:15

    Thanks for the article!
    But in my build, the owncloud website is very slow.
    I find that the gluster processes on the RHS and web servers use high CPU
    (about 75-90%) when loading the website.

    • hoangvu August 8, 2014 18:42

      Oh, after using NFS instead of Gluster to mount the Gluster volume on the web servers, everything is OK!

  6. luli June 2, 2015 13:04

    Great tutorial.
    I am trying to implement this topology, but I am facing a problem and also have one question:
    should I install owncloud on all 3 servers (the apache web farm), so they can then replicate each other?

  7. theluli August 2, 2015 15:01

    Hi Dale
    You’ve got a nice tutorial.
    I tried your setup but I am facing one problem: since I am using only two apache web servers with owncloud, should owncloud be installed on both servers?
    Even after installing it, I am unable to get the two servers working at the same time for fail-over or load balancing.
    Can you please advise me on this matter?

  8. Stéphane January 20, 2016 09:00

    Great, great article Dale !
    My system works fine …

  9. theluli February 13, 2016 14:17

    Very nice article, but it needs so many servers.
    Since I am just training for this kind of thing, I have one question:
    my setup is two load balancers with haproxy and two web servers, and it is working very well, but how can I use https instead of http?

    Please help me in this matter
    Thanks again for nice tutorial
