A while ago I wrote an article on setting up a virtual load balancing solution using Red Hat Enterprise Linux and Piranha. If you’re curious, you can find my previous article here.
Personally, I always loved using Piranha, as it provided a nice, functional web interface to configure your load balancing requirements. Yes, I'll admit it was never a Picasso when it came to appearance, but it was very good at getting the job done. From beginners to advanced users, it had everything you needed without ever needing to touch a config file.
Sadly though, as of Fedora 17, Piranha has now been orphaned and discontinued in the upstream Fedora repositories. So the question has to be asked “What will become of the Load Balancing Add-On in Red Hat Enterprise Linux?”.
In Red Hat Enterprise Linux 6, you still have a fully supported option of using Piranha. Will it be available in Red Hat Enterprise Linux 7 though? If I put my fortune teller’s hat on, I personally doubt it. I wouldn’t feel too comfortable deploying a new solution into an infrastructure that is based on Piranha if you’re expecting to upgrade to version 7 when it is eventually released.
Thankfully, there are alternatives to using Piranha. I did do a little victory dance when I realised that another product called "Keepalived" was also in that same Load Balancing Add-On channel from Red Hat.
This article aims to give you an alternative to Piranha for your load balancing solution, using the "keepalived" product.
Keepalived, like Piranha, taps into the LVS (Linux Virtual Server) components in the Linux kernel to give you load balancing capabilities. Although it does not have a web based user interface, it does have an unbelievably simple configuration file structure, which we will be going through shortly.
Who knows? Perhaps Red Hat is actually working on a web interface in their secret realms of magic. I'm just speculating of course, as I don't work for Red Hat; however, I would definitely love to see something jump in to fill the gap that Piranha's demise has left open.
That's enough background for now; let's crack on with the installation.
This walk-through will be setting up a 2 node load balancing solution to sit in front of a 3 node web server farm.
I will be using the details below in this example.
Load Balancing setup:
lb01.example.com IP: 10.0.1.1
lb02.example.com IP: 10.0.1.2
Public customer-facing VIP: 10.0.1.3
Private VIP: 192.168.0.10

Web Server setup:
web01.example.com: 192.168.0.1
web02.example.com: 192.168.0.2
web03.example.com: 192.168.0.3
Note: The private VIP will be used as the gateway IP for all backend web servers.
Prerequisites:
This article is based on servers running Red Hat Enterprise Linux 6. You will need 2 subscriptions to the Red Hat Load Balancing Add-On channel via RHN to meet the package requirements.
Step 1. Install Software.
On your two load balancer servers, install keepalived and ipvsadm. Don’t forget to make keepalived available on reboot.
[root@lb01 ~]# yum install -y keepalived ipvsadm ; chkconfig keepalived on
[root@lb02 ~]# yum install -y keepalived ipvsadm; chkconfig keepalived on
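Keepalived uses the kernel's IP Virtual Server (ip_vs) module to do the actual load balancing. If you want a quick sanity check that the module is available on your load balancers (assuming a stock Red Hat Enterprise Linux 6 kernel), you can load and list it manually:

[root@lb01 ~]# modprobe ip_vs
[root@lb01 ~]# lsmod | grep ip_vs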
On your three web servers, install Apache and make it available on reboot. (If you have iptables turned on, don’t forget to open port 80)
[root@web01 ~]# yum install -y httpd ; chkconfig httpd on ; service httpd start
[root@web01 ~]# iptables -I INPUT -p tcp --dport 80 -j ACCEPT ; service iptables save
…. repeat for all web servers …
To verify the responding host, I like to set up the index.html file to contain the hostname of the server.
[root@web01 ~]# echo $(hostname) > /var/www/html/index.html
…. repeat for all web servers …
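As per the note earlier, the private VIP (192.168.0.10) will be the default gateway for the web servers, so that the NAT-ed replies flow back through the load balancers. A minimal sketch of setting that on each web server, assuming standard RHEL 6 network scripts and no GATEWAY line already present in /etc/sysconfig/network:

[root@web01 ~]# echo "GATEWAY=192.168.0.10" >> /etc/sysconfig/network
[root@web01 ~]# service network restart

…. repeat for all web servers …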
Step 2. Enable IP Forwarding (routing)
We need to enable IP forwarding on our two load balancing servers. This will allow us to use the load balancers as the gateway for our real servers, and it will also allow the web traffic to pass through them.
To enable IP forwarding, edit the /etc/sysctl.conf file on both load balancer servers and change the following.
Change:
net.ipv4.ip_forward = 0
to:
net.ipv4.ip_forward = 1
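If you'd like to apply the setting immediately rather than waiting for the sysctl -p in the step below, and confirm the running value, a quick (optional) check looks like this:

[root@lb01 ~]# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[root@lb01 ~]# cat /proc/sys/net/ipv4/ip_forward
1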
Step 3. Disable ARP caching.
The more technically savvy will know that every IP address is mapped (via ARP) to the MAC address that is currently presenting that IP address.
When it comes to VIPs (Virtual IPs), these addresses need the ability to move from one system to another. A stale, cached ARP entry can effectively prevent clients from seeing your load balanced service.
To disable ARP caching, edit the /etc/sysctl.conf file on both load balancer servers and add the following entries.
net.ipv4.conf.all.arp_ignore = 3
net.ipv4.conf.all.arp_announce = 2
Once you have made the changes in steps 2 and 3, you will need to apply them. You can do this by running the following command. You'll notice that the configuration is printed to the console. Verify these values are correct before continuing.
[root@lb01 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.conf.all.arp_ignore = 3
net.ipv4.conf.all.arp_announce = 2
[root@lb01 ~]#
Step 4. Configure Keepalived.
Keepalived breaks its configuration file down into two simple sections.
Section 1) IP failover
Here is where the virtual IP address(es) are configured, along with the devices on which those IPs will be presented.
On your primary load balancing server (in my case lb01.example.com), configure /etc/keepalived/keepalived.conf to show the following.
Take note of the two stanzas, one for each of the virtual IPs. This is because they will be assigned to different network interfaces: eth0 = public, eth1 = private.
vrrp_sync_group VG1 {
    group {
        PUBLIC_V1
        PRIVATE_V1
    }
}

vrrp_instance PUBLIC_V1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.1.3/24
    }
}

vrrp_instance PRIVATE_V1 {
    state MASTER
    interface eth1
    virtual_router_id 52
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.0.10/24
    }
}
On your backup load balancer server (lb02.example.com), use the following configuration. You'll notice that it is almost completely identical. The only changes are that the state is now BACKUP instead of MASTER and the priority is lower than on the first load balancer. This will enable automatic fail-back in the event of a failure and recovery.
vrrp_sync_group VG1 {
    group {
        PUBLIC_V1
        PRIVATE_V1
    }
}

vrrp_instance PUBLIC_V1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 50
    advert_int 1
    virtual_ipaddress {
        10.0.1.3/24
    }
}

vrrp_instance PRIVATE_V1 {
    state BACKUP
    interface eth1
    virtual_router_id 52
    priority 50
    advert_int 1
    virtual_ipaddress {
        192.168.0.10/24
    }
}
Section 2) Load Balancing to real servers
This section is where you set up your backend servers that will actually be handling your requests. In this example, our 3 web servers.
Append the below to the end of the /etc/keepalived/keepalived.conf file on both load balancer servers.
virtual_server 10.0.1.3 80 {
    lb_algo rr
    lb_kind NAT
    protocol TCP

    real_server 192.168.0.1 80 {
        weight 100
    }
    real_server 192.168.0.2 80 {
        weight 100
    }
    real_server 192.168.0.3 80 {
        weight 100
    }
}
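One thing worth noting: as written above, this stanza performs no health checking, so a failed web server would still receive requests. Keepalived supports per-real_server checks; as a sketch (my addition, not part of the configuration above), you could add a simple TCP check to each real_server block:

real_server 192.168.0.1 80 {
    weight 100
    TCP_CHECK {
        connect_port 80
        connect_timeout 3
    }
}

With that in place, keepalived will remove a real server from the IPVS table when it stops answering on port 80, and add it back when it recovers.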
Step 5. Verify the installation
Once you have made the above changes, it's time to start the keepalived service on both load balancer servers.
[root@lb01 ~]# service keepalived start
Starting keepalived:                                       [  OK  ]
[root@lb01 ~]#
[root@lb02 ~]# service keepalived start
Starting keepalived:                                       [  OK  ]
[root@lb02 ~]#
You can verify that both your public and private facing VIPs are online by printing your IP configuration, e.g.:
[root@lb01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1a:4a:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.1/24 brd 10.0.1.255 scope global eth0
    inet 10.0.1.3/24 scope global secondary eth0
    inet6 fe80::21a:4aff:fe00:0/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1a:4a:00:00:34 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.10/24 scope global eth1
    inet6 fe80::21a:4aff:fe00:34/64 scope link
       valid_lft forever preferred_lft forever
[root@lb01 ~]#
The VIP will favour running on the MASTER load balancer as it has the highest priority.
If you stop the keepalived service, or otherwise make the MASTER unavailable (powering off is always a good test), the VIPs will fail over to the BACKUP almost immediately (1 second to be precise, based on the above configuration).
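A simple way to exercise this, assuming lb01 is currently the MASTER: stop keepalived on lb01 and watch the VIPs appear on lb02.

[root@lb01 ~]# service keepalived stop
[root@lb02 ~]# ip addr show eth0 | grep 10.0.1.3
[root@lb02 ~]# ip addr show eth1 | grep 192.168.0.10

Start keepalived on lb01 again and, thanks to its higher priority, the VIPs will fail back to it.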
To verify the current state of the load balancing table, you can print the running config with the following.
Note: this will appear on both load balancer servers, regardless of which is the master.
[root@lb01 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.1.3:http rr
  -> 192.168.0.1:http             Masq    100    0          0
  -> 192.168.0.2:http             Masq    100    0          0
  -> 192.168.0.3:http             Masq    100    0          0
[root@lb01 ~]#
You should now be in a position to browse to your new VIP.
Once the page loads, you will see which server has responded (it will show the hostname we saved to the index.html file earlier).
If you press the refresh button a few times, you'll notice that the name of the responding server changes. This proves that the requests are being distributed across the servers to share the incoming load.
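If you prefer the command line, a quick loop with curl from a client that can reach the public VIP shows the round-robin behaviour nicely (the hostnames should cycle through web01, web02 and web03):

$ for i in 1 2 3 4 5 6; do curl -s http://10.0.1.3/; done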
Of course, you wouldn't normally have such a simple website. You would simply replace the index.html content here with your regular Apache website configuration.
The key here is to make sure your configuration is applied to all servers in the load balancing setup, and also to ensure that the data hosting your website is consistent/shared. You don't want different content being served from different servers, after all.
I hope this has proven useful. Please feel free to leave a comment below with feedback and/or any questions you might have.