From time to time you may need to look into, or even deploy, a load balancing solution that lets you scale your platform further than an individual server could take you.
You may be running, or planning to deploy, a farm of web servers, proxy servers or any other type of platform. Each node may be a standalone installation, yet you still want a single point of communication for your users.
Let's take a web server environment as an example.
Say we would like www.example.com to always be available, yet scalable and redundant for future growth. If this website runs on a single web server, we not only have a single point of failure, but we also have a limit on capacity, as we only have the local resources of that one server to scale with.
If we place a load balancing solution at the front of our platform, we can use any number of servers behind it to actually handle the requests.
Let's assume we have 3 servers acting as our web servers, and 2 load balancers to give us some High Availability within the load balancer deployment itself.
www.example.com : 10.0.1.60 (This will be the load balanced IP for our services)
---------------------------------------------------------------------------------
lb01.example.com (Primary) : 10.0.1.61
lb02.example.com (Secondary) : 10.0.1.62
web01.example.com : 10.0.1.71
web02.example.com : 10.0.1.72
web03.example.com : 10.0.1.73
Let's point www.example.com at our load balancer. The load balancer will then pass those requests on to any member of the platform, e.g. web01, web02 or web03 could receive and respond to the next incoming request.
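If you are building this in a lab without DNS, you could map these names on a test client instead; the entries below simply mirror the address table above:

```
# /etc/hosts on a test client (addresses from the table above)
10.0.1.60   www.example.com
10.0.1.71   web01.example.com
10.0.1.72   web02.example.com
10.0.1.73   web03.example.com
```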
I will be using the above system details in this walk through, and to keep things very simple, I will simply have the hostname of the responding server appear in a web browser when a request is handled.
E.g. if you browse to http://www.example.com, you will see the hostname of the server that has responded, which could be web01.example.com.
Let's start by setting up our web servers.
We will keep this very simple: just install Apache and create a default index.html file containing the hostname of the server.
To do this, run the following.
[root@web01 ~]# yum install -y httpd
...
...
[root@web01 ~]# chkconfig httpd on
[root@web01 ~]# service httpd start
Starting httpd:                                            [  OK  ]
[root@web01 ~]# iptables -I INPUT -p tcp --dport 80 -j ACCEPT
[root@web01 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@web01 ~]# hostname > /var/www/html/index.html
[root@web01 ~]#
Perform the above on all nodes you wish to use for your set up. I have completed this on web01.example.com, web02.example.com and web03.example.com.
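Rather than clicking through a browser, you can also confirm each node serves its own hostname from the command line. The helper below is just a sketch: it assumes curl is installed and that each node returns its hostname in the page body, as set up above.

```shell
#!/bin/sh
# Fetch the holding page from a node and compare the body with the
# hostname we expect to see (the one written to index.html earlier).
check_node() {
    node="$1"
    body=$(curl -s "http://$node/")
    if [ "$body" = "$node" ]; then
        echo "$node OK"
    else
        echo "$node MISMATCH (got: $body)"
    fi
}
```

You could then loop over all three nodes with: for n in web01.example.com web02.example.com web03.example.com; do check_node "$n"; done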
If you fire up a web browser and browse to each of your web servers, the holding page should show you the hostname of that server.
The reason we have done this is that when we set up the load balancer and browse to the VIP we have chosen for the www.example.com website, it will be very easy to see which host is responding to our requests.
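Once the load balancers are up and running later on, the same idea works from the command line: request the VIP several times and count which back ends answered. This is only a sketch, assuming curl is available and each node returns its hostname as set up above.

```shell
#!/bin/sh
# Request the given URL a number of times and summarise which
# back-end hostnames responded, e.g. "2 web01.example.com".
show_distribution() {
    url="$1"
    count="${2:-6}"
    i=0
    while [ "$i" -lt "$count" ]; do
        curl -s "$url"
        echo    # ensure each response ends up on its own line
        i=$((i + 1))
    done | grep -v '^$' | sort | uniq -c
}
```

With show_distribution http://www.example.com/ and a round-robin scheduler, you would expect roughly even counts across web01, web02 and web03.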
Now let's start building our load balancers.
We will be building two load balancers to act as a HA pair. This will mean our traffic will continue to flow even if we lose one of the load balancers.
Please note: the pulse service performs automatic failover when a failure occurs, but it will not automatically fail back once the primary has been restored. Please keep this in mind.
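Because there is no built-in fail-back, returning service to the restored primary is a manual step. One common approach, which you should verify in your own environment, is to restart pulse on the node currently holding the VIP once the primary is healthy again:

```
[root@lb02 ~]# service pulse restart
```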
On both nodes, install the following packages.
[root@lb01 ~]# yum install -y piranha ipvsadm
On both nodes, enable the pulse service to start on reboot.
[root@lb01 ~]# chkconfig pulse on
Only on your primary node, enable the Piranha web UI to start on boot and start the piranha-gui service.
The reason we do this is that the second node will know it's the secondary, and we don't want someone changing that config on it, as it won't replicate back to the primary.
[root@lb01 ~]# chkconfig piranha-gui on
[root@lb01 ~]# service piranha-gui start
Starting piranha-gui:                                      [  OK  ]
[root@lb01 ~]#
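Before you can log in to the web UI, you will also want to set a UI password, and, if the load balancers run the same restrictive firewall rules as the web nodes, open the UI port (3636 by default). The piranha package provides the piranha-passwd tool for the former:

```
[root@lb01 ~]# piranha-passwd
[root@lb01 ~]# iptables -I INPUT -p tcp --dport 3636 -j ACCEPT
[root@lb01 ~]# service iptables save
```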
One other critical part of a load balancer is to be able to route traffic. Enable IP forwarding on each of your load balancers.
Edit /etc/sysctl.conf and change

net.ipv4.ip_forward = 0

to

net.ipv4.ip_forward = 1
Save and exit, and then reload the config file with
[root@lb01 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
[root@lb01 ~]#
The net.bridge errors simply mean the bridge kernel module is not loaded on this host, and they can safely be ignored here.
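To confirm the setting took effect, you can read it straight back from /proc on each load balancer; it should now report 1:

```
[root@lb01 ~]# cat /proc/sys/net/ipv4/ip_forward
1
```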