All our Web Services are developed and deployed to support Load Balancing, providing redundancy and high availability. A Load-Balanced deployment of any Product consists of n+1 Nodes. Clients are unaware of which Node they establish a connection to: our Products support stateless connections, so the behavior of the application is independent of the node to which the client connects.
This document describes the special configuration that our Products support for a Load Balanced Deployment.
The very essence of load balancing is that if a node goes down, a client is seamlessly served by another node that is up. At any point in time, however, a client has a large amount of context data that identifies its session with the server, such as shopping cart data, user information and preferences. To ensure that clients do not perceive any downtime when switching between nodes, our Products maintain a central persistent session store that is shared by all the nodes.
Nodes do not go down frequently, so loading session data from the persistent store on every client request would be an unnecessary performance overhead. The load-balanced architecture is therefore configured so that a client continues to connect to the same Node unless that Node goes down, and session data is fetched from the persistent session store only when it has been modified.
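The caching behavior described above can be sketched as follows. This is a minimal illustration, not the Products' actual implementation: the `CentralSessionStore`, `Node`, and the version-number scheme are all hypothetical stand-ins for a shared persistent store (such as a database) and a node's in-memory session cache.

```python
class CentralSessionStore:
    """Hypothetical shared persistent store (stands in for a DB)."""
    def __init__(self):
        self._data = {}  # session_id -> (version, payload)

    def write(self, session_id, payload):
        version, _ = self._data.get(session_id, (0, None))
        self._data[session_id] = (version + 1, dict(payload))

    def read(self, session_id):
        return self._data.get(session_id, (0, {}))

    def version(self, session_id):
        return self._data.get(session_id, (0, {}))[0]


class Node:
    """One web node: serves session data from its in-memory cache and
    reads the central store only when its cached copy is stale."""
    def __init__(self, store):
        self.store = store
        self.cache = {}       # session_id -> (version, payload)
        self.store_reads = 0  # full payload reads from the central store

    def get_session(self, session_id):
        # A cheap version probe decides whether a full read is needed;
        # while the client sticks to this node, the cache stays valid.
        central_version = self.store.version(session_id)
        cached = self.cache.get(session_id)
        if cached is None or cached[0] != central_version:
            self.store_reads += 1
            cached = self.store.read(session_id)
            self.cache[session_id] = cached
        return cached[1]

    def set_session(self, session_id, payload):
        # Every modification goes to the central store, so any other
        # node can take over the session after a failover.
        self.store.write(session_id, payload)
        self.cache[session_id] = self.store.read(session_id)
```

With sticky routing, repeated requests to the same node are served entirely from its cache; a second node (the failover case) performs exactly one store read to pick the session up.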
Live monitoring and instant detection of any downtime
Several scripts monitor the health of the various nodes and respond to any downtime. Some of these simply need to be configured on the Load Balancer. Downtimes, however, vary in nature: on many occasions a Node may continue to respond on port 80 while the application itself is unresponsive, due to resource locks, resource unavailability, memory pressure or Application Server issues. Additional scripts monitor these circumstances and trigger the appropriate flags on the Load Balancer, allowing automated steps to be taken to bring a node back to Operational status.
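The distinction between "port 80 answers" and "the application answers" can be sketched with two probes and a small classifier. The `/health` endpoint and the three status labels are assumptions for illustration, not the Products' actual monitoring protocol.

```python
import socket
import urllib.request


def port_open(host, port, timeout=2.0):
    """Layer-4 probe: can a TCP connection be opened at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def app_healthy(url, timeout=2.0):
    """Layer-7 probe: does the application itself respond correctly?
    (Assumes a hypothetical health endpoint returning HTTP 200.)"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def classify(port_ok, app_ok):
    """Map the two probe results onto a load-balancer action."""
    if not port_ok:
        return "DOWN"       # pull the node out of rotation
    if not app_ok:
        return "DEGRADED"   # port answers, but the app is wedged
    return "UP"
```

A monitoring loop would run both probes per node and act on the classification, e.g. flagging a `DEGRADED` node for an automated Application Server restart.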
Upgrades and Bug Patching
With an n+1 architecture it is important to ensure that all Nodes are running the same code at all times; different codebases running on different nodes would be catastrophic for Data Integrity. For this purpose a separate set of deployment scripts is implemented for a Load Balanced setup, ensuring that all modifications, upgrades and patches to any code are replicated across all the Nodes automatically.
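One common way to verify that every node is on the same code, sketched below under assumed names: hash each node's deployed tree into a single fingerprint and require the fingerprints to match before the cluster is considered consistent. The real deployment scripts may work differently; this only illustrates the consistency check.

```python
import hashlib


def codebase_fingerprint(files):
    """Hash a mapping of relative path -> file bytes into one digest.
    (A real check would walk the node's release directory on disk.)"""
    digest = hashlib.sha256()
    for path in sorted(files):          # sorted: order-independent
        digest.update(path.encode())
        digest.update(files[path])
    return digest.hexdigest()


def nodes_in_sync(node_trees):
    """True only if every node reports an identical fingerprint."""
    fingerprints = {codebase_fingerprint(tree) for tree in node_trees}
    return len(fingerprints) == 1
```

A deployment script can run this check after pushing a patch and refuse to return a node to rotation until its fingerprint matches the rest of the cluster.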
Synchronous modification of settings
The various parameters and meta-data that the application requires are stored in XML and property files. Each node has a separate copy of these files, loaded at start-up and maintained in memory. Our implementation ensures that any modification to these files is replicated across the nodes.
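The replication of settings can be sketched as a small push model: a change to the shared configuration is immediately pushed to every node, and each node reloads its in-memory copy. `ConfigNode` and `ConfigReplicator` are hypothetical names for illustration; the real implementation re-parses XML and property files rather than key/value pairs.

```python
class ConfigNode:
    """One node's in-memory view of its configuration files."""
    def __init__(self):
        self.settings = {}

    def reload(self, files):
        # The real product re-parses the XML/property files on disk;
        # here we simply adopt the new key/value view.
        self.settings = dict(files)


class ConfigReplicator:
    """Pushes every settings change to all nodes in the cluster so
    their in-memory copies never diverge."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.files = {}

    def update(self, key, value):
        self.files[key] = value
        for node in self.nodes:
            node.reload(self.files)
```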
Each node also has a separate dynamic data store for files that are generated at run-time. When the nodes span multiple servers, an intelligent backup agent ensures that all these files are backed up across all nodes.
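A simple form of this backup pass can be sketched as taking the union of the run-time files seen on any node and copying the missing ones everywhere. This is a deliberately naive sketch: the function name is hypothetical, and conflict resolution (e.g. preferring the newest copy when two nodes hold different versions of the same path) is out of scope here.

```python
def sync_dynamic_files(nodes):
    """Hypothetical backup pass over per-node file maps
    (relative path -> file bytes): union all run-time files seen
    on any node, then copy the missing ones to every node."""
    union = {}
    for files in nodes:
        union.update(files)   # last writer wins on path collisions
    for files in nodes:
        files.update(union)
    return union
```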
NAT to look like a Single Server
Although a Load Balanced setup consists of a cluster of servers, various applications require the cluster to appear as a single server to the external world. For instance, external resources that the Web servers connect to may have firewalls that only allow connectivity from a set of specified IP addresses. As the Web Nodes multiply, the number of source IP addresses increases, and not all external vendors allow provisions for multiple IP addresses. An IP Masquerade is therefore set up between the Web Nodes and the Load Balancer: the Web Nodes attempt connections to external resources using virtual IP Addresses, which the Load Balancer masquerades to the actual live IP Addresses. This allows the entire Load Balanced cluster to appear as a single server to the external world.
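The masquerade can be pictured as a source-NAT translation table on the Load Balancer: each outbound connection from any node is rewritten to originate from the cluster's one public IP, and the table maps replies back to the originating node. The class below is a toy model of that table (the `Masquerader` name, port range, and the documentation IP address are assumptions); in practice this is done by the Load Balancer or the OS NAT layer, not application code.

```python
class Masquerader:
    """Toy source-NAT table: outbound connections from any web node
    are rewritten to originate from the cluster's one public IP."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000  # arbitrary start of the NAT port range
        self.table = {}         # public_port -> (node_ip, node_port)

    def outbound(self, node_ip, node_port):
        """Rewrite an outbound connection; return the (ip, port) the
        external vendor will actually see."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (node_ip, node_port)
        return self.public_ip, public_port

    def inbound(self, public_port):
        """Route a reply back to the node that opened the connection."""
        return self.table[public_port]
```

Because every node's traffic leaves through the same public address, an external vendor's firewall only ever needs to whitelist one IP, no matter how many Web Nodes the cluster grows to.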