Load balancing distributes workload across multiple computers or a
computer cluster, CPUs, disk drives, or other system resources.
This optimizes resource use, maximizes throughput, minimizes
response time, and avoids overload. Using multiple components with load
balancing instead of a single component can also increase reliability through
redundancy. Load balancing is usually provided by dedicated software or
hardware, such as a multilayer switch or a Domain Name System (DNS) server process.
Why do Load Balancing?
At first glance it looks like overhead, but it is not; in practice it
helps us improve performance.
1. Load balancing
increases scalability to accommodate a higher volume of work by distributing
(balancing) client requests among several servers. This allows more servers
(hosts) to be added to handle additional load.
2. Load balancing
ensures uninterrupted availability of critical business
applications. When an application runs on more than one machine, operations
personnel can service one machine while another continues handling requests.
3. It keeps the
system ready to accept and handle growth and makes it fault resilient: if
a server crashes or simply needs maintenance work done, an alternative server
can take over.
Types:
There are various algorithms for load balancing, and which one to use
depends entirely on the context. Some of them are listed
below:
1. Round Robin:
This distributes requests to the servers in a round-robin manner,
independent of the load on each server. The problem with this type is that it works
blindly: even if a server is overloaded, requests will still be queued to it.
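Round robin can be sketched in a few lines; the server names below are hypothetical placeholders for real backends.

```python
import itertools

# Hypothetical backend identifiers; a real balancer would hold
# addresses of actual servers.
servers = ["server-a", "server-b", "server-c"]
cycle = itertools.cycle(servers)

def pick_server():
    # Hand out servers in a fixed rotation, ignoring their load.
    return next(cycle)

assigned = [pick_server() for _ in range(5)]
# -> ["server-a", "server-b", "server-c", "server-a", "server-b"]
```

Note that the rotation never consults the servers, which is exactly the blindness described above.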
2. Least
Connections: This algorithm keeps track of the number of active connections to
each server and always sends a new request to the server with the fewest
connections. If two servers have the same number of connections, it selects the
server with the lowest server identifier. The disadvantage is that when the system
is empty, the same server is used all the time.
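A minimal sketch of this selection rule, assuming integer server identifiers and in-memory connection counts:

```python
# Hypothetical map of server id -> number of active connections.
active = {1: 0, 2: 0, 3: 0}

def pick_server():
    # Choose the server with the fewest active connections; the
    # (count, id) key makes ties go to the lowest server identifier.
    sid = min(active, key=lambda s: (active[s], s))
    active[sid] += 1  # the new request opens a connection
    return sid

first = pick_server()   # all counts equal, so the lowest id wins
second = pick_server()  # server 1 now has a connection, so server 2 wins
```

The tie-break on the lowest identifier is also what causes the drawback mentioned above: an idle system always routes to the same first server.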
3. Round Trip: This
algorithm monitors the time to first byte for each server's connections.
The mean time is calculated over an averaging window, and the average value is
reset at the end of the window. The server with the lowest mean value gets
the request. This is more complex, but it can be implemented with a limited set of
variables.
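The windowed averaging above can be sketched as follows; the class name, sample values, and tie-break on server name are assumptions for illustration.

```python
class RoundTripBalancer:
    """Sketch of round-trip selection over an averaging window."""

    def __init__(self, servers):
        # Per-server response-time samples for the current window.
        self.samples = {s: [] for s in servers}

    def record(self, server, rtt):
        # Record one response-time sample for a server.
        self.samples[server].append(rtt)

    def pick_server(self):
        # Pick the server with the lowest mean in the current window;
        # a server with no samples yet is treated as mean 0.0.
        def mean(s):
            xs = self.samples[s]
            return sum(xs) / len(xs) if xs else 0.0
        return min(self.samples, key=lambda s: (mean(s), s))

    def reset_window(self):
        # Reset the averages at the end of the averaging window.
        for s in self.samples:
            self.samples[s].clear()

lb = RoundTripBalancer(["a", "b"])
lb.record("a", 120.0)  # hypothetical response times in milliseconds
lb.record("b", 40.0)
chosen = lb.pick_server()  # "b" has the lower mean
```

Only the running samples per server are kept, which is why the scheme needs just a limited set of variables.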
4. Transmitted Bytes:
This algorithm keeps track of the number of bytes transmitted by each web
server since the last averaging reset and uses this to allocate new requests to
the server that has transmitted the least.
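A sketch of byte-count-based selection, assuming the balancer can observe response sizes; the server names and byte totals are hypothetical.

```python
# Bytes transmitted by each server since the last averaging reset.
sent_bytes = {"web-1": 0, "web-2": 0}

def pick_server():
    # The server that has transmitted the fewest bytes wins;
    # ties break on the server name.
    return min(sent_bytes, key=lambda s: (sent_bytes[s], s))

def record_response(server, nbytes):
    # Account for the bytes a server just transmitted.
    sent_bytes[server] += nbytes

def reset_window():
    # Clear the totals at the end of the averaging window.
    for s in sent_bytes:
        sent_bytes[s] = 0

record_response("web-1", 5000)
choice = pick_server()  # "web-2" has transmitted fewer bytes
```

Like the round-trip scheme, this uses byte counts as a cheap proxy for load rather than measuring server load directly.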