HTTP Traffic Distribution Explained

Ever wondered what a load balancer is? You may have come across the term while using AWS, or heard of popular load balancers like NGINX. But do you know how one actually works? I had the same question a week ago, so I began researching load balancers. The simple answer: a load balancer distributes incoming network traffic across multiple servers.

Features of a Load Balancer

  • Load balancers receive incoming client requests and distribute them across backend servers, much like a reverse proxy does.

  • High availability is maintained by distributing requests over multiple servers: if one server fails, traffic is redirected to the remaining ones.

  • A load balancer facilitates horizontal scaling; servers can be added or removed as demand changes.

How I Made My Own Load Balancer

I made a simple load balancer in Go that uses the Round Robin algorithm to distribute traffic (more on that later).

  • My first step was to create a server struct that uses a reverse proxy to forward requests to the actual backend server.

type server struct {
    addr  string
    proxy *httputil.ReverseProxy
}

  • For the actual load balancer, I created three predefined servers for now and used a Round Robin counter to distribute requests; a sketch of how these pieces might be wired together follows the struct below.
type loadbalance struct {
    port    string
    rrcount int
    servers []*server
}
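
The post doesn't show how these structs get populated. As a rough sketch, assuming each backend is addressed by a plain URL and using the standard library's httputil.NewSingleHostReverseProxy, construction could look something like this (the names newServer and newLoadBalancer are my own, not from the original code):

// assumes imports: "log", "net/http/httputil", "net/url"

// newServer wraps one backend address in a reverse proxy.
func newServer(addr string) *server {
    u, err := url.Parse(addr)
    if err != nil {
        log.Fatal(err)
    }
    return &server{
        addr:  addr,
        proxy: httputil.NewSingleHostReverseProxy(u),
    }
}

// newLoadBalancer builds a server entry for each backend address.
func newLoadBalancer(port string, addrs []string) *loadbalance {
    servers := make([]*server, 0, len(addrs))
    for _, a := range addrs {
        servers = append(servers, newServer(a))
    }
    return &loadbalance{port: port, servers: servers}
}

Calling newLoadBalancer(":8080", []string{"http://localhost:8081", "http://localhost:8082", "http://localhost:8083"}) would then give three predefined backends (the addresses here are just example values), matching the [A, B, C] setup described next.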

Round Robin: Round Robin distributes requests to a group of servers in circular order. In my case, I had three predefined servers: [A, B, C]

  • The first request goes to server A

  • The second request to server B

  • The third request to server C

  • The fourth request goes back to server A, and the cycle continues.

server := lb.servers[lb.rrcount%len(lb.servers)]
lb.rrcount++

The modulo operation (%) ensures that when rrcount exceeds the number of servers, the index wraps back to the beginning of the list.
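
To show where this selection fits, here is a minimal sketch of the request handler, assuming the load balancer implements http.Handler (the post only shows the two selection lines, so the surrounding ServeHTTP method is my own framing):

// assumes import: "net/http"

// ServeHTTP picks the next backend in round-robin order and
// hands the request to that backend's reverse proxy.
func (lb *loadbalance) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    server := lb.servers[lb.rrcount%len(lb.servers)]
    lb.rrcount++
    server.proxy.ServeHTTP(w, r)
}

Starting the balancer is then just http.ListenAndServe(lb.port, lb), assuming port holds an address such as ":8080". One caveat: net/http handles requests concurrently, so a real version would protect rrcount with a mutex or an atomic counter.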

  • I've also implemented a basic health check.

      for !server.IsAlive() {
          lb.rrcount++
          server = lb.servers[lb.rrcount%len(lb.servers)]
      }
    

    If the server is not alive, the load balancer moves on to the next one, i.e., it maintains the Round Robin order but skips unavailable servers. A possible IsAlive implementation is sketched below.
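
The IsAlive method itself isn't shown above. A simple approach is to probe the backend with a short-timeout HTTP request; the sketch below assumes addr is a full URL and that a 2-second timeout is acceptable (both are my assumptions):

// assumes imports: "net/http", "time"

// IsAlive probes the backend and reports whether it answered at all.
// Any response, regardless of status code, counts as alive here.
func (s *server) IsAlive() bool {
    client := http.Client{Timeout: 2 * time.Second}
    resp, err := client.Get(s.addr)
    if err != nil {
        return false
    }
    resp.Body.Close()
    return true
}

Note that the retry loop above assumes at least one backend is alive; if every server is down it will spin forever, which is another thing a production balancer would have to handle.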

This isn't production-ready at all, as it:

  • Doesn't account for server load or capacity differences

  • Doesn't consider the current number of connections or response time

I will keep updating this project as I learn more about load balancers.
