How to Configure Nginx Load Balancer

Written by: Bagus Facsi Aginsa
Published at: 26 Aug 2021


Using Nginx as a load balancer is a common use case around the world. A load balancer not only balances the load across upstream servers, but can also improve the overall security of your application. Two types of load balancer can be built with Nginx: a Layer 4 load balancer and a Layer 7 load balancer. The term "load balancer" (without specifying the layer) usually refers to Layer 7.

In this tutorial, we will configure a Layer 7 load balancer. If you need a Layer 4 load balancer configuration, you can check my other tutorial: How to Configure Nginx as Layer 4 Load Balancer.

Check Prerequisite

This tutorial requires Nginx to be installed. If Nginx is not installed yet, you can install it with apt install nginx.

Sudo Privileges

Before starting, make sure we will have no permission issues on the configuration by switching to the root user:

sudo su

Use Case

To demonstrate the load balancing configuration, we will use this use case:

                                                 ____________
                                                |            |
                                         -----> |    App1    |
                                        |       |____________|
                        ___________     |        ____________
                       |           |    |       |            |
     user -----------> |     LB    |----|-----> |    App2    |
                       |___________|    |       |____________|
                                        |        ____________
                                        |       |            |
                                         -----> |    App3    |
                                                |____________|

Three application nodes will be load balanced by the Nginx LB (Load Balancer) node. These are the specifications of the nodes:

  1. LB Node
    • IP: 10.11.12.13
    • Port: 3000
  2. App1 Node
    • IP: 10.1.1.10
    • Port: 4000
  3. App2 Node
    • IP: 10.1.1.20
    • Port: 4000
  4. App3 Node
    • IP: 10.1.1.30
    • Port: 4000

In this use case, we will assume the application uses TLS (HTTPS) with example.com as the domain name. The load balancer will terminate the TLS connection, then use plain HTTP to the upstream. If you are not using TLS, we will also provide guidance on how to configure that.

Load Balancer Configuration

First, make sure Nginx is already running, then move our working directory to the Nginx configuration folder:

cd /etc/nginx

Back up the default Nginx configuration file:

mv nginx.conf nginx.conf.old

Create a new configuration file:

nano nginx.conf

Add this configuration to the new file:

user www-data;
worker_processes auto;
worker_rlimit_nofile 8192;
pid /run/nginx.pid;

events {
        worker_connections 4096;
}

This sets up a basic global configuration for Nginx. The important parts are the worker_rlimit_nofile and worker_connections directives. You can set these numbers higher depending on the specification of your server; there is no universally best value, so you should run a load test yourself.

After that, create an http block below the events block, like this:

http {
   include       mime.types;
   default_type  application/octet-stream;

   # Log Setting
   log_format complete '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for" '
                       'rt=$request_time '
                       'ua="$upstream_addr" us="$upstream_status" '
                       'ut="$upstream_response_time" ul="$upstream_response_length"';
   access_log /var/log/nginx/access.log complete;
   error_log  /var/log/nginx/error.log warn;

   # File Sending Optimization
   sendfile on;
   tcp_nopush on;
   tcp_nodelay on;

   # Keepalive Connection
   keepalive_timeout 65;

   upstream app_node {
      server 10.1.1.10:4000;
      server 10.1.1.20:4000;
      server 10.1.1.30:4000;
   }

   server {
      listen 3000 ssl http2;
      server_name example.com;

      # Security and SSL
      ssl_protocols TLSv1.3 TLSv1.2;
      ssl_certificate /etc/ssl/cert_file.crt;
      ssl_certificate_key /etc/ssl/key_file.key;

      location / {
         # HTTP 1.1
         proxy_http_version 1.1;
         proxy_set_header Connection "";
         proxy_set_header Host $host;
         proxy_pass http://app_node;
      }
   }
}

Let’s review this line by line.

The include mime.types directive imports the mime.types file from the Nginx directory, so Nginx supports serving the static file extensions listed in that file.

The default_type directive sets the default Content-Type response header when the type cannot be determined. The usual default is application/octet-stream, as used in this tutorial.

The log_format directive defines the format of the Nginx access log.

The access_log and error_log directives set where the Nginx logs will be stored.

The sendfile, tcp_nopush, and tcp_nodelay directives optimize Nginx performance when serving static files.

The keepalive_timeout directive sets how long Nginx keeps a client connection open after a request. If the user makes another request within the keepalive_timeout, Nginx reuses the existing connection instead of opening a new one, which improves performance.

Inside the upstream block, there are three servers matching our use case. Our goal is to load balance these three servers behind Nginx. By default, Nginx uses the round-robin algorithm to load balance incoming traffic, directing it sequentially to the servers in the configured upstream. If you want to use another algorithm, you can check the Nginx documentation: Choosing an NGINX Plus Load‑Balancing Technique.
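For example, to pick the server with the fewest active connections instead of using round robin, you could add the least_conn directive to the upstream block. The weight and backup parameters below are illustrative, not part of our use case:

```nginx
upstream app_node {
   least_conn;                       # send each request to the server with the fewest active connections
   server 10.1.1.10:4000 weight=2;   # example: receives roughly twice the share of the others
   server 10.1.1.20:4000;
   server 10.1.1.30:4000 backup;     # example: only used when the other servers are unavailable
}
```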

Inside the server block, there are a few directives.

The listen directive sets the port our load balancer will listen on. We also add the ssl parameter to tell Nginx that the traffic is encrypted with TLS, and http2 to enable the HTTP/2 protocol, which brings performance improvements over plain HTTP/1.
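Note that on newer Nginx versions (1.25.1 and later), the http2 parameter on listen is deprecated in favor of a standalone http2 directive. If you run a recent version, the equivalent configuration is:

```nginx
# Nginx 1.25.1+: enable HTTP/2 with the standalone directive
listen 3000 ssl;
http2 on;
```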

The server_name directive sets the domain name of the load balancer.

The ssl_protocols directive sets which TLS protocol versions are allowed. TLSv1.3 and TLSv1.2 are recommended for maximum security with high compatibility.

The ssl_certificate and ssl_certificate_key directives point to the locations of the SSL certificate and private key.

The proxy_http_version directive sets the HTTP version used for the upstream connection. Version 1.1 is recommended.

The proxy_set_header Connection "" directive clears the Connection header; this is needed when using HTTP version 1.1 for the upstream connection so that connections to the upstream can be kept alive.
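To actually reuse upstream connections, you can also add the keepalive directive to the upstream block; it works together with proxy_http_version 1.1 and the cleared Connection header. The value 32 below is an example, not a tuned number:

```nginx
upstream app_node {
   server 10.1.1.10:4000;
   server 10.1.1.20:4000;
   server 10.1.1.30:4000;
   keepalive 32;   # example: cache up to 32 idle upstream connections per worker
}
```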

Lastly, we use the proxy_pass directive to tell Nginx to pass the traffic to the upstream group.
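Because the load balancer terminates TLS and proxies over plain HTTP, the upstream applications will see the load balancer's IP instead of the client's. If your applications need the real client address, a common (optional) addition to the location block is:

```nginx
location / {
   proxy_http_version 1.1;
   proxy_set_header Connection "";
   proxy_set_header Host $host;
   # Optional: pass the real client address and original scheme to the upstream
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
   proxy_pass http://app_node;
}
```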

So the full nginx.conf configuration looks like this:

user www-data;
worker_processes auto;
worker_rlimit_nofile 8192;
pid /run/nginx.pid;

events {
        worker_connections 4096;
}

http {
   include       mime.types;
   default_type  application/octet-stream;

   # Log Setting
   log_format complete '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for" '
                       'rt=$request_time '
                       'ua="$upstream_addr" us="$upstream_status" '
                       'ut="$upstream_response_time" ul="$upstream_response_length"';
   access_log /var/log/nginx/access.log complete;
   error_log  /var/log/nginx/error.log warn;

   # File Sending Optimization
   sendfile on;
   tcp_nopush on;
   tcp_nodelay on;

   # Keepalive Connection
   keepalive_timeout 65;

   upstream app_node {
      server 10.1.1.10:4000;
      server 10.1.1.20:4000;
      server 10.1.1.30:4000;
   }

   server {
      listen 3000 ssl http2;
      server_name example.com;

      # Security and SSL
      ssl_protocols TLSv1.3 TLSv1.2;
      ssl_certificate /etc/ssl/cert_file.crt;
      ssl_certificate_key /etc/ssl/key_file.key;

      location / {
         # HTTP 1.1
         proxy_http_version 1.1;
         proxy_set_header Connection "";
         proxy_set_header Host $host;
         proxy_pass http://app_node;
      }
   }
}

If your domain is not using TLS/SSL, then you can comment out the ssl_protocols, ssl_certificate, and ssl_certificate_key directives, and also change the listen directive to:

listen 3000;
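Putting that together, the non-TLS server block would look roughly like this (same upstream and headers as above, with the TLS directives commented out):

```nginx
server {
   listen 3000;
   server_name example.com;

   # TLS disabled: these directives are commented out
   # ssl_protocols TLSv1.3 TLSv1.2;
   # ssl_certificate /etc/ssl/cert_file.crt;
   # ssl_certificate_key /etc/ssl/key_file.key;

   location / {
      proxy_http_version 1.1;
      proxy_set_header Connection "";
      proxy_set_header Host $host;
      proxy_pass http://app_node;
   }
}
```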

After the configuration is done, make sure that there is no error in it by running this command (if the configuration is valid, Nginx will report that the syntax is ok and the test is successful):

nginx -t

Reload Nginx so it uses the new configuration file:

service nginx reload

You can check the Nginx status by running this command:

service nginx status

That’s it! Congratulations, now you can build your own load balancer using Nginx.

As a final word, if you build a load balancer for many different applications, I suggest you understand how Nginx serves a request by learning about the Nginx server block and location block. You can read my other tutorials: Understanding Nginx Server Block and Understanding Nginx Location Block.