Tuesday, October 24, 2017

Load balance your web applications with NGINX

In today's world, applications are accessed from a wide variety of devices. Traffic from web applications was already hard to manage in the past; add mobile and IoT to the fray, and the demand on the services and applications enterprises build keeps growing. Our application servers should have the capacity to handle all of this with aplomb. A couple of strategies help this cause:


  1. Reverse Proxy
  2. Load balancing
In the past we used more expensive products like F5, and they are worth every penny. In this article we will see how to get the same done with an open source server like nginx.


Our focus is on the server configuration rather than the application, so I'll use small node applications for our tests. These could just as easily be a plain web application, a J2EE application, a Flask application, or anything else; the process remains the same. For the sake of simplicity and focus, I chose node.

const express = require('express')
const app = express()

app.get('/', function (req, res) {
  res.send('Hello World from server 1!')
})

app.get('/hello/:name', function (req, res) {
  res.send(`Hello ${req.params.name} from server 1!`)
})

app.listen(3001, 'localhost', function () {
  console.log('Example app listening on port 3001!')
})


I use pm2 to run my node applications. I have two versions of the same application: the one above runs on port 3001, and the one below runs on port 3002.

const express = require('express')
const app = express()

app.get('/', function (req, res) {
  res.send('Hello World from server 2!')
})

app.get('/hello/:name', function (req, res) {
  res.send(`Hello ${req.params.name} from server 2!`)
})

app.listen(3002, 'localhost', function () {
  console.log('Example app listening on port 3002!')
})


pm2 status shows the following output

┌────────────┬────┬──────┬───────┬─────────┬─────────┬────────┬─────┬───────────┐
│ App name   │ id │ mode │ pid   │ status  │ restart │ uptime │ cpu │ mem       │
├────────────┼────┼──────┼───────┼─────────┼─────────┼────────┼─────┼───────────┤
│ app        │ 0  │ fork │ 12662 │ online  │ 0       │ 48s    │ 0%  │ 37.0 MB   │
│ hellonode2 │ 4  │ fork │ 12859 │ online  │ 0       │ 11s    │ 0%  │ 37.1 MB   │
└────────────┴────┴──────┴───────┴─────────┴─────────┴────────┴─────┴───────────┘

I presume you have already installed nginx and have it running. The first step is to use nginx as a reverse proxy.

Open /etc/nginx/sites-enabled/default

Look for this configuration

server_name _;

location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
}


Replace it with the following

server_name localhost;
location / {
        proxy_pass "http://localhost:3001";
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
}

location /api1/ {
        proxy_pass "http://127.0.0.1:3001/";
}

location /api2/ {
        proxy_pass "http://127.0.0.1:3002/";
}

As you can see, when a user reaches our server, the default root location proxies to the application running on port 3001.
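The trailing slash on the proxy_pass URL matters: nginx replaces the matched location prefix with the URI part of proxy_pass before forwarding the request upstream. A rough sketch of that rewrite (an illustration, not nginx's actual code):

```javascript
// Sketch (not nginx source) of how nginx rewrites the request path when
// proxy_pass ends with a URI part (here "/"): the matched location
// prefix is replaced by that URI part before the request goes upstream.
function rewritePath(requestPath, locationPrefix, proxyPassUri) {
  if (!requestPath.startsWith(locationPrefix)) return requestPath
  return proxyPassUri + requestPath.slice(locationPrefix.length)
}

// "location /api1/" with proxy_pass "http://127.0.0.1:3001/":
console.log(rewritePath('/api1/hello/Bob', '/api1/', '/')) // → /hello/Bob
```

So a request to /api1/hello/Bob reaches the node application as /hello/Bob, which is exactly the route it defines.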

Save the changes, restart nginx

sudo service nginx restart
 
Open a browser and go to http://localhost
This should serve the 'Hello World' output proxied from the node application (it is a proxy, not a redirect). You can check the same behavior with http://localhost/api1 and http://localhost/api2

This is a nice way to set up a reverse proxy. Note that we are not limited to localhost applications; we can proxy applications on other IPs from here as well. This lets us set up nginx in the production DMZ as the single entry point for all customer access, with nginx in turn pointing to applications in our internal DMZs. Each of those applications can live on one or many servers and run on different ports, and the security team can decide who gets to access what. This way we have an application that is both scalable and secure.
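As a sketch (the path and IP address below are hypothetical, not part of this setup), proxying a path to an application on another host looks the same as proxying to localhost:

```nginx
# Hypothetical example: forward /orders/ to an application server
# in the internal DMZ. Replace the address with your own backend.
location /orders/ {
        proxy_pass "http://10.0.0.5:8080/";
}
```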

The next step is to load balance our application across its two instances running on ports 3001 and 3002.

Open /etc/nginx/sites-enabled/default

Add the following lines to the top of the file, right below the commented line "Default server configuration".

All this configuration does is set up "nodeserver" as a label (an upstream group) for our backend applications. We follow it with a change to our root location.

# Default server configuration
#
upstream nodeserver {
    server localhost:3001 max_fails=0 fail_timeout=10s weight=1;
    server localhost:3002 max_fails=0 fail_timeout=10s weight=1;
}
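The extra parameters on each server line control load sharing and failure handling; roughly, as I understand them:

```nginx
upstream nodeserver {
    # weight=1         : equal share of requests (plain round robin)
    # max_fails=0      : never mark the server as failed based on errors
    # fail_timeout=10s : the window used when counting failures, and how
    #                    long a failed server is considered unavailable
    server localhost:3001 max_fails=0 fail_timeout=10s weight=1;
    server localhost:3002 max_fails=0 fail_timeout=10s weight=1;
}
```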

Next, in the root location block, change the "proxy_pass" value from http://localhost:3001 to http://nodeserver.
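The root location block then looks like this; only the proxy_pass line changes, the rest stays as it was:

```nginx
location / {
        proxy_pass "http://nodeserver";
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
}
```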

Save the file and restart nginx as before. Hit "http://localhost" in the browser a few times and you should see responses from the different applications. That's as easy as it gets. Bear in mind this is a simple Hello World application; as application complexity grows you will build more services, and more nginx servers or other servers can be federated into your environment.
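The alternation you see comes from nginx's default round-robin balancing across the upstream group. Conceptually (this is an illustration, not nginx's actual implementation), with equal weights it behaves like:

```javascript
// Minimal sketch of nginx's default round-robin across an upstream
// group: with equal weights, consecutive requests simply alternate
// between the listed backends.
const upstream = ['localhost:3001', 'localhost:3002']
let next = 0

function pickServer() {
  const server = upstream[next]
  next = (next + 1) % upstream.length
  return server
}

console.log([pickServer(), pickServer(), pickServer()].join(' '))
// → localhost:3001 localhost:3002 localhost:3001
```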

Here's the complete file for your reference

##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
upstream nodeserver {
        server localhost:3001 max_fails=0 fail_timeout=10s weight=1;
        server localhost:3002 max_fails=0 fail_timeout=10s weight=1;
}
server {
        listen 80 default_server;
        listen [::]:80 default_server;

        # SSL configuration
        #
        # listen 443 ssl default_server;
        # listen [::]:443 ssl default_server;
        #
        # Note: You should disable gzip for SSL traffic.
        # See: https://bugs.debian.org/773332
        #
        # Read up on ssl_ciphers to ensure a secure configuration.
        # See: https://bugs.debian.org/765782
        #
        # Self signed certs generated by the ssl-cert package
        # Don't use them in a production server!
        #
        # include snippets/snakeoil.conf;

        root /var/www/html;

        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;

        server_name localhost;

        location / {
                proxy_pass "http://nodeserver";
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
        }

        location /api1/ {
                proxy_pass "http://127.0.0.1:3001/";
        }


        location /api2/ {
                proxy_pass "http://127.0.0.1:3002/";
        }
        # pass PHP scripts to FastCGI server
        #
        #location ~ \.php$ {
        #       include snippets/fastcgi-php.conf;
        #
        #       # With php-fpm (or other unix sockets):
        #       fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        #       # With php-cgi (or other tcp sockets):
        #       fastcgi_pass 127.0.0.1:9000;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #       deny all;
        #}
}


# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#       listen 80;
#       listen [::]:80;
#
#       server_name example.com;
#
#       root /var/www/example.com;
#       index index.html;
#
#       location / {
#               try_files $uri $uri/ =404;
#       }
#}