I wrote an article a while back about load balancing with HAProxy. If you want to do SSL, it lets you, but SSL will terminate on each individual webhead. This works quite well for performance, and it is designed with performance in mind. Unfortunately, there are some cases where you want SSL to terminate on the load balancer (for instance, if you're making use of the X-Forwarded-For header). This article will explain how to set up Nginx as a load balancer with SSL termination. Read on after the jump for the howto.

The first thing you'll need is to make sure that your webheads are configured to listen on the internal interface. They can listen on the external interface as well, but the load balancer is going to communicate with them over the private network.

# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2342/apache2    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      2307/sshd       
tcp6       0      0 :::22                   :::*                    LISTEN      2307/sshd       

0.0.0.0 signifies all available IPs, so we're fine. If your server is set to explicitly listen on its public IP, you'll need to change it to listen either on all interfaces or on the internal one.
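For Apache, the binding is controlled by the Listen directive (in ports.conf or httpd.conf, depending on your distribution). A sketch, with placeholder addresses:

```apache
# Before: bound only to the public interface
#Listen 203.0.113.10:80

# After: either listen on all interfaces...
Listen 80
# ...or only on the internal address the load balancer uses:
#Listen 10.179.73.92:80
```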

Once we have done this, we need to install nginx. The modules we will be using – upstream and proxy – are both part of the core HTTP set, so we don't need to build it in any specific way. You can either grab the source and build it yourself, or just install it from your repository –

 # aptitude install nginx 

Now that we've got this installed, we need to make some configuration changes. First off, go ahead and create a directory to store all your SSL keys in. I'm going to use /etc/nginx/ssl; feel free to use whatever makes sense to you. Open up /etc/nginx/nginx.conf with your editor of choice and find the http section. Comment out the include for sites-enabled and add another include in its place. I'll be using lb.conf – again, feel free to use whatever you want.
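To create the directory and populate it for testing, something like the following works (the self-signed certificate is just a placeholder – for production you'd use a key and certificate issued by your CA, and the CN here is an example):

```shell
# Create the directory this article uses for SSL material
mkdir -p /etc/nginx/ssl

# Generate a throwaway self-signed key/certificate pair for testing
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/nginx/ssl/server.key \
    -out /etc/nginx/ssl/server.crt \
    -subj "/CN=www.example.com"
```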

user www-data;
worker_processes  1;

error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
    # multi_accept on;
}

http {
    include       /etc/nginx/mime.types;

    access_log	/var/log/nginx/access.log;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    tcp_nodelay        on;

    gzip  on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    include /etc/nginx/conf.d/*.conf;
#    include /etc/nginx/sites-enabled/*;
    include /etc/nginx/lb.conf;
}

In your new config file, you’ll need to set up the server and upstream sections. Here is my basic configuration, before we get any further:

upstream backend {
	server 10.179.73.92 max_fails=3 fail_timeout=15s;
	server 10.179.73.148 max_fails=3 fail_timeout=15s;
	server 10.179.73.170 max_fails=3 fail_timeout=15s;
	server 10.179.73.197 max_fails=3 fail_timeout=15s;
}

server {
	listen 80;

	location / {
		proxy_pass http://backend;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	}
}

server {
	listen 443;
	ssl on;
	ssl_certificate /etc/nginx/ssl/server.crt;
	ssl_certificate_key /etc/nginx/ssl/server.key;

	location / {
		proxy_pass http://backend;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	}
}

"upstream" defines a pool of servers and manages them. You can specify the maximum number of times a server can fail before it is pulled out of the rotation (I feel 3 is sufficient, but you may choose to increase this; you might also want to increase the timeouts, which should be fairly straightforward).

The second section is what does most of the work. You need to specify the listen ports; if you have multiple SSL websites that you want to terminate, you will need a server block for each, bound to a specific IP. Additionally, you need to make sure the webheads are set up to work with name-based virtualhosts, as the Host header will be passed to them. In my case, I only have one SSL website, so I simply specify a single listen for 443 and another for 80.
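As a sketch of the multi-site case, a second SSL server block bound to its own IP might look like this (the address, hostname, and certificate names are placeholders; passing the Host header explicitly helps the name-based virtualhosts on the webheads pick the right site):

```nginx
server {
	listen 203.0.113.10:443;
	server_name www.example.com;
	ssl on;
	ssl_certificate /etc/nginx/ssl/example.crt;
	ssl_certificate_key /etc/nginx/ssl/example.key;

	location / {
		proxy_pass http://backend;
		proxy_set_header Host $host;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	}
}
```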

The SSL parts should be fairly straightforward, which leaves the location block. You can do some pretty cool things with this section, but for our purposes we just want to proxy everything, so we use "/". We tell it to proxy over plain HTTP, using the upstream "backend". The last part tells nginx to add the X-Forwarded-For header, which you probably want enabled.
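On the webheads, you can then log the forwarded address instead of the connecting address, which will always be the load balancer's. For Apache, a sketch of such a log format (the format name is arbitrary):

```apache
# Log the real client IP from X-Forwarded-For rather than the
# connecting address (the load balancer's internal IP).
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" combined-forwarded
CustomLog /var/log/apache2/access.log combined-forwarded
```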

If you want session persistence, you have to enable ip_hash in your backend. Simply put "ip_hash;" in your upstream block, above your servers.
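With that change, the upstream block from above becomes:

```nginx
upstream backend {
	ip_hash;    # route each client IP to the same webhead
	server 10.179.73.92 max_fails=3 fail_timeout=15s;
	server 10.179.73.148 max_fails=3 fail_timeout=15s;
	server 10.179.73.170 max_fails=3 fail_timeout=15s;
	server 10.179.73.197 max_fails=3 fail_timeout=15s;
}
```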

More than likely, you'll want to fine-tune your configuration. For that, you might check out the documentation on the proxy and upstream modules at nginx's website.

As a final note, your traffic is sent from the load balancer to the webheads in the CLEAR. While this shouldn't be a big problem in most cases (the traffic can't be sniffed unless it's broadcast for some reason), it's something to keep in mind. The most notable case where this might be a problem is on a shared network that you can't control, where the biggest risk is a MITM attack. If you're concerned about this, you can set up arptables to prevent ARP poisoning, but your best bet is to just ask your provider. If they don't have any kind of ARP poisoning countermeasures in place, you might consider a host that treats your security as a higher priority.
