Nginx Reverse Proxy With SSL Pass-Through Load Balancing

Introduction

In this tutorial, you will learn how to use NGINX as a reverse proxy and load balancer to distribute incoming traffic across multiple servers (nodes, in Docker's jargon) by utilizing Docker's Swarm mode. Docker Swarm is a container orchestrator embedded in Docker Engine, responsible for automated container deployment, horizontal scaling, and management.

Prerequisites

  1. Having already followed the tutorial on dockerizing a WordPress site (on Ubuntu).
  2. Root access to your server, or a non-root user with sudo privileges.
  3. Docker and Docker Compose already installed on the host machine.

Step 1 – Initializing Docker Swarm

Before initializing the Swarm, make sure that Docker Engine is installed both on the host machine on which you want to create the Swarm and on the machine(s) that will be used as worker node(s). For the purposes of this tutorial, we will create a two-node swarm: the first machine will be our manager node, and the second our worker node.

Cluster overview

  1. Manager node: IP address 192.168.1.3, 1 core, 1024 MB RAM.
  2. Worker node: IP address 192.168.1.4, 1 core, 1024 MB RAM.

After you verify that Docker is installed on both machines, initialize the Swarm on the machine you want to use as the manager, advertising that machine's own IP address:

docker swarm init --advertise-addr 192.168.1.3

Output:

Swarm initialized: current node (twn8ondlmhdz2969fai5yw5jv) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3v6r2adgmvhx5np4a559frddp729ygujofjtbm7zqfwjpd5gdg-97mklyqpb8vg5q7dvenh9m9m2 192.168.1.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The machine that we initialize the Swarm on becomes the manager (Leader), and as you can see, Docker informs us of this fact. Docker also prints the command that needs to be executed on the second node in order for it to join the Swarm, thereby creating the cluster.
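
The join token is not shown only once; if you ever lose this output, you can regenerate the worker join command at any time from the manager:

docker swarm join-token worker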

Step 2 – Add nodes to the swarm

Now let's open an SSH connection to the second machine and execute the join command Docker gave us:

docker swarm join --token SWMTKN-1-3v6r2adgmvhx5np4a559frddp729ygujofjtbm7zqfwjpd5gdg-97mklyqpb8vg5q7dvenh9m9m2 192.168.1.3:2377

Output:

This node joined a swarm as a worker.

Great, our cluster is now created. If you SSH to the manager node, you can verify it like so:

docker node ls

Output:

ID                           HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ql9xkkbg95lihpge1nd1zla1j *   node01.admintuts    Ready               Active              Leader              19.03.5
sen358akhxhlpwozkr1jeiju6     node02.admintuts    Ready               Active                                  19.03.5

Of course, you can run this command on as many machines as you like; all of them will join the Swarm as worker nodes, scaling your cluster even further.
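
For fault tolerance, production clusters usually run more than one manager. You can promote a worker from the manager node later on; the node name below is the one reported by docker node ls:

docker node promote node02.admintuts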

Step 3 – Setting Up Nginx As A TCP Load Balancer

From a security standpoint, setting up a load balancer in front of your application (in our case, containers) is good practice. Don't worry, it's not rocket science, as long as you understand how proxying requests to backend servers works. What a load balancer does is send requests to backend servers according to predefined rules. In Nginx, these rules are defined inside the configuration file, which we will create shortly.

When server admins consider using a load balancer, they face a common question: should SSL termination take place at the load balancer itself, or at the backend server? That clearly depends on how many applications (or websites) we plan to host on these servers. If you plan to host only one website, SSL termination can take place on the load balancer. If not, termination should be handled by the backend servers, and that is the approach we are going to use here.

Load Balancing Theoretical Approach

Let's examine things a little more in depth. What we want here is a secure route from the client's browser, passing through the load balancer, all the way down to the backend server. To achieve this in Nginx, we are going to use the http block of the config file, which takes care of the non-SSL requests, and the stream block, which takes care of the SSL requests. The stream block never decrypts the traffic; it passes the encrypted TCP connections straight through, allowing the backend servers to terminate incoming connections while still being load balanced.

Edit the load balancer config file and make it look like the one below, replacing the IPs with your actual servers' IP addresses:

user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
load_module /etc/nginx/modules/ngx_http_geoip2_module.so; # GeoIP2
events {
    worker_connections  1024;
}
http {
  upstream admintuts-http {
    server 192.168.1.3:7080;
    server 192.168.1.4:7080;
  }
  server {
    listen 80; # the plain-HTTP entry point of the load balancer
    location / {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Host $http_host;
      proxy_set_header X-Forwarded-Host $server_name;
      proxy_set_header Connection "";
      add_header       X-Upstream $upstream_addr;
      proxy_redirect     off;
      proxy_connect_timeout  300;
      proxy_http_version 1.1;
      proxy_buffers 16 16k;
      proxy_buffer_size 16k;
      proxy_cache_background_update on;
      proxy_pass http://admintuts-http$request_uri;
    }
  }
}
stream {
  upstream admintuts-https {
    server 192.168.1.3:7443;
    server 192.168.1.4:7443;
  }

  log_format proxy '$protocol $status $bytes_sent $bytes_received $session_time';
  access_log  /var/log/nginx/access.log proxy;
  error_log /var/log/nginx/error.log debug;
  server {
    proxy_protocol on;
    tcp_nodelay on;
    listen 443;
    proxy_pass admintuts-https;
  }
}

Keep in mind that we are not SSHing into any server. We are just editing config files that will be bind-mounted into the containers.
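
It still pays to syntax-check the file before a container picks it up. One way is to mount it read-only into a throwaway container built from the same image the proxy service uses later in this tutorial (the stock nginx image would fail here, because the config loads the GeoIP2 module); this assumes the file is saved at ./nginx-conf/proxy/nginx.conf, matching the compose file shown later:

docker run --rm \
  -v $(pwd)/nginx-conf/proxy/nginx.conf:/etc/nginx/nginx.conf:ro \
  admintuts/nginx:1.17.6-rtmp-geoip2-alpine nginx -t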

Step 4 – Configure The Backend Servers

Now, let's configure the webserver config files. There are two of them: nginx.conf, and the actual server config inside the sites-enabled folder.

nginx.conf:

user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
load_module /etc/nginx/modules/ngx_http_geoip2_module.so; # GeoIP2 Can be removed.
load_module /etc/nginx/modules/ngx_rtmp_module.so; # Nginx RTMP module. Can be removed.
events {
    worker_connections  1024;
}
http {
    variables_hash_bucket_size 64;
    variables_hash_max_size 2048;
    server_tokens off;
    sendfile    on;
    tcp_nopush  on;
    tcp_nodelay on;
    autoindex off;
    keepalive_timeout  30;
    types_hash_bucket_size 256;
    client_max_body_size 100m;
    server_names_hash_bucket_size 256;
    include         mime.types;
    default_type    application/octet-stream;
    index  index.php index.html index.htm;
    # GeoIP2
    log_format  main    'Proxy Protocol Address: [$proxy_protocol_addr] '
                        '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

    # GeoIP2
    log_format  main_geo    'Original Client Address: [$realip_remote_addr] - Proxy Protocol Address: [$proxy_protocol_addr] '
                            'Proxy Protocol Server Address: $proxy_protocol_server_addr - '
                            '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '$geoip2_data_country_iso $geoip2_data_country_name';

    access_log  /var/log/nginx/access.log  main_geo; # GeoIP2
#===================== GEOIP2 =====================#
    geoip2 /usr/share/geoip/GeoLite2-Country.mmdb {
        $geoip2_metadata_country_build  metadata build_epoch;
        $geoip2_data_country_geonameid  country geoname_id;
        $geoip2_data_country_iso        country iso_code;
        $geoip2_data_country_name       country names en;
        $geoip2_data_country_is_eu      country is_in_european_union;
    }
    #geoip2 /usr/share/geoip/GeoLite2-City.mmdb {
    #   $geoip2_data_city_name city names en;
    #   $geoip2_data_city_geonameid city geoname_id;
    #   $geoip2_data_continent_code continent code;
    #   $geoip2_data_continent_geonameid continent geoname_id;
    #   $geoip2_data_continent_name continent names en;
    #   $geoip2_data_location_accuracyradius location accuracy_radius;
    #   $geoip2_data_location_latitude location latitude;
    #   $geoip2_data_location_longitude location longitude;
    #   $geoip2_data_location_metrocode location metro_code;
    #   $geoip2_data_location_timezone location time_zone;
    #   $geoip2_data_postal_code postal code;
    #   $geoip2_data_rcountry_geonameid registered_country geoname_id;
    #   $geoip2_data_rcountry_iso registered_country iso_code;
    #   $geoip2_data_rcountry_name registered_country names en;
    #   $geoip2_data_rcountry_is_eu registered_country is_in_european_union;
    #   $geoip2_data_region_geonameid subdivisions 0 geoname_id;
    #   $geoip2_data_region_iso subdivisions 0 iso_code;
    #   $geoip2_data_region_name subdivisions 0 names en;
   #}

#=================Basic Compression=================#
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/css text/xml text/plain application/javascript image/jpeg image/png image/gif image/x-icon image/svg+xml image/webp application/font-woff application/json application/vnd.ms-fontobject application/vnd.ms-powerpoint;
    gzip_static on;
    
    include /etc/nginx/sites-enabled/admintuts-https.conf;
}

admintuts-https.conf:

server {
    real_ip_header proxy_protocol;
    set_real_ip_from proxy;
    server_name 192.168.1.3; #Your current server ip address. It will redirect to the domain name.
    listen 80;
    listen 443 ssl http2;
    listen [::]:80;
    listen [::]:443 ssl http2;
    ssl_certificate     /etc/nginx/certs/admintuts.crt;
    ssl_certificate_key /etc/nginx/certs/admintuts.key;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    return 301 https://admintuts.ltd$request_uri;
}
server {
    real_ip_header proxy_protocol;
    set_real_ip_from proxy;
    server_name www.admintuts.ltd;
    listen 80;
    listen 443 ssl http2;
    listen [::]:80;
    listen [::]:443 ssl http2;
    ssl_certificate     /etc/nginx/certs/admintuts.crt;
    ssl_certificate_key /etc/nginx/certs/admintuts.key;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    return 301 https://admintuts.ltd$request_uri;
}
server {
    real_ip_header proxy_protocol;
    set_real_ip_from proxy;
    server_name admintuts.ltd;
    listen 443 ssl http2 proxy_protocol;
    listen [::]:443 ssl http2 proxy_protocol;
    root /var/www/html;
    charset UTF-8;
    add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Referrer-Policy no-referrer;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;
    keepalive_timeout   70;
    ssl_buffer_size 1400;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    #ssl_stapling on;
    #ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=86400;
    resolver_timeout 10;
    ssl_certificate     /etc/nginx/certs/admintuts.crt;
    ssl_certificate_key /etc/nginx/certs/admintuts.key;
    ssl_trusted_certificate /etc/nginx/certs/admintuts.crt;
    location ~* \.(jpg|jpe?g|gif|png|ico|cur|gz|svgz|mp4|ogg|ogv|webm|htc|css|js|otf|eot|svg|ttf|woff|woff2)(\?ver=[0-9.]+)?$ {
        expires modified 1M;
        add_header Access-Control-Allow-Origin '*';
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        access_log off;
    }
    #access_log  logs/host.access.log  main;
    location ~ /.well-known {
        allow all;
    }
    location / {
        index index.php;
        try_files $uri $uri/ /index.php?$args;
    }
    error_page  404    /404.php;
    location /wp-config.php {
        deny all;
    }
    # Pass the PHP scripts to the FastCGI server listening on port 9000 of the wordpress container
    location ~ \.php$ {
        try_files       $uri =404;
        fastcgi_index   index.php;
        fastcgi_pass    wordpress:9000;
        fastcgi_pass_request_headers on;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param   SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        fastcgi_ignore_client_abort off;
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_request_buffering on;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        include fastcgi_params;
    }
    location = /robots.txt {
        access_log off;
        log_not_found off;
    }
    location ~ /\. {
        deny  all;
        access_log off;
        log_not_found off;
    }
}
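
Note that you cannot syntax-check this config outside the swarm: the hostnames proxy and wordpress only resolve on the overlay network. Once the stack from Step 5 is running, you can test it in place inside one of the webserver replicas:

docker exec $(docker ps -q -f name=myproject_webserver | head -n1) nginx -t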

Before moving forward, let’s talk a bit about the Proxy Protocol.

How To Enable The Proxy Protocol Correctly

There is a lot of confusion about using the PROXY protocol correctly, and even more about enabling it without getting "broken header" errors. I find it amazing that the actual documentation from nginx.com is flat-out wrong. The only way to enable it correctly, and as it turns out the simplest one, is this:

  1. Setting proxy_protocol on; in the stream server block of the load balancer.
  2. Adding real_ip_header proxy_protocol; and set_real_ip_from proxy; to the backend server config file.
  3. Adding proxy_protocol to the listen directive of the backend's SSL server block.

Now you may be wondering what this "proxy" in the set_real_ip_from directive means. It is the proxy service's hostname, which, thanks to Docker's internal DNS mechanism, resolves to an IP address (using a hostname in set_real_ip_from requires nginx 1.13.1 or newer). This way, we can guarantee that even when we restart the load balancer, the backend servers will still trust the correct address. Lastly, it's worth mentioning that the PROXY protocol was designed to chain reverse proxies without losing the client information.
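
You can verify the whole chain end to end once the stack is up: the main_geo log format above prints $proxy_protocol_addr first, so if everything is wired correctly, the access log on a webserver replica should show your real client address rather than the load balancer's. Assuming the stack name from Step 5:

docker exec $(docker ps -q -f name=myproject_webserver | head -n1) \
  tail -f /var/log/nginx/access.log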

Load Balancing Testing

Let's make some curl requests to the nodes to make sure that load balancing works as it should:

nikolas@node01:~$ curl -I node01.admintuts.ltd
HTTP/1.1 301 Moved Permanently
Server: nginx/1.17.6
Date: Tue, 03 Dec 2019 01:55:42 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: https://admintuts.ltd/
X-Upstream: 192.168.1.3:7080

nikolas@node01:~$ curl -I node01.admintuts.ltd
HTTP/1.1 301 Moved Permanently
Server: nginx/1.17.6
Date: Tue, 03 Dec 2019 01:55:42 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: https://admintuts.ltd/
X-Upstream: 192.168.1.4:7080
nikolas@node02:~$ curl -I node02.admintuts.ltd
HTTP/1.1 301 Moved Permanently
Server: nginx/1.17.6
Date: Tue, 03 Dec 2019 02:00:18 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: https://admintuts.ltd/
X-Upstream: 192.168.1.4:7080

nikolas@node02:~$ curl -I node02.admintuts.ltd
HTTP/1.1 301 Moved Permanently
Server: nginx/1.17.6
Date: Tue, 03 Dec 2019 02:00:20 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: https://admintuts.ltd/
X-Upstream: 192.168.1.3:7080

As you can see from the X-Upstream header, our requests are being load balanced correctly (in round-robin mode) between our two nodes. So far so good 🙂

Step 5 – Deploy Services To The Swarm

Now it is time to put our cluster to work. We will use a docker-compose file to deploy a list of services. Following up on the previous tutorial on dockerizing a WordPress installation, we will make some changes to the compose file to reflect the swarm environment. This is the docker-compose-swarm.yaml file we need:

version: '3.3'

services:
  db:
    image: mariadb:latest
    hostname: db
    env_file: variables/mysql.env
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 5
        window: 30s
      mode: global
      endpoint_mode: dnsrr
    volumes:
      - ./db-data:/var/lib/mysql
    command: mysqld --max_allowed_packet=128M --character-set-server=utf8 --collation-server=utf8_unicode_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
    networks:
      - app-net

  wordpress:
    image: admintuts/wordpress:php7.4-fpm-redis-alpine
    hostname: wordpress
    depends_on:
        - db
    env_file: variables/wordpress.env
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 5
        window: 30s
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr    
    volumes:
      - ./wordpress-data:/var/www/html
      - ./php-conf/php.ini:/usr/local/etc/php/php.ini
    networks:
      - app-net

  proxy:
    image: admintuts/nginx:1.17.6-rtmp-geoip2-alpine
    hostname: proxy
    depends_on:
      - wordpress    
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 5
        window: 30s
      mode: replicated
      replicas: 2
      endpoint_mode: vip    
    volumes:
      - ./nginx-conf/proxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
      - "443:443"
    networks:
      - app-net

  webserver:
    image: admintuts/nginx:1.17.6-rtmp-geoip2-alpine
    hostname: webserver
    depends_on:
      - proxy
    deploy:
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 5
        window: 30s
      mode: replicated
      replicas: 2
      endpoint_mode: vip
    volumes:
      - ./wordpress-data:/var/www/html
      - ./nginx-conf/myproject/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx-conf/myproject/sites-enabled:/etc/nginx/sites-enabled
      - ./nginx-conf/myproject/certs:/etc/nginx/certs
      - ./nginx-conf/myproject/fastcgi_config:/etc/nginx/fastcgi_config
      - ./nginx-conf/myproject/ssl/dhparam.pem:/etc/nginx/ssl/dhparam.pem
    ports:
      - "7080:80"
      - "7443:443"
    networks:
      - app-net

  redis:
    image: redis:5.0.7-alpine
    depends_on:
      - webserver
    hostname: redis
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 5
        window: 30s
      mode: global
      endpoint_mode: dnsrr
    volumes:
      - ./redis/cache/:/data
    command: redis-server --bind redis --requirepass some_very_long_password --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes
    networks:
      - app-net

volumes:
  db-data:
  wordpress-data:
  nginx-conf:
  php-conf:
  redis:

networks:
  app-net:
    driver: overlay
    driver_opts:
      encrypted: "true"

We are now ready to deploy our services. Issue the following command on the manager node (stack deployments must run against a manager):

docker stack deploy -c docker-compose-swarm.yaml myproject

Output:

Creating service myproject_db
Creating service myproject_wordpress
Creating service myproject_proxy
Creating service myproject_webserver
Creating service myproject_redis

Let’s verify that all our services are scheduled as expected across our nodes. Issue the command:

docker service ps myproject_webserver myproject_redis myproject_db myproject_wordpress myproject_proxy

Output:

ID                  NAME                                        IMAGE                                         NODE                   DESIRED STATE       CURRENT STATE
qojecxhf5815        myproject_redis.swp1vsugph61t45en0xreuxwi   redis:5.0.7-alpine                            node01.admintuts.ltd   Running             Running 5 minutes ago
v4mujxau3j19        myproject_redis.qz2o99osvoc1g76l3a64c45v9   redis:5.0.7-alpine                            node02.admintuts.ltd   Running             Running 5 minutes ago
v8y234n0an4t        myproject_db.swp1vsugph61t45en0xreuxwi      mariadb:latest                                node01.admintuts.ltd   Running             Running 5 minutes ago
z75reykyukd8        myproject_db.qz2o99osvoc1g76l3a64c45v9      mariadb:latest                                node02.admintuts.ltd   Running             Running 5 minutes ago
6d70cfe406qi        myproject_webserver.1                       admintuts/nginx:1.17.6-rtmp-geoip2-alpine     node02.admintuts.ltd   Running             Running 5 minutes ago
yzdfgr0ev55u        myproject_webserver.2                       admintuts/nginx:1.17.6-rtmp-geoip2-alpine     node01.admintuts.ltd   Running             Running 5 minutes ago
q4bzk7l52yoz        myproject_proxy.1                           admintuts/nginx:1.17.6-rtmp-geoip2-alpine     node01.admintuts.ltd   Running             Running 5 minutes ago
x0suw8xli8kr        myproject_proxy.2                           admintuts/nginx:1.17.6-rtmp-geoip2-alpine     node02.admintuts.ltd   Running             Running 5 minutes ago
d08y87uwd6hr        myproject_wordpress.1                       admintuts/wordpress:php7.4-fpm-redis-alpine   node02.admintuts.ltd   Running             Running 5 minutes ago
ary9r4cv01ia        myproject_wordpress.2                       admintuts/wordpress:php7.4-fpm-redis-alpine   node01.admintuts.ltd   Running             Running 5 minutes ago

All work as expected 😉
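
Since wordpress and webserver are replicated services, you can also scale them at any time without touching the compose file, and the swarm will spread the new tasks across the nodes:

docker service scale myproject_webserver=4 myproject_wordpress=4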

There are some key differences between running Docker in normal mode and in Swarm mode:

  1. container_name is not supported in Swarm mode.
  2. The stack concept and the deploy section are added.
  3. The deploy section consists of a restart policy, mode, replicas, and endpoint_mode.
  4. restart is replaced by restart_policy, which is part of the deploy section.
  5. The network driver changes from "bridge" to "overlay".
  6. Publishing ports only applies to services assigned endpoint_mode: vip.

endpoint_mode is a service discovery method for clients connecting to the swarm. There are two modes (a quick way to see the difference follows the list):

  • endpoint_mode: vip – Docker assigns the service a single Virtual IP (VIP) that acts as the front end for clients reaching the service on the swarm network.
  • endpoint_mode: dnsrr – DNS round-robin (DNSRR) service discovery does not use a single virtual IP; instead, Docker internally returns a list of IP addresses, and the client connects directly to one of them.
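
You can observe the difference from inside any container attached to the app-net overlay network. A quick sketch, assuming the service names from the compose file above (nslookup is provided by BusyBox in the Alpine-based images):

# open a shell in one of the webserver replicas
docker exec -it $(docker ps -q -f name=myproject_webserver | head -n1) sh
# then, inside the container:
nslookup proxy      # endpoint_mode: vip  - resolves to a single virtual IP
nslookup wordpress  # endpoint_mode: dnsrr - resolves to one IP per task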

Bonus: Configure Nginx Reverse Proxy SSL Passthrough SNI For Docker

Here is the scenario.

You have some docker-compose services up and running, or perhaps you wish to run two or more websites on the same server (technically, under the same IP address). The configuration file below will act as a Layer 7 reverse proxy (HTTP) and as a Layer 4 reverse proxy (HTTPS) at the same time.

Examining it, you will understand the internal working mechanics.

worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}
http {
  upstream app1_http {
    server app1_container_hostname:3080;
  }
  upstream app2_http {
    server app2_container_hostname:3080;
  }
  server {
    server_name example1.com;
    listen 80;
    listen [::]:80;
    location / {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Host $http_host;
      proxy_set_header X-Forwarded-Host $server_name;
      proxy_set_header Connection "";
      #add_header       X-Upstream $upstream_addr;
      proxy_redirect     off;
      proxy_connect_timeout  300;
      proxy_http_version 1.1;
      proxy_buffers 16 16k;
      proxy_buffer_size 64k;
      proxy_cache_background_update on;
      proxy_pass http://app1_http$request_uri;
    }
  }
  server {
    server_name example2.com;
    listen 80;
    listen [::]:80;
    location / {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Host $http_host;
      proxy_set_header X-Forwarded-Host $server_name;
      proxy_set_header Connection "";
      #add_header       X-Upstream $upstream_addr;
      proxy_redirect     off;
      proxy_connect_timeout  300;
      proxy_http_version 1.1;
      proxy_buffers 16 16k;
      proxy_buffer_size 16k;
      proxy_cache_background_update on;
      proxy_pass http://app2_http$request_uri;
    }
  }
}
stream {
  map $ssl_preread_server_name $domain {
    example1.com  app1_https;
    example2.com  app2_https;
    default       app1_https;
  }
  upstream app1_https {
    server app1_container_hostname:3443;
  }
  upstream app2_https {
    server app2_container_hostname:3443;
  }

  log_format proxy '$protocol $status $bytes_sent $bytes_received $session_time';
  access_log  /var/log/nginx/access.log proxy;
  error_log /var/log/nginx/error.log debug;

  server {
    proxy_ssl_server_name on;
    proxy_ssl_session_reuse off;
    ssl_preread on;
    proxy_protocol on;
    tcp_nodelay on;
    listen 443;
    proxy_pass $domain;
  }
}

The configuration file above also works for normal (non-Docker) backend servers. You only have to replace:

  1. server app1_container_hostname:3080; with server backend1:3080; (in the http section)
  2. server app2_container_hostname:3080; with server backend2:3080; (in the http section)
  3. server app1_container_hostname:3443; with server backend1:3443; (in the stream section)
  4. server app2_container_hostname:3443; with server backend2:3443; (in the stream section)
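
To confirm that the SNI routing works, set the SNI name explicitly with openssl. Both requests below hit the same proxy address (assumed here to be 192.168.1.3), yet each should be answered with the certificate of the matching backend:

openssl s_client -connect 192.168.1.3:443 -servername example1.com </dev/null 2>/dev/null | grep subject
openssl s_client -connect 192.168.1.3:443 -servername example2.com </dev/null 2>/dev/null | grep subject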

Kubernetes Nginx-Controller LoadBalancing

In case some of you are using Kubernetes, the ConfigMap below is quite useful. It enables a strong, modern cipher suite with TLSv1.3 support.

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  client-max-body-size: "500M"
  client_header_timeout: "30s"
  proxy-connect-timeout: "30s"
  proxy-read-timeout: "360s"
  proxy-send-timeout: "360s"
  proxy-stream-timeout: "60s"
  server_tokens: "off" 
  lb-method: "round_robin"
  ssl-ciphers: "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
  ssl-protocols: "TLSv1.2 TLSv1.3"
  enable-real-ip: "true"
  use-proxy-protocol: "true"
  real-ip-header: "proxy_protocol"
  # The values below are specific to the cluster's location. Please refer to https://console.cloud.google.com/networking/networks/list to get the values from there
  set-real-ip-from: "10.164.0.0/20"
  #set-real-ip-from: "10.0.0.0/20"
  #set-real-ip-from: "10.124.0.0/14"
  #real_ip_recursive: "on"
  ssl-prefer-server-ciphers: "true"
  redirect-to-https: "true"
  hsts: "true"
  http2: "true"
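
Assuming you save the manifest as nginx-config.yaml, apply it and confirm the controller picked up the values:

kubectl apply -f nginx-config.yaml
kubectl describe configmap nginx-config -n nginx-ingress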