MeshWorld.

How to Set Up a Reverse Proxy with Nginx

By Jena

A reverse proxy sits in front of your application and forwards requests to it. Your app runs on a local port that’s not exposed to the internet; Nginx handles everything external — HTTPS termination, security headers, static file serving, and load balancing. This setup is the backbone of almost every production web server.

:::note[TL;DR]

  • Install Nginx, configure a server block in /etc/nginx/sites-available/
  • Point proxy_pass at your app’s local port (e.g., http://127.0.0.1:3000)
  • Get an SSL certificate with Certbot (sudo certbot --nginx -d yourdomain.com)
  • Add security headers and test with nginx -t before every reload
  • WebSocket and SSE each need specific additional headers
:::

Prerequisites

  • A Linux server (Ubuntu 22.04/24.04 or Debian 12 used here)
  • A domain name pointing at your server’s IP
  • An app already running on a local port (Node.js, Python, Go, whatever — it just needs to serve HTTP)
  • Root or sudo access

How do you install Nginx?

sudo apt update
sudo apt install nginx -y

# Verify it's running
sudo systemctl status nginx

If nginx -v prints a version and systemctl shows the service as active (running), you’re good. The default Nginx welcome page should now be visible at your server’s IP address.


How do you create your first server block?

Nginx on Debian/Ubuntu uses a “sites-available / sites-enabled” split. You write configs in sites-available, then symlink them into sites-enabled to activate.
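The split works because the stock nginx.conf on Debian/Ubuntu pulls in the enabled directory at the end of its http {} block, so anything symlinked into sites-enabled becomes live config on the next reload:

```nginx
# From the http {} block of /etc/nginx/nginx.conf (Debian/Ubuntu packaging)
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
```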

Create a file for your site:

sudo nano /etc/nginx/sites-available/yourdomain.com

Paste this minimal config — replace yourdomain.com and the port 3000 with your values:

server {
    listen 80;
    listen [::]:80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass         http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}

Enable the site and test:

sudo ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/

# Always test before reload
sudo nginx -t

sudo systemctl reload nginx

Your app should now be reachable at http://yourdomain.com. Nginx forwards the request to port 3000 on localhost and sends the response back.


How do you add HTTPS with Let’s Encrypt?

Install Certbot and the Nginx plugin:

sudo apt install certbot python3-certbot-nginx -y

Get and install the certificate. Certbot modifies your Nginx config in place:

sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

Certbot will:

  1. Verify domain ownership by placing a challenge file Nginx serves
  2. Issue the certificate and store it in /etc/letsencrypt/live/yourdomain.com/
  3. Add SSL directives and an HTTP-to-HTTPS redirect to your server block
  4. Configure auto-renewal via systemd timer

Test auto-renewal:

sudo certbot renew --dry-run

After Certbot runs, always re-check the config:

sudo nginx -t && sudo systemctl reload nginx
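After Certbot finishes, the original port-80 server block is typically reduced to a redirect. Certbot’s exact edits vary by version, but the result is functionally equivalent to this sketch:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name yourdomain.com www.yourdomain.com;

    # All plain-HTTP traffic gets a permanent redirect to HTTPS
    return 301 https://$host$request_uri;
}
```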

How do you harden the config with security headers?

A working reverse proxy is not a secure one. These headers cost nothing to add:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    # Note: the standalone "http2 on;" directive requires Nginx 1.25.1+;
    # the listen flag works on the distro versions listed above
    server_name yourdomain.com www.yourdomain.com;

    # Certificates (Certbot fills these in)
    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_stapling        on;
    ssl_stapling_verify on;

    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options "DENY" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    server_tokens off;

    # Proxy
    location / {
        proxy_pass         http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_read_timeout 60s;
    }

    # Block dotfiles (.git, .env, .htaccess)
    location ~ /\. {
        deny all;
    }
}

server_tokens off removes the Nginx version from error pages and the Server: header, which is a free reduction in attack surface.


How do you serve static files from Nginx directly?

If your app has assets (images, CSS, JS), letting Nginx serve them directly is faster than proxying to your app. Nginx can saturate a network link serving static files; your app can’t.

server {
    ...

    # Serve static files directly
    location /static/ {
        root /var/www/yourdomain.com;
        expires 1y;
        add_header Cache-Control "public, immutable";
        gzip_static on;
    }

    # Everything else goes to the app
    location / {
        proxy_pass http://127.0.0.1:3000;
        ...
    }
}

With root /var/www/yourdomain.com, Nginx appends the full request URI to the root path, so a request for /static/logo.png is served from /var/www/yourdomain.com/static/logo.png.
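A related gotcha: root appends the full request URI, while alias replaces the matched location prefix. If your files live in a directory that doesn’t itself contain a static/ subfolder, alias is the directive you want (sketch; /var/www/assets is a hypothetical path):

```nginx
# A request for /static/logo.png is served from /var/www/assets/logo.png
location /static/ {
    alias /var/www/assets/;   # the trailing slash matters with alias
}
```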


How do you support WebSocket connections?

WebSocket connections start as HTTP requests that get “upgraded.” Without two specific headers, the upgrade fails and the WebSocket handshake is rejected.

location /ws/ {
    proxy_pass         http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header   Upgrade    $http_upgrade;
    proxy_set_header   Connection "upgrade";
    proxy_set_header   Host       $host;
    proxy_read_timeout 3600s;
}

The 3600s timeout keeps long-lived WebSocket connections open. Without it, Nginx closes the connection after 60 seconds without data — the default proxy_read_timeout.
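If the same server also handles ordinary HTTP requests, the Nginx documentation suggests deriving the Connection header from $http_upgrade with a map, so that non-WebSocket requests get Connection: close instead of a stray upgrade:

```nginx
# In the http {} block (e.g., /etc/nginx/nginx.conf)
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

With the map in place, use proxy_set_header Connection $connection_upgrade; in the location block instead of the hard-coded "upgrade".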

For Server-Sent Events (SSE), you need buffering off:

location /events {
    proxy_pass         http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header   Connection "";
    proxy_buffering    off;
    proxy_cache        off;
    proxy_read_timeout 24h;
}

How do you add rate limiting?

Rate limiting prevents a single IP from hammering your API. Define the zone in the http {} block (in /etc/nginx/nginx.conf), then apply it to locations.

In /etc/nginx/nginx.conf, inside the http {} block:

limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_status 429;

In your site config:

location /api/ {
    limit_req zone=api burst=20 nodelay;
    proxy_pass http://127.0.0.1:3000;
    ...
}

This allows 10 requests per second per IP with a burst of 20. The nodelay flag serves burst requests immediately rather than spacing them out.
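The burst arithmetic is worth a back-of-envelope check. Assuming the bucket is empty when a spike arrives, an instantaneous spike with nodelay passes at most burst + 1 requests — one for the current rate slot plus the 20-deep burst queue — and everything beyond that gets a 429:

```shell
# Back-of-envelope for rate=10r/s, burst=20, nodelay:
# an instantaneous spike of `spike` requests passes at most burst + 1
spike=35
burst=20
allowed=$(( spike < burst + 1 ? spike : burst + 1 ))
rejected=$(( spike - allowed ))
echo "allowed=$allowed rejected=$rejected"   # → allowed=21 rejected=14
```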


How do you proxy multiple apps on the same server?

Use separate server blocks and separate upstream ports. Each app runs on a different port; Nginx routes by domain name.

# App 1 — Node.js on port 3000
server {
    listen 443 ssl;
    server_name app1.example.com;
    ssl_certificate /etc/letsencrypt/live/app1.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app1.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        include /etc/nginx/proxy_params;
    }
}

# App 2 — Python on port 8000
server {
    listen 443 ssl;
    server_name app2.example.com;
    ssl_certificate /etc/letsencrypt/live/app2.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app2.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;
        include /etc/nginx/proxy_params;
    }
}

Create /etc/nginx/proxy_params to avoid repeating the same headers everywhere:

proxy_http_version 1.1;
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

Checklist before going live

  • sudo nginx -t passes with no errors
  • HTTP redirects to HTTPS
  • Certificate is valid (curl -I https://yourdomain.com)
  • Security headers are present (curl -I https://yourdomain.com | grep -i strict)
  • App is not listening on a public port (check with ss -tlnp | grep 3000)
  • Certbot auto-renewal is configured (sudo certbot renew --dry-run)
  • server_tokens off is set
  • .env and dotfiles are blocked
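Most of the header checks above can be scripted. This hypothetical helper reads the output of curl -sI from stdin and reports any missing security header (the function name and header list are my own choices, not part of Nginx):

```shell
# Report security headers missing from `curl -sI` output read on stdin
check_headers() {
  headers="$(cat)"
  for h in strict-transport-security x-content-type-options \
           x-frame-options referrer-policy; do
    if ! printf '%s\n' "$headers" | grep -qi "^$h:"; then
      echo "MISSING: $h"
    fi
  done
}

# Offline demo with only two of the four headers present:
printf 'Strict-Transport-Security: max-age=31536000\nX-Content-Type-Options: nosniff\n' | check_headers
```

Against a live site: curl -sI https://yourdomain.com | check_headers — no output means all four headers are present.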

Summary

  • Nginx acts as the entry point, handling HTTPS and forwarding requests to your app’s local port
  • proxy_pass with the four standard proxy_set_header directives covers 95% of use cases
  • Certbot handles certificate issuance and auto-renewal; --nginx plugin patches your config in place
  • WebSocket requires Upgrade + Connection "upgrade" headers; SSE requires proxy_buffering off
  • Multiple apps on one server = multiple server blocks, each matched by server_name

FAQ

Does my app need to know it’s behind a proxy?

For most apps, yes — it needs to trust the X-Forwarded-Proto header to serve HTTPS links correctly instead of HTTP ones. In Express.js: app.set('trust proxy', 1). In Django: USE_X_FORWARDED_HOST = True and SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https'). Without this, your app may generate links that reference http:// even when the request came in over HTTPS.

My app is not on localhost — can I proxy to another server?

Yes. proxy_pass accepts any HTTP URL: proxy_pass http://192.168.1.50:3000; or proxy_pass http://internal-app.example.com;. If the backend server uses HTTPS, use proxy_pass https://...; and consider setting proxy_ssl_verify on; with a trusted CA certificate.
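For an HTTPS backend, a sketch of the relevant directives (the backend hostname follows the example above; the CA bundle path is the Debian/Ubuntu default):

```nginx
location / {
    proxy_pass https://internal-app.example.com;
    proxy_ssl_verify              on;
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    proxy_ssl_server_name         on;   # send SNI so the backend serves the right cert
    proxy_set_header Host $host;
}
```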

Certbot says “Problem: Could not bind to IPv4” — what’s wrong?

That error comes from Certbot’s standalone mode, which tries to bind port 80 itself — a port Nginx is already holding. The --nginx plugin does the verification through the running Nginx instead and never binds the port, so make sure you actually passed --nginx, that Nginx is running, and that your domain’s DNS points at this server. Also confirm port 80 is not blocked by a firewall: sudo ufw allow 80.