Configure Nginx for Multiple Backends on Ubuntu

By Anurag Singh

Updated on May 22, 2025


In this tutorial, we'll learn how to configure Nginx for multiple backends on an Ubuntu 24.04 server.

Introduction

In modern web architectures, routing client requests to the appropriate backend service is essential for reliability, scalability, and maintainability. By configuring Nginx as a reverse proxy on Ubuntu 24.04, we can centralize traffic management, offload SSL/TLS termination, and distribute requests across multiple backend applications. This tutorial walks through every step in detail—installing Nginx, defining upstream blocks, securing connections, and verifying our setup—so that our hosting infrastructure handles diverse applications seamlessly and securely.

Prerequisites

Before proceeding, make sure you have the following in place:

  • A fresh Ubuntu 24.04 dedicated server or KVM VPS.
  • Root or sudo privileges.
  • Basic knowledge of Linux commands.
  • Two backend services already running on distinct ports (for example, Service A on port 3000 and Service B on port 5000).
  • A domain name (e.g., example.com) pointed via DNS to the server’s public IP.
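If the two backend services aren't running yet, we can stand in temporary placeholders to test the proxy configuration. This is a sketch, assuming python3 is available (it ships with Ubuntu 24.04); replace these with the real Service A and Service B once ready:

```shell
# Temporary stand-ins for Service A (port 3000) and Service B (port 5000).
# Each serves a plain directory listing; stop them once the real services run.
python3 -m http.server 3000 --bind 127.0.0.1 &
python3 -m http.server 5000 --bind 127.0.0.1 &
```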


Installing Nginx

First, we ensure that the package list is up to date and install Nginx from the official Ubuntu repositories.

sudo apt update
sudo apt install nginx -y

After installation, Nginx starts automatically. We can verify its status:

sudo systemctl status nginx

A successful output shows active (running). If the service is not running, we start and enable it at boot:

sudo systemctl enable --now nginx

Allowing HTTP and HTTPS Traffic

Our firewall (UFW) must permit web traffic. We enable the built-in Nginx profiles:

sudo ufw allow 'Nginx Full'
sudo ufw reload

This opens ports 80 (HTTP) and 443 (HTTPS) while ensuring SSH (port 22) remains accessible.

Defining Upstream Backend Services

To keep configurations clean and reusable, we declare upstream blocks that list our backend servers. In the file /etc/nginx/conf.d/upstreams.conf, we add:

upstream service_a {
    server 127.0.0.1:3000;
}

upstream service_b {
    server 127.0.0.1:5000;
}

These upstream names (service_a and service_b) will be referenced in our server blocks. If we scale to multiple instances for load balancing, we simply add more server lines.

Configuring the Reverse Proxy Server Block

Next, we create a dedicated server block to route requests based on URI paths. In /etc/nginx/sites-available/reverse-proxy.conf, we add:

server {
    listen 80;
    server_name example.com www.example.com;

    # Redirect HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    # SSL configuration
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include             /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam         /etc/letsencrypt/ssl-dhparams.pem;

    # Proxy for Service A (WebSocket-capable, with client info forwarded)
    location /app-a/ {
        proxy_pass         http://service_a/;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection "upgrade";
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }

    # Proxy for Service B
    location /app-b/ {
        proxy_pass         http://service_b/;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }

    # Optional: serve static files directly
    location /static/ {
        root /var/www/html;
    }

    # Fallback for other routes
    location / {
        return 404;
    }
}

Key points:

  • HTTP-to-HTTPS redirection ensures secure connections.
  • proxy_pass directives point to our upstream definitions.
  • Headers like X-Forwarded-For preserve client information for logging and security.
  • location matching on /app-a/ and /app-b/ enables multiple services under one domain.
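One subtlety in the static-file location: with root, the full request URI is appended to the path, so /static/logo.png resolves to /var/www/html/static/logo.png on disk. If the files live somewhere else (the /var/www/assets path below is purely illustrative), alias replaces the matched prefix instead:

```nginx
# root appends the full URI:   /static/logo.png -> /var/www/html/static/logo.png
location /static/ {
    root /var/www/html;
}

# alias replaces the prefix:   /static/logo.png -> /var/www/assets/logo.png
location /static/ {
    alias /var/www/assets/;
}
```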

Enabling the Configuration

Activate our reverse-proxy configuration by creating a symbolic link and testing for syntax errors:

sudo ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/
sudo nginx -t

If Nginx reports syntax is ok and test is successful, reload the service. Note that the certificate paths referenced in the 443 block are created by Certbot in a later step; if they don't exist yet, nginx -t will fail, so either obtain the certificates first or temporarily comment out the HTTPS server block.

sudo systemctl reload nginx

Obtaining Free SSL/TLS Certificates

We recommend Let’s Encrypt for free, automated certificates. Install Certbot and the Nginx plugin:

sudo apt install certbot python3-certbot-nginx -y

Then request and install certificates:

sudo certbot --nginx -d example.com -d www.example.com

Certbot updates our Nginx server block automatically and sets up auto-renewal. Verify renewal with a dry run:

sudo certbot renew --dry-run

Verifying the Setup

  • Access https://example.com/app-a/ in a browser: we should see Service A’s interface (plain HTTP requests are redirected to HTTPS).
  • Access https://example.com/app-b/: secure connection serving Service B.
  • Check logs for any errors:

sudo tail -f /var/log/nginx/access.log /var/log/nginx/error.log

Test header forwarding by inspecting request headers within backend applications—ensuring the original client IP and scheme are intact.

Troubleshooting Tips

  • 502 Bad Gateway often indicates a backend is down or listening on the wrong port. Confirm with ss -tlnp.
  • Permission Denied for SSL files may require adjusting Nginx’s user (www-data) permissions on /etc/letsencrypt.
  • URI mismatches: ensure trailing slashes in location and proxy_pass directives align to avoid unwanted redirects.
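To rule out the most common 502 cause, we can probe a backend port directly. This sketch uses bash's /dev/tcp feature, with port 3000 from our example (swap in whichever port you need):

```shell
# Probe a backend port; prints "open" if something accepts the connection.
port=3000
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port: open"
else
    echo "port $port: closed"
fi
```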

How It Works

When a client’s browser requests https://example.com/app-a/, here’s step-by-step how Nginx handles it:

DNS & TCP Connection

  • The client’s DNS lookup for example.com returns your server’s IP.
  • The browser opens a TCP connection to port 443 (HTTPS) on that IP.

Nginx Listens & Matches Server Block

  • Nginx is listening on port 443 with the server_name example.com block.
  • It selects that block because the Host: header matches.

URI-Based Routing (location directive)

  • Inside that block, Nginx evaluates location directives in order.
  • The request URI starts with /app-a/, so the location /app-a/ { … } block is chosen.

Proxy Pass to Upstream

  • The proxy_pass http://service_a/; tells Nginx to forward the request to the upstream group named service_a.
  • Nginx rewrites the URI so that /app-a/foo becomes /foo when sent to the backend (because of the trailing slash in proxy_pass http://service_a/;).
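The two behaviors side by side, using the service_a upstream from above:

```nginx
# Trailing slash: the matched prefix is replaced, so the backend sees /foo
location /app-a/ {
    proxy_pass http://service_a/;   # GET /app-a/foo  ->  GET /foo
}

# No trailing slash: the URI is passed through unchanged
location /app-a/ {
    proxy_pass http://service_a;    # GET /app-a/foo  ->  GET /app-a/foo
}
```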

Resolving the Upstream

In /etc/nginx/conf.d/upstreams.conf, we declared:

upstream service_a {
  server 127.0.0.1:3000;
}

Nginx looks up the list of server lines under upstream service_a. Here it’s just one—127.0.0.1:3000—so every proxied request goes there.

Request Forwarding & Response

Nginx opens a new connection to 127.0.0.1:3000, forwards the adjusted HTTP request (including headers like Host, X-Forwarded-For, etc.), and waits for Service A’s response.
When Service A responds, Nginx relays that back over the original TCP connection to the client’s browser.

The same flow happens for /app-b/, except Nginx matches the location /app-b/ { proxy_pass http://service_b/; … } block and looks up service_b in the upstream definitions.

How Upstream Names Work & Simple Load Balancing

Upstream Block

upstream service_a {
    server 127.0.0.1:3000;
    # more server lines can go here
}

This block defines a named group of backend endpoints. Any proxy_pass directive targeting http://service_a (with or without a trailing slash) refers to that group.

Referencing in Server Blocks

In your server { … } context, whenever you use proxy_pass http://service_a/;, Nginx knows to consult the list under upstream service_a { … }.

Scaling to Multiple Instances

If you run two copies of Service A—to spread load or for redundancy—you simply add more server lines:

upstream service_a {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

Now, by default, Nginx uses a round-robin algorithm:

  • First request → port 3000
  • Second → port 3001
  • Third → port 3000 again
    …and so on.

Advanced Load-Balancing Options

You can customize behavior per server line:

upstream service_a {
    server 127.0.0.1:3000 weight=3;   # gets 3× more traffic
    server 127.0.0.1:3001 max_fails=2 fail_timeout=30s;
}

  • weight adjusts how often each backend is chosen.
  • max_fails and fail_timeout let Nginx temporarily avoid unhealthy backends.
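Round-robin is only the default; other built-in selection methods can be enabled with a single directive at the top of the upstream block. A sketch using the same example ports:

```nginx
upstream service_a {
    least_conn;    # send each request to the backend with the fewest active connections
    # ip_hash;     # alternative: pin each client IP to one backend (session stickiness)
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}
```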

In Summary

  • Routing by URI: Nginx picks the correct location block for /app-a/ vs. /app-b/.
  • Named Upstreams: service_a and service_b are groups of one or more backend servers.
  • Load Balancing: To scale horizontally, add more server entries under each upstream—Nginx will automatically distribute requests.

This design keeps our configuration modular and makes it trivial to add capacity or redundancy simply by updating the upstream definitions—no changes needed in the routing logic itself.

Conclusion

In this tutorial, we've learned how to configure Nginx for multiple backends on an Ubuntu 24.04 server. By centralizing reverse-proxy logic in Nginx, we achieve flexible routing to multiple backend services under a single domain, robust SSL/TLS management, and consistent forwarding of client information to every backend. This scalable pattern supports future growth: adding more upstream blocks or server names as our hosting portfolio expands.