Proxy, Reverse Proxy, and Load Balancing Explained
Clear overview of proxies, reverse proxies, and load balancers - with reasons to use each and an Nginx example.
When learning system design, you'll often hear terms like proxy, reverse proxy, and load balancer. They are critical building blocks of modern infrastructure.
In this article, we'll cover:
- What a Proxy (Forward Proxy) is
- What a Reverse Proxy is
- How Load Balancing works
1. Proxy (Forward Proxy)
A proxy server sits between the client and the internet. Instead of sending requests directly to the target server, the client communicates with the proxy, which forwards the request on its behalf.
How it Works:
- Client sends a request → Proxy intercepts it
- Proxy checks policies (block/allow/cache)
- Proxy forwards request to the target server
- Server responds → Proxy relays response to client
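The flow above can be sketched as a toy forward proxy in Python. This is a minimal illustration, not a real proxy: `BLOCKED`, `CACHE`, and `fetch_from_origin` are hypothetical stand-ins for a policy engine, a response cache, and an outbound HTTP request.

```python
# Toy forward proxy: check policy, serve from cache, or forward to the origin.
# BLOCKED, CACHE, and fetch_from_origin are illustrative stand-ins.

BLOCKED = {"socialmedia.example"}   # hosts the access policy denies
CACHE = {}                          # url -> cached response body

def fetch_from_origin(url: str) -> str:
    # Stand-in for the real outbound request made on the client's behalf.
    return f"response from {url}"

def handle_request(url: str) -> str:
    host = url.split("/")[2]              # crude host extraction from the URL
    if host in BLOCKED:                   # 1) proxy checks its policies
        return "403 Forbidden"
    if url in CACHE:                      # 2) serve a cached copy if we have one
        return CACHE[url]
    response = fetch_from_origin(url)     # 3) forward request to target server
    CACHE[url] = response                 # 4) cache, then relay the response
    return response
```

Note that the client only ever talks to `handle_request`; the origin server sees the proxy, not the client.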
Why Use a Proxy?
- Anonymity & Privacy → hides client's IP address
- Access Control → companies use it to restrict or log internet usage
- Caching → improves performance by serving cached responses
- Geo-unblocking → bypasses region locks (e.g., streaming content)
Example Use Case
A company routes all employee internet traffic through a proxy. This lets IT enforce rules (block social media) and cache common requests (improving performance).
2. Reverse Proxy
While a forward proxy serves clients, a reverse proxy serves servers.
A reverse proxy sits in front of one or more backend servers. Clients send requests to the reverse proxy, and it decides how to route them.
How it Works:
- Client sends a request → lands at Reverse Proxy
- Reverse Proxy chooses a backend server (based on rules, load, etc.)
- Backend server processes → sends response back
- Reverse Proxy returns response to the client
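The same four steps can be sketched from the server side. This toy reverse proxy uses a hypothetical backend list and a simple round-robin rule to pick a server; `backend_process` stands in for forwarding the request over the network.

```python
# Toy reverse proxy: the client calls handle(), which picks a backend,
# lets it process the request, and relays the response back.

BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical backend pool
_next = 0  # index of the backend to use next (simple round-robin rule)

def backend_process(server: str, path: str) -> str:
    # Stand-in for forwarding the request to a real backend server.
    return f"{server} served {path}"

def handle(path: str) -> str:
    global _next
    server = BACKENDS[_next % len(BACKENDS)]  # choose a backend (rules, load, ...)
    _next += 1
    return backend_process(server, path)      # return its response to the client
```

The client never learns which backend answered; it only sees the reverse proxy's address.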
Why Use a Reverse Proxy?
- Security → hides server details (IP, architecture)
- Load Balancing → distributes traffic across multiple servers
- SSL Termination → offloads HTTPS encryption from backend servers
- Caching → serves static assets without hitting the backend
- Protection → mitigates DDoS attacks by controlling entry point
Example Use Case
Cloudflare is a global reverse proxy. It protects sites from DDoS attacks, accelerates delivery with caching, and hides the real origin server.
3. Load Balancing
Load balancing is often handled by reverse proxies. It distributes incoming requests across multiple servers to prevent overload and ensure availability.
Common Load Balancing Algorithms
- Round Robin → send requests sequentially (A → B → C → A…)
- Least Connections → send to server with fewest active connections
- IP Hash → route clients consistently to the same server
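The three algorithms above can be sketched in a few lines of Python. The server names and connection counts are illustrative assumptions, and a real balancer would track connections dynamically rather than in a fixed dict.

```python
from hashlib import sha256
from itertools import cycle

servers = ["A", "B", "C"]

# Round Robin: hand out servers in a repeating cycle (A -> B -> C -> A ...).
_rr = cycle(servers)
def round_robin() -> str:
    return next(_rr)

# Least Connections: pick the server with the fewest active connections.
active = {"A": 5, "B": 2, "C": 7}  # hypothetical live connection counts
def least_connections() -> str:
    return min(active, key=active.get)

# IP Hash: hash the client IP so the same client always lands on the same server.
def ip_hash(client_ip: str) -> str:
    digest = int(sha256(client_ip.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]
```

IP Hash is the one that gives "sticky" sessions: as long as the server list doesn't change, a given client IP always maps to the same backend.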
Why Use Load Balancing?
- Scalability → add more servers as demand grows
- Fault Tolerance → if one server goes down, others keep working
- Performance → avoid bottlenecks, ensure smooth user experience
Example Nginx Load Balancing Config
upstream backend_servers {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

By default, Nginx uses round-robin. You can switch to least_conn or ip_hash as needed.
Quick Comparison
| Feature | Proxy | Reverse Proxy | Load Balancer |
|---|---|---|---|
| Protects | Clients | Servers | Servers |
| Hides | Client IP | Server IP | Server details |
| Use Case | Privacy, Access Control | Security, SSL, Cache | Scalability, Availability |
Final Thoughts
- Use a Proxy when you want to control or hide clients.
- Use a Reverse Proxy when you want to protect and manage servers.
- Use Load Balancing when you want your system to scale and stay reliable.
These concepts are fundamental in system design. Almost every large-scale application uses them in some form — whether it's Netflix handling millions of streams or your own project running behind Nginx.
