Azure Load Balancer
Azure Load Balancer is one of the foundational components of Azure networking. It distributes incoming network traffic across multiple backend resources to provide high availability and consistent performance. Whether you’re handling web traffic, internal services, or hybrid networks, it is the go-to Azure service for fast, reliable traffic routing at layer 4 (TCP and UDP).
Types of Azure Load Balancers
Azure offers two main types of load balancers, each serving a different purpose:
- Public Load Balancer: Handles inbound traffic from the internet to your Azure virtual machines or services.
- Internal Load Balancer (ILB): Used for distributing traffic within a virtual network or between peered VNets. No internet exposure.
Both types use health probes to monitor the availability of backend resources and only forward traffic to healthy instances.
Architecture and Use Cases
Azure Load Balancer sits in front of your services and spreads incoming requests across multiple instances. Some common patterns include:
- Distributing HTTP or HTTPS traffic across multiple web servers
- Balancing RDP or SSH sessions to internal admin VMs
- Routing VPN gateway traffic for hybrid connectivity
- Acting as a failover mechanism for high availability configurations
It integrates well with availability zones and availability sets, ensuring your services remain online even during outages in a single zone or host.
Core Components of Azure Load Balancer
Azure Load Balancer is built on four core components. Each plays a role in how traffic is distributed and managed.
1. Frontend IP Configuration
This sets the public or private IP address that clients use to reach your service. A public IP is internet-facing, while a private IP handles internal traffic inside a virtual network.
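To make this concrete, here is roughly what the two frontend variants look like as the plain dictionaries the azure-mgmt-network Python SDK accepts in place of model objects (an assumption about the SDK's dict support; all names, resource groups, and the subscription ID are hypothetical placeholders):

```python
# Hypothetical frontend IP configurations for an Azure Load Balancer.

# Public (internet-facing) frontend: references an existing public IP resource.
public_frontend = {
    "name": "web-frontend",
    "public_ip_address": {
        "id": "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/"
              "Microsoft.Network/publicIPAddresses/web-lb-pip"
    },
}

# Internal frontend: takes a private IP from a subnet inside the virtual network.
internal_frontend = {
    "name": "app-frontend",
    "subnet": {
        "id": "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/"
              "Microsoft.Network/virtualNetworks/demo-vnet/subnets/app-subnet"
    },
    "private_ip_allocation_method": "Dynamic",
}
```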
2. Backend Pool
The backend pool is the group of resources that receive traffic. It can include:
- Virtual Machines (VMs)
- Virtual Machine Scale Sets
- Availability sets
There are two SKUs:
- Basic SKU: Designed for small-scale and test workloads. It supports up to 300 instances in a backend pool, has no availability zone support, and does not integrate with Azure Monitor.
- Standard SKU: Recommended for production. It supports up to 1,000 instances in a backend pool, zone-redundant designs, better scalability, and full integration with Azure Monitor, and it is secure by default (inbound traffic is blocked until an NSG explicitly allows it).
With the Standard SKU, the backend pool can include any virtual machines or scale sets within a single virtual network, while Basic is restricted to VMs in a single availability set or scale set.
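As a rough sketch in the same dictionary form (hypothetical names and IDs), the SKU is declared on the load balancer itself and the backend pool starts as little more than a named container; NICs or scale sets join it later by referencing the pool's resource ID:

```python
# Hypothetical SKU and backend pool definitions for a Standard load balancer.
lb_sku = {"name": "Standard"}   # or {"name": "Basic"} for the Basic SKU

backend_pools = [
    {"name": "web-pool"},       # starts empty; members are attached separately
]

# Resource ID that a VM NIC or scale set IP configuration would reference
# to join the pool:
web_pool_id = (
    "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/"
    "Microsoft.Network/loadBalancers/demo-lb/backendAddressPools/web-pool"
)
```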
3. Health Probes
Health probes check the health of each backend instance and determine if it should receive traffic. There are three types:
- TCP Probe: Attempts a TCP handshake on a defined port.
- HTTP Probe: Sends an HTTP GET to a specified path and expects a 200 OK response.
- Guest Agent Probe: Uses a guest agent on the VM. It is not recommended unless HTTP or TCP probes are not available.
If a backend instance fails its probe, the load balancer stops sending new connections to it until it passes the probe again.
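Sketched in the same dictionary form (probe names, ports, and the health-check path are hypothetical), a TCP probe and an HTTP probe look like this:

```python
# Hypothetical health probe definitions.

# TCP probe: the instance is healthy if a TCP handshake succeeds on port 22.
tcp_probe = {
    "name": "ssh-probe",
    "protocol": "Tcp",
    "port": 22,
    "interval_in_seconds": 15,   # how often to probe
    "number_of_probes": 2,       # consecutive failures before marking unhealthy
}

# HTTP probe: the instance is healthy only if GET /healthz returns 200 OK.
http_probe = {
    "name": "web-probe",
    "protocol": "Http",
    "port": 80,
    "request_path": "/healthz",
    "interval_in_seconds": 15,
    "number_of_probes": 2,
}
```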
4. Load Balancing Rules
Rules define how traffic moves from frontend to backend. They map a frontend IP address and port combination to backend IP addresses and ports.
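The sketch below stitches the pieces from the previous sections into a single, minimal load balancer with one rule that maps TCP port 80 on the public frontend to port 80 on the backend pool, gated by the HTTP probe. It assumes the azure-mgmt-network and azure-identity Python packages; the subscription ID, resource group, region, and all resource names are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB_ID = "<subscription-id>"                       # hypothetical placeholders
RG, LB = "demo-rg", "demo-lb"
LB_ID = (f"/subscriptions/{SUB_ID}/resourceGroups/{RG}/providers/"
         f"Microsoft.Network/loadBalancers/{LB}")

client = NetworkManagementClient(DefaultAzureCredential(), SUB_ID)

client.load_balancers.begin_create_or_update(
    RG, LB,
    {
        "location": "eastus",
        "sku": {"name": "Standard"},
        "frontend_ip_configurations": [{
            "name": "web-frontend",
            "public_ip_address": {
                "id": f"/subscriptions/{SUB_ID}/resourceGroups/{RG}/providers/"
                      "Microsoft.Network/publicIPAddresses/web-lb-pip"
            },
        }],
        "backend_address_pools": [{"name": "web-pool"}],
        "probes": [{
            "name": "web-probe",
            "protocol": "Http",
            "port": 80,
            "request_path": "/healthz",
            "interval_in_seconds": 15,
            "number_of_probes": 2,
        }],
        "load_balancing_rules": [{
            "name": "http-rule",
            # frontend IP/port  ->  backend pool/port, gated by the probe
            "frontend_ip_configuration": {"id": f"{LB_ID}/frontendIPConfigurations/web-frontend"},
            "backend_address_pool": {"id": f"{LB_ID}/backendAddressPools/web-pool"},
            "probe": {"id": f"{LB_ID}/probes/web-probe"},
            "protocol": "Tcp",
            "frontend_port": 80,
            "backend_port": 80,
            "idle_timeout_in_minutes": 4,
        }],
    },
).result()
```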
Azure Load Balancer uses a five-tuple hash to distribute traffic to available servers. This includes:
- Source IP address
- Source port
- Destination IP address
- Destination port
- Protocol (TCP or UDP)
When a client connects, the five-tuple keeps every packet of that connection on the same backend instance. A new connection from the same client (typically arriving with a new source port) may land on a different backend unless session persistence is enabled.
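The hashing itself happens inside the Azure platform, but the idea can be illustrated with a few lines of plain Python: hash the five-tuple and use the result to pick one of the healthy backends. This is a conceptual model only, not Azure's actual algorithm, and the IP addresses are made up:

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Deterministically pick a backend from the five-tuple (conceptual only)."""
    five_tuple = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}"
    digest = hashlib.sha256(five_tuple.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

backends = ["10.0.1.4", "10.0.1.5", "10.0.1.6"]

# Same five-tuple -> same backend, so all packets of one connection stay together.
print(pick_backend("203.0.113.7", 50123, "20.60.0.10", 80, "TCP", backends))
print(pick_backend("203.0.113.7", 50123, "20.60.0.10", 80, "TCP", backends))

# A new connection from the same client usually uses a new source port,
# so it may hash to a different backend.
print(pick_backend("203.0.113.7", 50124, "20.60.0.10", 80, "TCP", backends))
```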
Session Persistence
Session persistence determines how Azure Load Balancer handles returning clients. By default, the load balancer distributes each new connection independently, which isn’t ideal for all applications.
You can modify session persistence with three options:
- None (default): Every new connection is hashed again, so a returning client may be sent to a different backend each time it connects.
- Client IP: All connections from the same client IP go to the same backend instance.
- Client IP and Protocol: Sticky sessions are based on both the client’s IP and the protocol (TCP/UDP), offering more precision for certain apps.
This is especially useful when stateful session data is stored on the VM itself, or when the app does not support external session management.
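In the SDK sketch shown earlier, session persistence corresponds to the rule's load_distribution setting; assuming the current SDK values, "Default", "SourceIP", and "SourceIPProtocol" map to the three options above:

```python
# Hypothetical rule fragment showing session persistence (same dictionary form).
#   "Default"          -> None: every new connection is re-hashed
#   "SourceIP"         -> Client IP affinity
#   "SourceIPProtocol" -> Client IP and protocol affinity
http_rule = {
    "name": "http-rule",
    "protocol": "Tcp",
    "frontend_port": 80,
    "backend_port": 80,
    "load_distribution": "SourceIP",   # sticky sessions keyed on the client IP
}
```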
Security Implications and Risks
Azure Load Balancer itself does not inspect or modify packets, but it plays an important role in secure architectures when paired with other tools:
- Works well with Azure Firewall and NSGs to control allowed traffic
- Can be placed in front of NVAs or WAFs to add deep packet inspection
- Helps reduce exposure of individual VMs by using a single public IP
On the risk side, a misconfigured load balancer can route traffic to unintended or unpatched services. It's also important to secure your backend pool VMs, since Load Balancer does not encrypt traffic or block malicious connections by itself.
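For example, a Standard SKU backend stays closed to inbound traffic until an NSG rule opens the load-balanced port. A hypothetical sketch with the azure-mgmt-network SDK (placeholder subscription, resource group, and NSG name) might look like this:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow inbound HTTP from the internet through the backend NSG; everything else
# remains blocked by the NSG's default deny rules.
client.security_rules.begin_create_or_update(
    "demo-rg", "web-nsg", "allow-http-inbound",
    {
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 100,
        "source_address_prefix": "Internet",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "80",
    },
).result()
```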
Performance and Operational Impact
Azure Load Balancer is built to scale. It can handle millions of flows with ultra-low latency and supports up to 1,000 rules per load balancer. You don’t need to worry about the underlying infrastructure, as it’s fully managed and fault tolerant.
That said, it’s not immune to misconfigurations. Improper health probe settings or missing NSG rules can silently break traffic flow. It’s important to test thoroughly in dev or staging before going live.
When to Use Load Balancer vs. Other Azure Services
Azure Load Balancer is great for simple, fast traffic distribution. But it’s not the only load balancing option in Azure. Here’s when you might choose another service:
- Azure Application Gateway: Use when you need SSL termination, cookie-based session affinity, or path-based routing at layer 7.
- Traffic Manager: Use for global DNS-based load balancing across regions or clouds.
- Front Door: Use for edge acceleration, global load balancing, and built-in WAF protection.