# Service Mesh & Sidecar: Infrastructure as a Separate Concern

A service mesh provides infrastructure capabilities (networking, security, observability) to microservices without changing application code. The Sidecar Pattern is its foundation: a proxy deployed next to each service instance handles that work on the service's behalf.
## TL;DR
| Concept | Definition |
|---|---|
| Service Mesh | Infrastructure layer for service-to-service communication |
| Sidecar | Helper container deployed alongside each service |
| Data Plane | Proxies that handle actual traffic |
| Control Plane | Management and configuration |
## The Problem

Without a service mesh, every service implements its own networking, resilience, and security code:
```csharp
// ❌ Every service carries its own infrastructure code
public class OrderService
{
    public async Task<Order?> GetOrder(Guid id)
    {
        // Retry logic
        var retryPolicy = Policy
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(3, i => TimeSpan.FromSeconds(i));

        // Circuit breaker
        var circuitBreaker = Policy
            .Handle<HttpRequestException>()
            .CircuitBreakerAsync(5, TimeSpan.FromMinutes(1));

        // Timeout
        var timeout = Policy.TimeoutAsync(30);

        // Load balancing (manual service discovery)
        var instance = await _serviceDiscovery.GetInstance("inventory");

        // mTLS setup
        var handler = new HttpClientHandler
        {
            ClientCertificates = { _certificate }
        };

        // Tracing
        using var span = _tracer.StartSpan("GetInventory");

        // Finally, the actual call - wrapped in all of the above
        var policy = Policy.WrapAsync(retryPolicy, circuitBreaker, timeout);
        var response = await policy.ExecuteAsync(() =>
            _httpClient.GetAsync($"{instance}/api/inventory/{id}"));
        return await response.Content.ReadFromJsonAsync<Order>();
    }
}
```
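With a mesh in place, the application keeps only the business call; the sidecar supplies retries, circuit breaking, mTLS, and tracing. A minimal sketch in the same style (the `Inventory` type and the mesh-resolved `http://inventory` base address are assumptions for illustration):

```csharp
// ✅ With a service mesh - only the business call remains
public class OrderService
{
    private readonly HttpClient _httpClient;

    // BaseAddress is plain http://inventory; the sidecar transparently
    // adds mTLS, retries, load balancing, and tracing.
    public OrderService(HttpClient httpClient) => _httpClient = httpClient;

    public async Task<Inventory?> GetInventory(Guid id)
        => await _httpClient.GetFromJsonAsync<Inventory>($"/api/inventory/{id}");
}
```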
## The Sidecar Pattern

### How It Works

1. The application makes its call to `localhost`
2. The sidecar proxy intercepts the call
3. The sidecar handles routing, retries, mTLS, and tracing
4. The sidecar forwards the request to the destination service
```yaml
# Kubernetes deployment with sidecar
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  template:
    spec:
      containers:
      # Application container
      - name: orders
        image: orders-service:v1
        ports:
        - containerPort: 8080
      # Sidecar proxy (injected by the service mesh)
      - name: envoy-proxy
        image: envoyproxy/envoy:v1.25
        ports:
        - containerPort: 15001
```
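In practice you rarely author the sidecar container by hand. With Istio, for example, labeling a namespace enables automatic injection, and the proxy is added to every pod in it at admission time:

```yaml
# Automatic sidecar injection (Istio)
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    istio-injection: enabled
```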
## Service Mesh Architecture

### Capabilities

#### 1. Traffic Management
```yaml
# Istio VirtualService - traffic routing
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
  - orders
  http:
  - match:
    - headers:
        x-canary:
          exact: "true"
    route:
    - destination:
        host: orders
        subset: v2
  - route:
    - destination:
        host: orders
        subset: v1
      weight: 90
    - destination:
        host: orders
        subset: v2
      weight: 10
```
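The `v1` and `v2` subsets referenced above must be declared in a companion DestinationRule that maps each subset to pod labels. A minimal sketch, assuming the deployments carry a `version` label:

```yaml
# DestinationRule defining the subsets used by the VirtualService
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders
spec:
  host: orders
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```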
#### 2. Resilience
```yaml
# Istio DestinationRule - connection pool and circuit breaker
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: inventory
spec:
  host: inventory
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        h2UpgradePolicy: UPGRADE
        http1MaxPendingRequests: 100
        http2MaxRequests: 1000
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
---
# Retries are configured on the VirtualService route, not the DestinationRule
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: inventory
spec:
  hosts:
  - inventory
  http:
  - route:
    - destination:
        host: inventory
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: gateway-error,connect-failure,refused-stream
```
#### 3. Security (mTLS)
```yaml
# Istio PeerAuthentication - require mTLS
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT  # All traffic must be mTLS
```
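mTLS also gives every workload a verifiable identity, which authorization policies can build on. A sketch restricting who may call the orders service (the `web-frontend` service account name is an assumption for illustration):

```yaml
# Istio AuthorizationPolicy - allow only the frontend's identity
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: orders-policy
  namespace: production
spec:
  selector:
    matchLabels:
      app: orders
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/production/sa/web-frontend"]
```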
#### 4. Observability

The mesh emits metrics, traces, and access logs automatically - no application code changes needed. In Istio, for example, Envoy access logging is enabled through the mesh configuration:

```yaml
# Istio meshConfig - Envoy access logs
accessLogFile: /dev/stdout
accessLogFormat: |
  [%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%"
  %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT%
  %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%
  "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%"
  "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"
```
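Trace sampling can likewise be tuned centrally rather than per service. A sketch using Istio's Telemetry API to sample 10% of requests mesh-wide (assuming a tracing backend is already configured):

```yaml
# Istio Telemetry - mesh-wide trace sampling
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  tracing:
  - randomSamplingPercentage: 10.0
```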
## Popular Service Meshes
| Mesh | Strengths | Considerations |
|---|---|---|
| Istio | Feature-rich, mature | Complex, resource-heavy |
| Linkerd | Simple, lightweight | Fewer features |
| Consul Connect | Multi-platform | HashiCorp ecosystem |
| AWS App Mesh | AWS native | AWS only |
## When to Use

### Good Fit

- **Many microservices (20+)** - standardize cross-cutting concerns
- **Security requirements** - mTLS needed everywhere
- **Complex traffic patterns** - canary releases, A/B testing
- **Observability needs** - distributed tracing required
- **Kubernetes environment** - native integration

### Not Ideal

- **Few services** - the overhead is not justified
- **Simple traffic patterns** - basic load balancing is sufficient
- **Non-Kubernetes environments** - mesh support is limited
- **Resource-constrained clusters** - every sidecar adds CPU and memory overhead
## Trade-offs
| Benefit | Trade-off |
|---|---|
| Standardized infrastructure | Added complexity |
| No app code changes | Resource overhead (sidecars) |
| Centralized policy | Learning curve |
| Automatic mTLS | Debugging harder |
| Rich observability | Another system to manage |
## Quick Reference

```text
┌─────────────────────────────────────────────────────────┐
│               SERVICE MESH QUICK REFERENCE              │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  COMPONENTS                                             │
│  • Data Plane: Sidecar proxies (Envoy)                  │
│  • Control Plane: Configuration management              │
│                                                         │
│  CAPABILITIES                                           │
│  • Traffic management (routing, load balancing)         │
│  • Resilience (retries, circuit breakers)               │
│  • Security (mTLS, authorization)                       │
│  • Observability (metrics, traces, logs)                │
│                                                         │
│  WHEN TO USE                                            │
│  • Many microservices                                   │
│  • Complex traffic patterns                             │
│  • Strong security requirements                         │
│  • Kubernetes environment                               │
│                                                         │
└─────────────────────────────────────────────────────────┘
```
## Next Steps

- Saga Patterns - distributed transactions
- System Design: Microservices - architecture patterns