Data Flow
Every HTTP request through Meridian follows this path:
```
Client (downstream)
→ TCP accept (Listener, per-IP rate limit)
→ [TLS handshake if configured]
→ [ALPN dispatch: HTTP/1.1 or HTTP/2]
→ Header parse with 60s timeout (Http1Codec or h2)
→ Path normalization (collapse //, resolve ..)
→ Filter chain: request filters (forward order)
→ Route lookup (prefix match on normalized path)
→ Cluster lookup (ClusterManager)
→ Circuit breaker check (RAII guard)
→ Load balancer endpoint selection
→ Connection pool checkout (or TCP connect with timeout)
→ Forward request (strip hop-by-hop, add Host)
→ Read upstream response
→ Filter chain: response filters (reverse order)
→ Forward response downstream (strip hop-by-hop)
→ Connection pool checkin (if keep-alive)
→ Check Connection: close → loop or close
```
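The path-normalization step can be sketched as a pure function over path segments: duplicate slashes collapse, `.` segments drop, and `..` pops the previous segment but never climbs above the root. The function name and signature here are assumptions for illustration, not Meridian's actual API.

```rust
/// Normalize a request path before route lookup: collapse `//`,
/// drop `.`, and resolve `..` (clamped at the root), so that
/// `/a//b/../c` and `/a/c` match the same route.
fn normalize_path(path: &str) -> String {
    let mut segments: Vec<&str> = Vec::new();
    for seg in path.split('/') {
        match seg {
            "" | "." => {}                // collapse `//`, skip `.`
            ".." => { segments.pop(); }   // resolve `..`; pop on empty is a no-op
            s => segments.push(s),
        }
    }
    let mut out = String::from("/");
    out.push_str(&segments.join("/"));
    out
}
```

Clamping `..` at the root (rather than rejecting the request) means a traversal attempt like `/../etc/passwd` simply becomes `/etc/passwd` and is then matched or 404'd like any other path.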
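The "RAII guard" in the circuit-breaker step can be illustrated with a guard type whose `Drop` impl releases the slot, so the in-flight count is decremented on every exit path (success, error, or panic). The names and the simple fixed-limit policy are assumptions, not Meridian's actual implementation.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Minimal circuit-breaker sketch: `try_acquire` hands out an RAII
/// guard while capacity remains, and returns `None` (→ 503) once the
/// in-flight limit is reached.
struct CircuitBreaker {
    in_flight: AtomicUsize,
    max_in_flight: usize,
}

struct BreakerGuard<'a> {
    breaker: &'a CircuitBreaker,
}

impl CircuitBreaker {
    fn try_acquire(&self) -> Option<BreakerGuard<'_>> {
        // Optimistically increment, then roll back if over the limit.
        let prev = self.in_flight.fetch_add(1, Ordering::SeqCst);
        if prev >= self.max_in_flight {
            self.in_flight.fetch_sub(1, Ordering::SeqCst);
            None // breaker open → caller returns 503
        } else {
            Some(BreakerGuard { breaker: self })
        }
    }
}

impl Drop for BreakerGuard<'_> {
    fn drop(&mut self) {
        // Runs on every exit path, so a slot can never leak.
        self.breaker.in_flight.fetch_sub(1, Ordering::SeqCst);
    }
}
```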
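The two "strip hop-by-hop" steps remove the connection-scoped headers defined in RFC 7230 §6.1, plus any header the `Connection` header itself nominates. A sketch, assuming a simple `Vec<(String, String)>` header representation (Meridian's actual header type is not shown here):

```rust
/// Hop-by-hop headers per RFC 7230 §6.1; these describe a single
/// connection and must not be forwarded end-to-end.
const HOP_BY_HOP: &[&str] = &[
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailer", "transfer-encoding", "upgrade",
];

fn strip_hop_by_hop(headers: &[(String, String)]) -> Vec<(String, String)> {
    // `Connection: a, b` makes `a` and `b` hop-by-hop as well.
    let nominated: Vec<String> = headers
        .iter()
        .filter(|(k, _)| k.eq_ignore_ascii_case("connection"))
        .flat_map(|(_, v)| v.split(',').map(|t| t.trim().to_ascii_lowercase()))
        .collect();
    headers
        .iter()
        .filter(|(k, _)| {
            let k = k.to_ascii_lowercase();
            !HOP_BY_HOP.contains(&k.as_str()) && !nominated.contains(&k)
        })
        .cloned()
        .collect()
}
```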
Error Handling at Each Stage
Each stage that can fail maps to a generic HTTP error. No internal details leak to clients — cluster names, endpoint addresses, and circuit breaker state are logged but never sent downstream.
| Stage | Error | HTTP Status |
|---|---|---|
| Header parse fails | Malformed request | 400 Bad Request |
| Header timeout | Slowloris defense | 408 Request Timeout |
| No route matches | Path not found | 404 Not Found |
| Cluster not found | Config error | 502 Bad Gateway |
| Circuit breaker open | Overload protection | 503 Service Unavailable |
| No healthy endpoints | All backends down | 502 Bad Gateway |
| Upstream connect fails | Backend unreachable | 502 Bad Gateway |
| Upstream connect timeout | Backend slow | 504 Gateway Timeout |
| Filter error | Internal filter failure | 500 Internal Server Error |
| Filter rejects request | Policy enforcement | Filter-defined (e.g., 403) |
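The table above can be expressed as a single exhaustive mapping from an internal error type to a status line. The enum and its variants are hypothetical; the point is that variants may carry internal detail (cluster names, endpoint addresses) for logging, while the client only ever sees the generic status. The filter-defined rejection case from the last row is omitted for brevity.

```rust
/// Hypothetical internal error type for the stages in the table.
enum ProxyError {
    MalformedRequest,
    HeaderTimeout,
    NoRoute,
    ClusterNotFound(String),   // cluster name: logged, never sent to clients
    BreakerOpen,
    NoHealthyEndpoints,
    ConnectFailed(String),     // endpoint address: logged, never sent to clients
    ConnectTimeout,
    FilterError,
}

/// Map each failure to the generic status the client receives.
fn status_for(err: &ProxyError) -> (u16, &'static str) {
    match err {
        ProxyError::MalformedRequest => (400, "Bad Request"),
        ProxyError::HeaderTimeout => (408, "Request Timeout"),
        ProxyError::NoRoute => (404, "Not Found"),
        ProxyError::ClusterNotFound(_)
        | ProxyError::NoHealthyEndpoints
        | ProxyError::ConnectFailed(_) => (502, "Bad Gateway"),
        ProxyError::BreakerOpen => (503, "Service Unavailable"),
        ProxyError::ConnectTimeout => (504, "Gateway Timeout"),
        ProxyError::FilterError => (500, "Internal Server Error"),
    }
}
```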
Keep-Alive
HTTP/1.1 connections support keep-alive by default. Multiple requests are processed sequentially on the same connection. The connection closes when:
- The client sends `Connection: close`
- The client uses HTTP/1.0 (no keep-alive by default)
- A parse error occurs
- The header read timeout fires
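The first two conditions above reduce to a version-plus-header check (the parse-error and timeout cases close the connection unconditionally before this check is reached). A sketch, assuming a simplified view of the parsed request:

```rust
/// Decide whether to keep an HTTP/1.x connection open for the next
/// request. HTTP/1.1 defaults to keep-alive unless the client sends
/// `Connection: close`; HTTP/1.0 defaults to close unless the client
/// opts in with `Connection: keep-alive`.
fn keep_alive(version: &str, connection_header: Option<&str>) -> bool {
    let conn = connection_header.map(|v| v.to_ascii_lowercase());
    match version {
        "HTTP/1.1" => conn.as_deref() != Some("close"),
        "HTTP/1.0" => conn.as_deref() == Some("keep-alive"),
        _ => false,
    }
}
```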
HTTP/2 Multiplexing
HTTP/2 connections support multiple concurrent streams. Each stream is handled in its own Tokio task, enabling true parallelism within a single connection. Requests are translated from HTTP/2 to HTTP/1.1 for upstream forwarding.
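The HTTP/2→HTTP/1.1 translation hinges on the pseudo-header fields: HTTP/2 carries the method, path, and authority as `:method`, `:path`, and `:authority`, which must be rebuilt into a request line and a `Host` header for the upstream. A sketch of that head translation; the pseudo-header list input is an assumption about the parsed-request shape, not Meridian's actual types.

```rust
/// Rebuild an HTTP/1.1 request head from HTTP/2 pseudo-headers.
/// Returns `None` if a required pseudo-header is missing (a malformed
/// stream, which a proxy would reject).
fn h2_to_h1_head(pseudo: &[(String, String)]) -> Option<String> {
    let get = |name: &str| {
        pseudo.iter().find(|(k, _)| k.as_str() == name).map(|(_, v)| v.as_str())
    };
    let method = get(":method")?;
    let path = get(":path")?;
    let authority = get(":authority")?;
    // `:authority` becomes the Host header on the HTTP/1.1 side.
    Some(format!("{method} {path} HTTP/1.1\r\nHost: {authority}\r\n"))
}
```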