Data Flow

Every HTTP request through Meridian follows this path:

Client (downstream)
  → TCP accept (Listener, per-IP rate limit)
    → [TLS handshake if configured]
      → [ALPN dispatch: HTTP/1.1 or HTTP/2]
        → Header parse with 60s timeout (Http1Codec or h2)
          → Path normalization (collapse //, resolve ..)
            → Filter chain: request filters (forward order)
              → Route lookup (prefix match on normalized path)
                → Cluster lookup (ClusterManager)
                  → Circuit breaker check (RAII guard)
                    → Load balancer endpoint selection
                      → Connection pool checkout (or TCP connect with timeout)
                        → Forward request (strip hop-by-hop, add Host)
                          → Read upstream response
                            → Filter chain: response filters (reverse order)
                              → Forward response downstream (strip hop-by-hop)
                                → Connection pool checkin (if keep-alive)
                                  → Check Connection: close → loop or close
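
One step in the path above, path normalization, can be sketched as a small function. This is an illustrative sketch, not Meridian's actual implementation; the function name is hypothetical.

```rust
/// Collapse duplicate slashes and resolve `.`/`..` segments before route
/// lookup, so `/a//b/../c` and `/a/c` match the same route.
/// (Illustrative sketch; not Meridian's actual code.)
fn normalize_path(path: &str) -> String {
    let mut segments: Vec<&str> = Vec::new();
    for seg in path.split('/') {
        match seg {
            "" | "." => {}                // empty segments collapse `//`
            ".." => { segments.pop(); }   // step up, but never above root
            s => segments.push(s),
        }
    }
    let mut result = String::from("/");
    result.push_str(&segments.join("/"));
    result
}

fn main() {
    assert_eq!(normalize_path("/a//b/./c"), "/a/b/c");
    assert_eq!(normalize_path("/a/b/../c"), "/a/c");
    // `..` cannot escape the root, so traversal attempts are neutralized.
    assert_eq!(normalize_path("/../../etc/passwd"), "/etc/passwd");
    println!("ok");
}
```

Normalizing before route lookup matters because prefix matching happens on the normalized path; without it, `//admin` or `/x/../admin` could bypass a route-level policy.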

Error Handling at Each Stage

Every stage that can fail returns a generic HTTP error to the client. No internal details leak: cluster names, endpoint addresses, and circuit breaker state are logged but never sent to clients.

| Stage                    | Error                   | HTTP Status                |
|--------------------------|-------------------------|----------------------------|
| Header parse fails       | Malformed request       | 400 Bad Request            |
| Header timeout           | Slowloris defense       | 408 Request Timeout        |
| No route matches         | Path not found          | 404 Not Found              |
| Cluster not found        | Config error            | 502 Bad Gateway            |
| Circuit breaker open     | Overload protection     | 503 Service Unavailable    |
| No healthy endpoints     | All backends down       | 502 Bad Gateway            |
| Upstream connect fails   | Backend unreachable     | 502 Bad Gateway            |
| Upstream connect timeout | Backend slow            | 504 Gateway Timeout        |
| Filter error             | Internal filter failure | 500 Internal Server Error  |
| Filter rejects request   | Policy enforcement      | Filter-defined (e.g., 403) |
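
One way to keep this mapping in a single place is an error enum with a status lookup, so no stage can accidentally leak details into a response. This is a sketch under assumed names (`ProxyError`, `status_for` are illustrative, not Meridian's API).

```rust
/// Internal failure classification for each pipeline stage.
/// (Hypothetical names; illustrative sketch only.)
#[derive(Debug)]
enum ProxyError {
    MalformedRequest,
    HeaderTimeout,
    NoRoute,
    ClusterNotFound,
    CircuitOpen,
    NoHealthyEndpoints,
    ConnectFailed,
    ConnectTimeout,
    FilterInternal,
}

/// Map each internal error to the generic status sent downstream.
/// The client only ever sees the status code, never the variant.
fn status_for(err: &ProxyError) -> u16 {
    match err {
        ProxyError::MalformedRequest => 400,
        ProxyError::HeaderTimeout => 408,
        ProxyError::NoRoute => 404,
        ProxyError::ClusterNotFound
        | ProxyError::NoHealthyEndpoints
        | ProxyError::ConnectFailed => 502,
        ProxyError::CircuitOpen => 503,
        ProxyError::ConnectTimeout => 504,
        ProxyError::FilterInternal => 500,
    }
}

fn main() {
    assert_eq!(status_for(&ProxyError::NoRoute), 404);
    assert_eq!(status_for(&ProxyError::CircuitOpen), 503);
    println!("ok");
}
```

Centralizing the mapping makes the "no internal details leaked" rule easy to audit: the response body and status are derived only from the enum variant, while the rich context stays in the log line.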

Keep-Alive

HTTP/1.1 connections support keep-alive by default. Multiple requests are processed sequentially on the same connection. The connection closes when:

  • The client sends Connection: close
  • The client uses HTTP/1.0 (no keep-alive by default)
  • A parse error occurs
  • The header read timeout fires
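
The first two rules above reduce to a small predicate (parse errors and header timeouts close the connection before any such check runs). A minimal sketch, with an assumed function name:

```rust
/// Decide whether to keep the downstream connection open after a response.
/// (Illustrative sketch of the rules above; not Meridian's actual code.)
fn should_keep_alive(is_http10: bool, connection_header: Option<&str>) -> bool {
    let close_requested = connection_header
        .map(|v| v.eq_ignore_ascii_case("close"))
        .unwrap_or(false);
    // HTTP/1.1 defaults to keep-alive; HTTP/1.0 defaults to close.
    !close_requested && !is_http10
}

fn main() {
    assert!(should_keep_alive(false, None));                 // HTTP/1.1, no header
    assert!(!should_keep_alive(false, Some("close")));       // explicit close
    assert!(!should_keep_alive(true, None));                 // HTTP/1.0 default
    println!("ok");
}
```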

HTTP/2 Multiplexing

HTTP/2 connections support multiple concurrent streams. Each stream is handled in its own Tokio task, enabling true parallelism within a single connection. Requests are translated from HTTP/2 to HTTP/1.1 for upstream forwarding.
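
The per-stream fan-out can be sketched as follows. Meridian spawns a Tokio task per stream; to keep this sketch dependency-free it uses OS threads instead, which shows the same shape (concurrent units producing independent responses on one connection). All names and stream IDs here are illustrative.

```rust
use std::sync::mpsc;
use std::thread;

/// Handle each stream concurrently and collect the responses.
/// (Sketch: threads stand in for Tokio tasks; not Meridian's actual code.)
fn handle_streams(stream_ids: Vec<u32>) -> Vec<(u32, String)> {
    let (tx, rx) = mpsc::channel();
    let mut handles = Vec::new();
    for id in stream_ids {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            // In the real proxy this step would translate the HTTP/2 request
            // to HTTP/1.1, forward it upstream, and await the response.
            tx.send((id, format!("response for stream {}", id))).unwrap();
        }));
    }
    drop(tx); // close the channel once all workers hold a sender clone
    for h in handles {
        h.join().unwrap();
    }
    let mut results: Vec<_> = rx.into_iter().collect();
    results.sort_by_key(|(id, _)| *id); // streams complete in any order
    results
}

fn main() {
    let results = handle_streams(vec![1, 3, 5]);
    assert_eq!(results.len(), 3);
    assert_eq!(results[0].0, 1);
    println!("ok");
}
```

Because each stream runs independently, a slow upstream on one stream does not stall the others on the same HTTP/2 connection.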