Meridian Proxy

Meridian is a high-performance L4/L7 network proxy written in Rust. It sits between clients and backend services, handling traffic routing, load balancing, TLS termination, and resilience — so your applications don’t have to.

                    ┌─────────────┐
Clients ──────────► │  MERIDIAN   │ ──────► Backend A
(browsers, apps,    │   PROXY     │ ──────► Backend B
 other services)    │             │ ──────► Backend C
                    └─────────────┘

Why Meridian?

Meridian is a ground-up reimplementation informed by the architectural lessons of Envoy, HAProxy, and Nginx. It exploits Rust’s ownership model, zero-cost abstractions, and Tokio’s async runtime to deliver:

  • Memory safety by construction — no buffer overflows, use-after-free, or data races. These entire vulnerability classes are eliminated at compile time, not by convention.
  • High performance — sub-nanosecond config reads, <20ns load balancer picks, zero-copy HTTP parsing at 2+ GB/s.
  • Async filter chain — filters are async fn, not callbacks. No manual state machines, no StopIteration / continueDecoding() dance.
  • Production resilience — circuit breakers, connection pooling, health checking, per-IP rate limiting, Slowloris defense.

Feature Overview

Feature                                 Status
HTTP/1.1 proxy with keep-alive          Done
HTTP/2 downstream (h2 crate)            Done
TLS termination (rustls)                Done
Async filter chain                      Done
Round-robin load balancing              Done
Circuit breaker (RAII guards)           Done
Connection pooling                      Done
Active health checking (TCP/HTTP)       Done
Chunked transfer encoding               Done
Admin API (/stats, /clusters)           Done
Path normalization & security           Done
Prometheus metrics endpoint             Done
Fuzz-hardened parsers                   Done

Target Users

Platform engineers, SREs, and cloud-native infrastructure teams who need a proxy they can trust — one where the compiler, not code review, guarantees memory safety.

Getting Started

This section covers building Meridian from source, running it with a configuration file, and a quick start example that proxies traffic to a backend service.

Requirements

  • Rust stable toolchain (edition 2021)
  • Cargo (comes with Rust)
  • Linux, macOS, or Windows (Linux recommended for production)

Quick Overview

# Build
cargo build --release

# Run with a config file
cargo run --release -p meridian-proxy -- meridian-proxy/meridian.toml

# Run tests
cargo test --workspace

Meridian reads a TOML configuration file that defines listeners (where to accept traffic), clusters (groups of backend endpoints), and routes (which paths map to which clusters).

Building from Source

Prerequisites

Install the Rust toolchain via rustup:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Meridian targets stable Rust (edition 2021). No nightly features are required for the proxy itself (nightly is only needed for fuzzing).

Build Commands

# Debug build (faster compilation, slower runtime)
cargo build

# Release build (optimized)
cargo build --release

# Build just the proxy binary
cargo build --release -p meridian-proxy

The release binary is at target/release/meridian.

Workspace Structure

Meridian is organized as a Cargo workspace with three crates:

elote/
  meridian-core/       # Library — all domain logic
  meridian-proxy/      # Binary — orchestration shell
  meridian-bench/      # Benchmarks — Criterion suites
  fuzz/                # Fuzzing harness — libfuzzer targets
  docs/                # This book

Rule: core owns all logic. The proxy crate is a thin orchestration shell. Domain logic (parsing, load balancing, circuit breaking, filtering) lives in meridian-core.

Verify the Build

# Run all tests
cargo test --workspace

# Check for warnings
cargo clippy --all-targets -- -D warnings

# Check formatting
cargo fmt --check

All three checks must pass before any commit.

Running the Proxy

Basic Usage

# Run with default config location
cargo run -p meridian-proxy -- meridian-proxy/meridian.toml

# Run the release binary directly
./target/release/meridian meridian-proxy/meridian.toml

Meridian logs to stderr using structured logging via tracing. Control log level with the RUST_LOG environment variable:

# Info level (default)
RUST_LOG=info ./target/release/meridian config.toml

# Debug level (verbose)
RUST_LOG=debug ./target/release/meridian config.toml

# Trace level (very verbose)
RUST_LOG=trace ./target/release/meridian config.toml

# Component-specific
RUST_LOG=meridian_proxy::connection=debug ./target/release/meridian config.toml

Shutdown

Meridian shuts down gracefully on Ctrl+C (SIGINT). Active connections are drained before the process exits.

Ports

The configuration file specifies which ports Meridian listens on. Common setups:

Port    Purpose
8080    Main HTTP listener
8443    HTTPS (TLS) listener
9901    Admin API (metrics, health)

Health Check

Once running, verify with:

# If admin API is configured
curl http://localhost:9901/ready
# Returns: LIVE

# If proxying to a backend
curl http://localhost:8080/

Quick Start Example

This example sets up Meridian to proxy HTTP traffic to two backend servers with round-robin load balancing.

1. Create the Configuration

Save as quickstart.toml:

[[listeners]]
name = "http"
address = "0.0.0.0:8181"
filter_chain = []

[[clusters]]
name = "my-backends"
lb_policy = "round_robin"
connect_timeout_ms = 5000
endpoints = [
  { address = "127.0.0.1:9001", weight = 1 },
  { address = "127.0.0.1:9002", weight = 1 },
]

[[routes]]
prefix = "/"
cluster = "my-backends"

2. Start Backend Servers

For testing, use Python’s built-in HTTP server:

# Terminal 1: backend on port 9001
mkdir -p /tmp/backend1 && echo "Backend 1" > /tmp/backend1/index.html
cd /tmp/backend1 && python3 -m http.server 9001

# Terminal 2: backend on port 9002
mkdir -p /tmp/backend2 && echo "Backend 2" > /tmp/backend2/index.html
cd /tmp/backend2 && python3 -m http.server 9002

3. Start Meridian

# Terminal 3
cargo run -p meridian-proxy -- quickstart.toml

4. Send Traffic

# Requests alternate between backends (round-robin)
curl http://localhost:8181/
# => Backend 1

curl http://localhost:8181/
# => Backend 2

curl http://localhost:8181/
# => Backend 1

5. Add TLS

To enable TLS termination, add a TLS section to the listener:

[[listeners]]
name = "https"
address = "0.0.0.0:8443"
filter_chain = []

[listeners.tls]
cert_path = "/path/to/cert.pem"
key_path = "/path/to/key.pem"

Clients connect via HTTPS; Meridian forwards to backends over plain HTTP.

6. Add Health Checking

[[clusters]]
name = "my-backends"
lb_policy = "round_robin"
connect_timeout_ms = 5000
endpoints = [
  { address = "127.0.0.1:9001", weight = 1 },
  { address = "127.0.0.1:9002", weight = 1 },
]

[clusters.health_check]
interval_ms = 5000
timeout_ms = 2000
healthy_threshold = 2
unhealthy_threshold = 3
http_path = "/"

Meridian probes each endpoint every 5 seconds. If an endpoint fails 3 consecutive checks, it’s removed from the load balancer rotation.

Configuration

Meridian is configured via a TOML file passed as the first command-line argument. The configuration defines three core concepts:

  • Listeners — where Meridian accepts incoming connections
  • Clusters — groups of upstream backend endpoints
  • Routes — rules mapping request paths to clusters

Example:

# Optional: admin API for metrics and health
admin_address = "127.0.0.1:9901"

[[listeners]]
name = "http"
address = "0.0.0.0:8080"
filter_chain = []

[[clusters]]
name = "api-backend"
lb_policy = "round_robin"
connect_timeout_ms = 5000
endpoints = [
  { address = "10.0.1.1:8080", weight = 1 },
  { address = "10.0.1.2:8080", weight = 1 },
]

[[routes]]
prefix = "/api"
cluster = "api-backend"
timeout_ms = 15000

[[routes]]
prefix = "/"
cluster = "default-backend"

Configuration Concepts

The proxy uses immutable config snapshots — once loaded, the configuration is read-only. Updates produce a new snapshot and swap atomically via arc-swap, so worker threads never see a partially-updated config.
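The snapshot-swap pattern can be sketched with std alone. This is an approximation using RwLock<Arc<Config>> (arc-swap provides the same load/store semantics without taking a lock); the Config fields and method names are illustrative.

```rust
use std::sync::{Arc, RwLock};

#[derive(Debug)]
struct Config {
    version: u64,
}

// Readers clone an Arc to the current snapshot; writers build a complete
// new Config and swap the Arc in one step. A reader's snapshot stays
// valid and immutable even if a store() happens while it is held.
struct ConfigStore {
    current: RwLock<Arc<Config>>,
}

impl ConfigStore {
    fn new(initial: Config) -> Self {
        ConfigStore { current: RwLock::new(Arc::new(initial)) }
    }

    // Get an immutable snapshot of the current config.
    fn load(&self) -> Arc<Config> {
        Arc::clone(&self.current.read().unwrap())
    }

    // Publish a new snapshot atomically; no reader ever sees a
    // partially-updated config.
    fn store(&self, next: Config) {
        *self.current.write().unwrap() = Arc::new(next);
    }
}
```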

Hot Reload

Configuration is read once at startup. Hot reload is available through the ConfigStore::store() API, which allows zero-downtime config updates (used by the xDS integration path).

Listeners

A listener binds to a network address and accepts incoming connections.

[[listeners]]
name = "http"
address = "0.0.0.0:8080"
filter_chain = []

Fields

Field          Type     Required   Description
name           string   yes        Unique name for logging and metrics
address        string   yes        Bind address in host:port format
filter_chain   array    yes        List of filter names (currently unused, reserved)
tls            table    no         TLS configuration (see TLS)

Multiple Listeners

You can define multiple listeners for different purposes:

[[listeners]]
name = "http"
address = "0.0.0.0:8080"
filter_chain = []

[[listeners]]
name = "https"
address = "0.0.0.0:8443"
filter_chain = []

[listeners.tls]
cert_path = "/etc/meridian/cert.pem"
key_path = "/etc/meridian/key.pem"

Each listener runs its own accept loop on a dedicated Tokio task. Connections from each listener are dispatched to the shared worker pool.

Security

Every listener enforces:

  • 60-second header read timeout — defense against Slowloris attacks
  • 256 max connections per source IP — prevents connection exhaustion
  • 64KB max header size — rejects oversized headers
  • Path normalization — collapses //, resolves .. before routing

Clusters & Endpoints

A cluster is a logical group of upstream backend endpoints that share a load balancing policy, circuit breaker, and connection pool.

[[clusters]]
name = "api-backend"
lb_policy = "round_robin"
connect_timeout_ms = 5000
max_idle_connections = 8
endpoints = [
  { address = "10.0.1.1:8080", weight = 1 },
  { address = "10.0.1.2:8080", weight = 2 },
  { address = "10.0.1.3:8080", weight = 1 },
]

Fields

Field                 Type     Required   Default     Description
name                  string   yes        -           Unique cluster name
lb_policy             string   yes        -           Load balancing algorithm
connect_timeout_ms    integer  yes        -           Upstream TCP connect timeout
max_idle_connections  integer  no         8           Max idle pooled connections per endpoint
endpoints             array    yes        -           List of backend endpoints
circuit_breaker       table    no         see below   Circuit breaker limits
health_check          table    no         none        Active health check config

Load Balancing Policies

Policy         Description
round_robin    Cycles through healthy endpoints sequentially
least_request  Power-of-two-choices: picks the endpoint with fewer active requests
maglev         Consistent hashing for sticky sessions (minimal disruption on changes)
random         Random selection among healthy endpoints

Endpoint Weights

Endpoints with higher weights receive proportionally more traffic. A weight of 2 receives roughly twice the traffic of weight 1.
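One simple way weights can drive selection is to expand each endpoint into `weight` slots and cycle over the slots; a weight-2 endpoint then occupies twice as many slots as a weight-1 endpoint. This expansion strategy is illustrative, not necessarily Meridian's internal mechanism; only the `address`/`weight` field names come from the config format.

```rust
// Illustrative weighted expansion: each endpoint appears `weight` times
// in the slot list, so round-robin over the slots yields traffic in
// proportion to the weights.
#[derive(Clone, Debug)]
struct Endpoint {
    address: &'static str,
    weight: u32,
}

fn expand_by_weight(endpoints: &[Endpoint]) -> Vec<&Endpoint> {
    let mut slots = Vec::new();
    for ep in endpoints {
        for _ in 0..ep.weight {
            slots.push(ep);
        }
    }
    slots
}
```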

Circuit Breaker

[clusters.circuit_breaker]
max_connections = 1024
max_pending_requests = 1024
max_requests = 1024

When limits are exceeded, the proxy returns 503 Service Unavailable instead of overloading the backend. Circuit breaker state is managed via RAII guards — the slot is automatically released when the request completes or the connection drops.

Connection Pooling

Meridian maintains a pool of idle TCP connections to each endpoint. When a new request arrives, it checks out a pooled connection instead of performing a TCP handshake. Connections are returned to the pool after use if the upstream supports keep-alive.

  • Max idle per endpoint: configurable (default 8)
  • Max connection age: 90 seconds (expired connections are evicted)
  • LIFO ordering: most recently returned connection is reused first (warmest cache)

Routes

Routes map incoming request paths to backend clusters. Meridian uses prefix-based matching — the first route whose prefix matches the request path wins.

[[routes]]
prefix = "/api"
cluster = "api-backend"
timeout_ms = 15000

[[routes]]
prefix = "/static"
cluster = "cdn-backend"

[[routes]]
prefix = "/"
cluster = "default-backend"

Fields

Field         Type     Required   Description
prefix        string   yes        Path prefix to match
cluster       string   yes        Target cluster name
timeout_ms    integer  no         Request-level timeout in milliseconds
retry_policy  table    no         Retry configuration

Matching Order

Routes are evaluated in the order they appear in the configuration file. Put more specific prefixes first:

# Correct: specific routes first
[[routes]]
prefix = "/api/v2"
cluster = "api-v2"

[[routes]]
prefix = "/api"
cluster = "api-v1"

[[routes]]
prefix = "/"
cluster = "default"
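First-match prefix routing as described above can be expressed in a few lines; the `Route` struct here is illustrative, not Meridian's internal type.

```rust
// Routes are checked in file order; the first prefix that matches the
// (already normalized) request path wins.
struct Route {
    prefix: &'static str,
    cluster: &'static str,
}

fn match_route<'a>(routes: &'a [Route], path: &str) -> Option<&'a str> {
    routes
        .iter()
        .find(|r| path.starts_with(r.prefix))
        .map(|r| r.cluster)
}
```

With the config above, "/api/v2/users" hits api-v2 before the broader "/api" prefix is ever considered.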

Path Normalization

Before matching, request paths are normalized:

  • Double slashes collapsed: //api//data becomes /api/data
  • Dot segments resolved: /api/../secret becomes /secret
  • Dot segments removed: /api/./data becomes /api/data
  • Traversal beyond root clamped: /../../etc/passwd becomes /etc/passwd

This prevents routing bypass attacks where an attacker uses path manipulation to reach unintended backends.
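The four rules above can be sketched as a segment-stack walk. This is a re-derivation from the documented behavior, not Meridian's actual implementation.

```rust
// Normalize a request path: collapse empty ("//") and "." segments,
// resolve ".." by popping the stack, and clamp traversal at the root
// (popping an empty stack is a no-op).
fn normalize_path(path: &str) -> String {
    let mut stack: Vec<&str> = Vec::new();
    for seg in path.split('/') {
        match seg {
            "" | "." => {}             // collapse // and /./
            ".." => { stack.pop(); }   // resolve ..; no-op at root
            s => stack.push(s),
        }
    }
    format!("/{}", stack.join("/"))
}
```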

Retry Policy

[[routes]]
prefix = "/api"
cluster = "api-backend"

[routes.retry_policy]
num_retries = 3
retry_on = ["503", "connect-failure"]

TLS

Meridian terminates TLS on downstream connections using rustls, a pure-Rust TLS implementation. Upstream connections to backends remain plain HTTP.

Configuration

Add a tls section to any listener:

[[listeners]]
name = "https"
address = "0.0.0.0:8443"
filter_chain = []

[listeners.tls]
cert_path = "/etc/meridian/server.crt"
key_path = "/etc/meridian/server.key"

Fields

Field      Type    Required   Description
cert_path  string  yes        Path to PEM-encoded certificate chain
key_path   string  yes        Path to PEM-encoded private key

Certificate Format

Certificates and keys must be PEM-encoded. The certificate file should contain the full chain (leaf certificate first, then intermediates):

-----BEGIN CERTIFICATE-----
(leaf certificate)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(intermediate CA)
-----END CERTIFICATE-----

Protocol Support

  • TLS 1.2 and 1.3 — both supported, with safe defaults
  • ALPN negotiation — advertises h2 and http/1.1
  • HTTP/2 over TLS — clients that negotiate h2 via ALPN are handled by the HTTP/2 connection handler automatically

Why rustls?

Property       rustls                      OpenSSL/BoringSSL
Memory safety  Full Rust guarantees        C code, requires unsafe FFI
Performance    Within 5-10% of BoringSSL   Slightly faster RSA
Dependency     Pure Rust, ~50KB            C library, ~2MB, cmake
Audit history  Cure53 audit (2020)         Multiple audits

rustls eliminates entire classes of TLS implementation vulnerabilities (buffer overflows, use-after-free) that have produced CVEs in C-based TLS libraries.

Health Checks

Meridian supports active health checking to detect unhealthy endpoints before they receive traffic. When an endpoint fails enough consecutive checks, it’s removed from the load balancer rotation.

Configuration

[[clusters]]
name = "api-backend"
lb_policy = "round_robin"
connect_timeout_ms = 5000
endpoints = [
  { address = "10.0.1.1:8080", weight = 1 },
  { address = "10.0.1.2:8080", weight = 1 },
]

[clusters.health_check]
interval_ms = 5000
timeout_ms = 2000
healthy_threshold = 2
unhealthy_threshold = 3
http_path = "/health"

Fields

Field                Type     Required   Default   Description
interval_ms          integer  yes        -         Time between health checks
timeout_ms           integer  yes        -         Timeout for a single check
healthy_threshold    integer  no         2         Consecutive successes to mark healthy
unhealthy_threshold  integer  no         3         Consecutive failures to mark unhealthy
http_path            string   no         -         HTTP path to probe; if absent, uses TCP connect check

Check Types

TCP check (when http_path is not set): Success if a TCP connection can be established within the timeout.

HTTP check (when http_path is set): Connects, sends GET <path> HTTP/1.1, and checks for a 200 status code.

Threshold Hysteresis

Health status uses threshold hysteresis to prevent flapping:

  • An unhealthy endpoint must pass healthy_threshold consecutive checks to become healthy again
  • A healthy endpoint must fail unhealthy_threshold consecutive checks to be marked unhealthy
  • A single success resets the failure counter (and vice versa)

This prevents an intermittently failing endpoint from rapidly toggling between healthy and unhealthy states.
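The hysteresis rules above reduce to two consecutive-observation counters, where each observation of the opposite kind resets the other counter. This is a minimal model for illustration, not Meridian's code.

```rust
// Per-endpoint health with threshold hysteresis: an endpoint flips state
// only after `healthy_threshold` consecutive successes or
// `unhealthy_threshold` consecutive failures.
struct EndpointHealth {
    healthy: bool,
    successes: u32,
    failures: u32,
    healthy_threshold: u32,
    unhealthy_threshold: u32,
}

impl EndpointHealth {
    fn observe(&mut self, success: bool) {
        if success {
            self.failures = 0; // a single success resets the failure streak
            self.successes += 1;
            if self.successes >= self.healthy_threshold {
                self.healthy = true;
            }
        } else {
            self.successes = 0; // a single failure resets the success streak
            self.failures += 1;
            if self.failures >= self.unhealthy_threshold {
                self.healthy = false;
            }
        }
    }
}
```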

Behavior When All Endpoints Are Unhealthy

If all endpoints in a cluster are marked unhealthy, the load balancer returns no endpoints and the proxy responds with 502 Bad Gateway. A future “panic mode” feature will allow routing to all endpoints regardless of health when no healthy endpoints exist.

Architecture Overview

Meridian is structured as a Cargo workspace with a strict separation between domain logic and orchestration.

Crate Structure

meridian-core/          All domain logic
  buffer.rs             Slab allocator, buffer chains, watermarks
  codec.rs              HTTP/1.1 parser (httparse), response serializer
  config.rs             Configuration types, arc-swap store
  error.rs              Error types (thiserror)
  filter.rs             Async filter chain (RFC-0005)
  health.rs             Active health checking
  listener.rs           TCP listener, accept loop
  load_balancing.rs     Round-robin, least-request, Maglev
  observability.rs      Indexed stats, histograms
  pool.rs               Upstream connection pool
  resilience.rs         Circuit breaker (RAII), token bucket, retry
  tls.rs                Certificate loading, rustls config

meridian-proxy/         Thin orchestration shell
  main.rs               Tokio runtime, config loading
  admin.rs              Admin API server
  cluster.rs            Cluster manager (LB + CB + pool per cluster)
  conn_limit.rs         Per-IP connection limiter
  connection.rs         HTTP/1.1 connection handler
  h2_connection.rs      HTTP/2 connection handler
  metrics.rs            Proxy metrics wrapper

Design Principles

  1. Core owns all logic. The proxy crate is a thin shell that wires core components together. If you’re writing domain logic in meridian-proxy, it belongs in meridian-core.

  2. No panics in library code. Every public function in core returns Result. No .unwrap() or .expect().

  3. RAII for resource management. Circuit breaker slots and connection limits are held via guards that automatically release on drop.

  4. Lock-free on the hot path. Config reads use arc-swap (~0.6ns). Counters use Relaxed atomics. Health flags use AtomicBool.

  5. Generic stream I/O. The codec and connection handler are generic over AsyncRead + AsyncWrite + Unpin, supporting both plain TCP and TLS streams with zero-cost monomorphization.

Runtime Model

Meridian uses Tokio’s multi-threaded runtime. Each accepted connection spawns a task on the worker pool. There is no thread-per-connection model — thousands of connections share a small number of OS threads.

┌─────────────────────────────────────────────┐
│              Tokio Runtime                   │
│                                             │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  │
│  │ Worker 0 │  │ Worker 1 │  │ Worker 2 │  │
│  │          │  │          │  │          │  │
│  │ Task A   │  │ Task D   │  │ Task G   │  │
│  │ Task B   │  │ Task E   │  │ Task H   │  │
│  │ Task C   │  │ Task F   │  │ Task I   │  │
│  └──────────┘  └──────────┘  └──────────┘  │
│                                             │
│  + Accept Loop tasks (one per listener)     │
│  + Health Checker tasks (one per cluster)   │
│  + Admin Server task                        │
└─────────────────────────────────────────────┘

Data Flow

Every HTTP request through Meridian follows this path:

Client (downstream)
  → TCP accept (Listener, per-IP rate limit)
    → [TLS handshake if configured]
      → [ALPN dispatch: HTTP/1.1 or HTTP/2]
        → Header parse with 60s timeout (Http1Codec or h2)
          → Path normalization (collapse //, resolve ..)
            → Filter chain: request filters (forward order)
              → Route lookup (prefix match on normalized path)
                → Cluster lookup (ClusterManager)
                  → Circuit breaker check (RAII guard)
                    → Load balancer endpoint selection
                      → Connection pool checkout (or TCP connect with timeout)
                        → Forward request (strip hop-by-hop, add Host)
                          → Read upstream response
                            → Filter chain: response filters (reverse order)
                              → Forward response downstream (strip hop-by-hop)
                                → Connection pool checkin (if keep-alive)
                                  → Check Connection: close → loop or close

Error Handling at Each Stage

Every stage that can fail returns a generic HTTP error to the client. No internal details are leaked — cluster names, endpoint addresses, and circuit breaker state are logged but never sent to clients.

Stage                     Error                     HTTP Status
Header parse fails        Malformed request         400 Bad Request
Header timeout            Slowloris defense         408 Request Timeout
No route matches          Path not found            404 Not Found
Cluster not found         Config error              502 Bad Gateway
Circuit breaker open      Overload protection       503 Service Unavailable
No healthy endpoints      All backends down         502 Bad Gateway
Upstream connect fails    Backend unreachable       502 Bad Gateway
Upstream connect timeout  Backend slow              504 Gateway Timeout
Filter error              Internal filter failure   500 Internal Server Error
Filter rejects request    Policy enforcement        Filter-defined (e.g., 403)

Keep-Alive

HTTP/1.1 connections support keep-alive by default. Multiple requests are processed sequentially on the same connection. The connection closes when:

  • The client sends Connection: close
  • The client uses HTTP/1.0 (no keep-alive by default)
  • A parse error occurs
  • The header read timeout fires

HTTP/2 Multiplexing

HTTP/2 connections support multiple concurrent streams. Each stream is handled in its own Tokio task, enabling true parallelism within a single connection. Requests are translated from HTTP/2 to HTTP/1.1 for upstream forwarding.

HTTP/1.1 Codec

The HTTP/1.1 codec handles request parsing, response serialization, and body framing. It uses httparse for zero-copy header parsing.

Request Parsing

Http1Codec::read_request() reads from the downstream stream until it has complete headers, then parses them via httparse. The parser operates on &[u8] slices from the read buffer — no header bytes are copied into owned buffers on the hot path.

Read buffer:  [GET /api HTTP/1.1\r\nHost: example.com\r\n\r\n]
                                                              ^
httparse borrows slices: method="GET", path="/api", headers=[...]
                         (zero-copy: pointers into the buffer)

Body Framing

After headers are parsed, determine_body_framing() decides how to read the body:

Condition                                Body Treatment
No Content-Length or Transfer-Encoding   No body
Content-Length: N                        Read exactly N bytes
Transfer-Encoding: chunked               Dechunk (hex-size + CRLF + data + CRLF)
Both CL and TE present                   Rejected (request smuggling prevention)
Multiple Content-Length values           Rejected (desync prevention)
Content-Length with whitespace           Rejected (padding desync prevention)
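The decision table above is essentially a pure function over the relevant headers. This standalone sketch (simplified signature; the real determine_body_framing works on the codec's parsed-header types) shows the same decisions, including the smuggling rejections:

```rust
#[derive(Debug, PartialEq)]
enum BodyFraming {
    None,
    ContentLength(u64),
    Chunked,
}

// `content_length` carries every Content-Length value seen, so duplicate
// headers are visible to the desync check.
fn determine_framing(
    content_length: &[&str],
    transfer_encoding: Option<&str>,
) -> Result<BodyFraming, &'static str> {
    match (content_length.is_empty(), transfer_encoding) {
        (false, Some(_)) => Err("CL+TE present: request smuggling risk"),
        (true, Some(te)) if te.eq_ignore_ascii_case("chunked") => Ok(BodyFraming::Chunked),
        (true, Some(_)) => Err("unsupported transfer encoding"),
        (true, None) => Ok(BodyFraming::None),
        (false, None) => {
            if content_length.len() > 1 {
                return Err("duplicate Content-Length");
            }
            let v = content_length[0];
            if v != v.trim() {
                return Err("whitespace in Content-Length");
            }
            v.parse::<u64>()
                .map(BodyFraming::ContentLength)
                .map_err(|_| "invalid Content-Length")
        }
    }
}
```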

Chunked Transfer Encoding

The dechunker reads chunks in a loop:

  1. Read chunk-size line (hex + optional extensions + CRLF)
  2. Read that many bytes of chunk data
  3. Read trailing CRLF
  4. Repeat until chunk-size is 0
  5. Consume optional trailers + final CRLF

Maximum body size is enforced (default 16MB) using overflow-safe arithmetic.
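Step 1 of the loop, parsing the chunk-size line, can be sketched as follows. The hex accumulation uses checked arithmetic in the spirit of the overflow-safe note above; this is an illustrative parser, not the production one.

```rust
// Parse a chunk-size line: hex digits, optional ";name=value" chunk
// extensions, with the CRLF already stripped by the caller.
fn parse_chunk_size(line: &str) -> Result<u64, &'static str> {
    // Strip optional chunk extensions: "1a;ext=1" -> "1a"
    let hex = line.split(';').next().unwrap().trim();
    if hex.is_empty() {
        return Err("empty chunk-size line");
    }
    let mut size: u64 = 0;
    for c in hex.chars() {
        let digit = c.to_digit(16).ok_or("invalid hex digit")? as u64;
        // Overflow-safe: reject sizes that would wrap a u64 instead of
        // silently truncating them.
        size = size
            .checked_mul(16)
            .and_then(|s| s.checked_add(digit))
            .ok_or("chunk size overflow")?;
    }
    Ok(size)
}
```

A size of 0 signals the final chunk (step 4 of the loop).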

Request Smuggling Prevention

Meridian implements strict parsing per RFC 9110:

  • CL+TE rejection: Both Content-Length and Transfer-Encoding present = immediate rejection
  • Duplicate CL rejection: Multiple Content-Length headers = rejection
  • Whitespace CL rejection: Content-Length: 42 (trailing space) = rejection
  • TE validation: Only chunked is accepted; other encodings are rejected

These checks are fuzz-tested with 7 dedicated fuzzing targets, including a differential smuggling fuzzer that generates adversarial header combinations.

Generic Stream I/O

All codec functions accept S: AsyncRead + AsyncWrite + Unpin instead of concrete TcpStream. This allows the same parsing code to work on:

  • Plain TCP connections
  • TLS-wrapped connections (TlsStream<TcpStream>)
  • Test duplex streams
  • Any future transport type

HTTP/2 Support

Meridian accepts HTTP/2 connections from downstream clients and translates them to HTTP/1.1 for upstream backends. This is the most common deployment pattern — modern browsers and clients use HTTP/2, while many backend services still speak HTTP/1.1.

Protocol Negotiation

HTTP/2 is negotiated via ALPN (Application-Layer Protocol Negotiation) during the TLS handshake. Meridian advertises both h2 and http/1.1. After the handshake, the negotiated protocol determines which connection handler runs:

  • ALPN = h2 → handle_h2_connection (HTTP/2 handler)
  • ALPN = http/1.1 or none → handle_connection (HTTP/1.1 handler)

HTTP/2 over plain TCP (h2c) is also supported for testing and internal deployments.

Stream Multiplexing

Unlike HTTP/1.1 which processes requests sequentially, HTTP/2 multiplexes multiple streams on a single connection. Each stream is handled in its own Tokio task:

Client ──── h2 connection ───── Meridian
               ├── stream 1 ──→ task 1 ──→ upstream A (HTTP/1.1)
               ├── stream 2 ──→ task 2 ──→ upstream B (HTTP/1.1)
               └── stream 3 ──→ task 3 ──→ upstream A (HTTP/1.1)

Protocol Translation (h2 → h1)

For each HTTP/2 stream, the handler:

  1. Extracts the request — method, URI, headers from the h2 HEADERS frame
  2. Reads the body — from the h2 RecvStream with flow control
  3. Translates headers — the :authority pseudo-header becomes the Host header, hop-by-hop headers are stripped
  4. Forwards as HTTP/1.1 — standard request line + headers + body to the upstream
  5. Reads the HTTP/1.1 response — status, headers, body from upstream
  6. Translates back to h2 — builds an h2 Response, sends HEADERS + DATA frames

Flow Control

The h2 crate handles HTTP/2 flow control automatically. When the client sends body data, the handler releases flow control capacity after reading each chunk, allowing the client to send more data.

Connection Reuse

The h2 handler uses the same connection pool as HTTP/1.1. Upstream connections are checked out from the pool before each request and returned after the response, regardless of which downstream protocol the client used.

Filter Chain

The filter chain is Meridian’s extension point. Filters are async functions that inspect and modify HTTP requests and responses as they flow through the proxy.

Design

Filters are composed in a chain. Requests flow forward through the chain; responses flow backward:

Request:   Client → Filter 1 → Filter 2 → Filter 3 → Router → Upstream
Response:  Client ← Filter 1 ← Filter 2 ← Filter 3 ← Router ← Upstream

The first filter to see the request is the last to see the response.

The HttpFilter Trait

#[async_trait]
pub trait HttpFilter: Send + Sync + 'static {
    async fn on_request(&self, req: &mut RequestContext)
        -> Result<RequestAction, FilterError>;

    async fn on_response(&self, resp: &mut ResponseContext)
        -> Result<ResponseAction, FilterError>;

    fn on_complete(&self, ctx: &ExchangeContext) {}

    fn name(&self) -> &'static str;
}

Filters are async fn — not callbacks. A filter that needs to make an external call (auth service, rate limit check) simply awaits it. No manual state machines.

Request Actions

A request filter returns one of:

Action                         Effect
Continue                       Pass to the next filter
SendResponse(response)         Short-circuit: send this response immediately, skip remaining filters and upstream
Redirect { location, status }  Redirect the client

Response Actions

A response filter returns one of:

Action             Effect
Continue           Pass to the next filter (toward client)
Replace(response)  Replace the entire response

Inter-Filter Communication

Filters communicate via typed metadata attached to the request/response context:

// Filter A stores a decision
ctx.metadata.insert(AuthDecision { allowed: true, user: "alice" });

// Filter B reads it
if let Some(auth) = ctx.metadata.get::<AuthDecision>() {
    // ...
}

Metadata uses TypeId-keyed storage — O(1) lookup, type-safe, collision-free. Internally backed by a Vec with linear scan (benchmarked at 1.6ns per lookup for typical 1-5 entries).
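The storage strategy described above (TypeId keys, Vec backing, linear scan) can be sketched with std::any alone. The Metadata and AuthDecision types here are simplified stand-ins for the real context types.

```rust
use std::any::{Any, TypeId};

// TypeId-keyed metadata: a Vec of (TypeId, boxed value) pairs. Lookup is
// a linear scan, which is fast for the typical 1-5 entries; the TypeId
// key makes it type-safe and collision-free.
#[derive(Default)]
struct Metadata {
    entries: Vec<(TypeId, Box<dyn Any>)>,
}

impl Metadata {
    fn insert<T: Any>(&mut self, value: T) {
        let id = TypeId::of::<T>();
        self.entries.retain(|(k, _)| *k != id); // at most one entry per type
        self.entries.push((id, Box::new(value)));
    }

    fn get<T: Any>(&self) -> Option<&T> {
        let id = TypeId::of::<T>();
        self.entries
            .iter()
            .find(|(k, _)| *k == id)
            .and_then(|(_, v)| v.downcast_ref::<T>())
    }
}

// Example payload type (illustrative).
struct AuthDecision {
    allowed: bool,
}
```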

Dynamic Filter Chain

DynamicFilterChain holds a Vec<Arc<dyn HttpFilter>> for runtime-configurable chains:

let chain = DynamicFilterChain::from_filters(vec![
    Arc::new(AuthFilter::new()),
    Arc::new(RateLimitFilter::new()),
    Arc::new(LoggingFilter::new()),
]);

// Request path: runs filters in order
let action = chain.execute_request(&mut req_ctx).await?;

// Response path: runs filters in REVERSE order
let action = chain.execute_response(&mut resp_ctx).await?;

Performance

Operation                              Measured
5-filter chain dispatch (noop)         19ns
Metadata lookup (hit)                  1.6ns
Metadata insert                        21ns
Short-circuit (reject at filter 1/5)   13ns

Load Balancing

Meridian distributes traffic across backend endpoints using one of several algorithms. The load balancer runs on every request and selects one healthy endpoint to receive the request.

Algorithms

Round-Robin

Cycles through healthy endpoints sequentially. Each request goes to the next endpoint. Simple, predictable, and fast (~0.7ns per pick).

lb_policy = "round_robin"

The round-robin counter uses AtomicU32 with Relaxed ordering. Each worker advances the counter independently — this naturally distributes load across workers without synchronization overhead.
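The counter scheme above reduces to a fetch_add with Relaxed ordering followed by a modulo; this sketch omits the healthy-endpoint filtering for brevity.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Shared round-robin state: one AtomicU32 advanced per pick. Relaxed
// ordering is fine because the counter carries no other synchronization.
struct RoundRobin {
    counter: AtomicU32,
}

impl RoundRobin {
    fn pick(&self, num_endpoints: usize) -> usize {
        let n = self.counter.fetch_add(1, Ordering::Relaxed);
        n as usize % num_endpoints
    }
}
```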

Least Request (Power of Two Choices)

Picks two random endpoints and sends the request to the one with fewer active requests. This provides near-optimal load distribution with O(1) selection and is the best choice when backends have varying response times.

lb_policy = "least_request"

Active request counts are tracked via AtomicU32 per endpoint. The load balancer reads these with Relaxed ordering — approximate counts are sufficient for good load distribution.
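The core of power-of-two-choices is a single comparison of two active-request counters. In this sketch the two candidate indices are passed in so the example stays deterministic; the real balancer would draw them at random.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Given two candidate endpoints, pick the one with fewer active
// requests. Relaxed loads are fine: approximate counts are sufficient
// for good load distribution.
fn p2c_pick(active: &[AtomicU32], a: usize, b: usize) -> usize {
    let load_a = active[a].load(Ordering::Relaxed);
    let load_b = active[b].load(Ordering::Relaxed);
    if load_a <= load_b { a } else { b }
}
```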

Maglev (Consistent Hashing)

Maps a hash key to an endpoint with minimal disruption when endpoints are added or removed. Ideal for caching or session-sticky routing.

lb_policy = "maglev"

The Maglev table is a precomputed lookup table (65,537 entries) built once on config load. Picks are a single table lookup (~1.4ns).

Health-Aware Selection

All load balancers skip unhealthy endpoints. The Endpoint.healthy flag is an AtomicBool updated by the health checker task and read by the load balancer with no locking:

Health Checker ──[AtomicBool::store]──► Endpoint.healthy
                                              │
Load Balancer ──[AtomicBool::load]────────────┘

Performance

Algorithm            Pick Latency   Notes
Round-Robin          0.7ns          Counter + modulo
Least Request (P2C)  8-12ns         2 RNG + 2 atomic loads
Maglev               1.4ns          Table lookup

Connection Pooling

Meridian maintains a pool of idle TCP connections to upstream endpoints. Instead of performing a TCP handshake for every request (~0.5-1ms), the proxy reuses existing connections.

How It Works

Request 1: pool empty → TCP connect → use → checkin to pool
Request 2: pool has 1  → checkout    → use → checkin to pool
Request 3: pool has 1  → checkout    → use → checkin to pool
...

The first request to an endpoint creates a new connection. Subsequent requests reuse pooled connections. The TCP handshake cost is paid once, not per-request.
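The checkout/checkin cycle above can be sketched as a per-endpoint LIFO stack behind a mutex. Conn is a placeholder for the pooled upstream connection, and this sketch simplifies eviction (a full pool drops the incoming connection rather than evicting the oldest).

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

struct Conn {
    id: u32,
    returned_at: Instant,
}

// Idle connections keyed by endpoint address. The mutex is held only for
// the push/pop itself; no I/O happens under the lock.
struct Pool {
    idle: Mutex<HashMap<String, Vec<Conn>>>,
    max_idle: usize,
    max_age: Duration,
}

impl Pool {
    fn checkout(&self, endpoint: &str) -> Option<Conn> {
        let mut idle = self.idle.lock().unwrap();
        let conns = idle.get_mut(endpoint)?;
        while let Some(conn) = conns.pop() { // LIFO: warmest connection first
            if conn.returned_at.elapsed() < self.max_age {
                return Some(conn);
            }
            // Past max age: drop it and keep popping.
        }
        None // caller falls back to a fresh TCP connect
    }

    fn checkin(&self, endpoint: &str, conn: Conn) {
        let mut idle = self.idle.lock().unwrap();
        let conns = idle.entry(endpoint.to_string()).or_default();
        if conns.len() < self.max_idle {
            conns.push(conn);
        } // pool full: connection is dropped (closed)
    }
}
```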

Pool Configuration

[[clusters]]
name = "backend"
max_idle_connections = 8    # per endpoint, default 8

Pool Behavior

Property               Value
Max idle per endpoint  Configurable (default 8)
Max connection age     90 seconds
Eviction strategy      Oldest evicted when pool is full
Checkout order         LIFO (most recently returned = warmest)
Thread safety          Mutex<HashMap> (held only for push/pop, no async under lock)

Keep-Alive Detection

After receiving the upstream response, the proxy checks the Connection header before stripping hop-by-hop headers:

  • Connection: close → connection is dropped (not pooled)
  • Connection: keep-alive or absent (HTTP/1.1 default) → connection is returned to the pool

Per-Cluster Pools

Each cluster has its own ConnectionPool instance, shared across all connection handler tasks via Arc. This means:

  • Pool slots for cluster A don’t compete with cluster B
  • Circuit breaker and pool are co-located on the same ClusterState
  • Pool metrics can be reported per-cluster

Resilience

Meridian provides multiple layers of resilience to protect both backends and the proxy itself from overload and failure.

Circuit Breaker

Each cluster has a circuit breaker that limits the number of concurrent requests to the backend. When the limit is exceeded, new requests are immediately rejected with 503 Service Unavailable instead of queuing and adding to backend pressure.

[clusters.circuit_breaker]
max_connections = 1024
max_pending_requests = 1024
max_requests = 1024

RAII Guards

Circuit breaker slots are managed via RAII guards. When a request acquires a slot, it receives a CbGuard. The slot is automatically released when the guard is dropped — whether the request succeeds, fails, or panics:

try_acquire() → Some(CbGuard)  → request proceeds → guard drops → slot released
try_acquire() → None            → 503 returned immediately

This eliminates the “forgot to release” class of bugs entirely.
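A minimal sketch of the guard pattern: the `CbGuard` name comes from the docs above, but the fields and the `try_acquire` signature here are illustrative:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU32, Ordering};

struct CircuitBreaker {
    active: AtomicU32,
    max: u32,
}

// Holding a CbGuard means holding a slot; dropping it releases the slot.
struct CbGuard {
    cb: Arc<CircuitBreaker>,
}

// fetch_add + compare: optimistically take a slot, roll back on overflow.
fn try_acquire(cb: &Arc<CircuitBreaker>) -> Option<CbGuard> {
    if cb.active.fetch_add(1, Ordering::Relaxed) < cb.max {
        Some(CbGuard { cb: Arc::clone(cb) })
    } else {
        cb.active.fetch_sub(1, Ordering::Relaxed);
        None // caller returns 503 immediately
    }
}

impl Drop for CbGuard {
    fn drop(&mut self) {
        // Released on every exit path: success, error, or panic unwind.
        self.cb.active.fetch_sub(1, Ordering::Relaxed);
    }
}
```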

Performance

Circuit breaker check: 5.3ns (single fetch_add + compare).

Token Bucket Rate Limiter

For per-route rate limiting, TokenBucket implements a classic token bucket algorithm:

  • Tokens are added at a configurable rate
  • Each request consumes one token
  • When the bucket is empty, requests are rejected
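A std-only sketch of the algorithm described above, using lazy refill on each call; this `TokenBucket` is a local illustration, not Meridian's implementation:

```rust
use std::time::Instant;

struct TokenBucket {
    capacity: f64,
    tokens: f64,
    rate: f64, // tokens added per second
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, rate: f64) -> Self {
        Self { capacity, tokens: capacity, rate, last: Instant::now() }
    }

    fn try_consume(&mut self) -> bool {
        // Refill lazily based on elapsed time, capped at capacity.
        let now = Instant::now();
        self.tokens = (self.tokens
            + now.duration_since(self.last).as_secs_f64() * self.rate)
            .min(self.capacity);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false // bucket empty: reject the request
        }
    }
}
```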

Retry Policy

Configurable per-route retry policy:

[routes.retry_policy]
num_retries = 3
retry_on = ["503", "connect-failure"]

The retry decision is a pure function (~0.7ns) that checks the response status code against the retry-on list and the current attempt count against the maximum.
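Such a pure decision function might look like this; `should_retry` and its parameters are illustrative, not the actual signature:

```rust
// `status` is None when no response was received (a connect failure);
// `retry_on` entries are status codes or the "connect-failure" token,
// matching the config example above.
fn should_retry(
    attempt: u32,
    num_retries: u32,
    status: Option<u16>,
    retry_on: &[&str],
) -> bool {
    if attempt >= num_retries {
        return false; // retries exhausted
    }
    match status {
        Some(code) => retry_on.iter().any(|r| r.parse::<u16>() == Ok(code)),
        None => retry_on.contains(&"connect-failure"),
    }
}
```

Because it touches no shared state, the decision is trivially cheap and testable in isolation.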

Per-IP Connection Limits

The proxy limits connections per source IP address (default: 256) to prevent a single client from exhausting connection resources. Like the circuit breaker, this uses RAII guards for automatic cleanup:

try_acquire(ip) → Some(ConnectionGuard) → connection proceeds → guard drops → slot released
try_acquire(ip) → None                   → connection dropped (TCP RST)

Slowloris Defense

A 60-second timeout on header reading prevents slow clients from holding connections open indefinitely. If a client doesn’t complete sending headers within the timeout, the connection is closed.

Defense Summary

| Attack | Defense | Mechanism |
|---|---|---|
| Backend overload | Circuit breaker | Max concurrent requests per cluster |
| Connection exhaustion | Per-IP limits | Max 256 connections per source IP |
| Slow headers (Slowloris) | Read timeout | 60-second header read deadline |
| Large headers | Size limit | 64KB max header size |
| Request smuggling | Strict parsing | Reject CL+TE, duplicate CL, whitespace CL |
| Path traversal | Normalization | Collapse //, resolve .. before routing |

Admin API

Meridian runs an optional admin HTTP server on a separate port for operational visibility.

Configuration

admin_address = "127.0.0.1:9901"

Endpoints

GET /stats

Returns metrics in Prometheus text exposition format:

$ curl http://localhost:9901/stats

# HELP meridian_downstream_cx_total Total downstream connections accepted
# TYPE meridian_downstream_cx_total counter
meridian_downstream_cx_total 15423

# HELP meridian_downstream_cx_active Active downstream connections
# TYPE meridian_downstream_cx_active gauge
meridian_downstream_cx_active 42

# HELP meridian_circuit_breaker_rejected Circuit breaker rejections
# TYPE meridian_circuit_breaker_rejected counter
meridian_circuit_breaker_rejected 0

# HELP meridian_server_live Server liveness (1=live)
# TYPE meridian_server_live gauge
meridian_server_live 1

GET /clusters

Returns cluster state as JSON, including endpoint health and active request counts:

$ curl http://localhost:9901/clusters

{
  "clusters": [
    {
      "name": "api-backend",
      "endpoints": [
        {"address": "10.0.1.1:8080", "healthy": true, "active_requests": 5},
        {"address": "10.0.1.2:8080", "healthy": true, "active_requests": 3},
        {"address": "10.0.1.3:8080", "healthy": false, "active_requests": 0}
      ]
    }
  ]
}

GET /config

Returns a summary of the current configuration:

$ curl http://localhost:9901/config

{"listeners": 2, "clusters": 3, "routes": 5}

GET /ready

Readiness/liveness probe for container orchestrators:

$ curl http://localhost:9901/ready

LIVE

Other Paths

Unknown paths return 404 with a list of available endpoints.

Prometheus Integration

Point your Prometheus scrape config at the /stats endpoint:

scrape_configs:
  - job_name: 'meridian'
    static_configs:
      - targets: ['meridian-host:9901']
    metrics_path: '/stats'
    scrape_interval: 15s

Metrics

Meridian tracks proxy health via compile-time indexed counters, gauges, and histograms. No HashMap lookups on the hot path — metric IDs are usize constants resolved at compile time.

Available Metrics

| Metric | Type | Description |
|---|---|---|
| meridian_downstream_cx_total | Counter | Total downstream connections accepted |
| meridian_downstream_cx_completed | Counter | Successfully completed connections |
| meridian_downstream_cx_failed | Counter | Failed connections (parse errors, upstream failures) |
| meridian_downstream_cx_active | Gauge | Currently active downstream connections |
| meridian_circuit_breaker_rejected | Counter | Requests rejected by circuit breaker |
| meridian_upstream_cx_timeout | Counter | Upstream TCP connect timeouts |
| meridian_upstream_cx_error | Counter | Upstream TCP connect errors |
| meridian_server_live | Gauge | Server liveness indicator (always 1) |

Architecture

Metrics use IndexedStats<N>, a flat-array stats structure where each metric has a compile-time array index:

counters:   [cx_total, cx_completed, cx_failed, cb_rejected, timeout, error, 0, 0]
              idx 0      idx 1         idx 2      idx 3        idx 4    idx 5

gauges:     [active_connections, 0, 0, 0, 0, 0, 0, 0]
              idx 0

histograms: [connect_duration_ms, 0, 0, 0, 0, 0, 0, 0]
              idx 0

Counter increment is a single array index + add (~0.5ns). No hashing, no string comparison, no allocation.
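The idea can be sketched with const generics; the metric indices and the `IndexedStats` shape below are illustrative, and the real type also carries gauges and histograms:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Compile-time metric IDs: plain usize constants, no string keys.
const CX_TOTAL: usize = 0;
const CX_FAILED: usize = 2;

struct IndexedStats<const N: usize> {
    counters: [AtomicU64; N],
}

impl<const N: usize> IndexedStats<N> {
    fn new() -> Self {
        Self { counters: std::array::from_fn(|_| AtomicU64::new(0)) }
    }

    // One array index + one relaxed add: no hashing, no string
    // comparison, no allocation.
    fn inc(&self, id: usize) {
        self.counters[id].fetch_add(1, Ordering::Relaxed);
    }

    fn get(&self, id: usize) -> u64 {
        self.counters[id].load(Ordering::Relaxed)
    }
}
```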

Reporting

Metrics are exposed via the admin API’s /stats endpoint in Prometheus text exposition format. See Admin API for details.

Thread Safety

The current implementation uses a Mutex-protected IndexedStats. At the current scale, mutex contention is negligible (recording takes ~0.5ns, lock/unlock ~25ns). For higher scale, the architecture supports per-worker thread-local stats with periodic flush aggregation.

Security Hardening

Meridian provides defense-in-depth from the language level through the transport level to the application level.

Rust-Level Safety

Properties enforced by the compiler, not by convention:

| Guarantee | Mechanism |
|---|---|
| No buffer overflows | Bounds-checked array access |
| No use-after-free | Ownership system, Drop trait |
| No double-free | Move semantics, single owner |
| No data races | Send/Sync traits |
| No null pointer dereference | Option<T> instead of nullable pointers |
| No uninitialized memory | All variables initialized before use |

These guarantees eliminate entire vulnerability classes that have caused real CVEs in C/C++ proxies.

Protocol-Level Defenses

Request Smuggling Prevention

HTTP request smuggling exploits ambiguity between Content-Length and Transfer-Encoding. Meridian’s strict parser:

  1. Rejects requests with both Content-Length and Transfer-Encoding
  2. Rejects requests with multiple Content-Length values
  3. Rejects Content-Length with whitespace padding
  4. Only accepts chunked as a Transfer-Encoding value

These checks are verified by coverage-guided fuzzing with a dedicated smuggling-detection fuzzer.
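A simplified version of the four rules, operating on already-split header pairs rather than raw bytes; `validate_framing` is an illustrative helper, not Meridian's parser:

```rust
fn validate_framing(headers: &[(&str, &str)]) -> Result<(), &'static str> {
    let cls: Vec<&str> = headers.iter()
        .filter(|(n, _)| n.eq_ignore_ascii_case("content-length"))
        .map(|(_, v)| *v)
        .collect();
    let tes: Vec<&str> = headers.iter()
        .filter(|(n, _)| n.eq_ignore_ascii_case("transfer-encoding"))
        .map(|(_, v)| *v)
        .collect();

    if !cls.is_empty() && !tes.is_empty() {
        return Err("both Content-Length and Transfer-Encoding"); // rule 1
    }
    if cls.len() > 1 {
        return Err("multiple Content-Length values"); // rule 2
    }
    if let Some(cl) = cls.first() {
        // Rule 3: no whitespace padding, strictly numeric.
        if cl.trim() != *cl || cl.parse::<u64>().is_err() {
            return Err("malformed Content-Length");
        }
    }
    if let Some(te) = tes.first() {
        if !te.eq_ignore_ascii_case("chunked") {
            return Err("unsupported Transfer-Encoding"); // rule 4
        }
    }
    Ok(())
}
```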

Slowloris Defense

60-second timeout on header reading. Clients that don’t complete headers within this window are disconnected.

Per-IP Connection Limits

Configurable limit (default 256) on connections per source IP. Prevents a single client from exhausting connection resources. Uses RAII guards for automatic cleanup.

Path Normalization

Request paths are normalized before routing:

  • //api//data → /api/data
  • /api/../secret → /secret
  • /api/./data → /api/data
  • /../../etc/passwd → /etc/passwd
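A segment-stack sketch of these rewrites; `normalize_path` is illustrative and ignores query strings and percent-encoding, which a production normalizer must also handle:

```rust
// Collapse empty and "." segments, resolve ".." without
// ever escaping above the root.
fn normalize_path(path: &str) -> String {
    let mut out: Vec<&str> = Vec::new();
    for seg in path.split('/') {
        match seg {
            "" | "." => {}         // collapses "//" and "/./"
            ".." => { out.pop(); } // pop never goes past the root
            s => out.push(s),
        }
    }
    format!("/{}", out.join("/"))
}
```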

Header Size Limits

64KB maximum header size. 128 maximum headers per request.

Generic Error Responses

Error responses never leak internal topology. Cluster names, endpoint addresses, and circuit breaker state are logged but never sent to clients.

TLS

Meridian uses rustls for TLS termination — a pure-Rust implementation audited by Cure53. See TLS Configuration.

Fuzz Testing

All parser surfaces are continuously fuzz-tested:

  • HTTP/1.1 request parser
  • Chunked transfer-encoding dechunker
  • Body framing decision logic
  • Path normalization
  • Request smuggling detection
  • TOML configuration parser

See Fuzzing for details.

Testing

Meridian requires every component to have all test layers: unit tests, integration tests, and benchmarks for hot-path code.

Running Tests

# All tests
cargo test --workspace

# Core library only
cargo test -p meridian-core

# Proxy unit + integration tests
cargo test -p meridian-proxy

# Integration tests only
cargo test -p meridian-proxy --test integration

# Single test
cargo test -p meridian-core -- chunked_single_chunk

Test Structure

Unit Tests (97 total)

In-module #[cfg(test)] blocks testing individual functions:

| Module | Tests | What's Covered |
|---|---|---|
| buffer | 5 | Slab acquire/release, BufChain operations, watermarks |
| codec | 22 | HTTP parsing, body framing, smuggling rejection, chunked encoding |
| config | 2 | Config store load/swap, route lookup |
| filter | 16 | Chain execution order, short-circuit, metadata, error propagation |
| health | 8 | Threshold logic, TCP/HTTP checks |
| load_balancing | 4 | Round-robin, Maglev consistency/disruption |
| observability | 4 | Counters, gauges, histograms, snapshots |
| pool | 6 | Checkout/checkin, max idle, expiry eviction |
| resilience | 4 | Circuit breaker RAII, token bucket, retry, outlier detection |
| tls | 4 | Cert loading, error display |
| conn_limit | 4 | Per-IP limits, RAII guards |
| connection | 7 | Path normalization |

Integration Tests (15 total)

End-to-end tests in meridian-proxy/tests/integration.rs that spin up mock backends and proxy instances:

| Test | What's Verified |
|---|---|
| Round-robin distribution | 4 requests alternate between 2 backends |
| Circuit breaker rejection | Requests rejected when CB limit reached |
| Connect timeout | 504 returned on unreachable backend |
| HTTP path routing | Prefix matching routes to correct cluster |
| Path traversal prevention | /../ normalized before routing |
| Opaque error responses | No cluster names or IPs in error bodies |
| Filter chain header inject | Response filter adds header |
| Filter chain request reject | Request filter returns 403 |
| Health check failover | Unhealthy endpoint skipped by LB |
| Connection pool reuse | 4 requests, 1 upstream TCP connection |
| Chunked response | Dechunked upstream response forwarded |
| Chunked request body | Chunked request body forwarded |
| TLS termination | Full HTTPS flow with self-signed cert |
| Admin API | /stats, /clusters, /config, /ready endpoints |
| HTTP/2 proxy | h2 client → proxy → h1 upstream → h2 response |

Test Patterns

  • Mock backends use TcpListener::bind("127.0.0.1:0") for OS-assigned ports
  • Integration tests sleep 50ms after spawning servers to ensure they’re listening
  • TLS tests use rcgen for runtime-generated self-signed certificates
  • Tests use Connection: close to avoid keep-alive interactions

Benchmarks

Meridian uses Criterion for microbenchmarks, with a dedicated meridian-bench crate that mirrors core types for isolated measurement.

Running Benchmarks

# All benchmarks
cargo bench -p meridian-bench

# Specific suite
cargo bench -p meridian-bench --bench filter
cargo bench -p meridian-bench --bench http_codecs
cargo bench -p meridian-bench --bench load_balancing

Benchmark Suites

| Suite | What's Measured |
|---|---|
| buffers | Slab acquire/release, BufChain push, split, watermark |
| config | Config read latency, Arc clone, route lookup |
| filter | Dynamic chain dispatch, metadata insert/lookup |
| http_codecs | HTTP parse throughput, header pool, protocol translation |
| load_balancing | RR/LeastRequest/Maglev pick, table build |
| observability | Counter increment, histogram record, snapshot |
| resilience | Circuit breaker, token bucket, retry decision |

Performance Scorecard

| Component | Benchmark | Target | Measured | Status |
|---|---|---|---|---|
| Config | config_read | <1ns | 0.68ns | Pass |
| Buffers | slab_acquire_release | <15ns | 5.3ns | Pass |
| HTTP | parse_simple_request | <200ns | 88ns | Pass |
| HTTP | parse_10_headers | <500ns | 151ns | Pass |
| LB | round_robin/pick | <10ns | 0.73ns | Pass |
| LB | maglev/pick | <15ns | 1.41ns | Pass |
| Resilience | circuit_breaker/try_acquire | <10ns | 5.3ns | Pass |
| Observability | counter/increment | <5ns | 2.9ns | Pass |
| Filter | dynamic_chain/5_noop | <25ns | 19ns | Pass |
| Filter | metadata/lookup_hit | <10ns | 1.6ns | Pass |

Overall: 21/26 targets met (81%)

Bench Crate Rules

  • The bench crate has its own reimplementations of core types — it does NOT depend on meridian-core
  • Benchmarks test realistic workloads, not trivial inputs
  • SmallRng (not ThreadRng) for deterministic, fast benchmarks
  • criterion::black_box prevents dead code elimination

Fuzzing

Meridian’s parsers are continuously fuzz-tested using cargo-fuzz with LLVM’s libFuzzer. The fuzzing harness uses coverage-guided mutation, structure-aware input generation, and security invariant assertions.

Running the Fuzzer

# Requires nightly Rust
rustup toolchain install nightly

# Run a specific target (runs until stopped)
cargo +nightly fuzz run http1_request

# Run with HTTP dictionary (faster exploration)
cargo +nightly fuzz run http1_request -- -dict=fuzz/dictionaries/http.dict

# Run for a fixed time
cargo +nightly fuzz run http1_request -- -max_total_time=3600

# Run all targets
for t in http1_request http1_structured body_framing header_smuggling \
         path_normalize config_toml chunked_body; do
  cargo +nightly fuzz run $t -- -max_total_time=3600
done

Fuzz Targets

| Target | Technique | Speed | What It Tests |
|---|---|---|---|
| http1_request | Raw bytes | ~200K/sec | HTTP/1.1 parser never panics |
| http1_structured | Structure-aware | ~79K/sec | Deep parser states via arbitrary |
| body_framing | Structure-aware | ~67K/sec | CL+TE smuggling detection |
| header_smuggling | Differential | ~62K/sec | Adversarial header combos with invariant checks |
| path_normalize | Raw bytes | ~142K/sec | Idempotence, no traversal, no double slashes |
| config_toml | Raw UTF-8 | ~48K/sec | TOML deserializer never panics |
| chunked_body | Raw bytes (async) | ~26K/sec | Dechunker respects size limits |

Techniques

Coverage-Guided Mutation

libFuzzer tracks which code branches are covered by each input. Inputs that reach new branches are kept and mutated further. This systematically explores the parser’s state space rather than generating random noise.

Structure-Aware Fuzzing

The http1_structured and body_framing targets use the arbitrary crate to generate structured HTTP requests. The fuzzer mutates structured fields (methods, headers, versions) independently while maintaining HTTP-like syntax, reaching deeper parser states faster than raw byte mutation.

Security Invariant Assertions

The header_smuggling and body_framing targets embed security invariants as assertions:

// If both Content-Length and Transfer-Encoding are present, MUST reject
if has_cl && has_te {
    assert!(result.is_err(), "SMUGGLING: CL+TE accepted!");
}

If the fuzzer finds an input that violates these invariants, it’s a real security bug.

HTTP Dictionary

The fuzz/dictionaries/http.dict file contains HTTP protocol tokens (methods, headers, delimiters, smuggling payloads) that seed the fuzzer’s mutation engine for faster exploration of HTTP-specific code paths.

Bugs Found

The fuzzer has found and fixed:

  1. Integer overflow in chunked body size check — A chunk with hex size ffffffffffffffff caused body.len() + chunk_size to wrap around usize, bypassing the max body size limit. Fixed with overflow-safe arithmetic.
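The fix pattern is checked arithmetic; `MAX_BODY` and `accept_chunk` below are illustrative stand-ins for the dechunker's actual limit and API:

```rust
const MAX_BODY: usize = 1 << 20; // hypothetical 1 MiB limit

// checked_add prevents the wraparound that let huge hex chunk sizes
// (e.g. ffffffffffffffff) slip past a naive `body_len + chunk_size` check.
fn accept_chunk(body_len: usize, chunk_size: usize) -> Result<usize, &'static str> {
    match body_len.checked_add(chunk_size) {
        Some(total) if total <= MAX_BODY => Ok(total),
        _ => Err("body too large"),
    }
}
```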

Crash Artifacts

When a crash is found, the input is saved to fuzz/artifacts/<target>/crash-<hash>. To reproduce:

cargo +nightly fuzz run chunked_body fuzz/artifacts/chunked_body/crash-<hash>

Coding Standards

These rules apply to every line of code in Meridian. They are enforced by CI checks, pre-commit hooks, and code review.

Correctness

  • No .unwrap() or .expect() in library code (meridian-core). Binary main() may use .expect() for one-time setup only.
  • All public functions in core return Result<T, E>.
  • Every error variant must be tested.
  • All public types derive Debug. Data types also derive Clone, PartialEq where sensible.

Ownership & Allocation

  • Borrow over clone. If you’re cloning, justify it.
  • Zero allocations in hot-path packet parsing. The codec uses &[u8] slices.
  • Arc for shared ownership across tasks. & references within a single task.

Concurrency

  • Ordering::Relaxed for counters. SeqCst only with written justification.
  • Circuit breaker uses RAII guards (CbGuard) — acquire on entry, drop on exit.
  • Per-IP connection limiter uses RAII guards (ConnectionGuard).
  • Arc<ConfigStore> with arc-swap for lock-free config reads.

Error Handling

  • One error enum per module in core (e.g., CodecError, FilterError).
  • Proxy crate uses anyhow::Result for application-level errors.
  • Error responses to clients are generic — no internal topology leakage.
  • Internal details go to structured logs only.

Security

  • All network data is untrusted input. Codec validates headers, rejects smuggling.
  • Path normalization before routing (collapse //, resolve ..).
  • 60-second header-read timeout (Slowloris defense).
  • 256 max connections per source IP (configurable).
  • No unsafe without a // SAFETY: comment.

Formatting & Quality

# These must all pass before commit
cargo fmt --check
cargo clippy --all-targets -- -D warnings
cargo test --workspace

  • cargo fmt is law. No exceptions.
  • cargo clippy -- -D warnings must pass. No #[allow] without a comment explaining why.
  • Comments explain why, not what. No // increment counter above counter += 1.
  • Module-level doc comments (///) on all public items.

Performance Targets

Every hot-path operation in Meridian has a benchmark target from the RFCs and a measured result from Criterion.

Full Scorecard

| Component | Benchmark | Target | Measured | Status |
|---|---|---|---|---|
| Config | config_read | <1ns | 0.68ns | Pass |
| Config | arc_clone | <5ns | 3.4ns | Pass |
| Buffers | slab_acquire_release | <15ns | 5.3ns | Pass |
| Buffers | bufchain_push_4KB | <50ns | 22.8ns | Pass |
| HTTP | parse_simple_request | <200ns | 88.1ns | Pass |
| HTTP | parse_10_headers | <500ns | 151ns | Pass |
| LB | round_robin/pick | <10ns | 0.73ns | Pass |
| LB | maglev/pick | <15ns | 1.41ns | Pass |
| LB | maglev/table_build/100 | <1ms | 4.35ms | Miss |
| Resilience | circuit_breaker/try_acquire | <10ns | 5.3ns | Pass |
| Resilience | token_bucket/acquire | <15ns | 21ns | Miss |
| Observability | counter/increment | <5ns | 2.9ns | Pass |
| Filter | dynamic_chain/5_noop | <25ns | 19.1ns | Pass |
| Filter | metadata/lookup_hit | <10ns | 1.6ns | Pass |
| Filter | metadata/insert | <20ns | 21.0ns | Borderline |

Overall: 21/26 targets met (81%)

Design Philosophy

No optimization without a benchmark proving the need. But the targets above are hard requirements — design for them from the start.

The benchmark crate (meridian-bench) uses standalone reimplementations of core types to isolate measurement. It does NOT depend on meridian-core — this ensures benchmarks measure the data structure, not the crate’s compilation overhead.

Key Design Decisions for Performance

| Decision | Rationale | Impact |
|---|---|---|
| Vec-based metadata (not HashMap) | Linear scan beats hash for <8 entries | Lookup: 11.9ns → 1.6ns |
| AtomicU32 RR counter (Relaxed) | No cache-line bouncing between workers | 0.73ns per pick |
| Flat-array stats (not HashMap) | Compile-time metric IDs, array index | 0.48ns per increment |
| arc-swap for config (not Mutex) | Lock-free reads on every request | 0.68ns per read |
| RAII circuit breaker guards | No "forgot to release" bugs, no branches | 5.3ns per acquire |

Running Benchmarks

cargo bench -p meridian-bench

Results with HTML reports are generated in target/criterion/.

Error Handling

Meridian uses typed, explicit error handling throughout. No panics, no silent failures.

Error Types

CodecError (HTTP parsing)

pub enum CodecError {
    Io(std::io::Error),              // I/O layer failure
    Parse(String),                   // Malformed HTTP
    HeadersTooLarge,                 // >64KB headers
    InvalidHeader(String),           // Invalid header name/value
    RequestSmuggling,                // Both CL and TE present
    InvalidContentLength(String),    // Non-numeric or padded CL
    UnsupportedVersion,              // Not HTTP/1.0 or 1.1
    ConnectionClosed,                // Peer closed mid-parse
    ChunkedEncoding(String),         // Invalid chunked framing
    BodyTooLarge(usize),             // Body exceeds limit
}

FilterError (filter chain)

pub enum FilterError {
    Internal { filter: &'static str, source: Box<dyn Error> },
    Abort { filter: &'static str, reason: String },
    Timeout { filter: &'static str, elapsed: Duration },
}

TlsError (TLS configuration)

pub enum TlsError {
    Io { path: String, source: std::io::Error },
    NoCertificates(String),
    NoPrivateKey(String),
    Config(String),
}

MeridianError (top-level)

pub enum MeridianError {
    Io(std::io::Error),
    Config(String),
    FilterRejection { reason: String, status: u16 },
    UpstreamUnavailable(String),
    CircuitBreakerOpen { cluster: String },
    RateLimited,
    Timeout { elapsed_ms: u64 },
    Protocol(String),
    Parse(String),
}

Error-to-HTTP Mapping

Errors are mapped to generic HTTP status codes. Internal details are never sent to clients:

| Internal Error | Client Sees | Why Generic |
|---|---|---|
| CodecError::Parse | 400 Bad Request | Don't reveal parser internals |
| No route match | 404 Not Found | Safe to expose |
| Cluster not found | 502 Bad Gateway | Don't reveal cluster names |
| CB open | 503 Service Unavailable | Don't reveal CB state |
| No healthy endpoints | 502 Bad Gateway | Don't reveal endpoint topology |
| Upstream connect fail | 502 Bad Gateway | Don't reveal upstream addresses |
| Upstream timeout | 504 Gateway Timeout | Don't reveal timeout config |
| Filter error | 500 Internal Server Error | Don't reveal filter internals |

All internal details are logged via tracing for operator debugging.
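A sketch of such a mapping over a subset of the `MeridianError` variants (redefined locally for illustration); note the cluster name is matched but never included in the response:

```rust
// Local subset of the top-level error enum, for illustration only.
enum MeridianError {
    Parse(String),
    CircuitBreakerOpen { cluster: String },
    UpstreamUnavailable(String),
    Timeout { elapsed_ms: u64 },
}

// Every arm returns only a generic status line; the payload fields
// (cluster name, upstream address, elapsed time) go to logs, not clients.
fn to_status(err: &MeridianError) -> (u16, &'static str) {
    match err {
        MeridianError::Parse(_) => (400, "Bad Request"),
        MeridianError::CircuitBreakerOpen { .. } => (503, "Service Unavailable"),
        MeridianError::UpstreamUnavailable(_) => (502, "Bad Gateway"),
        MeridianError::Timeout { .. } => (504, "Gateway Timeout"),
    }
}
```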

Conventions

  • Core library: every public function returns Result<T, ModuleError>
  • Proxy crate: uses anyhow::Result for application-level errors
  • No .unwrap() in core library code (enforced by pre-commit hook)
  • One error enum per module with thiserror #[derive(Error)]

RFC Index

Meridian’s architecture is defined by 13 RFCs. Each RFC specifies types, APIs, error handling, performance targets, and open questions for a subsystem.

RFC List

| RFC | Title | Covers | Phase |
|---|---|---|---|
| RFC-0000 | Specification Index | Master index, dependency graph | |
| RFC-0001 | Goals & Prior Art | Motivation, non-goals, Envoy/HAProxy/Nginx lessons | 1 |
| RFC-0002 | Process Architecture | Tokio runtime, worker model, task spawning | 1 |
| RFC-0003 | Network I/O | Listener, accept loop, connection lifecycle | 1 |
| RFC-0004 | Buffer Architecture | SlabPool, BufChain, WatermarkBuffer, zero-copy | 2 |
| RFC-0005 | Filter Pipeline | NetworkFilter, HttpFilter, FilterChain traits | 8 |
| RFC-0006 | HTTP Codecs | HTTP/1.1 parser, HTTP/2 (h2), body framing | 5, 14 |
| RFC-0007 | Configuration | TOML config, xDS, hot reload | 1, 15 |
| RFC-0008 | Load Balancing | RR, LeastRequest, Maglev, cluster management | 3 |
| RFC-0009 | Observability | IndexedStats, histograms, admin API | 4, 13 |
| RFC-0010 | Resilience | CircuitBreaker, TokenBucket, RetryPolicy | 3 |
| RFC-0011 | Security | TLS, mTLS, rate limiting, attack mitigations | 7, 10 |
| RFC-0012 | Benchmarks | Methodology, acceptance criteria, harness | All |

Implementation Phases

| Phase | RFC(s) | Status |
|---|---|---|
| 1: Foundation | 0001, 0002, 0003 | Done |
| 2: Buffers & Memory | 0004 | Done |
| 3: Load Balancing & Resilience | 0008, 0010 | Done |
| 4: Observability | 0009 | Done |
| 5: HTTP/1.1 Codec | 0006 | Done |
| 6: L7 Proxy Integration | | Done |
| 7: Security Hardening | 0011 | Done |
| 8: Filter Chain | 0005 | Done |
| 9: Chunked Transfer Encoding | 0006 | Done |
| 10: TLS Termination | 0011 | Done |
| 11: Connection Pooling | | Done |
| 12: Health Checking | 0008 | Done |
| 13: Admin API & Metrics | 0009 | Done |
| 14: HTTP/2 | 0006 | Done |
| 15: xDS Hot Reload | 0007 | Next |

Security Narrative

In addition to the RFCs, SECURITY-NARRATIVE.md provides a dual-perspective threat analysis covering 6 phases of attack and defense, from network probing through request smuggling to data exfiltration.