Architecture Overview
Meridian is structured as a Cargo workspace with a strict separation between domain logic and orchestration.
Crate Structure
```
meridian-core/            # All domain logic
├── buffer.rs             # Slab allocator, buffer chains, watermarks
├── codec.rs              # HTTP/1.1 parser (httparse), response serializer
├── config.rs             # Configuration types, arc-swap store
├── error.rs              # Error types (thiserror)
├── filter.rs             # Async filter chain (RFC-0005)
├── health.rs             # Active health checking
├── listener.rs           # TCP listener, accept loop
├── load_balancing.rs     # Round-robin, least-request, Maglev
├── observability.rs      # Indexed stats, histograms
├── pool.rs               # Upstream connection pool
├── resilience.rs         # Circuit breaker (RAII), token bucket, retry
└── tls.rs                # Certificate loading, rustls config

meridian-proxy/           # Thin orchestration shell
├── main.rs               # Tokio runtime, config loading
├── admin.rs              # Admin API server
├── cluster.rs            # Cluster manager (LB + CB + pool per cluster)
├── conn_limit.rs         # Per-IP connection limiter
├── connection.rs         # HTTP/1.1 connection handler
├── h2_connection.rs      # HTTP/2 connection handler
└── metrics.rs            # Proxy metrics wrapper
```
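A two-crate workspace like this is typically declared in a root manifest. The fragment below is a minimal sketch of what that root `Cargo.toml` could look like; the project's actual manifest (version pins, profiles, shared dependencies) is not shown in this document.

```toml
# Hypothetical root Cargo.toml for the workspace layout described above.
[workspace]
members = ["meridian-core", "meridian-proxy"]
resolver = "2"
```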
Design Principles
- **Core owns all logic.** The proxy crate is a thin shell that wires core components together. If you’re writing domain logic in `meridian-proxy`, it belongs in `meridian-core`.
- **No panics in library code.** Every public function in core returns `Result`. No `.unwrap()` or `.expect()`.
- **RAII for resource management.** Circuit breaker slots and connection limits are held via guards that automatically release on drop.
- **Lock-free on the hot path.** Config reads use `arc-swap` (~0.6 ns). Counters use `Relaxed` atomics. Health flags use `AtomicBool`.
- **Generic stream I/O.** The codec and connection handler are generic over `AsyncRead + AsyncWrite + Unpin`, supporting both plain TCP and TLS streams with zero-cost monomorphization.
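The RAII principle can be illustrated with a small sketch. The types and method names below (`ConnLimiter`, `ConnGuard`, `try_acquire`) are hypothetical, not Meridian's actual API; the point is the pattern: acquiring a slot returns a guard, and dropping the guard releases the slot, even on early return or panic.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

/// Illustrative per-IP connection limiter (names are hypothetical).
struct ConnLimiter {
    active: Arc<AtomicUsize>,
    max: usize,
}

/// RAII guard: holding one means a slot is reserved.
struct ConnGuard {
    active: Arc<AtomicUsize>,
}

impl ConnLimiter {
    fn new(max: usize) -> Self {
        Self { active: Arc::new(AtomicUsize::new(0)), max }
    }

    /// Try to reserve a slot; returns `None` once the limit is reached.
    fn try_acquire(&self) -> Option<ConnGuard> {
        let mut cur = self.active.load(Ordering::Relaxed);
        loop {
            if cur >= self.max {
                return None;
            }
            // CAS loop so two racing acquirers cannot both take the last slot.
            match self.active.compare_exchange_weak(
                cur, cur + 1, Ordering::AcqRel, Ordering::Relaxed,
            ) {
                Ok(_) => return Some(ConnGuard { active: Arc::clone(&self.active) }),
                Err(actual) => cur = actual,
            }
        }
    }

    fn active(&self) -> usize {
        self.active.load(Ordering::Relaxed)
    }
}

impl Drop for ConnGuard {
    fn drop(&mut self) {
        // Slot is released automatically when the guard goes out of scope.
        self.active.fetch_sub(1, Ordering::AcqRel);
    }
}

fn main() {
    let limiter = ConnLimiter::new(2);
    let g1 = limiter.try_acquire();
    let _g2 = limiter.try_acquire();
    assert!(limiter.try_acquire().is_none()); // limit reached
    drop(g1);
    assert!(limiter.try_acquire().is_some()); // slot freed by the drop above
}
```

Because release lives in `Drop`, a connection handler that bails out mid-request can never leak a circuit-breaker slot or limiter permit.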
Runtime Model
Meridian uses Tokio’s multi-threaded runtime. Each accepted connection spawns a task on the worker pool. There is no thread-per-connection model — thousands of connections share a small number of OS threads.
┌─────────────────────────────────────────────┐
│ Tokio Runtime │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Worker 0 │ │ Worker 1 │ │ Worker 2 │ │
│ │ │ │ │ │ │ │
│ │ Task A │ │ Task D │ │ Task G │ │
│ │ Task B │ │ Task E │ │ Task H │ │
│ │ Task C │ │ Task F │ │ Task I │ │
│ └──────────┘ └──────────┘ └──────────┘ │
│ │
│ + Accept Loop tasks (one per listener) │
│ + Health Checker tasks (one per cluster) │
│ + Admin Server task │
└─────────────────────────────────────────────┘
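The M:N relationship in the diagram (many tasks multiplexed onto a few worker threads) can be sketched with nothing but the standard library. This is a deliberately tiny toy, not Tokio: Tokio's scheduler adds work stealing, an I/O driver, and cooperative yielding, but the core idea of a shared task queue drained by a fixed worker pool is the same.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// Toy M:N scheduler: run `num_tasks` closures on `num_workers` OS threads.
/// Returns the number of tasks that completed.
fn run_tasks(num_workers: usize, num_tasks: usize) -> usize {
    let (tx, rx) = mpsc::channel::<Box<dyn FnOnce() + Send>>();
    let rx = Arc::new(Mutex::new(rx));
    let done = Arc::new(AtomicUsize::new(0));

    // Fixed worker pool: each thread pulls tasks until the queue closes.
    let workers: Vec<_> = (0..num_workers)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                let task = match rx.lock().unwrap().recv() {
                    Ok(t) => t,
                    Err(_) => break, // channel closed: no more tasks
                };
                task();
            })
        })
        .collect();

    // "Spawn" tasks: nine tasks on three workers mirrors the diagram above.
    for _ in 0..num_tasks {
        let done = Arc::clone(&done);
        tx.send(Box::new(move || {
            done.fetch_add(1, Ordering::Relaxed);
        }))
        .unwrap();
    }
    drop(tx); // close the queue so idle workers exit

    for w in workers {
        w.join().unwrap();
    }
    done.load(Ordering::Relaxed)
}

fn main() {
    println!("{}", run_tasks(3, 9)); // prints 9
}
```

The practical consequence for Meridian is that a blocking call inside any task would stall every other task sharing that worker, which is why all I/O in the handlers is async.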