System Overview
GasHammer is structured as a distributed system with a central control plane (the hive) and regional workers (the edges). This architecture scales load generation across multiple machines while keeping orchestration and data collection centralized.
Design Principles
- Wire-protocol-only integration — GasHammer never imports Nitro crate code. All interaction is via JSON-RPC, WebSocket feed, gRPC, and L1 contract reads.
- Gas-first workload modeling — workloads are defined by target gas/sec, not requests/sec, since gas, not request count, is the resource the chain actually meters. This models blockchain resource consumption more faithfully.
- Deterministic reproducibility — seeded PRNGs make the generated workload identical across runs, so results can be compared reliably.
- Correctness by default — every run can verify invariants, not just measure throughput.
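The gas-first and determinism principles meet in the rate controller: a seeded PRNG decides the workload, and pacing is derived from a gas budget rather than a request count. The following is a minimal Rust sketch of that idea; the names (`Xorshift64`, `pacing_delay_ms`) are hypothetical, not the real gashammer-workload API.

```rust
// Sketch: gas-first pacing with a deterministic, seeded PRNG.
// All names here are illustrative, not the actual gashammer-workload API.

/// Minimal xorshift64 PRNG: the same seed always yields the same
/// sequence, which is what makes generated workloads reproducible.
struct Xorshift64 {
    state: u64,
}

impl Xorshift64 {
    fn new(seed: u64) -> Self {
        // Avoid the all-zero state, which xorshift can never leave.
        Self { state: seed.max(1) }
    }

    fn next(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }
}

/// Convert a transaction's estimated gas cost into the delay to wait
/// before sending the next one, given a target gas/sec budget.
fn pacing_delay_ms(tx_gas: u64, target_gas_per_sec: u64) -> u64 {
    // time = gas / (gas per second), scaled to milliseconds
    tx_gas * 1000 / target_gas_per_sec
}

fn main() {
    // Two PRNGs with the same seed draw identical sequences.
    let mut a = Xorshift64::new(42);
    let mut b = Xorshift64::new(42);
    assert_eq!(a.next(), b.next());

    // A 100k-gas tx under a 1M gas/sec budget paces at 100 ms.
    assert_eq!(pacing_delay_ms(100_000, 1_000_000), 100);
    println!("ok");
}
```

The design choice this illustrates: holding gas/sec constant keeps load stable even when the transaction mix changes, which requests/sec cannot guarantee.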
Crate Architecture
gashammer/
├── gashammer-common      Shared types, config, build info, error codes
├── gashammer-nitro       Nitro protocol adapters (RPC, feed, L1 contracts)
├── gashammer-hive        Hive control plane (REST API, gRPC, orchestration)
├── gashammer-edge        Edge runtime (tx pipeline, feed correlator)
├── gashammer-workload    Workload engine (templates, gas modeling, rate control)
├── gashammer-telemetry   Event model, pipeline, Parquet storage
├── gashammer-oracle      Correctness checks (invariants, verdicts)
├── gashammer-fault       Fault injection (network faults, timeline, safety)
├── gashammer-report      Reporting (latency, capacity envelope, regression)
├── gashammer-scenario    Scenario Definition Language (parser, validator, compiler)
├── gashammer-docgen      Documentation engine (syn parser, mdBook generation)
└── gashammer-testenv     Testcontainers Nitro devnet orchestration
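To illustrate the kind of shared type gashammer-common carries, here is a sketch of a stable error-code enum usable by both hive and edge; the variants and numeric codes are invented for this example, not taken from the actual crate.

```rust
// Sketch of a shared error-code type as gashammer-common might define it.
// Variants and codes are hypothetical.
use std::fmt;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ErrorCode {
    RpcTimeout = 1001,     // JSON-RPC call exceeded its deadline
    FeedDisconnect = 1002, // sequencer feed WebSocket dropped
    NonceGap = 1003,       // submitted nonce out of sequence
}

impl fmt::Display for ErrorCode {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Stable wire representation shared by hive and edge logs.
        write!(f, "GH-{}", *self as u32)
    }
}

fn main() {
    assert_eq!(ErrorCode::RpcTimeout.to_string(), "GH-1001");
    assert_eq!(ErrorCode::NonceGap.to_string(), "GH-1003");
    println!("ok");
}
```

Keeping such codes in one crate means a telemetry event written by an edge can be decoded by the hive and the report engine without cross-crate drift.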
Component Interactions
                    ┌──────────────────────┐
                    │         Hive         │
                    │                      │
                    │  ┌────────────────┐  │
REST API ──────────▶│  │  Orchestrator  │  │
 (scenarios,        │  │(run lifecycle, │  │
  runs, edges)      │  │  phase sync)   │  │
                    │  └───────┬────────┘  │
                    │          │ gRPC      │
                    │  ┌───────┴────────┐  │
                    │  │ Edge Registry  │  │
                    │  │  (heartbeat,   │  │
                    │  │ health check)  │  │
                    │  └───────┬────────┘  │
                    │          │           │
                    │  ┌───────┴────────┐  │
                    │  │ Telemetry Sink │  │
                    │  │(Parquet files) │  │
                    │  └───────┬────────┘  │
                    │          │           │
                    │  ┌───────┴────────┐  │
                    │  │ Report Engine  │  │
                    │  │ (latency, SOE, │  │
                    │  │  regression)   │  │
                    │  └────────────────┘  │
                    └──────────┬───────────┘
                               │
                               │ gRPC (mTLS)
            ┌──────────────────┼──────────────────┐
            │                  │                  │
      ┌─────┴──────┐     ┌─────┴──────┐     ┌─────┴──────┐
      │   Edge 1   │     │   Edge 2   │     │   Edge N   │
      │            │     │            │     │            │
      │ ┌────────┐ │     │ ┌────────┐ │     │ ┌────────┐ │
      │ │Workload│ │     │ │Workload│ │     │ │Workload│ │
      │ │ Engine │ │     │ │ Engine │ │     │ │ Engine │ │
      │ └───┬────┘ │     │ └───┬────┘ │     │ └───┬────┘ │
      │     │      │     │     │      │     │     │      │
      │ ┌───┴────┐ │     │ ┌───┴────┐ │     │ ┌───┴────┐ │
      │ │   Tx   │ │     │ │   Tx   │ │     │ │   Tx   │ │
      │ │Pipeline│ │     │ │Pipeline│ │     │ │Pipeline│ │
      │ └───┬────┘ │     │ └───┬────┘ │     │ └───┬────┘ │
      └─────┼──────┘     └─────┼──────┘     └─────┼──────┘
            │                  │                  │
            └──────────────────┼──────────────────┘
                               │
                         ┌─────┴──────┐
                         │   Nitro    │
                         │   Rollup   │
                         │(Sequencer, │
                         │  Gateway,  │
                         │   Feed)    │
                         └────────────┘
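The Edge Registry's heartbeat and health-check role can be sketched as a staleness check over last-seen timestamps. The `EdgeRegistry` type and the 10-second threshold below are illustrative assumptions, not the real gashammer-hive implementation.

```rust
// Sketch: marking edges unhealthy when heartbeats go stale.
// Type name and timeout are hypothetical.
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct EdgeRegistry {
    last_heartbeat: HashMap<String, Instant>,
    timeout: Duration,
}

impl EdgeRegistry {
    fn new(timeout: Duration) -> Self {
        Self { last_heartbeat: HashMap::new(), timeout }
    }

    /// Record a heartbeat from an edge (e.g. on each gRPC ping).
    fn heartbeat(&mut self, edge_id: &str) {
        self.last_heartbeat.insert(edge_id.to_string(), Instant::now());
    }

    /// An edge is healthy if it has checked in at least once and its
    /// last heartbeat is within the timeout window.
    fn is_healthy(&self, edge_id: &str) -> bool {
        self.last_heartbeat
            .get(edge_id)
            .map(|t| t.elapsed() <= self.timeout)
            .unwrap_or(false)
    }
}

fn main() {
    let mut reg = EdgeRegistry::new(Duration::from_secs(10));
    reg.heartbeat("edge-1");
    assert!(reg.is_healthy("edge-1"));
    assert!(!reg.is_healthy("edge-2")); // never registered
    println!("ok");
}
```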
Data Flow
- User submits a scenario to the hive via REST API
- Hive validates and compiles the scenario, then creates a run
- Hive distributes the run configuration to registered edges via gRPC
- Each edge starts its workload engine, which generates transactions at the target gas rate
- The tx pipeline signs and submits transactions to the Nitro sequencer
- The feed correlator matches submitted txs against the sequencer feed
- Telemetry events flow from edges back to the hive via gRPC streaming
- The hive writes events to Parquet files for durable storage
- The correctness oracle evaluates invariants during and after the run
- The report engine generates the final report with latency analysis, capacity envelope, and verdicts
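The feed-correlation step above amounts to a pending-map lookup: record each submitted tx hash, then match it when the sequencer feed reports it and compute the submit-to-feed latency. A minimal Rust sketch, with the `Correlator` type and millisecond timestamps as illustrative assumptions rather than the real gashammer-edge types:

```rust
// Sketch: correlating submitted txs against the sequencer feed.
// Type and field names are hypothetical.
use std::collections::HashMap;

struct Correlator {
    // tx hash -> submit timestamp (ms); removed once seen on the feed
    pending: HashMap<String, u64>,
}

impl Correlator {
    fn new() -> Self {
        Self { pending: HashMap::new() }
    }

    /// Called by the tx pipeline after submitting a transaction.
    fn on_submit(&mut self, tx_hash: &str, submitted_at_ms: u64) {
        self.pending.insert(tx_hash.to_string(), submitted_at_ms);
    }

    /// Called for each feed entry; returns the submit-to-feed latency
    /// in ms if the tx was one of ours, or None otherwise.
    fn on_feed(&mut self, tx_hash: &str, seen_at_ms: u64) -> Option<u64> {
        self.pending
            .remove(tx_hash)
            .map(|t| seen_at_ms.saturating_sub(t))
    }
}

fn main() {
    let mut c = Correlator::new();
    c.on_submit("0xabc", 1_000);
    assert_eq!(c.on_feed("0xabc", 1_250), Some(250)); // 250 ms latency
    assert_eq!(c.on_feed("0xdef", 1_300), None); // not one of ours
    println!("ok");
}
```

Whatever is still in the pending map at the end of a run is exactly the set of submitted-but-never-sequenced transactions, which is useful input for the correctness oracle.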