GasHammer

Adversarial load-testing and correctness-verification platform for Arbitrum Nitro rollup infrastructure.

GasHammer is designed to stress-test Nitro rollup deployments by generating realistic, gas-modeled transaction workloads and verifying correctness invariants under load. It interacts with Nitro exclusively through wire protocols (JSON-RPC, WebSocket feed, gRPC) without importing any Nitro internals.

Key Capabilities

  • Gas-first workload modeling — workloads are defined by target gas/sec, not requests/sec. Transaction templates specify gas profiles that accurately model real-world usage patterns.
  • Deterministic reproducibility — seeded PRNGs ensure that the same scenario configuration produces the same transaction sequence every time.
  • Correctness verification — an oracle checks invariants (balance conservation, nonce monotonicity, gas accounting) both during and after each run.
  • Fault injection — controlled network faults (latency, packet loss, partition) test resilience under adverse conditions.
  • Decision-grade reporting — capacity envelopes, regression detection, and release gate verdicts provide actionable results.
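Gas-first modeling has a simple arithmetic consequence: the transaction rate follows from the gas target and the template mix. A Python sketch (illustrative only, not GasHammer code) using gas figures from the built-in templates:

```python
def expected_tx_per_sec(gas_per_sec, templates):
    """templates: list of (weight, gas_per_tx) pairs; weights need not sum to 100."""
    total_weight = sum(w for w, _ in templates)
    # Weighted-average gas consumed per transaction across the mix
    avg_gas = sum(w * g for w, g in templates) / total_weight
    return gas_per_sec / avg_gas

# 70% simple transfers (21,000 gas) + 30% ERC-20 transfers (~65,000 gas)
# at a 5 Mgas/s target works out to roughly 146 tx/s.
rate = expected_tx_per_sec(5_000_000, [(70, 21_000), (30, 65_000)])
```

The same gas/sec target therefore yields very different tx/sec depending on the mix, which is why GasHammer treats gas, not requests, as the unit of load.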

Architecture at a Glance

GasHammer uses a hive/edge topology:

                    ┌──────────────┐
                    │     Hive     │  Control plane (1 instance)
                    │  REST + gRPC │  Orchestrates runs, collects telemetry
                    └──────┬───────┘
                           │ gRPC
              ┌────────────┼────────────┐
              │            │            │
        ┌─────┴─────┐┌─────┴─────┐┌─────┴─────┐
        │  Edge 1   ││  Edge 2   ││  Edge N   │  Workers (many instances)
        │ Region A  ││ Region B  ││ Region C  │  Generate txs, collect data
        └─────┬─────┘└─────┬─────┘└─────┬─────┘
              │            │            │
              └────────────┼────────────┘
                           │ JSON-RPC / WebSocket
                    ┌──────┴─────┐
                    │   Nitro    │
                    │   Rollup   │
                    └────────────┘

  • Hive accepts scenarios via REST API, distributes work to edges, synchronizes phases, and aggregates results.
  • Edges connect to the Nitro sequencer, generate transactions at the target gas rate, and stream telemetry back to the hive.

License

GasHammer is licensed under the Business Source License 1.1 (BSL-1.1).

Getting Started

This guide walks you through installing GasHammer, configuring it for your Nitro deployment, and running your first load test.

Prerequisites

  • Rust toolchain 1.75.0 or later (rustup recommended)
  • Docker for running integration tests and the Nitro devnet
  • A Nitro deployment (or use the built-in devnet for local testing)

Quick Start

# Clone the repository
git clone https://github.com/copyleftdev/gashammer.git
cd gashammer

# Build everything
cargo build --workspace

# Run unit tests
cargo test --workspace

# Run with the local devnet (requires Docker)
cargo test --test integration -- --test-threads=1

Installation

From Source

GasHammer is built with Cargo. Clone and build:

git clone https://github.com/copyleftdev/gashammer.git
cd gashammer
cargo build --workspace --release

The release binaries are placed in target/release/:

  • gashammer-hive — the control plane binary
  • gashammer-edge — the edge worker binary

System Requirements

Component   Minimum    Recommended
CPU         2 cores    4+ cores
RAM         2 GB       8 GB
Disk        1 GB       10 GB (for telemetry data)
Network     100 Mbps   1 Gbps
Rust        1.75.0     Latest stable
Docker      20.10+     Latest

Docker Images

Pre-built container images are available from GitHub Container Registry:

docker pull ghcr.io/copyleftdev/gashammer-hive:latest
docker pull ghcr.io/copyleftdev/gashammer-edge:latest

Verifying the Installation

After building, verify the binary includes the GasHammer DNA canary:

strings target/release/gashammer-hive | grep GASHAMMER_CANARY

Check the version and build metadata:

./target/release/gashammer-hive --version

Development Setup

For development, install additional tools:

# Install mdBook for documentation
cargo install mdbook

# Install cargo-deny for license checking
cargo install cargo-deny

# Verify the full CI suite passes locally
cargo fmt --all -- --check
cargo clippy --workspace --all-targets -- -D warnings
cargo test --workspace

Configuration

GasHammer uses TOML configuration files for the hive and edge processes. Scenario definitions use YAML (see Scenario Authoring).

Hive Configuration

The hive reads its configuration from hive.toml:

[server]
bind_address = "0.0.0.0"
rest_port = 8080
grpc_port = 9090

[storage]
data_dir = "/var/lib/gashammer"
parquet_rotation_size_mb = 256
parquet_rotation_interval_secs = 3600

[telemetry]
prometheus_port = 9091

[logging]
level = "info"
format = "json"

Server Section

Field          Type     Default     Description
bind_address   string   "0.0.0.0"   Network interface to bind
rest_port      u16      8080        REST API port
grpc_port      u16      9090        gRPC port for edge communication

Storage Section

Field                            Type   Default                Description
data_dir                         path   "/var/lib/gashammer"   Directory for telemetry data and reports
parquet_rotation_size_mb         u64    256                    Rotate Parquet files at this size
parquet_rotation_interval_secs   u64    3600                   Rotate Parquet files at this interval

Edge Configuration

Each edge reads its configuration from edge.toml:

[edge]
name = "edge-us-east-1"

[hive]
address = "hive.internal:9090"

[nitro]
sequencer_rpc = "http://sequencer:8547"
gateway_rpc = "http://gateway:8547"
feed_relay = "ws://feed-relay:9642"

[nitro.rpc]
timeout_ms = 5000
max_retries = 3
connection_pool_size = 16

[accounts]
mnemonic = "test test test test test test test test test test test junk"
count = 100
derivation_base = "m/44'/60'/0'/0'/0x4748"

[telemetry]
buffer_size = 65536
batch_size = 1024
flush_interval_ms = 1000
prometheus_port = 9091

[logging]
level = "info"
format = "json"

Nitro Section

Field                  Type   Default    Description
sequencer_rpc          URL    required   Nitro sequencer JSON-RPC endpoint
gateway_rpc            URL    required   Nitro gateway JSON-RPC endpoint
feed_relay             URL    required   Sequencer feed relay WebSocket URL
timeout_ms             u64    5000       RPC request timeout
max_retries            u32    3          Max retries on transient errors
connection_pool_size   u32    16         HTTP connection pool size

Accounts Section

Field             Type     Default                    Description
mnemonic          string   required                   HD wallet mnemonic for test accounts
count             u32      100                        Number of accounts to derive
derivation_base   string   "m/44'/60'/0'/0'/0x4748"   HD derivation path (includes GasHammer DNA marker 0x4748)

Environment Variables

Configuration values can be overridden with environment variables using the GASHAMMER_ prefix:

GASHAMMER_NITRO_SEQUENCER_RPC=http://localhost:8547
GASHAMMER_HIVE_ADDRESS=hive.internal:9090
GASHAMMER_LOG_LEVEL=debug

First Run

This guide walks through running your first GasHammer load test using the built-in devnet.

Step 1: Start the Devnet

GasHammer includes a testcontainers-based Nitro devnet. For manual testing, start the containers:

docker compose -f docker/devnet.yml up -d

This starts:

  • L1 Geth — Ethereum L1 in dev mode with instant finality
  • Nitro Sequencer — Arbitrum Nitro sequencer node
  • Feed Relay — Sequencer feed relay for WebSocket subscriptions

Wait for the health checks to pass (typically under 30 seconds).

Step 2: Start the Hive

./target/release/gashammer-hive --config hive.toml

The hive starts the REST API on port 8080 and the gRPC server on port 9090. Verify it’s running:

curl http://localhost:8080/health

Step 3: Start an Edge

./target/release/gashammer-edge --config edge.toml

The edge registers with the hive, connects to the sequencer, and begins its heartbeat. Verify registration:

curl http://localhost:8080/edges

Step 4: Submit a Scenario

Use one of the example scenarios:

curl -X POST http://localhost:8080/scenarios \
  -H "Content-Type: application/yaml" \
  --data-binary @scenarios/simple-transfer-load.yaml

Step 5: Start the Run

curl -X POST http://localhost:8080/runs \
  -H "Content-Type: application/json" \
  -d '{"scenario": "simple-transfer-load"}'

Step 6: Monitor Progress

Watch the run status:

# Poll run status
curl http://localhost:8080/runs/{run_id}

# View live metrics (Prometheus)
curl http://localhost:9091/metrics

Step 7: Retrieve the Report

Once the run completes:

curl http://localhost:8080/runs/{run_id}/report > report.json

The report includes:

  • Latency percentiles (p50, p90, p95, p99, p99.9)
  • Throughput time series (gas/sec and tx/sec)
  • Correctness oracle verdicts
  • Capacity envelope (if applicable)
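The latency percentiles in the report can be read with a small sketch. This uses the nearest-rank definition; the exact estimator GasHammer's report engine uses is not specified here.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = max(math.ceil(p / 100 * len(ordered)), 1)  # 1-based rank
    return ordered[rank - 1]

latencies_ms = [12, 15, 14, 230, 18, 16, 13, 17, 15, 14]
p50 = percentile(latencies_ms, 50)   # 15
p99 = percentile(latencies_ms, 99)   # 230: a single outlier dominates the tail
```

The gap between p50 and p99 is why the report carries the full percentile ladder rather than a single average.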

Using the Quick Smoke Test

For a fast validation, use the smoke test scenario:

curl -X POST http://localhost:8080/scenarios \
  --data-binary @scenarios/quick-smoke-test.yaml

curl -X POST http://localhost:8080/runs \
  -d '{"scenario": "quick-smoke-test"}'

This runs a minimal 2-minute test at 1 Mgas/s — enough to verify the full pipeline works.

Scenario Authoring

GasHammer scenarios are defined in YAML using the Scenario Definition Language (SDL). A scenario describes the workload to generate, the phases to execute, and the correctness checks to apply.

Core Concepts

  • Scenario — a complete test definition including metadata, phases, workload configuration, and oracle settings
  • Phase — a time segment within a run, each with its own gas rate and duration
  • Template — a transaction type (e.g., simple transfer, ERC-20 transfer, storage write) with a known gas profile
  • Gas Rate — workloads are defined by target gas per second, not transactions per second
  • Fault Schedule — optional controlled failures injected during the run

Scenario Structure

name: my-scenario
description: "What this scenario tests"
version: "1.0"

target:
  chain_id: 412346
  sequencer_rpc: "http://sequencer:8547"

workload:
  gas_rate:
    type: sustained
    gas_per_sec: 5_000_000
  templates:
    - name: simple-transfer
      weight: 70
    - name: erc20-transfer
      weight: 30

phases:
  - name: warmup
    duration_secs: 30
    gas_rate_override:
      type: sustained
      gas_per_sec: 1_000_000
  - name: steady-state
    duration_secs: 300
  - name: cooldown
    duration_secs: 30
    gas_rate_override:
      type: sustained
      gas_per_sec: 500_000

Sections

  • SDL Reference — complete field-by-field reference for all SDL options
  • Examples — walkthrough of example scenarios
  • Best Practices — tips for writing effective scenarios

SDL Reference

Complete reference for the GasHammer Scenario Definition Language.

Top-Level Fields

Field         Type     Required   Description
name          string   yes        Unique scenario name (alphanumeric, hyphens, underscores)
description   string   yes        Human-readable description
version       string   yes        Scenario version

Target Section

Connection parameters for the Nitro deployment under test.

target:
  chain_id: 412346
  sequencer_rpc: "http://sequencer:8547"
  gateway_rpc: "http://gateway:8547"
  feed_relay: "ws://feed-relay:9642"

Field           Type   Required   Description
chain_id        u64    yes        Chain ID of the Nitro rollup
sequencer_rpc   URL    yes        Sequencer JSON-RPC endpoint
gateway_rpc     URL    no         Gateway JSON-RPC endpoint
feed_relay      URL    no         Feed relay WebSocket URL

Workload Section

Defines the transaction workload.

workload:
  gas_rate:
    type: sustained | ramp | burst
    gas_per_sec: 5_000_000        # for sustained
    start_gas_per_sec: 1_000_000  # for ramp
    end_gas_per_sec: 10_000_000   # for ramp
    burst_gas: 50_000_000         # for burst
    burst_duration_secs: 5        # for burst
  seed: 42
  templates:
    - name: simple-transfer
      weight: 70
    - name: erc20-transfer
      weight: 30

Gas Rate Types

Type        Fields                               Description
sustained   gas_per_sec                          Constant gas rate
ramp        start_gas_per_sec, end_gas_per_sec   Linear ramp between rates
burst       burst_gas, burst_duration_secs       Short burst followed by zero
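For the ramp type, the target rate at any instant is a linear interpolation between the start and end rates over the phase duration. A sketch (illustrative, not GasHammer code):

```python
def ramp_gas_rate(start_gas_per_sec, end_gas_per_sec, duration_secs, t_secs):
    """Target gas rate t seconds into a linear ramp phase."""
    frac = min(max(t_secs / duration_secs, 0.0), 1.0)  # clamp to [0, 1]
    return start_gas_per_sec + (end_gas_per_sec - start_gas_per_sec) * frac

# Ramping 1 Mgas/s -> 10 Mgas/s over 300 s: halfway through, the target is 5.5 Mgas/s
mid = ramp_gas_rate(1_000_000, 10_000_000, 300, 150)
```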

Built-in Templates

Template          Estimated Gas   Description
simple-transfer   21,000          ETH value transfer
erc20-transfer    ~65,000         ERC-20 token transfer
erc20-approve     ~46,000         ERC-20 approval
storage-write     ~44,000+        Write to storage slots
compute-heavy     configurable    CPU-intensive computation

Seed

The seed field (optional, defaults to 0) seeds the deterministic PRNG. The same seed with the same scenario configuration produces the same transaction sequence.
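The reproducibility guarantee can be illustrated with a seeded generator (a Python stand-in for the workload engine's PRNG; template names are from the built-in set):

```python
import random

def tx_sequence(seed, templates, n):
    """Draw n weighted template choices from a scenario-seeded PRNG."""
    rng = random.Random(seed)          # private generator: no shared global state
    names = [name for name, _ in templates]
    weights = [w for _, w in templates]
    return rng.choices(names, weights=weights, k=n)

mix = [("simple-transfer", 70), ("erc20-transfer", 30)]
# Identical seed and configuration -> identical sequence, run after run.
assert tx_sequence(42, mix, 100) == tx_sequence(42, mix, 100)
```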

Phases Section

Phases define time segments within a run. Each phase can override the default gas rate.

phases:
  - name: warmup
    duration_secs: 30
    gas_rate_override:
      type: sustained
      gas_per_sec: 1_000_000
  - name: steady-state
    duration_secs: 300
  - name: cooldown
    duration_secs: 30
    gas_rate_override:
      type: sustained
      gas_per_sec: 500_000

Field               Type      Required   Description
name                string    yes        Phase name
duration_secs       u64       yes        Phase duration in seconds (min: 1)
gas_rate_override   GasRate   no         Override the default gas rate for this phase

Oracle Section

Configure correctness checks.

oracle:
  invariants:
    - balance-conservation
    - nonce-monotonicity
    - gas-accounting
  check_interval_secs: 10

Field                 Type   Required   Description
invariants            list   no         Which invariant checks to enable
check_interval_secs   u64    no         How often to run live checks (default: 10)

Fault Schedule Section

Define controlled failures to inject during the run.

fault_schedule:
  - at_secs: 60
    action: inject
    fault:
      type: latency
      target: sequencer-rpc
      latency_ms: 200
      jitter_ms: 50
  - at_secs: 120
    action: clear
    fault:
      type: latency
      target: sequencer-rpc

Field              Type              Required   Description
at_secs            u64               yes        Seconds from run start
action             inject or clear   yes        Whether to start or stop the fault
fault.type         string            yes        Fault type: latency, packet-loss, feed-disconnect, partition
fault.target       string            yes        Component to target
fault.latency_ms   u64               varies     Added latency (for latency type)
fault.jitter_ms    u64               no         Latency jitter
fault.loss_pct     f64               varies     Packet loss percentage (for packet-loss type)
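A fault schedule is effectively a timeline of inject/clear events: the set of faults active at time t is obtained by replaying events up to t. A sketch (illustrative only, not the gashammer-fault implementation):

```python
def active_faults(schedule, t_secs):
    """Replay inject/clear events up to t_secs and return the still-active fault keys.
    Each schedule entry is (at_secs, action, fault_key)."""
    active = set()
    for at, action, key in sorted(schedule, key=lambda e: e[0]):
        if at > t_secs:
            break  # later events have not happened yet
        if action == "inject":
            active.add(key)
        elif action == "clear":
            active.discard(key)
    return active

schedule = [
    (60, "inject", ("latency", "sequencer-rpc")),
    (120, "clear", ("latency", "sequencer-rpc")),
]
```

Replaying from the start keeps the query side-effect free, so the same schedule can be evaluated at any point when correlating telemetry with fault windows.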

Edges Section

Control edge deployment.

edges:
  count: 3
  accounts_per_edge: 100

Field               Type   Required   Description
count               u32    no         Number of edges (default: 1)
accounts_per_edge   u32    no         Accounts per edge (default: 100)

Scenario Examples

GasHammer ships with several example scenarios in the scenarios/ directory. This page walks through each one.

Simple Transfer Load

File: scenarios/simple-transfer-load.yaml

A basic throughput test using only ETH transfers at a sustained 5 Mgas/s. Three phases: warmup (30s at 1 Mgas/s), steady state (5 min at 5 Mgas/s), and cooldown (30s at 500 Kgas/s).

This is the simplest scenario and a good starting point for validating that the pipeline works.

Mixed Workload

File: scenarios/mixed-workload.yaml

Tests with multiple transaction templates to simulate realistic usage:

Template          Weight   Estimated Gas
simple-transfer   30%      21,000
erc20-transfer    25%      ~65,000
erc20-approve     10%      ~46,000
storage-write     20%      ~44,000
compute-heavy     15%      ~200,000

The mixed workload produces a more realistic gas profile than pure transfers.

Ramp to Saturation

File: scenarios/ramp-to-saturation.yaml

Five phases that progressively increase the gas rate from 1 Mgas/s to 50 Mgas/s. Designed to find the capacity ceiling of the system under test.

The resulting report shows where latency begins to degrade (degradation onset) and where the system becomes unstable (instability threshold). These points define the safe operating envelope.
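One way to locate a degradation onset from ramp data is to compare each step's p99 latency against the baseline step. The 1.5x factor below is an arbitrary illustration; the report engine's actual criteria are not specified here.

```python
def degradation_onset(steps, factor=1.5):
    """First gas rate whose p99 exceeds `factor` times the first step's p99, else None.
    steps: list of (gas_per_sec, p99_latency_ms) in ramp order."""
    baseline_p99 = steps[0][1]
    for gas_per_sec, p99 in steps:
        if p99 > factor * baseline_p99:
            return gas_per_sec
    return None

# Hypothetical ramp measurements: latency holds, then degrades at 20 Mgas/s.
steps = [(1_000_000, 40), (5_000_000, 45), (10_000_000, 52), (20_000_000, 95)]
onset = degradation_onset(steps)
```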

Fault Tolerance

File: scenarios/fault-tolerance.yaml

Injects controlled network faults during load:

  1. 200ms latency on sequencer RPC at t=60s
  2. Feed relay disconnect at t=120s
  3. 10% packet loss at t=180s

Each fault is cleared before the next is injected. This tests resilience and recovery behavior.

Correctness Audit

File: scenarios/correctness-audit.yaml

Low-rate (500 Kgas/s) test with all oracle invariants enabled:

  • Balance conservation
  • Nonce monotonicity
  • Gas accounting

Designed for correctness verification rather than throughput testing. The low rate ensures the oracle can keep up with verification.

Quick Smoke Test

File: scenarios/quick-smoke-test.yaml

Minimal 2-minute test at 1 Mgas/s with a single warmup phase. Used for fast validation that the full pipeline works end-to-end.

Scenario Best Practices

Start Small, Ramp Up

Begin with a low gas rate (1 Mgas/s) and short duration. Validate that the pipeline works before increasing load. The quick-smoke-test.yaml scenario is designed for this.

Use Deterministic Seeds

Always set the seed field in production scenarios. This ensures reproducibility: the same seed with the same configuration produces the same transaction sequence. When comparing results across runs, identical seeds eliminate workload variance as a variable.

workload:
  seed: 42

Include Warmup and Cooldown Phases

Sudden load changes can produce misleading metrics. Include at least a 30-second warmup phase at reduced rate and a 30-second cooldown:

phases:
  - name: warmup
    duration_secs: 30
    gas_rate_override:
      type: sustained
      gas_per_sec: 1_000_000
  - name: steady-state
    duration_secs: 300
  - name: cooldown
    duration_secs: 30
    gas_rate_override:
      type: sustained
      gas_per_sec: 500_000

Mix Templates for Realism

Pure transfer workloads under-represent real usage. Include storage writes, contract calls, and compute-heavy transactions to exercise the full execution pipeline. Weight them according to your expected traffic profile.

Use Ramp Scenarios to Find Limits

The ramp-to-saturation pattern progressively increases load until the system degrades. This is the most reliable way to determine the safe operating envelope. Run it before committing to a target gas rate for sustained tests.

Enable Oracle Checks for Correctness

For correctness-focused testing, enable all oracle invariants and keep the gas rate low enough that the oracle can verify every transaction:

oracle:
  invariants:
    - balance-conservation
    - nonce-monotonicity
    - gas-accounting
  check_interval_secs: 5

Inject Faults Sequentially

When testing fault tolerance, inject one fault at a time and clear it before the next. This makes it easier to correlate observed behavior with specific faults. Always leave recovery time between faults.

Name Phases Descriptively

Phase names appear in reports. Use names that describe the intent (warmup, steady-state-5mgas, fault-latency-200ms) rather than generic labels (phase1, phase2).

Version Your Scenarios

Include a version field and update it when the scenario changes. This makes it possible to correlate reports with specific scenario definitions.

CLI Reference

GasHammer provides two binaries and a set of cargo xtask commands.

Binaries

gashammer-hive

The control plane process. Runs the REST API, gRPC server, and orchestration engine.

gashammer-hive [OPTIONS]

Options:
  --config <PATH>       Path to hive.toml configuration file [default: hive.toml]
  --bind <ADDRESS>      Override bind address [default: 0.0.0.0]
  --rest-port <PORT>    Override REST API port [default: 8080]
  --grpc-port <PORT>    Override gRPC port [default: 9090]
  --data-dir <PATH>     Override data directory [default: /var/lib/gashammer]
  --log-level <LEVEL>   Log level: trace, debug, info, warn, error [default: info]
  --log-format <FMT>    Log format: json, pretty [default: json]
  --version             Print version and build info
  --help                Print help

gashammer-edge

The edge worker process. Connects to a hive and a Nitro deployment.

gashammer-edge [OPTIONS]

Options:
  --config <PATH>             Path to edge.toml configuration file [default: edge.toml]
  --name <NAME>               Edge name (overrides config)
  --hive <ADDRESS>            Hive gRPC address [default: localhost:9090]
  --sequencer-rpc <URL>       Nitro sequencer RPC endpoint
  --gateway-rpc <URL>         Nitro gateway RPC endpoint
  --feed-relay <URL>          Sequencer feed relay WebSocket URL
  --accounts <COUNT>          Number of test accounts [default: 100]
  --log-level <LEVEL>         Log level [default: info]
  --log-format <FMT>          Log format [default: json]
  --version                   Print version and build info
  --help                      Print help

xtask Commands

Run these with cargo xtask <command>:

docgen

Generate API documentation from source code.

cargo xtask docgen [OPTIONS]

Options:
  --watch                 Watch source files and regenerate on change
  --coverage              Print documentation coverage report
  --min-coverage <N>      Exit 1 if coverage is below N% (requires --coverage)
  --json                  Output coverage as JSON (requires --coverage)
  --clean                 Remove all generated API pages
  --dry-run               Show what would be generated without writing
  --crate <NAME>          Generate docs for a single crate only

REST API

The hive exposes a REST API for scenario management and run control. All responses include the X-Powered-By: GasHammer/<version> header.

Scenarios

Method   Path                Description
POST     /scenarios          Upload a scenario YAML
GET      /scenarios          List all scenarios
GET      /scenarios/{name}   Get a specific scenario

Runs

Method   Path                Description
POST     /runs               Start a new run
GET      /runs               List all runs
GET      /runs/{id}          Get run status
GET      /runs/{id}/report   Download the run report
DELETE   /runs/{id}          Cancel a running test

Edges

Method   Path          Description
GET      /edges        List connected edges
GET      /edges/{id}   Get edge details

System

Method   Path       Description
GET      /health    Health check
GET      /version   Version and build info
GET      /metrics   Prometheus metrics

System Overview

GasHammer is structured as a distributed system with a central control plane (hive) and regional workers (edges). This architecture allows scaling load generation across multiple machines while maintaining centralized orchestration and data collection.

Design Principles

  1. Wire-protocol-only integration — GasHammer never imports Nitro crate code. All interaction is via JSON-RPC, WebSocket feed, gRPC, and L1 contract reads.
  2. Gas-first workload modeling — workloads are defined by target gas/sec, not requests/sec. This accurately models blockchain resource consumption.
  3. Deterministic reproducibility — seeded PRNGs ensure identical test runs for reliable comparison.
  4. Correctness by default — every run can verify invariants, not just measure throughput.

Crate Architecture

gashammer/
├── gashammer-common       Shared types, config, build info, error codes
├── gashammer-nitro        Nitro protocol adapters (RPC, feed, L1 contracts)
├── gashammer-hive         Hive control plane (REST API, gRPC, orchestration)
├── gashammer-edge         Edge runtime (tx pipeline, feed correlator)
├── gashammer-workload     Workload engine (templates, gas modeling, rate control)
├── gashammer-telemetry    Event model, pipeline, Parquet storage
├── gashammer-oracle       Correctness checks (invariants, verdicts)
├── gashammer-fault        Fault injection (network faults, timeline, safety)
├── gashammer-report       Reporting (latency, capacity envelope, regression)
├── gashammer-scenario     Scenario Definition Language (parser, validator, compiler)
├── gashammer-docgen       Documentation engine (syn parser, mdBook generation)
└── gashammer-testenv      Testcontainers Nitro devnet orchestration

Component Interactions

                      ┌───────────────────────┐
                      │          Hive         │
                      │                       │
                      │  ┌─────────────────┐  │
  REST API ──────────▶│  │  Orchestrator   │  │
(scenarios,           │  │ (run lifecycle, │  │
 runs, edges)         │  │  phase sync)    │  │
                      │  └────────┬────────┘  │
                      │           │ gRPC      │
                      │  ┌────────┴────────┐  │
                      │  │  Edge Registry  │  │
                      │  │  (heartbeat,    │  │
                      │  │   health check) │  │
                      │  └────────┬────────┘  │
                      │           │           │
                      │  ┌────────┴────────┐  │
                      │  │ Telemetry Sink  │  │
                      │  │ (Parquet files) │  │
                      │  └────────┬────────┘  │
                      │           │           │
                      │  ┌────────┴────────┐  │
                      │  │  Report Engine  │  │
                      │  │ (latency, SOE,  │  │
                      │  │  regression)    │  │
                      │  └─────────────────┘  │
                      └───────────┬───────────┘
                                  │ gRPC (mTLS)
                 ┌────────────────┼────────────────┐
                 │                │                │
           ┌─────┴──────┐   ┌─────┴──────┐   ┌─────┴──────┐
           │   Edge 1   │   │   Edge 2   │   │   Edge N   │
           │            │   │            │   │            │
           │ ┌────────┐ │   │ ┌────────┐ │   │ ┌────────┐ │
           │ │Workload│ │   │ │Workload│ │   │ │Workload│ │
           │ │ Engine │ │   │ │ Engine │ │   │ │ Engine │ │
           │ └───┬────┘ │   │ └───┬────┘ │   │ └───┬────┘ │
           │     │      │   │     │      │   │     │      │
           │ ┌───┴────┐ │   │ ┌───┴────┐ │   │ ┌───┴────┐ │
           │ │  Tx    │ │   │ │  Tx    │ │   │ │  Tx    │ │
           │ │Pipeline│ │   │ │Pipeline│ │   │ │Pipeline│ │
           │ └───┬────┘ │   │ └───┬────┘ │   │ └───┬────┘ │
           └─────┼──────┘   └─────┼──────┘   └─────┼──────┘
                 │                │                │
                 └────────────────┼────────────────┘
                                  │
                           ┌──────┴─────┐
                           │   Nitro    │
                           │   Rollup   │
                           │ (Sequencer,│
                           │  Gateway,  │
                           │  Feed)     │
                           └────────────┘

Data Flow

  1. User submits a scenario to the hive via REST API
  2. Hive validates and compiles the scenario, then creates a run
  3. Hive distributes the run configuration to registered edges via gRPC
  4. Each edge starts its workload engine, which generates transactions at the target gas rate
  5. The tx pipeline signs and submits transactions to the Nitro sequencer
  6. The feed correlator matches submitted txs against the sequencer feed
  7. Telemetry events flow from edges back to the hive via gRPC streaming
  8. The hive writes events to Parquet files for durable storage
  9. The correctness oracle evaluates invariants during and after the run
  10. The report engine generates the final report with latency analysis, capacity envelope, and verdicts

Hive Control Plane

The hive is GasHammer’s central control plane. It coordinates runs across edges, collects telemetry, and produces reports.

Responsibilities

  • Edge management — register, track heartbeats, detect stale edges
  • Scenario validation — parse and compile SDL scenarios
  • Run orchestration — start runs, synchronize phases across edges, handle completion and failure
  • Telemetry collection — receive event streams from edges, write to Parquet
  • Report generation — compute latency percentiles, capacity envelopes, regression analysis
  • REST API — human and CI interface for all operations

Run State Machine

Preflight ──▶ Barrier ──▶ Running ──▶ Completing ──▶ Done
                │            │             │
                │            │             └──▶ Done(Fail)
                │            ├──▶ Recovering ──▶ Running
                │            │
                └────────────┴──▶ Aborted

  • Preflight — fault adapters checked, scenario validated, edges notified
  • Barrier — waiting for all edges to acknowledge readiness (barrier sync)
  • Running — workload active, phases progressing (phase_index, phase_name tracked)
  • Recovering — transient error detected, attempting recovery before resuming
  • Completing — drain in progress, edges flushing final telemetry
  • Done — terminal state with outcome: Pass, Fail, or Inconclusive
  • Aborted — terminal state with reason string (unrecoverable error, manual cancel)

State transitions are validated: RunState::can_transition_to() enforces legal transitions. Terminal states (Done, Aborted) cannot transition further. is_terminal() and is_active() are available for status queries.
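The diagram's legal transitions can be encoded as a table. This Python sketch mirrors what RunState::can_transition_to() and is_terminal() express (a stand-in, not the actual Rust implementation; it encodes only the edges drawn above):

```python
# Legal transitions copied from the state machine diagram; terminal states
# have no outgoing edges, so is_terminal() falls out of the table.
TRANSITIONS = {
    "Preflight":  {"Barrier"},
    "Barrier":    {"Running", "Aborted"},
    "Running":    {"Completing", "Recovering", "Aborted"},
    "Recovering": {"Running"},
    "Completing": {"Done", "Done(Fail)"},
    "Done":       set(),
    "Done(Fail)": set(),
    "Aborted":    set(),
}

def can_transition_to(current, target):
    return target in TRANSITIONS[current]

def is_terminal(state):
    return not TRANSITIONS[state]
```

A table-driven check like this makes illegal transitions (say, resurrecting a Done run) fail loudly at one choke point instead of being scattered across the orchestrator.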

Phase Synchronization

The hive coordinates phase transitions using barrier sync: all edges must complete the current phase before any edge starts the next. This ensures consistent behavior across a distributed test.

If an edge fails to reach the barrier within the configured timeout, the hive marks it as stale and proceeds with the remaining edges. The stale edge is flagged in the report.

Edge Registry

The registry tracks connected edges with:

  • Edge ID (UUID)
  • Registration time
  • Last heartbeat timestamp
  • Edge capabilities (reported during registration)
  • Current status (idle, running, draining, stale)

A reaper task periodically removes stale edges that have missed heartbeats beyond the configured timeout.

Telemetry Sink

Edges stream telemetry events to the hive via gRPC. The hive buffers events and writes them to Parquet files, partitioned by run ID and time. Parquet metadata includes GasHammer DNA provenance (version, build SHA, copyright).

Configuration Reference

See Hive Configuration.

Edge Runtime

Edges are GasHammer’s regional worker processes. Each edge connects to a Nitro deployment and generates transaction load as directed by the hive.

Startup Sequence

  1. Parse CLI arguments and load edge.toml configuration
  2. Connect to the Nitro sequencer (JSON-RPC) and feed relay (WebSocket)
  3. Register with the hive via gRPC
  4. Start the heartbeat task
  5. Wait for run assignments

Transaction Pipeline

The tx pipeline is the core of the edge runtime:

Workload Engine ──▶ Build Tx ──▶ Sign ──▶ Submit ──▶ Track Receipt
       │                                     │              │
       │              Account Pool ──────────┘              │
       │                                                    │
       └──────────── Feed Correlator ◀──────────────────────┘
                           │
                     Telemetry Events

  1. Workload engine selects a template and gas rate based on the current phase
  2. Build constructs a transaction from the template and an account from the pool
  3. Sign applies the account’s private key
  4. Submit sends the signed tx to the sequencer via JSON-RPC
  5. Track polls for the receipt and records confirmation latency
  6. Feed correlator matches submitted txs against their appearance in the sequencer feed, measuring inclusion latency

Account Pool

Test accounts are derived deterministically from an HD wallet mnemonic at the derivation path m/44'/60'/0'/0'/0x4748/{index} (the 0x4748 segment is the GasHammer DNA marker — “GH” in hex).

Accounts are partitioned across edges with no overlap. Each account tracks its nonce locally to avoid on-chain nonce queries; when a transaction fails, its nonce is recovered so it can be reused rather than leaving a gap.
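Local nonce tracking with failure recovery can be sketched as follows (a hypothetical Python stand-in; the real account pool is Rust):

```python
class Account:
    """Hand out nonces locally (no on-chain queries) and roll back on failure."""
    def __init__(self, address, start_nonce=0):
        self.address = address
        self.next_nonce = start_nonce

    def take_nonce(self):
        nonce = self.next_nonce
        self.next_nonce += 1
        return nonce

    def recover_nonce(self, failed_nonce):
        # A failed submission leaves a gap; reuse the failed nonce first so
        # later transactions are not stuck behind it.
        self.next_nonce = min(self.next_nonce, failed_nonce)

acct = Account("0xabc")
first, second = acct.take_nonce(), acct.take_nonce()   # 0, then 1
acct.recover_nonce(first)                              # the nonce-0 tx failed
```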

Feed Correlator

The correlator maintains a set of pending transaction hashes with their submission timestamps. As messages arrive on the sequencer feed, it matches them against the pending set and emits correlation events with the measured inclusion latency.

Transactions that never appear in the feed within the timeout are recorded as correlation misses.
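In outline, the correlator is a map from pending tx hash to submission time (a Python sketch; the method names are illustrative, not the gashammer-edge API):

```python
class FeedCorrelator:
    """Track pending tx hashes and match them against sequencer feed messages."""
    def __init__(self, timeout_secs):
        self.timeout_secs = timeout_secs
        self.pending = {}  # tx_hash -> submission timestamp

    def on_submit(self, tx_hash, now):
        self.pending[tx_hash] = now

    def on_feed_message(self, tx_hash, now):
        """Return the inclusion latency in seconds, or None for an unknown hash."""
        submitted = self.pending.pop(tx_hash, None)
        return None if submitted is None else now - submitted

    def expire(self, now):
        """Evict pending entries past the timeout and report them as misses."""
        missed = [h for h, t in self.pending.items() if now - t > self.timeout_secs]
        for h in missed:
            del self.pending[h]
        return missed
```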

Telemetry

The edge maintains a lock-free ring buffer of telemetry events. A shipper task batches events and streams them to the hive via gRPC. If the buffer is full, the oldest events are dropped (with a counter to track loss).
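The drop-oldest policy can be sketched with a bounded deque (a Python stand-in; the real buffer is a lock-free ring in Rust):

```python
from collections import deque

class TelemetryBuffer:
    """Bounded buffer that drops the oldest events when full, counting the loss."""
    def __init__(self, capacity):
        self.events = deque(maxlen=capacity)  # deque evicts the oldest on overflow
        self.dropped = 0

    def push(self, event):
        if len(self.events) == self.events.maxlen:
            self.dropped += 1       # the append below will evict the oldest event
        self.events.append(event)

    def drain(self, batch_size):
        """Pop up to batch_size events for the shipper task."""
        batch = []
        while self.events and len(batch) < batch_size:
            batch.append(self.events.popleft())
        return batch

buf = TelemetryBuffer(capacity=3)
for event in range(5):
    buf.push(event)
# The buffer now holds [2, 3, 4]; events 0 and 1 were dropped.
```

Dropping old events instead of blocking keeps the transaction pipeline's hot path unaffected by a slow hive connection, at the cost of telemetry loss that the counter makes visible.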

Graceful Shutdown

On SIGTERM or SIGINT:

  1. Stop the workload engine (no new transactions)
  2. Wait for pending transactions to complete or timeout
  3. Flush remaining telemetry to the hive
  4. Deregister from the hive
  5. Exit

Configuration Reference

See Edge Configuration.

Nitro Integration

GasHammer interacts with Arbitrum Nitro exclusively through wire protocols. No Nitro source code is imported. This boundary is enforced at the dependency level — gashammer-nitro depends only on alloy, tokio-tungstenite, and reqwest, never on Nitro crates.

Ref: RFC-0001.

Integration Surface

┌──────────────────────────────────────────────────────┐
│                   GasHammer Edge                      │
│                                                       │
│  ┌─────────────┐  ┌───────────────┐  ┌─────────────┐ │
│  │ RPC Provider │  │ Feed Consumer │  │ L1 Monitor  │ │
│  └──────┬──────┘  └───────┬───────┘  └──────┬──────┘ │
│         │                 │                  │        │
└─────────┼─────────────────┼──────────────────┼────────┘
          │                 │                  │
    JSON-RPC/HTTP     WebSocket            JSON-RPC/HTTP
    (port 8547)       (port 9642)          (L1 RPC)
          │                 │                  │
    ┌─────┴──────┐   ┌─────┴──────┐    ┌──────┴───────┐
    │ Sequencer  │   │ Feed Relay │    │  L1 (Geth)   │
    │  / Gateway │   │            │    │ SequencerInbox│
    └────────────┘   └────────────┘    │ RollupCore   │
                                       └──────────────┘

JSON-RPC Provider

NitroRpcProvider wraps an alloy HTTP transport with retry logic and health checks.

Configuration (RpcConfig):

| Field | Default | Description |
|---|---|---|
| url | required | Sequencer or gateway endpoint |
| timeout_ms | 5000 | Per-request timeout |
| max_retries | 3 | Retry count on transient errors |
| retry_base_delay_ms | 500 | Base delay for exponential backoff |

Key operations:

  • send_raw_transaction(raw_tx) — submit a signed transaction
  • get_transaction_receipt(hash) — poll for inclusion
  • get_block_number() — current L2 block height
  • health_check() — validates connectivity via eth_chainId

Every outbound HTTP request includes the User-Agent and X-Powered-By headers with GasHammer version metadata (see DNA Provenance).

Sequencer Feed Consumer

FeedConsumer connects to the Nitro feed relay over WebSocket (port 9642) and parses the BroadcastMessage envelope.

Wire format (JSON over WebSocket):

{
  "version": 1,
  "messages": [
    {
      "sequenceNumber": 12345,
      "message": { "header": {...}, "l2Msg": "0x..." },
      "signature": "0x..."
    }
  ],
  "confirmedSequenceNumberMessage": { "sequenceNumber": 12340 }
}

Behavior:

  • Monitors sequence number gaps and emits FeedGap telemetry events.
  • Detects stalls when no message arrives within stall_threshold_ms (default: 30,000).
  • On disconnect, reconnects with exponential backoff (base 1s, max 30s, jitter).
  • Tracks gashammer_feed_messages_total and gashammer_feed_reconnects_total counters.
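
The reconnect delay schedule above can be computed as a pure function. A minimal sketch, with the caller-supplied jitter omitted so it stays deterministic; the function name is an assumption:

```rust
use std::time::Duration;

/// Reconnect delay for the nth attempt (0-based): base doubled per attempt,
/// capped at max. Jitter is added by the caller and omitted here.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    base.saturating_mul(2u32.saturating_pow(attempt)).min(max)
}
```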

Configuration (FeedConfig):

| Field | Default | Description |
|---|---|---|
| url | required | Feed relay WebSocket URL |
| channel_buffer | 4096 | Internal channel capacity |
| ping_interval_ms | 30000 | WebSocket ping interval |
| pong_timeout_ms | 5000 | Pong deadline |
| stall_threshold_ms | 30000 | Silence before stall detection |

L1 Contract Reader

L1ContractReader reads Nitro’s L1 contracts via an Ethereum JSON-RPC provider.

Contracts:

| Contract | Key Functions / Events |
|---|---|
| SequencerInbox | batchCount(), event SequencerBatchDelivered(...) |
| RollupCore | latestConfirmed(), event AssertionCreated(...), event AssertionConfirmed(...) |

Decoded event types:

  • BatchDeliveredEvent — batch sequence number, accumulators, time bounds, data location, L1 block/tx
  • AssertionEvent — assertion hash, kind (Created/Confirmed), L1 block/tx

Configuration (L1Config):

| Field | Default | Description |
|---|---|---|
| rpc_url | required | L1 Ethereum RPC endpoint |
| sequencer_inbox_address | required | Hex address of SequencerInbox |
| rollup_core_address | required | Hex address of RollupCore |
| poll_interval_ms | 15000 | Polling interval |
| lookback_blocks | 100 | Blocks to scan on first poll |

L1 Monitor

L1Monitor spawns a background tokio task that polls the L1 contracts at the configured interval and emits events via an mpsc channel.

Event types:

| Variant | Payload | Meaning |
|---|---|---|
| BatchDelivered | Vec<BatchDeliveredEvent>, block | New batches posted to L1 |
| AssertionUpdate | Vec<AssertionEvent>, block | Assertion created or confirmed |
| PollError | error message | Non-fatal poll failure (retries next tick) |

Metrics (atomic counters):

| Counter | Description |
|---|---|
| gashammer_l1_polls | Total poll iterations |
| gashammer_l1_batches_seen | Batch events observed |
| gashammer_l1_assertions_seen | Assertion events observed |
| gashammer_l1_poll_errors | Poll errors encountered |

The monitor exits when the event channel is closed, enabling clean shutdown via dropping the receiver.

Workload Engine

The workload engine generates transaction load at a target gas rate. Workloads are defined by gas/sec, not requests/sec — this accurately models blockchain resource consumption since different transaction types consume vastly different amounts of gas.

Ref: RFC-0005.

Gas Rate Modes

The engine supports four gas rate modes, each computing the target gas/sec for a given elapsed time.

| Mode | Fields | Behavior |
|---|---|---|
| Sustained | gas_per_sec | Constant rate |
| Ramped | start_gas_per_sec, end_gas_per_sec, duration_ms | Linear interpolation between start and end |
| Bursty | baseline_gas_per_sec, burst_peak_gas_per_sec, burst_duration_ms, burst_interval_ms | Periodic bursts above a baseline |
| Custom | schedule: Vec<(u64, u64)> | Arbitrary piecewise-linear schedule |

The engine evaluates the rate on each tick (TICK_INTERVAL_MS) and generates enough transactions to consume the target gas for that interval.
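
The per-tick evaluation can be sketched for the Ramped mode; this is an illustrative computation, not the engine's actual code, and the function names are assumptions:

```rust
/// Ramped-mode target rate at a given elapsed time: linear interpolation
/// from start to end over duration_ms, holding the end value afterwards.
fn ramped_rate(start: u64, end: u64, duration_ms: u64, elapsed_ms: u64) -> u64 {
    if duration_ms == 0 || elapsed_ms >= duration_ms {
        return end;
    }
    let frac = elapsed_ms as f64 / duration_ms as f64;
    (start as f64 + (end as f64 - start as f64) * frac) as u64
}

/// Gas budget for one tick: the current rate scaled to the tick interval.
fn tick_gas_budget(rate_gas_per_sec: u64, tick_interval_ms: u64) -> u64 {
    rate_gas_per_sec * tick_interval_ms / 1000
}
```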

Transaction Templates

Templates define the shape of a transaction. Each template implements the Parameterizer trait:

trait Parameterizer: Send + Sync {
    fn descriptor(&self) -> TemplateDescriptor;
    fn next(&mut self, rng: &mut impl Rng, to: Address) -> TemplateOutput;
}

Every call to next() returns a TemplateOutput with to, value, data, gas_limit, estimated_gas, and template_name.

Built-in Templates

| Template | Est. Gas | Description |
|---|---|---|
| simple-transfer | 21,000 | ETH value transfer |
| noop | 21,000 | Zero-value self-transfer |
| erc20-transfer | ~65,000 | ERC-20 transfer() |
| erc20-approve | ~46,000 | ERC-20 approve() |
| storage-write | ~44,000 | Single storage slot write |
| storage-write-heavy | ~200,000 | Multiple storage slot writes |
| compute-heavy | ~200,000 | CPU-intensive loop |
| calldata-heavy | configurable | Large calldata payload |

All calldata includes the CALLDATA_MAGIC bytes [0x47, 0x48, 0x4D, 0x52] (“GHMR”) for on-chain attribution.

Template Weighting

Scenarios assign integer weights to templates. The engine normalizes weights into a probability distribution and selects templates proportionally.

templates:
  - name: simple-transfer
    weight: 70
  - name: erc20-transfer
    weight: 30

This produces ~70% transfers and ~30% ERC-20 calls by count, though the gas distribution will differ due to different per-tx gas costs.
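
Weighted selection can be sketched as a walk over cumulative weights. The engine draws from its seeded ChaCha8Rng; here the draw is a plain u64 so the function is deterministic to test, and the name `pick_template` is an assumption:

```rust
/// Pick a template index from integer weights, given a uniform draw.
/// A draw in [0, total) maps proportionally onto the weight buckets.
fn pick_template(weights: &[u64], draw: u64) -> usize {
    let total: u64 = weights.iter().sum();
    let mut point = draw % total;
    for (i, &w) in weights.iter().enumerate() {
        if point < w {
            return i;
        }
        point -= w;
    }
    unreachable!("non-empty weights with a non-zero total always select")
}
```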

Account Pool

Test accounts are derived deterministically from an HD wallet mnemonic.

Derivation path: m/44'/60'/0'/0'/0x4748/{index}

The 0x4748 segment is the GasHammer DNA marker (“GH” in hex). This makes GasHammer-generated accounts identifiable on-chain.

Key properties:

  • Deterministic: same mnemonic + same index = same private key.
  • Partitioned: accounts are divided across edges with no overlap.
  • Nonce-tracking: each Account maintains a local nonce counter, avoiding on-chain queries. On tx failure, the nonce is not incremented.
  • Thread-safe: Account::next_nonce() uses AtomicU64 for lock-free increment.
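
The nonce-tracking properties above can be sketched with std atomics; a minimal illustration, not the actual Account type, and `reset_to` is an assumed name for the failure-recovery path:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Lock-free local nonce counter: fetch_add hands out the nonce to use and
/// advances the counter in one atomic step, with no on-chain query.
struct Account {
    nonce: AtomicU64,
}

impl Account {
    fn new(start_nonce: u64) -> Self {
        Self { nonce: AtomicU64::new(start_nonce) }
    }

    fn next_nonce(&self) -> u64 {
        self.nonce.fetch_add(1, Ordering::SeqCst)
    }

    /// On transaction failure, roll the counter back so the nonce is reused.
    fn reset_to(&self, nonce: u64) {
        self.nonce.store(nonce, Ordering::SeqCst);
    }
}
```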

Deterministic PRNG

All randomness in workload generation (template selection, parameter variation, account assignment) flows through a seeded ChaCha8Rng. The same seed value with the same scenario configuration produces the same transaction sequence, enabling reliable comparison across runs.

The seed is set in the scenario:

workload:
  seed: 42

Default seed is 0 if omitted.

Fault Injection

GasHammer injects controlled faults to test resilience under adverse conditions. The fault system is built around a pluggable adapter architecture with safety rails to prevent leaked faults.

Ref: RFC-0008.

Fault Types

| Type | Adapter | Description |
|---|---|---|
| NetworkLatency | netem | Added delay on network interface |
| PacketLoss | netem | Random packet drops |
| BandwidthLimit | netem | Throttled throughput |
| NetworkJitter | netem | Variable delay |
| ConnectionReset | iptables | TCP RST on matching connections |
| PortBlock | iptables | DROP on a target port |
| FeedDisconnect | (planned) | Kill WebSocket feed connection |
| RpcSlowResponse | (planned) | Inject RPC response delay |
| RpcErrorInjection | (planned) | Return errors from RPC |

Adapter Architecture

Every adapter implements the FaultAdapter trait:

trait FaultAdapter: Send + Sync {
    fn name(&self) -> &str;
    fn supported_faults(&self) -> Vec<FaultType>;
    async fn preflight_check(&self) -> PreflightResult;
    async fn inject(&self, spec: FaultSpec) -> Result<FaultHandle, String>;
    async fn clear(&self, handle_id: Uuid) -> Result<(), String>;
    async fn clear_all(&self) -> Result<u32, String>;
}

Lifecycle:

  1. preflight_check() — verify prerequisites (binary exists, permissions).
  2. inject(spec) — apply the fault, return a FaultHandle with a UUID.
  3. clear(handle_id) — remove a specific fault by handle.
  4. clear_all() — remove all faults managed by this adapter.

Netem Adapter

Wraps Linux tc qdisc add dev <iface> root netem .... Requires CAP_NET_ADMIN.

Parameters:

| Parameter | Fault Types | Description |
|---|---|---|
| interface | all | Network interface (default: eth0) |
| delay_ms | Latency, Jitter | Delay in milliseconds |
| jitter_ms | Jitter | Jitter variation |
| loss_pct | PacketLoss | Loss percentage |
| rate | BandwidthLimit | Bandwidth cap (e.g., 1mbit) |

Cleanup: tc qdisc del dev <iface> root netem.

Iptables Adapter

Wraps iptables -A INPUT .... Requires CAP_NET_ADMIN.

Parameters:

| Parameter | Fault Types | Description |
|---|---|---|
| port | PortBlock, ConnectionReset | Target port |
| protocol | all | tcp or udp (default: tcp) |

  • ConnectionReset injects -j REJECT --reject-with tcp-reset.
  • PortBlock injects -j DROP.

Cleanup: replays the same rule args with -D instead of -A.

Fault Manager

FaultManager routes inject() calls to the correct adapter based on the fault type and tracks all active faults.

Auto-clear: When a FaultSpec includes a duration, the manager spawns a background task that calls adapter.clear(handle_id) after the duration elapses. The adapters are wrapped in Arc<Vec<Box<dyn FaultAdapter>>> to enable safe sharing across the spawn boundary.

Safety invariant: every injected fault is tracked by handle ID. clear_all() iterates all adapters and removes all active faults. This is called during shutdown to prevent fault leakage.

Fault Timeline

A FaultTimeline is a sequence of scheduled fault events, defined in the scenario SDL:

fault_schedule:
  - at_secs: 60
    action: inject
    fault:
      type: latency
      target: sequencer-rpc
      latency_ms: 200
  - at_secs: 120
    action: clear
    fault:
      type: latency
      target: sequencer-rpc

Internally, each timeline event specifies:

  • offset_ms — time from run start.
  • fault_name — human-readable label for correlation.
  • target_edgesAll, Region(name), or Specific(vec![uuid]).
  • actionInject(FaultSpec) or Clear { fault_name }.

The timeline can restrict execution to specific environments via allowed_environments and blocked_environments to prevent accidental injection in production.

Preflight Checks

Before a run starts, the fault manager calls preflight_check() on every adapter. The result reports:

struct PreflightResult {
    adapter_name: String,
    ready: bool,
    issues: Vec<String>,
}

If any required adapter is not ready, the run is blocked.

Correctness Oracle

The oracle verifies that the system under test behaves correctly during and after each run. It evaluates invariants and produces verdicts backed by evidence.

Ref: RFC-0007.

Invariant Framework

Every correctness check implements the Invariant trait:

trait Invariant: Send + Sync {
    fn check_type(&self) -> CheckType;
    fn name(&self) -> &'static str;
    async fn check(&self, observations: &[Observation]) -> Vec<Verdict>;
}

Check Types

| Type | When | Use Case |
|---|---|---|
| Live | During the run, at check_interval_secs | Detect issues early |
| PostRun | After the run completes | Full-dataset analysis |
| Both | Both times | Double coverage |

Built-in Invariants

| Invariant | Description |
|---|---|
| balance-conservation | Sum of all account balance changes equals sum of gas fees paid. No ETH created or destroyed. |
| nonce-monotonicity | Each account’s nonce increases by exactly 1 per included transaction. No gaps, no duplicates. |
| gas-accounting | gas_used never exceeds gas_limit. Reported gas fee matches gas_used * effective_gas_price. |

Verdicts and Evidence

Each check produces a Verdict:

struct Verdict {
    check_name: String,
    passed: bool,
    evidence: Vec<Evidence>,
}

struct Evidence {
    description: String,
    expected: String,
    actual: String,
    tx_hash: Option<String>,
    block_number: Option<u64>,
    raw_data: Option<String>,
}

A passing verdict has passed: true and empty evidence. A failing verdict includes one or more Evidence entries that identify the exact transaction, block, expected value, and actual value.

Post-Run Aggregation

After all checks complete, the oracle produces a CorrectnessVerdict:

struct CorrectnessVerdict {
    overall_pass: bool,
    summaries: Vec<CheckSummary>,
}

struct CheckSummary {
    check_name: String,
    total_checked: u64,
    passed: u64,
    failed: u64,
    skipped: u64,
    violations: Vec<Evidence>,
}

MAX_VIOLATIONS_PER_CHECK (100) caps evidence collection to prevent unbounded memory use when many transactions fail the same check.

overall_pass is true only if every check has zero failures.
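
The aggregation and violation cap above can be sketched with evidence simplified to strings (the real CheckSummary carries Evidence structs); the helper `record_violation` is an assumed name:

```rust
const MAX_VIOLATIONS_PER_CHECK: usize = 100;

/// Simplified summary for the sketch.
struct CheckSummary {
    failed: u64,
    violations: Vec<String>,
}

/// overall_pass: true only if every check has zero failures.
fn overall_pass(summaries: &[CheckSummary]) -> bool {
    summaries.iter().all(|s| s.failed == 0)
}

/// Count every failure, but cap the stored evidence to bound memory.
fn record_violation(summary: &mut CheckSummary, evidence: String) {
    summary.failed += 1;
    if summary.violations.len() < MAX_VIOLATIONS_PER_CHECK {
        summary.violations.push(evidence);
    }
}
```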

Observations

The oracle does not query the chain directly. It receives Observation structs distilled from telemetry events:

struct Observation {
    tx_hash: String,
    from: String,
    to: String,
    value: u64,
    gas_limit: u64,
    gas_used: Option<u64>,
    effective_gas_price: Option<u64>,
    nonce: u64,
    block_number: Option<u64>,
    status: Option<bool>,
}

This decouples the oracle from the transport layer and makes checks testable without a live node.
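
For example, one half of the gas-accounting invariant can be checked over plain observations with no node in sight. A trimmed-down sketch, not the shipped invariant:

```rust
/// Trimmed-down Observation with just the fields this check reads.
struct Observation {
    gas_limit: u64,
    gas_used: Option<u64>,
}

/// gas-accounting sketch: gas_used must never exceed gas_limit.
/// Returns indices of violating observations; unmined txs (gas_used: None)
/// are skipped.
fn check_gas_accounting(observations: &[Observation]) -> Vec<usize> {
    observations
        .iter()
        .enumerate()
        .filter_map(|(i, o)| match o.gas_used {
            Some(used) if used > o.gas_limit => Some(i),
            _ => None,
        })
        .collect()
}
```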

Extending the Oracle

To add a custom invariant:

  1. Implement the Invariant trait.
  2. Register it with the oracle during run setup.
  3. Reference it by name in the scenario’s oracle.invariants list.

Testing convention: every invariant must have both a passing and a failing test case, per CLAUDE.md testing rule #8.

Telemetry Pipeline

The telemetry pipeline captures, transports, and stores every significant event during a run. Events flow from edges to the hive, where they are written to Parquet files for post-run analysis.

Ref: RFC-0006.

Event Model

Every event is a TelemetryEvent:

struct TelemetryEvent {
    event_id: Uuid,
    edge_id: EdgeId,
    run_id: RunId,
    phase_index: u32,
    monotonic_ns: u64,
    wall_ns: u64,
    payload: EventPayload,
}

Two timestamps are recorded: monotonic_ns (for ordering within an edge) and wall_ns (for cross-edge correlation).
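
Capturing both clocks at event creation can be sketched with std types; the function name `stamp` is an assumption:

```rust
use std::time::{Instant, SystemTime, UNIX_EPOCH};

/// Capture both clocks: a monotonic offset from edge start for ordering
/// within the edge, and wall-clock nanoseconds for cross-edge correlation.
fn stamp(edge_start: Instant) -> (u64, u64) {
    let monotonic_ns = edge_start.elapsed().as_nanos() as u64;
    let wall_ns = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock set before 1970")
        .as_nanos() as u64;
    (monotonic_ns, wall_ns)
}
```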

Event Payloads

| Payload | Fields | Meaning |
|---|---|---|
| TxSubmitted | hash, template, gas, account, nonce | Transaction sent to sequencer |
| TxAccepted | hash, latency_ms | Sequencer acknowledged |
| TxIncluded | hash, block, gas_used, latency_ms | Transaction in a block |
| TxFailed | hash, error, phase | Submission or confirmation failure |
| TxTimeout | hash, timeout_ms | Receipt polling exceeded deadline |
| FeedConnected | url | WebSocket connection established |
| FeedDisconnected | url, reason | WebSocket connection lost |
| FeedGap | expected_seq, actual_seq | Sequence number gap detected |
| FeedStall | duration_ms | No messages for threshold duration |
| FaultInjected | fault_type, target, handle_id | Fault activated |
| FaultCleared | handle_id | Fault removed |

Edge-Side Buffer

Each edge maintains a lock-free ring buffer (default capacity: 1,000,000 events). The buffer uses tokio::sync::mpsc with a bounded channel.

Backpressure: If the buffer is full, the oldest events are dropped. A counter (gashammer_events_dropped_total) tracks loss. This guarantees the edge never blocks on telemetry — transaction submission always takes priority.

Transport

A shipper task batches events from the ring buffer and streams them to the hive via gRPC.

Configuration (PipelineConfig):

| Field | Default | Description |
|---|---|---|
| buffer_capacity | 1,000,000 | Ring buffer size |
| batch_size | 1,000 | Events per gRPC batch |
| flush_interval_ms | 100 | Max time before flushing a partial batch |

Metrics:

| Counter | Description |
|---|---|
| gashammer_events_generated | Events created on this edge |
| gashammer_events_shipped | Events sent to hive |
| gashammer_events_dropped | Events lost to backpressure |
| gashammer_batches_shipped | gRPC batches sent |

Hive-Side Storage

The hive receives event batches and writes them to Apache Parquet files.

Partitioning: {data_dir}/runs/{run_id}/{hour}.parquet

Rotation: by file size (default 256 MB) or time (default 1 hour), whichever comes first.
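
The rotation decision reduces to a two-condition check; a minimal sketch mirroring the documented defaults, with `should_rotate` as an assumed name:

```rust
use std::time::Duration;

/// Close the current Parquet file when either limit trips, whichever first.
fn should_rotate(bytes_written: u64, open_for: Duration) -> bool {
    const MAX_BYTES: u64 = 256 * 1024 * 1024; // 256 MB
    const MAX_AGE: Duration = Duration::from_secs(3600); // 1 hour
    bytes_written >= MAX_BYTES || open_for >= MAX_AGE
}
```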

Parquet metadata: Each file footer includes DNA provenance fields:

| Key | Value |
|---|---|
| gashammer.version | Software version |
| gashammer.build | Build SHA |
| gashammer.run_id | Associated run UUID |
| gashammer.copyright | BSL-1.1 notice |

Parquet’s columnar format enables efficient analytical queries over telemetry data (e.g., computing latency percentiles across millions of events).

Reporting Engine

The reporting engine transforms raw telemetry into actionable results: latency analysis, capacity envelopes, regression detection, and release gate verdicts.

Ref: RFC-0009.

Run Report

After a run completes, the engine reads Parquet-stored events and produces a RunReport containing:

  • Latency percentiles — p50, p90, p95, p99, p99.9 for submission, acceptance, and inclusion latencies.
  • Throughput time series — gas/sec and tx/sec sampled at 1-second intervals.
  • Error breakdown — failure counts by error type.
  • Oracle verdicts — pass/fail per invariant with evidence.
  • Capacity envelope — if applicable (ramp scenarios).
  • Regression analysis — if a baseline is provided.

Reports are available in JSON and Markdown formats via GET /runs/{id}/report.

Capacity Envelope

The capacity envelope defines the safe operating range of the system under test. It is computed from ramp-to-saturation scenarios where gas rate increases progressively.

struct CapacityEnvelope {
    safe_sustained_gas_sec: u64,
    degradation_onset_gas_sec: u64,
    instability_threshold_gas_sec: u64,
    first_bottleneck: BottleneckClassification,
    bottleneck_confidence: Confidence,
    recovery_half_life_sec: f64,
    validator_drift_threshold_gas_sec: Option<u64>,
}

Envelope Boundaries

| Boundary | Meaning |
|---|---|
| Safe sustained | Gas rate the system handles indefinitely with acceptable latency |
| Degradation onset | Rate where p99 latency begins climbing above baseline |
| Instability threshold | Rate where errors appear or latency becomes unbounded |

Bottleneck Classification

The engine identifies the first bottleneck from infrastructure metrics:

| Classification | Indicator |
|---|---|
| SequencerCpu | CPU > 80% at degradation onset |
| SequencerMemory | Memory pressure detected |
| SequencerNetwork | Network saturation |
| BatchPosterLag | Batch posting falls behind |
| ValidatorReplayLag | Validator cannot keep up with state |
| RpcSaturation | RPC gateway saturated |
| DiskIo | Disk IOPS limit reached |
| GasPriceEscalation | Base fee escalating under load |
| Unknown | No clear indicator |

Confidence is rated High, Medium, or Low based on the quality and consistency of the signal.

Data Points

The envelope is computed from two input series:

  • RampDataPoint — gas rate, measured metric value, and time offset. One per observation during the ramp.
  • InfraMetricsPoint — CPU %, memory %, disk IOPS, network bytes/sec, base fee, batch post age, validator lag. Sampled at the same interval.

Regression Detection

When a baseline run ID is provided, the engine compares the current run against the baseline and flags regressions.

Comparison dimensions:

| Metric | Regression Threshold |
|---|---|
| p50 latency | >10% increase |
| p99 latency | >20% increase |
| Throughput | >10% decrease |
| Error rate | Any increase from 0%, or >50% increase |
| Safe sustained gas rate | >10% decrease |

Results include the baseline value, current value, delta percentage, and whether the regression is flagged.
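
The threshold checks above reduce to a signed percentage delta; an illustrative sketch with assumed function names:

```rust
/// Percentage change from baseline to current; positive = increase.
fn delta_pct(baseline: f64, current: f64) -> f64 {
    (current - baseline) / baseline * 100.0
}

/// p99 latency regresses on a >20% increase.
fn p99_regressed(baseline_ms: f64, current_ms: f64) -> bool {
    delta_pct(baseline_ms, current_ms) > 20.0
}

/// Throughput regresses on a >10% decrease.
fn throughput_regressed(baseline: f64, current: f64) -> bool {
    delta_pct(baseline, current) < -10.0
}
```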

Report Formats

| Format | Content-Type | Use Case |
|---|---|---|
| JSON | application/json | CI/CD pipelines, programmatic access |
| Markdown | text/markdown | Human review, PR comments, wiki |

Both formats include the complete data — percentiles, time series, verdicts, envelope, and regression results.

Data Flow

This page describes how data moves through GasHammer during a run.

Event Lifecycle

Every significant action produces a telemetry event. Events flow from edges to the hive, where they are stored and analyzed.

Edge                           Network              Hive
────                           ───────              ────

TxSubmitted ──────────────────────────────────────▶ Ring Buffer
  (hash, template, gas,                              │
   account, timestamp)                                │
                                                      ▼
TxConfirmed ──────────────────────────────────────▶ Parquet Writer
  (hash, block, gas_used,                            │
   latency_ms)                                       │
                                                     ▼
FeedCorrelation ──────────────────────────────────▶ Oracle Evaluator
  (hash, inclusion_latency_ms,                       │
   sequence_number)                                  │
                                                     ▼
OracleVerdict ────────────────────────────────────▶ Report Aggregator
  (invariant, pass/fail,                             │
   evidence)                                         │
                                                     ▼
                                               RunReport (JSON/Markdown)

Telemetry Transport

Events are transported from edges to the hive via gRPC streaming:

  1. Edge batches events from its ring buffer (configurable batch size, default 1024)
  2. Edge sends the batch to the hive via a bidirectional gRPC stream
  3. Hive acknowledges receipt (enables backpressure)
  4. Hive writes events to Parquet files partitioned by run_id and hourly time buckets

Parquet Storage

Telemetry data is stored in Apache Parquet format for efficient columnar access:

  • Partitioning: {data_dir}/runs/{run_id}/{hour}.parquet
  • Rotation: by size (default 256 MB) or time (default 1 hour)
  • Metadata: each Parquet file footer includes GasHammer DNA fields:
    • gashammer.version — software version
    • gashammer.build — build SHA
    • gashammer.run_id — associated run
    • gashammer.copyright — BSL-1.1 notice

Report Generation

After a run completes, the report engine reads events from Parquet and computes:

| Metric | Source Events | Output |
|---|---|---|
| Latency percentiles | TxConfirmed | p50, p90, p95, p99, p99.9 |
| Throughput time series | TxSubmitted, TxConfirmed | gas/sec and tx/sec over time |
| Inclusion latency | FeedCorrelation | Feed inclusion timing |
| Revert rate | TxConfirmed (status=reverted) | % of failed transactions |
| Correctness verdicts | OracleVerdict | Pass/fail per invariant |
| Capacity envelope | All latency events | Safe operating envelope boundaries |

Metrics (Prometheus)

Both hive and edge expose Prometheus metrics on their configured ports. All metrics use the gashammer_ prefix.

Edge Metrics

| Metric | Type | Description |
|---|---|---|
| gashammer_tx_submitted_total | counter | Transactions submitted |
| gashammer_tx_confirmed_total | counter | Transactions confirmed |
| gashammer_tx_failed_total | counter | Transactions failed |
| gashammer_tx_latency_seconds | histogram | Transaction confirmation latency |
| gashammer_feed_messages_total | counter | Feed messages received |
| gashammer_correlation_latency_seconds | histogram | Feed inclusion latency |
| gashammer_events_generated_total | counter | Telemetry events created |
| gashammer_events_dropped_total | counter | Events dropped due to buffer full |

Hive Metrics

| Metric | Type | Description |
|---|---|---|
| gashammer_edges_active | gauge | Currently connected edges |
| gashammer_runs_active | gauge | Currently running tests |
| gashammer_events_received_total | counter | Events received from edges |
| gashammer_oracle_checks_total | counter | Invariant checks performed |
| gashammer_oracle_violations_total | counter | Invariant violations detected |

Error Codes

Every GasHammer error carries a structured code in the format GH-Exxx. The prefix GH-E is part of GasHammer’s structural DNA (RFC-0011 §8.3). Codes are unique across the entire codebase and grouped by crate.

Code Ranges

| Range | Crate | Category |
|---|---|---|
| E001–E099 | gashammer-common | Shared infrastructure |
| E100–E199 | gashammer-nitro | Nitro protocol adapters |
| E200–E299 | gashammer-edge | Edge runtime |
| E300–E399 | gashammer-hive | Hive control plane |
| E400–E499 | gashammer-workload | Workload engine |
| E500–E599 | gashammer-telemetry | Telemetry pipeline |
| E600–E699 | gashammer-oracle | Correctness oracle |
| E700–E799 | gashammer-fault | Fault injection |
| E800–E899 | gashammer-report | Reporting engine |
| E900–E949 | gashammer-scenario | SDL parser/compiler |
| E950–E979 | gashammer-docgen | Documentation engine |
| E980–E999 | gashammer-testenv | Test environment |

Complete Registry

Common (E001–E099)

| Code | Name | Description |
|---|---|---|
| GH-E001 | ConfigError | Configuration loading or validation failed |
| GH-E002 | SerializationError | Serialization or deserialization failed |
| GH-E003 | InvariantViolation | An internal invariant was violated |

Nitro (E100–E199)

| Code | Name | Description |
|---|---|---|
| GH-E100 | RpcError | JSON-RPC call failed |
| GH-E101 | FeedError | Sequencer feed connection or parsing failed |
| GH-E102 | L1ContractError | L1 contract read failed |

Edge (E200–E299)

| Code | Name | Description |
|---|---|---|
| GH-E200 | EdgeRegistrationFailed | Edge failed to register with the hive |
| GH-E201 | TxPipelineError | Transaction pipeline error |
| GH-E202 | FeedCorrelationError | Feed correlation error |

Hive (E300–E399)

| Code | Name | Description |
|---|---|---|
| GH-E300 | OrchestrationError | Run orchestration failed |
| GH-E301 | EdgeRegistryError | Edge registry operation failed |
| GH-E302 | ApiError | REST API error |

Workload (E400–E499)

| Code | Name | Description |
|---|---|---|
| GH-E400 | TemplateError | Transaction template construction failed |
| GH-E401 | RateControlError | Rate control error |
| GH-E402 | AccountPoolError | Account pool exhausted or nonce error |

Telemetry (E500–E599)

| Code | Name | Description |
|---|---|---|
| GH-E500 | EventShippingError | Event shipping failed |
| GH-E501 | StorageWriteError | Parquet storage write failed |
| GH-E502 | MetricError | Metric registration or recording failed |

Oracle (E600–E699)

| Code | Name | Description |
|---|---|---|
| GH-E600 | CheckExecutionError | Invariant check execution failed |
| GH-E601 | EvidenceError | Evidence collection failed |
| GH-E602 | VerdictAggregationError | Verdict aggregation error |

Fault (E700–E799)

| Code | Name | Description |
|---|---|---|
| GH-E700 | FaultInjectionError | Fault injection failed |
| GH-E701 | FaultReversionError | Fault reversion (clear) failed |
| GH-E702 | FaultSchedulingError | Fault scheduling error |

Report (E800–E899)

| Code | Name | Description |
|---|---|---|
| GH-E800 | ReportGenerationError | Report generation failed |
| GH-E801 | RegressionAnalysisError | Regression analysis failed |
| GH-E802 | ReportTemplateError | Template rendering failed |

Scenario (E900–E949)

| Code | Name | Description |
|---|---|---|
| GH-E900 | ScenarioParseError | Scenario YAML parsing failed |
| GH-E901 | ScenarioValidationError | Scenario validation failed |
| GH-E902 | ScenarioCompilationError | Scenario compilation failed |

Docgen (E950–E979)

| Code | Name | Description |
|---|---|---|
| GH-E950 | SourceParseError | Source file parsing failed |
| GH-E951 | DocgenTemplateError | Template rendering failed |
| GH-E952 | DocgenOutputError | mdBook output write failed |

Testenv (E980–E999)

| Code | Name | Description |
|---|---|---|
| GH-E980 | ContainerError | Container startup failed |
| GH-E981 | HealthCheckError | Devnet health check failed |
| GH-E982 | ContractDeployError | Contract deployment failed |

Using Error Codes

All error types use thiserror and embed the error code in the Display impl:

#[derive(Debug, thiserror::Error)]
pub enum NitroError {
    #[error("[{code}] {0}", code = ErrorCode::RpcError.code())]
    Transport(String),
    // ...
}

In log output and API responses, errors appear as:

[GH-E100] RPC call failed: connection refused

Search logs for a specific code with grep "GH-E100" to find all instances of a particular error class.

Crate Map

GasHammer is organized as a Cargo workspace with 12 crates. Each crate has a single responsibility and well-defined dependency boundaries.

Dependency Graph

                        gashammer-common
                       /    |    |    \
                      /     |    |     \
              nitro  work  tele  scenario  oracle  fault  report  docgen  testenv
               |      |     |       |        |       |      |       |       |
               edge   |     |       |        |       |      |       |       |
               |      |     |       |        |       |      |       |       |
               hive ──┘     |       |        |       |      |       |       |
                            |       |        |       |      |       |       |
                            └───────┴────────┴───────┴──────┘       |       |
                                            |                       |       |
                                       (all share common)           |       |

Rule: All crates depend on gashammer-common. No crate depends on a binary crate. The edge crate depends on nitro and workload. The hive crate depends on common only.

Crate Details

gashammer-common

Shared types, configuration, build info, and the error code registry.

| Export | Description |
|---|---|
| RunId, EdgeId, ScenarioId, PhaseId | UUID newtype wrappers |
| GasRate, TxCount | Numeric newtype wrappers |
| ErrorCode, ErrorCategory | Structured error codes |
| BuildInfo, GASHAMMER_CANARY | Build metadata and DNA |
| gashammer::v1::* | Generated gRPC/protobuf types |

gashammer-nitro

Nitro protocol adapters. Wire-protocol-only — never imports Nitro crates.

| Export | Description |
|---|---|
| NitroRpcProvider | JSON-RPC provider with retry |
| FeedConsumer, FeedConfig | WebSocket feed consumer |
| L1ContractReader, L1Config | SequencerInbox and RollupCore reader |
| L1Monitor, L1Event | Background L1 polling task |
| NitroError | Nitro-specific errors (E100–E102) |

gashammer-edge

Edge worker runtime and binary.

| Export | Description |
|---|---|
| EdgeRuntime, RuntimeState | Edge lifecycle state machine |
| EdgeConfig | Edge configuration |
| TxPipeline | Transaction submission pipeline |
| FeedCorrelator | Feed-to-tx correlation |
| Binary: gashammer-edge | Edge entry point with CLI |

gashammer-hive

Hive control plane and binary.

| Export | Description |
|---|---|
| RunOrchestrator, RunState | Run lifecycle state machine |
| EdgeRegistry, EdgeStatus | Edge tracking with reaper |
| AppState, router() | axum REST API |
| Binary: gashammer-hive | Hive entry point with CLI |

gashammer-workload

Workload generation engine.

| Export | Description |
|---|---|
| WorkloadEngine, EngineConfig | Core engine with rate control |
| GasRateMode | Sustained, Ramped, Bursty, Custom |
| Parameterizer trait | Template interface |
| AccountPool, Account | Deterministic HD wallet accounts |
| builtin_templates() | 8 built-in tx templates |

gashammer-telemetry

Event model and transport.

| Export | Description |
|---|---|
| TelemetryEvent, EventPayload | Event types |
| PipelineConfig, PipelineMetrics | Pipeline configuration |

gashammer-oracle

Correctness verification.

| Export | Description |
|---|---|
| Invariant trait | Check interface |
| Verdict, Evidence | Check results |
| CheckSummary, CorrectnessVerdict | Aggregated results |
| Observation | Simplified event for checks |

gashammer-fault

Fault injection.

| Export | Description |
|---|---|
| FaultAdapter trait | Adapter interface |
| FaultManager | Routing and lifecycle |
| NetemAdapter | tc netem wrapper |
| IptablesAdapter | iptables wrapper |
| FaultTimeline | Scheduled fault events |

gashammer-report

Report generation.

| Export | Description |
|---|---|
| RunReport | Complete run report |
| CapacityEnvelope | Safe operating envelope |
| BottleneckClassification | Bottleneck identification |
| Regression detection | Baseline comparison |

gashammer-scenario

Scenario Definition Language.

| Export | Description |
|---|---|
| ScenarioDefinition | Parsed SDL structure |
| validate() | Scenario validation |
| Compiler | SDL to run configuration |

gashammer-docgen

Documentation engine.

| Export | Description |
|---|---|
| parse_source() | Rust source file parser (via syn) |
| ModuleDoc, ItemDoc | Documentation model |

gashammer-testenv

Testcontainers-based Nitro devnet.

| Export | Description |
|---|---|
| NitroDevnet | Devnet orchestrator (Minimal/Standard/Full) |
| LifecycleTest | End-to-end test builder |
| TestContracts | Compiled test contract bytecode |
| Container images | L1 Geth, Sequencer, Feed Relay |

Wire Protocols

GasHammer communicates with Nitro and between its own components through four wire protocols. This page documents each protocol’s usage, message format, and configuration.

JSON-RPC (Edge → Nitro)

Standard Ethereum JSON-RPC over HTTP. Used for transaction submission and state queries.

Endpoint: Sequencer (port 8547) or Gateway (port 8547).

Methods Used

| Method | Direction | Purpose |
|---|---|---|
| eth_sendRawTransaction | Edge → Sequencer | Submit signed transactions |
| eth_getTransactionReceipt | Edge → Gateway | Poll for confirmation |
| eth_blockNumber | Edge → Gateway | Current L2 block height |
| eth_chainId | Edge → Sequencer | Health check and chain validation |
| eth_gasPrice | Edge → Gateway | Current gas price |
| eth_getTransactionCount | Edge → Gateway | Nonce recovery on failure |
| eth_call | Edge → Gateway | Read-only contract calls |
| eth_getLogs | Monitor → L1 | L1 event queries |

Request Format

{
  "jsonrpc": "2.0",
  "method": "eth_sendRawTransaction",
  "params": ["0x<signed_tx_rlp>"],
  "id": 1
}
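
As a sketch, building this request body in Rust is a single format call. The helper name below is illustrative, not part of the GasHammer API:

```rust
// Hypothetical helper: renders the JSON-RPC envelope shown above for a
// raw signed transaction (hex-encoded RLP) and a request id.
fn send_raw_tx_body(signed_tx_rlp_hex: &str, id: u64) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","method":"eth_sendRawTransaction","params":["{}"],"id":{}}}"#,
        signed_tx_rlp_hex, id
    )
}

fn main() {
    let body = send_raw_tx_body("0xf86b01", 1);
    assert!(body.contains("eth_sendRawTransaction"));
    println!("{body}");
}
```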

Headers

Every request includes:

| Header | Value |
|---|---|
| User-Agent | GasHammer/&lt;version&gt; (&lt;build_sha&gt;) |
| X-Powered-By | GasHammer/&lt;version&gt; |
| X-GasHammer-Version | &lt;version&gt; |
| X-GasHammer-Build | &lt;build_sha&gt; |
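
The header values can be composed from the build metadata; a minimal sketch, where the constants stand in for GASHAMMER_VERSION and GASHAMMER_BUILD_SHA:

```rust
// Placeholder values; the real ones come from build-time metadata in
// gashammer-common/src/build_info.rs.
const VERSION: &str = "0.1.0";
const BUILD_SHA: &str = "abc1234";

// Renders the User-Agent value in the format documented above.
fn user_agent() -> String {
    format!("GasHammer/{VERSION} ({BUILD_SHA})")
}

fn main() {
    assert_eq!(user_agent(), "GasHammer/0.1.0 (abc1234)");
    println!("{}", user_agent());
}
```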

WebSocket Feed (Edge → Feed Relay)

The sequencer feed is a one-way WebSocket stream from the feed relay (port 9642) to the edge. Messages are JSON-encoded BroadcastMessage envelopes.

Message Format

{
  "version": 1,
  "messages": [
    {
      "sequenceNumber": 12345,
      "message": {
        "header": {
          "kind": 3,
          "poster": "0x...",
          "blockNumber": 100,
          "timestamp": 1700000000,
          "requestId": "0x...",
          "l1BaseFeeEstimate": "0x..."
        },
        "l2Msg": "0x..."
      },
      "signature": "0x..."
    }
  ],
  "confirmedSequenceNumberMessage": {
    "sequenceNumber": 12340
  }
}

Reconnection

On disconnect, the feed consumer reconnects with exponential backoff:

| Attempt | Delay |
|---|---|
| 1 | 1s + jitter |
| 2 | 2s + jitter |
| 3 | 4s + jitter |
| n | min(2^(n-1), 30)s + jitter |
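
The schedule above can be sketched as follows; the function name is illustrative and the jitter source is left to the caller:

```rust
use std::time::Duration;

// Exponential backoff: attempt 1 -> 1s, doubling each attempt, capped at
// 30s, plus caller-supplied jitter.
fn backoff_delay(attempt: u32, jitter_ms: u64) -> Duration {
    let base = 2u64.saturating_pow(attempt.saturating_sub(1)).min(30);
    Duration::from_secs(base) + Duration::from_millis(jitter_ms)
}

fn main() {
    assert_eq!(backoff_delay(1, 0), Duration::from_secs(1));
    assert_eq!(backoff_delay(3, 0), Duration::from_secs(4));
    // Large attempt counts hit the 30s cap.
    assert_eq!(backoff_delay(10, 0), Duration::from_secs(30));
}
```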

gRPC (Edge ↔ Hive)

Internal communication between edges and the hive uses gRPC with Protocol Buffers. Definitions are in proto/gashammer/v1/.

Service: HiveEdge

| RPC | Direction | Description |
|---|---|---|
| RegisterEdge | Edge → Hive | Register with capabilities |
| Heartbeat | Edge → Hive | Periodic health signal |
| StreamTelemetry | Edge → Hive | Bidirectional event stream |
| ControlStream | Hive → Edge | Run start/stop commands |

Proto Files

| File | Contents |
|---|---|
| hive_edge.proto | HiveEdge service, RegisterEdgeRequest/Response, ControlMessage |
| telemetry.proto | TelemetryEvent, TelemetryBatch, event payload messages |
| types.proto | Shared types: RunId, EdgeId, PhaseConfig, EdgeStatus |

Transport

  • Default port: 9090
  • TLS: mTLS recommended for production (configured via tls.cert_path, tls.key_path, tls.ca_path)
  • Keepalive: 30s interval

L1 JSON-RPC (Monitor → L1 Geth)

The L1 monitor polls an Ethereum L1 node for Nitro contract events.

Methods Used

| Method | Purpose |
|---|---|
| eth_blockNumber | Current L1 block for range queries |
| eth_getLogs | Fetch SequencerBatchDelivered and assertion events |
| eth_call | Read batchCount() and latestConfirmed() |

Event Signatures

| Event | Signature Hash |
|---|---|
| SequencerBatchDelivered(uint256,bytes32,bytes32,bytes32,uint256,(uint64,uint64),uint8) | Computed via keccak256 |
| AssertionCreated(bytes32,...) | Computed via keccak256 |
| AssertionConfirmed(bytes32,...) | Computed via keccak256 |

All signature hashes are verified non-zero and mutually distinct in unit tests.

DNA Provenance

GasHammer embeds provenance markers at every layer — binaries, HTTP traffic, on-chain calldata, telemetry storage, and source files. These markers enable attribution, auditing, and forensic identification.

Ref: RFC-0011.

Binary Canary

Every GasHammer binary contains a canary string in the .rodata section:

GasHammer — Adversarial load testing for rollup infrastructure.
Copyright (c) 2025-present Don Johnson.
https://github.com/copyleftdev/gashammer.
Licensed under BSL-1.1.

This string survives strip, UPX compression, and Docker layer squashing. Detect it with:

strings <binary> | grep GasHammer

The canary is defined as GASHAMMER_CANARY in gashammer-common/src/build_info.rs.
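
A programmatic equivalent of the strings pipeline can be sketched as follows; the constant here is abbreviated, the full canary being the string shown above:

```rust
// Abbreviated prefix of the canary; the full GASHAMMER_CANARY lives in
// gashammer-common/src/build_info.rs.
const CANARY_PREFIX: &[u8] = b"GasHammer";

// Scans an arbitrary byte buffer (e.g. a binary's contents) for the canary.
fn contains_canary(data: &[u8]) -> bool {
    data.windows(CANARY_PREFIX.len()).any(|w| w == CANARY_PREFIX)
}

fn main() {
    let blob = b"\x00\x01GasHammer canary\x00";
    assert!(contains_canary(blob));
    assert!(!contains_canary(b"\x00\x01\x02"));
}
```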

Build Metadata

Every binary embeds structured build metadata, accessible via --version:

| Field | Source |
|---|---|
| GASHAMMER_NAME | "GasHammer" |
| GASHAMMER_VERSION | Cargo.toml version |
| GASHAMMER_BUILD_SHA | Git SHA at build time |
| GASHAMMER_BUILD_TIME | RFC 3339 timestamp |
| GASHAMMER_COPYRIGHT | "Copyright (c) 2025-present Don Johnson" |
| GASHAMMER_LICENSE | "BSL-1.1" |
| GASHAMMER_URL | "https://github.com/copyleftdev/gashammer" |

HTTP Headers

Every outbound HTTP and WebSocket request includes:

| Header | Value |
|---|---|
| User-Agent | GasHammer/&lt;version&gt; (&lt;build_sha&gt;) |
| X-Powered-By | GasHammer/&lt;version&gt; |
| X-GasHammer-Version | Semantic version |
| X-GasHammer-Build | Short git SHA |

The hive REST API also returns X-Powered-By on every response.

On-Chain Markers

Calldata Magic Bytes

Every transaction generated by the workload engine includes the magic bytes 0x47484D52 (“GHMR”) in the calldata. This enables on-chain identification of GasHammer-generated transactions.

Defined as CALLDATA_MAGIC: [u8; 4] = [0x47, 0x48, 0x4D, 0x52] in gashammer-workload/src/templates.rs.
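
Tagging and detecting calldata can be sketched as below. Only the magic constant is taken from the source; the helper names, and the placement of the magic as a prefix, are assumptions for illustration:

```rust
// Mirrors CALLDATA_MAGIC from gashammer-workload/src/templates.rs ("GHMR").
const CALLDATA_MAGIC: [u8; 4] = [0x47, 0x48, 0x4D, 0x52];

// Hypothetical helper: prepend the magic to a payload.
fn tag_calldata(payload: &[u8]) -> Vec<u8> {
    let mut out = CALLDATA_MAGIC.to_vec();
    out.extend_from_slice(payload);
    out
}

// Hypothetical helper: identify GasHammer-generated calldata.
fn is_gashammer_tx(calldata: &[u8]) -> bool {
    calldata.starts_with(&CALLDATA_MAGIC)
}

fn main() {
    let data = tag_calldata(&[0xde, 0xad]);
    assert!(is_gashammer_tx(&data));
    assert_eq!(&data[..4], b"GHMR");
}
```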

Account Derivation Path

Test accounts use the HD derivation path:

m/44'/60'/0'/0'/0x4748/{index}

The 0x4748 segment encodes “GH” (GasHammer) in hex. This makes the derivation path itself a provenance marker. Defined as DERIVATION_MARKER: [u8; 2] = [0x47, 0x48] in gashammer-workload/src/account.rs.
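
Rendering the marked path for an account index can be sketched as follows; only DERIVATION_MARKER is from the source, and the formatting helper is illustrative:

```rust
// Mirrors DERIVATION_MARKER from gashammer-workload/src/account.rs ("GH").
const DERIVATION_MARKER: [u8; 2] = [0x47, 0x48];

// Hypothetical helper: render the HD path documented above for an index.
fn derivation_path(index: u32) -> String {
    let marker = u16::from_be_bytes(DERIVATION_MARKER);
    format!("m/44'/60'/0'/0'/0x{marker:04X}/{index}")
}

fn main() {
    assert_eq!(derivation_path(0), "m/44'/60'/0'/0'/0x4748/0");
    println!("{}", derivation_path(0));
}
```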

Telemetry Storage

Parquet file footers include metadata fields:

| Key | Value |
|---|---|
| gashammer.version | Software version |
| gashammer.build | Build SHA |
| gashammer.run_id | Associated run UUID |
| gashammer.copyright | BSL-1.1 notice |

Prometheus Metrics

All metrics use the gashammer_ prefix. This is enforced by convention and verified in tests.

Source Files

Every .rs file includes the BSL-1.1 license header:

// Copyright (c) 2025-present Don Johnson
// Licensed under the Business Source License 1.1 (the "License");
// ...

This is enforced by scripts/check-license-headers.sh in CI.

Docker Images

Container images include OCI labels:

| Label | Value |
|---|---|
| org.opencontainers.image.title | gashammer-hive / gashammer-edge |
| org.opencontainers.image.source | GitHub repository URL |
| org.opencontainers.image.vendor | Don Johnson |
| org.opencontainers.image.licenses | BSL-1.1 |

Summary

| Layer | Marker | Detection |
|---|---|---|
| Binary | GASHAMMER_CANARY | strings \| grep GasHammer |
| HTTP | X-Powered-By, User-Agent | Request/response headers |
| Calldata | 0x47484D52 | On-chain data inspection |
| Accounts | 0x4748 in derivation path | Derivation path analysis |
| Parquet | gashammer.* metadata | Parquet footer inspection |
| Metrics | gashammer_* prefix | Prometheus queries |
| Source | License header | check-license-headers.sh |
| Container | OCI labels | docker inspect |

Deployment

Docker Compose (Development)

The simplest deployment uses Docker Compose:

# docker-compose.yml
services:
  hive:
    image: ghcr.io/copyleftdev/gashammer-hive:latest
    ports:
      - "8080:8080"   # REST API
      - "9090:9090"   # gRPC
      - "9091:9091"   # Prometheus metrics
    volumes:
      - ./hive.toml:/etc/gashammer/hive.toml
      - gashammer-data:/var/lib/gashammer

  edge:
    image: ghcr.io/copyleftdev/gashammer-edge:latest
    environment:
      GASHAMMER_HIVE_ADDRESS: hive:9090
      GASHAMMER_NITRO_SEQUENCER_RPC: http://sequencer:8547
    volumes:
      - ./edge.toml:/etc/gashammer/edge.toml
    deploy:
      replicas: 3

volumes:
  gashammer-data:

Kubernetes (Helm)

For production deployments, use the Helm chart:

helm repo add gashammer https://copyleftdev.github.io/gashammer/charts
helm install gashammer gashammer/gashammer \
  --set hive.config.nitro.sequencerRpc=http://sequencer:8547 \
  --set edge.replicas=5

The Helm chart deploys:

  • Hive as a Deployment with a Service (REST + gRPC)
  • Edges as a Deployment with configurable replicas
  • ConfigMaps for hive.toml and edge.toml
  • ServiceMonitor for Prometheus Operator integration

Binary Deployment

For bare-metal or VM deployments:

  1. Download the release binary for your platform from GitHub Releases
  2. Verify the signature:
    minisign -Vm gashammer-hive-linux-amd64 -p gashammer.pub
    
  3. Verify the checksum:
    sha256sum -c SHA256SUMS
    
  4. Run the binary:
    ./gashammer-hive --config /etc/gashammer/hive.toml
    

Multi-Region Deployment

For multi-region load testing:

  1. Deploy one hive in a central location
  2. Deploy edges in each target region
  3. Configure each edge to connect to the central hive via gRPC
  4. All edges connect to the same Nitro deployment (or region-specific endpoints)

Region A                    Central                   Region B
┌─────────┐               ┌─────────┐               ┌─────────┐
│ Edge A  │──── gRPC ────▶│  Hive   │◀──── gRPC ────│ Edge B  │
└────┬────┘               └─────────┘               └────┬────┘
     │                                                    │
     └──────── JSON-RPC ──▶ Nitro ◀── JSON-RPC ──────────┘

Resource Planning

| Edges | Hive CPU | Hive RAM | Hive Disk (per hour) |
|---|---|---|---|
| 1-3 | 2 cores | 4 GB | ~100 MB |
| 4-10 | 4 cores | 8 GB | ~500 MB |
| 11-50 | 8 cores | 16 GB | ~2 GB |
| 50+ | 16 cores | 32 GB | ~10 GB |

Disk usage depends heavily on event volume and Parquet rotation settings.

Monitoring

GasHammer exposes Prometheus metrics from both the hive and edge processes. This page describes how to set up monitoring with Prometheus and Grafana.

Prometheus Configuration

Add GasHammer targets to your prometheus.yml:

scrape_configs:
  - job_name: 'gashammer-hive'
    static_configs:
      - targets: ['hive:9091']

  - job_name: 'gashammer-edge'
    static_configs:
      - targets: ['edge-1:9091', 'edge-2:9091', 'edge-3:9091']

With Kubernetes and the ServiceMonitor from the Helm chart, Prometheus Operator discovers targets automatically.

Key Metrics to Watch

During a Run

| Metric | Alert Threshold | Meaning |
|---|---|---|
| gashammer_tx_submitted_total rate | drops to 0 | Edge stopped generating |
| gashammer_tx_failed_total rate | >5% of submitted | High failure rate |
| gashammer_tx_latency_seconds p99 | >10s | Severe latency degradation |
| gashammer_events_dropped_total | any increase | Telemetry backpressure |
| gashammer_edges_active | drops below expected | Edge disconnection |
| gashammer_oracle_violations_total | any increase | Correctness issue detected |

System Health

| Metric | Alert Threshold | Meaning |
|---|---|---|
| gashammer_feed_messages_total rate | drops to 0 | Feed connection lost |
| Edge heartbeat age | >30s | Edge may be stale |
| Hive memory usage | >80% of limit | Risk of OOM |

Grafana Dashboard

Import the GasHammer Grafana dashboard from docs/grafana/dashboard.json or use these example panels:

Transaction Throughput

rate(gashammer_tx_submitted_total[1m])
rate(gashammer_tx_confirmed_total[1m])

Latency Percentiles

histogram_quantile(0.50, rate(gashammer_tx_latency_seconds_bucket[1m]))
histogram_quantile(0.90, rate(gashammer_tx_latency_seconds_bucket[1m]))
histogram_quantile(0.99, rate(gashammer_tx_latency_seconds_bucket[1m]))

Gas Rate

rate(gashammer_gas_submitted_total[1m])

Error Rate

rate(gashammer_tx_failed_total[1m]) / rate(gashammer_tx_submitted_total[1m])

Alerting

Recommended Prometheus alerting rules:

groups:
  - name: gashammer
    rules:
      - alert: GasHammerEdgeDown
        expr: gashammer_edges_active < 1
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "No GasHammer edges connected"

      - alert: GasHammerHighErrorRate
        expr: rate(gashammer_tx_failed_total[5m]) / rate(gashammer_tx_submitted_total[5m]) > 0.05
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "GasHammer error rate exceeds 5%"

      - alert: GasHammerOracleViolation
        expr: increase(gashammer_oracle_violations_total[1m]) > 0
        labels:
          severity: critical
        annotations:
          summary: "Correctness oracle detected a violation"

Structured Logging

Both hive and edge emit structured JSON logs via tracing. Configure your log aggregation system to parse these fields:

| Field | Description |
|---|---|
| timestamp | ISO 8601 timestamp |
| level | Log level (TRACE, DEBUG, INFO, WARN, ERROR) |
| target | Rust module path |
| span | Current tracing span |
| run_id | Associated run (if in a run context) |
| edge_id | Edge identifier (edge logs only) |
| message | Human-readable message |

Troubleshooting

Edge Cannot Connect to Hive

Symptom: Edge logs show Failed to register with hive or gRPC connection refused.

Checks:

  1. Verify the hive is running and the gRPC port (default 9090) is accessible
  2. Check network connectivity: nc -zv hive-host 9090
  3. If using mTLS, verify certificates are valid and not expired
  4. Check firewall rules between edge and hive

Edge Cannot Connect to Sequencer

Symptom: Edge logs show RPC connection failed or timeout waiting for eth_chainId.

Checks:

  1. Verify the sequencer RPC URL is correct and accessible
  2. Test with curl: curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' http://sequencer:8547
  3. Check that the chain ID matches the scenario’s target.chain_id
  4. Verify connection pool settings are not exhausting the sequencer’s connection limit

High Transaction Failure Rate

Symptom: gashammer_tx_failed_total is increasing rapidly.

Common causes:

  • Nonce too low — another process is using the same accounts. Ensure account partitioning has no overlap.
  • Insufficient funds — test accounts need ETH for gas. Check pre-funding.
  • Gas price too low — if the network has a base fee, the tx may be underpriced.
  • Sequencer overloaded — reduce the gas rate and check sequencer health.

Feed Correlation Misses

Symptom: gashammer_correlation_misses is high.

Common causes:

  • Feed relay WebSocket disconnected (check gashammer_feed_messages_total rate)
  • Transaction was included in a block but the feed message was missed during reconnection
  • Transaction was dropped by the sequencer (check if receipt exists)

Telemetry Events Dropped

Symptom: gashammer_events_dropped_total is increasing.

Resolution:

  • Increase telemetry.buffer_size in edge.toml (default 65536)
  • Increase telemetry.batch_size for larger batches (default 1024)
  • Check network bandwidth between edge and hive
  • Reduce the workload gas rate if the edge cannot keep up

Oracle Violation Detected

Symptom: gashammer_oracle_violations_total increased, report shows failed verdicts.

This may be expected — the oracle detects real issues in the system under test. Review the violation evidence in the report:

  • Balance conservation failure — account balances changed by an unexpected amount. Check for unexpected contract interactions.
  • Nonce monotonicity failure — a transaction was included out of nonce order. This may indicate a sequencer issue.
  • Gas accounting failure — gas_used exceeded gas_limit. This should never happen and indicates a VM bug.
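
The gas accounting invariant reduces to a single comparison; a minimal sketch follows, using the doc's terminology (the real Observation and Verdict types in gashammer-oracle are richer):

```rust
// Simplified stand-ins for the gashammer-oracle types.
struct Observation {
    gas_used: u64,
    gas_limit: u64,
}

enum Verdict {
    Pass,
    Fail(String),
}

// The gas accounting invariant: gas_used must never exceed gas_limit.
fn check_gas_accounting(obs: &Observation) -> Verdict {
    if obs.gas_used <= obs.gas_limit {
        Verdict::Pass
    } else {
        Verdict::Fail(format!(
            "gas_used {} > gas_limit {}",
            obs.gas_used, obs.gas_limit
        ))
    }
}

fn main() {
    let ok = Observation { gas_used: 21_000, gas_limit: 30_000 };
    assert!(matches!(check_gas_accounting(&ok), Verdict::Pass));
    let bad = Observation { gas_used: 31_000, gas_limit: 30_000 };
    assert!(matches!(check_gas_accounting(&bad), Verdict::Fail(_)));
}
```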

Run Stuck in “Starting” State

Symptom: Run status remains “Starting” indefinitely.

Checks:

  1. At least one edge must be registered. Check GET /edges.
  2. All registered edges must acknowledge the run. Check edge logs for errors.
  3. If an edge crashed during startup, the hive waits for it. Restart the edge or deregister it.

Report Generation Fails

Symptom: GET /runs/{id}/report returns an error.

Checks:

  1. Verify the run completed (status “Completed” or “Failed”)
  2. Check that Parquet files exist in {data_dir}/runs/{run_id}/
  3. Check hive logs for Parquet read errors
  4. Ensure sufficient disk space for report generation

Build Errors

Symptom: cargo build --workspace fails.

Common fixes:

  • Run cargo update to update the lock file
  • Ensure Rust version is 1.75.0 or later: rustc --version
  • For protobuf compilation errors: ensure protoc is installed
  • For Docker-related test failures: ensure Docker daemon is running

Getting Help

If you cannot resolve an issue:

  1. Check the GitHub Issues for similar problems
  2. Open a new issue with:
    • GasHammer version (--version output)
    • Relevant configuration (redact secrets)
    • Log output at debug level
    • Steps to reproduce

Contributing

Prerequisites

  • Rust 1.75.0+ (install via rustup)
  • Docker 20.10+ (for integration and E2E tests)
  • protoc (Protocol Buffers compiler, for gRPC code generation)
  • mdbook (for documentation: cargo install mdbook)
  • cargo-deny (for license checks: cargo install cargo-deny)

Development Workflow

# Build everything
cargo build --workspace

# Type-check only (faster)
cargo check --workspace

# Format
cargo fmt --all

# Lint
cargo clippy --workspace --all-targets -- -D warnings

# Test
cargo test --workspace

# License check
cargo deny check licenses

Gitflow

This project uses strict gitflow. Every rule is non-negotiable.

Branch Model

main ───────────────────────────────── tagged releases only
  └── develop ──────────────────────── integration branch
        ├── feature/GH-<#>-<desc> ── new work
        ├── fix/GH-<#>-<desc> ────── bug fixes
        ├── release/<version> ────── release prep
        └── hotfix/GH-<#>-<desc> ── emergency fixes

Rules

  1. Never commit directly to main. All changes reach main through release branches.
  2. All feature and fix branches merge to develop via PR with squash merge.
  3. Release branches merge to both main (with tag) and develop.
  4. Hotfix branches branch from main, merge to both main and develop.
  5. Delete branches after merge. No stale branches.
  6. No force pushes to main or develop.
  7. Rebase feature branches onto develop before merging.

Commit Format

<scope>: <imperative verb> <what> (#<issue>)

Scopes: common, nitro, edge, hive, workload, telemetry, oracle, fault, report, scenario, docgen, testenv, ci, docs, rfc

Examples:

nitro: add L1 monitor background poll loop (#83)
fault: fix auto-clear bug in FaultManager (#82)
docs: rewrite mdBook documentation (#89)

Every commit must reference a GitHub issue number.
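
A minimal validator for this format might look like the sketch below; the scope list mirrors the one above, but the function itself is hypothetical, not part of the project tooling:

```rust
// Scopes listed in the contributing guide.
const SCOPES: &[&str] = &[
    "common", "nitro", "edge", "hive", "workload", "telemetry", "oracle",
    "fault", "report", "scenario", "docgen", "testenv", "ci", "docs", "rfc",
];

// Hypothetical check for "<scope>: <message> (#<issue>)".
fn valid_commit(msg: &str) -> bool {
    let Some((scope, rest)) = msg.split_once(": ") else {
        return false;
    };
    // The message must end with "(#<digits>)".
    let references_issue = rest.ends_with(')')
        && rest.rfind("(#").is_some_and(|i| {
            i + 2 < rest.len() - 1
                && rest[i + 2..rest.len() - 1].chars().all(|c| c.is_ascii_digit())
        });
    SCOPES.contains(&scope) && references_issue
}

fn main() {
    assert!(valid_commit("nitro: add L1 monitor background poll loop (#83)"));
    assert!(!valid_commit("misc: tweak things"));
}
```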

GitHub Issues

No work happens without an issue. Every feature, bug fix, refactor, and documentation change requires an issue first.

  • Title format: [<scope>] <imperative description>
  • Required fields: description, acceptance criteria, labels
  • Branch name: feature/GH-<issue#>-<short-description>
  • PR closing keyword: Closes #<issue#>

Code Conventions

Error Handling

  • thiserror for library errors, anyhow only in binaries and tests.
  • Never unwrap() in library code.
  • Never expect() without a message explaining the invariant.

Async

  • tokio only. No async-std.
  • Never tokio::spawn without storing the JoinHandle.
  • No sleep for synchronization — use channels, barriers, or Notify.

Naming

  • Crates: gashammer-{name} in Cargo.toml, gashammer_{name} as Rust modules.
  • No abbreviations in public APIs except: tx, rpc, ws, config.

Visibility

  • Default to pub(crate). Only pub items that are part of the API contract.

Imports

Group in order, separated by blank lines:

  1. std
  2. External crates
  3. Workspace crates
  4. crate/super/self

Testing

  • Every public function gets a test.
  • #[tokio::test] for async tests. Never block_on.
  • Test names describe behavior: test_feed_reconnects_after_disconnect.
  • No #[allow(dead_code)] — delete dead code instead.
  • No println! — use tracing.

License Header

Every .rs file must include:

// Copyright (c) 2025-present Don Johnson
// Licensed under the Business Source License 1.1 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     https://github.com/copyleftdev/gashammer/blob/main/LICENSE
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

Enforced by CI via scripts/check-license-headers.sh.

Testing

GasHammer has three test tiers: unit tests, integration tests, and end-to-end lifecycle tests.

Unit Tests

Unit tests live in mod tests blocks at the bottom of each source file.

# All unit tests
cargo test --workspace

# Single crate
cargo test -p gashammer-nitro

# Single test by name
cargo test --workspace -- test_feed_reconnects

Conventions

  • Test names describe behavior, not implementation: test_feed_reconnects_after_disconnect, not test_feed_1.
  • Async tests use #[tokio::test]. Never block_on.
  • Use proptest for property-based tests on serialization round-trips and workload generation.
  • Oracle invariant checks must have both passing and failing test cases.
  • Do not test private implementation details — test the public contract.

Current Test Counts

| Crate | Tests |
|---|---|
| gashammer-common | 30+ |
| gashammer-nitro | 48 |
| gashammer-edge | 30+ |
| gashammer-hive | 65 |
| gashammer-workload | 40+ |
| gashammer-telemetry | 20+ |
| gashammer-oracle | 30+ |
| gashammer-fault | 35+ |
| gashammer-report | 30+ |
| gashammer-scenario | 25+ |
| gashammer-docgen | 15+ |
| gashammer-testenv | 10+ |

Integration Tests

Integration tests require Docker and a running Nitro devnet. They are gated by the integration feature flag.

# Run integration tests (requires Docker)
cargo test --test integration -- --test-threads=1

Location: crates/gashammer-nitro/tests/integration.rs and crates/gashammer-testenv/tests/smoke.rs.

What They Test

  • Real JSON-RPC connectivity to a Nitro sequencer.
  • Transaction submission and receipt polling.
  • Contract deployment and interaction.
  • Feed relay connectivity.
  • L1 Geth health checks.

Devnet Profiles

| Profile | Containers | Use Case |
|---|---|---|
| Minimal | L1 Geth, Sequencer | Fast smoke tests |
| Standard | L1 Geth, Sequencer, Feed Relay | Integration tests |
| Full | All + contract deployment | E2E lifecycle tests |

E2E Lifecycle Tests

Full lifecycle tests boot a devnet, deploy contracts, fund accounts, run a scenario, and verify the report.

GASHAMMER_TEST_PROFILE=standard cargo test --test e2e -- --test-threads=1

LifecycleTest Builder

LifecycleTestBuilder::new()
    .scenario_yaml(include_str!("scenario.yaml"))
    .edge_count(2)
    .deploy_contracts(true)
    .expected_outcome(ExpectedOutcome::Pass)
    .timeout(Duration::from_secs(120))
    .build()
    .run()
    .await;

Test Contracts

GasHammer ships 7 compiled Solidity test contracts in contracts/test/build/:

| Contract | Purpose |
|---|---|
| Counter | Increment/decrement with storage writes |
| GasBurner | Configurable gas consumption loop |
| StorageWriter | Batch storage slot writes |
| EventEmitter | Emit events for log testing |
| Reverter | Controlled reverts for failure testing |
| ContentionTarget | Concurrent access patterns |
| GasHammerERC20 | ERC-20 token for transfer testing |

Contracts are pre-compiled. Bytecode is embedded at build time via include_bytes!.

CI Pipeline

GitHub Actions runs the full test suite on every PR:

jobs:
  check:
    - cargo fmt --all -- --check
    - cargo clippy --workspace --all-targets -- -D warnings
    - cargo test --workspace
    - cargo deny check licenses
    - scripts/check-license-headers.sh

All checks must pass before merge. No exceptions.

Writing New Tests

  1. Add a #[test] or #[tokio::test] function in the mod tests block of the file containing the code under test.
  2. Name the test test_<behavior_description>.
  3. For integration tests that need Docker, add them to the tests/ directory and gate with #[cfg(feature = "integration")].
  4. Run cargo test -p <crate> to verify locally before pushing.

RFC Index

GasHammer’s design is specified in 13 RFCs. These live in the rfcs/ directory and serve as the authoritative specification for each subsystem. When the implementation diverges from an RFC, the RFC is updated first.

| RFC | Title | Scope |
|---|---|---|
| RFC-0001 | Nitro Integration Surface | Wire protocols, RPC methods, feed format, L1 contracts, precompiles |
| RFC-0002 | Technology Stack | Language (Rust), runtime (tokio), dependencies, build system |
| RFC-0003 | Hive Control Plane | REST API, gRPC service, edge registry, run orchestration |
| RFC-0004 | Edge Runtime | Tx pipeline, feed correlator, heartbeat, shutdown |
| RFC-0005 | Workload Engine | Gas-first modeling, templates, account pool, PRNG |
| RFC-0006 | Telemetry Pipeline | Event model, ring buffer, gRPC transport, Parquet storage |
| RFC-0007 | Correctness Oracle | Invariant framework, verdicts, evidence, live and post-run checks |
| RFC-0008 | Fault Injection | Adapter trait, netem, iptables, timeline, safety rails |
| RFC-0009 | Reporting Engine | Latency analysis, capacity envelope, regression detection |
| RFC-0010 | Scenario Definition Language | YAML schema, phases, templates, faults, oracle config |
| RFC-0011 | DNA & Provenance | Binary canary, HTTP headers, calldata magic, error codes |
| RFC-0012 | Documentation Engine | Source parser, mdBook generation, coverage metrics |
| RFC-0013 | E2E Testing | Testcontainers, Nitro devnet, lifecycle tests, test contracts |

Reading Order

For a new contributor, the recommended reading order is:

  1. RFC-0002 (Technology Stack) — understand the foundational choices.
  2. RFC-0001 (Nitro Integration Surface) — understand what GasHammer talks to.
  3. RFC-0003 and RFC-0004 (Hive + Edge) — understand the distributed architecture.
  4. RFC-0005 (Workload Engine) — understand gas-first modeling.
  5. RFC-0011 (DNA & Provenance) — understand the attribution markers.
  6. Remaining RFCs as needed for the subsystem you are working on.

RFC Conventions

  • RFCs are numbered sequentially: RFC-NNNN.
  • Each RFC has: title, status, author, date, and numbered sections.
  • Section references use § notation: “RFC-0008 §3.2”.
  • RFCs are living documents — update them when the implementation evolves.
  • If a design decision is significant and not covered by an existing RFC, write a new one.