Telemetry Pipeline

The telemetry pipeline captures, transports, and stores every significant event during a run. Events flow from edges to the hive, where they are written to Parquet files for post-run analysis.

Ref: RFC-0006.

Event Model

Every event is a TelemetryEvent:

struct TelemetryEvent {
    event_id: Uuid,        // unique identifier for this event
    edge_id: EdgeId,       // originating edge
    run_id: RunId,         // run this event belongs to
    phase_index: u32,      // index of the phase in which the event fired
    monotonic_ns: u64,     // monotonic clock, for ordering within an edge
    wall_ns: u64,          // wall clock, for cross-edge correlation
    payload: EventPayload, // event-specific data
}

Two timestamps are recorded: monotonic_ns (for ordering within an edge) and wall_ns (for cross-edge correlation).
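A minimal sketch of capturing both clocks at event creation, using std's `Instant` (monotonic, relative to a per-edge start) and `SystemTime` (wall clock). The helper name `capture_timestamps` is hypothetical; only the two-clock scheme comes from the text above.

```rust
use std::time::{Instant, SystemTime, UNIX_EPOCH};

/// Capture (monotonic_ns, wall_ns) for a new event.
/// `start` is the edge's process start, so monotonic_ns orders events
/// within one edge; wall_ns is nanoseconds since the Unix epoch.
fn capture_timestamps(start: Instant) -> (u64, u64) {
    let monotonic_ns = start.elapsed().as_nanos() as u64;
    let wall_ns = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before Unix epoch")
        .as_nanos() as u64;
    (monotonic_ns, wall_ns)
}
```

Monotonic timestamps from different edges are not comparable (each edge has its own epoch), which is why the wall clock is recorded alongside.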

Event Payloads

Payload           Fields                                 Meaning
TxSubmitted       hash, template, gas, account, nonce    Transaction sent to sequencer
TxAccepted        hash, latency_ms                       Sequencer acknowledged
TxIncluded        hash, block, gas_used, latency_ms      Transaction included in a block
TxFailed          hash, error, phase                     Submission or confirmation failure
TxTimeout         hash, timeout_ms                       Receipt polling exceeded deadline
FeedConnected     url                                    WebSocket connection established
FeedDisconnected  url, reason                            WebSocket connection lost
FeedGap           expected_seq, actual_seq               Sequence-number gap detected
FeedStall         duration_ms                            No messages for threshold duration
FaultInjected     fault_type, target, handle_id          Fault activated
FaultCleared      handle_id                              Fault removed
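The payload table maps naturally onto a Rust enum. The sketch below follows the documented variant and field names; the concrete field types are assumptions for illustration.

```rust
/// EventPayload variants as listed in the payload table.
/// Field types (String, u64) are illustrative assumptions.
enum EventPayload {
    TxSubmitted { hash: String, template: String, gas: u64, account: String, nonce: u64 },
    TxAccepted { hash: String, latency_ms: u64 },
    TxIncluded { hash: String, block: u64, gas_used: u64, latency_ms: u64 },
    TxFailed { hash: String, error: String, phase: String },
    TxTimeout { hash: String, timeout_ms: u64 },
    FeedConnected { url: String },
    FeedDisconnected { url: String, reason: String },
    FeedGap { expected_seq: u64, actual_seq: u64 },
    FeedStall { duration_ms: u64 },
    FaultInjected { fault_type: String, target: String, handle_id: u64 },
    FaultCleared { handle_id: u64 },
}
```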

Edge-Side Buffer

Each edge maintains a lock-free ring buffer (default capacity: 1,000,000 events), backed by a bounded tokio::sync::mpsc channel.

Backpressure: If the buffer is full, the oldest events are dropped. A counter (gashammer_events_dropped_total) tracks loss. This guarantees the edge never blocks on telemetry — transaction submission always takes priority.
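The drop-oldest policy can be sketched with a plain `VecDeque` (the real edge uses a bounded tokio channel; `dropped_total` mirrors the `gashammer_events_dropped_total` counter, and the type name is hypothetical):

```rust
use std::collections::VecDeque;

/// Drop-oldest ring buffer sketch: push never blocks the hot path.
struct RingBuffer<T> {
    buf: VecDeque<T>,
    capacity: usize,
    dropped_total: u64, // mirrors gashammer_events_dropped_total
}

impl<T> RingBuffer<T> {
    fn new(capacity: usize) -> Self {
        Self { buf: VecDeque::with_capacity(capacity), capacity, dropped_total: 0 }
    }

    /// When full, evict the oldest event and count the loss,
    /// so transaction submission is never delayed by telemetry.
    fn push(&mut self, event: T) {
        if self.buf.len() == self.capacity {
            self.buf.pop_front();
            self.dropped_total += 1;
        }
        self.buf.push_back(event);
    }
}
```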

Transport

A shipper task batches events from the ring buffer and streams them to the hive via gRPC.
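The shipper's flush decision can be sketched as a pure predicate over the pending count and elapsed time, using the `batch_size` and `flush_interval_ms` settings from the configuration below (the function name and signature are assumptions):

```rust
/// Ship a batch when it fills, or when the flush interval elapses
/// with a partial batch waiting. An empty buffer never flushes.
fn should_flush(pending: usize, batch_size: usize, elapsed_ms: u64, flush_interval_ms: u64) -> bool {
    pending >= batch_size || (pending > 0 && elapsed_ms >= flush_interval_ms)
}
```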

Configuration (PipelineConfig):

Field              Default    Description
buffer_capacity    1,000,000  Ring buffer size
batch_size         1,000      Events per gRPC batch
flush_interval_ms  100        Max time (ms) before flushing a partial batch
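As a sketch, the table maps onto a config struct whose `Default` encodes the documented values (the struct shape is an assumption; the field names and defaults come from the table):

```rust
/// Pipeline configuration with the documented defaults.
struct PipelineConfig {
    buffer_capacity: usize,  // ring buffer size
    batch_size: usize,       // events per gRPC batch
    flush_interval_ms: u64,  // max time before flushing a partial batch
}

impl Default for PipelineConfig {
    fn default() -> Self {
        Self {
            buffer_capacity: 1_000_000,
            batch_size: 1_000,
            flush_interval_ms: 100,
        }
    }
}
```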

Metrics:

Counter                     Description
gashammer_events_generated  Events created on this edge
gashammer_events_shipped    Events sent to hive
gashammer_events_dropped    Events lost to backpressure
gashammer_batches_shipped   gRPC batches sent

Hive-Side Storage

The hive receives event batches and writes them to Apache Parquet files.

Partitioning: {data_dir}/runs/{run_id}/{hour}.parquet

Rotation: by file size (default 256 MB) or time (default 1 hour), whichever comes first.
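The "whichever comes first" rule is a simple disjunction; a hedged sketch (the function name is hypothetical, the thresholds are the documented defaults):

```rust
/// Rotate the current Parquet file when either the size limit
/// (256 MB) or the age limit (1 hour) is reached.
fn should_rotate(file_bytes: u64, file_age_secs: u64) -> bool {
    const MAX_BYTES: u64 = 256 * 1024 * 1024; // 256 MB default
    const MAX_AGE_SECS: u64 = 3600;           // 1 hour default
    file_bytes >= MAX_BYTES || file_age_secs >= MAX_AGE_SECS
}
```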

Parquet metadata: Each file footer includes DNA provenance fields:

Key                  Value
gashammer.version    Software version
gashammer.build      Build SHA
gashammer.run_id     Associated run UUID
gashammer.copyright  BSL-1.1 notice

Parquet’s columnar format enables efficient analytical queries over telemetry data (e.g., computing latency percentiles across millions of events).
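For instance, a latency-percentile query reduces to scanning a single column. A nearest-rank percentile over collected `latency_ms` values can be sketched as (the helper is illustrative, not part of the pipeline):

```rust
/// Nearest-rank percentile over a non-empty slice of latencies.
/// p is in (0, 100]; sorts in place for simplicity.
fn percentile(latencies_ms: &mut [u64], p: f64) -> u64 {
    latencies_ms.sort_unstable();
    let rank = ((p / 100.0) * latencies_ms.len() as f64).ceil() as usize;
    latencies_ms[rank.saturating_sub(1).min(latencies_ms.len() - 1)]
}
```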