p99 dispatch round-trip
Single-connection probe at 20 Hz, alongside ~1 k msg/s of cross-fan-out load. The probe publishes a frame and reads the same bytes back through its wildcard subscription — the dispatch path round-trip end-to-end.
A Rust kernel for the TAK ecosystem.
Drop-in replacement for the messaging core of the upstream Java TAK Server. Single-node mTLS streaming, alloc-free hot path, sub-millisecond p99 dispatch under load.
Numbers from the harness, not the slide deck. Every figure below is reproducible from the test suite or a five-minute soak run on commodity Linux.
tak_bus::dispatch performs zero heap allocations in steady state, regardless of subscriber count. Verified by a dhat integration test that fails the build if a single block is allocated during a burst.
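The gate can be sketched in std-only Rust: a counting wrapper around the system allocator plays the role of dhat (which works on the same principle — intercept every allocation and fail if the count moves during the burst). The `dispatch_one` body is an illustrative stand-in, not the kernel's dispatch.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicU64, Ordering};

// Counting wrapper around the system allocator; a std-only stand-in
// for the dhat-based gate described above.
struct CountingAlloc;
static ALLOCS: AtomicU64 = AtomicU64::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static A: CountingAlloc = CountingAlloc;

/// Hypothetical hot-path step: borrowing a frame slice must not allocate.
fn dispatch_one(frame: &[u8]) -> usize {
    frame.len()
}

/// Heap blocks allocated across a burst of `n` dispatches — the
/// invariant asserts this is zero.
fn allocs_during_burst(n: usize, frame: &[u8]) -> u64 {
    let before = ALLOCS.load(Ordering::Relaxed);
    let mut total = 0;
    for _ in 0..n {
        total += dispatch_one(frame);
    }
    let after = ALLOCS.load(Ordering::Relaxed);
    assert!(total > 0); // keep the loop from being optimized away
    after - before
}
```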
Two libFuzzer targets — the XML decoder and the streaming framer — ran for thirty minutes each with sanitizers on. Combined: 2.04 billion parses, zero panics, zero out-of-bounds, zero artifacts.
Workspace-wide. Includes property tests, conformance scenarios, loom concurrency models, and the dhat-gated allocation invariant. cargo nextest run is the runner. No flakes.
Linear regression of resident set size during the late window of a 300-second soak under 10 k msg/s offered load. Threshold gate: 1024 kB/min. Steady-state working set: ~40 MB.
End-to-end scenarios drive a real tak-server via a postgis testcontainer with mock ATAK clients, pinning byte-identity, fan-out, replay, multi-publisher concurrency, drop accounting, persistence, and lifecycle. A tenth scenario (mTLS handshake) is queued.
The hot path is fast because the architecture refuses to do slow things. Five locked decisions, machine-enforced.
Codec::decode returns a borrowed view that points into the original byte slice. No copy, no String::from, no to_owned(). The XML detail block survives as &str all the way to fan-out.

```rust
// every public decoder
fn decode<'a>(buf: &'a [u8]) -> Result<View<'a>>
```
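A minimal runnable sketch of the borrowed-view shape (the `View` fields and the toy `'|'`-split parse are illustrative — the real codec parses CoT XML; the point is that every field borrows from the caller's buffer):

```rust
/// Illustrative borrowed view: every field is a slice into the
/// caller's buffer, so decoding allocates nothing.
struct View<'a> {
    uid: &'a str,
    detail_xml: &'a str, // survives as &str all the way to fan-out
}

/// Toy decoder: splits "uid|detail" to show that both fields borrow
/// from `buf`. The real decoder parses CoT XML; the shape is the point.
fn decode(buf: &[u8]) -> Result<View<'_>, std::str::Utf8Error> {
    let text = std::str::from_utf8(buf)?;
    let (uid, detail_xml) = text.split_once('|').unwrap_or((text, ""));
    Ok(View { uid, detail_xml })
}
```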
Dispatching one frame to N subscribers means N reference-count increments on a single Bytes, not N memcpys. Vec<u8>::clone is forbidden in the dispatch path; reviews catch it, and the bench harness would detect it.

```rust
// for each matched sub:
entry.sender.try_send(payload.clone()) // Bytes::clone => Arc bump, not memcpy
```
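A std-only sketch of the refcounted fan-out, with Arc<[u8]> standing in for bytes::Bytes (the `dispatch` helper is illustrative, not the kernel's API):

```rust
use std::sync::mpsc::SyncSender;
use std::sync::Arc;

// Each delivery clones the Arc (a refcount bump), never the payload
// bytes; a full subscriber queue is skipped, not blocked on.
fn dispatch(payload: &Arc<[u8]>, subs: &[SyncSender<Arc<[u8]>>]) -> usize {
    subs.iter()
        .filter(|s| s.try_send(Arc::clone(payload)).is_ok()) // Arc bump, not memcpy
        .count()
}
```

A receiver can confirm no copy happened: Arc::ptr_eq on the published and received handles is true, i.e. both point at the same allocation.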
Group authorization uses a fixed [u64; 4] bitvector. Intersection is the bitwise OR of four ANDs — ~4 instructions on x86, vs an unbounded BigInteger.and() allocation in the upstream Java reference implementation.

```rust
pub fn intersects(a: &Self, b: &Self) -> bool {
    ((a.0[0] & b.0[0]) | (a.0[1] & b.0[1]) | (a.0[2] & b.0[2]) | (a.0[3] & b.0[3])) != 0
}
```
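Self-contained, the mask type might look like this (the struct name and the `set` helper are assumptions for illustration; only the intersection mirrors the kernel's snippet):

```rust
/// Fixed-width group mask: 256 groups in four u64 words.
#[derive(Default)]
pub struct GroupMask(pub [u64; 4]);

impl GroupMask {
    /// Illustrative helper: mark membership in `group` (0..256).
    pub fn set(&mut self, group: usize) {
        self.0[group / 64] |= 1 << (group % 64);
    }

    /// True when the two masks share any group: four ANDs, three ORs,
    /// one compare. No allocation, no loop.
    pub fn intersects(&self, other: &Self) -> bool {
        ((self.0[0] & other.0[0])
            | (self.0[1] & other.0[1])
            | (self.0[2] & other.0[2])
            | (self.0[3] & other.0[3]))
            != 0
    }
}
```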
unwrap, expect, panic!, todo!, and unimplemented! are denied by clippy in every library crate. Errors are thiserror enums; anyhow is allowed only inside binary crates. A panic in a library is a denial-of-service.

```rust
// every lib crate
#![deny(
    clippy::unwrap_used,
    clippy::expect_used,
    clippy::panic,
)]
```
The persistence side-channel is a bounded mpsc off the dispatch loop. If the channel fills, the persist is dropped with a counter increment. Delivery to live subscribers is never blocked by the database.

```rust
match store.try_insert_event(row) {
    Ok(()) => {}
    Err(_) => { /* counter++, drop, move on */ }
}
```
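A runnable std-only sketch of the bounded side-channel (`persist_or_drop` and the `dropped` counter are illustrative names, not the kernel's):

```rust
use std::sync::mpsc::SyncSender;

// A full persist queue never blocks dispatch: the row is dropped and
// counted, and the dispatch loop moves on.
fn persist_or_drop(tx: &SyncSender<Vec<u8>>, row: Vec<u8>, dropped: &mut u64) {
    if tx.try_send(row).is_err() {
        *dropped += 1; // counter++, drop, move on
    }
}
```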
Six independent layers of proof. Each runs on every commit; none of them are decoration.
Generators on every codec invariant: round-trip, byte-stable proto, BigInteger comparison for the group bitvector. "For all inputs, X holds."
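The round-trip property can be sketched std-only, with a tiny LCG standing in for proptest's generators (the toy u64 codec is illustrative, not the CoT codec):

```rust
/// Toy codec: the property under test is decode(encode(x)) == x.
fn encode(v: u64) -> Vec<u8> {
    v.to_le_bytes().to_vec()
}

fn decode(b: &[u8]) -> Option<u64> {
    Some(u64::from_le_bytes(b.try_into().ok()?))
}

/// Minimal generator: a 64-bit linear congruential step.
fn lcg(seed: &mut u64) -> u64 {
    *seed = seed
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *seed
}
```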
libFuzzer with sanitizers on the XML decoder and the streaming framer. Runs nightly via cargo +nightly fuzz.
Seed-replay verification harness for the bus. --alloc-mode verifies H1; --minimize shrinks failure traces with linear delta-debugging.
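Linear delta-debugging is simple enough to sketch in a few lines (a generic stand-in, assuming a `still_fails` oracle that replays a trace): drop one event at a time, and keep the shorter trace whenever it still reproduces the failure.

```rust
/// Shrinks a failing trace by removing one event at a time; keeps the
/// candidate whenever the failure still reproduces. O(n) oracle calls
/// per pass, retrying the same index after each successful shrink.
fn minimize<T: Clone>(trace: &[T], still_fails: impl Fn(&[T]) -> bool) -> Vec<T> {
    let mut cur = trace.to_vec();
    let mut i = 0;
    while i < cur.len() {
        let mut candidate = cur.clone();
        candidate.remove(i);
        if still_fails(&candidate) {
            cur = candidate; // keep the shrink, retry the same index
        } else {
            i += 1; // this event is load-bearing; move on
        }
    }
    cur
}
```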
Concurrency model checker explores all schedules of concurrent subscribe / unsubscribe / dispatch. No data race, no deadlock, no lost message.
In-process tak-server against a real postgis container, driven by mock ATAK clients. Byte-identity, fan-out, replay, drop accounting.
Wall-clock harness drives sustained load while sampling /proc/&lt;pid&gt;/status. Linear regression fails the build if RSS drifts; pinned latency probe gates p99.
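The drift gate is an ordinary least-squares slope over (seconds, RSS kB) samples; the build fails when the slope, scaled to kB/min, exceeds the threshold (this sketch is illustrative; the harness's sampling details are not shown here):

```rust
/// Least-squares slope of (x, y) samples, e.g. (seconds, rss_kb).
/// Multiply by 60.0 to compare against a kB/min threshold.
fn slope(samples: &[(f64, f64)]) -> f64 {
    let n = samples.len() as f64;
    let sx: f64 = samples.iter().map(|&(x, _)| x).sum();
    let sy: f64 = samples.iter().map(|&(_, y)| y).sum();
    let sxy: f64 = samples.iter().map(|&(x, y)| x * y).sum();
    let sxx: f64 = samples.iter().map(|&(x, _)| x * x).sum();
    (n * sxy - sx * sy) / (n * sxx - sx * sx)
}
```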
What people evaluating a Rust replacement for the Java TAK Server actually ask. No marketing, no hedging.
tak.rs is a high-performance Rust kernel for the Team Awareness Kit (TAK) ecosystem. It is a drop-in replacement for the messaging core of the upstream Java TAK Server, optimized for single-node deployments at 10,000+ concurrent mTLS streaming clients on commodity Linux hardware.
Yes. tak.rs speaks TAK Protocol Version 1 (magic byte 0xBF, varint-prefixed protobuf TakMessage) on the streaming firehose — the same wire format that ATAK (Android), iTAK (iOS), and WinTAK use. The conformance suite drives mock ATAK clients through canonical Cursor-on-Target exchanges and pins byte-identity end-to-end.
FreeTAKServer (Python), OpenTAKServer (Python), and taky (Python) are user-friendly community implementations; GoATAK is a Go alternative. tak.rs targets a different point in the design space: a Rust kernel optimized for single-node throughput at 10,000+ concurrent mTLS streaming clients with sub-millisecond p99 dispatch, alloc-free fan-out, and machine-enforced invariants. The hot path is built to be a drop-in for the upstream Java TAK Server, not a reinvention.
Pre-M0. The hot path is verified by six layers of proof: property tests (proptest), 2.04 billion fuzz executions across two libFuzzer targets with zero crashes, deterministic seed-replay verification (VOPR), Loom concurrency model checking, nine end-to-end conformance scenarios with mock ATAK clients, and a wall-clock soak harness with RSS-drift gating. Public deployment is gated on an mTLS handshake conformance scenario and a 24-hour soak baseline.
TAK Protocol Version 1 (binary): magic byte 0xBF, varint-prefixed protobuf TakMessage. The streaming firehose on port 8089 (mTLS) is the production path; plain TCP on 8087 / 8088 is opt-in. The CoT XML codec round-trips Detail.xmlDetail byte-for-byte to preserve plug-in payloads ATAK clients depend on.
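The framing described above — magic byte, varint length, protobuf payload — can be sketched std-only (the protobuf body is treated as opaque bytes here; `frame`/`deframe` are illustrative names):

```rust
/// Wraps a serialized TakMessage in v1 stream framing:
/// 0xBF, varint payload length, then the payload bytes.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut out = vec![0xBF];
    let mut len = payload.len() as u64;
    loop {
        let b = (len & 0x7F) as u8;
        len >>= 7;
        if len == 0 {
            out.push(b);
            break;
        }
        out.push(b | 0x80); // continuation bit: more length bytes follow
    }
    out.extend_from_slice(payload);
    out
}

/// Inverse: returns (payload, remaining bytes) or None on a short or
/// unframed buffer.
fn deframe(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    if buf.first() != Some(&0xBF) {
        return None;
    }
    let (mut len, mut shift, mut i) = (0u64, 0u32, 1usize);
    loop {
        let b = *buf.get(i)?;
        len |= ((b & 0x7F) as u64) << shift;
        i += 1;
        if b & 0x80 == 0 {
            break;
        }
        shift += 7;
    }
    let end = i + len as usize;
    buf.get(i..end).map(|p| (p, &buf[end..]))
}
```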
MIT or Apache-2.0, at the consumer's option — the standard Rust ecosystem license posture.
Two specific architectural wins. First, the Java TAK Server's hot path is dominated by per-message XPath evaluation; tak.rs compiles subscriptions into a struct (type prefix trie + geo R-tree + UID set + group mask) at subscribe time and dispatches as index lookups + bitwise AND at runtime. Second, group authorization in Java uses BigInteger.and() per message; tak.rs uses a fixed [u64; 4] bitvector that intersects in roughly four x86 instructions. Combined with bytes::Bytes Arc-counted fan-out and rustls (no OpenSSL), the hot path moves from milliseconds to microseconds.
Sub-millisecond p99 dispatch round-trip under sustained load. Reference number: 144 µs p99 in a five-minute soak with 50 concurrent connections publishing at 200 Hz each (10,000 messages/second offered, 8,820 delivered/second after fan-out). Hot-path allocations: zero, verified by a dhat-gated invariant test that fails the build on a single allocation.
Federation is deferred. v1 is single-node only, no Ignite, no clustering. The Mission API and CoT firehose are the in-scope surfaces; cross-server federation is queued for a later milestone.
git clone the repository, then cargo nextest run for the test suite, cargo +nightly fuzz run --fuzz-dir crates/tak-cot/fuzz decode_xml for the codec fuzz harness, and tak-soak --duration-secs 300 for a baseline soak with the pinned latency probe. Every claim on this page is reproducible from a fresh clone.
Every claim above is reproducible from a clone. Run cargo nextest run, cargo +nightly fuzz run, or tak-soak and compare. The measurement harness ships with the kernel.