Architecture

How Lattice's eight SDK packages compose into a complete system for verifiable data, hierarchical blockchains, and peer-to-peer networking.

Cashew: The State Layer

At the foundation of Lattice is Cashew — a library of Merkle data structures (dictionaries, arrays, sets) backed by a compressed radix trie. Every piece of state in the system is a Cashew structure, identified by the SHA-256 hash of its serialized form (CID).

Application State
        │
        ▼
┌───────────────────────────────────────────────┐
│ Cashew: MerkleDictionary / MerkleArray / Set  │
│                                               │
│  ┌─────────┐     ┌─────────┐     ┌─────────┐  │
│  │RadixNode│────→│RadixNode│────→│ Scalar  │  │
│  │ (CID₁)  │     │ (CID₂)  │     │ (CID₃)  │  │
│  └─────────┘     └─────────┘     └─────────┘  │
│                                               │
│  Each node has a CID. Unresolved nodes hold   │
│  only the CID — loaded lazily via Fetcher.    │
└───────────────┬───────────────────────────────┘
                │ Fetcher.fetch(rawCid:)
                │ Storer.store(rawCid:, data:)
                ▼
┌───────────────────────────────────────────────┐
│  Acorn CAS Chain (memory → disk → network)    │
└───────────────────────────────────────────────┘

Cashew structures are immutable — every mutation returns a new root with a new CID. Previous versions remain valid. This enables versioning, efficient diffing (only walk nodes with different CIDs), and sparse proofs (include only the branch path needed to verify a claim).
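A toy sketch of what immutability plus content addressing buys. `Node`, its stand-in `cid` (a recursive `Hasher` digest rather than Cashew's SHA-256 CID), and `changedChildren` are hypothetical illustrations, not the real API:

```swift
// Toy persistent Merkle-style tree: every "mutation" is a new root,
// and identical subtrees produce identical stand-in CIDs.
indirect enum Node {
    case leaf(String)
    case branch([Node])

    // Stand-in content identifier: a hash over the serialized subtree.
    var cid: Int {
        var h = Hasher()
        switch self {
        case .leaf(let value):
            h.combine("leaf"); h.combine(value)
        case .branch(let children):
            h.combine("branch")
            for child in children { h.combine(child.cid) }
        }
        return h.finalize()
    }
}

let v1 = Node.branch([.leaf("a"), .leaf("b")])
// "Mutating" builds a new root; v1 is untouched and still valid.
let v2 = Node.branch([.leaf("a"), .leaf("B")])

// Diffing only needs to walk children whose CIDs differ.
func changedChildren(_ old: Node, _ new: Node) -> Int {
    guard case .branch(let o) = old, case .branch(let n) = new else { return 0 }
    return zip(o, n).filter { $0.cid != $1.cid }.count
}
```

Because unchanged subtrees keep their CIDs, a diff between two versions touches only the modified branch paths.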

The Fetcher and Storer protocols bridge Cashew to Acorn: a Fetcher backed by a CompositeCASWorker gives Merkle structures that lazily load from memory → disk → network. A Storer backed by the same chain persists mutations across all tiers.
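The method labels in the diagram above suggest one plausible shape for these protocols. The signatures, the error type, and the in-memory `ToyCASWorker` below are assumptions for illustration, not the real Acorn/Cashew declarations:

```swift
import Foundation
import Dispatch

// Hypothetical shapes for the Fetcher/Storer bridge; the method labels
// come from the architecture diagram, the exact signatures are assumed.
protocol Fetcher {
    func fetch(rawCid: String) async throws -> Data
}
protocol Storer {
    func store(rawCid: String, data: Data) async throws
}

enum CASError: Error { case notFound }

// Stand-in for a CompositeCASWorker-backed tier: a dictionary in an actor.
actor ToyCASWorker: Fetcher, Storer {
    private var blocks: [String: Data] = [:]

    func fetch(rawCid: String) async throws -> Data {
        guard let data = blocks[rawCid] else { throw CASError.notFound }
        return data
    }
    func store(rawCid: String, data: Data) async throws {
        blocks[rawCid] = data
    }
}

// Demo: store then fetch through the async API from synchronous
// top-level code (a semaphore bridges the async boundary).
let worker = ToyCASWorker()
let sem = DispatchSemaphore(value: 0)
var roundTripped: Data?
Task {
    try? await worker.store(rawCid: "cid-1", data: Data([1, 2, 3]))
    roundTripped = try? await worker.fetch(rawCid: "cid-1")
    sem.signal()
}
sem.wait()
```

A Merkle structure holding only a CID would call `fetch(rawCid:)` the first time that node is traversed, then cache the resolved node.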

The CAS Worker Chain

The central design pattern in Lattice is the worker chain. Each worker implements the AcornCASWorker protocol and holds optional near (faster) and far (slower) references to adjacent workers. The protocol's default get() walks the chain automatically:

get(cid) called on CompositeCASWorker
        │
        ▼
┌─────────────────┐  miss    ┌─────────────────┐  miss    ┌──────────────────┐
│ MemoryCASWorker │ ───────→ │  DiskCASWorker  │ ───────→ │ NetworkCASWorker │
│    (~10 ns)     │          │    (~100 µs)    │          │    (~100 ms)     │
└─────────────────┘          └─────────────────┘          └──────────────────┘
         ▲                            ▲                            │
         │ backfill                   │ backfill                   │ found data
         └────────────────────────────┴────────────────────────────┘

When data is found at a slow tier, it automatically backfills toward the fast end. A second request for the same content hits memory in nanoseconds instead of waiting for disk or network.

Data Flow: Get

  1. CompositeCASWorker.get(cid:) delegates to the farthest worker in its chain
  2. That worker checks its near reference first, recursing toward the fast end (memory)
  3. Memory miss → check this tier's local storage (disk). Disk miss → check far (none is configured here, so the lookup falls through)
  4. In other words, the protocol's default get() on DiskCASWorker calls its near's get() first and, only if that misses, calls getLocal()
  5. Each worker that finds data calls store() on its near, propagating the data toward the fast end

Data Flow: Store

  1. storeLocal() on the current worker
  2. store() on near (recurses toward the fast end)
  3. Data ends up in all tiers from the storage point upward
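Under the stated near/far pattern, the two flows might be sketched like this. `ToyWorker` and the `getFast` helper are illustrative inventions (the real workers are actors with async APIs), but the backfill behavior follows the steps above:

```swift
// Synchronous toy of the worker-chain defaults; names `near`, `far`,
// `getLocal`, `storeLocal` follow the text, the rest is a sketch.
final class ToyWorker {
    var near: ToyWorker?                      // faster neighbor
    var far: ToyWorker?                       // slower neighbor
    private var local: [String: [UInt8]] = [:]

    func getLocal(_ cid: String) -> [UInt8]? { local[cid] }
    func storeLocal(_ cid: String, _ data: [UInt8]) { local[cid] = data }

    // Probe this tier and faster ones only (invented helper; it keeps
    // the near-first walk from re-entering the far side).
    func getFast(_ cid: String) -> [UInt8]? {
        if let hit = near?.getFast(cid) { return hit }
        if let hit = getLocal(cid) {
            near?.store(cid, hit)             // backfill faster tiers
            return hit
        }
        return nil
    }

    // Default get(): fast side first, then fall through to far.
    func get(_ cid: String) -> [UInt8]? {
        if let hit = getFast(cid) { return hit }
        if let hit = far?.get(cid) {
            store(cid, hit)                   // backfill from here upward
            return hit
        }
        return nil
    }

    // Default store(): persist here, then recurse toward the fast end.
    func store(_ cid: String, _ data: [UInt8]) {
        storeLocal(cid, data)
        near?.store(cid, data)
    }
}

// Two-tier chain: memory (fast) ← disk (slow), as in the Get steps.
let memory = ToyWorker()
let disk = ToyWorker()
memory.far = disk
disk.near = memory
disk.storeLocal("cid-x", [42])
```

After `disk.get("cid-x")` succeeds, the block also sits in the memory tier, so the next read never touches disk.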

Network Layer

When the local chain misses, NetworkCASWorker delegates to the Ivy actor to fetch content from the peer network:

NetworkCASWorker.getLocal(cid)
        │
        ▼
Ivy.fetchBlock(cid)
        │
        ├── Router.closestPeers(to: hash(cid))
        │       Select top-N peers by XOR distance
        │
        ├── Tally.reputation(for: peer)
        │       Rank by composite reputation score
        │
        ├── PeerConnection.send(.wantBlock(cid))
        │       Send request to best peers
        │
        └── await response or timeout
                │
                ├── .block(cid, data) → resolve, backfill chain
                └── .dontHave(cid)    → record failure, try next
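The XOR-distance ranking behind Router.closestPeers can be sketched as follows. `Peer`, `xorDistance`, and the free function `closestPeers` are stand-ins for Ivy's actual types, assumed for illustration:

```swift
// Kademlia-style XOR distance over equal-length hash bytes.
struct Peer {
    let id: [UInt8]   // hash of the peer's identity
}

// Bytewise XOR; lexicographic order of the result matches the numeric
// order of the corresponding big-endian distance value.
func xorDistance(_ a: [UInt8], _ b: [UInt8]) -> [UInt8] {
    zip(a, b).map { $0 ^ $1 }
}

// Rank peers by distance to the target and keep the top N.
func closestPeers(to target: [UInt8], from peers: [Peer], count: Int) -> [Peer] {
    Array(
        peers.sorted {
            xorDistance($0.id, target)
                .lexicographicallyPrecedes(xorDistance($1.id, target))
        }
        .prefix(count)
    )
}
```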

Reputation Gating

Tally sits above the CAS chain, not inside it. When the node receives a block request from a peer, Tally gates the response:

Inbound: peer requests block by CID
        │
        ▼
Tally.shouldAllow(peer)
        │
        ├── ratePressure < 0.5    → always allow
        ├── ratePressure 0.5–1.0  → require reputation ≥ scaled threshold
        └── ratePressure ≥ 1.0    → require reputation ≥ 0.8
        │
        ├── allowed → fetch from local chain, send .block
        └── denied  → send .dontHave, record in metrics

This means the system self-balances: peers who contribute data build reputation and get served. Peers who only consume build debt and are progressively throttled.
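A minimal sketch of the gating rule, following the thresholds in the diagram. The exact shape of the "scaled threshold" in the middle band is not specified, so the linear interpolation below is an assumption:

```swift
// Gate an inbound block request by reputation under load.
// Thresholds follow the Tally diagram; the linear ramp between
// pressure 0.5 and 1.0 is an assumed interpretation of "scaled".
func shouldAllow(reputation: Double, ratePressure: Double) -> Bool {
    switch ratePressure {
    case ..<0.5:
        return true                              // low pressure: always serve
    case 0.5..<1.0:
        // Ramp the required reputation from 0.0 (at 0.5) to 0.8 (at 1.0).
        let threshold = (ratePressure - 0.5) / 0.5 * 0.8
        return reputation >= threshold
    default:
        return reputation >= 0.8                 // saturated: top peers only
    }
}
```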

Concurrency Model

Actors

All workers (MemoryCASWorker, DiskCASWorker, NetworkCASWorker, CompositeCASWorker) and the Ivy node are Swift actors. This gives compile-time data race safety — no manual lock management for the async path.

Lock-Based Hot Path

MemoryCASWorker and Tally also expose nonisolated sync methods that bypass the actor executor and take an internal lock directly. This eliminates the ~7 µs actor-hop overhead for hot-path operations:

API                    Latency    Use Case
syncGet(cid:)          6–28 ns    Hot-path reads in tight loops
await getLocal(cid:)   ~7.5 µs    General async code, chain traversal
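One way such a dual surface can be built, sketched under assumptions: the same lock-guarded storage is reachable both through the actor executor and through nonisolated sync methods. The real workers use the LockedState abstraction rather than this hand-rolled container:

```swift
import Foundation

// Lock-guarded storage shared by both the sync and async paths.
// @unchecked Sendable: the NSLock, not the compiler, enforces safety.
final class LockedBlocks: @unchecked Sendable {
    private let lock = NSLock()
    private var blocks: [String: [UInt8]] = [:]

    func get(_ cid: String) -> [UInt8]? {
        lock.lock(); defer { lock.unlock() }
        return blocks[cid]
    }
    func set(_ cid: String, _ data: [UInt8]) {
        lock.lock(); defer { lock.unlock() }
        blocks[cid] = data
    }
}

actor ToyMemoryWorker {
    private let storage = LockedBlocks()

    // Hot path: no actor hop, just a lock acquire/release.
    nonisolated func syncGet(_ cid: String) -> [UInt8]? { storage.get(cid) }
    nonisolated func syncStore(_ cid: String, _ data: [UInt8]) { storage.set(cid, data) }

    // Async path: same storage, reached via the actor executor.
    func getLocal(_ cid: String) -> [UInt8]? { storage.get(cid) }
}
```

The `nonisolated` methods can touch `storage` because it is an immutable `let` of a `Sendable` type; isolation is delegated to the lock.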

Cross-Platform Locking

Both Tally and the storage workers guard their mutable state with a LockedState&lt;State&gt; abstraction.
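Only the type's name appears here, so the following is a guess at its shape — a closure-scoped lock wrapper, with NSLock (available on both Apple platforms and Linux) supplying the cross-platform mutual exclusion:

```swift
import Foundation

// Plausible LockedState<State> shape (assumed; only the name is given
// in the text). All access to the wrapped state happens inside
// withLock, while the lock is held.
final class LockedState<State>: @unchecked Sendable {
    private let lock = NSLock()
    private var state: State

    init(initialState: State) {
        self.state = initialState
    }

    func withLock<R>(_ body: (inout State) throws -> R) rethrows -> R {
        lock.lock()
        defer { lock.unlock() }
        return try body(&state)
    }
}
```

Usage is a critical section per call: `counter.withLock { $0 += 1 }` reads, mutates, and releases in one scoped block, so the state can never escape unlocked.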

Eviction Strategy

Both the memory and disk workers use the same eviction algorithm, provided by Acorn.LFUDecayCache.
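Only the cache's name is given, so the rules below — halve access counts on each decay pass, evict the lowest-count entry when full — are assumptions illustrating the general LFU-with-decay idea rather than Acorn's actual policy:

```swift
// Toy LFU-with-decay cache (policy details assumed, see lead-in).
struct ToyLFUDecayCache<Key: Hashable, Value> {
    private var values: [Key: Value] = [:]
    private var counts: [Key: Double] = [:]
    let capacity: Int

    init(capacity: Int) { self.capacity = capacity }

    mutating func get(_ key: Key) -> Value? {
        guard let value = values[key] else { return nil }
        counts[key, default: 0] += 1          // reward each access
        return value
    }

    mutating func set(_ key: Key, _ value: Value) {
        if values[key] == nil, values.count >= capacity {
            // Evict the entry with the lowest (decayed) frequency.
            if let victim = counts.min(by: { $0.value < $1.value })?.key {
                values[victim] = nil
                counts[victim] = nil
            }
        }
        values[key] = value
        counts[key, default: 0] += 1
    }

    // Periodic decay: old popularity fades, recent use dominates.
    mutating func decay(by factor: Double = 0.5) {
        for key in counts.keys { counts[key]! *= factor }
    }
}
```

The decay step is what distinguishes this from plain LFU: without it, a once-hot block could pin its slot forever.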

Wire Protocol

Ivy uses a simple length-prefixed binary protocol over TCP (via swift-nio):

┌──────────────────────────────────────────────┐
│                 Frame Format                 │
├──────────┬───────────┬───────────────────────┤
│ 4 bytes  │  1 byte   │       variable        │
│  uint32  │   tag     │       payload         │
│ (length) │  (0–7)    │  (message-specific)   │
└──────────┴───────────┴───────────────────────┘

Tags:
  0 = ping       (8 bytes: nonce)
  1 = pong       (8 bytes: nonce)
  2 = wantBlock  (2 + N bytes: CID string)
  3 = block      (2 + N + M bytes: CID + data)
  4 = dontHave   (2 + N bytes: CID string)
  5 = findNode   (2 + N bytes: target hash)
  6 = neighbors  (variable: endpoint list)
  7 = announce   (2 + N bytes: CID string)
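A standalone sketch of the byte layout above. The real codec lives in Ivy on top of swift-nio; whether the length prefix covers the tag byte as well as the payload is an assumption here:

```swift
// Encode: 4-byte big-endian length, 1-byte tag, then the payload.
// Assumption: the length field counts the tag byte plus the payload.
func encodeFrame(tag: UInt8, payload: [UInt8]) -> [UInt8] {
    let length = UInt32(1 + payload.count)
    var frame: [UInt8] = []
    frame.append(UInt8((length >> 24) & 0xFF))
    frame.append(UInt8((length >> 16) & 0xFF))
    frame.append(UInt8((length >> 8) & 0xFF))
    frame.append(UInt8(length & 0xFF))
    frame.append(tag)
    frame.append(contentsOf: payload)
    return frame
}

// Decode a single complete frame; returns nil on truncated input.
func decodeFrame(_ bytes: [UInt8]) -> (tag: UInt8, payload: [UInt8])? {
    guard bytes.count >= 5 else { return nil }
    let length = UInt32(bytes[0]) << 24 | UInt32(bytes[1]) << 16
               | UInt32(bytes[2]) << 8  | UInt32(bytes[3])
    guard bytes.count == 4 + Int(length) else { return nil }
    return (tag: bytes[4], payload: Array(bytes[5...]))
}
```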

DHT Routing

Ivy implements a Kademlia-style distributed hash table: