Architecture
How Lattice's eight SDK packages compose into a complete system for verifiable data, hierarchical blockchains, and peer-to-peer networking.
Cashew: The State Layer
At the foundation of Lattice is Cashew — a library of Merkle data structures (dictionaries, arrays, sets) backed by a compressed radix trie. Every piece of state in the system is a Cashew structure, identified by the SHA-256 hash of its serialized form (CID).
Cashew structures are immutable — every mutation returns a new root with a new CID. Previous versions remain valid. This enables versioning, efficient diffing (only walk nodes with different CIDs), and sparse proofs (include only the branch path needed to verify a claim).
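A toy sketch of the content-addressing idea: a stand-in FNV-1a hash plays the role of SHA-256 (to keep the example dependency-free), and a "mutation" builds a new serialized value rather than editing the old one. `toyCID` and the pretend-serialized roots are illustrative, not Cashew's API.

```swift
// Stand-in content hash. Lattice uses SHA-256; FNV-1a here is demo-only.
func toyCID(_ bytes: [UInt8]) -> String {
    var h: UInt64 = 0xcbf29ce484222325
    for b in bytes { h = (h ^ UInt64(b)) &* 0x100000001b3 }
    return String(h, radix: 16)
}

var store: [String: [UInt8]] = [:]        // CID → serialized node

let v1 = Array("dict{a:1}".utf8)          // pretend-serialized root, version 1
let cid1 = toyCID(v1)
store[cid1] = v1

let v2 = Array("dict{a:1,b:2}".utf8)      // "mutated" copy: a new root
let cid2 = toyCID(v2)
store[cid2] = v2

print(cid1 != cid2)        // true: every mutation yields a new CID
print(store[cid1] == v1)   // true: the previous version remains addressable
```

Because identical content always hashes to the same CID, a diff only needs to descend into children whose CIDs differ.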
The Fetcher and Storer protocols bridge Cashew to Acorn: a Fetcher backed by a CompositeCASWorker gives Merkle structures that lazily load from memory → disk → network. A Storer backed by the same chain persists mutations across all tiers.
The CAS Worker Chain
The central design pattern in Lattice is the worker chain. Each worker implements the AcornCASWorker protocol and holds optional near (faster) and far (slower) references to adjacent workers. The protocol's default get() walks the chain automatically:
When data is found at a slow tier, it automatically backfills toward the fast end. A second request for the same content hits memory in nanoseconds instead of waiting for disk or network.
Data Flow: Get
- `CompositeCASWorker.get(cid:)` delegates to the farthest worker
- That worker checks its `near` reference first (memory)
- Memory miss → check local (disk). Disk miss → check `far` (this doesn't exist, so it falls through)
- The protocol's default `get()` on `DiskCASWorker` calls the near's `get()` first and, if near misses, calls `getLocal()`
- Each worker that finds data calls `store()` on its `near`, propagating the data upward
Data Flow: Store
- `storeLocal()` on the current worker
- `store()` on `near` (recurses toward the fast end)
- Data ends up in all tiers from the storage point upward
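Taken together, the two flows can be sketched as a synchronous stand-in (in Lattice the workers are actors; the names and method bodies here are illustrative, not Acorn's source):

```swift
protocol CASWorker: AnyObject {
    var near: CASWorker? { get }    // faster adjacent tier
    var far: CASWorker? { get }     // slower adjacent tier (unused in this sketch)
    func getLocal(_ cid: String) -> [UInt8]?
    func storeLocal(_ cid: String, _ data: [UInt8])
}

extension CASWorker {
    // Default get(): try the fast side first; on a local hit, backfill
    // toward the fast end so the next read is served from memory.
    func get(_ cid: String) -> [UInt8]? {
        if let hit = near?.get(cid) { return hit }
        guard let data = getLocal(cid) else { return nil }
        near?.store(cid, data)
        return data
    }
    // Default store(): persist locally, then propagate toward the fast end.
    func store(_ cid: String, _ data: [UInt8]) {
        storeLocal(cid, data)
        near?.store(cid, data)
    }
}

final class MapWorker: CASWorker {  // stand-in for the memory/disk tiers
    var near: CASWorker?
    var far: CASWorker?
    private var blobs: [String: [UInt8]] = [:]
    func getLocal(_ cid: String) -> [UInt8]? { blobs[cid] }
    func storeLocal(_ cid: String, _ data: [UInt8]) { blobs[cid] = data }
}

let memory = MapWorker()
let disk = MapWorker()
disk.near = memory                      // the composite delegates to the far end

disk.storeLocal("cid-1", [1, 2, 3])     // content initially only on "disk"
_ = disk.get("cid-1")                   // found locally, backfilled to memory
print(memory.getLocal("cid-1") != nil)  // true: the next read hits the fast tier
```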
Network Layer
When the local chain misses, NetworkCASWorker delegates to the Ivy actor to fetch content from the peer network:
Reputation Gating
Tally sits above the CAS chain, not inside it. When the node receives a block request from a peer, Tally gates the response:
This means the system self-balances: peers who contribute data build reputation and get served. Peers who only consume build debt and are progressively throttled.
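A minimal sketch of the gating idea, assuming a simple per-peer byte balance. Tally's real scoring model is more involved; `ReputationLedger` and its debt limit are invented for illustration:

```swift
// Illustrative reputation gate: a peer's balance rises when it serves us
// data and falls when it consumes ours.
struct ReputationLedger {
    private var balance: [String: Double] = [:]

    mutating func credit(_ peer: String, bytes: Int) {
        balance[peer, default: 0] += Double(bytes)
    }
    mutating func debit(_ peer: String, bytes: Int) {
        balance[peer, default: 0] -= Double(bytes)
    }
    // Gate a block request: serve peers in good standing, throttle debtors.
    func shouldServe(_ peer: String, debtLimit: Double = -1024) -> Bool {
        balance[peer, default: 0] > debtLimit
    }
}

var ledger = ReputationLedger()
ledger.credit("peer-a", bytes: 4096)   // peer-a uploaded to us
ledger.debit("peer-b", bytes: 8192)    // peer-b only downloads
print(ledger.shouldServe("peer-a"))    // true
print(ledger.shouldServe("peer-b"))    // false: over the debt limit
```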
Concurrency Model
Actors
All workers (MemoryCASWorker, DiskCASWorker, NetworkCASWorker, CompositeCASWorker) and the Ivy node are Swift actors. This gives compile-time data race safety — no manual lock management for the async path.
Lock-Based Hot Path
MemoryCASWorker and Tally also expose nonisolated sync methods that bypass the actor executor and use an internal lock directly. This eliminates the ~7.5 µs actor hop overhead for hot-path operations:
| API | Latency | Use Case |
|---|---|---|
| `syncGet(cid:)` | 6–28 ns | Hot-path reads in tight loops |
| `await getLocal(cid:)` | ~7.5 µs | General async code, chain traversal |
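The dual-path pattern can be sketched like this, assuming the state lives behind a lock rather than behind the actor executor. `Protected` and `syncStore` are illustrative stand-ins, not the real API:

```swift
import Foundation

// Minimal lock box so both the async and the sync path can reach the state.
final class Protected<Value>: @unchecked Sendable {
    private let lock = NSLock()
    private var value: Value
    init(_ value: Value) { self.value = value }
    func withLock<R>(_ body: (inout Value) -> R) -> R {
        lock.lock(); defer { lock.unlock() }
        return body(&value)
    }
}

actor MemoryCASWorker {
    // Because the map sits behind a lock, nonisolated sync methods can
    // read and write it without paying the actor-hop latency.
    private let storage = Protected<[String: [UInt8]]>([:])

    nonisolated func syncGet(cid: String) -> [UInt8]? {
        storage.withLock { $0[cid] }
    }
    nonisolated func syncStore(cid: String, data: [UInt8]) {
        storage.withLock { $0[cid] = data }
    }
    // The async path reuses the same locked state.
    func getLocal(cid: String) -> [UInt8]? { syncGet(cid: cid) }
}

let worker = MemoryCASWorker()
worker.syncStore(cid: "cid-1", data: [7])
print(worker.syncGet(cid: "cid-1") == [7])   // true: no await, no actor hop
```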
Cross-Platform Locking
Both Tally and the storage workers use a LockedState<State> abstraction:
- Apple: `OSAllocatedUnfairLock` (optimal for uncontended, non-recursive locking)
- Linux: an `NSLock` wrapper with the same API surface
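A hedged sketch of what such an abstraction can look like, under the stated assumption that both platforms expose the same `withLock` call sites (the real LockedState may differ in detail):

```swift
import Foundation

#if canImport(os)
import os
// On Apple platforms, OSAllocatedUnfairLock<State> already has the shape
// we want: init(initialState:) plus withLock { inout State in ... }.
typealias LockedState<State> = OSAllocatedUnfairLock<State>
#else
// Elsewhere, an NSLock-backed box with the same API surface.
final class LockedState<State>: @unchecked Sendable {
    private let lock = NSLock()
    private var state: State
    init(initialState: State) { self.state = initialState }
    func withLock<R>(_ body: (inout State) throws -> R) rethrows -> R {
        lock.lock(); defer { lock.unlock() }
        return try body(&state)
    }
}
#endif

// The same call sites compile on both platforms:
let counter = LockedState(initialState: 0)
counter.withLock { $0 += 1 }
print(counter.withLock { $0 })   // 1
```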
Eviction Strategy
Both memory and disk workers use the same eviction algorithm from Acorn.LFUDecayCache:
- LFU with exponential decay: access scores decay over time so recently-accessed items score higher than historically popular but now-cold items
- Sampled eviction: instead of scanning all entries, the cache samples N random candidates and evicts the lowest-scored one. This is O(k) where k = sample size (default 5)
- O(1) global decay: a global multiplier tracks aggregate decay, avoiding per-entry updates on every access
- Background renormalization: when the global multiplier risks floating-point underflow, scores are renormalized incrementally
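The scoring scheme can be sketched as follows, under stated assumptions: names, constants, and the decay rate are illustrative, and the background renormalization step is omitted for brevity.

```swift
// Illustrative LFU-with-decay cache. Effective score = stored × globalDecay,
// so one multiplier update decays every entry in O(1); stored scores stay
// comparable because the multiplier is shared.
struct DecayCache<Key: Hashable, Value> {
    private var entries: [Key: (value: Value, score: Double)] = [:]
    private var globalDecay = 1.0
    let capacity: Int
    let sampleSize = 5

    init(capacity: Int) { self.capacity = capacity }

    // O(1) global decay: shrink one multiplier instead of every entry.
    // (Renormalization when globalDecay nears underflow is omitted here.)
    mutating func tickDecay() { globalDecay *= 0.99 }

    // A fresh access adds full current weight, so historically popular but
    // now-cold entries fade relative to recently accessed ones.
    mutating func access(_ key: Key) {
        entries[key]?.score += 1.0 / globalDecay
    }

    mutating func insert(_ key: Key, _ value: Value) {
        if entries[key] == nil && entries.count >= capacity { evictOne() }
        entries[key] = (value, 1.0 / globalDecay)
    }

    // Sampled eviction: examine k random candidates, drop the lowest score.
    private mutating func evictOne() {
        let sample = entries.keys.shuffled().prefix(sampleSize)
        guard let victim = sample.min(by: { entries[$0]!.score < entries[$1]!.score })
        else { return }
        entries.removeValue(forKey: victim)
    }

    func value(for key: Key) -> Value? { entries[key]?.value }
    var count: Int { entries.count }
}

var cache = DecayCache<String, Int>(capacity: 3)
cache.insert("a", 1); cache.insert("b", 2); cache.insert("c", 3)
cache.access("a"); cache.access("a")   // "a" is hot
cache.tickDecay()
cache.insert("d", 4)                   // evicts a cold entry, never hot "a"
print(cache.value(for: "a") != nil)    // true
```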
Wire Protocol
Ivy uses a simple length-prefixed binary protocol over TCP (via swift-nio):
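As a sketch of the general shape of such a protocol (not Ivy's actual message layout, which rides on swift-nio's channel pipeline): each frame is a 4-byte big-endian payload length followed by the payload bytes.

```swift
// Encode one frame: 4-byte big-endian length header, then the payload.
func encodeFrame(_ payload: [UInt8]) -> [UInt8] {
    let len = UInt32(payload.count).bigEndian
    return withUnsafeBytes(of: len) { Array($0) } + payload
}

// Decode the first complete frame, returning the payload and the number of
// bytes consumed, or nil if the buffer doesn't yet hold a full frame.
func decodeFrame(_ buffer: [UInt8]) -> (payload: [UInt8], consumed: Int)? {
    guard buffer.count >= 4 else { return nil }
    let len = buffer.prefix(4).reduce(0) { ($0 << 8) | Int($1) }
    guard buffer.count >= 4 + len else { return nil }
    return (Array(buffer[4 ..< 4 + len]), 4 + len)
}

let frame = encodeFrame(Array("hello".utf8))
print(frame.count)                                         // 9: 4 header + 5 payload
print(decodeFrame(frame)?.payload == Array("hello".utf8))  // true
print(decodeFrame([0, 0]) == nil)                          // true: partial header
```

Length prefixing lets the reader pull exact message boundaries out of a TCP byte stream without any delimiter scanning.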
DHT Routing
Ivy implements a Kademlia-style distributed hash table:
- XOR distance: peers and content are placed in the same 256-bit keyspace via SHA-256
- K-buckets: 256 buckets, one per bit of common prefix length (CPL). Default k=20 entries per bucket
- Reputation-weighted eviction: when a bucket is full, the peer with the lowest Tally reputation score is evicted to make room for the new peer
- Peer lookup: `findNode` queries the closest known peers, who respond with their closest known peers, recursively narrowing toward the target
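The XOR metric and bucket index can be sketched directly on 32-byte IDs (helper names are illustrative; Ivy's real routing table is more involved):

```swift
typealias KeyID = [UInt8]   // 32 bytes = 256 bits, e.g. a SHA-256 digest

// Kademlia's distance metric: bytewise XOR of the two IDs.
func xorDistance(_ a: KeyID, _ b: KeyID) -> KeyID {
    zip(a, b).map { $0.0 ^ $0.1 }
}

// Common prefix length in bits. Relative to our own node ID, this selects
// which of the 256 k-buckets a peer belongs to.
func commonPrefixLength(_ a: KeyID, _ b: KeyID) -> Int {
    for (i, byte) in xorDistance(a, b).enumerated() where byte != 0 {
        return i * 8 + byte.leadingZeroBitCount
    }
    return a.count * 8   // identical IDs share the full 256-bit prefix
}

let selfID = KeyID(repeating: 0, count: 32)
var peerID = selfID
peerID[0] = 0b0001_0000   // first differing bit is bit 3

print(commonPrefixLength(selfID, peerID))   // 3: this peer lives in bucket 3
```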