Subtracks & Tasks
The Lost Update Problem
Implement Basic Counter with Lost Update Problem
Implement a basic grow-only counter that handles add and read operations. Multiple nodes share counter state, but your initial implementation will lose updates under concurrent writes.
Integrate Sequentially Consistent Key-Value Store
Use Maelstrom's built-in seq-kv service to store your counter value. This provides sequential consistency but introduces new challenges around availability.
Implement Compare-And-Swap (CAS) Operation
Implement your counter using Compare-And-Swap (CAS) operations. CAS atomically updates a value only if it matches an expected value, preventing lost updates.
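The CAS pattern is a read-retry loop: read the current value, attempt to swap it for the updated value, and retry if another writer got there first. The sketch below uses a toy in-memory `SeqKV` class as a stand-in for the real seq-kv service (in Maelstrom you would issue `read` and `cas` RPC messages instead); the method names and `create_if_missing` flag are illustrative, not the actual protocol.

```python
class SeqKV:
    """Toy in-memory stand-in for a sequentially consistent KV store.

    Illustrative only -- Maelstrom's seq-kv is a separate service you
    talk to over RPC, not a local object.
    """

    def __init__(self):
        self._data = {}

    def read(self, key, default=0):
        return self._data.get(key, default)

    def cas(self, key, expected, new, create_if_missing=True):
        current = self._data.get(key)
        if current is None and create_if_missing:
            current = 0
        if current != expected:
            return False  # someone else updated the key first
        self._data[key] = new
        return True


def add(kv, key, delta):
    """Atomically add `delta` by retrying CAS until no writer races us."""
    while True:
        old = kv.read(key, default=0)
        if kv.cas(key, old, old + delta):
            return
```

The retry loop is what makes the increment safe: a concurrent writer causes the CAS to fail, and the loop re-reads the fresh value before trying again.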
Build Grow-Only Counter (G-Counter) CRDT
Implement a G-Counter CRDT where each node maintains its own counter. The total is the sum of all per-node counters. Merging is done by taking the element-wise maximum of each slot.
Handle Concurrent Increments Across Multiple Nodes
Test your G-Counter under heavy concurrent load. Multiple nodes will simultaneously increment, and the final value must equal the sum of all increments.
G-Counter and PN-Counter
Implement a G-Counter (Grow-Only CRDT)
A G-Counter (Grow-only Counter) is the simplest CRDT. Each node maintains a vector of N integers, one per node. A node only increments its own slot, and the counter's value is the sum of all slots.
Prove G-Counter CRDT Properties
A CRDT merge function must satisfy three properties to guarantee convergence: commutativity, associativity, and idempotency. Proving these for the G-Counter's element-wise max merge shows why replicas are guaranteed to converge.
Implement a PN-Counter for Increment and Decrement
A G-Counter only grows. To support decrements, the PN-Counter uses two G-Counters: P (positive) for increments and N (negative) for decrements.
Gossip PN-Counter Across a Cluster
To replicate the PN-Counter across a cluster, each node periodically gossips its full state to random peers. This is an anti-entropy protocol that guarantees eventual convergence.
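Push-based anti-entropy can be sketched by simulating several replicas in one process: each "tick", every node merges its state into one randomly chosen peer. The node names, counts, round limit, and seed below are arbitrary; the example uses a plain G-Counter map to keep the merge one line.

```python
import random


def merge(a, b):
    """G-Counter merge: element-wise max over the union of node slots."""
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in set(a) | set(b)}


def gossip_round(states, rng):
    """Anti-entropy: each node pushes its full state to one random peer."""
    for sender in list(states):
        peer = rng.choice([n for n in states if n != sender])
        states[peer] = merge(states[peer], states[sender])


# Three replicas, each having seen only its own increments so far.
states = {"n1": {"n1": 4}, "n2": {"n2": 7}, "n3": {"n3": 1}}
rng = random.Random(0)  # fixed seed so the simulation is repeatable
for _ in range(20):     # a handful of rounds is plenty for 3 nodes
    gossip_round(states, rng)

# Every replica now reports the same total: 4 + 7 + 1 = 12.
totals = {sum(s.values()) for s in states.values()}
```

Because the merge is idempotent, a node receiving the same state twice is harmless, which is exactly why gossiping full state to random peers is safe.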
Pass the Maelstrom G-Counter Workload
The Maelstrom g-counter workload is the definitive correctness test for your CRDT counter implementation. It runs your nodes in a simulated distributed environment and verifies that the final count is correct.
More CRDTs
Implement an OR-Set (Observed-Remove Set)
The OR-Set (Observed-Remove Set) solves the concurrent add/remove problem. Each element is tagged with a unique identifier, and remove only deletes the tags it has observed.
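A minimal OR-Set sketch, assuming tombstones are kept forever for simplicity (real implementations garbage-collect them): each `add` mints a fresh unique tag, and `remove` can only tombstone tags the local replica has actually seen, so a concurrent add on another replica survives the merge.

```python
import uuid


class ORSet:
    """Observed-Remove Set: adds win over concurrent removes, because a
    remove can only delete the tags it has actually observed."""

    def __init__(self):
        self.entries = {}     # element -> set of live unique tags
        self.tombstones = {}  # element -> set of removed tags

    def add(self, element):
        self.entries.setdefault(element, set()).add(uuid.uuid4().hex)

    def remove(self, element):
        # Tombstone only the locally observed tags; a concurrent add on
        # another replica carries a fresh tag this remove cannot see.
        observed = self.entries.pop(element, set())
        self.tombstones.setdefault(element, set()).update(observed)

    def contains(self, element):
        return bool(self.entries.get(element))

    def merge(self, other):
        for elem, tags in other.tombstones.items():
            self.tombstones.setdefault(elem, set()).update(tags)
        for elem, tags in other.entries.items():
            self.entries.setdefault(elem, set()).update(tags)
        # Drop any tag that has been tombstoned on either replica.
        for elem in list(self.entries):
            self.entries[elem] -= self.tombstones.get(elem, set())
            if not self.entries[elem]:
                del self.entries[elem]
```

The "add wins" behavior falls out of the tagging: the remove deleted an old tag, the concurrent re-add created a new one, and merge keeps any tag that was never observed by a remove.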
Implement a Multi-Value Register (MV-Register)
A Multi-Value Register (MV-Register) handles concurrent writes by keeping ALL concurrent values as siblings. The client resolves conflicts on read.
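One way to sketch this uses version vectors to decide whether one write happened before another or the two were concurrent. The class below is a simplified illustration under that assumption, not a production design: concurrent writes survive as siblings, and a new write dominates everything its replica has seen.

```python
class MVRegister:
    """Multi-Value Register: concurrent writes are kept as siblings."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.siblings = []  # list of (value, version_vector) pairs

    def write(self, value):
        # A new write dominates every sibling this replica has seen:
        # take the pointwise max of their vectors, then bump our slot.
        vv = {}
        for _, sib_vv in self.siblings:
            for node, count in sib_vv.items():
                vv[node] = max(vv.get(node, 0), count)
        vv[self.node_id] = vv.get(self.node_id, 0) + 1
        self.siblings = [(value, vv)]

    def read(self):
        # The client sees ALL concurrent values and resolves the conflict.
        return {value for value, _ in self.siblings}

    @staticmethod
    def _dominates(a, b):
        """True if version vector `a` is >= `b` in every slot."""
        return all(a.get(node, 0) >= count for node, count in b.items())

    def merge(self, other):
        combined = self.siblings + other.siblings
        kept = []
        for i, (value, vv) in enumerate(combined):
            dominated = any(
                self._dominates(other_vv, vv) and other_vv != vv
                for j, (_, other_vv) in enumerate(combined) if j != i
            )
            if not dominated and (value, vv) not in kept:
                kept.append((value, vv))
        self.siblings = kept
```

This is the same conflict model Dynamo-style stores expose: a read may return several siblings, and writing a resolved value collapses them back to one.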
Implement a Sequence CRDT for Collaborative Text Editing
A Sequence CRDT enables collaborative text editing where multiple users can type simultaneously without conflicts. The RGA (Replicated Growable Array) is one such structure.
Analyze CRDT Tradeoffs vs. OCC and Locking
CRDTs provide coordination-free eventual consistency, but come with tradeoffs. Comparing CRDTs with OCC (Optimistic Concurrency Control) and locking reveals when each approach is the right tool.
Build a Distributed Shopping Cart with CRDTs
A distributed shopping cart demonstrates CRDTs solving a real-world problem. Using an OR-Set, items can be added and removed from the cart across partitioned replicas without coordination.
Interview Prep
Common interview questions for Backend / Distributed Systems Engineer roles that map directly to what you build in this track. Click any question to reveal the model answer.
Model Answer
Options by consistency requirement: (1) Single Redis node with INCR atomic command (simple, SPOF), (2) Redis with WAIT for replication (better durability), (3) G-Counter CRDT: each node maintains its own count, total is sum of all; merge is component-wise max. G-Counters provide eventual consistency with no coordination. (4) For strong consistency: use a transactional DB or a single leader. The key question is whether you need exact counts or approximate counts.
Model Answer
A CRDT (Conflict-free Replicated Data Type) is a data structure that can be replicated across nodes and merged automatically without conflicts. A G-Counter is a CRDT: each node i maintains counter[i]. Merge is component-wise max(a[i], b[i]). Total is sum of all components. This works because the merge operation is commutative (order of merges does not matter), associative (grouping of merges does not matter), and idempotent (merging the same state twice is harmless). These three properties define a join semilattice, the mathematical foundation of CRDTs.
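The three semilattice properties can be checked directly on a dict-based G-Counter. The node names and counts below are arbitrary example values.

```python
def merge(a, b):
    """G-Counter merge: component-wise max over the union of node slots."""
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in set(a) | set(b)}


def total(counter):
    """The counter's value: sum of every node's component."""
    return sum(counter.values())


a = {"n1": 5, "n2": 0}
b = {"n2": 3, "n3": 7}
c = {"n1": 2, "n3": 9}

# Commutative, associative, idempotent -- a join semilattice.
assert merge(a, b) == merge(b, a)                      # order irrelevant
assert merge(merge(a, b), c) == merge(a, merge(b, c))  # grouping irrelevant
assert merge(a, a) == a                                # re-merging harmless
assert total(merge(merge(a, b), c)) == 5 + 3 + 9       # max per slot, then sum
```

Idempotency is the property that makes gossip safe: receiving the same state twice (duplicated or re-sent messages) cannot inflate the count.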
Model Answer
Depends on consistency model. Read-your-writes consistency guarantees the user sees their own increment. Without it, they may read a stale replica. Implementations: sticky sessions (route user to same replica), read from the writer, synchronous replication before acknowledging the write, or use a linearizable store (etcd, CockroachDB). Amazon DynamoDB's eventually consistent reads may return stale data; consistent reads guarantee the latest committed value at the cost of higher latency.
Model Answer
Exact counts at 100M scale are expensive. Use approximate counting: HyperLogLog for unique viewer counts (2% error, uses ~1.5KB of memory regardless of cardinality). For total views, use a stream-processing pipeline: events go to Kafka, a stream processor (Flink, Samza) aggregates in sliding windows, results go to a cache (Redis). Accept eventual consistency for display counts — showing "10M views" vs "10,000,234 views" is a product decision, not a technical constraint.
Model Answer
A PN-Counter (Positive-Negative Counter) is two G-Counters: P (increments) and N (decrements). Value = sum(P) - sum(N). Merge: component-wise max on both P and N independently. Increment: P[nodeId]++. Decrement: N[nodeId]++. This works because each G-Counter individually satisfies the CRDT properties, and the subtraction is applied only at read time.
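A minimal PN-Counter sketch along those lines: two per-node maps standing in for the P and N G-Counters, with the subtraction applied only in `value()`.

```python
class PNCounter:
    """PN-Counter: two G-Counters, one for increments, one for decrements."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.p = {}  # node -> total increments made by that node
        self.n = {}  # node -> total decrements made by that node

    def increment(self, amount=1):
        self.p[self.node_id] = self.p.get(self.node_id, 0) + amount

    def decrement(self, amount=1):
        # Decrements GROW the N counter; both halves are grow-only.
        self.n[self.node_id] = self.n.get(self.node_id, 0) + amount

    def value(self):
        return sum(self.p.values()) - sum(self.n.values())

    def merge(self, other):
        # Merge BOTH maps; merging only one makes replicas diverge.
        for src, dst in ((other.p, self.p), (other.n, self.n)):
            for node, count in src.items():
                dst[node] = max(dst.get(node, 0), count)
```

Note that `decrement` increments N rather than touching P: each half stays grow-only, so each half keeps the G-Counter's convergence guarantee.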
Questions are representative of real interview patterns. Model answers are starting points — adapt them with your own experience and the specific context of the interview.
Common Mistakes
The top 5 mistakes builders make in this track — and exactly how to fix them. Click any mistake to see the root cause and the correct approach.
Why it happens
Each node only knows about its own increments unless it actively merges state from peers. Reading local state gives a partial view.
The fix
On read, either (a) query all peers for their current partial counts and sum them, or (b) keep a replicated copy of every node's partial count and sum those.
Why it happens
A single shared integer requires coordination to avoid lost updates. Without coordination you need a data structure that lets each node "own" its slice of the state.
The fix
Use a G-Counter: a map from node ID to that node's cumulative increment count. Merging two G-Counters takes the element-wise maximum. The total is the sum of all values.
Why it happens
Relying solely on the periodic gossip tick to propagate updates introduces up to one full tick interval of staleness.
The fix
After a local increment, immediately kick off a propagation round to a subset of peers rather than waiting for the next scheduled tick.
Why it happens
A PN-Counter is a pair (positive G-Counter, negative G-Counter). If the merge logic applies to only one of the two, the value diverges.
The fix
Always merge both the positive and negative G-Counters together. The total value is `sum(positive[nodeId]) - sum(negative[nodeId])` across all nodes.
Why it happens
Floating-point addition is not associative: summing the same values in different orders can give different results, so replicas that apply increments in different orders disagree.
The fix
Always use integer types for counters. If you need fractional increments, multiply by a fixed scale factor and store as integers.
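Both the failure and the fix are easy to demonstrate. The `SCALE = 100` (hundredths) below is an arbitrary choice of fixed-point precision.

```python
# The failure: ten additions of 0.1 do not sum to exactly 1.0.
deltas = [0.1] * 10
left_to_right = 0.0
for d in deltas:
    left_to_right += d
assert left_to_right != 1.0  # off by a rounding error in IEEE 754 doubles

# The fix: scale fractional increments to integers and sum those exactly.
SCALE = 100                              # track hundredths
int_deltas = [round(0.1 * SCALE)] * 10   # each delta stored as the integer 10
assert sum(int_deltas) == 1 * SCALE      # exactly 1.00, in any order
```

Integer addition is associative and commutative, so every replica computes the same total no matter the merge order; the division by `SCALE` happens only at display time.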
Comparison Mode
Side-by-side comparisons of the approaches, algorithms, and trade-offs you encounter in this track. Expand any comparison to see a detailed breakdown.
| Dimension | G-Counter | PN-Counter | Shared Register |
|---|---|---|---|
| Supports decrement | No | Yes | Yes |
| Coordination needed | None | None | Yes (locking or consensus) |
| Merge strategy | Element-wise max per node | Element-wise max for positive + negative maps | Last write wins (or conflicts) |
| Eventual consistency | Guaranteed | Guaranteed | Requires careful design |
| Space complexity | O(n) — one slot per node | O(n) — two maps of n slots | O(1) — single value |
| Used in | Like counts, view counts | Shopping cart quantities, inventory | Config values, leader epoch |
Concepts Covered
Prerequisites
It is recommended to complete the previous tracks before starting this one. Concepts build progressively throughout the curriculum.