Implementation
Distributed systems must handle network partitions: periods when nodes cannot communicate with one another. Your ID generator must continue working even when completely isolated from other nodes.
Test your implementation by verifying it works when:
- The node cannot reach any other nodes
- The network has high latency
- Messages are being dropped
A good ID generation scheme is fully local and does not require coordination with other nodes.
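One such fully local scheme is to combine the node's own ID with a per-node counter, which is consistent with the `"n1-0"` IDs in the sample trace below. A minimal sketch (class name is illustrative):

```python
class IdGenerator:
    """Fully local ID generator: no coordination, no network calls."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counter = 0

    def generate(self):
        # node_id is unique per node and counter is unique within the node,
        # so the pair is unique across the whole cluster.
        new_id = f"{self.node_id}-{self.counter}"
        self.counter += 1
        return new_id
```

For example, `IdGenerator("n1").generate()` yields `"n1-0"`, then `"n1-1"`, and so on, with no network traffic at all.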
Sample Test Cases
```json
{"src":"c0","dest":"n1","body":{"type":"init","msg_id":1,"node_id":"n1","node_ids":["n1"]}}
{"src":"c1","dest":"n1","body":{"type":"generate","msg_id":2}}
{"src":"n1","dest":"c0","body":{"type":"init_ok","in_reply_to":1,"msg_id":0}}
{"src":"n1","dest":"c1","body":{"type":"generate_ok","in_reply_to":2,"msg_id":1,"id":"n1-0"}}
```
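A node that produces the responses above can be a small loop reading JSON messages from stdin and writing replies to stdout. The sketch below handles only the two message types shown; error handling and the other `init` fields are omitted:

```python
import json
import sys

class Node:
    """Handles the init/generate messages shown in the sample trace."""

    def __init__(self):
        self.node_id = None
        self.counter = 0   # local ID counter
        self.msg_id = 0    # outgoing message counter

    def handle(self, msg):
        body = msg["body"]
        if body["type"] == "init":
            self.node_id = body["node_id"]
            reply_body = {"type": "init_ok"}
        elif body["type"] == "generate":
            # Purely local: works even when this node is fully partitioned.
            reply_body = {"type": "generate_ok",
                          "id": f"{self.node_id}-{self.counter}"}
            self.counter += 1
        else:
            return None
        reply_body["in_reply_to"] = body["msg_id"]
        reply_body["msg_id"] = self.msg_id
        self.msg_id += 1
        return {"src": self.node_id, "dest": msg["src"], "body": reply_body}

if __name__ == "__main__":
    node = Node()
    for line in sys.stdin:
        out = node.handle(json.loads(line))
        if out is not None:
            print(json.dumps(out), flush=True)
```

Feeding the two sample requests into this loop reproduces the two sample responses, including the `msg_id` numbering.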
Theoretical Hub
CAP Theorem and Availability
The CAP theorem states that during a network partition, a distributed system can preserve either consistency or availability, but not both.
ID generation should prioritize availability: even an isolated node should generate valid IDs.
Local-First Design
By embedding the node_id in our IDs, we achieve partition tolerance. Each node can independently generate IDs that are guaranteed unique across the cluster, without coordination.
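The uniqueness argument can be checked concretely: two nodes generating IDs in complete isolation can never collide, because each ID carries its owner's `node_id` as a prefix. A small demonstration using the same `node_id-counter` format as the sample:

```python
def make_ids(node_id, count):
    # Each node generates purely locally: its node_id plus a local counter.
    return {f"{node_id}-{i}" for i in range(count)}

ids_n1 = make_ids("n1", 1000)
ids_n2 = make_ids("n2", 1000)

# Disjoint even though neither node ever talked to the other.
assert ids_n1.isdisjoint(ids_n2)
```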
The Trade-off
| Approach | Consistency | Availability |
|---|---|---|
| Central ID server | Strong (sequential IDs) | Fails during partition |
| Node-embedded IDs | Weak (no ordering) | Always available |
Clock Drift Handling
What if the clock goes backwards? This can happen when an NTP adjustment steps the system clock. A robust implementation should:
- Detect backward clock movement
- Wait until the timestamp advances, OR
- Continue using the previous timestamp with an incrementing sequence
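The second option above (keep the previous timestamp and bump a sequence number) can be sketched as follows. The timestamp/sequence layout is illustrative, not a fixed spec, and `now_ms` is injectable only to make the backward-clock case easy to exercise:

```python
import time

class ClockSafeGenerator:
    """Timestamp+sequence IDs that tolerate a clock stepping backwards."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.last_ts = 0
        self.sequence = 0

    def generate(self, now_ms=None):
        if now_ms is None:
            now_ms = int(time.time() * 1000)
        if now_ms <= self.last_ts:
            # Clock went backwards (or we're in the same millisecond):
            # reuse the previous timestamp and increment the sequence,
            # so IDs stay unique without waiting for the clock.
            self.sequence += 1
            now_ms = self.last_ts
        else:
            self.sequence = 0
            self.last_ts = now_ms
        return f"{self.node_id}-{now_ms}-{self.sequence}"
```

A backward clock step then produces a new unique ID instead of a duplicate: after generating at timestamp 100, a call at timestamp 50 still emits an ID stamped 100 with the next sequence number.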