Implementation
Distribute cache entries across multiple cache nodes using consistent hashing. This scales cache capacity beyond a single node while maintaining efficient key lookup.
Implement:
- Consistent hash ring for cache nodes
- Key routing to appropriate node
- Handling node join/leave with minimal key redistribution
- Client library that routes requests automatically
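The four pieces above can be sketched together in Python. This is a sketch under assumed conventions: the `HashRing` name, the MD5-based hash, and 100 virtual nodes per physical node are illustrative choices, not part of the challenge spec.

```python
import bisect
import hashlib

class HashRing:
    """Consistent hash ring with virtual nodes (illustrative sketch)."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes
        self._ring = []  # sorted list of (position, node) pairs
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key):
        # Any stable, well-distributed hash works; MD5 is an arbitrary choice.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Join: each physical node occupies `vnodes` positions on the ring.
        for i in range(self.vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove_node(self, node):
        # Leave: only keys this node owned migrate to clockwise successors.
        self._ring = [entry for entry in self._ring if entry[1] != node]

    def node_for(self, key):
        """Route a key to the first node at or after its ring position."""
        if not self._ring:
            raise LookupError("ring is empty")
        idx = bisect.bisect(self._ring, (self._hash(key), ""))
        return self._ring[idx % len(self._ring)][1]
```

A client library would wrap `node_for` so callers issue plain get/put requests and the ring picks the destination node transparently.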
Sample Test Cases
{"src":"c0","dest":"n1","body":{"type":"init","msg_id":1,"node_id":"n1","node_ids":["n1","cache1","cache2","cache3"]}}
{"src":"c1","dest":"n1","body":{"type":"hash_key","msg_id":2,"key":"mykey","ring_size":64}}
{"src":"n1","dest":"c0","body":{"type":"init_ok","in_reply_to":1,"msg_id":0}}
{"src":"n1","dest":"c1","body":{"type":"hash_key_ok","in_reply_to":2,"msg_id":1,"key":"mykey","node":"cache2","ring_position":42}}

Theoretical Hub
Distributed Caching
A single cache server has limited memory. Distributed caching shards data across multiple servers: each key is routed to one server based on its hash, so total cache capacity scales linearly with the number of nodes.
Consistent Hashing
Regular modulo hashing (hash(key) % N) redistributes most keys when N changes. Consistent hashing minimizes redistribution: adding or removing a node only affects the keys that fall between that node and its predecessor on the ring — roughly 1/N of the keyspace.
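The difference is easy to measure. A sketch, with assumed details (node names, MD5, 50 virtual nodes per node): growing from 4 to 5 nodes, modulo hashing remaps most keys, while the ring remaps roughly one fifth.

```python
import hashlib

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def build_ring(nodes, vnodes=50):
    return sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))

def owner(key, ring):
    pos = h(key)
    for p, n in ring:       # linear scan; real code would use bisect
        if p >= pos:
            return n
    return ring[0][1]       # wrap around to the first position

keys = [f"key-{i}" for i in range(10000)]
nodes4 = ["n1", "n2", "n3", "n4"]
nodes5 = nodes4 + ["n5"]

# Modulo hashing: a key moves whenever h(k) % 4 != h(k) % 5 (~80% of keys).
moved_mod = sum(h(k) % 4 != h(k) % 5 for k in keys)

# Consistent hashing: only keys landing on n5's new arcs move (~20%).
ring4, ring5 = build_ring(nodes4), build_ring(nodes5)
moved_ring = sum(owner(k, ring4) != owner(k, ring5) for k in keys)
print(moved_mod, moved_ring)
```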
Virtual Nodes
With few physical nodes, the ring may be unbalanced. Virtual nodes map each physical server to many ring positions, improving distribution. Many production systems use on the order of 100-200 virtual nodes per server.
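The balancing effect can be seen directly. A sketch with assumed names, comparing 1 versus 150 virtual nodes per server: the max/min load ratio across nodes shrinks toward 1 as virtual nodes are added.

```python
import hashlib
from collections import Counter

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def build_ring(nodes, vnodes):
    return sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))

def owner(key, ring):
    # First ring position clockwise at or after the key, wrapping around.
    pos = h(key)
    return next((n for p, n in ring if p >= pos), ring[0][1])

nodes = ["cache1", "cache2", "cache3"]
keys = [f"k{i}" for i in range(9000)]

for vnodes in (1, 150):
    counts = Counter(owner(k, build_ring(nodes, vnodes)) for k in keys)
    ratio = max(counts.values()) / min(counts.values())
    print(f"{vnodes:3d} virtual node(s) per server: max/min load = {ratio:.2f}")
```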