TASK
Implementation
Implement a local cache at each request-handling node. When a request comes in:
- Check if the key exists in the local cache
- If cache hit and not expired, return cached value
- If cache miss or expired, fetch from backend
- Store result in cache with TTL
- Return result
Track cache hit rate to measure effectiveness.
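The read path above can be sketched as a single helper. This is an illustrative sketch using a plain dict keyed by `(value, expiry)` tuples, not the exercise's `RequestCache` API; the `cached_read` name and `fetch_fn` parameter are hypothetical:

```python
import time

def cached_read(cache, key, fetch_fn, ttl):
    """Check the local cache first; fall back to the backend on miss/expiry."""
    entry = cache.get(key)
    if entry is not None:
        value, expiry = entry
        if time.monotonic() < expiry:
            return value  # cache hit, still fresh
    # Cache miss or expired entry: fetch from backend and re-cache with TTL.
    value = fetch_fn(key)
    cache[key] = (value, time.monotonic() + ttl)
    return value
```

Hit/miss counters (for the hit-rate tracking the task asks for) would be incremented on the two branches of the freshness check.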
Sample Test Cases
Cache hit (timeout: 5000ms)
Input
{"src":"c0","dest":"n1","body":{"type":"init","msg_id":1,"node_id":"n1","node_ids":["n1"]}}
{"src":"c1","dest":"n1","body":{"type":"cache_write","msg_id":2,"key":"x","value":100}}
{"src":"c2","dest":"n1","body":{"type":"cache_read","msg_id":3,"key":"x"}}
Expected Output
{"src":"n1","dest":"c0","body":{"type":"init_ok","in_reply_to":1,"msg_id":0}}
{"src":"n1","dest":"c1","body":{"type":"cache_write_ok","in_reply_to":2,"msg_id":1}}
{"src":"n1","dest":"c2","body":{"type":"cache_read_ok","in_reply_to":3,"msg_id":2,"hit":true,"value":100}}
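The exchange above is a JSON-over-stdio request/reply protocol. One way to dispatch these three message types is sketched below; the `handle_message` helper and its `state` dict are illustrative names, not part of the exercise, and TTL handling is omitted for brevity:

```python
import json
import sys

def handle_message(msg, state):
    """Build the reply envelope for one incoming message.
    `state` holds the node id, a msg_id counter, and the key/value store."""
    body = msg["body"]
    reply = {"in_reply_to": body["msg_id"], "msg_id": state["next_id"]}
    state["next_id"] += 1
    if body["type"] == "init":
        state["node_id"] = body["node_id"]
        reply["type"] = "init_ok"
    elif body["type"] == "cache_write":
        state["store"][body["key"]] = body["value"]
        reply["type"] = "cache_write_ok"
    elif body["type"] == "cache_read":
        value = state["store"].get(body["key"])
        reply.update(type="cache_read_ok", hit=value is not None, value=value)
    return {"src": state["node_id"], "dest": msg["src"], "body": reply}

def main():
    state = {"next_id": 0, "node_id": None, "store": {}}
    for line in sys.stdin:
        print(json.dumps(handle_message(json.loads(line), state)), flush=True)
```

Run against the sample input, this produces replies matching the expected output above (replies carry the node's own monotonically increasing `msg_id` and echo the request's `msg_id` in `in_reply_to`).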
Hints
Hint 1: Cache responses at the request node
Hint 2: Use TTL for freshness control
Hint 3: Check cache before calling backend
OVERVIEW
Theoretical Hub
Why Caching?
Caching trades space for time. By storing frequently accessed data closer to the requester, we reduce latency and backend load. A cache with a 90% hit rate cuts backend traffic by 10x, since only the 10% of requests that miss reach the backend.
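The traffic-reduction arithmetic can be checked directly (the `traffic_reduction` function is just an illustration of the math, not part of the exercise):

```python
def traffic_reduction(hit_rate):
    """Backend-traffic reduction factor for a given cache hit rate."""
    miss_rate = 1.0 - hit_rate  # fraction of requests that reach the backend
    return 1.0 / miss_rate

# 90% hits -> 10% of requests reach the backend -> 10x reduction
```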
Request Node Cache
The simplest cache sits at each request handler. It is fast (no network hop) but duplicates data across nodes. This works well for hot data that all nodes access.
TTL (Time-To-Live)
TTL defines how long cached data remains valid. Short TTL = more freshness, more backend load. Long TTL = less freshness, less load. Choose based on how often data changes and tolerance for staleness.
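In code, a TTL typically becomes an absolute expiry timestamp computed at write time. A minimal sketch (the `TTLEntry` class is a hypothetical name; passing `now` explicitly just makes expiry testable without sleeping):

```python
import time

class TTLEntry:
    """One cached value with an absolute expiry timestamp."""
    def __init__(self, value, ttl, now=None):
        now = time.monotonic() if now is None else now
        self.value = value
        self.expires_at = now + ttl  # TTL converted to a deadline at write time

    def fresh(self, now=None):
        now = time.monotonic() if now is None else now
        return now < self.expires_at
```

Using a monotonic clock avoids spurious expiry (or immortal entries) when the wall clock jumps.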
Key Concepts
caching · local cache · TTL
main.py
python
#!/usr/bin/env python3
import sys
import json
import time


class RequestCache:
    def __init__(self, default_ttl=60):
        self.cache = {}  # key -> (value, expiry_time)
        self.default_ttl = default_ttl
        self.hits = 0
        self.misses = 0

    def get(self, key):
        # TODO: Return cached value if it exists and has not expired
        # TODO: Track hit/miss statistics
        pass

    def set(self, key, value, ttl=None):
        # TODO: Store value with expiry time
        pass

    def get_or_fetch(self, key, fetch_fn, ttl=None):
        # TODO: Return cached value, or fetch and cache
        pass

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total > 0 else 0.0


class Node:
    def __init__(self):
        self.cache = RequestCache(default_ttl=30)
        self.backend = {}  # Simulated backend storage

    def fetch_from_backend(self, key):
        # Simulate a backend fetch
        return self.backend.get(key)

    def handle_read(self, key):
        # TODO: Use cache to serve reads