TASK
Implementation
Implement collapsed forwarding for cache misses to prevent the thundering herd problem:
When a cached item expires and multiple requests arrive:
- First request goes to the origin
- Subsequent requests for the same key are collapsed onto it and wait
- When origin responds, cache the response
- Return the cached response to all waiting requests
This prevents the backend from being overwhelmed when popular items expire; a sketch of the pattern follows below.
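A minimal sketch of the idea, assuming a threading-based server and a hypothetical blocking call backend.fetch(key) to the origin (both names are illustrative, not part of the task's API): the first miss registers an event and fetches; later requests wait on that event and then re-read the cache.

import threading
import time

class CollapsingCache:
    def __init__(self, backend, ttl=60):
        self.backend = backend      # hypothetical object with a blocking fetch(key)
        self.ttl = ttl
        self.cache = {}             # key -> (value, expiry)
        self.pending = {}           # key -> threading.Event for the in-flight fetch
        self.lock = threading.Lock()

    def get(self, key):
        while True:
            with self.lock:
                entry = self.cache.get(key)
                if entry and time.time() < entry[1]:
                    return entry[0]            # fresh cache hit
                event = self.pending.get(key)
                if event is None:
                    # First miss: this request owns the trip to the origin
                    event = threading.Event()
                    self.pending[key] = event
                    owner = True
                else:
                    owner = False
            if owner:
                try:
                    value = self.backend.fetch(key)    # single request to the origin
                    with self.lock:
                        self.cache[key] = (value, time.time() + self.ttl)
                    return value
                finally:
                    with self.lock:
                        del self.pending[key]
                    event.set()                        # wake all collapsed waiters
            # Collapsed request: wait for the owner's fetch, then re-check the cache
            event.wait()

Waiters loop back and re-check the cache after being woken, so if the owner's fetch fails, the next waiter simply becomes the new owner and retries.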
Sample Test Cases
Collapse concurrent requests (timeout: 5000ms)
Input
{
  "src": "c0",
  "dest": "n1",
  "body": {
    "type": "init",
    "msg_id": 1,
    "node_id": "n1",
    "node_ids": [
      "n1"
    ]
  }
}
Expected Output
{"src":"n1","dest":"c0","body":{"type":"init_ok","in_reply_to":1,"msg_id":0}}
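This handshake looks Maelstrom-style; a minimal sketch of the reply logic, assuming messages arrive as one JSON object per line on stdin and replies are written to stdout (field names taken from the sample above):

import sys
import json

def main():
    next_msg_id = 0
    for line in sys.stdin:
        msg = json.loads(line)
        body = msg["body"]
        if body["type"] == "init":
            node_id = body["node_id"]
            reply = {
                "src": node_id,
                "dest": msg["src"],
                "body": {
                    "type": "init_ok",
                    "in_reply_to": body["msg_id"],
                    "msg_id": next_msg_id,
                },
            }
            next_msg_id += 1
            print(json.dumps(reply), flush=True)

if __name__ == "__main__":
    main()

For the sample input this prints exactly the expected init_ok message; other message types would be handled in further branches of the same loop.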
Hints
Hint 1
Similar to deduplication but for cache misses
Hint 2
Only one request goes to origin
Hint 3
Queue waiters until response arrives
OVERVIEW
Theoretical Hub
Thundering Herd Problem
When a popular cached item expires, thousands of requests might simultaneously miss the cache and hit the backend. This can overwhelm the backend and cause cascading failures.
Collapsed Forwarding
Collapsed forwarding lets only one request through to the origin while the others wait for that response. When the response arrives, it is cached and returned to all waiters. Caching proxies such as Varnish implement this technique.
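As a quick illustration, assuming the CollapsingCache sketch from the task description above and a hypothetical counting backend, many concurrent requests for the same cold key should produce a single origin fetch:

import threading
import time

class CountingBackend:
    # Hypothetical origin that records how many times it is hit
    def __init__(self):
        self.calls = 0
    def fetch(self, key):
        self.calls += 1
        time.sleep(0.1)            # simulate a slow origin
        return f"value-for-{key}"

backend = CountingBackend()
proxy = CollapsingCache(backend)   # sketch class defined earlier on this page

threads = [threading.Thread(target=proxy.get, args=("hot-key",)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(backend.calls)               # prints 1: the other 49 requests were collapsed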
Key Concepts
collapsed forwarding, thundering herd, cache miss handling
main.py
#!/usr/bin/env python3
import sys
import json
import threading
import time


class CollapsedForwardingProxy:
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}    # key -> (value, expiry)
        self.pending = {}  # key -> threading.Event
        self.lock = threading.Lock()

    def is_cached(self, key):
        if key not in self.cache:
            return False
        value, expiry = self.cache[key]
        return time.time() < expiry

    def get(self, key):
        # TODO: Return from cache, or collapse requests to origin
        pass

    def fetch_and_cache(self, key, ttl=60):
        # TODO: Fetch from backend and cache
        pass


if __name__ == "__main__":
    pass