ARCHIVED from builddistributedsystem.com on 2026-04-28 — URL: https://builddistributedsystem.com/tracks/caches/tasks/task-11-4-eviction
TASK

Implementation

Implement cache eviction policies to manage limited memory. When the cache is full and a new entry must be added, an eviction policy decides what to remove.

Implement two eviction strategies:

  1. TTL (Time-To-Live): Remove entries after they expire
  2. LRU (Least Recently Used): Remove entries not accessed recently

Combine them: honor TTL, and when at capacity, evict LRU entries.
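The combined policy described above can be sketched as a single class. This is a minimal illustration, not the task's reference solution; the class and parameter names (`LRUTTLCache`, `capacity`, `ttl_seconds`) are placeholders of my choosing. It relies on Python's `OrderedDict`, which preserves insertion order and supports O(1) reordering:

```python
import time
from collections import OrderedDict

class LRUTTLCache:
    """Illustrative cache combining TTL expiry with LRU eviction at capacity."""

    def __init__(self, capacity, ttl_seconds):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self.entries = OrderedDict()  # key -> (value, expires_at)

    def get(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self.entries[key]        # honor TTL: expired entries are gone
            return None
        self.entries.move_to_end(key)    # a read marks the entry most recently used
        return value

    def put(self, key, value):
        if key in self.entries:
            del self.entries[key]
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # at capacity: evict the LRU entry
        self.entries[key] = (value, time.monotonic() + self.ttl)
```

Note the design choice: TTL is checked lazily on read rather than by a background sweeper, which keeps the sketch simple but means expired entries occupy space until they are touched or evicted.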

Sample Test Cases

LRU eviction (timeout: 5000ms)
Input
{
  "src": "c0",
  "dest": "n1",
  "body": {
    "type": "init",
    "msg_id": 1,
    "node_id": "n1",
    "node_ids": [
      "n1"
    ]
  }
}
Expected Output
{"src":"n1","dest":"c0","body":{"type":"init_ok","in_reply_to":1,"msg_id":0}}
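Assuming a Maelstrom-style protocol (JSON messages over stdin/stdout, as the sample above suggests), the `init` → `init_ok` exchange can be handled as follows. The helper name `make_reply` and the loop structure are illustrative, not prescribed by the task:

```python
import json
import sys

def make_reply(request, body, msg_id):
    """Build a response envelope: swap src/dest and link it via in_reply_to."""
    return {
        "src": request["dest"],
        "dest": request["src"],
        "body": {**body, "in_reply_to": request["body"]["msg_id"], "msg_id": msg_id},
    }

def main():
    """Read newline-delimited JSON messages and answer init requests."""
    msg_id = 0
    for line in sys.stdin:
        request = json.loads(line)
        if request["body"]["type"] == "init":
            print(json.dumps(make_reply(request, {"type": "init_ok"}, msg_id)),
                  flush=True)
            msg_id += 1
```

Calling `main()` turns the script into a node; flushing after each write matters because the test harness reads responses line by line.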

Hints

Hint 1
Track access order (or last-access time) for LRU.
Hint 2
Use an ordered dict, or a hash map plus a doubly-linked list, for O(1) lookups and reordering.
Hint 3
Evict only when capacity is reached.
OVERVIEW

Theoretical Hub

Why Eviction?

Memory is finite. Without eviction, caches would grow unbounded. Eviction policies determine which entries to remove when space is needed.

LRU (Least Recently Used)

LRU evicts the entry that has not been accessed for the longest time. The assumption is that recently accessed data will be accessed again (temporal locality). LRU is the most common eviction policy.
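A pure LRU cache is a few lines in Python because `OrderedDict` already maintains recency order for us (a hash map plus doubly-linked list would achieve the same O(1) bounds by hand). The class name and API here are illustrative:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: OrderedDict keeps keys in recency order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # reads refresh recency
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # drop the least recently used key
        self.entries[key] = value
```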

LFU (Least Frequently Used)

LFU tracks access counts and evicts entries with the lowest frequency. This works well for stable access patterns but can keep old cold entries that were once popular.
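For contrast, here is a deliberately naive LFU sketch (names are mine; real implementations use frequency buckets for O(1) eviction rather than this O(n) scan, and break ties by recency). It also exhibits the weakness described above: an entry that was popular long ago keeps a high count and survives eviction indefinitely:

```python
from collections import Counter

class LFUCache:
    """Naive LFU sketch: evicts the key with the lowest access count."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.counts = Counter()

    def get(self, key):
        if key not in self.data:
            return None
        self.counts[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=lambda k: self.counts[k])  # O(n) scan
            del self.data[victim]
            del self.counts[victim]
        self.data[key] = value
        self.counts[key] += 1
```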

Key Concepts

LRU, TTL, eviction, cache capacity