ARCHIVED from builddistributedsystem.com on 2026-04-28 — URL: https://builddistributedsystem.com/tracks/logger/tasks/task-10-4-3-partition-leader
TASK

Implementation

In a distributed log like Kafka, each partition must have exactly one leader broker that handles all reads and writes. Followers replicate data from the leader for fault tolerance.

Architecture:

  • Leader: the broker responsible for a partition. All producers and consumers interact with the leader.
  • Followers: replicate the partition log from the leader. They do not serve reads (in standard Kafka).
  • Leader election: when the leader crashes, one of the in-sync followers is elected as the new leader.
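The leader/follower/term relationship above can be sketched as a small state record. This is an illustrative sketch, not the task's required implementation; the names `PartitionState` and `fail_over` are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PartitionState:
    """Leadership metadata for one topic-partition (illustrative names)."""
    leader: str          # broker currently serving all reads and writes
    followers: list      # in-sync replicas eligible for election
    term: int = 1        # monotonically increasing leadership epoch

    def fail_over(self, failed_leader: str) -> str:
        """Promote the first in-sync follower when the leader crashes."""
        if failed_leader != self.leader or not self.followers:
            raise RuntimeError("no eligible follower to promote")
        self.leader = self.followers.pop(0)  # pick an in-sync follower
        self.term += 1                       # bump the term to fence stale leaders
        return self.leader

state = PartitionState(leader="broker-1", followers=["broker-2", "broker-3"])
state.fail_over("broker-1")
print(state.leader, state.term)  # broker-2 2
```

Bumping the term on every election is what lets followers reject writes from a deposed leader that has not yet noticed it was replaced.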

The metadata flow:

  1. Producer sends a metadata request (the partition_leader message below) to discover which broker is the leader for a partition

  2. Producer sends ProduceRequest directly to the leader broker
  3. Leader writes the message to its local log
  4. Leader replicates to followers
  5. After replication, leader acknowledges the producer

This ensures total order within a partition — all messages pass through a single leader.
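Steps 3 through 5 of the flow above can be sketched as a single produce path. This is a toy simulation under stated assumptions: `replicate` is modeled by appending to in-memory follower lists rather than real RPCs, and `produce` is a hypothetical helper, not part of the task's protocol.

```python
def produce(message, leader_log, follower_logs, acks):
    """Toy produce path: append locally, replicate, then acknowledge."""
    leader_log.append(message)                 # step 3: leader appends to its log
    for log in follower_logs:                  # step 4: replicate to followers
        log.append(message)
    acks.append(("ack", len(leader_log) - 1))  # step 5: ack with the new offset

leader, f1, f2, acks = [], [], [], []
produce({"key": "k", "value": "v"}, leader, [f1, f2], acks)
print(leader == f1 == f2, acks)  # True [('ack', 0)]
```

Because every message takes this same path through one leader, the offset assigned in step 3 is a total order for the partition.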

Request:  {"type": "partition_leader", "msg_id": 1, "topic": "orders", "partition": 0}
Response: {"type": "partition_leader_ok", "in_reply_to": 1, "leader": "broker-1", "followers": ["broker-2", "broker-3"], "term": 3}

Request:  {"type": "partition_failover", "msg_id": 2, "topic": "orders", "partition": 0, "failed_leader": "broker-1"}
Response: {"type": "partition_failover_ok", "in_reply_to": 2, "new_leader": "broker-2", "new_term": 4, "failover_ms": 250}
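A minimal dispatcher for the two message types above might look like the following sketch. The `partitions` table, the `handle` function, and the hard-coded `failover_ms` of 0 are assumptions for illustration; a real node would measure the failover time.

```python
partitions = {("orders", 0): {"leader": "broker-1",
                              "followers": ["broker-2", "broker-3"],
                              "term": 3}}

def handle(body):
    """Dispatch the two partition-metadata message types (sketch)."""
    p = partitions[(body["topic"], body["partition"])]
    if body["type"] == "partition_leader":
        return {"type": "partition_leader_ok", "in_reply_to": body["msg_id"],
                "leader": p["leader"], "followers": p["followers"],
                "term": p["term"]}
    if body["type"] == "partition_failover":
        p["leader"] = p["followers"].pop(0)  # promote an in-sync follower
        p["term"] += 1                       # new term fences the old leader
        return {"type": "partition_failover_ok", "in_reply_to": body["msg_id"],
                "new_leader": p["leader"], "new_term": p["term"],
                "failover_ms": 0}            # placeholder; measure this in practice
    raise ValueError(f"unknown message type: {body['type']}")

print(handle({"type": "partition_leader", "msg_id": 1,
              "topic": "orders", "partition": 0}))
```

Starting the partition at term 3 mirrors the sample responses: a failover from that state yields `new_term: 4`.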

Sample Test Cases

Query partition leader (Timeout: 5000ms)
Input
{"src":"c0","dest":"n1","body":{"type":"init","msg_id":1,"node_id":"n1","node_ids":["n1","n2","n3"]}}
{"src":"c1","dest":"n1","body":{"type":"partition_leader","msg_id":2,"topic":"orders","partition":0}}
Expected Output
{"src": "n1", "dest": "c0", "body": {"type": "init_ok", "in_reply_to": 1, "msg_id": 0}}
Leader failover elects new leader (Timeout: 5000ms)
Input
{"src":"c0","dest":"n1","body":{"type":"init","msg_id":1,"node_id":"n1","node_ids":["n1","n2","n3"]}}
{"src":"c1","dest":"n1","body":{"type":"partition_failover","msg_id":2,"topic":"orders","partition":0,"failed_leader":"n1"}}
Expected Output
{"src": "n1", "dest": "c0", "body": {"type": "init_ok", "in_reply_to": 1, "msg_id": 0}}
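A minimal node skeleton that satisfies the init exchange in both test cases could look like this. It is a sketch under the usual newline-delimited-JSON convention these tests imply; `process_line` and `reply` are illustrative names, and handlers for the partition messages are left as a stub.

```python
import json
import sys

node_id, next_msg_id = None, 0

def reply(req, body):
    """Wrap a response body with routing fields and a fresh msg_id."""
    global next_msg_id
    body["in_reply_to"] = req["body"]["msg_id"]
    body["msg_id"] = next_msg_id
    next_msg_id += 1
    return {"src": node_id, "dest": req["src"], "body": body}

def process_line(line):
    """Parse one JSON message and return the response, or None."""
    global node_id
    req = json.loads(line)
    if req["body"]["type"] == "init":
        node_id = req["body"]["node_id"]
        return reply(req, {"type": "init_ok"})
    return None  # partition_leader / partition_failover handlers go here

if __name__ == "__main__":
    for line in sys.stdin:
        out = process_line(line)
        if out:
            print(json.dumps(out), flush=True)
```

Feeding it the init message from the test input yields exactly the expected init_ok line, with `msg_id` starting at 0.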

Hints

Hint 1
Each Kafka partition has a leader broker that handles all reads and writes
Hint 2
N-1 follower brokers replicate from the leader for fault tolerance
Hint 3
Use Raft for leader election within each partition group
Hint 4
Producers discover the leader via a metadata request and send writes directly to it
Hint 5
On leader failure, Raft automatically elects a new leader from the followers
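Two Raft mechanics the hints rely on can be shown in isolation: a candidate needs a strict majority of votes, and followers use randomized election timeouts so they rarely become candidates simultaneously. The function names and the 150–300ms range (taken from the Raft paper's suggested defaults) are illustrative.

```python
import random

def election_won(votes_received: int, cluster_size: int) -> bool:
    """A candidate wins once a strict majority of the cluster votes for it."""
    return votes_received > cluster_size // 2

def election_timeout_ms(base: int = 150, spread: int = 150) -> int:
    """Randomized timeout so followers stagger their candidacies."""
    return base + random.randint(0, spread)

print(election_won(2, 3), election_won(2, 4))  # True False
```

With three brokers per partition group, two votes (the candidate's own plus one follower's) are enough, so failover can complete even with one broker down.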
OVERVIEW

Theoretical Hub

Concept overview coming soon

Key Concepts

partition leader, Raft per partition, leader broker, follower replication, metadata