TASK
Implementation
How fast is your node? In production systems, you need to measure throughput (messages per second) and latency (time to process each message). Profiling reveals whether the bottleneck is in parsing, dispatch, or serialization.
Your task is to add benchmarking to your node:
- Track the timestamp of each incoming message and each outgoing response
- Compute per-message latency (time from receive to send)
- Compute overall throughput (messages processed per second)
- Report statistics via a bench_stats message type
Request: {"type": "bench_stats", "msg_id": 1}
Response: {"type": "bench_stats_ok", "in_reply_to": 1,
"total_messages": 100,
"elapsed_ms": 523,
"throughput_per_sec": 191.2,
"avg_latency_us": 45,
"p99_latency_us": 120}Additionally implement a bench_echo type that is identical to echo but records timing:
Request: {"type": "bench_echo", "msg_id": 1, "echo": "perf"}
Response: {"type": "bench_echo_ok", "in_reply_to": 1, "echo": "perf", "latency_us": 42}Sample Test Cases
Sample Test Cases

Init and echo still work (Timeout: 5000ms)
Input
{"src":"c0","dest":"n1","body":{"type":"init","msg_id":1,"node_id":"n1","node_ids":["n1"]}}
{"src":"c1","dest":"n1","body":{"type":"echo","msg_id":2,"echo":"bench"}}
Expected Output
{"src": "n1", "dest": "c0", "body": {"type": "init_ok", "in_reply_to": 1, "msg_id": 0}}
{"src": "n1", "dest": "c1", "body": {"type": "echo_ok", "echo": "bench", "in_reply_to": 2, "msg_id": 1}}
Bench echo includes latency_us field (Timeout: 5000ms)
Input
{"src":"c0","dest":"n1","body":{"type":"init","msg_id":1,"node_id":"n1","node_ids":["n1"]}}
{"src":"c1","dest":"n1","body":{"type":"bench_echo","msg_id":2,"echo":"perf"}}
Expected Output
{"src": "n1", "dest": "c0", "body": {"type": "init_ok", "in_reply_to": 1, "msg_id": 0}}
Hints
Hint 1
Track timestamps for each message received and each response sent
Hint 2
Throughput = total messages / elapsed time
Hint 3
Latency = time between receiving a message and sending the response
Hint 4
Use time.monotonic() for accurate elapsed time measurements
Hint 5
Store latency samples to compute percentiles (p50, p99)
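Hint 5 is the only step that needs more than a line of arithmetic. Below is a minimal nearest-rank percentile helper; it is a sketch only, assuming latency samples are stored as integer microseconds, and the starter code's get_percentile method could be filled in the same way.

import math

def percentile(samples, pct):
    # Nearest-rank percentile: p99 is the value at rank ceil(0.99 * n).
    if not samples:
        return 0
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

percentile(samples, 50) then gives the median and percentile(samples, 99) the p99 value reported in bench_stats_ok.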
OVERVIEW
Theoretical Hub
Concept overview coming soon
Key Concepts
benchmarking, throughput, latency, profiling, performance
main.py
python
#!/usr/bin/env python3
import sys
import json
import time


class Node:
    def __init__(self):
        self.node_id = None
        self.node_ids = []
        self.next_msg_id = 0
        self.start_time = None      # set when init is handled
        self.message_count = 0
        self.latencies = []         # latency samples, in microseconds

    def send(self, dest, body):
        body["msg_id"] = self.next_msg_id
        self.next_msg_id += 1
        message = {"src": self.node_id, "dest": dest, "body": body}
        print(json.dumps(message), flush=True)

    def reply(self, request, body):
        body["in_reply_to"] = request["body"]["msg_id"]
        self.send(request["src"], body)

    def record_latency(self, start_us):
        # TODO: Compute and store latency in microseconds
        pass

    def get_percentile(self, percentile):
        # TODO: Compute the given percentile from self.latencies
        # e.g., p99 = 99th percentile
        pass


def main():
    node = Node()
    for line in sys.stdin:
        recv_time = time.monotonic()
        line = line.strip()
        if not line:
            continue
        message = json.loads(line)
        body = message["body"]
        msg_type = body.get("type")

        if msg_type == "init":
            node.node_id = body["node_id"]
            node.node_ids = body["node_ids"]
            node.start_time = recv_time  # start the benchmark clock
            node.reply(message, {"type": "init_ok"})
        elif msg_type == "echo":
            node.reply(message, {"type": "echo_ok", "echo": body["echo"]})
        # TODO: handle bench_echo and bench_stats here, using recv_time,
        # node.message_count, node.record_latency, and node.get_percentile


if __name__ == "__main__":
    main()
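For the bench_stats side, one possible handler shape is shown below. It is a sketch only, building on the Node class and imports above, and it assumes get_percentile has been implemented and that start_time was set when init was handled.

def handle_bench_stats(node, message):
    # Sketch: report throughput and latency statistics collected so far.
    # Assumes node.latencies holds integer microsecond samples; the guards
    # avoid dividing by zero when no messages have been timed yet.
    elapsed_s = time.monotonic() - node.start_time
    avg_us = sum(node.latencies) / len(node.latencies) if node.latencies else 0
    node.reply(message, {
        "type": "bench_stats_ok",
        "total_messages": node.message_count,
        "elapsed_ms": int(elapsed_s * 1000),
        "throughput_per_sec": round(node.message_count / elapsed_s, 1) if elapsed_s > 0 else 0.0,
        "avg_latency_us": int(avg_us),
        "p99_latency_us": node.get_percentile(99),
    })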