System Design FAQ: Top Questions
25. How would you design an Event-Driven Architecture (EDA) for Microservices?
An Event-Driven Architecture (EDA) decouples services through asynchronous messaging and publish/subscribe event distribution, enabling loosely coupled, scalable, and reactive systems.
📋 Functional Requirements
- Allow services to emit and consume events
- Support pub/sub and event replay
- Ensure message delivery guarantees (e.g., at-least-once or exactly-once)
- Enable auditing and debugging
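At-least-once delivery means a consumer may see the same event more than once, so handlers are typically made idempotent by tracking already-processed event IDs. A minimal sketch (the in-memory `processed_ids` set stands in for what would normally be a database or cache):

```python
# Idempotent event handling for at-least-once delivery.
# In production, the seen-ID store would be a durable database or cache,
# not an in-memory set.

processed_ids = set()

def handle_once(event, handler):
    """Invoke handler only if this event has not been processed before."""
    event_id = event["order_id"]  # assumes each event carries a unique ID
    if event_id in processed_ids:
        return False  # duplicate delivery: skip
    handler(event)
    processed_ids.add(event_id)
    return True

results = []
handle_once({"order_id": "ORD-1"}, results.append)
handle_once({"order_id": "ORD-1"}, results.append)  # duplicate, ignored
```

Tracking IDs is only one approach; handlers whose effects are naturally idempotent (e.g., upserts keyed by event ID) need no separate store.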
📦 Non-Functional Requirements
- Low latency message processing
- Resilience to service failures
- Horizontal scalability
🏗️ Core Components
- Event Broker: Kafka, RabbitMQ, or NATS
- Event Schema Registry: Ensures event compatibility
- Event Consumers: Services listening for domain events
- Dead Letter Queue (DLQ): Stores failed messages
- Audit Store: Logs for observability and replay
📨 Kafka Producer Example
```python
from kafka import KafkaProducer
import json

producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    value_serializer=lambda v: json.dumps(v).encode('utf-8')
)

event = {
    "event_type": "OrderCreated",
    "order_id": "ORD-1234",
    "customer_id": "CUST-4567",
    "timestamp": "2025-06-11T12:00:00Z"
}

producer.send("orders", value=event)
producer.flush()  # send() is asynchronous; flush before exiting
```
📬 Kafka Consumer Example
```python
from kafka import KafkaConsumer
import json

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers='localhost:9092',
    auto_offset_reset='earliest',  # start from the beginning if no committed offset
    group_id='payment-service',
    value_deserializer=lambda m: json.loads(m.decode('utf-8'))
)

for msg in consumer:
    handle_event(msg.value)  # dispatch to the service's domain logic
```
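The `handle_event` call above is left undefined; a common pattern is a dispatch table mapping `event_type` to a registered handler. A broker-free sketch (the handler name and its behavior are illustrative):

```python
# Dispatch events to handlers keyed by event_type.
HANDLERS = {}

def on(event_type):
    """Decorator that registers a handler for a given event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on("OrderCreated")
def charge_payment(event):
    # Illustrative handler for the payment-service consumer group.
    return f"charged customer {event['customer_id']}"

def handle_event(event):
    handler = HANDLERS.get(event["event_type"])
    if handler is None:
        raise ValueError(f"no handler for {event['event_type']}")
    return handler(event)

result = handle_event({"event_type": "OrderCreated", "customer_id": "CUST-4567"})
```

Keeping dispatch in one place makes it easy to route unrecognized event types to a dead letter queue instead of raising.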
🧱 Event Schema Registry Sample (Avro)
```json
{
  "type": "record",
  "name": "OrderCreated",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "customer_id", "type": "string"},
    {"name": "timestamp", "type": "string"}
  ]
}
```
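In practice a schema-registry client library enforces this contract at serialization time; the check it performs can be approximated in plain Python. A simplified sketch that covers only required string fields, not full Avro semantics:

```python
# Simplified schema check: every declared field must be present with the
# declared type. Real Avro validation (unions, defaults, logical types)
# is handled by registry client libraries.

ORDER_CREATED_SCHEMA = {
    "type": "record",
    "name": "OrderCreated",
    "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "customer_id", "type": "string"},
        {"name": "timestamp", "type": "string"},
    ],
}

def validate(event, schema):
    """Return True if every schema field is present with the right type."""
    type_map = {"string": str}
    for field in schema["fields"]:
        value = event.get(field["name"])
        if not isinstance(value, type_map[field["type"]]):
            return False
    return True

ok = validate({"order_id": "ORD-1234", "customer_id": "CUST-4567",
               "timestamp": "2025-06-11T12:00:00Z"}, ORDER_CREATED_SCHEMA)
bad = validate({"order_id": "ORD-1234"}, ORDER_CREATED_SCHEMA)
```

Rejecting malformed events at the producer keeps incompatible payloads out of the topic, which is cheaper than handling them in every consumer.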
🔁 Replay and DLQ Strategy
- Rewind consumer offsets to reprocess events (or rebuild latest-state views from a log-compacted topic)
- Push failed events to a DLQ with error context
- Retry policy with exponential backoff
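The retry-then-dead-letter flow above can be sketched without a broker. Here the DLQ is modeled as a list (in Kafka it would be a separate topic), and `time.sleep` stands in for scheduler-driven redelivery:

```python
import time

dead_letter_queue = []

def process_with_retry(event, handler, max_attempts=3, base_delay=0.01):
    """Retry with exponential backoff; dead-letter after max_attempts."""
    for attempt in range(max_attempts):
        try:
            return handler(event)
        except Exception as exc:
            if attempt == max_attempts - 1:
                # Preserve error context alongside the failed event.
                dead_letter_queue.append({
                    "event": event,
                    "error": str(exc),
                    "attempts": max_attempts,
                })
                return None
            time.sleep(base_delay * 2 ** attempt)  # 0.01s, 0.02s, ...

def always_fails(event):
    raise RuntimeError("downstream unavailable")

process_with_retry({"event_type": "OrderCreated"}, always_fails)
```

Storing the error message and attempt count with the event gives operators enough context to decide whether a dead-lettered message is safe to replay.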
📈 Observability
- Event delivery latency
- Consumer lag
- Failed message rate
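Consumer lag is the gap between the newest offset in each partition and the consumer group's committed offset; the broker exposes both numbers, and the arithmetic itself is simple. A sketch with hard-coded, hypothetical offsets:

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag: log-end offset minus committed offset."""
    return {p: end_offsets[p] - committed_offsets.get(p, 0)
            for p in end_offsets}

# Hypothetical offsets for three partitions of the "orders" topic.
lag = consumer_lag({0: 1500, 1: 1480, 2: 1510},
                   {0: 1500, 1: 1400, 2: 1505})
# Partition 1 is 80 events behind; alert when lag exceeds a threshold.
```

A steadily growing lag on one partition usually signals a slow or stuck consumer rather than a traffic spike, so per-partition (not just aggregate) lag is worth exporting.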
📌 Final Insight
EDA enables scalable, decoupled microservices with high responsiveness. Key challenges lie in schema evolution, event ordering, and fault-tolerant delivery strategies. Leveraging Kafka, schema registry, and observability tools ensures robustness and traceability.
