
Core Concepts: Topics and Partitions in Kafka

Introduction to Topics and Partitions

In Kafka, topics and partitions are fundamental concepts that form the backbone of its data storage and distribution mechanism. Understanding these concepts is crucial for designing and implementing effective Kafka-based solutions.

What are Topics?

A topic in Kafka is a category or feed name to which records are published. Topics are always multi-subscriber: a topic can have zero, one, or many consumers that subscribe to the data written to it.

Creating a Topic

To create a topic in Kafka, you use the kafka-topics.sh script:

bin/kafka-topics.sh --create --topic my_topic --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1
Example:

Creating a topic named user_logs with 3 partitions and a replication factor of 1:

bin/kafka-topics.sh --create --topic user_logs --bootstrap-server localhost:9092 --replication-factor 1 --partitions 3

What are Partitions?

Partitions split a topic's data into multiple parts so it can be distributed and processed in parallel. Each partition is an ordered, immutable sequence of records that is continually appended to.
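To make the idea concrete, the sketch below shows how a keyed record can be routed to a partition by hashing its key modulo the partition count. Note this is a simplified illustration: Kafka's default partitioner hashes the serialized key with murmur2, whereas this sketch uses Java's built-in String.hashCode() as a stand-in.

```java
import java.util.List;

// Simplified sketch of keyed partition selection. Kafka's default
// partitioner applies murmur2 to the serialized key; String.hashCode()
// here is only a stand-in to illustrate "hash of key mod partition count".
public class PartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the result is always non-negative
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 3;
        for (String user : List.of("alice", "bob", "charlie")) {
            System.out.println(user + " -> partition " + partitionFor(user, partitions));
        }
    }
}
```

The key property worth noticing: the same key always hashes to the same partition, which is what lets Kafka guarantee per-key ordering within a partition.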

Partitioning a Topic

When a topic is created, you can specify the number of partitions. More partitions can be added later, but the partition count can never be reduced. Keep in mind that adding partitions changes which partition a given key maps to, which can break per-key ordering for keyed data.

bin/kafka-topics.sh --alter --topic my_topic --partitions 4 --bootstrap-server localhost:9092
Example:

Adding more partitions to an existing topic user_logs:

bin/kafka-topics.sh --alter --topic user_logs --partitions 5 --bootstrap-server localhost:9092

How Partitions Work

Partitions enable Kafka to parallelize the consumption and production of records. Each partition can be hosted on a different server, which means that multiple consumers can read from different partitions in parallel, and multiple producers can write to different partitions in parallel.
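The parallelism described above comes from spreading a topic's partitions across the consumers in a group. The sketch below distributes partitions round-robin over consumers; in real Kafka this assignment is negotiated by the group coordinator using a pluggable assignor, so treat this as an illustration of the idea, not the actual protocol.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of spreading a topic's partitions across the
// consumers in a group. Real Kafka negotiates this via the group
// coordinator and a configurable partition assignor.
public class AssignmentSketch {
    static List<List<Integer>> assign(int numPartitions, int numConsumers) {
        List<List<Integer>> assignment = new ArrayList<>();
        for (int c = 0; c < numConsumers; c++) assignment.add(new ArrayList<>());
        for (int p = 0; p < numPartitions; p++) {
            assignment.get(p % numConsumers).add(p); // round-robin placement
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 5 partitions shared by 2 consumers:
        // consumer 0 reads partitions [0, 2, 4], consumer 1 reads [1, 3]
        System.out.println(assign(5, 2));
    }
}
```

A consequence worth remembering: a single partition is only ever read by one consumer within a group, so the partition count caps the useful parallelism of that group.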

Partition Offsets

Each record within a partition is assigned a unique offset, which is a sequential ID that uniquely identifies each record within the partition.

Example:

If you have a partition with records:

0 => {"user": "Alice", "action": "login"}
1 => {"user": "Bob", "action": "logout"}
2 => {"user": "Charlie", "action": "login"}

Here, 0, 1, 2 are the offsets of the records.
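The append-only behavior behind those offsets can be sketched as a tiny in-memory log: each appended record receives the next sequential offset, and reads never remove records. This is purely illustrative; a real partition is persisted in segment files on disk.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a single partition as an append-only log: every appended
// record gets the next sequential offset, records are never modified
// in place, and reading by offset is non-destructive.
public class PartitionLog {
    private final List<String> records = new ArrayList<>();

    long append(String record) {
        records.add(record);
        return records.size() - 1; // offset assigned to the new record
    }

    String read(long offset) {
        return records.get((int) offset);
    }

    public static void main(String[] args) {
        PartitionLog log = new PartitionLog();
        System.out.println(log.append("{\"user\": \"Alice\", \"action\": \"login\"}"));
        System.out.println(log.append("{\"user\": \"Bob\", \"action\": \"logout\"}"));
        System.out.println(log.read(0)); // re-reading an offset returns the same record
    }
}
```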

Producing and Consuming with Partitions

Producing to Partitions

When producing records to a topic, the producer can specify the partition to which the record should be written. If no partition is specified, Kafka uses a partitioner to decide which partition to write to.

producer.send(new ProducerRecord<>("my_topic", partition, key, value));
Example:

Producing a record to a specific partition:

producer.send(new ProducerRecord<>("user_logs", 2, "user123", "login"));

Consuming from Partitions

When consuming records from a topic, a consumer can read from specific partitions or from all partitions of the topic.

consumer.assign(Arrays.asList(new TopicPartition("my_topic", partition)));
Example:

Consuming records from a specific partition:

consumer.assign(Arrays.asList(new TopicPartition("user_logs", 2)));

Replication and Fault Tolerance

Kafka ensures fault tolerance by replicating partitions across multiple brokers. Each partition has one leader and zero or more followers (with a replication factor of 1 there are no followers). The leader handles all read and write requests for the partition, while followers replicate the leader's data so that one of them can take over if the leader fails.
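One common way to think about replica placement is a staggered round-robin over brokers: each partition's replica list starts on a different broker, and the first replica acts as the preferred leader. The sketch below illustrates that idea only; Kafka's actual placement logic is more involved (for example, it can be rack-aware).

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of spreading partition replicas across brokers.
// Each partition's replica list starts on a different broker, and the
// first replica in the list is treated as the preferred leader.
// Kafka's real placement algorithm additionally considers racks, etc.
public class ReplicaPlacementSketch {
    static List<List<Integer>> place(int numPartitions, int numBrokers, int replicationFactor) {
        List<List<Integer>> placement = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) {
            List<Integer> replicas = new ArrayList<>();
            for (int r = 0; r < replicationFactor; r++) {
                replicas.add((p + r) % numBrokers); // shift the starting broker per partition
            }
            placement.add(replicas); // first entry = preferred leader
        }
        return placement;
    }

    public static void main(String[] args) {
        // 3 partitions over 3 brokers with replication factor 3:
        // leaders end up spread across all three brokers.
        System.out.println(place(3, 3, 3));
    }
}
```

Staggering the leader across brokers matters because the leader serves all client traffic for its partition; spreading leadership spreads the load.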

Replication Factor

The replication factor is the number of copies of each partition that Kafka maintains across brokers. A higher replication factor improves fault tolerance at the cost of additional storage and network traffic, and it cannot exceed the number of brokers in the cluster.

bin/kafka-topics.sh --create --topic my_topic --bootstrap-server localhost:9092 --replication-factor 3 --partitions 3
Example:

Creating a topic with a replication factor of 3:

bin/kafka-topics.sh --create --topic user_logs --bootstrap-server localhost:9092 --replication-factor 3 --partitions 3

Conclusion

In this tutorial, we've explored the core concepts of Kafka topics and partitions, including how to create and manage them, and how they work. Understanding these concepts is essential for building scalable and fault-tolerant Kafka-based applications.