
an introduction to Apache Kafka & spring-kafka

By Jeffrey Thomas

Posted on: 22 March 2019

Part I

Back in January 2019, I presented an introduction to Kafka basics and spring-kafka at a South Bay JVM User Group meetup. I gave a bird's-eye view of what Kafka offers as a distributed streaming platform. We explored a few key concepts and dove into an example of configuring spring-kafka as a producer/consumer client. In this two-part blog series, I'll expand on that talk and on how we're developing with Kafka at mobileforming.


Apache Kafka was created at LinkedIn; what was initially conceived as a messaging queue became the backbone for moving data from one system to another. Kafka is suitable for building real-time data pipelines and streaming applications. Some use cases that come to mind are messaging, website activity tracking, metrics, log aggregation, stream processing, event sourcing, and commit logs.

Now that we know what Kafka is, it’s time to ask: why Kafka? Is it because it’s new and shiny? I’d argue that it’s fast, reliable, scalable—among other things.

In order to really understand Kafka's capabilities, we'll start by defining the entities that make it up and how they work, beginning with the core concepts.


Let's start with producers. Producers write data to brokers and are responsible for load balancing. Next come consumers. Consumers request a range of messages from a broker and are responsible for tracking their own state. A message is a record that consists of a key, a value, and a timestamp.
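To make the producer side concrete, here is a minimal spring-kafka sketch. The GreetingProducer class and the greetings topic are illustrative names for this post, and the snippet assumes Spring Boot has already auto-configured a KafkaTemplate<String, String> bean from its spring.kafka.* properties.

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Hypothetical service that publishes greeting messages to a Kafka topic.
@Service
public class GreetingProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public GreetingProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(String userId, String message) {
        // The key (userId here) drives partition assignment; the broker assigns
        // the timestamp and offset when the record is appended to the log.
        kafkaTemplate.send("greetings", userId, message);
    }
}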


It’s also important to note that data is stored as a stream of records in a topic. Topics are multi-subscriber. Each topic is split into partitions, and partitions are then replicated.
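As a sketch of how a topic's partition and replica counts are chosen, spring-kafka can create NewTopic beans at startup through the KafkaAdmin that Spring Boot configures. The page-views name and the counts below are illustrative.

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TopicConfig {

    @Bean
    public NewTopic pageViews() {
        // 6 partitions for parallelism, each replicated to 3 brokers for safety
        // (illustrative values; pick them to match your cluster and workload).
        return new NewTopic("page-views", 6, (short) 3);
    }
}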


A partition is an ordered, immutable sequence of messages that is continually appended to. The records in each partition are assigned a sequential ID number called the offset. All published records are durably persisted by the Kafka cluster and retained for a configurable retention period. The number of partitions determines the maximum consumer (group) parallelism allowed.
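Offsets are visible directly on each consumed record. Here is a minimal listener sketch that logs where each record sits in its partition; the page-views topic carries over from the sketch above, and the page-view-audit group id is another illustrative name.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class PageViewListener {

    @KafkaListener(topics = "page-views", groupId = "page-view-audit")
    public void listen(ConsumerRecord<String, String> record) {
        // Each record carries the partition it lives in and its sequential
        // offset within that partition; consumers track their position by offset.
        System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                record.partition(), record.offset(), record.key(), record.value());
    }
}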

Consumer groups come to consensus via ZooKeeper and broker leaders. Consumers within a group are evenly distributed among the available partitions, i.e., two consumers in the same group will never share the same partition. If a group has more consumers than there are partitions, the extra consumers will not be assigned a partition.
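In spring-kafka terms, that parallelism is usually set as the concurrency of the listener container. A rough sketch follows, assuming the hypothetical 6-partition topic from the earlier example; setting concurrency higher than the partition count would just leave the extra consumers idle.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Configuration
public class ListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // One consumer thread per partition of the 6-partition topic; more than
        // 6 would leave consumers in the group without a partition to read.
        factory.setConcurrency(6);
        return factory;
    }
}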


Replicas are backups of a partition and exist solely to prevent data loss. Clients never read from or write to replicas directly, and replicas do not help increase producer or consumer parallelism.
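Replication protects best when producers wait for it. As a sketch (the bootstrap address and other property values are illustrative), acks=all tells the leader not to acknowledge a write until all in-sync replicas have it.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class DurableProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Wait for every in-sync replica before the write is acknowledged,
        // trading a little latency for protection against broker failure.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        return new DefaultKafkaProducerFactory<>(props);
    }
}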


Brokers receive messages from producers (push) and deliver messages to consumers (pull). Each broker is responsible for some partitions and also keeps copies of others. Brokers communicate with producers and consumers over a language-agnostic TCP protocol. They are typically run as a cluster on one or more servers that can span multiple data centers, and partitions are distributed and replicated over the brokers/servers in the cluster.

Each server acts as a leader for some of its partitions and a follower for others, so the load is well balanced within a cluster. If the leader fails, one of the followers will automatically become the new leader.
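To see that leader/follower layout from a client, the plain Kafka AdminClient can describe a topic; the bootstrap address and topic name below are illustrative.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class LeaderInspector {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, TopicDescription> topics =
                    admin.describeTopics(Collections.singleton("page-views")).all().get();
            // Each partition has exactly one leader broker; the remaining
            // replicas listed are its followers.
            topics.get("page-views").partitions().forEach(p ->
                    System.out.printf("partition=%d leader=%s replicas=%s%n",
                            p.partition(), p.leader(), p.replicas()));
        }
    }
}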


A ZooKeeper ensemble is required for Kafka cluster operations. ZooKeeper has the following responsibilities:

  • Cluster membership: the list of all brokers that are part of the cluster and functioning at any given moment
  • Controller election: whenever the controller node shuts down, a new controller is elected, and ZooKeeper ensures that at any given time there is only one controller and that all the other nodes agree on it
  • Topic configuration: the configuration of all topics, including the list of existing topics, the number of partitions for each topic, the location of all the replicas, topic configuration overrides, and which node is the preferred leader
  • Access control lists (ACLs) for all topics

Kafka’s performance can be summed up by these four benchmarks:

  • Up to 2 million writes/sec - 3 producers, 3x async replication
  • About 2.5 million reads/sec - 3 parallel consumers reading a topic
  • End-to-end latency: 2 ms (median), 3 ms (99th percentile), and 14 ms (99.9th percentile)
  • Throughput vs. size

To sum up this first part of getting to know Kafka: it's fast, thanks to sequential reads and writes, use of the page cache, a lightweight design, and a simple protocol. It's scalable: cluster management and partitioned, distributed queues make it easy to spin up new brokers and to support a very large number of producers and consumers. Kafka is reliable, with data replication and fault tolerance. And it's durable: it persists messages to disk and retains them even after consumption.

Having the Streams API built in is a further advantage, letting custom business logic drive events. However, Kafka is not without its faults. The most common issues are consumer lag and consumer rebalancing. Clients tend to be heavy, since a majority of the message-handling logic is the client's own responsibility; without a good client, message consumption will run into issues.

Keep an eye out for Part II in which I’ll discuss how we integrate with Kafka.