Batch tuning in Redpanda for optimized performance (part 1)

Learn how to configure batching, what affects batch sizes, and performance tradeoffs

November 19, 2024

"Faster, faster, must go faster!" This is the world we live in today, where the frequency, volume, and data complexity continually grow. Applications and use cases demand more and more data to answer business-critical questions in real time. The demand for data can easily outstrip the resources and capabilities of the systems designed to keep it moving to its ultimate destination.

Streaming data platforms, such as Redpanda, are designed to take advantage of modern hardware advancements, allowing you to squeeze the most performance out of the available capacity. Even then, high-performance systems like this can still fall prey to poor application behavior.

In this first post of a two-part series, we discuss how to configure batching, what affects batch sizes, and the performance implications of your application design choices.

What is batching?

Let’s start from the top. Every system — a web server, a database, or even a Redpanda cluster — has a limit to the number of concurrent requests it can handle. This limit is often determined by infrastructure, such as how many CPU cores are available, how fast the network or disks are, etc. Just as often, the limit to request concurrency is determined by the workload itself; small requests are usually faster to process (and use fewer resources) than large requests, so the shape of a workload (how many small requests vs. how many large requests) matters.

Although smaller requests may use fewer resources, their cost can never scale to zero. At some point, the fixed overhead of servicing a request outweighs the cost of the work actually being requested. One way to think of this is that each request incurs a fixed cost (for work that happens regardless of the request size) and a variable cost (for work determined by what was requested).

Most workloads consist of a collection of requests rather than just a single request in isolation. For some workloads, requests can be grouped into batches and processed as a larger unit. Making one single, larger request (that encompasses multiple smaller requests) is often more efficient than making many small requests since we pay the fixed cost once.

What is batching in Redpanda?

Just as in Apache Kafka®, a batch in Redpanda is a group of one or more messages written to the same partition, bundled together and sent to a broker in a single request. Rather than each message being sent and acknowledged separately, requiring multiple calls to Redpanda, the client buffers messages for a short time, optionally compresses the whole batch, and then sends it as a single request.

This process makes Redpanda far more efficient. There are fewer network calls to handle, fewer acknowledgments to send, fewer internal tasks to perform, and so on. Rather than paying a fixed cost per request over and over, we can pay that fixed cost once for the entire batch and be done. Huzzah! This reduces CPU utilization on the Redpanda broker, allowing it to handle far more messages — a clear case of “do more with less.”

As a double win, batching also makes compression more efficient: the compression ratio improves as you compress more messages at once since it can take advantage of the similarities between messages. Another huzzah!

So, what is the catch?

If batching is such a cure-all for performance, why don't Kafka clients ship with more aggressive batching defaults? It turns out that batching has a downside: clients are buffering records instead of sending them. In other words, batching is, by design, introducing additional produce latency. For many use cases, a few milliseconds here or there aren't even noticeable. But in some use cases, those milliseconds matter.

Given that linger.ms specifies the maximum additional time to wait for a larger batch, the average latency increase works out to roughly half of that value; with linger.ms set to 10 ms, for example, expect around 5 ms of extra latency on average. Sometimes, messages are unlucky and arrive just as a new batch is created, so they wait the full linger time. And sometimes, messages are lucky and make it into a batch right before it's sent (and barely wait at all).

How does Redpanda batch?

Batches are part of the Kafka API and are supported by Redpanda, but creating batches is purely a client-side activity. Rather than creating batches by hand (by creating your own “large messages” in your application), most Kafka client libraries allow you to batch messages automatically simply by configuring the producer.

linger.ms:
This controls how long the client will wait before a batch is sent. The default linger time in Java's Kafka client library is zero milliseconds, meaning the producer sends records as soon as possible rather than waiting to build larger batches (batches can still form when records arrive faster than they can be sent). The default can differ depending on the Kafka library you're using, so it's worth double-checking for your preferred library; a configuration sketch covering all three settings follows below.

batch.size:
This controls how large a batch can get (in bytes) before it’s sent. The default batch size is 16KB. Like linger.ms, verify the defaults for your preferred Kafka client library.

buffer.memory:
This controls how much memory the producer will use to buffer messages waiting to be sent to brokers. The default buffer memory size is 32MB.
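
To make these settings concrete, here is a minimal Java producer sketch that sets all three (plus batch compression). The broker address, topic name, and the specific values are illustrative assumptions, not recommendations:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);                     // wait up to 10 ms to fill a batch (illustrative)
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);             // cap batches at 64 KB per partition (illustrative)
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 64L * 1024 * 1024);  // 64 MB of buffering for unsent records (illustrative)
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");           // compress whole batches

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1_000; i++) {
                // "events" is a hypothetical topic name
                producer.send(new ProducerRecord<>("events", "key-" + i, "value-" + i));
            }
            producer.flush(); // push out any batches still lingering
        }
    }
}

With these illustrative values, a batch is sent as soon as it reaches 64 KB or has been open for 10 ms, whichever happens first.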

Triggering batches

You can think of the linger.ms and batch.size properties as triggers: whichever fires first causes the batch to be sent. Functionally, it works roughly like this pseudocode:

# The client still has an in-flight request slot available:
if client not at max-in-flight cap:
    # send the current batch once it is old enough or about to overflow
    if current linger >= linger.ms or next message would exceed batch.size:
        close_and_send_current_batch()
# All in-flight slots are occupied (backpressure):
else:
    # only close the batch when it is full, and queue it until a slot frees up
    if next message would exceed batch.size:
        close_and_enqueue_current_batch()

Effective batch size: trickier than it looks

Because batching is enabled by setting only a few simple parameters, it can be tempting to think they're the only factors that matter. In reality, the batch sizes you actually see result from a range of factors, of which the producer configuration is a small but crucial part.

7 factors that determine the effective batch size

The following list gives an idea of the things that can affect the actual batch sizes you will see in production:

Message rate
The simplest and most important factor relating to batch size is how fast data is produced. Use cases that push a few kilobytes per minute are unlikely to produce sizable batches unless the linger time is set very high.

batch.size
The next most important factor to consider is the maximum batch size in the producer configuration, which sets the upper limit on how large batches can become. Remember that to fill a larger batch, a producer may need to linger for longer, so consider raising linger.ms as well.

linger.ms
This defines the maximum age a batch can reach before it must be sent, placing an upper limit on the additional latency imposed by batching. Remember that if a producer has more time to fill a batch, it may need more space to accommodate the messages, so consider raising batch.size as well.

Outside of synthetic benchmarks, messages are rarely distributed perfectly evenly over time; in the real world, events often arrive in bursts, which means that even a modest linger.ms is likely to perform better than expected.
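
As a rough sanity check on the interplay between message rate, linger.ms, and batch.size, here is a back-of-the-envelope estimate; all of the input numbers are hypothetical, for illustration only:

public class BatchSizeEstimate {
    public static void main(String[] args) {
        // All inputs below are hypothetical.
        double messagesPerSecond = 5_000;   // per partition
        double avgMessageBytes = 512;
        double lingerMs = 10;
        int batchSizeBytes = 64 * 1024;     // producer batch.size

        // Bytes that accumulate for one partition during the linger window.
        double bytesPerLinger = messagesPerSecond * avgMessageBytes * (lingerMs / 1000.0);

        // The effective batch size is capped by batch.size.
        double effectiveBatch = Math.min(bytesPerLinger, batchSizeBytes);

        System.out.printf("Estimated batch size: ~%.0f bytes%n", effectiveBatch);
        // With these numbers: 5,000 msg/s * 512 B * 0.010 s = 25,600 bytes (~25 KB),
        // well under the 64 KB cap, so linger.ms is the trigger that fires first.
    }
}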

Partitioning

A batch contains messages for a single partition, so increasing the number of partitions in a topic means fewer messages per partition. If messages have keys, each message's partition is determined by its key, so the messages are spread across more partitions; this translates into a slower fill rate per batch, which may lead to smaller batches. It also increases the amount of client memory required, since more batches are held open at once.

Increased partition count doesn't affect batch size when the sticky partitioner is in use for messages without keys, since that partitioner writes all messages to a single partition (read: batch) until a size threshold based on batch.size is reached.
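
As a small illustration of how keys interact with batching (the topic name and key below are hypothetical, and the producer is assumed to be configured as in the earlier sketch):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PartitioningSketch {
    static void sendExamples(KafkaProducer<String, String> producer, String payload) {
        // Keyed record: the partition is derived from a hash of the key, so a workload
        // with many distinct keys spreads across partitions and fills each batch more slowly.
        producer.send(new ProducerRecord<>("events", "customer-42", payload));

        // Unkeyed record (null key): with the sticky partitioner (the default for null-keyed
        // records in recent Java clients), records stick to one partition until roughly
        // batch.size bytes accumulate, which keeps batches large.
        producer.send(new ProducerRecord<>("events", null, payload));
    }
}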

Number of producers

In some use cases, we can control the number of producers even when the total message rate is unchanged. For example, consider a fleet of HTTP servers behind a load balancer, each producing to the same topic. Increasing the number of HTTP servers (producers) means fewer messages per producer, and therefore smaller batches.

Client memory

Clients use buffer memory to hold messages while batching. If this is insufficient for all of the open batches (as could happen with a large maximum batch size), then if a new batch is required, one of the existing batches is sent to make space — likely before it hits either the maximum batch size or the maximum linger time.
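
If you need to size this buffer explicitly, the Java client exposes buffer.memory along with max.block.ms, which bounds how long send() will wait for buffer space before giving up; the values below are illustrative only:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class BufferSizingSketch {
    static Properties bufferProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 128L * 1024 * 1024); // 128 MB for open batches (illustrative)
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60_000);              // block send() up to 60 s when the buffer is full (illustrative)
        return props;
    }
}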

Backpressure

Kafka clients have an internal mechanism for tracking the in-flight batches sent to the cluster and awaiting acknowledgment. If Redpanda is heavily loaded, a client with all of the “max in flight” slots in use will experience a form of backpressure, such that the client will continue to add records into a queued batch even beyond the maximum batch size. This will result in increased batch sizes while the cluster is heavily loaded.

One consequence is that adding additional brokers to a loaded cluster can sometimes cause batch sizes to decrease since there is less backpressure.
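
For reference, the in-flight limit in the Java client is controlled by max.in.flight.requests.per.connection, which defaults to 5; a minimal sketch:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class InFlightSketch {
    static Properties inFlightProps() {
        Properties props = new Properties();
        // Maximum number of unacknowledged requests per broker connection (the default is 5).
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        return props;
    }
}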

Effective latency: it's counterintuitive

Redpanda includes broker-level produce latency tracking, which measures the time it takes for the batch to be received, written to disk, replicated to other brokers, and an acknowledgment to be sent to the producer. This is but one component of the end-to-end latency of a produce operation.

As the CPU becomes saturated on a broker due to an extremely high request rate, tasks can stack up in the broker's internal work queue. At this point, the backlog of produce requests can impact other internal housekeeping operations as they contend for CPU time. When this backlog becomes high, a produce request will take longer to process because it has to wait for its turn in the queue — the cause of increased latency.

By tuning the effective batch size to reduce the request rate, you reduce the CPU load, the average size of that backlog, and its latency impact. This may seem counterintuitive: modest increases to the producer linger time can ease the saturation enough for tail latencies to drop significantly. From the client's perspective, it looks like you're adding end-to-end latency by increasing linger, but you're actually reducing it, because the scheduling backlog is largely reduced, if not eliminated.

Conclusion and what’s next

In this blog post, we looked at batching from a first-principles perspective to understand how request batching and client tuning can vastly improve resource utilization, as well as the tradeoffs associated with your application architecture choices. In part two, we'll cover observability and tuning, optimal batch sizing, and the impact of write caching. Stay tuned.

To read more about batch tuning, browse our Docs. If you have questions, ask away in the Redpanda Community on Slack.
