It's quite common when using Kafka to treat applications as parts of a bigger pipeline (similar to a Bash pipeline) and forward processing results to other applications. Karafka supports this by allowing you to use the WaterDrop message producer from any place within your application.
You can access the pre-initialized WaterDrop producer instance using the Karafka.producer method from any place within your codebase:
Karafka.producer.produce_async(
  topic: 'events',
  payload: Events.last.to_json
)
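If you need to wait for the delivery report before continuing, WaterDrop also provides a synchronous counterpart. A minimal sketch using the same hypothetical 'events' topic:

# Blocks until the message delivery is acknowledged
Karafka.producer.produce_sync(
  topic: 'events',
  payload: Events.last.to_json
)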
WaterDrop is thread-safe and operates well at scale.
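For example, a single producer instance can be shared by several threads without extra synchronization; the sketch below (again using a hypothetical 'events' topic) dispatches messages concurrently:

threads = 5.times.map do |i|
  Thread.new do
    # All threads reuse the same Karafka.producer instance
    Karafka.producer.produce_async(
      topic: 'events',
      payload: { worker: i }.to_json
    )
  end
end

threads.each(&:join)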
If you want to produce messages from Karafka consumers, there's a handy alias method #producer for this:
class VisitsConsumer < ApplicationConsumer
  def consume
    ::Visit.insert_all(messages.payloads)

    producer.produce_async(
      topic: 'events',
      payload: { type: 'inserted', count: messages.count }.to_json
    )
  end
end
Please follow the WaterDrop README for more details on how to use it.
When using the Karafka producer in processes like Puma, Sidekiq, or Rake tasks, it is always recommended to call the #close method on the producer before the process shuts down.
This is because the #close method ensures that any pending messages in the producer's buffer are flushed to the Kafka broker before the producer shuts down. If you do not call #close, there is a risk that some messages may not be sent to the Kafka broker, resulting in lost or incomplete data.
In addition, calling #close also releases any resources held by the producer, such as network connections, file handles, and memory buffers. Failing to release these resources can lead to memory leaks, socket exhaustion, or other system-level issues that can impact the stability and performance of your application.
Overall, calling #close on the Karafka producer is a best practice that helps ensure reliable and efficient message delivery to Kafka while promoting your application's stability and scalability.
Below you can find examples of how to #close the producer used in various Ruby processes. Please note that you should not close the producer manually if you are using the Embedding API in the same process.
When you shut down Karafka, the Karafka.producer automatically closes. There's no need to close it yourself. If you're using multiple producers or a more advanced setup, you can use the app.stopped event during shutdown to handle them.
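For instance, assuming a hypothetical extra producer named SECONDARY_PRODUCER, a sketch of closing it from a listener subscribed to the app.stopped notification could look like this:

# Close any additional producers when the Karafka server stops
Karafka.monitor.subscribe('app.stopped') do |_event|
  # SECONDARY_PRODUCER is a hypothetical extra WaterDrop producer
  SECONDARY_PRODUCER.close
end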
# config/puma.rb
on_worker_shutdown do
  ::Karafka.producer.close
end
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.on(:shutdown) do
    ::Karafka.producer.close
  end
end
# Passenger: close the producer when a worker process stops
PhusionPassenger.on_event(:stopping_worker_process) do
  ::Karafka.producer.close
end
In the case of Rake tasks, just invoke ::Karafka.producer.close at the end of your task:
desc 'My example rake task that sends all users data to Kafka'
task send_users: :environment do
  User.find_each do |user|
    ::Karafka.producer.produce_async(
      topic: 'users',
      payload: user.to_json,
      # WaterDrop expects the message key to be a string
      key: user.id.to_s
    )
  end

  # Make sure that the producer is always closed before finishing
  # any rake task
  ::Karafka.producer.close
end
Karafka, by default, provides a producer that sends messages to a specified Kafka cluster. If you don't configure it otherwise, this producer will always produce messages to the default cluster that you've configured Karafka to work with. If you only specify one Kafka cluster in your configuration, all produced messages will be sent to this cluster. This is the out-of-the-box behavior and works well for many setups with a single cluster.
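For reference, the default producer inherits the kafka settings from the Karafka setup block, so a minimal single-cluster configuration (with a placeholder bootstrap server) could look like this:

# karafka.rb
class KarafkaApp < Karafka::App
  setup do |config|
    # Karafka.producer will deliver messages to this cluster
    config.kafka = { 'bootstrap.servers': 'localhost:9092' }
  end
end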
However, if you have a more complex setup where you'd like to produce messages to different Kafka clusters based on certain logic or conditions, you need a more customized setup. In such cases, you must configure a producer for each cluster you want to produce to. This means you'll have separate producer configurations tailored to each cluster, allowing you to produce to any of them as required.
In scenarios where you want to decide which cluster to produce to based on the consumer logic or the consumed message, you can override the #producer method in your consumer. By overriding this method, you can specify a dedicated cluster-aware producer instance depending on your application's logic.
# Define your producers for each of the clusters
PRODUCERS_FOR_CLUSTERS = {
  primary: Karafka.producer,
  secondary: ::WaterDrop::Producer.new do |p_config|
    p_config.kafka = {
      'bootstrap.servers': 'localhost:9095',
      'request.required.acks': 1
    }
  end
}
# And overwrite the default producer in any consumer you need
class MyConsumer < ApplicationConsumer
  def consume
    messages.each do |message|
      # Pipe messages to the secondary cluster
      producer.produce_async(topic: message.topic, payload: message.raw_payload)
    end
  end

  private

  def producer
    PRODUCERS_FOR_CLUSTERS.fetch(:secondary)
  end
end
The Web UI relies on per-producer listeners to monitor asynchronous errors. If you're crafting your own producers and utilizing the Web UI, please ensure you configure this integration appropriately.
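As a sketch of what such an integration can look like, assuming the Web UI exposes its producer listeners under ::Karafka::Web.config.tracking.producers.listeners (please verify the exact path against the Web UI documentation for your version):

secondary = PRODUCERS_FOR_CLUSTERS.fetch(:secondary)

# Attach the Web UI producer listeners so asynchronous errors from this
# producer are also tracked and visible in the Web UI
::Karafka::Web.config.tracking.producers.listeners.each do |listener|
  secondary.monitor.subscribe(listener)
end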
By leveraging this flexibility in Karafka, you can effectively manage and direct the flow of messages in multi-cluster Kafka environments, ensuring that data gets to the right place based on your application's unique requirements.