Karafka contains multiple configuration options. To keep everything organized, all the configuration options were divided into two groups:

- `karafka` options - options directly related to the Karafka framework and its components.
- `librdkafka` options - options related to librdkafka.
To apply all those configuration options, you need to use the `#setup` method from the `Karafka::App` class:
```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    config.client_id = 'my_application'
    # librdkafka configuration options need to be set as symbol values
    config.kafka = {
      'bootstrap.servers': '127.0.0.1:9092'
    }
  end
end
```
!!! note ""

    Karafka allows you to redefine some of the settings per each topic, which means that you can have a specific custom configuration that might differ from the default one configured at the app level. This allows you, for example, to connect to multiple Kafka clusters.
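As an illustration, a per-topic override might look like the sketch below. It assumes the routing-level `kafka` setting available in Karafka's `routes.draw` DSL; the consumer class and the second cluster address are hypothetical:

```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    # Default cluster used by all topics unless overridden
    config.kafka = { 'bootstrap.servers': '127.0.0.1:9092' }
  end

  routes.draw do
    topic :events do
      # EventsConsumer is a hypothetical consumer class
      consumer EventsConsumer
      # This topic alone connects to a different (hypothetical) cluster
      kafka('bootstrap.servers': 'analytics-kafka:9092')
    end
  end
end
```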
!!! note ""

    Kafka `client.id` is a string passed to the server when making requests. This is to track the source of requests beyond just IP/port by allowing a logical application name to be included in server-side request logging. Therefore, the `client_id` should be shared across multiple instances in a cluster or horizontally scaled application but distinct for each application.
A list of all the `karafka` configuration options with their details and defaults can be found here.

A list of all the configuration options related to `librdkafka` with their details and defaults can be found here.
For additional setup and/or configuration tasks, you can use the `app.initialized` event hook. It is executed once per process, right after all the framework components are ready (including those dynamically built). It can be used, for example, to configure some external components that need to be based on Karafka internal settings.

Because of how the Karafka framework lifecycle works, this event is triggered after `#setup` is done. You need to subscribe to this event before that happens, either from the `#setup` block or before.
```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    # All the config magic

    # Once everything is configured and done, assign Karafka app logger as a MyComponent logger
    # @note This example does not use config details, but you can use all the config values
    #   to set up your external components
    config.monitor.subscribe('app.initialized') do
      MyComponent::Logging.logger = Karafka::App.logger
    end
  end
end
```
There are several env settings you can use with Karafka. They are described under the Env Variables section of this Wiki.
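As a quick illustration (assuming the `KARAFKA_ENV` variable described in that section), you could start the server in a specific environment like this:

```shell
# Selects the environment Karafka boots in
KARAFKA_ENV=production bundle exec karafka server
```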
Kafka lets you compress your messages as they travel over the wire. By default, producer messages are sent uncompressed.
Karafka producer (WaterDrop) supports the following compression types:

- `gzip`
- `zstd`
- `lz4`
- `snappy`
You can enable compression by using the `compression.codec` and `compression.level` settings:
```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    config.kafka = {
      # Other kafka settings...
      'compression.codec': 'gzip',
      # Usable range is algorithm-dependent: 0-9 for gzip
      'compression.level': '9'
    }
  end
end
```
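Compression itself happens transparently inside librdkafka, but the size win is easy to see with a standalone Ruby sketch using the stdlib `zlib` (the same deflate algorithm behind `gzip`); the payload below is an invented example:

```ruby
require 'zlib'
require 'json'

# Repetitive JSON payloads, typical of event streams, compress very well
payload = JSON.generate(Array.new(100) { { event: 'page_view', path: '/home' } })

compressed = Zlib::Deflate.deflate(payload, Zlib::BEST_COMPRESSION)

puts "original: #{payload.bytesize} bytes, compressed: #{compressed.bytesize} bytes"
```

In practice, you never call `zlib` yourself; with the settings above, librdkafka compresses each message batch before it goes over the wire.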
!!! note ""

    In order to use `zstd`, you need to install `libzstd-dev`:

    ```bash
    apt-get install -y libzstd-dev
    ```