diff --git a/content/en/altinity-kb-setup-and-maintenance/configure_clickhouse_for_low_mem_envs.md b/content/en/altinity-kb-setup-and-maintenance/configure_clickhouse_for_low_mem_envs.md
index 0f62262d74..1602e705aa 100644
--- a/content/en/altinity-kb-setup-and-maintenance/configure_clickhouse_for_low_mem_envs.md
+++ b/content/en/altinity-kb-setup-and-maintenance/configure_clickhouse_for_low_mem_envs.md
@@ -1,11 +1,11 @@
 ---
-title: "Configure ClickHouse for low memory environments"
-linkTitle: "Configure ClickHouse for low memory environments"
+title: "Configure ClickHouse® for low memory environments"
+linkTitle: "Configure ClickHouse® for low memory environments"
 description: >
-  Configure ClickHouse for low memory environments
+  Configure ClickHouse® for low memory environments
 ---

-While Clickhouse it's typically deployed on powerful servers with ample memory and CPU, it can be deployed in resource-constrained environments like a Raspberry Pi. Whether you're working on edge computing, IoT data collection, or simply experimenting with ClickHouse in a small-scale setup, running it efficiently on low-memory hardware can be a rewarding challenge.
+While ClickHouse® is typically deployed on powerful servers with ample memory and CPU, it can also run in resource-constrained environments like a Raspberry Pi. Whether you're working on edge computing, IoT data collection, or simply experimenting with ClickHouse in a small-scale setup, running it efficiently on low-memory hardware can be a rewarding challenge.

 TLDR;
@@ -87,4 +87,4 @@ Some interesting settings to explain:

- `merge_max_block_size` reduces the number of rows per block when merging. The default is 8192; lowering it reduces the memory usage of merges.
- The `number_of_free_entries_in_pool` settings are useful for tuning how many concurrent merges are allowed in the queue.
  When there are fewer than the specified number of free entries in the pool, ClickHouse starts to lower the maximum size of merges to process (or to put in the queue), or does not execute part mutations, so that free threads are left for regular merges. This allows small merges to proceed instead of the pool filling up with long-running merges or multiple mutations. Check the ClickHouse documentation for more insights.
- Reduce the background pools and be conservative. On a Raspberry Pi 4 with 4 cores and 4 GB of RAM, the background pools should be no bigger than the number of cores, and even smaller if possible.
-- Tune some profile settings to enable disk spilling (`max_bytes_before_external_group_by` and `max_bytes_before_external_sort`) and reduce the number of threads per query plus enable queuing of queries (`queue_max_wait_ms`) if the `max_concurrent_queries` limit is exceeded.
\ No newline at end of file
+- Tune some profile settings to enable disk spilling (`max_bytes_before_external_group_by` and `max_bytes_before_external_sort`), reduce the number of threads per query, and enable queuing of queries (`queue_max_wait_ms`) when the `max_concurrent_queries` limit is exceeded.
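
The pool, merge, and profile settings above could be combined into a config sketch like the following. The file names and all concrete values here are illustrative assumptions for a small 4-core / 4 GB box, not recommendations; tune them against your own workload:

```xml
<!-- /etc/clickhouse-server/config.d/low_mem.xml (hypothetical file name) -->
<clickhouse>
    <!-- Keep the background merge pool at or below the core count -->
    <background_pool_size>4</background_pool_size>
    <merge_tree>
        <!-- Smaller blocks during merges lower peak merge memory -->
        <merge_max_block_size>1024</merge_max_block_size>
        <!-- Leave threads free for small/regular merges -->
        <number_of_free_entries_in_pool_to_lower_max_size_of_merge>2</number_of_free_entries_in_pool_to_lower_max_size_of_merge>
        <number_of_free_entries_in_pool_to_execute_mutation>2</number_of_free_entries_in_pool_to_execute_mutation>
    </merge_tree>
    <max_concurrent_queries>2</max_concurrent_queries>
</clickhouse>
```

```xml
<!-- /etc/clickhouse-server/users.d/low_mem_profile.xml (hypothetical file name) -->
<clickhouse>
    <profiles>
        <default>
            <!-- Fewer threads per query on a 4-core machine -->
            <max_threads>2</max_threads>
            <!-- Spill GROUP BY / ORDER BY to disk past ~256 MiB -->
            <max_bytes_before_external_group_by>268435456</max_bytes_before_external_group_by>
            <max_bytes_before_external_sort>268435456</max_bytes_before_external_sort>
            <!-- Queue queries for up to 5 s when max_concurrent_queries is hit -->
            <queue_max_wait_ms>5000</queue_max_wait_ms>
        </default>
    </profiles>
</clickhouse>
```

Splitting server-level settings (`config.d`) from profile settings (`users.d`) follows the usual ClickHouse override convention, so the defaults shipped in `config.xml` and `users.xml` stay untouched.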