---
title: "Configure ClickHouse® for low memory environments"
linkTitle: "Configure ClickHouse® for low memory environments"
description: >
  Configure ClickHouse® for low memory environments
---

While ClickHouse® is typically deployed on powerful servers with ample memory and CPU, it can also be deployed in resource-constrained environments like a Raspberry Pi. Whether you're working on edge computing, IoT data collection, or simply experimenting with ClickHouse® in a small-scale setup, running it efficiently on low-memory hardware can be a rewarding challenge.

TL;DR

Some interesting settings to explain:
- `merge_max_block_size` controls the number of rows per block when merging. The default is 8192; lowering it reduces the memory usage of merges.
- The `number_of_free_entries_in_pool` settings are very useful for tuning how many concurrent merges are allowed in the queue. When there are fewer than the specified number of free entries in the pool, ClickHouse starts lowering the maximum size of merges to process (or to put in the queue), or does not execute part mutations, so that free threads are left for regular merges. This lets small merges get processed instead of filling the pool with long-running merges or multiple mutations. Check the ClickHouse documentation for more insights.
- Reduce the background pools and be conservative. On a Raspberry Pi 4 with 4 cores and 4 GB of RAM, the background pools should be no bigger than the number of cores, and even smaller if possible (see the server config sketch after this list).
- Tune some profile settings to enable disk spilling (`max_bytes_before_external_group_by` and `max_bytes_before_external_sort`), reduce the number of threads per query, and enable queuing of queries (`queue_max_wait_ms`) when the `max_concurrent_queries` limit is exceeded (see the profile sketch after this list).
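
As a rough illustration of how the merge and background-pool settings above could be applied on a 4-core / 4 GB board, a server-side override might look like the sketch below. The file name `config.d/low_memory.xml` and all values are illustrative assumptions for a low-memory host, not recommendations taken from this guide; validate them against your ClickHouse® version (the root tag is `<clickhouse>` in recent releases, `<yandex>` in older ones).

```xml
<!-- config.d/low_memory.xml — hypothetical override file; values are illustrative -->
<clickhouse>
    <!-- Cap the server well below physical RAM on a 4 GB host -->
    <max_server_memory_usage_to_ram_ratio>0.7</max_server_memory_usage_to_ram_ratio>

    <!-- Keep background pools close to the core count (4 cores assumed here) -->
    <background_pool_size>4</background_pool_size>
    <background_schedule_pool_size>4</background_schedule_pool_size>

    <!-- Run few queries at once; the rest can wait in the queue -->
    <max_concurrent_queries>4</max_concurrent_queries>

    <!-- Global MergeTree overrides -->
    <merge_tree>
        <!-- Smaller blocks while merging to reduce merge memory usage -->
        <merge_max_block_size>2048</merge_max_block_size>
        <!-- Limit big merges and hold back mutations when the pool is nearly full -->
        <number_of_free_entries_in_pool_to_lower_max_size_of_merge>2</number_of_free_entries_in_pool_to_lower_max_size_of_merge>
        <number_of_free_entries_in_pool_to_execute_mutation>2</number_of_free_entries_in_pool_to_execute_mutation>
    </merge_tree>
</clickhouse>
```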
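
The profile-level query settings could be sketched in a users override roughly like this; again, the file name `users.d/low_memory_profile.xml` and the thresholds are assumptions for a machine with about 4 GB of RAM rather than values taken from this guide.

```xml
<!-- users.d/low_memory_profile.xml — hypothetical override file; thresholds are illustrative -->
<clickhouse>
    <profiles>
        <default>
            <!-- Hard per-query memory limit (~1.5 GB) -->
            <max_memory_usage>1500000000</max_memory_usage>
            <!-- Spill GROUP BY / ORDER BY to disk before hitting the limit -->
            <max_bytes_before_external_group_by>750000000</max_bytes_before_external_group_by>
            <max_bytes_before_external_sort>750000000</max_bytes_before_external_sort>
            <!-- Fewer threads per query on a 4-core board -->
            <max_threads>2</max_threads>
            <!-- Let queries wait in the queue when max_concurrent_queries is reached -->
            <queue_max_wait_ms>10000</queue_max_wait_ms>
        </default>
    </profiles>
</clickhouse>
```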
