DOC-884 Update recommendations for Pod resource management #946

Merged (10 commits) on Feb 7, 2025
110 changes: 71 additions & 39 deletions modules/manage/pages/kubernetes/k-manage-resources.adoc
@@ -27,12 +27,32 @@ kubectl describe nodes
[[memory]]
== Configure memory resources

On a worker node, Kubernetes and Redpanda processes run at the same time. Redpanda's memory usage is shaped by its architecture, which is built on the Seastar framework. You can configure the memory resources allocated to the Redpanda container.

=== Memory allocation and Seastar flags

Redpanda uses the following Seastar flags to control memory allocation:

[cols="1m,2a"]
|===
|Seastar Flag|Description

|--memory
|Specifies the memory available to the Redpanda process. This value directly impacts Redpanda's ability to manage workloads efficiently.

|--reserve-memory
|Reserves part of the memory for system overhead such as non-heap memory, page tables, and other non-Redpanda operations. This flag is designed for Seastar running on a dedicated VM rather than inside a container.
|===

*Default (legacy) behavior*: The Helm chart allocates 80% of the memory in `resources.memory.container` to `--memory` and reserves the remaining 20% for `--reserve-memory`. This legacy behavior is kept only for backward compatibility. Do not use this default in production.

*Production recommendation*: Use `resources.requests.memory` for production deployments (see the sketch after this list). This configuration:

- Sets `--memory` to 90% of the requested memory.
- Fixes `--reserve-memory` at 0, as Kubernetes already manages container overhead using https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[resource requests and limits^]. This simplifies memory allocation and ensures predictable resource usage.
- Configures Kubernetes resource requests for memory, enabling Kubernetes to effectively schedule and enforce memory allocation for containers.
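
For example, a minimal sketch of this mapping, assuming a hypothetical request of 10Gi (an illustrative value, not a sizing recommendation):

[,yaml]
----
resources:
  requests:
    # With a 10Gi request, the chart gives Redpanda 90% of it (9Gi)
    # through --memory and sets --reserve-memory to 0.
    memory: 10Gi
----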

CAUTION: Avoid manually setting the `--memory` and `--reserve-memory` flags unless absolutely necessary. Incorrect values can lead to performance issues, instability, or data loss.

[tabs]
======
@@ -49,13 +69,13 @@ metadata:
spec:
  chartRef: {}
  clusterSpec:
    statefulset:
      additionalRedpandaCmdFlags:
        - '--lock-memory' <1>
    resources:
      requests:
        # Allocates 90% to the --memory Seastar flag
        memory: <number><unit> <2>
----

```bash
@@ -73,13 +93,13 @@ Helm::
.`memory.yaml`
[,yaml]
----
statefulset:
  additionalRedpandaCmdFlags:
    - '--lock-memory' <1>
resources:
  requests:
    # Allocates 90% to the --memory Seastar flag
    memory: <number><unit> <2>
----
+
```bash
@@ -91,16 +111,21 @@ helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --crea
+
```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set statefulset.additionalRedpandaCmdFlags="{--lock-memory}" \ <1>
  --set resources.requests.memory=<number><unit> <2>
```

====
--
======

<1> Enabling memory locking prevents the operating system from paging out Redpanda's memory to disk. This can significantly improve performance by ensuring Redpanda has uninterrupted access to its allocated memory.

<2> Allocate at least 2.22 Gi of memory per core (approximately 2 Gi / 0.9) so that Redpanda still has the 2 Gi per core it requires after the 90% allocation to the `--memory` flag.
+
Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi.
+
Memory units are truncated to the nearest whole MiB. For example, a memory request of 2047 KiB results in 1 MiB being allocated. For a description of memory resource units, see the https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory[Kubernetes documentation^].
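+
As a quick sizing sketch (the core count and values below are hypothetical, not a recommendation):
+
[,yaml]
----
resources:
  requests:
    # Hypothetical 4-core broker: 4 x 2.22Gi is roughly 8.9Gi, so request 9Gi.
    # 90% of 9Gi (about 8.1Gi) goes to --memory, which is just over 2Gi per core.
    memory: 9Gi
----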

[[qos]]
== Quality of service and resource guarantees
@@ -129,12 +154,12 @@ spec:
  chartRef: {}
  clusterSpec:
    resources:
      requests:
        cpu: <number-of-cpu-cores>
        memory: <redpanda-container-memory>
      limits:
        cpu: <number-of-cpu-cores> # Matches the request
        memory: <redpanda-container-memory> # Matches the request
    statefulset:
      sideCars:
        configWatcher:
@@ -188,12 +213,12 @@ Helm::
[,yaml]
----
resources:
  requests:
    cpu: <number-of-cpu-cores>
    memory: <redpanda-container-memory>
  limits:
    cpu: <number-of-cpu-cores> # Matches the request
    memory: <redpanda-container-memory> # Matches the request
statefulset:
  sideCars:
    configWatcher:
@@ -240,9 +265,10 @@ helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --crea
+
```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.requests.cpu=<number-of-cpu-cores> \
  --set resources.limits.cpu=<number-of-cpu-cores> \
  --set resources.requests.memory=<redpanda-container-memory> \
  --set resources.limits.memory=<redpanda-container-memory> \
  --set statefulset.sideCars.configWatcher.resources.requests.cpu=<redpanda-sidecar-container-cpu> \
  --set statefulset.sideCars.configWatcher.resources.requests.memory=<redpanda-sidecar-container-memory> \
  --set statefulset.sideCars.configWatcher.resources.limits.cpu=<redpanda-sidecar-container-cpu> \
@@ -283,7 +309,9 @@ If you use PersistentVolumes, you can set the storage capacity for each volume.

If Redpanda runs in a shared environment, where multiple applications run on the same worker node, you can make Redpanda less aggressive in CPU usage by enabling overprovisioning. This adjustment ensures a fairer distribution of CPU time among all processes, improving overall system efficiency at the cost of Redpanda's performance.

You can enable overprovisioning by either setting the CPU request to a fractional value less than 1 or enabling the `--overprovisioned` flag.
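
For example, a minimal sketch of the fractional-request approach, assuming the value is set through `resources.cpu.cores` as in the examples below (the `200m` value is hypothetical):

[,yaml]
----
resources:
  cpu:
    # A fractional request of less than one core (here 200m, a fifth of a core)
    # enables overprovisioned mode, as described above.
    cores: 200m
----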

NOTE: You cannot enable overprovisioning when both `resources.requests` and `resources.limits` are set. When both are set, the `resources.cpu` settings (including `resources.cpu.cores`) are ignored.

[tabs]
======
@@ -300,10 +328,12 @@ metadata:
spec:
  chartRef: {}
  clusterSpec:
    statefulset:
      additionalRedpandaCmdFlags:
        - '--overprovisioned'
    resources:
      cpu:
        cores: <number-of-cpu-cores>
----

```bash
@@ -321,10 +351,12 @@ Helm::
.`cpu-cores-overprovisioned.yaml`
[,yaml]
----
statefulset:
  additionalRedpandaCmdFlags:
    - '--overprovisioned'
resources:
  cpu:
    cores: <number-of-cpu-cores>
----
+
```bash
@@ -337,7 +369,7 @@ helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --crea
```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.cpu.cores=<number-of-cpu-cores> \
  --set statefulset.additionalRedpandaCmdFlags="{--overprovisioned}"
```

====