diff --git a/modules/manage/pages/kubernetes/k-manage-resources.adoc b/modules/manage/pages/kubernetes/k-manage-resources.adoc
index cf1505a17..37b7f8aff 100644
--- a/modules/manage/pages/kubernetes/k-manage-resources.adoc
+++ b/modules/manage/pages/kubernetes/k-manage-resources.adoc
@@ -5,7 +5,7 @@
 :page-categories: Management
 :env-kubernetes: true

-You can define requirements for Pod resources such as CPU, memory, and storage. Redpanda Data recommends that you determine and set these values before deploying the cluster, but you can also update the values on a running cluster.
+Managing Pod resources such as CPU, memory, and storage is critical to running a reliable Redpanda cluster in Kubernetes. In this guide, you'll learn how to define and configure resource requests and limits using the Redpanda Helm chart and Redpanda CRD. Setting these values before deployment enables predictable scheduling, qualifies the Pods for the Guaranteed QoS class, and minimizes the risk of performance issues such as resource contention or unexpected evictions.

 == Prerequisites

@@ -24,15 +24,73 @@ kubectl describe nodes

 - <>. This configuration prevents the operating system from paging out Redpanda's memory to disk, which can significantly impact performance.

+- Ensure you <>. This configuration gives Redpanda enough CPU resources to run efficiently.
+
+- Ensure you <> per core. This configuration gives Redpanda enough memory to run efficiently.
+
+- Ensure that the number of CPU cores you allocate to Redpanda is an even integer. This configuration allows Redpanda to leverage the https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-configuration[static CPU manager policy], granting the Pod exclusive access to the requested cores for optimal performance.
+
 [[memory]]
 == Configure memory resources

-On a worker node, Kubernetes and Redpanda processes are running at the same time, including the Seastar subsystem that is built into the Redpanda binary. Each of these processes consumes memory. You can configure the memory resources that are allocated to these processes.
+When deploying Redpanda, you must reserve sufficient memory for both Redpanda and other system processes. Redpanda uses the Seastar framework to manage memory through two important flags. In Kubernetes, the values of these flags are usually set for you, depending on how you configure the Redpanda CRD or the Helm chart.
+
+- **`--memory`**: When set, explicitly defines the Redpanda heap size.
+- **`--reserve-memory`**: When set, reserves a specific amount of memory for system overhead. If not set, a reserve is automatically calculated.
+
+.Learn more about these Seastar flags
+[%collapsible]
+====
+
+[cols="1,1,2", options="header"]
+|===
+| | **`--memory` Not set** | **`--memory` Set (M)**
+
+| **`--reserve-memory` Not set**
+| Heap size = available memory - calculated reserve
+| Heap size = exactly M (if M + calculated reserve ≤ available memory). Otherwise, startup fails
+
+| **`--reserve-memory` Set \(R)**
+| Heap size = available memory - R
+| Heap size = exactly M (if M + R ≤ available memory). Otherwise, startup fails
+|===
+
+Definitions and behavior:
+
+- **Available memory**: The memory remaining after subtracting system requirements, such as `/proc/sys/vm/min_free_kbytes`, from the total or cgroup-limited memory.
+- **Calculated reserve**: The greater of 1.5 GiB or 7% of _available memory_, used when `--reserve-memory` is not explicitly set.
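+
+For example, on a node with 10 GiB of available memory and neither flag set, the reserve and resulting heap size work out as follows (illustrative values; the exact numbers depend on your nodes and cgroup limits):
+
+[source,text]
+----
+calculated reserve = max(1.5 GiB, 7% of 10 GiB) = 1.5 GiB
+heap size          = 10 GiB - 1.5 GiB           = 8.5 GiB
+----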
+
+====
+
+These flags are set for you by default when using the Redpanda Helm chart or Operator. However, you can manually set these flags using the `statefulset.additionalRedpandaCmdFlags` configuration if needed.
+
+CAUTION: Avoid manually setting the `--memory` and `--reserve-memory` flags unless absolutely necessary. Incorrect settings can lead to performance issues, instability, or data loss.
+
+[cols="1a,1a", options="header"]
+|===
+| **Legacy Behavior (Default)** | **Production Recommendation**
+
+|
+`resources.memory.container.max`, which allocates:
+
+- 80% of container memory to `--memory`
+- 20% to `--reserve-memory`
-By default, the Helm chart allocates 80% of the configured memory in `resources.memory.container` to Redpanda, with the remaining reserved for overhead such as the Seastar subsystem and other container processes.
-Redpanda Data recommends this default setting.
+
+NOTE: This option is deprecated and maintained only for backward compatibility.
+|
+Use `resources.requests.memory` with matching `resources.limits.memory` to provide a predictable, dedicated heap for Redpanda while allowing Kubernetes to effectively schedule and enforce resource limits.
-NOTE: Although you can also allocate the exact amount of memory for Redpanda and the Seastar subsystem manually, Redpanda Data does not recommend this approach because setting the wrong values can lead to performance issues, instability, or data loss. As a result, this approach is not documented here.
+
+These options ensure:
+
+- Predictable scheduling: Kubernetes uses memory requests to accurately place Pods on nodes with sufficient resources.
+- Guaranteed QoS: Matching requests and limits ensure the Pod receives the <>, reducing eviction risk.
+- Dedicated allocation:
+** 90% of the requested memory is allocated to the Redpanda heap using the `--memory` flag.
+** The remaining 10% is reserved for other processes, such as exec probes, emptyDirs, and `kubectl exec`, helping prevent transient memory spikes from causing Redpanda to be killed (`OOMKilled`).
+- The `--reserve-memory` flag is fixed at 0 because the https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/[kubelet] already manages system-level memory reservations.
+|===

 [tabs]
 ======
@@ -49,13 +107,14 @@ metadata:
 spec:
   chartRef: {}
   clusterSpec:
+    statefulset:
+      additionalRedpandaCmdFlags:
+        - '--lock-memory' <1>
     resources:
-      memory:
-        enable_memory_locking: true <1>
-      container:
-        # If omitted, the `min` value is equal to the `max` value (requested resources defaults to limits)
-        # min:
-        max: <2>
+      requests:
+        memory: <2>
+      limits:
+        memory: <3>
 ----

 ```bash
 kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
 ```
@@ -73,13 +132,14 @@ Helm::
 .`memory.yaml`
 [,yaml]
 ----
+statefulset:
+  additionalRedpandaCmdFlags:
+    - '--lock-memory' <1>
 resources:
-  memory:
-    enable_memory_locking: true <1>
-  container:
-    # If omitted, the `min` value is equal to the `max` value (requested resources defaults to limits)
-    # min:
-    max: <2>
+  requests:
+    memory: <2>
+  limits:
+    memory: <3>
 ----
 +
 ```bash
@@ -91,16 +151,113 @@ helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --crea
 +
 ```bash
 helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
-  --set resources.memory.enable_memory_locking=true \ <1>
-  --set resources.memory.container.max= <2>
+  --set statefulset.additionalRedpandaCmdFlags="{--lock-memory}" \ <1>
+  --set resources.requests.memory= \ <2>
+  --set resources.limits.memory= <3>
 ```
 ====
 --
 ======
-<1> For production, enable memory locking to prevent the operating system from paging out Redpanda's memory to disk, which can significantly impact performance.
-<2> The amount of memory to give Redpanda, Seastar, and the other container processes. You should give Redpanda at least 2 Gi of memory per core. Given that the Helm chart allocates 80% of the container's memory to Redpanda, leaving the rest for the Seastar subsystem and other processes, set this value to at least 2.5 Gi per core to ensure Redpanda has a full 2 Gi. Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi. Memory units are converted to the nearest whole MiB. For a description of memory resource units, see the https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory[Kubernetes documentation^].
+<1> Enable memory locking to prevent the operating system from paging out Redpanda's memory to disk. This can significantly improve performance by ensuring Redpanda has uninterrupted access to its allocated memory.
+
+<2> Request at least 2.22 Gi of memory per core (2 Gi ÷ 0.9 ≈ 2.22 Gi) to ensure Redpanda still has the full 2 Gi per core it requires after the 90% allocation to the `--memory` flag.
++
+Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi.
++
+Memory units are truncated to the nearest whole MiB. For example, a memory request of 2000 KiB (about 1.95 MiB) results in 1 MiB being allocated. For a description of memory resource units, see the https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory[Kubernetes documentation^].
+
+<3> Set the memory limit to match the memory request. This ensures Kubernetes enforces a strict upper bound on memory usage and helps maintain the <>.
+
+When the StatefulSet is deployed, make sure that the memory request and limit are set:
+
+[source,bash]
+----
+kubectl --namespace=<namespace> get pod <pod-name> -o jsonpath='{.spec.containers[?(@.name=="redpanda")].resources}{"\n"}'
+----
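+
+For example, with a 10Gi memory request and a matching limit, the output looks similar to the following (the values reflect your own configuration and may also include CPU settings):
+
+[source,json]
+----
+{"limits":{"memory":"10Gi"},"requests":{"memory":"10Gi"}}
+----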
+
+[[cpu]]
+== Configure CPU resources
+
+Redpanda uses the Seastar framework to manage CPU usage through the `--smp` flag, which sets the number of CPU cores available to Redpanda. You configure this through `resources.cpu.cores`, which automatically applies the same value to both `resources.requests.cpu` and `resources.limits.cpu`.
+
+[cols="1a,1a", options="header"]
+|===
+| **Default Configuration** | **Production Recommendation**
+
+|
+`resources.cpu.cores: 1`
+
+Equivalent to setting `resources.requests.cpu` and `resources.limits.cpu` to 1.
+|
+Set `resources.cpu.cores` to `4` or greater.
+
+Set CPU cores to an even integer to leverage the static CPU manager policy, granting the Pod exclusive access to the requested cores for optimal performance.
+|===
+
+[tabs]
+======
+Helm + Operator::
++
+--
+.`redpanda-cluster.yaml`
+[,yaml]
+----
+apiVersion: cluster.redpanda.com/v1alpha2
+kind: Redpanda
+metadata:
+  name: redpanda
+spec:
+  chartRef: {}
+  clusterSpec:
+    resources:
+      cpu:
+        cores: <1>
+----
+
+```bash
+kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
+```
+
+--
+Helm::
++
+--
+[tabs]
+====
+--values::
++
+.`cpu-cores.yaml`
+[,yaml]
+----
+resources:
+  cpu:
+    cores: <1>
+----
++
+```bash
+helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
+  --values cpu-cores.yaml --reuse-values
+```
+
+--set::
++
+```bash
+helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
+  --set resources.cpu.cores= <1>
+```
+====
+--
+======
+
+<1> Set `resources.cpu.cores` to the desired number of CPU cores for Redpanda. This value is applied as both the CPU request and limit, ensuring that the Pod maintains the <> by enforcing a strict upper bound on CPU usage.
+
+When the StatefulSet is deployed, make sure that the CPU request and limit are set:
+
+[source,bash]
+----
+kubectl --namespace <namespace> get pod <pod-name> -o jsonpath='{.spec.containers[?(@.name=="redpanda")].resources}{"\n"}'
+----

 [[qos]]
 == Quality of service and resource guarantees
@@ -131,10 +288,10 @@ spec:
     resources:
       cpu:
         cores:
-      memory:
-        container:
-          min:
-          max:
+      requests:
+        memory:
+      limits:
+        memory: # Matches the request
     statefulset:
       sideCars:
         configWatcher:
@@ -190,10 +347,10 @@ Helm::
 resources:
   cpu:
     cores:
-  memory:
-    container:
-      min:
-      max:
+  requests:
+    memory:
+  limits:
+    memory: # Matches the request
 statefulset:
   sideCars:
     configWatcher:
@@ -241,8 +398,8 @@ helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --crea
 ```bash
 helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
   --set resources.cpu.cores= \
-  --set resources.memory.container.min= \
-  --set resources.memory.container.max= \
+  --set resources.requests.memory= \
+  --set resources.limits.memory= \
   --set statefulset.sideCars.configWatcher.resources.requests.cpu= \
   --set statefulset.sideCars.configWatcher.resources.requests.memory= \
   --set statefulset.sideCars.configWatcher.resources.limits.cpu= \
   --set statefulset.sideCars.configWatcher.resources.limits.memory=
@@ -283,7 +440,9 @@ If you use PersistentVolumes, you can set the storage capacity for each volume.

 If Redpanda runs in a shared environment, where multiple applications run on the same worker node, you can make Redpanda less aggressive in CPU usage by enabling overprovisioning. This adjustment ensures a fairer distribution of CPU time among all processes, improving overall system efficiency at the cost of Redpanda's performance.

-You can enable overprovisioning by either setting the CPU request to a fractional value or setting `overprovisioned` to `true`.
+You can enable overprovisioning by either setting the CPU request to a fractional value less than 1 or enabling the `--overprovisioned` flag.
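+
+For example, this minimal sketch uses the fractional-request approach. The `200m` value is illustrative; per the behavior described above, any value less than one full core enables overprovisioning:
+
+[,yaml]
+----
+resources:
+  cpu:
+    # A fractional value less than 1 makes Redpanda less aggressive in CPU usage
+    cores: 200m
+----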
+
+NOTE: You cannot enable overprovisioning using `resources.cpu.overprovisioned` when both `resources.requests` and `resources.limits` are set. When both of these configurations are set, the `resources.cpu` parameter (including `cores`) is ignored. Instead, use the `statefulset.additionalRedpandaCmdFlags` configuration to enable overprovisioning.

 [tabs]
 ======
@@ -300,10 +459,12 @@ metadata:
 spec:
   chartRef: {}
   clusterSpec:
+    statefulset:
+      additionalRedpandaCmdFlags:
+        - '--overprovisioned'
     resources:
       cpu:
         cores:
-        overprovisioned: true
 ----

 ```bash
@@ -321,10 +482,12 @@ Helm::
 .`cpu-cores-overprovisioned.yaml`
 [,yaml]
 ----
+statefulset:
+  additionalRedpandaCmdFlags:
+    - '--overprovisioned'
 resources:
   cpu:
     cores:
-    overprovisioned: true
 ----
 +
 ```bash
@@ -337,7 +500,7 @@ helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --crea
 ```bash
 helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
   --set resources.cpu.cores= \
-  --set resources.cpu.overprovisioned=true
+  --set statefulset.additionalRedpandaCmdFlags="{--overprovisioned}"
 ```
 ====
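+
+To confirm that the flag reached the broker, you can inspect the command line of the running Redpanda process. This is an illustrative check; the Pod name `redpanda-0` and the assumption that Redpanda runs as PID 1 in the `redpanda` container may differ in your environment:
+
+[source,bash]
+----
+# Print the arguments of the Redpanda process in the first broker Pod
+kubectl --namespace <namespace> exec redpanda-0 -c redpanda -- cat /proc/1/cmdline | tr '\0' ' '
+----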