diff --git a/local-antora-playbook.yml b/local-antora-playbook.yml index e8ae271e5..e0226493e 100644 --- a/local-antora-playbook.yml +++ b/local-antora-playbook.yml @@ -49,6 +49,15 @@ antora: filter: docker-compose env_type: Docker attribute_name: docker-labs-index + - require: '@sntke/antora-mermaid-extension' + mermaid_library_url: https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.esm.min.mjs + script_stem: mermaid-scripts + mermaid_initialize_options: + start_on_load: true + theme: base + theme_variables: + line_color: '#e2401b' + font_family: Inter, sans-serif - require: '@redpanda-data/docs-extensions-and-macros/extensions/collect-bloblang-samples' - require: '@redpanda-data/docs-extensions-and-macros/extensions/generate-rp-connect-categories' - require: '@redpanda-data/docs-extensions-and-macros/extensions/modify-redirects' diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index 12c573b8a..affa9a6ee 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -133,6 +133,7 @@ *** xref:manage:kubernetes/k-remote-read-replicas.adoc[Remote Read Replicas] *** xref:manage:kubernetes/k-manage-resources.adoc[Manage Pod Resources] *** xref:manage:kubernetes/k-scale-redpanda.adoc[Scale] +*** xref:manage:kubernetes/k-nodewatcher.adoc[] *** xref:manage:kubernetes/k-decommission-brokers.adoc[Decommission Brokers] *** xref:manage:kubernetes/k-recovery-mode.adoc[Recovery Mode] *** xref:manage:kubernetes/monitoring/index.adoc[Monitor] diff --git a/modules/manage/pages/cluster-maintenance/decommission-brokers.adoc b/modules/manage/pages/cluster-maintenance/decommission-brokers.adoc index c5522a0f9..c2559a221 100644 --- a/modules/manage/pages/cluster-maintenance/decommission-brokers.adoc +++ b/modules/manage/pages/cluster-maintenance/decommission-brokers.adoc @@ -77,7 +77,7 @@ Rack awareness is just one aspect of availability. Check out xref:deploy:deploym === Cost -Infrastructure costs increase with each broker, so adding a broker means an additional instance to pay for. In this example we deploy to GKE on seven https://gcloud-compute.com/n2-standard-8.html[n2-standard-8^] GCP instances. This means that the instance cost of the cluster is around $1.9K per month. Dropping down to 5 brokers would save over $500 per month, and dropping down to 3 brokers would save around $1100 per month. Of course, there are other costs to consider, but they won't be as impacted by changing the broker count. +Infrastructure costs increase with each broker because each broker requires a dedicated node (instance), so adding a broker means an additional instance cost. For example, if the instance cost is $1925 per month in a cluster with seven brokers, the instance cost for each broker is $275. Reducing the number of brokers from seven to five would save $550 per month ($275 x 2), and reducing it to three brokers would save $1100 per month. You must also consider other costs, but they won't be as impacted by changing the broker count. === Data retention diff --git a/modules/manage/pages/kubernetes/k-decommission-brokers.adoc b/modules/manage/pages/kubernetes/k-decommission-brokers.adoc index bf2d62d04..b6db66a34 100644 --- a/modules/manage/pages/kubernetes/k-decommission-brokers.adoc +++ b/modules/manage/pages/kubernetes/k-decommission-brokers.adoc @@ -1,16 +1,18 @@ = Decommission Brokers in Kubernetes -:description: Remove a broker so that it is no longer considered part of the cluster. +:description: Remove a Redpanda broker from the cluster without risking data loss or causing instability. 
:page-context-links: [{"name": "Linux", "to": "manage:cluster-maintenance/decommission-brokers.adoc" },{"name": "Kubernetes", "to": "manage:kubernetes/k-decommission-brokers.adoc" } ] :tags: ["Kubernetes"] :page-aliases: manage:kubernetes/decommission-brokers.adoc :page-categories: Management :env-kubernetes: true -When you decommission a broker, its partition replicas are reallocated across the remaining brokers and it is removed from the cluster. You may want to decommission a broker in the following circumstances: +Decommissioning a broker is the *safe and controlled* way to remove a Redpanda broker from the cluster without risking data loss or causing instability. By decommissioning, you ensure that partition replicas are reallocated across the remaining brokers so that you can then safely shut down the broker. + +You may want to decommission a broker in the following situations: * You are removing a broker to decrease the size of the cluster, also known as scaling down. * The broker has lost its storage and you need a new broker with a new node ID (broker ID). -* You are replacing a worker node, for example to upgrade the Kubernetes cluster or to replace the hardware. +* You are replacing a worker node, for example, by upgrading the Kubernetes cluster or replacing the hardware. NOTE: When a broker is decommissioned, it cannot rejoin the cluster. If a broker with the same ID tries to rejoin the cluster, it is rejected. @@ -18,7 +20,7 @@ NOTE: When a broker is decommissioned, it cannot rejoin the cluster. If a broker You must have the following: -* Kubernetes cluster: Ensure you have a running Kubernetes cluster, either locally, such as with minikube or kind, or remotely. +* Kubernetes cluster: Ensure you have a running Kubernetes cluster, either locally, with minikube or kind, or remotely. * https://kubernetes.io/docs/tasks/tools/#kubectl[Kubectl^]: Ensure you have the `kubectl` command-line tool installed and configured to communicate with your cluster. @@ -30,7 +32,7 @@ When a broker is decommissioned, the controller leader creates a reallocation pl The reallocation of each partition is translated into a Raft group reconfiguration and executed by the controller leader. The partition leader then handles the reconfiguration for its Raft group. After the reallocation for a partition is complete, it is recorded in the controller log and the status is updated in the topic tables of each broker. -The decommissioning process is successful only when all partition reallocations have been completed successfully. The controller leader polls for the status of all the partition-level reallocations to ensure that everything completes as expected. +The decommissioning process is successful only when all partition reallocations have been completed. The controller leader polls for the status of all the partition-level reallocations to ensure that everything completes as expected. During the decommissioning process, new partitions are not allocated to the broker that is being decommissioned. After all the reallocations have been completed successfully, the broker is removed from the cluster. @@ -77,17 +79,17 @@ ID HOST PORT RACK ``` ==== -The output shows four racks (A/B/C/D), so you might want to have at least four brokers to make use of all racks. +The output shows four racks (A/B/C/D), so you might want to have at least four brokers to use all racks. Rack awareness is just one aspect of availability. 
Refer to xref:deploy:deployment-option/self-hosted/kubernetes/k-high-availability.adoc[High Availability] for more details on deploying Redpanda for high availability. === Cost -Infrastructure costs increase with each broker, so adding a broker means an additional instance cost. For example, if you deploy Redpanda on GKE on https://gcloud-compute.com/n2-standard-8.html[n2-standard-8^] GCP instances, the instance cost of the cluster is $1925 per month. Reducing the number of brokers to five would save $550 per month, and reducing it further to three brokers would save $1100 per month. You must also consider other costs, but they won't be as impacted by changing the broker count. +Infrastructure costs increase with each broker because each broker requires a dedicated node (instance), so adding a broker means an additional instance cost. For example, if the instance cost is $1925 per month in a cluster with seven brokers, the instance cost for each broker is $275. Reducing the number of brokers from seven to five would save $550 per month ($275 x 2), and reducing it to three brokers would save $1100 per month. You must also consider other costs, but they won't be as impacted by changing the broker count. === Data retention -Local data retention is determined by the storage capability of each broker and producer throughput, which is the amount of data being produced over a given period. When decommissioning, storage capability must take into account both the free storage space and the amount of space already in use by existing partitions. +Local data retention is determined by the storage capability of each broker and producer throughput, which is the amount of data produced over a given period. When decommissioning, storage capability must consider both the free storage space and the amount of space already in use by existing partitions. Run the following command to determine how much storage is being used, in bytes, on each broker: @@ -113,9 +115,9 @@ BROKER SIZE ERROR This example shows that each broker has roughly 240GB of data. This means scaling in to five brokers would require each broker to have at least 337GB to store that same data. -Keep in mind that actual space used on disk will be greater than the data size reported by Redpanda. Redpanda reserves some data on disk per partition, and reserves less space per partition as available disk space decreases. Incoming data for each partition is then written to disk in the form of segments (files). The time when segments are written to disk is based on a number of factors, including the topic's segment configuration, broker restarts, and changes in Raft leadership. +Keep in mind that the actual space used on disk will be greater than the data size reported by Redpanda. Redpanda reserves some data on disk per partition and reserves less space per partition as available disk space decreases. Incoming data for each partition is then written to disk as segments (files). The time when segments are written to disk is based on a number of factors, including the topic's segment configuration, broker restarts, and changes in Raft leadership. -Throughput is the primary measurement required to calculate future data storage requirements. For example, if throughput is at 200MB/sec, the application will generate 0.72TB/hour (or 17.28TB/day, or 120.96TB/wk). 
Divide this amount by the target number of brokers to get an estimate of how much storage is needed to retain that much data for various periods of time: +Throughput is the primary measurement required to calculate future data storage requirements. For example, if the throughput is 200MB/sec, the application will generate 0.72TB/hour (17.28TB/day, or 120.96TB/wk). Divide this amount by the target number of brokers to get an estimate of how much storage is needed to retain that much data for various periods of time: |=== | Retention | Disk size (on each of the 5 brokers) @@ -159,13 +161,13 @@ Example output: 5 ---- -In this example the highest replication factor is five, which means at least five brokers are required in this cluster. +In this example, the highest replication factor is five, which means that at least five brokers are required in this cluster. Generally, a cluster can withstand a higher number of brokers going down if more brokers exist in the cluster. For details, see xref:get-started:architecture.adoc#raft-consensus-algorithm[Raft consensus algorithm]. === Partition count -It is best practice to make sure the total partition count does not exceed 1K per core. This maximum partition count depends on many other factors, such as memory per core, CPU performance, throughput, and latency requirements. Exceeding 1K partitions per core can lead to increased latency, increased number of partition leadership elections, and general reduced stability. +It is best practice to make sure the total partition count does not exceed 1K per core. This maximum partition count depends on many other factors, such as memory per core, CPU performance, throughput, and latency requirements. Exceeding 1K partitions per core can lead to increased latency, an increased number of partition leadership elections, and generally reduced stability. Run the following command to get the total partition count for your cluster: @@ -222,15 +224,208 @@ So the primary limitation consideration is the replication factor of five, meani To decommission a broker, you can use one of the following methods: -- <>: Use the Decommission controller to automatically decommission brokers whenever you reduce the number of StatefulSet replicas. - <>: Use `rpk` to decommission one broker at a time. +- <>: Use the Decommission controller to automatically decommission brokers whenever you reduce the number of StatefulSet replicas. + +[[Manual]] +=== Manually decommission a broker + +Follow this workflow to manually decommission a broker before reducing the number of StatefulSet replicas: + +[mermaid] +.... +flowchart TB + %% Define classes + classDef userAction stroke:#374D7C, fill:#E2EBFF, font-weight:bold,rx:5,ry:5 + + A[Start Manual Scale-In]:::userAction --> B["Identify broker to R=remove
(highest Pod ordinal)"]:::userAction + B --> C[Decommission broker running on Pod with highest ordinal]:::userAction + C --> D[Monitor decommission status]:::userAction + D --> E{Is broker removed?}:::userAction + E -- No --> D + E -- Yes --> F[Decrease StatefulSet replicas by 1]:::userAction + F --> G[Wait for rolling update and cluster health]:::userAction + G --> H{More brokers to remove?}:::userAction + H -- Yes --> B + H -- No --> I[Done]:::userAction +.... + +. List your brokers and their associated broker IDs: ++ +```bash +kubectl --namespace exec -ti redpanda-0 -c redpanda -- \ + rpk cluster info +``` ++ +.Example output +[%collapsible] +==== +``` +CLUSTER +======= +redpanda.560e2403-3fd6-448c-b720-7b456d0aa78c + +BROKERS +======= +ID HOST PORT RACK +0 redpanda-0.testcluster.local 32180 A +1 redpanda-1.testcluster.local 32180 A +4 redpanda-3.testcluster.local 32180 B +5* redpanda-2.testcluster.local 32180 B +6 redpanda-4.testcluster.local 32180 C +8 redpanda-6.testcluster.local 32180 C +9 redpanda-5.testcluster.local 32180 D +``` +==== ++ +The output shows that the IDs don't match the StatefulSet ordinal, which appears in the hostname. In this example, two brokers will be decommissioned: `redpanda-6` (ID 8) and `redpanda-5` (ID 9). ++ +NOTE: When scaling in a cluster, you cannot choose which broker is removed. Redpanda is deployed as a StatefulSet in Kubernetes. The StatefulSet controls which Pods are destroyed and always starts with the Pod that has the highest ordinal. So the first broker to be removed when updating the StatefulSet in this example is `redpanda-6` (ID 8). + +. Decommission the broker with the highest Pod ordinal: ++ +```bash +kubectl --namespace exec -ti -c -- \ + rpk redpanda admin brokers decommission +``` ++ +This message is displayed before the decommission process is complete. ++ +```bash +Success, broker has been decommissioned! +``` ++ +TIP: If the broker is not running, use the `--force` flag. + +. Monitor the decommissioning status: ++ +```bash +kubectl --namespace exec -ti -c -- \ + rpk redpanda admin brokers decommission-status +``` ++ +The output uses cached cluster health data that is refreshed every 10 seconds. When the completion column for all rows is 100%, the broker is decommissioned. ++ +Another way to verify decommission is complete is by running the following command: ++ +```bash +kubectl --namespace exec -ti -c -- \ + rpk cluster health +``` ++ +Be sure to verify that the decommissioned broker's ID does not appear in the list of IDs. In this example, ID 9 is missing, which means the decommission is complete. ++ +``` +CLUSTER HEALTH OVERVIEW +======================= +Healthy: true +Controller ID: 0 +All nodes: [4 1 0 5 6 8] +Nodes down: [] +Leaderless partitions: [] +Under-replicated partitions: [] +``` + +. Decrease the number of replicas *by one* to remove the Pod with the highest ordinal (the one you just decommissioned). ++ +:caution-caption: Reduce replicas by one +[CAUTION] +==== +When scaling in (removing brokers), remove only one broker at a time. If you reduce the StatefulSet replicas by more than one, Kubernetes can terminate multiple Pods simultaneously, causing quorum loss and cluster unavailability. 
+==== +:caution-caption: Caution ++ +[tabs] +====== +Helm + Operator:: ++ +-- +.`redpanda-cluster.yaml` +[,yaml] +---- +apiVersion: cluster.redpanda.com/v1alpha2 +kind: Redpanda +metadata: + name: redpanda +spec: + chartRef: {} + clusterSpec: + statefulset: + replicas: +---- + +Apply the Redpanda resource: + +```bash +kubectl apply -f redpanda-cluster.yaml --namespace +``` + +-- +Helm:: ++ +-- + +[tabs] +==== +--values:: ++ +.`decommission.yaml` +[,yaml] +---- +statefulset: + replicas: +---- + +--set:: ++ +[,bash] +---- +helm upgrade redpanda redpanda/redpanda --namespace --wait --reuse-values --set statefulset.replicas= +---- +==== +-- +====== ++ +This process triggers a rolling restart of each Pod so that each broker has an up-to-date `seed_servers` configuration to reflect the new list of brokers. -This example shows how to scale a cluster from seven brokers to five brokers. +You can repeat this procedure to continue to scale down. [[Automated]] === Use the Decommission controller -The Decommission controller is responsible for monitoring the StatefulSet for changes in the number replicas. When the number of replicas is reduced, the controller decommissions brokers, starting from the highest Pod ordinal, until the number of brokers matches the number of replicas. For example, you have a Redpanda cluster with the following brokers: +The Decommission controller is responsible for monitoring the StatefulSet for changes in the number replicas. When the number of replicas is reduced, the controller decommissions brokers, starting from the highest Pod ordinal, until the number of brokers matches the number of replicas. + +[mermaid] +.... +flowchart TB + %% Define classes + classDef userAction stroke:#374D7C, fill:#E2EBFF, font-weight:bold,rx:5,ry:5 + classDef systemAction fill:#F6FBF6,stroke:#25855a,stroke-width:2px,color:#20293c,rx:5,ry:5 + + %% Legend + subgraph Legend + direction TB + UA([User action]):::userAction + SE([System event]):::systemAction + end + Legend ~~~ Workflow + + %% Main workflow + subgraph Workflow + direction TB + A[Start automated scale-in]:::userAction --> B[Decrease StatefulSet
replicas by 1]:::userAction + B --> C[Decommission controller
detects reduced replicas]:::systemAction
+    C --> D[Controller marks
highest ordinal Pod for removal]:::systemAction
+    D --> E[Controller orchestrates
broker decommission]:::systemAction
+    E --> F[Partitions reallocate
under controller supervision]:::systemAction
+    F --> G[Check cluster health]:::systemAction
+    G --> H{Broker fully removed?}:::systemAction
+    H -- No --> F
+    H -- Yes --> I[Done,
or repeat if further scale-in needed]:::userAction + end +.... + +For example, you have a Redpanda cluster with the following brokers: [.no-copy] ---- @@ -402,7 +597,14 @@ helm upgrade --install redpanda redpanda/redpanda \ kubectl exec redpanda-0 --namespace -- rpk cluster health ``` -. Decrease the number of replicas by one: +. Decrease the number of replicas *by one*. ++ +:caution-caption: Reduce replicas by one +[CAUTION] +==== +When scaling in (removing brokers), remove only one broker at a time. If you reduce the StatefulSet replicas by more than one, Kubernetes can terminate multiple Pods simultaneously, causing quorum loss and cluster unavailability. +==== +:caution-caption: Caution + [tabs] ====== @@ -493,104 +695,7 @@ If you're running the Decommission controller as a sidecar: kubectl logs --namespace -c redpanda-controllers ---- -You can repeat this procedure to scale down to 5 brokers. - -[[Manual]] -=== Manually decommission a broker - -If you don't want to use the <>, follow these steps to manually decommission a broker before reducing the number of StatefulSet replicas: - -. List your brokers and their associated broker IDs: -+ -```bash -kubectl --namespace exec -ti redpanda-0 -c redpanda -- \ - rpk cluster info -``` -+ -.Example output -[%collapsible] -==== -``` -CLUSTER -======= -redpanda.560e2403-3fd6-448c-b720-7b456d0aa78c - -BROKERS -======= -ID HOST PORT RACK -0 redpanda-0.testcluster.local 32180 A -1 redpanda-1.testcluster.local 32180 A -4 redpanda-3.testcluster.local 32180 B -5* redpanda-2.testcluster.local 32180 B -6 redpanda-4.testcluster.local 32180 C -8 redpanda-6.testcluster.local 32180 C -9 redpanda-5.testcluster.local 32180 D -``` -==== -+ -The output shows that the IDs don't match the StatefulSet ordinal, which appears in the hostname. In this example, two brokers will be decommissioned: `redpanda-6` (ID 8) and `redpanda-5` (ID 9). -+ -NOTE: When scaling in a cluster, you cannot choose which broker is decommissioned. Redpanda is deployed as a StatefulSet in Kubernetes. The StatefulSet controls which Pods are destroyed and always starts with the Pod that has the highest ordinal. So the first broker to be destroyed when updating the StatefulSet in this example is `redpanda-6` (ID 8). - -. Decommission the broker with your selected broker ID: -+ -```bash -kubectl --namespace exec -ti -c -- \ - rpk redpanda admin brokers decommission -``` -+ -This message is displayed before the decommission process is complete. -+ -``` -Success, broker has been decommissioned! -``` -+ -TIP: If the broker is not running, use the `--force` flag. - -. Monitor the decommissioning status: -+ -```bash -kubectl --namespace exec -ti -c -- \ - rpk redpanda admin brokers decommission-status -``` -+ -The output uses cached cluster health data that is refreshed every 10 seconds. When the completion column for all rows is 100%, the broker is decommissioned. -+ -Another way to verify decommission is complete is by running the following command: -+ -```bash -kubectl --namespace exec -ti -c -- \ - rpk cluster health -``` -+ -Be sure to verify that the decommissioned broker's ID does not appear in the list of IDs. In this example, ID 9 is missing, which means the decommission is complete. -+ -``` -CLUSTER HEALTH OVERVIEW -======================= -Healthy: true -Controller ID: 0 -All nodes: [4 1 0 5 6 8] -Nodes down: [] -Leaderless partitions: [] -Under-replicated partitions: [] -``` - -. Decommission any other brokers. 
-+ -After decommissioning one broker and verifying that the process is complete, continue decommissioning another broker by repeating the previous two steps. -+ -NOTE: Be sure to take into account everything in <>, and that you have verified that your cluster and use cases will not be negatively impacted by losing brokers. - -. Update the StatefulSet replica value. -+ -The last step is to update the StatefulSet replica value to reflect the new broker count. In this example the count was updated to five. If you deployed with the Helm chart, then run following command: -+ -```bash -helm upgrade redpanda redpanda/redpanda --namespace --wait --reuse-values --set statefulset.replicas=5 -``` -+ -This process triggers a rolling restart of each Pod so that each broker has an up-to-date `seed_servers` configuration to reflect the new list of brokers. +You can repeat this procedure to continue to scale down. == Troubleshooting diff --git a/modules/manage/pages/kubernetes/k-nodewatcher.adoc b/modules/manage/pages/kubernetes/k-nodewatcher.adoc new file mode 100644 index 000000000..d146b90c0 --- /dev/null +++ b/modules/manage/pages/kubernetes/k-nodewatcher.adoc @@ -0,0 +1,207 @@ += Install the Nodewatcher Controller +:page-categories: Management +:env-kubernetes: true +:description: pass:q[The Nodewatcher controller is an emergency backstop for Redpanda clusters that use PersistentVolumes (PVs) for the Redpanda data directory. When a node running a Redpanda Pod suddenly goes offline, Nodewatcher detects the lost node, retains the associated PV, and removes the corresponding PersistentVolumeClaim (PVC). This workflow allows the Redpanda Pod to be rescheduled on a new node without losing critical data.] + +{description} + +:warning-caption: Emergency use only + +[WARNING] +==== +The Nodewatcher controller is intended only for emergency scenarios (for example, node hardware or infrastructure failures). *Never use the Nodewatcher controller as a routine method for removing brokers.* If you want to remove brokers, see xref:manage:kubernetes/k-decommission-brokers.adoc[Decommission brokers] for the correct procedure. +==== + +:warning-caption: Warning + +== Why use Nodewatcher? + +If a worker node hosting a Redpanda Pod suddenly fails or disappears, Kubernetes might leave the associated PV and PVC in an _attached_ or _in-use_ state. Without Nodewatcher (or manual intervention), the Redpanda Pod cannot safely reschedule to another node because the volume is still recognized as occupied. Also, the default reclaim policy might delete the volume, risking data loss. Nodewatcher automates the steps needed to retain the volume and remove the stale PVC, so Redpanda Pods can move to healthy nodes without losing the data in the original PV. + +== How Nodewatcher works + +When the controller detects events that indicate a Node resource is no longer available, it does the following: + +- For each Redpanda Pod on that Node, it identifies the PVC (if any) the Pod was using for its storage. +- It sets the reclaim policy of the affected PersistentVolume (PV) to `Retain`. +- It deletes the associated PersistentVolumeClaim (PVC) to allow the Redpanda broker Pod to reschedule onto a new, operational node. + +[mermaid] +.... +flowchart TB + %% Define classes + classDef systemAction fill:#F6FBF6,stroke:#25855a,stroke-width:2px,color:#20293c,rx:5,ry:5 + + A[Node fails] --> B{Is Node
running Redpanda?}:::systemAction + B -- Yes --> C[Identify Redpanda Pod PVC]:::systemAction + C --> D[Set PV reclaim policy to 'Retain']:::systemAction + D --> E[Delete PVC]:::systemAction + E --> F[Redpanda Pod
is rescheduled]:::systemAction + B -- No --> G[Ignore event]:::systemAction +.... + + +== Install Nodewatcher + +[tabs] +====== +Helm + Operator:: ++ +-- + +You can install the Nodewatcher controller as part of the Redpanda Operator or as a sidecar on each Pod that runs a Redpanda broker. When you install the controller as part of the Redpanda Operator, the controller monitors all Redpanda clusters running in the same namespace as the Redpanda Operator. If you want the controller to manage only a single Redpanda cluster, install it as a sidecar on each Pod that runs a Redpanda broker, using the Redpanda resource. + +To install the Nodewatcher controller as part of the Redpanda Operator: + +. Deploy the Redpanda Operator with the Nodewatcher controller: ++ +[,bash,subs="attributes+",lines=7+8] +---- +helm repo add redpanda https://charts.redpanda.com +helm repo update +helm upgrade --install redpanda-controller redpanda/operator \ + --namespace \ + --set image.tag={latest-operator-version} \ + --create-namespace \ + --set additionalCmdFlags={--additional-controllers="nodeWatcher"} \ + --set rbac.createAdditionalControllerCRs=true +---- ++ +- `--additional-controllers="nodeWatcher"`: Enables the Nodewatcher controller. +- `--rbac.createAdditionalControllerCRs=true`: Creates the required RBAC rules for the Redpanda Operator to monitor the Node resources and update PVCs and PVs. + +. Deploy a Redpanda resource: ++ +.`redpanda-cluster.yaml` +[,yaml] +---- +apiVersion: cluster.redpanda.com/v1alpha2 +kind: Redpanda +metadata: + name: redpanda +spec: + chartRef: {} + clusterSpec: {} +---- ++ +```bash +kubectl apply -f redpanda-cluster.yaml --namespace +``` + +To install the Decommission controller as a sidecar: + +.`redpanda-cluster.yaml` +[,yaml,lines=11+13+15] +---- +apiVersion: cluster.redpanda.com/v1alpha2 +kind: Redpanda +metadata: + name: redpanda +spec: + chartRef: {} + clusterSpec: + statefulset: + sideCars: + controllers: + enabled: true + run: + - "nodeWatcher" + rbac: + enabled: true +---- + +- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar. +- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller. +- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs. + +-- +Helm:: ++ +-- +[tabs] +==== +--values:: ++ +.`decommission-controller.yaml` +[,yaml,lines=4+6+8] +---- +statefulset: + sideCars: + controllers: + enabled: true + run: + - "nodeWatcher" +rbac: + enabled: true +---- ++ +- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar. +- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller. +- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs. + +--set:: ++ +[,bash,lines=4-6] +---- +helm upgrade --install redpanda redpanda/redpanda \ + --namespace \ + --create-namespace \ + --set statefulset.sideCars.controllers.enabled=true \ + --set statefulset.sideCars.controllers.run={"nodeWatcher"} \ + --set rbac.enabled=true +---- ++ +- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar. +- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller. +- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs. + +==== +-- +====== + +== Test the Nodewatcher controller + +. 
Test the Nodewatcher controller by deleting a Node resource: ++ +[,bash] +---- +kubectl delete node +---- ++ +NOTE: This step is for testing purposes only. + +. Monitor the logs of the Nodewatcher controller: ++ +-- +- If you're running the Nodewatcher controller as part of the Redpanda Operator: ++ +[,bash] +---- +kubectl logs -l app.kubernetes.io/name=operator -c manager --namespace +---- + +- If you're running the Nodewatcher controller as a sidecar: ++ +[,bash] +---- +kubectl logs --namespace -c redpanda-controllers +---- +-- ++ +You should see that the controller successfully deleted the PVC of the Pod that was running on the deleted Node resource. ++ +[,bash] +---- +kubectl get persistentvolumeclaim --namespace +---- + +. Verify that the reclaim policy of the PV is set to `Retain` to allow you to recover the node, if necessary: ++ +[,bash] +---- +kubectl get persistentvolume --namespace +---- + +After the Nodewatcher controller has finished, xref:manage:kubernetes/k-decommission-brokers.adoc[decommission the broker] that was removed from the node. This is necessary to prevent a potential loss of quorum and ensure cluster stability. + +NOTE: Make sure to use the `--force` flag when decommissioning the broker with xref:reference:rpk/rpk-redpanda/rpk-redpanda-admin-brokers-decommission.adoc[`rpk redpanda admin brokers decommission`]. This flag is required when the broker is no longer running. diff --git a/modules/manage/pages/kubernetes/k-scale-redpanda.adoc b/modules/manage/pages/kubernetes/k-scale-redpanda.adoc index 28df81151..608dc9a9b 100644 --- a/modules/manage/pages/kubernetes/k-scale-redpanda.adoc +++ b/modules/manage/pages/kubernetes/k-scale-redpanda.adoc @@ -21,13 +21,26 @@ If your existing worker nodes have either too many resources or not enough resou - Deleting the Pod's PersistentVolumeClaim (PVC). - Ensuring that the PersistentVolume's (PV) reclaim policy is set to `Retain` to make sure that you can roll back to the original worker node without losing data. -As an emergency backstop, the <> can automate the deletion of PVCs and set the reclaim policy of PVs to `Retain`. +TIP: For emergency scenarios in which a node unexpectedly fails or is decommissioned without warning, the Nodewatcher controller can help protect your Redpanda data. For details, see xref:manage:kubernetes/k-nodewatcher.adoc[]. == Horizontal scaling Horizontal scaling involves modifying the number of brokers in your cluster, either by adding new ones (scaling out) or removing existing ones (scaling in). In situations where the workload is variable, horizontal scaling allows for flexibility. You can scale out when demand is high and scale in when demand is low, optimizing resource usage and cost. -CAUTION: Redpanda does not support Kubernetes autoscalers. Autoscalers rely on CPU and memory metrics for scaling decisions, which do not fully capture the complexities involved in scaling Redpanda clusters. Improper scaling can lead to operational challenges. Always manually scale your Redpanda clusters as described in this topic. +:caution-caption: Do not use autoscalers + +CAUTION: Redpanda does not support Kubernetes autoscalers. Autoscalers rely on CPU and memory metrics for scaling decisions, which do not fully capture the complexities involved in scaling Redpanda clusters. Always manually scale your Redpanda clusters as described in this topic. 
+ +:caution-caption: Caution + +While you should not rely on Kubernetes autoscalers to scale your Redpanda brokers, you can prevent infrastructure-level autoscalers like Karpenter from terminating nodes that run Redpanda Pods. For example, you can set the xref:reference:k-redpanda-helm-spec.adoc#statefulset-podtemplate-annotations[`statefulset.podTemplate.annotations`] field in the Redpanda Helm values, or the xref:reference:k-crd.adoc#k8s-api-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-podtemplate[`statefulset.podTemplate.annotations`] field in the Redpanda custom resource to include: + +[,yaml] +---- +karpenter.sh/do-not-disrupt: "true" +---- + +This annotation tells Karpenter not to disrupt the node on which the annotated Pod is running. This can help protect Redpanda brokers from unexpected shutdowns in environments that use Karpenter to manage infrastructure nodes. === Scale out @@ -119,184 +132,7 @@ kubectl exec redpanda-0 --namespace -- rpk cluster health Scaling in is the process of removing brokers from your Redpanda cluster. You may want to remove brokers for cost reduction and resource optimization. -To scale in a Redpanda cluster, you must decommission the brokers that you want to remove before updating the `statefulset.replica` setting in the Helm values. See xref:manage:kubernetes/k-decommission-brokers.adoc[]. - -[[node-pvc]] -== Install the Nodewatcher controller - -The Nodewatcher controller maintains cluster operation during node failures by managing the lifecycle of PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) for Redpanda clusters. When the controller detects that a Node resource is not available, it sets the reclaim policy of the PV to `Retain`, helping to prevent data loss. Concurrently, it orchestrates the deletion of the PVC, which allows the Redpanda broker that was previously running on the deleted worker node to be rescheduled onto new, operational nodes. - -[WARNING] -==== -The Nodewatcher controller is an emergency backstop to keep your Redpanda cluster running in case of unexpected node failures. *Never use this controller as a routine method for removing brokers.* - -Using the Nodewatcher controller as a routine method for removing brokers can lead to unintended consequences, such as increased risk of data loss and inconsistent cluster states. The Nodewatcher is designed for emergency scenarios and not for managing the regular scaling, decommissioning, and rebalancing of brokers. - -To safely scale in your Redpanda cluster, always use the xref:manage:kubernetes/k-decommission-brokers.adoc[decommission process], which ensures that brokers are removed in a controlled manner, with data properly redistributed across the remaining nodes, maintaining cluster health and data integrity. -==== - -. Install the Nodewatcher controller: -+ -[tabs] -====== -Helm + Operator:: -+ --- - -You can install the Nodewatcher controller as part of the Redpanda Operator or as a sidecar on each Pod that runs a Redpanda broker. When you install the controller as part of the Redpanda Operator, the controller monitors all Redpanda clusters running in the same namespace as the Redpanda Operator. If you want the controller to manage only a single Redpanda cluster, install it as a sidecar on each Pod that runs a Redpanda broker, using the Redpanda resource. - -To install the Nodewatcher controller as part of the Redpanda Operator: - -.. 
Deploy the Redpanda Operator with the Nodewatcher controller: -+ -[,bash,subs="attributes+",lines=7+8] ----- -helm repo add redpanda https://charts.redpanda.com -helm repo update -helm upgrade --install redpanda-controller redpanda/operator \ - --namespace \ - --set image.tag={latest-operator-version} \ - --create-namespace \ - --set additionalCmdFlags={--additional-controllers="nodeWatcher"} \ - --set rbac.createAdditionalControllerCRs=true ----- -+ -- `--additional-controllers="nodeWatcher"`: Enables the Nodewatcher controller. -- `rbac.createAdditionalControllerCRs=true`: Creates the required RBAC rules for the Redpanda Operator to monitor the Node resources and update PVCs and PVs. - -.. Deploy a Redpanda resource: -+ -.`redpanda-cluster.yaml` -[,yaml] ----- -apiVersion: cluster.redpanda.com/v1alpha2 -kind: Redpanda -metadata: - name: redpanda -spec: - chartRef: {} - clusterSpec: {} ----- -+ -```bash -kubectl apply -f redpanda-cluster.yaml --namespace -``` - -To install the Decommission controller as a sidecar: - -.`redpanda-cluster.yaml` -[,yaml,lines=11+13+15] ----- -apiVersion: cluster.redpanda.com/v1alpha2 -kind: Redpanda -metadata: - name: redpanda -spec: - chartRef: {} - clusterSpec: - statefulset: - sideCars: - controllers: - enabled: true - run: - - "nodeWatcher" - rbac: - enabled: true ----- - -- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar. -- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller. -- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs. - --- -Helm:: -+ --- -[tabs] -==== ---values:: -+ -.`decommission-controller.yaml` -[,yaml,lines=4+6+8] ----- -statefulset: - sideCars: - controllers: - enabled: true - run: - - "nodeWatcher" -rbac: - enabled: true ----- -+ -- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar. -- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller. -- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs. - ---set:: -+ -[,bash,lines=4-6] ----- -helm upgrade --install redpanda redpanda/redpanda \ - --namespace \ - --create-namespace \ - --set statefulset.sideCars.controllers.enabled=true \ - --set statefulset.sideCars.controllers.run={"nodeWatcher"} \ - --set rbac.enabled=true ----- -+ -- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar. -- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller. -- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs. - -==== --- -====== - -. Test the Nodewatcher controller by deleting a Node resource: -+ -[,bash] ----- -kubectl delete node ----- -+ -NOTE: This step is for testing purposes only. - -. Monitor the logs of the Nodewatcher controller: -+ --- -- If you're running the Nodewatcher controller as part of the Redpanda Operator: -+ -[,bash] ----- -kubectl logs -l app.kubernetes.io/name=operator -c manager --namespace ----- - -- If you're running the Nodewatcher controller as a sidecar: -+ -[,bash] ----- -kubectl logs --namespace -c redpanda-controllers ----- --- -+ -You should see that the controller successfully deleted the PVC of the Pod that was running on the deleted Node resource. -+ -[,bash] ----- -kubectl get persistentvolumeclaim --namespace ----- - -. 
Verify that the reclaim policy of the PV is set to `Retain` to allow you to recover the node, if necessary: -+ -[,bash] ----- -kubectl get persistentvolume --namespace ----- - -After the Nodewatcher controller has finished, xref:manage:kubernetes/k-decommission-brokers.adoc[decommission the broker] that was removed from the node. This is necessary to prevent a potential loss of quorum and ensure cluster stability. - -NOTE: Make sure to use the `--force` flag when decommissioning the broker with xref:reference:rpk/rpk-redpanda/rpk-redpanda-admin-brokers-decommission.adoc[`rpk redpanda admin brokers decommission`]. This flag is required when the broker is no longer running. +To scale in a Redpanda cluster, follow the xref:manage:kubernetes/k-decommission-brokers.adoc[instructions for decommissioning brokers in Kubernetes] to safely remove brokers from the Redpanda cluster. diff --git a/package-lock.json b/package-lock.json index 5676f9464..559466d6a 100644 --- a/package-lock.json +++ b/package-lock.json @@ -12,7 +12,8 @@ "@antora/cli": "3.1.2", "@antora/site-generator": "3.1.2", "@asciidoctor/tabs": "^1.0.0-beta.5", - "@redpanda-data/docs-extensions-and-macros": "^3.0.0" + "@redpanda-data/docs-extensions-and-macros": "^3.0.0", + "@sntke/antora-mermaid-extension": "^0.0.6" }, "devDependencies": { "@octokit/core": "^6.1.2", @@ -2890,6 +2891,12 @@ "lilconfig": ">=2" } }, + "node_modules/@sntke/antora-mermaid-extension": { + "version": "0.0.6", + "resolved": "https://registry.npmjs.org/@sntke/antora-mermaid-extension/-/antora-mermaid-extension-0.0.6.tgz", + "integrity": "sha512-c4L+JsJYQYq/R73h5yRdBBR1jVkVdhIm6yhRy1Y009IpvvYAQor3TIxwaFXnPNR2NyfSlXUpXHelkEHddmJMOw==", + "license": "MIT" + }, "node_modules/@szmarczak/http-timer": { "version": "5.0.1", "resolved": "https://registry.npmjs.org/@szmarczak/http-timer/-/http-timer-5.0.1.tgz", diff --git a/package.json b/package.json index bc5c23a53..c747eebdb 100644 --- a/package.json +++ b/package.json @@ -15,12 +15,13 @@ "@antora/cli": "3.1.2", "@antora/site-generator": "3.1.2", "@asciidoctor/tabs": "^1.0.0-beta.5", - "@redpanda-data/docs-extensions-and-macros": "^3.0.0" + "@redpanda-data/docs-extensions-and-macros": "^3.0.0", + "@sntke/antora-mermaid-extension": "^0.0.6" }, "devDependencies": { - "@octokit/rest": "^21.0.1", "@octokit/core": "^6.1.2", "@octokit/plugin-retry": "^7.1.1", + "@octokit/rest": "^21.0.1", "@web/dev-server": "^0.2.1", "cross-env": "^7.0.3", "doc-detective": "^2.17.0",