From 9f8d4a1b31a3090225bb1e57eefd0198e1865780 Mon Sep 17 00:00:00 2001
From: rickbrouwer
Date: Tue, 8 Oct 2024 19:38:44 +0200
Subject: [PATCH 1/2] Clarify pollingInterval

Signed-off-by: rickbrouwer
---
 content/docs/2.16/reference/scaledobject-spec.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/content/docs/2.16/reference/scaledobject-spec.md b/content/docs/2.16/reference/scaledobject-spec.md
index 124d02062..71de5c0b1 100644
--- a/content/docs/2.16/reference/scaledobject-spec.md
+++ b/content/docs/2.16/reference/scaledobject-spec.md
@@ -70,11 +70,15 @@ To scale Kubernetes Deployments only `name` need be specified. To scale a differ
 ```yaml
 pollingInterval: 30 # Optional. Default: 30 seconds
 ```
-
 This is the interval to check each trigger on. By default, KEDA will check each trigger source on every ScaledObject every 30 seconds.
 
-**Example:** in a queue scenario, KEDA will check the queueLength every `pollingInterval`, and scale the resource up or down accordingly.
+When scaling from 0 to 1, the polling interval is controlled by KEDA. For example, if this parameter is set to `60`, KEDA will poll for a metric value every 60 seconds while the number of replicas is 0.
+
+While scaling from 1 to N, on top of KEDA, the HPA will also poll regularly for metrics, based on the [`--horizontal-pod-autoscaler-sync-period`](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#options) parameter to the `kube-controller-manager`, which by default is 15 seconds. For example, if the `kube-controller-manager` was started with `--horizontal-pod-autoscaler-sync-period=30`, the HPA will poll for a metric value every 30 seconds while the number of replicas is between 1 and N.
+
+If you want to respect the polling interval during 1 to N scaling as well, the [`caching metrics`](../concepts/scaling-deployments/#caching-metrics) feature enables caching of metric values during the polling interval.
+**Example:** in a queue scenario, KEDA will check the queueLength every `pollingInterval` while the number of replicas is 0, and scale the resource up or down accordingly.
 
 ## cooldownPeriod
 
 ```yaml

From 86e1f58c10834a3a8f454df9ad2fa185329ca24a Mon Sep 17 00:00:00 2001
From: rickbrouwer
Date: Wed, 9 Oct 2024 08:38:04 +0200
Subject: [PATCH 2/2] Clarify caching metrics

Signed-off-by: rickbrouwer
---
 content/docs/2.16/concepts/scaling-deployments.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/docs/2.16/concepts/scaling-deployments.md b/content/docs/2.16/concepts/scaling-deployments.md
index db6e69140..63a5f6661 100644
--- a/content/docs/2.16/concepts/scaling-deployments.md
+++ b/content/docs/2.16/concepts/scaling-deployments.md
@@ -39,7 +39,7 @@ The only constraint is that the target `Custom Resource` must define `/scale` [s
 
 This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then this request is routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior such that KEDA Metrics Server tries to read metric from the cache first. This cache is updated periodically during the polling interval.
 
-Enabling this feature can significantly reduce the load on the scaler service.
+Enabling [`useCachedMetrics`](../reference/scaledobject-spec/#triggers) can significantly reduce the load on the scaler service. This feature is not supported for the `cpu`, `memory` or `cron` scalers.
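
To make the behavior described in these two patches concrete, here is a minimal ScaledObject sketch (illustrative only, not part of the diffs above): `pollingInterval` governs how often KEDA itself checks the trigger while the workload sits at 0 replicas, and the trigger-level `useCachedMetrics` flag lets the HPA's more frequent requests be answered from the cache that is refreshed on that same interval. The resource names and the RabbitMQ trigger metadata are hypothetical.

```yaml
# Illustrative sketch, not part of this patch.
# KEDA polls the queue every 60 seconds while at 0 replicas; once replicas >= 1
# the HPA asks for the metric on its own sync period, but with useCachedMetrics
# those requests are served from the cache refreshed every pollingInterval.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: queue-consumer             # hypothetical Deployment name
  pollingInterval: 60                # KEDA checks the trigger every 60s
  cooldownPeriod: 300
  triggers:
    - type: rabbitmq
      useCachedMetrics: true         # not supported for cpu, memory or cron triggers
      metadata:
        queueName: orders            # hypothetical queue
        mode: QueueLength
        value: "20"
        hostFromEnv: RABBITMQ_HOST   # hypothetical connection setting
```

With a configuration along these lines, the queue length is read from RabbitMQ roughly once per `pollingInterval`, whether the request comes from KEDA's own 0-to-1 polling or from the HPA sync loop.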