
Snatch docs #38

Open · wants to merge 11 commits into base: main

Conversation

tobiscr (Contributor) commented Jan 22, 2025

Description

First draft of KIM-Snatch documentation

Related issue(s)

tobiscr requested a review from a team as a code owner January 22, 2025 15:48
tobiscr requested a review from a team as a code owner January 22, 2025 15:52
m00g3n previously approved these changes Jan 23, 2025

m00g3n (Contributor) left a comment


LGTM

docs/user/README.md — review thread resolved
docs/user/README.md — outdated review thread, resolved

In the past, Kyma had only one worker pool (so called "Kyma worker pool") where every workload was scheduled on. This Kyma worker pool is mandatory and cannot be removed from a Kyma runtime. Customers have several configuration options, but it's not fully adjustable and can be too limited for customers who require special node setups.

By introducing the Kyma worker pool feature, customers can add additional worker pools to their Kyma runtime. This enables customer to introduce worker nodes, which are optimized for their particular workload requirements.


Suggested change
By introducing the Kyma worker pool feature, customers can add additional worker pools to their Kyma runtime. This enables customer to introduce worker nodes, which are optimized for their particular workload requirements.
With the Kyma worker pool feature, you can add additional worker pools to your Kyma runtime and introduce worker nodes optimized for their particular workload requirements.


Lines 6 and 8 describe the Kyma worker pool.
If these are two different things, I recommend naming/describing them so that it's easy to understand which is which.

tobiscr (Contributor, Author) replied

Fully agreed. I've removed overlapping names and used different formatting for Kyma worker pool.

docs/user/README.md — six outdated review threads, resolved


  1. In the diagram, please replace the following:
    • Kyma Runtime (SKR) with SAP BTP, Kyma Runtime
    • Customer with User
  2. Is it possible to move the namespace shape so that it doesn't intersect with the channel (?) symbol?
  3. The diagram does not follow our content guidelines, but as we are thinking of switching to TAM, let me check if the current version can be accepted.
  4. Still, can you change it to SVG? If it's added as snatch-deployment.drawio.svg, it will be easier to edit/update if need be.

kyma-bot (Contributor) commented

New changes are detected. LGTM label has been removed.

tobiscr requested a review from IwonaLanger January 23, 2025 17:00
mmitoraj left a comment


Do we want to/have to call KIM Snatch a module? For example, Warden is called just "Warden". As far as I understood, KIM Snatch is not a module in the sense of customer-facing (internal or external) modules. I'm just wondering if this is the right name and want to avoid confusion and misunderstanding. Or maybe you chose the name on purpose and I just don't get it right :)

tobiscr (Contributor, Author) commented Jan 24, 2025

> Do we want to/have to call KIM Snatch a module? For example, Warden is called just "Warden". As far as I understood, KIM Snatch is not a module in the sense of customer-facing (internal or external) modules. I'm just wondering if this is the right name and want to avoid confusion and misunderstanding. Or maybe you chose the name on purpose and I just don't get it right :)

Thanks for asking, this is the situation:
KIM Snatch is a mandatory module, required for the worker-pool feature in KIM. Mandatory modules are treated specially by KLM:

  • Customers won't be able to disable this module
  • Mandatory modules are hidden from customers - they don't see them in Busola or the Kyma CR
  • Such modules are always enabled and installed automatically by KLM on all SKRs

Because of these constraints, my recommendation is not to mention it in our docs. The purpose of the module is actually "housekeeping" of Kyma workloads. Customers won't have any touchpoints with it.

We tried to indicate that this module is related to the Infrastructure Manager (KIM); that's why we called it "KIM Snatch". "Snatch" has no deeper meaning about what the module does internally - but it's similar for Warden: that name also doesn't indicate what it does under the hood :)

Using "KIM Snatch" instead calling it only "Snatch" was requested by @zhoujing2022 as we treat this module from release-management perspective as sub-component of KIM. We want to stay consistent and decided to call it everywhere "KIM Snatch".

As this module is purely Kyma-related and not used by or visible to customers, I don't think the name has any impact on customers.


## Overview
> Provide a description of your module and its components. Describe its features and functionalities. Mention the scope and add information on the CustomResourceDefinitions (CRDs).
> You can divide this section to the relevant subsections.
The KIM Snatch module is part of Kyma Infrastructure Manager's (KIM) worker pool feature. It is a mandatory Kyma module deployed on all Kyma-managed runtimes. Mandatory modules are not visible for SAP BTP, Kyma runtime customers and automatically installed by the [KLM](https://github.com/kyma-project/lifecycle-manager) on each SAP BTP, Kyma runtime.


Suggested change
The KIM Snatch module is part of Kyma Infrastructure Manager's (KIM) worker pool feature. It is a mandatory Kyma module deployed on all Kyma-managed runtimes. Mandatory modules are not visible for SAP BTP, Kyma runtime customers and automatically installed by the [KLM](https://github.com/kyma-project/lifecycle-manager) on each SAP BTP, Kyma runtime.
The KIM Snatch module is part of Kyma Infrastructure Manager's (KIM) worker pool feature. It is a mandatory Kyma module deployed on all Kyma-managed runtimes. Mandatory modules are automatically installed by the [Kyma Lifecycle Manager (KLM)](https://github.com/kyma-project/lifecycle-manager) on each SAP BTP, Kyma runtime, but not visible to Kyma runtime users.


## Overview
> Provide a description of your module and its components. Describe its features and functionalities. Mention the scope and add information on the CustomResourceDefinitions (CRDs).
> You can divide this section to the relevant subsections.
The KIM Snatch is part of Kyma Infrastructure Manager's (KIM) worker pool feature. It is deployed on all Kyma-managed runtimes. Mandatory modules are not visible for SAP BTP, Kyma runtime customers and automatically installed by the [KLM](https://github.com/kyma-project/lifecycle-manager) on each SAP BTP, Kyma runtime.


We might skip the sentence about mandatory modules now that the word "module" is not used in the name, and have one of the following instead:



Suggested change
The KIM Snatch is part of Kyma Infrastructure Manager's (KIM) worker pool feature. It is deployed on all Kyma-managed runtimes. Mandatory modules are not visible for SAP BTP, Kyma runtime customers and automatically installed by the [KLM](https://github.com/kyma-project/lifecycle-manager) on each SAP BTP, Kyma runtime.
The KIM Snatch is part of Kyma Infrastructure Manager's (KIM) worker pool feature. It is deployed on all Kyma-managed runtimes but is not visible to SAP BTP, Kyma runtime customers.


Or:



Suggested change
The KIM Snatch is part of Kyma Infrastructure Manager's (KIM) worker pool feature. It is deployed on all Kyma-managed runtimes. Mandatory modules are not visible for SAP BTP, Kyma runtime customers and automatically installed by the [KLM](https://github.com/kyma-project/lifecycle-manager) on each SAP BTP, Kyma runtime.
The KIM Snatch is part of Kyma Infrastructure Manager's (KIM) worker pool feature. It is deployed on all Kyma-managed runtimes.


In the past, only one worker pool existed in a Kyma runtime (called `Kyma worker pool`). This `Kyma worker pool` is mandatory and cannot be removed. It allows several configuration options, which can be too limited for users requiring special node setups.


Suggested change
In the past, only one worker pool existed in a Kyma runtime (called `Kyma worker pool`). This `Kyma worker pool` is mandatory and cannot be removed. It allows several configuration options, which can be too limited for users requiring special node setups.
So far, `Kyma worker pool` has been the only existing worker pool in SAP BTP, Kyma runtime. This `Kyma worker pool` is mandatory and cannot be removed. It allows several configuration options, which can be too limited for users requiring special node setups.


I would refrain from using the phrase "in the past." It suggests that the Kyma worker pool no longer exists, which is not true.


With the worker pool feature, you can add customized worker pools to your Kyma runtime and introduce worker nodes optimized for your particular workload requirements.

The KIM-Snatch assigns Kyma workloads, for example, Kyma modules' operators, to the `Kyma worker pool` and ensures that your worker pools are reserved for your workloads. This solution has the following advantages:


Suggested change
The KIM-Snatch assigns Kyma workloads, for example, Kyma modules' operators, to the `Kyma worker pool` and ensures that your worker pools are reserved for your workloads. This solution has the following advantages:
KIM Snatch assigns Kyma workloads, for example, Kyma modules' operators, to `Kyma worker pool` and ensures that your worker pools are reserved for your workloads. This solution has the following advantages:


If you prefer KIM-Snatch to KIM Snatch, please ignore this comment, but add the hyphen in line 1. Otherwise, please remove the hyphen in lines 17, 23, and 40.


The KIM-Snatch assigns Kyma workloads, for example, Kyma modules' operators, to the `Kyma worker pool` and ensures that your worker pools are reserved for your workloads. This solution has the following advantages:

* Kyma workloads are not allocating resources on customized worker pools. This ensures that customers have the full capacity of the worker pool available for their workloads.


Suggested change
* Kyma workloads are not allocating resources on customized worker pools. This ensures that customers have the full capacity of the worker pool available for their workloads.
* Kyma workloads don't allocate resources on customized worker pools. This ensures that customers have the full capacity of the worker pool available for their workloads.


"Aren't allocating" suggests something happening temporarily; "don't allocate" refers to something that generally happens as a rule, which I assume is the case. If I'm wrong, please ignore this suggestion.


The KIM-Snatch introduces the Kubernetes [mutating admission webhook](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook).

It intercepts all Pods that are scheduled in a Kyma-managed namespace. [Kyma Lifecycle Manager (KLM)](https://github.com/kyma-project/lifecycle-manager) always labels a managed namespace with `operator.kyma-project.io/managed-by: kyma`. KIM reacts only to Pods scheduled in one of these labeled namespaces. Typical Kyma-managed namespaces are `kyma-system` or, if the Kyma Istio module is used, `istio`.


Suggested change
It intercepts all Pods that are scheduled in a Kyma-managed namespace. [Kyma Lifecycle Manager (KLM)](https://github.com/kyma-project/lifecycle-manager) always labels a managed namespace with `operator.kyma-project.io/managed-by: kyma`. KIM reacts only to Pods scheduled in one of these labeled namespaces. Typical Kyma-managed namespaces are `kyma-system` or, if the Kyma Istio module is used, `istio`.
It intercepts all Pods that are scheduled in a Kyma-managed namespace. [Kyma Lifecycle Manager (KLM)](https://github.com/kyma-project/lifecycle-manager) always labels a managed namespace with `operator.kyma-project.io/managed-by: kyma`. KIM reacts only to Pods scheduled in one of these labeled namespaces. Typical Kyma-managed namespaces are `kyma-system` or, if the Kyma Istio module is used, `istio`.
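
To make the interception mechanism above easier to picture, here is a minimal, illustrative sketch of how such a webhook registration could look. The webhook, Service, and path names are hypothetical and not taken from this PR; only the namespace label is quoted from the docs text.

```yaml
# Illustrative sketch only: register a mutating webhook that fires solely for
# Pods created in namespaces labeled as Kyma-managed.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: kim-snatch-example                          # hypothetical name
webhooks:
  - name: pods.kim-snatch.example.kyma-project.io   # hypothetical name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore        # assumption: never block Pod creation if the webhook is unavailable
    namespaceSelector:
      matchLabels:
        operator.kyma-project.io/managed-by: kyma   # label applied by KLM (from the docs text)
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: kim-snatch-webhook                    # hypothetical Service
        namespace: kyma-system
        path: /mutate
```

With a selector like this, only Pods created in namespaces carrying the KLM-managed label are ever sent to the webhook, which matches the behavior described in the paragraph above.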

* Resources of the preferred worker pool are exhausted, while other worker pools still have free capacities.
* If no suitable worker pool can be found and the node affinity is set as a "hard" rule, the Pod is not scheduled.

To overcome these limitations, we use `preferredDuringSchedulingIgnoredDuringExecution` so that the configured node affinity on Kyma workloads is a "soft" rule. For more details, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)). The Kubernetes scheduler prefers the Kyma worker pool. Still, if scheduling the Pod in this pool is impossible, it also considers other worker pools.


Suggested change
To overcome these limitations, we use `preferredDuringSchedulingIgnoredDuringExecution` so that the configured node affinity on Kyma workloads is a "soft" rule. For more details, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)). The Kubernetes scheduler prefers the Kyma worker pool. Still, if scheduling the Pod in this pool is impossible, it also considers other worker pools.
To overcome these limitations, we use `preferredDuringSchedulingIgnoredDuringExecution` so that the configured node affinity on Kyma workloads is a "soft" rule. For more details, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity). The Kubernetes scheduler prefers the Kyma worker pool. Still, if scheduling the Pod in this pool is impossible, it also considers other worker pools.
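
To make the "soft" rule concrete, here is a minimal, illustrative Pod-spec sketch of `preferredDuringSchedulingIgnoredDuringExecution`. The node label key and value used to identify the Kyma worker pool are assumptions for illustration and are not taken from this PR.

```yaml
# Illustrative sketch only: prefer nodes of the Kyma worker pool, but allow
# scheduling elsewhere if that pool has no free capacity.
apiVersion: v1
kind: Pod
metadata:
  name: example-kyma-workload            # hypothetical workload
  namespace: kyma-system
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100                    # strongest preference, yet still only a preference
          preference:
            matchExpressions:
              - key: worker.gardener.cloud/pool    # assumed node label identifying the pool
                operator: In
                values:
                  - kyma                           # assumed pool name
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9             # placeholder image
```

Because the affinity is only a preference, the scheduler still falls back to other worker pools when the Kyma worker pool is exhausted, which is exactly the behavior described above.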


To overcome these limitations, we use `preferredDuringSchedulingIgnoredDuringExecution` so that the configured node affinity on Kyma workloads is a "soft" rule. For more details, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)). The Kubernetes scheduler prefers the Kyma worker pool. Still, if scheduling the Pod in this pool is impossible, it also considers other worker pools.

### Kyma workloads are not Intercepted


Suggested change
### Kyma workloads are not Intercepted
### Kyma Workloads are not Intercepted
