
Implement Non-Strict Affinity and Non-Strict Anti-Affinity Support #330

Closed

Conversation

FarnazBGH

Description of changes:
This PR integrates the non-strict anti-affinity and non-strict affinity types into CloudStackAffinityGroup. CloudStack 4.18 introduced the ability to set these flexible affinity rules. This enhancement allows for more adaptable and efficient resource management, and is particularly beneficial for the configuration of worker nodes.

Testing performed:

  • make test
  • Created a cluster using the same CloudStackMachineTemplate for the worker machines.
    Below is an example configuration showing the use of the non-strict anti-affinity type in a CloudStackMachineTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: CloudStackMachineTemplate
    metadata:
      name: example-worker-0
    spec:
      template:
        spec:
          affinity: soft-anti
          offering:
            name: example-offering
          template:
            name: example-template

Requirements:
CloudStack Version: This feature requires CloudStack version 4.18.1 or higher.
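
Older CloudStack releases reject the non-strict types (see the API errors quoted later in this thread), so a controller could guard on the reported version before creating such a group. Below is a minimal, hypothetical Go sketch using only the standard library; the function name and the hard-coded minimum version are assumptions for illustration, not code from this PR:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// supportsNonStrictAffinity reports whether a CloudStack version string
// such as "4.18.1" meets the 4.18.1 minimum stated above. This is an
// illustrative helper, not part of the PR.
func supportsNonStrictAffinity(version string) bool {
	minimum := []int{4, 18, 1}
	parts := strings.Split(version, ".")
	for i, want := range minimum {
		got := 0
		if i < len(parts) {
			// Tolerate suffixes such as "4.18.1-SNAPSHOT".
			if n, err := strconv.Atoi(strings.SplitN(parts[i], "-", 2)[0]); err == nil {
				got = n
			}
		}
		if got != want {
			return got > want
		}
	}
	return true
}

func main() {
	for _, v := range []string{"4.17.2", "4.18.0", "4.18.1", "4.19.0"} {
		fmt.Printf("%-8s -> non-strict affinity supported: %v\n", v, supportsNonStrictAffinity(v))
	}
}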

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.


linux-foundation-easycla bot commented Dec 13, 2023

CLA Signed

The committers listed above are authorized under a signed CLA.


netlify bot commented Dec 13, 2023

Deploy Preview for kubernetes-sigs-cluster-api-cloudstack ready!

Name Link
🔨 Latest commit 2f0e962
🔍 Latest deploy log https://app.netlify.com/sites/kubernetes-sigs-cluster-api-cloudstack/deploys/665f0bce4018fe00081d3780
😎 Deploy Preview https://deploy-preview-330--kubernetes-sigs-cluster-api-cloudstack.netlify.app

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Dec 13, 2023
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: FarnazBGH
Once this PR has been reviewed and has the lgtm label, please assign davidjumani for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Welcome @FarnazBGH!

It looks like this is your first PR to kubernetes-sigs/cluster-api-provider-cloudstack 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api-provider-cloudstack has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Dec 13, 2023
@k8s-ci-robot
Contributor

Hi @FarnazBGH. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Dec 13, 2023
@FarnazBGH FarnazBGH force-pushed the feat/non-strict-affinity-rules branch from 9158f7e to b918dbc Compare December 13, 2023 17:03
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Dec 13, 2023
@FarnazBGH FarnazBGH force-pushed the feat/non-strict-affinity-rules branch from b918dbc to ce0ec25 Compare December 14, 2023 07:50
@chrisdoherty4
Member

/uncc @chrisdoherty4
/cc @g-gaston

@k8s-ci-robot k8s-ci-robot requested review from g-gaston and removed request for chrisdoherty4 January 2, 2024 16:09
@rohityadavcloud
Member

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jan 3, 2024
@rohityadavcloud
Member

/run-e2e -c 4.18

@blueorangutan

@rohityadavcloud a jenkins job has been kicked to run tests with the following parameters:

  • kubernetes version: 1.27.2
  • CloudStack version: 4.18
  • hypervisor: kvm
  • template: ubuntu-2004-kube
  • Kubernetes upgrade from: 1.26.5 to 1.27.2

Contributor

@g-gaston g-gaston left a comment


Thanks a lot for the PR @FarnazBGH!

How would this behave if the user is running an older version of CloudStack that doesn't support this feature?

@blueorangutan

Test Results : (tid-355)
Environment: kvm Rocky8(x3), Advanced Networking with Management Server Rocky8
Kubernetes Version: v1.27.2
Kubernetes Version upgrade from: v1.26.5
Kubernetes Version upgrade to: v1.27.2
CloudStack Version: 4.18
Template: ubuntu-2004-kube
E2E Test Run Logs: https://github.com/blueorangutan/capc-prs/releases/download/capc-pr-ci-cd/capc-e2e-artifacts-pr330-sl-355.zip

[PASS] When testing affinity group Should have host affinity group when affinity is anti
[PASS] When testing machine remediation Should replace a machine when it is destroyed
[PASS] When testing horizontal scale out/in [TC17][TC18][TC20][TC21] Should successfully scale machine replicas up and down horizontally
[PASS] When the specified resource does not exist Should fail due to the specified account is not found [TC4a]
[PASS] When the specified resource does not exist Should fail due to the specified domain is not found [TC4b]
[PASS] When the specified resource does not exist Should fail due to the specified control plane offering is not found [TC7]
[PASS] When the specified resource does not exist Should fail due to the specified template is not found [TC6]
[PASS] When the specified resource does not exist Should fail due to the specified zone is not found [TC3]
[PASS] When the specified resource does not exist Should fail due to the specified disk offering is not found
[PASS] When the specified resource does not exist Should fail due to the compute resources are not sufficient for the specified offering [TC8]
[PASS] When the specified resource does not exist Should fail due to the specified disk offer is not customized but the disk size is specified
[PASS] When the specified resource does not exist Should fail due to the specified disk offer is customized but the disk size is not specified
[PASS] When the specified resource does not exist Should fail due to the public IP can not be found
[PASS] When the specified resource does not exist When starting with a healthy cluster Should fail to upgrade worker machine due to insufficient compute resources
[PASS] When the specified resource does not exist When starting with a healthy cluster Should fail to upgrade control plane machine due to insufficient compute resources
[PASS] When testing subdomain Should create a cluster in a subdomain
[PASS] When testing app deployment to the workload cluster [TC1][PR-Blocking] Should be able to download an HTML from the app deployed to the workload cluster
[PASS] When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
[PASS] When testing MachineDeployment rolling upgrades Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
[PASS] When testing with custom disk offering Should successfully create a cluster with a custom disk offering
[PASS] When testing resource cleanup Should create a new network when the specified network does not exist
[PASS] When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
[PASS] with two clusters should successfully add and remove a second cluster without breaking the first cluster
[PASS] When testing with disk offering Should successfully create a cluster with disk offering


Summarizing 7 Failures:

[Fail] When testing Kubernetes version upgrades [It] Should successfully upgrade kubernetes versions when there is a change in relevant fields 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/machinedeployment_helpers.go:129

[Fail] When testing affinity group [It] Should have host affinity group when affinity is pro 
/jenkins/workspace/capc-e2e-new/test/e2e/common.go:331

[Fail] When testing affinity group [It] Should have host affinity group when affinity is soft-pro 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/clusterctl/client.go:248

[Fail] When testing affinity group [It] Should have host affinity group when affinity is soft-anti 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/clusterctl/client.go:248

[Fail] When testing app deployment to the workload cluster with network interruption [ToxiProxy] [BeforeEach] Should be able to create a cluster despite a network interruption during that process 
/jenkins/workspace/capc-e2e-new/test/e2e/toxiproxy/toxiProxy.go:203

[Fail] When testing app deployment to the workload cluster with slow network [ToxiProxy] [BeforeEach] Should be able to download an HTML from the app deployed to the workload cluster 
/jenkins/workspace/capc-e2e-new/test/e2e/toxiproxy/toxiProxy.go:203

[Fail] When testing multiple CPs in a shared network with kubevip [It] Should successfully create a cluster with multiple CPs in a shared network 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/cluster_helpers.go:143

Ran 30 of 31 Specs in 9834.870 seconds
FAIL! -- 23 Passed | 7 Failed | 0 Pending | 1 Skipped
--- FAIL: TestE2E (9834.88s)
FAIL

@FarnazBGH
Author

FarnazBGH commented Jan 9, 2024

Thanks a lot for the PR @FarnazBGH!

How would this behave if the user is running an older version of CloudStack that doesn't support this feature?

In our environment, all CloudStack installations are upgraded to the latest version, so the non-strict affinities are available. To answer your question, I simulated an affinity type that is not available in our system: I added an extra test type (TestAntiAffinity = "test-anti", standing in for a type that would only exist in a later version) and revised the code to use it:

const (
	// The presence of a finalizer prevents CAPI from deleting the corresponding CAPI data.
	MachineFinalizer = "cloudstackmachine.infrastructure.cluster.x-k8s.io"
	ProAffinity      = "pro"
	AntiAffinity     = "anti"
	SoftProAffinity  = "soft-pro"
	SoftAntiAffinity = "soft-anti"
	TestAntiAffinity = "test-anti"
	NoAffinity       = "no"
)
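
For readers following along: these shorthand values ultimately have to be translated into the affinity group type strings the CloudStack API accepts (the full list of those strings appears verbatim in the API error below). A minimal sketch of such a mapping, assuming it lives in the same package as the constants above; the package and helper names are hypothetical and this is not the PR's actual implementation:

package v1beta2 // assumption: the package that defines the constants above

import "fmt"

// affinityGroupTypeFor translates the affinity shorthand from the
// machine spec into the type string the CloudStack API expects.
func affinityGroupTypeFor(affinity string) (string, error) {
	switch affinity {
	case ProAffinity:
		return "host affinity", nil
	case AntiAffinity:
		return "host anti-affinity", nil
	case SoftProAffinity:
		return "non-strict host affinity", nil
	case SoftAntiAffinity:
		return "non-strict host anti-affinity", nil
	case NoAffinity:
		return "", nil // no affinity group is created
	default:
		return "", fmt.Errorf("unknown affinity type %q", affinity)
	}
}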

Now, when I set the affinity to "test-anti" during cluster creation, I get the error below:

E0109 09:53:14.109649       1 controller.go:324]  "msg"="Reconciler error" "error"="CloudStack API error 431 (CSExceptionErrorCode: 4350): Unable to create affinity group, invalid affinity group type: test host anti-affinity. Valid values are non-strict host affinity, host affinity,host anti-affinity,ExplicitDedication,non-strict host anti-affinity" 

This means that on older CloudStack versions, users will get similar errors for the non-strict affinities. For example, for "non-strict host anti-affinity", something like:

CloudStack API error 431 (CSExceptionErrorCode: 4350): Unable to create affinity group, invalid affinity group type: non-strict host anti-affinity. Valid values are host affinity, host anti-affinity, ExplicitDedication
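
Given that on older versions the only signal is this API error string, one way a caller could surface a friendlier message is to match on it. A purely illustrative sketch; the matched substring is taken from the errors quoted above, and the package and helper names are made up:

package cloud // hypothetical package name

import (
	"fmt"
	"strings"
)

// friendlyAffinityError wraps the raw CloudStack API error with a hint
// about the version requirement. Matching on message text is a
// pragmatic assumption, since the error arrives as an opaque string.
func friendlyAffinityError(err error) error {
	if err != nil && strings.Contains(err.Error(), "invalid affinity group type") {
		return fmt.Errorf("CloudStack rejected the affinity group type; "+
			"soft-pro/soft-anti require CloudStack 4.18.1 or higher: %w", err)
	}
	return err
}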

@davidjumani
Contributor

LGTM, but waiting for input from other maintainers as well.
@g-gaston If this is tied to a specific version of CloudStack, the compatibility matrix can be updated accordingly when a release with this feature is cut, so users are aware of this requirement.

Contributor

@hrak hrak left a comment


Left some small comments.

Question from my side to the other maintainers would be whether it's a good idea to apply this change to both v1beta1 and v1beta2, or if it doesn't really matter because the type of the value is not changing, just the allowed values.

And a general suggestion for this Type field from my side would be to consider using the following kubebuilder validation so the CRD already defines the allowed values:

In cloudstackaffinitygroup_types.go:

type CloudStackAffinityGroupSpec struct {
	// Mutually exclusive parameter with AffinityGroupIDs.
	// Can be "host affinity" or "host anti-affinity". Will create an affinity group per machine set.
	// +kubebuilder:validation:Enum=host affinity;host anti-affinity;non-strict host affinity;non-strict host anti-affinity
	// +optional
	Type string `json:"type,omitempty"`
}

In cloudstackmachine_types.go:

	// Mutually exclusive parameter with AffinityGroupIDs.
	// Defaults to `no`. Can be `pro` or `anti`. Will create an affinity group per machine set.
	// +kubebuilder:validation:Enum=no;pro;anti;soft-pro;soft-anti
	// +optional
	Affinity string `json:"affinity,omitempty"`

Another question from my end is why we use two different formats for the affinity types ("host affinity" vs. "pro", for example), but this might be something we can deal with in a separate PR.
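
If the CRD enum is not adopted, the same constraint could also be enforced imperatively, for example in a validating webhook. A small sketch under that assumption; the names are illustrative and the allowed set simply mirrors the enum proposed above:

package webhooks // hypothetical package name

import "fmt"

// validAffinities mirrors the kubebuilder enum suggested above.
var validAffinities = map[string]struct{}{
	"no": {}, "pro": {}, "anti": {}, "soft-pro": {}, "soft-anti": {},
}

// validateAffinity rejects any affinity value outside the allowed set.
// An empty value is accepted because the field is optional.
func validateAffinity(affinity string) error {
	if affinity == "" {
		return nil
	}
	if _, ok := validAffinities[affinity]; !ok {
		return fmt.Errorf("affinity must be one of no, pro, anti, soft-pro, soft-anti; got %q", affinity)
	}
	return nil
}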

test/e2e/common.go (outdated, resolved)
pkg/cloud/affinity_groups.go (outdated, resolved)
pkg/cloud/affinity_groups.go (outdated, resolved)
@FarnazBGH FarnazBGH force-pushed the feat/non-strict-affinity-rules branch from ce0ec25 to 148da67 Compare January 17, 2024 10:59
@rohityadavcloud
Member

/run-e2e -c 4.18

@FarnazBGH FarnazBGH force-pushed the feat/non-strict-affinity-rules branch 2 times, most recently from 9cdf7eb to 3a9f88c Compare June 2, 2024 11:33
@FarnazBGH FarnazBGH force-pushed the feat/non-strict-affinity-rules branch from 3a9f88c to 1e04f47 Compare June 2, 2024 11:35
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Jun 2, 2024
@FarnazBGH
Author

Sorry for the delayed response.

The code comments and documentation have been added.

In the next few days, I will also work on checking and testing the CloudStack version requirement for soft-pro and soft-anti.

@blueorangutan

Test Results : (tid-470)
Environment: kvm Rocky8(x3), Advanced Networking with Management Server Rocky8
Kubernetes Version: v1.27.2
Kubernetes Version upgrade from: v1.26.5
Kubernetes Version upgrade to: v1.27.2
CloudStack Version: 4.19
Template: ubuntu-2004-kube
E2E Test Run Logs: https://github.com/blueorangutan/capc-prs/releases/download/capc-pr-ci-cd/capc-e2e-artifacts-pr330-sl-470.zip



Summarizing 6 Failures:
 [FAIL] When testing affinity group [It] Should have host affinity group when affinity is pro
 /jenkins/workspace/capc-e2e-new/test/e2e/common.go:348
 [FAIL] When testing affinity group [It] Should have host affinity group when affinity is soft-pro
 /root/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/clusterctl/client.go:292
 [FAIL] When testing affinity group [It] Should have host affinity group when affinity is soft-anti
 /root/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/clusterctl/client.go:292
 [FAIL] When testing MachineDeployment rolling upgrades [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
 /root/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/machinedeployment_helpers.go:329
 [FAIL] When testing project [AfterEach] Should create a cluster in a project
 /jenkins/workspace/capc-e2e-new/test/e2e/project.go:103
 [FAIL] with two clusters [It] should successfully add and remove a second cluster without breaking the first cluster
 /root/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/clusterctl/clusterctl_helpers.go:345

Ran 31 of 32 Specs in 10470.880 seconds
FAIL! -- 25 Passed | 6 Failed | 0 Pending | 1 Skipped
--- FAIL: TestE2E (10470.88s)
FAIL

@blueorangutan

Test Results : (tid-472)
Environment: kvm Rocky8(x3), Advanced Networking with Management Server Rocky8
Kubernetes Version: v1.27.2
Kubernetes Version upgrade from: v1.26.5
Kubernetes Version upgrade to: v1.27.2
CloudStack Version: 4.19
Template: ubuntu-2004-kube
E2E Test Run Logs: https://github.com/blueorangutan/capc-prs/releases/download/capc-pr-ci-cd/capc-e2e-artifacts-pr330-sl-472.zip



Summarizing 6 Failures:
 [FAIL] When testing project [AfterEach] Should create a cluster in a project
 /jenkins/workspace/capc-e2e-new/test/e2e/project.go:103
 [FAIL] When testing affinity group [It] Should have host affinity group when affinity is pro
 /jenkins/workspace/capc-e2e-new/test/e2e/common.go:348
 [FAIL] When testing affinity group [It] Should have host affinity group when affinity is soft-pro
 /root/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/clusterctl/client.go:292
 [FAIL] When testing affinity group [It] Should have host affinity group when affinity is soft-anti
 /root/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/clusterctl/client.go:292
 [FAIL] When testing machine remediation [It] Should replace a machine when it is destroyed
 /jenkins/workspace/capc-e2e-new/test/e2e/common.go:495
 [TIMEDOUT] When testing with disk offering [AfterEach] Should successfully create a cluster with disk offering
 /jenkins/workspace/capc-e2e-new/test/e2e/disk_offering.go:89

Ran 31 of 32 Specs in 10786.159 seconds
FAIL! - Suite Timeout Elapsed -- 25 Passed | 6 Failed | 0 Pending | 1 Skipped
--- FAIL: TestE2E (10786.16s)
FAIL

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Jun 4, 2024
@FarnazBGH FarnazBGH force-pushed the feat/non-strict-affinity-rules branch from ed7d165 to 2f0e962 Compare June 4, 2024 12:42
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Jun 4, 2024
@@ -73,6 +73,22 @@ func AffinityGroupSpec(ctx context.Context, inputGetter func() CommonSpecInput)
affinityIds = executeTest(ctx, input, namespace, specName, clusterResources, "anti")
})

It("Should have host affinity group when affinity is soft-pro", func() {
cloudStackVersion := input.E2EConfig.GetVariable("CLOUDSTACK_VERSION")
Member


AffinityGroupType = "host affinity"
AntiAffinityGroupType = "host anti-affinity"
SoftAntiAffinityGroupType = "non-strict anti-affinity"
Member


Suggested change
SoftAntiAffinityGroupType = "non-strict anti-affinity"
SoftAntiAffinityGroupType = "non-strict host anti-affinity"

Member


@FarnazBGH This needs to be fixed.

@vishesh92
Member

@FarnazBGH I will be cutting the RC on Monday. If possible, please address/resolve the comments before then. Else we will have to move this to the next release.

@vishesh92 vishesh92 modified the milestones: v0.5.0, v0.6 Jul 4, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 2, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 1, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
