
OCPBUGS-48250: MCO CO degrades are stuck on until master pool updates complete #4791

Merged: 1 commit merged into openshift:master on Jan 29, 2025

Conversation

djoshy
Contributor

@djoshy djoshy commented Jan 14, 2025

- What I did
Added a function to clear the CO degrade condition, which is called after a successful invocation of an operator sync function.

- How to verify it
On a build without this fix:

  1. Degrade the operator. I did this by scaling down the CVO and editing the releaseVersion field in the machine-config-operator-images configmap to a bad value. This will cause syncRenderConfig to fail and degrade the operator (visible in the CO object and the operator logs).
  2. Now, deploy an MC update to the master pool. This will cause the operator to be stuck in the syncRequiredMachineConfigPools sync function, where it will wait until the master pool completes the update.
  3. While the master pool is still updating, restore releaseVersion to its original value. You should see the operator logs clear up shortly, but the CO will remain degraded. Only once the master pool is done updating will the CO degrade clear.

On a build with this fix:
Repeat steps 1 to 3 above. This time, the CO degrade should clear shortly after restoring releaseVersion, without waiting for the master pool to complete the update.

Note: The update needs to be applied to a master pool because the syncRequiredMachineConfigPools function will only "trap" the operator for master pool updates.

@openshift-ci-robot openshift-ci-robot added jira/severity-moderate Referenced Jira bug's severity is moderate for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. labels Jan 14, 2025
@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-48250, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.19.0) matches configured target version for branch (4.19.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr

The bug has been updated to refer to the pull request using the external bug tracker.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 14, 2025
@djoshy
Contributor Author

djoshy commented Jan 15, 2025

/retest-required

@djoshy
Contributor Author

djoshy commented Jan 15, 2025

/test security

@sergiordlr

Verified using IPI on AWS

  1. Scale down CVO
    $ oc scale deployment cluster-version-operator --replicas 0 -n openshift-cluster-version

  2. Remove the configuration of the machine-config-operator-images configmap to force a degraded status in the machine-config clusteroperator

# Don't forget to get the original configuration; we will restore it later
$ oc get cm machine-config-operator-images -oyaml
$ oc set data cm machine-config-operator-images 'images.json='

  3. Wait until the machine-config CO is degraded
$ oc get co machine-config
NAME             VERSION                                                AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
machine-config   4.19.0-0.test-2025-01-20-085829-ci-ln-x2x5hkk-latest   True        False         True       120m    Failed to resync 4.19.0-0.test-2025-01-20-085829-ci-ln-x2x5hkk-latest because: could not parse images.json bytes: unexpected end of JSON input
  4. Apply a MachineConfig to the master pool
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: test-mc-master
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,dGVzdA==
        filesystem: root
        mode: 420
        path: /etc/test-file-0.test
  5. Wait for the master pool to start updating
$ oc get mcp,nodes; oc get co machine-config
NAME                                                         CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
machineconfigpool.machineconfiguration.openshift.io/master   rendered-master-55d74396e13f62dc07bc3b38e3ed2aa3   False     True       False      3              0                   0                     0                      128m
machineconfigpool.machineconfiguration.openshift.io/worker   rendered-worker-639975f0322cd83401ab1d9a80e6ae46   True      False      False      3              3                   3                     0                      128m

NAME                                             STATUS                     ROLES                  AGE    VERSION
node/ip-10-0-11-207.us-east-2.compute.internal   Ready                      worker                 125m   v1.31.3
node/ip-10-0-21-105.us-east-2.compute.internal   Ready,SchedulingDisabled   control-plane,master   130m   v1.31.3
node/ip-10-0-34-100.us-east-2.compute.internal   Ready                      control-plane,master   130m   v1.31.3
node/ip-10-0-51-58.us-east-2.compute.internal    Ready                      worker                 125m   v1.31.3
node/ip-10-0-64-184.us-east-2.compute.internal   Ready                      worker                 125m   v1.31.3
node/ip-10-0-84-199.us-east-2.compute.internal   Ready                      control-plane,master   130m   v1.31.3
  6. Fix the machine-config-operator-images configmap using the original configuration that we got in step 2
  7. After a few seconds the machine-config CO should stop being degraded
$ oc get co machine-config
NAME             VERSION                                                AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
machine-config   4.19.0-0.test-2025-01-20-085829-ci-ln-x2x5hkk-latest   True        False         False      127m   

/label qe-approved

@openshift-ci openshift-ci bot added the qe-approved Signifies that QE has signed off on this PR label Jan 20, 2025
@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-48250, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.19.0) matches configured target version for branch (4.19.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr


Member

@isabella-janssen isabella-janssen left a comment


/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Jan 28, 2025
Contributor

openshift-ci bot commented Jan 28, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: djoshy, isabella-janssen

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@djoshy
Contributor Author

djoshy commented Jan 28, 2025

/label acknowledge-critical-fixes-only

This should be a safe change; it just improves how quickly CO degrades clear.

@openshift-ci openshift-ci bot added the acknowledge-critical-fixes-only Indicates if the issuer of the label is OK with the policy. label Jan 28, 2025
@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD b2e4422 and 2 for PR HEAD 07e371a in total

@djoshy
Contributor Author

djoshy commented Jan 28, 2025

/test unit

Contributor

openshift-ci bot commented Jan 29, 2025

@djoshy: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-vsphere-ovn-upi 07e371a link false /test e2e-vsphere-ovn-upi

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@djoshy
Contributor Author

djoshy commented Jan 29, 2025

/retest-required

@yuqi-zhang
Contributor

/override ci/prow/e2e-hypershift

This job isn't affected (and has passed in the past).

Contributor

openshift-ci bot commented Jan 29, 2025

@yuqi-zhang: Overrode contexts on behalf of yuqi-zhang: ci/prow/e2e-hypershift



@openshift-merge-bot openshift-merge-bot bot merged commit 31c9683 into openshift:master Jan 29, 2025
19 of 20 checks passed
@openshift-ci-robot
Contributor

@djoshy: Jira Issue OCPBUGS-48250: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-48250 has been moved to the MODIFIED state.


@djoshy
Contributor Author

djoshy commented Jan 29, 2025

/cherry-pick release-4.18 release-4.17

@openshift-cherrypick-robot

@djoshy: new pull request created: #4818


@openshift-bot
Contributor

[ART PR BUILD NOTIFIER]

Distgit: ose-machine-config-operator
This PR has been included in build ose-machine-config-operator-container-v4.19.0-202501291707.p0.g31c9683.assembly.stream.el9.
All builds following this will include this PR.

@djoshy djoshy deleted the add-clear-degrade branch January 31, 2025 19:19