OCPBUGS-41903: operator/status: clear azure path fix job conditions on operator removal #1142

Open · wants to merge 2 commits into master
Conversation

flavianmissi
Member

There is sometimes a race condition when setting .spec.managementState back and forth between Removed and Managed: the azure path fix controller kicks off its job, but the resources the job needs to run get removed (as expected) before the job can finish and its controller can update the operator's progressing condition to reflect that.

This has been happening often in the TestLeaderElection e2e test. That test does not wait for the image registry operator to become available before removing it, and this is what triggers the race. A customer has reported a similar issue, although their error was slightly different and proved harder to reproduce. This commit should fix both problems.

Clearing the status is most important when users upgrade from a version of the azure path fix controller that deploys the job. OCP versions that do not deploy the job should not have the problem, and including this code in them should be harmless. In OCP >= 4.17 the job is no longer deployed, though the controller is kept; it can probably be safely removed in OCP >= 4.18.
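
The gist of the fix, as a minimal sketch rather than the actual diff (the condition type names and the standalone helper below are illustrative, not the operator's real identifiers):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clearAzurePathFixConditions resets the conditions owned by the azure path
// fix job controller. Calling something like this when the operator goes to
// Removed means a job that never got to finish cannot leave a stale
// Progressing/Degraded condition behind.
func clearAzurePathFixConditions(conds *[]metav1.Condition) {
	for _, t := range []string{"AzurePathFixProgressing", "AzurePathFixControllerDegraded"} {
		meta.SetStatusCondition(conds, metav1.Condition{
			Type:    t,
			Status:  metav1.ConditionFalse,
			Reason:  "OperatorRemoved",
			Message: "image registry is Removed; the azure path fix job no longer applies",
		})
	}
}

func main() {
	// Simulate a condition the job controller set before the operator was removed.
	conds := []metav1.Condition{
		{Type: "AzurePathFixProgressing", Status: metav1.ConditionTrue, Reason: "MovingBlobs"},
	}
	clearAzurePathFixConditions(&conds)
	fmt.Println(conds[0].Type, conds[0].Status) // AzurePathFixProgressing False
}
```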


Testing

We need upgrade tests as well as regression tests for this one. Testing operator removal (.spec.managementState: Removed) would also be great.
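
For manual verification, the toggle that triggers the race can be reproduced with plain oc commands (standard CLI, nothing PR-specific; the waits and assertions a real test would add are omitted):

```sh
# Remove the registry: operands and the azure path fix job's prerequisites go away.
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type merge -p '{"spec":{"managementState":"Removed"}}'

# Bring it back, then check that no stale azure-path-fix conditions linger in status.
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type merge -p '{"spec":{"managementState":"Managed"}}'
oc get configs.imageregistry.operator.openshift.io/cluster -o jsonpath='{.status.conditions}'
```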

@openshift-ci-robot added labels on Oct 17, 2024: jira/severity-critical (referenced Jira bug's severity is critical for the branch this PR is targeting), jira/valid-reference (indicates that this PR references a valid Jira ticket of any type), jira/valid-bug (indicates that a referenced Jira bug is valid for the branch this PR is targeting)
@openshift-ci-robot
Contributor

@flavianmissi: This pull request references Jira Issue OCPBUGS-41903, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.18.0) matches configured target version for branch (4.18.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (xiuwang+1@redhat.com), skipping review request.

The bug has been updated to refer to the pull request using the external bug tracker.


Contributor

openshift-ci bot commented Oct 17, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: flavianmissi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Oct 17, 2024
@flavianmissi
Member Author

/test e2e-azure-operator
/test e2e-azure-ovn
/test e2e-azure-ovn-upgrade

Contributor

openshift-ci bot commented Oct 17, 2024

@flavianmissi: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/e2e-hypershift · Commit: 58a78e9 · Required: true · Rerun command: /test e2e-hypershift

Full PR test history. Your PR dashboard.


@flavianmissi
Member Author

e2e-aws-operator failed on a known flake in TestImageRegistryRemovedWithImages.
/test e2e-aws-operator

will re-run e2e-azure-operator for good measure.
/test e2e-azure-operator

e2e-hypershift failure seems unrelated
/test e2e-hypershift

@xiuwang

xiuwang commented Oct 18, 2024

/label qe-approved
/label cherry-pick-approved

@openshift-ci openshift-ci bot added labels on Oct 18, 2024: qe-approved (signifies that QE has signed off on this PR), cherry-pick-approved (indicates a cherry-pick PR into a release branch has been approved by the release branch manager)
@openshift-ci-robot
Contributor

@flavianmissi: This pull request references Jira Issue OCPBUGS-41903, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.18.0) matches configured target version for branch (4.18.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (xiuwang+1@redhat.com), skipping review request.


@flavianmissi
Member Author

This seems to behave well overall, but I'm still slightly skeptical :D
/payload-aggregate periodic-ci-openshift-release-master-ci-4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade 5

Contributor

openshift-ci bot commented Oct 18, 2024

@flavianmissi: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-ci-4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/04ea1110-8d30-11ef-8a54-0b936220e312-0

@flavianmissi
Member Author

/payload-abort

Contributor

openshift-ci bot commented Oct 18, 2024

@flavianmissi: aborted active payload jobs for pull request #1142

@flavianmissi
Member Author

/payload-aggregate periodic-ci-openshift-release-master-ci-4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade 5

Contributor

openshift-ci bot commented Oct 18, 2024

@flavianmissi: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-ci-4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/6d3e5ee0-8d32-11ef-80b2-c38a9eac7d34-0

@flavianmissi
Member Author

/payload-abort

Contributor

openshift-ci bot commented Oct 18, 2024

@flavianmissi: aborted active payload jobs for pull request #1142

@flavianmissi
Member Author

Test failures are infra-related - I'll allow CI some time to recover before retesting.
Failure:

error: error creating buildah builder: copying system image from manifest list: determining manifest MIME type for docker://registry.build10.ci.openshift.org/ci/managed-clonerefs@sha256:84b65cb47df5321b0f344d66290144e853eca71055f9553a6056193f6f9ebb6c: reading manifest sha256:9514ef6fdb801fca35c35c6749e6382ecdc0498f09d7b023cb01d7b728454272 in registry.build10.ci.openshift.org/ci/managed-clonerefs: manifest unknown

@flavianmissi
Member Author

/retest
/payload-aggregate periodic-ci-openshift-release-master-ci-4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade 5

Contributor

openshift-ci bot commented Oct 18, 2024

@flavianmissi: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-ci-4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/71599d90-8d60-11ef-9619-1d5ed6ce6387-0

@flavianmissi
Member Author

e2e-azure-operator failed because TestS3Minio failed. It seems unrelated to the previous flakes. I've seen quite a few errors in the operator logs when it tries to communicate with minio; I suspect the minio deployment might just be unreliable. I've seen TestS3Minio fail previously but never spent time investigating it.
This was seen in the logs for this test, which I think corroborates my understanding of the failure:

imageregistry.go:399: the imageregistry resource is processed, but the the image registry is not available

e2e-aws-operator failure was infra-related:

* could not run steps: step [input:origin-centos-8] failed: failed to wait for importing imagestreamtags on ci-op-nz0w8tln/pipeline:origin-centos-8: failed to import tag(s) [origin-centos-8] on image stream ci-op-nz0w8tln/pipeline because of missing definition in the spec

/retest
