[bug][v1.23]: cluster_ca_cert and cluster_ca_key always trigger cluster updater #530
Comments
Thanks for reporting. |
I would say every time; after a couple of applies like that, I just checked this morning and hit the same scenario again. Maybe useful info: I have completely destroyed the 1.22 cluster and restarted 1.23 in a different AZ as part of the upgrade, so the chances of 1.22 remnants are small (although I didn't explicitly check whether tfstate was empty/deleted before spinning up 1.23). |
Just to help narrow down what might be the problem: I don't see this on my own prod cluster, where I have a docker config defined, nor on a fresh test cluster, where I have no docker config. |
@ddelange did you see the same problem with 1.22, or only with 1.23? |
I spent some time trying to reproduce the issue, but I didn't succeed. |
Hmm, interesting! Thanks for checking, guys. This was not the case before upgrading to 1.23. The only diff to … Here's an excerpt of our … |
This looks similar to what I tested. |
Correct,

```hcl
variable "kubernetes_version" {
  type        = string
  description = "Kubernetes version to use for the cluster. MAJOR.MINOR here should not be newer than the kops provider version in versions.tf ref https://kops.sigs.k8s.io/welcome/releases/"
  default     = "v1.23.5"
}
```

I'm now trying to isolate the bug to the … |
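As an aside, here is a minimal sketch of how a version variable like this is typically wired into the cluster resource. The `kubernetes_version` attribute on `kops_cluster` is an assumption based on this thread, and the cluster name is a placeholder:

```hcl
# Hypothetical excerpt, not the reporter's actual config.
resource "kops_cluster" "cluster" {
  name               = "cluster.example.com"  # placeholder name
  kubernetes_version = var.kubernetes_version # pins the version declared above
  # ... networking, topology, etc. elided ...
}
```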
I don't think your issue is related to the config override block, but thanks for trying it out. |
Looks unrelated indeed. Behaviour stays the same 🤔 |
Could be related to permissions in the S3 bucket. |
The file exists and looks like a valid manifest. Created (r/w perms) by my AWS account yesterday when I created the 1.23 cluster. Same permissions as the rest of the files, e.g. under … |
Hmmm, I'm dry on ideas :-( |
Many thanks for taking the time! I'll just close this for now, and if it disappears over time (or I miraculously find a fix) I will report back here. Have a nice weekend 💥 |
The hotfix seems to be holding steady :)

```hcl
lifecycle {
  ignore_changes = [
    secrets,
  ]
}
```
|
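For readers skimming: the `lifecycle` block above sits inside the cluster resource. A minimal sketch of the placement (the resource name is assumed for illustration):

```hcl
resource "kops_cluster" "cluster" {
  # ... cluster spec ...

  # Workaround: ignore drift on the secrets attribute so the permanent
  # cluster_ca_cert/cluster_ca_key diff stops triggering the cluster updater.
  lifecycle {
    ignore_changes = [
      secrets,
    ]
  }
}
```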
It’s a hack, you shouldn’t need this. |
I just released … |
Thanks for the ping! |
Seems like it did not solve the issue 🤔 |
@argoyle and I had this issue as well on some clusters.

```hcl
secrets {
  docker_config = "{}"
}
```
|
Interesting, I am going to reopen and investigate the issue again. |
I was able to somewhat reproduce the issue: …
Would that look like the scenario you are hitting? |
FWIW, I never used the … |
I have a good suspect in mind, but no access to an AWS account, which makes it difficult to track down. CAs are stored in … When one doesn't provide a CA cert/key, kOps will create one, and I'm not sure if it is stored in the same place (I suppose it is, but can't confirm). From what I understand, it's not possible to remove a CA that is being used, so it is probably related. What could happen: …

Now, if you specify the …

```hcl
resource "kops_cluster" "cluster" {
  // ...
  secrets {}
  // ...
}
```

It would be nice if someone can confirm this. |
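For completeness, the other direction would be to provide the CA pair from this issue's title explicitly. A hedged sketch; whether `cluster_ca_cert` and `cluster_ca_key` sit inside the `secrets` block exactly like this is an assumption based on the discussion above, and the file paths are placeholders:

```hcl
resource "kops_cluster" "cluster" {
  // ...
  secrets {
    // Attribute names taken from the issue title; paths are hypothetical.
    cluster_ca_cert = file("${path.module}/ca.crt")
    cluster_ca_key  = file("${path.module}/ca.key")
  }
  // ...
}
```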
@peter-svensson @argoyle I suspect you can use an empty `secrets` block instead. |
```diff
--- a/k8s/kops/cluster.tf
+++ b/k8s/kops/cluster.tf
@@ -217,12 +217,7 @@ EOF
     }
   }
-  lifecycle {
-    ignore_changes = [
-      secrets,
-    ]
-  }
+  secrets {}
 }
```

did indeed not trigger the updater! |
Seems to do the trick with our setup as well 🎉 |
We also have a … block for the same reason btw :) |
Great, thanks for testing it! |
@ddelange why not RBAC? |
We're spinning up Rancher v2 on the cluster via Helm chart v2.6.5. I flipped the authorization config this morning and recreated the cluster, but the Rancher mechanics (lots of opaque Helm jobs etc.) don't like it and the cluster won't show up anymore in the Rancher UI. I've been skimming through all the different pod logs, but there are no RBAC-related messages. Ironically, some time ago I managed to fix another Rancher-related issue by creating a ClusterRole, which shouldn't have been necessary as we had … Like I wrote there, spinning up with …
EDIT: now successfully changed to … |
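For anyone making the same switch, a hedged sketch of what the authorization toggle can look like with this provider; the nested block names are an assumption based on how the provider mirrors the kOps ClusterSpec, not something confirmed in this thread:

```hcl
resource "kops_cluster" "cluster" {
  // ...
  authorization {
    // Assumed blocks: rbac {} instead of always_allow {}.
    rbac {}
  }
  // ...
}
```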
Hi again! I just tried out v1.23 and spotted this one triggering the cluster updater. I haven't provided the `secrets` block in my cluster config.