Fabric8 leader election (CAN ONLY GO IN THE NEXT MAJOR RELEASE) #1658
base: main
Conversation
spring.cloud.kubernetes.leader.election.lockNamespace=other-namespace
----

Before the leader election process kicks in, you can wait until the pod is ready (via the readiness check). This is enabled by default, but you can disable it if needed:
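Taken together, the properties under discussion might look like this in `application.properties`. The `wait-for-pod-ready` key is an assumption derived from the `waitForPodReady()` accessor seen later in the diff; check the final documentation for the exact name:

```properties
# namespace in which the lock resource lives (shown in the diff above)
spring.cloud.kubernetes.leader.election.lockNamespace=other-namespace
# opt out of waiting for pod readiness before joining leader election
# (assumed relaxed-binding key for waitForPodReady(); enabled by default)
spring.cloud.kubernetes.leader.election.wait-for-pod-ready=false
```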
What is the point of disabling this if it won't work without it?
There are two reasons I did this. The first is that the current (old) implementation already has such a check: "if the pod is not ready, don't start leader election, re-check after some interval". Having this check in the first place preserves what the old implementation was doing.
The second, and the reason I would like to give users the option to disable it, is: what if readiness is defined in a way that has no influence on leader election? In that case, when pods scale up, disabling the readiness check here means they can start faster.
I am a bit confused by the PR title. If this is disabled by default, I think it can go into main, and the old implementation can then be marked as deprecated. We would enable the new one by default in the next major release and remove the old implementation after that.
Also, do you plan an equivalent implementation for the K8S Java client?
@Bean
@ConditionalOnMissingBean
Lock lock(KubernetesClient fabric8KubernetesClient, LeaderElectionProperties properties, String holderIdentity) {
Would there need to be specific RBAC configuration to allow the app to make this API request?
Also, even if it is supported would it make sense to have a configuration property to force using config maps?
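For reference, Lease-based locks live in the `coordination.k8s.io` API group, so the pod's service account would need RBAC permissions along these lines (all names and the namespace here are illustrative, not taken from the PR):

```yaml
# Role granting the service account access to the Lease lock resource
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-election   # illustrative name
  namespace: other-namespace
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-election
  namespace: other-namespace
subjects:
  - kind: ServiceAccount
    name: default          # illustrative service account
    namespace: other-namespace
roleRef:
  kind: Role
  name: leader-election
  apiGroup: rbac.authorization.k8s.io
```

A ConfigMap-based lock would instead need the analogous verbs on `configmaps` in the core (`""`) API group.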
CompletableFuture<Void> podReadyFuture = new CompletableFuture<>();

// wait until pod is ready
if (leaderElectionProperties.waitForPodReady()) {
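The snippet above gates leader election on a `podReadyFuture`. A self-contained sketch of that pattern, using only `java.util.concurrent` and no Fabric8 types (the `PodReadyWaiter` class and its method names are hypothetical, not the PR's actual code), could look like:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

final class PodReadyWaiter {

    // Completes the returned future once isPodReady reports true,
    // re-checking on a fixed interval (mirrors the old implementation's
    // "if pod is not ready, re-check after some interval" behavior).
    static CompletableFuture<Void> waitForReady(BooleanSupplier isPodReady,
            ScheduledExecutorService scheduler, long intervalMillis) {
        CompletableFuture<Void> podReadyFuture = new CompletableFuture<>();
        scheduler.scheduleWithFixedDelay(() -> {
            if (isPodReady.getAsBoolean()) {
                podReadyFuture.complete(null);
            }
        }, 0, intervalMillis, TimeUnit.MILLISECONDS);
        // stop polling once the future completes
        podReadyFuture.whenComplete((v, t) -> scheduler.shutdown());
        return podReadyFuture;
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        // simulate a readiness probe that starts passing after ~150 ms
        BooleanSupplier ready = () -> System.nanoTime() - start > 150_000_000L;
        CompletableFuture<Void> podReady = waitForReady(ready,
                Executors.newSingleThreadScheduledExecutor(), 50);
        podReady.get(5, TimeUnit.SECONDS); // block until "ready"
        System.out.println("pod ready, starting leader election");
    }
}
```

In the PR itself the readiness signal would come from the Kubernetes API rather than a local supplier, but the future-plus-interval shape is the same.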
This seems like a useful feature. I wonder if it makes sense for it to live in the commons package?
Honestly it would be useful to have in Fabric8 IMO
It can go in
100%. I started to look into that implementation as well, but as with Fabric8, I want to be sure I understand all the details first, so it will take a bit.
not sure, I'll dig more into it.
valid point, it actually might be. I'll work on it.
It should go into main IMO
if you say it can go into
I thought everything was wrapped in a feature flag and it wasn't introducing any breaking changes. If that is not the case, then it will need to wait and cannot go into main right now.
it is protected by a feature flag; I get your point now.
good point! Added it in the documentation
good point again, added such an option
I can't really do that because of the very specific Fabric8 APIs... but maybe I will revisit this idea once I get a better understanding of the native client implementation. I've addressed all of the comments here, I think, so you can take another look now. Users have the option to switch back to the old implementation at any point in time, so the more the new one is used, the more we will polish it. As usual, I'll be there for that work...
LGTM. Once you are satisfied with it, we can merge it into main.
I'm not there yet. I want to work for some time on something else indirectly related to this PR, and then get back to it. I would like to try to refactor some integration tests. The idea is that some integration tests do make sense, but some really need to be moved to plain tests. For example, this leader-election implementation really does need some ITs, but because we already spend so much time running them, we can't add more. So I want to reduce their number first and then add one more here. I'm not sure where that will take me, but I have to try this path first.
No description provided.