Use ConfigMapList's resourceVersion for watching (w/ PropertySource) #300
Conversation
@rdesgroppes I prefer this way of resourceVersion tracking because it uses the property source as a vehicle to deliver and store the value, which is effectively a dynamic configuration property. Though it would be great if the resource version were updated on every add/modify event, so we won't hit a 504 or 410 in case the watcher restarts.
Are you OK with it, or do you prefer #299 for some reason?
#299 is the simplest patch @grollinger and I came up with, but #300 seems more forward-looking, so I'd agree with you. Update: I took the liberty to squash commits (w/ due …
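As a side note on the "update the resource version on every add/modify event" idea above: this PR changes micronaut-kubernetes' own watcher classes, so purely as an illustration the sketch below uses the Fabric8 client instead; the class name is made up, and the initial version passed to the constructor is assumed to come from the `ConfigMapList`.

```java
import io.fabric8.kubernetes.api.model.ConfigMap;
import io.fabric8.kubernetes.client.Watcher;
import io.fabric8.kubernetes.client.WatcherException;

import java.util.concurrent.atomic.AtomicReference;

class ResourceVersionTrackingWatcher implements Watcher<ConfigMap> {

    // Bookmark of the newest resourceVersion seen so far. A restarted watcher
    // can resume from here instead of from a version that may have expired
    // (410 Gone / "Expired").
    private final AtomicReference<String> lastSeenVersion;

    ResourceVersionTrackingWatcher(String listResourceVersion) {
        this.lastSeenVersion = new AtomicReference<>(listResourceVersion);
    }

    @Override
    public void eventReceived(Action action, ConfigMap configMap) {
        if (action == Action.ADDED || action == Action.MODIFIED) {
            // Advance the bookmark on every add/modify event.
            lastSeenVersion.set(configMap.getMetadata().getResourceVersion());
        }
        // ... propagate the change to the corresponding property source ...
    }

    @Override
    public void onClose(WatcherException cause) {
        // When reinstalling the watch, resume from lastSeenVersion() rather
        // than from the original list version.
    }

    String lastSeenVersion() {
        return lastSeenVersion.get();
    }
}
```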
Rationale: avoid `ERROR` notifications during rolling upgrades.

Alternative PR: #299.

Using the greatest `resourceVersion` of all the `ConfigMap`s returned within the `ConfigMapList` works as expected for *fresh* deployments. But when performing a *rolling upgrade* (and depending on the upgrade strategy), the watcher frequently stops after receiving an `ERROR` notification:

> [ERROR] [KubernetesConfigMapWatcher] [] Kubernetes API returned an error for a ConfigMap watch event: ConfigMapWatchEvent{type=ERROR, object=ConfigMap{metadata=Metadata{name='null', namespace='null', uid='null', labels={}, resourceVersion=null}, data={}}}

What's actually streamed in that case is a `Status` object such as:

```json
{
  "type": "ERROR",
  "object": {
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Expired",
    "message": "too old resource version: 123 (456)",
    "reason": "Gone",
    "code": 410
  }
}
```

A few references:

* ManageIQ/kubeclient#452
* https://www.baeldung.com/java-kubernetes-watch#1-resource-versions

It's possible to recover by adding logic to reinstall the watcher starting at the newly advertised `resourceVersion`, but this can be avoided altogether by starting the initial watch at the `resourceVersion` of the `ConfigMapList` itself: that one won't expire.

The proposed implementation consists in storing the last received `resourceVersion` as an additional `PropertySource` (under a dedicated name and version key) and later using it when installing the watcher.

Co-authored-by: Regis Desgroppes <rdesgroppes@users.noreply.github.com>
Co-authored-by: Pavol Gressa <pavol.gressa@oracle.com>
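Purely as an illustration of the mechanism described above (the actual change lives in the project's own watcher classes), a minimal sketch using the Fabric8 client and Micronaut's `PropertySource` API might look like the following; the property-source name and version key below are made-up placeholders, not the ones introduced by this PR:

```java
import io.fabric8.kubernetes.api.model.ConfigMapList;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.micronaut.context.env.Environment;
import io.micronaut.context.env.PropertySource;

import java.util.Map;

class ConfigMapListVersionTracker {

    // Hypothetical names; the PR uses its own dedicated name and version key.
    static final String VERSION_SOURCE_NAME = "kubernetes-configmap-list-version";
    static final String VERSION_KEY = "kubernetes.configmap.list.resource-version";

    /**
     * Lists the ConfigMaps once and stores the resourceVersion of the
     * ConfigMapList itself (which won't expire, unlike the version of an
     * individual ConfigMap) as an additional property source.
     */
    static void storeListVersion(KubernetesClient client, Environment environment, String namespace) {
        ConfigMapList list = client.configMaps().inNamespace(namespace).list();
        String listVersion = list.getMetadata().getResourceVersion();
        environment.addPropertySource(
                PropertySource.of(VERSION_SOURCE_NAME, Map.<String, Object>of(VERSION_KEY, listVersion)));
    }

    /**
     * Reads the stored version back when (re)installing the watcher, so the
     * watch starts from a version the API server still accepts.
     */
    static String versionForWatch(Environment environment) {
        return environment.getProperty(VERSION_KEY, String.class).orElse(null);
    }
}
```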