
Automatically re-label nodes with openebs.io/nodeid during upgrade #456

Open
jnels124 opened this issue Jun 26, 2023 · 5 comments
Labels
help wanted (Need help from community contributors), Need community involvement (Needs community involvement on some action item), New Feature (Request for new feature)

Comments

@jnels124
Contributor

jnels124 commented Jun 26, 2023

Describe the problem/challenge you have
Once issue #450 is resolved, it would be nice if a GKE upgrade could be performed in a hands-off fashion. To do that, we need to identify nodes going down as part of an upgrade and harvest their labels, then identify the nodes coming up as part of the upgrade and attach the labels from the old nodes to the new ones.
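For illustration, the relabeling step itself could look something like the client-go sketch below. This is a minimal sketch, assuming the outgoing node is still visible to the API when the copy happens; `copyNodeID` and its package are hypothetical names, not existing ZFS-LocalPV code:

```go
// Minimal sketch only: copyNodeID is a hypothetical helper, not part of
// ZFS-LocalPV today, and assumes both nodes are still readable via the API.
package upgradewatch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const nodeIDLabel = "openebs.io/nodeid"

// copyNodeID reads the openebs.io/nodeid label from the outgoing node and
// writes it onto the replacement node, so volumes bound to that nodeid
// become schedulable again after the upgrade.
func copyNodeID(ctx context.Context, client kubernetes.Interface, oldName, newName string) error {
	oldNode, err := client.CoreV1().Nodes().Get(ctx, oldName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	newNode, err := client.CoreV1().Nodes().Get(ctx, newName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if newNode.Labels == nil {
		newNode.Labels = map[string]string{}
	}
	newNode.Labels[nodeIDLabel] = oldNode.Labels[nodeIDLabel]
	_, err = client.CoreV1().Nodes().Update(ctx, newNode, metav1.UpdateOptions{})
	return err
}
```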

Describe the solution you'd like
A potential solution is as follows.

Add an event handler to the node informer and use `gke-current-operation: operation_type: UPGRADE_NODES` to identify nodes being added/removed as part of an upgrade:

```go
cs.k8sNodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc:    onNodeAdd,
	UpdateFunc: onUpdateNode,
	DeleteFunc: onDeleteNode,
})
```

In onNodeAdd, we can save a reference to the new node in a cache to be operated on later. Then, in onDeleteNode, we can watch for the node coming down, read its labels, and attach them to the cached node from onNodeAdd. Once the relabeling takes effect, Kubernetes can automatically schedule the pods on the new nodes.

Since multiple nodes can be upgraded at the same time, we will need a way to identify which cached node should be updated when a node is removed. We may be able to use the zone here, since all that really matters for successfully mounting and importing the ZFS pool is that the old and new nodes are in the same zone. Only a single node pool should be upgraded at once, and since all resources within a node pool are expected to be identical, this should work; see the sketch below.
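A rough sketch of what those handlers could look like, keyed by zone as described above. The watcher struct, the `gke-current-operation` annotation parsing, and all names here are assumptions that would need to be validated against GKE's actual behavior:

```go
// Rough sketch only: the upgrade detection, the zone-keyed cache, and the
// label copy are assumptions layered on the proposal above, not tested code.
package upgradewatch

import (
	"context"
	"strings"
	"sync"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const (
	nodeIDLabel = "openebs.io/nodeid"
	zoneLabel   = "topology.kubernetes.io/zone"
)

type upgradeWatcher struct {
	client kubernetes.Interface

	mu sync.Mutex
	// pendingByZone holds new nodes seen during an upgrade, keyed by zone,
	// waiting for the outgoing node in the same zone to surrender its label.
	pendingByZone map[string]string // zone -> new node name
}

// isUpgrading assumes GKE marks nodes under an in-flight upgrade with a
// gke-current-operation annotation containing UPGRADE_NODES.
func isUpgrading(node *corev1.Node) bool {
	return strings.Contains(node.Annotations["gke-current-operation"], "UPGRADE_NODES")
}

// onNodeAdd caches nodes that come up as part of an upgrade.
func (w *upgradeWatcher) onNodeAdd(obj interface{}) {
	node, ok := obj.(*corev1.Node)
	if !ok || !isUpgrading(node) {
		return
	}
	w.mu.Lock()
	defer w.mu.Unlock()
	w.pendingByZone[node.Labels[zoneLabel]] = node.Name
}

// onDeleteNode harvests the nodeid label from the node going down and
// attaches it to the cached replacement node in the same zone.
func (w *upgradeWatcher) onDeleteNode(obj interface{}) {
	oldNode, ok := obj.(*corev1.Node)
	if !ok {
		return // a production version would also unwrap DeletedFinalStateUnknown
	}
	nodeID, hasID := oldNode.Labels[nodeIDLabel]
	if !hasID {
		return
	}
	w.mu.Lock()
	newName, found := w.pendingByZone[oldNode.Labels[zoneLabel]]
	delete(w.pendingByZone, oldNode.Labels[zoneLabel])
	w.mu.Unlock()
	if !found {
		return
	}
	// Re-fetch and relabel the new node so pods pinned to the old nodeid
	// can be scheduled there once the update lands.
	newNode, err := w.client.CoreV1().Nodes().Get(context.TODO(), newName, metav1.GetOptions{})
	if err != nil {
		return
	}
	if newNode.Labels == nil {
		newNode.Labels = map[string]string{}
	}
	newNode.Labels[nodeIDLabel] = nodeID
	_, _ = w.client.CoreV1().Nodes().Update(context.TODO(), newNode, metav1.UpdateOptions{})
}
```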

Anything else you would like to add:
I would like some feedback from the maintainers on implementing the solution described above. Is this functionality you would accept into the repo? Do you have any additional input or guidance you would like to provide? I have no problem taking a stab at the implementation, but I wanted to discuss it here first.

Environment:

  • ZFS-LocalPV version:
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
@laurivosandi

laurivosandi commented Jun 28, 2023

I would actually request making the label configurable. I don't see any reason to specify openebs.io/nodeid for my nodes; I would rather use topology.kubernetes.io/zone or kubernetes.io/hostname, depending on the scenario.

@jnels124
Contributor Author

jnels124 commented Aug 1, 2023

Zone by itself wouldn't be good enough unless, of course, you limited yourself to a single ZFS member in each zone. Otherwise a zone value would match many VMs and make it impossible to schedule onto the one containing the local PV. The key used by the plugin is already configurable, but whatever the key is, its value needs to uniquely identify each node.

Hostname will not work because once this value is set, volumes are bound to it. When replacing nodes, the hostname changes, which means you would have to recreate all of the CRDs (PV, PVC, ZFSVolume, ZFSNode, etc.) to reference the correct ZFS node.

@hrudaya21
Contributor

For issue #450, PR #451 is under review. Once it is merged, this issue can be looked at.

@sinhaashish added the New Feature, help wanted, and Need community involvement labels Jun 6, 2024
@sinhaashish
Member

@jnels124 would you like to contribute this feature?

@avishnu
Member

avishnu commented Oct 1, 2024

Thanks @jnels124 for your contribution (PR #451). Would you be interested in taking a stab at this issue?
