Allow automatic recreation on LB ERROR state to be disabled #2596

Open · wants to merge 1 commit into base: master
10 changes: 7 additions & 3 deletions pkg/openstack/loadbalancer.go
@@ -88,8 +88,9 @@ const (
 	ServiceAnnotationTlsContainerRef = "loadbalancer.openstack.org/default-tls-container-ref"
 	// revive:enable:var-naming
 	// See https://nip.io
-	defaultProxyHostnameSuffix = "nip.io"
-	ServiceAnnotationLoadBalancerID = "loadbalancer.openstack.org/load-balancer-id"
+	defaultProxyHostnameSuffix                   = "nip.io"
+	ServiceAnnotationLoadBalancerID              = "loadbalancer.openstack.org/load-balancer-id"
+	ServiceAnnotationLoadBalancerRecreateOnError = "loadbalancer.openstack.org/recreate-on-error"

 	// Octavia resources name formats
 	servicePrefix = "kube_service_"
@@ -306,8 +307,11 @@ func (lbaas *LbaasV2) createOctaviaLoadBalancer(name, clusterName string, servic
 		svcConf.lbMemberSubnetID = loadbalancer.VipSubnetID
 	}

+	// Allow users to disable automatic recreation on Octavia ERROR state
+	recreateOnError := getBoolFromServiceAnnotation(service, ServiceAnnotationLoadBalancerRecreateOnError, true)
Comment on lines +310 to +311
Contributor
This is exposing a debugging function to the end users. I think I'd rather make it an option in the OCCM configuration, so that an administrator can turn it on and investigate what's happening. What do you think?
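(For illustration only, a minimal sketch of what such a cluster-wide option could look like. The [LoadBalancer] option name recreate-on-error and the config struct below are hypothetical and not part of OCCM; the snippet only assumes gcfg-style INI parsing of a cloud-config-like file.)

package main

import (
	"fmt"

	gcfg "gopkg.in/gcfg.v1"
)

// Hypothetical config struct for this sketch only; the real OCCM load balancer
// options live in cloud-provider-openstack's own config types.
type sketchConfig struct {
	LoadBalancer struct {
		// false would keep LBs in ERROR state for debugging instead of
		// deleting and recreating them.
		RecreateOnError bool `gcfg:"recreate-on-error"`
	}
}

func main() {
	// Example cloud-config snippet an administrator could set cluster-wide.
	cloudConfig := `
[LoadBalancer]
recreate-on-error = false
`
	var cfg sketchConfig
	cfg.LoadBalancer.RecreateOnError = true // default: keep today's recreate behaviour
	if err := gcfg.ReadStringInto(&cfg, cloudConfig); err != nil {
		panic(err)
	}
	fmt.Println("recreate-on-error:", cfg.LoadBalancer.RecreateOnError)
}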

Author
@dulek Thanks for your review! :) In general I agree, but in our Managed Kubernetes setup users wouldn't be able to change the OCCM configuration, because it isn't exposed to them in an editable way. Implementing it as an OCCM configuration option would also affect all LBs, while my implementation limits the functionality to a single LB.

Contributor
Um... does this mean every configuration option we have for OCCM will also not be available for managed k8s?

I'm not sure what other situations like this we have, or how we handle them.

Author (baurmatt, May 23, 2024)
> Um... does this mean every configuration option we have for OCCM will also not be available for managed k8s?

Just to clarify, when I'm talking about OCCM config I mean the cloud-config file/secret. In our managed k8s setup, we don't have (persistent) writable access to it, because the cloud provider runs on the master nodes, which we don't have access to. cloud-config is read-only accessible on the worker nodes for csi-cinder-nodeplugin. So yes, other options aren't (freely) configurable for us either.

Contributor
@baurmatt: I don't believe users of managed K8s should really be trying to debug things on the Octavia side. Can you provide an example where keeping the Octavia LB in ERROR state aids debugging? Regular users should not have access to amphora resources (admin-only API) or to the Nova VMs backing amphorae (these should live in a service tenant). The LB itself does not expose any debugging information. The Nova VM does expose the error, but most of the time it's a NoValidHost anyway, so scheduler logs are required to do the debugging.

Author
@dulek For background: I created a LoadBalancer Service with loadbalancer.openstack.org/network-id: $uuid and loadbalancer.openstack.org/member-subnet-id: $uuid, which failed because one of the UUIDs was wrong. Thus the LB was recreated over and over. This was hard for the OpenStack team to debug, because they only had seconds to take a look at the Octavia LB before it was deleted by the cloud provider. Keeping it in ERROR state allowed for easier debugging on my side and on the OpenStack team's side.

Contributor
Was the error missing from kubectl describe <svc-name>? We emit events that should be enough to debug such problems. What was Octavia returning? Just a normal 201?

Author
@dulek It only shows that the load balancer went into ERROR state:

$ kubectl describe service cloudnative-pg-cluster-primary2
...
Events:
  Type     Reason                  Age                   From                Message
  ----     ------                  ----                  ----                -------
  Warning  SyncLoadBalancerFailed  29m                   service-controller  Error syncing load balancer: failed to ensure load balancer: error creating loadbalancer kube_service_2mqlgjjphg_cloudnative-pg-cluster_cloudnative-pg-cluster-primary2: loadbalancer has gone into ERROR state
  Normal   EnsuringLoadBalancer    24m (x7 over 31m)     service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  24m (x6 over 29m)     service-controller  Error syncing load balancer: failed to ensure load balancer: load balancer 71f3ac5c-6740-4fca-8d57-a62d30697629 is not ACTIVE, current provisioning status: ERROR
  Normal   EnsuringLoadBalancer    2m34s (x10 over 22m)  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  2m34s (x10 over 22m)  service-controller  Error syncing load balancer: failed to ensure load balancer: load balancer 71f3ac5c-6740-4fca-8d57-a62d30697629 is not ACTIVE, current provisioning status: ERROR

Contributor
Hm, I see, though in this case you still end up with an LB in ERROR state, and I still don't see how keeping it there is helpful for debugging. Maybe seeing the full LB resource helps, as then you can see the wrong ID, but we could solve that use case by making sure that, at some more granular log level, we log the full request made to Octavia by Gophercloud, instead of adding a new option.

I can also see value in CPO validating network and subnet IDs before creating the LB.
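(As a rough illustration of that idea, a sketch of what such a pre-flight check could look like, assuming the Gophercloud v1 networking API (networks.Get / subnets.Get). The function name, package and error wording are made up for this example and are not part of CPO.)

package sketch

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/networks"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/subnets"
)

// validateLBNetworking is a hypothetical pre-flight check: it fails fast with a
// descriptive error instead of letting Octavia create a load balancer that
// immediately goes into ERROR state because of a wrong UUID.
func validateLBNetworking(netClient *gophercloud.ServiceClient, networkID, memberSubnetID string) error {
	if networkID != "" {
		if _, err := networks.Get(netClient, networkID).Extract(); err != nil {
			return fmt.Errorf("network %q from loadbalancer.openstack.org/network-id could not be found: %v", networkID, err)
		}
	}
	if memberSubnetID != "" {
		if _, err := subnets.Get(netClient, memberSubnetID).Extract(); err != nil {
			return fmt.Errorf("subnet %q from loadbalancer.openstack.org/member-subnet-id could not be found: %v", memberSubnetID, err)
		}
	}
	return nil
}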

Author
Sorry for the late reply, I've been on vacation. It didn't help me directly, because as a user I still wasn't able to get more information. But once I was able to give our cloud operations team the ID, they were able to debug the problem and tell me the reason.


 	if loadbalancer, err = openstackutil.WaitActiveAndGetLoadBalancer(lbaas.lb, loadbalancer.ID); err != nil {
-		if loadbalancer != nil && loadbalancer.ProvisioningStatus == errorStatus {
+		if loadbalancer != nil && loadbalancer.ProvisioningStatus == errorStatus && recreateOnError {
 			// If LB landed in ERROR state we should delete it and retry the creation later.
 			if err = lbaas.deleteLoadBalancer(loadbalancer, service, svcConf, true); err != nil {
 				return nil, fmt.Errorf("loadbalancer %s is in ERROR state and there was an error when removing it: %v", loadbalancer.ID, err)
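(For reference, a minimal sketch of how a user could opt a single Service out of the automatic recreation via the annotation added by this PR, assuming it accepts "true"/"false" like other boolean CPO annotations. The Service name and port are placeholders; the snippet builds the manifest with client-go types and prints it as YAML.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Placeholder Service that keeps its Octavia LB in ERROR state for
	// debugging instead of having it deleted and recreated.
	svc := corev1.Service{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "example-lb",
			Annotations: map[string]string{
				"loadbalancer.openstack.org/recreate-on-error": "false",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:  corev1.ServiceTypeLoadBalancer,
			Ports: []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	out, err := yaml.Marshal(&svc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}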