
Service flavor implementation #114

Merged 1 commit into main on Oct 10, 2024

Conversation

andreacv98
Contributor

Hi everyone,

With this PR we finally introduce actual support for the Service Flavor in the FLUIDOS Node.

The Service Flavor is simply a software service managed by the provider. The flavor is created starting from the new "ServiceBlueprint" CRD, which defines the name, configuration, and templates of the service that the provider will create for the consumer.
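As a rough illustration, a ServiceBlueprint manifest could look like the sketch below. The field names under spec are assumptions made for this example only; the authoritative schema is defined by the CRDs in the repository and documented in usage.md.

```yaml
# Hypothetical ServiceBlueprint sketch: the fields under "spec" are
# illustrative assumptions, not the authoritative schema (see usage.md).
apiVersion: nodecore.fluidos.eu/v1alpha1
kind: ServiceBlueprint
metadata:
  name: db-blueprint
  namespace: fluidos
spec:
  name: postgres-service          # name of the service being offered
  category: database              # kind of service the provider manages
  description: Managed PostgreSQL instance
  templates:                      # templates the provider instantiates
    - name: postgres-deployment
```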

Once the flavor exists, the procedure is the same as usual: create a Solver that handles everything, or control the individual steps (discovery, reservation, purchase, and allocation) by creating the appropriate resources in the right order. The allocation process differs, however, in who actually starts the allocation. With a K8Slice, the allocation and the actual creation of resources in the consumer cluster are initiated by the consumer, while for a Service the consumer side is passive: the peering establishment, resource allocation, and resource creation are performed by the provider right after the purchase phase, and the consumer-side Allocation only monitors this operation and reports what the provider is doing (establishing the peering, creating the namespace, creating resources, etc.). This procedure will probably change once Liqo 1.0.0 is adopted, moving to a logic where the consumer creates the Allocation and the provider-side allocation starts at that moment.
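The ordered steps above (discovery, reservation, purchase, allocation) can be sketched as a simple phase progression. The phase names and helper below are illustrative only, not the actual CRD phase values or types used by the FLUIDOS Node:

```go
package main

import "fmt"

// phases lists the REAR procedure steps in the order described above.
// These string values are an illustrative sketch, not the actual
// FLUIDOS Node phase constants.
var phases = []string{"Discovery", "Reservation", "Purchase", "Allocation"}

// nextPhase returns the phase that follows the given one, or the empty
// string when the procedure is complete or the phase is unknown.
func nextPhase(current string) string {
	for i, p := range phases {
		if p == current && i+1 < len(phases) {
			return phases[i+1]
		}
	}
	return ""
}

func main() {
	fmt.Println(nextPhase("Discovery")) // Reservation
	fmt.Println(nextPhase("Purchase"))  // Allocation
}
```

Controlling the steps manually means creating the resource for each phase only after the previous phase has completed; the all-in-one Solver automates exactly this progression.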

More information about the usage of the service flavors can be found in the usual documentation (usage.md).

Any feedback would be more than welcome, so feel free to give your inputs.

Thanks

Andrea

@andreacv98 andreacv98 self-assigned this Oct 9, 2024
@fluidos-bot

The generated artifacts appear to be out-of-date.

Please ensure you are using the correct version of the generators (e.g. controller-gen) and re-run:

make generate
Here is an excerpt of the diff:
diff --git a/pkg/rear-manager/allocation_controller.go b/pkg/rear-manager/allocation_controller.go
index ee72217..b6380fc 100644
--- a/pkg/rear-manager/allocation_controller.go
+++ b/pkg/rear-manager/allocation_controller.go
@@ -494,7 +494,7 @@ func (r *AllocationReconciler) handleServiceProviderAllocation(ctx context.Conte
 				klog.Errorf("Error when updating Allocation %s status: %v", req.NamespacedName, err)
 				return ctrl.Result{}, err
 			}
-		}		
+		}
 
 		// TODO(Service): check if the service software applied is working correctly, maybe checking the pods deployed in the namespace offloaded.
 
diff --git a/pkg/rear-manager/solver_controller.go b/pkg/rear-manager/solver_controller.go
index c56e92e..5e64a4b 100644
--- a/pkg/rear-manager/solver_controller.go
+++ b/pkg/rear-manager/solver_controller.go
@@ -401,18 +401,18 @@ func (r *SolverReconciler) handleReserveAndBuy(ctx context.Context, req ctrl.Req
 					k8sliceSelector := solverTypeData.(nodecorev1alpha1.K8SliceSelector)
 					klog.Infof("K8S Slice Selector: %v", k8sliceSelector)
 
-						// Parse flavor from PeeringCandidate
-						flavorTypeIdentifier, flavorData, err := nodecorev1alpha1.ParseFlavorType(&pc.Spec.Flavor)
-						if err != nil {
-							klog.Errorf("Error when parsing Flavor for Solver %s: %s", solver.Name, err)
-							return ctrl.Result{}, err
-						}
-						if flavorTypeIdentifier != nodecorev1alpha1.TypeK8Slice {
-							klog.Errorf("Flavor type is different from K8Slice as expected by the Solver selector for Solver %s", solver.Name)
-						}
-
-						// Force casting
-						k8slice := flavorData.(nodecorev1alpha1.K8Slice)
+					// Parse flavor from PeeringCandidate
+					flavorTypeIdentifier, flavorData, err := nodecorev1alpha1.ParseFlavorType(&pc.Spec.Flavor)
+					if err != nil {
+						klog.Errorf("Error when parsing Flavor for Solver %s: %s", solver.Name, err)
+						return ctrl.Result{}, err
+					}
+					if flavorTypeIdentifier != nodecorev1alpha1.TypeK8Slice {
+						klog.Errorf("Flavor type is different from K8Slice as expected by the Solver selector for Solver %s", solver.Name)
+					}
+
+					// Force casting
+					k8slice := flavorData.(nodecorev1alpha1.K8Slice)
 
 					k8SliceConfiguration := resourceforge.ForgeK8SliceConfiguration(k8sliceSelector, &k8slice)
 					// Convert the K8SlicePartition to JSON
@@ -650,14 +650,14 @@ func (r *SolverReconciler) handlePeering(
 				break
 			}
 		}
-		
+
 		common.AllocationStatusCheck(solver, allocation)
 
 		if err := r.updateSolverStatus(ctx, solver); err != nil {
 			klog.Errorf("Error when updating Solver %s status: %s", req.NamespacedName, err)
 			return ctrl.Result{}, err
 		}
-		
+
 		return ctrl.Result{}, nil
 	case nodecorev1alpha1.PhaseFailed:
 		klog.Infof("Solver %s has failed to establish a peering", req.NamespacedName.Name)
diff --git a/pkg/utils/consts/consts.go b/pkg/utils/consts/consts.go
index 83deeea..32f140a 100644
--- a/pkg/utils/consts/consts.go
+++ b/pkg/utils/consts/consts.go
@@ -24,7 +24,7 @@ const (
 	LiqoRemoteClusterIDLabel      = "liqo.io/remote-cluster-id"
 	FluidosContractLabel          = "reservation.fluidos.eu/contract"
 	FluidosServiceCredentials     = "nodecore.fluidos.eu/flavor-service-credentials"
-	FluidosServiceEndpoint		= "nodecore.fluidos.eu/flavor-service-endpoint"
+	FluidosServiceEndpoint        = "nodecore.fluidos.eu/flavor-service-endpoint"
 )
 
 // ServiceCategory represents a category of a service
diff --git a/pkg/utils/resourceforge/forge.go b/pkg/utils/resourceforge/forge.go
index 30e948d..0ba7151 100644
--- a/pkg/utils/resourceforge/forge.go
+++ b/pkg/utils/resourceforge/forge.go
@@ -1307,7 +1307,7 @@ func ForgeSecretForService(ctx context.Context, client client.Client, contract *
 	var endpoints = make([]string, 0)
 	if serviceEndpoint != nil {
 		// Generate service dns name with default k8s dns resolution
-		endpointsHost := string(serviceEndpoint.Name+"."+serviceEndpoint.Namespace+".svc.cluster.local")
+		endpointsHost := string(serviceEndpoint.Name + "." + serviceEndpoint.Namespace + ".svc.cluster.local")
 		for _, port := range serviceEndpoint.Spec.Ports {
 			endpoints = append(endpoints, fmt.Sprintf("%s:%d", endpointsHost, port.Port))
 		}
@@ -1329,9 +1329,9 @@ func ForgeSecretForService(ctx context.Context, client client.Client, contract *
 			},
 			StringData: map[string]string{
 				"endpoints": stringEndpoint,
-				"username": configurationDataMap["username"].(string),
-				"password": configurationDataMap["password"].(string),
-				"database": configurationDataMap["database"].(string),
+				"username":  configurationDataMap["username"].(string),
+				"password":  configurationDataMap["password"].(string),
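The last forge.go hunk builds the service endpoints from the Kubernetes Service's default in-cluster DNS name (name.namespace.svc.cluster.local), one entry per exposed port. Extracted into a standalone sketch (the function name and signature below are illustrative, not the actual ForgeSecretForService API):

```go
package main

import "fmt"

// forgeEndpoints mirrors the pattern in the forge.go diff above: the
// endpoint host is the default in-cluster DNS name of the Kubernetes
// Service, and one "host:port" string is produced per exposed port.
// Name and parameters are illustrative, not the actual FLUIDOS API.
func forgeEndpoints(name, namespace string, ports []int32) []string {
	host := fmt.Sprintf("%s.%s.svc.cluster.local", name, namespace)
	endpoints := make([]string, 0, len(ports))
	for _, port := range ports {
		endpoints = append(endpoints, fmt.Sprintf("%s:%d", host, port))
	}
	return endpoints
}

func main() {
	fmt.Println(forgeEndpoints("db", "fluidos-demo", []int32{5432}))
	// [db.fluidos-demo.svc.cluster.local:5432]
}
```

These endpoints are what the consumer receives in the Secret, alongside the username, password, and database credentials shown in the second hunk.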

@andreacv98 andreacv98 force-pushed the features/service-flavor branch 4 times, most recently from 82eddf4 to 9a00c6d on October 9, 2024 19:53
@fluidos-bot

The generated artifacts appear to be out-of-date.

Please ensure you are using the correct version of the generators (e.g. controller-gen) and re-run:

make generate
Here is an excerpt of the diff:
diff --git a/pkg/rear-manager/allocation_controller.go b/pkg/rear-manager/allocation_controller.go
index ef039e6..ef47d9d 100644
--- a/pkg/rear-manager/allocation_controller.go
+++ b/pkg/rear-manager/allocation_controller.go
@@ -554,7 +554,7 @@ func (r *AllocationReconciler) handleServiceProviderAllocation(ctx context.Conte
 			klog.Errorf("Error when updating Allocation %s status: %v", req.NamespacedName, err)
 			return ctrl.Result{}, err
 		}
-		
+
 		return ctrl.Result{}, nil
 	case nodecorev1alpha1.Peering:
 		// Create peering with the consumer

Contributor

@fracappa fracappa left a comment


LGTM, just adding a few comments for something that's not clear.

apis/nodecore/v1alpha1/allocation_types.go
apis/nodecore/v1alpha1/common.go
apis/nodecore/v1alpha1/common.go
Contributor

@stefano81 stefano81 left a comment


LGTM, minor comments

apis/nodecore/v1alpha1/common.go
pkg/rear-controller/gateway/utils.go
pkg/rear-controller/gateway/utils.go
@andreacv98 andreacv98 merged commit 2e48100 into main Oct 10, 2024
6 checks passed
@fracappa fracappa deleted the features/service-flavor branch October 15, 2024 13:38
4 participants