Really good summary, thanks! So far I think that testing on kind and minikube is a really good indicator that OLM will work on Kubernetes-conformant clusters and not only on OpenShift. We should decide on supporting different modes of running OLM (e.g. with different Pod Security Standards), which would cover different setups across Kubernetes distros. After that, in my opinion, we should rely on community feedback and avoid investing in something without understanding how big the demand for it is. I would love to have E2E against major distros/providers, since it gives more confidence in what we release. But given that OLM upstream doesn't use (as far as I know) any distro-specific or cloud-specific API, it feels redundant.
I think these two might create a situation where a release is blocked because we catch a regression only just before cutting the release. Having periodic (e.g. nightly/weekly) jobs could improve reaction speed, assuming that we get good at reacting to failed periodic jobs (from my experience they are often overlooked).
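A periodic job of that kind could be as simple as a scheduled CI workflow. The following is only a hedged sketch, assuming GitHub Actions; the workflow name, the `helm/kind-action` step, and the `make test-e2e` target are illustrative placeholders, not the project's actual setup:

```yaml
# Hypothetical sketch of a nightly E2E job (names and targets are placeholders).
name: nightly-e2e
on:
  schedule:
    - cron: "0 3 * * *"   # every night at 03:00 UTC
jobs:
  e2e-kind:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create kind cluster
        uses: helm/kind-action@v1
      - name: Run E2E suite
        run: make test-e2e   # placeholder for the project's E2E entry point
```

The same cron trigger could run a weekly job against a hosted distribution if that path is ever pursued; the reaction-speed concern above then becomes a question of who watches these runs.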
Follow-up of the June 13, 2023 community call, where we discussed whether there is appetite for validating OLM on Kubernetes distributions other than kind and minikube.
We discussed two aspects:
EKS, AKS, GKE, and other widely adopted hosted Kubernetes platforms or distributions all go through the Kubernetes conformance certification process. This ensures common behavior when interacting with the Kubernetes API or leveraging webhooks.
That said, these distributions may come with different default configurations, e.g. Pod Security Standards, that can impact the experience around OLM. On this specific aspect:
OLM is mostly used with OpenShift, although it is tested on kind and minikube, and there is no known issue preventing its usage on other Kubernetes distributions. The feeling is that "demonstrating" its usage on, and the project's commitment to, other Kubernetes distributions may help grow the community and increase the number and diversity of contributors. This would bring new ideas and hands, which is a characteristic of thriving open source communities.
As OLM is currently transitioning from the historical design to the new component-based approach, referred to as OLM v1, maintainers of the OLM project expressed that such an effort should take place with OLM v1 and not with the previous versions. We should seek to increase adoption with what is becoming our offering rather than with something that will be retired next.
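The Pod Security Standards differences mentioned above can already be exercised on a plain kind or minikube cluster by labeling the namespace OLM runs in. A minimal sketch, assuming a cluster with the Pod Security admission controller enabled (the namespace name `olm` is a placeholder, not necessarily where OLM is deployed):

```shell
# Enforce the "restricted" Pod Security Standard on the namespace
# where OLM components run ("olm" is a placeholder name).
kubectl label --overwrite namespace olm \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest
```

Running the existing E2E suite against namespaces labeled `privileged`, `baseline`, and `restricted` would be one low-cost way to cover the "different modes" idea without any distro-specific infrastructure.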
Cost and approach
Cost was mentioned as something to take into consideration, although infrastructure may get "sponsored". Summarizing the different approaches and their relative cost levels:
Looking at what others are doing:
- Kubernetes validation is moving to CNCF infrastructure (mainly GCP); Prow jobs running on the CNCF infra can be seen here.
- OCM is using kind.
- Tekton is using GCP/GKE.
Please share your views on the matter. cc people involved in the discussion @awgreene @m1kola @anik120 @joelanford @Jamstah @pgodowski