diff --git a/OWNERS.md b/OWNERS.md index a2bedf7af..49e30e334 100644 --- a/OWNERS.md +++ b/OWNERS.md @@ -1,9 +1,6 @@ approvers: - JimBugwadia - realshuting -- chipzoller -- MarcelMue -- treydock - eddycharly - fjogeleit - MariamFahmy98 @@ -12,10 +9,7 @@ approvers: reviewers: - JimBugwadia - realshuting -- chipzoller - eddycharly - MariamFahmy98 - vishal-chdhry -- treydock -- MarcelMue - fjogeleit \ No newline at end of file diff --git a/config/_default/menus/menu.en.toml b/config/_default/menus/menu.en.toml index 13ede2352..679edcdd5 100644 --- a/config/_default/menus/menu.en.toml +++ b/config/_default/menus/menu.en.toml @@ -3,7 +3,7 @@ [[main]] name = "About" weight = -103 - url = "#kyverno-is-a-policy-engine-designed-for-kubernetes" + url = "#about-kyverno" [[main]] name = "Documentation" diff --git a/content/en/_index.md b/content/en/_index.md index 654e8ac86..59867c9b4 100644 --- a/content/en/_index.md +++ b/content/en/_index.md @@ -4,7 +4,7 @@ linkTitle = "Kyverno" +++ {{< blocks/cover title="Kyverno" image_anchor="top" height="full" color="dark" >}} -# Cloud Native Policy Management { class="text-center" } +# Policy Management. Simplified. { class="text-center" }
@@ -28,14 +28,22 @@ linkTitle = "Kyverno"

-Kyverno is a policy engine built for Kubernetes and cloud native environments. +Policy Management for Kubernetes and cloud native environments.


-Kyverno policies are declarative Kubernetes resources and no new language is required to write policies. This allows using familiar tools such as kubectl, git, and kustomize to manage policies. Kyverno policies can validate, mutate, generate, and cleanup any Kubernetes resource, including custom resrources. To help secure the software supply chain Kyverno policies can verify OCI container image signatures and artifacts. +Kyverno policies are declarative YAML resources and no new language is required to write policies. This allows using familiar tools such as kubectl, git, and kustomize to manage policies. For efficient handling of complex logic, Kyverno supports both JMESPath and the Common Expression Language (CEL). -The **Kyverno CLI** can be used to test policies and validate resources off-cluster e.g. as part of a CI/CD pipeline. Kyverno policy reports and policy exceptions are also Kubernetes resources. The **Policy Reporter** provides in-cluster report management with a graphical web-based user interface. **Kyverno JSON** allows applying Kyverno policies in non-Kubernetes environments and on any JSON payload. **Kyverno Chainsaw** provides declarative end-to-end testing for policies and controllers. +In Kubernetes environments, Kyverno policies can validate, mutate, generate, and cleanup any Kubernetes resource, including custom resources. To help secure the software supply chain, Kyverno policies can verify OCI container image signatures and artifacts. Kyverno policy reports and policy exceptions are also Kubernetes API resources. + +The **Kyverno CLI** can be used to apply and test policies off-cluster, e.g., as part of a CI/CD pipeline. + +**Kyverno Policy Reporter** provides in-cluster report management with a graphical web-based user interface. + +**Kyverno JSON** allows applying Kyverno policies in non-Kubernetes environments and on any JSON payload. + +**Kyverno Chainsaw** provides declarative end-to-end testing for policies.
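As a sketch of what these declarative YAML policies look like (the policy name and required label are illustrative), a single validation rule that requires a `team` label on Pods:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels        # illustrative name
spec:
  validationFailureAction: Enforce   # block non-compliant resources; Audit only reports
  rules:
  - name: check-team
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "label 'team' is required"
      pattern:
        metadata:
          labels:
            team: "?*"         # any non-empty value
```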

diff --git a/content/en/docs/installation/customization.md b/content/en/docs/installation/customization.md index 1e7139545..aba4033f0 100644 --- a/content/en/docs/installation/customization.md +++ b/content/en/docs/installation/customization.md @@ -153,10 +153,14 @@ Kyverno uses Secrets created above to setup TLS communication with the Kubernete You can now install Kyverno by selecting one of the available methods from the [installation section](methods.md). -### Roles and Permissions +### Role Based Access Controls -Kyverno creates several Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings some of which may need to be customized depending on additional functionality required. To view all ClusterRoles and Roles associated with Kyverno, use the command `kubectl get clusterroles,roles -A | grep kyverno`. +Kyverno uses Kubernetes Role Based Access Control (RBAC) to configure the permissions its controllers need to access other resources. +Kyverno creates several Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings, some of which may need to be customized depending on additional functionality required. + +To view all ClusterRoles and Roles associated with Kyverno, use the command `kubectl get clusterroles,roles -A | grep kyverno`. + #### Roles Kyverno creates the following Roles in its Namespace, one per controller type: @@ -183,33 +187,53 @@ Kyverno uses [aggregated ClusterRoles](https://kubernetes.io/docs/reference/acce The following `ClusterRoles` provide Kyverno with permissions to policies and other Kubernetes resources across all Namespaces. 
-* `kyverno:admission-controller:core`: aggregate ClusterRole for the admission controller -* `kyverno:admission-controller`: aggregated (top-level) ClusterRole for the admission controller -* `kyverno:reports-controller:core`: aggregate ClusterRole for the reports controller -* `kyverno:reports-controller`: aggregated (top-level) ClusterRole for the reports controller -* `kyverno:background-controller:core`: aggregate ClusterRole for the background controller -* `kyverno:background-controller`: aggregated (top-level) ClusterRole for the background controller -* `kyverno:cleanup-controller:core`: aggregate ClusterRole for the cleanup controller -* `kyverno:cleanup-controller`: aggregated (top-level) ClusterRole for the cleanup controller -* `kyverno-cleanup-jobs`: used by the helper CronJob to periodically remove excessive/stale admission reports if found -* `kyverno:rbac:admin:policies`: aggregates to admin the ability to fully manage Kyverno policies -* `kyverno:rbac:admin:policyreports`: aggregates to admin the ability to fully manage Policy Reports -* `kyverno:rbac:admin:reports`: aggregates to admin the ability to fully manage intermediary admission and background reports -* `kyverno:rbac:admin:updaterequests`: aggregates to admin the ability to fully manage UpdateRequests, intermediary resource for generate rules -* `kyverno:rbac:view:policies`: aggregates to view the ability to view Kyverno policies -* `kyverno:rbac:view:policyreports`: aggregates to view the ability to view Policy Reports -* `kyverno:rbac:view:reports`: aggregates to view the ability to view intermediary admission and background reports -* `kyverno:rbac:view:updaterequests`: aggregates to view the ability to view UpdateRequests, intermediary resource for generate rules +Role Binding | Service Account | Role +---------------------------------- | ------------------------------------ | ------------- +kyverno:admission-controller | kyverno-admission-controller | kyverno:admission-controller 
+kyverno:admission-controller:view | kyverno-admission-controller | view +kyverno:admission-controller:core | -- | -- +kyverno:background-controller | kyverno-background-controller | kyverno:background-controller +kyverno:background-controller:view | kyverno-background-controller | view +kyverno:background-controller:core | -- | -- +kyverno:cleanup-controller | kyverno-cleanup-controller | kyverno:cleanup-controller +kyverno:cleanup-controller:core | -- | -- +kyverno:reports-controller | kyverno-reports-controller | kyverno:reports-controller +kyverno:reports-controller:view | kyverno-reports-controller | view +kyverno:reports-controller:core | -- | -- + {{% alert title="Note" color="info" %}} -Most Kyverno controllers' ClusterRoles include a rule which allows for `get`, `list`, and `read` permissions to all resources in the cluster. This is to ensure Kyverno functions smoothly despite the type and subject of future-installed policies. If this rule is removed, users must manually create and manage a number of different ClusterRoles applicable across potentially multiple controllers depending on the type and configuration of installed policies. +The Kyverno admission, background, and reports controllers have a role binding to the built-in `view` role. This gives these controllers view access to most namespaced resources. You can customize this role during Helm installation using variables like `admissionController.rbac.viewRoleName`. {{% /alert %}} #### Customizing Permissions -Because the ClusterRoles used by Kyverno use the [aggregation feature](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles), extending the permission for Kyverno's use in cases like mutate existing or generate rules or generating ValidatingAdmissionPolicies is a simple matter of creating one or more new ClusterRoles which use the appropriate labels. It is not necessary to modify any existing ClusterRoles created as part of the Kyverno installation.
Doing so is not recommended as changes may be lost during an upgrade. Since there are multiple controllers each with their own ServiceAccount, granting Kyverno additional permissions involves identifying the correct controller and using the labels needed to aggregate to that ClusterRole. +Kyverno's default permissions are designed to cover commonly used resources that are not security-critical. Hence, Kyverno must be configured with additional permissions to operate on custom resources (CRDs) or to access security-critical resources. + +The ClusterRoles installed by Kyverno use the [cluster role aggregation feature](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles), making it easy to extend the permissions for Kyverno's controllers. To extend a controller's permissions, add a new role with one or more of the following labels: + +Controller | Role Aggregation Label +---------------------- | ----------------------------- +admission-controller | rbac.kyverno.io/aggregate-to-admission-controller: "true" +background-controller | rbac.kyverno.io/aggregate-to-background-controller: "true" +reports-controller | rbac.kyverno.io/aggregate-to-reports-controller: "true" +cleanup-controller | rbac.kyverno.io/aggregate-to-cleanup-controller: "true" + +To avoid upgrade issues, it is highly recommended that the default roles are not modified and that new roles are used to extend them. -For example, if a new Kyverno generate policy requires that Kyverno be able to create or modify Deployments, this is not a permission Kyverno has by default. Generate rules are handled by the background controller and so it will be necessary to create a new ClusterRole and assign it the aggregation labels specific to the background controller in order for those permissions to take effect. 
+Since there are multiple controllers each with their own ServiceAccount, granting Kyverno additional permissions involves identifying the correct controller and using the labels needed to aggregate to that ClusterRole. The table below identifies required permissions for Kyverno features: + +Controller | Permission Verbs | Required For +---------------------- | ----- | ------------------------------- +admission-controller | view, list, ... | API Calls +admission-controller | view, list, watch | Global Context +background-controller | update, view, list, watch | Mutate Policies +background-controller | create, update, delete, view, list, watch | Generate Policies +reports-controller | view, list, watch | Policy Reports +cleanup-controller | delete, view, list, watch | Cleanup Policies + + +For example, if a new Kyverno generate policy requires that Kyverno be able to create and update Deployments, new permissions need to be provided. Generate rules are handled by the background controller and so it will be necessary to create a new ClusterRole and assign it the aggregation labels specific to the background controller in order for those permissions to take effect. This sample ClusterRole provides the Kyverno background controller additional permissions to create Deployments: @@ -217,11 +241,9 @@ This sample ClusterRole provides the Kyverno background controller additional pe apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: - labels: - app.kubernetes.io/component: background-controller - app.kubernetes.io/instance: kyverno - app.kubernetes.io/part-of: kyverno name: kyverno:create-deployments + labels: + rbac.kyverno.io/aggregate-to-background-controller: "true" rules: - apiGroups: - apps @@ -229,15 +251,51 @@ rules: - deployments verbs: - create + - update ``` -Once a supplemental ClusterRole has been created, get the top-level ClusterRole for that controller to ensure aggregation has occurred. 
+Once a supplemental ClusterRole has been created, check the top-level ClusterRole for that controller to ensure aggregation has occurred. ```sh kubectl get clusterrole kyverno:background-controller -o yaml ``` -Generating Kubernetes ValidatingAdmissionPolicies and their bindings are handled by the admission controller and it will be necessary to grant the controller the required permissions to generate these types. In this scenario, a ClusterRole should be created and assigned the aggregation labels for the admission controller in order for those permissions to take effect. +Similarly, if Kyverno validate and mutate policies operate on a custom resource, the background and reports controllers need to be granted permissions to manage that resource: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: kyverno:crontab:edit + labels: + rbac.kyverno.io/aggregate-to-background-controller: "true" +rules: +- apiGroups: + - stable.example.com + resources: + - crontabs + verbs: + - update +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: kyverno:crontab:view + labels: + rbac.kyverno.io/aggregate-to-background-controller: "true" + rbac.kyverno.io/aggregate-to-reports-controller: "true" +rules: +- apiGroups: + - stable.example.com + resources: + - crontabs + verbs: + - get + - list + - watch +``` + +Generating Kubernetes ValidatingAdmissionPolicies and their bindings is handled by the admission controller, and it is necessary to grant the controller the required permissions to generate these types. For this, a ClusterRole should be created and assigned the aggregation labels for the admission controller in order for those permissions to take effect. 
This sample ClusterRole provides the Kyverno admission controller additional permissions to create ValidatingAdmissionPolicies and ValidatingAdmissionPolicyBindings: @@ -245,11 +303,9 @@ This sample ClusterRole provides the Kyverno admission controller additional per apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: - labels: - app.kubernetes.io/component: admission-controller - app.kubernetes.io/instance: kyverno - app.kubernetes.io/part-of: kyverno name: kyverno:generate-validatingadmissionpolicy + labels: + rbac.kyverno.io/aggregate-to-admission-controller: "true" rules: - apiGroups: - admissionregistration.k8s.io @@ -257,10 +313,12 @@ rules: - validatingadmissionpolicies - validatingadmissionpolicybindings verbs: + - get + - list + - watch - create - update - delete - - list ``` ### ConfigMap Keys diff --git a/content/en/docs/introduction/_index.md b/content/en/docs/introduction/_index.md index c43ce226c..492177616 100644 --- a/content/en/docs/introduction/_index.md +++ b/content/en/docs/introduction/_index.md @@ -3,299 +3,24 @@ title: "Introduction" linkTitle: "Introduction" weight: 10 description: > - Learn about Kyverno and create your first policy through a Quick Start guide. + Learn about Kyverno and its powerful capabilities --- ## About Kyverno -Kyverno (Greek for "govern") is a policy engine designed specifically for Kubernetes. Some of its many features include: +Kyverno (Greek for "govern") is a cloud native policy engine. It was originally built for Kubernetes and can now also be used outside of Kubernetes clusters as a unified policy language. -* policies as Kubernetes resources (no new language to learn!) -* validate, mutate, generate, or cleanup (remove) any resource -* verify container images for software supply chain security -* inspect image metadata -* match resources using label selectors and wildcards -* validate and mutate using overlays (like Kustomize!) 
-* synchronize configurations across Namespaces -* block non-conformant resources using admission controls, or report policy violations -* self-service reports (no proprietary audit log!) -* self-service policy exceptions -* test policies and validate resources using the Kyverno CLI, in your CI/CD pipeline, before applying to your cluster -* manage policies as code using familiar tools like `git` and `kustomize` +Kyverno allows platform engineers to automate security, compliance, and best practices validation and deliver secure self-service to application teams. -Kyverno allows cluster administrators to manage environment specific configurations independently of workload configurations and enforce configuration best practices for their clusters. Kyverno can be used to scan existing workloads for best practices, or can be used to enforce best practices by blocking or mutating API requests. +Some of its many features include: -## How Kyverno works +* policies as YAML-based declarative Kubernetes resources with no new language to learn! +* enforce policies as a Kubernetes admission controller, CLI-based scanner, and at runtime +* validate, mutate, generate, or cleanup (remove) any Kubernetes resource +* verify container images and metadata for software supply chain security +* policies for any JSON payload, including Terraform resources, cloud resources, and service authorization +* policy reporting using the open reporting format from the CNCF Policy WG +* flexible policy exception management +* tooling for comprehensive unit and e2e testing of policies +* management of policies as code resources using familiar tools like `git` and `kustomize` -Kyverno runs as a [dynamic admission controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) in a Kubernetes cluster. 
Kyverno receives validating and mutating admission webhook HTTP callbacks from the Kubernetes API server and applies matching policies to return results that enforce admission policies or reject requests. - -Kyverno policies can match resources using the resource kind, name, label selectors, and much more. - -Mutating policies can be written as overlays (similar to [Kustomize](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#bases-and-overlays)) or as a [RFC 6902 JSON Patch](http://jsonpatch.com/). Validating policies also use an overlay style syntax, with support for pattern matching and conditional (if-then-else) processing. - -Policy enforcement is captured using Kubernetes events. For requests that are either allowed or existed prior to introduction of a Kyverno policy, Kyverno creates Policy Reports in the cluster which contain a running list of resources matched by a policy, their status, and more. - -The diagram below shows the high-level logical architecture of Kyverno. - -Kyverno Architecture -

- -The **Webhook** is the server which handles incoming AdmissionReview requests from the Kubernetes API server and sends them to the **Engine** for processing. It is dynamically configured by the **Webhook Controller** which watches the installed policies and modifies the webhooks to request only the resources matched by those policies. The **Cert Renewer** is responsible for watching and renewing the certificates, stored as Kubernetes Secrets, needed by the webhook. The **Background Controller** handles all generate and mutate-existing policies by reconciling UpdateRequests, an intermediary resource. And the **Report Controllers** handle creation and reconciliation of Policy Reports from their intermediary resources, Admission Reports and Background Scan Reports. - -Kyverno also supports high availability. A highly-available installation of Kyverno is one in which the controllers selected for installation are configured to run with multiple replicas. Depending on the controller, the additional replicas may also serve the purpose of increasing the scalability of Kyverno. See the [high availability page](../high-availability/_index.md) for more details on the various Kyverno controllers, their components, and how availability is handled in each one. - -## Quick Start Guides - -This section is intended to provide you with some quick guides on how to get Kyverno up and running and demonstrate a few of Kyverno's seminal features. There are quick start guides which focus on validation, mutation, as well as generation allowing you to select the one (or all) which is most relevant to your use case. - -These guides are intended for proof-of-concept or lab demonstrations only and not recommended as a guide for production. Please see the [installation page](../installation/_index.md) for more complete information on how to install Kyverno in production. - -First, install Kyverno from the latest release manifest. 
- -```sh -kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.12.0/install.yaml -``` - -Next, select the quick start guide in which you are interested. Alternatively, start at the top and work your way down. - -### Validation - -In the validation guide, you will see how simple an example Kyverno policy can be which ensures a label called `team` is present on every Pod. Validation is the most common use case for policy and functions as a "yes" or "no" decision making process. Resources which are compliant with the policy are allowed to pass ("yes, this is allowed") and those which are not compliant may not be allowed to pass ("no, this is not allowed"). An additional effect of these validate policies is to produce Policy Reports. A [Policy Report](../policy-reports/_index.md) is a custom Kubernetes resource, produced and managed by Kyverno, which shows the results of policy decisions upon allowed resources in a user-friendly way. - -Add the policy below to your cluster. It contains a single validation rule that requires that all Pods have the `team` label. Kyverno supports different rule types to validate, mutate, generate, cleanup, and verify image configurations. The field `validationFailureAction` is set to `Enforce` to block Pods that are non-compliant. Using the default value `Audit` will report violations but not block requests. - -```yaml -kubectl create -f- << EOF -apiVersion: kyverno.io/v1 -kind: ClusterPolicy -metadata: - name: require-labels -spec: - validationFailureAction: Enforce - rules: - - name: check-team - match: - any: - - resources: - kinds: - - Pod - validate: - message: "label 'team' is required" - pattern: - metadata: - labels: - team: "?*" -EOF -``` - -Try creating a Deployment without the required label. - -```sh -kubectl create deployment nginx --image=nginx -``` - -You should see an error. 
- -```sh -error: failed to create deployment: admission webhook "validate.kyverno.svc-fail" denied the request: - -resource Deployment/default/nginx was blocked due to the following policies: - -require-labels: - autogen-check-team: 'validation error: label ''team'' is - required. Rule autogen-check-team failed at path /spec/template/metadata/labels/team/' -``` - -In addition to the error returned, Kyverno also produces an Event in the same Namespace which contains this information. - -{{% alert title="Note" color="info" %}} -Kyverno may be configured to exclude system Namespaces like `kube-system` and `kyverno`. Make sure you create the Deployment in a user-defined Namespace or the `default` Namespace (for testing only). -{{% /alert %}} - -Note that how although the policy matches on Pods, Kyverno blocked the Deployment you just created. This is because Kyverno intelligently applies policies written exclusively for Pods, using its [rule auto-generation](../writing-policies/autogen.md) feature, to all standard Kubernetes Pod controllers including the Deployment above. - -Now, create a Pod with the required label. - -```sh -kubectl run nginx --image nginx --labels team=backend -``` - -This Pod configuration is compliant with the policy and is allowed. - -Now that the Pod exists, wait just a few seconds longer and see what other action Kyverno took. Run the following command to retrieve the Policy Report that Kyverno just created. - -```sh -kubectl get policyreport -o wide -``` - -Notice that there is a single Policy Report with just one result listed under the "PASS" column. This result is due to the Pod we just created having passed the policy. - -```sh -NAME KIND NAME PASS FAIL WARN ERROR SKIP AGE -89044d72-8a1e-4af0-877b-9be727dc3ec4 Pod nginx 1 0 0 0 0 15s -``` - -If you were to describe the above policy report you would see more information about the policy and resource. - -```yaml -Results: - Message: validation rule 'check-team' passed. 
- Policy: require-labels - Resources: - API Version: v1 - Kind: Pod - Name: nginx - Namespace: default - UID: 07d04dc0-fbb4-479a-b049-a3d63342b354 - Result: pass - Rule: check-team - Scored: true - Source: kyverno - Timestamp: - Nanos: 0 - Seconds: 1683759146 -``` - -Policy reports are helpful in that they are both user- and tool-friendly, based upon an open standard, and separated from the policies which produced them. This separation has the benefit of report access being easy to grant and manage for other users who may not need or have access to Kyverno policies. - -Now that you've experienced validate policies and seen a bit about policy reports, clean up by deleting the policy you created above. - -```sh -kubectl delete clusterpolicy require-labels -``` - -Congratulations, you've just implemented a validation policy in your Kubernetes cluster! For more details on validation policies, see the [validate section](../writing-policies/validate.md). - -### Mutation - -Mutation is the ability to change or "mutate" a resource in some way prior to it being admitted into the cluster. A mutate rule is similar to a validate rule in that it selects some type of resource (like Pods or ConfigMaps) and defines what the desired state should look like. - -Add this Kyverno mutate policy to your cluster. This policy will add the label `team` to any new Pod and give it the value of `bravo` but only if a Pod does not already have this label assigned. Kyverno has the ability to perform basic "if-then" logical decisions in a very easy way making policies trivial to write and read. The `+(team)` notation uses a Kyverno anchor to define the behavior Kyverno should take if the label key is not found. 
- -```yaml -kubectl create -f- << EOF -apiVersion: kyverno.io/v1 -kind: ClusterPolicy -metadata: - name: add-labels -spec: - rules: - - name: add-team - match: - any: - - resources: - kinds: - - Pod - mutate: - patchStrategicMerge: - metadata: - labels: - +(team): bravo -EOF -``` - -Let's now create a new Pod which does not have the desired label defined. - -```sh -kubectl run redis --image redis -``` - -{{% alert title="Note" color="info" %}} -Kyverno may be configured to exclude system Namespaces like `kube-system` and `kyverno`. Make sure you create the Pod in a user-defined Namespace or the `default` Namespace (for testing only). -{{% /alert %}} - -Once the Pod has been created, get the Pod to see if the `team` label was added. - -```sh -kubectl get pod redis --show-labels -``` - -You should see that the label `team=bravo` has been added by Kyverno. - -Try one more Pod, this time one which does already define the `team` label. - -```sh -kubectl run newredis --image redis -l team=alpha -``` - -Get this Pod back and check once again for labels. - -```sh -kubectl get pod newredis --show-labels -``` - -This time, you should see Kyverno did not add the `team` label with the value defined in the policy since one was already found on the Pod. - -Now that you've experienced mutate policies and seen how logic can be written easily, clean up by deleting the policy you created above. - -```sh -kubectl delete clusterpolicy add-labels -``` - -Congratulations, you've just implemented a mutation policy in your Kubernetes cluster! For more details on mutate policies, see the [mutate section](../writing-policies/mutate.md). - -### Generation - -Kyverno has the ability to generate (i.e., create) a new Kubernetes resource based upon a definition stored in a policy. Like both validate and mutate rules, Kyverno generate rules use similar concepts and structures to express policy. 
The generation ability is both powerful and flexible with one of its most useful aspects being, in addition to the initial generation, it has the ability to continually synchronize the resources it has generated. Generate rules can be a powerful automation tool and can solve many common challenges faced by Kubernetes operators. Let's look at one such use case in this guide. - -We will use a Kyverno generate policy to generate an image pull secret in a new Namespace. - -First, create this Kubernetes Secret in your cluster which will simulate a real image pull secret. - -```sh -kubectl -n default create secret docker-registry regcred \ - --docker-server=myinternalreg.corp.com \ - --docker-username=john.doe \ - --docker-password=Passw0rd123! \ - --docker-email=john.doe@corp.com -``` - -Next, create the following Kyverno policy. The `sync-secrets` policy will match on any newly-created Namespace and will clone the Secret we just created earlier into that new Namespace. - -```yaml -kubectl create -f- << EOF -apiVersion: kyverno.io/v1 -kind: ClusterPolicy -metadata: - name: sync-secrets -spec: - rules: - - name: sync-image-pull-secret - match: - any: - - resources: - kinds: - - Namespace - generate: - apiVersion: v1 - kind: Secret - name: regcred - namespace: "{{request.object.metadata.name}}" - synchronize: true - clone: - namespace: default - name: regcred -EOF -``` - -Create a new Namespace to test the policy. - -```sh -kubectl create ns mytestns -``` - -Get the Secrets in this new Namespace and see if `regcred` is present. - -```sh -kubectl -n mytestns get secret -``` - -You should see that Kyverno has generated the `regcred` Secret using the source Secret from the `default` Namespace as the template. If you wish, you may also modify the source Secret and watch as Kyverno synchronizes those changes down to wherever it has generated it. - -With a basic understanding of generate policies, clean up by deleting the policy you created above. 
- -```sh -kubectl delete clusterpolicy sync-secrets -``` - -Congratulations, you've just implemented a generation policy in your Kubernetes cluster! For more details on generate policies, see the [generate section](../writing-policies/generate.md). diff --git a/content/en/docs/introduction/admission-controllers.md b/content/en/docs/introduction/admission-controllers.md index acf06d586..c0914252d 100644 --- a/content/en/docs/introduction/admission-controllers.md +++ b/content/en/docs/introduction/admission-controllers.md @@ -99,7 +99,7 @@ In this example, the API server has been instructed to send any creation request The controller is the other half of the dynamic admission controller story. Something must be listening for the requests sent by the API server and be prepared to respond. This is typically implemented by a controller running in the same cluster as a Pod. This controller, like the API server with the webhook, must have some instruction for how to respond to requests. This instruction is provided to it in the form of a **policy**. A policy is typically another Kubernetes resource, but this time a [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/), which the controller uses to determine that response. Once the controller examines the policy it is prepared to make a decision for resources it receives. -For example, as you may have learned in the [validation quick start section](../introduction/_index.md#validation), a policy such as `require-labels` can be used to instruct the controller how to respond in the case where it receives a matching request. If the Pod has a label named `team` then its creation will be allowed. If it does not, it will be prevented. 
+For example, as you may have learned in the [validation quick start section](../introduction/quick-start.md#validate-resources), a policy such as `require-labels` can be used to instruct the controller how to respond in the case where it receives a matching request. If the Pod has a label named `team` then its creation will be allowed. If it does not, it will be prevented.
 
 Controllers receiving requests from the Kubernetes API server do so over HTTP/REST. The contents of that request are a "packaging" or "wrapping" of the resource, which has been defined via the webhook, in addition to other pertinent information about who or what made the request. This package is called an `AdmissionReview`. More details on this packaging format along with an example can be seen [here](../writing-policies/jmespath.md#admissionreview).
diff --git a/content/en/docs/introduction/how-kyverno-works.md b/content/en/docs/introduction/how-kyverno-works.md
new file mode 100644
index 000000000..a5a88fdf4
--- /dev/null
+++ b/content/en/docs/introduction/how-kyverno-works.md
@@ -0,0 +1,31 @@
+
+---
+title: How Kyverno Works
+linkTitle: How Kyverno Works
+weight: 10
+description: >
+  An overview of how Kyverno works
---
+
+
+## Kubernetes Admission Controls
+
+Kyverno runs as a [dynamic admission controller](./admission-controllers.md) in a Kubernetes cluster. Kyverno receives validating and mutating admission webhook HTTP callbacks from the Kubernetes API server and applies matching policies to return results that enforce admission policies or reject requests.
+
+Kyverno policies can match resources using the resource kind, name, label selectors, and much more.
+
+Mutating policies can be written as overlays (similar to [Kustomize](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#bases-and-overlays)) or as an [RFC 6902 JSON Patch](http://jsonpatch.com/).
Validating policies also use an overlay style syntax, with support for pattern matching and conditional (if-then-else) processing.
+
+Policy enforcement is captured using Kubernetes events. For requests that are allowed, or for resources that existed prior to the introduction of a Kyverno policy, Kyverno creates Policy Reports in the cluster which contain a running list of resources matched by a policy, their status, and more.
+
+The diagram below shows the high-level logical architecture of Kyverno.
+
+Kyverno Architecture
+

+
+The **Webhook** is the server which handles incoming AdmissionReview requests from the Kubernetes API server and sends them to the **Engine** for processing. It is dynamically configured by the **Webhook Controller**, which watches the installed policies and modifies the webhooks to request only the resources matched by those policies. The **Cert Renewer** is responsible for watching and renewing the certificates, stored as Kubernetes Secrets, needed by the webhook. The **Background Controller** handles all generate and mutate-existing policies by reconciling UpdateRequests, an intermediary resource. The **Report Controllers** handle creation and reconciliation of Policy Reports from their intermediary resources, Admission Reports and Background Scan Reports.
+
+Kyverno also supports high availability. A highly-available installation of Kyverno is one in which the controllers selected for installation are configured to run with multiple replicas. Depending on the controller, the additional replicas may also serve the purpose of increasing the scalability of Kyverno. See the [high availability page](../high-availability/_index.md) for more details on the various Kyverno controllers, their components, and how availability is handled in each one.
+
+
+
diff --git a/content/en/docs/introduction/quick-start.md b/content/en/docs/introduction/quick-start.md
new file mode 100644
index 000000000..a1fb7ba39
--- /dev/null
+++ b/content/en/docs/introduction/quick-start.md
@@ -0,0 +1,282 @@
+---
+title: Quick Start Guides
+linkTitle: Quick Start Guides
+weight: 20
+description: >
+  An introduction to Kyverno policy and rule types
+---
+
+This section provides quick guides on how to get Kyverno up and running and demonstrates a few of Kyverno's seminal features. There are quick start guides focused on validation, mutation, and generation, allowing you to select the one (or all) most relevant to your use case.
+
+These guides are intended for proof-of-concept or lab demonstrations only and are not recommended for production use. Please see the [installation page](../installation/_index.md) for more complete information on how to install Kyverno in production.
+
+First, install Kyverno from the latest release manifest.
+
+```sh
+kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.12.0/install.yaml
+```
+
+Next, select the quick start guide in which you are interested. Alternatively, start at the top and work your way down.
+
+## Validate Resources
+
+In this validation guide, you will see how simple a Kyverno policy can be; the example ensures a label called `team` is present on every Pod. Validation is the most common use case for policy and functions as a "yes" or "no" decision-making process. Resources which are compliant with the policy are allowed to pass ("yes, this is allowed") and those which are not compliant may not be allowed to pass ("no, this is not allowed"). An additional effect of these validate policies is to produce Policy Reports. A [Policy Report](../policy-reports/_index.md) is a custom Kubernetes resource, produced and managed by Kyverno, which shows the results of policy decisions upon allowed resources in a user-friendly way.
+
+Add the policy below to your cluster. It contains a single validation rule that requires that all Pods have the `team` label. Kyverno supports different rule types to validate, mutate, generate, cleanup, and verify container images. The field `validationFailureAction` is set to `Enforce` to block Pods that are non-compliant. Using the default value `Audit` will report violations but not block requests.
+
+```yaml
+kubectl create -f- << EOF
+apiVersion: kyverno.io/v1
+kind: ClusterPolicy
+metadata:
+  name: require-labels
+spec:
+  validationFailureAction: Enforce
+  rules:
+  - name: check-team
+    match:
+      any:
+      - resources:
+          kinds:
+          - Pod
+    validate:
+      message: "label 'team' is required"
+      pattern:
+        metadata:
+          labels:
+            team: "?*"
+EOF
+```
+
+Try creating a Deployment without the required label.
+
+```sh
+kubectl create deployment nginx --image=nginx
+```
+
+You should see an error.
+
+```sh
+error: failed to create deployment: admission webhook "validate.kyverno.svc-fail" denied the request:
+
+resource Deployment/default/nginx was blocked due to the following policies:
+
+require-labels:
+  autogen-check-team: 'validation error: label ''team'' is
+    required. Rule autogen-check-team failed at path /spec/template/metadata/labels/team/'
+```
+
+In addition to the error returned, Kyverno also produces an Event in the same Namespace which contains this information.
+
+{{% alert title="Note" color="info" %}}
+Kyverno may be configured to exclude system Namespaces like `kube-system` and `kyverno`. Make sure you create the Deployment in a user-defined Namespace or the `default` Namespace (for testing only).
+{{% /alert %}}
+
+Note that although the policy matches on Pods, Kyverno blocked the Deployment you just created. This is because Kyverno intelligently applies policies written exclusively for Pods, using its [rule auto-generation](../writing-policies/autogen.md) feature, to all standard Kubernetes Pod controllers including the Deployment above.
+
+Now, create a Pod with the required label.
+
+```sh
+kubectl run nginx --image nginx --labels team=backend
+```
+
+This Pod configuration is compliant with the policy and is allowed.
+
+Now that the Pod exists, wait just a few seconds longer and see what other action Kyverno took. Run the following command to retrieve the Policy Report that Kyverno just created.
+
+```sh
+kubectl get policyreport -o wide
+```
+
+Notice that there is a single Policy Report with just one result listed under the "PASS" column. This result is due to the Pod we just created having passed the policy.
+
+```sh
+NAME                                   KIND   NAME    PASS   FAIL   WARN   ERROR   SKIP   AGE
+89044d72-8a1e-4af0-877b-9be727dc3ec4   Pod    nginx   1      0      0      0       0      15s
+```
+
+If you were to describe the above policy report you would see more information about the policy and resource.
+
+```yaml
+Results:
+  Message:  validation rule 'check-team' passed.
+  Policy:   require-labels
+  Resources:
+    API Version:  v1
+    Kind:         Pod
+    Name:         nginx
+    Namespace:    default
+    UID:          07d04dc0-fbb4-479a-b049-a3d63342b354
+  Result:  pass
+  Rule:    check-team
+  Scored:  true
+  Source:  kyverno
+  Timestamp:
+    Nanos:    0
+    Seconds:  1683759146
+```
+
+Policy reports are helpful in that they are both user- and tool-friendly, based upon an open standard, and separated from the policies which produced them. This separation has the benefit of report access being easy to grant and manage for other users who may not need or have access to Kyverno policies.
+
+Now that you've experienced validate policies and seen a bit about policy reports, clean up by deleting the policy you created above.
+
+```sh
+kubectl delete clusterpolicy require-labels
+```
+
+Congratulations, you've just implemented a validation policy in your Kubernetes cluster! For more details on validation policies, see the [validate section](../writing-policies/validate.md).
+
+## Mutate Resources
+
+Mutation is the ability to change or "mutate" a resource in some way prior to it being admitted into the cluster. A mutate rule is similar to a validate rule in that it selects some type of resource (like Pods or ConfigMaps) and defines what the desired state should look like.
+
+Add this Kyverno mutate policy to your cluster. This policy will add the label `team` to any new Pod and give it the value of `bravo` but only if a Pod does not already have this label assigned.
Kyverno can express basic "if-then" logic in a simple way, making policies easy to write and read. The `+(team)` notation uses a Kyverno anchor to define the behavior Kyverno should take if the label key is not found.
+
+```yaml
+kubectl create -f- << EOF
+apiVersion: kyverno.io/v1
+kind: ClusterPolicy
+metadata:
+  name: add-labels
+spec:
+  rules:
+  - name: add-team
+    match:
+      any:
+      - resources:
+          kinds:
+          - Pod
+    mutate:
+      patchStrategicMerge:
+        metadata:
+          labels:
+            +(team): bravo
+EOF
+```
+
+Let's now create a new Pod which does not have the desired label defined.
+
+```sh
+kubectl run redis --image redis
+```
+
+{{% alert title="Note" color="info" %}}
+Kyverno may be configured to exclude system Namespaces like `kube-system` and `kyverno`. Make sure you create the Pod in a user-defined Namespace or the `default` Namespace (for testing only).
+{{% /alert %}}
+
+Once the Pod has been created, get the Pod to see if the `team` label was added.
+
+```sh
+kubectl get pod redis --show-labels
+```
+
+You should see that the label `team=bravo` has been added by Kyverno.
+
+Try one more Pod, this time one which does already define the `team` label.
+
+```sh
+kubectl run newredis --image redis -l team=alpha
+```
+
+Get this Pod back and check once again for labels.
+
+```sh
+kubectl get pod newredis --show-labels
+```
+
+This time, you should see Kyverno did not add the `team` label with the value defined in the policy since one was already found on the Pod.
+
+Now that you've experienced mutate policies and seen how logic can be written easily, clean up by deleting the policy you created above.
+
+```sh
+kubectl delete clusterpolicy add-labels
+```
+
+Congratulations, you've just implemented a mutation policy in your Kubernetes cluster! For more details on mutate policies, see the [mutate section](../writing-policies/mutate.md).
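As an aside, the same label could also be added with an RFC 6902 JSON patch instead of a strategic merge patch, using Kyverno's `patchesJson6902` field. A minimal sketch follows; the policy and rule names here are illustrative, and unlike the `+(team)` anchor above, a JSON patch `add` sets the label unconditionally.

```yaml
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels-jsonpatch
spec:
  rules:
  - name: add-team-jsonpatch
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      # patchesJson6902 takes a string containing a list of JSON patch operations
      patchesJson6902: |-
        - op: add
          path: /metadata/labels/team
          value: bravo
EOF
```

Note that an `add` operation at `/metadata/labels/team` assumes the `labels` map already exists on the Pod; for this use case the strategic merge patch shown above is generally the simpler choice.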
+
+## Generate Resources
+
+Kyverno has the ability to generate (i.e., create) a new Kubernetes resource based upon a definition stored in a policy. Like both validate and mutate rules, Kyverno generate rules use similar concepts and structures to express policy. The generation ability is both powerful and flexible. One of its most useful aspects is that, in addition to the initial generation, it can continually synchronize the resources it has generated. Generate rules can be a powerful automation tool and can solve many common challenges faced by Kubernetes operators. Let's look at one such use case in this guide.
+
+We will use a Kyverno generate policy to generate an image pull secret in a new Namespace.
+
+First, create this Kubernetes Secret in your cluster which will simulate a real image pull secret.
+
+```sh
+kubectl -n default create secret docker-registry regcred \
+  --docker-server=myinternalreg.corp.com \
+  --docker-username=john.doe \
+  --docker-password=Passw0rd123! \
+  --docker-email=john.doe@corp.com
+```
+
+By default, Kyverno is [configured with minimal permissions](../installation/customization.md#role-based-access-controls) and does not have access to security-sensitive resources like Secrets. You can provide additional permissions using cluster role aggregation. The following role permits the Kyverno background-controller to create (clone) secrets.
+
+```yaml
+kubectl create -f- << EOF
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: kyverno:secrets:manage
+  labels:
+    rbac.kyverno.io/aggregate-to-background-controller: "true"
+rules:
+- apiGroups:
+  - ''
+  resources:
+  - secrets
+  verbs:
+  - create
+EOF
+```
+
+Next, create the following Kyverno policy. The `sync-secrets` policy will match on any newly-created Namespace and will clone the Secret we just created earlier into that new Namespace.
+
+```yaml
+kubectl create -f- << EOF
+apiVersion: kyverno.io/v1
+kind: ClusterPolicy
+metadata:
+  name: sync-secrets
+spec:
+  rules:
+  - name: sync-image-pull-secret
+    match:
+      any:
+      - resources:
+          kinds:
+          - Namespace
+    generate:
+      apiVersion: v1
+      kind: Secret
+      name: regcred
+      namespace: "{{request.object.metadata.name}}"
+      synchronize: false
+      clone:
+        namespace: default
+        name: regcred
+EOF
+```
+
+Create a new Namespace to test the policy.
+
+```sh
+kubectl create ns mytestns
+```
+
+Get the Secrets in this new Namespace and see if `regcred` is present.
+
+```sh
+kubectl -n mytestns get secret
+```
+
+You should see that Kyverno has generated the `regcred` Secret using the source Secret from the `default` Namespace as the template. Because this policy sets `synchronize: false`, the generated Secret is a one-time copy and changes to the source Secret will not be propagated. Setting `synchronize: true` instructs Kyverno to keep generated resources in sync with their source, which also requires granting the background controller additional permissions on Secrets, such as `update` and `delete`, through the aggregated ClusterRole shown earlier.
+
+With a basic understanding of generate policies, clean up by deleting the policy you created above.
+
+```sh
+kubectl delete clusterpolicy sync-secrets
+```
+
+Congratulations, you've just implemented a generation policy in your Kubernetes cluster! For more details on generate policies, see the [generate section](../writing-policies/generate.md).
+
+
diff --git a/content/en/docs/security/_index.md b/content/en/docs/security/_index.md
index 8c26dca4b..b8e4118f6 100644
--- a/content/en/docs/security/_index.md
+++ b/content/en/docs/security/_index.md
@@ -242,7 +242,7 @@ Kyverno Pods are configured to follow security best practices and conform to the
 
 ### RBAC
 
-The Kyverno RBAC configurations are described in the [installation](../installation/customization.md#roles-and-permissions) section.
+The Kyverno RBAC configurations are described in the [installation](../installation/customization.md#role-based-access-controls) section.
Use the following command to view all Kyverno roles:
@@ -352,7 +352,7 @@ The sections below list each threat, mitigation, and provide Kyverno specific de
 
 * [Mitigation ID 1 - RBAC rights are strictly controlled](https://github.com/kubernetes/sig-security/blob/main/sig-security-docs/papers/admission-control/kubernetes-admission-control-threat-model.md#mitigation-id-1---rbac-rights-are-strictly-controlled)
 
-  Kyverno RBAC configurations are described in the [installation section](../installation/customization.md#roles-and-permissions). The `kyverno:admission-controller` role is used by Kyverno to configure webhooks. It is important to limit Kyverno to the required permissions and audit changes in the RBAC roles and role bindings.
+  Kyverno RBAC configurations are described in the [installation section](../installation/customization.md#role-based-access-controls). The `kyverno:admission-controller` role is used by Kyverno to configure webhooks. It is important to limit Kyverno to the required permissions and audit changes in the RBAC roles and role bindings.
 
 ### Threat ID 5 - Attacker gets access to valid credentials for the webhook
@@ -420,7 +420,7 @@ The sections below list each threat, mitigation, and provide Kyverno specific de
 
 * [Mitigation ID 1 - RBAC rights are strictly controlled](https://github.com/kubernetes/sig-security/blob/main/sig-security-docs/papers/admission-control/kubernetes-admission-control-threat-model.md#mitigation-id-1---rbac-rights-are-strictly-controlled)
 
-  Kyverno RBAC configurations are described in the [configuration section](../installation/customization.md#roles-and-permissions). The `kyverno:admission-controller` role is used by Kyverno to configure webhooks. It is important to limit Kyverno to the required permissions and audit changes in the RBAC roles and role bindings.
+  Kyverno RBAC configurations are described in the [configuration section](../installation/customization.md#role-based-access-controls).
The `kyverno:admission-controller` role is used by Kyverno to configure webhooks. It is important to limit Kyverno to the required permissions and audit changes in the RBAC roles and role bindings.
 
 Kyverno excludes certain critical system Namespaces by default including the Kyverno Namespace itself. These exclusions can be managed and configured via the [ConfigMap](../installation/customization.md#configmap-keys).
diff --git a/layouts/partials/footer.html b/layouts/partials/footer.html
index 0881ecefa..d76e35c31 100644
--- a/layouts/partials/footer.html
+++ b/layouts/partials/footer.html
@@ -12,7 +12,7 @@