feat(validation)!: validation testing framework #667

Open · wants to merge 18 commits into main
2 changes: 2 additions & 0 deletions docs/cli-commands/lula_dev_validate.md
@@ -42,7 +42,9 @@ To hang for timeout of 5 seconds:
-h, --help help for validate
-f, --input-file string the path to a validation manifest file (default "0")
-o, --output-file string the path to write the validation with results
--print-test-resources whether to print resources used for tests; prints <test-name>.json to the validation directory
-r, --resources-file string the path to an optional resources file
--run-tests run tests specified in the validation
-t, --timeout int the timeout for stdin (in seconds, -1 for no timeout) (default 1)
```

144 changes: 144 additions & 0 deletions docs/reference/testing.md
@@ -0,0 +1,144 @@
# Testing

Testing is a key part of Lula Validation development. Since the results of a Lula Validation are determined by the policy set by the `provider`, those policies must be tested to ensure they work as expected.

## Validation Testing

In a Lula Validation, the `tests` property specifies each test that should be performed against the validation. Each test is a map of the following properties:

- `name`: The name of the test
- `changes`: An array of changes or transformations to be applied to the resources used in the test validation
- `expected-result`: The expected result of the test - `satisfied` or `not-satisfied`

> **Member (review comment):** This is an interesting point as we look to clear a path for validations used outside of OSCAL. We map the result of a policy (pass/fail) to align to findings/observations using the OSCAL terminology (satisfied/not-satisfied). We might look at what terminology OPA/Kyverno use and/or decide on what Lula's will be.
>
> **@meganwolf0 (Collaborator, Author), Nov 6, 2024:** That's fair - I initially was thinking like pass/fail, but that also sort of conflates with the overall test result (which is pass/fail), so I walked back the terminology. I mean, it could be more like policy/provider-result -> true/false? Or accept/reject?


A change is a map of the following properties:

- `path`: The path to the resource to be modified. The path syntax is described below.
- `type`: The type of operation to be performed on the resource:
  - `update`: (default) updates the resource with the specified value
  - `delete`: deletes the field specified
  - `add`: adds the specified value
- `value`: The value to be used for the operation (string)
- `value-map`: The value to be used for the operation (map[string]interface{})

An example of a test added to a validation is:

```yaml
domain:
  type: kubernetes
  kubernetes-spec:
    resources:
      - name: podsvt
        resource-rule:
          version: v1
          resource: pods
          namespaces: [validation-test]
provider:
  type: opa
  opa-spec:
    rego: |
      package validate

      import future.keywords.every

      validate {
        every pod in input.podsvt {
          podLabel := pod.metadata.labels.foo
          podLabel == "bar"
        }
      }
tests:
  - name: modify-pod-label-not-satisfied
    expected-result: not-satisfied
    changes:
      - path: podsvt.[metadata.namespace=validation-test].metadata.labels.foo
        type: update
        value: baz
  - name: delete-pod-label-not-satisfied
    expected-result: not-satisfied
    changes:
      - path: podsvt.[metadata.namespace=validation-test].metadata.labels.foo
        type: delete
```

There are two tests here:
* The first test locates the first pod in the `validation-test` namespace and updates its label `foo` to `baz`. The validation is then executed against the modified resources. The expected result is that the validation fails, i.e., is `not-satisfied`, which makes the test successful.
* The second test locates the first pod in the `validation-test` namespace and deletes its label `foo`, then validates the modified resources and compares the outcome to the expected result.

### Path Syntax

This feature uses the `kyaml` library to inject data into the resources, so the path syntax is based on that library.

The path should be a `.`-delimited string that specifies the keys along the path to the field to be modified. In addition to keys, a list item can be selected using the `[some-key=value]` syntax. For example, the following path:

```
pods.[metadata.namespace=grafana].spec.containers.[name=istio-proxy]
```

This path starts at the `pods` key; since the next segment is a `[key=value]` filter, `pods` is assumed to be a list, and each item is iterated over to find the one whose `metadata.namespace` key equals `grafana`. The traversal then finds the `containers` list under `spec` and iterates over its items to find the one whose `name` key equals `istio-proxy`.
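
For intuition, the sketch below (not taken from the PR) shows a trimmed, hypothetical resource document and the list item that path resolves to:

```yaml
pods:
  - metadata:
      namespace: grafana
      name: operator
    spec:
      containers:
        - name: istio-proxy   # <- the example path resolves to this list item
        - name: grafana
```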

Multiple filters can be added for a list; for example, the path above could be modified to filter by both namespace and pod name:

```
pods.[metadata.namespace=grafana,metadata.name=operator].spec.containers.[name=istio-proxy]
```

To support map keys containing `.`, the `["..."]` syntax can also be used, e.g.,

```
namespaces.[metadata.namespace=grafana].metadata.labels.["some.key/label"]
```

Additionally, individual list items can be found via their index, e.g.,

```
namespaces.[0].metadata.labels
```

This points to the `labels` key of the first namespace. A `[-]` can also be used to specify the last item in a list.

> [!IMPORTANT]
> The path will return only one item, the first item that matches the filters along the path. If no items match the filters, the path will return an empty map.

### Change Type Behavior

**Add**
* All keys in the path must exist, except for the last key. If you are trying to add a map, use `value-map` and specify the existing root key (see the sketch after this list).
* If a sequence is "added" to, the value items will be appended to the sequence.

**Update**
* If a sequence is "updated", the entire sequence will be replaced.

**Delete**
* Currently only deleting a key is supported; an error will be returned if the last item in the path resolves to a sequence.
* No values should be specified for a delete.
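
As a point of reference, here is a minimal sketch (not taken from the PR) of a `changes` block combining the `add` and `update` types, reusing the `podsvt` resource from the example above; the `owner` label and the `annotations` map are hypothetical fields used only for illustration:

```yaml
changes:
  # add: every key in the path except the last ("owner") must already exist
  - path: podsvt.[metadata.namespace=validation-test].metadata.labels.owner
    type: add
    value: test-team
  # add a map: use value-map and point the path at the existing root key ("metadata")
  - path: podsvt.[metadata.namespace=validation-test].metadata
    type: add
    value-map:
      annotations:
        note: injected-for-test
  # update: overwrite the value of an existing key
  - path: podsvt.[metadata.namespace=validation-test].metadata.labels.foo
    type: update
    value: baz
```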

A note about replacing a key with an empty map: due to the way the `kyaml` library works, simply overwriting an existing key with an empty map will not remove all the existing data of the map; it will just merge the differences, which may not be the desired outcome. To replace a map with an empty map, you must combine a `delete` change type with an `add` change type, e.g.,

```yaml
changes:
  - path: pods.[metadata.namespace=grafana].metadata.labels
    type: delete
  - path: pods.[metadata.namespace=grafana].metadata
    type: add
    value-map:
      labels: {}
```

This deletes the existing labels map and then adds an empty map, such that the `labels` key still exists but is an empty map.

## Executing Tests

Tests can be executed by specifying the `--run-tests` flag when running `lula dev validate`. E.g.,

```sh
lula dev validate -f ./validation.yaml --run-tests
```

This will execute the tests and print the test results to the console.
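
Based on the `message` calls added in `src/cmd/dev/validate.go` (shown in the diff further down), the output for the example validation above might look roughly like the following; the exact formatting and decoration come from Lula's `message` package and may differ:

```
Test results:
Pass: modify-pod-label-not-satisfied
Result: not-satisfied
Pass: delete-pod-label-not-satisfied
Result: not-satisfied
```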

To aid in debugging, the `--print-test-resources` flag can be used to write the resources used for each test to the validation directory; the filenames will be `<test-name>.json`. E.g.,

```sh
lula dev validate -f ./validation.yaml --run-tests --print-test-resources
```

5 changes: 2 additions & 3 deletions go.mod
@@ -31,7 +31,7 @@ require (
k8s.io/client-go v0.31.2
sigs.k8s.io/cli-utils v0.37.2
sigs.k8s.io/e2e-framework v0.5.0
sigs.k8s.io/kustomize/kyaml v0.17.2
sigs.k8s.io/kustomize/kyaml v0.18.1
sigs.k8s.io/yaml v1.4.0
)

@@ -163,7 +163,6 @@ require (
go.opentelemetry.io/otel/metric v1.28.0 // indirect
go.opentelemetry.io/otel/sdk v1.28.0 // indirect
go.opentelemetry.io/otel/trace v1.28.0 // indirect
go.starlark.net v0.0.0-20240123142251-f86470692795 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.27.0 // indirect
golang.org/x/exp v0.0.0-20240222234643-814bf88cf225 // indirect
@@ -190,6 +189,6 @@ require (
olympos.io/encoding/edn v0.0.0-20201019073823-d3554ca0b0a3 // indirect
sigs.k8s.io/controller-runtime v0.19.0 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/kustomize/api v0.17.2 // indirect
sigs.k8s.io/kustomize/api v0.18.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
)
10 changes: 4 additions & 6 deletions go.sum
@@ -511,8 +511,6 @@ go.opentelemetry.io/otel/trace v1.28.0 h1:GhQ9cUuQGmNDd5BTCP2dAvv75RdMxEfTmYejp+
go.opentelemetry.io/otel/trace v1.28.0/go.mod h1:jPyXzNPg6da9+38HEwElrQiHlVMTnVfM3/yv2OlIHaI=
go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
go.starlark.net v0.0.0-20240123142251-f86470692795 h1:LmbG8Pq7KDGkglKVn8VpZOZj6vb9b8nKEGcg9l03epM=
go.starlark.net v0.0.0-20240123142251-f86470692795/go.mod h1:LcLNIzVOMp4oV+uusnpk+VU+SzXaJakUuBjoCSWH5dM=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
@@ -675,10 +673,10 @@ sigs.k8s.io/e2e-framework v0.5.0 h1:YLhk8R7EHuTFQAe6Fxy5eBzn5Vb+yamR5u8MH1Rq3cE=
sigs.k8s.io/e2e-framework v0.5.0/go.mod h1:jJSH8u2RNmruekUZgHAtmRjb5Wj67GErli9UjLSY7Zc=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/kustomize/api v0.17.2 h1:E7/Fjk7V5fboiuijoZHgs4aHuexi5Y2loXlVOAVAG5g=
sigs.k8s.io/kustomize/api v0.17.2/go.mod h1:UWTz9Ct+MvoeQsHcJ5e+vziRRkwimm3HytpZgIYqye0=
sigs.k8s.io/kustomize/kyaml v0.17.2 h1:+AzvoJUY0kq4QAhH/ydPHHMRLijtUKiyVyh7fOSshr0=
sigs.k8s.io/kustomize/kyaml v0.17.2/go.mod h1:9V0mCjIEYjlXuCdYsSXvyoy2BTsLESH7TlGV81S282U=
sigs.k8s.io/kustomize/api v0.18.0 h1:hTzp67k+3NEVInwz5BHyzc9rGxIauoXferXyjv5lWPo=
sigs.k8s.io/kustomize/api v0.18.0/go.mod h1:f8isXnX+8b+SGLHQ6yO4JG1rdkZlvhaCf/uZbLVMb0U=
sigs.k8s.io/kustomize/kyaml v0.18.1 h1:WvBo56Wzw3fjS+7vBjN6TeivvpbW9GmRaWZ9CIVmt4E=
sigs.k8s.io/kustomize/kyaml v0.18.1/go.mod h1:C3L2BFVU1jgcddNBE1TxuVLgS46TjObMwW5FT9FcjYo=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
48 changes: 43 additions & 5 deletions src/cmd/dev/validate.go
@@ -4,15 +4,17 @@ import (
"context"
"encoding/json"
"fmt"
"path/filepath"
"strings"

"github.com/defenseunicorns/go-oscal/src/pkg/files"
"github.com/spf13/cobra"
"sigs.k8s.io/yaml"

"github.com/defenseunicorns/lula/src/cmd/common"
pkgCommon "github.com/defenseunicorns/lula/src/pkg/common"
"github.com/defenseunicorns/lula/src/pkg/message"
"github.com/defenseunicorns/lula/src/types"
"github.com/spf13/cobra"
"sigs.k8s.io/yaml"
)

var validateHelp = `
@@ -32,8 +34,10 @@ To hang for timeout of 5 seconds:

type ValidateFlags struct {
	flags
	ExpectedResult bool // -e --expected-result
	ResourcesFile string // -r --resources-file
	ExpectedResult bool // -e --expected-result
	ResourcesFile string // -r --resources-file
	RunTests bool // --run-tests
	PrintTestResources bool // --print-test-resources
}

var validateOpts = &ValidateFlags{}
@@ -48,7 +52,6 @@ var validateCmd = &cobra.Command{
		spinner := message.NewProgressSpinner("%s", spinnerMessage)
		defer spinner.Stop()

		ctx := context.Background()
		var validationBytes []byte
		var resourcesBytes []byte
		var err error
@@ -75,6 +78,7 @@ }
			}
		}

		ctx := context.WithValue(cmd.Context(), types.LulaValidationWorkDir, filepath.Dir(validateOpts.InputFile))
		validation, err := DevValidate(ctx, validationBytes, resourcesBytes, spinner)
		if err != nil {
			message.Fatalf(err, "error running dev validate: %v", err)
@@ -102,6 +106,38 @@ var validateCmd = &cobra.Command{
		}
		// Print the number of passing and failing results
		message.Infof("Validation completed with %d passing and %d failing results", validation.Result.Passing, validation.Result.Failing)

		// Run tests if requested
		if validateOpts.RunTests {
			testReports, err := validation.RunTests(ctx, validateOpts.PrintTestResources)
			if err != nil {
				message.Fatalf(err, "error running tests")
			}
			if testReports == nil {
				message.HeaderInfof("No tests found")
			} else {
				message.HeaderInfof("Test results:")
				for _, testReport := range *testReports {
					if testReport.Pass {
						message.Successf("Pass: %s", testReport.TestName)
					} else {
						var failMsg string
						if testReport.Result == "" {
							failMsg = "No Result"
						} else {
							failMsg = "Expected Result =/= Actual Result"
						}
						message.Failf("Fail: %s - %s", testReport.TestName, failMsg)
					}
					if testReport.Result != "" {
						message.Infof("Result: %s", testReport.Result)
					}
					for remark, value := range testReport.Remarks {
						message.Infof("--> %s: %s", remark, value)
					}
				}
			}
		}
	},
}

@@ -117,6 +153,8 @@ func init() {
	validateCmd.Flags().IntVarP(&validateOpts.Timeout, "timeout", "t", DEFAULT_TIMEOUT, "the timeout for stdin (in seconds, -1 for no timeout)")
	validateCmd.Flags().BoolVarP(&validateOpts.ExpectedResult, "expected-result", "e", true, "the expected result of the validation (-e=false for failing result)")
	validateCmd.Flags().BoolVar(&validateOpts.ConfirmExecution, "confirm-execution", false, "confirm execution scripts run as part of the validation")
	validateCmd.Flags().BoolVar(&validateOpts.RunTests, "run-tests", false, "run tests specified in the validation")
	validateCmd.Flags().BoolVar(&validateOpts.PrintTestResources, "print-test-resources", false, "whether to print resources used for tests; prints <test-name>.json to the validation directory")
}

// DevValidate reads a validation manifest and converts it to a LulaValidation struct, then validates it