From 3062e1e53ea401438e07a767c05b0ba7333979f0 Mon Sep 17 00:00:00 2001
From: Micah Nagel
Date: Wed, 25 Sep 2024 12:58:42 -0600
Subject: [PATCH] feat!: switch from promtail to vector (#724)

BREAKING CHANGE: Noting this as a breaking change as Promtail is removed and replaced by Vector. If you are using overrides to set up additional log targets/endpoints, this configuration will need to be updated to Vector's chart/config formats.

Primary docs on rationale, decision, and impact of this switch are [here](https://github.com/defenseunicorns/uds-core/blob/vector-add/src/vector/README.md).

Fixes https://github.com/defenseunicorns/uds-core/issues/377

- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Other (security config, docs update, etc)
- [x] Test, docs, adr added or updated as needed
- [x] [Contributor Guide](https://github.com/defenseunicorns/uds-template-capability/blob/main/CONTRIBUTING.md) followed
---
 .github/filters.yaml                          |  32 +
 docs/application-baseline.md                  |  26 +
 packages/standard/zarf.yaml                   |   4 +
 src/istio/oscal-component.yaml                | 985 ++++++++++++++++++
 src/pepr/zarf.yaml                            |  23 +
 src/vector/chart/templates/uds-exemption.yaml |   3 +
 src/vector/chart/templates/uds-package.yaml   |   3 +
 src/vector/chart/values.yaml                  |   3 +
 src/vector/common/zarf.yaml                   |   3 +
 src/vector/tasks.yaml                         |   3 +
 src/vector/values/registry1-values.yaml       |   3 +
 src/vector/values/unicorn-values.yaml         |   3 +
 src/vector/values/upstream-values.yaml        |   3 +
 src/vector/values/values.yaml                 |   3 +
 src/vector/zarf.yaml                          |   3 +
 15 files changed, 1100 insertions(+)
 create mode 100644 docs/application-baseline.md

diff --git a/.github/filters.yaml b/.github/filters.yaml
index 8ffee6f5b..a047ff952 100644
--- a/.github/filters.yaml
+++ b/.github/filters.yaml
@@ -38,4 +38,33 @@ metrics-server:
 monitoring:
   - "packages/monitoring/**"
   - "src/prometheus-stack/**"
   - "src/grafana/**"
+  - "!**/*.md"
+  - "!**/*.jpg"
+  - "!**/*.png"
+  - "!**/*.gif"
+  - "!**/*.svg"
+
+vector:
+  - "src/vector/**"
+  - "!**/*.md"
+  - "!**/*.jpg"
+  - "!**/*.png"
+  - "!**/*.gif"
+  - "!**/*.svg"
+
+tempo:
+  - "src/tempo/**"
+  - "!**/*.md"
+  - "!**/*.jpg"
+  - "!**/*.png"
+  - "!**/*.gif"
+  - "!**/*.svg"
+
+velero:
+  - "src/velero/**"
+  - "!**/*.md"
+  - "!**/*.jpg"
+  - "!**/*.png"
+  - "!**/*.gif"
+  - "!**/*.svg"
diff --git a/docs/application-baseline.md b/docs/application-baseline.md
new file mode 100644
index 000000000..7eec1c580
--- /dev/null
+++ b/docs/application-baseline.md
@@ -0,0 +1,26 @@
+---
+title: Application Baseline
+type: docs
+weight: 1
+---
+
+UDS Core provides a foundational set of applications that form the backbone of a secure and efficient mission environment. Each application addresses critical aspects of microservices communication, monitoring, logging, security, compliance, and data protection. These applications are essential for establishing a reliable runtime environment and ensuring that mission-critical applications operate seamlessly.
+
+By leveraging these applications within UDS Core, users can confidently deploy and operate source packages that meet stringent security and performance standards. UDS Core provides the applications and flexibility required to achieve diverse mission objectives, whether in cloud, on-premises, or edge environments. UDS source packages cater to the specific needs of Mission Heroes and their mission-critical operations. Below are some of the key applications offered by UDS Core:
+
+{{% alert-note %}}
+For optimal deployment and operational efficiency, it is important to deliver a UDS Core bundle before deploying any other optional bundle (UDS or Mission). Skipping this prerequisite can significantly complicate the deployment process. To ensure a seamless experience and to leverage the full potential of UDS capabilities, prioritize the deployment of UDS Core as the foundational step.
+{{% /alert-note %}}
+
+## Core Baseline
+
+| **Capability**                     | **Application**                                                                                                      |
+| ---------------------------------- | -------------------------------------------------------------------------------------------------------------------- |
+| **Service Mesh**                   | **[Istio](https://istio.io/):** A powerful service mesh that provides traffic management, load balancing, security, and observability features. |
+| **Monitoring**                     | **[Metrics Server](https://kubernetes-sigs.github.io/metrics-server/):** Provides the container resource utilization metrics API for Kubernetes clusters. Metrics Server is an optional (non-default) component, since most Kubernetes distros provide it by default.<br><br>**[Prometheus](https://prometheus.io/):** Scrapes the Metrics Server API and application metrics, storing the data in a time-series database for insights into application health and performance.<br><br>**[Grafana](https://grafana.com/grafana/):** Provides visualization and alerting capabilities based on Prometheus's time-series database of metrics. |
+| **Logging**                        | **[Vector](https://vector.dev/):** A companion agent that efficiently gathers and sends container logs to Loki and other storage locations (S3, SIEM tools, etc.), simplifying log monitoring, troubleshooting, and compliance auditing and enhancing the overall observability of the mission environment.<br><br>**[Loki](https://grafana.com/docs/loki/latest/):** A log aggregation system that allows users to store, search, and analyze logs across their applications. |
+| **Security and Compliance**        | **[NeuVector](https://open-docs.neuvector.com/):** Offers container-native security, protecting applications against threats and vulnerabilities.<br><br>**[Pepr](https://pepr.dev/):** UDS policy engine and operator for enhanced security and compliance. |
+| **Identity and Access Management** | **[Keycloak](https://www.keycloak.org/):** A robust open-source Identity and Access Management solution, providing centralized authentication, authorization, and user management for enhanced security and control over access to mission-critical resources. |
+| **Backup and Restore**             | **[Velero](https://velero.io/):** Provides backup and restore capabilities for Kubernetes clusters, ensuring data protection and disaster recovery. |
+| **Authorization**                  | **[AuthService](https://github.com/istio-ecosystem/authservice):** Offers centralized authorization services, managing access control and permissions within the Istio mesh. AuthService plays a supporting role to Keycloak, handling part of the OIDC redirect flow. |
+| **Frontend Views & Insights**      | **[UDS Runtime](https://github.com/defenseunicorns/uds-runtime):** An optional component in Core that provides the frontend for all things UDS, offering views and insights into your UDS cluster.
|
diff --git a/packages/standard/zarf.yaml b/packages/standard/zarf.yaml
index fc79236de..a1f2cb03e 100644
--- a/packages/standard/zarf.yaml
+++ b/packages/standard/zarf.yaml
@@ -83,7 +83,7 @@ components:
   - name: vector
     required: true
     import:
-      path: ../logging
+      path: ../../src/vector
 
   # Grafana
   - name: grafana
diff --git a/src/istio/oscal-component.yaml b/src/istio/oscal-component.yaml
index 8cb61f754..78f2e91ec 100644
--- a/src/istio/oscal-component.yaml
+++ b/src/istio/oscal-component.yaml
@@ -2,6 +2,991 @@
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 component-definition:
+<<<<<<< HEAD
+=======
+  back-matter:
+    resources:
+      - rlinks:
+          - href: https://github.com/istio/istio/
+        title: Istio Operator
+        uuid: 60826461-D279-468C-9E4B-614FAC44A306
+      - description: |
+          domain:
+            kubernetes-spec:
+              create-resources: null
+              resources:
+                - description: ""
+                  name: istioMeshConfig
+                  resource-rule:
+                    field:
+                      base64: false
+                      jsonpath: .data.mesh
+                      type: yaml
+                    group: ""
+                    name: istio
+                    namespaces:
+                      - istio-system
+                    resource: configmaps
+                    version: v1
+            type: kubernetes
+          lula-version: ""
+          metadata:
+            name: check-istio-logging-all-traffic
+            uuid: 90738c86-6315-450a-ac69-cc50eb4859cc
+          provider:
+            opa-spec:
+              output:
+                observations:
+                  - validate.msg
+                validation: validate.validate
+              rego: |
+                package validate
+
+                # Default policy result
+                default validate = false
+                default msg = "Logging not enabled or configured"
+
+                # Check if Istio's Mesh Configuration has logging enabled
+                validate {
+                  logging_enabled.result
+                }
+
+                msg = logging_enabled.msg
+
+                logging_enabled = {"result": true, "msg": msg} {
+                  # Check for access log file output to stdout
+                  input.istioMeshConfig.accessLogFile == "/dev/stdout"
+                  msg := "Istio is logging all traffic"
+                } else = {"result": false, "msg": msg} {
+                  msg := "Istio is not logging all traffic"
+                }
+            type: opa
+        title:
check-istio-logging-all-traffic + uuid: 90738c86-6315-450a-ac69-cc50eb4859cc + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: pods + resource-rule: + group: "" + name: "" + namespaces: [] + resource: pods + version: v1 + type: kubernetes + lula-version: "" + metadata: + name: istio-prometheus-annotations-validation + uuid: f345c359-3208-46fb-9348-959bd628301e + provider: + opa-spec: + output: + observations: + - validate.msg + - validate.exempt_namespaces_msg + validation: validate.validate + rego: | + package validate + import future.keywords.in + + # Default policy result + default validate = false + default msg = "Not evaluated" + + # Check for required Istio and Prometheus annotations + validate { + has_prometheus_annotation.result + } + msg = has_prometheus_annotation.msg + + # Check for prometheus annotations in pod spec + no_annotation = [sprintf("%s/%s", [pod.metadata.namespace, pod.metadata.name]) | pod := input.pods[_]; not contains_annotation(pod); not is_exempt(pod)] + + has_prometheus_annotation = {"result": true, "msg": msg} { + count(no_annotation) == 0 + msg := "All pods have correct prometheus annotations." 
+ } else = {"result": false, "msg": msg} { + msg := sprintf("Prometheus annotations not found in pods: %s.", [concat(", ", no_annotation)]) + } + + contains_annotation(pod) { + annotations := pod.metadata.annotations + annotations["prometheus.io/scrape"] == "true" + annotations["prometheus.io/path"] != "" + annotations["prometheus.io/port"] == "15020" + } + + # Exemptions + exempt_namespaces = {"kube-system", "istio-system", "uds-dev-stack", "zarf"} + exempt_namespaces_msg = sprintf("Exempted Namespaces: %s", [concat(", ", exempt_namespaces)]) + is_exempt(pod) { + pod.metadata.namespace in exempt_namespaces + } + type: opa + title: istio-prometheus-annotations-validation + uuid: f345c359-3208-46fb-9348-959bd628301e + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: pods + resource-rule: + group: "" + name: "" + namespaces: [] + resource: pods + version: v1 + type: kubernetes + lula-version: "" + metadata: + name: all-pods-istio-injected + uuid: 1761ac07-80dd-47d2-947e-09f67943b986 + provider: + opa-spec: + output: + observations: + - validate.msg + - validate.exempt_namespaces_msg + validation: validate.validate + rego: | + package validate + import rego.v1 + + # Default policy result + default validate := false + default msg := "Not evaluated" + + exempt_namespaces := {"kube-system", "istio-system", "uds-dev-stack", "zarf", "istio-admin-gateway", "istio-tenant-gateway", "istio-passthrough-gateway"} + exempt_namespaces_msg = sprintf("Exempted Namespaces: %s", [concat(", ", exempt_namespaces)]) + + validate if { + has_istio_sidecar.result + } + msg = has_istio_sidecar.msg + + # Check for sidecar and init containers in pod spec + no_sidecar = [sprintf("%s/%s", [pod.metadata.namespace, pod.metadata.name]) | pod := input.pods[_]; not has_sidecar(pod); not is_exempt(pod)] + + has_istio_sidecar = {"result": true, "msg": msg} if { + count(no_sidecar) == 0 + msg := "All pods have Istio sidecar proxy." 
+ } else = {"result": false, "msg": msg} if { + msg := sprintf("Istio sidecar proxy not found in pods: %s.", [concat(", ", no_sidecar)]) + } + + has_sidecar(pod) if { + status := pod.metadata.annotations["sidecar.istio.io/status"] + containers := json.unmarshal(status).containers + initContainers := json.unmarshal(status).initContainers + + has_container_name(pod.spec.containers, containers) + has_container_name(pod.spec.initContainers, initContainers) + } else = false + + has_container_name(containers, names) if { + container := containers[_] + container.name in names + } + + is_exempt(pod) if { + pod.metadata.namespace in exempt_namespaces + } + type: opa + title: all-pods-istio-injected + uuid: 1761ac07-80dd-47d2-947e-09f67943b986 + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: adminGateway + resource-rule: + group: networking.istio.io + name: admin-gateway + namespaces: + - istio-admin-gateway + resource: gateways + version: v1beta1 + - description: "" + name: virtualServices + resource-rule: + group: networking.istio.io + name: "" + namespaces: [] + resource: virtualservices + version: v1beta1 + type: kubernetes + lula-version: "" + metadata: + name: check-istio-admin-gateway-and-usage + uuid: c6c9daf1-4196-406d-8679-312c0512ab2e + provider: + opa-spec: + output: + observations: + - validate.msg + validation: validate.validate + rego: | + package validate + + # Expected admin gateway details + expected_gateway := "admin-gateway" + expected_gateway_namespace := "istio-admin-gateway" + expected_ns_name := sprintf("%s/%s", [expected_gateway_namespace, expected_gateway]) + + # Default policy result + default validate = false + default admin_gw_exists = false + default admin_vs_match = false + default msg = "Not evaluated" + + validate { + result_admin_gw_exixts.result + result_admin_vs_match.result + } + + msg = concat(" ", [result_admin_gw_exixts.msg, result_admin_vs_match.msg]) + + 
result_admin_gw_exixts = {"result": true, "msg": msg} { + input.adminGateway.kind == "Gateway" + input.adminGateway.metadata.name == expected_gateway + input.adminGateway.metadata.namespace == expected_gateway_namespace + msg := "Admin gateway exists." + } else = {"result": false, "msg": msg} { + msg := "Admin gateway does not exist." + } + + result_admin_vs_match = {"result": true, "msg": msg}{ + count(admin_vs-admin_vs_using_gateway) == 0 + count(all_vs_using_gateway-admin_vs_using_gateway) == 0 + msg := "Admin virtual services are using admin gateway." + } else = {"result": false, "msg": msg} { + msg := sprintf("Mismatch of admin virtual services using gateway. Admin VS not using GW: %s. Non-Admin VS using gateway: %s.", [concat(", ", admin_vs-admin_vs_using_gateway), concat(", ", all_vs_using_gateway-admin_vs_using_gateway)]) + } + + # Count admin virtual services + admin_vs := {adminVs.metadata.name | adminVs := input.virtualServices[_]; adminVs.kind == "VirtualService"; contains(adminVs.metadata.name, "admin")} + + # Count admin VirtualServices correctly using the admin gateway (given by vs name containing "admin") + admin_vs_using_gateway := {adminVs.metadata.name | adminVs := input.virtualServices[_]; adminVs.kind == "VirtualService"; contains(adminVs.metadata.name, "admin"); adminVs.spec.gateways[_] == expected_ns_name} + + # Count all VirtualServices using the admin gateway + all_vs_using_gateway := {vs.metadata.name | vs := input.virtualServices[_]; vs.kind == "VirtualService"; vs.spec.gateways[_] == expected_ns_name} + type: opa + title: check-istio-admin-gateway-and-usage + uuid: c6c9daf1-4196-406d-8679-312c0512ab2e + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: istioConfig + resource-rule: + field: + base64: false + jsonpath: .data.mesh + type: yaml + group: "" + name: istio + namespaces: + - istio-system + resource: configmaps + version: v1 + type: kubernetes + lula-version: "" + 
metadata: + name: istio-metrics-logging-configured + uuid: 70d99754-2918-400c-ac9a-319f874fff90 + provider: + opa-spec: + output: + observations: + - validate.msg + validation: validate.validate + rego: | + package validate + + # Default policy result + default validate = false + default msg = "Not evaluated" + + # Validate Istio configuration for metrics logging support + validate { + check_metrics_enabled.result + } + msg = check_metrics_enabled.msg + + check_metrics_enabled = { "result": false, "msg": msg } { + input.istioConfig.enablePrometheusMerge == false + msg := "Metrics logging not supported." + } else = { "result": true, "msg": msg } { + msg := "Metrics logging supported." + } + type: opa + title: istio-metrics-logging-configured + uuid: 70d99754-2918-400c-ac9a-319f874fff90 + - description: | + lula-version: "" + metadata: + name: communications-terminated-after-inactivity-PLACEHOLDER + uuid: 663f5e92-6db4-4042-8b5a-eba3ebe5a622 + provider: + opa-spec: + rego: | + package validate + validate := false + + # Check on destination rule, outlier detection? + # -> Doesn't appear that UDS is configured to create destination rules. + type: opa + title: communications-terminated-after-inactivity-PLACEHOLDER + uuid: 663f5e92-6db4-4042-8b5a-eba3ebe5a622 + - description: | + lula-version: "" + metadata: + name: tls-origination-at-egress-PLACEHOLDER + uuid: 8be1601e-5870-4573-ab4f-c1c199944815 + provider: + opa-spec: + rego: | + package validate + default validate := false + # How to prove TLS origination is configured at egress + # DestinationRule? 
+ type: opa + title: tls-origination-at-egress-PLACEHOLDER + uuid: 8be1601e-5870-4573-ab4f-c1c199944815 + - description: | + lula-version: "" + metadata: + name: fips-evaluation-PLACEHOLDER + uuid: 73434890-2751-4894-b7b2-7e583b4a8977 + title: fips-evaluation-PLACEHOLDER + uuid: 73434890-2751-4894-b7b2-7e583b4a8977 + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: authorizationPolicy + resource-rule: + group: security.istio.io + name: keycloak-block-admin-access-from-public-gateway + namespaces: + - keycloak + resource: authorizationpolicies + version: v1beta1 + type: kubernetes + lula-version: "" + metadata: + name: istio-enforces-authorized-keycloak-access + uuid: fbd877c8-d6b6-4d88-8685-2c4aaaab02a1 + provider: + opa-spec: + output: + observations: + - validate.msg + validation: validate.validate + rego: | + package validate + import rego.v1 + + # Default policy result + default validate := false + default msg := "Not evaluated" + + # Validate both AuthorizationPolicy restricts access to Keycloak admin + validate if { + check_auth_policy_for_keycloak_admin_access.result + } + + msg = check_auth_policy_for_keycloak_admin_access.msg + + check_auth_policy_for_keycloak_admin_access = {"result": true, "msg": msg} if { + input.authorizationPolicy.kind == "AuthorizationPolicy" + valid_auth_policy(input.authorizationPolicy) + msg := "AuthorizationPolicy restricts access to Keycloak admin." + } else = {"result": false, "msg": msg} if { + msg := "AuthorizationPolicy does not restrict access to Keycloak admin." 
+ } + + # Define the rule for denying access + expected_keycloak_admin_denial_rule := { + "from": [ + { + "source": { + "notNamespaces": ["istio-admin-gateway"] + } + } + ], + "to": [ + { + "operation": { + "ports": ["8080"], + "paths": ["/admin*", "/realms/master*"] + } + } + ] + } + + # Validate that the authorization policy contains the expected first rule + valid_auth_policy(ap) if { + ap.spec.action == "DENY" + rules := ap.spec.rules + + # Ensure the expected rule is present in the input policy + some i + rules[i] == expected_keycloak_admin_denial_rule + } + type: opa + title: istio-enforces-authorized-keycloak-access + uuid: fbd877c8-d6b6-4d88-8685-2c4aaaab02a1 + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: istioConfig + resource-rule: + field: + base64: false + jsonpath: .data.mesh + type: yaml + group: "" + name: istio + namespaces: + - istio-system + resource: configmaps + version: v1 + type: kubernetes + lula-version: "" + metadata: + name: istio-tracing-logging-support + uuid: f346b797-be35-40a8-a93a-585db6fd56ec + provider: + opa-spec: + output: + observations: + - validate.msg + validation: validate.validate + rego: | + package validate + + # Default policy result + default validate = false + default msg = "Not evaluated" + + # Validate Istio configuration for event logging support + validate { + check_tracing_enabled.result + } + msg = check_tracing_enabled.msg + + check_tracing_enabled = { "result": true, "msg": msg } { + input.istioConfig.defaultConfig.tracing != null + input.istioConfig.defaultConfig.tracing.zipkin.address != "" + msg := "Tracing logging supported." + } else = { "result": false, "msg": msg } { + msg := "Tracing logging not supported." 
+ } + type: opa + title: istio-tracing-logging-support + uuid: f346b797-be35-40a8-a93a-585db6fd56ec + - description: | + lula-version: "" + metadata: + name: egress-gateway-exists-and-configured-PLACEHOLDER + uuid: ecdb90c7-971a-4442-8f29-a8b0f6076bc9 + title: egress-gateway-exists-and-configured-PLACEHOLDER + uuid: ecdb90c7-971a-4442-8f29-a8b0f6076bc9 + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: networkPolicies + resource-rule: + group: networking.k8s.io + name: "" + namespaces: [] + resource: networkpolicies + version: v1 + type: kubernetes + lula-version: "" + metadata: + name: secure-communication-with-istiod + uuid: 570e2dc7-e6c2-4ad5-8ea3-f07974f59747 + provider: + opa-spec: + output: + observations: + - validate.msg_correct + - validate.msg_incorrect + validation: validate.validate + rego: | + package validate + + # Default policy result + default validate = false + default msg_correct = "Not evaluated" + default msg_incorrect = "Not evaluated" + + # Expected values + expected_istiod_port := 15012 + expected_istiod_protocol := "TCP" + required_namespaces := {"authservice", "grafana", "keycloak", "loki", "metrics-server", "monitoring", "neuvector", "vector", "velero"} + + # Validate NetworkPolicy for Istiod in required namespaces + validate { + count(required_namespaces - correct_istiod_namespaces) == 0 + } + + msg_correct = sprintf("NetworkPolicies correctly configured for istiod in namespaces: %v.", [concat(", ", correct_istiod_namespaces)]) + msg_incorrect = msg { + missing_namespace := required_namespaces - correct_istiod_namespaces + count(missing_namespace) > 0 + msg := sprintf("NetworkPolicies not correctly configured for istiod in namespaces: %v.", [concat(", ", missing_namespace)]) + } else = "No incorrect istiod NetworkPolicies found." 
+ + # Helper to find correct NetworkPolicies + correct_istiod_policies = {policy | + policy := input.networkPolicies[_] + policy.spec.egress[_].to[_].podSelector.matchLabels["istio"] == "pilot" + policy.spec.egress[_].ports[_].port == expected_istiod_port + policy.spec.egress[_].ports[_].protocol == expected_istiod_protocol + } + + # Helper to extract namespaces of correct NetworkPolicies + correct_istiod_namespaces = {policy.metadata.namespace | + policy := correct_istiod_policies[_] + } + type: opa + title: secure-communication-with-istiod + uuid: 570e2dc7-e6c2-4ad5-8ea3-f07974f59747 + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: peerAuths + resource-rule: + group: security.istio.io + name: "" + namespaces: [] + resource: peerauthentications + version: v1beta1 + type: kubernetes + lula-version: "" + metadata: + name: enforce-mtls-strict + uuid: ca49ac97-487a-446a-a0b7-92b20e2c83cb + provider: + opa-spec: + output: + observations: + - validate.msg + validation: validate.validate + rego: | + package validate + + import future.keywords.every + + # Default policy result + default validate = false + default all_strict = false + default msg = "Not evaluated" + + validate { + result_all_strict.result + } + + msg = concat(" ", [result_all_strict.msg]) + + # Rego policy logic to evaluate if all PeerAuthentications have mtls mode set to STRICT + result_all_strict = {"result": true, "msg": msg} { + every peerAuthentication in input.peerAuths { + mode := peerAuthentication.spec.mtls.mode + mode == "STRICT" + } + msg := "All PeerAuthentications have mtls mode set to STRICT." + } else = {"result": false, "msg": msg} { + msg := "Not all PeerAuthentications have mtls mode set to STRICT." 
+ } + type: opa + title: enforce-mtls-strict + uuid: ca49ac97-487a-446a-a0b7-92b20e2c83cb + - description: | + lula-version: "" + metadata: + name: authorized-traffic-egress-PLACEHOLDER + uuid: 7455f86d-b79c-4226-9ce3-f3fb7d9348c8 + title: authorized-traffic-egress-PLACEHOLDER + uuid: 7455f86d-b79c-4226-9ce3-f3fb7d9348c8 + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: namespaces + resource-rule: + group: "" + name: "" + namespaces: [] + resource: namespaces + version: v1 + type: kubernetes + lula-version: "" + metadata: + name: all-namespaces-istio-injected + uuid: 0da39859-a91a-4ca6-bd8b-9b117689188f + provider: + opa-spec: + output: + observations: + - validate.msg + - validate.exempted_namespaces_msg + validation: validate.validate + rego: | + package validate + import future.keywords.every + import future.keywords.in + + default validate = false + default msg = "Not evaluated" + + # Validation + validate { + check_non_istio_injected_namespaces.result + } + msg = check_non_istio_injected_namespaces.msg + exempted_namespaces_msg = sprintf("Exempted Namespaces: %s", [concat(", ", exempted_namespaces)]) + + # List of exempted namespaces + exempted_namespaces := {"istio-system", "kube-system", "default", "istio-admin-gateway", + "istio-passthrough-gateway", "istio-tenant-gateway", "kube-node-lease", "kube-public", "uds-crds", + "uds-dev-stack", "uds-policy-exemptions", "zarf"} + + # Collect non-Istio-injected namespaces + non_istio_injected_namespaces := {ns.metadata.name | + ns := input.namespaces[_] + ns.kind == "Namespace" + not ns.metadata.labels["istio-injection"] == "enabled" + not ns.metadata.name in exempted_namespaces + } + + # Check no non-Istio-injected namespaces + check_non_istio_injected_namespaces = { "result": true, "msg": "All namespaces are Istio-injected" } { + count(non_istio_injected_namespaces) == 0 + } else = { "result": false, "msg": msg } { + msg := sprintf("Non-Istio-injected 
namespaces: %v", [non_istio_injected_namespaces]) + } + type: opa + title: all-namespaces-istio-injected + uuid: 0da39859-a91a-4ca6-bd8b-9b117689188f + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: gateways + resource-rule: + group: networking.istio.io + name: "" + namespaces: [] + resource: gateways + version: v1beta1 + type: kubernetes + lula-version: "" + metadata: + name: gateway-configuration-check + uuid: b0a8f21e-b12f-47ea-a967-2f4a3ec69e44 + provider: + opa-spec: + output: + observations: + - validate.msg + - validate.msg_existing_gateways + - validate.msg_allowed_gateways + validation: validate.validate + rego: | + package validate + import rego.v1 + + # default values + default validate := false + default msg := "Not evaluated" + + validate if { + check_expected_gw.result + check_all_gw_found.result + } + + msg := concat(" ", [check_expected_gw.msg, check_all_gw_found.msg]) + msg_existing_gateways := concat(", ", gateways) + msg_allowed_gateways := concat(", ", allowed) + + # Check if only allowed gateways are in the system + allowed := {"admin", "tenant", "passthrough"} + gateways := {sprintf("%s/%s", [gw.metadata.namespace, gw.metadata.name]) | gw := input.gateways[_]} + allowed_gateways := {sprintf("%s/%s", [gw.metadata.namespace, gw.metadata.name]) | gw := input.gateways[_]; gw_in_list(gw, allowed)} + actual_allowed := {s | g := gateways[_]; s := allowed[_]; contains(g, s)} + + check_expected_gw = {"result": true, "msg": msg} if { + gateways == allowed_gateways + msg := "Only allowed gateways found." 
+ } else = {"result": false, "msg": msg} if { + msg := sprintf("Some disallowed gateways found: %v.", [gateways-allowed_gateways]) + } + + gw_in_list(gw, allowed) if { + contains(gw.metadata.name, allowed[_]) + } + + # Check if the entire set contains all required gateways + check_all_gw_found = {"result": true, "msg": msg} if { + actual_allowed == allowed + msg := "All gateway types found." + } else = {"result": false, "msg": msg} if { + msg := sprintf("Gateway type(s) missing: %v.", [allowed - actual_allowed]) + } + type: opa + title: gateway-configuration-check + uuid: b0a8f21e-b12f-47ea-a967-2f4a3ec69e44 + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: authorizationPolicies + resource-rule: + group: security.istio.io + name: "" + namespaces: [] + resource: authorizationpolicies + version: v1beta1 + type: kubernetes + lula-version: "" + metadata: + name: istio-rbac-enforcement-check + uuid: 7b045b2a-106f-4c8c-85d9-ae3d7a8e0e28 + provider: + opa-spec: + output: + observations: + - validate.msg + - validate.msg_authPolicies + validation: validate.validate + rego: | + package validate + + # Default policy result + default validate = false + default msg = "Istio RBAC not enforced" + + # Evaluation for Istio Authorization Policies + validate { + count(all_auth_policies) > 0 + } + + # Get all authorization policies + all_auth_policies := { sprintf("%s/%s", [authPolicy.metadata.namespace, authPolicy.metadata.name]) | + authPolicy := input.authorizationPolicies[_]; authPolicy.kind == "AuthorizationPolicy" } + + msg = "Istio RBAC enforced" { + validate + } + msg_authPolicies = sprintf("Authorization Policies: %v", [concat(", ", all_auth_policies)]) + type: opa + title: istio-rbac-enforcement-check + uuid: 7b045b2a-106f-4c8c-85d9-ae3d7a8e0e28 + - description: | + lula-version: "" + metadata: + name: istio-rbac-for-approved-personnel-PLACEHOLDER + uuid: 9b361d7b-4e07-40db-8b86-3854ed499a4b + title: 
istio-rbac-for-approved-personnel-PLACEHOLDER + uuid: 9b361d7b-4e07-40db-8b86-3854ed499a4b + - description: | + lula-version: "" + metadata: + name: external-traffic-managed-PLACEHOLDER + uuid: 19faf69a-de74-4b78-a628-64a9f244ae13 + provider: + opa-spec: + rego: | + package validate + default validate := false + # This policy could check meshConfig.outboundTrafficPolicy.mode (default is ALLOW_ANY) + # Possibly would need a ServiceEntry(?) + # (https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services) + type: opa + title: external-traffic-managed-PLACEHOLDER + uuid: 19faf69a-de74-4b78-a628-64a9f244ae13 + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: istioddeployment + resource-rule: + group: apps + name: istiod + namespaces: + - istio-system + resource: deployments + version: v1 + - description: "" + name: istiodhpa + resource-rule: + group: autoscaling + name: istiod + namespaces: + - istio-system + resource: horizontalpodautoscalers + version: v2 + type: kubernetes + lula-version: "" + metadata: + name: istio-health-check + uuid: 67456ae8-4505-4c93-b341-d977d90cb125 + provider: + opa-spec: + output: + observations: + - istiohealth.deployment_message + - istiohealth.hpa_message + validation: istiohealth.is_istio_healthy + rego: | + package istiohealth + + default is_istio_healthy = false + default deployment_message = "Deployment status not evaluated" + default hpa_message = "HPA status not evaluated" + + # Check if the Istio Deployment is healthy + is_istio_healthy { + count(input.istioddeployment.status.conditions) > 0 + all_deployment_conditions_are_true + input.istiodhpa.status.currentReplicas >= input.istiodhpa.spec.minReplicas + } + + all_deployment_conditions_are_true { + # Ensure every condition in the array has a status that is not "False" + all_true = {c | c := input.istioddeployment.status.conditions[_]; c.status != "False"} + 
count(all_true) == count(input.istioddeployment.status.conditions) + } + + deployment_message = msg { + all_deployment_conditions_are_true + msg := "All deployment conditions are true." + } else = msg { + msg := "One or more deployment conditions are false." + } + + hpa_message = msg { + input.istiodhpa.status.currentReplicas >= input.istiodhpa.spec.minReplicas + msg := "HPA has sufficient replicas." + } else = msg { + msg := "HPA does not have sufficient replicas." + } + type: opa + title: istio-health-check + uuid: 67456ae8-4505-4c93-b341-d977d90cb125 + - description: | + domain: + kubernetes-spec: + create-resources: null + resources: + - description: "" + name: gateways + resource-rule: + group: networking.istio.io + name: "" + namespaces: [] + resource: gateways + version: v1beta1 + type: kubernetes + lula-version: "" + metadata: + name: ingress-traffic-encrypted + uuid: fd071676-6b92-4e1c-a4f0-4c8d2bd55aed + provider: + opa-spec: + output: + observations: + - validate.msg + - validate.msg_exempt + validation: validate.validate + rego: | + package validate + import future.keywords.every + + default validate = false + default msg = "Not evaluated" + + # Validation + validate { + check_gateways_allowed.result + } + msg := check_gateways_allowed.msg + msg_exempt := sprintf("Exempted Gateways: %s", [concat(", ", exempt_gateways)]) + + # Collect gateways that do not encrypt ingress traffic + gateways_disallowed = {sprintf("%s/%s", [gateway.metadata.namespace, gateway.metadata.name]) | + gateway := input.gateways[_]; + not allowed_gateway(gateway) + } + + check_gateways_allowed = {"result": true, "msg": "All gateways encrypt ingress traffic"} { + count(gateways_disallowed) == 0 + } else = {"result": false, "msg": msg} { + msg := sprintf("Some gateways do not encrypt ingress traffic: %s", [concat(", ", gateways_disallowed)]) + } + + # Check allowed gateway + allowed_gateway(gateway) { + every server in gateway.spec.servers { + allowed_server(server) + } + } + + 
exempt_gateways := {"istio-passthrough-gateway/passthrough-gateway"} + allowed_gateway(gateway) { + sprintf("%s/%s", [gateway.metadata.namespace, gateway.metadata.name]) in exempt_gateways + # *Unchecked condition that exempted gateway is only used by virtual services that route https traffic + # Find all virtual services that use this gateway + # Check that vs has https scheme + } + + # Check allowed server spec in gateway + allowed_server(server) { + server.port.protocol == "HTTP" + server.tls.httpsRedirect == true + } + + allowed_server(server) { + server.port.protocol == "HTTPS" + server.tls.mode in {"SIMPLE", "OPTIONAL_MUTUAL"} + } + type: opa + title: ingress-traffic-encrypted + uuid: fd071676-6b92-4e1c-a4f0-4c8d2bd55aed +>>>>>>> 2f6ed02 (feat!: switch from promtail to vector (#724)) components: - control-implementations: - description: Control Implementation Description diff --git a/src/pepr/zarf.yaml b/src/pepr/zarf.yaml index 267a24f96..235ce27fe 100644 --- a/src/pepr/zarf.yaml +++ b/src/pepr/zarf.yaml @@ -48,3 +48,26 @@ components: - name: module valuesFiles: - values.yaml +<<<<<<< HEAD +======= + actions: + onDeploy: + before: + - mute: true + description: "Update helm ownership for Pepr resources if necessary during the upgrade" + cmd: | + ./zarf tools kubectl annotate secret -n pepr-system pepr-uds-core-api-token meta.helm.sh/release-name=module --overwrite || true + ./zarf tools kubectl annotate secret -n pepr-system pepr-uds-core-module meta.helm.sh/release-name=module --overwrite || true + ./zarf tools kubectl annotate secret -n pepr-system pepr-uds-core-tls meta.helm.sh/release-name=module --overwrite || true + ./zarf tools kubectl annotate serviceaccount -n pepr-system pepr-uds-core meta.helm.sh/release-name=module --overwrite || true + ./zarf tools kubectl annotate clusterrolebinding pepr-uds-core meta.helm.sh/release-name=module --overwrite || true + ./zarf tools kubectl annotate clusterrole pepr-uds-core meta.helm.sh/release-name=module 
--overwrite || true
+        ./zarf tools kubectl annotate role -n pepr-system pepr-uds-core-store meta.helm.sh/release-name=module --overwrite || true
+        ./zarf tools kubectl annotate rolebinding -n pepr-system pepr-uds-core-store meta.helm.sh/release-name=module --overwrite || true
+        ./zarf tools kubectl annotate service -n pepr-system pepr-uds-core meta.helm.sh/release-name=module --overwrite || true
+        ./zarf tools kubectl annotate service -n pepr-system pepr-uds-core-watcher meta.helm.sh/release-name=module --overwrite || true
+        ./zarf tools kubectl annotate deployment -n pepr-system pepr-uds-core meta.helm.sh/release-name=module --overwrite || true
+        ./zarf tools kubectl annotate deployment -n pepr-system pepr-uds-core-watcher meta.helm.sh/release-name=module --overwrite || true
+        ./zarf tools kubectl annotate mutatingwebhookconfiguration -n pepr-system pepr-uds-core meta.helm.sh/release-name=module --overwrite || true
+        ./zarf tools kubectl annotate validatingwebhookconfiguration -n pepr-system pepr-uds-core meta.helm.sh/release-name=module --overwrite || true
+>>>>>>> 2f6ed02 (feat!: switch from promtail to vector (#724))
diff --git a/src/vector/chart/templates/uds-exemption.yaml b/src/vector/chart/templates/uds-exemption.yaml
index c78054815..7ec004775 100644
--- a/src/vector/chart/templates/uds-exemption.yaml
+++ b/src/vector/chart/templates/uds-exemption.yaml
@@ -1,6 +1,6 @@
 # Copyright 2024 Defense Unicorns
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 apiVersion: uds.dev/v1alpha1
 kind: Exemption
 metadata:
diff --git a/src/vector/chart/templates/uds-package.yaml b/src/vector/chart/templates/uds-package.yaml
index e4ac9d9c9..329effcf3 100644
--- a/src/vector/chart/templates/uds-package.yaml
+++ b/src/vector/chart/templates/uds-package.yaml
@@ -1,6 +1,6 @@
 # Copyright 2024 Defense Unicorns
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 apiVersion: uds.dev/v1alpha1
 kind: Package
 metadata:
diff --git a/src/vector/chart/values.yaml b/src/vector/chart/values.yaml
index c4bf61c2a..5ef1e885e 100644
--- a/src/vector/chart/values.yaml
+++ b/src/vector/chart/values.yaml
@@ -1,6 +1,6 @@
 # Copyright 2024 Defense Unicorns
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 additionalNetworkAllow: []
 # Examples:
 # - direction: Egress
diff --git a/src/vector/common/zarf.yaml b/src/vector/common/zarf.yaml
index 631dce2c9..89f933638 100644
--- a/src/vector/common/zarf.yaml
+++ b/src/vector/common/zarf.yaml
@@ -1,6 +1,6 @@
 # Copyright 2024 Defense Unicorns
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 kind: ZarfPackageConfig
 metadata:
   name: uds-core-vector-common
diff --git a/src/vector/tasks.yaml b/src/vector/tasks.yaml
index f044c85d5..afc4a5ea7 100644
--- a/src/vector/tasks.yaml
+++ b/src/vector/tasks.yaml
@@ -1,6 +1,6 @@
 # Copyright 2024 Defense Unicorns
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 tasks:
   - name: validate
     actions:
diff --git a/src/vector/values/registry1-values.yaml b/src/vector/values/registry1-values.yaml
index 95187afb4..961e5a5ce 100644
--- a/src/vector/values/registry1-values.yaml
+++ b/src/vector/values/registry1-values.yaml
@@ -1,6 +1,6 @@
 # Copyright 2024 Defense Unicorns
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 image:
   repository: registry1.dso.mil/ironbank/opensource/timberio/vector
   tag: 0.41.1
diff --git a/src/vector/values/unicorn-values.yaml b/src/vector/values/unicorn-values.yaml
index 5a6d40405..68e19e7d7 100644
--- a/src/vector/values/unicorn-values.yaml
+++ b/src/vector/values/unicorn-values.yaml
@@ -1,6 +1,6 @@
 # Copyright 2024 Defense Unicorns
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 image:
   repository: cgr.dev/du-uds-defenseunicorns/vector
   tag: 0.41.1
diff --git a/src/vector/values/upstream-values.yaml b/src/vector/values/upstream-values.yaml
index 8954e9d7d..9cc0bce8d 100644
--- a/src/vector/values/upstream-values.yaml
+++ b/src/vector/values/upstream-values.yaml
@@ -1,6 +1,6 @@
 # Copyright 2024 Defense Unicorns
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 image:
   repository: timberio/vector
   tag: 0.41.1-distroless-static
diff --git a/src/vector/values/values.yaml b/src/vector/values/values.yaml
index aff279fe7..7946cfa6f 100644
--- a/src/vector/values/values.yaml
+++ b/src/vector/values/values.yaml
@@ -1,6 +1,6 @@
 # Copyright 2024 Defense Unicorns
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 # Run as an agent daemonset
 role: "Agent"
diff --git a/src/vector/zarf.yaml b/src/vector/zarf.yaml
index 4a6b4da8c..ec1a31437 100644
--- a/src/vector/zarf.yaml
+++ b/src/vector/zarf.yaml
@@ -1,6 +1,6 @@
 # Copyright 2024 Defense Unicorns
 # SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 kind: ZarfPackageConfig
 metadata:
   name: uds-core-vector
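Reviewer note: for anyone not fluent in Rego, the `ingress-traffic-encrypted` validation in the OSCAL changes above boils down to a set difference: collect every Gateway whose servers neither redirect HTTP to HTTPS nor terminate TLS, then subtract the explicit passthrough exemption. A minimal Python sketch of that logic follows; the dict shapes and gateway names here are simplified, hypothetical stand-ins for the real Gateway resources Lula feeds the policy, not its actual input format:

```python
# Sketch of the ingress-traffic-encrypted Rego check, assuming simplified
# Gateway dicts of the form {"metadata": {...}, "spec": {"servers": [...]}}.

# Mirrors the policy's exempt_gateways set (passthrough traffic is allowed).
EXEMPT_GATEWAYS = {"istio-passthrough-gateway/passthrough-gateway"}


def server_allowed(server: dict) -> bool:
    """A server encrypts ingress if it redirects HTTP or terminates TLS."""
    protocol = server["port"]["protocol"]
    tls = server.get("tls") or {}
    if protocol == "HTTP":
        return tls.get("httpsRedirect") is True
    if protocol == "HTTPS":
        return tls.get("mode") in {"SIMPLE", "OPTIONAL_MUTUAL"}
    return False


def gateway_allowed(gateway: dict) -> bool:
    """A gateway passes if it is exempt or every server is allowed."""
    name = f"{gateway['metadata']['namespace']}/{gateway['metadata']['name']}"
    if name in EXEMPT_GATEWAYS:
        return True
    return all(server_allowed(s) for s in gateway["spec"]["servers"])


def disallowed(gateways: list) -> set:
    """Set of namespace/name strings for gateways that fail the check."""
    return {
        f"{gw['metadata']['namespace']}/{gw['metadata']['name']}"
        for gw in gateways
        if not gateway_allowed(gw)
    }
```

As in the policy, `OPTIONAL_MUTUAL` is accepted alongside `SIMPLE` so gateways doing optional mTLS still count as encrypted, and the passthrough exemption carries the same unchecked assumption the Rego comment calls out: nothing verifies that only HTTPS virtual services route through it.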