chore(adr): add ADR for E2E intent and design #432
Draft: brandtkeller wants to merge 6 commits into main from adr_0008-e2e-oscal
Commits:
- 5e89332 chore(adr): e2e oscal stash changes
- d711718 chore(adr): documenting context and strategy
- 22ed016 chore(adr): add mermaid diagram to adr
- a9b04a1 chore(adr): establish content management and validation initiatives
- 6eff7f9 chore(adr): add decision and consequences
- b65eb90 Merge branch 'main' into adr_0008-e2e-oscal
# 8. End to End OSCAL Artifact Strategy

Date: 2024-05-17

## Status

Proposed

## Context

A number of experiments have been conducted to perform composable actions across multiple Open Security Controls Assessment Language (OSCAL) artifacts and models. While these experiments do allow OSCAL to be centrally managed through declarative manifests, they add overhead in the form of additional tooling and orchestration, and they duplicate the intended behavior of OSCAL import fields.

The OSCAL models are designed with support for identifying when data from one model should be imported into another. Lula will both support and optimize this intended functionality to enable composable workflows between models and their supporting artifacts. This import process is an OSCAL-native way to establish relationships across artifacts and models, which Lula can then enrich to provide meaningful context and human-readable reporting.

### OSCAL Context

Composing compliance and security control information for various components and systems relies upon the context required and the purpose of the artifact. The current [OSCAL models](https://pages.nist.gov/OSCAL/resources/concepts/layer/) are categorized into three layers:
- Controls layer
> Defines a set of controls intended to reduce the risk to a system
- Implementation layer
> Describes the implementation of a system under a specific baseline, as well as the individual components that may be incorporated into a system
- Assessment layer
> Communicates all assessment findings, including supporting evidence, and identifies and manages the remediation of risks to a system identified as a result of assessment activities

The controls layer and its catalogs/profiles are often associated with the authority behind the regulatory information, though they are not limited to authorities and can also be authored internally by organizations or shared in the open. The implementation layer builds upon the intent authored in the controls layer and allows the implementation of controls for a system to be authored in a System Security Plan (SSP), as well as in the modular components that comprise that system. With the components of a system _and_ the system itself defined, the system can be assessed against one or many baselines, with the resulting information capturing the assessment conducted, the results of that assessment, and plans for remediating identified gaps or risks.

Lula will operate against the intent of a catalog baseline for the context in which GRC information is being authored. That context will vary depending on where in the lifecycle the information is produced and consumed, following this pattern:
- A `component definition` will map one-to-many standards/benchmarks/policies to one-to-many system components
  - Where possible, a single `component definition` should focus on a single system component
- A `system security plan` will aggregate one-to-many `component definitions` into a single system definition
- An `assessment plan` will aggregate one-to-many `component definitions` into a plan with depth of control satisfaction based upon the many layers of implementation for any given control
- An `assessment result` will outline the state after assessment based upon the result of each control as identified in the `assessment plan`
- A `plan of actions and milestones` will outline planned remediation for findings in the `assessment results` that were identified as not-satisfied

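As a concrete illustration of the OSCAL-native import mechanism referenced above, a system-level component definition might pull in reusable component definitions like the following minimal sketch (the UUID and file paths are hypothetical placeholders, not part of this ADR):

```yaml
# Hypothetical fragment of a system-level OSCAL component definition.
# The import-component-definitions field is the OSCAL-native way to
# reference other component definition artifacts by href.
component-definition:
  uuid: 11111111-2222-4333-8444-555555555555
  import-component-definitions:
    - href: file://components/component-a/oscal-component.yaml
    - href: file://components/component-b/oscal-component.yaml
```

Tooling consuming such an artifact would resolve each `href` and merge the imported implemented requirements into the aggregate system view.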
### Artifact Strategy

Given the context above, Lula will enrich the process through the generation and automation of artifacts and the content that comprises them. There are multiple workflows that allow for similar outcomes; Lula focuses on enhancing the use of transient data from available context through, but not limited to, the following:
- Generating a `component definition` allows for the initial mapping of controls from one-to-many catalogs/standards for a given component
- A `validation` is a Lula construct for automation, used in a body of evidence towards a control/implemented-requirement being `satisfied` or `not-satisfied`
  - A single control/implemented-requirement may have many `validations`
- A `component definition` serves both as reusable compliance information and as the interface for overriding validations from other imported/inherited control information for a given component
- A `component definition` can be validated in isolation, using established `validations` to assess the component against an established threshold
  - The threshold and output of a validation are represented in the `assessment results` model/artifact
- When a target environment has been established, multiple `component definitions` can be aggregated into a top-level `component definition` via the `import component definitions` field
  - This system `component definition` allows additional context/controls/implemented-requirements to be added based upon configuration or potential inheritance
  - This system `component definition` will then be used to generate a `system security plan` and an `assessment plan`
- It is intended that the `assessment plan` be used for performing automated validation against a system in accordance with Risk Management Framework processes
- Performing validation with the `assessment plan` will produce `assessment result` artifacts, which can be used for compliance threshold decision making as identified above
- With an `assessment result`, a `plan of actions and milestones` can be generated to outline the remediation of findings that were identified as `not-satisfied`

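The threshold decision making described above can be sketched as follows. This is an illustrative model only, not Lula's actual implementation; the `Finding` type, field names, and ratio-based policy are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One control's outcome from an assessment result (illustrative shape)."""
    control_id: str
    state: str  # "satisfied" or "not-satisfied"

def meets_threshold(findings: list[Finding], threshold: float) -> bool:
    """Return True when the ratio of satisfied findings meets the threshold."""
    if not findings:
        return False
    satisfied = sum(1 for f in findings if f.state == "satisfied")
    return satisfied / len(findings) >= threshold

findings = [
    Finding("ac-1", "satisfied"),
    Finding("ac-2", "satisfied"),
    Finding("ac-3", "not-satisfied"),
]
decision = meets_threshold(findings, threshold=0.6)
```

A real implementation would read findings from the `assessment results` artifact and might weight controls or require specific controls to be satisfied rather than using a flat ratio.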
There may be additional workflows for generation and context-building within these artifacts that become advisable based on situational needs.

```mermaid
flowchart TD
a[Component Repo A] --> |generate component| A(Component Definition A)
b[Component Repo B] --> |generate component| B(Component Definition B)
c[Component Repo C] --> |generate component| C(Component Definition C)
A --> |validate| D(Assessment Result)
B --> |validate| E(Assessment Result)
C --> |validate| F(Assessment Result)
d[Staging Environment] --> |generate component| G(Component Definition Staging)
A --> G
B --> G
C --> G
G --> |generate assessment-plan| H(Assessment Plan)
H --> |validate| I(Assessment Results)
e[Prod Environment] --> |generate component| J(Component Definition Prod)
G --> J
J --> |generate assessment-plan| K(Assessment Plan)
J --> |generate system-security-plan| L(System Security Plan)
K --> |validate| M(Assessment Results)
M --> |generate poam| N(Plan of Actions and Milestones)
```

### Content Management

Lula will enable artifacts to be co-located in a single file or written to individually managed files. The [go-oscal](https://github.com/defenseunicorns/go-oscal) library and its OSCAL types were built to support a use case where all OSCAL models can be represented as a single artifact. This single-artifact representation allows for context sharing between models and for handling existing data.

Lula will enable models to be represented as one or many artifacts:
- Each model can be represented as a single artifact/file, with references established between artifacts
  - This results in a collection of many files, where each file represents a given model
- All models can be represented as a single artifact/file, with references established between models
  - This results in a single file containing all machine-readable information

||
Both scenarios will support composition/decomposition from one scenario to the other. The core focus of how Lula maintains the content of each model will be represented in the design principle for OSCAL data management that focuses on ensuring the existing data (whether generated, manually authored, or both) is retained across operations unless otherwise documented as owned by automation. | ||
|
||
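The composition/decomposition between the two layouts can be sketched as follows. This is a simplified illustration, assuming each file holds one top-level model key; it is not the go-oscal implementation:

```python
import json
import os

def compose(model_files: list[str]) -> dict:
    """Merge one-file-per-model artifacts into a single artifact dict.

    Assumes each file contains one top-level model key, e.g.
    {"component-definition": {...}} or {"system-security-plan": {...}}.
    """
    artifact: dict = {}
    for path in model_files:
        with open(path) as f:
            artifact.update(json.load(f))
    return artifact

def decompose(artifact: dict, out_dir: str) -> list[str]:
    """Write each top-level model back to its own file; return paths written."""
    paths = []
    for model, body in artifact.items():
        path = os.path.join(out_dir, f"{model}.json")
        with open(path, "w") as f:
            json.dump({model: body}, f, indent=2)
        paths.append(path)
    return paths
```

The retention principle above implies that a real implementation must also merge rather than overwrite when a target file already holds manually authored content.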
### Validation

A `validation` is automation that pairs the collection of some data with a policy describing the adherence of that data required for a given control. This automation will encompass technical and non-technical controls in order to facilitate the use of OSCAL for end-to-end accreditation. A `domain` is a Lula construct that provides an interface and guardrails for collecting data or performing some audit, while also defining the expected structure of inputs for a given `validation`. This means Lula will support automation that processes evidence for a given requirement, provided a domain exists to process the required data - enabling technical as well as non-technical requirements to be assessed.

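The domain/validation split described above can be sketched as an interface plus a policy function. The class and function names here are illustrative assumptions, not Lula's actual API:

```python
from typing import Any, Callable, Dict

class Domain:
    """Interface for collecting evidence from some source (API, cluster, file)."""
    def collect(self) -> Dict[str, Any]:
        raise NotImplementedError

class StaticDomain(Domain):
    """Trivial domain returning fixed data, standing in for a real collector."""
    def __init__(self, data: Dict[str, Any]):
        self.data = data

    def collect(self) -> Dict[str, Any]:
        return self.data

def validate(domain: Domain, policy: Callable[[Dict[str, Any]], bool]) -> str:
    """Run a validation: collect evidence, apply the policy, report the state."""
    evidence = domain.collect()
    return "satisfied" if policy(evidence) else "not-satisfied"

result = validate(
    StaticDomain({"encryption": "enabled"}),
    lambda e: e.get("encryption") == "enabled",
)  # -> "satisfied"
```

Because only the `Domain` interface varies, the same pattern covers technical evidence (e.g., querying a cluster) and non-technical evidence (e.g., a recorded attestation), matching the intent stated above.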
## Decision

The decision is to utilize OSCAL-native processes to orchestrate end-to-end compliance. This means further alignment to the OSCAL framework and ultimately leads to artifacts used with Lula having greater provenance and portability across many reporting systems.

## Consequences

- Further alignment with the OSCAL framework
- Reduced overhead for tooling, maintenance, and support of custom workflows
- Restricted to current OSCAL constructs or custom namespace implementations
Should it be that the `assessment plan` will aggregate a one-to-one `system-security-plan`? Thinking the assessment plan links to the SSP, which links to the components. It technically assesses the admin controls too.

https://pages.nist.gov/OSCAL-Reference/models/v1.1.2/assessment-plan/json-reference/#/assessment-plan/import-ssp
Good question - it may very well be.

If the data flow is component -> assessment plan -> system security plan, then yes, that may be the case. But I see a lot of overlap between components and SSP that makes me wonder about this workflow.

I'm going to look at this ADR again - I don't want to focus too much on the solutioning (hence the mention of many possible workflows). But I think I need to highlight another point: we might use components to generate an AP & SSP, but when the generation of, say, the AP detects that an SSP exists, it can establish context sharing. So as you implement each model, you may have cross-model connections being made.
"we might use components to generate an AP & SSP - but when the generation of, say, the AP detects that an SSP exists - it can establish context sharing. So as you implement each model, you may have cross-model connections being made."

^ This, 100% agree. I also agree there are several possibilities for our workflows.

I think the OSCAL Data Flow viewpoint is [attached image]. This may show it a little better too: https://pages.nist.gov/OSCAL/resources/concepts/layer/assessment/assessment-results/#key-concepts