feat: initial lula report #599
base: main
Conversation
Quick feedback loop - I can trace the code, but I would always recommend starting with docs on a new feature. Capture the purpose and let people challenge that intent. I don't believe this needs an ADR, so docs would be sufficient. At the same time, I think we should keep challenging whether we know enough to pursue it further: does this provide enough value to be of use, or does some infrastructure require more maturation first? Transparently, I do not know if we have enough context to make the result meaningful.
Still need to add tests, but I want to figure out where/what packages to create to tidy up the main report code.
Trending in the right direction now that we're getting to the core of what we need (or can provide currently).
I think we're still missing a bit of big-picture vision, which I/the team can help define.
See my comment about the context that drives the reporting (as we see it today). Ideally we as a team need to provide a clearer deliverable on our issues (WIP) - but I appreciate you pursuing this nonetheless.
My current take:
Reporting should be a function of collecting available context and reporting accordingly.
- First iteration starts with the component-definition and reports the controls available.
- With the presence of a catalog/profile, we can overlay that control information to calculate percentages.
- With the presence of assessment-results, we can check satisfied/not-satisfied.
Given that - we can start migrating functionality to the oscal model libraries for each model as applicable to get the context we need to return here for collective reporting.
My opinion is that reporting is not focused on a single model but rather driven by the total context. I may be very wrong - let's chat as a team if anyone feels otherwise.
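The layered, context-driven approach described above could be sketched roughly as follows. All of the type and function names here (`ReportContext`, `BuildReport`, and the model structs) are illustrative stand-ins, not the real lula or go-oscal types:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the OSCAL models (not the real types).
type ComponentDefinition struct{ Controls []string }
type Catalog struct{ Controls []string }
type AssessmentResults struct{ Satisfied map[string]bool }

// ReportContext gathers whatever models happen to be available.
type ReportContext struct {
	CompDef *ComponentDefinition
	Catalog *Catalog
	Results *AssessmentResults
}

// BuildReport layers more detail onto the report as more context is present.
func BuildReport(ctx ReportContext) []string {
	var lines []string
	if ctx.CompDef == nil {
		return lines
	}
	// First iteration: report the controls available.
	lines = append(lines, fmt.Sprintf("controls mapped: %d", len(ctx.CompDef.Controls)))

	// With a catalog/profile present, overlay to compute coverage percentages.
	if ctx.Catalog != nil && len(ctx.Catalog.Controls) > 0 {
		pct := 100 * len(ctx.CompDef.Controls) / len(ctx.Catalog.Controls)
		lines = append(lines, fmt.Sprintf("coverage: %d%%", pct))
	}

	// With assessment-results present, report satisfied/not-satisfied.
	if ctx.Results != nil {
		satisfied := 0
		for _, ok := range ctx.Results.Satisfied {
			if ok {
				satisfied++
			}
		}
		lines = append(lines, fmt.Sprintf("satisfied: %d/%d", satisfied, len(ctx.Results.Satisfied)))
	}
	return lines
}

func main() {
	ctx := ReportContext{
		CompDef: &ComponentDefinition{Controls: []string{"ac-2", "ac-3"}},
		Catalog: &Catalog{Controls: []string{"ac-2", "ac-3", "ac-5", "ac-6"}},
	}
	for _, line := range BuildReport(ctx) {
		fmt.Println(line)
	}
}
```

The point of the sketch is only the shape: one entry point that inspects which models are present and degrades gracefully, rather than one report path per model.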
src/cmd/report/report.go (outdated)
```go
Short:   "Build a compliance report",
Example: reportHelp,
Run: func(_ *cobra.Command, args []string) {
	if opts.InputFile == "" {
```
If we mark this flag as required then we can remove this check
Swapped that to required and removed the check.
src/cmd/report/report.go (outdated)
```go
spinner.Success()

spinner = message.NewProgressSpinner("Determining OSCAL model type")
modelType, err := determineOSCALModel(oscalModels)
```
This may be a good spot to establish a separation of duties -> We've performed the essential operation of retrieving/parsing the data -> now let's create a function that handles the next steps.
I ended up splitting that up twice: the first pass just separated duties, but I ultimately added two new functions - one public function that runs the actual logic, and a second that acts as a handler for the model handlers.
I ran into an issue testing, so I came back and added the function that runs the logic.
No issues with the current direction.
I would remove the current documentation and run `lula internal gen-cli-docs` to generate the new CLI docs for this command. Iterate as necessary to capture as much context as possible in the CLI command code. If you need additional docs, place them outside the docs/cli-commands directory.
Otherwise this gets us moving in the right direction - as I mentioned previously, I expect future iterations to be less of a model-by-model reporting structure and more a holistic report of all models, but that is out of scope currently.
src/cmd/report/report.go (outdated)
```go
spinner.Success()

spinner = message.NewProgressSpinner("Determining OSCAL model type")
modelType, err := determineOSCALModel(oscalModels)
```
The `oscal` pkg under src/pkg/common/oscal contains a `GetOscalModel()` that returns the same values. I would consider removing this, dropping the required parameter from the `handleOSCALModel()` function, and calling `GetOscalModel()` directly inside `handleOSCALModel()` - unless you have some unique requirement.
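With hypothetical stand-in types (the real helper lives in src/pkg/common/oscal, and `getOscalModel` below only mirrors its described behavior, not its actual signature), the suggested deduplication looks roughly like:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for the parsed OSCAL document; the real type comes from the
// go-oscal model libraries.
type oscalModels struct {
	ComponentDefinition *struct{}
	Catalog             *struct{}
}

// getOscalModel mirrors the behavior attributed to the existing
// oscal.GetOscalModel() helper: derive the model type from the document
// itself instead of requiring callers to pass it in.
func getOscalModel(m *oscalModels) (string, error) {
	switch {
	case m.ComponentDefinition != nil:
		return "component", nil
	case m.Catalog != nil:
		return "catalog", nil
	default:
		return "", errors.New("unable to determine OSCAL model type")
	}
}

// handleOSCALModel no longer takes a modelType parameter; it resolves the
// type internally, per the review suggestion.
func handleOSCALModel(m *oscalModels) error {
	modelType, err := getOscalModel(m)
	if err != nil {
		return err
	}
	fmt.Println("handling model:", modelType)
	return nil
}

func main() {
	_ = handleOSCALModel(&oscalModels{ComponentDefinition: &struct{}{}})
}
```

The win is that `determineOSCALModel` disappears and callers of `handleOSCALModel` can no longer pass a model type that disagrees with the document.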
I ended up using a few more functions from common where I found redundancy.
src/test/e2e/report_test.go (outdated)
```go
	"github.com/defenseunicorns/lula/src/cmd/report"
)

func TestReportCommand(t *testing.T) {
```
Any reason these cannot serve as more of a unit test coupled with the `report` package? I'd like to start iterating on `E2E` tests toward a more CLI-driven context.
Swapped those to unit tests and I think created a more CLI-driven approach.
src/test/e2e/report_test.go (outdated)
```go
	expectErr: false,
},
{
	name: "Catalog OSCAL Model",
```
In addition to converting this test to more of a unit test, it may help reduce the test-file requirements. Consider using data structures and objects where possible (look at your function inputs/outputs and consider the requirements).
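A minimal sketch of that suggestion - table-driven cases built from in-memory structures rather than fixture files on disk. `mockComponent` and `countControls` are hypothetical stand-ins; real tests would use the oscalTypes component-definition type and the report package's functions:

```go
package main

import "fmt"

// Hypothetical minimal shape of the data the report code consumes.
type mockComponent struct {
	Name     string
	Controls []string
}

// countControls stands in for whatever report function is under test.
func countControls(c mockComponent) int { return len(c.Controls) }

func main() {
	// Cases defined inline, so no testdata files are needed.
	cases := []struct {
		name string
		in   mockComponent
		want int
	}{
		{"empty component", mockComponent{Name: "a"}, 0},
		{"two controls", mockComponent{Name: "b", Controls: []string{"ac-2", "ac-3"}}, 2},
	}
	for _, tc := range cases {
		got := countControls(tc.in)
		fmt.Printf("%s: got %d, want %d\n", tc.name, got, tc.want)
	}
}
```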
Created a mock component definition data structure to have some data for testing.
src/cmd/report/report.go (outdated)
```go
}

// Handler for Component Definition OSCAL files to create the report
func handleComponentDefinition(componentDefinition *oscalTypes_1_1_2.ComponentDefinition, format string) error {
```
Should we consider composing comp-defs here prior to processing?
Good call! Added the compose pieces to the beginning there.
src/cmd/report/report.go (outdated)
```go
Use:     "report",
Hidden:  false,
Aliases: []string{"r"},
Short:   "Build a compliance report",
```
Let's challenge this a bit more -> what function does or can this command serve?
Report on the current state of compliance
or....
Maybe not "build a compliance report" as in a PDF, but I can see it pulling various statistics and data out of the OSCAL to be easier to read and understand. I am picturing a thousands-of-lines SSP where I want to know: what controls am I missing, what controls do I have, do I have multiple frameworks, and can I see that per framework? For POAMs I'm thinking: how do I see my criticals or highs specifically, and how do I see the time left to patch moderates?
For components I would like a quick count of controls per framework, how many have validations, and how many controls are in this component-definition that aren't in a profile/catalog.
I can also see parts of it being reused, similar to `lula evaluate -f assessment-result.yaml --summary` doing the summary part.
Maybe also the part that gets the data for UDS Runtime to present. Though there it would be similar questions/data points around the OSCAL to show.
src/cmd/report/report.go (outdated)
```go
lula report -f oscal-component-definition.yaml --file-format yaml
`

var reportCmd = &cobra.Command{
```
I know this has been up for some time - can we align the creation of this command to `template` such that the same cobra patterns are followed?
Believe I've restructured this.
```go
)

// TestLulaReportValidComponent checks that the 'lula report' command works with a valid component definition with multiple components and framework prop.
func TestLulaReportValidComponent(t *testing.T) {
```
Heading in the right direction. We have some examples and test utilities to pull from. Would like to standardize on that and @meganwolf0 may have some insights for what we need for structure.
Looks like this might be in reference to doing some golden-file checks. I think the PR for compose should provide some reusable functions for checking the data from report, and tools_compose_test.go and tools_template_test.go might be decent-ish places to start from.
(I think there was also a discussion we were going to have as a team about e2e testing practices/structure so some of that could change)
I believe I've followed the same structure.
src/cmd/report/report.go (outdated)
```go
	return strings.HasPrefix(str, "http://") || strings.HasPrefix(str, "https://")
}

func PrintReport(data ReportData, format string) error {
```
Haven't picked through this, so this function might not be relevant, but the `Table` under `message` might save re-implementing some of this functionality.
I added that table functionality, I like the way that looks.
```go
	return err == nil
}

func fetchOrReadFile(source string) ([]byte, error) {
```
`network.Fetch()` will return file bytes for local files on the filesystem or remote files over the network. I don't believe this functionality is needed unless you have an edge case you're solving for.
```go
)

// Helper to determine if the controlMap source is a URL
func isURL(str string) bool {
```
Given the comment below, I imagine this will be removed - but I do think this has some gaps worth addressing if it were kept, i.e. the return statement does not indicate a value that aligns with the function's intent.
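If the helper were kept, one way to close that gap is to parse the string rather than check prefixes, so the return value actually reflects "is a usable URL". This is a sketch, not the PR's implementation:

```go
package main

import (
	"fmt"
	"net/url"
)

// isURL reports whether str parses as an absolute http(s) URL with a host,
// rather than merely starting with a scheme prefix (which would accept
// malformed strings like "https://").
func isURL(str string) bool {
	u, err := url.Parse(str)
	if err != nil {
		return false
	}
	return (u.Scheme == "http" || u.Scheme == "https") && u.Host != ""
}

func main() {
	fmt.Println(isURL("https://example.com/catalog.json")) // true
	fmt.Println(isURL("https://"))                         // false: no host
	fmt.Println(isURL("./local/file.yaml"))                // false: no scheme
}
```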
Description
Adds report functionality starting with component definition.
Will report how many controls are mapped per source and per framework for each component. A component-definition file may have multiple sources and multiple frameworks; the current report lists how many controls are in each and gives a percentage mapped based on the source provided for each.
Running:

```shell
lula generate component --catalog-source https://raw.githubusercontent.com/GSA/fedramp-automation/master/dist/content/rev5/baselines/json/FedRAMP_rev5_MODERATE-baseline-resolved-profile_catalog.json --component 'Keycloak' --requirements ac-12,ac-2,ac-2.1,ac-2.3,ac-2.4,ac-2.5,ac-2.7,ac-3,ac-5,ac-6,ac-6.1,ac-6.2,ac-6.5,ac-6.9,ac-6.10,ac-7,ac-8,au-2,au-3,au-3.1,au-8,au-9.4,au-12,cm-5.1,cm-5.5,ia-2,ia-4.4,ia-5.1,ia-5.2,ia-6,ia-8.1,ia-11,sc-13 --remarks assessment-objective --framework il4
```
Will show that ac-2 is fixed but also show that other controls aren't broken as a side effect of the fix.
Related Issue
Relates to #223 and #247
Type of change
Checklist before merging