Enhance dashboard test reports and data processing #5811

Draft · wants to merge 19 commits into base branch `enhacement/24586-benchmark-tests`
@@ -0,0 +1,2 @@
### Root Path
artifacts_*/
@@ -1,5 +1,5 @@
# Use an official Python base image
FROM python:3.10-slim-bookworm
FROM python:3.13.0-slim-bookworm

# ENV Variables
ENV WORK_PATH=/app
@@ -30,6 +30,7 @@ COPY . ${WORK_PATH}/${APP_PATH}
WORKDIR ${WORK_PATH}/${APP_PATH}

# Install Python Dependencies
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install .

# Default command
@@ -1,12 +1,13 @@
# Variables
CONSOLE=/bin/bash
APP=dashboard_saturation_tests
DATA=data
WORK=/app
ARTIFACTS=artifacts
LOGS=logs
SCREENSHOTS=screenshots
CSV=csv
CONSOLE := /bin/bash
APP := dashboard_saturation_tests
DATA := data
WORK := /app
DATETIME := $(shell date +%Y%m%d_%H%M%S)
ARTIFACTS := artifacts_$(DATETIME)
LOGS := logs
SCREENSHOTS := screenshots
CSV := csv

.PHONY: init
init: rm purge build run ## Recreate Containers
@@ -48,9 +49,6 @@ purge: ## Purge All Docker Resources
docker system prune -a -f
.PHONY: destroy
destroy: rm purge ## Destroy All Docker Resources
.PHONY: style
style: ## Check Python Style
docker exec -it $(APP) $(CONSOLE) -c "pycodestyle data/dashboard_saturation_tests.py" || true
.PHONY: help
help: ## Display Help Message
@cat $(MAKEFILE_LIST) | grep -e "^[a-zA-Z_\-]*: *.*## *" | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
@@ -9,6 +9,7 @@ dashboard_saturation_tests/
├── data/
│ ├── lib/
│ │ ├── CookieManager.js
│ │ ├── ItemManager.js
│ │ ├── PathManager.js
│ │ └── ScreenshotManager.js
│ ├── tests/
@@ -20,8 +21,11 @@ dashboard_saturation_tests/
│ ├── artillery.xml
│ ├── dashboard_saturation_tests.py
│ └── processor.js
├── README.md
└── pyproject.toml
├── .gitignore
├── Dockerfile
├── Makefile
├── pyproject.toml
└── README.md
```

## Prerequisites
@@ -34,6 +38,18 @@ To run the script you need to have Python and Pip installed.
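
A quick way to confirm that both prerequisites are available on the system:

```shell script
# Check the Python and pip versions available on the PATH
python3 --version
python3 -m pip --version
```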

### Install Artillery + Playwright

The resource requirements below for running Artillery + Playwright are indicative; the resources actually needed depend greatly on the complexity of the tests.

| Simulated Users | CPU Cores | Memory (RAM) |
| --------------- | --------- | ------------ |
| 1 | 1 | 2 GB |
| 3 | 2 | 4 GB |
| 5 | 2 | 4 GB |
| 7 | 3 | 6 GB |
| 10 | 4 | 8 GB |
| 15 | 6 | 12 GB |
| 20 | 8 | 16 GB |

Artillery, Playwright, and all the dependencies they need to run correctly must be installed. Some of these dependencies are libraries that can also be used within the tests.

```shell script
@@ -56,19 +72,29 @@ artillery --version
playwright --version
```
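
The concrete installation commands are collapsed in the diff above. As a sketch of a typical setup (the exact commands and versions used by the project may differ), Artillery is installed through npm and Playwright downloads the browsers it drives:

```shell script
# Install Artillery globally (requires Node.js and npm)
npm install -g artillery@latest

# Download the browser binaries Playwright drives; Chromium is shown as an example
npx playwright install --with-deps chromium
```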

## Initial setup
## Initial Setup

To run the tests, install the dependencies and the package by following these steps:

1. Move to the `wazuh-qa/deps/wazuh_testing/wazuh_testing/dashboard_saturation_tests` directory

2. Create the Python environment

```bash
python3 -m venv env
```

3. Activate the environment:

```bash
source env/bin/activate
```

4. Install the package

```bash
python3 -m pip install .
```
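
To confirm that the package was installed into the active environment, the console entry point can be invoked. The `--help` flag is assumed here (it is the usual behavior of Python CLI tooling) and is not shown in this diff:

```shell script
# Sanity-check that the entry point is available in the active environment
dashboard-saturation-tests --help
```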

## Artillery + Playwright
## Usage

To run the tests, we will need to use the following command:

@@ -99,14 +125,6 @@ dashboard-saturation-tests --password <wazuh_pass> --ip <dashboard_ip>
- `--artillery` needs to receive a valid Artillery configuration file (for example, `artillery.yml`).
- `--type` only accepts two values (`aggregate` or `intermediate`). Either or both can be chosen; a combined invocation is sketched below.
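
Putting the flags above together, a fuller invocation might look like the following sketch. The flags themselves are taken from the usage notes above; the exact syntax for passing both report types to `--type` is an assumption:

```shell script
# Hypothetical combined invocation using the documented flags
dashboard-saturation-tests \
  --password <wazuh_pass> \
  --ip <dashboard_ip> \
  --artillery artillery.yml \
  --type aggregate intermediate
```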

### Check PEP 8

The Python script complies with the PEP 8 standard. To verify that it continues to comply with the standard (after making changes) you just have to execute the following commands:

```shell script
pycodestyle dashboard_saturation_tests.py
```

## Using Docker

It is possible to use the `Docker` image with the entire environment set up for running the tests. To facilitate its use, there is a `Makefile` with the necessary instructions. An example of use would be:
@@ -122,10 +140,50 @@ The `make exec` command (and, by extension, the `make test` command) requires the

In the Docker container, everything lives under the `/app` directory: Artillery, Playwright and everything else they need are installed there. All the scripts are in `/app/dashboard_saturation_tests`; the packages are installed and all the commands are executed from that directory.
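
A typical flow with the provided `Makefile`, shown as a sketch built from the targets that appear in this diff and in the notes above (run `make help` for the authoritative list):

```shell script
# Build the image and start the container
make build
make run

# Run the test suite inside the container (see the note about make exec / make test above)
make test

# Remove the container, image and other Docker resources when finished
make destroy
```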

## Analysis of the Results

### CSV

Running tests generates two types of CSV files (a quick way to preview them is sketched after this list):

- `Intermediate`: These represent intermediate statistics that are generated and printed to the console during the test run. By default, they’re generated every 10 seconds. These data points are useful for monitoring the progress of the test in real time.
  - `Summaries`: These include partial summaries during the test execution. Useful for seeing application behavior in specific intervals.
  - `Histograms`: Show the distribution of response times and other metrics in specific time intervals. Helps to identify performance peaks and dips.
  - `Counters`: Show the number of requests, errors, and other events during specific intervals of the test. Ideal for monitoring progress and quickly spotting issues.

- `Aggregate`: These represent the overall statistics for the entire duration of the test. They correspond to the final statistics printed after the test completes. These data provide a complete summary of the test's performance.
  - `Summaries`: Provides an overall view of performance across the entire test once it’s completed.
  - `Histograms`: Offers a general view of how performance metrics were distributed over time during the entire test. Useful for checking consistency and stability.
  - `Counters`: Provides a total summary of completed requests, errors, and key events at the end of the test. Gives a clear picture of overall performance.
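
As referenced above, a minimal way to preview any of the generated CSV files from the command line is sketched below; the file name is hypothetical and depends on the actual run:

```shell script
# Align the comma-separated columns and show the first rows
# (replace the path with a real file from the csv/ output folder)
column -s, -t < csv/intermediate_counters.csv | head -n 20
```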

### Logs

The log file generated by Artillery contains detailed information about each HTTP request made during the test.

This information is useful for analyzing the performance and efficiency of HTTP requests during load testing.
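
As a rough first pass over that log, the sketch below simply counts entries and skims the beginning of the file; the path and the idea that each request produces one log line are stated here as assumptions, since the log layout is not documented in this diff:

```shell script
# Count log entries and skim the newest output in the logs/ folder
# (path and file naming are assumptions; adjust to your run's output)
wc -l logs/*
head -n 20 logs/*
```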

### Screenshots

Taking screenshots of the dashboard during load tests is a good idea for several reasons:

- `Visual Documentation`: It allows you to document the performance and stability of the system visually during the tests. This is particularly useful for reports and presentations.

- `Detailed Analysis`: You can compare screenshots from different moments to identify patterns or recurring issues that might not be evident from numerical data alone.

- `Problem-Solving`: If something goes wrong, having a screenshot of the exact moment can help identify what was happening in the system at that specific point.

- `Effective Communication`: It's much easier to explain problems and solutions to the team when you can show them exactly what was happening through screenshots.

Screenshots provide an additional layer of information and context that can be crucial to understanding and improving system performance.

## Example

This is an example of the most basic test execution and its corresponding result.

```shell script
dashboard-saturation-tests -p password -i ip
```

By default, the output is stored in 3 folders: `csv/`, `logs/` and `screenshots/`. This can be changed via the command parameters.

- Result: [report.zip](https://github.com/user-attachments/files/16542340/report.zip)
@@ -18,6 +18,7 @@ config:
screenshots: '{{ screenshots }}'
session: '{{ session }}'
username: '{{ username }}'
timeout: '{{ timeout }}'
scenarios:
- engine: playwright
name: test_01_login