Testing and QAP (Quality Assurance Plan)
This page describes the testing phase of the library and provides the Quality Assurance Plan (QAP) followed during the build phase. It can be used to evaluate the procedures and tools adopted during the development and build stages to validate the quality of the library. In this page you will find information about:
- the tools and procedures used in the testing and quality stages
- the flow followed during the various stages of the product lifecycle
- test organisation and management
- feature classification
- the quality parameters and the minimum values required to consider the library acceptable
- issue management
- the documentation generated and provided with the library
The library uses different tools during the development lifecycle, along with both manual and automatic procedures, in order to deliver the library.
The development stage is divided into two steps:
- write functional code, or edit existing code in order to fix problems and make the library more performant or more readable
- write tests
The process used to ensure that the written code has a high level of quality is described below:
- Tests are run every time the code is changed. If a test fails, the code is reviewed and corrected until all tests pass.
- When all tests pass, a code coverage report is generated in order to check whether the code coverage value is greater than the minimum value. This check is done manually. If the value is below the minimum, the code is reviewed and edited, and the process starts again from step 1
- If the value is acceptable, a local static code analysis is performed in order to determine the quality of the code. The tool used to perform this task is SonarQube Community Edition updated to the latest version. If the quality gate is lower than A, the code is reviewed and edited, and the process starts again from step 1
- If the quality gate is A, a new pull request is created in order to check the code with SonarCloud. If the quality gate is lower than A, the code is reviewed and edited, and the process starts again from step 1
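The loop above can be sketched as a single check that reports the first step needing rework. This is a hypothetical helper written in Python for brevity (the library itself is C#); the function name, parameters, and messages are illustrative, while the 90% coverage minimum and the A ratings come from the quality gate described later on this page.

```python
# Sketch of the local quality loop: each check mirrors one step of the
# process, and the first failing check tells the developer what to fix
# before restarting from step 1. Names and thresholds are illustrative.

MIN_COVERAGE = 90.0  # minimum acceptable code coverage (%)

def next_action(tests_pass, coverage, sonarqube_gate, sonarcloud_gate):
    """Return the rework needed, or None when the code is ready to merge."""
    if not tests_pass:
        return "fix failing tests"
    if coverage < MIN_COVERAGE:
        return "raise code coverage"
    if sonarqube_gate != "A":
        return "fix local SonarQube findings"
    if sonarcloud_gate != "A":
        return "fix SonarCloud findings on the pull request"
    return None  # all checks passed
```

For example, `next_action(True, 80.0, "A", "A")` points back to step 2 (coverage), matching the manual check described above.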
When the commit is pushed to the repository, an automatic job is launched in order to build and publish the library to NuGet. The steps taken to ensure that the library has a high level of quality are the following:
- The code is rebuilt in isolated, brand-new Linux and Windows containers. If the build fails, an error is raised and the artifact is not generated
- If the code builds correctly, tests are run in the containers in order to check that the library has the same behaviour on different operating systems. If tests fail, an error is raised and the library is not published
- If all tests pass, the library is published to NuGet and the job ends without errors
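The control flow of that job can be sketched as follows. This is a minimal Python sketch under stated assumptions, not the real pipeline (which lives in the repository's CI configuration); the function names and OS labels are hypothetical.

```python
# Sketch of the publish job: rebuild and test in a fresh container per
# operating system, and publish only when every step succeeds.
# build/run_tests are callables taking an OS name and returning True on
# success; publish performs the NuGet release.

def publish_job(build, run_tests, publish):
    for os_name in ("linux", "windows"):
        if not build(os_name):
            raise RuntimeError(f"build failed on {os_name}: artifact not generated")
        if not run_tests(os_name):
            raise RuntimeError(f"tests failed on {os_name}: library not published")
    publish()  # all containers built and tested: release to NuGet
```

The key property is that `publish` is unreachable unless every build and every test run succeeded on every OS.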
Even though different controls and quality conditions are used to deliver a high-quality library, some errors and issues may still be present. The main channel used to report and manage all kinds of issues (bugs, feature requests, documentation errors and corrections, ...) is the Issues section available on GitHub. There are different types of issues that may be opened. Some of these are:
- documentation issues: the issue must contain a link to the page where the problem is present and details about the required changes
- library issues: the issue must contain details about how to reproduce the error and the error received
The flow followed when a new issue regarding a bug or a feature request is opened is:
- The issue is analysed and, if applicable, a new test case is added in order to avoid regressions
- If the issue is confirmed, the development stage begins with the rules described above. When the development stage ends, a new pull request is created in order to verify that the fix solves the problem. If there is no response within 2 days, the new code is merged and the issue is closed
- If the issue is not confirmed, new details will be requested in order to reproduce the problem. If the requester does not provide feedback within 2 days, a kind reminder will be sent asking for the requested feedback. If there is still no feedback from the requester within 2 days, the issue will be closed
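The triage flow above can be summarised as a small decision function. This is purely an illustrative sketch of the written process, not an automated part of the repository; the names are hypothetical.

```python
# Sketch of the issue triage flow. Each boolean mirrors one decision
# point in the process; the 2-day waits happen between the steps.

def triage(confirmed, feedback_after_request, feedback_after_reminder):
    if confirmed:
        return "start development stage"    # fix, then pull request
    if feedback_after_request:
        return "re-analyse with new details"
    if feedback_after_reminder:
        return "re-analyse with new details"
    return "close issue"                    # no feedback after the reminder
```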
If you want to contribute by resolving an issue, you can do so by creating a pull request and linking it to the issue. The flow used to validate the pull request is the same as the one used in the development stage.
Features are classified into two categories:
- standard features
- preview features
Standard features are well tested and can be used in other projects or in production without significant errors or unexpected behaviours. They are also well documented.
Preview features have very few test cases and may produce unexpected behaviours and errors if used in other projects. It is suggested to use these features only if strictly necessary, until the feature becomes a standard one. Preview features are marked as preview and are supported like standard ones. However, the analysis of this kind of feature may take longer than for a standard one.
Tests verify the functionalities provided by the library and check that all code branches are covered.
There are three types of tests used in this library:
- unit tests
- functional tests
- integration tests
Unit and functional tests verify the program functionalities or a single portion of code (for example a single method). These tests can be launched automatically or manually and are also used to determine the code coverage level of the library. There may be one or more tests for a specific functionality or method, because they can cover different cases. The cases tested are:
- optimal cases where all parameters are correct
- cases where some parameters are wrong or some values are not permitted (if applicable)
- cases where some prerequisites are not satisfied (if applicable)
- cases that raise exceptions
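As an illustration, the case categories map onto tests like the following. The function under test is entirely hypothetical and is written in Python for brevity; the library's actual tests are written in C#.

```python
# Hypothetical function under test (not part of InterAppConnector's API):
# scales a number by a positive factor.
def scale(value, factor):
    if not isinstance(value, (int, float)) or not isinstance(factor, (int, float)):
        raise TypeError("value and factor must be numbers")
    if factor <= 0:
        raise ValueError("factor must be positive")  # unsatisfied prerequisite
    return value * factor

def test_optimal_case():
    # all parameters are correct
    assert scale(2, 3) == 6

def test_value_not_permitted():
    # a value that is not permitted (non-positive factor)
    try:
        scale(2, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

def test_raises_exception():
    # a wrong parameter type raises an exception
    try:
        scale("2", 3)
    except TypeError:
        pass
    else:
        raise AssertionError("expected TypeError")
```

One functionality therefore yields several tests, one per case category, which is why the number of tests exceeds the number of methods.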
Integration tests are simple programs or functional tests that are useful to check whether the library can be integrated in other projects and returns the expected output. These programs are run automatically or manually and also contribute to code coverage. The test cases in integration tests are the same as in the other tests.
As InterAppConnector is a library that should be integrated in other applications, it is important that the quality level is very high in order to simplify the troubleshooting of errors that may occur in the application. For this reason, it is important to define some conditions that are useful to evaluate whether the quality of newly added code is high. To do this, the library uses the Clean as You Code approach adopted by SonarCloud, which requires that in new code:
- No new bugs are introduced
- No new vulnerabilities are introduced
- All new security hotspots are reviewed
- New code has limited technical debt
- New code has limited duplication
- New code is properly covered by tests
The condition and the value defined for each parameter are shown in the table below:
Parameter | Condition | Value |
---|---|---|
Reliability Rating | is | A (no bugs) |
Security Rating | is | A (no vulnerabilities) |
Security Hotspots Reviewed | is greater than or equal to | 100% |
Maintainability Rating | is | A (technical debt ratio is less than 5.0%) |
Coverage | is greater than or equal to | 90.0% |
Duplicated Lines (%) | is less than or equal to | 3.0% |
All conditions must be satisfied in order to have a green quality gate and proceed to the build and publish stage.
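The gate is therefore a conjunction of all six conditions in the table. Sketched as a Python function over hypothetical metric names (the real evaluation is performed by SonarCloud, not by this code):

```python
# Green quality gate: every condition in the table must hold on new code.
# The dictionary keys are illustrative names for the SonarCloud metrics.

def quality_gate_is_green(metrics):
    return (
        metrics["reliability_rating"] == "A"          # no bugs
        and metrics["security_rating"] == "A"         # no vulnerabilities
        and metrics["hotspots_reviewed"] >= 100.0     # all hotspots reviewed
        and metrics["maintainability_rating"] == "A"  # debt ratio < 5.0%
        and metrics["coverage"] >= 90.0               # coverage threshold
        and metrics["duplicated_lines"] <= 3.0        # duplication limit
    )
```

A single failing condition, such as coverage at 80%, turns the whole gate red and blocks the build and publish stage.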
It is important to point out that this approach does not guarantee that the library is free of errors. Bugs and errors may still exist in the library. If you find an error, open an issue on GitHub. The way issues are managed is described in the Maintenance stage section.
A library without documentation is extremely difficult to use and integrate in other projects. For this reason, different types of documentation are provided with this library:
- a comprehensive guide that explains the library features, how to implement them, and some practical examples
- the code documentation generated with the DocFX tool
A user guide, written in plain language, is available in the Wiki and can be used to learn how to use and integrate the library in other projects. Every function has its own page, which contains:
- a description of the functionality
- how to implement the functionality in the code
- the exceptions that can be raised if the functionality is not used correctly
- practical examples
- Getting started
- Create Commands
- Manage arguments
- Argument basics
- Shared arguments between different commands
- Argument aliases
- Argument description
- Exclude an argument from argument list
- Use enumerations as argument
- Use objects and structs as argument type
- Validate argument value before command execution
- Customize the command example in command help
- Define rules