
Use same code style across project
Changes:
- use google code style across project
- max-line-length set to 100
- updated README.md for TestPyPi
- removed most linter warnings
- added tests/ to flake8 workflow test again
- updated docstrings
mrom1 committed Jul 18, 2024
1 parent 855884b commit ff4d690
Showing 64 changed files with 1,257 additions and 555 deletions.
4 changes: 3 additions & 1 deletion .flake8
@@ -5,6 +5,8 @@ count = True

statistics = True

-max-line-length = 128
+max-line-length = 100

exclude = .tox,.venv,.env,venv,env,build,dist,doc,a2lparser/gen/

extend-ignore = E203
13 changes: 1 addition & 12 deletions .github/workflows/build.yml
@@ -1,20 +1,9 @@
name: build

-# Controls when the action will run.
-on:
-# Triggers the workflow on push or pull request events but only for the main branch
-push:
-branches: [ main ]
-pull_request:
-branches: [ main ]
-
-# Allows you to run this workflow manually from the Actions tab
-workflow_dispatch:
+on: push

jobs:
###############
# Linux Build #
###############
build-Linux:
runs-on: ubuntu-latest

2 changes: 1 addition & 1 deletion .github/workflows/flake8.yml
@@ -36,4 +36,4 @@ jobs:
# Performs a flake8 check on the fritzsniffer package and tests
- name: Run flake8
run: |
-flake8 a2lparser/ --config=.flake8
+flake8 a2lparser/ tests/ --config=.flake8
9 changes: 4 additions & 5 deletions .github/workflows/publish-to-pypi.yml
@@ -30,8 +30,7 @@ jobs:
path: dist/

publish-to-pypi:
-name: >-
-Publish Python 🐍 distribution 📦 to PyPI
+name: PyPi - Publish distribution 📦 to PyPI
if: startsWith(github.ref, 'refs/tags/') # only publish to PyPI on tag pushes
needs:
- build
@@ -53,7 +52,7 @@ jobs:

github-release:
name: >-
-Sign the Python 🐍 distribution 📦 with Sigstore
+Sign the distribution 📦 with Sigstore
and upload them to GitHub Release
needs:
- publish-to-pypi
@@ -95,7 +94,7 @@ jobs:
--repo '${{ github.repository }}'
publish-to-testpypi:
-name: Publish Python 🐍 distribution 📦 to TestPyPI
+name: TestPyPi - Publish distribution 📦 to TestPyPI
needs:
- build
runs-on: ubuntu-latest
@@ -116,4 +115,4 @@ jobs:
- name: Publish distribution 📦 to TestPyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
-repository-url: https://test.pypi.org/legacy/
+repository-url: https://test.pypi.org/legacy/
43 changes: 31 additions & 12 deletions README.md
@@ -9,18 +9,20 @@

## Overview

-The Python A2L Parser is a tool designed for reading A2L files compliant with the [ASAM MCD-2 MC](https://www.asam.net/standards/detail/mcd-2-mc/) Data Model for ECU Measurement and Calibration. This parser, implemented in Python using [PLY](https://ply.readthedocs.io/en/latest/index.html), constructs an Abstract Syntax Tree (AST) from A2L files, allowing for structured data access and utility functions like searching.
+The Python A2L Parser is a tool designed to parse A2L files compliant with the [ASAM MCD-2 MC](https://www.asam.net/standards/detail/mcd-2-mc/) Data Model for ECU Measurement and Calibration. Implemented in Python using [PLY](https://ply.readthedocs.io/en/latest/index.html), it constructs an Abstract Syntax Tree (AST) from A2L files, enabling structured data access and utility functions such as searching. All resources used in development are sourced from publicly available information, including the [ASAM Wiki](https://www.asam.net/standards/detail/mcd-2-mc/wiki/).

-This project supports ASAM MCD-2 MC Version 1.7.1 and focuses on parsing A2L grammar, not providing mapping capabilities. The module also includes functionality for converting parsed A2L files into simpler formats like XML, JSON, and YAML.
+The parser supports ASAM MCD-2 MC Version 1.7.1 and is focused on parsing A2L grammar without providing mapping capabilities. Additionally, the module includes functionality for converting parsed A2L files into simpler formats like XML, JSON, and YAML.

-You can use this repository to interpret A2L files, build upon this functionality, or for educational purposes.
+This repository can be used for interpreting or validating A2L files, extending its functionality, or for educational purposes.

-**Note:** This project is released under the GPL license with no warranty and is recommended for educational purposes. For professional solutions, consider exploring specialized tools such as the [MATLAB Vehicle Network Toolbox](https://www.mathworks.com/help/vnt/index.html) or the [Vector ASAP2 Toolset](https://www.vector.com/int/en/products/products-a-z/software/asap2-tool-set/).
+**Note:** Released under the GPL license with no warranty, this project is recommended for educational use. For professional solutions, consider specialized tools such as the [MATLAB Vehicle Network Toolbox](https://www.mathworks.com/help/vnt/index.html) or the [Vector ASAP2 Toolset](https://www.vector.com/int/en/products/products-a-z/software/asap2-tool-set/).

## Installation

To install the A2L Parser, run:

+**Note:** Until I fix some remaining minor issues and create a release version, a TestPyPI build is available which you can use:

```console
pip install -i https://test.pypi.org/simple/ a2lparser --extra-index-url https://pypi.org/simple/
```
@@ -32,17 +34,34 @@ from a2lparser.a2lparser import A2LParser
from a2lparser.a2lparser_exception import A2LParserException

try:
-# Create Parser and parse files
-ast = A2LParser(quiet=True).parse_file(files="./data/test.a2l")
+# Create a parser and parse files.
+# Allows multiple files to be passed with wildcards.
+# Will only print errors, no information like progressbar.
+# Returns a dictionary.
+ast_dict = A2LParser(log_level="INFO").parse_file("./testfiles/test_*.a2l")

+# The dictionary holds the AbstractSyntaxTree object under the file name key.
+ast = ast_dict["test_1.a2l"]

+# Dictionary access on the abstract syntax tree.
+# Returns a Python dictionary.
+project = ast["PROJECT"]
+module = project["MODULE"]
+print(f"Project {project['Name']} with module: {module['Name']}")

+# Searches for all MEASUREMENT sections.
+# find_sections returns an AbstractSyntaxTree
+measurements = ast.find_sections("MEASUREMENT")

-# Dictionary access on abstract syntax tree
-module = ast["test.a2l"]["PROJECT"]["MODULE"]
+# All found MEASUREMENT sections are under the "MEASUREMENT" key
+measurements_list = measurements["MEASUREMENT"]

-# Searches for all MEASUREMENT sections
-measurements = ast.find_sections("MEASUREMENT")
-print(measurements)
+print(f"Found {len(measurements_list)} MEASUREMENT sections.")

except A2LParserException as ex:
+# Catching A2LParserException:
+# Generally occurs when a fatal error in parsing is encountered,
+# or if the generated AST is empty (i.e., no data could be parsed).
print(ex)
```

@@ -69,5 +88,5 @@ options:
--no-validation Disables possible A2L validation warnings
--gen-ast [CONFIG] Generates python file containing AST node classes
--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
---version show program's version number and exit ```
+--version show program's version number and exit
+```
6 changes: 5 additions & 1 deletion a2lparser/a2l/a2l_lex.py
@@ -186,7 +186,11 @@ def t_NEWLINE(self, t):

@TOKEN(
r"\b("
+ r"|".join(LexerKeywords.keywords_type + LexerKeywords.keywords_enum + LexerKeywords.keywords_datatypes)
+ r"|".join(
LexerKeywords.keywords_type
+ LexerKeywords.keywords_enum
+ LexerKeywords.keywords_datatypes
)
+ r")\b"
)
def t_KEYWORD_TYPE(self, t):
24 changes: 18 additions & 6 deletions a2lparser/a2l/a2l_validator.py
@@ -75,7 +75,9 @@ def validate(self, a2l_content: str) -> None:
elif match.group().lower().startswith("/end"):
_, last_section = sections_stack[-1]
if last_section != section:
errors.append(f"Detected unexpected end of section on '{line.lstrip()}' at line {i}.")
errors.append(
f"Detected unexpected end of section on '{line.lstrip()}' at line {i}."
)
else:
sections_stack.pop()

@@ -86,6 +88,14 @@ def validate(self, a2l_content: str) -> None:
raise self.A2LValidationError(errors)

def _remove_comments(self, line: str) -> str:
"""
Removes comments from a given line of code.
Args:
line (str): The line of code containing comments.
Returns:
str: The line of code with comments removed.
"""
result = []
i = 0
length = len(line)
@@ -95,25 +105,27 @@ def _remove_comments(self, line: str) -> str:
while i < length:
# If inside a comment block, skip characters until the end of the block
if skip_tokens:
if line[i:i+2] == '*/':
if line[i : i + 2] == "*/":
skip_tokens = False
i += 2
continue
i += 1
# Detect the start of a multiline comment
elif line[i:i+2] == '/*' and not string_literal_started:
elif line[i : i + 2] == "/*" and not string_literal_started:
skip_tokens = True
i += 2
# Detect the start of a single line comment
elif line[i:i+2] == '//' and not string_literal_started:
elif line[i : i + 2] == "//" and not string_literal_started:
break
# Handle string literals properly
elif line[i] in {'"', "'"}:
quote_char = line[i]
result.append(line[i])
i += 1
string_literal_started = not string_literal_started
while i < length and (line[i] != quote_char or (line[i] == quote_char and line[i-1] == '\\')):
while i < length and (
line[i] != quote_char or (line[i] == quote_char and line[i - 1] == "\\")
):
result.append(line[i])
i += 1
if i < length:
@@ -125,4 +137,4 @@ def _remove_comments(self, line: str) -> str:
result.append(line[i])
i += 1

return ''.join(result)
return "".join(result)
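As an editorial aside: the comment-stripping loop changed above can be sketched as a standalone function. This is a simplified illustration of the same technique, not the library's actual API — it ignores escaped quotes and does not carry block-comment state across lines:

```python
def remove_comments(line: str) -> str:
    """Strip /* ... */ and // comments from one line, preserving string literals."""
    result = []
    i = 0
    length = len(line)
    in_block_comment = False
    while i < length:
        if in_block_comment:
            # Inside /* ... */: skip characters until the closing token.
            if line[i:i + 2] == "*/":
                in_block_comment = False
                i += 2
            else:
                i += 1
        elif line[i:i + 2] == "/*":
            in_block_comment = True
            i += 2
        elif line[i:i + 2] == "//":
            # Single-line comment: drop the rest of the line.
            break
        elif line[i] in {'"', "'"}:
            # Copy a string literal verbatim so "//" inside it is kept.
            quote = line[i]
            result.append(line[i])
            i += 1
            while i < length and line[i] != quote:
                result.append(line[i])
                i += 1
            if i < length:
                result.append(line[i])
                i += 1
        else:
            result.append(line[i])
            i += 1
    return "".join(result)
```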
17 changes: 15 additions & 2 deletions a2lparser/a2l/a2l_yacc.py
@@ -65,7 +65,10 @@ def __init__(
"""
super().__init__()
self.a2l_lex = A2LLex(
debug=debug, optimize=optimize, generated_files_dir=generated_files_dir, lex_table_file=lex_table_file
debug=debug,
optimize=optimize,
generated_files_dir=generated_files_dir,
lex_table_file=lex_table_file,
)
self.tokens = self.a2l_lex.tokens
self.experimental_error_resolve = False
@@ -114,8 +117,18 @@ def p_error(self, p):
if not p:
# End of file reached. This section could be used for validation.
return
logger.error(f"Syntax error at line {p.lineno} on token \"{p.value}\" in section {self.a2l_lex.current_section}.")
logger.error(
(
f"Syntax error at line {p.lineno} on token '{p.value}' "
f"in section {self.a2l_lex.current_section}. "
f"No Grammar rule found for this token."
)
)

##################################################
# This is the final rule which defines the end #
# and the root of the parsed content. #
##################################################
def p_abstract_syntax_tree_final(self, p):
"""
abstract_syntax_tree_final : a2l_final
