
Generate dashboard using test results #1137

Open · wants to merge 16 commits into main
Conversation

rw1nkler
Contributor

@rw1nkler rw1nkler commented Sep 28, 2023

This commit adds an initial version of the dashboard generation feature to XLS. To demonstrate the dashboard generation process, two examples are provided: a simple dashboard for a passthrough example and a more complex dashboard for a basic RLE encoder design.

The dashboard is generated from data extracted from tests or their output. To extract the data, users may use the parsers provided in this PR or their own custom scripts, which are responsible for extracting the information and saving it in the Dashboard JSON format. Custom parsers should read the log from stdin and write the extracted information to stdout.
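As a hedged illustration, a custom parser along these lines might look as follows. The Dashboard JSON field names used here (`title`, `table`) and the `PASS:`/`FAIL:` log format are assumptions for the sketch; the actual schema is defined in validate_dashboard_json.py:

```python
import json
import re

def parse_log(log: str) -> dict:
    """Turn hypothetical "PASS: name" / "FAIL: name" log lines into
    Dashboard-JSON-shaped data (field names are illustrative only)."""
    results = re.findall(r"^(PASS|FAIL): (\S+)$", log, re.MULTILINE)
    return {
        "title": "Test results",
        "table": [{"test": name, "status": status} for status, name in results],
    }

# In a real parser the log would arrive on stdin and the JSON would go to
# stdout, e.g.: json.dump(parse_log(sys.stdin.read()), sys.stdout)
sample = "PASS: passthrough_smoke\nFAIL: passthrough_backpressure\n"
dashboard_json = parse_log(sample)
print(json.dumps(dashboard_json, indent=2))
```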

The dashboard is generated in a few steps, with the whole process controlled by the main dashboard.py script:

  1. The dashboard.py script is invoked with a set of parameters that specify two types of operations: parsing the output of a test (-p option) or parsing output files generated by the test (-f option). The arguments specify the test to run, the parser to use, and, where applicable, the file from which the data should be extracted. The script parses the arguments and stores them internally in a more convenient form.

  2. The tests are then executed and parsed using the run_and_parse function from the run_and_parse.py file. For the -p option, the parser is run on the log produced by the test; for the -f option, it is run on the output file created by the test. The output files are assumed to be located in $TEST_UNDECLARED_OUTPUTS_DIR, which is the default location for output files produced by Bazel tests.

  3. The parsed data in JSON format is collected and validated against the Dashboard JSON schema. Both the validation mechanism and the schema are available in the validate_dashboard_json.py file.

  4. Next, the collected Dashboard JSON data is sorted and converted to a markdown file by the json_to_markdown function from the Python file of the same name.

  5. Finally, the utilities from mkdocs_creator.py are used to produce an HTML site using mkdocs.
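The five steps above can be sketched end to end. Every function here is a simplified stand-in written for illustration; the real signatures in run_and_parse.py, validate_dashboard_json.py, json_to_markdown.py and mkdocs_creator.py may differ:

```python
def run_and_parse(test_name: str) -> dict:
    # Stand-in for step 2: run the test and parse its log into Dashboard JSON.
    return {"title": test_name, "table": [{"test": "smoke", "status": "PASS"}]}

def validate_dashboard_json(data: dict) -> None:
    # Stand-in for step 3: the real check validates against a JSON schema.
    assert isinstance(data.get("title"), str)
    assert isinstance(data.get("table"), list)

def json_to_markdown(data: dict) -> str:
    # Stand-in for step 4: render the Dashboard JSON as a markdown page.
    rows = "\n".join(f"| {r['test']} | {r['status']} |" for r in data["table"])
    return f"# {data['title']}\n\n| test | status |\n| --- | --- |\n{rows}\n"

data = run_and_parse("passthrough_test")
validate_dashboard_json(data)
markdown = json_to_markdown(data)
# Step 5 would hand `markdown` to the mkdocs_creator.py utilities.
```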

Since the dashboard relies on tests, it cannot be produced at build time. To generate the dashboard, one has to provide a path to which the HTML should be saved. For example:

bazel run -- //xls/modules/rle:rle_dashboard -o rle_dashboard

Here is a screenshot of the RLE Dashboard:
[screenshot: rle_dashboard]

Relies on changes from #1031
Resolves #1058
CC @proppy

@proppy
Member

proppy commented Sep 29, 2023

Thanks for sending this PR. Other metrics that would be interesting to track:

@proppy
Member

proppy commented Oct 5, 2023

It would be nice to integrate some of that metric generation with https://github.com/hdl/bazel_rules_hdl and https://github.com/google/xls/tree/main/xls/build_rules so that we can easily generate and configure those reports from BUILD files and have them generated by our current build infrastructure on each CL.

@rw1nkler
Contributor Author

rw1nkler commented Oct 6, 2023

Yes, it would be nice. However, can we consider adding more metrics in subsequent PRs?

@proppy
Copy link
Member

proppy commented Nov 24, 2023

lpawelcz and others added 16 commits December 8, 2023 16:04
remove cpython and embedded_python_interpreter

Signed-off-by: Pawel Czarnecki <[email protected]>
Generate GDS for process technologies:
* ASAP7
* SKY130

Signed-off-by: Pawel Czarnecki <[email protected]>
Add GDS write examples for RLE encoder and decoder for process
technologies:
* ASAP7
* SKY130

Signed-off-by: Pawel Czarnecki <[email protected]>
Internal-tag: [#46586]
Signed-off-by: Robert Winkler <[email protected]>
The library contains the XLSChannel, XLSChannelDriver and XLSChannelMonitor classes.

* XLSChannel - wraps all signals related to an XLS channel into one object.
* XLSChannelDriver - may be used to send data to an XLS channel.
* XLSChannelMonitor - may be used to monitor transactions taking place on an XLS channel.

Internal-tag: [#46586]
Signed-off-by: Robert Winkler <[email protected]>
…ation

This commit adds a simple DSLX module that sends back the information
received on its input channel. The example contains tests written in
DSLX to verify the IR, as well as tests that use the cocotb framework to
validate the behavior of the generated Verilog sources.

Internal-tag: [#46586]
Signed-off-by: Robert Winkler <[email protected]>
- `jsonschema` is used to validate that the data provided to the dashboard
  generation script is in the dedicated Dashboard JSON format

- `mdutils` is used to generate markdown from the obtained Dashboard JSON data

- `mkdocs`, `mkdocs-material` are used to generate an HTML website from the
  markdown files previously generated from the Dashboard JSON

Internal-tag: #[46111]
Signed-off-by: Robert Winkler <[email protected]>
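For illustration, the kind of check `jsonschema` enables looks roughly like this. The schema below is a made-up placeholder, not the actual Dashboard JSON schema from validate_dashboard_json.py:

```python
from jsonschema import ValidationError, validate

# Illustrative placeholder schema -- NOT the real Dashboard JSON schema.
DASHBOARD_SCHEMA = {
    "type": "object",
    "required": ["title", "table"],
    "properties": {
        "title": {"type": "string"},
        "table": {"type": "array"},
    },
}

def is_valid_dashboard_json(data: dict) -> bool:
    """Return True iff `data` matches the (placeholder) schema."""
    try:
        validate(instance=data, schema=DASHBOARD_SCHEMA)
        return True
    except ValidationError:
        return False
```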
- `dashboard.py` is the main script responsible for generating the
  dashboard. It uses the rest of the scripts as utilities.

- `run_and_parse.py` contains functions for running tests and parsing
  their output into the Dashboard JSON format

- `validate_dashboard_json.py` contains a function for validating that the
  provided JSON is in the Dashboard JSON format

- `json_to_markdown.py` converts the Dashboard JSON to a markdown document

- `mkdocs_creator.py` converts the markdown to HTML using mkdocs

Internal-tag: #[46111]
Signed-off-by: Robert Winkler <[email protected]>
This commit adds three parsers that users can employ
to extract the data for creating a dashboard:

- `cocotb_results_xml_parser.py` can extract information about
  successful and failed cocotb tests from the result.xml saved by the test

- `dslx_test_parser.py` can be used to extract information about
  successful and failed DSLX tests from the test log

- `generic_parser.py` can be used to get Dashboard JSON data dumped
  directly to the log within special delimiters. To dump data in this format,
  one can use a dedicated function contained in `utils.py`

Internal-tag: #[46111]
Signed-off-by: Robert Winkler <[email protected]>
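The delimiter-based scheme used by `generic_parser.py` can be sketched as a dump/extract pair. The marker strings below are made up for illustration; the real ones are defined by the helper in `utils.py`:

```python
import json
import re

# Hypothetical delimiters -- the real markers live in utils.py.
BEGIN, END = "--DASHBOARD-JSON-BEGIN--", "--DASHBOARD-JSON-END--"

def dump_dashboard_json(data: dict) -> str:
    """What a test would print into its log."""
    return f"{BEGIN}\n{json.dumps(data)}\n{END}"

def extract_dashboard_json(log: str) -> dict:
    """What the generic parser would recover from the captured log."""
    match = re.search(re.escape(BEGIN) + r"\n(.*?)\n" + re.escape(END),
                      log, re.DOTALL)
    return json.loads(match.group(1))

log = "test output...\n" + dump_dashboard_json({"latency_cycles": 12}) + "\nmore output"
assert extract_dashboard_json(log) == {"latency_cycles": 12}
```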
Internal-tag: #[46111]
Signed-off-by: Robert Winkler <[email protected]>
The test checks the correctness of the encoding and measures both the delay
and the performance of the core

Internal-tag: #[46111]
Signed-off-by: Robert Winkler <[email protected]>
The dashboard contains the results of the DSLX and cocotb tests, as well as
the delay and performance measurements obtained in the cocotb test.

Internal-tag: #[46111]
Signed-off-by: Robert Winkler <[email protected]>
Internal Tag: [#47739]

Signed-off-by: Pawel Czarnecki <[email protected]>
Internal-tag: [#47739]
Signed-off-by: Robert Winkler <[email protected]>
Internal-tag: [#46111]
Signed-off-by: Pawel Czarnecki <[email protected]>
@proppy
Member

proppy commented Apr 3, 2024

What do you think of moving this (alongside #1160) into a separate repo, similar to https://github.com/antmicro/xls-cosimulation-demonstrator ?

Successfully merging this pull request may close these issues.

Dashboard for aggregating RLE/DBE test results