
Feel++ Template Project


This repository provides a basic starting point for a Feel++ application, including:

  • ✓ Feel++ applications in C++ using Feel++ and the Feel++ toolboxes, in src

  • ✓ documentation using AsciiDoc and Antora

  • ✓ Python notebooks using Feel++ that can be downloaded from the documentation

  • ✓ continuous integration, including tests for the C++ applications

  • ✓ Docker image generation for the project

The documentation for benchmarking is available here, and you can build on it for your own project by enabling GitHub Pages for your repository.

Renaming the project

By default the project is named benchmarking if you cloned the repository feelpp/benchmarking. However, if you used this repository as a template, the project is renamed after your repository by the script rename.sh when the repository is initialized. If the name does not suit you, you can change it again by running rename.sh with the new name as argument.
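For example, a minimal invocation (a sketch; it is assumed that rename.sh sits at the repository root and takes the new name as its only argument):

./rename.sh my-new-project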

Warning
the script rename.sh renames the project, but some URLs might not be set properly if you rename the project yourself. You need to check the files docs/site.yml and docs/package.json and fix the URLs after the rename process is done.

Updating the benchmarking version

The version of the project is defined in the files CMakeLists.txt, docs/antora.yml and docs/package.json. You need to set the same version in all three files.
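A quick way to check that the three files agree (a simple sketch; adjust the pattern if your version string differs):

grep -in "version" CMakeLists.txt docs/antora.yml docs/package.json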

Release process

  • ✓ update the version in CMakeLists.txt

  • ✓ update the version in docs/antora.yml

  • ✓ commit the changes with the message "Release vx.y.z". At this point the CI will generate the Docker image and push it to Docker Hub. An example sequence is shown below.
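A possible sequence, assuming the version files are already updated and you can push directly to the branch:

git commit -am "Release vx.y.z"
git push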

Running the benchmark

Overview

This repository uses ReFrame to launch multiple tests parametrized by the number of CPUs. Every system needs a configuration file describing its architecture and environment, for example the number of available nodes and CPUs.

To launch a case, it has to be configured through a JSON file. This file provides the case-specific setup of both ReFrame and Feel++. All configuration files used for this benchmarking platform are available in benchConfigs/.

ReFrame extracts and stores the results in JSON format. The results are then processed to generate an output file in AsciiDoc (.adoc) format containing information about the run session, as well as plots.

Note: Before continuing, make sure Feel++ is installed on the system on which you want to run the tests. If it is not, please refer to this installation guide.

Step-by-step guide

  • Clone the repository on the machine

git clone https://github.com/feelpp/benchmarking.git
  • Add a system configuration file for your machine in src/feelpp/benchmarking/reframe/config-files/. Refer to this to explore the possibilities.
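The existing configuration files can serve as a starting point, for example:

ls src/feelpp/benchmarking/reframe/config-files/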

  • Configure the case (CPU number, Feel++ config files, paths, …) with a JSON file. A template is available here: src/feelpp/benchmarking/benchConfigTemplate.json
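For example, you can copy the template and adapt it to your case (my_case.json is an illustrative name):

cp src/feelpp/benchmarking/benchConfigTemplate.json my_case.json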

  • Launch the test by specifying the hostname, the path to the Feel++ root output directory and the path to the case configuration. It is also possible to launch all JSON files inside a directory with the --dir option, as shown in the second command below.

python launchProcess.py hostname --feelppdb your_path --config config1.json config2.json
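To launch every JSON configuration inside a directory, the --dir option can be used instead, for instance with the benchConfigs/ directory (an illustrative invocation):

python launchProcess.py hostname --feelppdb your_path --dir benchConfigs/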
  • Generate .adoc output files for the documentation site. render.py will recursively look for ReFrame run reports inside docs/modules/

python docs/modules/render.py
  • Push and merge into the main branch of the repository to update the documentation site, as sketched below.
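A possible sequence (branch name and commit message are illustrative; merging into main is typically done through a pull request):

git checkout -b update-benchmark-reports
git add docs/
git commit -m "Update benchmark reports"
git push origin update-benchmark-reports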