A KISS library for benchmarking Fortran functions and subroutines with precision
[TOC]
Fortran is the fastest language on earth, or so they say. But can we prove it? And despite its legendary number-crunching speed, Fortran is no exception when it comes to writing code: it is just as possible to write terribly slow code in it. This is where benchmarking different implementations of the same function helps develop better and faster algorithms.
This project aims to provide an easy interface for benchmarking functions and subroutines while taking care of warming up the machine, collecting system information, computing statistics and reporting results.
To build the library you need:
- a Fortran 2008 compliant compiler, or better, a Fortran 2018 compliant compiler. (The Intel Fortran compiler is known to work well with benchmark.f; gfortran has some limitations when using implicit procedures and unlimited polymorphic arguments. Please refer to the documentation for the differences between compilers.)
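For reference, the two language features in question are implicit (external) procedure arguments and unlimited polymorphic dummy arguments. The minimal sketch below is not code from benchmark.f; it merely combines both features so you can check how your compiler handles them.

```fortran
! Minimal illustration (not taken from benchmark.f): an implicit procedure
! argument combined with an unlimited polymorphic dummy argument.
module feature_demo
    implicit none
contains
    subroutine invoke(f, arg)
        procedure() :: f             ! implicit interface: no signature is prescribed
        class(*), intent(in) :: arg  ! unlimited polymorphic: accepts any type
        select type (arg)
        type is (integer)
            call f(arg)              ! forward the argument to the user procedure
        end select
    end subroutine
end module feature_demo
```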
The following compilers are tested on the default branch of benchmark.f:
Name | Version | Platform | Architecture |
---|---|---|---|
GCC Fortran (MinGW) | 14 | Windows 10 | x86_64 |
Intel oneAPI classic | 2021.5 | Windows 10 | x86_64 |
Unit tests rely on the header file `assert.inc`. Since the whole framework fits in a single file, it has been added directly to the repo.
Linting, indentation, and styling are done with fprettify using the following settings:

```
fprettify .\src\ -r --case 1 1 1 1 -i 4 --strict-indent --enable-replacements --strip-comments --c-relations
```
```
git clone https://github.com/davidpfister/benchmark.f
cd benchmark.f
```
The repo is compatible with fpm projects. It can be built using fpm:

```
fpm build --flag '-ffree-line-length-none'
```
For convenience, the repo also contains a response file that can be invoked as follows:

```
fpm @build
```

(For Windows users: this command does not work in PowerShell, since '@' is a reserved symbol there. One should use '--%' as follows: `fpm --% @build`. This is linked to a known issue.)
Building with ifort requires specifying the compiler name (gfortran is used by default):

```
fpm @build --compiler ifort
```

Alternatively, the compiler can be set using the fpm environment variables:

```
set FPM_FC=ifort
```
Besides the build command, several other commands are available in the response file:

```
@pretiffy
system fprettify .\examples\ -r --case 1 1 1 1 -i 4 --strict-indent --enable-replacements --strip-comments --c-relations
system fprettify .\src\ -r --case 1 1 1 1 -i 4 --strict-indent --enable-replacements --strip-comments --c-relations
system fprettify .\tests\ -r --case 1 1 1 1 -i 4 --strict-indent --enable-replacements --strip-comments --c-relations

@clean
option clean --all

@rebuild
system rmdir /s /q build
option build --flag '-ffree-line-length-none'

@build
option build --flag '-ffree-line-length-none'

@test
options test --flag '-ffree-line-length-none'

@doc
option clean --all
system cd ./.dox & doxygen ./Doxyfile.in & cd ..
```
The toml file contains two items that are worth commenting on:
- The settings for the cpp preprocessor are specified in the file:

```toml
[preprocess]
cpp.suffixes = ["F90", "f90"]
cpp.macros = ["_FPM"]
```
The `_FPM` macro is used to differentiate between builds made with fpm and with Visual Studio. It is mostly there to adapt the hard-coded paths, which differ in the two cases (an illustrative sketch is shown after this list).
- The code must also be compiled with implicit procedures allowed. This is reflected in the following option:

```toml
[fortran]
implicit-external = true
```
In order to benchmark functions AND subroutines with any number of dummy arguments (0 to 7 at the moment) of any type (intrinsic or derived), implicit procedures are a must. While this may be considered bad practice and a holdover from F77 and the good old `external`, there would be no other way to provide a generic library without this option.
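To illustrate the `_FPM` macro mentioned in the first item, the sketch below switches a path at compile time. The paths are hypothetical and only stand in for the hard-coded paths used by the library; the file must go through the preprocessor (e.g. an `.F90` suffix or the `-cpp` flag).

```fortran
! Illustrative sketch only; the paths are hypothetical.
program path_demo
    implicit none
#ifdef _FPM
    ! relative path when the working directory is the fpm project root
    character(*), parameter :: data_dir = './data/'
#else
    ! relative path when running from the Visual Studio output directory
    character(*), parameter :: data_dir = '../data/'
#endif
    print *, 'data directory: ', data_dir
end program path_demo
```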
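As for the second item, the pay-off of `implicit-external = true` is that a single runner can time procedures with completely different argument lists. The sketch below assumes two hypothetical user procedures, `poisson_solver` and `matrix_fill`, with matching signatures, and otherwise follows the usage shown in the next section.

```fortran
#include <benchmark.inc>
program generic_demo
    use benchmark_library
    type(runner) :: br

    ! Hypothetical procedures with different signatures are passed to the
    ! same runner; this is what the implicit interfaces make possible.
    benchmark(br, run(1.0d-6, 30, poisson_solver)) ! two arguments: a real and an integer
    benchmark(br, run(500, matrix_fill))           ! a single integer argument
end program generic_demo
```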
The project was originally developed on Windows with Visual Studio 2019. The repo contains the solution file (Benchmark.f.sln) to get you started.
Running the benchmark could not be simpler:
- Start by including the file `benchmark.inc` in your code
- Instantiate a benchmark runner
- Run the benchmark
```fortran
#include <benchmark.inc>
program test
    use benchmark_library

    type(runner) :: br

    ! time test_function with the argument values (1.0d-6, 30)
    benchmark(br, run(1.0d-6, 30, test_function))
end program test
```
Running it generates this kind of table:
```
| Method Name              |          Mean          |   Standard Deviation   |
|__________________________|________________________|________________________|
| test_poisson(1.0d-6,30)  |          217350.000 us |        +/- 161306.626  |
| test_poisson(1.0d-6,30)  |           99250.000 us |          +/- 7588.643  |
| test_poisson(.10E-05,30) |          176550.000 us |        +/- 135795.609  |
```
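The procedure being timed is an ordinary function or subroutine whose dummy arguments match the values passed to `run`. A hypothetical sketch matching the `test_poisson` entries above could look like this (the actual example in the repository may differ):

```fortran
! Hypothetical sketch of a benchmarked procedure: its dummy arguments
! simply match the values passed to run(), here a tolerance and an
! iteration count.
subroutine test_poisson(eps, niter)
    real(8), intent(in) :: eps
    integer, intent(in) :: niter
    ! ... the work to be timed goes here ...
end subroutine test_poisson
```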
For more examples, please refer to the Documentation
The library takes care of everything else for you:
- Collection of system information
- Collection of compiler information
- Collection of compilation options
- Reporting
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. So, thank you for considering contributing to benchmark.f. Please review and follow these guidelines to make the contribution process simple and effective for all involved. In return, the developers will help address your problem, evaluate changes, and guide you through your pull requests.
By contributing to benchmark.f, you certify that you own or are allowed to share the content of your contribution under the same license.
Please follow the style used in this repository for any Fortran code that you contribute. This allows focusing on substance rather than style.
A bug is a demonstrable problem caused by the code in this repository. Good bug reports are extremely valuable to us—thank you!
Before opening a bug report:
- Check if the issue has already been reported (issues).
- Check whether it is still an issue or whether it has already been fixed: try to reproduce it with the latest version from the default branch.
- Isolate the problem and create a minimal test case.
A good bug report should include all information needed to reproduce the bug. Please be as detailed as possible:
- Which version of benchmark.f are you using? Please be specific.
- What are the steps to reproduce the issue?
- What is the expected outcome?
- What happens instead?
This information will help the developers diagnose the issue quickly and with minimal back-and-forth.
If you have a suggestion that would make this project better, please create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Open a new issue to describe a bug or propose a new feature. Refer to the earlier sections on how to write a good bug report or feature request.
- Discuss with the developers and reach consensus about what should be done about the bug or feature request. When actively working on code towards a PR, please assign yourself to the issue on GitHub. This is good collaborative practice to avoid duplicated effort and also inform others what you are currently working on.
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request with your contribution.
The body of the PR should include at least a bullet-point summary of the changes; a more detailed description is encouraged. If the PR completely addresses the issue you opened in step 1, include the following line in the PR description: `Fixes #<issue-number>`. If your PR implements a feature that adds or changes the behavior of benchmark.f, it must also include appropriate changes to the documentation and the associated unit tests.
In brief,
- A PR should implement only one feature or bug fix.
- Do not commit changes to files that are irrelevant to your feature or bug fix.
- Smaller PRs are better than large PRs and will lead to a shorter review and merge cycle.
- Add tests for your feature or bug fix, to be sure that it stays functional and useful.
- Be open to constructive criticism and requests for improving your code.
Distributed under the MIT License.