nxbench is a comprehensive benchmarking suite for comparative profiling of graph-analytic algorithms across NetworkX and compatible backends. Built with an emphasis on extensibility and detailed performance analysis, nxbench helps developers and researchers optimize their graph analysis workflows efficiently and reproducibly.
- Cross-Backend Benchmarking: Leverage NetworkX's backend system to profile algorithms across multiple implementations (NetworkX, nx-parallel, GraphBLAS, and CuGraph)
- Configurable Suite: YAML-based configuration for algorithms, datasets, and benchmarking parameters
- Real-World Datasets: Automated downloading and caching of networks and their metadata from NetworkRepository
- Synthetic Graph Generation: Support for generating benchmark graphs using any of NetworkX's built-in generators
- Validation Framework: Comprehensive result validation for correctness across implementations
- Performance Monitoring: Track execution time and memory usage with detailed metrics
- Interactive Visualization: Dynamic dashboard for exploring benchmark results using Plotly Dash
- Flexible Storage: SQLite-based result storage with pandas integration for analysis (see the sketch after this list)
- CI Integration: Support for automated benchmarking through ASV (Airspeed Velocity)
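Because results are stored in SQLite, they can be explored directly with pandas. The following is an illustrative sketch, not nxbench's documented API: the database path, table name, and column names are assumptions, so discover the actual schema first and adapt the query.

```python
# Illustrative sketch: explore nxbench's SQLite results with pandas.
# The path and the table name "benchmarks" below are assumptions.
import sqlite3

import pandas as pd

conn = sqlite3.connect("results/benchmarks.sqlite")  # assumed path

# Discover the actual schema before querying
tables = pd.read_sql("SELECT name FROM sqlite_master WHERE type='table'", conn)
print(tables)

# Hypothetical query; replace "benchmarks" with whatever the
# schema discovery above reports.
df = pd.read_sql("SELECT * FROM benchmarks", conn)
print(df.head())
```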
```bash
git clone https://github.com/dpys/nxbench.git
cd nxbench
pip install -e .
```

For benchmarking with CUDA-based tools like CuGraph:

```bash
pip install -e .[cuda]
```
- Configure your benchmarks in a YAML file (see `configs/example.yaml`):
```yaml
algorithms:
  - name: "pagerank"
    func: "networkx.pagerank"
    params:
      alpha: 0.85
    groups: ["centrality"]

datasets:
  - name: "karate"
    source: "networkrepository"
```
- Run benchmarks based on the configuration:
```bash
nxbench --config 'configs/example.yaml' benchmark run
```
- Export results (see the analysis sketch after these steps):

```bash
nxbench benchmark export 'results/results.csv' --output-format csv  # convert benchmark results to CSV
```
- View results:
```bash
nxbench viz serve  # launch interactive dashboard
```
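Once exported, the CSV can be analyzed with standard tools. A minimal sketch, assuming the export command above; the column names (`algorithm`, `backend`, `execution_time`) are hypothetical, so check the CSV header for the actual fields:

```python
# Minimal sketch: summarize exported benchmark results.
# Column names are assumptions; inspect the header first.
import pandas as pd

df = pd.read_csv("results/results.csv")
print(df.columns.tolist())  # the real schema

# Hypothetical summary: mean runtime per algorithm/backend pair
print(df.groupby(["algorithm", "backend"])["execution_time"].mean())
```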
The CLI provides comprehensive management of benchmarks, datasets, and visualization:
```bash
# Validate the ASV configuration
asv check

# Data Management
nxbench data download karate          # download a specific dataset
nxbench data list --category social   # list available datasets

# Benchmarking
nxbench --config 'configs/example.yaml' -vvv benchmark run                # run benchmarks with debug-level logging
nxbench benchmark export 'results/benchmarks.sqlite' --output-format sql  # export results to a SQLite database
nxbench benchmark compare HEAD HEAD~1                                     # compare against the previous commit

# Visualization
nxbench viz serve    # launch the parallel categories dashboard
nxbench viz publish  # generate a static ASV report
```
Benchmarks are configured through YAML files with the following structure:
```yaml
algorithms:
  - name: "algorithm_name"
    func: "fully.qualified.function.name"
    params: {}
    requires_directed: false
    groups: ["category"]
    validate_result: "validation.function"  # see the sketch below

datasets:
  - name: "dataset_name"
    source: "networkrepository"
    params: {}
```
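The `validate_result` field points at an importable function used to check correctness across implementations. Its exact signature is defined by nxbench's validation framework; the sketch below assumes a `(result, graph)` call convention purely for illustration:

```python
# Hypothetical validator for the config entry above. The (result, graph)
# signature is an assumption; consult nxbench's validation framework
# for the actual contract.
def validate_pagerank(result, graph):
    """Check that PageRank returned a probability distribution over nodes."""
    assert set(result) == set(graph.nodes), "missing or extra nodes"
    assert abs(sum(result.values()) - 1.0) < 1e-6, "scores must sum to 1"
```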
- NetworkX (default)
- CuGraph (requires separate CUDA installation and supported GPU hardware)
- GraphBLAS Algorithms (optional)
- nx-parallel (optional)
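Backend selection relies on NetworkX's dispatching mechanism: dispatchable functions accept a `backend=` keyword and route the call to the named implementation when the corresponding package is installed. A minimal illustration, assuming nx-parallel is installed:

```python
# Illustration of NetworkX backend dispatching (assumes the nx-parallel
# package is installed; otherwise the backend call raises an error).
import networkx as nx

G = nx.karate_club_graph()

bc_default = nx.betweenness_centrality(G)                       # reference implementation
bc_parallel = nx.betweenness_centrality(G, backend="parallel")  # dispatched to nx-parallel
```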
```bash
# Install development dependencies
pip install -e .[test,doc]  # testing and documentation extras

# Run tests
make test
```
```bash
# Run benchmarks with GPU support
docker-compose up nxbench

# Run benchmarks CPU-only
NUM_GPU=0 docker-compose up nxbench

# Start the visualization dashboard
docker-compose up dashboard

# Run a specific backend
docker-compose run --rm nxbench benchmark run --backend networkx

# View results
docker-compose run --rm nxbench benchmark export results.csv
```
Contributions are welcome! Please read our Contributing Guide for details on:
- Code style guidelines
- Development setup
- Testing requirements
- Pull request process
This project is licensed under the MIT License - see LICENSE for details.
- NetworkX community for the core graph library and dispatching support
- NetworkRepository.com for harmonized dataset access
- ASV team for benchmark infrastructure
For questions or suggestions:
- Open an issue on GitHub
- Email: [email protected]