Main organisers: Charalampos Tsoumpas (RU Groningen), Christoph Kolbitsch (PTB), Matthias Ehrhardt (U Bath), Kris Thielemans (UCL)
Technical support (CoSeC, UKRI STFC): Casper da Costa-Luis, Edoardo Pasca
PETRIC has now concluded. Please check our dedicated page for the results!
We are organising the PET Rapid Image Reconstruction Challenge (PETRIC), which will run over summer 2024 (mid-June to 30 September). Its primary aim is to stimulate research into the development of fast PET image reconstruction algorithms applicable to real-world data. Motivated by the success of the clinical translation of regularised image reconstruction in PET and other modalities, the challenge will focus on a smoothed version of the relative difference prior [1]. Participants will have access to a sizeable set of phantom data acquired on a range of clinical scanners. The main task for participants is to reach a solution that is close to the converged image (e.g. in terms of mean VOI SUV) as quickly as possible (measured in terms of computation time). This task therefore requires a balance between algorithm design and implementation optimisation. An example solution which reaches the target converged image but takes a long time will be provided at the beginning of the challenge. The PET raw data will be pre-processed to enable researchers to take part even if they have little experience in handling real-world data. Open-source software (SIRF [2], STIR [3], CIL [4]) will be provided to develop and test the algorithms. Implementations must use a given SIRF projector (together with provided multiplicative and additive projection data) such that reconstructed image quality and timing performance depend only on the reconstruction algorithm.
In the spirit of open science, all competitors who want to win cash prizes must make their GitHub repositories publicly available after the challenge under an open-source license fulfilling the OSI definition. However, to foster inclusivity, we also welcome participation from teams who do not make their code open access (see below for more details). Teams will be required to submit an abstract of at most 1000 words describing their algorithm.
The 3 highest-ranked teams will present their contributions at a workshop on Advanced Image Reconstruction Algorithms to be held in conjunction with IEEE MIC 2024. Travel and subsistence will be covered for up to 2 participants per winning team. Online participation in the workshop will be possible. More information can be found on the PETRIC Workshop and Award Ceremony page.
In addition, the 3 highest-ranked teams that provide an open-source solution will receive a monetary award for the whole group:
- 1st place: £500
- 2nd place: £300
- 3rd place: £150
At the start of the challenge, we will provide example data for a small set of phantoms for participants to use when developing their methods (we will aim for 3 different types of phantoms from 2-3 scanners). The data used for the actual competition and scoring of the different algorithms will be acquired after the end of the challenge at different sites, including both Siemens and GE clinical scanners. This minimises bias towards a certain vendor or scanner model. It also ensures that groups involved in organising the challenge can participate, because nobody has access to the final ground-truth data at the time of the challenge. We welcome sites providing acquired raw data (together with Regions of Interest) for the final testing in order to enlarge the database; see the dedicated page with more information. All phantom data will be made publicly available after the challenge. Participants are free to test their algorithms on additional datasets, for instance those available at the Zenodo SyneRBI Community. Due to difficulties with organising data sharing agreements, we will not include patient data in the current challenge.
- Start: PETRIC started on 28 June 2024. Example code and datasets will be available from this point at https://github.com/SyneRBI/PETRIC.
- Finish: PETRIC closes on 30 September 2024 23:59 (GMT). Only submissions that fulfil the requirements listed below will be accepted.
- Last date to make repository open access to qualify for monetary award: 14 October 2024 23:59 (GMT).
- Announcement of final ranking: 15 October 2024.
- PETRIC Workshop and Award Ceremony (Official workshop site @ IEEE MIC 2024): 2 November 2024
The spirit of the competition is that the algorithm is a general-purpose algorithm, capable of reconstructing clinical PET data. The organizing committee has the right to disqualify any algorithms trying to violate that spirit.
We will provide an example repository on GitHub with an implementation of a modified version of the BSREM algorithm of [5]. This example can be used as a template for your own modifications and gives some indication of how fast your own algorithm is. A Docker container with all the code installed will also be made available.
The code MUST run on Python 3.10 using SIRF. We will provide a private template repository for each team to work on.
Note: Teams can submit up to 3 reconstruction algorithms to the challenge (on separate tags).
Each tag in your repository must contain:
- a `main.py` file containing:
  - a `class Submission` which inherits from the CIL class `Algorithm` (see the scripts; a minimal sketch is given below)
  - `submission_callbacks: list` (which could be an empty list)
- a `README.md` file with at least the following sections:
  - Author(s) & affiliation(s)
  - Brief description of your algorithm
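The following is a minimal, illustrative sketch of such a `main.py`, assuming only the CIL `Algorithm` interface (`update` and `update_objective` methods); the `data` handling, attribute names and the simple gradient-ascent update are placeholder assumptions, not the required interface:

```python
# main.py -- illustrative sketch of a submission, not a competitive algorithm.
from cil.optimisation.algorithms import Algorithm


class Submission(Algorithm):
    """Toy gradient ascent on the provided MAP objective (placeholder logic)."""

    def __init__(self, data, step_size=1e-3, update_objective_interval=10, **kwargs):
        # `data` is assumed to bundle the objective function and an initial (e.g. OSEM) image;
        # the attribute names below are hypothetical.
        self.objective = data.objective
        self.x = data.initial_image.clone()
        self.step_size = step_size
        super().__init__(update_objective_interval=update_objective_interval, **kwargs)
        self.configured = True  # tell CIL the algorithm is ready to run

    def update(self):
        # one ascent step on the MAP objective; replace with your own algorithm
        self.x = (self.x + self.step_size * self.objective.gradient(self.x)).maximum(0)

    def update_objective(self):
        # value logged every `update_objective_interval` iterations
        self.loss.append(self.objective(self.x))


submission_callbacks = []  # optional callbacks (e.g. early stopping); an empty list is fine
```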
During the challenge, your pushed code will be automatically run and the results will be posted on the public leaderboard. This also allows you to troubleshoot your code. If you discover problems with our set-up, please create an "Issue" on https://github.com/SyneRBI/PETRIC/issues.
We will run all reconstruction algorithms on the STFC cloud computing services platform. Each server will have an AMD EPYC 7452 32-Core CPU and NVIDIA A100-40GB GPU running under Ubuntu 22.04 with CUDA 11.8. Data (e.g. weights of a pre-trained network) can be downloaded before running the reconstruction algorithm but will be limited to 1 GB.
Among all entries we will determine the fastest algorithm to reach the target image quality. To this end, all algorithms will be run until their solution reaches a specified relative error on all selected metrics or their runtime exceeds 1 hour. For every metric, results from all teams will be ranked according to the wall-clock time it took them to reach the threshold on our standard platform (ranking is from worst (1) to best (N), where "best" means fastest to reach the threshold). The overall rank for each algorithm is the sum of the ranks for the individual metrics on each dataset. The algorithm with the highest overall rank wins the challenge. Note that due to difficulties with wall-clock timing as well as the use of stochastic algorithms, each reconstruction will be run 10 times, and the median wall-clock time will be used.
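As an illustration only (not the official scoring code), the ranking scheme described above could be computed as in the sketch below, assuming `times[team][metric]` holds the median wall-clock time to reach the threshold and `float('inf')` marks runs that never reach it:

```python
# Illustrative ranking sketch: per metric, slower teams get lower ranks (1 = worst),
# the fastest gets rank N; the overall rank is the sum over metrics (and datasets).
times = {
    "team_A": {"whole_RMSE": 120.0, "background_RMSE": 150.0, "VOI_AEM": 200.0},
    "team_B": {"whole_RMSE": 100.0, "background_RMSE": 180.0, "VOI_AEM": float("inf")},
    "team_C": {"whole_RMSE": 300.0, "background_RMSE": 90.0, "VOI_AEM": 110.0},
}

overall = {team: 0 for team in times}
for metric in ("whole_RMSE", "background_RMSE", "VOI_AEM"):
    slowest_first = sorted(times, key=lambda team: times[team][metric], reverse=True)
    for rank, team in enumerate(slowest_first, start=1):
        overall[team] += rank

winner = max(overall, key=overall.get)  # highest overall rank wins
print(overall, winner)
```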
The optimisation problem is a maximum a-posteriori estimate (MAP) using the smoothed relative difference prior (RDP), i.e.

$$\hat{x} = \arg\max_{x \in C} \left[ L(y \mid x) - \mathrm{RDP}(x) \right]$$

where the constraint set $C = \\{ x : x_j \geq 0 \ \forall j \\}$ enforces non-negativity of the image, with $L(y \mid x)$ the Poisson log-likelihood of the measured data $y$,

$$L(y \mid x) = \sum_{i} \left( y_i \log \bar{y}_i(x) - \bar{y}_i(x) \right),$$

with $\bar{y}(x) = A x + s$ the expected data, $A$ the given SIRF projector (including the provided multiplicative factors) and $s$ the provided additive term (accounting for scatter and randoms).

Due to PET conventions, for some scanners, some data bins will always be zero (corresponding to "virtual crystals"), in which case the corresponding elements in $A$ and $s$ are zero as well and those bins do not contribute to the log-likelihood.

The smoothed Relative Difference Prior is given by:

$$\mathrm{RDP}(x) = \sum_{i=1}^{N} \sum_{j \in N_i} w_{ij} \, \kappa_i \kappa_j \, \frac{(x_i - x_j)^2}{x_i + x_j + \gamma \lvert x_i - x_j \rvert + \epsilon}$$

with
- $N$ the number of voxels,
- $N_i$ the neighbourhood of voxel $i$ (here taken as the 8 nearest neighbours in the 3 directions),
- $w_{ij}$ weight factors (here taken as the "horizontal" voxel-size divided by the Euclidean distance between the $i$ and $j$ voxels),
- $\mathbf{\kappa}$ an image giving voxel-dependent weights (here predetermined as the row-sum of the Hessian of the log-likelihood at an initial OSEM reconstruction, see eq. 25 in [7]),
- $\gamma$ an edge-preservation parameter (here taken as 2),
- $\epsilon$ a small number to ensure smoothness (here predetermined from an initial OSEM reconstruction).
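For concreteness, a minimal NumPy sketch of this prior follows; it assumes unit weights $w_{ij}$ and, for brevity, only the 6 face neighbours, so it is a simplification of the definition above rather than the exact challenge implementation (which is provided through STIR/SIRF):

```python
import numpy as np

def smoothed_rdp(x, kappa, gamma=2.0, eps=1e-6):
    """sum_i sum_{j in N_i} kappa_i*kappa_j*(x_i-x_j)^2 / (x_i+x_j+gamma*|x_i-x_j|+eps),
    restricted to face neighbours with unit weights (illustrative simplification)."""
    total = 0.0
    for axis in range(x.ndim):
        lo = [slice(None)] * x.ndim
        hi = [slice(None)] * x.ndim
        lo[axis], hi[axis] = slice(None, -1), slice(1, None)
        xi, xj = x[tuple(lo)], x[tuple(hi)]
        ki, kj = kappa[tuple(lo)], kappa[tuple(hi)]
        diff = xi - xj
        # factor 2: each neighbour pair (i, j) appears twice in the double sum
        total += 2.0 * np.sum(ki * kj * diff**2 / (xi + xj + gamma * np.abs(diff) + eps))
    return total

x = np.abs(np.random.rand(8, 8, 8))   # toy non-negative image
kappa = np.ones_like(x)               # uniform kappa for the example
print(smoothed_rdp(x, kappa))
```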
Each dataset contains:
- $r$: (converged BSREM) reference image
- $W$: (marginally eroded) whole object VOI (volume of interest)
- $B$: background VOI
- $R_i$: one or more VOIs ("tumours", "spheres", "white/grey matter", etc.)
metric calculations (thresholds updated 25 August):

| leaderboard metric name | calculation & threshold |
|---|---|
| whole object RMSE | $\mathrm{RMSE}(\theta; W)$ |
| background RMSE | $\mathrm{RMSE}(\theta; B)$ |
| VOI AEM (absolute error of the mean) | $\lvert \mathrm{MEAN}(\theta; R_i) - \mathrm{MEAN}(r; R_i) \rvert$ |
where:
- $\theta$: your candidate reconstructed image
- $\mathrm{RMSE}(\cdot\,; W)$: voxel-wise root mean squared error computed in region $W$ with respect to the reference $r$
- $\mathrm{MEAN}(\cdot\,; R_i)$: mean for region $R_i$
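These metrics are straightforward to compute yourself during development; the sketch below assumes the VOIs are boolean NumPy masks on the image grid (names are illustrative, not the official evaluation code):

```python
import numpy as np

def rmse(theta, r, voi):
    """Voxel-wise root mean squared error of theta vs. the reference r inside the VOI mask."""
    return np.sqrt(np.mean((theta[voi] - r[voi]) ** 2))

def aem(theta, r, voi):
    """Absolute error of the mean inside the VOI mask."""
    return abs(theta[voi].mean() - r[voi].mean())

# e.g. whole_object_rmse = rmse(theta, r, W); voi_aem = aem(theta, r, R_1)
```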
As our reference algorithm we use a modified version of BSREM (Block-sequential regularized expectation maximization). This converges to the solution of the MAP reconstruction problem but unfortunately can require a high number of iterations. An example demonstrating PET image reconstruction with BSREM using SIRF can be found in this notebook.
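For orientation, the sketch below shows the general shape of a relaxed, subset-based BSREM-type update (EM-style preconditioning, decreasing relaxation, non-negativity projection); it is a schematic illustration under assumed interfaces, not the provided reference implementation:

```python
import numpy as np

def bsrem(x0, subset_gradients, sensitivity, epochs=50, relaxation_eta=0.02):
    """subset_gradients: callables g_m(x) approximating (1/n_sub of) the objective gradient."""
    x = x0.copy()
    n_sub = len(subset_gradients)
    for k in range(epochs):
        alpha = 1.0 / (1.0 + relaxation_eta * k)            # decreasing relaxation factor
        for g in subset_gradients:
            precond = np.maximum(x, 1e-9) / sensitivity     # EM-style preconditioner
            x = np.maximum(x + alpha * precond * n_sub * g(x), 0.0)  # project onto x >= 0
    return x
```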
To help you get started, we have already created an example submission. Of course, this will most likely not win the challenge, but it should give you an idea of how to implement your own algorithm within the PETRIC framework. Check our page with more information on the software available.
- Please regularly check this page with current status and updates.
- If you have SIRF or PETRIC questions, you could join our Discord server (invitation link).
- For problems with our set-up, please create an "Issue" on https://github.com/SyneRBI/PETRIC/issues.
- Prizes are available for the 3 highest-ranked teams that make their code publicly available.
- Submitted algorithms may use up to 1 GB of data included in the repository
- Submissions need to be based on SIRF and use Python
- Submissions are via a private GitHub repository
- The evaluation will be performed as described above.
[1] Nuyts, J., Bequé, D., Dupont, P., & Mortelmans, L. (2002). A Concave Prior Penalizing Relative Differences for Maximum-a-Posteriori Reconstruction in Emission Tomography. IEEE Transactions on Nuclear Science, 49(1), 56–60.
[2] Evgueni Ovtchinnikov, Richard Brown, Christoph Kolbitsch, Edoardo Pasca, Casper da Costa-Luis, Ashley G. Gillman, Benjamin A. Thomas, Nikos Efthymiou, Johannes Mayer, Palak Wadhwa, Matthias J. Ehrhardt, Sam Ellis, Jakob S. Jørgensen, Julian Matthews, Claudia Prieto, Andrew J. Reader, Charalampos Tsoumpas, Martin Turner, David Atkinson, Kris Thielemans (2020) SIRF: Synergistic Image Reconstruction Framework, Computer Physics Communications 249, doi: https://doi.org/10.1016/j.cpc.2019.107087. https://github.com/SyneRBI/SIRF/
[3] Thielemans, K., Tsoumpas, C., Mustafovic, S., Beisel, T., Aguiar, P., Dikaios, N., Jacobson, M.W., 2012. STIR: software for tomographic image reconstruction release 2. Physics in Medicine and Biology 57, 867--883. https://doi.org/10.1088/0031-9155/57/4/867 https://github.com/UCL/STIR/
[4] Jørgensen, J.S., Ametova, E., Burca, G., Fardell, G., Papoutsellis, E., Pasca, E., Thielemans, K., Turner, M., Warr, R., Lionheart, W.R.B., Withers, P.J., 2021. Core Imaging Library - Part I: a versatile Python framework for tomographic imaging. Phil Trans Roy Soc A 379, 20200192. https://doi.org/10.1098/rsta.2020.0192 https://github.com/TomographicImaging/CIL
[5] S. Ahn and J. A. Fessler, ‘Globally convergent image reconstruction for emission tomography using relaxed ordered subsets algorithms’, IEEE Transactions on Medical Imaging, vol. 22, no. 5, pp. 613–626, May 2003, doi: 10.1109/tmi.2003.812251.
[6] Schramm, G., Thielemans, K., 2024. PARALLELPROJ—an open-source framework for fast calculation of projections in tomography. Front. Nucl. Med. 3. https://doi.org/10.3389/fnume.2023.1324562
[7] Tsai, Y.-J., Schramm, G., Ahn, S., Bousse, A., Arridge, S., Nuyts, J., Hutton, B.F., Stearns, C.W., Thielemans, K., 2020. Benefits of Using a Spatially-Variant Penalty Strength With Anatomical Priors in PET Reconstruction. IEEE Transactions on Medical Imaging 39, 11–22. https://doi.org/10.1109/TMI.2019.2913889