Update benchmarking section #216
Comments
Let's put this on the agenda for an upcoming OQS status meeting -- asking for help with reactivating and/or revitalizing profiling. Otherwise, yes, I agree with having appropriate caveats around the version that is currently on the website, and am happy to have a PR with appropriate text. With regard to the standardized algorithms, benchmarking is to some extent not quite as important, as that question has been somewhat settled: there is plenty of evidence now that ML-KEM and ML-DSA are mostly fine from a performance perspective. That said, we will want to track and demonstrate the performance of our implementation of these standards. As we get back into adding more algorithms from the signature on-ramp, there will be more research relevance to showing performance characteristics of candidates.
We can try -- but the "re-vitalization" of oqs-demos also didn't yield concrete results. You can shame, trick, or force someone to say yes to (doing) something in a meeting, but I don't think that will be successful: people must be motivated to do something (either intrinsically, because they believe in its benefit or want to support the team, or because their company has a problem that needs fixing); otherwise there are a million ways not to act. As you say
so what about changing the statement regarding profiling to state exactly that, e.g., "Suspended until the NIST competition or other projects need benchmarking data again. Arguments and contributors for this are welcome."? Remember: I developed all of this at your and @christianpaquin's request to support your NCCoE project. Without such an "itch", what's the reason to work on this?
FYI, we're planning a new round of perf testing later this fall, once the FIPS versions have been integrated into the various TLS components. If this server is still available and running, we'll happily surface its results.
Christian, do you think there are any members of the NCCoE relying on this who would be willing to spend some time helping improve our profiling system?
Maybe. I'll bring it up with them.
It sounds like you have sufficient other results to "surface", so what would be the (incremental) benefit of bringing this section "up to scratch" for you/NCCoE, @christianpaquin?
There's no doubt about that (the original reason for the benchmarking section's existence is gone), @dstebila, but this statement
leads to the key question: Does OQS want to have and demonstrate good performance (why otherwise measure it)? If it does (?), it should make this a priority, maybe asking someone to be its "performance czar"? Better of course would be a "product manager" looking after all of OQS (its unique selling propositions, with performance as one element). Reminder: Getting this was a key reason why the project originally agreed to accept the drawbacks of the LF take-over. Why did this not happen? Maybe a(nother) question for TSC, TAC, GB? This question also bears on open-quantum-safe/liboqs#1426 and performance-improving proposals such as pq-crystals/kyber#85 (in that case, rejected by the upstream). Resultant technical question here: Does OQS want its (performance) characteristics to be controlled by the upstreams, or does it want to make its own decisions in this regard, e.g., inviting @yoswuk to contribute (e.g., via patch) here? More basic question: Is anyone looking after moving the "strategic" aspects of these questions to resolution? As per PQCA/TAC#40 (comment), asking @KennyPaul for input.
We aimed to provide performance metrics for all tested implementations, including OQS. The benefit is for all consumers of the NIST/NCCoE report to get this info about OQS.
@baentsch Also updated PQCA/TAC#40 in a somewhat similar fashion. If by "looking after" you mean "who is driving", that responsibility belongs to the appropriate governing body, which would be either the TAC or the OQS TSC as appropriate.
Understood. Then it is mostly in the interest of OQS to provide this information (and to have comparatively good results), not so much that of NCCoE.
The data at https://openquantumsafe.org/benchmarking/ is by now pretty dated (basically all at the level right before OQS was folded into PQCA) and does not contain information about current & standardized algorithms.
This issue suggests adding a suitable caveat to the website so as not to disappoint users trying to find current figures there (e.g., for NIST-standardized or on-ramp algorithms).
Alternatively, the profiling sub-project and the corresponding runs to generate data could be re-activated (with a definite timeline), given community interest (how to gauge it? how to find doers? maybe add wording on the website asking for such folks?).
Tagging @dstebila @SWilson4 as to your thinking/preference. Please note that many issues in the profiling sub-project have been closed on the assumption that a new benchmarking sub-project would take its place. Those probably ought to be re-activated and worked on instead, then.
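For illustration only, here is a minimal sketch of the kind of figure that is currently missing from the benchmarking page, regenerated locally against the liboqs C API: it times ML-KEM-768 key generation, encapsulation, and decapsulation. This is not the project's profiling harness; it assumes a liboqs install recent enough to expose the `OQS_KEM_alg_ml_kem_768` identifier.

```c
/* Minimal sketch (not the OQS profiling harness): time ML-KEM-768 operations
 * via the liboqs C API. Assumes liboqs is installed and built with ML-KEM
 * enabled; the algorithm identifier OQS_KEM_alg_ml_kem_768 is available in
 * recent liboqs releases.
 * Build (paths may differ): cc bench_mlkem.c -loqs -o bench_mlkem
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <oqs/oqs.h>

#define ITERATIONS 1000

/* Seconds elapsed since 'start' on the monotonic clock. */
static double seconds_since(struct timespec start) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start.tv_sec) + (now.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void) {
    OQS_KEM *kem = OQS_KEM_new(OQS_KEM_alg_ml_kem_768);
    if (kem == NULL) {
        fprintf(stderr, "ML-KEM-768 not enabled in this liboqs build\n");
        return EXIT_FAILURE;
    }

    /* Buffers sized from the algorithm's metadata; error checks omitted for brevity. */
    uint8_t *pk   = malloc(kem->length_public_key);
    uint8_t *sk   = malloc(kem->length_secret_key);
    uint8_t *ct   = malloc(kem->length_ciphertext);
    uint8_t *ss_e = malloc(kem->length_shared_secret);
    uint8_t *ss_d = malloc(kem->length_shared_secret);

    struct timespec start;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERATIONS; i++) {
        OQS_KEM_keypair(kem, pk, sk);
    }
    printf("keypair: %.1f ops/s\n", ITERATIONS / seconds_since(start));

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERATIONS; i++) {
        OQS_KEM_encaps(kem, ct, ss_e, pk);
    }
    printf("encaps:  %.1f ops/s\n", ITERATIONS / seconds_since(start));

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERATIONS; i++) {
        OQS_KEM_decaps(kem, ss_d, ct, sk);
    }
    printf("decaps:  %.1f ops/s\n", ITERATIONS / seconds_since(start));

    free(pk); free(sk); free(ct); free(ss_e); free(ss_d);
    OQS_KEM_free(kem);
    return EXIT_SUCCESS;
}
```

liboqs also ships its own speed-test programs (e.g., the speed_kem and speed_sig test binaries), which would presumably be the natural starting point for any re-activated profiling runs rather than ad-hoc code like the above.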