While using the benchmark package, we sometimes run into performance improvements and/or regressions caused by a toolchain update. Is there a way we could include the toolchain used in the benchmark results? That would make it easier to explain certain results, or to rule out the toolchain as a variable when performance changes.
One approach is to do what e.g. SwiftNIO does and keep separate threshold results per toolchain the project is built with; it's probably the most robust approach in general. WDYT?
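A minimal sketch of how results could be keyed by toolchain: derive a short identifier from the output of `swift --version` and use it as a directory name for per-toolchain thresholds. The helper name, the threshold layout, and the version-string formats are assumptions for illustration, not part of the benchmark package:

```python
import re

def toolchain_id(version_output: str) -> str:
    """Derive a short identifier (e.g. 'swift-5.9.2') from `swift --version` output.

    Handles the typical Linux format ('Swift version 5.9.2 (...)') and the
    macOS format ('Apple Swift version 5.9 (...)'); these formats are
    assumptions and may need adjusting for other toolchains.
    """
    match = re.search(r"Swift version (\d+(?:\.\d+)*)", version_output)
    if match is None:
        return "swift-unknown"
    return f"swift-{match.group(1)}"

# Threshold files could then be stored per toolchain, e.g.:
#   Thresholds/swift-5.9.2/MyBenchmark.json
linux_output = "Swift version 5.9.2 (swift-5.9.2-RELEASE)\nTarget: x86_64-unknown-linux-gnu"
print(toolchain_id(linux_output))  # swift-5.9.2
```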