Benchmarking for performance regressions in painless #49874
Labels: :Core/Infra/Scripting (Scripting abstractions, Painless, and Mustache), Team:Core/Infra (Meta label for core/infra team)
To avoid introducing performance regressions and to measure the impact of performance improvements, we should have a standard set of benchmarks that we run, similar to the Lucene benchmarks.
These benchmark categories include:
Migrated from: #31009
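For illustration only, here is a minimal JMH-style sketch of what one such microbenchmark might look like. The `CompiledScript` interface and the inlined arithmetic stand-in are assumptions made for this sketch; a real harness would compile and execute scripts through the Painless script engine itself rather than a plain-Java stand-in.

```java
import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class PainlessScriptBenchmark {

    // Hypothetical stand-in for a compiled Painless script; a real
    // benchmark would obtain this from the Painless compiler.
    interface CompiledScript {
        Object execute(Map<String, Object> params);
    }

    private CompiledScript arithmeticScript;
    private Map<String, Object> params;

    @Setup
    public void setUp() {
        // Plain-Java equivalent of a trivial Painless expression like
        // "params.count * 2 + 1", used here only to keep the sketch runnable.
        arithmeticScript = p -> ((Number) p.get("count")).longValue() * 2 + 1;
        params = Map.of("count", 42L);
    }

    @Benchmark
    public void executeArithmetic(Blackhole bh) {
        // Consume the result so the JIT cannot eliminate the work.
        bh.consume(arithmeticScript.execute(params));
    }
}
```

Run with the usual JMH runner (or Gradle's JMH integration) and compare results across commits to catch regressions in script execution time.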