diff --git a/packages/performance-impact/readme.md b/packages/performance-impact/readme.md
index 22ff416..f567eb5 100644
--- a/packages/performance-impact/readme.md
+++ b/packages/performance-impact/readme.md
@@ -8,13 +8,13 @@
 Disclaimer: This tool should not be used to catch small performance variations because they are hard to distinguish from the natural variations of performance metrics (see [performance variability](#Performance-variability)).
 
-# Pull request comment
+## Pull request comment
 
 _Screenshot of a pull request comment_
 
 ![stuff](./docs/pull_request_comment.png)
 
-# Performance variability
+## Performance variability
 
 Performance metrics will change due to inherent variability, **even if there hasn't been a code change**. It can be mitigated by measuring performance multiple times.
 
@@ -22,17 +22,17 @@ But you should always keep in mind this variability before drawing conclusions a
 With time you'll be capable to recognize unusual variation in your performance metrics.
 
-# How to catch small performance impacts?
+## How to catch small performance impacts?
 
-Catching small to very small performance impacts with confidence requires a LOT of repetition and time. Both strategies means you will have to wait before knowing the real performance impact.
+Catching (very) small performance impacts with confidence requires repetition and time. You need to:
 
-_How to catch small impacts with a lot of repetition?_
+1. Let your code be used many times in many scenarios and look at the results. This could be scripts, real users, or both.
 
-- Let your code be used a lot of times in a lot of scenarios and see the results. This could be scripts, real users, or both.
+2. And/or push your performance metrics to a tool like Kibana or DataDog and watch their trend over time.
 
-- Push your performance metrics in a tool like Kibana or DataDog and check the tendency of your performance metrics.
+In any case, it means you have to wait before knowing the real performance impact.
 
-In the end I would recommend the following approach:
+## Recommended approach to catching performance impacts
 
 1. measure some performance metrics
 2. Use `@jsenv/performance-impact` to anticipate big variations
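The readme's advice to mitigate variability by "measuring performance multiple times" could look like the sketch below. This is a minimal illustration, not part of `@jsenv/performance-impact`'s API; `measureOnce`, the placeholder workload, and the repetition count are all made up:

```js
// Minimal sketch: run a measurement several times and aggregate,
// so a single noisy run does not dominate the reported metric.
import { performance } from "node:perf_hooks";

const measureOnce = () => {
  const start = performance.now();
  // ...the code whose duration you want to track goes here...
  for (let i = 0; i < 1_000_000; i++) {} // placeholder workload
  return performance.now() - start;
};

const measureManyTimes = (count = 10) => {
  const durations = [];
  for (let i = 0; i < count; i++) {
    durations.push(measureOnce());
  }
  durations.sort((a, b) => a - b);
  return {
    min: durations[0],
    median: durations[Math.floor(durations.length / 2)],
    max: durations[durations.length - 1],
  };
};

console.log(measureManyTimes());
```

Reporting the median rather than a single run is one simple way to make the metric less sensitive to outliers.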
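Pushing metrics to a tool like DataDog, as the "how to catch small performance impacts" section suggests, can be done with a plain HTTP call to DataDog's v1 series endpoint. A minimal sketch, assuming Node 18+ for global `fetch` and a `DD_API_KEY` environment variable; the metric name and tags are hypothetical:

```js
// Minimal sketch: push one gauge value to DataDog's v1 metrics API
// so its trend can be watched over time.
const pushMetric = async (value) => {
  const response = await fetch("https://api.datadoghq.com/api/v1/series", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "DD-API-KEY": process.env.DD_API_KEY,
    },
    body: JSON.stringify({
      series: [
        {
          metric: "app.import_duration", // hypothetical metric name
          type: "gauge",
          points: [[Math.floor(Date.now() / 1000), value]],
          tags: ["source:ci"], // hypothetical tag
        },
      ],
    }),
  });
  if (!response.ok) {
    throw new Error(`DataDog responded with ${response.status}`);
  }
};

await pushMetric(42.3);
```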