Component(s)
jmx-scraper
Is your feature request related to a problem? Please describe.
The current integration test assertion framework (the MetricAssertions class) reports assertion failures in a verbose and suboptimal way.
On an assertion failure it prints out all received metrics and all assertions, multiple times, which makes finding the actual problem very hard. The more metrics a test checks, the worse this gets, and often the only way to find out what went wrong is to selectively comment out code.
Some assertions are also missing details, forcing the developer to guess what went wrong.
Describe the solution you'd like
A new assertion framework should be created that fulfills the following requirements (a rough sketch of a possible API follows the list):

- Print only the metric that failed the assertion
- Provide a meaningful description and the metric property details needed to identify the root cause of the assertion failure
- Fail when any expected metric was not received
- Fail when an unexpected metric is received (one with no assertion defined for it); there should be a mechanism to opt out of this default behavior, which may be useful when it is very hard or impossible to force the tested target to report some metric
- The API must be easy to use, concise, and reliable (it should be hard to miss a metric check)
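For illustration only, here is a minimal sketch of the kind of API that could satisfy these requirements. Every name in it (MetricsVerifier, assertMetric, allowExtraMetrics, the Metric type) is hypothetical and does not refer to any existing code in the repository:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Consumer;

// Stand-in for whatever metric type the integration tests actually use.
interface Metric {
  String name();
}

// Hypothetical verifier: registers one assertion per expected metric name,
// then checks everything in a single verify() pass.
final class MetricsVerifier {
  private final Map<String, Consumer<Metric>> assertions = new HashMap<>();
  private boolean strict = true; // fail on unexpected metrics by default

  static MetricsVerifier create() {
    return new MetricsVerifier();
  }

  // Register an assertion for a metric that must be received.
  MetricsVerifier assertMetric(String name, Consumer<Metric> assertion) {
    assertions.put(name, assertion);
    return this;
  }

  // Opt out of the default "fail on unexpected metric" behavior.
  MetricsVerifier allowExtraMetrics() {
    strict = false;
    return this;
  }

  void verify(List<Metric> received) {
    Set<String> seen = new HashSet<>();
    List<String> failures = new ArrayList<>();
    for (Metric metric : received) {
      Consumer<Metric> assertion = assertions.get(metric.name());
      if (assertion == null) {
        if (strict) {
          failures.add("unexpected metric received: " + metric.name());
        }
        continue;
      }
      seen.add(metric.name());
      try {
        assertion.accept(metric);
      } catch (AssertionError e) {
        // Report only the failing metric, not the whole payload.
        failures.add("metric '" + metric.name() + "' failed: " + e.getMessage());
      }
    }
    for (String expected : assertions.keySet()) {
      if (!seen.contains(expected)) {
        failures.add("expected metric was not received: " + expected);
      }
    }
    if (!failures.isEmpty()) {
      throw new AssertionError(String.join("\n", failures));
    }
  }
}
```

A test would then register one assertion per expected metric and call verify() once; because every received metric must match a registered assertion (unless allowExtraMetrics() is used), it is hard to accidentally skip a check.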
The new framework will coexist with the current one for some time, so the currently implemented integration tests can be ported incrementally in separate PRs.
Once all tests are ported, the old framework will be removed from the codebase.
Describe alternatives you've considered
No response
Additional context
No response