Improve errors and warnings #1681
I'm definitely keen to have this for my artificially low timeouts. I've set some requests with a timeout of 3s to mirror frontend behaviour. Under high load, this clutters my console with tens of thousands of timeout errors.
@kz, here are some workaround suggestions until this issue is implemented.
Just adding my +1 to this idea. I am in favour of more warnings, with a way to suppress them. This could also (later) feed into Performance Insights on k6 Cloud. I also want to specifically mention @yorugac's comment about the error.
As a way to reduce the log clutter, and instead of (or in addition to) having to configure k6 to ignore specific events, how about grouping repeated events in time buckets, with a counter of how many such events were logged within that period? Let's say the default would be to group events in one-minute buckets.

New events would render at the bottom, and if they're repeated, only their counter would increase instead of writing an entire separate line. The time precision of individual events would be reduced, but in most cases exact timestamps aren't very important when the events are flooding the output. This would require some event buffering for the configurable bucket period, and some tricky writing to stderr, but it shouldn't be too difficult. And, of course, it should be possible to disable this behavior.
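The bucketing idea can be sketched in a few lines. Below is a minimal in-memory aggregator; the `LogBucketer` name and its API are purely illustrative, not part of k6:

```javascript
// Minimal sketch of time-bucketed log deduplication: repeated messages
// within the same bucket only bump a counter instead of adding a new line.
class LogBucketer {
  constructor(bucketMs) {
    this.bucketMs = bucketMs;
    this.counts = new Map(); // "bucketStart|message" -> repeat count
  }

  log(message, timestampMs) {
    const bucketStart = Math.floor(timestampMs / this.bucketMs) * this.bucketMs;
    const key = `${bucketStart}|${message}`;
    this.counts.set(key, (this.counts.get(key) || 0) + 1);
  }

  // One output line per distinct message per bucket, with a repeat counter.
  render() {
    return [...this.counts.entries()].map(([key, count]) => {
      const sep = key.indexOf('|');
      const bucketStart = Number(key.slice(0, sep));
      const message = key.slice(sep + 1);
      return `${new Date(bucketStart).toISOString()} (x${count}) ${message}`;
    });
  }
}

// 10,000 identical timeout warnings arriving within ~10 seconds all fall
// into the same one-minute bucket, so they collapse to a single line.
const bucketer = new LogBucketer(60_000);
for (let i = 0; i < 10_000; i++) {
  bucketer.log('WARN request timeout', 1_600_000_000_000 + i);
}
console.log(bucketer.render());
```

A real implementation would additionally need the buffering and stderr rewriting mentioned above, plus a flush on the bucket boundary; this only shows the grouping logic itself.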
I think what Ivan suggested above around bucketing log events is probably a good idea, and in my opinion it deserves a time-boxed effort to build a small proof-of-concept to evaluate how feasible it would be and what the impact would be. I'll try to do that during one of the upcoming cooldown periods.

Beyond that, and regarding what's suggested in the rest of the issue, I think (personal opinion) what we should do is design some sort of Insights API (similar to what we have for Cloud now, either extending that or building a complementary one), where you can receive these kinds of "warnings" telling you about potential ways of improving your tests. I think that approach would be much better because it would be decoupled from other kinds of error/warning messages, which wouldn't just mean a better UX but would make it easier to, for instance, enable/disable them, and also because I think most of the things mentioned above are quite similar/related to some of the insights we already have.
This started as a discussion with @sniku that a lot of k6 users probably don't realize that, even if 100% of their HTTP requests fail, `k6 run their-script.js` will exit with a 0 status (i.e. "all ok") if they have not defined any thresholds. So, he suggested that we can add a warning at the end of a `k6 run`, when we show the summary, if no thresholds are defined. He even proposed using the summary data to suggest an appropriate `http_req_duration` threshold value for them, if it makes sense. Basically, have a customized warning+hint combo if we have the data (e.g. more than 100 HTTP requests) and a default generic "no thresholds" warning for every other scenario.

All of these things seem fine and an improvement in UX, but my biggest problem is that some percentage of users (more than 10%, I think) will purposefully not want to set any thresholds, and this warning might get annoying. For example, someone doing simple performance measurements or exploratory testing, or people who are just writing their script and testing it with 1 VU, not the SUT. Or, on the other end, people who just use k6 to generate load and are using some other tool to monitor their SUT. Or people using external outputs. So, I think we need some way for users to suppress the warning.
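For context, the thresholds that the proposed warning would nudge users toward are defined in the script's options. A minimal sketch (the threshold values and the target URL are illustrative only):

```javascript
import http from 'k6/http';

export const options = {
  // With thresholds defined, `k6 run` exits non-zero when they are crossed,
  // instead of always exiting 0 even if every request fails.
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95th-percentile latency under 500ms
  },
};

export default function () {
  http.get('https://test.k6.io/'); // illustrative target
}
```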
Also, there are other warnings/errors we log that users might want to suppress. The most obvious example is the HTTP request failure warning, which you can currently only turn off by enabling `throw`... And if we have the possibility to disable warnings, we probably would also want to use them for other things as well! Some ideas:

- warn when users `open()` large files
- a lot of users don't seem to know about `discardResponseBodies` and `responseType`, so we might nudge them with a warning if they have large HTTP responses or something like that

So, I think we should:
- give each warning a unique ID, e.g. `k6NoThresholdsDefined`, `k6OpenLargeFile`, etc.
- allow users to suppress specific warnings with something like `options.ignoredWarnings = ['k6NoThresholdsDefined']` (any ideas for a better name?)

To get back to the original issue - having a warning for "no defined thresholds" will not solve all issues. It will just educate users who might not know that thresholds exist or what their purpose is. We'd still have many, many issues to fix in the thresholds themselves (#1443 (comment)) before they are useful enough for detecting all sorts of test issues, but the two problems are orthogonal and we should fix both.
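Put together, the existing knobs and the proposed suppression mechanism might look roughly like this in a script. Note which parts are real versus hypothetical: `throw`, `discardResponseBodies`, and the per-request `responseType` param exist in k6, while `ignoredWarnings` and the warning IDs are only the design sketched in this issue:

```javascript
import http from 'k6/http';

export const options = {
  // Real option: failed HTTP requests throw an exception instead of logging
  // a warning -- per this issue, currently the only way to silence those.
  throw: true,
  // Real option: don't keep response bodies in memory by default.
  discardResponseBodies: true,
  // HYPOTHETICAL: the suppression mechanism proposed above; neither the
  // option name nor the warning IDs exist in k6 today.
  ignoredWarnings: ['k6NoThresholdsDefined', 'k6OpenLargeFile'],
};

export default function () {
  // Real per-request override for when a response body is actually needed.
  http.get('https://test.k6.io/', { responseType: 'text' });
}
```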