Tail logs in k6 #1599
Codecov Report

```diff
@@            Coverage Diff             @@
##           master    #1599      +/-   ##
==========================================
- Coverage   74.90%   73.94%    -0.97%
==========================================
  Files         164      165       +1
  Lines       14156    14517     +361
==========================================
+ Hits        10604    10734     +130
- Misses       3014     3220     +206
- Partials      538      563      +25
```

Continue to review full report at Codecov.
stats/cloud/logs.go (outdated)

```go

			continue
		}
```
Nit: are these additional newlines because of a linter? The ones before `continue` look particularly strange to me. :-/
Do you mean the one before `continue` - that is nlreturn; this one specifically I think is whitespace, but I'm not sure ;)
Yeah, I'd vote for disabling nlreturn then; the extra newlines look out of place to me.
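For context, a small self-contained illustration (mine, not taken from this diff) of what nlreturn enforces, as I understand it: a blank line before `return`, `break`, and `continue` statements that follow other statements in the same block.

```go
package main

import "fmt"

// countNonEmpty is written in the style nlreturn asks for: a blank line
// before a continue or return that follows other statements in its block.
func countNonEmpty(lines []string) int {
	n := 0

	for _, l := range lines {
		if l == "" {
			fmt.Println("skipping empty line")

			continue // without the blank line above, nlreturn flags this
		}
		n++
	}

	return n // nlreturn also wants the blank line before this return
}

func main() {
	fmt.Println(countNonEmpty([]string{"a", "", "b"})) // prints 2
}
```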
@mstoykov I tested it on prod now and it works great, but I think we should do something about the log output obscuring the progress bar. With `k6 run` the progress bars stay pinned to the bottom and any log output is printed above them. With the current `k6 cloud` the progress bar output is mixed with the logs, and with a lot of events scrolling by, the progress bar is only visible when the logs are quiet. Even scrolling back doesn't show it, as the log output overwrites it.
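As an aside, here is a generic sketch of the terminal trick behind the pinned-bar behavior described above (not k6's actual UI code): each log write first clears the bar's line, prints the log line so it scrolls up with the rest of the output, and then redraws the bar on the bottom line.

```go
package main

import (
	"fmt"
	"time"
)

// logAboveBar clears the current (progress bar) line, prints the log line so
// it scrolls up with normal output, then redraws the bar on the bottom line.
func logAboveBar(logLine, bar string) {
	fmt.Print("\r\x1b[2K") // carriage return + ANSI "erase entire line"
	fmt.Println(logLine)
	fmt.Print(bar)
}

func main() {
	for i := 0; i <= 10; i++ {
		bar := fmt.Sprintf("running [%3d%%]", i*10)
		logAboveBar(fmt.Sprintf("log event #%d", i), bar)
		time.Sleep(150 * time.Millisecond)
	}

	fmt.Println()
}
```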
LGTM, besides that env var nitpick. Thankfully that progress bar fix was straightforward, and I like that both `run` and `cloud` are sharing the same progress bar rendering, even though that UI code could use some cleanup.
cmd/cloud.go (outdated)

```go
}

//nolint:gochecknoglobals
var cloudLogsCmd = &cobra.Command{
```
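For readers unfamiliar with the pattern in the fragment above, here is a hypothetical sketch of a cobra subcommand declared as a package-level variable; the name, arguments, and behavior are made up for illustration and are not the actual k6 command.

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

//nolint:gochecknoglobals
var rootCmd = &cobra.Command{Use: "k6-sketch"}

// cloudLogsSketchCmd is a stand-in for the real cloudLogsCmd added in this PR;
// its arguments and behavior are illustrative only.
//nolint:gochecknoglobals
var cloudLogsSketchCmd = &cobra.Command{
	Use:   "logs [reference-id]",
	Short: "Tail logs for a cloud test run (illustrative only)",
	Args:  cobra.ExactArgs(1),
	RunE: func(cmd *cobra.Command, args []string) error {
		fmt.Printf("would tail logs for test run %s here\n", args[0])

		return nil
	},
}

func main() {
	rootCmd.AddCommand(cloudLogsSketchCmd)

	if err := rootCmd.Execute(); err != nil {
		fmt.Println(err)
	}
}
```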
Given the issues we've had, why not delay the command to get old logs until k6 v0.29.0? As far as I understand, it's not going to work very well now, right? And it's far from essential, especially when we still have `k6cloudlogs` doing essentially the same thing. So, personally, I'd vote for not including this now.
This is a possibility, but I will likely be able to make the changes ... today or tomorrow.
But I agree that maybe it should be in a separate PR ...
I don't doubt we can make the changes, but why not wait a version for things to be a bit more stable? There's no rush to have this command; the other use case in this PR is much more important...
```go
// we are going to set it ... so we always parse it instead of it breaking the command if
// the cli flag is removed
```
Hmm, that's a good point. I hadn't considered it in my "fix #883" ideas/dreams, and it's probably going to slightly complicate matters... 😞
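To make the pattern from the quoted code comment concrete, here is a hypothetical sketch (the env var and flag names are assumptions for illustration, not k6's actual ones): the value is always read from the environment, and the CLI flag is only consulted if it is still registered, so removing the flag later can't break the command.

```go
package cmdsketch

import (
	"os"

	"github.com/spf13/cobra"
)

// resolveLogOutput always parses the environment variable, then overlays the
// CLI flag only if it exists and was explicitly set. Removing the flag from
// the command later leaves the env var path working unchanged.
func resolveLogOutput(cmd *cobra.Command) string {
	out := os.Getenv("K6_SKETCH_LOG_OUTPUT") // hypothetical variable name

	if f := cmd.Flags().Lookup("log-output"); f != nil && f.Changed {
		out = f.Value.String()
	}

	return out
}
```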
LGTM
```go
func (m *msg) Log(logger logrus.FieldLogger) {
	var level string

	for _, stream := range m.Streams {
```
Hmm, this could potentially cause messages to arrive out of order, right? I doubt it's a big issue if each `msg` encapsulates only a second, since each stream would be a separate instance?
I thought of this some minutes ago ... since /query_range apparently doesn't return logs in any particular order either. I don't even know if tail does ... my testing shows that, yes, they do seem to come in order, but more investigation is required.
I fear I will need to get all the logs, order them, and then print them :( But I propose that be a separate PR - this is unlikely to be a problem unless you have a lot of logs, and arguably that is already a pretty bad experience (it's also likely to lead to dropped logs due to k6 not being able to tail fast enough).
👍 fine for a separate PR, if needed...
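If that reordering does become necessary in a follow-up PR, a minimal sketch of the idea (with made-up types, not the actual msg/stream structures from this PR) would collect entries from all streams, sort them by timestamp, and only then log them:

```go
package logsort

import (
	"sort"
	"time"

	"github.com/sirupsen/logrus"
)

// entry is a stand-in for one log line from one stream.
type entry struct {
	t   time.Time
	msg string
}

// logSorted merges entries from all streams, orders them by timestamp, and
// logs them in that order instead of stream by stream.
func logSorted(logger logrus.FieldLogger, streams [][]entry) {
	var all []entry
	for _, s := range streams {
		all = append(all, s...)
	}

	sort.Slice(all, func(i, j int) bool { return all[i].t.Before(all[j].t) })

	for _, e := range all {
		logger.WithField("time", e.t).Info(e.msg)
	}
}
```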
This doesn't work well in non-TTY mode 😭 Try running ...
I don't think this PR is the place to fix progress bar issues ;)
Probably so, but this PR makes it worse; try running the same example with k6 v0.27.1... So, yeah, the cause is #1580, but we definitely need to fix this before v0.28.0 is officially released.
Again ... this is not an issue with this PR ... I'm not even of the opinion that this needs fixing in v0.28.0, or at least not any more than it has needed until now - try running a script which runs for some time with v0.27.1.
I think you simply didn't run the example I suggested above; it isn't that it takes longer, it simply looks nasty... Anyway, I'll try to fix these things in a new PR.
I did run it ... the reason it looks nasty comes down to two things: