Stream Terraform output through web interface #1315
Conversation
I really like what you've done here. My only feedback (other than tests): I'm wondering if we could eliminate the global channel entirely and just persist directly to the pull's channel + log buffer through some abstraction.
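For illustration only, a minimal sketch of what such an abstraction might look like; `OutputPersister`, its methods, and the `pullKey` parameter are hypothetical names, not part of this PR:

```go
package jobs // illustrative package name

// OutputPersister is a hypothetical abstraction: rather than pushing
// all Terraform output through one global channel drained by a single
// Listen goroutine, the command runner would write directly to the
// per-pull buffer (and any connected clients) behind this interface.
type OutputPersister interface {
	// Write appends one line of output for the given pull request.
	Write(pullKey string, line string) error
	// Clear drops any buffered output for the pull, e.g. when a new
	// plan starts.
	Clear(pullKey string) error
}
```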
```go
// all clients. This function blocks, please invoke it in a goroutine.
func (j *JobsController) Listen() {
	for msg := range j.TerraformOutputChan {
		if msg.ClearBefore {
```
Is this mainly invoked in the case that new plans are triggered for the same PR before the buffer is cleared?
I don't think it should actually be necessary given the current implementation. Maybe in the case that a plan crashes and burns badly and the loop over the projects somehow doesn't finish. But I always like to clear caches at startup, not only at teardown, "just in case".
```go
JobTemplate         TemplateWriter
TerraformOutputChan <-chan *models.TerraformOutputLine

logBuffers map[string][]string
```
I can see this easily being customized into a pluggable "backend", allowing us to actually persist these; a rough sketch follows.
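As an illustration of that idea (all names here are hypothetical), the buffer map could hide behind a small storage interface, with the current in-memory map as its default implementation:

```go
package jobs // illustrative package name

import "sync"

// LogBackend is a hypothetical pluggable store for per-pull log lines;
// a persistent implementation could later replace the in-memory one.
type LogBackend interface {
	Append(key, line string) error
	Get(key string) ([]string, error)
	Delete(key string) error
}

// inMemoryBackend mirrors the existing logBuffers map[string][]string.
type inMemoryBackend struct {
	mu      sync.Mutex
	buffers map[string][]string
}

func (b *inMemoryBackend) Append(key, line string) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.buffers[key] = append(b.buffers[key], line)
	return nil
}

func (b *inMemoryBackend) Get(key string) ([]string, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.buffers[key], nil
}

func (b *inMemoryBackend) Delete(key string) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	delete(b.buffers, key)
	return nil
}
```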
```go
logBuffers map[string][]string
wsChans    map[string]map[chan string]bool
chanLock   sync.Mutex
```
We should probably use sync.RWMutex so reads don't block each other.
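A minimal sketch of the suggestion, assuming the fields from the diff above; the helper methods are hypothetical, not from this PR:

```go
package jobs // illustrative package name

import "sync"

type JobsController struct {
	logBuffers map[string][]string
	chanLock   sync.RWMutex // was sync.Mutex
}

// Readers share the lock, so concurrent reads no longer serialize.
func (j *JobsController) getBuffer(key string) []string {
	j.chanLock.RLock()
	defer j.chanLock.RUnlock()
	return j.logBuffers[key]
}

// Writers still take the exclusive lock.
func (j *JobsController) appendLine(key, line string) {
	j.chanLock.Lock()
	defer j.chanLock.Unlock()
	j.logBuffers[key] = append(j.logBuffers[key], line)
}
```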
```diff
@@ -447,7 +453,21 @@ func (c *DefaultCommandRunner) runProjectCmdsParallel(cmds []models.ProjectComma

 func (c *DefaultCommandRunner) runProjectCmds(cmds []models.ProjectCommandContext, cmdName models.CommandName) CommandResult {
```
I see this is only implemented for the serial path; any reason?
Nope, I wasn't aware there was also a parallel implementation. I don't think we use it at my workplace.
```go
case ch <- line:
default:
	// If we get to this case then it means that
	// the channel is full, which probably means
```
This channel is unbuffered, though? How would it be full?
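For context on the question: with an unbuffered channel, `select` takes the `default` branch whenever no receiver is currently blocked on the channel, not only when a buffer is full, so the `default` arm is still reachable. A small runnable illustration:

```go
package main

import "fmt"

func main() {
	ch := make(chan string) // unbuffered: a send succeeds only if a receiver is ready

	select {
	case ch <- "line":
		fmt.Println("delivered")
	default:
		// No goroutine is blocked in a receive, so the send cannot
		// proceed and select falls through to default immediately.
		fmt.Println("dropped: no receiver ready")
	}
}
```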
Hello guys! I'm actually very interested in contributing to this PR. Beyond all the conflicts (and the missing tests), what more is needed here? I would like to take this PR up. Cheers!
Those are probably the biggest things. The current implementation also doesn't translate well to setups that use parallel operations, so having something that works for both would be great. My suggestion would be to have separate project-specific channels for a given pull request instead of a single global PR channel; see the sketch below. There might be a bunch of changes around how clients connect as well.
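A rough sketch of that suggestion, with entirely hypothetical names: key the output streams by pull plus project rather than by pull alone, so parallel project runs don't interleave on one stream:

```go
package jobs // illustrative package name

// projectKey identifies one project's output within a pull request.
type projectKey struct {
	pull        string // e.g. "owner/repo#123"
	projectPath string
	workspace   string
}

// perProjectChans replaces a single global PR channel with one
// channel per (pull, project, workspace), letting clients subscribe
// to an individual project's output.
type perProjectChans struct {
	chans map[projectKey]chan string
}
```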
👋 Apologies in advance for my ignorance of issue etiquette: the main difference is plumbing a per-project channel through; should I open a new PR with those changes?
Implemented in #1937.
This is an alternative implementation of the same idea as #1309; see #1309 (comment). No tests or documentation updates yet. Unfortunately, I do not currently have the bandwidth to maintain the pull request, but I figured that I would make it available for the community to build upon.