Add UUID for Log Streaming Job ID #167
Conversation
Didn't we agree to not do this and keep it synchronous/simple?
Took a first pass and left some comments; I'll go over it again after.
Yes, I created this PR before we had that discussion in the design doc yesterday. Now that the JobIDGenerator does not need the pullToJob mapping (as suggested in the PR comments), it can just be a simple component that generates new UUIDs, so we won't be adding new responsibilities to the log handler.
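A stateless generator along these lines could look like the following sketch. The type and method names here are assumptions for illustration, and crypto/rand stands in for whichever UUID library the project actually uses:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// JobIDGenerator is a hypothetical stateless component that only
// generates fresh job IDs; it holds no pullToJob mapping.
type JobIDGenerator struct{}

// GenerateJobID returns a new random (version 4) UUID string.
func (g JobIDGenerator) GenerateJobID() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		panic(err) // crypto/rand should not fail on supported platforms
	}
	b[6] = (b[6] & 0x0f) | 0x40 // set version 4 bits
	b[8] = (b[8] & 0x3f) | 0x80 // set RFC 4122 variant bits
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

func main() {
	g := JobIDGenerator{}
	fmt.Println(g.GenerateJobID())
}
```

Because the generator keeps no state, callers such as the log handler can hold one by value and never worry about synchronization.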
Looks good overall, left some comments
```go
if p.pullToJobMapping[msg.JobContext.PullContext] == nil {
	p.pullToJobMapping[msg.JobContext.PullContext] = map[string]bool{}
}
p.pullToJobMapping[msg.JobContext.PullContext][msg.JobID] = true
```
Is there a reason to use PullContext vs JobContext as a key here?
Because a JobContext is specific to a pull request at a certain commit, we cannot use it to track all the jobs in a PR: a PR can have multiple revisions and thus multiple plan and apply jobs.
Ok, just to be on the same page: once we implement what we discussed yesterday, we won't need this mapping anymore? Running a command will be self-contained and the buffers will be active only while the command is running; once it is done, the buffer is cleared and any future requests will serve the static files from S3?
As discussed during our one-on-ones, we will need this mapping for use cases where log persistence is not configured. In that scenario, we'll have to wait for the PR to be merged before doing any clean-up operations, so we'll need the pullToJobMapping to keep track of all the jobs for a PR.
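A minimal sketch of how that clean-up could use the mapping once the PR is merged. The type names and fields below (PullContext, LogHandler, projectOutputBuffers) are illustrative stand-ins, not the actual Atlantis types:

```go
package main

import "fmt"

// PullContext is a hypothetical key identifying a pull request.
type PullContext struct {
	Owner, Repo string
	PullNum     int
}

// LogHandler sketches the two maps discussed in this thread.
type LogHandler struct {
	pullToJobMapping     map[PullContext]map[string]bool
	projectOutputBuffers map[string][]string // jobID -> buffered log lines
}

// CleanUpPull runs once the PR is merged: every job tracked for the
// pull has its buffer dropped, then the mapping entry itself is removed.
func (h *LogHandler) CleanUpPull(pull PullContext) {
	for jobID := range h.pullToJobMapping[pull] {
		delete(h.projectOutputBuffers, jobID)
	}
	delete(h.pullToJobMapping, pull)
}

func main() {
	pull := PullContext{Owner: "owner", Repo: "repo", PullNum: 1}
	h := &LogHandler{
		pullToJobMapping:     map[PullContext]map[string]bool{pull: {"job-1": true}},
		projectOutputBuffers: map[string][]string{"job-1": {"plan output"}},
	}
	h.CleanUpPull(pull)
	fmt.Println(len(h.projectOutputBuffers)) // 0 after cleanup
}
```

Without the pull-to-jobs index, the handler would have no way to find every job's buffer for a PR across its revisions when it is finally time to clean up.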
* Add UUID for Log Streaming Job ID (#167)
* Update log handler to close buffered channels when an operation is complete (#170)
* Add preliminary check before registering new receivers in the log handler (#173)
* Using projectOutputBuffers to check for jobID instead of receiverBuffers (#181)
* Refactor log handler (#175)
* Reverting go.mod and go.sum
* Fix lint errors
* Fix linting
This adds the responsibility of generating Job IDs to the log handler, which is obviously not ideal. In a follow-up PR, the Send(), Handle() and GenerateJobID() methods will be moved into their own component, which will sit in between the terraform client and the log handler. It will follow a pub/sub pattern where the terraform client is the producer with one or more subscribers. For now, the log handler will be the only subscriber, but a storage backend will eventually be added as a subscriber as part of persisting logs in Atlantis.
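The intended pub/sub shape can be sketched as follows. The JobMessage, Subscriber, and Publisher names are assumptions for illustration; the real component boundaries in the follow-up PR may differ:

```go
package main

import "fmt"

// JobMessage is a hypothetical unit of streamed output for one job.
type JobMessage struct {
	JobID string
	Line  string
}

// Subscriber is anything that consumes streamed log lines: the log
// handler today, and eventually a storage backend for persistence.
type Subscriber interface {
	Receive(msg JobMessage)
}

// Publisher sits between the terraform client (the producer) and its
// subscribers, fanning each message out to all of them.
type Publisher struct {
	subscribers []Subscriber
}

func (p *Publisher) Register(s Subscriber) {
	p.subscribers = append(p.subscribers, s)
}

func (p *Publisher) Publish(msg JobMessage) {
	for _, s := range p.subscribers {
		s.Receive(msg)
	}
}

// logHandler is a stand-in subscriber that just records what it sees.
type logHandler struct{ lines []string }

func (l *logHandler) Receive(msg JobMessage) {
	l.lines = append(l.lines, msg.Line)
}

func main() {
	p := &Publisher{}
	h := &logHandler{}
	p.Register(h)
	// The terraform client would act as the producer here:
	p.Publish(JobMessage{JobID: "abc", Line: "terraform plan output"})
	fmt.Println(h.lines)
}
```

Adding the storage backend later then becomes a second Register call rather than another change to the log handler.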