Add UUID for Log Streaming Job ID #167

Merged: 13 commits merged into release-v0.17.3-lyft.1 on Feb 1, 2022

Conversation

@Aayyush commented Jan 24, 2022

  • Replaced org/repo/pull/project/workspace with a 32-digit hexadecimal string as the job identifier in the log handler.
  • Updated the job URL to use the job ID.

This adds the responsibility of generating job IDs to the log handler, which is obviously not ideal. In a follow-up PR, the Send(), Handle(), and GenerateJobID() methods will be moved into their own component, which will sit between the terraform client and the log handler. It will follow a pub/sub pattern, with the terraform client as the producer and one or more subscribers. For now, the log handler will be the only subscriber, but a storage backend will eventually be added as a subscriber to persist logs in Atlantis.
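
For reference, here is a minimal sketch of the change described above; GenerateJobID is named in this description, but the hex-encoding details and the /jobs/ URL path are assumptions rather than the exact code in this PR:

```go
package handlers

import (
	"encoding/hex"
	"fmt"

	"github.com/google/uuid"
)

// GenerateJobID returns a random 32-digit hexadecimal string that replaces
// the old org/repo/pull/project/workspace composite as the log-streaming
// job identifier.
func GenerateJobID() string {
	id := uuid.New()                 // random (v4) UUID, 16 bytes
	return hex.EncodeToString(id[:]) // 32 hex digits, no dashes
}

// JobURL builds the per-job log URL from the job ID alone; the "/jobs/"
// path segment here is illustrative.
func JobURL(atlantisBaseURL, jobID string) string {
	return fmt.Sprintf("%s/jobs/%s", atlantisBaseURL, jobID)
}
```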

@nishkrishnan (Contributor)

> It will follow a pub/sub pattern, with the terraform client as the producer and one or more subscribers. For now, the log handler will be the only subscriber, but a storage backend will eventually be added as a subscriber to persist logs in Atlantis.

Didn't we agree to not do this and keep it synchronous/simple?

@nishkrishnan (Contributor) left a comment

Took a first pass and left some comments; I'll take another pass afterwards.

server/controllers/websocket/mux.go (outdated, resolved)
server/handlers/project_command_output_handler.go (outdated, resolved)
server/handlers/project_command_output_handler.go (outdated, resolved)
server/handlers/project_command_output_handler.go (outdated, resolved)
@Aayyush (Author) commented Jan 26, 2022

> It will follow a pub/sub pattern, with the terraform client as the producer and one or more subscribers. For now, the log handler will be the only subscriber, but a storage backend will eventually be added as a subscriber to persist logs in Atlantis.

> Didn't we agree to not do this and keep it synchronous/simple?

Yes, I created this PR before we had that discussion in the design doc yesterday. Now that the JobIDGenerator does not need the pullToJob mapping (as suggested in the PR comments), it can just be a simple component that generates a new UUID, so we won't be adding new responsibilities to the log handler.
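
A rough sketch of what that stripped-down component could look like; only the JobIDGenerator name comes from the comment above, while the package, interface shape, and use of github.com/google/uuid are assumptions:

```go
package jobs

import "github.com/google/uuid"

// JobIDGenerator is the simple, stateless component described above: it only
// hands out fresh job IDs and keeps no pull-to-job mapping.
type JobIDGenerator interface {
	GenerateJobID() string
}

// UUIDGenerator satisfies JobIDGenerator by returning a new random UUID on
// every call.
type UUIDGenerator struct{}

func (UUIDGenerator) GenerateJobID() string {
	return uuid.New().String()
}
```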

@msarvar left a comment

Looks good overall, left some comments

server/controllers/jobs_controller.go (outdated, resolved)
server/controllers/jobs_controller.go (outdated, resolved)
server/handlers/project_command_output_handler.go (outdated, resolved)
server/events/project_command_context_builder.go (outdated, resolved)
server/handlers/project_command_output_handler.go (outdated, resolved)
Comment on lines 125 to 128
```go
if p.pullToJobMapping[msg.JobContext.PullContext] == nil {
	p.pullToJobMapping[msg.JobContext.PullContext] = map[string]bool{}
}
p.pullToJobMapping[msg.JobContext.PullContext][msg.JobID] = true
```
Reviewer:

Is there a reason to use PullContext vs JobContext as a key here?

@Aayyush (Author) replied:

Because a JobContext is specific to a pull at a certain commit, we cannot use JobContext to track all the jobs in a PR: a PR can have multiple revisions and therefore multiple plan and apply jobs.
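
In other words, the mapping is keyed by the pull request so that jobs from every revision land in the same bucket. A rough illustration follows; the PullContext fields and helper shown here are assumptions, not the PR's actual definitions:

```go
package handlers

// PullContext identifies a pull request independently of any single commit;
// the fields shown here are illustrative.
type PullContext struct {
	RepoFullName string
	PullNum      int
}

// pullToJobMapping: pull request -> set of job IDs started for it, across
// all of its revisions (a PR can have multiple plan/apply jobs).
var pullToJobMapping = map[PullContext]map[string]bool{}

// registerJob records a job ID under its pull request.
func registerJob(pull PullContext, jobID string) {
	if pullToJobMapping[pull] == nil {
		pullToJobMapping[pull] = map[string]bool{}
	}
	pullToJobMapping[pull][jobID] = true
}
```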

Reviewer:

OK, just to be on the same page: once we implement what we discussed yesterday, we won't need this mapping anymore? Running a command will be self-contained and the buffers will be active only while the command is running; once it is done, the buffer is cleared and any future requests will serve the static files from S3?

@Aayyush (Author) replied:

As discussed during our one-on-ones, we will need this mapping for use cases where log persistence is not configured. In that scenario, we have to wait for the PR to be merged before doing any cleanup, so we need pullToJobMapping to keep track of all the jobs for a PR.
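
A sketch of that cleanup path, reusing the illustrative PullContext from the earlier sketch; the struct layout and CleanUpPull method are assumptions (projectOutputBuffers is only referenced by name in the follow-up commits listed below):

```go
package handlers

// ProjectCommandOutputHandler here is a sketch: it keeps the per-job log
// buffers and the pull -> job-ID set discussed in this thread.
type ProjectCommandOutputHandler struct {
	projectOutputBuffers map[string][]string             // jobID -> buffered log lines
	pullToJobMapping     map[PullContext]map[string]bool // pull -> set of job IDs
}

// CleanUpPull drops all in-memory job state for a pull once it is merged or
// closed; this path is only needed when log persistence is not configured.
func (p *ProjectCommandOutputHandler) CleanUpPull(pull PullContext) {
	for jobID := range p.pullToJobMapping[pull] {
		delete(p.projectOutputBuffers, jobID)
	}
	delete(p.pullToJobMapping, pull)
}
```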

server/controllers/jobs_controller.go (outdated, resolved)
server/events/models/models.go (outdated, resolved)
server/handlers/project_command_output_handler.go (outdated, resolved)
@Aayyush merged commit 7f2d9bf into release-v0.17.3-lyft.1 on Feb 1, 2022
@Aayyush deleted the aay/add-logs-uuid branch on February 1, 2022 at 22:12
Aayyush added a commit that referenced this pull request Feb 14, 2022
* Add UUID for Log Streaming Job ID (#167)

* Update log handler to close buffered channels when an operation is complete (#170)

* Add preliminary check before registering new receivers in the log handler (#173)

* Using projectOutputBuffers to check for jobID instead of receiverBuffers (#181)

* Refactor log handler (#175)

* Reverting go.mod and go.sum

* Fix lint errors

* Fix linting