supporting PackageEvaluator #1

Closed · jrevels opened this issue Feb 17, 2016 · 4 comments

@jrevels commented Feb 17, 2016

The immediate next step for this package is supporting PackageEvaluator, but before I start trying to support new kinds of jobs, I'd like to get a consensus on what the end product should look like. Since I've never used PackageEvaluator myself, input from @IainNZ and @tkelman would be valuable for making sure my design goals are sensible.

My initial take is that this package should basically become a job submission/execution system + a collection of job types for @nanosoldier.

For example, it would be easy to alter the current submission syntax (runbenchmarks(...)) to accommodate new job types:

nanosoldier(:benchmark, args...)
nanosoldier(:pkgeval, args...)

On the back-end, we can have different types J <: AbstractJob that overload Base.run. The server can then parse the comments into the proper job type, push them to the job queue, and let the provided pool of workers grab jobs from the queue as they become available. This is more or less already what Nanosoldier.jl does, but it only has one job type, BenchmarkJob.

Thus, each job type J <: AbstractJob would be responsible for its own setup/work/report/teardown cycle (defined by run(::J)) once it's launched on a worker.
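
A minimal sketch of that design, assuming hypothetical names (only BenchmarkJob exists in Nanosoldier.jl today; PkgEvalJob, the queue, and the worker loop here are illustrative, not actual API):

abstract type AbstractJob end

struct BenchmarkJob <: AbstractJob
    args::Vector{String}
end

struct PkgEvalJob <: AbstractJob  # hypothetical; does not exist yet
    args::Vector{String}
end

# Each job type defines its own setup/work/report/teardown cycle via Base.run.
Base.run(job::BenchmarkJob) = println("benchmarking: ", join(job.args, " "))
Base.run(job::PkgEvalJob) = println("evaluating packages: ", join(job.args, " "))

# The server parses a trigger comment into the proper job type...
function nanosoldier(jobtype::Symbol, args::String...)
    jobtype === :benchmark && return BenchmarkJob(collect(args))
    jobtype === :pkgeval && return PkgEvalJob(collect(args))
    error("unknown job type: $jobtype")
end

# ...pushes it onto a shared queue, and a pool of workers pulls jobs off the
# queue as they become available.
const JOB_QUEUE = Channel{AbstractJob}(32)
worker() = for job in JOB_QUEUE
    run(job)
end

put!(JOB_QUEUE, nanosoldier(:pkgeval, "ALL", "vs", "release"))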

It shouldn't be too hard to plug PackageEvaluator into this model, since it seems to already have all the necessary scripts for spinning up containers, running the relevant tests, and cleaning up after itself. Is that correct? Are there any gotchas that I should watch out for?

It could also be cool to eventually incorporate some METADATA automation here as well.

@IainNZ commented Feb 17, 2016

PkgEval is indeed pretty self-contained: it's already cron-job-able, although I've had issues before with zombie VMs that just won't die. The results are quite chunky, since they contain all the test logs (tens of MB), but you can throw a lot away depending on what you want to do with them. You'd probably want to modify PkgEval to take "arguments", like compile-and-compare two commits of Julia; right now, it's not written like that.

@tkelman commented Feb 18, 2016

I have a little post-processing script (via jq, though using JSON.jl would work too) that summarizes all of the job output into a simple pass/fail summary, and then I diff that before vs. after. I'll also look at the diff over time between separate runs (my release baseline only changes once a month or so, but I might do a new run a few days later with possibly newer package versions, etc.).
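
As a rough illustration of that post-processing step in Julia, here is a minimal sketch using JSON.jl; the log layout (a top-level array of objects with "name" and "status" fields) and the file names are assumptions, not PackageEvaluator's actual schema:

using JSON

# Collapse a full results file into a package => pass/fail map.
summarize(path) = Dict(r["name"] => r["status"] for r in JSON.parsefile(path))

# Print status changes between two runs (e.g. release baseline vs. a newer run).
function diffsummaries(before, after)
    for (pkg, status) in sort!(collect(after))
        old = get(before, pkg, "missing")
        old != status && println(pkg, ": ", old, " -> ", status)
    end
end

diffsummaries(summarize("before.json"), summarize("after.json"))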

jrevels changed the title from "Initial Planning" to "supporting PackageEvaluator" on Jun 11, 2016
@tkelman commented Jun 11, 2016

I recently wrote up my thoughts on the pkgeval side of this at JuliaCI/PackageEvaluator.jl#131. It'll need refactoring from the current static script design to more of a library entry point before hooking up a comment listener to it will make any sense.

@maleadt commented Jan 15, 2020

We have this now.

maleadt closed this as completed on Jan 15, 2020