Running tile-reduce across containers #108
You can specify Also, didn't we have a watchbot repo that split and managed tile-reduce jobs? Can't find it right now though.
it seems cool on paper, but tbh I'm not sure what this would gain you that you can't already do with watchbot. I'd love to be proven wrong, but I suspect it'd be easiest to use ecs-watchbot in reduce mode, storing worker output data in a known location on S3 that the reducer can pull from. You'd need to orchestrate a queueing script that generates the list of tiles to process, as well as replicate logic for reading mbtiles / s3 tiles with tilelive in the workers, but that's relatively minimal effort and could be abstracted.
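The queueing script mentioned above could be sketched roughly like this. The tile math is the standard slippy-map formula; the bbox, zoom, and job shape are illustrative placeholders, not taken from any real watchbot stack:

```javascript
// Convert lon/lat to XYZ tile coordinates at a given zoom (slippy-map math).
function lonLatToTile(lon, lat, z) {
  const n = Math.pow(2, z);
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return [x, y, z];
}

// Enumerate every tile covering a [west, south, east, north] bbox.
function tilesForBBox(bbox, z) {
  const [minX, maxY] = lonLatToTile(bbox[0], bbox[1], z);
  const [maxX, minY] = lonLatToTile(bbox[2], bbox[3], z);
  const tiles = [];
  for (let x = minX; x <= maxX; x++) {
    for (let y = minY; y <= maxY; y++) tiles.push([x, y, z]);
  }
  return tiles;
}

// Each tile would become one SQS message for an ecs-watchbot worker.
const jobs = tilesForBBox([-122.52, 37.7, -122.35, 37.83], 12);
console.log(jobs.length);
```

Each worker would then read its tile from mbtiles/S3 with tilelive, write its result to a known S3 prefix, and the reducer would pull everything from that prefix once the queue drains.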
I agree... but only from the perspective of someone who has worked pretty extensively with both libraries. I think the primary motivation for work like this would be to ease transition of systems currently running tile-reduce on EC2s over to ECS. Does that seem worthwhile? Or is it more worthwhile to have developers learn the ins and outs of ecs-watchbot to transition these stacks?
As someone who recently lived through the use case this is meant to address, I lean toward not dedicating concerted resources toward a custom tile-reduce version. It is easy to stack-parameterize |
It is not, except
An HTTP/TCP-based mapreduce implementation is going to have very different performance characteristics compared to tile-reduce. The communication protocol has a major effect on architecture considerations for any particular job, such as tile zoom or memory usage. I think something like this is worthwhile, but it's a very different problem than what tile-reduce is trying to solve (closer to Hadoop). If the noisy neighbor problem is a common footgun though, I would be in favor of making `maxWorkers` a required parameter with no default. We can document an example that sets `maxWorkers` to
The current internal workings of tile-reduce make it very good for running on a single machine, but due to an issue with `require('os').cpus().length`, which doesn't accurately report the number of CPUs available within a Docker container, tile-reduce can be very resource-hungry when running in a container.

Rather than working with a single-machine/process-forking model for distributing work, could we write a version of tile-reduce that runs workers as their own short-lived containers that communicate with a master container over HTTP or TCP? For use in ECS, we could use https://github.com/mapbox/ecs-watchbot to manage the orchestration of running new containers on an AWS ECS cluster. We could possibly create affordances for other orchestration tools (like Mesos or Kubernetes) if other people would like them.
cc/ @nickcordella @mourner @tcql @rclark