Update Docker Machine Roadmap for 0.4.0 #1239
Conversation
Very nice writeup! I think that captures the discussions we've had. I'm worried a bit about the scope but I think focusing on libmachine would get a good groundwork for the others. Overall, I think this would be awesome :)
> addresses are something which must be tackled in some fashion eventually if
> Machine is to become a reliable, and error-resilient, tool.
>
> ## Support for creating multiple instances at once
maybe move this section up above "Uniform Resource Model" so it's not mistaken as being tentative?
👍 Lots of exciting stuff here :)

👍

+1

👍
One common use case for provisioning the docker engine on a cloud provider (I have experienced this on azure) is to create a storage-backed drive and mount /var/lib/docker on it.
@chanezon What are the steps to do so? Would
Force-pushed from 1b6fd9d to f6d0a38
Changed the order a bit, as @hairyhenderson mentioned, to put both of the "tentatives" at the bottom.
Signed-off-by: Nathan LeClaire <nathan.leclaire@gmail.com>
Force-pushed from f6d0a38 to 7eedb1b
@nathanleclaire no, that would not be enough. While provisioning you need to create the drive, then format and mount it. I documented that process in detail in this doc: https://github.com/chanezon/azure-linux/tree/master/coreos/cluster#data-disk Look at the --data-disk option docs: the python provisioning script creates the disk (https://github.com/chanezon/azure-linux/blob/master/coreos/cluster/azure-coreos-cluster#L225) and then the cloud-init units format and mount it.
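For reference, the format-and-mount step being described can be expressed with cloud-init's disk modules on providers that support them. A minimal sketch (the device name `/dev/sdc` is an assumption here; it varies by provider):

```yaml
#cloud-config
# Format the attached data disk (device name is provider-specific).
fs_setup:
  - device: /dev/sdc
    filesystem: ext4
# Mount it where the Docker engine stores its data.
mounts:
  - [ /dev/sdc, /var/lib/docker, ext4, "defaults,nofail" ]
```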
@chanezon I really like the idea of cloud-init. Perhaps we should re-investigate attempting to use it. For those who aren't aware, the reason we delayed on cloud-init is that not every provider fully supports it. After running into issues on the 4th (out of 14) providers, we chose to start with our model instead. My fear is that we would rely very heavily on the provider for cloud-init: if they upgrade their version, etc., we would have to wait on them for fixes (the issue on the 3rd provider was that they didn't allow custom repositories to be added; it can vary widely from provider to provider). Another option would be to run a version of cloud-init ourselves; however, this usually involves a mounted loopback filesystem, and not all providers support that either. As an aside, a pre-create hook in the rivet system would allow this as well.
cc @efrecon I want to make sure you're looped in on this... I know a lot of these same things are similar to what machinery does today, and it'd be great to get your insight and contributions in terms of the UX (however you can - if you don't know Go, we're happy to help you learn as well). Hopefully we can free you up to do more high-level things with machinery by taking care of some of the boilerplate stuff, and/or by making machine easier to interact with for you.
Looks broadly good, thanks guys! A few comments:

### Quality + UX

This is quite ambitious for 0.4! I would much rather see an explicit focus on improving quality and user experience for what Machine is currently good at than trying to add loads of features. When we make Machine the recommended way of running Docker locally, nearly all of our users are going to be using it to manage a local VirtualBox VM, and it needs to be really good at that. I think there are lots of improvements that can be made here -- I'm still running into really basic UX problems such as #962 and #1266. Adding lots of new features is only going to increase the burden of keeping quality high.

Compose has done this pretty well, I think. It is still mostly focused on making the core experience (running development environments) really, really good, and has been conservative about adding new features.

### Compose files

I'm concerned about automatically running a Compose file on a Machine. We have some plans about how to manage Compose applications over time, and this feels like a hack that would start to shift this responsibility onto Machine. @aanand might want to chip in here.

### Extensions

The extension support should use the same method as extensions in the Docker Engine (moby/moby#13161, moby/moby#13222). I don't think there should be fragmentation here.
Thanks @bfirsh. I think the scope was a bit much too (see my first comment) but I like the general direction.
I am not opposed to a quality release (we can always use that :) - I would just like clarification on your ideas for the direction of Machine. Machine has always supported multiple providers, both local and remote. If the focus is shifting to local providers, then we should probably move to an extension model ASAP, halt new remote drivers, push them to the community, and focus on getting the UX rock solid for local (I'm assuming you mean all local: VirtualBox, VMware Fusion, Hyper-V).
I am fine leaving compose duties to compose. That makes sense. I think what @nathanleclaire was referring to was the ability to perform actions at certain times during provisioning.
Rivet was an experiment in pushing provisioning responsibilities outside of Machine (in hindsight I should have kept it private). I would much rather use an official extension framework.
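To make the "pre-create hook" idea mentioned above concrete, here is a minimal sketch of what such an extension point might look like. The `Host` struct and the hook signature are illustrative assumptions, not Machine's actual API:

```go
// Hypothetical sketch of a pre-create extension hook; not Machine's real API.
package main

import "fmt"

// Host is a minimal stand-in for a machine about to be created.
type Host struct {
	Name   string
	Driver string
}

// PreCreateHook runs before a host is created; returning an error aborts creation.
type PreCreateHook func(h *Host) error

// RunPreCreate runs each registered hook in order, stopping at the first error.
func RunPreCreate(hooks []PreCreateHook, h *Host) error {
	for i, hook := range hooks {
		if err := hook(h); err != nil {
			return fmt.Errorf("pre-create hook %d failed: %v", i, err)
		}
	}
	return nil
}
```

An extension that provisions a storage-backed drive for /var/lib/docker (as discussed earlier in the thread) could register itself as such a hook.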
Makes sense to me too. (If Compose is packaged up in a Docker image - which is something we should look into doing officially - then it's just a very simple special case of this feature.)
(warning... long post) Very interesting reading indeed, and as @nathanleclaire wrote, a number of these things overlap with what machinery does. I have been looking for a good excuse to start with Go, so yes, there are probably ways to collaborate on this in one form or another! To start, let me try to describe how machinery came to exist, as I think it provides (opinionated) insights into where machine itself could (or could not) head. This is from the point of view of both a user and a developer, though obviously from the outside, as I had not looked at any part of the code before starting work on machinery and I am not part of the docker team. I took a long and rather unsuccessful foray into CoreOS before deciding to move my clusters to the full docker ecosystem. As I saw it, it looked like this (you all know this, obviously, but sometimes it's perhaps good to look at this through "new" eyes):
So to refer back to @bfirsh above: all these tools do one single thing, and aim at doing it right. I think that this is key to their current success. Machinery came along from reflecting around vagrant. So machinery started with this: take one YAML file, and arrange for that file to describe a number of VMs that will be created by machine while being part of the same cluster. I could have stopped there, since the next logical step from a UX perspective was something that could be run manually.
Note that machinery does a little bit more, for example around shares/volumes. So, back to where machine should (or should not) be heading. I think that you might be at a turning point:
I fully understand that I am biased. If you go for the "one machine" route, then there still is a need for a tool on top of machine, and well... that tool would look very much like machinery (or provide the same kinds of abstractions and facilities). This post has tried to take some distance from the current state of affairs. I have little insight into the way you work as a team and the "grand docker goals", so I might have completely misunderstood. I would be happy to dig much more into the feature set and discuss in more detail in further posts. Was I of any help?
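For readers unfamiliar with the one-YAML-file approach described above, a cluster description of that general shape might look roughly like this. The field names here are purely illustrative, not machinery's actual syntax:

```yaml
# Hypothetical cluster description: two VMs managed as one cluster.
db:
  driver: virtualbox
  memory: 2048
web:
  driver: virtualbox
  cpus: 2
```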
@efrecon Thanks for the awesome write-up! It absolutely helped. You bring up tremendous points. After reflecting a bit on this, I agree with your comment about creating a single VM reliably and stably. We have a lot of work to do around the creation process, provisioning, progress reporting, error reporting / recovery, etc., as @bfirsh mentioned. I agree with @bfirsh that moving to more features only spreads us out more and exposes a much greater surface for maintenance and issues. I think we could still support the community using Machine as a tool (i.e. machinery, rancher, etc.) while still focusing on making that process insanely stable. It is easy to get caught up in wanting to add features (we've had some very interesting brainstorming discussions about the future) - especially when we have several avenues of input, each wanting different things. It is awesome to have the community help drive what we want Machine to become. I am truly grateful for feedback like this to help me personally see other sides. With this being said, it doesn't mean Machine won't try to achieve some or all of the goals listed above. I think we just need to put more into the "foundation" before expanding. @efrecon your feedback was invaluable. Thank you very much!
This is slightly off-topic, but I thought it would be of interest to you since we are talking roadmap. I have just pushed an initial implementation of a Web API to operate on machinery from a distance. This has been on my roadmap for a while, and I thought that now would be a good time. This makes machinery a flexible machine for creating machines, and is in line with docker itself. The reason I am posting this is that it could be among the things you consider in the future for machine itself.
This is different from what we're actually going to focus on this release, but we will revisit most of these ideas at some point.
Meant as a high-level overview of where @ehazlett's and my heads are at in terms of goals for the 0.4.0 release.
Would like to get feedback. Which ideas are great and will help with your use cases? Which ideas seem terrible? Have we accurately represented these goals in our ongoing conversations so far?
cc @aanand @bfirsh @ehazlett @hairyhenderson @sthulb @huslage @tianon @chanezon @vieux @vincentbernat @jeffmendoza @frapposelli @samalba @SvenDowideit @ibuildthecloud @amylindburg @aluzzardi for feedback - sorry about the CC bomb but I think all of you have valuable insight about "different parts of the elephant" so to speak and deciding the direction(s) to go with this correctly is very important for next steps in Docker orchestration.
Signed-off-by: Nathan LeClaire nathan.leclaire@gmail.com