RFC for remote clients #50976

Closed · wants to merge 5 commits
Conversation

cachedout
Contributor

What does this PR do?

Introduces an RFC to remove the restriction that the salt command be run from a local master and allow it to be run from anywhere.

@cachedout requested a review from a team as a code owner on December 24, 2018
rfcs/remote-client.md (Outdated)
have a shared AES key securely transmitted to them.

As such, this proposal suggests that the existing public-key authentication
system simply be extended to accommodate clients.
Contributor

👍 makes sense to me
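For concreteness, a minimal sketch of what extending the public-key scheme to clients might look like, assuming the master stores accepted client public keys the way it stores minion keys today (all names here are illustrative, not from the RFC; uses the `cryptography` package):

    # Hypothetical: a remote client signs its serialized request with its
    # own RSA key; the master verifies against an accepted client key,
    # mirroring how minion keys are accepted today.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Client side: generated once; the public half would be submitted to
    # the master and accepted via salt-key (assumption).
    client_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

    def sign_request(payload: bytes) -> bytes:
        return client_key.sign(payload, PSS, hashes.SHA256())

    # Master side: verify against the stored, accepted client public key.
    def verify_request(pub_pem: bytes, payload: bytes, sig: bytes) -> bool:
        pub = serialization.load_pem_public_key(pub_pem)
        try:
            pub.verify(sig, payload, PSS, hashes.SHA256())
            return True
        except InvalidSignature:
            return False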

* Create a unified client which can publish commands seamlessly between salt-api
instances and local salt masters. This could potentially be extended to
`salt-ssh` as well, though the goals of that are not necessarily orthogonal
to this RFC.
Contributor

I like this option a lot. If we could expand saltnado to have more feature parity with rest_cherrypy, and then build a client that uses the websocket stream of the event bus to watch for events the way LocalClient looks for them now, we could remove the restriction of having the LocalClient call in the API wait for the events to return. It would also allow for more robust integration from other programming languages.

Also, expanding saltnado has the benefit that it doesn't add any extra dependencies, and we could drop the CherryPy requirement for the API endpoint.
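As a rough sketch of that flow against the HTTP API's documented /login, /minions, and /events endpoints (deployment URL and credentials assumed; error handling omitted):

    # Publish via salt-api, then watch the event stream for job returns
    # instead of blocking inside LocalClient on the master.
    import json
    import requests

    API = "https://salt-api.example.com:8000"  # assumed deployment

    token = requests.post(API + "/login", json={
        "username": "saltdev", "password": "...", "eauth": "pam",
    }).json()["return"][0]["token"]

    # POST /minions starts an async job and returns the jid immediately.
    ret = requests.post(API + "/minions", json={"tgt": "*", "fun": "test.ping"},
                        headers={"X-Auth-Token": token}).json()["return"][0]
    jid = ret["jid"]

    # /events streams the master event bus as server-sent events; watch
    # for salt/job/<jid>/ret/<minion> tags rather than polling the job.
    with requests.get(API + "/events", headers={"X-Auth-Token": token},
                      stream=True) as events:
        for line in events.iter_lines(decode_unicode=True):
            if line and line.startswith("data:"):
                event = json.loads(line[len("data:"):])
                if event["tag"].startswith("salt/job/%s/ret/" % jid):
                    print(event["data"]["id"], event["data"]["return"])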

Contributor

@isbm left a comment

@cachedout @gtmanfred @s0undt3ch @thatch45 Generally, sending a salt ... command to multiple masters is indeed something that is lacking. However, I am not sure the idea of having another pile of keys around to manage is optimal. Maybe it is, though... I would go with that idea only if there were a very good reason to make the salt command totally independent of wherever it is called from. It connects to the remote master[s] anyway, so a local master doesn't have to exist on the same host the salt command is called from. Potentially, every minion could then call a master.

But is it indeed needed?

Instead, I would rather focus on a master-to-master grid, so that we already have an authenticated multi-master network. Then any salt command, through its local master, can send a message to any remote master or to all of them.

Another improvement here is that I would rather... drop the -m parameter. I find it redundant. Let me explain a bit more. Why would you call a remote master in the first place? Because it has minions you're interested in. Thus your target is not really a master but actually a specific minion. That said, we could do it like this:

salt \*@another-master.lan test.ping

The local master takes the * pattern and forwards it to the another-master.lan host. It is even SSH-ish, in a way. OK, so far, so good.
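A tiny hypothetical sketch of how such a tgt@master target could be split before routing (the function name and syntax are invented here):

    def split_target(tgt):
        # Split "pattern@master" into (pattern, master); master is None
        # for plain local targets like "*". Hypothetical syntax only.
        pattern, sep, master = tgt.rpartition("@")
        return (pattern, master) if sep else (master, None)

    assert split_target("*@another-master.lan") == ("*", "another-master.lan")
    assert split_target("*") == ("*", None)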

But then we have another problem with this approach: this still looks like a topology that only segregates minions per master, boldly assuming that there is only one particular master. That would make it a pretty useless feature in a simple multi-master setup, because here only a single master is responsible for the specific subset of minions one can reach with * at that particular another-master.lan master. And if we have (or could have) a multi-master setup that load-balances minions, then we have no idea to which particular master our minions are connected. Calling salt \*@master thus still doesn't mean it will reach all the minions we want, unless messages are broadcast to the other masters behind the scenes.

To fix the above, here is a better syntax, and it would actually work:

salt \* test.ping

Oh wait... 😆 OK, don't get me wrong here: that is how I would connect to remote master[s]. The difference would be behind the scenes, not in the CLI as actually proposed, for the following reasons:

  • I don't care which master I connect to, as long as I reach the particular minion.

  • I expect my SLS state files to be synchronised across the masters, so I operate in terms of environments, not hostnames/masters. Thus I do not care from which master my environment reaches my particular minions, as long as it does.

  • I configure my grid in the configuration. My local master already knows the network and the other peers, and the keys are already known and verified. So I do not need to care about managing yet another set of keys.

  • I reach minions by grains and other attributes, not by knowing which master they are attached to. A master does not necessarily need to drive the current minion, especially if we are talking about HA, just like... you know... here; you just apply states from your environment:

      salt -G 'ec2_tags:environment:*production*' state.apply
    

How would the Salt "grid" react to this? The same as it does now: return me the results from the minions. The difference is in how it finds them: masters would reply to the client's general broadcast with "Yes, I have this minion!" and the others would simply ignore the message.

As simple as that. And no need to change semantics.
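In pseudocode, the master-side behavior described above might look like this (entirely hypothetical; check_minions is modeled loosely on Salt's existing target-matching internals):

    # Each master in the grid claims a broadcast publication only for
    # minions actually connected to it; everyone else stays silent.
    def on_grid_publish(master, pub):
        matched = master.check_minions(pub["tgt"], pub["tgt_type"])
        if not matched:
            return None  # "not my minions": silently ignore the broadcast
        # "Yes, I have these minions!": publish locally and stream the
        # returns back to the originating master for aggregation.
        jid = master.publish_local(pub, minions=matched)
        return {"master": master.id, "jid": jid, "minions": matched}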

My points do not interfere with the ability to call different APIs either. What I propose is to reach the minions behind other masters directly, instead of reaching other masters via remote clients in order to reach the minions behind them.

Unless I am missing something...

@damon-atkins
Contributor

I have suggested in other issues that it might be useful to support username@hostname, which would represent a minion not running as root/system. The keys would be in the format username@hostname/minionid.
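For example, the accepted-keys directory might then look something like this (hypothetical layout):

    /etc/salt/pki/master/minions/web01              # classic minion running as root
    /etc/salt/pki/master/minions/alice@web01/web01  # per-user, non-root minion key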

@damon-atkins
Contributor

See #44495 RFE: Minion Authenticated Roles with salt-key & Auto Accept

@cachedout
Contributor Author

I understand what you're saying here, @isbm, but in my view, being able to target a master is actually quite valuable. Think, for example, of a situation where a master plus a set of minions constitutes an application group. Being able to target the group, as identified only by the master, seems to me like it would be very nice to have.

@isbm
Contributor

isbm commented Jan 4, 2019

@cachedout but we have covered that already: environments and grains. Just add your minions to a custom grain, say "cluster", and select them that way. Whatever minion is in cluster:my-cool-group then responds to my query. Why would I need to contact a master instead? Masters are basically almost volatile entities: if one gets overloaded, I add another... Your approach still has a fundamental flaw here: you are locking to a particular master. Once I add another master because my group has grown a lot, this feature renders itself useless. Much better if I don't even see these masters and they are totally transparent. You are always calling minions, not masters.
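For example, with today's tooling (grains.setval is an existing execution function):

    # Tag each minion with a custom grain once...
    salt \* grains.setval cluster my-cool-group

    # ...then target the logical group without ever naming a master:
    salt -G 'cluster:my-cool-group' test.ping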

@cachedout
Contributor Author

We'll have to agree to disagree then. :)

While your workflow might always be limited to "always calling minions", I see cases where addressing targets by master might be desirable, and I suspect many others will as well. I suppose these sorts of discussions are the reason we have an RFC process, so I'll be interested to hear what others think.

@isbm
Contributor

isbm commented Jan 4, 2019

Sure. :-) Not that I am opposing or supporting anything here. I am just trying to understand your reasoning for why this is needed and where it is useful. I still fail to answer these questions for myself:

  • Why can we not just put minions into groups via grains during their registration phase, instantly forming the desired logical segregation?

  • Why another pile of keys (read: another management hassle, more worries, and more CPU heat) if we can completely avoid it?

  • Why are grains not sufficient? We could extend the registration workflow to regroup minions for segregation purposes.

  • Why approach masters instead of minions?

And in contrast, masters are:

  • "moving target" — minions can talk to one, but can to many or switch them randomly.

  • not making any changes on the system themselves, merely sending tasks (e.g. SUSE ships them non-root by default).

  • something that can and should be replaced/reconfigured at any time.

So far, more questions than answers, unless I am fundamentally missing something in your RFC. Please at least record these questions as open items in your RFC if you won't answer them. :-)

@mattp-
Contributor

mattp- commented Jan 13, 2019

I tend to lean toward @isbm's and @gtmanfred's thoughts on this RFC. I've coincidentally put some thought into how to approach this problem with what Salt provides today. I think what makes the most sense is to make saltnado/salt-api the integration point for external communication. A RestApiClient could be responsible for executing and returning results over a remote (or even local) API. At that point, states/saltmod would just need to be augmented for when a request is remote vs. local, and likewise for CLI integration. You could also use a shared minion data cache to establish whether a given request needs to be routed remotely, instead of specifying it explicitly.
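A rough sketch of what that RestApiClient might look like (the class and its methods are invented here; /login and /minions are the documented salt-api endpoints):

    import requests

    class RestApiClient:
        """Hypothetical client that publishes through salt-api, remote or
        local, so saltmod/states and the CLI can share one code path."""

        def __init__(self, url, username, password, eauth="pam"):
            self.url = url.rstrip("/")
            login = requests.post(self.url + "/login", json={
                "username": username, "password": password, "eauth": eauth,
            }).json()
            self.token = login["return"][0]["token"]

        def cmd_async(self, tgt, fun, arg=None, tgt_type="glob"):
            """Start a job via /minions; returns the jid for later lookup."""
            low = {"tgt": tgt, "fun": fun, "arg": arg or [],
                   "tgt_type": tgt_type}
            ret = requests.post(self.url + "/minions", json=low,
                                headers={"X-Auth-Token": self.token}).json()
            return ret["return"][0]["jid"]

    # Usage: the caller doesn't care whether the API host is local or remote.
    # jid = RestApiClient("https://salt.example:8000", "u", "p").cmd_async("*", "test.ping")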

@AstraLuma
Contributor

AstraLuma commented Feb 11, 2019

(disclaimer: I am currently in the process of writing an alternate salt rpc using ssh for transport and authentication.)

To be honest, the only concrete problem I see this solving is outputters. I don't take it at face value that the salt CLI only working locally is a problem, especially if one of your goals is to enable a richer client ecosystem. (I already rewrote pepper once.)

In addition, AFAICT, I don't see a wire format defined. I must assume, then, that the current salt protocol is intended (I believe there are three options, none documented particularly well, and the choice must be configured on each node). Each client must then depend on salt itself, or salt must be refactored to move the protocol code into another package. This effectively makes alternate-language implementations impractical, limiting your client ecosystem.

If my rpc client depends on salt, I have the outputters locally, and I can wire up the outputters with salt-api without significant core changes.

At this point, no salt client I've seen supports dispatching a request to multiple endpoints and merging the results. As mentioned above, the usability of this feature makes topology assumptions.

A UI/UX quibble: salt remote clients should be introduced to the concept of multiple independent clusters (different BUs, different clients, isolated dev/prod).

One thing I don't see mentioned at all is runners & wheels. AFAIK, runners & wheels can currently only be called from the same machine and by the same local user as the salt master. They also serve as the API entry point for salt-cloud, salt-virt, salt-key, monitoring, the fileserver, and any other operation that is not minion-oriented. If your goal is to lessen or remove the need for salt users to ssh/sudo, then I would consider any solution that has not at minimum considered and planned for runners & wheels to be woefully incomplete.
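For reference, this is roughly how runners are driven today, which is exactly why they only work on the master host (a sketch using Salt's Python API; the config path is the usual default):

    # RunnerClient needs the master's own config file and local caches,
    # so it has no remote story at all today.
    import salt.config
    import salt.runner

    opts = salt.config.master_config("/etc/salt/master")  # local disk only
    runner = salt.runner.RunnerClient(opts)
    print(runner.cmd("manage.up", []))  # runners execute on this host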

tl;dr: this proposal involves a lot of changes to salt's core for value I'm just not buying. All of this could be accomplished with a salt engine implementing a well-designed RPC protocol.

@isbm
Contributor

isbm commented Feb 11, 2019

<off-topic>

(disclaimer: I am currently in the process of writing an alternate salt rpc using ssh for transport and authentication.)

Not sure what the advantage of that would be in Salt's current architecture, but it would be nice to see the RFC for that RPC, explaining the motivation and design.

The only value I would see here is having SaltSSH without copying absolutely everything every time you call another machine, so that client-less Salt would truly be client-less... But that seems not to be the case, right?

At this point, no salt client I've seen supports dispatching a request to multiple endpoints and merging the results. As mentioned above, the usability of this feature makes topology assumptions.

Sounds like a potentially quite significant security problem. 😉

</off-topic>

@gtmanfred
Contributor

To be honest, the only concrete problem I see this solving is outputters. I don't take it at face value that the salt CLI only working locally is a problem, especially if one of your goals is to enable a richer client ecosystem. (I already rewrote pepper once.)

Also worth mentioning that pepper already wires up outputters if salt is installed alongside pepper.

@mattp-
Contributor

mattp- commented Feb 11, 2019

@astronouth7303 I'd be very curious to hear more about your work. Are you planning on making it public?

@gtmanfred On my long todo list is a 'wrapped' outputter for salt-api/pepper that just encodes the originally requested output, so that dependency isn't necessary.

@AstraLuma
Contributor

AstraLuma commented Feb 11, 2019

@isbm @mattp- The project is https://gitlab.com/spirostack/spiroapi (alpha: it works, but there are important features missing, and I need to do significant refactoring to enable those features). I would be happy to discuss it and answer any questions you have about it in another medium. (I'm @AstraLuma on Salt's Community Slack.)

@isbm
Contributor

isbm commented Feb 11, 2019

@astronouth7303 Thanks, but should we discuss this here? I'm not sure how it is related to @cachedout's RFC. BTW, the Affero GPL requires Salt to also be AGPL, as far as I understand. In any case, I would avoid anything GPL* in the case of Salt. Feel free to create another RFC and bring this up there.

@AstraLuma
Contributor

I mentioned it because I felt like I should disclose that I'm working on a potentially conflicting project. I didn't want to discuss it here because it's off-topic and not about the RFC at hand.

@cachedout
Contributor Author

What is the process for existing RFCs? Will they be migrated to the new repo? Are they subject to the comment periods outlined in saltstack/salt-enhancement-proposals#1? If they are to be migrated, when can we expect that to happen?

@cachedout
Contributor Author

@astronouth7303 The only thing that really prevents the existing client from working on a remote machine right now, IIRC, is the fact that it reads the master's key off disk in order to include it in the request that it makes. As it stands today, it would be a trivial change to allow that key (or keys) to be stored in an arbitrary location and sent along with the request. The real gotcha, though, is that I'm fairly sure it's an unencrypted exchange right now, and changing that would mean updating the master.
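In other words, the trivial part of the change is roughly this; the client_key_path option is hypothetical, while the .root_key default mirrors where the local client finds the key today:

    import os

    def read_publish_key(opts):
        """Read the key that LocalClient includes with a publication."""
        key_path = opts.get(
            "client_key_path",  # hypothetical new override
            os.path.join(opts.get("cachedir", "/var/cache/salt/master"),
                         ".root_key"),
        )
        with open(key_path) as keyfd:
            return keyfd.read()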

@AstraLuma
Contributor

I'll reiterate: you express the goal:

Salt could have a richer ecosystem of third-party client tools and libraries to interact with Salt masters if this limitation were removed. These could include smartphone clients, as well as a much easier path for Salt libraries which could interact with a Salt master in a direct manner.

Salt-api accomplishes this by using well-supported HTTP. Its problems come from:

  • Insufficient documentation about the lowstate structure
  • Not great authentication options (user/password, Kerberos, or external tokens)

This proposal will have the opposite effect, since the wire protocol used internally by salt is basically entirely undocumented (it's based on 0MQ, but you can't go from a 0MQ library to talking to salt using published documentation).

New criticism: the authentication in this is dependent on the master's private key. The private key that is the cornerstone of salt security. The private key that allows you to issue commands to any connected minion.

Maybe SaltStack should not encourage workflows that distribute this data to much-harder-to-secure user workstations.

@cachedout
Contributor Author

New criticism: the authentication in this is dependent on the master's private key.

Please re-read the original RFC. It discusses the current private key implementation and suggests a way around the issue you reference.

Regarding documentation of the wire protocol needed to send a publication to a master: it could be documented quite easily, since it's nothing more than a dictionary serialized with MessagePack.
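For example, the serialization side really is that small; the field names below are modeled on LocalClient's publish load (illustrative, not an exhaustive or verified schema):

    # A salt publication is just a msgpack-serialized dict.
    import msgpack

    load = {
        "cmd": "publish",
        "tgt": "*",
        "fun": "test.ping",
        "arg": [],
        "tgt_type": "glob",
        "key": "<publisher auth key read from the master>",
        "jid": "",
        "ret": "",
    }
    wire = msgpack.packb({"enc": "clear", "load": load}, use_bin_type=True)
    print(msgpack.unpackb(wire, raw=False))  # round-trips cleanly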

@dwoz added the "ZRETIRED - RFC retired label see SEP repo" label on Mar 15, 2019
@cachedout
Contributor Author

Closing to migrate to SEP repo.

@cachedout closed this on Mar 26, 2019
cachedout added a commit to cachedout/salt-rfc that referenced this pull request Mar 26, 2019
This has been migrated from the original repo:

saltstack/salt#50976

*IMPORTANT* Please read the comments in the old issue before commenting
on this one.