[Script] failed to save outputs: verify serviceaccount default:default has necessary privileges #983
Any updates on this? I'm encountering the same issue on a brand new bare metal Kubernetes (RKE) cluster. It looks like this issue might be related to #982. As mentioned in #982, the following workaround works (on RKE):
kubectl create rolebinding default-admin --clusterrole=admin --serviceaccount=default:default
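If it's unclear whether a binding like that actually took effect, kubectl can impersonate the service account and check the verbs the controller expects (a quick sanity check, not part of the original workaround; adjust the namespace and account to match your workflow):

# each of these should print "yes" once the rolebinding is in place
kubectl auth can-i get pods --as=system:serviceaccount:default:default
kubectl auth can-i patch pods --as=system:serviceaccount:default:default
kubectl auth can-i update pods --as=system:serviceaccount:default:default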
I have the same issue, but it looks like the workaround above doesn't work.
This message is not always accurate. The controller makes some assumptions that turn out not to always be related to service account privileges.

This error happens when the controller expects an output annotation from the workflow pod but does not see the pod annotations updated with the output result. For example, for a script result, the way a workflow pod communicates the result back to the controller is that the wait sidecar annotates its own pod with the output. When the controller sees that a pod completed but does not see the annotation, it assumes the annotation is missing because the pod did not have privileges (i.e. the serviceAccount the workflow ran as did not have get/update/patch permissions on pods).

As I mentioned, this assumption is not always true, and there are other reasons why the annotation might not have been made. One reason that has come up twice so far is that the wait container could not even communicate with the API server. So despite granting sufficient privileges to the workflow's service account, the wait sidecar still fails to annotate the output. The way to know for sure is to get the logs of the wait sidecar.
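To tell which case you are in, it helps to look at both the pod's annotations and the wait sidecar's logs. A sketch of the commands, assuming the workflow pod is named my-workflow-pod and the sidecar container has Argo's usual name wait (substitute your own pod name and namespace):

# did the wait sidecar manage to write the output annotation onto its pod?
kubectl get pod my-workflow-pod -o jsonpath='{.metadata.annotations}'

# if not, the sidecar's logs usually show whether it hit an RBAC error or could not reach the API server
kubectl logs my-workflow-pod -c wait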
In a recent instance, this manifested as an error in the wait sidecar due to an issue with the user's CNI networking (the sidecar could not reach the API server at all).
I think the error message should be improved to also point to API server access as a potential cause. For those here who are seeing this error, checking the logs of the wait sidecar is the way to find the actual cause.
Here is a set of minimal privileges needed by a workflow pod:
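As a rough sketch (not necessarily the exact list, and the verbs can vary by Argo version and executor), the wait sidecar essentially needs to read and patch its own pod, which a Role along these lines covers; the name workflow-role is arbitrary:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: workflow-role
rules:
# the wait sidecar reads and annotates its own pod with the output result
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "patch"]
# some executors also read container logs to capture outputs
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "watch"]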
Issue #1072 has been filed to eliminate the need for these privileges.
Thanks for this hint. My issue was that a hostPath volume couldn't be mounted.
Will use this bug to improve the error message.
I saw a similar message arising from a pod running in a non-default namespace (call it mynamespace). The following Role and RoleBinding gave the namespace's default service account the necessary privileges:

# Argo artifacts require the mynamespace default user to have appropriate privileges
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: artifact-role
  namespace: mynamespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: artifact-role-binding
  namespace: mynamespace
roleRef:
  kind: Role
  name: artifact-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: default
  namespace: mynamespace
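For completeness, a sketch of applying those manifests and double-checking the result (the file name is a placeholder):

kubectl apply -f artifact-rbac.yaml

# verify from the service account's point of view
kubectl auth can-i patch pods --as=system:serviceaccount:mynamespace:default -n mynamespace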
Is this a BUG REPORT or FEATURE REQUEST?:
BUG REPORT
What happened:
Got an error when trying to use script in my workflow.
What you expected to happen:
Should run the script without error.
How to reproduce it (as minimally and precisely as possible):
argo submit
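For reference, a minimal script-template workflow along these lines exercises the same output-saving path (a generic example, not necessarily the reporter's original file; the image is arbitrary):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: script-test-
spec:
  entrypoint: gen-random
  templates:
  - name: gen-random
    script:
      image: python:alpine3.6
      command: [python]
      # the wait sidecar has to report this script result back to the controller,
      # which is the step that fails without the pod permissions discussed above
      source: |
        import random
        print(random.randint(1, 100))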
Anything else we need to know?:
I even assigned the cluster-admin role to the default service account.
Environment: