
translating TaskSpec to Pod: error getting image manifest #3655

Closed
ycyxuehan opened this issue Jan 5, 2021 · 13 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


ycyxuehan commented Jan 5, 2021

version: v0.17.2
taskrun:

  podTemplate:
    dnsConfig:
      nameservers:
      - 223.5.5.5
      - 223.6.6.6
      - 114.114.114.114
    imagePullSecrets:
    - name: demo-registry
  resources:
    inputs:
    - name: demo
      resourceRef:
        name: demo
    outputs:
    - name: resourceImage
      resourceRef:
        name: demo-resource-image
  serviceAccountName: demo
....

service account:

secrets:
- name: demo
- name: demo-git-token
- name: demo-git-privatekey
- name: demo-registry
- name: demo-token-gzcjp

registry secret:

apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJyZWd...
kind: Secret
metadata:
  annotations:
    tekton.dev/docker-0: https://demo.myregistry.com
type: kubernetes.io/dockerconfigjson
ycyxuehan added the kind/bug label on Jan 5, 2021
ycyxuehan (Author) commented:

message: 'failed to create task run pod "demo-1609831023470": translating
TaskSpec to Pod: error getting image manifest: GET https://demo.myregistry.com/v2/demo/ssh/manifests/ops0.1:
UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:demo/ssh
Type:repository]]. Maybe missing or invalid Task demo/demo'

tekton-robot (Collaborator) commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

tekton-robot added the lifecycle/stale label on Apr 5, 2021
tekton-robot (Collaborator) commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

tekton-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 5, 2021

ghost commented May 18, 2021

@ycyxuehan Does this remain an issue for you?

tekton-robot (Collaborator) commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

tekton-robot (Collaborator) commented:

@tekton-robot: Closing this issue.

In response to the /close command above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

shokohsc commented Dec 9, 2021

Hello, I'm still having this issue as of
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.29.0@sha256:72f79471f06d096cc53e51385017c9f0f7edbc87379bf415f99d4bd11cf7bc2b
installed from tektoncd/operator. It only happens when a task step defines the args key alone; if I also add the command key, it goes flawlessly (sketched below).

Relates to:
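For illustration, a minimal sketch of that workaround, assuming Tekton's v1beta1 Task API; the image is the private one from the report above, while the task name, step name, command, and args are hypothetical:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: demo-ssh                                 # hypothetical name
spec:
  steps:
  - name: run
    image: demo.myregistry.com/demo/ssh:ops0.1   # private image from this report
    command: ["/bin/sh", "-c"]                   # explicit command: the controller no longer
                                                 # needs the image manifest to find the entrypoint
    args: ["echo hello"]                         # hypothetical args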


imjasonh commented Dec 9, 2021

If the image is private and requires credentials, those credentials must be available to the tekton-pipelines-controller SA (the SA that runs the Tekton controller) as an imagePullSecret. The Tekton controller uses this to get image metadata to determine what command to run when a step starts, which it only needs to do when the step spec doesn't explicitly specify it.
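A minimal sketch of the desired end state, assuming the registry secret (demo-registry here) has been copied into the tekton-pipelines namespace; in practice you would patch the service account the release installed rather than re-apply it:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-pipelines-controller   # the SA that runs the Tekton controller
  namespace: tekton-pipelines
imagePullSecrets:
- name: demo-registry                 # assumes a copy of the registry secret exists in this namespace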


shokohsc commented Dec 9, 2021

I'm having the same error after adding an imagePullSecret to the tekton-pipelines-controller service account. Guess I'll need to define every command from now on.

shokohsc commented:
Reverting to v0.23.0 works too.


shokohsc commented Dec 15, 2021

I think this will prevent us from using our credentials via an imagePullSecret: 49a7fa2
It's still there as of today, December 15 2021: https://github.com/tektoncd/pipeline/blob/main/config/controller.yaml#L99
But the commit message says it's needed Oo @sbwsg
I'm lost. Would there be a way to delete those environment variables from the operator? Asking for a friend :)

markbastiaans commented:
Same issue here, related to #2707.

We're running v0.30.0 on EKS and are hitting this issue with ECR. Prior to this we were on v0.19.x.
I've currently managed to work around it by copy-pasting entrypoint commands into our steps. Deleting the dummy env vars from our (custom) Helm chart seemed too big a risk, with the Go AWS SDK not updated yet and no structural fix known yet for #4087.

A structural solution would be nice.


ghost commented Jan 6, 2022

If you're running in AWS I would expect (though I can't confirm) that removing the env vars should be fine. We're slowly making progress on #4087, but I'm having a bit of trouble compiling and testing with the update that brings in the newer Go AWS SDK.
