
Can't update the image of CronJob with patch_namespaced_cron_job #1039

Closed
ricardozd opened this issue Dec 24, 2019 · 14 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@ricardozd

ricardozd commented Dec 24, 2019

What happened (please include outputs or screenshots):

I'm building a deployment on AWS with Lambda to set images in a Kubernetes cluster.

I want to set the image of a CronJob:

Cron:

NAME                                SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/analytics-staging-0   */5 * * * *   False     0        4m23s           144m

Describe cronjob:

Name:                          analytics-staging-0
Namespace:                     staging-app
Labels:                        app=analytics-staging-0
Annotations:                   <none>
Schedule:                      */5 * * * *
Concurrency Policy:            Replace
Suspend:                       False
Successful Job History Limit:  824641908604
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   1
Completions:                   1
Pod Template:
  Labels:  <none>
  Containers:
   analytics-staging-0:
    Image:      accountid.dkr.ecr.eu-west-1.amazonaws.com/app:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /opt/app/analytics
    Environment:
      AWS_REGION:    eu-west-1
    Mounts:          <none>
  Volumes:           <none>
Last Schedule Time:  Tue, 24 Dec 2019 15:20:00 +0100
Active Jobs:         analytics-staging-0-1577197200

Python Code:


def set_image_job(image, namespace, name, conn):
    api_instance = conn.BatchV1beta1Api()

    body = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {
                            "image": image
                        }
                    ]
                }
            }
        }
    }

    try:
        response = api_instance.patch_namespaced_cron_job(name, namespace, body, pretty=True)
        print(response)
    except Exception as e:
        print("Exception when calling BatchV1beta1Api->patch_namespaced_cron_job: %s\n" % e)

if __name__ == "__main__":
    conn = get_connection_eks()
    eks_resource = EKSResources('staging-app', conn)
    jobs = eks_resource.get_cron_jobs()
    for item in jobs:
        set_image_job('accountid.dkr.ecr.eu-west-1.amazonaws.com/app:latest', 'staging-app',
                      item, conn)

Note: item = analytics-staging-0

The response comes back OK, but the image doesn't change.

Can you help me?

Thanks!

Environment:

  • Kubernetes version (kubectl version):

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-c0eccc", GitCommit:"c0eccca51d7500bb03b2f163dd8d534ffeb2f7a2", GitTreeState:"clean", BuildDate:"2019-12-22T23:14:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

  • Python version (python --version)

Python 3.7

@ricardozd ricardozd added the kind/bug Categorizes issue or PR as related to a bug. label Dec 24, 2019
@roycaihw
Member

roycaihw commented Jan 6, 2020

Could you enable the debug switch and share the actual API request and response?

@mdgreenwald

> Could you enable the debug switch and share the actual API request and response?

@roycaihw It's not clear how to enable the debug switch (or whether it's possible) when using the config.load_incluster_config() helper. Would you mind sharing an example of how to set up the configuration and enable debugging? I believe I may also be affected by this issue.

@ricardozd
Author

In the end I used Go for the code :(

@roycaihw
Member

I created #1083 on how to enable debugging. Please share the HTTP request and response if you hit the issue.
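For reference, here is a minimal sketch of turning on the client's HTTP debug logging. It assumes the in-cluster config helper (swap in config.load_kube_config() when running outside the cluster) and a reasonably recent version of the kubernetes Python client, which provides Configuration.get_default_copy():

```python
# Sketch: enable HTTP-level debug logging in the kubernetes Python client
# so every API request and response is logged.
from kubernetes import client, config

config.load_incluster_config()

configuration = client.Configuration.get_default_copy()
configuration.debug = True  # log request/response headers and bodies
client.Configuration.set_default(configuration)

# API clients created afterwards pick up the debug-enabled default.
api = client.BatchV1beta1Api()
```

With debug enabled, the patch request body and the server's response appear in the logs, which is what was asked for above.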

@mdgreenwald

> I created #1083 on how to enable debugging. Please share the HTTP request and response if you hit the issue.

Thank you! I do not have time to test this at the moment, but will get to it as soon as I can.

@mdgreenwald

@roycaihw

Thank you again for your previous reply. I was able to enable debugging on the client, and I have attached the error output as well as the content of the patch.

My patch is lifted almost line for line from the kubernetes patch example documentation.

Thank you for your help. Let me know if there is anything else you need me to share.

error.log
patch.txt

@mdgreenwald

@roycaihw Is this a known issue?
#951

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 17, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 15, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 13, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 16, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@CarlQLange
Copy link

CarlQLange commented Sep 29, 2022

In case anyone, like me, happened upon this issue looking for a handy code sample: the actual problem in the initial post is that the body is shaped for a Deployment (which is where the snippet originally came from), not for a CronJob. For a CronJob, you need something like:

    body = {
        "spec": {
            "jobTemplate": {
                "spec": {
                    "template": {
                        "spec": {
                            "containers": [{"name": "my_container", "image": image}]
                        }
                    }
                }
            }
        }
    }
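Putting that together, here is a self-contained sketch under stated assumptions: the helper names below are illustrative, not from this thread, and the API call requires cluster credentials. A CronJob's pod spec lives under spec.jobTemplate.spec.template.spec, and the strategic merge patch merges the containers list by its "name" key, so each entry must name its container:

```python
def make_image_patch(container_name, image):
    """Build a strategic-merge-patch body that updates one container's
    image in a CronJob. The containers list is merged by the "name"
    key, so the entry must include the container's name."""
    return {
        "spec": {
            "jobTemplate": {
                "spec": {
                    "template": {
                        "spec": {
                            "containers": [
                                {"name": container_name, "image": image}
                            ]
                        }
                    }
                }
            }
        }
    }


def patch_cronjob_image(name, namespace, container_name, image):
    # Requires the kubernetes Python client and a reachable cluster.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config()
    # CronJob moved to batch/v1 in Kubernetes 1.21; on older clusters
    # use BatchV1beta1Api() instead of BatchV1Api().
    api = client.BatchV1beta1Api()
    body = make_image_patch(container_name, image)
    return api.patch_namespaced_cron_job(name, namespace, body)
```

If the container name is omitted (as in the original snippet), the server has no merge key to match on, so the patch is accepted but effectively changes nothing, which matches the behavior reported above.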
