
output from kpt fn render -o unwrap cannot be piped with kubectl apply -f - #3844

Closed
ericzzzzzzz opened this issue Feb 23, 2023 · 8 comments · Fixed by GoogleContainerTools/kpt-functions-catalog#1023


ericzzzzzzz commented Feb 23, 2023

Expected behavior

  • the output of kpt fn render -o unwrap can be piped to kubectl apply -f - and applied to the cluster without errors

Actual behavior

  • kubectl apply fails because the rendered output contains resources whose CRDs are not installed in the cluster
  • example error message: error: resource mapping not found for name: "apply-setters-simple" namespace: "" from "STDIN": no matches for kind "Kptfile" in version "kpt.dev/v1" ensure CRDs are installed first

Information

kpt version: 1.0.0-beta.19
example: a kpt package that demonstrates the error (https://github.com/GoogleContainerTools/kpt-functions-catalog.git/examples/apply-setters-simple@apply-setters/v0.2.0)

The cause is that kpt fn render -o unwrap now outputs the Kptfile and the function config (configMap) files referenced in the Kptfile, and those are not recognized by the k8s cluster. We can create a .krmignore to exclude the Kptfile from the output, but this trick cannot be used for the config files referenced in the Kptfile, because that would cause their dataMap to be ignored by the krm functions as well.

Before a certain version, these files were not included in the output of kpt fn render -o unwrap. Reverting the behavior would probably introduce another breaking change, but could you add a flag to control whether the Kptfile and config files are included in the output?

Steps to reproduce the behavior

  • run kpt pkg get https://github.com/GoogleContainerTools/kpt-functions-catalog.git/examples/apply-setters-simple@apply-setters/v0.2.0 with the latest kpt
  • then, in the apply-setters-simple folder, run kpt fn render -o unwrap | kubectl apply -f - (see the combined commands below)
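
Put together, the repro is:

kpt pkg get https://github.com/GoogleContainerTools/kpt-functions-catalog.git/examples/apply-setters-simple@apply-setters/v0.2.0
cd apply-setters-simple
kpt fn render -o unwrap | kubectl apply -f -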
ericzzzzzzz added the bug label Feb 23, 2023
github-project-automation bot moved this to Backlog in kpt Feb 23, 2023
ericzzzzzzz changed the title from "output from kpt fn render -o unwrap cannot be chained with kubectl apply -f -" to "output from kpt fn render -o unwrap cannot be piped with kubectl apply -f -" Feb 23, 2023

natasha41575 commented Feb 24, 2023

Thanks for trying out kpt and for filing the issue. This is great feedback to get!

I think there are several problems here:

  1. the apply-setters example Kptfile should have the annotation config.kubernetes.io/local-config: "true". This annotation tells kpt that the resource is package metadata and should not be applied to the cluster. The Kptfile generated by kpt pkg init does have this annotation, so we should update the examples to match (see the sketch after this list).

  2. It seems that kpt fn render -o unwrap retains resources even with the annotation config.kubernetes.io/local-config: "true". I can reproduce your issue by just running kpt pkg init and kpt fn render -o unwrap | kubectl apply -f -.
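
For illustration, a Kptfile carrying that annotation would start roughly like this (a sketch using the example package's name; only the metadata block matters here):

apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: apply-setters-simple
  annotations:
    config.kubernetes.io/local-config: "true" # marks the resource as package metadata that should not be applied to the cluster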

@droot @yuwenma Is (2) intentional? I agree that it is correct and should be retaining these local-config resources, but it seems odd that we are documenting using kpt fn render -o unwrap | kubectl apply -f -, when it clearly cannot work in any case because it will try to apply the Kptfile to the cluster. I'm thinking we should instead update the documentation to kpt fn render -o unwrap | kpt live apply -, which does work, or document some other option of how to exclude meta resources or Kptfile when running kpt fn render.

@ericzzzzzzz For your use case, you can add the config.kubernetes.io/local-config: "true" annotation to your Kptfile, then run kpt live init followed by kpt fn render -o unwrap | kpt live apply -. kpt live understands the local-config annotation, so this should correctly apply the resources to your cluster. We will follow up with changes to the documentation so that others don't run into the same issue.
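
A minimal sketch of that workflow, run from the package directory after adding the annotation to the Kptfile:

kpt live init                              # initialize inventory metadata so kpt live can track the package
kpt fn render -o unwrap | kpt live apply - # kpt live skips resources annotated as local-config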

natasha41575 added the area/hydrate, triaged, and documentation labels Feb 24, 2023
@ericzzzzzzz (Author)

Hi @natasha41575, thank you for the reply.

the apply-setters example Kptfile should have the annotation config.kubernetes.io/local-config: "true". This annotation tells kpt that the resource is package metadata and should not be applied to the cluster. The Kptfile generated by kpt pkg init does have this annotation, so we should update the examples to match.

  • I think the annotation works as expected: the resource is package metadata and should not be applied to the cluster. But kpt fn render is only about hydrating manifests; it has nothing to do with applying resources to a cluster.

It seems that kpt fn render -o unwrap retains resources even with the annotation config.kubernetes.io/local-config: "true". I can reproduce your issue by just running kpt pkg init and kpt fn render -o unwrap | kubectl apply -f -.
kpt fn render -o unwrap | kubectl apply -f - works with kpt versions < 1.0.0-beta.15 regardless of the annotation. This seems like a regression, not just a documentation issue.

Regarding my use case, I only use a subset of kpt's functionality. I believe it is reasonable to keep kpt fn render and kpt live apply decoupled, so users can use the output of kpt fn render with other deployers such as kubectl.


natasha41575 commented Feb 24, 2023

I think the annotation works as expected: the resource is package metadata and should not be applied to the cluster. But kpt fn render is only about hydrating manifests; it has nothing to do with applying resources to a cluster.

Yes, this is fair. I agree that kpt fn render should not necessarily filter out local-config resources.

Another option for you is to create a .krmignore file with the Kptfile in it, by running echo "Kptfile" > .krmignore. This will cause the Kptfile to be excluded from the output of kpt fn render as well, and you should be able to run kpt fn render -o unwrap | kubectl apply -f - successfully.
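
Concretely, from the package directory:

echo "Kptfile" > .krmignore
kpt fn render -o unwrap | kubectl apply -f -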

@ericzzzzzzz (Author)

@natasha41575 Thank you for the suggestion, but as I mentioned in the description, kpt fn render -o unwrap | kubectl apply -f - will still fail if the Kptfile references a function config file: the content of that config file will still be in the render output.

Example:

apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: apply-setters-simple
upstream:
  type: git
  git:
    repo: https://github.com/GoogleContainerTools/kpt-functions-catalog
    directory: /examples/apply-setters-simple
    ref: apply-setters/v0.2.0
  updateStrategy: resource-merge
upstreamLock:
  type: git
  git:
    repo: https://github.com/GoogleContainerTools/kpt-functions-catalog
    directory: /examples/apply-setters-simple
    ref: apply-setters/v0.2.0
    commit: 9b6ce80e355a53727d21b2b336f8da55e760e20ca
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/set-labels:v0.2.0
      configPath: fn-config.yaml
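
For reference, the fn-config.yaml referenced by configPath is roughly the following (sketched from the error output below, so the exact fields are approximate):

apiVersion: fn.kpt.dev/v1alpha1
kind: SetLabels
metadata:
  name: my-config
labels: # values inferred from the labels that appear in the error below
  app: new
  color: orange
  fruit: apple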

error message:

resource mapping not found for name: "my-config" namespace: "" from "STDIN": no matches for kind "SetLabels" in version "fn.kpt.dev/v1alpha1"
ensure CRDs are installed first
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"new\",\"color\":\"orange\",\"fruit\":\"apple\"},\"name\":\"my-nginx\",\"namespace\":\"default\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"app\":\"new\",\"color\":\"orange\",\"fruit\":\"apple\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"new\",\"color\":\"orange\",\"fruit\":\"apple\"}},\"spec\":{\"containers\":[{\"image\":\"nginx:1.16.2\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80,\"protocol\":\"TCP\"}]}]}}}}\n"},"labels":{"app":"new","color":"orange","fruit":"apple"}},"spec":{"selector":{"matchLabels":{"app":"new","color":"orange","fruit":"apple"}},"template":{"metadata":{"labels":{"app":"new","color":"orange","fruit":"apple"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "my-nginx", Namespace: "default"
for: "STDIN": error when patching "STDIN": Deployment.apps "my-nginx" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"new", "color":"orange", "fruit":"apple"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

If I add fn-config.yaml to .krmignore, kpt fn render will just ignore fn-config.yaml completely, and then it produces an error like this:

Package "a":
Kptfile is invalid:
Field: `pipeline.mutators[0].configPath`
Value: "fn-config.yaml"
Reason: functionConfig must exist in the current package

Could you make the output go back to the previous behavior, where it does not include the content of the Kptfile and the config files, or introduce a flag to control this?


yuwenma commented Feb 24, 2023

Short answer

@ericzzzzzzz All of your findings and conclusions are correct! Not only the Kptfile but also the other kpt metadata (the ConfigMap in package-context.yaml) should not be deployed to the cluster, because it will cause other issues even if kubectl does not complain.
Using local-config to filter the manifests before deploying is a good approach. I think you are looking for this function, https://catalog.kpt.dev/remove-local-config-resources/v0.1/, which will remove any resources that have the local-config annotation.

Long answer

This is related to kpt v1.0.0-beta.15, which started treating the Kptfile and metadata (e.g. the ConfigMap, as @ericzzzzzzz pointed out) the same as other application manifests. This means kpt passes the Kptfile and metadata through the kpt function pipeline, so the kpt fn render -o output contains them. Here's the release note.

The rationale for this change is related to the variant constructor pattern; @ericzzzzzzz, you can find out more here. The variant constructor won't directly benefit skaffold, but it enables some really nice patterns that skaffold may eventually want to rely on.

I think @droot has raised concerns about the potential regressions, and we tried hard to document the behavior well to warn users. Maybe some docs were missed.

Great finding about those missing docs, @natasha41575. Please go ahead and update them.
To answer your question "Is (2) intentional?": it is intentional. However, since beta.15 it is no longer appropriate to chain kpt fn render and kubectl apply directly, because kubectl has no special treatment for the local-config annotation. If updating the documentation, we can use kpt fn render | kpt fn eval -i gcr.io/kpt-fn/remove-local-config-resources:v0.1.0 - | kubectl apply for now (see the full pipeline below).
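
Spelled out in full, the pipeline would be something like this (a sketch: the -o unwrap and -f - flags follow the usage earlier in this thread, and the image version comes from the catalog page above):

# render, strip resources annotated config.kubernetes.io/local-config: "true", then apply
kpt fn render -o unwrap | kpt fn eval -i gcr.io/kpt-fn/remove-local-config-resources:v0.1.0 - | kubectl apply -f -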

@ericzzzzzzz (Author)

@yuwenma Thank you for providing the context and the detailed explanation! I think I can use the remove-local-config-resources fn as a workaround. Please feel free to close the issue.


droot commented Feb 27, 2023

Catching up now on the issue. All the points are already covered here, so nothing new to add, but I wanted to thank everyone for the detailed comments and analysis. I agree with @yuwenma's suggestion of using remove-local-config-resources as an extra step in the piped syntax.

@natasha41575 (Contributor)

Thanks @droot and @yuwenma! I wasn't aware of remove-local-config-resources; this is a good pointer. I will leave this issue open until I update the documentation. Thanks again @ericzzzzzzz for bringing the issue to our attention.
