
Issuer name annotation is not set and a default issuer has not been configured #6107

Closed
1 of 2 tasks
rudolph9 opened this issue Nov 13, 2019 · 8 comments
Labels
area/ingress kind/bug Issue is a bug lifecycle/rotten priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments


rudolph9 commented Nov 13, 2019

Summary

Production and staging have both been configured to use TLS, but the certificates being served are self-signed.

Steps to reproduce the behavior

Configure the cluster with jx boot, enabling TLS for the production and staging environments in jx-requirements.yml:

environments:
- ingress:
    cloud_dns_secret_name: external-dns-gcp-sa
    domain: domain.tld
    externalDNS: true
    namespaceSubDomain: -jx.
    tls:
      email: email@foo.com
      enabled: true
      production: true
  key: dev
- ingress:
    domain: domain.tld
    externalDNS: true
    namespaceSubDomain: ""
    tls:
      email: email@foo.com
      enabled: true
      production: true
  key: staging
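
(For anyone reproducing this: a quick way to check whether jx boot actually created an issuer for cert-manager to use. This is a hedged sketch; the cert-manager namespace is an assumption and may differ per install.)

# is cert-manager running? (namespace is an assumption)
kubectl get pods -n cert-manager
# does any Issuer/ClusterIssuer exist for ingress-shim to pick up?
kubectl get clusterissuer
kubectl get issuer --all-namespaces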

Expected behavior

Certificates should be issued by Let's Encrypt.

Actual behavior

Self-signed Kubernetes certificates are being issued instead.

Jx version

The output of jx version is:

jx version
NAME               VERSION
jx                 2.0.976
Kubernetes cluster v1.13.11-gke.9
kubectl            v1.16.2
helm client        Client: v2.13.1+g618447c
git                2.17.1
Operating System   Ubuntu 18.04.3 LTS

Jenkins type

  • Serverless Jenkins X Pipelines (Tekton + Prow)
  • Classic Jenkins

Kubernetes cluster

jx create cluster gke --skip-installation -n clustername --region=us-west1 --max-num-nodes=9 --min-num-nodes=1

Operating system / Environment

$  jx version
.
.
.
Operating System   Ubuntu 18.04.3 LTS

Other Info

I can see the following messages in the cert-manager logs:

2019-11-13T18:39:08.394746Z cert-manager/controller/ingress-shim "level"=0 "msg"="syncing item" "key"="jx-production/appname"  I 
2019-11-13T18:39:08.395174Z cert-manager/controller/ingress-shim "level"=0 "msg"="failed to determine issuer to be used for ingress resource" "resource_kind"="Ingress" "resource_name"="appname" "resource_namespace"="jx-production"  I 
2019-11-13T18:39:08.395413Z cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="jx-production/appname"  I 
2019-11-13T18:40:01.261726Z cert-manager/controller/ingress-shim "level"=0 "msg"="syncing item" "key"="jx-production/appname"  I 
2019-11-13T18:40:01.262213Z cert-manager/controller/ingress-shim "level"=0 "msg"="failed to determine issuer to be used for ingress resource" "resource_kind"="Ingress" "resource_name"="appname" "resource_namespace"="jx-production"  I 
2019-11-13T18:40:01.262489Z cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="jx-production/appname"  I 
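
The "failed to determine issuer" message comes from cert-manager's ingress-shim: it resolves the issuer from an annotation on the Ingress itself, and falls back to a default issuer only if the controller was started with one (the --default-issuer-name / --default-issuer-kind flags). A minimal sketch of the annotation it looks for (the issuer name letsencrypt-prod is an assumption; on cert-manager 0.11+ the annotation prefix is cert-manager.io/ instead of certmanager.k8s.io/):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: appname
  namespace: jx-production
  annotations:
    # assumed issuer name; the prefix depends on the cert-manager version
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: appname-jx-production.domain.tld
    http:
      paths:
      - backend:
          serviceName: appname
          servicePort: 80
  tls:
  - hosts:
    - appname-jx-production.domain.tld
    secretName: tls-domain.tld-p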

The content of environment-clustername-production/env/values.yaml is:

PipelineSecrets: {}
cleanup:
  Annotations:
    helm.sh/hook: pre-delete
    helm.sh/hook-delete-policy: hook-succeeded
  Args:
  - --cleanup
expose:
  Annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: hook-succeeded
  Args:
  - --v
  - 4
  config:
    domain: domain.tld
    exposer: Ingress
    http: "false"
    tlsSecretName: tls-domain.tld-p
    tlsacme: "true"
    urltemplate: '{{.Service}}-{{.Namespace}}.{{.Domain}}'
  production: true
jenkins:
  Servers:
    Global: {}
prow: {}
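
(This values.yaml drives exposecontroller, which is what creates the Ingress resources, so if it does not add an issuer annotation there is nothing for ingress-shim to act on. One way to see which annotations the generated Ingress actually carries, with appname taken from the logs above:)

kubectl get ingress appname -n jx-production -o jsonpath='{.metadata.annotations}'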
rudolph9 (Author) commented:

The workarounds listed in #5310 (comment) seem relevant.


rudolph9 commented Nov 14, 2019

I ended up working around the issue by running jx upgrade ingress --namespace=jx-production.

One relevant quirk I noticed with this workaround concerns the URL template. Background: although the cluster was originally configured with a pattern following urltemplate: '{{.Service}}-{{.Namespace}}.{{.Domain}}', changing that field has no effect in my cluster.
Quirk: when jx upgrade ingress prompts ? URLTemplate (press <Enter> to keep the current value): and I just press <Enter>, the default template, not the current one, gets applied, along with proper certs from Let's Encrypt.
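
(To confirm the workaround really switched an app over to a Let's Encrypt certificate, one can check the Certificate resources and the certificate actually being served; the hostname below is a placeholder following the URL template above.)

# Certificate resources created by cert-manager should report Ready
kubectl get certificate -n jx-production
# inspect the issuer of the certificate being served (placeholder hostname)
echo | openssl s_client -connect appname-jx-production.domain.tld:443 \
  -servername appname-jx-production.domain.tld 2>/dev/null \
  | openssl x509 -noout -issuer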

@daveconde daveconde added kind/bug Issue is a bug priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. area/ingress labels Nov 19, 2019

bitfactory-henno-schooljan commented Nov 19, 2019

I noticed that jx upgrade ingress only works for existing ingresses anyway. As soon as a new one is added (e.g. a new application is promoted for the first time), it does not get a valid certificate. IMO it would be better if it simply did a dns-01 challenge for a wildcard certificate, the way it is done in the jx namespace on initial installation, and re-used that certificate for all ingresses on that domain; a sketch of that approach follows.
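
(For reference, a minimal sketch of the wildcard approach. This uses the current cert-manager.io/v1 API; the cert-manager versions from this era used certmanager.k8s.io/v1alpha1 with different solver syntax. The issuer name, GCP project, and secret key name are all assumptions.)

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod          # assumed name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: email@foo.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - dns01:
        cloudDNS:
          project: my-gcp-project               # assumption
          serviceAccountSecretRef:
            name: external-dns-gcp-sa           # from jx-requirements above
            key: credentials.json               # assumption
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-domain-tld
  namespace: jx-production
spec:
  secretName: tls-domain.tld-p    # matches tlsSecretName in values.yaml
  dnsNames:
  - "*.domain.tld"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer

With the wildcard secret in place, every Ingress under domain.tld could reference the same secretName instead of requesting its own certificate.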

rudolph9 (Author) commented:

Yeah, I've seen the same behavior.

jenkins-x-bot (Contributor) commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle stale

jenkins-x-bot (Contributor) commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle rotten

jenkins-x-bot (Contributor) commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://jenkins-x.io/community.
/close

jenkins-x-bot (Contributor) commented:

@jenkins-x-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://jenkins-x.io/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository.
