
Failed to query provider "https://argocd-host/api/dex": 502 Bad Gateway: #3975

Closed
mstevens-a3 opened this issue Jul 21, 2020 · 22 comments · Fixed by #6183 or #13500
Labels
bug Something isn't working

Comments

@mstevens-a3

I reached out about this issue in the ArgoCD Slack but received no reply.

Checklist:

  • I've searched in the docs and FAQ for my answer: http://bit.ly/argocd-faq.
  • I've included steps to reproduce the bug.
  • I've pasted the output of argocd version.

Describe the bug

I've gotten ArgoCD v1.6.1 deployed to an EKS cluster running v1.15.11-eks-af3caf along with Istio 1.4.7 successfully. Now I'm trying to get SAML auth enabled with Dex via Okta as described in the documentation. However, when I click the Login via Okta button on the login page, I'm immediately met with an error page stating: Failed to query provider "https://argocd-host/api/dex": 502 Bad Gateway: Attempting to authenticate via the CLI returns the same error.

I've dug through the documentation and the issues here on GitHub and haven't been able to find much, but I have ensured that the data.url config parameter is set correctly. I also haven't found anyone through general googling who seems to have the same sort of setup (EKS, Istio ingress, and an AWS NLB), so I'm not sure if there's something about this particular combination that's screwing things up. I do have a few other services/apps running in this same cluster (Grafana, Jenkins, etc.) that use Okta authentication without issue, so I don't believe any ingress/egress rules are getting in the way.

To Reproduce

Have Istio 1.4.7 running in an EKS 1.15 cluster, deployed with a configuration very similar to what's documented in this issue, with the --insecure flag set on argocd-server. The insecure flag is set because TLS terminates at the AWS NLB. Configure the argocd-cm ConfigMap as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  url: https://argocd-host
  redirectURL: https://argocd-host/api/dex/callback
  dex.config: |
    logger:
      level: debug
      format: json
    connectors:
    - type: saml
      id: okta
      name: Okta
      config:
        ssoURL: https://<myorgsoktassourl>/sso/saml
        redirectURI: https://argocd-host/api/dex/callback
        caData: <base64-encoded cert>
        usernameAttr: email
        emailAttr: email
        groupsAttr: groups

Go to the configured ArgoCD URL of https://argocd-host and click on the Login Via Okta button that now appears.

Expected behavior

Authentication should happen successfully via Okta and I should be taken to the Applications page on my ArgoCD instance.

Screenshots

I don't think a screenshot is necessary, but the error message displayed after trying to log in via Okta is:

Failed to query provider "https://argocd-host/api/dex": 502 Bad Gateway: 

Version

argocd: v1.6.0+c10ae24
  BuildDate: 2020-06-16T22:41:56Z
  GitCommit: c10ae246ab02f1356147118a1979fedcd1ceb704
  GitTreeState: clean
  GoVersion: go1.14.1
  Compiler: gc
  Platform: darwin/amd64
argocd-server: v1.6.1+159674e
  BuildDate: 2020-06-19T00:41:05Z
  GitCommit: 159674ee844a378fb98fe297006bf7b83a6e32d2
  GitTreeState: clean
  GoVersion: go1.14.1
  Compiler: gc
  Platform: linux/amd64
  Ksonnet Version: v0.13.1
  Kustomize Version: {Version:kustomize/v3.6.1 GitCommit:c97fa946d576eb6ed559f17f2ac43b3b5a8d5dbd BuildDate:2020-05-27T20:47:35Z GoOs:linux GoArch:amd64}
  Helm Version: version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}
  Kubectl Version: v1.14.0

Logs
I have enabled debug logging for the running services, though I didn't see many more log entries as a result.

argocd-server log:

{"level":"info","msg":"Configmap/secret informer synced","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"Starting configmap/secret informers","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"configmap informer cancelled","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"secrets informer cancelled","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"Configmap/secret informer synced","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"Starting configmap/secret informers","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"secrets informer cancelled","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"configmap informer cancelled","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"Configmap/secret informer synced","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"Starting configmap/secret informers","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"configmap informer cancelled","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"secrets informer cancelled","time":"2020-07-21T21:33:35Z"}
{"level":"info","msg":"Configmap/secret informer synced","time":"2020-07-21T21:33:36Z"}
{"level":"info","msg":"Creating client app (argo-cd)","time":"2020-07-21T21:33:36Z"}
{"level":"info","msg":"argocd v1.6.1+159674e serving on port 8080 (url: https://argocd-host, tls: false, namespace: argocd, sso: true)","time":"2020-07-21T21:33:36Z"}
{"level":"info","msg":"0xc000243260 subscribed to settings updates","time":"2020-07-21T21:33:36Z"}
{"level":"info","msg":"Starting rbac config informer","time":"2020-07-21T21:33:36Z"}
{"level":"info","msg":"RBAC ConfigMap 'argocd-rbac-cm' added","time":"2020-07-21T21:33:36Z"}
{"grpc.method":"Version","grpc.request.claims":"null","grpc.request.content":{},"grpc.service":"version.VersionService","grpc.start_time":"2020-07-21T21:34:17Z","level":"info","msg":"received unary call /version.VersionService/Version","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"dir":"","execID":"SS8qK","level":"info","msg":"ks  version","time":"2020-07-21T21:34:17Z"}
{"duration":67656209,"execID":"SS8qK","level":"debug","msg":"ksonnet version: 0.13.1\njsonnet version: v0.11.2\nclient-go version: kubernetes-1.10.4\n","time":"2020-07-21T21:34:17Z"}
{"dir":"","execID":"zmljB","level":"info","msg":"kustomize version","time":"2020-07-21T21:34:17Z"}
{"duration":48298630,"execID":"zmljB","level":"debug","msg":"{Version:kustomize/v3.6.1 GitCommit:c97fa946d576eb6ed559f17f2ac43b3b5a8d5dbd BuildDate:2020-05-27T20:47:35Z GoOs:linux GoArch:amd64}\n","time":"2020-07-21T21:34:17Z"}
{"dir":"","execID":"H6uda","level":"info","msg":"helm version --client","time":"2020-07-21T21:34:17Z"}
{"duration":50177755,"execID":"H6uda","level":"debug","msg":"version.BuildInfo{Version:\"v3.2.0\", GitCommit:\"e11b7ce3b12db2941e90399e874513fbd24bcb71\", GitTreeState:\"clean\", GoVersion:\"go1.13.10\"}\n","time":"2020-07-21T21:34:17Z"}
{"dir":"","execID":"VSlpQ","level":"info","msg":"kubectl version --client","time":"2020-07-21T21:34:17Z"}
{"duration":62498984,"execID":"VSlpQ","level":"debug","msg":"Client Version: version.Info{Major:\"1\", Minor:\"14\", GitVersion:\"v1.14.0\", GitCommit:\"641856db18352033a0d96dbc99153fa3b27298e5\", GitTreeState:\"clean\", BuildDate:\"2019-03-25T15:53:57Z\", GoVersion:\"go1.12.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n","time":"2020-07-21T21:34:17Z"}
{"grpc.code":"OK","grpc.method":"Version","grpc.service":"version.VersionService","grpc.start_time":"2020-07-21T21:34:17Z","grpc.time_ms":229.519,"level":"info","msg":"finished unary call with code OK","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.method":"Get","grpc.request.claims":"null","grpc.request.content":{},"grpc.service":"cluster.SettingsService","grpc.start_time":"2020-07-21T21:34:17Z","level":"info","msg":"received unary call /cluster.SettingsService/Get","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.code":"OK","grpc.method":"Get","grpc.service":"cluster.SettingsService","grpc.start_time":"2020-07-21T21:34:17Z","grpc.time_ms":1.049,"level":"info","msg":"finished unary call with code OK","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"error":"rpc error: code = Unauthenticated desc = no session information","grpc.code":"Unauthenticated","grpc.method":"List","grpc.service":"cluster.ClusterService","grpc.start_time":"2020-07-21T21:34:17Z","grpc.time_ms":0.341,"level":"info","msg":"finished unary call with code Unauthenticated","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.method":"GetUserInfo","grpc.request.claims":"null","grpc.request.content":{},"grpc.service":"session.SessionService","grpc.start_time":"2020-07-21T21:34:17Z","level":"info","msg":"received unary call /session.SessionService/GetUserInfo","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.code":"OK","grpc.method":"GetUserInfo","grpc.service":"session.SessionService","grpc.start_time":"2020-07-21T21:34:17Z","grpc.time_ms":0.56,"level":"info","msg":"finished unary call with code OK","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"error":"rpc error: code = Unauthenticated desc = no session information","grpc.code":"Unauthenticated","grpc.method":"List","grpc.service":"application.ApplicationService","grpc.start_time":"2020-07-21T21:34:17Z","grpc.time_ms":0.377,"level":"info","msg":"finished unary call with code Unauthenticated","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.method":"Get","grpc.request.claims":"null","grpc.request.content":{},"grpc.service":"cluster.SettingsService","grpc.start_time":"2020-07-21T21:34:17Z","level":"info","msg":"received unary call /cluster.SettingsService/Get","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.code":"OK","grpc.method":"Get","grpc.service":"cluster.SettingsService","grpc.start_time":"2020-07-21T21:34:17Z","grpc.time_ms":1.133,"level":"info","msg":"finished unary call with code OK","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.method":"GetUserInfo","grpc.request.claims":"null","grpc.request.content":{},"grpc.service":"session.SessionService","grpc.start_time":"2020-07-21T21:34:17Z","level":"info","msg":"received unary call /session.SessionService/GetUserInfo","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.code":"OK","grpc.method":"GetUserInfo","grpc.service":"session.SessionService","grpc.start_time":"2020-07-21T21:34:17Z","grpc.time_ms":0.477,"level":"info","msg":"finished unary call with code OK","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.method":"GetUserInfo","grpc.request.claims":"null","grpc.request.content":{},"grpc.service":"session.SessionService","grpc.start_time":"2020-07-21T21:34:17Z","level":"info","msg":"received unary call /session.SessionService/GetUserInfo","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.code":"OK","grpc.method":"GetUserInfo","grpc.service":"session.SessionService","grpc.start_time":"2020-07-21T21:34:17Z","grpc.time_ms":0.371,"level":"info","msg":"finished unary call with code OK","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.method":"Get","grpc.request.claims":"null","grpc.request.content":{},"grpc.service":"cluster.SettingsService","grpc.start_time":"2020-07-21T21:34:17Z","level":"info","msg":"received unary call /cluster.SettingsService/Get","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.code":"OK","grpc.method":"Get","grpc.service":"cluster.SettingsService","grpc.start_time":"2020-07-21T21:34:17Z","grpc.time_ms":1.046,"level":"info","msg":"finished unary call with code OK","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.method":"GetUserInfo","grpc.request.claims":"null","grpc.request.content":{},"grpc.service":"session.SessionService","grpc.start_time":"2020-07-21T21:34:17Z","level":"info","msg":"received unary call /session.SessionService/GetUserInfo","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"grpc.code":"OK","grpc.method":"GetUserInfo","grpc.service":"session.SessionService","grpc.start_time":"2020-07-21T21:34:17Z","grpc.time_ms":0.435,"level":"info","msg":"finished unary call with code OK","span.kind":"server","system":"grpc","time":"2020-07-21T21:34:17Z"}
{"level":"info","msg":"Initializing OIDC provider (issuer: https://argocd-host/api/dex)","time":"2020-07-21T21:34:21Z"}

argocd-dex log:

{"level":"info","msg":"config using log level: debug","time":"2020-07-21T21:27:02Z"}
{"level":"info","msg":"config issuer: https://argocd-host/api/dex","time":"2020-07-21T21:27:02Z"}
{"level":"info","msg":"config storage: memory","time":"2020-07-21T21:27:02Z"}
{"level":"info","msg":"config static client: Argo CD","time":"2020-07-21T21:27:02Z"}
{"level":"info","msg":"config static client: Argo CD CLI","time":"2020-07-21T21:27:02Z"}
{"level":"info","msg":"config connector: okta","time":"2020-07-21T21:27:02Z"}
{"level":"info","msg":"config skipping approval screen","time":"2020-07-21T21:27:02Z"}
{"level":"info","msg":"keys expired, rotating","time":"2020-07-21T21:27:02Z"}
{"level":"info","msg":"keys rotated, next rotation: 2020-07-22 03:27:03.134612551 +0000 UTC","time":"2020-07-21T21:27:03Z"}
{"level":"info","msg":"listening (http/telemetry) on 0.0.0.0:5558","time":"2020-07-21T21:27:03Z"}
{"level":"info","msg":"listening (http) on 0.0.0.0:5556","time":"2020-07-21T21:27:03Z"}
{"level":"info","msg":"listening (grpc) on 0.0.0.0:5557","time":"2020-07-21T21:27:03Z"}
mstevens-a3 added the bug label Jul 21, 2020
@samwhite

I've been facing this same issue for months 😞

I found that this happens when the outboundTrafficPolicy is set to REGISTRY_ONLY. In this case, argocd-server's traffic is routed to BlackHoleCluster, which is where Istio will route traffic for which there is no matching VirtualService/ServiceEntry.

With outboundTrafficPolicy set to ALLOW_ANY (insecure, not for production use), the authentication via dex -> okta works just fine. Interestingly, the logs then show that argocd-server is sending traffic to PassthroughCluster, which is where Istio will "route" egress traffic when ALLOW_ANY is set.
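
For anyone checking which mode their mesh is in, here's a minimal sketch of where this knob lives in an IstioOperator-style mesh config (on Istio 1.4 the equivalent is set through Helm values; the resource name below is illustrative):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane
spec:
  meshConfig:
    # ALLOW_ANY (the permissive default) sends unmatched egress traffic to
    # PassthroughCluster; REGISTRY_ONLY sends it to BlackHoleCluster instead.
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY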

I would have expected to see traffic routing argocd-server -> argocd-dex-server instead of argocd-server -> PassthroughCluster.

This makes me suspect that Argo is trying to egress out of the server in order to talk to dex?

(Screenshot: Kiali traffic graph when trying to visit /api/dex/anything.)

For what it's worth, I can spin up a random pod elsewhere in the cluster and communicate with argocd-dex-server just fine:

# curl -I http://argocd-dex-server.argocd.svc.cluster.local:5556/api/dex/.well-known/openid-configuration
HTTP/1.1 200 OK
content-length: 864
content-type: application/json
date: Thu, 23 Jul 2020 21:19:02 GMT
x-envoy-upstream-service-time: 1
server: envoy

Here is the accompanying envoy log for the random pod -> dex-server request:

{
    "authority": "argocd-dex-server.argocd.svc.cluster.local:5556",
    "bytes_received": "0",
    "bytes_sent": "864",
    "downstream_local_address": "10.22.21.75:5556",
    "downstream_remote_address": "10.21.85.93:60582",
    "duration": "23",
    "istio_policy_status": "-",
    "method": "GET",
    "path": "/api/dex/.well-known/openid-configuration",
    "protocol": "HTTP/1.1",
    "request_id": "e3d18c38-cf15-9f64-ac31-c69451875e7d",
    "requested_server_name": "-",
    "response_code": "200",
    "response_flags": "-",
    "route_name": "default",
    "start_time": "2020-07-23T20:47:54.938Z",
    "upstream_cluster": "outbound|5556||argocd-dex-server.argocd.svc.cluster.local",
    "upstream_host": "10.21.85.87:5556",
    "upstream_local_address": "10.21.85.93:36468",
    "upstream_service_time": "22",
    "upstream_transport_failure_reason": "-",
    "user_agent": "curl/7.67.0",
    "x_forwarded_for": "-"
}

Here is the envoy log for the argocd-server -> ??? request:

{
    "authority": "argocd.dev-test.nandos.services",
    "bytes_received": "0",
    "bytes_sent": "0",
    "downstream_local_address": "10.22.21.75:5556",
    "downstream_remote_address": "10.21.85.83:33884",
    "duration": "0",
    "istio_policy_status": "-",
    "method": "GET",
    "path": "/api/dex/.well-known/openid-configuration",
    "protocol": "HTTP/1.1",
    "request_id": "5072715b-1681-97e8-8641-e863fcb0856e",
    "requested_server_name": "-",
    "response_code": "502",
    "response_flags": "-",
    "route_name": "block_all",
    "start_time": "2020-07-23T20:58:09.774Z",
    "upstream_cluster": "-",
    "upstream_host": "-",
    "upstream_local_address": "-",
    "upstream_service_time": "-",
    "upstream_transport_failure_reason": "-",
    "user_agent": "Go-http-client/1.1",
    "x_forwarded_for": "-"
}

Things I've tried so far:

  • Adding additional VirtualService/ServiceEntry resources to try to allow Argo to egress to itself, if that's what it's doing (see the sketch below)
  • Manually routing /api/dex traffic to the dex-server in a bid to bypass Argo's internal reverse proxy
  • Setting the --dex-server parameter on argocd-server, in an attempt to make it talk directly to the dex pod (which it can access! it's curl-able from within argocd-server), but to no avail
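
For illustration, the kind of VirtualService tried in the first bullet looked roughly like this (a hypothetical sketch; the external host and gateway names are assumed):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: argocd-dex-direct        # hypothetical name
  namespace: argocd
spec:
  hosts:
  - argocd-host                  # assumed external hostname
  gateways:
  - argocd-gateway               # assumed gateway
  http:
  - match:
    - uri:
        prefix: /api/dex
    route:
    - destination:
        # route /api/dex straight to dex, bypassing argocd-server's proxy
        host: argocd-dex-server.argocd.svc.cluster.local
        port:
          number: 5556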

@mstevens-a3
Author

@samwhite, out of curiosity, what version of Istio/k8s are you running in your setup? I've got a story in my backlog to get Istio upgraded and am curious whether I'll be able to use this issue to throw some more weight behind that upgrade. Glad to know we're not the only ones facing this, though. Are you just using password auth in the meantime?

@samwhite

I've seen this in Istio 1.4, 1.5, and 1.6. Same results with each version:

  • Works with the ALLOW_ANY outboundTrafficPolicy, does not work with REGISTRY_ONLY
  • The k8s version doesn't seem to make a difference either

In the meantime we're using password auth, yeah, but sharing a password with an ever-growing team is not wise...

@aaronmell

@samwhite I did some additional testing on our cluster. If I changed the dex server endpoint via the --dex-server flag to an https endpoint, the error I got back from argocd was that the wrong protocol was being used when trying to reach the OpenID configuration endpoint, so it does seem to be reaching dex in that case.

I also tried disabling mTLS between argo and dex, and that had no effect either.
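
For context, the flag lives on the argocd-server container; a rough sketch of the change tested above, with the https endpoint value assumed:

# Excerpt from the argocd-server Deployment pod spec
containers:
- name: argocd-server
  command:
  - argocd-server
  - --dex-server
  - https://argocd-dex-server:5556   # assumed https endpoint for the test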

@tonnyadhi

Anyone got any hints for this problem? Apparently I'm also having a similar problem: the Argo CD reverse proxy to the dex server returned a 503.
Failed to query provider "https://argocd.xyz.io/api/dex": 503 Service Unavailable: upstream connect error or disconnect/reset before headers. reset reason: connection termination

@clgcn

clgcn commented Oct 12, 2020

Same problem with GitHub OAuth.
istio version: 1.4.6
argocd version: 1.7

@FrediWeber
Contributor

FrediWeber commented Dec 22, 2020

We have the same problem. In older versions of ArgoCD it worked sometimes. We wondered why and came to the conclusion that if the Dex server and ArgoCD server pods are running on the same node, it works fine. We then went on to figure out why it stopped working in newer versions and concluded that an anti-affinity rule had been introduced in the Dex pod template.
As a workaround we changed the anti-affinity rule to an affinity rule, as sketched below.
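
A rough sketch of that workaround: a pod-affinity stanza on the Dex deployment that co-locates it with argocd-server (the label selector is assumed from the stock manifests):

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          # schedule dex onto whichever node runs argocd-server
          app.kubernetes.io/name: argocd-server
      topologyKey: kubernetes.io/hostname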

I honestly don't know why it behaves this way, but maybe this helps someone figure it out.

We use ArgoCD 1.8.1 and Istio 1.8.1

@okhaliavka
Contributor

okhaliavka commented May 7, 2021

The problem here is that argocd's reverse proxy to dex does not rewrite the Host header, which is what Istio uses to route HTTP traffic.
For anyone looking for a quick workaround: renaming the port in svc/argocd-dex-server from http to tcp solves this issue, because TCP services are not subject to Host-based routing.
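
A minimal sketch of that rename, assuming the stock argocd-dex-server Service layout:

apiVersion: v1
kind: Service
metadata:
  name: argocd-dex-server
  namespace: argocd
spec:
  ports:
  - name: tcp        # was "http"; a tcp name makes Envoy skip HTTP routing
    port: 5556
    targetPort: 5556
  - name: grpc
    port: 5557
    targetPort: 5557
  selector:
    app.kubernetes.io/name: argocd-dex-server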

@samwhite

@okhaliavka that worked -- thank you! ⭐

@hanzala1234
Contributor

Thanks @okhaliavka, it worked! Can you please tell me how changing the port name affects things? The protocol of the service is still the same (TCP). Is Istio now treating requests to the dex server as raw TCP just because we changed the name?

@okhaliavka
Contributor

okhaliavka commented Jul 2, 2021

@hanzala1234
The port name affects how Istio routes requests. If you call it tcp-*, Istio will just treat it as raw TCP and won't do any HTTP-specific routing. Here's the new Istio doc that explains some lower-level details of traffic routing.

Please note that this workaround affects load balancing and telemetry. #6183 fixes it on the ArgoCD side, so that you don't need to rename the port.

@crenshaw-dev
Member

Reopening due to continued reports of issues on the fix PR: #6183

crenshaw-dev reopened this Oct 14, 2022
@pre

pre commented Apr 13, 2023

FWIW, here's a list of changes needed for ArgoCD to work with Istio STRICT mTLS: #2784 (comment)

In addition to the list in that link, the dex service port also needs to be renamed from http to either https or tcp, as suggested in this issue; this was supposed to be fixed in #6183 but wasn't: #6183 (comment)

@okhaliavka
Contributor

okhaliavka commented May 9, 2023

Sorry for coming back to this so late. I must have screwed up my testing; it seems like there is at least one more place where the header needs to be rewritten. I'm going to take another stab at it.

@tchellomello
Contributor

Hit this issue on 2.7.6, and renaming the port did the trick.

@Bhima-patil

Hello @tchellomello, can you please share what changes you made?

@Bhima-patil

@tchellomello, can we connect on LinkedIn? I sent the request.

@tchellomello
Contributor

Hello @tchellomello, can you please share what changes you made?

For this Dex issue, basically renaming the port as explained in the comment above.

@okhaliavka
Contributor

okhaliavka commented Dec 1, 2023

I tried reproducing it, and everything works perfectly fine after the fix. Envoy correctly routes all requests to dex-server through the outbound|5556||argocd-dex-server.argocd.svc.cluster.local cluster, according to the access logs. It works with and without strict mTLS.

I believe the issue you've hit could be related to argocd's own TLS between the server and dex. In the default argocd installation the port is named http, and argo's own TLS is also enabled by default, so Envoy tries to parse the encrypted traffic as if it were HTTP and fails.
For this to work in Istio, you're supposed to either rename the port to https/tcp (so that Envoy doesn't try to parse it) or disable argo's own TLS.

The latter option is far superior: no double encryption (Istio mTLS over argo's TLS), and Istio can parse, manage, and meter http/grpc requests in the argocd namespace normally.

Adding this block to the argocd-cmd-params-cm ConfigMap will do the trick:

  controller.repo.server.plaintext: "true"
  dexserver.disable.tls: "true"
  server.dex.server.plaintext: "true"
  server.insecure: "true"
  server.repo.server.plaintext: "true"
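
In full manifest form that's roughly the following (a sketch; the labels are assumed from the stock install, and the affected deployments need a restart to pick up the new parameters):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  controller.repo.server.plaintext: "true"
  dexserver.disable.tls: "true"
  server.dex.server.plaintext: "true"
  server.insecure: "true"
  server.repo.server.plaintext: "true"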

@r-trigo

r-trigo commented Jun 11, 2024

@okhaliavka thank you very much for your input regarding Istio. After considering and applying your advice to disable Argo's TLS to avoid double encryption, the Google login is fixed. However, every application now enters "Unknown" status, becoming unable to compare the commits pushed to git with the app state in k8s. In the UI pop-ups we can see two kinds of errors:

Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unavailable desc = connection error: desc = "error reading server preface: EOF"

(this first one also appears under "Application conditions") and

Unable to load data: connection error: desc = "error reading server preface: read tcp ipv4:46746->another_ipv4:8081: read: connection reset by peer"

Can you give any more insightful tips to deal with this issue?

ArgoCD version: 2.11.2 (helm chart 7.1.1)

@kristian-oqc

@r-trigo I fixed it by also adding this config:

reposerver.disable.tls: "true"

@aviadhaham

We are on argo version 2.11.4, working with Azure SAML.

The annotations below on the ingress resource resolved the sudden issue for us:

      nginx.ingress.kubernetes.io/proxy-buffers-number: "16"
      nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
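
For context, those annotations sit in the Ingress metadata, roughly like this (resource names assumed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server        # assumed name
  namespace: argocd
  annotations:
    # larger proxy buffers so nginx can pass the large SAML response headers
    nginx.ingress.kubernetes.io/proxy-buffers-number: "16"
    nginx.ingress.kubernetes.io/proxy-buffer-size: 16k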
