
GitLab Integration Problem #26851

Closed
Spice-King opened this issue Jun 23, 2021 · 14 comments

Comments

@Spice-King

Spice-King commented Jun 23, 2021

Important Details

How are you running Sentry?

On-Premise w/ Docker, version 21.6.1

Description

I'm trying to connect a new self-hosted instance of GitLab to my existing self-hosted Sentry instance. Now, my boss has been making a push to kill off non-secure HTTP traffic on our network, so we have everything set to require HTTPS, since it's easy enough to install the root cert on new servers and PCs automatically. Step-ca handles generating certs via ACME, so that takes care of that side, and for bonus points, it's got name restrictions to limit the domains it can sign for, so no one can steal it and sign a fake google.com cert.

The GitLab instance is an internal-only service sitting at gitlab.company_name.internal. It resolves correctly on the Sentry host machine and certificates work, checked with both curl and openssl on the Sentry host, once the root CA is installed. Sentry is a public-facing service (though we might change that at some point, since we really just need my tunnel endpoint exposed) and has both a valid public domain and TLS certificate (error.company_name.com and *.company_name.com).

Steps to Reproduce

  1. Attempt to add GitLab instance.
  2. Get SSL/TLS errors.
  3. Try to add the root CA to all the containers.
  4. Get a different error.
  5. Cry, because the logs are not much help.

[screenshot of the error]

The only bit of the logs that looked tied to the attempts:

web_1 | 20:34:17 [INFO] sentry.identity: identity.token-exchange-error (error='invalid_state' pipeline_state='9334b7b6a88d4d6eab80d5800dea4d49')
web_1 | 20:34:17 [ERROR] sentry.integration.gitlab: pipeline error (organization_id=1 provider='gitlab' error='An error occurred while validating your request.')

Docker Compose overrides, for injecting the cert.
https://gist.github.com/Spice-King/0275c8629dc7b6c2e615d6ceda1a699a
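The gist isn't reproduced here, but a minimal sketch of the general idea (mount the internal root CA into a container and run update-ca-certificates before the original process starts) might look like the following; the service name, command, and paths are illustrative, not the actual contents of the override:

services:
  web:   # hypothetical example service; the same pattern would repeat for the others
    volumes:
      - /etc/root_ca.crt:/usr/local/share/ca-certificates/root_ca.crt:ro
    # Overriding the entrypoint makes Docker drop the image's default command,
    # so the command has to be restated (this comes up later in the thread).
    entrypoint: ["/bin/sh", "-c", "update-ca-certificates && exec \"$@\"", "--"]
    command: ["sentry", "run", "web"]   # assumes update-ca-certificates and the sentry CLI exist in the image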

What you expected to happen

Be able to get a working GitLab connection, and leave work for the day with a bit of a smile hidden under my mask.

Mild joking aside, I'll need to do up an issue (or a PR if I get the time) to make adding a root CA less of a pain. Injection via environment variable is probably the simplest, with a bit of script added to the entrypoints to update the global certs. Any pointers to better logs or clues for figuring out what I missed? Java keystores or local OpenSSL installs are things I did not think to hunt for off the top of my head.

@getsentry-release

Routing to @getsentry/open-source for triage. ⏲️

@BYK
Member

BYK commented Jun 24, 2021

Heya, sorry for the trouble. Are you able to share some request logs so we can see the pipeline state parameter that gets passed and see how they differ? I suspect a type/encoding issue here but need to verify.

Did this work without SSL btw?

Mild joking aside, I'll need to do up an issue (or a PR if I get the time) to make adding a root CA less of a pain. Injection via environment variable is probably the simplest, with a bit of script added to the entrypoints to update the global certs. Any pointers to better logs or clues for figuring out what I missed? Java keystores or local OpenSSL installs are things I did not think to hunt for off the top of my head.

We try to refrain from using env variables for everything but I think some auto mount and detection in entrypoint scripts would be great!

@Spice-King
Author

Heya, sorry for the trouble. Are you able to share some request logs so we can see the pipeline state parameter that gets passed and see how they differ? I suspect a type/encoding issue here but need to verify.

Back at work with a fresh mind, I found out I had overlooked resetting snuba-api's command in my override file. Docker drops the image's default command if you monkey with the entrypoint, and I forgot about that in Compose. Since &sentry-defaults sets both, it worked there, and I never noticed because I was not looking at the dashboard, where snuba failing would be more noticeable.

There appear to be zero access attempts from my Sentry server to the proxy that's doing routing on my dev-ops server; that, or Traefik is discarding malformed, empty, or partial requests without logging that it did (doubtful, but the logging is not cranked up to the limit). Disabling all the bits that forced HTTPS took far, far, far longer than expected, since there were many places doing something like that. (Pro tip: any URL in GitLab's config that belongs to one of its own routes might trigger redirects, including your SAML config 😢.) Spent the better part of the morning digging around and waiting 3 minutes for GitLab to come back after each config change.

With the hard HTTPS requirement removed and my overrides file in place, I could add GitLab to Sentry via HTTP, and we get SSL cert errors over HTTPS. Or it did work. Now it errors after I removed the integration and tried to re-add it. Again, no access logs from GitLab/Traefik, so here are logs in the form of docker-compose logs output.

https://gist.github.com/Spice-King/ddced1747a9384ecf13286cfe25bb959

We try to refrain from using env variables for everything but I think some auto mount and detection in entrypoint scripts would be great!
Fair, env vars came to mind as a first option. Probably could just assume a given directory exists (unlike a file, one can assume that, because Docker will create it for you) and add any certs inside said directory to the container's global CA root store.
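A rough sketch of that directory idea, assuming a hypothetical ./certificates folder on the host and an example service named web (the change that eventually landed in self-hosted may differ in its details):

x-ca-defaults: &ca_defaults
  volumes:
    # Docker creates ./certificates on the host if it's missing, so the mount
    # is safe even when no custom roots are in use.
    - ./certificates:/usr/local/share/ca-certificates:ro

services:
  web:
    <<: *ca_defaults
  # ...repeated for any service that needs to trust the internal CA; an
  # entrypoint still has to run update-ca-certificates before the service starts.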

Now for lunch, a brief escape from this headache.

@Spice-King
Author

Post lunch, it now accepts the HTTP URL, and HTTPS as well (with verification turned off, still counter to the boss' orders). I wonder if something is not in a ready state for a while after startup; that would make the feeling of going crazy more explicable. Reapplying overrides and waiting 5 minutes before retrying adding GitLab.

@BYK
Member

BYK commented Jun 24, 2021

From your logs:

nginx_1 | 172.23.0.1 - - [24/Jun/2021:16:48:10 +0000] "GET /extensions/gitlab/setup/?completed_installation_guide HTTP/1.0" 200 12089 "https://error.company_name.com/organizations/company-name/integrations/gitlab/setup/?" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" "10.0.0.156"

I think something is stripping some query string arguments from this request, removing the state argument and causing the error.

@Spice-King
Author

That is the referrer header set by Firefox, and it is the URL opened for the pop-up to start the process. I do have an Nginx instance fronting the Sentry one, as close to vanilla as one can get, just doing TLS termination and taking an unused route for handling the tunnel (we have multiple sites, written by other devs in other languages, so externalizing the tunnel endpoint was chosen to lessen the load of integration).

Here is the config for that; Sentry is the lower location block: https://gist.github.com/Spice-King/beb14d62ad081f8b8853a53eaf4da576 I doubt that it's the issue. Looking at the POST form request, it appears to be working as intended. Even more irritating, it opts to randomly "work", like it did just now.

[screenshot]

https://gist.github.com/Spice-King/9d6e73f452f803e7afefdc6c0a6b196a

Other than setting SENTRY_BIND=127.0.0.1:9000, our Sentry config is more or less stock, outside the generated key, API keys, and email settings. sentry.conf.py is untouched. I could nix the SENTRY_BIND, but I'll probably spend more time fighting iptables and Docker than that is worth.

@Spice-King
Author

I have at least had the epiphany as to why Sentry ignores my custom root CA. The Python module certifi bundles all of Mozilla's roots; it was extracted out of the requests library. So I'll be figuring out how to bonk that.
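A quick way to confirm which bundle certifi is actually pointing at inside a container is to ask it directly; a one-off along these lines should work, assuming the stock self-hosted compose file where the service is named web:

docker-compose run --rm --entrypoint python web -c "import certifi; print(certifi.where())"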

@chadwhitacre
Member

I believe in you, @Spice-King.

[popcorn GIF]

@Spice-King
Author

Just got back into the office; I let Sentry chill overnight with this added:

x-core-defaults: &core_defaults
  environment:
    # Point requests (which otherwise uses certifi's vendored bundle) at the
    # system CA bundle instead.
    REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
  volumes:
    # Mount the internal root CA into the container's local CA store.
    - /etc/root_ca.crt:/usr/local/share/ca-certificates/root_ca.crt:ro

And it worked first time. I have no clue if there is something flipping between a working and a non-working state for adding the GitLab integration, but it really feels like there is something akin to that when rebooting Sentry.

I found three other CA bundles in the sentry image, all in Python packages: a vendored certifi for pip, botocore (its bundle unchanged since it was added in 2018, but it uses certifi's if it's around), and grpc. While I don't exactly need those things off the top of my head (AWS S3 and gRPC), I have found the magic env vars for them: DEFAULT_CA_BUNDLE for botocore and GRPC_DEFAULT_SSL_ROOTS_FILE_PATH_ENV_VAR for, well, grpc. The snuba image just has the two copies of certifi's bundle (the future package has some certs, but they look to be from its test suite). The relay and symbolicator images don't have any clear signs of other CA roots. The rest of the images were not checked, since they really should not need to reach outside of the Docker network.
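For reference, the anchor above gets applied to each service with a YAML merge key, roughly like this in the override file (service name illustrative); the botocore and grpc variables named above could be added to the same environment block, though those names aren't independently verified here:

services:
  web:
    # Pull in the shared environment and volume settings defined by &core_defaults.
    <<: *core_defaults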

All of this of course leads me to ask why the hell Python devs like to embed their own CA roots. Doing so introduces issues ranging from the more benign (letting them fall out of sync with their source through inaction) to the more malicious (someone tampering with that set of CA roots and no one really thinking much of it as part of an update to the whole file).

@BYK
Member

BYK commented Jun 28, 2021

All of this of course leads me to ask why the hell Python devs like to embed their own CA roots. Doing so introduces issues ranging from the more benign (letting them fall out of sync with their source through inaction) to the more malicious (someone tampering with that set of CA roots and no one really thinking much of it as part of an update to the whole file).

This sounds quite crazy to me; thanks a lot for the detective work! I'll dig into this a bit more, but in the meantime we may use some of your findings in the docs if you wanna share something (otherwise I'll summarize these myself somehow, but I'd prefer to give you the credit on the commits).

In the meantime, is there anything we can do to help you with that PR making it easier to add new CA roots on the self-hosted version?

@Spice-King
Author

I think I'm set for doing up my PRs, assuming it's safe to say that there is no reason the snuba image ever needs to reach outside of the Docker network.

I've been tired over the last few days (2nd COVID shot), have not had the time, and ultimately forgot due to work. I'll commit and do up PRs to sentry, onpremise, and develop when I get home tonight.

The random bucking while setting up the GitLab integration still leaves me scratching my head. There are too many moving parts, with things outside my wheelhouse, to really be effective at pinpointing why. That said, now that it worked once with the right settings, we kind of don't want to mess with it much anymore to pinpoint the cause, lest we break it again. Headaches aside, my boss has been quite happy with tying Sentry into GitLab and the amount of insight it helped bring in.

@github-actions
Contributor

This issue has gone three weeks without activity. In another week, I will close it.

But! If you comment or otherwise update it, I will reset the clock, and if you label it Status: Backlog or Status: In Progress, I will leave it alone ... forever!


"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀

BYK pushed a commit to getsentry/self-hosted that referenced this issue Jul 30, 2021
Mount a certificate folder to local ca storage in containers,
and add update command to cron image's entrypoint.

Result of poking and prodding from getsentry/sentry#26851
BYK pushed a commit to getsentry/develop that referenced this issue Jul 30, 2021
@Spice-King
Author

I forgot to close this and stumbled across it in my tabs, closing now.

@chadwhitacre
Member

So you're one of those people with so many tabs open you can't see the favicons? :P

Thanks for following up, @Spice-King. ☺️

@github-actions github-actions bot locked and limited conversation to collaborators Dec 29, 2021