GitLab Integration Problem #26851
Routing to @getsentry/open-source for triage. ⏲️
Heya, sorry for the trouble. Are you able to share some request logs so we can see the pipeline state parameter that gets passed and see how they differ? I suspect a type/encoding issue here but need to verify. Did this work without SSL btw?
We try to refrain from using env variables for everything, but I think some auto-mounting and detection in the entrypoint scripts would be great!
Back at work with a fresh mind, I found out I had overlooked resetting snuba-api's command in my override file. Docker forgets the command if you monkey with the entrypoint, and I forgot about that in Compose; since &sentry-defaults sets both, it worked there and I never noticed, as I was not looking at the dashboard where snuba failing would have been more noticeable (rough sketch of the gotcha below).

There appear to be zero access attempts from my Sentry server to the proxy that's doing routing on my dev-ops server, or else Traefik is discarding malformed, empty or partial requests without logging that it did (doubtful, but the logging is not cranked up to the limit). Disabling all the bits that forced HTTPS took far, far, far longer than expected, since there were many places doing something like that. (Pro tip: every URL in GitLab's config that belongs to one of its routes might trigger redirects, including your SAML config. 😢) I spent the better part of the morning digging around and waiting 3 minutes for GitLab to come back after each config change.

With the hard HTTPS requirement removed and my overrides file applied, I could add GitLab to Sentry via HTTP, while HTTPS gives SSL cert errors. Or I could, until I removed it and tried to re-add it; now it errors again. Again, no access logs from GitLab/Traefik, so here are the logs I do have: https://gist.github.com/Spice-King/ddced1747a9384ecf13286cfe25bb959
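For anyone hitting the same thing, here is a minimal sketch of the Compose gotcha. The service name, the entrypoint wrapper and the paths are assumptions for illustration, not the actual contents of my override file.

```yaml
# Sketch only: service name, wrapper and paths are assumptions.
services:
  snuba-api:
    # Setting `entrypoint:` in an override makes Compose drop the image's
    # default command, so the command has to be restated explicitly.
    entrypoint: ["/bin/sh", "-c", "update-ca-certificates && exec \"$0\" \"$@\"", "/docker_entrypoint.sh"]
    # This is the line I had forgotten for snuba-api.
    command: ["api"]
```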
Now for lunch, a brief escape from this headache.
Post lunch, it now accepts the HTTP URL, and the HTTPS one as well (verification turned off, still counter to the boss' orders). I wonder if something is not in a ready state for a while after start-up; that would at least make the feeling of going crazy more explicable. Reapplying my overrides and waiting 5 before retrying adding GitLab.
From your logs:
I think something is stripping some query string arguments from this request, causing the error.
That is the referrer header set by Firefox, and it is the URL opened for the pop-up that starts the process. I do have an Nginx instance fronting the Sentry one, as close to vanilla as one can get, just doing TLS termination and taking an unused route for handling the tunnel (we have multiple sites, written by other devs in other languages, so externalizing the tunnel endpoint was chosen to lessen the integration load). Here is the config for that; Sentry is the lower location block: https://gist.github.com/Spice-King/beb14d62ad081f8b8853a53eaf4da576 I doubt that it's the issue. Looking at the POST form request, it appears to be working as intended. Even more irritating, it opts to randomly "work", like it did just now: https://gist.github.com/Spice-King/9d6e73f452f803e7afefdc6c0a6b196a Other than setting
I have at least had the epiphany as to why Sentry ignores my custom root CA: the Python module certifi bundles all of Mozilla's roots (it was extracted out of the requests library). So I'll be figuring out how to bonk that.
I believe in you, @Spice-King.
Just got back into the office; I let Sentry chill overnight with this added:

```yaml
x-core-defaults: &core_defaults
  environment:
    REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
  volumes:
    - /etc/root_ca.crt:/usr/local/share/ca-certificates/root_ca.crt:ro
```

And it worked first time. I have no clue if there is something that flips between a working and a non-working state for adding the GitLab integration, but it really feels like there is something akin to that around rebooting Sentry. I found three other CA bundles in the Sentry image, all from Python packages: a vendored certifi for pip, botocore (unchanged since it was added in 2018, but it uses certifi's if it's around), and grpc. While I don't exactly need those things off the top of my head (AWS S3 and gRPC), I have found the magic env vars for them. All of which leads me to ask why the hell Python devs like to embed their own CA roots, which introduces issues ranging from the more benign (letting them fall out of sync with their source through inaction) to the more malicious (someone tampering with that set of CA roots and no one thinking much of it as part of an update to the whole file).
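A hedged sketch of what pointing those other packages at the system bundle might look like; the variable names here are the ones commonly documented for each package, not necessarily the exact ones referenced above, so verify against each package's docs.

```yaml
# Sketch: route the other bundled CA stores to the system bundle.
# Variable names are assumptions based on each package's documentation.
x-core-defaults: &core_defaults
  environment:
    REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt                # requests
    AWS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt                     # botocore / boto3
    PIP_CERT: /etc/ssl/certs/ca-certificates.crt                          # pip's vendored certifi
    GRPC_DEFAULT_SSL_ROOTS_FILE_PATH: /etc/ssl/certs/ca-certificates.crt  # grpc
  volumes:
    - /etc/root_ca.crt:/usr/local/share/ca-certificates/root_ca.crt:ro
```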
This sounds quite crazy to me, thanks a lot for the detective work! I'll dig into this a bit more, but in the meantime we may use some of your findings in the docs if you wanna share something (otherwise I'll be summarizing these myself somehow, but I prefer to give you the credit on the commits). In the meantime, is there anything we can do to help you with that PR to make it easier to add new CA roots on the self-hosted version?
I think I'm set for doing up my PRs, assuming it's safe to assume that there is no reason the snuba image ever needs to reach outside of the Docker network. I've been tired over the last few days (2nd COVID shot), haven't had the time, and ultimately forgot due to work. I'll commit and do up PRs to sentry, onpremise and develop when I get home tonight. The random bucking of setting up at least the GitLab integration still leaves me scratching my head; there are too many moving parts with things outside my wheelhouse for me to be effective at pinpointing why. That said, now that it randomly worked once with the right settings, we kinda don't want to mess with it much anymore to pinpoint the cause, lest we break it again. Headaches aside, my boss has been quite happy with tying Sentry into GitLab and the amount of insight it helped bring in.
This issue has gone three weeks without activity. In another week, I will close it. But! If you comment or otherwise update it, I will reset the clock, and if you label it Status: Backlog or Status: In Progress, I will leave it alone ... forever!

"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀
Mount a certificate folder to local ca storage in containers, and add update command to cron image's entrypoint. Result of poking and prodding from getsentry/sentry#26851
Result of poking and prodding from getsentry/sentry#26851 Documentation for getsentry/self-hosted#1015
I forgot to close this and stumbled across it in my tabs, closing now.
So you're one of those people with so many tabs open you can't see the favicons? :P Thanks for following up, @Spice-King.
Important Details
How are you running Sentry?
On-Premise w/ Docker, version 21.6.1
Description
I'm trying to connect a new self-hosted instance of GitLab to my existing self-hosted Sentry instance. Now, my boss has been making a push to kill off non-secure HTTP traffic on our network, so we have everything set to require HTTPS, since it's easy enough to install the root cert on new servers and PCs automatically. Step-ca handles generating certs via ACME, so that takes care of that side, and for bonus points it's got name restrictions to limit the domains it can sign for, so no one can steal it and sign a fake google.com cert.
The GitLab instance is an internal-only service sitting at gitlab.company_name.internal. It resolves correctly on the Sentry host machine and certificates work, checked with both curl and openssl on the Sentry host, once the root CA is installed. Sentry is a public-facing service (though we might change that at some point, since we really just need my tunnel endpoint exposed) and has both a valid public domain and TLS certificate (error.company_name.com and *.company_name.com).
Steps to Reproduce
The only bit of the logs that looked tied to the attempts:
Docker Compose overrides, for injecting the cert.
https://gist.github.com/Spice-King/0275c8629dc7b6c2e615d6ceda1a699a
What you expected to happen
Be able to get a working GitLab connection, and leave work for the day with a bit of a smile hidden under my mask.
Mild joking aside, I'll need to do up an issue (or a PR if I get the time) to make adding a root CA less of a pain. Injection via an environment variable is probably the simplest, with a bit of script added to the entrypoints to update the global certs (rough sketch below). Any pointers to some better logs, or clues for figuring out what I missed? Java keystores or local OpenSSL installs are things I did not hunt for off the top of my head.
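For the record, a rough sketch of that environment-variable idea, using a shared YAML anchor so each service picks up the mounted root CA and the requests bundle path. The anchor name, paths and service names are placeholders, not the actual self-hosted compose layout.

```yaml
# Rough sketch of the proposal above; anchor, paths and service names are
# placeholders rather than the real docker-compose.yml structure.
x-ca-defaults: &ca_defaults
  environment:
    REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
  volumes:
    # The entrypoints would still need a small script that runs
    # `update-ca-certificates` so the mounted root lands in the bundle.
    - /etc/root_ca.crt:/usr/local/share/ca-certificates/root_ca.crt:ro

services:
  web:
    <<: *ca_defaults
  worker:
    <<: *ca_defaults
```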