Allow worker to load from a URL instead of a blob
#6739
I think the way forward here is, as a first step, probably to land #6665.
OK, so we've been talking about this some more, and we would propose the following changes:
Advantages:
If there are no objections to this, we'll start work on it. cc @Lms24 @bruno-garcia
My impression is that this seems rather complex. Relying on the CDN will always be a failure mode in some scenarios, so we'd be relying on the fallback to no compression anyway while always trying to fetch the resource from the CDN first. On the other hand, the advantages may justify the added complexity. How impactful is this issue right now? Does it affect a large number of users?
Right now? I would say not too impactful, but this is just a guess. We would have to see if we can determine the number of uncompressed vs. compressed segments. However, I think this is a better solution and worth the complexity, as it gives users a more secure way to load our worker.
So, we have started working on this. However, there is a core question:
What do you think - how should we approach this? One alternative version for 2. would be to update the docs & changelog to indicate that you need to whitelist our CDN URL for compression to work (even though the actual code hasn't landed yet); then we can make this change after GA and technically not be breaking. cc @billyvg @Lms24 @bruno-garcia
Let's not try to rush this for GA - that's too risky. But waiting for v8 would be too long. I think updating our docs to call this out for GA is a great idea, and then releasing the change asap after GA. The security implications of allowing all blobs to be run in a worker are great enough to justify the potential breaking changes IMO.
@mydea started a draft PR here: getsentry/sentry-docs#6269
Yeah, I agree and this would also be my vote!
As we talked about this yesterday, I also agree that it's okay to do this between GA and v8. Especially because it's "soft-breaking" in the sense that we'd fall back to sending uncompressed payloads. |
So after some research, I found that it is actually impossible to serve a web worker from a different origin 😬 so we cannot serve it directly from a CDN. The only thing that would be possible is to serve the worker from the CDN and have a minimal wrapper served as a blob: `importScripts('https://our-cdn-url.com/replay-worker.js');`. This should work and would reduce our bundle size, but it would not address the CSP issue - actually, it would make it worse, because users would need to whitelist both the blob and the CDN URL.
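For illustration, the wrapper idea above can be sketched roughly like this (the CDN URL is the placeholder from the comment, not a real endpoint, and this is not actual SDK code):

```javascript
// Build a blob: URL for a tiny wrapper worker whose only job is to pull
// the real worker script in from a CDN via importScripts().
// Note: the page's CSP would then need to allow BOTH blob: and the CDN
// origin in worker-src, which is why this makes the CSP problem worse.
function makeWrapperWorkerUrl(cdnUrl) {
  const source = `importScripts(${JSON.stringify(cdnUrl)});`;
  const blob = new Blob([source], { type: 'application/javascript' });
  return URL.createObjectURL(blob);
}

// In a browser you would then do:
// const worker = new Worker(makeWrapperWorkerUrl('https://our-cdn-url.com/replay-worker.js'));
```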
Is there any update on the preferred solution for this? I have a strict CSP and cannot allow "blob:" as this could potentially open up areas of attack. I know there is a fallback but I am now reluctant to turn on CSP reporting to sentry as this will create a lot of entries.
Hi @tj-kev! Sadly, as of now we do not have a good solution for this other than disabling compression. We are exploring other ways to work around this without having to disable compression - see e.g. #7755. I will close this issue for now, as I believe loading the worker from a URL is not the solution to this problem.
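For reference, disabling compression is just a Replay option; a minimal sketch (the option name `useCompression` is taken from the Replay docs, but treat the exact shape as an assumption here):

```javascript
// Sketch: Replay options with compression disabled, so no worker (and
// no blob: entry in worker-src) is needed. `useCompression` is assumed
// from the Replay documentation.
const replayOptions = { useCompression: false };

// In a browser app this would be passed to the Replay integration, e.g.:
// new Sentry.Replay(replayOptions)
```

The trade-off is the one discussed above: uncompressed payloads are much larger on the wire.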
I think this should remain open, it's a valid issue and disabling compression is not a good call. Datadog added support for this on their SDK per user request by allowing one to self-host the Worker: DataDog/browser-sdk#1578 (comment) |
I guess we could allow defining a custom URL, but as with Datadog, it would be up to users to implement a compression worker. We could provide a blueprint for this, I guess 🤔 cc @billyvg
…9409) This PR does two things:

1. Allow configuring a `workerUrl` in the replay config, which is expected to be the URL of a self-hosted worker script.
   a. Added an example worker script, which is a built version of the pako-based compression worker.
   b. Users can basically host this file themselves and point to it in `workerUrl`, as long as it is on the same origin as the website itself.
   c. We can eventually document this in the docs.
2. Allow configuring `__SENTRY_EXCLUDE_REPLAY_WORKER__` in your build to strip the default included web worker. You can configure this if you're disabling compression anyway, or if you want to configure a custom web worker as in the step above.

Fixes #6739, and allows reducing the bundle size further. Once merged/released we can also add this to the bundler plugins' `bundleSizeOptimizations` options.

Note that we _do not recommend_ disabling the web worker completely. We only recommend tree-shaking the worker code if you provide a custom worker URL - otherwise, replay payloads will not be compressed, resulting in much larger payloads sent over the network, which is bad for your application's performance.

Also note that when providing a custom worker, it is your own responsibility to keep it up to date - we try to keep the worker interface stable, and the worker is generally not updated often, but you should still check when updating the SDK whether the example worker has changed.

---

Co-authored-by: Billy Vong <billyvg@users.noreply.github.com>
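A sketch of the self-hosted setup the PR describes (the asset path is a placeholder; `workerUrl` is the option the PR adds, but check the released SDK docs for the exact integration shape):

```javascript
// Sketch: Replay options pointing at a self-hosted copy of the example
// worker. The path is hypothetical; the file must be served from the
// same origin as the page, per the PR description.
const replayOptions = {
  workerUrl: '/assets/replay-worker.min.js',
};

// In your app setup this would be passed to the integration, e.g.:
// Sentry.init({ integrations: [new Sentry.Replay(replayOptions)] });
```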
The issue with using blobs to load our Worker is that stricter CSPs will not allow it. You cannot control the "source" of the blob, so allowing our worker blob would allow all blobs.
Instead, the proposed solution would be to release the worker to our CDN and add an option to the replay SDK to load it from the CDN instead of from a blob. This would mean that users could then allow `worker-src` for our CDN.
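To illustrate the CSP angle (header values are examples, not a Sentry-prescribed policy): the blob-based default requires allowing `blob:` in `worker-src`, whereas a worker hosted on the page's own origin only needs `'self'`:

```
# Blob-based default worker:
Content-Security-Policy: worker-src 'self' blob:

# Self-hosted worker on the same origin:
Content-Security-Policy: worker-src 'self'
```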