Massive increase in transactions upgrading from 5.13.0 to 5.17.3 #2318
@rnystrom the only thing that changed on the ruby side during those releases for transactions is more […]

I need more info to help you further: […]
Thanks @sl0thentr0py! I can provide everything here in the hopes that it helps anyone else.
sentry.client.config.js:

```js
import * as Sentry from '@sentry/nextjs'

const SENTRY_DSN = process.env.SENTRY_DSN || process.env.NEXT_PUBLIC_SENTRY_DSN

Sentry.init({
  dsn: SENTRY_DSN,
  beforeSend(event, hint) {
    // Check if the error is a TypeError: Failed to fetch
    if (hint.originalException instanceof TypeError && hint.originalException.message === 'Failed to fetch') {
      // Check if the URL contains "_vercel/speed-insights/vitals"
      const url = event.request.url
      if (url && url.includes('_vercel/speed-insights/vitals')) {
        // Ignore the error
        return null
      }
    }
    // If conditions are not met, return the event
    return event
  },
  ignoreErrors: [
    // a bunch of error regexes and strings from extensions and more
  ],
  denyUrls: [
    // facebook and other extension urls...
  ],
  tracePropagationTargets: ['api.campsite.test', 'api.campsite.design', 'api.campsite.co'],
  tracesSampleRate: 0.025,
  profilesSampleRate: 1.0,
  integrations: [Sentry.httpClientIntegration(), Sentry.browserTracingIntegration(), Sentry.browserProfilingIntegration()]
})
```

next.config.js:

```js
const path = require('path')

/**
 * @type {import('next').NextConfig}
 */
const { withSentryConfig } = require('@sentry/nextjs')

// bunch of setup and config...

/** @type {import('next').NextConfig} */
const moduleExports = {
  // next config...
  async headers() {
    return [
      // other stuff...
      {
        // Sentry Profiling
        // @see https://docs.sentry.io/platforms/javascript/profiling/#step-2-add-document-policy-js-profiling-header
        source: '/(.*)',
        headers: [
          {
            key: 'Document-Policy',
            value: 'js-profiling'
          }
        ]
      }
    ]
  },
  sentry: {
    widenClientFileUpload: true,
    hideSourceMaps: true
  },
  // webpack config...
}

const sentryWebpackPluginOptions = {
  silent: true, // Suppresses all logs
  // For all available options, see:
  // https://github.com/getsentry/sentry-webpack-plugin#options.
  authToken: process.env.SENTRY_AUTH_TOKEN
}

// Make sure adding Sentry options is the last code to run before exporting, to
// ensure that your source maps include changes from all other Webpack plugins
module.exports = withBundleAnalyzer(withSentryConfig(moduleExports, sentryWebpackPluginOptions))
```
Here's a before & after of two weeks of API transactions. There doesn't appear to be any one product area that increased; instead the increase seems to be fairly uniform across controller actions and Sidekiq jobs. Here's a sampling: [screenshots elided]
Side note: I'd love to view "previous period" as a column as well. It's hard to compare results between time periods when they're grouped by columns. The dotted previous-period line is nice, but it's just a total for the interval.
thx @rnystrom, I don't know the exact problem yet, but some observations.

Trace differences before / after: if I look at one of the older traces [screenshot elided] and compare that to a more recent trace, which has 177 events in it [screenshot elided], compared to the 21 above.

So for some reason, the traces started on your frontend are now getting linked to far more Rails transactions (via the […]). Because you're using […]

Further steps: […]
For cross-reference, here's a running tracker of problems with NextJS in v8 (link elided).
ok, @Lms24 helped me figure out what's up: v8 changed how long traces live (link elided).

So if your application has a page that lives for a very long time without navigation, your traces will stay alive for much longer with v8. This is valuable feedback on the change, so we might make it more visible or add some tweaks/config so traces don't get too big.

For now, if you're not happy with this change, you can either lower your client-side sample rate or reset the propagation context manually to force a new trace:
```js
Sentry.getCurrentScope().setPropagationContext({
  traceId: uuid4(),
  spanId: uuid4().substring(16),
})
```
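As posted, that snippet assumes a `uuid4` helper is already in scope. Purely as an editorial sketch (not something from this thread), one way to apply it on a page that stays open for hours is to reset the trace on a timer; the `@sentry/utils` import and the 10-minute cap below are assumptions:

```js
import * as Sentry from '@sentry/nextjs'
// uuid4 is assumed to be importable from '@sentry/utils' (present in the v7/v8 SDKs).
import { uuid4 } from '@sentry/utils'

// Hypothetical cap on how long a single trace may live on a page that never navigates.
const MAX_TRACE_AGE_MS = 10 * 60 * 1000

if (typeof window !== 'undefined') {
  setInterval(() => {
    // Start a brand-new trace: fresh 32-char trace id plus a random 16-char span id.
    Sentry.getCurrentScope().setPropagationContext({
      traceId: uuid4(),
      spanId: uuid4().substring(16),
    })
  }, MAX_TRACE_AGE_MS)
}
```

A route-change hook or user-activity heuristic would work equally well; the point is simply to bound how many backend transactions a single sampled trace can accumulate.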
Ahh ok, sounds good. I'll start by lowering the client sample rate significantly. The change makes sense; it was just quite shocking how big an impact it had on us (particularly our bill...). I'll report back once we deploy the change.
So if I lower the client sample rate to, say, […]. We may need to implement a custom trace boundary like you suggested, though this entire change is quite a headache as it changes the trace-sampling mental model.
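To make the changed mental model concrete, here is an editorial back-of-the-envelope sketch. Only the 2.5% client rate and the 21-vs-177 events-per-trace figures come from this thread; the traffic volumes are hypothetical, and it assumes the Rails SDK inherits the sampling decision propagated from the frontend (the default when a trace header arrives):

```js
// Hypothetical volumes, not Campsite's real traffic. With an inherited (parent-based)
// sampling decision, every API call made inside a sampled frontend trace becomes a
// backend transaction, so backend volume scales with trace length, not the client rate.
const clientSampleRate = 0.025            // unchanged across the upgrade
const pageLoadsPerDay = 160_000           // hypothetical
const sampledTracesPerDay = pageLoadsPerDay * clientSampleRate // 4,000

const apiCallsPerTraceV7 = 21             // trace ends shortly after page load (older trace above)
const apiCallsPerTraceV8 = 177            // trace lives as long as the page stays open (newer trace above)

console.log(sampledTracesPerDay * apiCallsPerTraceV7) //  84,000 backend transactions per day
console.log(sampledTracesPerDay * apiCallsPerTraceV8) // 708,000 backend transactions per day
```

Under that model, lowering the client rate cuts backend volume proportionally, but the events-per-trace multiplier is what actually changed between v7 and v8.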
Hey @rnystrom, I'm really sorry for the trouble! We didn't take this decision lightly, but rather out of necessity, to resolve the confusion around the previous behaviour in version 7 of the SDKs. We knew there would be cases where the new, page-based trace lifetime causes unexpected behaviour, but it's not great that you're affected in this way.

As an alternative to manually resetting the trace yourself (which, I agree, is probably the best you can do for now), you could also downgrade to v7 of the NextJS SDK. It's still supported, although new features will most likely only go into v8.

We're still re-evaluating the page-based trace lifetime internally and ideally we can come up with a better approach. However, this unfortunately entails quite a lot of changes across the Sentry product, so I can't make any promises or give an ETA.
No worries, I appreciate the understanding. We definitely want to stay up to date with the SDKs so we can tag along with improvements and new features. If you have any advice on our sampling setup, so that we can keep more client-side sampling without blowing up our quota with API transactions, that would be very helpful.
You could in principle ignore […]
This issue has gone three weeks without activity. In another week, I will close it. But! If you comment or otherwise update it, I will reset the clock, and if you remove the label […]

"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀
We did some routine upgrades on 5/24, including bumping the Sentry Ruby SDK from 5.13.0 to 5.17.3.
This was late on a Friday. When our weekday traffic picked up the following week, we noticed an alarming number of transactions being logged: [screenshot elided]
This resulted in a much larger Sentry bill than we were expecting. We've been trying to trace where this came from, but we can't find anything. All of the increase in transactions is coming from our API service (Rails).
Our frontend is NextJS, and we also upgraded its Sentry dependencies from 7.110.1 to 8.4.0. I mention that because of our tracing setup: if the client were to send many more requests with tracing enabled, that could result in a large increase in transactions.
However, I do see our client sample rate still at 2.5% when looking at traces. Example: [screenshot elided]
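Editorial aside on the mechanism, assuming standard Sentry distributed-tracing behaviour rather than anything confirmed in this thread: because the `api.campsite.*` hosts are listed in `tracePropagationTargets`, the browser SDK attaches a `sentry-trace` header to requests made to those hosts, and the Rails SDK continues that trace and by default inherits its sampling decision. A rough sketch of what that header looks like (illustrative values only):

```js
// Editorial sketch, not code from this thread. browserTracingIntegration adds a header
// of the form "<traceId>-<parentSpanId>-<sampledFlag>" to requests whose URL matches
// tracePropagationTargets; the endpoint and values below are made up for illustration.
fetch('https://api.campsite.co/v1/example', {
  headers: {
    // Added automatically by the SDK; shown explicitly here only to illustrate the format.
    'sentry-trace': '771a43a4192642f0b136d5159a501700-b7ad6b7169203331-1',
  },
})
```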
We're having a hard time understanding where the increase in transactions is coming from. I've looked at the releases between 5.13 and 5.17 and don't see anything obvious.
I'd love some help understanding what is happening: either an explanation of what new transactions are being sent, where to look in Sentry for more details, or anything else.
Thank you!