@sentry/serverless adds ~300ms to function duration #3051
There is an AWS Lambda Extensions API that could help integrate Sentry without increasing function duration.
Okay, I was able to make it work in a reasonable way: I removed `Sentry.AWSLambda.wrapHandler` and added a `flushTimeout` option to `requestHandler`.
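A minimal simulation of what a flush timeout buys you: race the flush against a timer, so the handler never stalls longer than the budget. `slowFlush` is a stand-in for the SDK's network flush, not real Sentry code (though Sentry's actual `flush(timeout)` similarly resolves to a boolean).

```javascript
// Resolves true if the flush finished within the budget, false if we gave up
// waiting. Giving up caps added latency, at the risk of dropping events.
function flushWithTimeout(flushPromise, timeoutMs) {
  const timer = new Promise((resolve) => setTimeout(() => resolve(false), timeoutMs));
  return Promise.race([flushPromise.then(() => true), timer]);
}

// Pretend the network round trip to Sentry takes 200ms, but we only budget 50ms.
const slowFlush = new Promise((resolve) => setTimeout(resolve, 200));
const result = flushWithTimeout(slowFlush, 50);

result.then((finished) => console.log("flush finished in time:", finished));
```

This is why the workaround trades reliability for latency: with a short `flushTimeout`, invocations stay fast but events queued near the end of an invocation may be lost when the runtime freezes.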
@kozzztya
Using Sentry this way you won't get tracing events. For each incoming request, Sentry captures not only errors but also performance metrics. If you only want to capture errors, you could skip the tracing setup.
Unfortunately, the delay is still there. I think the only thing that could help add some concurrency is to write an extension using the Extensions API. I hope it would lower the response time, but you'd still be billed for that duration.
Is there any more information on this? I noticed the same across all our Lambda functions, an increase of about 300ms. An extension would be great! I notice a lot of other platforms have moved, or are moving, to this approach. I can't find much about Sentry doing this, unless I'm mistaken?
We're also having this issue, and after speaking with Sentry support they seem aware of it. For now we've removed Sentry from all our Lambda services. This has cut duration down from 400ms to less than 80ms! This needs to be looked into as a matter of priority; this can't be the recommended approach. It also seems like the plain Node.js variant works fine, so it's something in the Lambda variant for sure.
Same problem here. A fix would be appreciated!
This issue has gone three weeks without activity. In another week, I will close it. But! If you comment or otherwise update it, I will reset the clock. "A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀
I am also having this issue; my Lambda increases from about 90ms to 290ms! We use a Datadog extension that does not increase the time, as it does the work in the background (extensions let you do this). I was looking for a way to use Sentry as an extension that would flush between invocations in the background. That would be very helpful, as currently it almost triples the duration of the Lambda.
Same issue here. We recently moved some of our services to Lambda and I couldn't understand the 150-300ms overhead. We thought it would be easy to use this native package, but we won't be able to until this is resolved.
Same here, Sentry in Lambdas is super slow.
Same issue here.
I don't know what I am doing wrong, but for me Sentry adds more than 300ms. I'd be happy with "only" that amount of increase when a request hits the sample rate. Instead, I am getting 2 seconds of extra time until the timeout hits, and still no data gets sent to Sentry. If Sentry is disabled, the requests are all sub-100ms.
Does anyone have a workaround for this? It's a pretty big problem and makes Sentry unusable in an AWS Lambda situation. I assume it's not safe to just use the normal Node SDK, as it might not flush everything to Sentry before the Lambda dies (hence there being a specific serverless one)?
Created this in case it's of help to anyone: https://github.com/microadam/sentry-lambda-extension. Add that as a Lambda layer and then configure Sentry similarly, with a custom transport as the key part.
Hey folks, I realize this issue has been getting quite a few replies, but sadly it went unnoticed on our end. Sorry about that! I thought about reopening this issue, but I don't think that makes it any more actionable, so I at least want to share an update on where things currently stand. There are some core problems with AWS Lambda, specifically that we have to stall the execution to ensure our data is sent. And yes, we're aware of Lambda Extensions, which in theory can/should be used. However, we tried this previously for Python and ran into a lot of issues where the extension would freeze, or the expected reduction in stalling time wasn't as noticeable as we hoped. Also, we still need to somehow instrument/wrap the Lambda code to be able to capture errors and performance data, which again needs to be sent to an (external?) extension. We're open to revisiting this some day (and are currently talking about it internally), but for now I can't make any promises, so I'll leave this closed.
Package + Version
Description
I've noticed that the response time of my functions is slow, so I added tracing and saw that at the end of every invocation my Lambda sends a POST request to Sentry. Here you can see a trace:
(trace screenshot: image)
It's okay to increase the response time for requests with errors, but when there is no error I expect the response to be sent immediately. I tested this with a minimal handler.
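The original snippet did not survive extraction; a typical minimal setup with `@sentry/serverless` of that era looks like the configuration fragment below. The DSN is a placeholder, and the handler body is invented; only the `Sentry.AWSLambda.init`/`wrapHandler` shape reflects the package's documented API.

```javascript
// Minimal @sentry/serverless setup: init once at module load, wrap the
// handler so errors are captured and events are flushed before freeze.
const Sentry = require("@sentry/serverless");

Sentry.AWSLambda.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  tracesSampleRate: 1.0,
});

exports.handler = Sentry.AWSLambda.wrapHandler(async (event, context) => {
  return { statusCode: 200, body: "ok" };
});
```

With `tracesSampleRate: 1.0`, every invocation produces a transaction envelope, so every invocation pays the synchronous flush described above, not just the ones with errors.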