@sentry/serverless adds ~300ms to function duration #3051

Closed
4 tasks done
kostia-official opened this issue Nov 14, 2020 · 18 comments
Labels
Package: serverless (Issues related to the Sentry Serverless SDK), Type: Improvement

Comments

@kostia-official

Package + Version

"@sentry/node": "^5.27.4",
"@sentry/serverless": "^5.27.4",

Description

I've noticed that the response time of my functions is slow. I added tracing and saw that at the end of every invocation my Lambda sends a POST request to Sentry. Here you can see a trace:
[trace screenshot]

It's okay to increase the response time for requests with an error, but if there is no error, I expect the response to be sent immediately. Here is the minimal code I've tested:

import * as Sentry from '@sentry/serverless';

Sentry.AWSLambda.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1.0
});

export const handler = Sentry.AWSLambda.wrapHandler(
  (event, context, callback) => {
    context.callbackWaitsForEmptyEventLoop = false;

    callback(null, { statusCode: 200 });
  },
  { callbackWaitsForEmptyEventLoop: false, flushTimeout: 2000 }
);
@kostia-official
Author

There is an AWS Lambda Extensions API that could help integrate Sentry without increasing function duration:
https://aws.amazon.com/blogs/compute/introducing-aws-lambda-extensions-in-preview/

@kostia-official
Author

Okay, I was able to make it work in a reasonable way:
error -> send the error to Sentry
no error -> don't send anything to Sentry

I've removed Sentry.AWSLambda.wrapHandler and added a flushTimeout to requestHandler instead (see the sketch below):
app.use(Sentry.Handlers.requestHandler({ flushTimeout: 2000 }));
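
For anyone who wants to try the same workaround, here is a minimal sketch of how the pieces could fit together in an Express app exposed as a Lambda handler. The serverless-http wrapper and the example route are assumptions added for illustration; only the requestHandler line comes from the comment above.

import express from 'express';
import serverless from 'serverless-http'; // assumption: the app is exposed to Lambda via serverless-http
import * as Sentry from '@sentry/node';

// Errors only: with no tracesSampleRate there are no transactions to send,
// so the flush before the response ends has little or nothing to do.
Sentry.init({ dsn: process.env.SENTRY_DSN });

const app = express();

// Flush buffered events for up to 2s before each response is finished.
app.use(Sentry.Handlers.requestHandler({ flushTimeout: 2000 }));

app.get('/', (_req, res) => {
  res.status(200).json({ ok: true });
});

// The error handler must be registered after the routes so failed requests are captured.
app.use(Sentry.Handlers.errorHandler());

export const handler = serverless(app);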

kostia-official changed the title from "@sentry/serverless adds 300ms to function duration" to "@sentry/serverless adds ~300ms to function duration" on Nov 14, 2020
@p0wl

p0wl commented Dec 1, 2020

It would be great to know whether anyone is working on a Lambda extension, or whether the current @sentry/serverless behaviour is here to stay.

We are using serverless-sentry-lib and can't / won't switch to @sentry/serverless because of issues like this one (#3051), #3049 and #2984.

@marshall-lee
Contributor

marshall-lee commented Dec 8, 2020

@kozzztya

error -> send the error to sentry
no error -> don't send anything to sentry

Using Sentry this way you won't get tracing events. For each incoming request, Sentry captures not only errors but also performance metrics.

If you only want to capture errors, you can simply omit the tracesSampleRate parameter.

I also want to point out that using Sentry.AWSLambda.wrapHandler is preferable, because it attaches more Lambda-specific context to the events than the generic requestHandler Node middleware.
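
To illustrate that suggestion, here is a minimal errors-only sketch: wrapHandler is kept, but tracesSampleRate is omitted. It is assembled from the snippets earlier in this thread, not an official recommended configuration.

import * as Sentry from '@sentry/serverless';

// No tracesSampleRate: only errors are captured, so on a successful
// invocation the buffer is empty and flush() should return quickly.
Sentry.AWSLambda.init({
  dsn: process.env.SENTRY_DSN
});

export const handler = Sentry.AWSLambda.wrapHandler(async (event, context) => {
  return { statusCode: 200 };
});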

@marshall-lee
Contributor

Unfortunately, the delay caused by flush() cannot be reduced on the SDK side. We can't even run flush() in the background, because there is no "background" in AWS Lambda: the instance is frozen between invocations.

I think the only thing that could add some concurrency is to write an extension using the Extensions API. I hope it would lower the response time, but you would still be billed for the duration of the flush() done on the extension side.
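
To make the extension idea more concrete, here is a rough sketch of the main loop an external extension could run, using the publicly documented Extensions API endpoints (register and event/next). The extension name and the Sentry-forwarding step are placeholders; this is not an existing Sentry component.

// Sketch of an external Lambda extension main loop (Node 18+ for global fetch).
const EXTENSIONS_API = `http://${process.env.AWS_LAMBDA_RUNTIME_API}/2020-01-01/extension`;

async function main() {
  // Register for INVOKE and SHUTDOWN events. For an external extension,
  // the name must match the executable's file name inside the layer.
  const registerResponse = await fetch(`${EXTENSIONS_API}/register`, {
    method: 'POST',
    headers: { 'Lambda-Extension-Name': 'sentry-flush-extension' }, // placeholder name
    body: JSON.stringify({ events: ['INVOKE', 'SHUTDOWN'] })
  });
  const extensionId = registerResponse.headers.get('lambda-extension-identifier');

  while (true) {
    // Blocks until the next invocation (or shutdown). Work done after this call
    // runs alongside the function but still counts toward billed duration.
    const event = await fetch(`${EXTENSIONS_API}/event/next`, {
      headers: { 'Lambda-Extension-Identifier': extensionId! }
    }).then((res) => res.json());

    if (event.eventType === 'SHUTDOWN') {
      break;
    }
    // ...receive buffered envelopes from the handler and forward them to Sentry here...
  }
}

main();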

@boyney123

There is an AWS Lambda Extensions API that could help integrate Sentry without increasing function duration:
aws.amazon.com/blogs/compute/introducing-aws-lambda-extensions-in-preview

Is there any more information on this?

I noticed the same across all our Lambda functions, an increase of about 300ms. An extension would be great! I've noticed that a lot of other platforms have moved, or are moving, to this approach.

I can't find much about Sentry doing this, unless I'm mistaken?

@TomBeckett

TomBeckett commented Mar 1, 2021

We're also having this issue, and after speaking with Sentry support they seem aware of it. For now we've removed Sentry from all our Lambda services. This has cut the duration from around 400ms to less than 80ms!

This needs to be looked into as a matter of priority. This can't be the recommended approach? It also seems like the Node.js variant works fine, so it's something in the Lambda variant for sure.

@KennyDurand

Same problem here. A fix would be appreciated!

AbhiPrasad added the "Package: serverless" label Sep 30, 2021
@github-actions
Contributor

This issue has gone three weeks without activity. In another week, I will close it.

But! If you comment or otherwise update it, I will reset the clock, and if you label it Status: Backlog or Status: In Progress, I will leave it alone ... forever!


"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀

@joshorr

joshorr commented Jul 28, 2022

I am also having this issue; my Lambda duration increases from about 90 ms to 290 ms! We use a Datadog extension that does not add any time, because it does its work in the background (extensions let you do this).

I was looking for a way to use Sentry as an extension, where it would flush between invocations in the background. That would be very helpful, as currently it almost triples the duration of the Lambda.

@alexw23

alexw23 commented Feb 1, 2023

Same issue here. We recently moved some of our services to Lambda and I couldn't understand the 150-300ms overhead. I thought it would be easy to use this native package, but we won't be able to until this is resolved.

@van4oza

van4oza commented Mar 3, 2023

Same here, Sentry in Lambdas is super slow.

@wengzilla

Same issue here.

@spaceemotion

I don't know what I am doing wrong, but for me Sentry adds more than 300ms. I'd be happy with "only" that amount of increase when a request hits the sample rate.

Instead, I'm getting 2 seconds of extra time until the flush timeout hits, and still no data gets sent to Sentry.

If Sentry is disabled, the requests are all sub-100ms.

@microadam

microadam commented Feb 8, 2024

Does anyone have any workaround for this? It's a pretty big problem and makes Sentry close to useless in an AWS Lambda context. I'm assuming it's not safe to just use the normal Node SDK, as that might not flush everything out to Sentry before the Lambda is frozen (hence there being a specific serverless package)?

@microadam

Created this in case it's of help to anyone: https://github.com/microadam/sentry-lambda-extension

Add that as a Lambda layer and then configure Sentry similarly to this (the custom transport is the key part):

import { AWSLambda } from "@sentry/serverless";
import { Integrations } from "@sentry/node"; // imports added for clarity; adjust to your SDK version
import { makePromiseBuffer } from "@sentry/utils";
import type { Envelope } from "@sentry/types";

const buffer = makePromiseBuffer();

const makeCustomTransport = (options: { url: string }) => ({
  // flush() only waits for the local hand-off to the extension (buffer.drain),
  // not for the round-trip to Sentry itself.
  flush: async (timeout: number): Promise<boolean> => buffer.drain(timeout),
  send: async (envelope: Envelope) => {
    const requestTask = async () => {
      try {
        // sandbox is the internal lambda extension domain. Essentially resolves to itself
        await fetch("http://sandbox:4000", {
          method: "POST",
          headers: {
            "Content-Type": "application/json"
          },
          body: JSON.stringify({
            options: { url: options.url },
            envelope
          })
        });
      } catch (error) {
        console.error("Error sending to sentry lambda extension", error);
      }
    };
    await buffer.add(requestTask);
  }
});

AWSLambda.init({
  debug: false,
  environment: env,   // env and config come from your own application configuration
  dsn: config.dsn,
  transport: makeCustomTransport,
  integrations: [
    new Integrations.Http({ tracing: true })
  ],
  tracesSampleRate: config.traceSampleRate || 0.01
});

@Lms24
Member

Lms24 commented Sep 3, 2024

Hey folks, I realize this issue has been getting quite a few replies, but sadly it went unnoticed on our end. Sorry about that!

I thought about reopening this issue, but I don't think that makes it any more actionable. So I want to at least share an update on where things currently stand:

There are some core problems with AWS Lambda, specifically that we have to stall the execution to ensure our data is sent. And yes, we're aware of Lambda Extensions, which in theory can/should be used. However, we tried this previously for Python and ran into a lot of issues where the extension would freeze, or the expected reduction in stalling time wasn't as noticeable as we hoped. Also, we still need to somehow instrument/wrap the Lambda code to be able to capture errors and performance data, which again needs to be sent to an (external?) extension.

We're open to revisiting this some day (and are currently talking about it internally), but for now I can't make any promises, so I'll leave this closed.
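
For readers wondering why the execution has to stall at all, here is a simplified sketch of what any wrapper around a Lambda handler ends up doing. This is illustrative only and not the actual SDK implementation.

import * as Sentry from '@sentry/serverless';

Sentry.AWSLambda.init({ dsn: process.env.SENTRY_DSN, tracesSampleRate: 1.0 });

// Placeholder business logic for the sketch.
async function doWork(event: unknown) {
  return { statusCode: 200 };
}

export const handler = async (event: unknown) => {
  try {
    return await doWork(event);
  } catch (error) {
    Sentry.captureException(error);
    throw error;
  } finally {
    // Events (errors and, with tracing enabled, transactions) are buffered in
    // memory, so they must be flushed before the handler returns; otherwise
    // Lambda freezes the instance with data still unsent. This awaited
    // round-trip is the extra latency discussed in this thread.
    await Sentry.flush(2000);
  }
};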

@andreiborza
Member

andreiborza commented Sep 3, 2024

In addition to what @Lms24 said, please use #12856 to discuss so we get properly notified.
