
Profiling tail latency events #1738

Open
RainM opened this issue Feb 6, 2025 · 3 comments

RainM commented Feb 6, 2025

Hello,

I think many people use Aeron in latency-sensitive applications, and it is a constant challenge to profile the rare events that contribute heavily to tail latency but barely affect the average profile.
I have an idea how to address this. Imagine Aeron's event loop writes the current timestamp (a heartbeat) to a memory-mapped file (the heartbeat file) at the beginning of every poll. A profiler can then be extended to read this timestamp (via the same memory-mapped file) and record a sample only if the current timestamp (the sample's timestamp) is significantly greater than the timestamp of the poll start. If the poll started recently, the sample is simply skipped. In general, this is similar to 'time-to-safepoint' profiling.
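
For illustration, here is a minimal sketch of the application-side heartbeat write using Agrona's IoUtil and UnsafeBuffer (the file name, size and offset are hypothetical assumptions for illustration, not a proposed API):

```java
import org.agrona.IoUtil;
import org.agrona.concurrent.UnsafeBuffer;

import java.io.File;

public final class HeartbeatWriter
{
    private static final int TIMESTAMP_OFFSET = 0; // single long; layout is an assumption

    private final UnsafeBuffer buffer;

    public HeartbeatWriter(final File heartbeatFile)
    {
        // map a small file that the profiler can also map and read
        this.buffer = new UnsafeBuffer(IoUtil.mapNewFile(heartbeatFile, 64));
    }

    /**
     * Call at the beginning of every poll of the event loop.
     */
    public void onPollStart()
    {
        // ordered put so the profiler observes a cleanly published timestamp
        buffer.putLongOrdered(TIMESTAMP_OFFSET, System.nanoTime());
    }
}
```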

I've implemented such a mechanism for async-profiler (not upstreamed yet).

Do you think such an approach would help with profiling tail latency issues? Would you be open to me contributing such a thing (a heartbeat agent) to Agrona/Aeron? I'm finishing a PR to async-profiler, and feedback from the Aeron side is highly valuable as well.


vyazelenko commented Feb 6, 2025

@RainM Are you aware of the duty cycle stall tracking that is implemented for all Aeron components? There are two counters per component: one tracks the max duty cycle time ever recorded, whereas the other contains the number of times a threshold has been breached (e.g. MediaDriver.Context#senderCycleThresholdNs()).

For example, have a look at Sender#doWork and DutyCycleStallTracker.
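
As a hedged sketch (the 1 ms value and the surrounding class are assumptions for illustration; senderCycleThresholdNs is the setter referenced above), the threshold could be configured like this:

```java
import io.aeron.driver.MediaDriver;

import java.util.concurrent.TimeUnit;

public final class DriverWithStallTracking
{
    public static void main(final String[] args)
    {
        // lower the threshold so breaches of the sender duty cycle are counted sooner
        final MediaDriver.Context ctx = new MediaDriver.Context()
            .senderCycleThresholdNs(TimeUnit.MILLISECONDS.toNanos(1));

        try (MediaDriver driver = MediaDriver.launch(ctx))
        {
            // the max-cycle-time and threshold-exceeded counters can be observed
            // while the driver is running, e.g. with the AeronStat tool
            System.out.println("Media driver started: " + driver.aeronDirectoryName());
        }
    }
}
```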


vad0 commented Feb 8, 2025

When DutyCycleStallTracker reports a slow cycle it is already too late: the cycle has ended and we have no way to find out why it was slow. The idea is to store DutyCycleStallTracker.timeOfLastUpdateNs in an off-heap variable. Async-profiler reads this variable before collecting a stack trace. If deltaNs = asyncProfilerNowNs - timeOfLastUpdateNs < thresholdNs, the stack trace is not collected. If deltaNs > thresholdNs, the stack trace is collected. If we set thresholdNs at the 0.99 latency percentile value, then we completely ignore fast cycles and collect stack traces only for the slow ones.
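
To make the filtering rule concrete, here is a minimal Java sketch of that check, assuming the last-update timestamp is published at offset 0 of a shared buffer (in async-profiler the equivalent check would live in native code; the class and method names here are hypothetical):

```java
import org.agrona.concurrent.UnsafeBuffer;

final class TailLatencySampleFilter
{
    private final UnsafeBuffer heartbeatBuffer; // maps the same file the event loop writes
    private final long thresholdNs;

    TailLatencySampleFilter(final UnsafeBuffer heartbeatBuffer, final long thresholdNs)
    {
        this.heartbeatBuffer = heartbeatBuffer;
        this.thresholdNs = thresholdNs;
    }

    boolean shouldCollectStackTrace(final long nowNs)
    {
        final long timeOfLastUpdateNs = heartbeatBuffer.getLongVolatile(0);
        final long deltaNs = nowNs - timeOfLastUpdateNs;

        // only keep samples taken once the current cycle has already run long
        return deltaNs > thresholdNs;
    }
}
```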


RainM commented Feb 9, 2025

Yep, the key thing is to share knowledge between the application and the profiler about whether a poll is slow or not.

One step back. If we want to get a profile of tail events, we need to either

  1. gather the full profile, annotate it with poll starts and ends, and then filter out all fast polls; or
  2. tell the profiler whether the current poll is slow or fast at the moment it wants to record a sample. The target state here is to record all samples from the poll start and either discard them (if it is a fast poll) or keep them (if it is a slow poll). But this requires significant support on the profiler side: at a minimum, the profiler has to hold on to recorded samples while it is not yet known whether the poll is fast or slow, which means temporary storage for such samples. It's a big change and requires a lot of work on the profiler side.

I'd like to simplify the application-profiler interface: it's just one long integer variable, and it's pretty straightforward for the profiler to decide whether a poll is slow or fast.

Back to your answer: yep, I know DutyCycleStallTracker. But the knowledge that the current poll was slow doesn't, by itself, tell you anything about its profile.
