
TokenBucketRateLimiter.WaitAsync() does not immediately reduce the _queueCount on cancellation #64078

Closed
halter73 opened this issue Jan 21, 2022 · 4 comments · Fixed by #64825

Comments

@halter73
Member

Description

TokenBucketRateLimiter.WaitAsync(int, CancellationToken) still counts the requested tokens against the QueueLimit after the CancellationToken has canceled a wait that returned an incomplete ValueTask. The count is eventually corrected when the request is dequeued from the Deque<RequestRegistration> by ReplenishInternal, but that can be too late.

Reproduction Steps

var limiter = new TokenBucketRateLimiter(new TokenBucketRateLimiterOptions(1, QueueProcessingOrder.OldestFirst, 1,
    TimeSpan.Zero, 1, autoReplenishment: false));

limiter.Acquire(1);

var cts = new CancellationTokenSource();
var firstWaitTask = limiter.WaitAsync(1, cts.Token);

cts.Cancel();
await Assert.ThrowsAsync<OperationCanceledException>(() => firstWaitTask.AsTask());

// This incorrectly returns a failed lease immediately because cts.Cancel() did not reduce the _queueCount.
var secondWaitTask = limiter.WaitAsync(1, cts.Token);
// Assert.False(secondWaitTask.IsCompleted);

Assert.True(limiter.TryReplenish());

// WaitAsync() no longer returns a failed lease because we called TryReplenish(), but that shouldn't be necessary
var thirdWaitTask = limiter.WaitAsync(1, cts.Token);
Assert.False(thirdWaitTask.IsCompleted);

Expected behavior

The secondWaitTask should initially be incomplete and later complete successfully when the TokenBucketRateLimiter is replenished.

Actual behavior

The secondWaitTask immediately returns a non-acquired lease because the QueueLimit has supposedly been exceeded, even though it has not.

Regression?

No.

Known Workarounds

No response

Configuration

No response

Other information

No response

@dotnet-issue-labeler bot added the area-System.Threading and untriaged (New issue has not been triaged by the area owner) labels on Jan 21, 2022
@ghost

ghost commented Jan 21, 2022

Tagging subscribers to this area: @mangod9
See info in area-owners.md if you want to be subscribed.


@stephentoub
Member

Sounds similar to #761. They're using the same helper type, right? We might want to address them together. I haven't fixed that one yet as I've been concerned about the perf impact.

@halter73
Member Author

I'm not very familiar with the Channel code. What helper type are you referring to? AsyncOperation? I don't think we're sharing anything. I think the following code will need to be changed to decrement _queueCount:

ctr = cancellationToken.Register(static obj =>
{
    ((TaskCompletionSource<RateLimitLease>)obj!).TrySetException(new OperationCanceledException());
}, tcs);
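
A rough sketch of how that registration might also give the queue slot back on cancellation; the tuple state, the Lock member, and the tokenCount variable are assumptions for illustration, not necessarily the shape of the eventual fix:

// Hypothetical sketch only: pass the limiter, the TCS, and the requested token
// count as the registration state so the static callback can undo the queue
// bookkeeping before failing the wait. `Lock` stands in for whatever object
// guards _queueCount in the real implementation.
ctr = cancellationToken.Register(static obj =>
{
    var (limiter, tcs, count) = ((TokenBucketRateLimiter, TaskCompletionSource<RateLimitLease>, int))obj!;
    if (tcs.TrySetException(new OperationCanceledException()))
    {
        lock (limiter.Lock)
        {
            // Only release the queue slot if this callback won the race to
            // complete the TCS; otherwise ReplenishInternal already handled it.
            limiter._queueCount -= count;
        }
    }
}, (this, tcs, tokenCount));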

And this logic will need to be updated to not decrement _queueCount in the cancellation case:

_queueCount -= nextPendingRequest.Count;
_tokenCount -= nextPendingRequest.Count;
Debug.Assert(_queueCount >= 0);
Debug.Assert(_tokenCount >= 0);
if (!nextPendingRequest.Tcs.TrySetResult(SuccessfulLease))
{
    // Queued item was canceled so add count back
    _tokenCount += nextPendingRequest.Count;
}
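
As a minimal sketch, assuming the cancellation callback above already decremented _queueCount for its own request, the replenish path would only charge the counters for requests it actually completes:

// Hypothetical sketch only: a request whose CancellationToken already fired has
// a completed TCS, so TrySetResult returns false and no counters are touched
// here; the cancellation callback is assumed to have fixed _queueCount itself.
if (nextPendingRequest.Tcs.TrySetResult(SuccessfulLease))
{
    _queueCount -= nextPendingRequest.Count;
    _tokenCount -= nextPendingRequest.Count;
    Debug.Assert(_queueCount >= 0);
    Debug.Assert(_tokenCount >= 0);
}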

We're also somewhat concerned about perf because the CancellationToken.Register callback will need a reference to both the TokenBucketRateLimiter instance and the tcs, forcing an extra allocation. It's far from the only allocation on this code path, though, and we've discussed pooling a custom IValueTaskSource in the future, which we could then use to store the extra reference and avoid the allocation.
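
To illustrate that concern (the names here are hypothetical, not a proposal for the actual fix), the extra state could be a small class like this, whose fields could later move onto a pooled IValueTaskSource so the allocation disappears:

// Hypothetical illustration of the perf concern: today the TCS alone is the
// registration state, so nothing extra is allocated. Once the callback also
// needs the limiter and the token count, the state becomes a closure, a boxed
// tuple, or a small class like this one, i.e. one more allocation per queued
// WaitAsync.
private sealed class CancellationState
{
    public CancellationState(TokenBucketRateLimiter limiter, TaskCompletionSource<RateLimitLease> tcs, int count)
    {
        Limiter = limiter;
        Tcs = tcs;
        Count = count;
    }

    public TokenBucketRateLimiter Limiter { get; }
    public TaskCompletionSource<RateLimitLease> Tcs { get; }
    public int Count { get; }
}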

@stephentoub
Member

What helper type are you referring to? AsyncOperation? I don't think we're sharing anything.

Ah, ok. I must be misremembering. I thought it was also using AsyncOperation, but I must have been thinking of this:

@BrennanConroy self-assigned this on Jan 30, 2022
@ghost added the in-pr (There is an active PR which will close this issue when it is merged) label on Feb 4, 2022
@ghost removed the in-pr (There is an active PR which will close this issue when it is merged) label on Feb 10, 2022
@BrennanConroy added this to the 7.0.0 milestone on Feb 10, 2022
@BrennanConroy removed the untriaged (New issue has not been triaged by the area owner) label on Feb 10, 2022
@ghost locked as resolved and limited conversation to collaborators on Mar 13, 2022