
bounded channels #2

Closed
dvc94ch opened this issue Jul 21, 2021 · 2 comments

Comments

@dvc94ch
Member

dvc94ch commented Jul 21, 2021

No description provided.

@dvc94ch
Member Author

dvc94ch commented Jul 24, 2021

So this is not an easy one, as mentioned here:

/// ## Back-pressure on `new_connections`
///
/// The [`quinn_proto::Endpoint`] object contains an accept buffer, in other words a buffer of the
/// incoming connections waiting to be accepted. When a new connection is signalled, we send this
/// new connection on the `new_connections` channel in an asynchronous way, and we only free a slot
/// in the accept buffer once the element has actually been enqueued on `new_connections`. There
/// are therefore in total three buffers in play: the `new_connections` channel itself, the queue
/// of elements being sent on `new_connections`, and the accept buffer of the
/// [`quinn_proto::Endpoint`].
///
/// Unfortunately, this design has the consequence that, on the network layer, we will accept a
/// certain number of incoming connections even if [`Endpoint::next_incoming`] is never even
/// called. The `quinn-proto` library doesn't provide any way to not accept incoming connections
/// apart from filling the accept buffer.
///
/// ## Back-pressure on connections
///
/// Because connections are processed by the user at a rate of their choice, we cannot properly
/// handle the situation where the channel from the background task to individual connections is
/// full. Sleeping the task while waiting for the connection to be processed by the user could
/// even lead to a deadlock if this processing is also sleeping waiting for some other action that
/// itself depends on the background task (e.g. if processing the connection is waiting for a
/// message arriving on a different connection).
///
/// In an ideal world, we would handle a background-task-to-connection channel being full by
/// dropping UDP packets destined to this connection, as a way to back-pressure the remote.
/// Unfortunately, the `quinn-proto` library doesn't provide any way for us to know which
/// connection a UDP packet is destined for before it has been turned into a [`ConnectionEvent`],
/// and because these [`ConnectionEvent`]s are sometimes used to synchronize the states of the
/// endpoint and connection, it would be a logic error to silently drop them.
///
/// We handle this tricky situation by simply killing connections as soon as their associated
/// channel is full.

I'm a bit skeptical of just killing the connection because a buffer is full. I think limiting the number of connections that quinn accepts in the TransportConfig, together with the size of the send/recv buffers, is the way to go for now.
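For context, this is roughly the shape of such a connection cap, sketched with a counting semaphore rather than the real quinn configuration (the `MAX_CONNECTIONS` value and the `accept_next`/`handle_connection` stubs are placeholders, not quinn-proto API):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Assumed cap; in the real transport this would come from the configuration.
const MAX_CONNECTIONS: usize = 4;

// Stand-ins for the real transport: a "connection" is just an id here.
async fn accept_next(id: u32) -> u32 { id }
async fn handle_connection(conn: u32) {
    println!("handling connection {}", conn);
}

#[tokio::main]
async fn main() {
    let permits = Arc::new(Semaphore::new(MAX_CONNECTIONS));
    let mut tasks = Vec::new();

    for id in 0..16u32 {
        // Wait for a free slot before accepting the next connection; this caps the
        // number of live connections instead of killing them when a buffer fills up.
        let permit = permits
            .clone()
            .acquire_owned()
            .await
            .expect("semaphore closed");
        let conn = accept_next(id).await;
        tasks.push(tokio::spawn(async move {
            handle_connection(conn).await;
            drop(permit); // slot is freed once the connection has been handled
        }));
    }

    for task in tasks {
        task.await.unwrap();
    }
}
```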

@dvc94ch
Member Author

dvc94ch commented Jul 24, 2021

So going with the task-sleeping approach for now. Fixed in #5

See quinn-rs/quinn#1167 for details.
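A minimal sketch of the difference, assuming a tiny bounded per-connection channel and a placeholder `ConnectionEvent` type (not the actual #5 change): a non-blocking send fails when the channel is full (the situation that previously killed the connection), while an awaited send just pauses the driver task until the connection drains an event:

```rust
use futures::channel::mpsc;
use futures::{executor::block_on, SinkExt, StreamExt};

/// Placeholder for a `quinn_proto::ConnectionEvent` routed to one connection.
#[derive(Debug)]
struct ConnectionEvent(u32);

fn main() {
    block_on(async {
        // A zero-buffer channel still gives the single sender one guaranteed slot,
        // so exactly one event fits before the channel counts as full.
        let (mut to_connection, mut events) = mpsc::channel::<ConnectionEvent>(0);

        // Fill the channel.
        to_connection.send(ConnectionEvent(0)).await.unwrap();

        // Old behaviour: a non-blocking send fails when the channel is full, and the
        // only remaining option at that point was to kill the connection.
        if to_connection.try_send(ConnectionEvent(1)).is_err() {
            println!("channel full: would previously have killed the connection");
        }

        // Task-sleeping behaviour: await until the connection task drains an event,
        // pausing the driver instead of dropping anything.
        let driver = async move {
            to_connection.send(ConnectionEvent(1)).await.unwrap();
            // `to_connection` is dropped here, letting the consumer loop end.
        };
        let connection_task = async move {
            while let Some(event) = events.next().await {
                println!("connection received {:?}", event);
            }
        };
        futures::join!(driver, connection_task);
    });
}
```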

dvc94ch closed this as completed Jul 26, 2021