UNDERTOW-2424: Fix the request race produced by #1495 (#1645)

Conversation

@carterkozak (Contributor) commented on Aug 12, 2024:

Jira: https://issues.redhat.com/browse/UNDERTOW-2424
Merged PR: #1648
2.2.x PR: #1652
2.3.x PR: #1653

The failure was the result of a disagreement over connection ownership: an initial thread ends an exchange and releases the connection, but then continues to operate on it with a premature `flush()` from an illegal thread (per the Undertow threading model).

The `HttpServerExchange` uses a `WriteDispatchChannel` to avoid interfering with the underlying connection after the exchange is completed; however, the `WriteDispatchChannel` is an outer layer, wrapped around all response wrappers. The response wrappers themselves may still interact with the underlying connection even after the `HttpReadListener` state has been reset to allow another request to be parsed. We must wrap the lower-level streams with a similar guard to prevent this spillover.
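
As an illustration of the approach (a minimal sketch, not the conduit added by this PR), a guard conduit along the following lines could sit between the response wrappers and the connection so that calls made after the exchange has completed never reach the shared connection. The class name is hypothetical, and the sketch assumes XNIO's `AbstractStreamSinkConduit` as the delegation base and `HttpServerExchange.isComplete()` as the "finished" check; the actual conduit in this PR performs an `isFinished` check equivalent to the one in `HttpServerExchange.WriteDispatchChannel` (see the review discussion below).

```java
import java.io.IOException;
import java.nio.ByteBuffer;

import org.xnio.conduits.AbstractStreamSinkConduit;
import org.xnio.conduits.StreamSinkConduit;

import io.undertow.server.HttpServerExchange;

/**
 * Hypothetical guard conduit: once the exchange it was created for has
 * completed, it stops delegating to the underlying connection, so a late
 * call (for example a stale flush()) cannot touch the next request that
 * is already being served on the same connection.
 */
final class GuardedStreamSinkConduit extends AbstractStreamSinkConduit<StreamSinkConduit> {

    private final HttpServerExchange exchange;

    GuardedStreamSinkConduit(StreamSinkConduit next, HttpServerExchange exchange) {
        super(next);
        this.exchange = exchange;
    }

    private boolean detached() {
        // Assumption: isComplete() is true once both the request and the
        // response of this exchange have terminated.
        return exchange.isComplete();
    }

    @Override
    public int write(ByteBuffer src) throws IOException {
        if (detached()) {
            // Design choice for this sketch: fail loudly rather than write
            // into another request's response.
            throw new IOException("Exchange already completed");
        }
        return super.write(src);
    }

    @Override
    public boolean flush() throws IOException {
        if (detached()) {
            // Report success without touching the connection.
            return true;
        }
        return super.flush();
    }
}
```

A full implementation would guard every write-path method (`writeFinal`, `transferFrom`, `terminateWrites`, and so on) in the same way, which is also the shape of the `transferFrom` override discussed in the review comments below.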

In practice, the failure I encountered and reproduced was the result of `DeflatingStreamSinkConduit` invoking `next.terminateWrites()` followed by `next.flush()`. The `terminateWrites()` invocation causes the delegate `ChunkedStreamSinkConduit` to flush within `terminateWrites()`, which triggers the end of the exchange. The subsequent `flush()` in `DeflatingStreamSinkConduit` then applies to another request entirely. This can be reproduced quickly using the reproducer branch (#1495) with a slight code modification, adding a `sleep(100)` as follows:

```diff
if (performFlushIfRequired()) {
    state |= NEXT_SHUTDOWN;
    freeBuffer();
    next.terminateWrites();
+    try {
+        Thread.sleep(100);
+    } catch (InterruptedException e) {}
    return next.flush();
} else {
```
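
(The added sleep is not part of the fix; it merely widens the window between the exchange being ended inside `terminateWrites()` and the stale `flush()` that follows, making the cross-request flush easy to observe.)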

```diff
@@ -1397,7 +1397,7 @@ public StreamSinkChannel getResponseChannel() {
         final WrapperStreamSinkConduitFactory factory = new WrapperStreamSinkConduitFactory(wrappers, responseWrapperCount, this, sinkChannel.getConduit());
         sinkChannel.setConduit(factory.create());
     } else {
-        sinkChannel.setConduit(connection.getSinkConduit(this, sinkChannel.getConduit()));
+        sinkChannel.setConduit(new ExchangeLifecycleDetachableStreamSinkConduit(connection.getSinkConduit(this, sinkChannel.getConduit()), this));
```
@carterkozak (author) commented on the change above:

In this case there are no wrappers, so we don't necessarily need to wrap the conduit; however, I opted to include the wrapper here for symmetry/safety. Please let me know if you'd prefer for this line to be reverted!


```java
@Override
public long transferFrom(FileChannel src, long position, long count) throws IOException {
    if (isFinished()) {
```
A reviewer (Contributor) asked:

Have you done any performance testing on repeated calls to `isFinished()`? Is there any way to memoize/cache the check? Am I optimizing prematurely? :)

@carterkozak (author) replied:

This didn't incur a measurable cost in my benchmarks. The `isFinished` check here is identical to the `isFinished` check within `HttpServerExchange.WriteDispatchChannel`, which handles a similar check at a slightly different layer.
It shouldn't have much performance impact because the underlying check is effectively `(state & 2048) == 2048`, where `state` is a standard non-volatile field.
I considered splitting the existing `WriteDispatchChannel` implementation so that we only need to do this check in the new conduit implementation; however, that seemed riskier, and this way we still short-circuit the wrapper logic after the response has completed.
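
For context on why the repeated check is cheap: the test described above is just a bit-mask check on a plain `int` field, roughly as in the hypothetical sketch below (the class and flag names are illustrative and not taken from the Undertow sources; the value 2048 is the one quoted above).

```java
// Hypothetical illustration of the check described above; not Undertow source.
final class ResponseState {
    // 1 << 11 == 2048, the bit quoted in the comment above.
    private static final int FLAG_RESPONSE_TERMINATED = 1 << 11;

    private int state; // plain, non-volatile field

    boolean isFinished() {
        // Equivalent to (state & 2048) == 2048 for a single-bit mask.
        return (state & FLAG_RESPONSE_TERMINATED) != 0;
    }

    void terminateResponse() {
        state |= FLAG_RESPONSE_TERMINATED;
    }
}
```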

@carterkozak (author):

I'm trying a more precise alternative, which I think handles the root cause a bit better: #1648

@carterkozak (author):

Closing this in favor of #1648.

Labels: duplicate (Duplicates other pull request(s))