[gzip]: allow gzip to work w/ http backend w/o content-length #14584
Conversation
…ransfer encoding Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
cc @rgs1 @alyssawilk
name: test
typed_config:
  "@type": type.googleapis.com/envoy.extensions.compression.gzip.compressor.v3.Gzip
)EOF");
Would it make sense to add a similar change to the http2 integration test? To make sure compression actually works for h2?
While my intuition is that it shouldn't be done, I don't see anything in either spec about the legality of compressing an HTTP/1 CONNECT payload, which makes me wonder whether it's legal and, if it is, whether we should support it by default.
https://tools.ietf.org/html/rfc7540#section-8.3
https://tools.ietf.org/html/rfc8441
@PiotrSikora @lizan thoughts?
Actually @rgs1, since you reported the issue in the first place, do you know if it broke WS on HTTP/1, HTTP/2, or both? If it's not broken globally, that would be a useful data point.
on HTTP/1 (downstream & upstream)
Fair. Either way I agree that integration tests for HTTP/2 would be good to have.
Transfer-Encoding is not allowed on CONNECT responses. RFC 7230 s3.3.1 states:
A server MUST NOT send a Transfer-Encoding header field in any 2xx (Successful) response to a CONNECT request
Sorry, I'm a little bit lost here: are we talking about an H/2 integration test for WS, or something else? If the former, I think the initializer (where this new compression config is added) is protocol agnostic and most likely used in all of the integration tests (and I assume we do have H/2 WS tests).
Yeah, if it runs for HTTP/1 and HTTP/2 you're fine here.
Unfortunately I don't think including extensions in core tests is legal, so I think you're going to have to add a similar test to the compressor lib.
Thanks for jumping on this fix so fast!
@@ -64,6 +64,7 @@ Removed Config or Runtime

New Features
------------
* compression: added new runtime guard `envoy.reloadable_features.enable_compression_without_chunked_header` (enabled by default) to control compression if no transfer-encoding=chunked header exists (now envoy would compress in such cases) to account for http/2 connections, where such transfer-encoding was removed.
compression: extended the compression filter to allow compressing when the content-length header is not present. This behavior may be temporarily reverted by setting envoy.reloadable_features.enable_compression_without_chunked_header to false.
@@ -495,6 +497,10 @@ bool CompressorFilter::isEtagAllowed(Http::ResponseHeaderMap& headers) const {

bool CompressorFilterConfig::DirectionConfig::isMinimumContentLength(
    const Http::RequestOrResponseHeaderMap& headers) const {
  if (StringUtil::caseFindToken(headers.getTransferEncodingValue(), ",",
Can this ever be true? I think the transfer-encoding headers are removed before the filter ever sees them. It would also result in HTTP/1.1 and HTTP/2 responses without content length being treated differently. I think this was buggy prior to your change, but I'd be inclined to fix and test while we're in here.
Why would h1 and h2 be different? (In this diff we do not distinguish h1 vs h2 in the compressor; the runtime feature just either permits or denies compression if there is no content-length header.) And yes, it seems you are right: at least manual testing shows there is no compression if only transfer-encoding=chunked is present. Do you propose somehow passing transfer-encoding into the filter, or just removing this condition? I think the original idea was that if it is chunked, we assume the response is big enough to be worth compressing (the whole idea behind isMinimumContentLength, as I understand it, is a performance optimization: don't compress something that is really small), so from my point of view we can just omit this logic (unless @rojkov has other thoughts on this).
So HTTP/1 on the wire has a transfer-encoding: chunked header for (most) responses without a content length; the alternative is framing by connection close.
HTTP/2 disallows the transfer-encoding header, though data is implicitly chunked up by HTTP/2 data frames.
In Envoy, the HCM removes any transfer encoding header before this code is called. If you add an integration test with T-E:chunked, and an assert here, I believe it can never trigger.
I believe the code needs to simply have one behavior if content length is present, and one if content length is absent, and not rely on the transfer encoding header.
This code is quite old. I guess the original thinking was indeed to assume the payload is big enough if transfer-encoding: chunked is present.

> In Envoy, the HCM removes any transfer encoding header before this code is called.

Is it about the chunked part only, or is it fully removed? If any t-e header is removed in the HCM, then CompressorFilter::isTransferEncodingAllowed() becomes useless too.
Hm, I thought it was removed early but looks like I may be incorrect:
https://github.com/envoyproxy/envoy/blob/master/source/common/http/conn_manager_utility.cc#L398
response_headers.removeTransferEncoding();
called here:
https://github.com/envoyproxy/envoy/blob/master/source/common/http/conn_manager_impl.cc#L1385
which looks like it's right before it's sent to the codec. Either way a test would be good to make sure we're consistent between HTTP/1 and HTTP/2 without content length, SG?
Which tests are we talking about? The WS/upgrade ones (to make sure that h1 and h2 behave the same), or something else?
    !hasCacheControlNoTransform(headers) && isEtagAllowed(headers) &&
    !headers.getInline(response_content_encoding_handle.handle());
const bool isCompressible =
    isEnabledAndContentLengthBigEnough && !Http::Utility::isUpgrade(headers) &&
Shouldn't the new upgrade check also be runtime guarded? Maybe have a temporary bool for checking this?
Sure, will add a runtime check here as well.
@@ -26,6 +26,9 @@ envoy_cc_extension(
    name = "config",
    srcs = ["config.cc"],
    hdrs = ["config.h"],
    extra_visibility = [
Unfortunately the reason this wasn't visible is we don't allow Envoy core code to depend on extensions.
Just for reference (mostly for myself, so it will be easier to find in the future): this should be reworked because of #9953.
Yeah, it's a relatively new policy. Before we occasionally broke people's builds if they opted out of certain extensions, then fixed it after they called it out. Better to not break in the first place :-P
If there's a place we could call that out where you would have seen it, please improve the docs in this area. It's totally intuitive if something fails to build due to visibility to just fix it. Normally we put such checks in the format scripts, but unfortunately we can't with this one until we clean up all the legacy visibility rules.
Changed visibility to test/extensions. I hope this is OK (for test/extensions to depend on source/extensions)?
why is it needed? I think we also don't want cross-extension dependencies, so explicit lists would be better. If that's onerous we could at least say //test/extensions/filters/http:subpackages ? I'll defer to @lizan if he has thoughts/preferences.
Yeah, looks like I can still use this target in tests even w/o explicit "extra_visibility". Will remove.
1. isUpgrade logic is now runtime guarded as well
2. renamed the runtime guard to enable_compression_without_content_length_header, as it now has nothing to do w/ chunked encoding
3. removed the transfer-encoding=chunked logic (we stop relying on it)
4. copy-pasted the websocket integration tests to the compressor (only the ones related to upgrade)

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@@ -0,0 +1,248 @@
#include "test/extensions/filters/http/common/compressor/websocket_with_compressor_integration_test.h"
This (and the .h) is a copy/paste of test/integration/websocket_integration_test; I've just removed a few things which are not related to upgrade, renamed classes/structs by adding WithCompressor, and added the compressor configuration itself.
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@alyssawilk @rojkov I'm somehow lost in the translation/discussion. What does it take for this to move forward? There was some discussion about integration tests, but I missed whether we were talking about the WS-related tests (which are done) or some other requirements (e.g. an IT for transfer-encoding, etc.)?
I think I mainly want an integration test that WS doesn't get zipped for
HTTP/1 or HTTP/2, and we do a manual check that the prior PR would fail
that test, to make sure we're regression testing the old bug.
If we have that test, I think you can just do a master merge and make sure
CI is happy, and I'm up for doing another pass!
Oh, sorry, also making sure we integration test that both HTTP/1 and HTTP/2 handle compressing when content length is absent (i.e. rely on the absence of the C-L header, rather than the presence of the T-E header).
This one is done. As I've mentioned in the "Testing" part of the PR, w/o the new checks the WS integration test fails, as responses start to have a "Vary: Accept-Encoding" header added by the compression filter.
The T-E:chunked logic was removed from the compression filter, so the current logic is the content-length check, plus another check making sure that the request is not an "upgrade" (by reusing Utility::isUpgrade). So the current version of the filter is HTTP-version agnostic (as t-e was removed). Going to merge with master and update the diff.
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@@ -34,6 +34,45 @@ Removed Config or Runtime

New Features
------------
* compression: extended the compression filter to allow compressing when the content-length header is not present. This behavior may be temporarily reverted by setting `envoy.reloadable_features.enable_compression_without_content_length_header` to false.
Hmm, looks like a bad automerge. Will fix in a few.
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
        std::make_tuple("transfer-encoding", "test\t, chunked\t", false),
        std::make_tuple("x-garbage", "no_value", true)));

TEST_P(IsTransferEncodingAllowedTest, Validate) {
Why is this test removed? I think we are still interested in testing that already compressed payloads are not compressed twice.
will move back
Wait, no. These are transfer-encoding tests. Already-encoded content would have "content-encoding" headers, not transfer-encoding. That's why I removed it.
The upstream can set either "content-encoding" (encoding as a property of the resource) or "transfer-encoding" (encoding as a property of the message). In both cases the payload would be compressed, but in the latter case "content-encoding" would be missing.
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Cool, looking good!
if (!end_stream && request_config.compressionEnabled() &&

// temp variable to preserve old behavior w/ http upgrade. should be removed when
// enable_compression_without_content_length_header runtime variable would be removed
const bool allowUpgrade =
-> is_upgrade_allowed here and below?
          Http::Headers::get().TransferEncodingValues.Chunked);
  if (Runtime::runtimeFeatureEnabled(
          "envoy.reloadable_features.enable_compression_without_content_length_header")) {
    // returning true to account for HTTP/2 where content-length is optional
and HTTP/1.1 chunked bodies, no?
was this one overlooked?
Could you please elaborate on the last comment? With this new diff we would compress even if there is no "transfer-encoding=chunked" header (if the runtime flag is set to true), aka there is no point in a separate check for chunked, because we would have logic like:

if (true || chunked) {
  return true;
}

This check would always be true. If the runtime flag is set to false, we would fall back to the previous behavior, aka "return (chunked in headers?)".
Yes, true. Sorry, I meant to make the point that the comment implied the return only applies to HTTP/2, but it applies to HTTP/1.1 without content length too. How about something like: "Return true to ignore the minimum length configuration if no content length is specified in the headers"?
)EOF"); | ||
doRequestCompression({{":method", "post"}}, false); | ||
Http::TestResponseHeaderMapImpl headers{{":status", "200"}}; | ||
doResponseNoCompression(headers); |
Your call, but I'd be inclined to combine the request and response tests into one CompressNoContentLength test.
  codec_client_->close();
}

TEST_P(WebsocketWithCompressorIntegrationTest, RouteSpecificUpgrade) {
I think we can nix this test.
// Technically not a websocket test, but verifies normal upgrades have parity
// with websocket upgrades
TEST_P(WebsocketWithCompressorIntegrationTest, NonWebsocketUpgrade) {
Can we also snag ProxyingConnectIntegrationTest, ProxyConnect to make sure CONNECT requests and responses are handled correctly too?
1. moving back the TE:chunked logic (to make sure that w/ disabled runtime there is no change)
2. collapsing UTs for req/response into 1
3. removing the route-specific upgrade from integration tests
4. removed extra_visibility for gzip

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Have not yet added integration tests for CONNECT as requested by @alyssawilk, so probably no need to review right now.
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
OK, I've added an analog of (ProxyingConnectIntegrationTest, ProxyConnect) (the difference is that in initialize the compression filter was also added). I think I've addressed all comments and this is ready for review.
Retrying Azure Pipelines:
Hmm, presubmit is failing with:

However my diff does not touch anything websocket-related (the core part of it; it only handles upgrade in the compressor), and it looks like this filter is not even enabled in the examples:

So no idea why it is failing.
/retest
Retrying Azure Pipelines:
@rojkov @alyssawilk does this require anything else from my side, or are we just waiting for someone to merge this diff in?
Oh, looks like upstream changed and a new merge is required.
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Presubmit failed with:
I signed off, but for most PRs we have 2 maintainers review. I've tagged @snowp, who was out late last week but should be able to do a pass this week.
Thanks for the explanation, @alyssawilk. Question: if GitHub marks the PR as having a merge conflict, should I update it before the second maintainer looks? (Asking because it looks like GitHub removes the "reviewed" tag of the first person if the PR is updated.)
It's always worth resolving merge conflicts - even if it removes the LGTM stamp, generally we operate on a maintainer signing off, not the actual stamp. If there are significant changes between the first and second reviewer signing off, sometimes reviewer #2 will tag reviewer #1 towards the end, but it's not the norm.
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Trying one more merge to see if it fixes coverage, and a ping for @snowp.
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Thanks this seems reasonable, just a few comments
const bool is_not_upgrade =
    !Http::Utility::isUpgrade(headers) ||
    !Runtime::runtimeFeatureEnabled(
        "envoy.reloadable_features.enable_compression_without_content_length_header");
Seems odd to have the value of is_not_upgrade be dependent on the runtime flag? Can we rename the variable to make a bit more sense?
The idea was that we would drop the runtime flag eventually and won't need to rename. If you feel strongly about it, I will try to rework this part.
const bool is_not_upgrade =
    !Http::Utility::isUpgrade(headers) ||
    !Runtime::runtimeFeatureEnabled(
        "envoy.reloadable_features.enable_compression_without_content_length_header");
Same here. Alternatively, split it into two different variables.
@@ -0,0 +1,297 @@
#include "test/extensions/filters/http/common/compressor/compressor_integration_tests.h"
I would love a high-level explanation of what we're testing in this file. I think it's that upgrades work through the compressor filter, but it's not clear without reading a lot of code what kind of interactions (if any?) we're trying to validate.
ok. will add
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Thanks for iterating on this! Seems like CI is broken
@@ -8,6 +8,10 @@

namespace Envoy {

/*
Integration test to make sure that new logic, which does not care about content-length header
This comment reads like a PR comment, not a code comment. Can we phrase this in a way that will age better? For example, "Test verifying that we don't add any additional headers when the upstream doesn't respond with a content-length header", or something to that effect.
@@ -34,6 +38,10 @@ class WebsocketWithCompressorIntegrationTest : public HttpProtocolIntegrationTes

  IntegrationStreamDecoderPtr response_;
};

/*
Integration test to make sure that new logic, which does not care about content-length header
Same here, the "new logic" is only new right now
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Thanks!
…xy#14584) This is an attempt to address the issue in envoyproxy#14121. tl;dr: some HTTP/2 backends do not send a "content-length" header in replies, as the HTTP/2 spec carries the same info as part of the frame (unfortunately there seems to be no way to pass this value from the frame to the filter), and transfer-encoding=chunked (before this diff, having that header/encoding was a prerequisite if content-length was not defined) was removed from the HTTP/2 spec. As discussed in the issue itself, if there is no content-length header we now try to gzip by default. This new behavior is controlled by the runtime-guarded feature envoy.reloadable_features.enable_compression_without_chunked_header. Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com> Signed-off-by: Auni Ahsan <auni@google.com>
Signed-off-by: Nikita V. Shirokov tehnerd@tehnerd.com
Commit Message:
This is an attempt to address the issue in #14121.
tl;dr: some HTTP/2 backends do not send a "content-length" header in replies, as the HTTP/2 spec carries the same info as part of the frame (unfortunately there seems to be no way to pass this value from the frame to the filter), and transfer-encoding=chunked (before this diff, having that header/encoding was a prerequisite if content-length was not defined) was removed from the HTTP/2 spec.
As discussed in the issue itself, if there is no content-length header we now try to gzip by default.
This new behavior is controlled by the runtime-guarded feature envoy.reloadable_features.enable_compression_without_chunked_header.
Additional Description:
This is a rework of PR #14405. Compared to the original PR, this one has proper handling of websocket (or any other) upgrade requests.
Compared to the original PR, this one includes:
Risk Level:
Low
Testing:
Same tests as in the original PR #14405, plus:
the websocket integration tests now contain the compression filter and pass (w/o the isUpgrade fix and w/ the compression filter these ITs were failing with:
Docs Changes:
Release Notes:
Platform Specific Features: