
Initial support for upstream HTTP/1.1 tunneling #13293

Merged Nov 23, 2020 (10 commits)

Conversation

@irozzo-1A (Contributor) commented Sep 28, 2020

Signed-off-by: Iacopo Rozzo iacopo@kubermatic.com


Commit Message:
Additional Description:
Risk Level: Low
Testing: unit test, integration, manual testing
Docs Changes: Added documentation on how to configure Envoy for tunneling TCP over HTTP/1
Release Notes: n/a (still hidden)
Fixes #11308

@irozzo-1A (Contributor, Author) commented Sep 28, 2020

This is still WIP. I have the following points to address:

  • Make sure that HttpUpstream::isValidBytestreamResponse accepts well-formed HTTP CONNECT responses as defined in RFC 7231
  • Add unit tests
  • Add integration tests
  • Update docs
  • (optional) Validate hostname provided with tunneling configuration has authority form
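As a sketch of the first checklist item: RFC 7231 §4.3.6 says any 2xx (Successful) response to CONNECT means the tunnel is established, and such a response has no message body. A minimal illustration in Python — not Envoy's actual `isValidBytestreamResponse` implementation:

```python
def is_valid_connect_response(status_code: int) -> bool:
    """Per RFC 7231 section 4.3.6, any 2xx (Successful) status in
    response to CONNECT indicates the tunnel is established; any other
    status means the connection must not be treated as a tunnel."""
    return 200 <= status_code < 300

# A 2xx CONNECT response also has no message body, so after the header
# block the connection switches to opaque byte streaming.
```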

@alyssawilk (Contributor)

Sweet, so excited to see progress on this! lmk when you want me to take a pass!

@irozzo-1A (Contributor, Author)

> Sweet, so excited to see progress on this! lmk when you want me to take a pass!

Thx @alyssawilk! I'll let you know. I think I'll have time to continue on this tomorrow or Friday.

I have just a question. For HTTP/2 you ensured that no data is sent before the response to the CONNECT is received from the upstream (#10974). From what I observed, the HTTP/1 flow is different and the same logic does not work. Can you confirm that? Do you have an idea where I could implement this logic?

@alyssawilk (Contributor)

I think this is less about HTTP/2 and HTTP/1 and more about Envoy proxying CONNECT vs Envoy originating CONNECT. Either way I think it'd be great to fix.

I think to do that we could defer onPoolReadyBase from Filter::onPoolReady and instead call it when HttpUpstream::isValidBytestreamResponse is confirmed. That could be done as a standalone PR or with this one - whatever you prefer!

@irozzo-1A (Contributor, Author)

Ok, you're right. I did not look carefully enough into this. I will leave it for another PR in this case as it is not exclusively related to HTTP/1.1 tunneling.

@irozzo-1A irozzo-1A force-pushed the support-http1-tunneling branch 3 times, most recently from cb6ca56 to 5bb021d Compare October 6, 2020 16:15
@irozzo-1A (Contributor, Author) commented Oct 6, 2020

Hey @alyssawilk, I found the time to progress a bit on this and I have two issues related to the integration tests I added so far. I would like to have your opinion on those:

  • The first test is based on the basic TcpTunnelingIntegrationTest and is failing on the waitForEndStream assertion. This looks like a real issue: if my understanding is correct, after the downstream connection is closed from the client side and all data is transmitted to the upstream, the TCP connection with the upstream should be closed as well. From the test failure and the traffic capture I made, this does not seem to be the case. Do you agree this is the expected behavior? Do you have a rough idea of how this could be fixed?
  • The second test is based on the InvalidResponseHeaders TcpTunnelingIntegrationTest and is failing on the waitForReset assertion. In this case, it seems to be an issue in the test itself: when the upstream responds with a 500, both the upstream and downstream connections are closed as expected. From what I observed so far, the test failure seems to be related to the onResetStream method not being called on the FakeStream, but I have not figured out the root cause yet.

@alyssawilk (Contributor)

Hm, the end stream issue is an interesting one.
In HTTP/2 there's a clear way to end the stream for non-chunked data, so if a connection is half closed on TCP (which we support), we can end-stream for HTTP/2, then further upstream if we downgrade back to TCP we can transfer that end-stream back into a half close.
For HTTP/1 there's no equivalent way to end the stream. We could half-close the TCP connection, but Envoy specifically doesn't support half close for HTTP, so it'd result in a connection close at the far side rather than a reset stream. I think the best thing we can do here is send a half close, and note in the docs that CONNECT over HTTP/1.1 uses half-close semantics, which will not fully work with servers that don't support half close over HTTP/1.1.
(please snag me on slack if that doesn't make sense - I'm not sure how familiar you are with all the above and I can go into as much detail as you need)

for waitForReset you may need to wait for a connection close with HTTP/1.1 since that's how stream resets are communicated.
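The half-close semantics discussed above can be illustrated at the socket level (an illustrative Python sketch, not Envoy code): shutting down only the write side sends a FIN, the peer can still drain pending data before seeing EOF, and the reverse direction stays usable.

```python
import socket

# Half close: the writer sends its last bytes and then FINs only its
# write side; the reader drains the data, then sees EOF (empty read).
a, b = socket.socketpair()
a.sendall(b"last bytes")
a.shutdown(socket.SHUT_WR)

received = b.recv(1024)   # pending data is still delivered
eof = b.recv(1024)        # then a clean EOF, not an error

# The other direction is still open, so the peer can keep responding.
b.sendall(b"reply")
reply = a.recv(1024)

a.close()
b.close()
```

A peer that does not support half close would instead treat the FIN as a full connection close, which is exactly the caveat raised above for HTTP/1.1 CONNECT.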

@irozzo-1A irozzo-1A force-pushed the support-http1-tunneling branch from d57a897 to cd7eb4e Compare October 9, 2020 19:21
@alyssawilk (Contributor) left a comment


Looking better and better. :-)
One drive-by comment, but otherwise I'm going to wait for you to tell me I should do another pass. Just ping me when you're ready!

upstream_ = std::make_unique<HttpUpstream>(*upstream_callbacks_,
config_->tunnelingConfig()->hostname());
if ((cluster->info()->features() & Upstream::ClusterInfo::Features::HTTP2) == 0) {
// TODO(irozzo): How to detect HTTP3

I think you can replace the old early return-false with features() & HTTP3 != 0

@irozzo-1A (Contributor, Author) replied:

Unfortunately, as we discussed offline, the only feature flag I see at the moment is about HTTP2: https://github.com/envoyproxy/envoy/blob/master/include/envoy/upstream/upstream.h#L718.
Anyway, HTTP3 upstreams do not seem to be supported yet.

@irozzo-1A irozzo-1A force-pushed the support-http1-tunneling branch 4 times, most recently from 84fa458 to c6800fd Compare October 19, 2020 14:42
@irozzo-1A (Contributor, Author)

I'm summarizing here the issues that are blocking me at the moment.

The first issue I met is that the downstream client closing the connection (i.e. sending a FIN) was not resulting in the upstream connection being closed (this is the test for this scenario). This is because with an HTTP/1 upstream, calling encodeData on the encoder with end_stream set to true does not result in the connection with the upstream being closed. For comparison, with an HTTP/2 upstream, calling encodeData with end_stream set to true results in a frame with the END_STREAM flag set.

After discussing this with @alyssawilk, we decided not to implement half-close semantics for the moment and to reset the encoder on doneWriting. Unfortunately this has the side effect of closing the upstream connection without flushing (the TcpProxyUpstreamFlush integration test reveals this issue).

@mattklein123 @snowp, do you have any suggestions on how to proceed?

@mattklein123 (Member)

> @mattklein123 @snowp, do you have any suggestions on how to proceed?

Sorry, can you clarify what you are asking for? What are the current problems? I see the text above about half closed but I don't think I have enough context to help without more info.

@irozzo-1A (Contributor, Author) commented Oct 20, 2020

> Sorry can you clarify what you are asking for? What are the current problems? I see the text above about half closed but I don't think I have enough context to help without more info.

Sure. Hope this clarifies.

Description of the scenario

When a TCP client connects to Envoy acting as a TCP proxy with an HTTP/1 tunneling config, an HTTP/1.1 CONNECT request is generated and sent to the upstream. If the upstream responds with a 2xx status code, the connection between Envoy and the upstream proxy is hijacked for streaming the TCP data in both directions. When the downstream client closes its end of the socket (the write end), I expect Envoy to flush all the pending data and close its write end with the upstream proxy.
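On the wire, the handshake described above looks roughly like this (an illustrative sketch; the host and port values are made up, and real responses may use a different reason phrase):

```python
# Illustrative HTTP/1.1 CONNECT handshake; after a 2xx response both
# peers stop speaking HTTP and relay opaque bytes in both directions.

def build_connect_request(host: str, port: int) -> bytes:
    # The request target uses authority form ("host:port") and the
    # Host header repeats it, per RFC 7231 section 4.3.6.
    authority = "{}:{}".format(host, port)
    return ("CONNECT {0} HTTP/1.1\r\n"
            "Host: {0}\r\n"
            "\r\n").format(authority).encode("ascii")

request = build_connect_request("host.com", 443)
tunnel_established = b"HTTP/1.1 200 Connection Established\r\n\r\n"
```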

What is happening

This is what happens when the downstream client finishes writing and closes its end of the socket:

  1. Envoy::TcpProxy::Filter::onData(data, true) the second argument is end_stream
  2. Envoy::TcpProxy::HttpUpstream::encodeData(data, true)
  3. Envoy::Http::Http1::StreamEncoderImpl::encodeData(data, true)

The problem is that this is not triggering the termination of the connection with the upstream.

What I tried so far, as mentioned above, is to reset the stream when encodeData is called with end_stream set. This tentative fix failed because the connection is closed with NoFlush, as you can see in the backtrace below, and this may result in data loss.

What I'm asking for is basically some guidance on how to approach this scenario, as I'm not familiar with the codebase yet. Naively, I thought I could simply convey the FlushWrite type to Envoy::Network::ConnectionImpl::close somehow. I already discussed this with @alyssawilk and I think she would probably suggest starting to add half-close semantics for HTTP/1.1, but she also suggested I bring the topic here to see if you see a less complicated solution.

  * frame #0: 0x00000001049f51e2 tcp_tunneling_integration_test`Envoy::Network::ConnectionImpl::close(this=0x00000000038c5080, type=NoFlush) at connection_impl.cc:105:28
    frame #1: 0x00000001041482ea tcp_tunneling_integration_test`Envoy::Http::CodecClient::close(this=0x00000000038db050) at codec_client.cc:54:42
    frame #2: 0x00000001011020ea tcp_tunneling_integration_test`Envoy::Http::Http1::ConnPoolImpl::onDownstreamReset(this=0x00000000038f0360, client=0x0000000003829a80) at conn_pool.cc:46:25
    frame #3: 0x0000000101105588 tcp_tunneling_integration_test`Envoy::Http::Http1::ConnPoolImpl::StreamWrapper::onResetStream(this=0x000000000389b580, (null)=LocalReset, (null)=(__data = 0x0000000000000000, __size = 0)) at conn_pool.h:54:24
    frame #4: 0x000000010418928e tcp_tunneling_integration_test`Envoy::Http::StreamCallbackHelper::runResetCallbacks(this=0x00000000037debe8, reason=LocalReset) at codec_helper.h:49:20
    frame #5: 0x00000001041eb6fa tcp_tunneling_integration_test`Envoy::Http::Http1::ClientConnectionImpl::onResetStream(this=0x00000000037de880, reason=LocalReset) at codec_impl.cc:1296:40
    frame #6: 0x00000001041d4e37 tcp_tunneling_integration_test`Envoy::Http::Http1::ConnectionImpl::onResetStreamBase(this=0x00000000037de880, reason=LocalReset) at codec_impl.cc:835:3
    frame #7: 0x00000001041d4a6f tcp_tunneling_integration_test`Envoy::Http::Http1::StreamEncoderImpl::resetStream(this=0x00000000037debe0, reason=LocalReset) at codec_impl.cc:313:15
    frame #8: 0x0000000103fec279 tcp_tunneling_integration_test`Envoy::TcpProxy::HttpUpstream::resetEncoder(this=0x00000000037dad70, event=LocalClose, inform_downstream=true) at upstream.cc:115:35
    frame #9: 0x0000000103fef671 tcp_tunneling_integration_test`Envoy::TcpProxy::Http1Upstream::doneWriting(this=0x00000000037dad70) at upstream.cc:302:3
    frame #10: 0x0000000103fec40c tcp_tunneling_integration_test`Envoy::TcpProxy::HttpUpstream::encodeData(this=0x00000000037dad70, data=0x0000000003813740, end_stream=true) at upstream.cc:76:5
    frame #11: 0x0000000103fa7bdc tcp_tunneling_integration_test`Envoy::TcpProxy::Filter::onData(this=0x00000000038ea358, data=0x0000000003813740, end_stream=true) at tcp_proxy.cc:563:16
    frame #12: 0x0000000104a74e45 tcp_tunneling_integration_test`Envoy::Network::FilterManagerImpl::onContinueReading(this=0x00000000038136f8, filter=0x0000000000000000, buffer_source=0x0000000003813680) at filter_manager_impl.cc:66:48
    frame #13: 0x0000000104a75772 tcp_tunneling_integration_test`Envoy::Network::FilterManagerImpl::onRead(this=0x00000000038136f8) at filter_manager_impl.cc:76:3
    frame #14: 0x00000001049f7e11 tcp_tunneling_integration_test`Envoy::Network::ConnectionImpl::onRead(this=0x0000000003813680, read_buffer_size=0) at connection_impl.cc:292:19
    frame #15: 0x0000000104a017be tcp_tunneling_integration_test`Envoy::Network::ConnectionImpl::onReadReady(this=0x0000000003813680) at connection_impl.cc:574:5
    frame #16: 0x00000001049fdf1c tcp_tunneling_integration_test`Envoy::Network::ConnectionImpl::onFileEvent(this=0x0000000003813680, events=1) at connection_impl.cc:534:5
    frame #17: 0x0000000104a3ff0e tcp_tunneling_integration_test`Envoy::Network::ConnectionImpl::ConnectionImpl(this=0x00000000038ea2f8, events=1)::$_6::operator()(unsigned int) const at connection_impl.cc:71:54

@mattklein123 (Member)

> The problem is that this is not triggering the termination of the connection with the downstream.

Do you mean upstream?

Also, in the description above, when you say "close" do you mean "half close"? Can you clarify, in terms of full/half close semantics, what you want the behavior to be, being extremely careful about using downstream and upstream accurately?

@alyssawilk (Contributor)

Yeah, basically for HTTP/2 we can "end stream" by sending a frame with the END_STREAM flag. For HTTP/1 we want to signal that the stream has ended by closing the connection, and there's no way to flush-close upstream. My suggestion was that we add an allow-frame-by-connection-close option to the HTTP/1.1 protocol options; if it's true and there's no content length or transfer encoding, the end stream bool would result in a Network::Connection half close.

@mattklein123
Copy link
Member

> My suggestion was that we add an allow-frame-by-connection-close to the HTTP/1.1 protocol options and if it's true, and there's no content length or transfer encoding, the end stream bool would result in a Network::Connection half close.

+1. It's possible we don't need a config option and can do something like how I dealt with the Hystrix disable-chunk-encoding setting:

virtual void disableChunkEncoding() PURE;

@irozzo-1A (Contributor, Author)

> Do you mean upstream?

Yes, you're right I meant upstream. I fixed this in the original comment.

> Also, in the description above when you say "close" do you mean "half close?" Can you clarify in terms of full/half close semantics what you want the behavior to be being extremely careful about using downstream and upstream accurately?

To be as precise as I can, the diagram below represents what I expect to happen from an L4 standpoint. But what I got so far is either:

  • No FIN sent from Envoy to Upstream (before e6978b4)
  • FIN sent but not all data transmitted because of no flush-close (after e6978b4)

[diagram: envoy-http1-tunneling]

@alyssawilk (Contributor)

ah yeah, that'd work too. Basically I think we want it "off by default" since we don't normally allow frame by close for requests and it'd limit the risk of the change.

@mattklein123 (Member)

I think there are 2 different things being requested here though:

  1. Possibly support half-close for HTTP/1.1 connections.
  2. If we do a full-close, support flush write

I think (2) is easy and can be done the way @alyssawilk and I are suggesting (modulo some minor details). If we need to support (1) I think that will be a larger and more scary change.

Is (2) sufficient for now?

@alyssawilk (Contributor)

I think you have the difficulty inverted.
I think for flush write it'd need to be flush write and delay, which means the lifetime of the upstream stream needs to be prolonged, which is really hard.
For half close, I think doing it for this corner case ("if end stream and half close is enabled, which it isn't by default, and no content length and not chunked, FIN the underlying connection") is pretty straightforward. I think the only bit we need to do for upstream is make sure that if upstream closes, the FIN doesn't arrive ahead of the data, which I think we already handle correctly.
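The guard in that corner case can be written down directly (a sketch with hypothetical names, not Envoy's actual code): FIN the underlying connection only when the stream is ending, the opt-in flag is enabled, and the body is not otherwise framed.

```python
def should_half_close(end_stream: bool,
                      half_close_enabled: bool,
                      has_content_length: bool,
                      is_chunked: bool) -> bool:
    # Only fall back to "framing by connection close" when nothing else
    # delimits the body and the behavior was explicitly opted in.
    return (end_stream
            and half_close_enabled
            and not has_content_length
            and not is_chunked)
```

Keeping the flag off by default matches the risk argument above: framing by connection close is not normally allowed for requests.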

@mattklein123 (Member)

> I think you have difficulty inverted.

Quite possible. :)

> I think for flush write it'd need to be flush write and delay, which means the lifetime of the upstream stream needs to be prolonged which is really hard.

True, though I wonder if we could get away with no delay initially? For supporting the delay, TCP proxy already supports a "flush write connection holder." Given that the connection pools now share a lot of code I wonder if we could just make this generic and then opt-in for this case?

> For half close, I think doing it for this corner case ("if end stream and half close is enabled which it isn't by default and no content length and not chunked, FIN the underlying connection") is pretty straight-forward. I think the only bit we need to do for upstream is make sure that if upstream closes the FIN doesn't arrive ahead of the data, which I think we already handle correctly.

Yeah, I just worry about the small details here and how many things might need to be fixed given that the HTTP code has never worked with half-close. It might just work. I'm guessing it won't. :)

@alyssawilk (Contributor)

Yeah, I was looking at the TCP drain code, but it takes raw connection pointer stuff and I think hooking that up would be a mess.
The other question really is whether we need to communicate half close. We do need half close support for TCP in general, so while I agree it's likely to result in full-on close semantics at the HTTP/1.1 CONNECT-reading peer today, I think in the long run, the same way that the client can half close TCP, we're going to need to support sending that half close through an HTTP CONNECT tunnel and decapsulating it. I think we should send the FIN and acknowledge that it's only the first step on that journey; eventually for HTTP/1.1 CONNECT we should take that FIN, convert it into a "stream ended", and half close the upstream TCP connection. But that's a set of PRs for another day :-P

@irozzo-1A irozzo-1A force-pushed the support-http1-tunneling branch from 05e3529 to d0501f4 Compare October 28, 2020 08:53
Signed-off-by: Iacopo Rozzo <iacopo@kubermatic.com>
@irozzo-1A (Contributor, Author)

> Looks pretty good. Just a few nits about some edge cases and tests.

Thx for your review @antoniovicente! I think I addressed the points you raised, apart from:

#13293 (comment)
#13293 (comment)

where I would like also to have @alyssawilk opinion.

Signed-off-by: Iacopo Rozzo <iacopo@kubermatic.com>
antoniovicente
antoniovicente previously approved these changes Nov 16, 2020
@@ -26,10 +26,11 @@ static_resources:
stat_prefix: tcp_stats
cluster: "cluster_0"
tunneling_config:
hostname: host.com
Reviewer:
sorry, why 10002 when it's connecting to port 10001?

@irozzo-1A (author):

My idea was to show that the destination port can be different from the port used by the upstream proxy, but maybe using 443 makes more sense to be consistent with terminate_connect.yaml

Reviewer:

Sending CONNECT foo.com:1234 to foo.com:1235 seems odd to me, but if you think it's worth explicitly testing for that how about a comment so other folks don't think it's just an off by one error :-)

@irozzo-1A (author), Nov 16, 2020:

Just to be sure we are on the same page: in this scenario, the CONNECT is sent to the upstream, which is 127.0.0.1:10001, targeting host.com:10002. IMO it should be quite common to have the upstream proxy listening on one port and receiving CONNECT requests targeting a different one. What could be considered a bit odd, maybe, is that the L2 proxy receives CONNECT host.com:10002 and connects instead to www.google.com:443. WDYT?

Signed-off-by: Iacopo Rozzo <iacopo@kubermatic.com>
alyssawilk
alyssawilk previously approved these changes Nov 17, 2020
@alyssawilk (Contributor) left a comment

@snowp did you want to take a pass? if not I'll merge tomorrow.

antoniovicente
antoniovicente previously approved these changes Nov 18, 2020
@snowp (Contributor) left a comment

Looks good for the most part, just a couple comments

shutil.copy(os.path.join(SCRIPT_DIR, 'encapsulate_in_connect.yaml'), OUT_DIR)
shutil.copy(os.path.join(SCRIPT_DIR, 'encapsulate_in_http2_connect.yaml'), OUT_DIR)
Reviewer:

Should the new HTTP1 example be included here?

@irozzo-1A (author):

not sure about this, WDYT @alyssawilk ?

Reviewer:

Yeah, might as well for completeness.

@irozzo-1A (author):

done

Comment on lines 114 to 124

HTTP/1.1 CONNECT can be used to have a TCP client connect to its destination through an HTTP proxy server (e.g. a corporate proxy):

[HTTP Server] --- raw HTTP --- [Upstream HTTP Proxy] --- HTTP tunneled over HTTP/1.1 --- [Envoy] --- raw HTTP --- [HTTP Client]

Examples of such a setup can be found in the Envoy example config :repo:`directory <configs/>`.
If you run `bazel-bin/source/exe/envoy-static --config-path configs/encapsulate_in_http1_connect.yaml --base-id 1`,
you will be running Envoy listening for TCP traffic on port 10000 and encapsulating it in an HTTP/1.1
CONNECT addressed to an HTTP proxy running on localhost and listening on port
10001, with host.com on port 443 as the final destination.
Reviewer:

As a first time reader this seems a bit confusing: which proxy are we configuring with the provided config? Both? Just the one labeled 'Envoy'?

@irozzo-1A (author):

In this scenario, I assumed that the Upstream HTTP Proxy is not an Envoy instance; that's why I only labeled the tunneling proxy as Envoy and provided just one configuration, configs/encapsulate_in_http1_connect.yaml.
The main driver of this PR is to support use cases where the upstream HTTP proxy supports HTTP/1 only, e.g. #11308.
Do you think I should make this clearer?

Reviewer:

Having re-read this a few times, I realize my confusion stems from the fact that the flow is server -> client, not client -> server, and I was originally thinking of the encapsulation flow as originating from the client.

I think I would still have this example use two Envoy instances (assuming we support this) and then mention that this can be used when the upstream is another proxy that does not support HTTP/2 (to justify why you would ever use HTTP/1.1). I think this would be clearer and would make the example complete, as it's not relying on the reader knowing how to set up another local proxy.

@irozzo-1A (author), Nov 18, 2020:

yep, this makes sense. If it's not a problem for you I could address this with a follow-up PR as I still have some work to do on this feature.

@irozzo-1A (author):

I did this already as I had to do another change.

@snowp (Contributor) commented Nov 18, 2020

Also this should probably have a release note

@irozzo-1A (Contributor, Author)

> Also this should probably have a release note

I thought I'd add a release note in a follow-up PR after addressing two remaining issues. Find some context here: #13293 (comment)

Signed-off-by: Iacopo Rozzo <iacopo@kubermatic.com>
@irozzo-1A irozzo-1A dismissed stale reviews from antoniovicente and alyssawilk via eecf55c November 18, 2020 20:36
Signed-off-by: Iacopo Rozzo <iacopo@kubermatic.com>
@irozzo-1A (Contributor, Author)

Thx for reviewing @snowp, I think I addressed your comments ;-)

Signed-off-by: Iacopo Rozzo <iacopo@kubermatic.com>
@irozzo-1A irozzo-1A requested a review from alyssawilk November 19, 2020 17:42
@irozzo-1A (Contributor, Author)

@alyssawilk @antoniovicente @snowp Could you please let me know if you have any points left?

@snowp (Contributor) left a comment

LGTM, thanks!

@alyssawilk alyssawilk merged commit 9d4763c into envoyproxy:master Nov 23, 2020
qqustc pushed a commit to qqustc/envoy that referenced this pull request Nov 24, 2020
@irozzo-1A irozzo-1A deleted the support-http1-tunneling branch December 17, 2020 10:18
Linked issue: CONNECT support for HTTP/1.1 upstreams
5 participants