Initial support for upstream HTTP/1.1 tunneling #13293

Merged (10 commits) on Nov 23, 2020
2 changes: 1 addition & 1 deletion configs/configgen.py
@@ -139,5 +139,5 @@ def generate_config(template_path, template, output_file, **context):
mongos_servers=mongos_servers)

shutil.copy(os.path.join(SCRIPT_DIR, 'envoyproxy_io_proxy.yaml'), OUT_DIR)
shutil.copy(os.path.join(SCRIPT_DIR, 'encapsulate_in_connect.yaml'), OUT_DIR)
shutil.copy(os.path.join(SCRIPT_DIR, 'encapsulate_in_http2_connect.yaml'), OUT_DIR)
Contributor

Should the new HTTP1 example be included here?

Contributor Author

not sure about this, WDYT @alyssawilk ?

Contributor

Yeah, might as well for completeness.

Contributor Author

done

shutil.copy(os.path.join(SCRIPT_DIR, 'terminate_connect.yaml'), OUT_DIR)
44 changes: 44 additions & 0 deletions configs/encapsulate_in_http1_connect.yaml
@@ -0,0 +1,44 @@
# This configuration takes incoming data on port 10000 and encapsulates it in a CONNECT
# request which is sent to upstream port 10001.
# It can be used to test TCP tunneling as described in docs/root/intro/arch_overview/http/upgrades.rst
# by running `curl -x 127.0.0.1:10000 https://www.google.com`.

admin:
access_log_path: /tmp/admin_access.log
address:
socket_address:
protocol: TCP
address: 127.0.0.1
port_value: 9903
static_resources:
listeners:
- name: listener_0
address:
socket_address:
protocol: TCP
address: 127.0.0.1
port_value: 10000
filter_chains:
- filters:
- name: tcp
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
stat_prefix: tcp_stats
cluster: "cluster_0"
tunneling_config:
hostname: host.com:10002
clusters:
- name: cluster_0
connect_timeout: 5s
# This ensures HTTP/1.1 CONNECT is used for establishing the tunnel.
http_protocol_options:
{}
load_assignment:
cluster_name: cluster_0
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 10001
@@ -26,10 +26,11 @@ static_resources:
stat_prefix: tcp_stats
cluster: "cluster_0"
tunneling_config:
hostname: host.com
Contributor

sorry, why 10002 when it's connecting to port 10001?

Contributor Author

My idea was to show that the destination port can be different from the port used by the upstream proxy, but maybe using 443 makes more sense to be consistent with terminate_connect.yaml

Contributor

Sending CONNECT foo.com:1234 to foo.com:1235 seems odd to me, but if you think it's worth explicitly testing for that how about a comment so other folks don't think it's just an off by one error :-)

Contributor Author (@irozzo-1A, Nov 16, 2020)

just to be sure we are on the same page, in this scenario, the CONNECT is sent to the upstream that is 127.0.0.1:10001 targeting host.com:10002. IMO it should be quite common to have the upstream proxy listening on a port and receiving CONNECT requests targeting a different one. What could be considered a bit odd maybe, is that the L2 proxy is receiving CONNECT host.com:10002 and it is connecting instead to www.google.com:443. WDYT?
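
To make the two ports concrete, this is the relevant fragment of configs/encapsulate_in_http1_connect.yaml from this PR, with explanatory comments added here for illustration only:

```yaml
tunneling_config:
  # Authority placed on the CONNECT request line, i.e. the final destination the
  # upstream proxy is asked to reach: "CONNECT host.com:10002 HTTP/1.1".
  hostname: host.com:10002
# ...
load_assignment:
  cluster_name: cluster_0
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            # Address of the upstream HTTP proxy that actually receives the CONNECT.
            address: 127.0.0.1
            port_value: 10001
```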

hostname: host.com:10002
clusters:
- name: cluster_0
connect_timeout: 5s
# This ensures HTTP/2 CONNECT is used for establishing the tunnel.
http2_protocol_options:
{}
load_assignment:
26 changes: 20 additions & 6 deletions docs/root/intro/arch_overview/http/upgrades.rst
@@ -94,17 +94,31 @@ will synthesize 200 response headers, and then forward the TCP data as the HTTP
For an example of proxying connect, please see :repo:`configs/proxy_connect.yaml <configs/proxy_connect.yaml>`
For an example of terminating connect, please see :repo:`configs/terminate_connect.yaml <configs/terminate_connect.yaml>`

Tunneling TCP over HTTP/2
^^^^^^^^^^^^^^^^^^^^^^^^^
Envoy also has support for transforming raw TCP into HTTP/2 CONNECT requests. This can be used to
proxy multiplexed TCP over pre-warmed secure connections and amortize the cost of any TLS handshake.
An example set up proxying SMTP would look something like this
Tunneling TCP over HTTP
^^^^^^^^^^^^^^^^^^^^^^^
Envoy also has support for tunneling raw TCP over HTTP CONNECT requests. Some usage scenarios are
described below.

HTTP/2 CONNECT can be used to proxy multiplexed TCP over pre-warmed secure connections and amortize the cost of any TLS
handshake.
An example setup proxying SMTP would look something like this:

[SMTP Upstream] --- raw SMTP --- [L2 Envoy] --- SMTP tunneled over HTTP/2 --- [L1 Envoy] --- raw SMTP --- [Client]

Examples of such a setup can be found in the Envoy example config :repo:`directory <configs/>`.
If you run `bazel-bin/source/exe/envoy-static --config-path configs/encapsulate_in_connect.yaml --base-id 1`
If you run `bazel-bin/source/exe/envoy-static --config-path configs/encapsulate_in_http2_connect.yaml --base-id 1`
and `bazel-bin/source/exe/envoy-static --config-path configs/terminate_connect.yaml`
you will be running two Envoys, the first listening for TCP traffic on port 10000 and encapsulating it in an HTTP/2
CONNECT request, and the second listening for HTTP/2 on 10001, stripping the CONNECT headers, and forwarding the
original TCP upstream, in this case to google.com.
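
The encapsulating side boils down to the tcp_proxy tunneling_config plus a cluster configured for HTTP/2. A minimal sketch of those pieces, based on configs/encapsulate_in_http2_connect.yaml as modified in this PR (listener address, admin, and load_assignment details omitted):

```yaml
filter_chains:
- filters:
  - name: tcp
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
      stat_prefix: tcp_stats
      cluster: "cluster_0"
      tunneling_config:
        # Authority of the CONNECT request sent to the upstream.
        hostname: host.com:10002
clusters:
- name: cluster_0
  connect_timeout: 5s
  # Configuring the upstream cluster for HTTP/2 makes the tcp_proxy establish the
  # tunnel with an HTTP/2 CONNECT request.
  http2_protocol_options: {}
```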

HTTP/1.1 CONNECT can be used to have a TCP client connect to its destination through an
intermediate HTTP proxy server (e.g. a corporate proxy):

[HTTP Server] --- raw HTTP --- [Upstream HTTP Proxy] --- HTTP tunneled over HTTP/1.1 --- [Envoy] --- raw HTTP --- [HTTP Client]

Examples of such a setup can be found in the Envoy example config :repo:`directory <configs/>`.
If you run `bazel-bin/source/exe/envoy-static --config-path configs/encapsulate_in_http1_connect.yaml --base-id 1`
you will be running an Envoy that listens for TCP traffic on port 10000 and encapsulates it in an HTTP/1.1
CONNECT request addressed to an HTTP proxy running on localhost and listening on port 10001, with
host.com:10002 as the final destination.
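
Compared to the HTTP/2 example above, the only cluster-level change needed for HTTP/1.1 tunneling is the protocol selection. A sketch of the relevant part of configs/encapsulate_in_http1_connect.yaml (shown in full earlier in this diff):

```yaml
clusters:
- name: cluster_0
  connect_timeout: 5s
  # Plain HTTP/1 options (instead of http2_protocol_options) make the tcp_proxy
  # establish the tunnel with an HTTP/1.1 CONNECT request.
  http_protocol_options: {}
```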
5 changes: 5 additions & 0 deletions include/envoy/network/connection.h
@@ -118,6 +118,11 @@ class Connection : public Event::DeferredDeletable, public FilterManager {
*/
virtual void enableHalfClose(bool enabled) PURE;

/**
* @return true if half-close semantics are enabled, false otherwise.
*/
virtual bool isHalfCloseEnabled() PURE;

/**
* Close the connection.
*/
13 changes: 12 additions & 1 deletion source/common/http/codec_client.h
@@ -63,6 +63,11 @@ class CodecClient : Logger::Loggable<Logger::Id::client>,
connection_->addConnectionCallbacks(cb);
}

/**
* Return if half-close semantics are enabled on the underlying connection.
*/
bool isHalfCloseEnabled() { return connection_->isHalfCloseEnabled(); }

/**
* Close the underlying network connection. This is immediate and will not attempt to flush any
* pending write data.
@@ -173,8 +178,14 @@ class CodecClient : Logger::Loggable<Logger::Id::client>,
CodecReadFilter(CodecClient& parent) : parent_(parent) {}

// Network::ReadFilter
Network::FilterStatus onData(Buffer::Instance& data, bool) override {
Network::FilterStatus onData(Buffer::Instance& data, bool end_stream) override {
parent_.onData(data);
if (end_stream && parent_.isHalfCloseEnabled()) {
// Note that this results in the connection being closed as if it had been
// closed locally. It would be more correct to convey the end stream to the
// response decoder, but that would require some refactoring.
parent_.close();
}
return Network::FilterStatus::StopIteration;
}

9 changes: 7 additions & 2 deletions source/common/http/http1/codec_impl.cc
@@ -77,8 +77,8 @@ const std::string StreamEncoderImpl::LAST_CHUNK = "0\r\n";
StreamEncoderImpl::StreamEncoderImpl(ConnectionImpl& connection,
HeaderKeyFormatter* header_key_formatter)
: connection_(connection), disable_chunk_encoding_(false), chunk_encoding_(true),
is_response_to_head_request_(false), is_response_to_connect_request_(false),
header_key_formatter_(header_key_formatter) {
connect_request_(false), is_response_to_head_request_(false),
is_response_to_connect_request_(false), header_key_formatter_(header_key_formatter) {
if (connection_.connection().aboveHighWatermark()) {
runHighWatermarkCallbacks();
}
@@ -261,6 +261,10 @@ void StreamEncoderImpl::endEncode() {

connection_.flushOutput(true);
connection_.onEncodeComplete();
// With CONNECT, half-closing the connection is used to signal end stream.
if (connect_request_) {
connection_.connection().close(Network::ConnectionCloseType::FlushWriteAndDelay);
}
}

void ServerConnectionImpl::maybeAddSentinelBufferFragment(Buffer::WatermarkBuffer& output_buffer) {
@@ -380,6 +384,7 @@ Status RequestEncoderImpl::encodeHeaders(const RequestHeaderMap& headers, bool e
head_request_ = true;
} else if (method->value() == Headers::get().MethodValues.Connect) {
disableChunkEncoding();
connection_.connection().enableHalfClose(true);
connect_request_ = true;
}
if (Utility::isUpgrade(headers)) {
2 changes: 1 addition & 1 deletion source/common/http/http1/codec_impl.h
@@ -87,6 +87,7 @@ class StreamEncoderImpl : public virtual StreamEncoder,
uint32_t read_disable_calls_{};
bool disable_chunk_encoding_ : 1;
bool chunk_encoding_ : 1;
bool connect_request_ : 1;
bool is_response_to_head_request_ : 1;
bool is_response_to_connect_request_ : 1;

@@ -162,7 +163,6 @@ class RequestEncoderImpl : public StreamEncoderImpl, public RequestEncoder {
private:
bool upgrade_request_{};
bool head_request_{};
bool connect_request_{};
};

/**
9 changes: 7 additions & 2 deletions source/common/http/http1/codec_impl_legacy.cc
@@ -78,8 +78,8 @@ const std::string StreamEncoderImpl::LAST_CHUNK = "0\r\n";
StreamEncoderImpl::StreamEncoderImpl(ConnectionImpl& connection,
HeaderKeyFormatter* header_key_formatter)
: connection_(connection), disable_chunk_encoding_(false), chunk_encoding_(true),
is_response_to_head_request_(false), is_response_to_connect_request_(false),
header_key_formatter_(header_key_formatter) {
connect_request_(false), is_response_to_head_request_(false),
is_response_to_connect_request_(false), header_key_formatter_(header_key_formatter) {
if (connection_.connection().aboveHighWatermark()) {
runHighWatermarkCallbacks();
}
@@ -262,6 +262,10 @@ void StreamEncoderImpl::endEncode() {

connection_.flushOutput(true);
connection_.onEncodeComplete();
// With CONNECT, half-closing the connection is used to signal end stream.
if (connect_request_) {
connection_.connection().close(Network::ConnectionCloseType::FlushWriteAndDelay);
}
}

void ServerConnectionImpl::maybeAddSentinelBufferFragment(Buffer::WatermarkBuffer& output_buffer) {
@@ -381,6 +385,7 @@ Status RequestEncoderImpl::encodeHeaders(const RequestHeaderMap& headers, bool e
head_request_ = true;
} else if (method->value() == Headers::get().MethodValues.Connect) {
disableChunkEncoding();
connection_.connection().enableHalfClose(true);
connect_request_ = true;
}
if (Utility::isUpgrade(headers)) {
2 changes: 1 addition & 1 deletion source/common/http/http1/codec_impl_legacy.h
@@ -89,6 +89,7 @@ class StreamEncoderImpl : public virtual StreamEncoder,
uint32_t read_disable_calls_{};
bool disable_chunk_encoding_ : 1;
bool chunk_encoding_ : 1;
bool connect_request_ : 1;
bool is_response_to_head_request_ : 1;
bool is_response_to_connect_request_ : 1;

@@ -166,7 +167,6 @@ class RequestEncoderImpl : public StreamEncoderImpl, public RequestEncoder {
private:
bool upgrade_request_{};
bool head_request_{};
bool connect_request_{};
};

/**
1 change: 1 addition & 0 deletions source/common/network/connection_impl.cc
@@ -375,6 +375,7 @@ void ConnectionImpl::raiseEvent(ConnectionEvent event) {
}

void ConnectionImpl::raiseEvent(ConnectionEvent event) {
ENVOY_CONN_LOG(trace, "raising connection event {}", *this, event);
ConnectionImplBase::raiseConnectionEvent(event);
// We may have pending data in the write buffer on transport handshake
// completion, which may also have completed in the context of onReadReady(),
1 change: 1 addition & 0 deletions source/common/network/connection_impl.h
@@ -61,6 +61,7 @@ class ConnectionImpl : public ConnectionImplBase, public TransportSocketCallback
// Network::Connection
void addBytesSentCallback(BytesSentCb cb) override;
void enableHalfClose(bool enabled) override;
bool isHalfCloseEnabled() override { return enable_half_close_; }
void close(ConnectionCloseType type) final;
std::string nextProtocol() const override { return transport_socket_->protocol(); }
void noDelay(bool enable) override;
2 changes: 2 additions & 0 deletions source/common/tcp_proxy/BUILD
@@ -21,6 +21,7 @@ envoy_cc_library(
"//include/envoy/tcp:upstream_interface",
"//include/envoy/upstream:cluster_manager_interface",
"//include/envoy/upstream:load_balancer_interface",
"//source/common/http:codec_client_lib",
"//source/common/http:header_map_lib",
"//source/common/http:headers_lib",
"//source/common/http:utility_lib",
@@ -58,6 +59,7 @@ envoy_cc_library(
"//source/common/common:empty_string",
"//source/common/common:macros",
"//source/common/common:minimal_logger_lib",
"//source/common/http:codec_client_lib",
"//source/common/network:application_protocol_lib",
"//source/common/network:cidr_range_lib",
"//source/common/network:filter_lib",