Customize Per Endpoint TLS Settings #8016
Comments welcome, thanks.
A proposal using the cluster filter for this customization can be found here. Let me know if this is the right direction to proceed with a prototype, thanks!
/cc @GregHanson @linsun |
htuch pushed a commit that referenced this issue on Sep 23, 2019:

API for #8016. Customers adopting a service mesh like the mTLS capability. However, rolling it out without breaking existing traffic is hard, because mTLS is configured on a per-cluster basis. In reality, a service consists of multiple endpoints, a mix of endpoints with an Envoy sidecar and endpoints without one. The client Envoy can't send mTLS traffic until all servers have migrated to having an Envoy sidecar. This API tries to solve the issue by allowing the mTLS/transport socket to be configured at a finer granularity, e.g. the endpoint level. Each endpoint carries metadata labels, which are used to decide which transport socket configuration to use from a map specified on the cluster. The outcome is that the xDS management server can configure the client Envoy to talk to endpoints with a sidecar over mTLS and to endpoints without a sidecar in plain text, within a single cluster.
Risk Level: N/A (API change only)
Release Notes: Cluster API change to use a different transport socket based on endpoint labels.
Signed-off-by: Jianfei Hu <jianfeih@google.com>
danzh2010 pushed a commit to danzh2010/envoy that referenced this issue on Oct 4, 2019, with the same commit message as above.
nandu-vinodan pushed a commit to nandu-vinodan/envoy that referenced this issue on Oct 17, 2019:

Risk Level: Medium (opt-in required)
Testing: unit tested, integration tested
Docs Changes: TODO(incfly) add an architecture doc
Release Notes: New feature, implement `Cluster.transport_socket_matches`. Envoy can be configured to use a different transport socket configuration for different endpoints based on metadata matches. Fixes envoyproxy#8016
Signed-off-by: Jianfei Hu <jianfeih@google.com>
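A rough sketch of what a cluster using `Cluster.transport_socket_matches` could look like. The cluster name, match label `acceptMTLS`, and the elided TLS context are illustrative assumptions, not taken from the commit:

```yaml
# Hedged sketch: a cluster that selects a transport socket per endpoint.
# Entries are evaluated in order against each endpoint's metadata under
# the "envoy.transport_socket_match" filter key.
clusters:
- name: example_service          # illustrative name
  connect_timeout: 1s
  type: EDS
  transport_socket_matches:
  - name: mtls-endpoints
    match:
      acceptMTLS: "true"         # illustrative label on sidecar-backed endpoints
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        common_tls_context: {}   # client cert / validation context elided
  - name: plaintext-endpoints
    match: {}                    # empty match catches all remaining endpoints
    transport_socket:
      name: envoy.transport_sockets.raw_buffer
```

With this in place, a single cluster can carry both mTLS and plaintext connections, with the choice made per endpoint rather than per cluster.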
Title: Feature Request, Customize Per Endpoint TLS config
Description:
Today, each Envoy cluster has a single TLS configuration. We would like to be able to customize the TLS config on a per-endpoint basis.
A potential solution could be to extend the cluster filter interface: when a transport socket is about to be created, the cluster filter callbacks can be invoked and return the TLS configuration. Other API/mechanism designs are welcome as alternatives.
Use Case
In the context of Istio, the control plane can distribute label information to each endpoint (whether it can accept mTLS or not). The client Envoy, with this information, then initiates either a TCP or a TLS connection, decided at the per-endpoint level.
This eases the mTLS rollout a lot: customers don't have to configure TLS for the entire cluster at once. Instead, they see more and more mTLS traffic as more and more server-side endpoints are migrated to having an Envoy sidecar injected.
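To sketch how such a per-endpoint label could be carried, an EDS endpoint can attach filter metadata under the `envoy.transport_socket_match` key, which the cluster-side matching (as implemented in the commits referenced above) evaluates. The address and the `acceptMTLS` label name are illustrative assumptions:

```yaml
# Hedged sketch: an EDS endpoint labeled as mTLS-capable.
endpoints:
- lb_endpoints:
  - endpoint:
      address:
        socket_address:
          address: 10.0.0.1      # illustrative pod/VM address
          port_value: 8080
    metadata:
      filter_metadata:
        envoy.transport_socket_match:
          acceptMTLS: "true"     # illustrative label set by the control plane
```

Endpoints without this label would simply fall through to whatever default transport socket the cluster specifies.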
Other use cases that I can think of:
Roll out a new PKI configuration slowly. Imagine a customer wants to roll out a new PKI, a new root CA for example. The server side can set up multiple filter chains with different TLS contexts and filter chain matches. On the client side, the customer can implement their own cluster filter to slowly roll out the new TLS configuration for a subset of the endpoints. The percentage can be specified as part of the cluster filter config.
Other similar TLS-related config rollouts can work the same way, e.g. migrating from file-based key/cert to SDS-based delivery.
The rollout of the TLS change is then decoupled from the Envoy binary rollout lifecycle.
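One way the gradual PKI rollout could be expressed with the endpoint-metadata matching mechanism (rather than a custom cluster filter) is to key two TLS contexts off a generation label, and let the management server move endpoints between generations to control the percentage. The label name `pkiGeneration` and the certificate paths are illustrative assumptions:

```yaml
# Hedged sketch: staged root-CA rollout driven by an endpoint label.
transport_socket_matches:
- name: new-pki
  match:
    pkiGeneration: "v2"          # illustrative label flipped by the mgmt server
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      common_tls_context:
        validation_context:
          trusted_ca:
            filename: /etc/certs/new-root-ca.pem   # illustrative path
- name: old-pki
  match: {}                      # everything else keeps the old trust root
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      common_tls_context:
        validation_context:
          trusted_ca:
            filename: /etc/certs/old-root-ca.pem   # illustrative path
```

The rollout percentage then becomes a question of how many endpoints the management server labels `pkiGeneration: "v2"`, with no Envoy binary or cluster-wide config change required.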