Internal redirect in filter chains #3250
@kyessenov, @alyssawilk was asking me about this earlier in the week. I think she is going to implement 2.
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.
FWIW it's pretty far out on our requirements horizon. Worth keeping open in the interim, though.
Thanks. Agreed, it's worth waiting for more use cases from filter developers.
Does anyone know if there's been any progress on this issue?
I talked to @alyssawilk about this recently and I think we roughly sketched out an implementation. Not sure of timelines, but it sounds like someone from Google might implement this.
Well, my question for Matt had been whether we could replace 1 with 2 (in the context of wanting custom filter chains, custom filter chains being 'dangerous' with 1 since you'd not get the new filter chain). The answer I got was 'probably not', since some folks probably actually want the preexisting behavior, so we'll have to document that (theoretical) custom filter chains shouldn't be used with style-1 route changes. The internal-redirect case I'm more interested in implementing is actually getting an x-envoy-internal-redirect: scheme://new.domain/new_path from upstream, then re-sending the modified request to a new upstream. I think the work needed to do that has a ton of overlap with 2, and I hope to have the time to tackle it in the next week or two.
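The upstream-driven case above hinges on splitting a redirect target of the form scheme://new.domain/new_path into the pieces needed to re-route the request. A minimal sketch of that split follows; the type and function names are illustrative assumptions, not Envoy's actual API.

```cpp
// Hedged sketch: parse an absolute redirect target (as in the comment
// above) into scheme, authority, and path. Illustrative names only.
#include <cassert>
#include <optional>
#include <string>

struct RedirectTarget {
  std::string scheme;
  std::string authority;
  std::string path;
};

// Returns std::nullopt when the value is not an absolute URL.
std::optional<RedirectTarget> parseRedirectTarget(const std::string& value) {
  const auto scheme_end = value.find("://");
  if (scheme_end == std::string::npos || scheme_end == 0) {
    return std::nullopt;
  }
  const auto authority_start = scheme_end + 3;
  const auto path_start = value.find('/', authority_start);
  RedirectTarget target;
  target.scheme = value.substr(0, scheme_end);
  if (path_start == std::string::npos) {
    target.authority = value.substr(authority_start);
    target.path = "/";  // no explicit path: default to root
  } else {
    target.authority = value.substr(authority_start, path_start - authority_start);
    target.path = value.substr(path_start);
  }
  if (target.authority.empty()) {
    return std::nullopt;
  }
  return target;
}
```

With the authority and path in hand, the proxy can rewrite the request's host and path headers and re-send to the newly resolved upstream.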
Ok, first in what I'm sure is a very long series of design choices here: do we recreate the filter chain or rerun it? I'm inclined to recreate. While I suspect we will eventually want to allow a given filter to pass information from the first pass to the second, I don't think folks are going to correctly make every filter redirect-aware. Creating and re-running the chain "from scratch" (data passing TBD when/if there's a request for it) is going to result in far fewer (though probably not zero) bugs than trying to get every filter to clear sufficient state that a second pass won't be harmful.
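The recreate-vs-rerun choice above can be sketched in miniature: instead of asking each filter instance to reset itself for a second pass, the redirected request gets a brand-new chain built from the same factories, so no per-stream state can leak. Everything here (Filter, createChain) is an illustrative assumption, not Envoy's real filter machinery.

```cpp
// Hedged sketch of "recreate, don't rerun": per-stream filter state is
// discarded by building a fresh chain from factories for the redirect pass.
#include <cassert>
#include <functional>
#include <memory>
#include <vector>

struct Filter {
  int calls_seen = 0;  // per-stream state that must not leak across passes
  void onHeaders() { ++calls_seen; }
};

using FilterFactory = std::function<std::unique_ptr<Filter>()>;

// Builds a fresh chain from factories; invoked once for the original
// stream and again, from scratch, for the internally redirected stream.
std::vector<std::unique_ptr<Filter>> createChain(const std::vector<FilterFactory>& factories) {
  std::vector<std::unique_ptr<Filter>> chain;
  for (const auto& factory : factories) {
    chain.push_back(factory());
  }
  return chain;
}
```

The alternative (rerunning the existing chain) would require every filter to implement a correct reset, which is exactly the failure mode the comment above is worried about.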
+1 recreate. |
What would be the recommendation for network filter chain re-run? I'm thinking of two HTTP connection managers on ports 80/443, and an internal redirect in 80 triggering the HTTP filter chain in 443. Should we restrict this functionality to a single HTTP connection manager? |
Yeah, I don't think cross-scheme redirects should be allowed, due to security concerns. I mean, I guess going from a secure to an insecure response isn't terrible, but serving an https response on an http connection is obviously unsafe, and it's easier to be consistent.
Right, I didn't think through the security implications. I'm more interested in the plain http-to-http transition. Istio buckets RDS routes by capture ports, and I'm trying to find a way to jump between those RDS tables.
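The scheme restriction discussed above boils down to a simple consistency check: an internal redirect is allowed only when the target's scheme matches the scheme the original request used, rejecting both downgrades and upgrades. A minimal sketch, with illustrative names:

```cpp
// Hedged sketch of the cross-scheme restriction: allow an internal
// redirect only when original and target schemes match. Names are
// illustrative, not Envoy's actual API.
#include <cassert>
#include <string>

std::string schemeOf(const std::string& url) {
  const auto pos = url.find("://");
  return pos == std::string::npos ? "" : url.substr(0, pos);
}

// Rejects https -> http (unsafe) and http -> https alike, keeping the
// behavior consistent as suggested in the thread.
bool redirectSchemeAllowed(const std::string& original_url, const std::string& target_url) {
  const std::string original = schemeOf(original_url);
  const std::string target = schemeOf(target_url);
  return !original.empty() && original == target;
}
```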
Ah, interesting. We've traditionally done multi-tenant support via VIP rather than port, so we can have one (or two) giant listeners and everything in the scope of that. We have to adapt Envoy to allow per-VIP matching there (see scoped RDS over on #4704), and it's possible that once we have that landed we could add support for some or all listeners sharing scoped RDS rather than duplicating the scopes for all listeners.
Allows both upstream-driven and filter-controlled internal redirects, basically rerunning the whole filter chain for a new stream. The current implementation is limited to requests-sans-bodies, complete requests, and num-redirects = 1, but could be fairly easily extended (probably in a follow-up) to remove any of these restrictions.

I do need to add more unit tests here, but I want to make sure we're happy with both the validation we're doing and where we do it. For example, while this implementation forces N=1 for upstream internal redirects, it allows filters to impose their own separate limits and allows them to screw up w.r.t. redirect loops. We could enforce globally by disallowing recreateStream if is_internally_created_ is true, but I could imagine wanting different limits for a filter redirect than for an external redirect, so I am mildly inclined to let filters enforce on their own with internal checks, as the router filter does.

TODO(alyssawilk) in a follow-up: before killing off the initial stream, pass it the original StreamInfo and copy relevant fields (downstream timing info etc.)

Risk Level: Medium (some refactors of existing code, new high-risk code paths well guarded)
Testing: E2E tests. E_INSUFFICIENT_UNIT_TESTS
Docs Changes: inline
Release Notes: yep.

Part of #3250

Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
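The N=1 budget and the is_internally_created_ flag described above amount to a small loop guard: recreating the stream is permitted only while the stream's internal-redirect count is below a limit. A hedged sketch under that reading, with illustrative names (StreamState, shouldAllowInternalRedirect are assumptions, not Envoy's code):

```cpp
// Hedged sketch of a bounded internal-redirect budget guarding against
// redirect loops. Illustrative names only.
#include <cassert>
#include <cstdint>

struct StreamState {
  uint32_t num_internal_redirects = 0;
  bool is_internally_created = false;
};

// Mirrors the PR's N=1 default for upstream-driven redirects; filters
// could impose their own, separate limits as the comment suggests.
bool shouldAllowInternalRedirect(StreamState& state, uint32_t max_internal_redirects) {
  if (state.num_internal_redirects >= max_internal_redirects) {
    return false;  // budget exhausted: pass the redirect response through instead
  }
  ++state.num_internal_redirects;
  state.is_internally_created = true;
  return true;
}
```

Keeping the check per-caller rather than global is the trade-off the comment describes: filters can choose their own limits, at the cost of being able to get loops wrong.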
I think this is complete so closing. |
Title: Support re-routing within the proxy
HTTP filters are allowed to modify HTTP requests in progress. Sometimes that leads to changing the routing decision (which is made prior to filter execution). Currently, it feels a bit ad hoc to insert a "clear routing table" call, since it is not clear how that works with 1) multiple filters trying to clear routing tables and 2) per-route config that can change mid-filter chain. I think there are three legitimate cases for this kind of behavior:
I think 1) and 3) are currently implementable in filters. Is there any support for case 2)? Does option 2) make more sense than 3)?
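The cached-route problem raised above can be shown in miniature: the route is resolved once up front, so a filter that rewrites the request must explicitly clear the cached decision or downstream filters keep seeing the stale route. Everything here (RouteCache, clearRouteCache) is an illustrative assumption modeled on that description, not Envoy's actual interfaces.

```cpp
// Hedged sketch of lazy route caching and explicit invalidation after a
// filter mutates the request. Illustrative names only.
#include <cassert>
#include <functional>
#include <optional>
#include <string>

class RouteCache {
public:
  explicit RouteCache(std::function<std::string(const std::string&)> resolver)
      : resolver_(std::move(resolver)) {}

  // Resolves on first use and caches: models routing decided once,
  // before (or early in) filter execution.
  const std::string& route(const std::string& path) {
    if (!cached_) {
      cached_ = resolver_(path);
    }
    return *cached_;
  }

  // What a filter would call after rewriting the path, forcing the next
  // lookup to re-evaluate the routing decision.
  void clearRouteCache() { cached_.reset(); }

private:
  std::function<std::string(const std::string&)> resolver_;
  std::optional<std::string> cached_;
};
```

The ad-hoc feel the issue describes comes from ordering: if two filters each rewrite and clear, or a per-route config swaps mid-chain, each clearRouteCache() call changes what every later filter observes.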