Description
Background
Suppose I configure the SessionProtocol as H2 and create a WebClient using an EndpointGroup that contains a single Endpoint. An example code snippet illustrating this setup is provided below:
```java
final URI uri = URI.create("https://sub.example.com");
final EndpointGroup group = EndpointGroup.of(Endpoint.of(uri.getHost()));
final BlockingWebClient client = WebClient.builder(SessionProtocol.H2, group)
                                          .factory(clientFactory)
                                          .build()
                                          .blocking();
```
When the client sends its first request to the server, Armeria's WebClient internally performs a DNS query to retrieve a list of A records. The client then attempts to use the HTTP/2 protocol with ALPN (Application-Layer Protocol Negotiation) against one of the retrieved IP addresses. If the negotiation with the server fails, a SessionProtocolNegotiationException is thrown, and the result of this negotiation is cached in SessionProtocolNegotiationCache.
Problem
Suppose that the DNS query returns multiple A records, and the protocol negotiation fails with a particular IP. In such a case, a request to a different IP might succeed. However, based on my understanding, the current implementation of SessionProtocolNegotiationCache uses {domain}|{port} as the cache key to store the result of the protocol negotiation failure. While this cache follows an LRU policy, it does not have a TTL (time-to-live) setting.
As a result, if the first attempted IP fails protocol negotiation and its failure is cached, subsequent requests that resolve to the same domain may also fail even when other IPs could potentially succeed.
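To make the collision concrete, here is a minimal, self-contained sketch of a negotiation-failure cache keyed by {domain}|{port}. The class and method names are mine for illustration only; this is not Armeria's actual SessionProtocolNegotiationCache, which differs in detail:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative LRU cache of "negotiation failed" results, keyed by domain|port.
// All names here are hypothetical; Armeria's real cache is not structured like this.
final class NegotiationFailureCache {
    private static final int MAX_ENTRIES = 8192;

    // An access-ordered LinkedHashMap gives simple LRU eviction; note: no TTL.
    private final Map<String, Boolean> failed =
            new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };

    static String key(String domain, int port) {
        return domain + '|' + port; // the resolved IP is not part of the key
    }

    void recordFailure(String domain, int port) {
        failed.put(key(domain, port), Boolean.TRUE);
    }

    boolean isKnownToFail(String domain, int port) {
        return failed.containsKey(key(domain, port));
    }
}
```

With this key scheme, a failure observed while connected to one resolved IP also marks every later request for the same domain as failed, because all resolved IPs map to the same sub.example.com|443 key.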
Suggestion
I believe that changing the cache key of SessionProtocolNegotiationCache to {domain}|{ip}|{port} could resolve this issue. However, I would be grateful if the Armeria team could review this approach and let me know if there are any potential side effects or other considerations I may have overlooked. Additionally, if I have misunderstood any part of the current behavior, please feel free to correct me. Your feedback would be greatly appreciated!
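For comparison, a sketch of the proposed key shape, which scopes a cached failure to the concrete peer address (again, the class and method names here are purely illustrative, not an Armeria API):

```java
// Hypothetical helper showing the proposed cache key shape: domain|ip|port.
final class ProposedKey {
    static String of(String domain, String ip, int port) {
        return domain + '|' + ip + '|' + port;
    }
}
```

Under this scheme, a cached failure for sub.example.com via 192.0.2.10 would no longer prevent a fresh negotiation attempt against a different A record such as 192.0.2.11.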