kafka: upstream kafka facade in mesh-filter #17783
Conversation
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
// Impl leakage: real implementations of Kafka Producer need to stop a monitoring thread, then
// they can close the producer. Because the polling thread should not be interrupted, we just mark
// it as finished, and it's going to notice that change on the next iteration.
'Marking' as finished is just setting a simple volatile boolean.

In other words: invoking this markFinished() implementation lets us wait a shorter time when we want to destroy multiple objects with such a destructor, because their threads cannot be interrupted here.

And when are we closing multiple Kafka producers at the same time? When we shut down Envoy, and the whole facade gets destroyed together with its cluster -> producer maps.

So, TL;DR: it implies that facade destruction should take only about ${poll_duration} (I should make that 1000ms configurable).
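The mark-and-join pattern discussed above can be sketched roughly as follows. This is a minimal illustration under my own assumptions, not Envoy's actual code; the class and member names (`PollingProducer`, `finished_`, `poller_`) are made up for the example:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// Hypothetical sketch of the "mark finished, don't interrupt" pattern:
// the poller thread re-checks an atomic flag on every iteration, so the
// destructor only needs to flip the flag and then wait at most roughly
// one poll_duration for the thread to notice.
class PollingProducer {
public:
  PollingProducer() : poller_([this] { pollLoop(); }) {}

  ~PollingProducer() {
    markFinished(); // cheap: just a flag write, no thread interruption
    poller_.join(); // returns within roughly one poll iteration
  }

  void markFinished() { finished_ = true; }
  bool finished() const { return finished_; }

private:
  void pollLoop() {
    while (!finished_) {
      // Stand-in for polling the real producer for delivery notifications.
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
  }

  std::atomic<bool> finished_{false};
  std::thread poller_; // declared last so finished_ is initialized first
};
```

Because markFinished() only writes a flag, calling it on many producers up front and joining afterwards would let their shutdowns overlap, which is the "wait shorter" effect described above.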
// Placeholder for proper Kafka Producer object.
// It will also keep a reference to a dedicated thread (that's why we need a factory) that's going
// to be polling for delivery notifications.
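One way to read "that's why we need a factory": because constructing the real producer also spawns a dedicated poller thread, the facade can depend on an abstract factory and tests can inject a thread-free double. A hedged sketch of that idea (all names here are illustrative, not from this PR):

```cpp
#include <memory>
#include <string>

// Abstract producer interface; the real implementation would own a
// polling thread, which we don't want to spin up in unit tests.
class Producer {
public:
  virtual ~Producer() = default;
  virtual void send(const std::string& payload) = 0;
};

// Abstract factory: lets the facade stay ignorant of how producers
// (and their threads) get created.
class ProducerFactory {
public:
  virtual ~ProducerFactory() = default;
  virtual std::unique_ptr<Producer> create(const std::string& cluster) = 0;
};

// Test double: no thread, just counts what was sent.
class FakeProducer : public Producer {
public:
  void send(const std::string&) override { ++sent_; }
  int sent_{0};
};

class FakeProducerFactory : public ProducerFactory {
public:
  std::unique_ptr<Producer> create(const std::string&) override {
    return std::make_unique<FakeProducer>();
  }
};
```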
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
/assign @mattklein123
Please check CI. /wait
/retest
Retrying Azure Pipelines:
Looks like it was some fluke in docs generation; it went away with the retest.
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
* main: (32 commits)
  - Stop processing pending H/2 frames if connection transitioned to the closed state
  - http2: limit use of deferred resets in the http2 codec to server-side connections
  - Abort filter chain iteration on local reply
  - Reject or strip fragment from request URI
  - ext-authz: merge duplicate headers from client request in check request
  - common: introduce stable logger /w examples in DNS (envoyproxy#17772)
  - route: fast return when route matches failed (envoyproxy#17769)
  - owners: add owners for dubbo proxy network filter (envoyproxy#17820)
  - config/router/tcp_proxy/options: v2 API, boosting and --bootstrap-version CLI removal. (envoyproxy#17724)
  - coverage: revert the limit http/cache to 92.6. (envoyproxy#17817)
  - network: rename SocketAddressProvider as ConnectionInfoProvider (envoyproxy#17717)
  - test: bumping coverage (envoyproxy#17757)
  - conn_pool: Minor cleanups to ConnPoolBaseImpl (envoyproxy#17710)
  - Split VaryHeader into VaryAllowList and VaryUtils to organize vary-related logic (envoyproxy#17728)
  - ext_proc: Make tests more resilient to IPv6 support (envoyproxy#17784)
  - Remove invlaid backquote from doc (envoyproxy#17797)
  - rocketmq: move to contrib (envoyproxy#17796)
  - kafka: upstream kafka facade in mesh-filter (envoyproxy#17783)
  - ecds: create shared base class for DynamicFilterConfigProviderImpl (envoyproxy#17735)
  - Change log level from debug to trace (envoyproxy#17774)
  - ...

Signed-off-by: Michael Puncel <mpuncel@squareup.com>
Commit Message: upstream kafka facade in mesh-filter
Additional Description: Thread-local container/manager for Kafka producers that will be accessed by Kafka produce requests. Only the facade part is included here; the Kafka producer wrapper (together with the librdkafka dependency) will come in a separate PR.
Risk Level: Low
Testing: Unit tests
Docs Changes: N/A
Release Notes: N/A
Platform Specific Features: N/A
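The per-thread cluster -> producer map that the description mentions could look roughly like the sketch below. This is an assumption-laden illustration of the facade idea only; `KafkaFacade`, `getProducerForCluster`, and the placeholder `KafkaProducer` are invented names, and the real version would live inside Envoy's thread-local storage so each worker owns its own map:

```cpp
#include <map>
#include <memory>
#include <string>

// Placeholder for the producer wrapper that a follow-up PR provides.
class KafkaProducer {};

// Sketch: one such facade instance per worker thread, mapping upstream
// cluster names to lazily-created producers. No locking is needed in this
// sketch precisely because each thread has its own copy.
class KafkaFacade {
public:
  KafkaProducer& getProducerForCluster(const std::string& cluster) {
    auto it = producers_.find(cluster);
    if (it == producers_.end()) {
      // First produce request for this cluster on this thread: create lazily.
      it = producers_.emplace(cluster, std::make_unique<KafkaProducer>()).first;
    }
    return *it->second;
  }

  size_t size() const { return producers_.size(); }

private:
  std::map<std::string, std::unique_ptr<KafkaProducer>> producers_;
};
```

On shutdown, destroying the facade destroys every map entry, which is the "close multiple producers at the same time" scenario discussed in the review comments above.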