Create a prototype implementation demonstrating that a single processor defined for a single input can be applied in the shipper instead of in a beat. For a concrete example, consider an agent policy snippet configuring a processor for a system metrics input like:
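A sketch of such a policy fragment (the input ID, metricset, and drop condition below are purely illustrative, not taken from a real policy) might be:

```yaml
inputs:
  - id: system-metrics            # illustrative input ID
    type: system/metrics
    data_stream:
      namespace: default
    streams:
      - data_stream:
          dataset: system.cpu
        metricsets:
          - cpu
        processors:
          - drop_event:
              when:
                equals:
                  system.cpu.cores: 0   # illustrative condition
```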
In the prototype implementation, the `drop_event` processor would not be configured in the beat, and would instead be instantiated in the shipper. The processor should be applied in the shipper before the event is enqueued. Note that for the purposes of this implementation, the exact input type and processor chosen do not matter. Choose whatever is most convenient to work with.
In this example, the shipper will have a single gRPC connection from a metricbeat instance that is potentially sending events from multiple inputs. The shipper can inspect the `data_stream` and `source` fields of the event (see their definition in the shipper client) to determine whether the processor should apply:
```protobuf
// Event is a translation of beat.Event into protobuf.
message Event {
  // Creation timestamp of the event.
  google.protobuf.Timestamp timestamp = 1;
  // Source of the generated event.
  Source source = 2;
  // Data stream for the event.
  DataStream data_stream = 3;
```
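As a rough sketch of how that check might look (the struct definitions below are simplified stand-ins for the generated shipper client types, and the filter and helper names are made up for illustration):

```go
package main

import "fmt"

// DataStream and Source are simplified stand-ins for the generated
// shipper client messages; only the fields used for routing are shown.
type DataStream struct {
	Type      string
	Dataset   string
	Namespace string
}

type Source struct {
	InputID  string
	StreamID string
}

// Event is a pared-down view of the protobuf Event above.
type Event struct {
	Source     Source
	DataStream DataStream
	Fields     map[string]interface{}
}

// processorFilter describes which events a configured processor applies to.
type processorFilter struct {
	inputID string // match on the input that produced the event
	dataset string // and/or on its data stream dataset
}

func (f processorFilter) matches(e *Event) bool {
	if f.inputID != "" && e.Source.InputID != f.inputID {
		return false
	}
	if f.dataset != "" && e.DataStream.Dataset != f.dataset {
		return false
	}
	return true
}

// applyDropEvent stands in for the drop_event processor: it discards
// matching events before they would be enqueued and passes the rest through.
func applyDropEvent(f processorFilter, e *Event) *Event {
	if f.matches(e) {
		return nil // dropped: never reaches the queue
	}
	return e
}

func main() {
	filter := processorFilter{inputID: "system-metrics", dataset: "system.cpu"}
	e := &Event{
		Source:     Source{InputID: "system-metrics"},
		DataStream: DataStream{Type: "metrics", Dataset: "system.cpu", Namespace: "default"},
	}
	if applyDropEvent(filter, e) == nil {
		fmt.Println("event dropped before enqueue")
	}
}
```

Matching on `source` keeps the mapping between the policy's input ID and the processor configuration explicit, while matching on `data_stream` would let one processor definition cover several streams.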
It may be desirable to split this implementation into smaller steps, for example:
Leaving the processor configured in the beat, create an instance of the same processor (or any arbitrary processor) in the shipper and have it apply unconditionally to all events. This adds processor support to the shipper event pipeline (see the sketch after this list).
Modify the shipper to selectively apply the processor only to the desired events.
Stop instantiating the processor in the Beat, so that it is only applied in the shipper. Ideally the agent would not route the processor configuration to the Beat at all when the shipper is enabled. This step could alternatively be implemented first.
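A minimal sketch of step 1, assuming a hypothetical processor interface rather than the real libbeat `processors` package, could look like:

```go
package main

import "fmt"

// Event is a minimal stand-in for the shipper's internal event type.
type Event struct {
	Fields map[string]interface{}
}

// Processor is a hypothetical, libbeat-style interface: Run returns the
// (possibly modified) event, or nil to drop it.
type Processor interface {
	Run(*Event) *Event
}

// dropEvent discards every event, which is all step 1 needs: one processor
// applied unconditionally to everything flowing through the shipper.
type dropEvent struct{}

func (dropEvent) Run(*Event) *Event { return nil }

// enqueue runs the processor chain before the event reaches the queue,
// so dropped events are never enqueued.
func enqueue(procs []Processor, queue chan *Event, e *Event) {
	for _, p := range procs {
		if e = p.Run(e); e == nil {
			return
		}
	}
	queue <- e
}

func main() {
	queue := make(chan *Event, 1)
	enqueue([]Processor{dropEvent{}}, queue, &Event{Fields: map[string]interface{}{"message": "hello"}})
	fmt.Println("events enqueued:", len(queue)) // 0
}
```

Step 2 then amounts to wrapping each processor with the matching logic sketched earlier so it only runs for events from the configured input.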
@leehinman @cmacknz As the shipper will become a Beat, I believe that by default it will benefit from the exact same set of processors the Beats have, right? If so, is this issue still relevant?
The scope gets significantly smaller, but we still need to make sure the configuration parts work: defining the processors at the integration level, splitting the config between the input Filebeat and the shipper Filebeat, and lastly routing the events as they come in so they go to the right pipeline.