feat: Add Zookeeper Plugin #499

Merged

merged 2 commits into main from zookeeper_plugin on Jun 21, 2022

Conversation

dmikolay
Contributor

Proposed Change

Adds a plugin for parsing Zookeeper logs. Example LogRecord produced by the plugin:

LogRecord #32
ObservedTimestamp: 2022-06-16 19:53:47.538867 +0000 UTC
Timestamp: 1970-01-01 00:00:00 +0000 UTC
Severity: 
Body: 2020-12-09 20:38:42,979 INFO  [main] server.ZooKeeperServer: Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
Attributes:
     -> log.file.name: STRING(zookeeper.txt)
Trace ID: 
Span ID: 
Flags: 0
Checklist
  • Changes are tested
  • CI has passed

@dmikolay requested review from armstrmi, a team, and cpheps on June 16, 2022 20:03
@BinaryFissionGames
Contributor

That example parse doesn't look like it's working right; I'd expect to see the severity + timestamp parsed, and also a few attributes on the LogRecord like message and source.

Maybe we need to take some of the changes that were being made in observIQ/stanza-plugins#399? Seems like there is a regex there for messages w/ no ID, which seems to be the problem with this data.

I also wonder if the log format ends up being different for "standalone" zookeeper, as opposed to when it's run in conjunction with hbase/kafka. Seems strange that we didn't account for this in the original plugin.

@cpheps
Contributor

cpheps commented Jun 17, 2022

@BinaryFissionGames and @dmikolay

> That example parse doesn't look like it's working right; I'd expect to see the severity + timestamp parsed, and also a few attributes on the LogRecord like message and source.

I ran into this when working on the ops agent. If the node is part of a cluster, it'll have the `myid:[0-9]+` field before the severity; if it's not, that field will be missing. We'll likely need a router unless we want to get extra clever with our regexes.

Here's what I did in ops-agent: https://github.com/GoogleCloudPlatform/ops-agent/blob/master/apps/zookeeper.go#L76-L120
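
For illustration, here's a minimal Go sketch of that routing idea, with hypothetical regexes for the two line shapes seen in this thread; it is not the plugin's actual configuration or the ops-agent code:

```go
// Hypothetical sketch of the "router" idea: try a cluster-style pattern that
// expects "[myid:N]", and fall back to a standalone-style pattern when that
// field is absent. The regexes here are illustrative only.
package main

import (
	"fmt"
	"regexp"
)

var patterns = []struct {
	name string
	re   *regexp.Regexp
}{
	{"cluster", regexp.MustCompile(
		`^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) \[myid:(?P<myid>\d+)\] - (?P<severity>\w+)\s+\[(?P<thread>[^:\]]+):(?P<source>[^@\]]+)@(?P<line>\d+)\] - (?P<message>.*)$`)},
	{"standalone", regexp.MustCompile(
		`^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) (?P<severity>\w+)\s+\[(?P<thread>[^\]]+)\] (?P<source>\S+): (?P<message>.*)$`)},
}

// route returns the first matching format and its named capture groups,
// which is roughly what a router in front of two regex parsers would do.
func route(line string) (string, map[string]string) {
	for _, p := range patterns {
		m := p.re.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		fields := map[string]string{}
		for i, name := range p.re.SubexpNames() {
			if i > 0 && name != "" {
				fields[name] = m[i]
			}
		}
		return p.name, fields
	}
	return "unmatched", nil
}

func main() {
	for _, l := range []string{
		`2022-01-31 17:51:45,451 [myid:1] - INFO  [NIOWorkerThread-3:NIOServerCnxn@514] - Processing mntr command from /0:0:0:0:0:0:0:1:50284`,
		`2020-12-09 20:38:42,979 INFO  [main] server.ZooKeeperServer: Server environment:java.library.path=/usr/lib/jni`,
	} {
		format, fields := route(l)
		fmt.Println(format, fields)
	}
}
```

First match wins, so the stricter cluster pattern goes ahead of the standalone one; anything that matches neither falls through unparsed.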

@dmikolay
Contributor (Author)

After adding a router and making regex changes, here's a new example of the plugin at work.

This line in the test file:

2022-01-31 17:51:45,451 [myid:1] - INFO  [NIOWorkerThread-3:NIOServerCnxn@514] - Processing mntr command from /0:0:0:0:0:0:0:1:50284

Produced this example log:

LogRecord #0
ObservedTimestamp: 2022-06-20 19:23:17.086837 +0000 UTC
Timestamp: 2022-01-31 22:51:45.451 +0000 UTC
Severity: Info
Body: 2022-01-31 17:51:45,451 [myid:1] - INFO  [NIOWorkerThread-3:NIOServerCnxn@514] - Processing mntr command from /0:0:0:0:0:0:0:1:50284
Attributes:
     -> log.file.name: STRING(zookeeper.txt)
     -> thread: STRING(NIOWorkerThread-3)
     -> message: STRING(Processing mntr command from /0:0:0:0:0:0:0:1:50284)
     -> zookeeper_severity: STRING(INFO)
     -> source: STRING(NIOServerCnxn)
     -> line: STRING(514)
     -> timestamp: STRING(2022-01-31 17:51:45,451)
     -> myid: STRING(1)
     -> log_type: STRING(zookeeper)
Trace ID: 
Span ID: 
Flags: 0
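
For reference, a minimal Go sketch of how the remaining record fields above could be derived from the captured attributes; the severity mapping and the local-timezone interpretation (which would explain the 17:51 to 22:51 UTC shift) are assumptions, not the plugin's actual code:

```go
// Hypothetical sketch: derive the record's Timestamp and Severity from the
// "timestamp" and "zookeeper_severity" attributes captured by the regex.
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseZookeeperTime parses "2006-01-02 15:04:05,000"-style values. Treating
// the value as local time is an assumption made for this sketch; it matches
// the 5-hour shift in the example record if the host runs at UTC-5.
func parseZookeeperTime(raw string) (time.Time, error) {
	// Zookeeper's log4j layout uses a comma before the milliseconds; normalize
	// it to a period so Go's fractional-second layout syntax applies.
	normalized := strings.Replace(raw, ",", ".", 1)
	return time.ParseInLocation("2006-01-02 15:04:05.000", normalized, time.Local)
}

// severityText maps raw log4j levels onto the display values seen in the
// example record ("INFO" -> "Info", and so on).
func severityText(raw string) string {
	switch strings.ToUpper(raw) {
	case "TRACE":
		return "Trace"
	case "DEBUG":
		return "Debug"
	case "INFO":
		return "Info"
	case "WARN", "WARNING":
		return "Warn"
	case "ERROR":
		return "Error"
	case "FATAL":
		return "Fatal"
	default:
		return "Default"
	}
}

func main() {
	ts, err := parseZookeeperTime("2022-01-31 17:51:45,451")
	if err != nil {
		panic(err)
	}
	// On a host in UTC-5 this prints 2022-01-31 22:51:45.451 +0000 UTC.
	fmt.Println("Timestamp:", ts.UTC())
	fmt.Println("Severity:", severityText("INFO")) // Info
}
```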

@cpheps merged commit 9b18d7c into main on Jun 21, 2022
@cpheps deleted the zookeeper_plugin branch on June 21, 2022 12:08