
Kubernetes Metadata is missing 1.7.1 on output #3112

Closed
cboothe opened this issue Feb 23, 2021 · 9 comments

cboothe commented Feb 23, 2021

Bug Report

Describe the bug
I tried deploying multiple versions of the image to get the Kubernetes filter to work. Several versions appear to have connectivity issues; with 1.7.1, however, it reports that it can connect.

Fluent Bit v1.7.1

  • Copyright (C) 2019-2021 The Fluent Bit Authors
  • Copyright (C) 2015-2018 Treasure Data
  • Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
  • https://fluentbit.io

[2021/02/23 16:57:54] [ info] [engine] started (pid=1)
[2021/02/23 16:57:54] [ info] [storage] version=1.1.0, initializing...
[2021/02/23 16:57:54] [ info] [storage] in-memory
[2021/02/23 16:57:54] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/02/23 16:57:54] [ info] [filter:kubernetes:kubernetes.0] https=1 host=kubernetes.default.svc port=443
[2021/02/23 16:57:54] [ info] [filter:kubernetes:kubernetes.0] local POD info OK
[2021/02/23 16:57:54] [ info] [filter:kubernetes:kubernetes.0] testing connectivity with API server...
[2021/02/23 16:57:54] [ info] [filter:kubernetes:kubernetes.0] API server connectivity OK
[2021/02/23 16:57:54] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2021/02/23 16:57:54] [ info] [sp] stream processor started

To Reproduce

No Kubernetes metadata (annotations, namespace, pod name, host, etc.) appears in the log output:

{
  "log": "[1] 23 Feb 19:02:33.195 * Background saving terminated with success\n",
  "stream": "stdout",
  "time": "2021-02-23T19:02:33.195997409Z",
  "timestamp": "2021-02-23T19:02:33.195997Z"
}

Expected behavior

Kubernetes metadata (namespace, pod name, labels, annotations) attached to each log record.
Your Environment

  • Version used: 1.7.1
  • Configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-logflare.conf

  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.<namespace>.*
        Path              /var/log/containers/*<namespace>*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10

  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.<namespace>.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

  output-logflare.conf: |
    [OUTPUT]
        Name             http
        Match            kube.<namespace>.*
        tls              On
        Host             api.logflare.app
        Port             443
        URI              /logs/json?api_key=
        Format           json
        Retry_Limit      5
        json_date_format iso8601
        json_date_key    timestamp

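One detail worth double-checking in the config above: the kubernetes filter recovers the pod name by stripping Kube_Tag_Prefix from the record tag, and the tail input expands the * in Tag to the file path with slashes replaced by dots. With Tag kube.<namespace>.*, records end up tagged kube.<namespace>.var.log.containers.<file>.log, so the prefix kube.var.log.containers. may never match and the metadata lookup can come up empty. A sketch of a matching pair (keeping the <namespace> placeholder from the report, not a literal value):

    [INPUT]
        Name             tail
        Tag              kube.<namespace>.*
        Path             /var/log/containers/*<namespace>*.log

    [FILTER]
        Name             kubernetes
        Match            kube.<namespace>.*
        Kube_Tag_Prefix  kube.<namespace>.var.log.containers.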
Not sure what K8S-Logging.Exclude does

  • Environment name and version (e.g. Kubernetes? What version?): v1.19.6
  • Server type and version: Container runtime docker://19.3.13
  • Operating System and version: Debian GNU/Linux 10 (buster)
  • Filters and plugins: Kubernetes

servo1x commented Feb 24, 2021

Most likely the kubernetes metadata size is too big: #2033

Recommend setting the Buffer_Size to 0 for the kubernetes filter.
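Applied to the filter block from the original report, that suggestion would look like this (a sketch; other keys unchanged):

    [FILTER]
        Name         kubernetes
        Match        kube.<namespace>.*
        Buffer_Size  0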

Doc: https://docs.fluentbit.io/manual/pipeline/filters/kubernetes


cboothe commented Feb 24, 2021

Thanks @servo1x, that didn't make a difference, though. Is there a way to debug the filter?

bhavyalatha26 commented:

Running into the same issue. Tried with versions 1.6, 1.7.1 & 1.7.2 as well.

Config:

  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        DB                /var/log/flb_kube.db
        Parser            docker
        Docker_Mode       On
        Mem_Buf_Limit     50MB
        Skip_Long_Lines   On
        Refresh_Interval  10

  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Regex_Parser        kube-custom
        Buffer_Size         0
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

    [FILTER]
        Name aws
        Match *
        imds_version v1
        az true
        ec2_instance_id true
        ec2_instance_type true
        private_ip true
        ami_id true
        account_id true
        hostname true
        vpc_id true

  output-elasticsearch.conf: |
    [OUTPUT]
        Name            es
        Match           *
        Host            ${FLUENT_ELASTICSEARCH_HOST}
        Port            ${FLUENT_ELASTICSEARCH_PORT}
        Logstash_Format Off
        Generate_ID     On
        Tag_Key         aws_agent
        Include_Tag_Key On 
        tls             On
        Retry_Limit     False
        HTTP_User       ${FLUENT_ELASTICSEARCH_USERNAME}
        HTTP_Passwd     ${FLUENT_ELASTICSEARCH_PASSWORD}
        Index           ${FLUENT_ELASTICSEARCH_INDEX}

The document on ES looks like this:

{
  "_index": "#####",
  "_type": "_doc",
  "_id": "###",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2021-03-08T09:44:18.825Z",
    "log": "some message",
    "time": "2021-03-08T09:44:18.825512135Z",
    "az": "##-####-##",
    "ec2_instance_id": "i-#####",
    "ec2_instance_type": "#####",
    "private_ip": "#.#.#.#",
    "vpc_id": "vpc-######",
    "ami_id": "ami-#########",
    "account_id": "#####",
    "hostname": "ip-#####"
  },
  "fields": {
    "@timestamp": [
      "2021-03-08T09:44:18.825Z"
    ],
    "time": [
      "2021-03-08T09:44:18.825Z"
    ]
  }
 }


cboothe commented Mar 9, 2021

Is there a way to troubleshoot/debug this? @edsiper


cboothe commented Mar 30, 2021

Still an issue with log_level debug on version 1.7.2

turbotankist commented:

    [SERVICE]
        Log_Level     debug

should work. What issue are you seeing with it?


cboothe commented Apr 19, 2021

Just tried 1.7.3, @turbotankist. I can't see anything wrong in the debug output.

[2021/04/19 12:50:42] [ info] [filter:kubernetes:kubernetes.0] https=1 host=kubernetes.default.svc port=443
[2021/04/19 12:50:42] [ info] [filter:kubernetes:kubernetes.0] local POD info OK
[2021/04/19 12:50:42] [ info] [filter:kubernetes:kubernetes.0] testing connectivity with API server...
[2021/04/19 12:50:42] [debug] [filter:kubernetes:kubernetes.0] Send out request to API Server for pods information
[2021/04/19 12:50:42] [debug] [http_client] not using http_proxy for header
[2021/04/19 12:50:42] [debug] [http_client] header=GET /api/v1/namespaces/logging/pods/fluent-bit-52tth HTTP/1.1
Host: kubernetes.default.svc
Content-Length: 0
User-Agent: Fluent-Bit
Connection: close
Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6I

[2021/04/19 12:50:42] [debug] [http_client] server kubernetes.default.svc:443 will close connection #29
[2021/04/19 12:50:42] [debug] [filter:kubernetes:kubernetes.0] Request (ns=logging, pod=fluent-bit-52tth) http_do=0, HTTP Status: 200
[2021/04/19 12:50:42] [ info] [filter:kubernetes:kubernetes.0] connectivity OK
[2021/04/19 12:50:42] [debug] [http:http.0] created event channels: read=29 write=30

Output received:

{
  "log": "2021-04-19T13:00:18.781043671Z stdout F $ node server.js",
  "timestamp": "2021-04-19T13:00:19.305915Z"
}
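For isolating where the metadata is lost, one option (a debugging sketch, not part of the reported setup) is to add a temporary stdout output alongside the existing one, so records can be inspected locally right after the filter runs:

    [OUTPUT]
        Name    stdout
        Match   kube.*
        Format  json_lines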

@github-actions

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the Stale label May 22, 2021
@github-actions

This issue was closed because it has been stalled for 5 days with no activity.
