This repository has been archived by the owner on May 6, 2020. It is now read-only.

Fluentd only ships logs from its own namespace. #59

Open

FrederikNJS opened this issue Oct 14, 2016 · 17 comments

@FrederikNJS

FrederikNJS commented Oct 14, 2016

Hi Deis,

I have been trying to run the deis/fluentd:v2.4.2 image as part of my Kubernetes cluster, as I wanted better tagging than the official fluentd container provided. The container seems to ship the logs to Elasticsearch just fine, but unfortunately it only grabs the logs for the namespace it's deployed in.

I decided to start it up in the kube-system namespace, as it seemed like a "system" service, so now I can only see the logs from containers in the kube-system namespace. Is this how deis/fluentd works, or do I need to configure something? I can see that the configs in the repository use the deis namespace.

The daemonset I created looks like this:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-elasticsearch
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-elasticsearch
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: deis/fluentd:v2.4.2
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        env:
          - name: DISABLE_DEIS_OUTPUT
            value: "true"
          - name: ELASTICSEARCH_HOST
            value: elasticsearch-logging
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

The fluentd logs mention my containers, for example:

2016-10-14 11:29:42 +0000 [info]: following tail of /var/log/containers/hello-kube-1544476892-ygjnd_default_hello-kube-de80e8bb19cebf6ca31935d4e9a692076212f82cd5a54c794c26b3ed6450a845.log
2016-10-14 11:29:42 +0000 [info]: following tail of /var/log/containers/hello-kube-1544476892-ygjnd_default_POD-59f14681b2e35862a76d92ccc7bd5f4639f465a41cef5d185a9faf85822c691b.log
@jchauncey
Member

I am actually working on this plugin right now. It definitely was not intended for use in an environment like Kubernetes. If I got you a custom image to try, would you be willing to replace what you have with it and see if it works for you?

@FrederikNJS
Author

Sure, I would love to help out!

@jchauncey
Member

k let me get a pull request set up and I'll get you an image out of the deisci registry to try out. stay tuned =)

@jchauncey
Member

jchauncey commented Oct 14, 2016

Alright, so I'm not sure if this will fix your original problem (I never saw that problem yesterday), but this new image will allow you to do the following things:

  • You can now specify how you want to index your data. Before, we lumped everything into a bucket called logstash. Now you can use the map of data we get from fluentd to create your index name. I am using kubernetes.namespace_name in my personal cluster. (I will provide a sample configuration below.)
  • You can now also provide a custom index name and still use the logstash format, so you can do something like myapp_namespace-YYYY-MM-DD. This allows you to easily archive old log data.

Here is the image you can use - quay.io/deisci/fluentd:git-2aec7b0 - please let me know if you have any issues. To install this image you can do one of the following:

  • Edit your generate_params.toml and supply the tag and org in the fluentd section. Then run helmc generate
  • Edit the deis-logger-fluentd-daemon.yaml file in the manifests directory of the chart you used to install Workflow.

Then you just need to do kubectl create -f manifests/deis-logger-fluentd-daemon.yaml
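
For the first option, the fluentd section of generate_params.toml would look roughly like this (a hedged sketch based on the instructions above; the exact section header and key names may vary by chart version):

  [fluentd]
  # org and tag select the image, i.e. quay.io/<org>/fluentd:<tag>
  org = "deisci"
  tag = "git-2aec7b0"

Here is the sample configuration I mentioned above: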

        - name: "ELASTICSEARCH_LOGSTASH_FORMAT"
          value: "true"
        - name: "FLUENTD_FLUSH_INTERVAL"
          value: "10s"
        - name: TARGET_INDEX_KEY
          value: kubernetes.namespace_name
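
For context, these environment variables presumably render into fluent-plugin-elasticsearch output options along these lines (a hedged sketch; the actual template lives in the image's startup scripts):

  <match **>
    @type elasticsearch
    # from ELASTICSEARCH_HOST
    host elasticsearch-logging
    # from ELASTICSEARCH_LOGSTASH_FORMAT
    logstash_format true
    # from FLUENTD_FLUSH_INTERVAL
    flush_interval 10s
    # from TARGET_INDEX_KEY: use this record field's value as the index name
    target_index_key kubernetes.namespace_name
  </match>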

@jchauncey
Member

I've done a slight refactoring if you want to try this image instead - quay.io/deisci/fluentd:git-ad7196d

@FrederikNJS
Author

FrederikNJS commented Oct 14, 2016

I just tried git-ad7196d out, and it worked beautifully. Now both the default and kube-system namespaces come through, and everything is tagged nicely. Thank you!

@jchauncey
Member

yup. we'll get this merged and a release cut shortly. waiting on some reviewers =)

@thenayr

thenayr commented Oct 18, 2016

Just chiming in on this: @jchauncey, thank you for the recent changes. This Fluentd image is much improved over the cluster default image that ships with Kubernetes.

I especially like the new TARGET_INDEX_KEY functionality!

I'm testing this out in my clusters right now using the canary image.

@thenayr

thenayr commented Oct 18, 2016

After a few hours of testing with this, a few bits of feedback:

It doesn't seem possible to get the following index format: custom-name-YYYY-MM-DD. I believe I've tried every combination of ELASTICSEARCH_LOGSTASH_FORMAT, ELASTICSEARCH_LOGSTASH_PREFIX and ELASTICSEARCH_INDEX_NAME.

Your example above with TARGET_INDEX_KEY does result in the correct index name (default-YYYY-MM-DD), but if I don't specify TARGET_INDEX_KEY, I always get stuck with fluentd as my index name, or with whatever I specify as ELASTICSEARCH_INDEX_NAME but with no YYYY-MM-DD suffix.

Even just trying to get the default logstash-YYYY-MM-DD by setting ELASTICSEARCH_LOGSTASH_FORMAT to true doesn't work for me.

Also, it would be nice to be able to customize the reload_connections value, as AWS Elasticsearch Service has issues if it is set to true.
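
At the plugin level this maps to the fluent-plugin-elasticsearch option below; the image would just need an env var to surface it (a hedged sketch, not the image's actual config):

  <match **>
    @type elasticsearch
    # AWS Elasticsearch Service sits behind a single endpoint and does not
    # expose individual node addresses, so connection reloading/sniffing fails:
    reload_connections false
  </match>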

@jchauncey
Member

k the PR is still open so I can make some small changes if you want.

You can see the logic here for writing out the index name - https://github.com/deis/fluentd/pull/60/files#diff-5198419c579f23c85b30a0fed99ddee9R323

If you want to write all the logs to one index, you can set ELASTICSEARCH_LOGSTASH_PREFIX while also setting ELASTICSEARCH_LOGSTASH_FORMAT.

You can find these changes in - quay.io/deisci/fluentd:git-a89dfce

@thenayr

thenayr commented Oct 18, 2016

All of the feedback was from the image you just specified: quay.io/deisci/fluentd:git-a89dfce

The issue I'm having is that

- name: ELASTICSEARCH_LOGSTASH_FORMAT
  value: "true"
- name: ELASTICSEARCH_LOGSTASH_PREFIX
  value: "abc123"

Results in writing to the index name fluentd.

Not sure why this is happening, as the logic for the index name you pointed out seems to account for this correctly.

@jchauncey
Member

That image is fairly old, so I would try deploying the image I specified above, which has some of the newer environment variables.

@thenayr

thenayr commented Oct 18, 2016

I'm slightly confused. The first image you referenced a few days ago was git-ad7196d; I noticed that was a different commit and used the commit ID of the latest changes, git-a89dfce (also the one you just specified in the previous comment).

I'm pretty certain this is the correct image, git-a89dfce. I've also exec'd into the pod to make sure the configuration looked sound, and it did.

For clarity's sake, here is more of my daemonset spec:

spec:
      containers:
      - name: fluentd-logging
        image: quay.io/deisci/fluentd:git-a89dfce
        env:
        - name: KUBERNETES_VERIFY_SSL
          value: "false"
        - name: ELASTICSEARCH_HOST
          value: "elasticsearch-logging"
        - name: ELASTICSEARCH_LOGSTASH_FORMAT
          value: "true"
        - name: ELASTICSEARCH_LOGSTASH_PREFIX
          value: "abc123"
@jchauncey
Member

Ah ok, yeah, if you are using git-a89dfce then you have the latest changes. =) Just making sure.

Let me see if I can replicate the behavior you are seeing.

@thenayr

thenayr commented Oct 18, 2016

After some testing, this appears to be the offending line - https://github.com/deis/fluentd/pull/60/files#diff-839dc2adb331ca2e1acf44b97bdf9796R40

Removing that line, I can properly set a LOGSTASH_PREFIX and have it take effect.

So it seems that even if we don't set a TARGET_INDEX_KEY, it still evaluates as "" and causes the target_index to fall back to the default fluentd.
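
In other words, if the config template unconditionally emits the option, an unset variable still renders an empty-but-present key (a hedged illustration of the pattern; the actual template is in the PR):

  # Rendered even when TARGET_INDEX_KEY is unset:
  target_index_key "#{ENV['TARGET_INDEX_KEY']}"
  # => target_index_key "", which takes precedence over the
  #    logstash_format/index_name logic and falls back to the default index.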

@jchauncey
Member

hrm k their configuration is terrible and makes it really hard to have a shell script set up those values. I will have to come up with something to help with this.
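
One possible shape for that is to emit the option only when the variable is set, e.g. in the image's startup script (a hedged sketch; the file path and script are illustrative, not the repo's actual entrypoint):

  # Append target_index_key to the output config only when provided:
  if [ -n "$TARGET_INDEX_KEY" ]; then
    echo "  target_index_key ${TARGET_INDEX_KEY}" >> /etc/fluentd/conf.d/output.conf
  fi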

@Cryptophobia

This issue was moved to teamhephy/fluentd#8
