> [!CAUTION]
> README.md is generated from templates/README.md.erb.
> See also the Docker Hub tags page: https://hub.docker.com/r/fluent/fluentd-kubernetes-daemonset/tags
> [!TIP]
> Since v1.17.0, the container image build process has been migrated from automated builds on hub.docker.com to GitHub Actions, because hub.docker.com limited the number of automated builds. There is now no limit on the number of build pipelines.
Note that there were some restrictions on shipping daemonset images for v1.16.5 or older:
- `papertrail`, `syslog`: images (x86_64/arm64) won't be published anymore
- `logentries`, `loggly`, `logzio`, `s3`: arm64 images won't be published anymore (x86_64 only supported)

If you want to use the unpublished images above, build them yourself. The Dockerfiles are still maintained in this repository.
The Dockerfiles for images such as `s3` and `elasticsearch8` are maintained in this repository. For example:

```
docker pull fluent/fluentd-kubernetes-daemonset:v1.17.1-debian-elasticsearch8-2.0
```
You can also use the `v1-debian-PLUGIN` tag (e.g. `v1-debian-elasticsearch`) to refer to the latest v1 image. In production, pinning an exact tag is better to avoid unexpected updates.
See Docker Hub's tags page for older tags.
v0.12 development has ended. These images are no longer updated.
- `v0.12-debian-elasticsearch` (archived-image/v0.12/debian-elasticsearch/Dockerfile)
- `v0.12-debian-loggly` (archived-image/v0.12/debian-loggly/Dockerfile)
- `v0.12-debian-logentries` (archived-image/v0.12/debian-logentries/Dockerfile)
- `v0.12-debian-cloudwatch` (archived-image/v0.12/debian-cloudwatch/Dockerfile)
- `v0.12-debian-stackdriver` (archived-image/v0.12/debian-stackdriver/Dockerfile)
- `v0.12-debian-s3` (archived-image/v0.12/debian-s3/Dockerfile)
- `v0.12-debian-gcs` (archived-image/v0.12/debian-gcs/Dockerfile)
- `v0.12-debian-papertrail` (archived-image/v0.12/debian-papertrail/Dockerfile)
- `v0.12-debian-syslog` (archived-image/v0.12/debian-syslog/Dockerfile)
- `v0.12-debian-graylog` (archived-image/v0.12/debian-graylog/Dockerfile)
- `v0.12-debian-logzio` (archived-image/v0.12/debian-logzio/Dockerfile)
- `v0.12-debian-kafka` (archived-image/v0.12/debian-kafka/Dockerfile)
- `v0.12-debian-splunkhec` (archived-image/v0.12/debian-splunkhec/Dockerfile)
- `v0.12-debian-kinesis` (archived-image/v0.12/debian-kinesis/Dockerfile)
- `v0.12-alpine-elasticsearch` (archived-image/v0.12/alpine-elasticsearch/Dockerfile)
- `v0.12-alpine-loggly` (archived-image/v0.12/alpine-loggly/Dockerfile)
- `v0.12-alpine-logentries` (archived-image/v0.12/alpine-logentries/Dockerfile)
- `v0.12-alpine-cloudwatch` (archived-image/v0.12/alpine-cloudwatch/Dockerfile)
- `v0.12-alpine-stackdriver` (archived-image/v0.12/alpine-stackdriver/Dockerfile)
- `v0.12-alpine-s3` (archived-image/v0.12/alpine-s3/Dockerfile)
- `v0.12-alpine-gcs` (archived-image/v0.12/alpine-gcs/Dockerfile)
- `v0.12-alpine-papertrail` (archived-image/v0.12/alpine-papertrail/Dockerfile)
- `v0.12-alpine-syslog` (archived-image/v0.12/alpine-syslog/Dockerfile)
- `v0.12-alpine-graylog` (archived-image/v0.12/alpine-graylog/Dockerfile)
- `v0.12-alpine-logzio` (archived-image/v0.12/alpine-logzio/Dockerfile)
- `v0.12-alpine-kafka` (archived-image/v0.12/alpine-kafka/Dockerfile)
- `v0.12-alpine-kinesis` (archived-image/v0.12/alpine-kinesis/Dockerfile)
- `v0.12-alpine-splunkhec` (archived-image/v0.12/alpine-splunkhec/Dockerfile)
Fluentd is an open source data collector, which lets you unify the data collection and consumption for a better use and understanding of data.
Fluentd versioning is as follows:
| Series | Description |
|---|---|
| v1.x | Current stable |
| v0.12 | Old stable, no longer updated |
The default YAML uses latest v1 images like `fluent/fluentd-kubernetes-daemonset:v1-debian-kafka`. If you want to avoid unexpected image updates, specify an exact version for `image`, like `fluent/fluentd-kubernetes-daemonset:v1.8.0-debian-kafka-1.0`.
This is for v0.12 images.
In Kubernetes with the default setting, fluentd needs root permission to read logs in `/var/log` and write `pos_file` to `/var/log`. To avoid permission errors, set the `FLUENT_UID` environment variable to `0` in your Kubernetes configuration.
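A minimal sketch of setting this in the DaemonSet's pod spec (the container name here is illustrative):

```yaml
spec:
  template:
    spec:
      containers:
        - name: fluentd
          env:
            - name: FLUENT_UID
              value: "0"
```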
These images have a default configuration and support some environment variables as parameters, but that doesn't always fit your case. If you want to use your own configuration, use the ConfigMap feature.
Each image has the following configuration files:

- fluent.conf: destination settings (Elasticsearch, kafka, etc.)
- kubernetes.conf: k8s-specific settings; `tail` input for log files and the `kubernetes_metadata` filter
- tail_container_parse.conf: parser setting for `/var/log/containers/*.log`. See also the "Use CRI parser for containerd/cri-o logs" section
- prometheus.conf: prometheus plugin for fluentd monitoring
- systemd.conf: systemd plugin for collecting systemd-journal logs. See also the "Disable systemd input" section
Overwrite the conf file via ConfigMap. See also several examples:
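As a minimal sketch (the ConfigMap and volume names below are assumptions, not taken from this repository's manifests), you can ship a custom `fluent.conf` and mount it over the image's default under `/fluentd/etc`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    # custom destination settings here
---
# DaemonSet pod spec fragment:
spec:
  template:
    spec:
      containers:
        - name: fluentd
          volumeMounts:
            - name: config-volume
              mountPath: /fluentd/etc/fluent.conf
              subPath: fluent.conf
      volumes:
        - name: config-volume
          configMap:
            name: fluentd-config
```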
This feature is available since v1.12.0-xxx-1.1.
By default, these images use the `json` parser for `/var/log/containers/` files because docker generates JSON-formatted logs. On the other hand, containerd/cri-o use a different log format. To parse such logs, you need to use the `cri` parser instead. You can use the `cri` parser by overwriting `tail_container_parse.conf` via ConfigMap.
```
# configuration example
<parse>
  @type cri
</parse>
```
See also CRI parser README
You can update the default path for the container logs, i.e. `/var/log/containers/*.log`, and you can also add multiple paths as described in the fluentd `in_tail` documentation: https://docs.fluentd.org/input/tail#path
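For example, assuming the `FLUENT_CONTAINER_TAIL_PATH` environment variable these images read for the `tail` input's `path` (the extra directory below is illustrative):

```yaml
env:
  - name: FLUENT_CONTAINER_TAIL_PATH
    value: "/var/log/containers/*.log,/var/log/extra/*.log"
```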
Available in v1.9.3 or later images.
You can exclude container logs from `/var/log/containers/` with `FLUENT_CONTAINER_TAIL_EXCLUDE_PATH`. If you have trouble with a specific log, use this environment variable, e.g. `["/var/log/containers/logname-*"]`.
- `exclude_path` parameter documentation: https://docs.fluentd.org/input/tail#exclude_path
- Fluentd log issue with backslash: fluent/fluentd#2545
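For example, in the DaemonSet's container spec (using the example pattern above):

```yaml
env:
  - name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
    value: '["/var/log/containers/logname-*"]'
```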
Since v1.17.0-1.3/v1.16.5-1.3, the jemalloc memory allocator is disabled by default, because the combination of the systemd plugin and the jemalloc memory allocator causes a crash bug (typically `free(): invalid pointer`).
If you don't use the systemd plugin at all, you can enable the jemalloc memory allocator explicitly via the `env:` parameter.
```yaml
env:
  # ...
  - name: LD_PRELOAD
    value: "/usr/lib/libjemalloc.so.2"
```
If you don't set up systemd in the container, fluentd shows the following messages with the default configuration:

```
[warn]: #0 [in_systemd_bootkube] Systemd::JournalError: No such file or directory retrying in 1s
[warn]: #0 [in_systemd_kubelet] Systemd::JournalError: No such file or directory retrying in 1s
[warn]: #0 [in_systemd_docker] Systemd::JournalError: No such file or directory retrying in 1s
```
You can suppress these messages by setting the `FLUENTD_SYSTEMD_CONF` environment variable to `disable` in your Kubernetes configuration.
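For example, in the DaemonSet's container spec:

```yaml
env:
  - name: FLUENTD_SYSTEMD_CONF
    value: "disable"
```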
By default, the latest images launch the `prometheus` plugin to monitor fluentd.
You can disable the prometheus input plugin by setting the `FLUENTD_PROMETHEUS_CONF` environment variable to `disable` in your Kubernetes configuration.
This is for older images; the latest elasticsearch images don't use sed.
For historical reasons, the elasticsearch image executes a `sed` command during the startup phase when `FLUENT_ELASTICSEARCH_USER` or `FLUENT_ELASTICSEARCH_PASSWORD` is specified. This sometimes causes a problem with read-only mounts.
To avoid this problem, set the `FLUENT_ELASTICSEARCH_SED_DISABLE` environment variable to "true" in your Kubernetes configuration.
This daemonset setting mounts `/var/log` and uses the `fluentd` service account, so you need to run the containers as privileged containers.
Here is a command example:

```
oc project kube-system
oc create -f https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch-rbac.yaml
oc adm policy add-scc-to-user privileged -z fluentd
oc patch ds fluentd -p "spec:
  template:
    spec:
      containers:
        - name: fluentd
          securityContext:
            privileged: true"
oc delete pod -l k8s-app=fluentd-logging
```
This is from nekop's Japanese article.
When you want to run multiple fluentd instances, for example to push to multiple destinations like Elasticsearch + S3, you need to use `FLUENT_POS_EXTRA_DIR` to add an additional directory for pos files.
Otherwise the instances share the same pos file, and you may find some logs on only one destination.
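For example (the extra pos directory path below is illustrative; it must be writable by fluentd):

```yaml
env:
  - name: FLUENT_POS_EXTRA_DIR
    value: "/var/log/fluentd-pos-s3"
```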
The zookeeper gem doesn't work on Debian 10, so the kafka image doesn't include it.
The maintainers don't have k8s experience on Windows. Some users have created k8s daemonsets for Windows; please check them out.
Using the debian-kafka2/debian-kafka2-arm64 images is better than using the debian-kafka/debian-kafka-arm64 images, because the debian-kafka2 images use the `out_kafka2` plugin while the debian-kafka images use the deprecated `out_kafka_buffered` plugin.
Some images are contributed by users. If you have a problem with or question about the following images, ask the contributors:
- azureblob : @elsesiy
- papertrail : @alexouzounis
- kafka : @erhudy
- graylog : @rtnpro
- gcs : @andor-pierdelacabeza
- Amazon Kinesis : @shiftky
- logz.io : @SaMnCo / @jamielennox
- splunkhec: @FutureSharks
Currently, we don't accept new destination requests without a contribution. See fluent#293
Kubernetes Logging with Fluentd
We can't see comments on Docker Hub, so don't use them for reporting issues or asking questions.
If you have any problems with or questions about this image, please contact us through a GitHub issue.
Update the `templates` files instead of the `docker-image` files; the `docker-image` files are automatically generated from `templates`.
Note: This file is generated from templates/README.md.erb