FLG (Fluentd, Loki, Grafana) stack for apps monitoring
In my previous story, I discussed the PLG (Promtail, Loki, Grafana) stack for app monitoring. You can read that article here.
In this post, I will discuss an alternative: the FLG stack, which refers to Fluentd, Loki, and Grafana, for app log monitoring. Unlike the PLG stack, here we need Fluentd as the logging driver for all the Docker apps.
Fluentd is a logging driver supported natively by Docker. Docker's default logging driver is json-file, but you can configure a different one. All Docker-supported logging drivers can be found here.
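For example, a single container can be pointed at a Fluentd endpoint directly from the CLI. This is only a minimal sketch; the address, tag pattern, and image are placeholders for your own setup.
docker run --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="docker.{{.Name}}" \
  nginx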
All the Fluentd-related details can be found here. Fluentd is a CNCF (Cloud Native Computing Foundation) project. It runs as a daemon, collects all the logs (access/error), and ships them to remote services; it can also dump logs to AWS S3 or other storage. It is a lightweight application that can also send alerts and events to remote systems, which makes it a great companion to Loki in conjunction with Grafana.
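As an illustration of the S3 capability, a Fluentd output section could look roughly like the sketch below. This assumes the fluent-plugin-s3 output plugin is installed and that AWS credentials come from the environment or an instance profile; the bucket, region, and match pattern are placeholders, not part of the setup used in this post.
<match app.**>
  @type s3
  s3_bucket my-log-bucket
  s3_region us-east-1
  path logs/
</match>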
I tried the FLG stack along with the other services (djangoapp, nodeapp, and nginx) in the same docker-compose file. Below is the configuration file for the log collector. Note that the collector container in this setup is the grafana/fluent-bit-plugin-loki image, so the file uses the Fluent Bit format: it accepts the Fluentd forward protocol on port 24224 and ships the logs to Loki.
[INPUT]
    Name        forward
    Listen      0.0.0.0
    Port        24224

[OUTPUT]
    Name        grafana-loki
    Match       *
    Url         ${LOKI_URL}
    RemoveKeys  source
    Labels      {job="fluent-bit"}
    LabelKeys   container_name
    BatchWait   1s
    BatchSize   1001024
    LineFormat  json
    LogLevel    info
The docker-compose file will look like the one below.
version: '3.2'

services:
  postgres:
    container_name: postgres
    image: postgres:12
    env_file:
      - "secrets/postgres.env"
    volumes:
      - postgres-data:/var/lib/postgresql/data/
    ports:
      - '5432:5432'
    restart: always

  nginx:
    container_name: nginx
    image: nginx-private-registry-image
    ports:
      - 80:80
    restart: always
    depends_on:
      - djangoapp
      - nodeapp
    logging:
      driver: fluentd
      options:
        fluentd-address: host.docker.internal:24224

  djangoapp:
    container_name: djangoapp
    image: django-private-registry-image
    ports:
      - '8087:8087'
    restart: always
    depends_on:
      - postgres
    logging:
      driver: fluentd
      options:
        fluentd-address: host.docker.internal:24224

  nodeapp:
    container_name: nodeapp
    image: node-private-registry-image
    ports:
      - '1234:1234'
    restart: always
    logging:
      driver: fluentd
      options:
        fluentd-address: host.docker.internal:24224

  loki:
    container_name: loki
    image: grafana/loki:2.0.0
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    volumes:
      - ./loki/config.yaml:/etc/loki/local-config.yaml

  grafana:
    container_name: grafana
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./grafana/provisioning/datasources/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
    depends_on:
      - loki
    logging:
      driver: fluentd
      options:
        fluentd-address: host.docker.internal:24224

  fluent-bit:
    image: grafana/fluent-bit-plugin-loki:latest
    container_name: fluent-bit
    environment:
      - LOKI_URL=http://loki:3100/loki/api/v1/push
    volumes:
      - ./fluent-bit/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
    ports:
      - "24224:24224"
      - "24224:24224/udp"

volumes:
  postgres-data:
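The grafana service above mounts a datasource provisioning file. A minimal sketch of ./grafana/provisioning/datasources/datasource.yml, assuming Loki is reachable at the loki service name on port 3100 as in the compose file, could look like this:
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true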
Once we run docker-compose up -d, it starts the application services alongside the FLG stack. The fluent-bit collector starts listening on port 24224, and since the other services are configured with the fluentd logging driver, the apps start sending their logs to it. The collector forwards the logs to Loki, which in turn can be displayed in Grafana. We can see the app logs in Grafana as below.
The collector also sends additional labels such as container_name, image_id, etc., as configured in the config file. We can see these labels in the Loki log browser as well.
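With the container_name label in place, logs can be filtered in Grafana's Explore view with a simple LogQL query. For example, assuming the djangoapp container name from the compose file above:
{container_name="djangoapp"} |= "error"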
Unlike the PLG stack, the FLG stack has some enhanced features; for example, we do not need to write custom regexes to add labels to the logs, since the collector config itself is enough for configuring labels. There is, however, one issue you might run into, which is the error below:
ERROR: for <service_name> Cannot start service <service_name>: failed to initialize logging driver: dial tcp 192.168.65.2:24224: connect: connection refused
This might be because the service is not able to reach the logging endpoint on the internal Docker network and times out. To resolve this, you can set the environment variables below and then run docker-compose up -d again, which might fix the issue.
export DOCKER_CLIENT_TIMEOUT=120
export COMPOSE_HTTP_TIMEOUT=120
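Once the stack comes up cleanly, a quick way to sanity-check the pipeline (assuming the default ports from the compose file above) is to tail the collector's own logs and hit Loki's readiness endpoint:
docker-compose logs -f fluent-bit
curl http://localhost:3100/ready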
Please let me know your thoughts; any suggestions are welcome. Thanks!