Generate and track metrics for Flask API applications using Prometheus and Grafana

The code for this entire implementation can be found here:

Flask is a very popular lightweight framework for writing web and web service applications in Python. In this blog post, I’m going to talk about how to monitor metrics on a Flask RESTful web service API application using Prometheus and Grafana. We’ll be tying it all together using docker-compose so that we can run everything using a single command, in an isolated Docker network.

Prometheus is a time-series database that is extremely popular as a metrics and monitoring database, especially with Kubernetes. Prometheus is really cool because it is designed to scrape metrics from your application, instead of your application having to actively push metrics to it. Coupled with Grafana, this stack turns into a powerful metrics tracking/monitoring tool, used in applications the world over.

To couple Flask with Prometheus and Grafana, we’re going to use the invaluable prometheus_flask_exporter library. This library allows us to create a /metrics endpoint for Prometheus to scrape with useful metrics regarding endpoint access, such as time taken to generate each response, CPU metrics, and so on.

The first thing we need to do in order to set up is to create our Flask app. Here’s a really simple one with the exporter library included:

import logging

from flask import Flask
from flask import jsonify
from prometheus_flask_exporter import PrometheusMetrics

logging.basicConfig(level=logging.INFO)
logging.info("Setting LOGLEVEL to INFO")

api = Flask(__name__)
metrics = PrometheusMetrics(api)
metrics.info("app_info", "App Info, this can be anything you want", version="1.0.0")

@api.route("/flask-prometheus-grafana-example/")
def hello():
    return jsonify(say_hello())

def say_hello():
    return {"message": "hello"}

This code just returns a “hello” message when you access the /flask-prometheus-grafana-example endpoint. The important part here is the integration of the prometheus_flask_exporter library. All you have to do is initialize a metrics object using metrics = PrometheusMetrics(your_app) to get it working. After that, it will automatically start exporting metrics about your endpoints at your application’s /metrics endpoint. If you go to your app’s /metrics endpoint after running it, you’ll be greeted with something like this:
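The output is the standard Prometheus text exposition format. Abridged, it looks roughly like the following (the exact metric names and labels depend on your prometheus_flask_exporter version, so treat this as a sketch):

```
# HELP flask_http_request_duration_seconds Flask HTTP request duration in seconds
# TYPE flask_http_request_duration_seconds histogram
flask_http_request_duration_seconds_count{method="GET",path="/flask-prometheus-grafana-example/",status="200"} 1.0
# HELP flask_http_request_total Total number of HTTP requests
# TYPE flask_http_request_total counter
flask_http_request_total{method="GET",status="200"} 1.0
# HELP app_info App Info, this can be anything you want
# TYPE app_info gauge
app_info{version="1.0.0"} 1.0
```

Note that the app_info gauge at the bottom is the one we registered with metrics.info() in the code above.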

Now to set up Prometheus and Grafana. For Prometheus, you need a prometheus.yml file, which would look something like this:

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['example-prometheus:9090']

  - job_name: 'flask-api'
    scrape_interval: 5s
    static_configs:
      - targets: ['flask-api:5000']

In this example, we see that Prometheus is watching two endpoints: itself, example-prometheus:9090, and the Flask API, flask-api:5000. These names are arbitrarily set inside the docker-compose config file, which we will get to later.
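Once the stack is up, a quick way to sanity-check the flask-api job is to run a PromQL query in the Prometheus UI (on port 9090) against the request counter the exporter publishes. This is a sketch; the metric name is prometheus_flask_exporter’s default and may differ in your version:

```
# Per-second rate of requests to the Flask API, averaged over the last minute
rate(flask_http_request_total{job="flask-api"}[1m])
```

If the graph stays empty, check the Targets page in the Prometheus UI to confirm the flask-api target is up.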

For Grafana, we need a datasource.yml file:

# config file version
apiVersion: 1

# list of datasources that should be deleted from the database
deleteDatasources:
  - name: Prometheus
    orgId: 1

# list of datasources to insert/update depending
# on what's available in the database
datasources:
  # <string, required> name of the datasource. Required
  - name: Prometheus
    # <string, required> datasource type. Required
    type: prometheus
    # <string, required> access mode. direct or proxy. Required
    access: proxy
    # <int> org id. will default to orgId 1 if not specified
    orgId: 1
    # <string> url
    url: http://example-prometheus:9090
    # <string> database password, if used
    # <string> database user, if used
    # <string> database name, if used
    # <bool> enable/disable basic auth
    basicAuth: false
    # <string> basic auth username, if used
    # <string> basic auth password, if used
    # <bool> enable/disable with credentials headers
    # <bool> mark as default datasource. Max one per org
    isDefault: true
    # <map> fields that will be converted to json and stored in json_data
    jsonData:
      graphiteVersion: "1.1"
      tlsAuth: false
      tlsAuthWithCACert: false
    # <string> json object of data that will be encrypted.
    secureJsonData:
      tlsCACert: "..."
      tlsClientCert: "..."
      tlsClientKey: "..."
    version: 1
    # <bool> allow users to edit datasources from the UI.
    editable: true

In this file, we are defining the datasource url, which is likewise derived from the name of the Prometheus container on the Docker network via the docker-compose file.

Finally, we have a config.monitoring file, which sets Grafana’s admin password through an environment variable:

GF_SECURITY_ADMIN_PASSWORD=pass@123
This basically means we’ll be logging in to our Grafana dashboard using username: admin and password: pass@123.

Next, we’re going to load this all up using a Docker Compose file:

version: "3.5"

services:
  flask-api:
    build:
      context: ./api
    restart: unless-stopped
    container_name: flask-api
    image: example-flask-api
    ports:
      - "5000:5000"
    networks:
      - example-network

  example-prometheus:
    image: prom/prometheus:latest
    restart: unless-stopped
    container_name: example-prometheus
    ports:
      - 9090:9090
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    networks:
      - example-network

  example-grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
    user: "472"
    container_name: example-grafana
    depends_on:
      - example-prometheus
    ports:
      - 3000:3000
    volumes:
      - ./monitoring/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
    env_file:
      - ./monitoring/config.monitoring
    networks:
      - example-network

networks:
  example-network:
    name: example-network
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet:

Note that we’re creating our own Docker network and putting all our applications on it, which allows them to talk to each other. (The behavior would be the same if you didn’t specify a network at all, since Compose creates a default one.) Also important to note is that I am using a WSGI server to run the Flask application.
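For completeness, here’s a rough sketch of what the api container’s Dockerfile might look like. It assumes gunicorn as the WSGI server and the app living in app.py with the Flask object named api; both of those names are assumptions, since the post doesn’t show this file:

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Bind on 0.0.0.0:5000 so the container port matches the mapping in docker-compose
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:api"]
```

With something like this in ./api, a single docker-compose up --build brings up all three containers on the shared network.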

Once Grafana is up, you should be able to log in and configure Prometheus as a datasource:

Add Prometheus as a datasource for Grafana

Once that’s done, you can use the example dashboard from the creator of the prometheus_flask_exporter library (use Import->JSON) which can be found here:

This gives you a cool dashboard like this:

Grafana dashboard for Flask metrics

As you can see, this gives us a killer implementation of Prometheus + Grafana to monitor a Flask web service application with minimum effort.

I wish everything in software development was this easy.