Part 4: Monitoring Raspberry Pi 4 performance in real time
Build a Chronograf dashboard on top of InfluxDB and Telegraf using Docker Compose
This is part 4 of the series Hands on Development with Raspberry Pi 4, whose primary goal is to set up a workspace on a high-performance and cost-effective setup, intended both for rapid prototyping and for fast transfer to a production environment. The scope is suitable for cluster management (using Docker, and even Kubernetes), as well as for IoT projects and High-Performance Computing (HPC).
Before going into the specific content of this article, it is worth remembering the scope of the four parts that compose this series:
- Part 1: Getting the most from Raspberry Pi 4, whose concrete scope is to integrate an M.2 SSD physical disk with a 64-bit operating system running on the Raspberry Pi 4, which provides 4 GB of RAM.
- Part 2: Installing Docker in Raspberry Pi 4, which focuses on development methodology and shows the process to prepare a Docker-ready development environment.
- Part 3: Deploying Theia IDE using Docker. In this part we explain how to have a full-featured IDE for editing code on a headless Raspberry Pi. This way we skip the need for a monitor, keyboard and mouse for the board, and can comfortably work from our laptop without the cumbersome need for additional hardware.
- Part 4: Monitoring Raspberry Pi 4 performance in real time. To finish the series we provide a simple yet powerful example of how to deploy an application using Docker Compose: we will build a Chronograf dashboard on top of InfluxDB and Telegraf.
So let’s go straight into the last part, which will be the introductory step of your IoT journey.
The TICK Stack
The so-called TICK stack may seem quite a strange name, but it is really a mnemonic: just by expanding the acronym letter by letter, you can recall its 4 components and explain to others what it is. So let’s introduce the TICK stack with a brief sentence about each component:
- Telegraf is the data acquisition component
- InfluxDB is the time series database
- Chronograf is the visualization dashboard
- Kapacitor is the rules-based system to configure alerts
So, when asked what the hell is the TICK stack, you can quickly answer:
It’s a monitoring system that collects data from sensors and the system with Telegraf, saves it to the InfluxDB time series database, and exposes the time-dependent series in the visual interface of Chronograf. Furthermore, you can use the real-time data processing engine called Kapacitor to configure rules that trigger specific alerts on the dashboard.
In summary, the TICK stack is a great fit for your monitoring system if you are on the path of choosing open source components. InfluxData, the company behind TICK, hosts all of its repositories on GitHub.
Deployment using Docker Compose
Since this is the first stop in the journey through the TICK stack, let’s make it as easy as possible and skip the Kapacitor component, deploying the minimum stack, i.e. TIC, composed of InfluxDB, Telegraf and Chronograf. You can find the code to deploy this simplified stack on GitHub.
Make sure you have installed Docker and Docker Compose on your Raspberry Pi. If that is not the case, please refer to Part 2: Installing Docker in Raspberry Pi 4 for detailed instructions. Then clone the code on the Raspberry Pi to a folder called TIC:
Although we will run this example on the Raspberry Pi 4, the stack will work as well on any other Linux system. Its Docker images cover all of the common CPU architectures:
- AMD64 for laptops and desktop computers,
- ARMv7 for single board computers like Raspberry Pi (RPi) versions 2 & 3, and
- ARM64 for an RPi 3 or RPi 4 running a 64-bit OS.
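If you are not sure which architecture your machine reports (Docker picks the matching image variant automatically), a quick way to check from a terminal:

```shell
# Print the CPU architecture as reported by the kernel.
# Typical outputs: x86_64 (AMD64), armv7l (ARMv7), aarch64 (ARM64).
uname -m
```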
So, at the end of this article, it is left as a practical exercise for you to reproduce the process on your laptop.
Deploying the TIC stack
Just change to the folder where you cloned the repository, type in the following commands, and let the magic happen:
$ cd TIC
$ docker-compose up -d
After the terminal gives you back control without any error, the 3 services will be running:
- InfluxDB, the database, communicating through port 8086.
- Telegraf, acquiring system metrics and writing data to InfluxDB in a database also called telegraf.
- Chronograf, the visualization dashboard, available at http://RPi4_IP:8888
where RPi4_IP is the IPv4 address of the Raspberry Pi in the local network.
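For reference, here is a minimal sketch of what the docker-compose.yml behind those three services might look like. The exact file in the repository may differ; the image names and version tag are assumptions based on the official multi-arch images on Docker Hub:

```yaml
version: "3"

services:
  influxdb:
    image: influxdb:1.8        # assumed 1.x image (multi-arch: AMD64/ARMv7/ARM64)
    ports:
      - "8086:8086"            # HTTP API used by Telegraf and Chronograf

  telegraf:
    image: telegraf
    volumes:
      # Let the docker input plugin read container metrics from the host.
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - influxdb

  chronograf:
    image: chronograf
    ports:
      - "8888:8888"            # web UI at http://RPi4_IP:8888
    depends_on:
      - influxdb
```

Note how only InfluxDB and Chronograf publish ports: Telegraf only needs to reach InfluxDB over the internal Compose network.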
Once the Chronograf interface is loaded, click on the left menu item Host List, where there should be only one host: the machine where you are running the code. In the rightmost column you can find the two default apps, i.e. system and docker:
To understand where the apps’ definitions come from, first be aware that the metrics to monitor are collected by the Telegraf service. You can access its configuration file by running a command inside its container:
$ docker exec -it telegraf cat /etc/telegraf/telegraf.conf
This command is a quick way to execute a command inside a Docker container. The first part (docker exec -it telegraf) runs a command inside the telegraf container, and the second part (cat /etc/telegraf/telegraf.conf) prints the contents of the file telegraf.conf.
Then, visiting the INPUTS section of that file, you will find the two plugins providing each of the Chronograf dashboards, i.e. [[inputs.docker]] and [[inputs.system]]:
# INPUTS #

# Read metrics about docker containers
[[inputs.docker]]
  # Docker Endpoint
  endpoint = "unix:///var/run/docker.sock"

# Read metrics about system load & uptime
[[inputs.system]]
  # no configuration
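The full configuration file is long, so it helps to filter just the enabled input plugin headers. Here is a small self-contained sketch of such a grep filter, demonstrated on an inline sample of the config; on the Pi you would pipe the docker exec output through the same grep:

```shell
# Create a small telegraf.conf-style sample to demonstrate the filter.
cat > /tmp/telegraf-sample.conf <<'EOF'
# Read metrics about docker containers
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"
# Read metrics about system load & uptime
[[inputs.system]]
  # no configuration
EOF

# List only the enabled input plugin headers.
grep '^\[\[inputs\.' /tmp/telegraf-sample.conf
```

On the Raspberry Pi, the equivalent would be piping the earlier command, e.g. docker exec telegraf cat /etc/telegraf/telegraf.conf | grep '^\[\[inputs\.'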
After this short explanation, click on the host identifier in the Chronograf interface and all the graphs of the two dashboards will be loaded. You will see graphs for host system metrics as well as Docker container metrics, all of them as time series:
If you want to access System or Docker app separately, click on the corresponding item of the Host List window.
It is quite easy to import new dashboards to get custom metrics from the host. Let’s take the example of the set that monitors the basic metrics of system resource usage:
- CPU
- RAM memory
- Swap memory
Chronograf dashboards are available in JSON format, which makes it pretty easy to import one, as well as to share with others the new ones you create.
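Before importing a dashboard file, a quick sanity check that it is valid JSON can save you a failed upload. A minimal sketch using Python’s built-in json.tool; the sample file content here is made up just for the demonstration, in practice you would point it at the dashboard file from the repository:

```shell
# Create a stand-in dashboard file for the demonstration.
FILE=/tmp/sample-dashboard.json
cat > "$FILE" <<'EOF'
{"name": "System Metrics", "cells": []}
EOF

# json.tool exits with a non-zero status on malformed JSON.
if python3 -m json.tool "$FILE" > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "invalid JSON"
fi
```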
Monitoring system resources usage
In this section we are going to cover the steps to import the dashboard stored in the path ./dashboards/CPU-MEM-SWAP_SystemMetrics.json of the cloned repository.
Click on the Dashboards icon on the left side of the Chronograf interface. Then select Import Dashboard button on the top-right part of the table:
Drag and drop the dashboard file, i.e. CPU-MEM-SWAP_SystemMetrics.json, as shown in the next image (the file is located in the folder dashboards of the cloned repository, i.e. TIC/dashboards/):
Once the file is available, press Continue to proceed with the upload:
Select the default Dynamic Source option, which will connect to the existing InfluxDB instance:
After the import, your new dashboard will appear in the dashboard list:
Click on it to watch the basic metrics of your host:
You can see 3 graphs: the long one for CPU usage, the bottom-left one for RAM memory usage, and the third for swap memory usage.
If you got stuck at any point of the deployment, feel free to post your problem in the comment section below, and I will help you.
Remember that it is left as an exercise for you to deploy the stack in your laptop. Just follow the same steps and you will get it.
In summary, if you have successfully completed the 4 articles of the series, you have covered all of the basic steps to convert your Raspberry Pi 4 into a test bench for rapid prototyping, as well as for fast transfer of a locally tested application to a production environment.
If you want to go deeper into the TICK stack, you can read the next article, The new Raspberry Pi 400 Beats its Predecessor in Energetic Efficiency, which makes use of it to measure the CPU temperature of the RPi 4 and compare it with records from the new Raspberry Pi 400 model.
For upcoming articles on IoT projects, application deployment using Docker, and working with Kubernetes on the Raspberry Pi, stay tuned to my Medium.com channel.