Data logger and basic visualisation tool for Belgian Smart meters
⬆️ Reads data from:
- P1-Port implementing the DSMR protocol with 1-second resolution.
- Plain text file with DSMR telegrams (for test purposes only).
⬇️ Logs data into:
- Terminal (Default)
- Plain text file as JSON objects
- TimescaleDB instance
- MQTT Broker
Contents:
- Prerequisites
- Hardware installation
- Configuration
- Standalone usage
- Docker image usage
- Docker compose usage
- Data back up and restore
- Example notebook
- Deployment to Raspberry Pi
The operation of this software is designed and tested to work with:
- Smart Meter: Fluvius 1-phase and 3-phase electricity meters, as well as gas and water meters connected to them via wM-Bus.
- P1-port interface such as the Slimme meter kabel (P1 to USB).
- Computer or an embedded data logger such as a Raspberry Pi Model 3, 3B+, or 4.
Note that the P1 port must be activated in advance via the Fluvius activation link.
The P1 Port monitoring tool is developed in Python. Nevertheless, we also provide and maintain a Docker image to facilitate its deployment using a docker-compose file. The requirements for each case are provided below.
A working version of Python is required to run the monitoring tool. To check whether Python is already installed on your system, simply run:
$ python --version
Python 3.9.16
Otherwise, please refer to the official documentation for your operating system. We recommend using Python 3.9 or below.
To manage dependencies we make use of Poetry. Please follow the Installation Instructions. Then, check that it is working properly by running:
$ poetry --version
Poetry (version 1.5.1)
Subsequently, one can install all the necessary dependencies by simply running
poetry install
And to activate the virtual environment just execute:
poetry shell
Alternatively, we also provide requirements.txt files to create a virtual environment using other tools such as venv. The following commands can be used to instantiate the new virtual environment and install all dependencies.
python3 -m venv .venv
.venv/bin/pip install --upgrade pip
.venv/bin/pip install -r requirements/prod.txt
.venv/bin/pip install -r requirements/opt.txt
.venv/bin/pip install -r requirements/test.txt
.venv/bin/pip install -r requirements/dev.txt
Please note that in both cases, poetry or venv, dependencies are divided into four groups:
- Production (main or prod.txt): necessary to execute the monitoring tool.
- Optional (opt or opt.txt): required to run the example jupyter notebooks.
- Development (dev or dev.txt): used during development.
- Testing (test or test.txt): needed to run the tests.
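With recent versions of poetry (1.2 or later), individual dependency groups can also be installed selectively. A couple of illustrative invocations, assuming the groups are declared under the names above in pyproject.toml:
# Install only the production dependencies
poetry install --only main
# Install production plus the notebook extras
poetry install --with opt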
For those interested in simply running the utility, a docker image is provided.
A working docker or podman instance is required. Please follow the official documentation for your platform.
Moreover, a set of convenient docker-compose files is provided for an automated deployment. In order to make use of these files, please install either docker-compose or podman-compose:
$ pip3 install docker-compose
# or podman
$ pip3 install podman-compose
To capture the data from your Smart Meter, two simple connections must be made:
- Connect the P1 cable to the smart meter using the RJ12 connector.
- Connect the USB end of the P1 cable to the computer or embedded data logger.
The data sources and logging destinations are configured via environment variables or using a yaml file.
There are two input data sources which can be configured. Please note they CANNOT be used simultaneously.
The serial port source gets the input data stream from a serial port. The following yaml block must be used to configure the identifier (id) of the port.
# config.yaml
port:
  id: "/dev/ttyUSB0"
Alternatively, the following environment variable can be given:
# ENV VARIABLES
PORT__ID="/dev/ttyUSB0"
Note that on Windows systems the identifier will look something like COM[x].
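For a quick sanity check of the serial connection, independent of this utility, the raw telegram stream can be inspected with pyserial. A minimal sketch; the 115200 baud, 8N1 settings are the usual ones for DSMR 5 meters and are an assumption here:
import serial

# Typical DSMR 5 settings (115200 baud, 8N1); adjust the port id as needed
with serial.Serial("/dev/ttyUSB0", baudrate=115200, bytesize=8,
                   parity="N", stopbits=1, timeout=5) as port:
    for _ in range(40):  # roughly one full telegram
        raw = port.readline()
        print(raw.decode("ascii", errors="replace"), end="")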
The telegram plain-text file data source is only used for testing purposes. Yet, it can be configured by indicating the path to the input data file, using either of the methods below.
# config.yaml
file:
  path: "data/test.txt"
# ENV VARIABLES
FILE__PATH="data/test.txt"
In contrast with the input sources, the logging destinations are cumulative. This means that the utility can simultaneously store the data in a file of JSON objects, log it to a DB, and send it via MQTT, if all data sinks are configured.
To log the data into a plain text file containing a JSON object for each measurement, we need to indicate the destination file either in the config.yaml file or with an environment variable.
# config.yaml
dump:
  file: "data/output.json"
# ENV VARIABLES
DUMP__FILE="data/output.json"
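The resulting dump can then be post-processed with standard tools. A minimal sketch, under the assumption that the utility writes one JSON object per line:
import json

# Read the dump, assuming one JSON object per line
with open("data/output.json") as f:
    measurements = [json.loads(line) for line in f if line.strip()]

print(f"Loaded {len(measurements)} measurements")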
To use a TimescaleDB as data sink, we have to configure the following options:
- host: IP of the database instance (required)
- port: Port on which the database is listening (optional, default: 5432)
- database: Name of the database (optional, default: premises)
- user: Username to log into the database (optional, default: postgres)
- password: Passphrase to log into the database (optional, default: password)
These values can again be passed with a config.yaml file or environment variables.
# config.yaml
db:
  host: "localhost"
  port: 5432
  database: "premises"
  user: "postgres"
  password: "password"
# ENV VARIABLES
DB__HOST=timescaledb
DB__PORT=5432
DB__DATABASE=premises
DB__USER=postgres
DB__PASSWORD=password
The logging utility takes care of creating the necessary tables. However, the database must be up and running when the utility is started.
Finally, the last destination for the logged data is an MQTT broker. The configuration is provided with the following blocks in the config.yaml file or with environment variables.
- host: IP of the MQTT broker instance (required)
- port: Port on which the MQTT broker is listening (optional, default: 1883)
- qos: Quality of service for the messages (optional, default: 1)
# config.yaml
mqtt:
  host: "localhost"
  port: 1883
  qos: 1
# ENV VARIABLES
MQTT__HOST=mosquitto
MQTT__PORT=1883
MQTT__QOS=1
All telegrams are published as JSON objects on the topic:
telgram/$DEVICE_ID
where $DEVICE_ID represents the identification number of the Smart Meter as written on the physical device.
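Any MQTT client can consume these messages. A minimal subscriber sketch using paho-mqtt (1.x callback style), assuming the broker from the local docker-compose deployment:
import json

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    telegram = json.loads(msg.payload)  # each payload is a JSON object
    print(msg.topic, telegram)

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("telgram/#", qos=1)  # prefix exactly as documented above
client.loop_forever()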
To execute the utility once it has been configured via the config.yaml file, simply use the following command.
$ poetry run python -m p1reader --config config.yaml
# or using venv
$ .venv/bin/python -m p1reader --config config.yaml
This is the easiest way to manage the standalone version. However, it is also possible to NOT use a config.yaml file and instead provide all the configuration via environment variables, such as:
# Export environment variables
$ export PORT__ID="/dev/ttyUSB0"
# Run utility with poetry
$ poetry run python -m p1reader
# or using venv
$ .venv/bin/python -m p1reader
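Since the variables only need to be visible to the process, they can also be set inline for a single run:
PORT__ID="/dev/ttyUSB0" DUMP__FILE="data/output.json" poetry run python -m p1reader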
Running the utility as a docker image is straightforward. First, pull the latest version of the image from Docker Hub.
docker pull docker.io/ejpalacios/p1-reader
Then, spin up the container using the docker run command.
$ docker run -d \
-e PORT__ID='/dev/ttyUSB0' \
--name p1-reader \
docker.io/ejpalacios/p1-reader
Note that the docker image must be configured using environment variables, which are passed on to the container with the argument -e followed by the name and value of each variable.
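Note also that, when reading from the P1 port, the container needs access to the serial device itself. A sketch using docker's --device flag (the provided compose files may already take care of this):
$ docker run -d \
    --device=/dev/ttyUSB0 \
    -e PORT__ID='/dev/ttyUSB0' \
    --name p1-reader \
    docker.io/ejpalacios/p1-reader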
To facilitate the deployment of the system even further, two docker-compose files are provided.
The first one, located in the folder docker/services, deploys:
- A local TimescaleDB instance
- A local MQTT broker instance
- A local Grafana instance (visualisation)
All the default values in this docker-compose file will provide a working instance, but they are not recommended for production environments.
The second compose file, in docker/p1reader, configures and runs the P1 Port logging utility. Note that the environment variable defaults to PORT__ID="/dev/ttyUSB0" and might need to be adjusted.
To bring up the deployment, a three-step procedure is needed.
First, we need to create a docker network to be shared by the deployments.
docker network create premises
By default, the name of this network is premises. Nevertheless, a different name can be chosen, provided the docker-compose.yml files are edited accordingly.
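For reference, an external network of this kind is typically declared in a compose file along these lines (a sketch; the provided files may differ in detail):
# docker-compose.yml (fragment)
networks:
  default:
    name: premises
    external: true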
Secondly, we need to spin up all the services with the command
docker-compose -f docker/services/docker-compose.yml up -d
Here, the flag -d
is used to detach the terminal output.
Finally, once all the services are up and running, the logger utility can also be initialised.
docker-compose -f docker/p1reader/docker-compose.yml up -d
To verify that all services are deployed the following docker command can be used:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e629fceab44   docker.io/ejpalacios/p1-reader                "python -m p1reader"   50 seconds ago   Up 49 seconds                                                                    p1-reader
10eaf408d0d9 docker.io/timescale/timescaledb:latest-pg14 postgres 42 seconds ago Up 43 seconds (healthy) 0.0.0.0:5432->5432/tcp timescaledb
6cf23d8e9af1 docker.io/library/eclipse-mosquitto:latest /usr/sbin/mosquit... 41 seconds ago Up 41 seconds 0.0.0.0:1883->1883/tcp, 0.0.0.0:9001->9001/tcp mosquitto
b1854fba72ca docker.io/grafana/grafana:latest 39 seconds ago Up 40 seconds 0.0.0.0:3000->3000/tcp grafana
As part of the docker-compose deployment, a visualisation utility using Grafana is included. To access it, go to http://localhost:3000 and log in with the default credentials:
- user: admin
- password: admin
There are three dashboards available, which can be accessed by going to Dashboards → Browse → Premises → [Select Dashboard]:
- Measurement logger: Real-time electricity measurements
- Gas & Water logger: When installed, it displays the real-time measurements from these types of meters as collected via the Wireless M-Bus interface.
- Maximum demand logger: Historical monthly consumption peak and real-time tracking of maximum demand
The data logger is conceived as a standalone device that can operate without any network connection. This means, however, that the data must be periodically backed up.
Two utility scripts are provided to back up the data stored in the database (DB) into comma-separated value (CSV) files, as well as for the inverse restoration process.
To save the data in the DB as CSV files, the script utils/backup.py can be used. This script has a set of options which are provided via command-line arguments.
The main argument of the script is the DEVICE_ID. It determines the meter to be backed up.
- -i DEVICE_ID, --device_id DEVICE_ID: Meter identifier (EAN). REQUIRED
These options control the connection to the source DB where the data is stored.
- -H HOSTNAME, --hostname HOSTNAME: DB hostname. Default localhost.
- -P PORT, --port PORT: DB port. Default 5432.
- -D DATABASE, --database DATABASE: DB name. Default premises.
- -U USER, --user USER: DB username. Default postgres.
- -p PASSWORD, --password PASSWORD: DB password. Default password.
Note that these values match the default configuration of the locally deployed docker-compose environment.
This option controls where the output CSV files will be saved.
- -o OUTPUT_PATH, --output_path OUTPUT_PATH: Output path for the CSV files. Default ./.
The date range is automatically inferred from the stored data. However, two arguments are provided to explicitly select these values.
- -s START, --start START: Initial date. If not given, the oldest date in the DB will be considered.
- -e END, --end END: Final date. If not given, the newest date in the DB will be considered.
Dates must be provided in ISO format with timezone (YYYY-MM-DDThh:mm:ss±hh:mm).
These options control the data that is backed up from the DB. AT LEAST ONE of the flags must be provided, otherwise no data will be saved.
- --all: Back up all data (equivalent to --elec --mbus --peak --peak_history)
- --elec: Back up electricity data
- --mbus: Back up M-Bus devices data
- --peak: Back up peak consumption data
- --peak_history: Back up peak consumption history data
The script should be run using the poetry environment. Please check the software prerequisites.
The command below will back up all data for the meter with EAN number 1SAG1100000292 with the default configuration.
poetry run python ./utils/backup.py -i 1SAG1100000292 --all
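The backup can likewise be restricted to an explicit date range using the documented -s and -e flags (the dates here are illustrative):
poetry run python ./utils/backup.py -i 1SAG1100000292 --all \
    -s 2023-01-01T00:00:00+01:00 \
    -e 2023-02-01T00:00:00+01:00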
To revert the above operation and restore the data saved in the CSV files to the same or another DB, the script utils/restore.py should be used. As in the previous case, several options are provided.
The restoration script assumes that the names of the CSV files have not been altered.
The main argument of the script is again the DEVICE_ID. It determines the relevant CSV files for that meter.
- -i DEVICE_ID, --device_id DEVICE_ID: Meter identifier (EAN). REQUIRED
These options control the connection to the destination DB where the data will be stored.
- -H HOSTNAME, --hostname HOSTNAME: DB hostname. Default localhost.
- -P PORT, --port PORT: DB port. Default 5432.
- -D DATABASE, --database DATABASE: DB name. Default premises.
- -U USER, --user USER: DB username. Default postgres.
- -p PASSWORD, --password PASSWORD: DB password. Default password.
Note that these values match the default configuration of the locally deployed docker-compose environment.
This option controls where the input CSV files are read from.
- -f INPUT_FOLDER, --input_folder INPUT_FOLDER: Input folder with the CSV files. Default ./.
These options control the data that is saved into the DB. AT LEAST ONE of the flags must be provided, otherwise no data will be saved.
- --all: Restore all data (equivalent to --elec --mbus --peak --peak_history)
- --elec: Restore electricity data
- --mbus: Restore M-Bus devices data
- --peak: Restore peak consumption data
- --peak_history: Restore peak consumption history data
The script should be run using the poetry environment. Please check the software prerequisites.
The command below will restore all CSV files in the current folder to the DB for the meter with EAN number 1SAG1100000292 with the default configuration.
poetry run python ./utils/restore.py -i 1SAG1100000292 --all
The data can be accessed by directly connecting to the DB. An example notebook with the necessary steps is provided in the folder notebooks.
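As a minimal starting point outside the notebook, the sketch below connects with psycopg2 using the default credentials of the local docker-compose deployment and lists the available tables; it deliberately avoids assuming the logger's table names:
import psycopg2

# Default credentials of the local docker-compose deployment
conn = psycopg2.connect(
    host="localhost", port=5432,
    dbname="premises", user="postgres", password="password",
)
with conn.cursor() as cur:
    # Discover the tables created by the logging utility
    cur.execute(
        "SELECT table_name FROM information_schema.tables "
        "WHERE table_schema = 'public'"
    )
    for (table,) in cur.fetchall():
        print(table)
conn.close()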
Please note that although this code is provided under the GNU General Public License v3, some of the dependencies might have more restrictive clauses. In particular, the DSMR parser dsmr-parser depends on the dlms-cosem library. Please note that its current LICENSE limits the use to no more than a combined number of 100 DLMS end devices.
This work has been supported by the Research Foundation Flanders (FWO) within the MSCA - SoE project, "Premises: PRoviding Energy Metering Infrastructures with Secure Extended Services", grant number 12ZZV22N. More Information