Author: Gabriel Rüeck
Learnings from Dynamic Electricity Pricing
Executive Summary
Unlike previous articles, I chose to split this blog post into two parts. In the first part (Findings), I elaborate on my experience as a consumer with dynamic electricity prices (day-ahead market) in combination with a small solar generation unit. We will look at various visualizations that help to understand the impact and to gain some insight into how dynamic electricity prices can be useful. In the second part (Annex: Technical Details), technically interested folks will find the respective queries and sample data with which they can replicate the findings or even do their own examination. As in previous articles, Grafana is used in connection with a MariaDB database.
Background
On 1 March 2024, I switched from a traditional electricity provider to one with dynamic day-ahead pricing, in my case, Tibber. I wanted to try this contractual model and see whether I could successfully shift chunks of high electricity consumption such as:
- … charging the battery-electric vehicle (BEV) or the plug-in hybrid car (PHEV)
- … washing clothes
- … drying clothes in the electric dryer
to those times of the day when the electricity price is lower. I also wanted to see if that makes economic sense for me. And, after all, it is fun to play around with data and gain new insights.
As my electricity supplier, I chose Tibber because they were the first one I got to know, and they offer a device called Pulse which connects a digital electricity meter to their infrastructure for metering and billing purposes. Furthermore, they have an API [1] which allows me to read out my own data; that was very important for me. I understand that there are meanwhile several providers besides Tibber with similar models and comparable features.
Findings
Price and consumption patterns
The graph below shows five curves and is an upgraded version of the respective graph in [6] visualizing data over two weeks:

As in [6], the green curve is the hourly price on the day-ahead market. It is easy to see that the price peaks in the evening and in the morning at breakfast time, when residential consumption is high but little solar energy is available in Germany. The yellow curve is a two-day moving average and shows that the average price is not below that of a fixed-rate electricity contract, an important point that we shall discuss later. The orange curve is the consumption of my house; the higher peaks indicate times when I charged one of the cars using an ICCB (one phase, current: 10 A). The red curve shows the consumption from the grid. During the day, when there is sunshine, the red curve lies below the orange curve, as part of the overall consumed electricity comes from the solar panels. At nighttime, the red curve lies exactly on the orange curve, as there is no solar electricity generation. The blue curve shows the average electricity price per kWh based on a mixed calculation of the grid price and a price of zero for my own solar generation. One might argue whether zero is an adequate assumption, as solar panels also cost money, but since I have them installed already, I consider them a sunk cost. The blue curve shows an interesting behavior: when my consumption is low, the blue curve shows a low average price. When the consumption of the house is less than the power generated by the solar panels, the blue curve is flat zero. When, however, I consume a lot of power, the blue curve approaches the green curve. At nighttime, when there is no solar generation, the blue curve is identical with the green curve.
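As a plain-numbers illustration of this mixed calculation (my own reconstruction of the formula behind the annex query; the figures are invented for the example):

average price = grid price × (energy from grid ÷ total consumed energy)
example: grid price 30 ¢/kWh, consumption 2.0 kWh, of which 1.5 kWh covered by solar
average price = 30 ¢/kWh × (0.5 kWh ÷ 2.0 kWh) = 7.5 ¢/kWh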
My goal is to consume more electricity at times when either the green curve points to a low electricity price or when enough electricity is generated by the solar panels, so that the average price I pay (blue curve) is reasonably low. This also explains why I mostly use the ICCB (one phase, 10 A) to charge the cars, as I then still get a good average price (although charging takes a lot more time). I think that by looking at the curves, I have adapted my consumption pattern well to the varying electricity price.
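For scale, a quick back-of-the-envelope calculation (assuming the usual 230 V single-phase mains):

230 V × 10 A ≈ 2.3 kW
10 kWh ÷ 2.3 kW ≈ 4.3 h

At roughly 2.3 kW, a charging session is long enough to span the cheap midday or solar-covered hours, which is exactly the intention.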
Actual cost, minimal cost, maximal cost, average cost per day

The graph above shows four curves. The green curve is the cost I would have incurred if I had bought all the electricity of the respective day in the hour with the cheapest electricity price. This would only be possible with a battery that could bridge the remaining 23 hours of the day (and probably some hours more, as the cheapest hour of the following day is not necessarily 00:00-01:00). The red curve is the cost I would have incurred if I had bought all the electricity of the respective day in the hour with the most expensive electricity price. The yellow curve is the average cost of the respective day, i.e., the average price per kWh of that day multiplied by my consumed energy. The blue curve is the cost I actually paid. If the blue curve lies between the yellow curve and the green curve, this is very good, and I have succeeded in shifting my consumption toward the hours with cheaper electricity. Without a battery, it is almost impossible to come very close to the green line.
The graph above shows one peculiarity: on 2024-06-26, there were some hours with extremely high prices, but that was due to a communication error that decoupled the auction in Germany from the rest of Europe [7], clearly (and hopefully) a one-time event.
Price heatmap of the hourly prices per kWh
The one-time event with unusually high prices [7] is well visible in the price heatmap that was already introduced and explained in [6]. The one-time event overshadows all price fluctuations in the rest of the week.

Cost and amount of purchased electricity from grid with price indication
The first graph shown below was already introduced and explained in [6] and shows the cost of purchased grid electricity in 24 rectangles, whereby each rectangle represents an hour. The rectangles are ordered from the most expensive hour at the top to the least expensive hour at the bottom. The larger the rectangle, the more money was spent in the respective hour. I already explained in [6] that the goal should be that – if electricity has to be purchased from the grid at all – the purchases ideally happen at times when the price is low. In the graph, those are the rectangles colored green or at most yellow. Orange and red rectangles indicate purchases during times of a high electricity price. In reality, one cannot completely avoid purchases at times of a high price. My experience is that in summer, I let the air conditioning systems run in the evening hours as well, when the electricity price is higher, just to keep the house at reasonable inside temperatures. Similarly, in spring and autumn, I leave the central heating switched off and heat the rooms I usually occupy with the air conditioning systems (in heating mode): in those seasons, the temperature difference between outside and inside is not too high, and the air conditioning systems work at high efficiency.

For 13-Oct (10/13), one can see that I did in fact buy substantial amounts of electricity at high prices. The reason was that I returned home at night and had to re-charge the car for the next day. So, one cannot always avoid purchasing electricity at high prices.
The next chart offers another view on the same topic. Rather than looking at the cost of the purchased electricity, we look at the amount of purchased electricity. This offers an interesting insight: except for 13-Oct, the amount of electricity purchased in hours with a high price is less than 5 kWh. This means that with an energy-storage system (ESS), it should be possible to charge a battery of around 5 kWh during times of a lower electricity price and to discharge the battery into the house during times of a high electricity price, so that ideally there would be no or only minimally sized orange and red rectangles. Of course, this only makes sense if the low and high prices of a day differ substantially, as the ESS will have losses of 10%…20%. And in fact, this is exactly an idea that I am planning to try out within the next months (so stay tuned); see the sketch below the chart.

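To get a rough feeling for the arbitrage potential of such a battery, one can already estimate it from the preise table introduced in the annex (a sketch; the 5 kWh shifted per day and the 85% round-trip efficiency are my assumptions, not measured values):

SELECT DATE(zeitstempel) AS 'day',
-- replace 5 kWh bought in the most expensive hour by 5/0.85 kWh
-- bought in the cheapest hour (85% round-trip efficiency assumed)
ROUND(5*MAX(preis) - 5/0.85*MIN(preis), 2) AS 'potential saving (€)'
FROM preise
GROUP BY DATE(zeitstempel);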
Hourly price distribution
Another finding that was interesting for me is that electricity is not necessarily cheaper on the weekend (at least not in May 2024). One might assume that because industrial consumption is low on the weekend, the price of electricity is lower. And that is true to some extent, as the lowest prices occurring within a whole month tend to happen on the weekend (green histogram). We can also see a peak at around 19 ¢/kWh in the green histogram which is much lower in the yellow histogram. However, there are also many weekend hours with prices similar to those during the week (yellow histogram).

Monthly hourly price curves
The following three graphs show the hourly price curves over the hour of the day per month; the selected Weekday is irrelevant for these graphs, as only the variable Month is used.
The green curve shows the minimum hourly price per kWh that occurred in the selected month. The red curve shows the maximum hourly price per kWh that occurred in the selected month. The yellow curve shows the average hourly price per kWh that occurred in the selected month. The blue curve shows the average hourly price per kWh that I have experienced in the selected month, calculating my own solar electricity generation as “free of cost” (like the blue curve of the graph in chapter Price and consumption patterns).



There are a couple of interesting findings that can easily be seen in the curves:
- Prices tend to be higher in the early morning (06:00-08:00) and in the evening (18:00-21:00). The evening price peak is higher than the one in the morning.
- The price gap between the minimum and the maximum hourly price in the morning, but especially in the evening hours, seems to increase from May to September. I do not yet know whether this is seasonal or whether the market has become more volatile in general.
- In my personal situation (the solar modules face east), I am not bothered by the price peak in the morning, as I already have quite good power generation then (the blue curve descends in the morning). That is different in the evening; there, I am personally affected by the price peak.
For me, this is an indication that it might make sense to install a battery and charge it during the day and discharge it especially in the evening hours (18:00-21:00) when the price peaks.
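To quantify these peaks per month directly in SQL, one can compare the average evening price with the average midday price (a sketch on the preise table from the annex; note that zeitstempel is stored in UTC, so I approximate the local windows with fixed UTC hours, which is only exact during CEST):

SELECT DATE_FORMAT(zeitstempel,'%Y-%m') AS 'month',
-- 16-18 UTC ≈ 18:00-21:00 local time during CEST
ROUND(AVG(CASE WHEN HOUR(zeitstempel) BETWEEN 16 AND 18 THEN preis END),4) AS 'evening average (€/kWh)',
-- 9-13 UTC ≈ 11:00-16:00 local time during CEST
ROUND(AVG(CASE WHEN HOUR(zeitstempel) BETWEEN 9 AND 13 THEN preis END),4) AS 'midday average (€/kWh)'
FROM preise
GROUP BY DATE_FORMAT(zeitstempel,'%Y-%m');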
Electricity price levels
In order to look into the hourly prices from a different perspective, I have grouped the hourly prices into five categories:
- Price per kWh ≤10 ¢/kWh (green block), labelled as very cheap
- Price per kWh >10 ¢/kWh and ≤20 ¢/kWh (yellow block), labelled as cheap
- Price per kWh >20 ¢/kWh and ≤30 ¢/kWh (blue block), labelled as normal
- Price per kWh >30 ¢/kWh and ≤40 ¢/kWh (orange block), labelled as expensive
- Price per kWh >40 ¢/kWh (red block), labelled as very expensive
For each month (except for Oct 2023 and Oct 2024, which are incomplete), the blocks in the different colors add up to the number of hours of the respective month; depending on the number of days per month, the number of hours varies, of course. Larger blocks correspond to more hours in the respective price category. Do not pay attention to the white numbers printed in the graph, as some numbers are missing; the numerical statistics are given below the graph.

Month | very cheap (≤ 10 ¢/kWh) | cheap (≤ 20 ¢/kWh) | normal (≤ 30 ¢/kWh) | expensive (≤ 40 ¢/kWh) | very expensive (> 40 ¢/kWh)
--- | --- | --- | --- | --- | ---
2023-10 | 0 | 86 | 248 | 189 | 6 |
2023-11 | 0 | 79 | 431 | 199 | 11 |
2023-12 | 0 | 173 | 392 | 123 | 8 |
2024-01 | 0 | 32 | 625 | 63 | 0 |
2024-02 | 0 | 58 | 635 | 3 | 0 |
2024-03 | 0 | 32 | 633 | 78 | 0 |
2024-04 | 0 | 109 | 491 | 117 | 5 |
2024-05 | 8 | 102 | 423 | 208 | 3 |
2024-06 | 1 | 106 | 353 | 230 | 30 |
2024-07 | 0 | 109 | 383 | 219 | 9 |
2024-08 | 0 | 94 | 297 | 332 | 21 |
2024-09 | 0 | 86 | 360 | 258 | 16 |
2024-10 | 0 | 46 | 305 | 141 | 10 |
Before I generated this graph and the ones of the previous chapter, my initial assumption had been that in summer, prices would go down compared to winter. To some extent, this is true: there is a higher number of hours in the category “cheap” (yellow block). However, the category “expensive” (orange block) increases even more in summer, and even the category “very expensive” (red block) becomes noticeable, whereas the category “very cheap” (green block) hardly manifests itself, not even in summer. I do not have an explanation for this; my assumption is that in winter, some fossil power stations are active that are switched off in summer, and that the price in summer is therefore much more susceptible to the change in electricity supply over the day. I suspect that during the day, there is mostly enough solar generation, while in the evening, gas power stations need to be started which then set the price level and lead to increased prices. However, I do not have any statistics underpinning this assumption.
Self-sufficiency (Autarkie) and self-consumption (Eigenverbrauch)
I am not a big fan of looking at the values of self-sufficiency (de: Autarkie) and self-consumption (de: Eigenverbrauch), for the following reasons:
- Unless you live in a place where an electricity grid is not available or where the grid is very unreliable, it does not make sense to strive for complete self-sufficiency. It is still economically best to use your own electricity during the day and consume electricity from the grid at night. Similarly, you will probably need to buy electricity from the grid in winter, at least in Central or Northern Europe, as the output of your own solar generation in winter is usually insufficient to power your house.
- The value for self-consumption varies strongly with your consumption pattern and with the solar intensity on the respective day. The only conclusion that you can draw from observing the self-consumption values over a longer time frame is:
- If your values are constantly low, then you have probably invested in too large a solar system.
- If your values are constantly high or 100%, then it might make sense to add some more solar panels to your system.
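For reference, the two metrics as they are computed from the meter readings in the annex query (my notation; all energies refer to the same period):

self-sufficiency (Autarkie) = (generation − feed-in) ÷ (grid import + generation − feed-in)
self-consumption (Eigenverbrauch) = (generation − feed-in) ÷ generation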
Nevertheless, for the sake of completeness, here are some example values from July of this year.

Conclusion
- The field of dynamic electricity pricing is technically very interesting and certainly helps to sharpen awareness of the need to consume electricity precisely when a lot of it is available.
- Economically, having a dynamic electricity price can make sense under certain conditions. I side with [8] when I say:
- For a household without an electric car or another large electricity consumer, it does not make sense to use dynamic electricity pricing. The consumption of the washing machine, the dryer, and the dishwasher is not large enough to really draw a benefit.
- For a household with an electric car that has a large battery (e.g., some 80+ kWh, unfortunately not my use case), it makes sense if you have the flexibility to charge the car during times with a low electricity price.
Annex: Technical Details
Preconditions
In order to use the approach described below, you should:
- … have access to a Linux machine or account
- … have a MySQL or MariaDB database server installed, configured, up and running
- … have the package Grafana [2] installed, configured, up and running
- … have the package Node-RED [3] installed, configured, up and running
- … have access to the data of your own electricity consumption and pricing information of your supplier or use the dataset linked below in this blog
- … have some basic knowledge of how to operate in a Linux environment and some basic understanding of shell scripts
- … have read and understood the previous blog post [6], especially the part on how to connect Grafana to the MySQL or MariaDB database server
The Database
The base for the following visualizations is a fully populated MariaDB database with the following structure:
# Database for analyses with Tibber
# V2.0; 2024-09-12, Gabriel Rüeck <gabriel@rueeck.de>, <gabriel@caipirinha.spdns.org>
# Delete existing databases
REVOKE ALL ON tibber.* FROM 'gabriel';
DROP DATABASE tibber;
# Create a new database
CREATE DATABASE tibber DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;
GRANT ALL ON tibber.* TO 'gabriel';
USE tibber;
SET default_storage_engine=Aria;
CREATE TABLE preise (uid INT UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
zeitstempel DATETIME NOT NULL UNIQUE,   # start of the 1-hour block (UTC)
preis DECIMAL(5,4),                     # price in €/kWh
niveau ENUM('VERY_CHEAP','CHEAP','NORMAL','EXPENSIVE','VERY_EXPENSIVE'));
CREATE TABLE verbrauch (uid INT UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
zeitstempel DATETIME NOT NULL UNIQUE,   # start of the 1-hour block (UTC)
energie DECIMAL(5,3),                   # energy drawn from the grid in kWh
kosten DECIMAL(5,4));                   # cost in €
CREATE TABLE zaehler (uid INT UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
zeitstempel TIMESTAMP NOT NULL DEFAULT UTC_TIMESTAMP,
bezug DECIMAL(9,3) DEFAULT NULL,        # absolute meter reading: grid import (kWh)
erzeugung DECIMAL(9,3) DEFAULT NULL,    # absolute meter reading: solar generation (kWh)
einspeisung DECIMAL(9,3) DEFAULT NULL); # absolute meter reading: feed-in to the grid (kWh)
Data Acquisition
Data for the MariaDB database is acquired with two different methods.
The tables preise and verbrauch are populated with the following bash script. You will need a valid API token of your Tibber account [4]; in this script, you need to replace _API_TOKEN with your personal API token ([4]) and _HOME_ID with your personal Home ID ([1]). The script further assumes that the user gabriel can log in to the MySQL database without further authentication; this can be achieved by writing the MySQL login information into the file ~/.my.cnf, as sketched below.
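A minimal ~/.my.cnf could look like this (a sketch; protect the file with chmod 600 so that only you can read the password):

[client]
user=gabriel
password=_YOUR_MYSQL_PASSWORD_

The script itself: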
#!/bin/bash
#
# This script reads data for the Tibber database and stores the result in a MySQL database.
# The script is invoked once per day.
#
# V2.0; 2024-09-12, Gabriel Rüeck <gabriel@rueeck.de>, <gabriel@caipirinha.spdns.org>
#
# CONSTANTS
declare -r MYSQL_DATABASE='tibber'
declare -r MYSQL_SERVER='localhost'
declare -r MYSQL_USER='gabriel'
declare -r TIBBER_API_TOKEN='_API_TOKEN'
declare -r TIBBER_API_URL='https://api.tibber.com/v1-beta/gql'
declare -r TIBBER_HOME_ID='_HOME_ID'
# VARIABLES
# PROGRAM
# Read price information for tomorrow
curl -s -S -H "Authorization: Bearer ${TIBBER_API_TOKEN}" -H "Content-Type: application/json" -X POST -d '{ "query": "{viewer {home (id: \"'"${TIBBER_HOME_ID}"'\") {currentSubscription {priceInfo {tomorrow {total startsAt level }}}}}}" }' "${TIBBER_API_URL}" | jq -r '.data.viewer.home.currentSubscription.priceInfo.tomorrow[] | .total, .startsAt, .level' | while read cost; do
read LINE
read level
timestamp=$(echo "${LINE%%+*}" | tr 'T' ' ')
# Determine timezone offset and store the UTC datetime in the database
offset="${LINE:23}"
mysql --default-character-set=utf8mb4 -B -N -r -D "${MYSQL_DATABASE}" -h ${MYSQL_SERVER} -u ${MYSQL_USER} -e "INSERT INTO preise (zeitstempel,preis,niveau) VALUES (DATE_SUB(\"${timestamp}\",INTERVAL \"${offset}\" HOUR_MINUTE),${cost},\"${level}\");"
done
# Read consumption information from the past 24 hours
curl -s -S -H "Authorization: Bearer ${TIBBER_API_TOKEN}" -H "Content-Type: application/json" -X POST -d '{ "query": "{viewer {home (id: \"'"${TIBBER_HOME_ID}"'\") {consumption (resolution: HOURLY, last: 24) {nodes {from to cost consumption}}}}}" }' "${TIBBER_API_URL}" | jq -r '.data.viewer.home.consumption.nodes[] | .from, .consumption, .cost' | while read LINE; do
read consumption
read cost
timestamp=$(echo "${LINE%%+*}" | tr 'T' ' ')
# Determine timezone offset and store the UTC datetime in the database
offset="${LINE:23}"
mysql --default-character-set=utf8mb4 -B -N -r -D "${MYSQL_DATABASE}" -h ${MYSQL_SERVER} -u ${MYSQL_USER} -e "INSERT INTO verbrauch (zeitstempel,energie,kosten) VALUES (DATE_SUB(\"${timestamp}\",INTERVAL \"${offset}\" HOUR_MINUTE),${consumption},${cost});"
done
The table zaehler is populated via a Node-RED flow that reads data from the Tibber API [4] as well as from the AVM socket FRITZ!DECT 210 and stores new values for the columns bezug, erzeugung, and einspeisung every 15 minutes. I used a Node-RED flow because otherwise I would have had to program the access to the Tibber API [4] myself and include code [5] for the AVM socket FRITZ!DECT 210. Moreover, I plan to extend the functionality later with an energy storage system (ESS) and a battery, and so Node-RED seemed suitable to me. The flow is shown here; it contains some additional nodes beyond the functionality needed now, and it is still in development:

I have decided not to publish the JSON code here as I am not yet entirely familiar with Node-RED [3] and would not know how to remove my personal tokens from the JSON code.
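For reference, each 15-minute update effectively appends one row of absolute meter counters to zaehler; a manual equivalent would be the following INSERT (the numeric values are purely illustrative):

INSERT INTO zaehler (bezug, erzeugung, einspeisung)
VALUES (12345.678, 2345.678, 1234.567);
-- zeitstempel is filled automatically with the current UTC time (see the table definition)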
Queries for the Graphs
Price and Consumption Patterns
The graph is based on four queries that correspond to the green and yellow curves, the red curve, the orange curve, and the blue curve:
SELECT zeitstempel,
preis AS 'grid price',
# 47 preceding rows plus the current row = 48 hourly values = 2 days
AVG(preis) OVER (ORDER BY zeitstempel ROWS BETWEEN 47 PRECEDING AND CURRENT ROW) as 'rolling 2-days average grid price'
FROM preise;
SELECT zeitstempel, energie AS 'grid consumption' FROM verbrauch;
SELECT alt.zeitstempel,
(neu.bezug-alt.bezug+neu.erzeugung-alt.erzeugung-neu.einspeisung+alt.einspeisung) AS 'residential consumption'
# zaehler is written every 15 minutes, so uid+4 pairs each reading with the one exactly one hour later
FROM zaehler AS neu JOIN zaehler AS alt ON neu.uid=(alt.uid+4)
WHERE MINUTE(alt.zeitstempel)=0;
SELECT alt.zeitstempel,
ROUND(preise.preis*(neu.bezug-alt.bezug)/(neu.bezug-alt.bezug+neu.erzeugung-alt.erzeugung-neu.einspeisung+alt.einspeisung),4) AS 'price for mixed grid/solar'
FROM zaehler AS neu
JOIN zaehler AS alt ON neu.uid=(alt.uid+4)
LEFT JOIN preise ON DATE_FORMAT(preise.zeitstempel, "%Y%m%d%H%i")=DATE_FORMAT(alt.zeitstempel, "%Y%m%d%H%i")
WHERE MINUTE(alt.zeitstempel)=0
AND preise.zeitstempel>=DATE_SUB(CURRENT_DATE(),INTERVAL 30 DAY);
Actual cost, minimal cost, maximal cost, average cost per day
This graph uses only one query:
SELECT DATE(preise.zeitstempel) AS 'date',
MIN(preise.preis)*SUM(verbrauch.energie) AS 'minimal cost',
MAX(preise.preis)*SUM(verbrauch.energie) AS 'maximal cost',
AVG(preise.preis)*SUM(verbrauch.energie) AS 'average cost',
SUM(verbrauch.kosten) AS 'actual cost'
FROM preise JOIN verbrauch ON verbrauch.zeitstempel=preise.zeitstempel
WHERE DATE(preise.zeitstempel)>'2024-03-03'
GROUP BY DATE(preise.zeitstempel);
Cost and amount of purchased electricity from grid with price indication
The first graph (Cost of purchased electricity from grid with price indication) uses the query:
SELECT DATE_FORMAT(preise.zeitstempel,'%m/%d') AS 'Datum',
ROW_NUMBER() OVER (PARTITION BY DATE(preise.zeitstempel) ORDER BY preise.preis ASC) AS 'Row',
verbrauch.kosten AS 'Kosten'
FROM preise JOIN verbrauch ON verbrauch.zeitstempel=preise.zeitstempel
WHERE DATE(preise.zeitstempel)>DATE_ADD(CURDATE(), INTERVAL -15 DAY)
ORDER BY DATE(preise.zeitstempel) ASC, preise.preis ASC;
The second graph (Amount of purchased electricity from grid with price indication) uses the query:
SELECT DATE_FORMAT(preise.zeitstempel,'%m/%d') AS 'Datum',
ROW_NUMBER() OVER (PARTITION BY DATE(preise.zeitstempel) ORDER BY preise.preis ASC) AS 'Row',
verbrauch.energie AS 'Energie'
FROM preise JOIN verbrauch ON verbrauch.zeitstempel=preise.zeitstempel
WHERE DATE(preise.zeitstempel)>DATE_ADD(CURDATE(), INTERVAL -15 DAY)
ORDER BY DATE(preise.zeitstempel) ASC, preise.preis ASC;
Hourly price distribution
The red histogram uses the query:
SELECT preis FROM preise WHERE DATE_FORMAT(zeitstempel,'%Y-%m')='${Month}';
The blue histogram uses the query:
SELECT preis FROM preise
WHERE DATE_FORMAT(zeitstempel,'%Y-%m')='${Month}'
AND DAYNAME(zeitstempel)='${Weekday}';
The yellow histogram uses the query:
SELECT preis FROM preise
WHERE DATE_FORMAT(zeitstempel,'%Y-%m')='${Month}'
AND WEEKDAY(zeitstempel)<5
AND DATE(zeitstempel) NOT IN (SELECT datum FROM aux_feiertage);
The green histogram uses the query:
SELECT preis FROM preise
WHERE DATE_FORMAT(zeitstempel,'%Y-%m')='${Month}'
AND (WEEKDAY(zeitstempel)>=5
OR DATE(zeitstempel) IN (SELECT datum FROM aux_feiertage));
The queries above use these variables on the respective Grafana dashboard, as well as the additional helper table aux_feiertage (listing national holidays) in the MariaDB database.
The variable Month on the Grafana dashboard has the query:
SELECT DISTINCT(DATE_FORMAT(zeitstempel,'%Y-%m')) FROM preise;
The variable Weekday on the Grafana dashboard has the query:
SELECT aux_wochentage.wochentag AS Weekday
FROM aux_wochentage JOIN aux_sprachen ON aux_sprachen.id_sprache=aux_wochentage.id_sprache
WHERE aux_sprachen.sprachcode='en'
ORDER BY aux_wochentage.id_tag ASC;
The last query also uses the tables aux_wochentage and aux_sprachen in order to offer a multilingual interface (not strictly necessary for the functionality). The tables are:
CREATE TABLE aux_wochentage (id_tag TINYINT UNSIGNED NOT NULL,
id_sprache TINYINT UNSIGNED NOT NULL,
wochentag VARCHAR(30) NOT NULL);
CREATE TABLE aux_sprachen (id_sprache TINYINT UNSIGNED NOT NULL PRIMARY KEY,
sprachcode CHAR(2),
sprache VARCHAR(30));
CREATE TABLE aux_feiertage (uid SMALLINT UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
datum DATE NOT NULL,
beschreibung VARCHAR(50));
INSERT INTO aux_sprachen VALUES (0,'en','English'),
(1,'de','Deutsch'),
(2,'pt','Português'),
(3,'zh','中文');
INSERT INTO aux_wochentage VALUES (0,0,'Monday'),
(1,0,'Tuesday'),
(2,0,'Wednesday'),
(3,0,'Thursday'),
(4,0,'Friday'),
(5,0,'Saturday'),
(6,0,'Sunday'),
(0,1,'Montag'),
(1,1,'Dienstag'),
(2,1,'Mittwoch'),
(3,1,'Donnerstag'),
(4,1,'Freitag'),
(5,1,'Samstag'),
(6,1,'Sonntag'),
(0,2,'segunda-feira'),
(1,2,'terça-feira'),
(2,2,'quarta-feira'),
(3,2,'quinta-feira'),
(4,2,'sexta-feira'),
(5,2,'sábado'),
(6,2,'domingo'),
(0,3,'星期一'),
(1,3,'星期二'),
(2,3,'星期三'),
(3,3,'星期四'),
(4,3,'星期五'),
(5,3,'星期六'),
(6,3,'星期日');
INSERT INTO aux_feiertage (datum, beschreibung) VALUES ('2023-01-01','Neujahr'),
('2023-01-06','Heilige Drei Könige'),
('2023-04-07','Karfreitag'),
('2023-04-10','Ostermontag'),
('2023-05-01','Tag der Arbeit'),
('2023-05-18','Christi Himmelfahrt'),
('2023-05-29','Pfingstmontag'),
('2023-06-08','Fronleichnam'),
('2023-08-15','Mariä Himmelfahrt'),
('2023-10-03','Tag der Deutschen Einheit'),
('2023-11-01','Allerheiligen'),
('2023-12-25','1. Weihnachtstag'),
('2023-12-26','2. Weihnachtstag'),
('2024-01-01','Neujahr'),
('2024-01-06','Heilige Drei Könige'),
('2024-03-29','Karfreitag'),
('2024-04-01','Ostermontag'),
('2024-05-01','Tag der Arbeit'),
('2024-05-09','Christi Himmelfahrt'),
('2024-05-20','Pfingstmontag'),
('2024-05-30','Fronleichnam'),
('2024-08-15','Mariä Himmelfahrt'),
('2024-10-03','Tag der Deutschen Einheit'),
('2024-11-01','Allerheiligen'),
('2024-12-25','1. Weihnachtstag'),
('2024-12-26','2. Weihnachtstag'),
('2025-01-01','Neujahr'),
('2025-01-06','Heilige Drei Könige'),
('2025-04-18','Karfreitag'),
('2025-04-21','Ostermontag'),
('2025-05-01','Tag der Arbeit'),
('2025-05-29','Christi Himmelfahrt'),
('2025-06-09','Pfingstmontag'),
('2025-06-19','Fronleichnam'),
('2025-08-15','Mariä Himmelfahrt'),
('2025-10-03','Tag der Deutschen Einheit'),
('2025-11-01','Allerheiligen'),
('2025-12-25','1. Weihnachtstag'),
('2025-12-26','2. Weihnachtstag');
Monthly hourly price curves
This graph uses only one query:
SELECT preise.zeitstempel AS 'time',
MIN(preise.preis) AS 'Minimum Price',
ROUND(AVG(preise.preis),4) AS 'Average Price',
MAX(preise.preis) AS 'Maximum Price',
ROUND((SUM(verbrauch.kosten)/(SUM(neu.bezug)-SUM(alt.bezug)+SUM(neu.erzeugung)-SUM(alt.erzeugung)-SUM(neu.einspeisung)+SUM(alt.einspeisung))),4) AS 'Achieved (Mixed) Price'
FROM preise
INNER JOIN verbrauch ON preise.zeitstempel=verbrauch.zeitstempel
LEFT JOIN zaehler AS alt ON DATE_FORMAT(alt.zeitstempel,"%Y%m%d%H%i")=DATE_FORMAT(preise.zeitstempel,"%Y%m%d%H%i")
INNER JOIN zaehler AS neu ON neu.uid=(alt.uid+4)
WHERE DATE_FORMAT(preise.zeitstempel,'%Y-%m')='${Month}'
GROUP BY TIME(preise.zeitstempel);
It is important to keep in mind that the query returns the timespan of the first day of the month (and only the first day) between 00:00 UTC and 23:59 UTC; hence the time window on the Grafana dashboard has to be chosen accordingly (e.g., from 2024-07-01 02:00:00 to 2024-07-02 01:00:00 in the timezone CEST).
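If you prefer not to fiddle with the dashboard time window, an alternative is to shift the stored UTC timestamps to local time directly in the query and group by the local hour. A sketch for the three price curves only (the fixed +2 h offset for CEST is my simplification and is wrong during winter time):

SELECT HOUR(DATE_ADD(zeitstempel, INTERVAL 2 HOUR)) AS 'hour (CEST)',
MIN(preis) AS 'Minimum Price',
ROUND(AVG(preis),4) AS 'Average Price',
MAX(preis) AS 'Maximum Price'
FROM preise
WHERE DATE_FORMAT(zeitstempel,'%Y-%m')='${Month}'
GROUP BY HOUR(DATE_ADD(zeitstempel, INTERVAL 2 HOUR));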
Electricity price levels
This graph uses only one query:
SELECT DATE_FORMAT(zeitstempel,'%Y-%m') AS 'Month',
COUNT(CASE WHEN preis<=0.1 THEN preis END) AS 'very cheap (≤ 0.10 €/kWh)',
COUNT(CASE WHEN preis>0.1 AND preis<=0.2 THEN preis END) AS 'cheap (≤ 0.20 €/kWh)',
COUNT(CASE WHEN preis>0.2 AND preis<=0.3 THEN preis END) AS 'normal (≤ 0.30 €/kWh)',
COUNT(CASE WHEN preis>0.3 AND preis<=0.4 THEN preis END) AS 'expensive (≤ 0.40 €/kWh)',
COUNT(CASE WHEN preis>0.4 THEN preis END) AS 'very expensive (> 0.40 €/kWh)'
FROM preise
GROUP BY Month;
Self-sufficiency (Autarkie) and self-consumption (Eigenverbrauch)
This graph uses only one query:
SELECT DATE_FORMAT(alt.zeitstempel,'%m/%d') AS 'Datum',
ROUND((neu.erzeugung-alt.erzeugung-neu.einspeisung+alt.einspeisung)*100/(neu.bezug-alt.bezug+neu.erzeugung-alt.erzeugung-neu.einspeisung+alt.einspeisung)) AS 'Autarkiegrad',
ROUND((neu.erzeugung-alt.erzeugung-neu.einspeisung+alt.einspeisung)*100/(neu.erzeugung-alt.erzeugung)) AS 'Eigenverbrauchsquote'
# 96 readings of 15 minutes each = 24 hours, so uid+96 compares midnight with midnight
FROM zaehler AS neu
JOIN zaehler AS alt ON neu.uid=(alt.uid+96)
WHERE HOUR(alt.zeitstempel)=0
AND MINUTE(alt.zeitstempel)=0
AND alt.zeitstempel>=DATE_SUB(CURRENT_DATE(),INTERVAL 14 DAY);
Files
The following dataset was used for the graphs:
Sources
- [1] = Tibber Developer
- [2] = Download Grafana | Grafana Labs
- [3] = Node-RED
- [4] = Tibber Developer: Communicating with the API
- [5] = Smarthome: AVM-Steckdosen per Skript auslesen
- [6] = Grafana Visualizations (Part 2)
- [7] = Strom: heute Extrempreise für Kunden im EPEX SPOT – ISPEX
- [8] = Dynamischer Stromtarif – Warum fallen ALLE darauf rein?
Disclaimer
- Program codes and examples are for demonstration purposes only.
- Program codes should not be used in production environments without further enhancements in terms of speed, failure tolerance, and cyber-security.
- While program codes have been tested, they might still contain errors.
- I am neither affiliated nor linked to companies named in this blog post.
Grafana Visualizations (Part 2)
Executive Summary
In this article, we use Grafana to examine real-world data of electricity consumption stored in a MariaDB database. As dynamic pricing (day-ahead market) is used, we also investigate how well I have fared so far with dynamic pricing.
Background
On 1 March 2024, I switched from a traditional electricity provider to one with dynamic day-ahead pricing, in my case, Tibber. I wanted to try this contractual model and see whether I could successfully shift chunks of high electricity consumption such as:
- … charging the battery-electric vehicle (BEV) or the plug-in hybrid car (PHEV)
- … washing clothes
- … drying clothes in the electric dryer
to those times of the day when the electricity price is lower. I also wanted to see if that makes economic sense for me. And, after all, it is fun to play around with data and gain new insights.
As my electricity supplier, I chose Tibber because they were the first one I got to know, and they offer a device called Pulse which connects a digital electricity meter to their infrastructure for metering and billing purposes. Furthermore, they have an API [1] which allows me to read out my own data; that was very important for me. I understand that there are meanwhile several providers besides Tibber with similar models and comparable features.
In my opinion, dynamic electricity prices will play an important role in the future. As we generate ever more energy from renewables (solar and wind power), there are times when a lot of electricity is generated or when we even see over-production, and there are times when little electricity is produced (“Dunkelflaute”, a prolonged period with little sun and wind). Dynamic prices are an excellent tool to motivate people to shift part of their consumption pattern to times with a large electricity supply (cheap prices). An easy-to-understand example is washing clothes during the day in summer (rather than in the evening) when there is a large supply of solar energy; the price then typically has a local minimum between lunch time and dinner time.
Preconditions
In order to use the approach described here, you should:
- … have access to a Linux machine or account
- … have a MySQL or MariaDB database server installed, configured, up and running
- … have a populated MySQL or MariaDB database like in our example to which you have access
- … have the package Grafana [2] installed, configured, up and running
- … have access to the data of your own electricity consumption and pricing information of your supplier or use the dataset linked below in this blog
- … have some understanding of day-ahead pricing in the electricity market [3]
- … have some basic knowledge of how to operate in a Linux environment and some basic understanding of shell scripts
Description and Usage
The Database
The base for the following visualizations is a fully populated MariaDB database with the following structure:
# Database for analyses with Tibber
# V1.1; 2023-10-19, Gabriel Rüeck <gabriel@rueeck.de>, <gabriel@caipirinha.spdns.org>
# Delete existing databases
REVOKE ALL ON tibber.* FROM 'gabriel';
DROP DATABASE tibber;
# Create a new database
CREATE DATABASE tibber DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;
GRANT ALL ON tibber.* TO 'gabriel';
USE tibber;
SET default_storage_engine=Aria;
CREATE TABLE preise (zeitstempel DATETIME NOT NULL,
preis DECIMAL(5,4) NOT NULL,
niveau ENUM('VERY_CHEAP','CHEAP','NORMAL','EXPENSIVE','VERY_EXPENSIVE'));
CREATE TABLE verbrauch (zeitstempel DATETIME NOT NULL,
energie DECIMAL(5,3) NOT NULL,
kosten DECIMAL(5,4) NOT NULL);
The section Files at the end of this blog post provides real-world sample data which you can use to populate the database and do your own calculations and graphs. The database contains two tables. The one named preise contains the day-ahead prices and a price level tag which is determined by Tibber themselves according to [1]. The second table, named verbrauch, contains the electrical energy I have consumed and the cost associated with that consumption. In both tables, zeitstempel indicates the date and the hour at which the respective 1-hour block with the given electricity price or consumption starts. Consequently, the day is divided into 24 blocks of 1 hour. Data from the tables might look like this:
MariaDB [tibber]> SELECT * FROM preise WHERE DATE(zeitstempel)='2024-03-18';
+---------------------+--------+-----------+
| zeitstempel | preis | niveau |
+---------------------+--------+-----------+
| 2024-03-18 00:00:00 | 0.2658 | NORMAL |
| 2024-03-18 01:00:00 | 0.2575 | NORMAL |
| 2024-03-18 02:00:00 | 0.2588 | NORMAL |
| 2024-03-18 03:00:00 | 0.2601 | NORMAL |
| 2024-03-18 04:00:00 | 0.2661 | NORMAL |
| 2024-03-18 05:00:00 | 0.2737 | NORMAL |
| 2024-03-18 06:00:00 | 0.2922 | NORMAL |
| 2024-03-18 07:00:00 | 0.3059 | EXPENSIVE |
| 2024-03-18 08:00:00 | 0.3019 | EXPENSIVE |
| 2024-03-18 09:00:00 | 0.2880 | NORMAL |
| 2024-03-18 10:00:00 | 0.2761 | NORMAL |
| 2024-03-18 11:00:00 | 0.2688 | NORMAL |
| 2024-03-18 12:00:00 | 0.2700 | NORMAL |
| 2024-03-18 13:00:00 | 0.2707 | NORMAL |
| 2024-03-18 14:00:00 | 0.2715 | NORMAL |
| 2024-03-18 15:00:00 | 0.2768 | NORMAL |
| 2024-03-18 16:00:00 | 0.2834 | NORMAL |
| 2024-03-18 17:00:00 | 0.3176 | EXPENSIVE |
| 2024-03-18 18:00:00 | 0.3629 | EXPENSIVE |
| 2024-03-18 19:00:00 | 0.3400 | EXPENSIVE |
| 2024-03-18 20:00:00 | 0.3129 | EXPENSIVE |
| 2024-03-18 21:00:00 | 0.2861 | NORMAL |
| 2024-03-18 22:00:00 | 0.2827 | NORMAL |
| 2024-03-18 23:00:00 | 0.2781 | NORMAL |
+---------------------+--------+-----------+
24 rows in set (0,002 sec)
MariaDB [tibber]> SELECT * FROM verbrauch WHERE DATE(zeitstempel)='2024-03-18';
+---------------------+---------+--------+
| zeitstempel | energie | kosten |
+---------------------+---------+--------+
| 2024-03-18 00:00:00 | 0.554 | 0.1472 |
| 2024-03-18 01:00:00 | 0.280 | 0.0721 |
| 2024-03-18 02:00:00 | 0.312 | 0.0808 |
| 2024-03-18 03:00:00 | 0.307 | 0.0799 |
| 2024-03-18 04:00:00 | 0.282 | 0.0750 |
| 2024-03-18 05:00:00 | 0.315 | 0.0862 |
| 2024-03-18 06:00:00 | 0.377 | 0.1102 |
| 2024-03-18 07:00:00 | 0.368 | 0.1126 |
| 2024-03-18 08:00:00 | 0.275 | 0.0830 |
| 2024-03-18 09:00:00 | 0.793 | 0.2284 |
| 2024-03-18 10:00:00 | 1.041 | 0.2875 |
| 2024-03-18 11:00:00 | 0.453 | 0.1217 |
| 2024-03-18 12:00:00 | 0.362 | 0.0977 |
| 2024-03-18 13:00:00 | 0.005 | 0.0014 |
| 2024-03-18 14:00:00 | 0.027 | 0.0073 |
| 2024-03-18 15:00:00 | 0.144 | 0.0399 |
| 2024-03-18 16:00:00 | 0.248 | 0.0703 |
| 2024-03-18 17:00:00 | 0.363 | 0.1153 |
| 2024-03-18 18:00:00 | 0.381 | 0.1382 |
| 2024-03-18 19:00:00 | 0.360 | 0.1224 |
| 2024-03-18 20:00:00 | 0.354 | 0.1108 |
| 2024-03-18 21:00:00 | 0.382 | 0.1093 |
| 2024-03-18 22:00:00 | 0.373 | 0.1055 |
| 2024-03-18 23:00:00 | 0.417 | 0.1159 |
+---------------------+---------+--------+
24 rows in set (0,001 sec)
Connecting Grafana to the Database
Now we shall visualize the data in Grafana. Grafana is a powerful visualization tool with which you can create state-of-the-art dashboards and professional visualizations. I must really laud the team behind Grafana for making such a powerful tool free for personal and other usage (for details on their licensing and usage models, see Licensing | Grafana Labs).
Before you can use data from a MySQL database in Grafana, you have to set up MySQL as a data source in Connections. Remember that MySQL is one of many possible data sources for Grafana, so you have to walk through the jungle of offered data sources, find the MySQL connection, and set up your data source accordingly. On my server, both Grafana and MariaDB run on the same machine, so there is no need for encryption, etc. My setup simply looks like this:

One step where I always stumble is that in the entry mask for the connection setup, localhost:3306 is proposed in grey as Host, but Grafana will not actually use localhost:3306 unless you type it in yourself. So be sure to physically type it in.
Populating the Database
Tibber customers who have created an API token for themselves [4] can populate the database with the following bash script; in this script, you need to replace _API_TOKEN with your personal API token ([4]) and _HOME_ID with your personal Home ID ([1]). The script further assumes that the user gabriel can log in to the MySQL database without further authentication; this can be achieved by writing the MySQL login information into the file ~/.my.cnf, as sketched earlier.
#!/bin/bash
#
# This script reads data for the Tibber database and stores the result in a MySQL database.
# The script is invoked once per day.
#
# V1.3; 2024-03-24, Gabriel Rüeck <gabriel@rueeck.de>, <gabriel@caipirinha.spdns.org>
#
# CONSTANTS
declare -r MYSQL_DATABASE='tibber'
declare -r MYSQL_SERVER='localhost'
declare -r MYSQL_USER='gabriel'
declare -r TIBBER_API_TOKEN='_API_TOKEN'
declare -r TIBBER_API_URL='https://api.tibber.com/v1-beta/gql'
declare -r TIBBER_HOME_ID='_HOME_ID'
# VARIABLES
# PROGRAM
# Read price information for tomorrow
curl -s -S -H "Authorization: Bearer ${TIBBER_API_TOKEN}" -H "Content-Type: application/json" -X POST -d '{ "query": "{viewer {home (id: \"'"${TIBBER_HOME_ID}"'\") {currentSubscription {priceInfo {tomorrow {total startsAt level }}}}}}" }' "${TIBBER_API_URL}" | jq -r '.data.viewer.home.currentSubscription.priceInfo.tomorrow[] | .total, .startsAt, .level' | while read cost; do
read LINE
read level
timestamp=$(echo "${LINE%%+*}" | tr 'T' ' ')
# Determine timezone offset and store the UTC datetime in the database
offset="${LINE:23}"
mysql --default-character-set=utf8mb4 -B -N -r -D "${MYSQL_DATABASE}" -h ${MYSQL_SERVER} -u ${MYSQL_USER} -e "INSERT INTO preise (zeitstempel,preis,niveau) VALUES (DATE_SUB(\"${timestamp}\",INTERVAL \"${offset}\" HOUR_MINUTE),${cost},\"${level}\");"
done
# Read consumption information from the past 24 hours
curl -s -S -H "Authorization: Bearer ${TIBBER_API_TOKEN}" -H "Content-Type: application/json" -X POST -d '{ "query": "{viewer {home (id: \"'"${TIBBER_HOME_ID}"'\") {consumption (resolution: HOURLY, last: 24) {nodes {from to cost consumption}}}}}" }' "${TIBBER_API_URL}" | jq -r '.data.viewer.home.consumption.nodes[] | .from, .consumption, .cost' | while read LINE; do
read consumption
read cost
timestamp=$(echo "${LINE%%+*}" | tr 'T' ' ')
# Determine timezone offset and store the UTC datetime in the database
offset="${LINE:23}"
mysql --default-character-set=utf8mb4 -B -N -r -D "${MYSQL_DATABASE}" -h ${MYSQL_SERVER} -u ${MYSQL_USER} -e "INSERT INTO verbrauch (zeitstempel,energie,kosten) VALUES (DATE_SUB(\"${timestamp}\",INTERVAL \"${offset}\" HOUR_MINUTE),${consumption},${cost});"
done
In my case, I call this script once per day via cron, at 14:45, as Tibber releases the price information for the following day only at 14:00. The script stores UTC timestamps in the database; Grafana will adjust them to local time for graphs of the type Time series.
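The corresponding crontab entry could look like this (a sketch; the script path is hypothetical):

45 14 * * * /home/gabriel/bin/tibber-update.sh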
Easy Visualizations
Price level
One of my first visualizations was a table that shows the price level tags for today and tomorrow. The idea is that I look at the table once per day during breakfast and immediately identify the sweet spots where I could charge the car, turn on the washing machine, etc.

For this table, we use the built-in Grafana visualization Table. We select the corresponding database, and our MySQL query is:
SELECT IF(DATE(DATE_ADD(zeitstempel,INTERVAL HOUR(TIMEDIFF(SYSDATE(),UTC_TIMESTAMP())) HOUR))=CURDATE(),'TODAY','TOMORROW') AS Tag,
HOUR(DATE_ADD(zeitstempel,INTERVAL HOUR(TIMEDIFF(SYSDATE(),UTC_TIMESTAMP())) HOUR)) AS Stunde,
niveau
FROM preise
WHERE DATE_ADD(zeitstempel,INTERVAL HOUR(TIMEDIFF(SYSDATE(),UTC_TIMESTAMP())) HOUR)>=CURDATE()
GROUP BY Tag, Stunde;
The query above lists one price level per line. However, in order to get the nice visualization shown above, we need to pivot the result into a matrix of days and hours; therefore, we select the built-in transformation Grouping to Matrix and enter our column names according to the image below:

Now, in order to get a beautiful visualization, we need to adjust some of the panel options, and those are:
- Cell options → Cell type: Set to Auto
- Standard options → Color scheme: Set to Single color
and we need to define some Value mappings:
- NORMAL → Select yellow color
- EXPENSIVE → Select orange color
- VERY_EXPENSIVE → Select red color
- CHEAP → Select green color
- VERY_CHEAP → Select blue color
and we need to define some Overrides:
- Override 1 → Fields with name: Select TODAY
- Override 1 → Cell options → Cell type: Set to Colored background
- Override 2 → Fields with name: Select TOMORROW
- Override 2 → Cell options → Cell type: Set to Colored background
- Override 3 → Fields with type: Select Stunde\Tag
- Override 3 → Column width → Set to 110
- Override 3 → Standard options → Display name: Type HOUR (That is only necessary because I initially did my query with German column names)
In the MySQL query above, you can find the sequence
DATE_ADD(zeitstempel,INTERVAL HOUR(TIMEDIFF(SYSDATE(),UTC_TIMESTAMP())) HOUR)
which transforms the UTC timestamp into the local time of the machine on which the MySQL server resides. In my case, this is the same machine that I use for Grafana. The sub-sequence
HOUR(TIMEDIFF(SYSDATE(),UTC_TIMESTAMP()))
returns – in my case – 2 when we are on summer time (UTC+2) and 1 when we are on winter time (UTC+1). This sequence was necessary for me because the Grafana visualization Table does not do an automatic time conversion to local time.
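You can quickly check which offset your server currently reports (the comment shows my values; yours depend on the server's timezone):

SELECT HOUR(TIMEDIFF(SYSDATE(),UTC_TIMESTAMP())) AS offset_hours;
-- returns 2 during CEST (UTC+2) and 1 during CET (UTC+1)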
Electricity price (per kWh) and grid consumption
In the next visualization, we look at a timeline of the electricity price per hour and our consumption pattern. The result looks like this graph:

For this graph, we use the built-in Grafana visualization Time series. We select the corresponding database and define two MySQL queries. The first one is named Price:
SELECT zeitstempel,
preis AS 'price',
AVG(preis) OVER (ORDER BY zeitstempel ROWS BETWEEN 47 PRECEDING AND CURRENT ROW) as 'rolling 2-days average price'
FROM preise;
The first query has the peculiarity that we not only retrieve the price for each 1-hour block but also calculate a rolling 2-day average over the extracted data (47 preceding rows plus the current row make 48 hourly values, i.e., two days). The second query is named Consumption and retrieves our energy consumption:
SELECT zeitstempel, energie AS 'grid consumption' FROM verbrauch;
Again, we need to adjust some of the panel options, and those are:
- Standard options → Unit: Select (Euro) €
- Standard options → Min: Set to 0
- Standard options → Decimals: Set to 2
and we need to define some Overrides:
- Override 1 → Fields with name: Select grid consumption
- Override 1 → Axis → Placement: Select Right
- Override 1 → Standard options → Unit: Select Kilowatt-hour (kWh)
- Override 1 → Standard options → Decimals: Set to 1
In this graph, we can already get an indication of how well we time our energy consumption. Ideally, the peaks (local maxima) in energy consumption (blue line) should coincide with the local minima of the electricity price (green line). As you can see from the two peaks of the blue line (charging the BEV), I did not always hit this sweet spot. There might be good reasons for consuming electricity in hours of higher prices as well, and some examples are:
- You have to drive away at a certain hour, and you need to charge the BEV now.
- You have solar generation at certain times of the day which you intend to use, and you therefore consume electricity (for example, for charging a BEV) when the sun shines. While you might then not consume at the cheapest hour, you might ultimately make good use of the additional solar energy.
- You want to watch TV in the evening when the electricity price is typically high.
This graph visualizes the data according to the timezone of the dashboard, so there is no need to add an offset to the UTC timestamps in the database.
Heatmap of the Tibber flexible hourly prices per kWh
In the next visualization, we look at a heatmap of the hourly prices and therefore, we use the plug-in Grafana visualization Hourly heatmap.

The query is very simple:
SELECT UNIX_TIMESTAMP(DATE_ADD(zeitstempel,INTERVAL HOUR(TIMEDIFF(SYSDATE(),UTC_TIMESTAMP())) HOUR)) AS time, preis FROM preise;
However, we need to adjust some of the panel options, and those are:
- Dimensions → Time: Select time
- Dimensions → Value: Select preis
- Hourly heatmap → From: Set to 00:00
- Hourly heatmap → To: Set to 00:00
- Hourly heatmap → Group by: Select 60 minutes
- Hourly heatmap → Calculation: Select Sum
- Hourly heatmap → Color palette: Select Spectral
- Hourly heatmap → Invert color palette → Activate
- Legend → Show legend → Activate
- Legend → Gradient quality: Select Low
- Standard options → Unit: Select (Euro) €
- Standard options → Decimals: Set to 2
- Standard options → Color scheme: Select From thresholds (by value)
With the heatmap, we have an easy-to-understand visualization of when prices are high, when they are low, and whether there is a regular pattern to observe. In this case, we can identify that in the evening (at dinner or TV time), the electricity price often is high. Hence, this is not a good time to switch on powerful electric consumers or to start charging a BEV.
Similar to the Grafana visualization Table, you can find the sequence
DATE_ADD(zeitstempel,INTERVAL HOUR(TIMEDIFF(SYSDATE(),UTC_TIMESTAMP())) HOUR)
because the Grafana visualization Hourly heatmap also does not do an automatic time conversion to the local time.
Complex Visualizations
Actual cost versus minimal cost, maximal cost, average cost per day
This visualization shows the daily electricity cost that I have incurred (blue line) and the cost I would have incurred if I had purchased the whole amount of electricity on the respective day:
- … during the cheapest hour on that day (green line)
- … during the most expensive hour on that day (red line)
- … at an average price (average of all hours) on that day (yellow line)
The closer the blue line is to the green line, the better I have shifted my consumption pattern to the hours of cheap electricity. In real life, one will always have to purchase some electricity in hours of expensive electricity, unless one switches off all devices in the household at certain hours. So, in real life, the blue line will never coincide with the green line. A blue line between the yellow and the green line already indicates that one is doing well. The graph also shows that my consumption varies substantially from day to day. The graph uses the built-in Grafana visualization Time series. The last data point should be disregarded, as the last day only contains data until 14:00 local time.

For this graph, we join the tables preise and verbrauch in a MySQL query and group the result by day:
SELECT DATE(preise.zeitstempel) AS 'date',
MIN(preise.preis)*SUM(verbrauch.energie) AS 'minimal cost',
MAX(preise.preis)*SUM(verbrauch.energie) AS 'maximal cost',
AVG(preise.preis)*SUM(verbrauch.energie) AS 'average cost',
SUM(verbrauch.kosten) AS 'actual cost'
FROM preise JOIN verbrauch ON verbrauch.zeitstempel=preise.zeitstempel
WHERE DATE(preise.zeitstempel)>'2024-03-03'
GROUP BY DATE(preise.zeitstempel);
The following panel options are recommended:
- Standard options → Unit: Select (Euro) €
- Standard options → Decimals: Set to 2
and the following Overrides are recommended:
- Override 1 → Fields with name: Select minimal cost
- Override 1 → Standard options → Color scheme: Select Single color, then select green
- Override 2 → Fields with name: Select maximal cost
- Override 2 → Standard options → Color scheme: Select Single color, then select red
- Override 3 → Fields with name: Select actual cost
- Override 3 → Standard options → Color scheme: Select Single color, then select blue
- Override 4 → Fields with name: Select average cost
- Override 4 → Standard options → Color scheme: Select Single color, then select yellow
Important: This graph adds up the data from a UTC day (not the local calendar day) and visualizes the data points according to the timezone of the dashboard.
Cumulated savings versus various fictive static electricity prices
This visualization shows, in a cumulative manner, how much money I have saved using dynamic pricing versus fictive electricity contracts with different static prices per kWh (traditional contracts). The graph uses the built-in Grafana visualization Time series. The last data point should be disregarded, as the last day only contains data until 14:00 local time.

One can see that versus traditional prices below 27 ¢/kWh, I would not have saved any money so far. 27 ¢/kWh is the price that I could get in a traditional electricity contract at the place where I live. However, it is too early to draw final conclusions. I intend to observe how the graphs advance as we get into summer; I expect that during summer, I will have more hours of cheap electricity, whereas in the subsequent winter, I will probably have more hours at higher prices. The graph is done with this query:
SELECT DATE(preise.zeitstempel) AS 'Datum',
SUM(0.25*SUM(verbrauch.energie)-SUM(verbrauch.kosten)) OVER (ORDER BY DATE(preise.zeitstempel)) as '25 ¢/kWh',
SUM(0.26*SUM(verbrauch.energie)-SUM(verbrauch.kosten)) OVER (ORDER BY DATE(preise.zeitstempel)) as '26 ¢/kWh',
SUM(0.27*SUM(verbrauch.energie)-SUM(verbrauch.kosten)) OVER (ORDER BY DATE(preise.zeitstempel)) as '27 ¢/kWh',
SUM(0.28*SUM(verbrauch.energie)-SUM(verbrauch.kosten)) OVER (ORDER BY DATE(preise.zeitstempel)) as '28 ¢/kWh',
SUM(0.29*SUM(verbrauch.energie)-SUM(verbrauch.kosten)) OVER (ORDER BY DATE(preise.zeitstempel)) as '29 ¢/kWh'
FROM preise JOIN verbrauch ON verbrauch.zeitstempel=preise.zeitstempel
WHERE DATE(preise.zeitstempel)>'2024-03-03'
GROUP BY DATE(preise.zeitstempel);
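In other words, for a fictive static price p, the curve plots (my notation):

cumulated saving(p) = Σ over all days (p × E_day − C_day)

where E_day is the energy consumed on a day and C_day is the actual cost paid for that day under dynamic pricing.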
The following panel options are recommended:
- Standard options → Unit: Select (Euro) €
- Standard options → Decimals: Set to 2
Important: This graph adds up the data from a UTC day (not the local calendar day) and visualizes the data points according to the timezone of the dashboard.
Daily savings using Tibber flexible versus a fictive static price of 27 ¢/kWh
This next visualization shows how much money I have saved (green) or lost (red) using dynamic pricing versus a fictive electricity contract with a static price of 27 ¢/kWh that – as mentioned before – I could get in a traditional electricity contract at the place where I live. The graph uses the built-in Grafana visualization Bar chart. The last data point should be disregarded, as the last day only contains data until 14:00 local time.

One can see that due to the nature of dynamic prices, I do not save money every day. It has been quite a mixed result so far. The graph is done with this query:
SELECT DATE_FORMAT(preise.zeitstempel,'%m/%d') AS 'Datum',
0.27*SUM(verbrauch.energie)-SUM(verbrauch.kosten) AS 'Daily cost vs. average'
FROM preise JOIN verbrauch ON verbrauch.zeitstempel=preise.zeitstempel
WHERE DATE(preise.zeitstempel)>'2024-03-03'
GROUP BY DATE(preise.zeitstempel);
The following panel options are recommended:
- Bar chart → Color by field: Select Daily cost vs. average
- Standard options → Unit: Select (Euro) €
- Standard options → Decimals: Set to 2
- Standard options → Color scheme: Select From thresholds (by value)
- Thresholds → Enter 0, then select green
- Thresholds → Base: Select red
Important: This graph adds up the data from a UTC day (not the local calendar day).
Daily savings using Tibber flexible versus Tibber daily average price
A similar visualization shows how much money I have saved (green) or lost (red) using dynamic pricing versus the average price per day, calculated over all dynamic prices of that day. This graph shows me whether I am successful in making use of dynamic pricing (green) or not (red). The graph uses the built-in Grafana visualization Bar chart. The last data point should be disregarded, as the last day only contains data until 14:00 local time.

So far, it seems that while I do have “good” and “bad” days, on the good days I save more money compared to the Tibber average price. The large green bars are days on which I charged the BEV; before I charge the BEV, I carefully consider the electricity prices of today and tomorrow. This graph is done with this query:
SELECT DATE_FORMAT(preise.zeitstempel,'%m/%d') AS 'Datum',
AVG(preise.preis)*SUM(verbrauch.energie)-SUM(verbrauch.kosten) AS 'Daily cost vs. average'
FROM preise JOIN verbrauch ON verbrauch.zeitstempel=preise.zeitstempel
WHERE DATE(preise.zeitstempel)>'2024-03-03'
GROUP BY DATE(preise.zeitstempel);
The following panel options are recommended:
- Bar chart → Color by field: Select Daily cost vs. average
- Standard options → Unit: Select (Euro) €
- Standard options → Decimals: Set to 2
- Standard options → Color scheme: Select From thresholds (by value)
- Thresholds → Enter 0, then select green
- Thresholds → Base: Select red
Important: This graph adds up the data from a UTC day (not the local calendar day).
Cost of purchased electricity from grid with price indication
This is one of my favorite graphs as it packs a lot of information into one visualization. It shows the electricity cost per day, but also how this cost is composed. Each day is a concatenation of 24 rectangles which represent the 24 hours of the day. Their colors range in shades from green to red. The pure green rectangle (rgb: 0, 255, 0) is the cost incurred in the cheapest hour of the day; the pure red rectangle (rgb: 255, 0, 0) is the cost incurred in the most expensive hour of the day. The shades in between represent an ordered list of the hours from cheap to expensive. Large rectangles mean that a large part of the daily cost can be attributed to consumption in that hour. Essentially, the more green shades a day has, the more of its cost was incurred in hours of cheap electricity and the better I have used dynamic pricing for myself. The more red shades a day has, the more of its cost was incurred in hours of expensive electricity. A day with more red shades is not automatically a “bad” day. There might be reasons for consumption in expensive hours, and some that apply in my case are:
- I might cover the electricity demand at cheap hours with my solar panels so that during these hours, I do not have any grid consumption at all.
- I might deliberately decide to consume electricity in expensive hours during daylight because my solar panels cover a large part of the consumption of my home, maybe because I charge the car and 50% of the electric energy comes from the solar panels anyway, so I only buy the remaining 50% from the grid. While the grid share might be expensive, I get the other 50% “for free” from the solar panels.
- I might deliberately decide to consume electricity in expensive hours because the ambient temperature outside the house is moderate or warm, so I can heat the house with the air conditioner at high efficiency and switch off the central heating, which runs on natural gas. In that case, even at high prices, I expect a better outcome overall because I might be able to switch off the central heating completely.
The graph uses the built-in Grafana visualization Bar chart. The last data point should be disregarded because the last day is only calculated up to 14:00 local time.

This graph uses this query:
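# ROW_NUMBER ranks the hours of each day from cheapest (1) to most expensive (24);
# each hour's cost then becomes one rectangle of the stacked daily bar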
SELECT DATE_FORMAT(preise.zeitstempel,'%m/%d') AS 'Datum',
ROW_NUMBER() OVER (PARTITION BY DATE(preise.zeitstempel) ORDER BY preise.preis ASC) AS 'Row',
verbrauch.kosten AS 'Kosten'
FROM preise JOIN verbrauch ON verbrauch.zeitstempel=preise.zeitstempel
WHERE DATE(preise.zeitstempel)>'2024-03-03'
ORDER BY DATE(preise.zeitstempel) ASC, preise.preis ASC;
and the built-in transformation Grouping to Matrix:

The following panel options must be used:
- Standard options → Unit: Select (Euro) €
- Standard options → Min: Set to 0
- Standard options → Decimals: Set to 2
Additionally, we need to define exactly 24 Overrides:
- Override 1 → Fields with name: Select 1
- Override 1 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(0, 255, 0)
- Override 2 → Fields with name: Select 2
- Override 2 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(22, 255, 0)
- Override 3 → Fields with name: Select 3
- Override 3 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(44, 255, 0)
- Override 4 → Fields with name: Select 4
- Override 4 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(66, 255, 0)
- Override 5 → Fields with name: Select 5
- Override 5 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(89, 255, 0)
- Override 6 → Fields with name: Select 6
- Override 6 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(111, 255, 0)
- Override 7 → Fields with name: Select 7
- Override 7 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(133, 255, 0)
- Override 8 → Fields with name: Select 8
- Override 8 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(155, 255, 0)
- Override 9 → Fields with name: Select 9
- Override 9 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(177, 255, 0)
- Override 10 → Fields with name: Select 10
- Override 10 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(199, 255, 0)
- Override 11 → Fields with name: Select 11
- Override 11 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(222, 255, 0)
- Override 12 → Fields with name: Select 12
- Override 12 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(244, 255, 0)
- Override 13 → Fields with name: Select 13
- Override 13 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 244, 0)
- Override 14 → Fields with name: Select 14
- Override 14 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 222, 0)
- Override 15 → Fields with name: Select 15
- Override 15 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 200, 0)
- Override 16 → Fields with name: Select 16
- Override 16 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 177, 0)
- Override 17 → Fields with name: Select 17
- Override 17 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 155, 0)
- Override 18 → Fields with name: Select 18
- Override 18 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 133, 0)
- Override 19 → Fields with name: Select 19
- Override 19 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 110, 0)
- Override 20 → Fields with name: Select 20
- Override 20 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 89, 0)
- Override 21 → Fields with name: Select 21
- Override 21 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 66, 0)
- Override 22 → Fields with name: Select 22
- Override 22 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 44, 0)
- Override 23 → Fields with name: Select 23
- Override 23 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 22, 0)
- Override 24 → Fields with name: Select 24
- Override 24 → Standard options → Color scheme: Select Single color, then select Custom, then rgb(255, 0, 0)
Important: This graph adds up the data from a UTC day (not the local calendar day).
Conclusion
Even with a relatively simple dataset, we can create insightful visualizations with Grafana that can help us to interpret complex relationships. In my case, the visualizations shall help me to answer the questions:
- Do I fare well with dynamic pricing as compared to a traditional electricity contract with static prices?
- Am I able to efficiently shift large chunks of energy consumption to those hours where electricity is cheaper?
As I mentioned, there might be reasons to buy electricity also in expensive hours. The fact that I have the support of solar panels during the day might tilt the decision to consume electric energy away from the cheapest hours to hours where a large part of that consumption is covered by the solar panels anyway. Or I might decide to heat with the air conditioners because the temperature difference between the outside and the inside of the house is small and the air conditioners can run with high efficiency. In that case, I trade natural gas consumption for electricity consumption.
Outlook
It would be interesting to consider the energy that the solar panels generate “for free” (not really for free, but as they have been installed, they are already “sunk cost”) and visualize the resulting electricity cost from the mix of solar energy with energy consumed from the grid.
Likewise, it might be interesting as well as challenging to derive a good model that uses a battery and dynamic electricity prices as well as the energy from the solar panels to minimize cost of the energy consumption from the grid. How large should this battery be? When should it be charged from the grid and when should it return its energy to the consumers in the house?
Files
The following dataset was used for the graphs:
Sources
- [1] = Tibber Developer
- [2] = Download Grafana | Grafana Labs
- [3] = day-ahead market – an overview
- [4] = Tibber Developer: Communicating with the API
Disclaimer
- Program codes and examples are for demonstration purposes only.
- Program codes are not recommended to be used in production environments without further enhancements in terms of speed, failure-tolerance or cyber-security.
- While program codes have been tested, they might still contain errors.
- I am neither affiliated with nor linked to the companies named in this blog post.
Getting around Carrier-grade NAT
Executive Summary
This blog post explains how a small internet-based shared server (“vServer”, “VPS”) can be used to tunnel connections from the internet back to a SoHo-based router that does not have a publicly routable IPv4 internet address, maybe because the internet service provider (ISP) uses Carrier-grade NAT (CG-NAT) and only offers “real” IPv4 addresses at cost. As internet-based shared servers can be rented for small fees, the approach described below is a viable concept to overcome the limitations of CG-NAT connections which might only allow outgoing connections for IPv4 or even for both IPv4 and IPv6. This concept can even be used if the SoHo server is connected via the mobile network to the internet-based shared server.
Background
The implementation proved useful for me when I switched from my DSL ISP, who had happily provided me with “real” (routable) IPv4 and IPv6 addresses, to a new fiber-optics ISP that provides IPv6 but uses CG-NAT on IPv4, so that no incoming IPv4 connections are possible from the internet. As I feared that my server at home would otherwise only be accessible from the internet via IPv6, I had to develop this counterstrategy.
Preconditions
In order to use the approach described here, you should:
- … have access to a Linux machine which is already properly configured for dual stack on its principal network interface (e.g., eth0)
- … additionally have access to a cloud-based Linux server which is already properly configured for dual stack on its principal network interface
- … have access to a DNS resolver where you can enter an IPv4 and an IPv6 address for your SoHo server so that your domain resolves properly
- … have the package openvpn installed on both machines (preferably from a repository of your Linux distribution)
- … know how to create client and server certificates for openvpn [1]
- … have knowledge of routing concepts, networks, some understanding of shell scripts and configuration files
- … know related system commands like sysctl
- … familiarize yourself with [2], [3], [4], [5]
Description and Usage

In this setup, we have a full-blown SoHo server (Server 1) which hosts numerous services that we want to offer to the world. However, while the provider allocates an IPv6 /64 subnet, it does not offer an IPv4 address that would be reachable from the internet. Instead, it employs Carrier-grade NAT (CG-NAT) for IPv4. This is a typical setup for fiber-optics, cable or mobile network providers. In some countries (which came late to the internet), IPv4 addresses are scarce in general, and so you might experience CG-NAT on all private internet connections.
This is where Server 2 comes into play. Server 2 is a hosted shared server; it just needs a good internet connection and a fixed IPv4 address, but it does not need a lot of computational power. It will only be used to forward traffic to Server 1. In my case, I rented a small “VPS” from Ionos as I personally found their offer compelling [6], but there are alternatives. My VPS has dual-stack connectivity with a fixed IPv4 address and a fixed IPv6 /64 subnet allocated. The IPv4 address of the VPS is 85.215.215.32, and we will use this IPv4 address as the entry address for our SoHo server (Server 1) caipirinha.spdns.org.
A new Routing Table
We want to separate the traffic that we receive and send out in return via the VPN network (192.168.20.0/24) from the regular traffic that enters and leaves the server via the network 192.168.2.0/24. Therefore, we set up a new routing table as described in [7] and name it “VPS”. In order to access it via its name, we modify /etc/iproute2/rt_tables:
#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
#1 inr.ruhep
...
201 VPS
...
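Instead of editing the file in an editor, the entry can also be appended with a one-liner (run as root); this is equivalent to the manual edit shown above:
echo '201 VPS' >> /etc/iproute2/rt_tables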
Setting up a permanent VPN
First, we need to set up a permanent VPN connection from Server 1 to Server 2. Server 1 will be the VPN client, and Server 2 will be the VPN server. I chose this direction because this way, Server 1 may even be connected to the internet via a mobile-only connection with CG-NAT on both IPv4 and IPv6. In my approach, I use the network 192.168.20.0/24 for the VPN connection; Server 1 gets the address 192.168.20.3 and Server 2 gets the address 192.168.20.1.
Server 2: The VPN Server
On Server 2, we set up a VPN server listening on port 9010 (UDP), using dev tun3. The configuration file is shown below. In my case, Server 2 is an Ubuntu-based server, so it is recommended to adjust the settings in the configuration file /etc/default/openvpn which governs the behavior of openvpn connections on Ubuntu. I modified this configuration file so that only one openvpn service is started. This is done via the configuration option
AUTOSTART="server-9010"
For the configuration of the server side, I meanwhile include the CA certificate, the server certificate and the private key in one configuration file. I find that more convenient, but it certainly may have disadvantages. The key message is that it is possible to do so. My own certificates and private keys have been substituted by “…” here, of course.
# Configuration file for the openVPN server on the IONOS VPS (UDP:9010)
client-config-dir /etc/openvpn/server/conf-9010
crl-verify /etc/openvpn/server/crl.pem
dev tun3
dh /etc/openvpn/server/dh.pem
hand-window 90
ifconfig 192.168.20.1 255.255.255.0
ifconfig-pool 192.168.20.2 192.168.20.254 255.255.255.0
ifconfig-ipv6 fd01:0:0:14::1 2a01:239:24e:1800::1
ifconfig-ipv6-pool fd01:0:0:14::2/112
ifconfig-pool-persist /etc/openvpn/server/ip-pool-9010.txt
keepalive 20 80
log /var/log/openvpn/server-9010.log
mode server
persist-key
persist-tun
port 9010
proto udp6
reneg-sec 86400
script-security 2
status /var/run/openvpn/status-9010
tls-server
topology subnet
verb 1
writepid /var/run/openvpn/server-9010.pid
# Topology of the VPN and default gateway
push "topology subnet"
push "tun-ipv6"
<ca>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</ca>
<cert>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
</key>
We must also take care that our client always gets the IP address 192.168.20.3 as originally envisioned. This is done with the client-specific configuration file /etc/openvpn/server/conf-9010/caipirinha_client:
# Special configuration file for the server caipirinha.spdns.org as a client
#
ifconfig-push 192.168.20.3 255.255.255.0
ifconfig-ipv6-push fd01:0:0:14::3/111 fd01:0:0:14::1
The client-specific configuration file additionally allocates the static IPv6 address fd01:0:0:14::3 to our client. Finally, the service can be started with:
systemctl start openvpn@server-9010.service
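If you rely on systemd instead of the AUTOSTART mechanism from /etc/default/openvpn, the instance can additionally be enabled so that it comes up automatically after a reboot:
systemctl enable openvpn@server-9010.service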
Server 1: The VPN Client
On Server 1, we set up a VPN client using dev tun3. The local port shall always be 5475 (arbitrarily chosen but fixed so that we can track the connection easily if necessary). Server 2 is addressed via its public IPv6 address (2a01:239:24e:1800::1), but we could also have used its public IPv4 address (85.215.215.32). I chose the IPv6 address because the IPv4 connection would run via the provider gateway, and that might slow down the connection or make it less reliable.
# Configuration file for the openVPN client on caipirinha.spdns.org to the IONOS server
client
dev tun3
explicit-exit-notify
hand-window 90
keepalive 10 60
log /var/log/openvpn_ionos_vpn.log
lport 5475
persist-key
persist-tun
proto udp
remote 2a01:239:24e:1800::1 9010
remote-cert-tls server
remote-random
reneg-sec 86400
route-nopull
script-security 2
status /var/run/openvpn/status_ionos_vpn
up /etc/openvpn/start_piavpn.sh
verb 1
<ca>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</ca>
<cert>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
</key>
One peculiarity is the referenced script /etc/openvpn/start_piavpn.sh. At the start of the VPN connection, this script populates the routing table VPS:
#!/bin/bash
#
# This script requires the tool "ipcalc" which needs to be installed on the target system.
# Set the correct PATH environment
PATH='/sbin:/usr/sbin:/bin:/usr/bin'
VPN_DEV=$1
VPN_SRC=$4
VPN_MSK=$5
VPN_GW=$(ipcalc ${VPN_SRC}/${VPN_MSK} | sed -n 's/^HostMin:\s*\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*/\1/p')
VPN_NET=$(ipcalc ${VPN_SRC}/${VPN_MSK} | sed -n 's/^Network:\s*\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\/[0-9]\{1,2\}\).*/\1/p')
case "${VPN_DEV}" in
"tun0") ROUTING_TABLE='Portugal';;
"tun1") ROUTING_TABLE='Brasilien';;
"tun2") ROUTING_TABLE='Singapur';;
"tun3") ROUTING_TABLE='VPS';;
"tun8") ROUTING_TABLE='China';;
esac
...
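# Register the VPN subnet in the tunnel's routing table and make the
# VPN gateway the default route of that table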
ip route add ${VPN_NET} dev ${VPN_DEV} proto static scope link src ${VPN_SRC} table ${ROUTING_TABLE}
ip route replace default dev ${VPN_DEV} via ${VPN_GW} table ${ROUTING_TABLE}
When the VPN connection is stopped, the VPN network and the default route are automatically deleted from the routing table VPS as the VPN network collapses. While the VPN connection is up, we can view the routing table VPS with:
caipirinha:~ # ip route list table VPS
default via 192.168.20.1 dev tun3
192.168.20.0/24 dev tun3 proto static scope link src 192.168.20.3
Finally, the client can be started with:
systemctl start openvpn@client_ionos_vps.service
Of course, the actual name after “openvpn@” in this command depends on how you named the respective client configuration file.
Channeling the Traffic
Now, we must make sure that traffic that is received by Server 2 and that shall be forwarded to Server 1 is channeled appropriately through the VPN connection. We need to execute some commands on both servers. [3] and [4] explain how that can be achieved.
Server 2: Forward the traffic
We need to enable IPv4 routing and simply forward connections to those ports where we offer our services on Server 1. This is done by:
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp -m multiport --dports 20,21,25,80,443,465,587,873,993,3000,4078:4088,8009,8080:8082 -j DNAT --to-destination 192.168.20.3
iptables -t nat -A PREROUTING -p udp -m multiport --dports 1194,2372:2380,4396,44576 -j DNAT --to-destination 192.168.20.3
We need two iptables commands, one for TCP connections and one for UDP connections. Both are located in the PREROUTING chain. As we can see, we can combine various ports and even port ranges that shall be forwarded in one command, which is very handy. Of course, you should only forward the ports that correspond to services on Server 1 that you want to offer to the world. It is also possible to offer the services on different ports on Server 2, so that http listens on port 81 TCP rather than on 80 TCP, although in my opinion, that does not make much sense.
Let us assume that a client initiates a connection to Server 2 on port 80 (http). The first iptables command changes the destination IP from the IP address of Server 2 (85.215.215.32) to 192.168.20.3, which is the VPN client address on Server 1. As we have enabled routing on Server 2, the packet is routed from ens6 to tun3 and leaves Server 2 via the VPN connection to Server 1.
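Note that neither the sysctl setting nor the iptables rules survive a reboot by default. A minimal persistence sketch, assuming an Ubuntu server with the iptables-persistent package installed:
# Keep IPv4 forwarding enabled across reboots
echo 'net.ipv4.ip_forward=1' > /etc/sysctl.d/99-ip-forward.conf
# Save the currently loaded iptables rules so that they are restored at boot
netfilter-persistent save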
Server 1: Accept the traffic in Routing Table VPS
Server 1 receives the traffic and needs to channel it via routing table VPS. This is done with the command:
ip rule add from 192.168.20.3 priority 1000 table VPS
The beauty of this command is that the outgoing traffic will also use table VPS and will therefore leave Server 1 not via the default interface eth0 to the SoHo router, but via tun3 back to Server 2 (see [4], [8]). We can identify the traffic that we receive from Server 2 on Server 1 with the conntrack command:
caipirinha:~ # conntrack -L | fgrep "192.168.20"
tcp 6 117 TIME_WAIT src=109.250.125.241 dst=192.168.20.3 sport=55370 dport=80 src=192.168.20.3 dst=109.250.125.241 sport=80 dport=55370 [ASSURED] mark=0 use=1
tcp 6 117 TIME_WAIT src=109.250.125.241 dst=192.168.20.3 sport=55366 dport=80 src=192.168.20.3 dst=109.250.125.241 sport=80 dport=55366 [ASSURED] mark=0 use=1
tcp 6 117 TIME_WAIT src=109.250.125.241 dst=192.168.20.3 sport=55368 dport=80 src=192.168.20.3 dst=109.250.125.241 sport=80 dport=55368 [ASSURED] mark=0 use=1
In this case, we have observed an http request (dport=80) from the source IP 109.250.125.241 which has been tunneled via our VPN from Server 2 to Server 1. The client (represented by a mobile phone in the image above) has basically accessed Server 2 along the red arrow drawn in the image. A benefit of the concept described here is that the source address (here: 109.250.125.241) is not concealed and therefore, filtering can be done on Server 1 with iptables as if Server 1 were accessed directly. Furthermore, the correct client IP address appears in the respective log files.
Other approaches which use SNAT on Server 2 would conceal the source address of the client, and therefore such filtering would have to occur on Server 2 already. The logs on Server 1 would then contain 192.168.20.1 as the sole source address for all incoming connections, which is why such an approach is not suitable.
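For illustration, such an SNAT-based setup would use a rule like the one below on Server 2; it would make the ip rule on Server 1 unnecessary, but at the price of rewriting the client's source address:
iptables -t nat -A POSTROUTING -o tun3 -j SNAT --to-source 192.168.20.1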
Updating the DNS Server
Now we should spend some thoughts on the domain name resolution of Server 1. In my case, I already had a script that communicated any change of the IP address of Server 1 (provoked by the ISP almost on a daily basis) to a Dynamic DNS (DDNS) provider, which so far has done the name resolution for clients that want to access Server 1. I use my own script, but most DDNS providers also offer pre-written scripts for certain architectures or routers.
In our concept though, we must communicate the IPv6 address of Server 1 and the IPv4 address of Server 2 to our DDNS service. As Server 2 has a static IPv4 address, adapting any script should be easy. Alternatively, one could limit the script to only update the IPv6 address of Server 1 and enter the static IPv4 address via the web-based user interface of the DDNS hoster.
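As an illustration, here is a minimal update sketch for the DynDNS2-style HTTP interface that many DDNS providers offer; the update URL, the token and the way the IPv6 address of Server 1 is determined are assumptions that must be adapted to your provider and setup:
#!/bin/bash
# Hypothetical DynDNS2-style update: static IPv4 of Server 2 plus the
# current global IPv6 of Server 1 (interface eth0 assumed).
HOST='caipirinha.spdns.org'
TOKEN='geheimer_update_token'
IPV4='85.215.215.32'
IPV6=$(ip -6 addr show dev eth0 scope global | sed -n 's/.*inet6 \([0-9a-f:]*\)\/.*/\1/p' | head -n 1)
curl -s -u "${HOST}:${TOKEN}" "https://update.example-ddns.net/nic/update?hostname=${HOST}&myip=${IPV4},${IPV6}"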
Conclusion
This blog post shows how we can channel back traffic via a small internet-based server to a powerful server that is connected via CG-NAT and that may therefore not be accessible directly from the internet. With the approach described here, Server 1 can even be located in a mobile network or inside a firewalled environment as long as the firewall permits outgoing openvpn connections.
Sources
- [1] = Setting up your own Certificate Authority (CA) and generating certificates and keys for an OpenVPN server and multiple clients
- [2] = Setting up Dual Stack VPNs
- [3] = iptables – Port forwarding over OpenVpn
- [4] = Routing for multiple uplinks/providers
- [5] = Predictable Network Interface Names
- [6] = vServer Günstig Mieten » VPS ab 1€ / M.
- [7] = Setting up Client VPNs, Policy Routing
- [8] = Two Default Gateways on One System
Grafana Visualizations (Part 1)
Executive Summary
In this article, we take first steps with Grafana visualizations of data stored in a MariaDB database and draw conclusions from the visualizations. In this case, we look at meaningful data from solar power generation.
Preconditions
In order to use the approach described here, you should:
- … have access to a Linux machine or account
- … have a MySQL or MariaDB database server installed, configured, up and running
- … have a populated MySQL or MariaDB database like in our example to which you have access
- … have the package Grafana [2] installed, configured, up and running
- … have some basic knowledge of how to operate in a Linux environment and some basic understanding of shell scripts
Description and Usage
The base for the following visualizations is a fully populated database of the following structure:
# Database for analyses of the solar installation
# V1.1; 2023-12-10, Gabriel Rüeck <gabriel@rueeck.de>, <gabriel@caipirinha.spdns.org>
# Delete existing databases
REVOKE ALL ON solaranlage.* FROM 'gabriel';
DROP DATABASE solaranlage;
# Create a new database
CREATE DATABASE solaranlage DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;
GRANT ALL ON solaranlage.* TO 'gabriel';
USE solaranlage;
SET default_storage_engine=Aria;
CREATE TABLE anlage (uid INT UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,\
zeitstempel TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,\
leistung INT UNSIGNED DEFAULT NULL,\
energie INT UNSIGNED DEFAULT NULL);
[1] explains how such a database can be filled with real-world data using smart home devices whose data is then stored in the database. For now, we assume that the database is already fully populated and contains a large amount of data of the type:
MariaDB [solaranlage]> SELECT * FROM anlage LIMIT 20;
+-----+---------------------+----------+---------+
| uid | zeitstempel | leistung | energie |
+-----+---------------------+----------+---------+
| 1 | 2023-05-02 05:00:08 | 0 | 5086300 |
| 2 | 2023-05-02 05:15:07 | 0 | 5086300 |
| 3 | 2023-05-02 05:30:07 | 0 | 5086300 |
| 4 | 2023-05-02 05:45:07 | 0 | 5086300 |
| 5 | 2023-05-02 06:00:07 | 0 | 5086300 |
| 6 | 2023-05-02 06:15:07 | 12660 | 5086301 |
| 7 | 2023-05-02 06:30:07 | 39830 | 5086307 |
| 8 | 2023-05-02 06:45:08 | 44270 | 5086318 |
| 9 | 2023-05-02 07:00:07 | 78170 | 5086333 |
| 10 | 2023-05-02 07:15:07 | 187030 | 5086367 |
| 11 | 2023-05-02 07:30:07 | 312630 | 5086424 |
| 12 | 2023-05-02 07:45:08 | 665900 | 5086556 |
| 13 | 2023-05-02 08:00:07 | 729560 | 5086733 |
| 14 | 2023-05-02 08:15:08 | 573700 | 5086889 |
| 15 | 2023-05-02 08:30:07 | 288030 | 5086985 |
| 16 | 2023-05-02 08:45:07 | 444170 | 5087065 |
| 17 | 2023-05-02 09:00:08 | 655880 | 5087217 |
| 18 | 2023-05-02 09:15:07 | 974600 | 5087476 |
| 19 | 2023-05-02 09:30:08 | 1219150 | 5087839 |
| 20 | 2023-05-02 09:45:07 | 772690 | 5088024 |
+-----+---------------------+----------+---------+
20 rows in set (0,000 sec)
whereby the values in column leistung represent the current power generation in mW and the values in column energie represent the accumulated energy in Wh.
Now we shall visualize the data of the solar power generation in Grafana. Grafana is a powerful and mighty visualization tool with which you can create state-of-the-art dashboards and professional visualizations. I must really laud the team behind Grafana for making such a powerful tool free for personal and other usage (for details on their licensing and usage models, see Licensing | Grafana Labs).
Before you can use data from a MySQL database in Grafana, you have to set up MySQL as a data source in Connections. Remember that MySQL is one of many possible data sources for Grafana, so you have to walk through the jungle of offered data sources, find the MySQL connection and set up your data source accordingly. On my server, both Grafana and MariaDB run on the same machine, so there is no need for encryption, etc. My setup simply looks like this:

One step where I always stumble is that in the entry mask for the connection setup, localhost:3306 is proposed in grey color as Host, but unless you actually type it in, Grafana will not use localhost:3306. So be sure to physically type that in.
Average Power per Month and Hour of the Day (I)
For this, we use the built-in Grafana visualization Table. We select the according database, and our MySQL query is:
SELECT MONTHNAME(zeitstempel) AS Monat, HOUR(zeitstempel) AS Stunde, ROUND(AVG(leistung)/1000) AS Leistung
FROM anlage
GROUP BY Monat, Stunde
ORDER BY FIELD (Monat,'January','February','March','April','May','June','July','August','September','October','November','December'), Stunde ASC;
This query delivers us an ordered list of power values grouped and ordered by (the correct sequence of) month, and subsequently by the hour of the day as shown below:

However, this is not enough to get the table view we want in Grafana as the power values are still one-dimensional (only one value per line). We need to transpose the table with respect to the months, and therefore, we must select the built-in transformation Grouping to Matrix for this visualization and enter our column names according to the image below:

Now, in order to get a beautiful visualization, we need to adjust some of the panel options, and those are:
- Cell options → Cell type: Set to Colored background
- Standard options → Color scheme: Set to Green-Yellow-Red (by value)
and we need to define some Overrides:
- Override 1 → Fields with name: Select Stunde\Monat
- Override 1 → Cell options → Cell type: Set to Auto
- Override 2 → Fields with type: Select Number
- Override 2 → Standard options → Unit: Select Watt (W)
Then, ideally, you should see something like this:

Average Power per Month and Hour of the Day (II)
This visualization is similar to the one of the previous chapter. However, we slightly modify the MySQL query to:
SELECT MONTHNAME(zeitstempel) AS Monat, HOUR(zeitstempel) AS Stunde, ROUND(GREATEST((AVG(leistung)-STDDEV(leistung))/1000,0.0)) AS 'Ø-1σ'
FROM anlage
GROUP BY Monat, Stunde
ORDER BY FIELD (Monat,'January','February','March','April','May','June','July','August','September','October','November','December'), Stunde ASC;
Doing so, we assume that there is some reasonable normal distribution among the values in the same hourly time interval within one month which, strictly speaking, is not true. Further below, we will look into this assumption. Using this unproven assumption, if we subtract 1σ from the average power value and use the resulting value or zero (whichever is higher), then we arrive at a value where we can say: “Probably (84%), in this month and in this hour, we generate at least x watts of power.” (For a normal distribution, about 84% of the values lie above μ−σ, hence the 84%.) So if someone was looking, for example, for an answer to the question “When would be the best hour to start the washing machine and ideally use my solar energy for it?”, then, in June, this would be sometime between 10:00…12:00, but in September, it might be 11:00…13:00, a detail which we might not have uncovered in the visualization of the previous chapter. Of course, you might argue that common sense (“Look out of the window and turn on the washing machine when the sun is shining.”) makes the most sense, and that is true in our example. However, for automated systems that must run daily, such information might be valuable.

Heatmap of the Hourly and Daily Power Generation
For this, we use the plug-in Grafana visualization Hourly heatmap. We select the according database, and our MySQL query is very simple:
SELECT UNIX_TIMESTAMP(zeitstempel) AS time, ROUND(leistung/1000) AS power FROM anlage;
This time, there is also no need for any transformation, but again, we should adjust some panel options in order to beautify our graph, and these are:
- Hourly heatmap → From: Set to 05:00
- Hourly heatmap → To: Set to 23:00
- Hourly heatmap → Group by: Select 60 minutes
- Hourly heatmap → Calculation: Select Mean
- Hourly heatmap → Color palette: Select Spectral
- Hourly heatmap → Invert color palette: (enabled)
- Legend → Gradient quality: Select Low (the computation already takes long enough in Low mode)
- Standard options → Unit: Select Watt (W)
Then, ideally, after some seconds, you should see something like this:

This visualization is time-consuming. Keep in mind that if you have large amounts of data, it might take several seconds until the graph builds up. The Hourly heatmap is also more detailed than the previous visualizations: we can point with the mouse pointer at a certain rectangle and get the average power on this day and in this hour (if the database has at least one power value per hour). It even shows fluctuations occurring within a single day which might go unnoticed in the previous visualizations. We can see, for example, that the lower average power generated in July this year was not because the sky is greyer in July than in other months, but because there were a few two-to-three-day periods with a clouded sky where the electricity generation dropped, while other days were just as perfectly sunny as in June or August. We can also see how the daylight hours dramatically decrease when we reach November.
Daily Energy Generation, split into Time Ranges
The next graph shall visualize the daily energy generation of the solar system over time, and we split the day into 4 time ranges (morning, lunch, afternoon, evening). The 4 time ranges are not equally large, so you have to pay close attention to the description of each time range. Of course, you can easily change the MySQL query and adapt the time ranges to your own needs. We use the built-in Grafana visualization Time Series. We select the according database, and our MySQL query uses the timestamp and the energy values of the dataset. We also use the uid as we calculate the differences in the (ever-increasing) energy value between two consecutive value sets in the dataset.
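# Daily energy per time range, calculated as the sum of differences between consecutive energy counter values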
SELECT DATE(past.zeitstempel) AS time,
CASE
WHEN (HOUR(past.zeitstempel)<11) THEN '05:00-11:00'
WHEN (HOUR(past.zeitstempel)>=11) AND (HOUR(past.zeitstempel)<14) THEN '11:00-14:00'
WHEN (HOUR(past.zeitstempel)>=14) AND (HOUR(past.zeitstempel)<17) THEN '14:00-17:00'
WHEN (HOUR(past.zeitstempel)>=17) THEN '17:00-22:00'
END AS metric,
SUM(IF(present.energie>=past.energie,IF((present.energie-past.energie<4000),(present.energie-past.energie),NULL),NULL)) AS value
FROM anlage AS present INNER JOIN anlage AS past ON present.uid=past.uid+1
GROUP BY time,metric;
In order to better understand the MySQL query, here are some explanations:
- We determine the time ranges in the form of a CASE statement in which we define our time ranges and what the textual output for a certain time range shall be.
- We sum up the differences between consecutive energy values in the database over the respective time ranges. This is the task of the SUM operator.
- There might be missing energy values in the dataset, maybe because of network outages or (temporary) issues in the MySQL socket. The IF clauses inside the SUM operator help us to detect if the result of (present.energie-past.energie) would be negative (in case present.energie is NULL) or if we have an unrealistic value (≥4000) because past.energie is NULL. Such values are skipped in the calculation.
- We consult the same table (anlage) twice, once aliased as present and once as past, whereby past is the dataset immediately preceding present.
- Keep in mind that as time we use the value of past.zeitstempel so that we include the last value that still fits into the time range (although for the calculation of the energy difference, present.zeitstempel would already be in the next hour, e.g. past.zeitstempel=16:45 and present.zeitstempel=17:00).
Furthermore, we should adjust some panel options in order to beautify our graph, and these are:
- Graph styles → Fill opacity: Set to 45%
- Graph styles → Stack Series: Select Normal
- Standard options → Unit: Select Watt-hour (Wh)
- Standard options → Min: Set to 0
- Standard options → Decimals: Set to 0
- Standard options → Color scheme: Select Classic palette
On the left side, just above the code window, we have to switch from Table to Time Series:

The graph should now look like this:

There is one interesting characteristic of the built-in Grafana visualization Time Series which we make use of. If we edit the graph and select the Table view, we will see that Grafana has automatically created a matrix from the results of our MySQL query.

This is astonishing because the MySQL query does not create the matrix itself, but rather delivers an output like:
MariaDB [solaranlage]> SELECT DATE(past.zeitstempel) AS time,
-> CASE
-> WHEN (HOUR(past.zeitstempel)<11) THEN '05:00-11:00'
-> WHEN (HOUR(past.zeitstempel)>=11) AND (HOUR(past.zeitstempel)<14) THEN '11:00-14:00'
-> WHEN (HOUR(past.zeitstempel)>=14) AND (HOUR(past.zeitstempel)<17) THEN '14:00-17:00'
-> WHEN (HOUR(past.zeitstempel)>=17) THEN '17:00-22:00'
-> END AS metric,
-> SUM(IF(present.energie>=past.energie,IF((present.energie-past.energie<4000),(present.energie-past.energie),NULL),NULL)) AS value
-> FROM anlage AS present INNER JOIN anlage AS past ON present.uid=past.uid+1
-> GROUP BY time,metric
-> LIMIT 15;
+------------+-------------+-------+
| time | metric | value |
+------------+-------------+-------+
| 2023-05-02 | 05:00-11:00 | 3588 |
| 2023-05-02 | 11:00-14:00 | 4428 |
| 2023-05-02 | 14:00-17:00 | 2524 |
| 2023-05-02 | 17:00-22:00 | 375 |
| 2023-05-03 | 05:00-11:00 | 4260 |
| 2023-05-03 | 11:00-14:00 | 6047 |
| 2023-05-03 | 14:00-17:00 | 2648 |
| 2023-05-03 | 17:00-22:00 | 369 |
| 2023-05-04 | 05:00-11:00 | 6326 |
| 2023-05-04 | 11:00-14:00 | 5867 |
| 2023-05-04 | 14:00-17:00 | 2568 |
| 2023-05-04 | 17:00-22:00 | 477 |
| 2023-05-05 | 05:00-11:00 | 3113 |
| 2023-05-05 | 11:00-14:00 | 3496 |
| 2023-05-05 | 14:00-17:00 | 1657 |
+------------+-------------+-------+
15 rows in set (0,197 sec)
I found out that this only works if metric is not a numerical value. For numerical values, the matrix would not be auto-generated and the graph would not work like this; an additional transformation in Grafana might then be necessary.
Detailed Examinations of the Dataset
Let us look closer at subsets of the dataset and visualize them in order to get a better understanding of how the solar power generation behaves in reality at selected times.
Power Distribution between 12:00 and 13:00 (4 values per day)
For this, we use the built-in Grafana visualization Heatmap. We select the according database, and our MySQL query is:
SELECT zeitstempel AS time, ROUND(leistung/1000) AS 'Power' FROM anlage
WHERE (HOUR(zeitstempel)=12);
Keep in mind that in my dataset, I register 4 samples per hour in the database. This is defined by how often the script described in [1] is executed, which itself is defined in crontab. You can also have the script executed every minute, and hence register 60 samples per hour in the database, which will be more precise but puts a higher load on your server and on the evaluation of the data in MySQL queries. So, in my case, in the time frame [12:00-13:00], we consider 4 power values, and we filter for these power values simply by looking at timestamps with the value “12” as the hour part. The Heatmap will give the distribution of these 4 power values per day. We must furthermore adjust some panel options, and these are:
- Heatmap → X Bucket: Select Size and set to 24h
- Heatmap → Y Bucket: Select Size and set to Auto
- Heatmap → Y Bucket Scale: Select Linear
- Y Axis → Unit: Select Watt (W)
- Y Axis → Decimals: Set to 1
- Y Axis → Min value: Set to 0
- Colors → Mode: Select Scheme
- Colors → Scheme: Select Oranges
- Colors → Steps: Set to 64

With this Heatmap, we can immediately understand an important detail: if someone assumed that between [12:00-13:00] we generate a lot of power every day in summer, this would clearly be very wrong. In fact, we can recognize that the power generation varies from low to high values on many days. The reason is that even in summer, there are many cloudy days in the location of this solar power generator. This might not be true for a different location, and if we had data from Southern Europe (Portugal, Spain, Italy), this might look very different.
Average Power [12:00-13:00]
Now, we shall look at the average power generation between [12:00-13:00] over the year, using the 4 power values that we register in this time interval every day. For this, we use the built-in Grafana visualization Time Series. We select the according database, and we use two MySQL queries (A and B). Both curves will then be visualized in the diagram with different colors. Our query A calculates the average value per day in the time interval [12:00-13:00] and is displayed in green color:
SELECT zeitstempel AS time, ROUND(AVG(leistung)/1000) AS 'Power [12:00-13:00]' FROM anlage
WHERE (HOUR(zeitstempel)=12)
GROUP BY DATE(zeitstempel);
Our query B takes a 7-day average of the daily average values; it is therefore a kind of trend line of the curve we have generated with query A and is displayed in yellow color:
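# The correlated subquery averages, for each day, the [12:00-13:00] samples of the preceding 7 days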
SELECT zeitstempel AS time, (SELECT ROUND(AVG(leistung)/1000) FROM anlage WHERE (HOUR(zeitstempel)=12) AND zeitstempel<=time AND zeitstempel>DATE_ADD(time,INTERVAL-7 DAY)) AS 'Power [12:00-13:00; 7d average]' FROM anlage
WHERE (DATE(zeitstempel)>='2023-05-09')
GROUP BY DATE(zeitstempel);

This graph visualizes the same finding as the previous heatmap. In my opinion, it shows even more dramatically that the average power between [12:00-13:00] can vary strongly even during summer, and that we simply cannot assume a certain minimum power to be “guaranteed” in a certain time frame during summer. We also see that in winter, the power generation is minimal, and there is no chance that the solar power generator can contribute to the electricity demand of the home in any meaningful way.
Average Power [12:00-13:00] (calculated as difference of energy values)
One might argue that 4 sampling values per hour are not enough; maybe we were just unlucky, and there were clouds in the sky exactly when we were sampling the data. Let us therefore repeat the graph from above, not by using the average of the 4 samples per hour, but by using the increase of the overall generated energy in the time frame [12:00-13:00] as the base for the power calculation. Again, we will use two queries (A and B) whereby query A delivers the value on a daily basis:
SELECT present.zeitstempel AS time, (present.energie-past.energie) AS 'Power [12:00-13:00]'
FROM anlage AS present INNER JOIN anlage AS past ON present.uid=past.uid+4
WHERE (TIME_FORMAT(present.zeitstempel,'%H:%i')='13:00');
and query B shows the 7-day average of the daily values:
SELECT zeitstempel AS time, (SELECT AVG(present.energie-past.energie) FROM anlage AS present INNER JOIN anlage AS past ON present.uid=past.uid+4 WHERE (TIME_FORMAT(present.zeitstempel,'%H:%i')='13:00') AND present.zeitstempel<=time AND present.zeitstempel>DATE_ADD(time,INTERVAL-7 DAY)) AS 'Power [12:00-13:00; 7d average]' FROM anlage
WHERE (DATE(zeitstempel)>='2023-05-09')
GROUP BY DATE(zeitstempel);

While we can identify some differences in the green curve between this graph and the previous one, the yellow curve is almost identical. In general, the calculation via the energy values is the more precise one; 4 sample values simply do not seem to suffice, as we can see from the differences between the green curves.
The overall conclusion of this curve is the same as for the previous one, however: there are many days where the average power generation between [12:00-13:00] stays well below the expected (larger) values.
Average Hourly Power Generation [June]
The following visualization, again using the built-in Grafana visualization Time Series, shows how the average power generated per hour in the month of June varies if we look at the first half and the second half of June separately. This visualization uses three queries. Query A shows the average power generation of the whole month of June:
SELECT zeitstempel AS time, ROUND(AVG(leistung)/1000) AS 'Power [June 2023]' FROM anlage
WHERE (zeitstempel>'2023-06-01') AND (zeitstempel<'2023-07-01')
GROUP BY HOUR(zeitstempel);
Query B looks at the average power generation of the first half of June:
SELECT zeitstempel AS time, ROUND(AVG(leistung)/1000) AS 'Power [June 2023; first half]' FROM anlage
WHERE (zeitstempel>'2023-06-01') AND (zeitstempel<'2023-06-16')
GROUP BY HOUR(zeitstempel);
Query C looks at the average power generation of the second half of June:
SELECT DATE_ADD(zeitstempel, INTERVAL -15 DAY) AS time, ROUND(AVG(leistung)/1000) AS 'Power [June 2023; second half]' FROM anlage
WHERE (zeitstempel>'2023-06-16') AND (zeitstempel<'2023-07-01')
GROUP BY HOUR(zeitstempel);
As the time period for the visualization, we select 2023-06-01 00:00:00 to 2023-06-01 23:59:59. You might wonder why we select only one day for the visualization when we examine a whole month. But as we calculate the average values over the full month (as indicated in the WHERE clause of the MySQL query), we receive data for one day only:
MariaDB [solaranlage]> SELECT zeitstempel AS time, ROUND(AVG(leistung)/1000) AS 'Power [June 2023]' FROM anlage
-> WHERE (zeitstempel>'2023-06-01') AND (zeitstempel<'2023-07-01')
-> GROUP BY HOUR(zeitstempel);
+---------------------+-------------------+
| time | Power [June 2023] |
+---------------------+-------------------+
| 2023-06-01 05:00:07 | 9 |
| 2023-06-01 06:00:08 | 220 |
| 2023-06-01 07:00:07 | 827 |
| 2023-06-01 08:00:07 | 1346 |
| 2023-06-01 09:00:07 | 1530 |
| 2023-06-01 10:00:07 | 1711 |
| 2023-06-01 11:00:07 | 1814 |
| 2023-06-01 12:00:08 | 1689 |
| 2023-06-01 13:00:07 | 1512 |
| 2023-06-01 14:00:08 | 1215 |
| 2023-06-01 15:00:07 | 853 |
| 2023-06-01 16:00:07 | 458 |
| 2023-06-01 17:00:07 | 264 |
| 2023-06-01 18:00:07 | 192 |
| 2023-06-01 19:00:07 | 133 |
| 2023-06-01 20:00:07 | 59 |
| 2023-06-01 21:00:07 | 9 |
| 2023-06-01 22:00:07 | 0 |
+---------------------+-------------------+
18 rows in set (0,035 sec)
This also explains the sequence DATE_ADD(zeitstempel, INTERVAL -15 DAY) in query C. That sequence simply “brings back” the result curve from the day 2023-06-16 to the day 2023-06-01 and thus into our selected time period.
We also need to adjust some panel options, and these are:
- Standard options → Unit: Select Watt (W)
- Standard options → Min: Set to 0
- Standard options → Max: Set to 2500
- Standard options → Decimals: Set to 1

The green curve displays the average data over the whole month of June, whereas the yellow curve represents the first half, and the blue curve represents the second half. We can see that this year (2023), the second half of June was much worse than the first half in terms of power generation. Only in the time window [16:00-18:00] do all three curves show the same values.
Power Distribution between 12:00 and 13:00 [June 2023]
Let us go back to the assumption we used for the second visualization in this article where we assumed a normal distribution of power values for a reasonably small timeframe (one hour over one month). Is this actually a true or maybe a wrong assumption? In order to get a better feeling, we use the built-in Grafana visualization Histogram and look at how the power value samples (4 per hour) are distributed in the month of June, in the first half of June (where we had a sunny sky), and in the second half of June (where we had a partially cloudy sky).
For the following three histograms, we use the panel options:
- Histogram → Bucket size: Set to 50
- Histogram → Bucket offset: Set to 0
- Standard options → Unit: Select Watt (W)
- Standard options → Min: Set to 0
- Standard options → Max: Set to 2500
- Standard options → Color scheme: Select Single color
The first histogram shows the distribution of power samples (120 in total) over the whole month of June:
SELECT zeitstempel, ROUND(leistung/1000) AS 'Power 12:00-13:00 [June 2023]' FROM anlage
WHERE (zeitstempel>'2023-06-01') AND (zeitstempel<'2023-07-01')
AND (HOUR(zeitstempel)=12);

The second histogram shows the distribution of power samples (60 in total) over the first half of June:
SELECT zeitstempel, ROUND(leistung/1000) AS 'Power 12:00-13:00 [June 2023; first half]' FROM anlage
WHERE (zeitstempel>'2023-06-01') AND (zeitstempel<'2023-06-16')
AND (HOUR(zeitstempel)=12);

The third histogram shows the distribution of power samples (60 in total) over the second half of June:
SELECT zeitstempel, ROUND(leistung/1000) AS 'Power 12:00-13:00 [June 2023; second half]' FROM anlage
WHERE (zeitstempel>'2023-06-16') AND (zeitstempel<'2023-07-01')
AND (HOUR(zeitstempel)=12);

From all three histograms, we can see that the distribution is by no means a normal distribution. It certainly has a region with a bell-shaped density of power values, but during periods with cloudy skies, we can also have many values in the range below the bell-shaped region. Therefore, the assumption of a normal distribution does not hold true in reality.
Conclusion
Visualizations
Grafana allows us to create advanced visualizations from MySQL data with little effort. The problem with some less-used visualizations in Grafana, however, is that there is little documentation available, or to put it differently: sometimes the documentation is sufficient if you already know how things work, but insufficient if you do not yet know. I hope that with this article, I could at least contribute to enlarging that knowledge.
Solar Power Generation
The data used in this example shows a solar power generation unit in Germany, facing the East. From the visualized data, we can draw several conclusions:
- Peak power of the installed solar power system occurs shortly before lunch time and decreases significantly after 14:00.
- While power generation looks good even until October, from November on, the yield is poor.
- While the generated electricity in the winter months might still help to decrease the electricity bill, it becomes clear that if you were to use a heat pump for heating your home, you cannot count on solar power generation to support you in a meaningful way unless you have some 50 kW+ of peak solar capacity on a large roof.
- As the solar power generation peaks shortly before noon, it might make sense to combine units that face East with units that face West. This observation is in line with recent proposals claiming that units facing South are not much better than a combination of units facing East and West (e.g., [3]).
Outlook
The visualizations given here are only an easy beginning. We might want to use more serious statistical analysis to answer questions like:
- Can we infer from a bad start in the morning (little power) that the rest of the day will stay like this?
- How probable is it that we have 3+ cloudy days (with little electricity) in a row? That translates to: if it rained today, should I wait 1…3 more days before I turn on the washing machine?
- Does it make economic sense to invest in a battery storage in order to capture excess electricity in summer days? How large should this battery be?
Files
The following dataset was used for the graphs:
Sources
- [1] = Smarthome: AVM-Steckdosen per Skript auslesen – Gabriel Rüeck (caipirinha.spdns.org)
- [2] = Download Grafana | Grafana Labs
- [3] = Ost-West-Ausrichtung von Photovoltaikanlagen (solarenergie.de)
Disclaimer
- Program codes and examples are for demonstration purposes only.
- Program codes are not recommended to be used in production environments without further enhancements in terms of speed, failure-tolerance or cyber-security.
- While program codes have been tested, they might still contain errors.
Smarthome: AVM-Steckdosen per Skript auslesen
Executive Summary
In this article, I present a bash script that can read smart home data of the FRITZ!DECT 200 or FRITZ!DECT 210 smart plugs from an AVM FRITZ!Box. A second script makes use of the first one and stores the data in a simple MySQL database.
Preconditions
- One of the newer AVM FRITZ!Boxes with current firmware is used. A 7490 already counts as one of the “newer” boxes, so chances are good that many interested readers can use this script.
- One or more of the AVM smart plugs FRITZ!DECT 200 or FRITZ!DECT 210 are used. The script has been tested successfully with these models.
- Testing was done with an already older FRITZ!Box 7490 running firmware 7.57.
- Ideally, you have a small, permanently running Linux server on which the scripts shown here and the MySQL database can run.
Description and Usage
The two AVM smart plugs FRITZ!DECT 200 and FRITZ!DECT 210 are, in connection with one of the newer FRITZ!Boxes, a very good option for controlling electricity consumers and generators and for measuring power, energy and temperature. The plugs measure the power flowing through them independently of its direction and can therefore also be used very well to capture the electricity generation of balcony power plants.
When the FRITZ!Box is configured, it creates a default user fritzxxxx, provided that login from the home network with just a password has been allowed. You can find this user in the FRITZ!Box on the page System → FRITZ!Box-Benutzer. For our purposes, this user should have the Smart Home permission. For me, this looks as follows:

The smart home devices connected via the DECT protocol can be found in the FRITZ!Box on the page Smart Home → Geräte und Gruppen.

If you click on a device, you will find the actor identification number (AIN) of the respective device in its configuration settings. This number is needed for the script. Note that it consists of two blocks of digits separated by exactly one space.

The following script contains the AINs of my four smart plugs together with the queries for power, energy, temperature and voltage that are possible with these models. The script is based on a template from [1] which no longer worked correctly after a firmware update, which is why I made modifications; in doing so, I followed the technical documentation in [2], [3]. Of course, you still have to adapt the IP address of your FRITZ!Box (assumed here to be 192.168.2.8), the user name (USER) and the password (PASS).
The first part of the script generates a valid session ID which is needed in the course of the actual query. The procedure is described in [2]; essentially, character strings must be evaluated, the encoding must temporarily be switched to UTF-16LE, and the user name and password must be transmitted.
In the second part, we use the session ID and query one of the four devices for one of the four possible arguments. The script (named “avm_smartsocket.sh” in my case) is called with:
./avm_smartsocket.sh Fernseher|Solaranlage|Wärmepumpe|Entfeuchter energy|power|temperature|voltage
I think that thanks to its clear structure, the script can easily be adapted for your own purposes.
#!/bin/bash
#
# https://raspberrypiandstuff.wordpress.com/2017/08/03/reading-the-temperature-from-fritzdect-devices/
# modified by Gabriel Rüeck, 2023-12-06
#
# -----------
# definitions
# -----------
readonly TERM_LRED='\e[91m'
readonly TERM_RESET='\e[0m'
readonly FBF="http://192.168.2.8"
readonly USER="fritz1234"
readonly PASS="geheimes_fritzbox_passwort"
# -------------------
# check for arguments
# -------------------
if [ $# -lt 2 ]; then
echo -e "${TERM_LRED}Call the function with ${0} Fernseher|Solaranlage|Wärmepumpe|Entfeuchter energy|power|temperature|voltage.${TERM_RESET}\n"
exit 1
fi
# ---------------
# fetch challenge
# ---------------
CHALLENGE=$(curl -s "${FBF}/login_sid.lua" | grep -Po '(?<=<Challenge>).*(?=</Challenge>)')
# -----
# login
# -----
MD5=$(echo -n ${CHALLENGE}"-"${PASS} | iconv -f UTF-8 -t UTF-16LE | md5sum -b | awk '{print substr($0,1,32)}')
RESPONSE="${CHALLENGE}-${MD5}"
SID=$(curl -i -s -k -d "response=${RESPONSE}&username=${USER}" "${FBF}/login_sid.lua" | sed -n 's/.*<SID>\([[:xdigit:]]\+\)<\/SID>.*/\1/p')
# ----------
# define AIN
# ----------
case "${1}" in
Fernseher) AIN="11630%200239598";;
Solaranlage) AIN="11657%200732166";;
Wärmepumpe) AIN="11630%200325768";;
Entfeuchter) AIN="11630%200325773";;
*) exit 1;;
esac
# ------------
# fetch values
# ------------
case "${2}" in
energy) RESULT=$(curl -s ${FBF}'/webservices/homeautoswitch.lua?ain='${AIN}'&sid='${SID}'&switchcmd=getswitchenergy');;
power) RESULT=$(curl -s ${FBF}'/webservices/homeautoswitch.lua?ain='${AIN}'&sid='${SID}'&switchcmd=getswitchpower');;
temperature) RESULT=$(curl -s ${FBF}'/webservices/homeautoswitch.lua?ain='${AIN}'&sid='${SID}'&switchcmd=gettemperature')
[[ ${RESULT} -lt 0 ]] && RESULT=0;;
voltage) RESULT=$(curl -s ${FBF}'/webservices/homeautoswitch.lua?ain='${AIN}'&sid='${SID}'&switchcmd=getdevicelistinfos' | sed -n 's/.*<voltage>\(.*\)<\/voltage>.*/\1/p');;
*) exit 1;;
esac
# -------------
# output values
# -------------
echo ${RESULT}
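A call could then look like this (the returned value is only illustrative):
./avm_smartsocket.sh Solaranlage power
473150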
It is also important to know which values we get in each case; for the AVM smart plugs FRITZ!DECT 200 and FRITZ!DECT 210, these are:
Argument | Returned result |
power | Current power in mW |
energy | Accumulated energy in Wh |
temperature | Current temperature in units of 0.1 °C (example: 65 ≙ 6.5 °C) |
voltage | Current mains voltage in mV |
Now we want to use the script listed above to read data from the smart plug Solaranlage and store it in a MySQL database. For this purpose, we set up a small and very simple database:
# Database for analyses of the solar installation
# V1.0; 2023-05-01, Gabriel Rüeck <gabriel@rueeck.de>, <gabriel@caipirinha.spdns.org>
# Create a new database
CREATE DATABASE solaranlage DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;
GRANT ALL ON solaranlage.* TO 'gabriel';
USE solaranlage;
SET default_storage_engine=Aria;
CREATE TABLE anlage (zeitstempel TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,\
leistung INT UNSIGNED DEFAULT NULL,\
energie INT UNSIGNED DEFAULT NULL);
Admittedly, the CHARSET of the database hardly matters here; one would not necessarily have had to specify utf8mb4. However, I set this generally for my databases ever since I once spent a long time hunting a bug in a more complex database and had to learn that UTF-8 in MySQL does not automatically cover all UTF-8 characters [4].
The timestamp does not have to be set explicitly when writing to the database; by default, the current timestamp of the database server is used.
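As a small illustration of this (values invented), an INSERT that omits the timestamp column is completed with the server time automatically:
mysql -u gabriel -D solaranlage -e "INSERT INTO anlage (leistung,energie) VALUES (215310,1234567);"
mysql -u gabriel -D solaranlage -e "SELECT * FROM anlage;"
# zeitstempel          leistung  energie
# 2023-05-01 12:00:00  215310    1234567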
The following script uses our first script, reads the values for power and energy, and stores them in the MySQL database:
#!/bin/bash
#
# This script reads the power and the generated amount of energy of the solar installation and stores the result in a MySQL database.
#
# V1.0; 2023-05-01, Gabriel Rüeck <gabriel@rueeck.de>, <gabriel@caipirinha.spdns.org>
#
# CONSTANTS
declare -r MYSQL_DATABASE='solaranlage'
declare -r MYSQL_SERVER='localhost'
declare -r MYSQL_USER='gabriel'
declare -r MYSQL_PASS='geheimes_mysql_passwort'
declare -r READ_SCRIPT="${HOME}/avm_smartsocket.sh" # note: a quoted '~' would not be expanded
# VARIABLES
declare -i power energy
# PROGRAM
power=$(${READ_SCRIPT} Solaranlage power)
energy=$(${READ_SCRIPT} Solaranlage energy)
mysql --default-character-set=utf8mb4 -B -N -r -D "${MYSQL_DATABASE}" -h "${MYSQL_SERVER}" -u "${MYSQL_USER}" --password="${MYSQL_PASS}" -e "INSERT INTO anlage (leistung,energie) VALUES (${power},${energy});"
Depending on how often you run this script and store data, entries like the ones in the following example accumulate in the database:

These data can then be evaluated and visualized in other systems.
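If you want to collect data continuously, a cron job is the simplest way to run the script periodically; a hypothetical crontab entry (the script name and path are invented) that logs every 5 minutes would be:
*/5 * * * * /home/gabriel/solaranlage_logger.sh >/dev/null 2>&1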
Summary
In this simple example, we have seen how to query smart home data of the sockets FRITZ!DECT 200 or FRITZ!DECT 210 from an AVM FRITZ!Box and store the data in a MySQL database for further use. This principle can surely be applied to smart home devices of other manufacturers as well, provided their interfaces are sufficiently open and documented.
Sources
- [1] = Reading the temperature from FRITZ!DECT devices – Raspberry Pi and Stuff (wordpress.com)
- [2] = Session-IDs im FRITZ!Box Webinterface
- [3] = AHA-HTTP-Interface
- [4] = encoding – What is the difference between utf8mb4 and utf8 charsets in MySQL? – Stack Overflow
Disclaimer
- The program code and the examples are intended for demonstration purposes only.
- The program code has not been optimized for speed (it is a bash script anyway, so one should not expect any wonders).
- The program code was not written with cybersecurity in mind. For use in production environments, adaptations may be necessary.
- The program code has been tested, but may nevertheless contain errors.
Protected: Urlaub in Brasilien (2023-02)
Protected: Road-Trip nach Portugal
Setting up Client VPNs, Policy Routing
Executive Summary
This blog post is the continuation of my previous blog post Setting up Dual Stack VPNs and explains how I use client VPNs together with simple policy routing on my Linux server in order to direct outgoing connections to various network interfaces and, ultimately, to different countries. The examples use IPv4 only.
Background
The approach was originally developed back in 2011…2014 when I lived in China and maintained several outgoing VPN connections from my Linux server to end points “in the West” so that I could circumvent internet censorship in China [8]. With the VPN service described in Setting up Dual Stack VPNs, it was then possible for me to be in town and to connect my smartphone to my Linux server (in the same town). From there, the connections to sites blocked in China would run over the client VPNs of the Linux server, so that I could, for example, use Google Maps on my smartphone (which at that time had already been blocked in China).
Preconditions
Routing in Linux follows some very clever approaches which can be combined in powerful ways. Readers who want to understand all of the underlying theory are encouraged to study the (older) documents [1], [2], [3], even if parts of their content might not be relevant any more. Readers who just want to follow and replicate the approach in this blog post should at least study the documents [4], [5], [6].
Apart from that, in order to replicate the approach described here, you should:
- … fulfil all preconditions listed in the blog post Setting up Dual Stack VPNs
- … have a setup similar to the one described in the blog post Setting up Dual Stack VPNs up and running
- … have access to a commercial VPN provider allowing you to run several client connections on the same machine
- … have at least read the documents [4], [5], [6]
Description and Usage
The graph below shows the setup on my machine caipirinha.spdns.org. The 5 VPN services (blue, green color) were already described in the blog post Setting up Dual Stack VPNs. Now, we have a closer look at the 3 VPN clients which use a commercial VPN service (ochre color) in order to connect to VPN end points in 3 different countries (Portugal, Singapore, Thailand).

Enabling Routing
Routing needs to be enabled on the Linux server. I personally also decided to switch off the privacy extensions on the Linux server, but that is a personal matter of taste:
# Enable "loose" reverse path filtering and prohibit icmp redirects
sysctl -w net.ipv4.conf.all.rp_filter=2
sysctl -w net.ipv4.conf.all.send_redirects=0
sysctl -w net.ipv4.conf.eth0.send_redirects=0
sysctl -w net.ipv4.icmp_errors_use_inbound_ifaddr=1
# Enable IPv6 routing, but keep SLAAC for eth0
sysctl -w net.ipv6.conf.eth0.accept_ra=2
sysctl -w net.ipv6.conf.all.forwarding=1
# Switch off the privacy extensions
sysctl -w net.ipv6.conf.eth0.use_tempaddr=0
Routing Tables
We now must have a closer look at the concept of the routing table. A routing table basically lists routes to particular network destinations. An example is the routing table main on my Linux server. It reads:
caipirinha:~ # ip route list table main
default via 192.168.2.1 dev eth0 proto dhcp
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.3
192.168.10.0/24 dev tun4 proto kernel scope link src 192.168.10.1
192.168.11.0/24 dev tun5 proto kernel scope link src 192.168.11.1
192.168.12.0/24 dev tun6 proto kernel scope link src 192.168.12.1
192.168.13.0/24 dev tun7 proto kernel scope link src 192.168.13.1
192.168.14.0/24 dev wg0 proto kernel scope link src 192.168.14.1
This table has 7 entries, and they have this meaning:
- (“default via…”) Connections to IP addresses that do not have a corresponding entry in the routing table shall be forwarded via the interface eth0 and to the router IP address 192.168.2.1 (an AVM Fritz! Box).
- Connections to the network 192.168.2.0/24 shall be forwarded via the interface eth0 using the source IP address 192.168.2.3 (the Linux server itself).
- Connections to the network 192.168.10.0/24 shall be forwarded via the interface tun4 using the source IP address 192.168.10.1 (the Linux server itself). This network belongs to one of the 5 VPN services on my Linux server.
- Connections to the network 192.168.11.0/24 shall be forwarded via the interface tun5 using the source IP address 192.168.11.1 (the Linux server itself). This network belongs to one of the 5 VPN services on my Linux server.
- Connections to the network 192.168.12.0/24 shall be forwarded via the interface tun6 using the source IP address 192.168.12.1 (the Linux server itself). This network belongs to one of the 5 VPN services on my Linux server.
- Connections to the network 192.168.13.0/24 shall be forwarded via the interface tun7 using the source IP address 192.168.13.1 (the Linux server itself). This network belongs to one of the 5 VPN services on my Linux server.
- Connections to the network 192.168.14.0/24 shall be forwarded via the interface wg0 using the source IP address 192.168.14.1 (the Linux server itself). This network belongs to one of the 5 VPN services on my Linux server.
Usually, a routing table should have a default entry which sends all IP traffic that is not explicitly routed to other network interfaces to the default router of a network. Otherwise, no meaningful internet access is possible.
A Linux system can have up to 256 routing tables which are defined in /etc/iproute2/rt_tables. They can either be used by their number or by their name. On my Linux server, I have set up 3 additional routing tables, named “Portugal”, “Singapur”, “Thailand”. You can see in the file /etc/iproute2/rt_tables that besides the table main, the tables local, default, and unspec do already exist, but they are not of interest for our purposes.
#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
#1 inr.ruhep
240 Portugal
241 Singapur
242 Thailand
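The 3 entries can simply be appended to the file with root privileges, for example with:
echo "240 Portugal" >> /etc/iproute2/rt_tables
echo "241 Singapur" >> /etc/iproute2/rt_tables
echo "242 Thailand" >> /etc/iproute2/rt_tables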
Right now (before we set up the client VPNs), all 3 routing tables look the same as shown here:
caipirinha:~ # ip route list table Portugal
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.3
192.168.10.0/24 via 192.168.10.1 dev tun4
192.168.11.0/24 via 192.168.11.1 dev tun5
192.168.12.0/24 via 192.168.12.1 dev tun6
192.168.13.0/24 via 192.168.13.1 dev tun7
192.168.14.0/24 via 192.168.14.1 dev wg0
caipirinha:~ # ip route list table Singapur
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.3
192.168.10.0/24 via 192.168.10.1 dev tun4
192.168.11.0/24 via 192.168.11.1 dev tun5
192.168.12.0/24 via 192.168.12.1 dev tun6
192.168.13.0/24 via 192.168.13.1 dev tun7
192.168.14.0/24 via 192.168.14.1 dev wg0
caipirinha:~ # ip route list table Thailand
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.3
192.168.10.0/24 via 192.168.10.1 dev tun4
192.168.11.0/24 via 192.168.11.1 dev tun5
192.168.12.0/24 via 192.168.12.1 dev tun6
192.168.13.0/24 via 192.168.13.1 dev tun7
192.168.14.0/24 via 192.168.14.1 dev wg0
The content of a routing table can be listed with the command ip route list table ${tablename}, where ${tablename} needs to exist in /etc/iproute2/rt_tables. It is important to notice that so far, none of these 3 routing tables has a default route. They only contain the home network and the networks of the 5 VPN services. Right now, these tables are not yet useful. In case you wonder how these 3 routing tables got populated with their entries: that needs to be done either manually or by a script (see the next chapter).
OpenVPN Server Configuration (Update)
Now that we have 3 additional routing tables, we must ensure that the networks of our 5 VPN services are also inserted into these 3 routing tables. Therefore, we modify the configuration files described in the blog post Setting up Dual Stack VPNs so that a script runs when the VPN service is started. In the openvpn configuration files, we insert the statement:
up /etc/openvpn/start_vpn.sh
In the WireGuard configuration file, we insert the statement:
PostUp = /etc/openvpn/start_vpn.sh %i - - 192.168.14.1
The effect of these statements is that the script /etc/openvpn/start_vpn.sh is executed once the VPN service has been set up. If no arguments are specified, openvpn hands over 5 arguments to the script (see [9], section “--up cmd”). In the WireGuard configuration, we have to specify the arguments explicitly; the “%i” stands for the interface (see [10], “PostUp”), in my case wg0.
The script /etc/openvpn/start_vpn.sh was originally developed for the openvpn configuration and therefore accepts all the default arguments that openvpn transmits, although only the first and the fourth argument are used. Therefore, in the WireGuard configuration, two “-” are inserted as placeholder arguments. That is surely something that could be solved more elegantly.
What does this script do? It essentially writes the same entry that is done automatically in the routing table main to the 3 additional routing tables Portugal, Singapur, and Thailand. It assumes that VPN services have a /24 network (true in my own case, not necessarily for other setups).
#!/bin/bash
#
# This script sets the VPN parameters in the routing tables "Portugal", "Singapur" and "Thailand" once the server has been started successfully.
# Set the correct PATH environment
PATH='/sbin:/usr/sbin:/bin:/usr/bin'
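# openvpn hands over its default arguments (see [9], "--up cmd");
# only ${1} (the tunnel device) and ${4} (the local VPN IP address) are used here.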
VPN_DEV="${1}"
VPN_SRC="${4}"
VPN_NET=$(echo "${VPN_SRC}" | cut -d . -f 1-3)".0/24"
for TABLE in Portugal Singapur Thailand; do
ip route add ${VPN_NET} dev ${VPN_DEV} via ${VPN_SRC} table ${TABLE}
done
For our experiments, we now also need to allocate 3 dedicated IP addresses to 3 devices in one of the VPN services on the Linux server so that the devices always get the same IP address from the VPN service when they connect (pseudo-static IP configuration). As described in the blog post Setting up Dual Stack VPNs, section “Dedicated Configurations”, we can achieve this by creating 3 files named after the common names of the devices (gabriel-SM-G991B, gabriel-SM-N960F, gabriel-SM-T580) that were used to create their certificates. I did that for the UDP-based VPN, full tunneling openvpn, and the 3 configuration files are listed here:
caipirinha:~ # cat /etc/openvpn/conf-1194/gabriel-SM-G991B
# Special configuration file for Gabriel's Galaxy S20 (gabriel-SM-G991B)
#
ifconfig-push 192.168.10.250 255.255.255.0
ifconfig-ipv6-push fd01:0:0:a:0:0:1:fa/111 fd01:0:0:a::1
caipirinha:~ # cat /etc/openvpn/conf-1194/gabriel-SM-N960F
# Special configuration file for Gabriel's Galaxy Note 9 (gabriel-SM-N960F)
#
ifconfig-push 192.168.10.251 255.255.255.0
ifconfig-ipv6-push fd01:0:0:a:0:0:1:fb/111 fd01:0:0:a::1
caipirinha:~ # cat /etc/openvpn/conf-1194/gabriel-SM-T580
# Special configuration file for Gabriel's Galaxy Tab A (gabriel-SM-T580)
#
ifconfig-push 192.168.10.252 255.255.255.0
ifconfig-ipv6-push fd01:0:0:a:0:0:1:fc/111 fd01:0:0:a::1
One can easily identify the respective IPv4 and IPv6 addresses which shall be allocated to the 3 named devices:
- gabriel-SM-G991B shall get the IPv4 192.168.10.250 and the IPv6 fd01:0:0:a:0:0:1:fa.
- gabriel-SM-N960F shall get the IPv4 192.168.10.251 and the IPv6 fd01:0:0:a:0:0:1:fb.
- gabriel-SM-T580 shall get the IPv4 192.168.10.252 and the IPv6 fd01:0:0:a:0:0:1:fc.
Let us not forget that this is the configuration for only one out of the 5 VPN services. If the devices connect to a VPN service different from the UDP-based VPN, full tunneling openvpn, these configurations have no effect.
OpenVPN Client Configuration
For the experiments below, we will set up 3 client VPN connections to different countries. As I do not have infrastructure outside of Germany, I use a commercial VPN provider, in my case PureVPN™ (as I once got an affordable 5-year subscription). Choosing a suitable VPN provider is not easy, and I strongly recommend researching test reports and forums which deal with the configuration on Linux before you commit to a subscription with any commercial VPN provider. In my case, the provider (PureVPN™) offers openvpn Linux configurations as a download. I just had to make some modifications, as otherwise the VPN wants to be the default connection for all internet traffic; this is not what we want when we do our own policy routing. I chose the TCP configuration as the UDP configuration, which is normally preferred, did not run in a stable fashion at the time of writing this article. The client configuration files also contain the ca, the certificate, the key, and the tls-auth block at the end (shown abridged for the Portugal configuration only).
TCP-based split VPN to Portugal
# Configuration file for the openVPN client on CAIPIRINHA for the connection to PureVPN (Portugal)
auth-user-pass /etc/openvpn/purevpn.login
auth-nocache
auth-retry nointeract
client
comp-lzo
dev tun0
ifconfig-nowarn
key-direction 1
log /var/log/openvpn_PT.log
lport 5456
mute 20
proto tcp
persist-key
persist-tun
remote pt2-auto-tcp.ptoserver.com 80
remote-cert-tls server
route-nopull
script-security 2
status /var/run/openvpn/status_PT
up /etc/openvpn/start_purevpn.sh
down /etc/openvpn/stop_purevpn.sh
verb 3
<ca>
-----BEGIN CERTIFICATE-----
MIIE6DCCA9CgAwIBAgIJAMjXFoeo5uSlMA0GCSqGSIb3DQEBCwUAMIGoMQswCQYD
...
4ZjTr9nMn6WdAHU2
-----END CERTIFICATE-----
</ca>
<cert>
-----BEGIN CERTIFICATE-----
MIIEnzCCA4egAwIBAgIBAzANBgkqhkiG9w0BAQsFADCBqDELMAkGA1UEBhMCSEsx
...
21oww875KisnYdWjHB1FiI+VzQ1/gyoDsL5kPTJVuu2CoG8=
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
MIICdgIBADANBgkqhkiG9w0BAQEFAASCAmAwggJcAgEAAoGBAMbJ8p+L+scQz57g
...
d7q7xhec5WHlng==
-----END PRIVATE KEY-----
</key>
<tls-auth>
#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
e30af995f56d07426d9ba1f824730521
...
dd94498b4d7133d3729dd214a16b27fb
-----END OpenVPN Static key V1-----
</tls-auth>
TCP-based split VPN to Singapore
# Configuration file for the openVPN client on CAIPIRINHA for the connection to PureVPN (Singapore)
auth-user-pass /etc/openvpn/purevpn.login
auth-nocache
auth-retry nointeract
client
comp-lzo
dev tun1
ifconfig-nowarn
key-direction 1
log /var/log/openvpn_SG.log
lport 5457
mute 20
proto tcp
persist-key
persist-tun
remote sg2-auto-tcp.ptoserver.com 80
remote-cert-tls server
route-nopull
script-security 2
status /var/run/openvpn/status_SG
up /etc/openvpn/start_purevpn.sh
down /etc/openvpn/stop_purevpn.sh
verb 3
...
TCP-based split VPN to Thailand
# Configuration file for the openVPN client on CAIPIRINHA for the connection to PureVPN (Thailand)
auth-user-pass /etc/openvpn/purevpn.login
auth-nocache
auth-retry nointeract
client
comp-lzo
dev tun2
ifconfig-nowarn
key-direction 1
log /var/log/openvpn_TH.log
lport 5458
mute 20
proto tcp
persist-key
persist-tun
remote th2-auto-tcp.ptoserver.com 80
remote-cert-tls server
route-nopull
script-security 2
status /var/run/openvpn/status_TH
up /etc/openvpn/start_purevpn.sh
down /etc/openvpn/stop_purevpn.sh
verb 3
...
I stored these configurations in the files:
- /etc/openvpn/client_PT.conf
- /etc/openvpn/client_SG.conf
- /etc/openvpn/client_TH.conf
Let us discuss some configuration items:
- auth-user-pass refers to the file /etc/openvpn/purevpn.login which contains the login and password for my VPN service. It is referenced here so that I do not have to enter them when I start the connection or when the connection restarts after a breakdown.
- cipher refers to an algorithm that PureVPN™ uses on their server side.
- PureVPN™ also uses compression on the VPN connection, and this is turned on by the line comp-lzo.
- As we want to do policy routing, we need to know which VPN we are dealing with. Therefore, I attribute a dedicated tun device as well as a dedicated lport (source port) to each connection.
- remote names the server and port given in the downloaded configuration files.
- route-nopull is very important as otherwise, the default route would be changed. However, for our purposes, we do not want any routes to be changed automatically, we will do that by policy routing later.
- up and down name a start and a stop script. The start script is executed after the connection has been established, and the stop script is executed when the connection is terminated. As the scripts use various commands, we need to set script-security accordingly.
- The initial configuration always takes some time, and so I have set verb to “3” in order to have more verbosity in the log file, for debugging purposes.
Let’s now look at the start script /etc/openvpn/start_purevpn.sh. This script requires the tool ipcalc to be installed on the target system, as it eases some of the computations.
#!/bin/bash
#
# This script sets the VPN parameters in the routing tables "main", "Portugal", "Singapur" and "Thailand" once the connection has been successfully established.
# This script requires the tool "ipcalc" which needs to be installed on the target system.
# Set the correct PATH environment
PATH='/sbin:/usr/sbin:/bin:/usr/bin'
VPN_DEV=$1
VPN_SRC=$4
VPN_MSK=$5
VPN_GW=$(ipcalc ${VPN_SRC}/${VPN_MSK} | sed -n 's/^HostMin:\s*\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*/\1/p')
VPN_NET=$(ipcalc ${VPN_SRC}/${VPN_MSK} | sed -n 's/^Network:\s*\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\/[0-9]\{1,2\}\).*/\1/p')
case "${VPN_DEV}" in
"tun0") ROUTING_TABLE='Portugal';;
"tun1") ROUTING_TABLE='Singapur';;
"tun2") ROUTING_TABLE='Thailand';;
esac
iptables -t filter -A INPUT -i ${VPN_DEV} -m state --state NEW,INVALID -j DROP
iptables -t filter -A FORWARD -i ${VPN_DEV} -m state --state NEW,INVALID -j DROP
ip route add ${VPN_NET} dev ${VPN_DEV} proto kernel scope link src ${VPN_SRC} table ${ROUTING_TABLE}
ip route replace default dev ${VPN_DEV} via ${VPN_GW} table ${ROUTING_TABLE}
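For reference, the two sed expressions above pick the Network: and HostMin: lines out of the ipcalc output, which looks roughly like this (abridged; the addresses match the Portugal example further below):
caipirinha:~ # ipcalc 172.17.66.34/255.255.255.224
Address:   172.17.66.34
Netmask:   255.255.255.224 = 27
Network:   172.17.66.32/27
HostMin:   172.17.66.33
HostMax:   172.17.66.62
...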
What does this script do? It executes these steps:
- It blocks packets with the state NEW or INVALID in the filter chains INPUT and FORWARD. This will be explained in more detail further down in this article. For now, it suffices to know that we want connections that originate from the commercial VPN network to be blocked. We must keep in mind that by using commercial VPN connections, we make the Linux server reachable for connections that might come from these networks. If everything is configured correctly on the side of the VPN provider, there should never be a connection that originates from their network, because individual VPN users should not be able to “see” each other. There should only be connections that originate from our Linux server; for these, we will of course get reply packets and have a bidirectional communication. Nevertheless, my own experience with various VPN providers has shown that a certain amount of unrelated stray packets reaches the Linux server, and I want to filter those out.
- It adds the client network (here, a /27 network) to the respective routing table Portugal, Singapur, or Thailand.
- It sets the default route in the respective routing table to the VPN endpoint. Ultimately, every routing table gets a default route if all 3 client VPNs are engaged. I use ip route replace rather than ip route add because ip route replace does not throw an error if there is already a default route in the routing table.
Consequently, the script /etc/openvpn/stop_purevpn.sh serves to clean up the entries in the filter table. We do not have to remove the entries in the 3 additional routing tables as they disappear automatically when the VPN connection is terminated. This script is somewhat smaller:
#!/bin/bash
#
# This script removes some routing table entries when the connection is terminated.
# Set the correct PATH environment
PATH='/sbin:/usr/sbin:/bin:/usr/bin'
VPN_DEV=$1
iptables -t filter -D INPUT -i ${VPN_DEV} -m state --state NEW,INVALID -j DROP
iptables -t filter -D FORWARD -i ${VPN_DEV} -m state --state NEW,INVALID -j DROP
Now that we have all these pieces together, we start the 3 client VPNs with the commands:
systemctl start openvpn@client_PT
systemctl start openvpn@client_SG
systemctl start openvpn@client_TH
After some seconds, the 3 client VPN connections should be fully set up, and the respective network devices tun0, tun1, tun2 should exist. Similar to what was described in the blog post Setting up Dual Stack VPNs, we must configure network address translation for the 3 client VPNs so that outgoing packets are modified to carry the source IP address that the Linux server has on the specific interface over which those packets travel. That is done with:
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o tun1 -j MASQUERADE
iptables -t nat -A POSTROUTING -o tun2 -j MASQUERADE
We use MASQUERADE in this case because the IP address of the Linux server can change with each VPN connection, and we do not know the source address beforehand. Otherwise, SNAT would be the better option, as it consumes less CPU power.
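For comparison, if the tunnel address were static, a hypothetical SNAT rule for the Portugal tunnel would look like this:
iptables -t nat -A POSTROUTING -o tun0 -j SNAT --to-source 172.17.66.34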
Now, we should be able to ping a machine (in this example, Google‘s DNS) via each of the 3 client VPN connections, as shown here:
caipirinha:~ # ping -c 3 -I tun0 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 172.17.66.34 tun0: 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=119 time=57.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=119 time=54.5 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=119 time=54.7 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 54.516/55.649/57.727/1.483 ms
caipirinha:~ # ping -c 3 -I tun1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 10.12.42.41 tun1: 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=249 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=58 time=247 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=58 time=247 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 247.120/247.972/249.111/1.015 ms
caipirinha:~ # ping -c 3 -I tun2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 10.31.6.38 tun2: 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=13.9 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=14.2 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=22.6 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 13.910/16.934/22.641/4.039 ms
caipirinha:~ # traceroute -i eth0 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 Router-EZ (192.168.2.1) 3.039 ms 2.962 ms 2.927 ms
2 fra1813aihr002.versatel.de (62.214.63.145) 15.440 ms 16.978 ms 18.866 ms
3 62.72.71.113 (62.72.71.113) 16.116 ms 19.534 ms 19.506 ms
4 89.246.109.249 (89.246.109.249) 24.717 ms 25.460 ms 24.659 ms
5 72.14.204.148 (72.14.204.148) 20.530 ms 20.602 ms 89.246.109.250 (89.246.109.250) 24.573 ms
6 * * *
7 dns.google (8.8.8.8) 20.265 ms 16.966 ms 14.751 ms
caipirinha:~ # traceroute -i tun0 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 10.96.10.33 (10.96.10.33) 50.579 ms 101.574 ms 102.216 ms
2 91.205.230.65 (91.205.230.65) 121.175 ms 121.171 ms 151.320 ms
3 cr1.lis1.edgoo.net (193.163.151.1) 102.156 ms 102.155 ms 102.150 ms
4 Google.AS15169.gigapix.pt (193.136.250.20) 102.145 ms 103.099 ms 103.145 ms
5 74.125.245.100 (74.125.245.100) 103.166 ms 74.125.245.118 (74.125.245.118) 103.156 ms 74.125.245.117 (74.125.245.117) 103.071 ms
6 142.250.237.83 (142.250.237.83) 120.681 ms 142.250.237.29 (142.250.237.29) 149.742 ms 142.251.55.151 (142.251.55.151) 110.302 ms
7 74.125.242.161 (74.125.242.161) 108.651 ms 108.170.253.241 (108.170.253.241) 108.594 ms 108.170.235.178 (108.170.235.178) 108.426 ms
8 74.125.242.161 (74.125.242.161) 108.450 ms 108.429 ms 142.250.239.27 (142.250.239.27) 107.505 ms
9 142.251.54.149 (142.251.54.149) 107.406 ms 142.251.60.115 (142.251.60.115) 108.446 ms 142.251.54.151 (142.251.54.151) 157.613 ms
10 dns.google (8.8.8.8) 107.380 ms 89.640 ms 73.506 ms
caipirinha:~ # traceroute -i tun1 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 10.12.34.1 (10.12.34.1) 295.583 ms 561.820 ms 625.195 ms
2 146.70.67.65 (146.70.67.65) 687.601 ms 793.792 ms 825.806 ms
3 193.27.15.178 (193.27.15.178) 1130.988 ms 1198.522 ms 1260.560 ms
4 37.120.220.218 (37.120.220.218) 1383.152 ms 37.120.220.230 (37.120.220.230) 825.525 ms 37.120.220.218 (37.120.220.218) 925.081 ms
5 103.231.152.50 (103.231.152.50) 1061.923 ms 1061.945 ms 15169.sgw.equinix.com (27.111.228.150) 993.095 ms
6 108.170.240.225 (108.170.240.225) 1320.654 ms 74.125.242.33 (74.125.242.33) 1164.303 ms 108.170.254.225 (108.170.254.225) 1008.590 ms
7 74.125.251.205 (74.125.251.205) 1009.043 ms 74.125.251.207 (74.125.251.207) 993.251 ms 142.251.49.191 (142.251.49.191) 969.879 ms
8 dns.google (8.8.8.8) 1001.502 ms 1065.558 ms 1073.731 ms
caipirinha:~ # traceroute -i tun2 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 10.31.3.33 (10.31.3.33) 189.134 ms 399.914 ms 399.941 ms
2 * * *
3 * * *
4 * * *
5 198.84.50.182.static-corp.jastel.co.th (182.50.84.198) 411.679 ms 411.729 ms 411.662 ms
6 72.14.222.138 (72.14.222.138) 433.761 ms 74.125.48.212 (74.125.48.212) 444.509 ms 72.14.223.80 (72.14.223.80) 444.554 ms
7 108.170.250.17 (108.170.250.17) 647.768 ms * 108.170.249.225 (108.170.249.225) 439.883 ms
8 142.250.62.59 (142.250.62.59) 635.318 ms 142.251.224.15 (142.251.224.15) 417.842 ms dns.google (8.8.8.8) 600.600 ms
The traceroutes above to Google's DNS show the routes the packets travel; the first example shows the route via the default connection (eth0), the other three the routes via the 3 client VPN connections.
Finally, we look at the routing tables that have changed after we have established the 3 client VPN connections:
caipirinha:~ # ip route list table main
default via 192.168.2.1 dev eth0 proto dhcp
10.12.42.32/27 dev tun1 proto kernel scope link src 10.12.42.41
10.31.6.32/27 dev tun2 proto kernel scope link src 10.31.6.38
172.17.66.32/27 dev tun0 proto kernel scope link src 172.17.66.34
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.3
192.168.10.0/24 dev tun4 proto kernel scope link src 192.168.10.1
192.168.11.0/24 dev tun5 proto kernel scope link src 192.168.11.1
192.168.12.0/24 dev tun6 proto kernel scope link src 192.168.12.1
192.168.13.0/24 dev tun7 proto kernel scope link src 192.168.13.1
192.168.14.0/24 dev wg0 proto kernel scope link src 192.168.14.1
caipirinha:~ # ip route list table Portugal
default via 172.17.66.33 dev tun0
172.17.66.32/27 dev tun0 proto kernel scope link src 172.17.66.34
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.3
192.168.10.0/24 via 192.168.10.1 dev tun4
192.168.11.0/24 via 192.168.11.1 dev tun5
192.168.12.0/24 via 192.168.12.1 dev tun6
192.168.13.0/24 via 192.168.13.1 dev tun7
192.168.14.0/24 via 192.168.14.1 dev wg0
caipirinha:~ # ip route list table Singapur
default via 10.12.42.33 dev tun1
10.12.42.32/27 dev tun1 proto kernel scope link src 10.12.42.41
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.3
192.168.10.0/24 via 192.168.10.1 dev tun4
192.168.11.0/24 via 192.168.11.1 dev tun5
192.168.12.0/24 via 192.168.12.1 dev tun6
192.168.13.0/24 via 192.168.13.1 dev tun7
192.168.14.0/24 via 192.168.14.1 dev wg0
caipirinha:~ # ip route list table Thailand
default via 10.31.6.33 dev tun2
10.31.6.32/27 dev tun2 proto kernel scope link src 10.31.6.38
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.3
192.168.10.0/24 via 192.168.10.1 dev tun4
192.168.11.0/24 via 192.168.11.1 dev tun5
192.168.12.0/24 via 192.168.12.1 dev tun6
192.168.13.0/24 via 192.168.13.1 dev tun7
192.168.14.0/24 via 192.168.14.1 dev wg0
In the routing tables, we can observe the following new items:
- Each client VPN connection has added a /27 network to the routing table main.
- The script /etc/openvpn/start_purevpn.sh has added the /27 networks to the corresponding routing tables Portugal, Singapur, Thailand so that each routing table only has the /27 network of the connection that leads to the corresponding destination.
- The script /etc/openvpn/start_purevpn.sh has also modified the default route of each of the routing tables Portugal, Singapur, Thailand so that each routing table has the default route of the connection that leads to the corresponding destination.
Routing Policies
Now, we are all set to define routing policies and take our first steps in the field of policy routing.
Simple Policy Routing
In the first example, we will “place” each device (gabriel-SM-G991B, gabriel-SM-N960F, gabriel-SM-T580) in a different country. Let us recall that when each of these devices connects to the Linux server via the UDP-based, full tunneling openvpn, it gets a defined IP address. This allows us to define routing policies based on the IP address [11]. In order to modify the routing policy database of the Linux server, we enter the commands:
ip rule add from 192.168.10.250/32 table Portugal priority 2000
ip rule add from 192.168.10.251/32 table Singapur priority 2000
ip rule add from 192.168.10.252/32 table Thailand priority 2000
The resulting routing policy database looks like this:
caipirinha:~ # ip rule list
0: from all lookup local
2000: from 192.168.10.250 lookup Portugal
2000: from 192.168.10.251 lookup Singapur
2000: from 192.168.10.252 lookup Thailand
32766: from all lookup main
32767: from all lookup default
The number at the beginning of each line in the routing policy database is the priority; this allows us to define routing policies in a defined order. As soon as the selector of a rule matches a packet, the corresponding action is executed, and no further rules are checked for this packet. [11] lists the possible selectors and actions, and we can see that there are a lot of possibilities, especially when we combine different matching criteria. In the case shown here, our rules tell the Linux server the following:
- Packets with the source IP 192.168.10.250 (device gabriel-SM-G991B) shall be processed in the routing table Portugal.
- Packets with the source IP 192.168.10.251 (device gabriel-SM-N960F) shall be processed in the routing table Singapur.
- Packets with the source IP 192.168.10.252 (device gabriel-SM-T580) shall be processed in the routing table Thailand.
An important rule is the one with the priority 32766; it tells all packets to use the routing table main. This rule has a very low priority (the higher the number, the lower the priority) so that administrators can create many other rules with higher priority that match packets and deal with them in a special way. The rules with the priorities 0, 32766, and 32767 are present in the system by default.
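Before testing with the devices, we can also check on the server itself which routing table a given source address will use; ip route get accepts a source address and an incoming interface (output abridged and merely a sketch, it may look slightly different on your system):
caipirinha:~ # ip route get 8.8.8.8 from 192.168.10.250 iif tun4
8.8.8.8 from 192.168.10.250 via 172.17.66.33 dev tun0 table Portugal
    cache iif tun4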
When we place the 3 devices gabriel-SM-G991B, gabriel-SM-N960F, and gabriel-SM-T580 outside the home network, either in a different WiFi network or in a mobile network, and connect them to the Linux server via the VPN services, then, because of the routing policy defined above, the devices will appear in:
- Portugal (gabriel-SM-G991B)
- Singapore (gabriel-SM-N960F)
- Thailand (gabriel-SM-T580)
We can test this with one of the websites that display IP geolocation, for example [13], and the result will look like this:



We must keep in mind that this kind of routing policy routes all outgoing traffic from the 3 devices to the respective countries, irrespective of whether this is web, email, or any other traffic. This is true for any protocol, and so a traceroute to Google's DNS (8.8.8.8) will really go via the respective country. The images below compare the device gabriel-SM-N960F without VPN (4G mobile network) and with the VPN to the Linux server, which then routes the connection via Singapore. One can easily recognize the much higher latency via Singapore. The traceroutes were taken with [14].


Policy Routing with Firewall Marking
While the ip-rule command [11] already offers a lot of possible combinations for the selection of packets, sometimes one needs more elaborate selection criteria. This is where policy routing with firewall marking and the mangle table [15] comes in. We first delete our rule set from above with the sequence:
ip rule del from 192.168.10.250/32 table Portugal priority 2000
ip rule del from 192.168.10.251/32 table Singapur priority 2000
ip rule del from 192.168.10.252/32 table Thailand priority 2000
Then, we enter new rules. Instead of using IP addresses in the selector, we use a so-called “firewall mark” (fwmark). We tell the Linux server to process packets that have a special mark in the routing tables mentioned in the action field of ip-rule:
ip rule add from all fwmark 0x1 priority 5000 lookup Portugal
ip rule add from all fwmark 0x2 priority 5000 lookup Singapur
ip rule add from all fwmark 0x3 priority 5000 lookup Thailand
But how do we mark packets? This is done in the mangle table, one of the tables of the iptables [12] command. With the commands listed below, we specify the marking of TCP packets originating from the listed IP addresses and going to the destination ports 80 (http) and 443 (https). All other traffic from the devices with the listed IP addresses (e.g., smtp, imap, UDP, ICMP, …) will not be marked.
iptables -t mangle -F
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
iptables -t mangle -A PREROUTING -m mark ! --mark 0 -j ACCEPT
iptables -t mangle -A PREROUTING -s 192.168.10.250/32 -p tcp -m multiport --dports 80,443 -m state --state NEW,RELATED -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -s 192.168.10.251/32 -p tcp -m multiport --dports 80,443 -m state --state NEW,RELATED -j MARK --set-mark 2
iptables -t mangle -A PREROUTING -s 192.168.10.252/32 -p tcp -m multiport --dports 80,443 -m state --state NEW,RELATED -j MARK --set-mark 3
iptables -t mangle -A PREROUTING -j CONNMARK --save-mark
Let us have a closer look at the 7 iptables commands:
- This command flushes all chains of the mangle table so that the mangle table is empty.
- This command restores the marks of packets. Here, one must know that the mark of a packet is not stored in the packet itself, as the IP header does not contain a field for such a mark. Rather, the Linux kernel keeps track of which mark belongs to which packet. However, when the Linux server sends out a packet to its destination, and the computer at the destination (e.g., a web server) answers with its own packets, then we want these packets to be marked upon arrival at our Linux server, too, because they belong to a data connection whose packets were initially marked, and we might need the mark in order to process them correctly. Therefore, we “restore” the mark in the PREROUTING chain of the mangle table.
- This command accepts all packets that already carry a non-zero mark so that they do not traverse the remaining rules of the chain. (I am not entirely sure whether this command is needed at all; this should be tested.)
- This command sets the mark “1” to those packets that fulfil all these requirements:
- It comes from the source IP address 192.168.10.250.
- It uses TCP.
- It goes to one of the destination ports 80 (http) or 443 (https).
- It constitutes a NEW or RELATED connection.
- This command sets the mark “2” to those packets that fulfil all these requirements:
- It comes from the source IP address 192.168.10.251.
- It uses TCP.
- It goes to one of the destination ports 80 (http) or 443 (https).
- It constitutes a NEW or RELATED connection.
- This command sets the mark “3” to those packets that fulfil all these requirements:
- It comes from the source IP address 192.168.10.252.
- It uses TCP.
- It goes to one of the destination ports 80 (http) or 443 (https).
- It constitutes a NEW or RELATED connection.
- This command saves the mark of the packets in the connection tracking table.
After we have entered these commands, the mangle table should look somewhat like this:
caipirinha:/etc/openvpn # iptables -t mangle -L -n -v
Chain PREROUTING (policy ACCEPT 210K packets, 106M bytes)
pkts bytes target prot opt in out source destination
953K 384M CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK restore
177K 88M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 mark match ! 0x0
989 76505 MARK tcp -- * * 192.168.10.250 0.0.0.0/0 multiport dports 80,443 state NEW,RELATED MARK set 0x1
1233 82791 MARK tcp -- * * 192.168.10.251 0.0.0.0/0 multiport dports 80,443 state NEW,RELATED MARK set 0x2
1017 72624 MARK tcp -- * * 192.168.10.252 0.0.0.0/0 multiport dports 80,443 state NEW,RELATED MARK set 0x3
776K 296M CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK save
Chain INPUT (policy ACCEPT 203K packets, 104M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 137K packets, 69M bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 199K packets, 107M bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 335K packets, 176M bytes)
pkts bytes target prot opt in out source destination
The values in the columns pkts, bytes will most probably be different in your case. They show how many IP packets and bytes have matched this rule and can help in controlling or debugging the configuration and the traffic flows.
The entries in the mangle table consequently mark the packets that traverse our Linux router and that are from one of the devices gabriel-SM-G991B, gabriel-SM-N960F, or gabriel-SM-T580 and that are destined to a web server (TCP ports 80, 443) with either the fwmark “1”, “2”, or “3”. Based on this fwmark, the packets are then sent to the routing tables Portugal, Singapur or Thailand. Using both the routing policy database and the mangle table is a powerful instrument for selecting packets and connections that shall be routed in a special way which gives us a lot of flexibility.
For example, if we only had one device to consider (gabriel-SM-G991B), the two commands that we issued before:
ip rule add from all fwmark 0x1 priority 5000 lookup Portugal
iptables -t mangle -A PREROUTING -s 192.168.10.250/32 -p tcp -m multiport --dports 80,443 -m state --state NEW,RELATED -j MARK --set-mark 1
have the same effect as these two commands:
ip rule add from 192.168.10.250/32 fwmark 0x1 priority 5000 lookup Portugal
iptables -t mangle -A PREROUTING -p tcp -m multiport --dports 80,443 -m state --state NEW,RELATED -j MARK --set-mark 1
In the first case, the source address selection is done in the iptables command; in the second case, it is done in the ip rule command. With the 3 devices that we have, the second way is not a solution as it would set the fwmark “1” on all packets that go to a web server, and the subsequent entries in the mangle table would not be executed any more. We therefore have to create rules with extreme caution in order not to jeopardize our intended routing behavior. I recommend being as specific as possible already in the iptables command, also in order to avoid excessive packet marking, as this complicates your life when you have to debug your setup.
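When such a setup misbehaves, the per-rule packet counters (the pkts column in the listing above) quickly reveal which rule actually matches; listing the chain with rule numbers also helps when a specific rule must be deleted, for example:
iptables -t mangle -L PREROUTING -n -v --line-numbers
iptables -t mangle -D PREROUTING 3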
When we have the setup described in this chapter active, the 3 devices (gabriel-SM-G991B, gabriel-SM-N960F, gabriel-SM-T580) will appear to be in the respective countries, similar to the setup in the previous chapter where we only modified the routing policy database. We can test this with one of the websites that display IP geolocation, for example [13], and the result will look like this:



However, if we execute a traceroute command on one of the devices, the traceroute does not go via the respective country because it uses the protocol ICMP rather than TCP; it is therefore not marked and consequently not routed via any of the client VPNs. This can be seen in the rightmost image in the following gallery where a traceroute to Google's DNS (8.8.8.8) has been made. The traceroutes were taken with [14].



Connection Tracking
In the previous chapter, the iptables statements for the mangle table apply to NEW or RELATED connections only. Let us therefore look into the concept of connection tracking [4], [21]. Connection tracking is provided by the netfilter component [16] of the Linux kernel, which keeps track of stateful connections in the connection tracking table, very similar to a stateful firewall [17]. Important states are [4]:
- NEW: The packet does not belong to an existing connection.
- ESTABLISHED: The packet belongs to an “established” connection. A connection is changed from NEW to ESTABLISHED when it receives a valid response in the opposite direction.
- RELATED: Packets that are not part of an existing connection but are associated with a connection already in the system are labeled RELATED. An example is FTP, which opens additional connections adjacent to the initial one [18].
- INVALID: Packets can be marked INVALID if they are not associated with an existing connection and are not appropriate for opening a new connection.
The mighty conntrack toolset [19] contains the command conntrack [20] which can be used to view the connection tracking table (actually, there are several such tables) and to inspect various behaviors around it. We can, for example, examine which connections have the fwmark “2” set, that is, which connections have been set up by the device gabriel-SM-N960F using the source IP address 192.168.10.251:
caipirinha:/etc/openvpn # conntrack -L -m2
tcp 6 431952 ESTABLISHED src=192.168.10.251 dst=172.217.19.74 sport=38330 dport=443 src=172.217.19.74 dst=10.12.42.37 sport=443 dport=38330 [ASSURED] mark=2 use=1
tcp 6 431832 ESTABLISHED src=192.168.10.251 dst=34.107.165.5 sport=38924 dport=443 src=34.107.165.5 dst=10.12.42.37 sport=443 dport=38924 [ASSURED] mark=2 use=1
tcp 6 431955 ESTABLISHED src=192.168.10.251 dst=142.250.186.170 sport=58652 dport=443 src=142.250.186.170 dst=10.12.42.37 sport=443 dport=58652 [ASSURED] mark=2 use=1
tcp 6 65 TIME_WAIT src=192.168.10.251 dst=142.250.185.66 sport=57856 dport=443 src=142.250.185.66 dst=10.12.42.37 sport=443 dport=57856 [ASSURED] mark=2 use=1
tcp 6 431954 ESTABLISHED src=192.168.10.251 dst=142.250.186.170 sport=58640 dport=443 src=142.250.186.170 dst=10.12.42.37 sport=443 dport=58640 [ASSURED] mark=2 use=1
conntrack v1.4.5 (conntrack-tools): 5 flow entries have been shown.
Each line contains a whole set of information which is explained in detail in [2], § 7. For a quick orientation, we have to know the following points:
- Each line has two parts. The first part lists the IP header information of the newly initiated connection; in our case, we observe:
- The source IP address is always 192.168.10.251 (gabriel-SM-N960F).
- The destination port is either 80 or 443, as we change the routing for exactly these destination ports only.
- The second part lists the IP header information of the expected or received (as an answer) packets, and, we observe:
- The source IP address of the answering packet is the destination IP address of the initiating packet which makes sense.
- The source port of the answering packet is the destination port of the initiating packet which again makes sense.
- The destination IP address of the answering packet is not 192.168.10.251, but 10.12.42.37. This is because 10.12.42.37 is the IP address of the tun1 device of the Linux server. When we send out the initiating packet from 192.168.10.251, the packet goes to the Linux server, which acts as a router. There, the packet is modified, and the server uses the address of its outgoing interface tun1 as the source address of the packet, because the remote end point of the client VPN connection would not know how to route a packet to 192.168.10.251 (the remote end point knows nothing about the network 192.168.10.0/24 on our side).
- [ASSURED] means that the connection has already seen traffic in both directions; the connection has therefore been set up successfully.
- Coincidentally, in our example, we only have TCP connections; however, the connection tracking table can also comprise UDP or ICMP connections.
The command conntrack [20] offers many more possibilities and even allows changing entries in the connection tracking table. So, we have barely scratched the surface.
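As a final, hypothetical sketch (use with caution, as it changes live kernel state): conntrack -U can update entries, for example re-marking all tracked connections of one source address:
conntrack -U -s 192.168.10.251 --mark 3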
Conclusion
In this blog post, we have used client VPN connections to execute some experiments on policy routing in which we make different devices appear to be located in different countries. We touched the concepts of routing tables, the routing policy database, the mangle table, and the connection tracking table. The possibilities of all these items go far beyond what we discussed in this blog post. The interested reader is referred to the sources listed below to gain in-depth knowledge and to understand the vast possibilities that these items offer to the expert.
Sources
- [1] = Linux Advanced Routing & Traffic Control HOWTO
- [2] = Iptables Tutorial 1.2.2
- [3] = Guide to IP Layer Network Administration with Linux
- [4] = A Deep Dive into Iptables and Netfilter Architecture
- [5] = Policy Routing With Linux – Online Edition
- [6] = Understanding modern Linux routing (and wg-quick)
- [7] = Netfilter Connmark
- [8] = Internet Censorship in China
- [9] = Reference manual for OpenVPN 2.4
- [10] = man 8 wg-quick
- [11] = man 8 ip-rule
- [12] = man 8 iptables
- [13] = Where is my IP location? (Geolocation)
- [14] = PingTools Network Utilities [Google Play]
- [15] = Mangle table for iptables
- [16] = The netfilter.org project
- [17] = Stateful firewall [Wikipedia]
- [18] = Active FTP vs. Passive FTP, a Definitive Explanation
- [19] = The conntrack-tools user manual
- [20] = man 8 conntrack
- [21] = Connection tracking
Setting up Dual Stack VPNs
Executive Summary
This blog post explains how I set up dual stack (IPv4, IPv6) virtual private networks (VPN) with the open-source packages openvpn and WireGuard on my Linux server caipirinha.spdns.org. Clients (smartphones, tablets, notebooks) which connect to my server will be supplied with a dual stack VPN connection and can therefore use both IPv4 as well as IPv6 via the Linux server to the internet.
Background
The implementation was originally intended to help a friend who lived in China and who struggled with his commercial VPN that only tunneled IPv4 and did not block IPv6. He often experienced blockages when he tried to access social media sites as his system would prefer IPv6 over IPv4 and so the connection would not run through his VPN. However, due to [6], openvpn alone is no longer suited to circumvent censorship in China [7]. WireGuard might still work in geographies where openvpn is blocked, however.
Preconditions
In order to use the approach described here, you should:
- … have a full dual stack internet connection (IPv4, IPv6)
- … have access to a Linux machine which is already properly configured for dual stack on its principal network interface (e.g., eth0)
- … have the Linux machine set up as a router
- … have the package openvpn and/or WireGuard installed (preferably from a repository of your Linux distribution)
- … know how to create client and server certificates for openvpn [11] and/or WireGuard [12], [13], [19] although this blog post will also contain a short description on how to create a set of certificates and keys for openvpn
- … have knowledge of routing concepts, networks, some understanding of shell scripts and configuration files
- … know related system commands like sysctl
Description and Usage
The graph below shows the setup on my machine caipirinha.spdns.org with 5 VPN services (blue, green color) that will be described in this blog post. The machine also has 3 VPN clients configured which are mapped to a commercial service (ochre color), but this is not the topic of this blog post.

Home Network Setup
Let us now look at some details of the network setup:
- The Linux server is not connected to the internet directly, but to a small SoHo router which acts as a basic firewall and forwards a selection of ports and protocols to the Linux server.
- The internal IPv4 network which is set up by the SoHo router is 192.168.2.0/24.
- The internal IPv6 network which is set up by the SoHo router is fd00:0:0:2::/64; this is configured in the respective menu of the SoHo router as shown below and lies within the IPv6 unique local address space [1], [2]. I decided to use an IPv6 prefix with a “2” in the network address, like the “2” in the IPv4 network.
- The Linux server also gets a public IPv6 address allocated (like all other devices in my home network); this is accomplished by the SoHo router that has IPv6 enabled.

When everything has been set up correctly, the Linux server should get various IP addresses, and among them various IPv6 addresses:
- a “real” and routable one starting with numbers in the range from “2000:” to “3fff:”
- a SLAAC [3] one starting with “fe80:”
- a “private” one starting with “fd00::2:”
An example is shown here:
caipirinha:~ # ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.2.3 netmask 255.255.255.0 broadcast 192.168.2.255
inet6 2001:16b8:306c:c700:76d4:35ff:fe5c:d2c3 prefixlen 64 scopeid 0x0<global>
inet6 fe80::76d4:35ff:fe5c:d2c3 prefixlen 64 scopeid 0x20<link>
inet6 fd00::2:76d4:35ff:fe5c:d2c3 prefixlen 64 scopeid 0x0<global>
...
Enabling Routing
Routing for IPv4 and IPv6 needs to be enabled on the Linux server. I personally also decided to switch off the privacy extensions on the Linux server, but that is a personal matter of taste:
# Enable "loose" reverse path filtering and prohibit icmp redirects
sysctl -w net.ipv4.conf.all.rp_filter=2
sysctl -w net.ipv4.conf.all.send_redirects=0
sysctl -w net.ipv4.conf.eth0.send_redirects=0
sysctl -w net.ipv4.icmp_errors_use_inbound_ifaddr=1
# Enable IPv6 routing, but keep SLAAC for eth0
sysctl -w net.ipv6.conf.eth0.accept_ra=2
sysctl -w net.ipv6.conf.all.forwarding=1
# Switch off the privacy extensions
sysctl -w net.ipv6.conf.eth0.use_tempaddr=0
OpenVPN Key Management with Easy-RSA
For the openVPN server and clients, we need a certification authority and we ultimately need to create signed certificates and keys. This can be done with the help of the package easy-rsa that is available for various platforms [22] and often is part of Linux distributions, too. Documentation and hands-on examples are given in [20] and [21].
We start with the initialization of a Public Key Infrastructure (PKI) and the creation of a Certificate Authority (CA), followed by the creation of a Certificate Revocation List (CRL):
easyrsa init-pki
easyrsa build-ca nopass
easyrsa gen-crl
The next step is the creation of a key pair for the server. The public key will be signed by the CA and thus becomes our server certificate. Furthermore, we create Diffie-Hellman parameters for the server (not needed if you create elliptic-curve keys). All this can be done by:
easyrsa --days=3652 build-server-full caipirinha.spdns.org nopass
easyrsa gen-dh
In this example, the server certificate is valid for 3652 days (10 years), the certificate is named caipirinha.spdns.org.crt, and the private key, which must remain on the server, is named caipirinha.spdns.org.key.
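To double-check the validity period of the generated certificate, assuming easy-rsa's default output location under pki/issued/, openssl can be used:
openssl x509 -in pki/issued/caipirinha.spdns.org.crt -noout -dates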
Now, we can create client certificates in a similar way. In the example, the client certificates will have a validity of 5 years only:
easyrsa --days=1825 build-client-full gabriel-SM-G991B nopass
easyrsa --days=1825 build-client-full gabriel-SM-N960F nopass
easyrsa --days=1825 build-client-full gabriel-SM-N915FY nopass
easyrsa --days=1825 build-client-full gabriel-SM-T580 nopass
...
I chose not to use passwords for the private keys in order to facilitate handling. Furthermore, I went for the easy way and created all certificates and keys on one system. If you intend to deploy a professional solution, you have to keep cybersecurity in mind; you may therefore want to exercise more caution and separate the certificate authority on a secured system from the creation of the server and client key pairs, as advised in [20].
OpenVPN Server Configuration
Before we go into details of the configuration, we must distinguish 3 concepts of VPNs:
- A (full) tunneling VPN tunnels all connections through the VPN, once the VPN connection has been established. This offers possibilities, but also has implications:
- The VPN client appears to be in the geographic location of the VPN server unless the server itself tunnels through more nodes. This can be useful to circumvent censorship in the geography where the VPN client is located as all connections from the client to services in the internet are channeled through the VPN server.
- In a complex multi-level server setup, it can make the client appear in different countries, depending on which destination the client is trying to access. A VPN client might, for example, be in Angola, but connect to a VPN server in Germany which itself has VPN connections to Brazil and to Portugal. If the VPN server is configured accordingly, the VPN client in Angola may appear as being in Portugal when accessing Portuguese web sites and might appear as being located in Brazil when accessing Brazilian web sites and might appear as being located in Germany for everything else.
- The VPN server can implement filtering services like filtering out ad servers or doing virus scans of downloads.
- The VPN server can implement access restrictions; companies use this sometimes to disallow clients to access web sites which they deem to be related to “sex, hate, crime, gambling, …”.
- A split tunneling VPN tunnels only connections to certain networks between the VPN client and the VPN server while all other connections from the VPN client access the internet through the local provider. The typical usage scenario is not related to censorship, but to dedicated resources to which the VPN server grants access (e.g., network shares aka “samba”, proxy services, etc.) that shall be accessed from the VPN client while the latter is not physically connected to the home or company network.
- An inverse split tunneling VPN tunnels almost all connections, with a few exceptions. This concept is often used in companies which want basically all connections to run through their infrastructure so that they can execute virus scans and access restrictions, but which have (correctly) realized that bandwidth-intensive operations like cloud access, access to video conferencing services, etc. should be taken off the VPN tunnel as their performance is deteriorated otherwise.
The following configurations will create 4 different VPNs based on the first 2 concepts above.
- UDP-based VPN, full tunneling: This is the preferred VPN when all connections shall be tunneled.
- UDP-based VPN, split tunneling: This is the preferred VPN when you want to blend in resources from your home network.
- TCP-based VPN, full tunneling: TCP can be used when the connection quality to the VPN server is unstable or when UDP is blocked by some gateway in between.
- TCP-based VPN, split tunneling: This is the preferred VPN when you want to blend in resources from your home network, but when the connection quality to the VPN server is unstable or when UDP is blocked by some gateway in between.
UDP-based VPN, full tunneling
# Configuration file for the openVPN server on CAIPIRINHA (UDP:1194)
ca /root/pki/ca.crt
cert /etc/openvpn/caipirinha.spdns.org.crt
client-config-dir /etc/openvpn/conf-1194
crl-verify /root/pki/crl.pem
dev tun4
dh /root/pki/dh.pem
hand-window 90
ifconfig 192.168.10.1 255.255.255.0
ifconfig-pool 192.168.10.2 192.168.10.239 255.255.255.0
ifconfig-ipv6 fd01:0:0:a::1 fd00::2:3681:c4ff:fecb:5780
ifconfig-ipv6-pool fd01:0:0:a::2/112
ifconfig-pool-persist /etc/openvpn/ip-pool-1194.txt
keepalive 20 80
key /etc/openvpn/caipirinha.spdns.org.key
log /var/log/openvpn-1194.log
mode server
persist-key
persist-tun
port 1194
proto udp6
reneg-sec 86400
script-security 2
status /var/run/openvpn/status-1194
tls-server
topology subnet
up /etc/openvpn/start_vpn.sh
verb 1
writepid /var/run/openvpn/server-1194.pid
# VPN topology and default gateway
push "topology subnet"
push "route-gateway 192.168.10.1"
push "redirect-gateway def1 bypass-dhcp"
push "tun-ipv6"
push "route-ipv6 2000::/3"
# DNS servers
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
UDP-based VPN, split tunneling
# Configuration file for the openVPN server on CAIPIRINHA (UDP:4396)
ca /root/pki/ca.crt
cert /etc/openvpn/caipirinha.spdns.org.crt
client-config-dir /etc/openvpn/conf-4396
crl-verify /root/pki/crl.pem
dev tun7
dh /root/pki/dh.pem
hand-window 90
ifconfig 192.168.13.1 255.255.255.0
ifconfig-pool 192.168.13.2 192.168.13.239 255.255.255.0
ifconfig-ipv6 fd01:0:0:d::1 fd00::2:3681:c4ff:fecb:5780
ifconfig-ipv6-pool fd01:0:0:d::2/112
ifconfig-pool-persist /etc/openvpn/ip-pool-4396.txt
keepalive 20 80
key /etc/openvpn/caipirinha.spdns.org.key
log /var/log/openvpn-4396.log
mode server
persist-key
persist-tun
port 4396
proto udp6
reneg-sec 86400
script-security 2
status /var/run/openvpn/status-4396
tls-server
topology subnet
up /etc/openvpn/start_vpn.sh
verb 1
writepid /var/run/openvpn/server-4396.pid
# VPN topology and default gateway
push "topology subnet"
push "route-gateway 192.168.13.1"
push "tun-ipv6"
push "route-ipv6 2000::/3"
# Set routes to the internal network
push "route 192.168.2.0 255.255.255.0"
TCP-based VPN, full tunneling
# Configuration file for the openVPN server on CAIPIRINHA (TCP:8080)
ca /root/pki/ca.crt
cert /etc/openvpn/caipirinha.spdns.org.crt
client-config-dir /etc/openvpn/conf-8080
crl-verify /root/pki/crl.pem
dev tun5
dh /root/pki/dh.pem
hand-window 90
ifconfig 192.168.11.1 255.255.255.0
ifconfig-pool 192.168.11.2 192.168.11.239 255.255.255.0
ifconfig-ipv6 fd01:0:0:b::1 fd00::2:3681:c4ff:fecb:5780
ifconfig-ipv6-pool fd01:0:0:b::2/112
ifconfig-pool-persist /etc/openvpn/ip-pool-8080.txt
keepalive 20 80
key /etc/openvpn/caipirinha.spdns.org.key
log /var/log/openvpn-8080.log
mode server
persist-key
persist-tun
port 8080
proto tcp6-server
reneg-sec 86400
script-security 2
status /var/run/openvpn/status-8080
tls-server
topology subnet
up /etc/openvpn/start_vpn.sh
verb 1
writepid /var/run/openvpn/server-8080.pid
# VPN topology and default gateway
push "topology subnet"
push "route-gateway 192.168.11.1"
push "redirect-gateway def1 bypass-dhcp"
push "tun-ipv6"
push "route-ipv6 2000::/3"
# DNS servers
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
TCP-based VPN, split tunneling
# Configuration file for the openVPN server on CAIPIRINHA (TCP:8081)
ca /root/pki/ca.crt
cert /etc/openvpn/caipirinha.spdns.org.crt
client-config-dir /etc/openvpn/conf-8081
crl-verify /root/pki/crl.pem
dev tun6
dh /root/pki/dh.pem
hand-window 90
ifconfig 192.168.12.1 255.255.255.0
ifconfig-pool 192.168.12.2 192.168.12.239 255.255.255.0
ifconfig-ipv6 fd01:0:0:c::1 fd00::2:3681:c4ff:fecb:5780
ifconfig-ipv6-pool fd01:0:0:c::2/112
ifconfig-pool-persist /etc/openvpn/ip-pool-8081.txt
keepalive 20 80
key /etc/openvpn/caipirinha.spdns.org.key
log /var/log/openvpn-8081.log
mode server
persist-key
persist-tun
port 8081
proto tcp6-server
reneg-sec 86400
script-security 2
status /var/run/openvpn/status-8081
tls-server
topology subnet
up /etc/openvpn/start_vpn.sh
verb 1
writepid /var/run/openvpn/server-8081.pid
# VPN topology and default gateway
push "topology subnet"
push "route-gateway 192.168.12.1"
push "tun-ipv6"
push "route-ipv6 fd00:0:0:2::/64"
# Set routes to the internal network
push "route 192.168.2.0 255.255.255.0"
We shall now look at some configuration parameters and their meaning:
- cert, key: The location of the server certificate and the server key has to be listed.
- ca: The location of the certificate authority certificate has to be listed.
- crl-verify: This points to a certificate revocation list, which contains the certificates that were once issued for devices that have meanwhile been retired, or for users who only needed temporary VPN access (see the revocation sketch after this list).
- dev: This determines the tun device that shall be used for the connection. I recommend using dedicated tun devices for all VPNs rather than having them randomly assigned during start-up.
- ifconfig, ifconfig-pool: These determine the IPv4 address of the server and the pool from which IPv4 addresses are granted to the clients. I decided to use a different /24 network for each VPN configuration, that is, the networks 192.168.10.0/24, 192.168.11.0/24, 192.168.12.0/24, and 192.168.13.0/24. However, I did not use the full address range for dynamic allocation, as some of my VPN clients (smartphones, notebooks) get a dedicated client address so that I can easily tweak settings on the Linux server for those clients. These clients have a small, dedicated configuration file in the folder named in client-config-dir; such a dedicated configuration can be used to always allocate the same IP address to a certain VPN client.
- ifconfig-ipv6: The first parameter is the IPv6 address of the server, and the second IPv6 address is the one of the router to the internet; in that case, I put the SoHo router there (fd00::2:3681:c4ff:fecb:5780), see the image Configuration of the internal IPv6 network.
- ifconfig-ipv6-pool: This is the pool of IPv6 addresses that are granted to the clients. I follow a similar approach as with the IPv4 networks and set up separate networks for each VPN, that is, the networks fd01:0:0:a::2/112, fd01:0:0:b::2/112, fd01:0:0:c::2/112 and fd01:0:0:d::2/112. Keep in mind that the first address of the IP address pool is the one mentioned here, e.g., fd01:0:0:a:0:0:0:2, as fd01:0:0:a:0:0:0:1 is already used for the server.
- keepalive: Sets the interval of keep-alive pings and their timeout. This is useful because gateways between the VPN client and the VPN server might keep connections open and port allocations reserved only for some time before freeing them up. Ideally, you want to use the longest periods possible, as shorter periods create unnecessary traffic (and might eat up the data volume of mobile clients).
- port: While the standard port for openvpn is 1194, with more than one VPN you are better advised to use different, dedicated ports that are not used by other services on your server.
- proto: This determines the protocol used and is either udp6 or tcp6-server. The “6” in both arguments indicates that the service is provided on both the IPv4 and the IPv6 address; leaving the “6” away provides the service on the IPv4 address only.
- push “redirect-gateway def1 bypass-dhcp” tells the VPN client to redirect its default gateway through the VPN while bypassing the tunnel for DHCP queries. Otherwise, the client machine gets stuck when its local DHCP lease expires.
- push “route-ipv6 2000::/3” tells the VPN client machine to use the VPN for all IPv6 addresses within 2000::/3, which currently covers all globally routable IPv6 addresses [2].
- push “dhcp-option DNS 8.8.8.8” and push “dhcp-option DNS 8.8.4.4” set Google’s DNS servers for the VPN client machine, which gives us an excellent and fast DNS service.
- reneg-sec: With the specified 86400 seconds, new encryption keys are re-negotiated only once per day. For security reasons, a shorter period would be better, but some countries have put effort into detecting and blocking encrypted communication, and this detection works through the key exchange, which seems to have a characteristic bit pattern [6]; therefore, a longer period has been set here.
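When a device is retired, its certificate should be added to the revocation list that crl-verify points to. With easy-rsa, this is a two-step operation; a sketch, using one of the device names from above:
# Revoke the certificate of a retired device (name = common name used at creation):
easyrsa revoke gabriel-SM-T580
# Regenerate the CRL; with /root as the easy-rsa working directory, pki/crl.pem is
# exactly the file referenced by the crl-verify directive above:
easyrsa gen-crl
# Depending on the OpenVPN version, the running instances may need a restart
# before the new CRL is picked up.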
OpenVPN Client Configuration
The generation of client configuration files is explained in [11], and there are numerous guidelines on the internet. Therefore, I just want to give 2 examples and briefly point out some useful considerations.
UDP-based VPN, full tunneling
# Configuration file for the openVPN client on ...
client
dev tun
explicit-exit-notify
hand-window 90
keepalive 10 60
nobind
persist-key
persist-tun
proto udp
remote caipirinha.spdns.org 1194
remote-cert-tls server
reneg-sec 86400
script-security 2
verb 1
<ca>
-----BEGIN CERTIFICATE-----
MIIE2D...NNmlTg=
-----END CERTIFICATE-----
</ca>
<cert>
-----BEGIN CERTIFICATE-----
MIIFJj...nbuzbI=
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
MIIEvA...QcO+Q==
-----END PRIVATE KEY-----
</key>
TCP-based VPN, full tunneling
# Configuration file for the openVPN client on ...
client
dev tun
hand-window 90
keepalive 10 60
nobind
persist-key
persist-tun
proto tcp
remote caipirinha.spdns.org 8080
remote-cert-tls server
reneg-sec 86400
script-security 2
verb 1
<ca>
-----BEGIN CERTIFICATE-----
MIIE2D...NNmlTg=
-----END CERTIFICATE-----
</ca>
<cert>
-----BEGIN CERTIFICATE-----
MIIFJj...nbuzbI=
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
MIIEvA...QcO+Q==
-----END PRIVATE KEY-----
</key>
The following points proved to be useful for me:
- Rather than using separate files for the CA certificate, the client certificate, and the client key, all information can be packed into one file which then gives a complete client configuration. This eases installation on mobile clients (smartphones, tablets) as you do not have to consider path names on the target device. For security reasons, the respective code blocks are only shown in abbreviated form here.
- It is possible to use several remote statements. You can refer to different servers (if the client configuration is suitable for them) or name the very same server with different ports. On my server, several ports are open which are all forwarded to the same 4 server processes. The reason is that sometimes, the dedicated openvpn port 1194 is blocked in some geographies, but other ports might still work. In that case, you have to ensure that connections arriving on all these ports are mapped back to the port of the respective server process. You might also include the statement remote-random so that one of the listed remotes is chosen at random (see the sketch after this list).
- On the client, it is normally not necessary to bind the openvpn process to a certain tun device or a certain port, unlike on the VPN server.
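For illustration, a client excerpt with several remote statements might look as follows; the additional UDP port 53 is only an assumed example of a second externally opened port (it would have to be forwarded to the server process listening on 1194):
remote caipirinha.spdns.org 1194
remote caipirinha.spdns.org 53
remote-random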
Further Improvements (OpenVPN)
Different network conditions might require tweaking one or another parameter of the openvpn service. [8], [9], and [10] contain some indications, especially with respect to different hardware and network conditions.
Dedicated Configurations
As mentioned above, the client-config-dir directive can be used to refer to a folder that contains configurations for dedicated devices. An example is the file /etc/openvpn/conf-1194/gabriel-SM-N960F. It has the following content:
# Special configuration file for Gabriel's Galaxy Note 9 (gabriel-SM-N960F)
#
ifconfig-push 192.168.10.251 255.255.255.0
ifconfig-ipv6-push fd01:0:0:a:0:0:1:fb/111 fd01:0:0:a::1
This file makes the device with the client certificate named gabriel-SM-N960F always receive the same IP addresses, namely 192.168.10.251 (IPv4) and fd01:0:0:a:0:0:1:fb (IPv6). The name of the file must exactly match the VPN client’s common name (CN) that was defined when the client certificate was created [5].
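To double-check that a file name matches the certificate’s CN, the issued certificate can be inspected with openssl (a quick check; easy-rsa typically stores issued certificates under pki/issued/, so the path below is an assumption):
openssl x509 -in /root/pki/issued/gabriel-SM-N960F.crt -noout -subject
# Expected output similar to: subject=CN = gabriel-SM-N960F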
WireGuard Server Configuration
WireGuard is a newer, promising VPN protocol [15] which is not yet as widespread as openvpn; it may therefore “escape” censorship authorities more easily than openvpn, which can be detected by statistical analysis [16]. The setup of the server is described in [12] and [13]; a more complex configuration is described in [19]. A big advantage of WireGuard is its very lean code base; hence, performance on any given platform is higher than with openvpn.
I personally find it unusual that you have to list the clients in the server configuration file rather than having a general server configuration to which any number of allowed clients can connect. On my Linux server caipirinha.spdns.org, the WireGuard server configuration consists of 3 files:
- /etc/wireguard/wg_caipirinha_public.key is the public key of the service (the generation is described in [12], [13], [19]).
- /etc/wireguard/wg_caipirinha_private.key is the private key of the service (the generation is described in [12], [13], [19] and sketched after this list).
- /etc/wireguard/wg0.conf is the configuration file (the network device is named “wg0” on my machine).
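As referenced above, the key pair can be generated with the wg tool; a sketch matching the file names listed (umask 077 ensures the private key is readable by root only):
umask 077
wg genkey | tee /etc/wireguard/wg_caipirinha_private.key | wg pubkey > /etc/wireguard/wg_caipirinha_public.key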
Similar to the openvpn configurations described above, I dedicated a separate IPv4 and IPv6 subnet to the WireGuard server, in this case 192.168.14.0/24 and fd01:0:0:e::/64. The configuration file /etc/wireguard/wg0.conf is easy to understand and contains important parameters which shall be discussed below:
[Interface]
Address = 192.168.14.1/24,fd01:0:0:e::1/64
ListenPort = 44576
PrivateKey = SHo...
[Peer]
PublicKey = pjp2PEboXA4RJhVoybXKuicNkz4XDZaW+c9yLtJq1gE=
AllowedIPs = 192.168.14.2/32,fd01:0:0:e::2/128
PersistentKeepalive = 30
...
[Peer]
PublicKey = fcEcFYQ6cOqe7H9L2PvkM78mkKottJLnKwiqp4WO91s=
AllowedIPs = 192.168.14.7/32,fd01:0:0:e::7/128
PersistentKeepalive = 30
...
The section [Interface] describes the server setup:
- Address lists the server’s IPv4 and IPv6 addresses.
- ListenPort is the UDP port on which the service will listen.
- PrivateKey is the private server key (can be read from the file /etc/wireguard/wg_caipirinha_private.key). For security reasons, the key has only been displayed partly here.
Each section [Peer] lists a possible client configuration. If you want to enable 10 clients on your server, you therefore need 10 such sections.
- PublicKey is the public key of the client.
- AllowedIPs lists the IPv4 and IPv6 addresses which will be allocated to the client upon connection.
- PersistentKeepalive configures the interval in seconds at which keep-alive packets are exchanged between server and client. This helps to keep the connection state on gateways between the VPN client and the VPN server alive; firewalls and routers in between might otherwise delete the connection from their tables. A value of 25…30 is recommended.
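Once wg0.conf is complete, the interface can be brought up with the wg-quick helper from wireguard-tools and enabled at boot via its systemd template unit:
# Bring the interface up once for a test and inspect handshakes and transfer counters:
wg-quick up wg0
wg show wg0
# Take it down again and let systemd manage it from now on:
wg-quick down wg0
systemctl enable --now wg-quick@wg0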
WireGuard Client Configuration
WireGuard clients exist for all major operating systems. I would like to show a Windows 10 configuration that I set up on one of my notebooks according to [14].

As we can see, I also named the respective network interface on the client wg0, but you can use any other name, too. The detailed configuration of this single client connection is also easy to understand:
[Interface]
PrivateKey = 2B2...
Address = 192.168.14.7/32, fd01:0:0:e::7/128
DNS = 192.168.14.1, fd01:0:0:e::1
[Peer]
PublicKey = GvgCag5cvRaE18YUkAd+q/NSOb54JYvXhylm1oz8OxI=
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = caipirinha.spdns.org:44576
The section [Interface] describes the client setup:
- PrivateKey is the private key of the client which is generated in the application itself [14]. For security reasons, the key has only been displayed partly here.
- Address lists the client’s IPv4 and IPv6 addresses.
- DNS lists the DNS servers which shall be used while the VPN connection is active. In this case, the Linux server caipirinha.spdns.org runs a DNS service, hence I listed the server’s IPv4 and IPv6 addresses here. You might also use Google’s DNS servers 8.8.8.8 and 8.8.4.4, for example. It is important to list DNS servers here, as the ones you were using before the VPN was established (e.g., the router’s IP like 192.168.4.1) might no longer be reachable once the VPN is up.
The section [Peer] contains information related to the server.
- PublicKey is the public key of the server which is located on the server in the file /etc/wireguard/wg_caipirinha_public.key.
- AllowedIPs is set to 0.0.0.0/0, ::/0 on this client, which means that all traffic shall be sent via the VPN (full tunneling VPN). Here, you have the chance to move from a full tunneling VPN to a split tunneling VPN by listing subnets like 192.168.2.0/24 instead (see the sketch after this list).
- Endpoint lists the server FQDN and the port to which the client shall connect to.
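For the split-tunnel variant mentioned in the AllowedIPs item, the [Peer] section could be narrowed down to the home network and the WireGuard subnets themselves; a sketch based on the networks used in this article:
[Peer]
PublicKey = GvgCag5cvRaE18YUkAd+q/NSOb54JYvXhylm1oz8OxI=
AllowedIPs = 192.168.2.0/24, 192.168.14.0/24, fd01:0:0:e::/64
Endpoint = caipirinha.spdns.org:44576
Keeping 192.168.14.0/24 and fd01:0:0:e::/64 in the list ensures that the DNS servers configured above (192.168.14.1, fd01:0:0:e::1) remain reachable through the tunnel.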
In a similar way, I set up another client on an Android smartphone using the official WireGuard app from Google Play, following the configuration model at [23].

Let’s see how that works out in reality. For the experiment, I am in Brazil and connect to my Linux server caipirinha.spdns.org in Germany with the configurations described above. Once the connection has been established, I do a traceroute in Windows to www.google.com over IPv4 and IPv6:
C:\Users\Dell>tracert www.google.com
Routenverfolgung zu www.google.com [142.250.185.68]
über maximal 30 Hops:
1 235 ms 236 ms 236 ms CAIPIRINHA [192.168.14.1]
2 241 ms 238 ms 238 ms Router-EZ [192.168.2.1]
3 259 ms 248 ms 249 ms fra1813aihr002.versatel.de [62.214.63.145]
4 250 ms 249 ms 248 ms 62.214.38.105
5 249 ms 247 ms 249 ms 72.14.204.149
6 249 ms 250 ms 251 ms 72.14.204.148
7 249 ms 249 ms 248 ms 108.170.236.175
8 251 ms 249 ms 250 ms 142.250.62.151
9 248 ms 247 ms 247 ms fra16s48-in-f4.1e100.net [142.250.185.68]
Ablaufverfolgung beendet.
C:\Users\Dell>tracert -6 www.google.com
Routenverfolgung zu www.google.com [2a00:1450:4001:829::2004]
über maximal 30 Hops:
1 235 ms 235 ms 235 ms fd01:0:0:e::1
2 239 ms 239 ms 237 ms 2001:16b8:30b3:f100:3681:c4ff:fecb:5780
3 249 ms 249 ms 247 ms 2001:1438::62:214:63:145
4 248 ms 249 ms 247 ms 2001:1438:0:1::4:302
5 250 ms 248 ms 248 ms 2001:1438:1:1001::1
6 250 ms 249 ms 248 ms 2001:1438:1:1001::2
7 252 ms 250 ms 249 ms 2a00:1450:8163::1
8 250 ms 249 ms 250 ms 2001:4860:0:1::5894
9 248 ms 250 ms 251 ms 2001:4860:0:1::5009
10 250 ms 249 ms 249 ms fra24s06-in-x04.1e100.net [2a00:1450:4001:829::2004]
Ablaufverfolgung beendet.
C:\Users\Dell>
We can see a couple of interesting points:
- The latency from Brazil to my server is already above 200 ms; that is not a very competitive connection.
- Although there is a whole range of machines between my Linux server caipirinha.spdns.org and Google, these additional hops add only little latency.
Let’s now switch off the VPN and do a traceroute to www.google.com directly from the local ISP:
C:\Users\Dell>tracert www.google.com
Routenverfolgung zu www.google.com [172.217.29.100]
über maximal 30 Hops:
1 <1 ms <1 ms <1 ms fritz.box [192.168.4.1]
2 1 ms <1 ms <1 ms 192.168.100.1
3 4 ms 4 ms 3 ms 179-199-160-1.user.veloxzone.com.br [179.199.160.1]
4 5 ms 5 ms 4 ms 100.122.52.96
5 12 ms 6 ms 5 ms 100.122.25.245
6 11 ms 11 ms 11 ms 100.122.17.180
7 12 ms 12 ms 12 ms 100.122.18.52
8 12 ms 11 ms 11 ms 72.14.218.158
9 14 ms 13 ms 14 ms 108.170.248.225
10 15 ms 14 ms 14 ms 142.250.238.235
11 13 ms 13 ms 13 ms gru09s19-in-f100.1e100.net [172.217.29.100]
Ablaufverfolgung beendet.
For this test, only IPv4 was possible as I did not have IPv6 connectivity on the Ethernet port to which my notebook was connected. The overall connection is much faster, and we can clearly identify that it runs within the Brazilian internet (“179-199-160-1.user.veloxzone.com.br”).
Further Improvements (WireGuard)
A range of graphical user interfaces (GUIs) for the configuration of WireGuard have come up that seek to overcome the need to deal with various configuration files on the server and the client side and align public keys and IP addresses. [17] compares some GUIs for WireGuard, [18] shows a further possibility.
Configuring Network Address Translation (NAT)
In addition to the configuration of the VPNs (and their respective start on the VPN server), we need to set up network address translation so that connections from the VPN networks are translated to the SoHo network. This is done with the following sequence:
readonly STD_IF='eth0'
…
iptables -t nat -F
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o ${STD_IF} -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.11.0/24 -o ${STD_IF} -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.12.0/24 -o ${STD_IF} -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.13.0/24 -o ${STD_IF} -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.14.0/24 -o ${STD_IF} -j MASQUERADE
…
# Setup the NAT table for the VPNs.
ip6tables -t nat -F
ip6tables -t nat -A POSTROUTING -s fd01:0:0:a::/64 -o ${STD_IF} -j MASQUERADE
ip6tables -t nat -A POSTROUTING -s fd01:0:0:b::/64 -o ${STD_IF} -j MASQUERADE
ip6tables -t nat -A POSTROUTING -s fd01:0:0:c::/64 -o ${STD_IF} -j MASQUERADE
ip6tables -t nat -A POSTROUTING -s fd01:0:0:d::/64 -o ${STD_IF} -j MASQUERADE
ip6tables -t nat -A POSTROUTING -s fd01:0:0:e::/64 -o ${STD_IF} -j MASQUERADE
...
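Note that the MASQUERADE rules only take effect if the kernel forwards packets between interfaces at all. If forwarding is not already enabled, it can be switched on persistently via sysctl; a sketch (the file name is an example):
# Content of /etc/sysctl.d/99-vpn-forwarding.conf:
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
# Apply without a reboot:
sysctl --system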
Conclusion
The VPN configuration mentioned above shows how to set up different VPNs that allow dual stack operations on a Linux server. Thus, VPN clients can initiate connections in IPv4 or IPv6 mode using the assigned IP addresses from a private address space; the configured network address translation (NAT) translates the connections to a real-world IP address on the server.
Outlook
The VPN server can itself also act as a VPN client, so the connection from the original VPN client can be forwarded via other VPNs to other countries, allowing the original VPN client to appear in different geographies depending on the destination address of its outgoing connection. This can be useful to circumvent geo-blocking of media content, for example.
Sources
- [1] = Unique local address [Wikipedia]
- [2] = IPv6: Basics
- [3] = Stateless address autoconfiguration (SLAAC)
- [4] = OpenVPN / easy-rsa-old [GitHub] (superseded)
- [5] = Instalando e configurando o OpenVPN
- [6] = OpenVPN Traffic Identification Using Traffic Fingerprints and Statistical Characteristics
- [7] = Internet Censorship in China
- [8] = Improving OpenVPN performance and throughput
- [9] = Optimizing OpenVPN Throughput
- [10] = Optimizing performance on gigabit networks
- [11] = Setting up your own Certificate Authority (CA) and generating certificates and keys for an OpenVPN server and multiple clients
- [12] = WireGuard Quick Start
- [13] = How to setup a VPN server using WireGuard (with NAT and IPv6)
- [14] = How to configure a WireGuard Windows 10 VPN client
- [15] = WireGuard vs OpenVPN: 7 Big Difference
- [16] = OpenVPN Traffic Identification Using Traffic Fingerprints and Statistical Characteristics
- [17] = Wireguard GUIs im Vergleich
- [18] = WireGuard VPN Server mit Web Interface einrichten
- [19] = WireGuard [ArchWiki]
- [20] = Home – Easy RSA (easy-rsa.readthedocs.io)
- [21] = Easy-RSA – ArchWiki (archlinux.org)
- [22] = GitHub – OpenVPN/easy-rsa: easy-rsa – Simple shell based CA utility
- [23] = DualStack VPN mit Wireguard – sebastian heg.ge