IoT Telemetry and Control System
INGEGNERIA DELL’INFORMAZIONE
Project Report
Pisa, 2023
Contents
1. Introduction
2. Use-case scenario
3. Data Acquisition system
3.1 MQTT Sensor
3.2 RPL Border Router and MQTT Broker
3.3 Cloud Application
3.4 Grafana and data display
4. Temperature control system
4.1 Control Application
4.2 CoAP Actuator
5. Testing: Cooja simulation and nRF52840 deployment
6. Conclusions
1. Introduction
The objective of the project is to develop a telemetry system based on IoT devices.
The platform will use several nodes forming a Wireless Sensor Network (WSN) to collect
and transmit data (sensor readings) to a remote cloud application, which will store it in a
database. The system must also include a control application that reads the information stored
in the database and sends commands back to actuators deployed in the WSN. The overall
architecture of the system is illustrated in Figure 1.
To diversify the technologies used in the project, the Nodes deployed in the
monitoring WSN will use both the Message Queuing Telemetry Transport (MQTT) protocol
and the Constrained Application Protocol (CoAP) at the application layer. The sensor
Nodes will be MQTT clients, connected to an MQTT broker (Mosquitto), and the actuator
Nodes will be CoAP servers exposing a resource.
For the Cloud Application service, a Python-based module will subscribe to the
topic where the sensors are publishing and, as it receives the messages containing the
sensor readings, will write the information into a table in a MySQL database. Once the data
is stored, a Grafana dashboard will be used to generate a web page displaying the sensor
readings in a chart, allowing a user to observe and analyze the behavior of the system that
is being monitored.
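As a minimal sketch of this behavior (assuming the paho-mqtt 1.x and mysql-connector-python packages; the broker address, credentials and database name are placeholders, while the sensor_data table and its columns follow the query shown later in the report), the Cloud Application loop could look like this:

import json
import mysql.connector
import paho.mqtt.client as mqtt

# Placeholder connection parameters for the MySQL database.
db = mysql.connector.connect(host="localhost", user="iot", password="iot",
                             database="iot_telemetry")

INSERT_QUERY = ("INSERT INTO sensor_data (sensorid, section, datatype, data) "
                "VALUES (%s, %s, %s, %s)")

def on_message(client, userdata, msg):
    # Each MQTT message carries the JSON payload produced by a Sensor Node.
    payload = json.loads(msg.payload)
    cursor = db.cursor()
    cursor.execute(INSERT_QUERY, (payload["SensorID"], payload["SectionID"],
                                  payload["DataType"], payload["Data"]))
    db.commit()
    cursor.close()

client = mqtt.Client()                  # paho-mqtt 1.x client API
client.on_message = on_message
client.connect("localhost", 1883)       # address of the Mosquitto broker
client.subscribe("sensor/data")         # topic used by the Sensor Nodes
client.loop_forever()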
Also, a second Python module, called the Control Application, will periodically read
the data from the database (DB) and, using a simple control logic, will issue commands to the
deployed actuators. This control application will also allow users to manually send
messages to the actuators.
2. Use-case scenario
When growing crops indoors, in many cases there is a need to control the
temperature inside the greenhouses, to ensure that the plants can develop well and to
increase the crop yield. For example, when growing tomatoes indoors, the temperature
must be kept below a certain value so that the fruits are not damaged.
To provide such temperature regulation, the most common approach is to use a
combination of temperature sensors, to measure the local temperature around the crops, and a set
of cooling fans that can reduce the temperature inside the greenhouse. A closed-loop
control system is generally used to activate the cooling system when the
average temperature is above a certain value.
This type of use-case scenario can benefit from a modern wireless system that
can accurately collect the data and provide automatic control over a large monitoring area.
The proposed IoT telemetry system could help to improve the quality of the temperature
control over the crops and to provide better statistics and data that could be included in
the study of plant development. This type of platform can even reduce the cost of
deploying the monitoring system in a large farming area, as it eliminates the use of cables
and optimizes the activation of the fans, allowing remote control over the actuators.
The project then aims to provide a WSN and a control system that will
collect temperature data using Sensor Nodes and activate or deactivate a set of Actuator
Nodes deployed in the WSN, to reduce the temperature in the monitored area according
to a set threshold value.
To better adapt the described IoT platform to this application, and considering that
some of the monitored areas could be large, the sensors and actuators will be divided into
monitoring Sections. The control system then evaluates the average temperature of each
independent Section to decide whether it is necessary to activate the actuators and the cooling
systems in that specific Section. More details of the data formats and identifiers will be
explained in the Data Acquisition section of the report.
To illustrate the Node distribution in a possible scenario, Figure 3 shows two
perspectives of a greenhouse where the WSN is placed.
3. Data Acquisition system
3.1 MQTT Sensor
The Sensor Node will use an MQTT client program based on the example
provided by Contiki-NG. By exploiting the many network libraries that Contiki-NG provides,
the client node can be easily configured, as the network stack is handled in the lower
layers.
With that, a Finite State Machine (FSM) is used in the program to track the Sensor Node
connection state. The states are checked and updated in a loop that waits for an
event to occur. To generate the events, the MQTT function mqtt_register() registers
a set of callbacks that can trigger the FSM (used to identify when the Sensor Node has
established a successful connection with the broker or when it has been disconnected). In addition,
an event timer is used in the program (defined as process_timer), and its expiration time
is set according to the current state and the action required. With both these triggers, the FSM is
constantly polled, allowing the system to know its connection status and when to
publish a message containing the sensor data. The transitions between the states are then
driven by these callbacks and timer expirations.
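As an illustration, the state-machine logic can be sketched as follows (Python-style form for readability only; the node itself is a Contiki-NG C program, and the state names and event labels below are assumptions based on this description):

from enum import Enum, auto

class State(Enum):
    INIT = auto()
    CONNECTING = auto()
    CONNECTED = auto()
    DISCONNECTED = auto()

state = State.INIT

def mqtt_event(event):
    # Stand-in for the callbacks registered through mqtt_register() in the node.
    global state
    if event == "CONNACK":
        state = State.CONNECTED
    elif event == "DISCONNECT":
        state = State.DISCONNECTED

def on_timer_expired():
    # Stand-in for the periodic poll driven by process_timer.
    global state
    if state == State.INIT:
        state = State.CONNECTING      # issue the MQTT connect request here
    elif state == State.CONNECTED:
        pass                          # build the JSON frame and publish on "sensor/data"
    elif state == State.DISCONNECTED:
        state = State.INIT            # restart the connection procedure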
When connected, as mentioned above, the MQTT Sensor node will periodically
publish a message. This operation is configured to publish on the "sensor/data" topic. The
frame generated in the node provides all the necessary information for the Cloud
Application to identify the sender and the Section that the sensor reading comes from. The
data frame uses a JSON-encoded payload. An example payload is:
{
    "SectionID": 1,
    "SensorID": 101,
    "DataType": "Temperature",
    "Data": 17,
    "Platform": "cooja",
    "MsgNumber": 10,
    "Uptime (sec)": 82200
}
The SensorID and SectionID parameters are both hard-coded in the node program
and remain constant during operation (the nodes are not considered to be mobile).
When simulating the network in Cooja, a different approach was used: to obtain its
SensorID parameter, each node uses its link-local IPv6 address as a base to generate the
identification number. For the SectionID, nodes with even IPv6 addresses are assigned to Section
1, and nodes with odd IPv6 addresses are assigned to Section 2.
In the project, since the devices used (nRF52840) do not contain actual
temperature sensors, the data added to the messages is randomly generated and is used
only for test purposes and to demonstrate the platform operation.
3.2 RPL Border Router and MQTT Broker
The Border Router used in the project is the rpl-border-router example provided
by Contiki-NG. No modifications were made to the device, as it is only intended to provide
the WSN with access to the external network. For the MQTT Broker, the Mosquitto
MQTT broker software was used, as part of the project specifications. The broker
automatically creates the topic where the sensor nodes are publishing and retransmits the
messages to the subscribers. In the project deployment, the only subscriber to this topic will
be the Cloud Application.
3.4 Grafana and data display
Following the project instructions, a web-based visualization tool (Grafana) was used
to allow users to view and monitor the data collected by the temperature sensors. This
application allows the deployment of a local server that creates a web page showing
real-time charts with the sensor data (it can be accessed from a browser at the URL
localhost:3000). To produce the dashboards, the queries had to be adapted to the relational
database format used.
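As an illustration (not the exact query used in the project), a Grafana time-series panel over the sensor_data table could use a query of roughly this form, where $__timeFilter() is Grafana's built-in time-range macro for SQL data sources and the column names follow the table used elsewhere in the report:

SELECT
  timestamp AS "time",
  data AS "temperature",
  CONCAT('Section ', section) AS metric
FROM sensor_data
WHERE datatype = 'Temperature'
  AND $__timeFilter(timestamp)
ORDER BY timestamp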
4. Temperature control system
4.1 Control Application
The Control Application uses several Python functions to execute the necessary
queries on the DB, to retrieve the latest sensor readings, and to properly evaluate risk
conditions for the crops and set the actuators. The condition that triggers the activation of
the cooling system is the average temperature measurement in a Section being above a
certain threshold value.
To obtain the average temperature values, the Control Application will execute a loop
where, in every cycle, the component first sends a GET message to the CoAP server of each
Section, requesting the status of the cooling system (ON or OFF). If a reply message is
received, the terminal will print the information on each Section actuator.
After this initial message, the Control Application performs a query on the
sensor_data table of the DB. The query returns the last 10 entries in the table, and a
function parses and combines the temperature values for each Section and computes the
average value. The MySQL query designed for this operation is the following:
sensor_data_read_query = '''
SELECT sensorid, section, datatype, data
FROM sensor_data
ORDER BY timestamp DESC
LIMIT 10
'''
An important consideration is that the number of entries read on each cycle must
be compatible with the number of sensors deployed (in the project design for two Sections
with a total of six sensors, the number was set to 10 entries).
In addition, the delay between the queries must also be compatible with the
use-case application and should be less than or equal to PUBLISH_DATA_PERIOD (in
seconds); otherwise information may be lost, as the sensors will publish more than once
before the control system evaluates the average temperature. The monitoring cycle is
illustrated in Figure 10.
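One monitoring cycle can be sketched as follows (TEMP_THRESHOLD and the post_event helper are placeholders; the database connection db is assumed to come from the mysql-connector-python package, and the sensor_data table follows the query shown above):

from collections import defaultdict

TEMP_THRESHOLD = 25          # assumed threshold value, in degrees Celsius

sensor_data_read_query = '''
SELECT sensorid, section, datatype, data
FROM sensor_data
ORDER BY timestamp DESC
LIMIT 10
'''

def run_monitoring_cycle(db, post_event):
    # Read the latest entries, average them per Section and drive the actuators.
    cursor = db.cursor()
    cursor.execute(sensor_data_read_query)
    readings = defaultdict(list)
    for sensorid, section, datatype, data in cursor.fetchall():
        if datatype == "Temperature":
            readings[section].append(data)
    cursor.close()

    for section, values in readings.items():
        average = sum(values) / len(values)
        # Activate the cooling system only when the Section average exceeds the threshold.
        post_event(section, 1 if average > TEMP_THRESHOLD else 0)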
When the evaluation requires changing the state of a Section's cooling system, the Control Application sends a POST request to the corresponding actuator resource, for example:
POST: coap://[fd00::302:304:506:708]:5683/actuator/control?activation=0
When these messages are sent, the application updates its internal status only
if the CoAP server replies with an acknowledgement message. If the POST fails, the control
application will attempt to set the cooling system again on the next cycle, if it still detects a high
temperature value.
The control application can be set to automatically run the periodic temperature
checks, but the user can also use the available functions to transmit the POST and GET
messages manually and set the state of the actuators remotely. The two functions designed
are listed below, followed by a sketch of a possible implementation:
• post_event(section, action): this function takes the Section number parameter
(integer) and the activation code to be sent in the payload (either 0 or 1). It
generates a POST request to the actuator registered on that Section and prints the
reply message.
• get_actuator_status(section) : this function takes the Section number parameter
and sends a GET request to the resource. It will print the response message with
the actuator status.
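A possible implementation of these two functions is sketched below, assuming the aiocoap library; the address table mapping each Section to its actuator resource is a placeholder (only the first address appears in the report):

import asyncio
from aiocoap import Context, Message, GET, POST

# Placeholder mapping from Section number to the actuator resource address.
ACTUATORS = {
    1: "coap://[fd00::302:304:506:708]:5683/actuator/control",
    2: "coap://[fd00::302:304:506:709]:5683/actuator/control",   # hypothetical second actuator
}

async def _request(msg):
    context = await Context.create_client_context()
    response = await context.request(msg).response
    print(f"{response.code}: {response.payload.decode()}")

def post_event(section, action):
    # Send activation=0 or activation=1 to the actuator registered on the Section.
    uri = f"{ACTUATORS[section]}?activation={action}"
    asyncio.run(_request(Message(code=POST, uri=uri)))

def get_actuator_status(section):
    # Request the current status (ON, OFF or FAULT) of the Section actuator.
    asyncio.run(_request(Message(code=GET, uri=ACTUATORS[section])))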
4.2 CoAP Actuator
The CoAP actuator node is based on the Contiki-NG example of a CoAP
server. In this program a simple loop starts the server and keeps the system toggling a
LED, to indicate that it is waiting for commands. When initialized, the actuator is set to
the state OFF (the cooling system is suspended), and the RED LED is set to off.
A resource file is created in the resource folder and exposes a resource that accepts
two types of requests, either a GET message or a POST message. The resource uses
port 5683, and its path is defined as "actuator/control", as in the example:
coap://[fd00::302:304:506:708]:5683/actuator/control
When receiving a GET message, the CoAP Actuator must reply with a string
containing the actuator status and its Section identification. The possible states are ON,
OFF or FAULT (the latter is currently not in use but was kept as a possible expansion of
the actuator system).
If a POST message is received, the program checks for a mandatory parameter
named "activation" that must be sent in the message payload. This parameter can only
take two values, 0 or 1. When the actuator receives a POST message with
activation=1, it must update its internal state and set the cooling system to ON. The reply
sent to this POST message is the confirmation that the temperature control system has
been activated. In this state the node turns on the RED LED of the device.
If the parameter sent is activation=0, the CoAP Actuator must set its internal state
to OFF and reply with a confirmation message. The RED LED is off in this state.
The logic used in the actuator node was kept simple, in a way that prevents the
program from entering an unknown or error state due to a misconfiguration or a wrong
parameter. When receiving a payload with a wrong encoding or a missing parameter, the
node will respond to the message with an error text, but it will not change its internal state.
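The handler behavior can be summarized by the following sketch (Python-style form for readability only; the node implements this logic in C as a Contiki-NG resource handler, and the function and variable names are assumptions):

state = "OFF"   # internal actuator state; the RED LED mirrors it

def handle_post(payload_variables):
    # Decision logic of the actuator/control POST handler.
    global state
    activation = payload_variables.get("activation")
    if activation == "1":
        state = "ON"                     # cooling system activated, RED LED on
        return "2.04 Changed", "Cooling system ON"
    if activation == "0":
        state = "OFF"                    # cooling system suspended, RED LED off
        return "2.04 Changed", "Cooling system OFF"
    # Missing or invalid parameter: reply with an error, keep the current state.
    return "4.00 Bad Request", "Invalid or missing 'activation' parameter"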
To complement the operation of the actuator, a manual trigger was included in the
device program. This was done by adding an event triggered by pressing the device button.
When the button is pressed, the device toggles its internal state.
5. Testing: Cooja simulation and nRF52840 deployment
Once the data frames are received by the Cloud Application, the information in the payload
is printed. The Control Application has a similar behavior: it prints the reply messages
received from the CoAP server as the monitoring cycles are executed and the requests are
sent. Screenshots of the logs from those applications are shown in Figures 13 and 14.
Figure 13 – Cloud application terminal.
After the simulation in Cooja, the system was tested using the Nordic nRF52840
dongle boards. The overall setup for this test used three dongles: one operating as the border
router (necessary to convert the frames sent over the radio onto the USB port), one as a
Sensor Node and one as an Actuator Node. With that, the complete data path used by the
developed monitoring system could be tested. The behavior of the platform when
deployed on the dongles was similar to the simulation: the information generated by the
sensor node was properly sent to the Cloud Application and stored in the database, and
the commands from the Control Application were able to configure the state of the
actuators, demonstrating the complete operation of the telemetry and monitoring system.
An illustration of the test setup is shown in Figure 15.
Figure 15 – Dongle test setup.
6. Conclusions
The simulation environment shows that the integration between the data
acquisition system and the monitoring system works without issues. The Sensor
nodes were able to transmit the data frames over the WSN, and the DB access was properly
handled by the designed queries. The conclusion is that the platform
developed can accurately collect data from a WSN and send commands over the
network to control the remote devices. By studying the protocols used in the network and
physical layers, it is possible to conclude that this system could be deployed over a large
greenhouse environment, as the nodes were able to transmit the messages even when
positioned far away from the border router node.
To deploy the developed monitoring system in a production environment, some
additional features must be present, such as the ability to detect faults in the sensors and actuators
and to track the number of packets that are lost. A secure protocol can also be added to
the frames sent in the WSN to prevent intrusions and attacks. With some improvements,
the monitoring system can be adapted to provide a better quality of service for
applications similar to the use case defined in the project.