The Digital Shopfloor
John Soldatos, Oscar Lazaro and Franco Cavadini (Editors)
During the last couple of years, the digital transformation of industrial processes has been
propelled by the emergence and rise of the fourth industrial revolution (Industry 4.0).
The latter is based on the extensive deployment of Cyber-Physical Production Systems
(CPPS) and Industrial Internet of Things (IIoT) technologies in the manufacturing
shopfloor, as well as on the seamless and timely exchange of digital information across
supply chain participants. Despite early implementations and proof-of-concepts,
CPPS/IIoT deployments are still in their infancy for a number of reasons, including:
(i) Manufacturers’ poor awareness of digital manufacturing solutions and their
business value potential; (ii) The costs associated with the deployment,
maintenance and operation of CPPS systems; (iii) The time needed to implement
CPPS/IIoT and the lack of a smooth and proven migration path; (iv) The uncertainty
over the business benefits and impacts of IIoT and CPPS technologies; (v) The absence
Series Editors:
ISHWAR K. SETHI
Oakland University
USA
TAREK SOBH
University of Bridgeport
USA
Indexing: All books published in this series are submitted to the Web of Science
Book Citation Index (BkCI), to SCOPUS, to CrossRef and to Google Scholar for
evaluation and indexing.
Editors
John Soldatos
Athens Information Technology
Greece
Oscar Lazaro
Innovalia Association
Spain
Franco Cavadini
Synesis-Consortium
Italy
River Publishers
Published, sold and distributed by:
River Publishers
Alsbjergvej 10
9260 Gistrup
Denmark
River Publishers
Lange Geer 44
2611 PW Delft
The Netherlands
Tel.: +45369953197
www.riverpublishers.com
© The Editor(s) (if applicable) and The Author(s) 2019. This book is
published open access.
Open Access
This book is distributed under the terms of the Creative Commons Attribution-
NonCommercial 4.0 International License (CC-BY-NC 4.0) (http://creativecommons.org/
licenses/by/4.0/), which permits use, duplication, adaptation, distribution and
reproduction in any medium or format, as long as you give appropriate credit to the
original author(s) and the source, a link is provided to the Creative Commons license and
any changes made are indicated. The images or other third party material in this book
are included in the work’s Creative Commons license, unless indicated otherwise in the
credit line; if such material is not included in the work’s Creative Commons license and
the respective action is not permitted by statutory regulation, users will need to obtain
permission from the license holder to duplicate, adapt, or reproduce the material.
The use of general descriptive names, registered names, trademarks, service marks,
etc. in this publication does not imply, even in the absence of a specific statement, that
such names are exempt from the relevant protective laws and regulations and therefore
free for general use.
The publisher, the authors and the editors are safe to assume that the advice and
information in this book are believed to be true and accurate at the date of publication.
Neither the publisher nor the authors or the editors give a warranty, express or implied,
with respect to the material contained herein or for any errors or omissions that may have
been made.
Printed on acid-free paper.
Contents
Foreword xix
Preface xxiii
PART I
2 Open Automation Framework for Cognitive Manufacturing 27
Oscar Lazaro, Martijn Rooker, Begoña Laibarra, Anton Ružić,
Bojan Nemec and Aitor Gonzalez
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 28
PART II
PART III
16 Epilogue 447
Index 451
[Figure: key enablers and pathways, and cross-cutting factors — the industrial state of play in manufacturing; cases that illustrate the advanced state of the art; approaches and cases from research & innovation projects.]
Chris Decubber
Technical Director
European Factories of the Future Research Association
Brussels
April 4th, 2019
Preface
September 2018,
John Soldatos
Oscar Lazaro
Franco Cavadini
List of Contributors
List of Abbreviations
1 Introduction to Industry 4.0 and the Digital Shopfloor Vision
John Soldatos
1.1 Introduction
In the era of globalization, industrial organizations are under continuous
pressure to innovate, improve their competitiveness and perform better than
their competitors in the global market. Digital technologies are one of their
most powerful allies in these efforts, as they can help them increase
automation, eliminate error-prone processes, enhance their proactivity,
streamline their business operations, make their processes knowledge-intensive, reduce
costs, increase their smartness and overall do more with less. Moreover, the
technology acceleration trends provide them with a host of opportunities for
innovating in their processes and transforming their operations in a way that
results not only in marginal productivity improvements, but rather in a disrup-
tive paradigm shift in their operations. This is the reason why many industrial
organizations are heavily investing in the digitization of their processes as
part of a wider and strategic digital transformation agenda.
In this landscape, the term Industry 4.0 has been recently introduced.
This introduction signalled the “official” start of the fourth industrial
revolution, which is based on the deployment and use of Cyber-Physical
Systems (CPS) in industrial plants as a means of fostering the digitization,
automation and intelligence of industrial processes [1]. CPS facilitate
the connection between the physical world of machines, industrial
automation devices and Operational Technology (OT), and the world of
computers, cloud data centres and Information Technology (IT). In simple
terms, Industry 4.0 advocates the seamless connection of machines and
physical devices with the IT infrastructure, as a means of completely digitizing
industrial processes.
In recent years, Industry 4.0 has been used more widely, beyond CPS and
physical processes, as a means of signifying the disruptive power of digital
transformation in virtually all industries and application domains. For example,
terms like Healthcare 4.0 or Finance 4.0 are commonly used as derivatives
of Industry 4.0. Nevertheless, the origins of the term lie in the digitization
of industrial organizations and their processes, notably in the digitization
of factories and industrial plants. Note also that in most countries Industry
4.0 is used to signify the wider ecosystem of business actors, processes and
services that underpin the digital transformation of industrial organizations,
which makes it also a marketing concept rather than strictly a technological
one.
The present book refers to Industry 4.0 based on its original definition,
i.e. as the fourth industrial revolution in manufacturing and production,
aiming to present some tangible digital solutions for manufacturing, but
also to develop a vision for the future where plant operations will be fully
digitized. However, it also provides insights into the complementary assets that
should accompany technological developments towards successful adoption.
For example, the book presents concrete examples of such assets, including
migration services, training services and ecosystem-building efforts. This
chapter serves as a preamble to the entire book and has the following
objectives:
• To introduce the business motivation and main drivers behind
Industry 4.0 in manufacturing. Most of the systems and technologies
presented in this book are intended to help manufacturers confront
these business pressures and excel in the era of globalization and
technology acceleration.
• To present some of the main Industry 4.0 use cases in areas such as
industrial automation, enterprise maintenance and worker safety. These
use cases set the scene for understanding the functionalities and use of
the platforms that are presented in this book, including use cases that are
not explicitly presented as part of the subsequent chapters.
• To illustrate the main digital technologies that enable the platforms and
technologies presented in the book. Note that the book is about the
digitization of industrial processes and digital automation platforms,
rather than about IT technologies. However, in this first chapter, we pro-
vide readers with insights about which digital technologies are enabling
Industry 4.0 in manufacturing and how.
• To review the state of the art in digital automation platforms, including
information about legacy efforts for digitizing the shopfloor based on
technologies like Service Oriented Architectures (SOA) and intelligent
agents. It’s important to understand how we got to today’s digital
automation platforms and what is nowadays different from what has
been done in the past.
• To introduce the vision of a fully digital shopfloor that is driving the
collaboration of research projects that are contributing to this book. The
vision involves interconnection of all machines and complete digitiza-
tion of all processes in order to deliver the highest possible automation
with excellent quality at the same time, as part of a cognitive and
fully autonomous factory. It may take several years before this vision
is realized, but the main building blocks are already set in place and
presented as various chapters of the book.
In line with the above-listed objectives, the chapter is structured as follows:
• Section 2 presents the main business drivers behind Industry 4.0 and
illustrates some of the most prominent use cases, notably the ones with
proven business value;
• Section 3 discusses the digital technologies that underpin the fourth
industrial revolution and outlines their relation to the systems that are
presented in later chapters;
• Section 4 reviews the past and the present of digital automation plat-
forms, while also introducing the vision of a fully digital shopfloor;
• Section 5 is the final and concluding section of the chapter.
While the presented list of use cases is not exhaustive, it is certainly indicative
of the purpose and scope of most digital manufacturing deployments in recent
years. Later chapters in this book present practical examples of Industry 4.0
deployments that concern one or more of the above use cases. However,
we expect that these use cases will gradually expand in sophistication as
part of the digital shopfloor vision, which is illustrated in a later section
of this chapter. Moreover, we will see the interconnection and interaction
of these use cases as part of a more cognitive, autonomous and automated
factory, where automation configuration, supply chain flexibility, predictive
maintenance, worker training and safety, as well as digital twins co-exist and
complement each other.
• 5G: Emerging 5G networks come to enhance rather than replace the
connectivity capabilities that are currently provided by 4G and WiFi
technologies, notably in the direction of accurate item localization that
existing technologies cannot deliver.
• Cloud Computing: CPS manufacturing systems and applications are
very commonly deployed in the cloud, in order to take advantage of
the capacity, scalability and quality of service of cloud computing.
Moreover, manufacturers tend to deploy their enterprise systems in the
cloud. Likewise, state-of-the-art automation platforms (including some
of the platforms that are presented in this book) are cloud-based. In
the medium term, we will see most manufacturing applications in the
cloud, making cloud computing infrastructure an indispensable element
of Industry 4.0.
• Edge Computing: During the last couple of years, CPS and IIoT
deployments in factories have adopted the edge computing paradigm. The
latter complements the cloud with capabilities for fast (nearly real-time)
processing, which is performed close to the field rather than in the
cloud [2]. In an edge computing deployment, edge nodes are deployed
close to the field in order to support data filtering, local data processing,
as well as fast (real-time) actuation and control tasks. The edge
computing paradigm is promoted by the major reference architectures
for IIoT and Industry 4.0, such as the Industrial Internet Consortium
Reference Architecture (IIRA) and the Reference Architecture of the
OpenFog Consortium.
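As a rough sketch of this pattern — illustrative only, with made-up names (EdgeNode, THRESHOLD) that do not belong to IIRA or the OpenFog specification — an edge node can filter readings locally, trigger fast threshold-based actions and forward only aggregates to the cloud:

```python
from statistics import mean

THRESHOLD = 80.0  # hypothetical alarm threshold for a temperature sensor

class EdgeNode:
    """Toy edge node: local filtering and fast actuation close to the field,
    with only condensed aggregates forwarded to the cloud."""

    def __init__(self, batch_size=10):
        self.batch_size = batch_size
        self.buffer = []     # raw readings, kept at the edge
        self.alerts = []     # fast (real-time) local actuation decisions
        self.to_cloud = []   # aggregates forwarded for heavy cloud analytics

    def ingest(self, reading):
        self.buffer.append(reading)
        if reading > THRESHOLD:            # local control path, no cloud round-trip
            self.alerts.append(reading)
        if len(self.buffer) == self.batch_size:
            self.to_cloud.append(mean(self.buffer))  # data filtering/aggregation
            self.buffer.clear()

node = EdgeNode()
for r in [70, 72, 85, 71, 69, 73, 74, 90, 68, 70]:
    node.ingest(r)
# node.alerts == [85, 90]; node.to_cloud == [74.2]
```

The design choice mirrors the text: latency-critical decisions never leave the edge node, while the cloud receives only the condensed view it needs for analytics.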
• Big Data: The vast majority of Industry 4.0 use cases are data-intensive,
as they involve many data flows from multiple heterogeneous data
sources, including streaming data sources. In other words, several
Industry 4.0 use cases are based on datasets that feature the 4Vs
(Volume, Variety, Velocity, Veracity) of Big Data. As mentioned in earlier
sections, predictive maintenance is a classic example of a Big Data
use case, as it combines multi-sensor data with data from enterprise
systems in a single processing pipeline. Therefore, the evolution of Big Data
technologies and tools is a key enabler of the fourth industrial revolution.
Industry 4.0 is typically empowered by Big Data technologies for data
collection, consolidation and storage, given that industrial use cases need
to bring together multiple fragmented datasets and to store them in a
reliable and cost-effective fashion. However, the business value of these
data lies in their analysis, which is indicative of the importance of Big
Data analytics techniques, including machine learning techniques.
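To make the predictive-maintenance example concrete, the minimal sketch below joins a multi-sensor stream with records from an enterprise maintenance system in a single pipeline; the machine names, thresholds and units are fabricated for illustration only:

```python
from collections import defaultdict

# multi-sensor stream: (machine_id, vibration in mm/s) — fabricated readings
sensor_stream = [("M1", 2.1), ("M2", 7.8), ("M1", 2.3), ("M2", 8.1), ("M3", 1.0)]
# enterprise system: days since last service per machine — fabricated records
maintenance_records = {"M1": 12, "M2": 340, "M3": 5}

def flag_for_maintenance(stream, records, vib_limit=5.0, max_service_age=180):
    """Combine sensor data with enterprise data in one processing pipeline:
    flag machines whose average vibration is high AND whose service is overdue."""
    readings = defaultdict(list)
    for machine, vib in stream:
        readings[machine].append(vib)
    flagged = []
    for machine, vibs in readings.items():
        avg = sum(vibs) / len(vibs)
        if avg > vib_limit and records.get(machine, 0) > max_service_age:
            flagged.append(machine)
    return flagged

flag_for_maintenance(sensor_stream, maintenance_records)  # → ["M2"]
```

The point is the join itself: neither the sensor stream nor the enterprise record alone justifies an intervention, but their combination in one pipeline does.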
1.5 Conclusion
This chapter has introduced Industry 4.0 in general and digital automation
platforms in particular, which are at the core of the book. Our introduction
to Industry 4.0 has underlined some of the proven and most prominent
use cases that are being implemented as part of early deployments. Special
emphasis has been given to use cases associated with flexible automation,
worker training and safety, predictive maintenance, quality management,
digital simulations and more. Basic knowledge of these use cases is a
key prerequisite for understanding the automation use cases and applications
that are presented in subsequent chapters of the book.
The chapter has also presented the most widely used digital technologies
in the scope of Industry 4.0. Emphasis has been put on illustrating the
relevance of each technology to Industry 4.0 use cases, but also on presenting
how their evolution will impact deployment and adoption of CPS manufac-
turing systems. This discussion of digital technologies is also a prerequisite
for understanding the details of the digital solutions that are presented in
subsequent chapters. This is particularly important, given that no chapter
of the book presents digital technologies in detail. Rather, the emphasis of
the book is on presenting advanced manufacturing solutions based on digital
automation platforms that leverage the above-listed digital technologies.
Despite early deployments and the emergence of various digital automa-
tion platforms, the Industry 4.0 vision is still in the early stages. In the
medium- and long-term, different technologies and platforms will be inte-
grated towards a fully digital shopfloor, which supports the digital trans-
formation of industrial processes end-to-end. The vision of a fully digital
shopfloor entails the interoperability and interconnection of multiple digitally
enhanced machines, in line with the needs of end-to-end automation processes
within the factory. As part of this book, we present several automation
approaches and functionalities, including field control, data analytics and
digital simulations. In the future digital shopfloor, these functionalities will
co-exist and seamlessly interoperate in order to enable fully autonomous,
intelligent and resource efficient factories. With this wider vision in mind,
readers could focus on the more fine-grained descriptions of platforms and
technologies presented in subsequent chapters.
References
[1] A. Gilchrist, ‘Industry 4.0: The Industrial Internet of Things’, 1st ed.,
Apress, June 28, 2016, ISBN-13: 978-1484220467.
[2] F. Bonomi, R. Milito, J. Zhu, S. Addepalli, ‘Fog computing and its role
in the internet of things’, Proceedings of the first edition of the MCC
workshop on Mobile cloud computing, MCC ’12, pp. 13–16, 2012.
[3] F. Jammes and H. Smit, ‘Service-Oriented Paradigms in Industrial
Automation’, IEEE Transactions on Industrial Informatics, vol. 1,
no. 1, pp. 62–70, Feb. 2005.
[4] T. Haluška, R. Pauliček and P. Važan, ‘SOA as a Possible Way to
Heal Manufacturing Industry’, International Journal of Computer and
Communication Engineering, vol. 1, no. 2, July 2012.
[5] A. W. Colombo, T. Bangemann, S. Karnouskos, J. Delsing, P. Stluka,
R. Harrison, F. Jammes, J. L. Martínez Lastra (eds.), ‘Industrial
Cloud-Based Cyber-Physical Systems: The IMC-AESOP Approach’,
Springer, 245 p., 2014.
[6] P. Leitão, ‘Agent-based distributed manufacturing control: A state-of-
the-art survey’, Engineering Applications of Artificial Intelligence,
vol. 22, no. 7, pp. 979–991, Oct. 2009.
[7] P. Vrba, ‘Review of Industrial Applications of Multi-agent Technolo-
gies’, Service Orientation in Holonic and Multi Agent Manufacturing
and Robotics, Studies in Computational Intelligence, vol. 472, Springer,
pp. 327–338, 2013.
[8] Tapia, S. Rodríguez, J. Bajo and J. Corchado, ‘FUSION@, A
SOA-Based Multi-agent Architecture’, in International Symposium on
Distributed Computing and Artificial Intelligence 2008 (DCAI 2008),
vol. 50, J. Corchado, S. Rodríguez, J. Llinas and J. Molina, Eds.,
Springer Berlin/Heidelberg, 2009, pp. 99–107.
[9] F. Basile, P. Chiacchio and D. Gerbasio, ‘On the Implementation of
Industrial Automation Systems Based on PLC’, IEEE Transactions
on Automation Science and Engineering, vol. 10, no. 4, pp. 990–1003,
Oct. 2013.
[10] W. Dai, V. Vyatkin, J. Christensen, V. Dubinin, ‘Bridging Service-
Oriented Architecture and IEC 61499 for Flexibility and Interoperabil-
ity’, IEEE Transactions on Industrial Informatics, vol. 11, no. 3,
pp. 771–781, DOI: 10.1109/TII.2015.2423495, 2015.
To meet the requirements of both large and small firms, this chapter
elaborates on the proposal of a holistic framework for the smart integration
of well-established, SME-friendly digital frameworks, such as the
ROS-supported robotic Reconcell framework, the FIWARE-enabled data-driven
BEinCPPS/MIDIH Cyber-Physical Production frameworks and OpenFog [3]
compliant open-control hardware frameworks. The chapter demonstrates how
the AUTOWARE digital abilities are able to support automatic awareness, a
first step in the support of autonomous manufacturing capabilities in the
digital shopfloor. It also demonstrates how the framework can
be populated with additional digital abilities to support the development of
advanced predictive maintenance strategies, such as those proposed by the Zbre4k
project.
2.1 Introduction
SMEs are a pillar of the European economy and a key stakeholder for a
successful digital transformation of European industry. In fact, manufacturing
is the second most important sector in terms of small and medium-sized
enterprises’ (SMEs) employment and value added in Europe [1]. SMEs
constitute over 80% of all manufacturing companies and represent 59% of
total employment in this sector.
In an increasingly global competition arena, companies need to respond
quickly, and in an economically feasible way, to market requirements. In terms of
market trends, a growing product variety and mass customization are leading
to demand-driven approaches. Industry in general, and SMEs in particular,
face significant challenges in dealing with the evolution of the automation
solutions (equipment, instrumentation and manufacturing processes) they
must support in order to respond to demand-driven approaches. Increasing
and abrupt changes in market demand, intensified by the manufacturing
trends of mass customization and individualization and coupled with pressure
to reduce production costs, imply that manufacturing configurations
need to change more frequently and dynamically.
Current practice is such that a production system is designed and optimized
to execute the exact same process over and over again. Given these
growing dynamics and major driving trends, the planning and control
of production systems have become increasingly complex with respect to
flexibility and productivity, as well as the decreasing predictability of processes. It
is well accepted that every production system should pursue the following
three main objectives: (1) providing capability for rapid responsiveness,
(2) enhancement of product quality and (3) production at low cost. On the
one hand, these requirements have been traditionally satisfied through highly
stable and repeatable processes with the support of traditional automation
pyramids. On the other hand, these requirements can be achieved by creating
short response times to deviations in the production system, the production
process or the configuration of the product, in coherence with overall
performance targets. In order to obtain short response times, high process
transparency and reliable provisioning of the required information to the point
of need, at the correct time and without human intervention, are essential.
However, the success of those adaptive and responsive production sys-
tems highly depends on real-time and operation-synchronous information
from the production system, the production process and the individual
product. Nevertheless, it can be stated that the concept of fully automated
production systems is no longer a viable vision, as it has been shown that
conventional automation is not able to deal with the ever-rising complexity
of modern production systems. In particular, the high reactivity, agility
and adaptability required by modern production systems can only be reached
by human operators with their immense cognitive capabilities, which enable
them to react to unpredictable situations, to plan their further actions, to learn,
to gain experience and to communicate with others. Thus, new concepts
are required, which apply these cognitive principles to support autonomy in
the planning processes and control systems of production systems. Open and
smart cyber-physical systems (CPS) are considered to be the next (r)evolution
in industrial automation linked to the Industry 4.0 manufacturing transformation,
with enormous business potential enabling novel business models for integrated
services and products. Today, the trend goes towards open CPS devices,
and there is a strong demand for open platforms that act as a computational
basis which can be extended during manufacturing operation. However, the
full potential of open CPS has yet to be realized in the context of
cognitive autonomous production systems.
In fact, for SMEs in particular, it still seems difficult to understand the
driving forces and most suitable strategies behind shopfloor digitalization, and
how they can increase their competitiveness by making use of the vast variety of
individualized products and solutions available to digitize their manufacturing
processes, making them cognitive, smart and compliant with the Industry 4.0
reference architecture RAMI 4.0 and IEC 62443/ISA99. Moreover, as SMEs
intend to adopt data-intensive collaborative robotics and modular manufacturing
systems, making their advanced manufacturing processes more competitive, they
face additional challenges in the implementation of “cloudified” automation
processes. While the building blocks for digital automation are available, it is
up to the SMEs to align, connect and integrate them to meet the needs of their
individual advanced manufacturing processes, leading to difficult and costly
digital automation platform adoption.
This chapter presents the AUTOWARE architecture, a concerted effort
of a group of European companies under the Digital Shopfloor Alliance
(DSA) [12] to provide an open, consolidated architecture that aligns currently
disconnected open architectural approaches with the European reference
architecture for Industry 4.0 (RAMI 4.0), in order to lower the barrier for small,
medium- and micro-sized enterprises (SMMEs) in the development and
incremental deployment of cognitive digital automation solutions for next-
generation autonomous manufacturing processes. This chapter is organized
as follows. Section 2.2 presents the background and state of the art on
open digital manufacturing platforms, with a particular focus on European
initiatives. Section 2.3 introduces the AUTOWARE open OS building blocks
and discusses their mapping to RAMI 4.0, the Reference Architecture for
Manufacturing Industry 4.0. Then, Section 2.4 exemplifies how the AUTOWARE
platform can be tailored and customized to advanced predictive maintenance
services. Finally, the chapter concludes with the main features of the
AUTOWARE open automation framework.
On the other hand, there are the so-called “active” elements inside the different
layers, which are called Industry 4.0 components (I4.0 components). I4.0
components are also objects, but they have the ability to interact with other
elements, and can be summarized as follows: (1) they provide data and functions
within an information system about an object, even a complex one; (2) they
expose one or more end-points through which their data and functions can be
accessed and (3) they have to follow a common semantic model.
Therefore, the goal of the RAMI 4.0 framework is to define how I4.0
components communicate and interact with each other and how they can be
coordinated to achieve the objectives set by the manufacturing companies.
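The three properties of an I4.0 component listed above can be sketched in a few lines; the class below is our own toy illustration (the names I40Component, SEMANTIC_KEYS and the dictionary-based “semantic model” are simplifications, not part of the RAMI 4.0 specification):

```python
class I40Component:
    # (3) a common semantic model that every component description must follow
    SEMANTIC_KEYS = {"id", "type", "properties"}

    def __init__(self, asset_id, asset_type, properties):
        # (1) data and functions about a (possibly complex) object
        self.description = {"id": asset_id, "type": asset_type,
                            "properties": dict(properties)}
        assert set(self.description) == self.SEMANTIC_KEYS

    def endpoint(self, query):
        # (2) a single end-point through which data and functions are accessed
        if query == "describe":
            return self.description
        return self.description["properties"][query]

press = I40Component("press-01", "HydraulicPress", {"max_force_kn": 250})
press.endpoint("max_force_kn")   # → 250
```

Interaction between components then reduces to one component querying another’s end-point, with the shared semantic model guaranteeing that both sides interpret the description the same way.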
Figure 2.4 Digital manufacturing platform landscape. Adapted from [14] and [15].
The Industrial Data Space can be seen as a blueprint for secure data exchange
and efficient data combination. Figure 2.5 illustrates the technical architecture
of the Industrial Data Space.
The Industrial Data Space fosters secure data exchange among its participants,
while at the same time ensuring data sovereignty for the participating
data owners. The Industrial Data Space Association defines the framework
and governance principles for the Reference Architecture Model, as well as
interfaces, aiming to establish an international standard that considers
the following user requirements: (1) data sovereignty; (2) data usage control;
(3) decentralized approach; (4) multiple implementations; (5) standardized
interfaces; (6) certification; (7) data economy and (8) secure data supply
chains.
In compliance with common system architecture models and standards
(such as ISO 42010, 4+1 view model, etc.), the Reference Architecture Model
uses a five-layer structure expressing stakeholder concerns and viewpoints at
different levels of granularity (see Figure 2.6).
The IDS reference architecture consists of the following layers:
• The business layer specifies and categorizes the different stakeholders
(namely the roles) of the Industrial Data Space, including their activities
and the interactions among them.
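Two of the requirements above, data sovereignty (1) and data usage control (2), can be illustrated with a minimal sketch; all names (UsagePolicy, request_data) are hypothetical, and the real Industrial Data Space connector components are far richer than this:

```python
from dataclasses import dataclass, field

@dataclass
class UsagePolicy:
    """Usage conditions that the data owner attaches to a dataset."""
    allowed_consumers: set = field(default_factory=set)
    allow_redistribution: bool = False

def request_data(consumer, dataset, policy):
    """Release data only to consumers authorised by the owner; the policy
    travels with the data, so usage control follows it downstream."""
    if consumer not in policy.allowed_consumers:
        raise PermissionError(f"{consumer} is not authorised by the data owner")
    return {"payload": dataset, "policy": policy}

policy = UsagePolicy(allowed_consumers={"supplier-A"})
exchange = request_data("supplier-A", [1, 2, 3], policy)   # succeeds
# request_data("supplier-B", [1, 2, 3], policy)            # raises PermissionError
```

The key idea, mirrored from the text, is that control stays with the data owner: the policy is not a one-time gate but metadata that accompanies the data through the exchange.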
Figure 2.8 Digital shopfloor visions for autonomous modular manufacturing, assembly and
collaborative robotics.
2.3 AUTOWARE Framework for Digital Shopfloor Automation
RAMI 4.0), taking into consideration the industrial requirements from sev-
eral use cases, aiming to provide a solution-oriented framework for digital
shopfloor automation. Figure 2.10 shows the AUTOWARE framework with
its main components.
From a technical perspective, the AUTOWARE framework offers many
features and concepts that are of great importance for cognitive manufacturing,
in particular for the automatic awareness abilities at which AUTOWARE
primarily aims:
• Open platform. Platforms contain different technology building blocks,
with communication and computation instances that have strong
virtualization properties with respect to both safety and security for the
cloudification of CPS services.
• Reference architecture. Platforms focused on the harmonization of
reference models for the cloudification of CPS services follow a
template-style approach, allowing flexible application of an architectural
design for the implementation of cognitive manufacturing solutions, e.g.
predictive maintenance, zero-defect manufacturing, energy efficiency.
• Connectivity to IoT. Multi-level operation (edge, cloud) and function
virtualization through open interfaces allow native support for service
connection to and disconnection from the platform, orchestrating and
provisioning services efficiently and effectively.
For all of them, different modeling approaches are available. The goal of
these modeling approaches is to ease the task of the end user, system developer
or system integrator in developing the tools or technologies for the different levels.
Additionally, it could be possible to have modeling approaches that take the
different layers into account and make it easier for users to model the
interaction between the different layers.
The AUTOWARE reference architecture also represents the two data
domains that the architecture anticipates, namely the data-in-motion and
data-at-rest domains. These domains are also matched in the architecture with the
types of services (automation, analysis and learning/simulation) that are also
pillars of the RA. The model also represents the layers of the RA where such
services could be executed, with the support of the fog/cloud computing and
persistence services (blue pillar in Figure 2.11).
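One illustrative way to encode this mapping — our own reading of the text, not an official AUTOWARE artefact — is a simple lookup from service type to data domain and execution layer:

```python
# Illustrative mapping of AUTOWARE service types to data domains and layers.
SERVICE_DOMAINS = {
    "automation": "data-in-motion",          # low-latency, close to the field
    "analysis": "data-at-rest",              # runs over persisted data
    "learning/simulation": "data-at-rest",
}

def execution_layer(service_type):
    """Place data-in-motion services at the edge, data-at-rest services in fog/cloud."""
    domain = SERVICE_DOMAINS[service_type]
    return "edge" if domain == "data-in-motion" else "fog/cloud"

execution_layer("automation")           # → "edge"
execution_layer("learning/simulation")  # → "fog/cloud"
```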
and fog technologies and smart machines as a common system. All these
form a set of integrated automatic awareness technologies, which,
as shown in Figure 2.12, adopt the i-ROS-ready reconfigurable robotic cell
and collaborative robotic bi-manipulation technology, smart product memory
technology, OpenFog edge computing and virtualization technology,
5G-ready distributed data processing and reliable wireless mobile networking
technologies, OPC-UA compliant Time Sensitive Networking (TSN) technology,
deep object recognition technology and ETSI CIM-ready FIWARE
Context Brokering technology.
On the other hand, the AUTOWARE digital ability framework additionally
provides automatic awareness usability services, intended for more
cost-effective, fast and usable modeling, programming and configuration of
integrated solutions based on the AUTOWARE technologies enabling automatic
digital shopfloor awareness. This includes, for instance, augmented
virtuality services, CPPS-trusted auto-configuration services or robot
programming-by-training services.
Through its digital abilities, AUTOWARE provides the means for the
deployment of completely open digital shopfloor automation solutions for
fast data connection across factory systems (from shop floor to office floor)
and across value chains (in cooperation with component and machine OEM
smart services and knowledge). The AUTOWARE added value is not only to
deliver a layered model for the four layers of the digital business ecosystem
discussed in Section 2.2 for the digital shopfloor (smart space, smart product,
smart data and smart service), but more importantly to provide an open and
flexible approach, with suitable interfaces to commercial platforms, that allows
the implementation of collective and collaborative services based on trusted
information spaces and extensive exploitation of digital twin capabilities,
machine models and operational footprints.
The third element in the AUTOWARE digital ability is the provision of validation and verification (V&V) services for digital shopfloor solutions, i.e. CPPS. Although CPPS are specified to work correctly under a range of environmental conditions, in practice it is enough that they work properly under the specific conditions of their deployment. In this context, certification processes help to guarantee correct operation under given conditions, making the engineering process easier, cheaper and shorter for SMEs that want to include CPPS in their businesses. In addition, certification can increase the credibility and visibility of a CPPS, as it guarantees correct operation under specific standards. If a CPPS is certified to follow some international or European standards or
2.3 Autoware Framework for Digital Shopfloor Automation 51
• End Users (SME): The main target group of the AUTOWARE project is
SMEs (small and medium-sized enterprises) that are looking to change
their production according to Industry 4.0, CPPS and Internet of Things
(IoT). These SMEs are considered the end users of the AUTOWARE developments; they do not have to use all of the developed technologies, but may be interested only in a subset of them.
• Software Developers: As the AUTOWARE platform is an open plat-
form, software developers can create new applications that can run on
the AUTOWARE system. To support these users in their work, the system provides a high level of usability and intuitiveness, so that software developers can program the system to their wishes.
• Technology Developers: The individual technical enablers can each be used as a single technology, but, being open, they can also be integrated into other technologies by technology developers. The technology must be open and, once again, intuitive to re-use in different applications. Technology developers can then easily use the AUTOWARE technology to develop new technologies for their applications and create new markets for the AUTOWARE results.
• Integrator: The integrator is responsible for the integration of the
technologies into the whole manufacturing chain. To target this user
group, the technologies must support open interfaces, so the system can
intuitively be integrated into the existing chain. The advantage of the
open interfaces is that the integrator is not bound to a certain brand or
vendor.
• Policy Makers: Policy makers can make or break a technology. To
increase the acceptance rate, the exploitation and dissemination of the
technology must be at a professional level, and additionally, the tech-
nology must be validated, supporting the right standards and targeting
the right problems currently present on the market. Policy makers can push technologies further into the market and act as a large catalyst for new technologies.
• HW Developers: For hardware developers, it is important to know what
kind of hardware is required for the usage of the different technologies.
In the ideal case, all kinds of legacy hardware are capable of interacting with new hardware, but unfortunately, this is not always the case.
• Automation Equipment Providers: The technologies developed
within the AUTOWARE project can be of interest to other automa-
tion equipment providers, e.g. robot providers, industrial controller
providers, sensor providers, etc.
Figure 2.14 Context Broker basic workflow & FIWARE Context Broker Architecture.
Figure 2.15 Embedding of the fog node into the AUTOWARE software-defined platform as
part of the cloud/fog computing & persistence service support.
In the next step, through the IDS Connectors connected to the Workcell layer components, the data (normally preprocessed by the Workcell components) is sent to (published in) the Orion Context Broker. The different components from the factory layer that are subscribed to each data set will receive it for their analysis and processing. The factory services components,
which are divided into Learning, Simulating and Cognitive Computing Ser-
vices, may require processed data from another factory layer service. The
outputs from factory layer components that are required as inputs by other
factory layer components will be published once again in the Orion Context
Broker in the Workcell. The factory layer components that need those outputs
as inputs will be subscribed to that data and will receive it. That is how the
communication and information flow will be carried out through the different
hierarchical levels.
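The publish/subscribe flow above can be sketched against the NGSI-v2 payload shapes that the Orion Context Broker expects. The broker URL, the entity id/type and the "temperature" attribute below are illustrative assumptions for the sake of the sketch, not values prescribed by AUTOWARE; only the payload structure follows the NGSI-v2 interface.

```python
# Sketch of the Workcell -> Orion Context Broker -> factory-layer flow.
# Broker URL, entity ids/types and attribute names are hypothetical.

BROKER = "http://orion.factory.local:1026"  # assumed broker endpoint


def entity_payload(station_id: str, temperature: float) -> dict:
    """NGSI-v2 entity published (via an IDS Connector) for a Workcell component."""
    return {
        "id": station_id,
        "type": "WorkcellStation",
        "temperature": {"value": temperature, "type": "Number"},
    }


def subscription_payload(callback_url: str) -> dict:
    """NGSI-v2 subscription by which a factory-layer service receives updates."""
    return {
        "subject": {
            "entities": [{"idPattern": "urn:station:.*", "type": "WorkcellStation"}],
            "condition": {"attrs": ["temperature"]},
        },
        "notification": {"http": {"url": callback_url}, "attrs": ["temperature"]},
    }


# Publishing/subscribing is then plain HTTP, e.g. with the requests library:
#   requests.post(f"{BROKER}/v2/entities", json=entity_payload("urn:station:01", 21.7))
#   requests.post(f"{BROKER}/v2/subscriptions", json=subscription_payload(...))
```

The same pattern repeats at each hierarchical level: a component publishes its outputs as entity updates, and any component needing them subscribes to the corresponding attributes.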
The Learning, Simulating and Cognitive Computing Services will end
up creating valuable information as outputs that will be published in the
Plant Network’s Orion Context Broker. The different Business Management
Services will collect the information required as inputs for their processing
and will elaborate reports, actions, alerts, decision support actions, etc. Dual
Reality and Modelling Services will also gather information and will process
it to give extra support information for business management decision making
and user interfaces by publishing it back in the Plant’s Orion Context Broker.
[Figure: component diagram showing modules C01 i-Like, C02 Cognitive Embedded, C04 DSS, C06 VRfx SW, C10 M3 SW, M3-Box, Machine Data Base and Condition Monitoring.]
Open Automation Framework for Cognitive Manufacturing
2.5 Conclusions
In this chapter, we have discussed the need for the development of a digital automation framework for the support of autonomous digital manufacturing workflows. We have also presented how various open platforms
(i-ROS, OpenFog, IDS, FIWARE, BeinCPPS, MIDIH, ReconCell, Arrow-
head, OPC-UA/TSN, 5G) can be harmonized through open APIs to deliver
a software-defined digital shopfloor platform enabling a more cost-effective, controllable and extendable deployment of digital abilities in the shopfloor, in close alignment with business strategies and the investments available. This chapter
has also presented how AUTOWARE brings forward the technology enablers (connectivity, data distribution, edge extension of automation and control equipment for “app-ized” operation of smart open control hardware (open trusted platforms), deep object recognition), the usability services (augmented virtuality, CPPS auto-configuration, robot programming by training) and the verification and validation framework (safety and standard compliant) needed for the deployment and operation of automatic awareness digital abilities, as a first step in the cognitive autonomous digital shopfloor evolution. We have presented
how open platforms for fog/edge computing can be combined with cloudified
control solutions and open platforms for collaborative robotics, modular
manufacturing and reconfigurable cells for delivery of advanced manufac-
turing capabilities in SMEs. Moreover, we have also presented how the
AUTOWARE framework is flexible enough to be adopted and enriched with
additional digital capability services to support advanced and collaborative
predictive maintenance decision workflows. AUTOWARE is adapted for the operation of predictive maintenance strategies on a high diversity of machinery
(robotic systems, inline quality control equipment, injection molding, stamp-
ing press, high-performance smart tooling/dies and fixtures), very challenging
and sometimes critical manufacturing processes (highly automated packaging
industry, multi-stage zero-defect adaptive manufacturing of structural lightweight components for the automotive industry, short-batch mass-customized
production process for consumer electronics and health sector) and key
economic European sectors with the strongest SME presence (automotive,
food and beverage, consumer electronics).
Acknowledgments
This work was funded by the European Commission through the
FoF-RIA Project AUTOWARE: Wireless Autonomous, Reliable and
Resilient Production Operation Architecture for Cognitive Manufacturing
(No. 723909) and through the FoF-IA Project Zbre4k: Strategies and
Predictive Maintenance models wrapped around physical systems for
Zero-unexpected-Breakdowns and increased operating life of Factories
(No. 768869).
3
Reference Architecture for Factory Automation
using Edge Computing
Mauro Isaja
This chapter will introduce the reader to the FAR-EDGE Reference Archi-
tecture (RA): the conceptual framework that, in the scope of the FAR-EDGE
project, was used as the blueprint for the proof-of-concept implementation of
a novel edge computing platform for factory automation: the FAR-EDGE
Platform. Such a platform is intended to prove edge computing’s potential to increase flexibility and lower costs, without compromising on production
time and quality. The FAR-EDGE RA exploits best practices and lessons
learned in similar contexts by the global community of system architects
(e.g., Industrie 4.0, Industrial Internet Consortium) and provides a terse
representation of concepts, roles, structure and behaviour of the system under
analysis. Its unique approach to edge computing is centered on the use of
distributed ledger technology (DLT) and smart contracts – better known
under the collective label of Blockchain. The FAR-EDGE project is exploring
the use of Blockchain as a key enabling technology for industrial automation,
analytics and virtualization, with validation use cases executed in real-world
environments that are briefly described at the end of the chapter.
72 Reference Architecture for Factory Automation using Edge Computing
1. https://www.zvei.org/en/subjects/industry-4-0/
2. http://www.plattform-i40.de/I40/Navigation/EN/Home/home.html
3.3.3 IIRA
The Industrial Internet Reference Architecture (IIRA)3 has been developed
and is actively maintained by the Industrial Internet Consortium (IIC), a
global community of organizations (>250 members, including IBM, Intel,
Cisco, Samsung, Huawei, Microsoft, Oracle, SAP, Boeing, Siemens, Bosch
and General Electric) committed to the wider and better adoption of the
3. http://www.iiconsortium.org/IIRA.htm
3.3 State of the Art in Reference Architectures 77
Internet of Things by the industry at large. The IIRA, first published in 2015
and since evolved into version 1.8 (Jan 2017), is a standards-based architec-
tural template and methodology for the design of Industrial Internet Systems
(IIS). Being an RA, it provides an ontology of IIS and some architectural
patterns, encouraging the reuse of common building blocks and promoting
interoperability. It is worth noting that a collaboration between the IIC and
Platform Industrie 4.0, with the purpose of harmonizing RAMI 4.0 and IIRA,
has been announced.4
IIRA has four separate but interrelated viewpoints, defined by identify-
ing the relevant stakeholders of IIoT use cases and determining the proper
framing of concerns. These viewpoints are: business, usage, functional and
implementation.
• The business viewpoint attends to the concerns of the identification
of stakeholders and their business vision, values and objectives. These
concerns are of particular interest to decision-makers, product managers
and system engineers.
• The usage viewpoint addresses the concerns of expected system usage. It is typically represented as sequences of activities, involving human or logical users, that deliver the intended functionality and ultimately achieve the fundamental system capabilities.
• The functional viewpoint focuses on the functional components in a
system, their interrelation and structure, the interfaces and interactions
between them and the relation and interactions of the system with
external elements in the environment.
• The implementation viewpoint deals with the technologies needed to
implement functional components, their communication schemes and
their life cycle procedures.
In FAR-EDGE, which deals with platforms rather than solutions, the
functional and implementation viewpoints are the most useful.
The functional viewpoint decomposes an IIS into functional domains,
which are, following a bottom-up order, control, operations, information,
application and business. Of particular interest in FAR-EDGE are the first
three.
The control domain represents functions that are performed by industrial
control systems: reading data from sensors, applying rules and logic and exer-
cising control over the physical system through actuators. Both accuracy and
4. http://www.iiconsortium.org/iic-and-i40.htm – to date, no concrete outcomes of such collaboration have been published.
3.3.4 OpenFog RA
The OpenFog Consortium5 is a public–private initiative, born in 2015, which shares similarities with the IIC: both consortia have big players like IBM, Microsoft, Intel and Cisco as their founding members, and both use the
ISO/IEC/IEEE 42010:2011 international standard6 for communicating archi-
tecture descriptions to stakeholders. However, the OpenFog initiative is not
constrained to any specific sector: it is a technology-oriented ecosystem that
fosters the adoption of fog computing in order to solve the bandwidth, latency
5. https://www.openfogconsortium.org/
6. https://www.iso.org/standard/50508.html
7. The term conveys the concept of cloud computing moved to ground level.
8. https://www.openfogconsortium.org/ra/
3.4 FAR-EDGE Reference Architecture 81
The Field Tier is in Plant Scope. Individual ENs are connected to the
digital world in the upper Tiers either directly by means of the shopfloor’s
LAN, or indirectly through some special-purpose local network (e.g., WSN)
that is bridged to the former.
From the RAMI 4.0 perspective, the FAR-EDGE Field Tier corresponds
to the Field Device and Control Device levels on the Hierarchy
axis (IEC-62264/IEC-61512), while the entities there contained are
positioned across the Asset and Integration Layers.
From the RAMI 4.0 perspective, the FAR-EDGE Gateway Tier cor-
responds to the Station and Work Centre levels on the Hierarchy
axis (IEC-62264/IEC-61512), while the EGs there contained are posi-
tioned across the Asset, Integration and Communication Layers.
Edge Processes running on EGs, however, map to the Information and
Functional Layers.
Ledger Service. Another use case may come from the Automation Functional
Domain, demonstrating how the Ledger Tier can also be leveraged from
the Field: a smart machine with embedded plug-and-produce functionality
(Smart Object) can ask permission to join the system by making a service call
and then, having received the green light, can dynamically deploy its own specific
Ledger Service for publishing its current state and/or receiving external
high-level commands.
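As a rough illustration of that plug-and-produce handshake, the sketch below models the admission check and the subsequent deployment of a machine-specific Ledger Service. All class and field names here are hypothetical, and the vendor-based trust check merely stands in for whatever certificate-based verification a real implementation would use.

```python
class LedgerTier:
    """Toy stand-in for the FAR-EDGE Ledger Tier's admission logic (illustrative only)."""

    def __init__(self, trusted_vendors):
        self.trusted_vendors = set(trusted_vendors)
        self.services = {}  # machine_id -> deployed Ledger Service (here: a state dict)

    def request_join(self, machine_id, vendor):
        # Permission check is a placeholder; a real system would verify credentials.
        return vendor in self.trusted_vendors

    def deploy_service(self, machine_id, initial_state):
        # The machine's own Ledger Service publishes its current state.
        self.services[machine_id] = dict(initial_state)


class SmartMachine:
    """A smart machine with embedded plug-and-produce functionality (Smart Object)."""

    def __init__(self, machine_id, vendor):
        self.machine_id, self.vendor = machine_id, vendor

    def plug_and_produce(self, ledger):
        if ledger.request_join(self.machine_id, self.vendor):   # ask permission
            ledger.deploy_service(self.machine_id, {"state": "idle"})  # deploy own service
            return True
        return False
```

A machine admitted this way can then update its published state or receive external high-level commands through its deployed service.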
The Ledger Tier lies across the Plant and the Enterprise Ecosystem
Scopes, as it can provide support to any Tier. The physical location of Peer
Nodes, which implement smart contracts and the distributed ledger, is not
defined by the FAR-EDGE RA as it depends on implementation choices. For
instance, some implementations may use EGs and even some of the more
capable ENs in the role of Peer Nodes; others may separate concerns, relying
on specialized computing nodes that are deployed on the Cloud.
From the RAMI 4.0 perspective, the FAR-EDGE Ledger Tier corre-
sponds to the Work Centre, Enterprise and Connected World lev-
els on the Hierarchy axis (IEC-62264/IEC-61512), while the Ledger
Services there contained are positioned across the Information and
Functional Layers.
From the RAMI 4.0 perspective, the FAR-EDGE Cloud Tier corresponds
to the Work Centre, Enterprise and Connected World levels on the
Hierarchy axis (IEC-62264/IEC-61512), while the Cloud Services and
Applications there contained are positioned across the Information,
Functional and Business Layers.
as we will see further on. That said, cryptocurrencies are problematic for
many reasons, including regulatory compliance, and hinder the adoption of
the Blockchain in the corporate world.
Another key point of Blockchain technology that is worth mentioning is
the problem of transaction finality. Most BFT implementations rely on forks
to resolve conflicts between peer nodes: when two incompatible opinions
on the validity of some transaction exist, the log is split into two branches,
each corresponding to one alternate vision of reality, i.e., of system state. The
other nodes of the network will then have to choose which branch is the valid
one, and will do this by appending their new blocks to the “right” branch
only. Over time, consensus will coalesce on one branch (the one having
more new blocks appended), and the losing branch will be abandoned. While
this scheme is indeed effective for achieving BFT in public networks, it has
one important consequence: there is no absolute guarantee that a committed
transaction will stay so, because it may be deemed invalid after it is written to
the log. In other words, it may appear only on the “bad” branch of a fork and
be reverted when the conflict is resolved. Clearly enough, this behaviour of
the Blockchain is not acceptable in scenarios where a committed transaction
has side effects on other systems.
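The fork-resolution scheme just described can be reduced to a few lines: the branch that accumulates more blocks wins, and any transaction that exists only on the losing branch is reverted, which is exactly why a committed transaction has no absolute finality. This is a didactic sketch, not a real consensus implementation.

```python
def resolve_fork(branch_a, branch_b):
    """Longest-branch rule: the branch with more blocks wins; the other is abandoned.
    A 'branch' is a list of blocks, each block a list of transaction ids."""
    winner, loser = (
        (branch_a, branch_b) if len(branch_a) >= len(branch_b) else (branch_b, branch_a)
    )
    committed = {tx for block in winner for tx in block}
    # Transactions that only ever appeared on the losing branch are reverted:
    reverted = {tx for block in loser for tx in block} - committed
    return committed, reverted
```

Running this on two branches that share a common prefix and then diverge shows the problem: a transaction that was "committed" on the shorter branch simply disappears once consensus coalesces on the longer one.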
This is how first-generation Blockchains work. For all these reasons,
public Blockchains are, at least to date, extremely inefficient for common
online transaction processing (OLTP) tasks. This is most unfortunate, because
second-generation platforms like Ethereum have introduced the smart con-
tract concept. Smart contracts were initially conceived as a way for users to
define their custom business logic for transactions, i.e., making the Blockchain
“smarter” by extending or even replacing the built-in logic of the platform. It
then became clear that smart contracts, if properly leveraged, could also turn
a Blockchain into a distributed computing platform with unlimited potential.
However, distributed applications would still have to deal with the scalability,
responsiveness and transaction finality of the underlying BFT engine, which
significantly limits the range of possible use cases.
To tackle this problem, the developer community is currently treading
two separate paths: upgrading the BFT architecture on the one hand and relaxing functional requirements on the other. The former approach is
ambitious but slow and difficult: it is followed by a third generation of
Blockchain platforms that are proposing some innovative solutions, although
transaction finality still appears to be an open point nearly everywhere. The
latter is much easier: if we can assume some limited degree of trust between
parties, we can radically simplify the BFT architecture and thus remove the
worst bottlenecks. From this reasoning, an entirely new species was born
in recent years: permissioned Blockchains. Given their simpler architec-
ture, commercial-grade permissioned Blockchains are already available today
(e.g., Hyperledger, Corda), as opposed to third-generation ones (e.g., EOS,
NEO) which are still experimental.
users, who were eager to tackle some concrete problems and experiment with
some new ideas. The general framework of this exercise is described here.
As explained, the main objective in FAR-EDGE is to achieve flexibility in
the factory through the decentralization of production systems. The catalyst
of this transformation is the Blockchain, which – if used as a computing
platform rather than a distributed ledger – allows the virtualization of the
automation pyramid. The Blockchain provides a common virtual space where
data can be securely shared and business logic can be consistently run.
That said, users can leverage this opportunity in two ways: one easier but
somewhat limited approach, and the other more difficult and more ambitious
approach.
The easier approach is of the brown-field type: just migrate (some of) the
factory’s centralized monitoring and control functionality to Ledger Services
on the Ledger Tier. Thanks to the Gateway Tier, legacy centralized services
can be “impersonated” on a local scale by Edge Gateways: the shopfloor – the hardest environment to tamper with in a production facility – is left
untouched. The main advantages of this configuration are the mitigation of
performance bottlenecks (heavy network traffic is confined locally, workload
is spread across multiple computing nodes) and added resiliency (segments
of the shopfloor can still be functional when temporarily disconnected from
the main network). Flexibility is also enhanced, but on a coarse-grained scale:
modularity is achieved by grouping a number of shopfloor Edge Nodes under
the umbrella of one Edge Gateway, so that they all together become a single
“module” with some degree of self-contained intelligence and autonomy.
Advanced Industry 4.0 scenarios like plug-and-produce are out of reach.
The more ambitious approach is also a much more difficult and risky
endeavour in real-world business, being of the green-field type. It is about
delegating responsibility to Smart Objects on the shopfloor, which communi-
cate with each other through the mediation of the Ledger Tier. The business
logic in Ledger Services is of higher level with respect to the previous
scenario: more about governance and orchestration than direct control. The
Gateway Tier has a marginal role, mostly confined to Big Data analytics. In
this configuration, central bottlenecks are totally removed and the degree of
flexibility is extreme. The price to pay is that a complete overhaul of the
shopfloor of existing factories is required, replacing PLC-based automation
with intelligent machines.
In FAR-EDGE, both paths are explored with different use cases combining automation, analytics and simulation. We give here one full example of each type.
3.5 Key Enabling Technologies for Decentralization 97
The first use case follows the brown-field approach. The legacy envi-
ronment is an assembly facility for industrial vehicles. The pilot is called
mass-customization: the name refers to the capability of the factory assembly
line to handle individually customized products having a high level of variety.
If implemented successfully, mass-customization can give a strategic advan-
tage to target niche markets and meet diverse customer needs in a timely
fashion. In particular, the pilot factory produces highly customized trucks.
The product specification is defined by up to 800 unique variants, and the
final assembly includes approximately 7000 manufacturing operations and
handles a very high degree of geometrical variety (axle configurations, fuel
tank positions etc.). Despite the high level of variety in the standard product,
at some production sites, 60% of the produced trucks have unique customer
adaptations.
In the pilot factory, the main assembly line is sequential but feeds a
number of finishing lines that work in parallel. In particular, the wheel
alignment verification is done on the finishing assembly line and is one of
the last active checks done on trucks before they leave the plant. This opens
up an opportunity to optimize the workload. In the as-is scenario, wheel
alignment stations are statically configured to accommodate specific truck
model ranges: products must be routed to a matching station on arrival,
creating a potential bottleneck if model variety is not optimal. As part of
the configuration, a handheld nut runner tool needs to be instructed as to the
torque force to apply.
In the to-be solution, according to the FAR-EDGE architectural blueprint,
each wheel alignment station is represented at the Gateway Tier level by a
dedicated Edge Gateway box. The EG runs some simple ad-hoc automation
software that integrates the Field systems attached to the station (e.g., a
barcode reader, the smart nut runner) using standard IoT protocols like
MQTT. The EG also runs a peer node that is a member of the logical Ledger
Tier. A custom Ledger Service deployed on the Ledger Tier implements the
business logic of the use case. The instruction set for the products to be
processed is sent in JSON format to the Ledger Service, once per day, by
the central ERP-MES systems: from that point and until a new production
plan is published, the Ledger and Gateway Tiers are autonomous.
When a new truck reaches the end of the main line, it is dispatched
to the first finishing line available, achieving the desired result of product
flow optimization. Then, when it reaches the wheel alignment station, the
chassis ID is scanned by a barcode reader and a request for instructions
is sent, through the automation layer on the EG, to the Ledger Service.
The Ledger Service will retrieve the instruction set from the production
plan – which is saved on the Ledger itself – by matching the chassis ID.
When the automation layer receives the instructions set, it parses the specific
configuration parameters of interest and sends them to the nut runner, which
adjusts itself. The wheel alignment operations will then proceed as usual.
A record of the actual operations performed, which may differ from those in
the instruction set, is finally sent back to the Ledger and used to update the
production plan. An overall view of the use case is given in Figure 3.7.
While the product flow optimization mentioned above is the immediate
result of the pilot, there are some additional benefits to be gained either as a
by-product or as planned extensions.
First, the wheel alignment station, together with its EG box, becomes an
autonomous module that can be easily added/removed and even relocated in
a different environment. This scenario is not as far-fetched as it may seem,
because it actually comes from a business requirement: the company has a
number of production sites in different locations all over the world, each with
their own unique MES maps. The deployment of a new module with different
MES maps is currently a difficult and costly process.
Second, in the future, the truck itself may become a Smart Object that
communicates directly with the Ledger Tier. Truck–Ledger interactions will
happen throughout the entire life cycle of the truck – from manufacturing to
operation and until decommissioning – with the Ledger maintaining a digital
twin of the truck.
The second use case follows instead the heavyweight green-field
approach. The pilot belongs to a white goods (i.e., domestic appliances)
factory. The objective of the pilot is “reshoring”, which in the FAR-EDGE
context means enabling the company to move production back from off-
shore locations, thanks to a better support for the rapid deployment of new
technologies (i.e., shopfloor Smart Objects) offered by the more advanced
domestic plants. In this particular plant, a 1 km long conveyor belt moves
pallets of finished products from the factory to a warehouse, where they are
either stocked or forwarded for immediate delivery. The factory/warehouse
conveyor is not only a physical boundary, but also an administrative one, as
the two facilities are under the responsibility of two different business units.
Moreover, once the pallet is loaded on a delivery vehicle, it comes under the
responsibility of a third party who operates the delivery business.
In the as-is scenario, the conveyor feeds 19 shipping bays, or “lanes”,
in the warehouse. Each lane is simply a dead-end conveyor segment, where
pallets are dropped in by the conveyor and retrieved by a manually operated
forklift (basically, a FIFO queue). Simple mechanical actuators do the
physical routing of the pallets, controlled by logic that runs on a central
“sorter” PLC. The sorting logic is very simple: it is based on a production
schedule that is defined once per day and on static mappings of the lanes to
product types and/or final destinations. This approach has one big problem:
production cannot be dynamically tuned to match business changes, or at
least only to a very limited extent, because the fixed dispatching scheme
downstream cannot sync with it. The problem is not only in software: the
physical layout of the system is fixed.
In the to-be solution, the shipping bays become Smart Objects that can be
plugged in and out at need (see Figure 3.8). They embed simple sensors that
detect the number of pallets currently in their local queue, and a controller
board that runs some custom automation logic and connects directly to the
Ledger Tier (i.e., without the mediation of an Edge Gateway). A custom
Ledger Service acts as a coordination hub: it is responsible for authorizing a new “smart bay” that advertises itself to join the system (plug-and-produce) and, once the bay is accepted, for applying the sorting logic. The sorting is based on the current
state of the main conveyor belt, where incoming and outgoing pallets are
individually identified by an RFID tag, and on “capability update” messages
that are sent by smart bays each time they undergo an internal state change
(e.g., number of free slots in the local queue, preference for a product
type). The production schedule is not required at all, because sorting is calculated only on the actual state.
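The coordination logic described above can be sketched as follows. The bay identifiers, the capability fields and the preference-first routing rule are illustrative assumptions, not the pilot's actual algorithm.

```python
class BayCoordinator:
    """Toy sketch of the coordinating Ledger Service (names and fields assumed)."""

    def __init__(self):
        self.bays = {}  # bay_id -> {"free_slots": int, "prefers": product type or None}

    def authorize(self, bay_id):
        # Plug-and-produce: a smart bay advertising itself is admitted to the system.
        self.bays[bay_id] = {"free_slots": 0, "prefers": None}

    def capability_update(self, bay_id, free_slots, prefers=None):
        # Sent by a smart bay whenever its internal state changes.
        self.bays[bay_id] = {"free_slots": free_slots, "prefers": prefers}

    def route(self, product_type):
        """Pick a bay from current state only: prefer a matching bay with free
        slots, otherwise fall back to any bay with free slots."""
        candidates = [(bid, c) for bid, c in self.bays.items() if c["free_slots"] > 0]
        for bid, c in candidates:
            if c["prefers"] == product_type:
                return bid
        return candidates[0][0] if candidates else None
```

Because routing reads only the latest capability updates, bays can be plugged in and out at need and no daily schedule has to be distributed to the shopfloor.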
3.6 Conclusions
FAR-EDGE is one of the few ongoing initiatives that focus on edge com-
puting for factory automation, similarly to the IIC’s edge intelligence testbed
and EdgeX Foundry. However, the FAR-EDGE RA introduces some unique concepts, in particular the notion of a special logical layer, the Ledger Tier,
that is responsible for sharing process state and enforcing business rules
across the computing nodes of a distributed system, thus permitting virtual
automation and analytics processes that span multiple nodes – or, from a
bottom-up perspective, autonomous nodes that cooperate towards a common goal.
This new kind of architectural layer stems from the availability of Blockchain
technology, which, while being well understood and thoroughly tested in
mission-critical areas like digital currencies, has never been applied before
to factory automation.
References
[1] Karsten Schweichhart: Reference Architectural Model Industrie 4.0 –
An Introduction, April 2016, Deutsche Telekom, online resource:
https://ec.europa.eu/futurium/en/system/files/ged/a2-schweichhart-reference_architectural_model_industrie_4.0_rami_4.0.pdf
[2] Dagmar Dirzus, Gunther Koschnick: Reference Architectural Model
Industrie 4.0 – Status Report, July 2015, VDI/ZVEI, online resource:
https://www.zvei.org/fileadmin/user_upload/Themen/Industrie_4.0/Das_Referenzarchitekturmodell_RAMI_4.0_und_die_Industrie_4.0-Komponente/pdf/5305_Publikation_GMA_Status_Report_ZVEI_Reference_Architecture_Model.pdf
[3] H. Halpin, M. Piekarska, “Introduction to Security and Privacy on
the Blockchain”, IEEE European Symposium on Security and Privacy
Workshops (EuroS & PW), Paris, 2017, pp. 1–3.
[4] L. Lamport, R. Shostak, M. Pease, “The Byzantine Generals problem”,
ACM Transactions on Programming Languages and Systems, vol. 4,
no. 3, pp. 382–401, 1982.
[5] Z. Zheng, S. Xie, H. Dai, X. Chen, H. Wang, “An overview of
Blockchain technology: architecture, consensus, and future trends”,
in Proceedings of the IEEE 6th International Congress on Big Data, 2017.
[6] T. Dinh, J. Wang, G. Chen, R. Liu, C. Ooi, K. L. Tan, “BLOCKBENCH:
a framework for analyzing private Blockchains”, unpublished, 2017.
Retrieved from: https://arxiv.org/pdf/1703.04057.pdf
4
IEC-61499 Distributed Automation
for the Next Generation
of Manufacturing Systems
fostering the creation of a digital ecosystem that could go beyond the cur-
rent limits of manufacturing control systems and propose an ever-growing
market of innovative solutions for the design, engineering, production and
maintenance of plants’ automation.
4.1 Introduction
European leadership and excellence in manufacturing are being significantly
threatened by the deep economic crisis that has hit Western countries in
recent years. More sustainable and efficient production systems, able to keep
pace with the market evolution, are fundamental in the recovery plan aimed
at innovating the European competitive landscape. An essential ingredient
for a winning innovation path is a more aware and widespread use of ICT in
manufacturing-related processes.
In fact, the rapid advances in ubiquitous computational power, coupled
with the opportunities of de-localizing into the Cloud parts of an ICT
framework, have the potential to give rise to a new generation of service-
based industrial automation systems, whose local intelligence (for real-time
management and orchestration of manufacturing tasks) can be dynamically
linked to runtime functionalities residing in-Cloud (an ecosystem where those
functionalities can be developed and sold). Building on the already existing
and implemented IEC-61499 standard, these new “Cyber Physical Systems”
will adopt an open and fully interoperable automation language (dissolving
the borders between the physical shop floor and the cyber-world), to enable
their seamless interaction and orchestration, while still allowing proprietary
development for their embedded mechanisms.
These CPS based on real-time distributed intelligence, enhanced by
functional extensions into the Cloud, will lead to a new information-
driven automation infrastructure, where the traditional hierarchical view
of a factory functional architecture is complemented by a direct access
to the on-board services (non-real-time) exposed by the Cyber-Physical
manufacturing system, composed in complex orchestrated behaviours. As
a consequence, the current classical approach to the Automation Pyramid
(Figure 4.1) has recently been discussed several times (Manufuture, the ICT
2013 and ICT 2014 conferences, etc.) and deemed by RTD experts and
industrial key players to be inadequate to cope with current manufacturing
trends and in need of evolution.
Figure 4.2 Daedalus fully accepts the concept of vertically integrated automation pyramid
introduced by the PATHFINDER [1] road-mapping initiative and further developed with the
Horizon 2020 Maya project [6].
Figure 4.3 The industrial “needs” for a transition towards a digital manufacturing paradigm.
models and with the software “apps” that allow their simple integration
and orchestration in complex system-of-systems architectures;
• The development of coordination (orchestration) intelligence by system
integrators, machine builders or plant integrators (more in general, by
aggregators of CPS) should rely on existing libraries of basic functions,
developed and provided in an easy-to-access way by experts of specific
algorithmic domains;
• Systemic performance improvement at automation level should rely on
well-maintained SDKs that mask the complexity of behind-the-scenes
optimization approaches;
• Large-scale adoption of simulation as a tool to accelerate development
and deployment of complex automation solutions should be obtained by
shifting the implementation effort of models to device/system producers.
This translates into an explicit involvement of all main stakeholders of the
automation development domain, brought together in a multi-sided market.
Such an “Automation Ecosystem” must rely on a technological platform that,
by leveraging standardization and interoperability, can mask the complexity
of interconnecting these Complementors.
4.3 Reasons for a New Engineering Paradigm in Automation
Figure 4.5 RAMI 4.0 framework to support vertical and horizontal integration between
different functional elements of the factory of the future.
Figure 4.6 Qualitative representation of the functional model for an automation CPS based
on IEC-61499 technologies; concept of “CPS-izer”.
Figure 4.7 The need for local cognitive functionalities is due to the requirements of Big
Data elaboration.
environments) that already satisfies the major needs for engineering complex
orchestrating applications: interoperability between devices, real-time com-
munication between distributed systems, hardware abstraction, automatic
management of low-level variable binding between CPS, a modern develop-
ment language (and environment), etc. This set of functionalities just needs to
be “completed” with additional ones that will make it the undisputed standard
at European level.
Figure 4.9 therefore shows how a real “hierarchy” of CPS can be imag-
ined in the shop floor of future factories, where the physical aggregation of
equipment and devices to generate more complex systems (typical of the
mechatronic approach) must be equally supported by a progressive orches-
tration of their behaviour, accepting the so-called “Automation Object Orien-
tation” (A-OO, see also Section 4.5 for details) and taking into account that
each subsystem may exist with its own controller and internally developed
control logics.
The strength of this approach, that is already supported in all its basic
and fundamental functionalities by the IEC-61499 standard and programming
language, is highlighted in Figure 4.10.
A single CPS, independently from being a basic one or obtained through
aggregation of others, can be seen internally (from the perspective of the
developer of that CPS) as an intelligent system, which must be programmed
Figure 4.12 Direct and distributed connection between the digital domain and the
Daedalus’ CPS.
the control application can access their services by making event and data
connections to them. In this way, the application designer does not require
any knowledge of the technical details of how the communication is
established. For the platform to be widely applicable, it also needs the ability
to communicate with other wireless CPS devices (see Section 4.5).
To enable faster, easier and less error-prone configuration of a network
of CPSs in a dynamically changing network topology, auto-discovery and
self-declaration have been added in Daedalus to the IEC-61499 runtime.
To allow this, each device must be capable of creating a semantic description
of its own interface and functional automation capabilities, making its
presence on the network known to other devices by advertising when it joins
and leaves the network, and exchanging the necessary information with a
standardized, unambiguous syntax and semantics.
The first step is to develop a semantic meta-model for describing the
functionalities provided by the CPS. The model must describe the physi-
cal interface of the device (parameters) and logical interface to access the
automation capabilities it provides. Once the model has been automatically
created, it can be exchanged with other CPS in a predefined, extensible
XML format.
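As a sketch of what such an exchanged description could look like, the snippet below serializes a device's parameters and capabilities with Python's standard XML library; the element names are invented for illustration and do not reflect the actual Daedalus meta-model:

```python
import xml.etree.ElementTree as ET
from typing import Dict, List

def describe_device(device_id: str, parameters: Dict[str, str],
                    capabilities: List[str]) -> str:
    """Build a minimal XML self-description of a CPS: its physical interface
    (parameters) and the automation capabilities it offers on the network."""
    root = ET.Element("cps", id=device_id)
    params = ET.SubElement(root, "parameters")
    for name, value in parameters.items():
        ET.SubElement(params, "parameter", name=name).text = value
    caps = ET.SubElement(root, "capabilities")
    for cap in capabilities:
        ET.SubElement(caps, "capability").text = cap
    return ET.tostring(root, encoding="unicode")
```

A receiving CPS can parse such a document with the same library and match the advertised capabilities against its own orchestration needs.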
For the CPS to easily adapt to the dynamic network topology (imagine
wireless CPS devices on a mobile platform), where CPS or SoA entities may
join and leave the local network at will, the auto-discovery must be based on
a zero-configuration (zeroconf) technology, with no need to manually
reconfigure the network layout and no centralized DNS server that would
become a single point of failure. A CPS device participating in a zeroconf
network will be automatically assigned an address and a hostname, making
low-level network communication possible immediately after a device joins
a network. Multicast DNS, a subset of the zeroconf technology, will further
allow CPS to subscribe and be automatically notified of changes to the layout
of the network.
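The join/leave notification pattern just described can be illustrated with a small in-process stand-in for mDNS-based discovery (a real deployment would use actual Multicast DNS; the class and method names here are invented):

```python
from typing import Callable, Dict, List

class DiscoveryBus:
    """In-process stand-in for mDNS-style discovery: devices announce their
    presence and departure, and every subscriber is notified of each event."""

    def __init__(self) -> None:
        self.devices: Dict[str, str] = {}  # device id -> network address
        self.subscribers: List[Callable[[str, str], None]] = []

    def subscribe(self, callback: Callable[[str, str], None]) -> None:
        """Register a callback invoked as callback(event, device_id)."""
        self.subscribers.append(callback)

    def announce(self, device_id: str, address: str) -> None:
        """A device advertises its entrance to the network."""
        self.devices[device_id] = address
        for notify in self.subscribers:
            notify("join", device_id)

    def leave(self, device_id: str) -> None:
        """A device advertises that it is leaving the network."""
        self.devices.pop(device_id, None)
        for notify in self.subscribers:
            notify("leave", device_id)
```

In the real system, the subscription corresponds to an mDNS query for a service type, and the join/leave events to the multicast announcements of devices entering and leaving the network.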
To support the exchange of semantic information used to identify the
capabilities of other CPS in the network, a new communication protocol based
on XMPP has been chosen for inclusion in the IEC 61499 runtime. XMPP
was chosen both to leverage mature standards, which will encourage broader
acceptance of the implemented solution, and for its intrinsic extensibility
via its XEP extension mechanism.
that can be expected for the coordination of the distributed CPS in that
network.
Finally, the object-oriented approach that the standard adopts will not
be limited a priori to automation algorithms only, but it can be extended to
further “dimensions of existence” of the system, guaranteeing two important
added values. Behavioural models of CPS (needed for several purposes,
such as simulation) will become explicit elements of the device virtual
representation (avatar), enabling seamless (= transparent to the end-user)
connectivity between the device deployed in the field and its models stored
in the Cloud. In addition, synchronization and co-simulation in near-real-time
will be automatically achieved as already part of the functional IEC-61499
architecture, with the event-based nature of the standard perfectly suited to
deal with the management of Big Data coming from the field.
The CPS-izer follows the same common requirements as an IEC-61499
Controller device, although deviations in how they are implemented
are possible. Besides these common requirements, there are other
requirements and constraints defined for the CPS-izer. First of all, the
CPS-izer needs to provide support for legacy industrial networks.
Legacy industrial networks are characterized by means of their physical
and data link layers (e.g., Serial, CAN, Ethernet) and the transport layers up
to the application layers depending on the implemented technologies (e.g.,
Modbus/RTU, PROFIBUS, CANopen, PROFINET).
4.5 The “CPS-izer”, a Transitional Path towards Full Adoption of IEC-61499
of the devices. On the other hand, it will collect the inputs from the devices
and put them into the output data of the CPS-izer. The processing of output
and input data in the PLC will follow the common approach for a scan cycle,
as it has been implemented in industrial automation for decades: read inputs –
execute process data – write outputs.
In terms of such PLC systems, the CPS-izer will put output data from
the legacy industrial network to the CPS, which is seen there as input. It will
get output data from the CPS, which is seen as input in the legacy industrial
network. For the CPS-izer, the execution of the process data in the PLC is
just a copy function to copy data from the process image input to the process
image output.
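A minimal sketch of that scan cycle, assuming hypothetical callables for the fieldbus and the CPS link (this is not the actual CPS-izer firmware):

```python
from typing import Callable

def scan_cycle(read_inputs: Callable[[], bytes],
               exchange_with_cps: Callable[[bytes], bytes],
               write_outputs: Callable[[bytes], None]) -> None:
    """One PLC-style scan cycle of the CPS-izer: read inputs, "process"
    the data, write outputs. The processing step is just a copy between
    the two domains: the legacy network's outputs become the CPS's inputs,
    and the CPS's outputs come back as the legacy network's inputs."""
    process_image_input = read_inputs()                            # 1. read inputs
    process_image_output = exchange_with_cps(process_image_input)  # 2. copy/exchange
    write_outputs(process_image_output)                            # 3. write outputs
```

Calling this function once per fieldbus cycle reproduces the data consistency guarantee described below: the two domains exchange a consistent snapshot per cycle, but nothing faster.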
Some legacy industrial networks like EtherCAT or PROFINET provide
real-time capabilities to transport I/O data with cycle times in the millisecond
or even microsecond range. This real-time behaviour will not be made transparent to the CPS.
The CPS-izer will only guarantee data consistency between CPS and legacy
industrial network related to the cycle time running in that network, but it
cannot guarantee real-time transport between both systems.
The CPS-izer should be realized in a small industrial-approved plastic
housing, which could be easily mounted at a machine or in a cabinet using
DIN-rail mechanics. It should require a single 24 V power supply as used
in standard industrial automation systems. Furthermore, it should realize a
common way to connect to legacy industrial networks by means of front
plugs/connectors and indicators.
The CPS-izer should follow requirements for industrial grading like tem-
perature range, shock and vibration, EMC and others for common cabinet
mounting. It must adhere to CE compliance.
Harsh industrial requirements like IP67, sealed connectors and housing
and higher temperature range are not in the focus of the realization of the
CPS-izer.
4.6 Conclusions
This chapter has explored a new generation of functional architecture for
industrial automation, centred around the concepts, methodologies and tech-
nologies of the IEC-61499 standard but exploiting and extending them for a
concrete implementation of what are called “Cyber-Physical Systems”.
The transition to this type of model is not just a matter of installing
new devices into a shopfloor: it requires a real paradigm shift in the
way real-time control and automation in manufacturing are engineered,
introducing new design concepts and the corresponding skills.
Acknowledgements
This work was achieved within the EU-H2020 project DAEDALUS, which
received funding from the European Union’s Horizon 2020 research and
innovation programme, under grant agreement No 723248.
References
[1] http://www.pathfinderproject.eu
[2] https://ec.europa.eu/digital-agenda/en/digitising-european-industry
[3] Zoitl, Alois, and Valeriy Vyatkin. “IEC 61499 Architecture for
Distributed Automation: the ‘Glass Half Full’ View”, IEEE Industrial
Electronics Magazine 3.4: 7–23, 2009.
[4] Object Management Group, “Model Driven Architecture”, online available: http://www.omg.org/mda/faq_mda.htm, Jun. 2009.
[5] B. Huber, R. Obermaisser, and P. Peti, “MDA-based development in
the DECOS integrated architecture – modeling the hardware platform”,
in Proceedings of the Ninth IEEE International Symposium on Object and
Component-Oriented Real-Time Distributed Computing (ISORC 2006),
April 2006, p. 10.
[6] http://www.maya-euproject.com
5
Communication and Data Management
in Industry 4.0
Pisa, Italy
E-mail: m.lucas@umh.es; theofanis.raptis@iit.cnr.it; msepulcre@umh.es;
andrea.passarella@iit.cnr.it; j.gozalvez@umh.es; marco.conti@iit.cnr.it
Cloud RAN will make it possible to achieve the flexibility, scalability and
adaptation capabilities required to support the highly demanding and diverse
industrial environment.
5.1 Introduction
In future industrial applications, the Internet of Things (IoT) with its com-
munications and data management functions will help shape the operational
efficiency and safety of industrial processes through integrating sensors,
data management, advanced analytics, and automation into a mega-unit [1].
The significant participation of intelligent robots in the future will enable
effective and cost-efficient production, achieving sustainable revenue growth.
Industrial automation systems, emerging from the Industry 4.0 paradigm,
count on sensors’ information and the analysis of such information [2].
As such, connectivity is a crucial factor for the success of industrial
Cyber-Physical-Systems (CPS), where machines and components can talk
to one another. Moreover, in the context of Industry 4.0 and to match the
increased market demand for highly customized products, traditional pilot
lines designed for mass production are now evolving towards more flexible
“plug & produce” modular manufacturing strategies based on autonomous
assembly stations [3], which will make increased use of massive volumes
of Big Data streams to support self-learning capabilities and will demand
real-time reactions of increasingly connected mobile and autonomous robots
and vehicles. While conventional cloud solutions will definitely be part of
the picture, they will not be enough. The concept of centrally organized
enterprises, in which large amounts of data are sent to a remote data center,
does not deliver the expected performance for Industry 4.0 scenarios and
applications. Recently, moving service supply from the cloud to the edge has
made it possible to meet application delay requirements, improves
scalability and energy efficiency, and mitigates the network traffic burden.
With these advantages, decentralized industrial operations can become a
promising solution and can provide more scalable services for delay-tolerant
applications [4].
Two technological enablers of Industry 4.0 are: (i) the communication
infrastructure that will support the ubiquitous connectivity of Cyber-Physical
Production Systems (CPPS) and (ii) the data management schemes built upon
the communication infrastructure that will enable efficient data distribution
within the Factories of the Future [5]. In the industrial environment, a wide set
in all the layers can be included or implemented in the Fog/Cloud, and (ii) the
Modelling layer, since different technical components inside the different
layers can be modelled, and it could be possible to have modeling approaches
that take the different layers into account. The communications and data
management architecture proposed in AUTOWARE supports the commu-
nication network and the data management system and enables the data
exchange between the different AUTOWARE components, exploiting the Fog
and/or Cloud concepts. It provides communication links between devices,
entities, and applications implemented in different layers, and also within
the same layer. Within the AUTOWARE Reference Architecture (defined
in the H2020 AUTOWARE Project), the communication network and data
management system can be represented as a transversal layer that intercon-
nects all the functional layers of the AUTOWARE Reference Architecture
(see Figure 5.2). The communications and data management architecture
presented in this chapter provides the communication and data distribution
capabilities required by the different systems or platforms developed within
the AUTOWARE framework.
AUTOWARE proposes the use of a heterogeneous network that integrates
different communication technologies covering the industrial environment.
The objective is to exploit the abilities of different wired and wireless com-
munication technologies to meet the broad range of communication require-
ments posed by Industry 4.0 in an efficient and reliable way. To this aim,
inter-system interferences between different wireless technologies operating
in the same unlicensed frequency band need to be monitored and controlled,
as well as inter-cell interferences for wireless technologies using the licensed
Figure 5.2 Communication network and data management system into the AUTOWARE
Reference Architecture.
in [10] and differentiated two types of applications. The first type involves the
use of sensors and actuators in industrial automation and its main requirement
is the real-time behavior or determinism. The second type of applications
involves the communication at higher levels of the automation hierarchy, e.g.
at the control or enterprise level, where throughput, security, and reliability
become more important. Automation systems are subdivided into three main
classes (manufacturing cell, factory hall, and plant level) with different needs
in terms of latency (from 5 to 20 ms). Their requirements in terms of latency,
update time, and number of devices can notably differ between them (see
Table 5.2). However, all three classes require a 10⁻⁹ packet loss rate and a
99.999% application availability.
The timing requirements depend on different factors. As presented by the
5GPPP in [6], process automation industries (such as oil and gas, chemicals,
food and beverage, etc.) typically require cycle times of about 100 ms. In
factory automation (e.g. automotive production, industrial machinery, and
consumer products), typical cycle times are 10 ms. The highest demands
Table 5.2 Performance requirements for three classes of communication in industry established by ETSI [10]

                                           Manufacturing Cell   Factory Hall    Plant Level
Indoor/outdoor application                 Indoor               Mostly indoor   Mostly outdoor
Spatial dimension L×W×H (m³)               10×10×3              100×100×10      1000×1000×50
Number of devices (typically)              30                   100             1000
Number of parallel networks (clusters)     10                   5               5
Number of such clusters per plant          50                   10              1
Min. number of locally parallel devices    300                  500             250
Network type                               Star                 Star/Mesh       Mesh
Packet size (on air, byte)                 16                   200             105
Max. allowable latency (end-to-end)        5 ± 10%              20 ± 10%        20 ± 10%
  incl. jitter/retransmits (ms)
Max. on-air duty cycle related to          20%                  20%             20%
  media utilization
Update time (ms)                           50 ± 10%             200 ± 10%       500 ± 10%
Packet loss rate (outside latency)         10⁻⁹                 10⁻⁹            10⁻⁹
Spectral efficiency (typically) (bit/s/Hz) 1                    1.18            0.13
Bandwidth requirements (MHz)               8                    34              34
Application availability                   Exceeds 99.999% (all classes)
5.2 Industry 4.0 Communication and Data Requirements
Table 5.5 Additional requirements for different application scenarios [13, 14]

                                  Desired Value            Application Scenario
Connectivity                      300,000 devices per AP   Massive M2M connectivity
Battery life                      >10 years                Hard-to-reach deployments
Reliability                       99.999%                  Protection and control
Seamless and quick connectivity   –                        Mobile devices
Figure 5.3 Network architectures of WirelessHART and ISA100.11a: gateways, backbone routers, field/I/O devices and handhelds, coordinated by a central Network Manager (WirelessHART) or System Manager (ISA100.11a) with an associated Security Manager.
While wired technologies can provide high communications reliability, they
are not able to fully meet the flexibility and adaptation required by future
manufacturing processes for Industry 4.0. Wireless communication technologies present
key advantages for industrial monitoring and control systems. They can
provide connectivity to moving parts or mobile objects (robots, machinery,
or workers) and offer the desired deployment flexibility by minimizing and
significantly simplifying the need for cable installation. Operating in
unlicensed frequency bands, WirelessHART, ISA100.11a, and IEEE 802.15.4e
are some of the wireless technologies developed to support industrial
automation and control applications. These technologies are based on the IEEE
802.15.4 physical and MAC (Medium Access Control) layers, and share
some fundamental technologies and mechanisms, e.g., a centralized network
management and Time Division Multiple Access (TDMA) combined with
Frequency Hopping (FH). Figure 5.3 shows the network architecture for
WirelessHART and ISA100.11a. In both examples, there is a central network
management entity, referred to as the Network Manager in a WirelessHART
network and the System Manager in an ISA100.11a network, that is in charge of
the configuration and management at the data link and network levels of
the communications between the different devices (gateways, routers, and
end devices).
The main objective of having a centralized network management is to
achieve high communications reliability levels. However, the excessive
overhead and reconfiguration time that result from the central manager
(e.g. the Network Manager in a WirelessHART network or the System
Manager in an ISA100.11a network) collecting state information and
distributing management decisions to end devices limit the reconfiguration
and scalability capabilities of networks with centralized management, as
highlighted in [16]
Figure 5.4 Examples of hierarchical IWN architectures: (a) adapted from [19] and (b) adapted from [20], both organizing WSNs, coordinators and gateways into hierarchical levels (“DEWI Bubbles”) serving internal and external users.
licensed bands) interference among them. In ref. [21], the control plane and
the data plane are split following the Software-Defined Networking (SDN)
principle. Control management is carried out in a centralized mode at LRCs
and the GRC. For the data plane, centralized and assisted Device-to-Device
(D2D) modes are considered within each cell.
5G networks are also being designed to support, among other verticals,
Industrial IoT systems [24]. To this end, the use of Private 5G networks is
proposed [25]. Private 5G networks will allow the implementation of local
networks with dedicated radio equipment (independent of traffic fluctuation
in the wide-area macro network) using shared and unlicensed spectrum, as
well as locally dedicated licensed spectrum. The design of these Private
5G networks to support industrial wireless applications considers the imple-
mentation of several small cells to cover the whole industrial environment
integrated in the network architecture as shown in Figure 5.5. Private 5G
networks will have to support Ultra Reliable Low Latency Communications
(URLLC) for time-critical applications, and Enhanced Mobile Broadband
services for augmented/virtual reality services. In addition, the integration
of 5G networks with Time Sensitive Networks (TSN)1 is considered to
guarantee deterministic end-to-end industrial communications, as presented
in [24]. Figure 5.6 summarizes these key capabilities of Private 5G networks
for Industrial IoT systems.
The reference communication and data management architecture
designed in AUTOWARE is closely aligned with the concepts being studied
for Industrial 5G networks. The support of the very different communication
requirements demanded by a wide set of industrial applications (from
time-critical applications to ultra-high-throughput applications)
and the integration of different communication technologies (wired and wire-
less) are key objectives of the designed AUTOWARE communication and
data management architecture to meet the requirements of Industry 4.0. In
fact, AUTOWARE focuses on the design of a communication architecture
that is able to efficiently meet the varying and stringent communication
1 TSN is a set of IEEE 802 Ethernet sub-standards that aim to achieve
deterministic communication over Ethernet by using time synchronization and
a schedule that is shared between
all the components (i.e. end systems and switches) within the network. By defining various
queues based on time, TSN ensures a bounded maximum latency for scheduled traffic through
switched networks, thereby guaranteeing the latency of critical scheduled communication.
Additionally, TSN supports the convergence of having critical and non-critical communication
sharing the same network, without interfering with each other, resulting in a reduction of costs
(reduction of required cabling).
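The bounded-latency property of such a time-aware schedule can be made concrete with a toy single-hop calculation (a deliberate simplification that ignores guard bands and multi-hop accumulation):

```python
def worst_case_wait_ms(cycle_ms: float, window_open_ms: float,
                       window_close_ms: float, tx_ms: float) -> float:
    """Worst-case time a scheduled frame waits at one TSN egress port.
    The gate of its traffic class is open during [window_open, window_close)
    of every cycle; a frame needs tx_ms of open gate, so the latest usable
    start is window_close - tx_ms. A frame arriving just after that instant
    must wait until the next cycle's window opens, which bounds the wait at
    cycle_ms minus the usable window, regardless of arrival time."""
    usable_window_ms = (window_close_ms - window_open_ms) - tx_ms
    return cycle_ms - usable_window_ms
```

With a 1 ms cycle, a 0.25 ms window and 0.05 ms transmission time, the wait is bounded by 0.8 ms; shortening the cycle or widening the window tightens the bound, which is how the shared schedule turns best-effort Ethernet into deterministic communication.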
5.3 Industrial Wireless Network Architectures

Figure 5.5 Private 5G network architecture for Industrial IoT systems: small cells with 5G/Ethernet adaptation connect UEs to the core network [24].

Figure 5.6 Key capabilities of Private 5G networks for Industrial IoT systems: enhanced MBB for augmented/virtual reality, TSN integration, and expansion to shared/unlicensed spectrum [24].
requirements of the wide set of applications and services that will coexist
within the factories of the future, in contrast to the architectures proposed
in [20] and [21], which are mainly designed to guarantee the communication
requirements of a given type of service (connectivity for a large number of
communication devices in [20], and mission-critical wireless communications
in [21]). In addition, this work goes a step further and analyzes the
requirements of the communication architecture from the point of view of
data management and distribution.
Figure 5.7 Hierarchical and heterogeneous reference architecture to support CPPS connec-
tivity and data management.
Figure 5.8 Communication and data management functions in different entities of the
hierarchical architecture.
locally coordinate the use of radio resources among the devices attached to
the same cell and require very short response times. Intra-Cell Interference
Control needs to be carried out also by the LM if several transmissions are
allowed to share radio resources within the same cell. LMs also report the
performance levels experienced within their cells to the Orchestrator. Thanks to
its global vision, the Orchestrator has the information required and the ability
to adapt and (re-)configure the whole network. For example, under changes
in the configuration of the industrial plant or in the production system, the
Orchestrator can reallocate frequency bands to cells implementing licensed
technologies based on the new load conditions or the new communication
requirements. It could also establish new interworking policies to control
interferences between different cells working in the unlicensed spectrum. The
Orchestrator can also establish constraints about the maximum transmission
power or the radio resources to allocate to some transmissions to guarantee
the coordination between different cells. It is also in charge of the Admission
Control. In this context, the Orchestrator also decides to which cell a new
device is attached, considering the communication capabilities of the device,
the communication requirements of the application, and the current operating
conditions of each cell.
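As an illustration, the Orchestrator's admission decision could combine those three inputs roughly as follows (a sketch with invented data structures, not the AUTOWARE implementation):

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class CellState:
    cell_id: str
    technologies: Set[str]   # radio technologies the cell implements
    latency_ms: float        # latency currently achievable in the cell
    load: float              # current utilization, 0.0 (idle) to 1.0 (full)

def admit_device(cells: List[CellState], device_techs: Set[str],
                 required_latency_ms: float) -> Optional[str]:
    """Attach a new device to the least-loaded cell that (i) runs a
    technology the device supports and (ii) currently meets the application's
    latency requirement; None means admission is rejected."""
    feasible = [c for c in cells
                if c.technologies & device_techs
                and c.latency_ms <= required_latency_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda c: c.load).cell_id
```

The least-loaded tie-break is one possible policy; the Orchestrator could equally optimize for interference, energy, or spare capacity headroom.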
The described hierarchical communication and data management archi-
tecture corresponds to the control plane. We consider that control plane and
5.5 Hierarchical Communication and Data Management
2 The User Plane carries the network user traffic, i.e., the data that is generated and
consumed by the AUTOWARE applications and services. The Control Plane carries signaling
traffic, and is critical for the correct operation of the network. For example, signaling messages
would be needed to properly configure a wired/wireless link to achieve the necessary latency
and reliability levels to support an application. They would also be needed to intelligently
control the data management process. The Control Plane therefore is needed to enable the
user data exchange between the different AUTOWARE components.
Figure 5.9 LM–Orchestrator interaction at different tiers of the management architecture: Tier 1 (ultra-high demanding latency and reliability), Tier 2 (high demanding) and Tier 3 (low demanding); each tier’s LM performs real-time (RT) management locally and interacts with the Orchestrator for non-real-time (nRT) management.
Figure 5.10 Slicing of RAN resources (radio resources, computing, storage) across LMs into a physical cell and a virtual cell for URLLC.
RAN functions [43]. Cloud RAN splits the base station into a radio unit,
known as Radio Remote Head (RRH), and a signal-processing unit referred
to as Base Band Unit (BBU) [44]. The key concept of Cloud RAN is that
the signal processing units, i.e., the BBUs, can be moved to the cloud. Cloud
RAN shifts from the traditional distributed architecture to a centralized one,
where some or all of the base station processing and management functions
are placed in a central virtualized BBU pool (a virtualized cluster which can
consist of general purpose processors to perform baseband processing and
that is shared by all cells) [43]. Virtual BBUs and RRHs are connected by a
fronthaul network. Centralizing processing and management functions in the
same location improves interworking and coordination among cells: since the
virtual BBUs are co-located, data can be exchanged among them more easily
and with shorter delay.
We foresee Cloud RAN as the baseline technology for the proposed
architecture, to implement hierarchical and multi-tier communication man-
agement. Cloud RAN will be a key technology to achieve a tight coordination
between cells in the proposed architecture and to control inter-cell and inter-
system interferences. As presented in [45] and [46], Cloud RAN can support
different functional splits that are perfectly aligned with the foreseen needs of
industrial applications; some processing functions can be executed remotely
while functions with strong real-time requirements can remain at the cell
site. In the proposed communication and data management architecture,
the decision about how to perform this functional split must be made by the
Orchestrator considering the particular communication requirements of the
industrial applications supported by each cell (see Figure 5.11).
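The split decision can be caricatured as a threshold rule over the fronthaul delay towards a cell (an illustrative sketch only: the split names and delay tolerances below are invented for the example and are not taken from [45, 46]):

```python
# Candidate functional splits, ordered from most to least centralized.
# Each entry: (name, max one-way fronthaul delay the split tolerates, in ms).
SPLITS = [
    ("fully centralized (all baseband in the BBU pool)", 0.25),
    ("intra-PHY split (time-critical PHY at the cell site)", 2.0),
    ("MAC/PHY split (real-time scheduling at the cell site)", 6.0),
    ("PDCP/RLC split (only packet processing centralized)", 30.0),
]

def choose_split(fronthaul_delay_ms):
    """Return the most centralized split that the measured fronthaul
    delay towards this cell still allows; otherwise keep everything local."""
    for name, tolerance in SPLITS:
        if fronthaul_delay_ms <= tolerance:
            return name
    return "fully distributed (all functions at the cell site)"

print(choose_split(0.1))   # good fibre fronthaul allows full centralization
print(choose_split(10.0))  # slow fronthaul: only PDCP/RLC centralized
```

In the architecture described here the Orchestrator would additionally weigh the latency and reliability requirements of the applications served by each cell, not just the fronthaul delay.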
Figure 5.11 Cloud RAN-based implementation of the multi-tier management architecture:
the LMs of Tier 1 (ultra-high demanding latency and reliability), Tier 2 (high demanding) and
Tier 3 (low demanding) are served by BBUs centralized in a BBU pool at a remote location,
under the coordination of the Orchestrator.

Figure 5.12 Interaction between global and local communication management: local
performance measurements are gathered at each cell, reported upwards, and used to take local
and global communication management decisions.
identifying which paths in the network the data should follow and on which
proxies they should be cached, in order to meet the latency constraint and
to efficiently prolong the network lifetime. We implemented the method and
evaluated its performance in a testbed, composed of IEEE 802.15.4-enabled
network nodes. We demonstrated that the proposed heuristic (i) guarantees
data access latency below the given threshold and (ii) performs well in terms
of network lifetime with respect to a theoretically optimal solution.
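The flavour of such a heuristic can be sketched as a greedy cache-placement rule on a routing tree (a toy reconstruction, not the evaluated algorithm: hop count stands in for latency, residual energy for network lifetime, and all node values are invented):

```python
def path_to_sink(node, parent):
    """Nodes visited from `node` up to the sink (node 0)."""
    path = [node]
    while path[-1] != 0:
        path.append(parent[path[-1]])
    return path

def place_caches(parent, energy, max_hops):
    """Greedy sketch: ensure every node finds a cached copy within
    `max_hops` hops on its path to the sink; when a node is uncovered,
    cache on the reachable ancestor with the highest residual energy."""
    caches = {0}                     # the sink always holds the data
    # Visit the deepest nodes first so one cache can cover many others.
    for node in sorted(parent, key=lambda n: len(path_to_sink(n, parent)),
                       reverse=True):
        reachable = path_to_sink(node, parent)[:max_hops + 1]
        if not caches & set(reachable):
            best = max(reachable, key=lambda n: energy[n])
            caches.add(best)
    return caches

# A small tree: 0 is the sink; parent[i] is the next hop of node i.
parent = {1: 0, 2: 1, 3: 2, 4: 3, 5: 3}
energy = {0: 1.0, 1: 0.9, 2: 0.2, 3: 0.8, 4: 0.5, 5: 0.6}
caches = place_caches(parent, energy, max_hops=2)
print(sorted(caches))  # [0, 3]: node 3 covers the deep leaves 4 and 5
```

The published method additionally reasons about which paths the data follows, not only where it is cached; this sketch only conveys the latency-bound/energy trade-off.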
Figure 5.13 Integration of the hierarchical and multi-tier heterogeneous communication and
data management architecture into the AUTOWARE Reference Architecture.
These field devices are then included within the Field Devices Layer of the
AUTOWARE Reference Architecture defined in Chapter 10. Various LMs
can be implemented at different workcells or production lines to locally man-
age the communication resources and data in the different communication
cells deployed in the industrial plant. These management nodes are included
in the Workcell/Production Line Layer, and they form a distributed manage-
ment infrastructure that operates close to the field devices. As previously
presented, both the Orchestrator and the LMs have communication and data
management functionalities.
From the point of view of communications, the Orchestrator is in charge
of the global management of the communication resources used by the dif-
ferent cells deployed within a factory plant. When there is only one industrial
plant or when there are multiple but independent plants (from the communi-
cations perspective), the main communication functions of the Orchestrator
are in the Factory Layer. However, if different industrial plants are deployed
and they are close enough so that the operation of a cell implemented in
a plant can affect the operation of a different cell in the other plant, then
the Orchestrator should be able to manage the communication resources
of the different plants. In this case, some of its communication functions
should be part of the Enterprise Layer. Based on the previous reasoning,
the Orchestrator and, in particular, the communication management function
within the Orchestrator should be flexible and able to be implemented in
both the Factory and the Enterprise Layers.
From the point of view of data storage, management, and distribution, the
data can be circulated and processed at different levels of the architecture,
depending on the targeted use case and the requirements that the industrial
operator is imposing on the application. For example, if the application
requires critical, short access latency (e.g., Table 5.5), as in condition
monitoring, then forcing data transfers back and forth between the Field
Layer, the Workcell/Production Line Layer, and the Factory Layer may lead
to severely sub-optimal paths that negatively affect the overall network
latency; such transfer patterns also degrade network performance, as field
devices have to tolerate longer response times than necessary.
In this case, the data can be stored and
managed at the lower layers of the architecture, with the LMs in the role of
the data coordinator. Another example is when the requirements necessitate
the employment of computationally more sophisticated methods on larger
volumes of data that can only be performed by stronger devices than those at
the Field Layer, such as 3D object recognition or video tracking, which come
with vast amounts of data. In this case, the data can be forwarded, stored, and
processed in the higher levels of the architecture, the Factory Layer, or the
Enterprise Layer, with the Orchestrator in the role of the data coordinator.
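The two placement cases described above amount to a simple decision rule (the layer names follow the AUTOWARE Reference Architecture discussed in the text; the threshold and function are hypothetical):

```python
def place_data(latency_req_ms, compute_heavy):
    """Decide at which architectural layer data is stored/processed and
    which entity coordinates it, following the two cases in the text."""
    if latency_req_ms is not None and latency_req_ms < 50 and not compute_heavy:
        # e.g. condition monitoring: keep the data close to the field devices
        return {"layer": "Workcell/Production Line", "coordinator": "LM"}
    if compute_heavy:
        # e.g. 3D object recognition or video tracking over large volumes
        return {"layer": "Factory or Enterprise", "coordinator": "Orchestrator"}
    return {"layer": "Factory", "coordinator": "Orchestrator"}

print(place_data(latency_req_ms=10, compute_heavy=False)["coordinator"])  # LM
```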
5.9 Conclusions
A software-defined heterogeneous, hierarchical, and multi-tier communica-
tion management architecture with edge-powered smart data distribution
strategies has been presented in this chapter to support ubiquitous, flexible,
and reliable connectivity and efficient data management in highly dynamic
Industry 4.0 scenarios where multiple digital services and applications are
bound to coexist. The proposed architecture exploits the different abilities of
heterogeneous communication technologies to meet the broad range of com-
munication requirements demanded by Industry 4.0 applications. Integration
of the different technologies in an efficient and reliable network is achieved
by means of a hybrid management strategy consisting of decentralized man-
agement decisions coordinated by a central orchestrator. Local management
entities organized in different virtual tiers of the architecture can implement
different management functions based on the requirements of the application
they support. The hierarchical and multi-tier communication management
architecture enables the implementation of cooperating, but distinct manage-
ment functions to maximize flexibility and efficiency to meet the stringent and
varying requirements of industrial applications. The proposed architecture
considers the use of RAN Slicing and Cloud RAN as enabling technologies
to reliably and effectively support future Industry 4.0 autonomous assembly
scenarios and modular plug & play manufacturing systems. The technological
enablers of the communications and data management architecture were
identified as part of the AUTOWARE framework, both in the user plane and
in the control plane of the AUTOWARE reference architecture.
Acknowledgments
This work was funded by the European Commission through the FoF-RIA
Project AUTOWARE: Wireless Autonomous, Reliable and Resilient Produc-
tion Operation Architecture for Cognitive Manufacturing (No. 723909).
References
[1] V. K. L. Huang, Z. Pang, C. J. A. Chen and K. F. Tsang, “New Trends
in the Practical Deployment of Industrial Wireless: From Noncritical to
Critical Use Cases”, in IEEE Industrial Electronics Magazine, vol. 12,
no. 2, pp. 50–58, June 2018.
[2] T. Sauter, S. Soucek, W. Kastner and D. Dietrich, “The Evolution
of Factory and Building Automation”, in IEEE Industrial Electronics
Magazine, vol. 5, no. 3, pp. 35–48, September 2011.
[3] How Audi is changing the future of automotive manufacturing,
Feb. 2017. Available at https://www.drivingline.com/. Last access on
2017/12/01.
[4] C. H. Chen, M. Y. Lin and C. C. Liu, “Edge Computing Gateway of the
Industrial Internet of Things Using Multiple Collaborative Microcon-
trollers”, in IEEE Network, vol. 32, no. 1, pp. 24–32, January–February
2018.
[5] Plattform Industrie 4.0, “Network-based communication for
Industrie 4.0”, Publications of Plattform Industrie 4.0, April 2016.
Available at http://www.plattform-i40.de. Last access on 2017/10/20.
[6] 5GPPP, 5G and the Factories of the Future, October 2015.
[7] H2020 AUTOWARE project website: http://www.autoware-eu.org/.
[8] M. C. Lucas-Estañ, T. P. Raptis, M. Sepulcre, A. Passarella, C. Regueiro
and O. Lazaro, “A software defined hierarchical communication and
data management architecture for industry 4.0”, 14th Annual Confer-
ence on Wireless On-demand Network Systems and Services (WONS),
Isola 2000, pp. 37–44, 2018.
6.1 Introduction
A large number of digital automation applications in modern shopfloors
collect and process large amounts of digital data as a means of identifying the
status of machines and devices (e.g., a machine’s condition or failure mode)
170 A Framework for Flexible and Programmable Data Analytics
Table 6.1 Requirements and design principles for the FAR-EDGE DDA

Design Principles and Goals: High performance and low latency.

Examples and Use Cases: Complex data analyses over real-time streams
should be performed within timescales of a few seconds. As an example,
consider the provision of quality control feedback about an automation
process in a station, based on the processing of data from the station. The
DDA supports the collection and analysis of data streams within a few
seconds.

DDA Implementation Guidelines: Leverage high-performance data
streaming technology as background for the EAE implementation (e.g.,
ECI's streaming technology).

(Continued)
Along with the Device Registry, the DR&P provides a Data Bus, which
is used to route streams from the various devices to appropriate consumers,
i.e. processors of the EA-Engine. Moreover, the Data Bus is not restricted to
routing data streams stemming directly from the industrial devices and other
shopfloor data sources. Rather it can also support the routing of additional
data streams and events that are produced by the EA-Engine.
in Figure 6.2), which specifies the processors that comprise the AM. In
particular, an AM defines a set of analytics functionalities as a graph of
processing functions that comprises the above three types of processors and
which can be executed by the EA-Engine.
Note also that an AM instance is built based on the available devices, data
sources, edge gateways and analytics processors, which are part of the data
models of the DDA. The latter reflect the status of the factory in terms of
available data sources and processing functions, which can be used to specify
more sophisticated analytics workflows.
and pre-processes streams coming from CPS2. In both cases, the streams
are accessed through the Data Bus.
• Step 3 (Runtime): Analytics Processor 3 consumes the streams
produced by Analytics Processors 1 and 2 in order to apply the analytics
algorithm. In this case, the analytics processor cannot execute without
input from the earlier Analytics Processors.
• Step 4 (Runtime): Store Analytics Processor 4 consumes the data
stream produced by Analytics Processor 3 and forwards it to the Data
Store, which persists the data coming from Analytics Processor 4.
This is a simple example of the EA-Engine operation, which illustrates the
use of all three types of processors in a single pipeline. However, much more
complex analytics workflows and pipelines can be implemented based on
the combination of the three different types of processors. The only limiting
factor is the expressiveness of the AM, which requires that instances of the
three processors are organized in a graph fashion, with one or more processors
providing input to others.
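Conceptually, an AM of this kind is a directed acyclic graph of named processors. A minimal sketch of how such a graph could be represented and executed in dependency order (illustrative names only, not the FAR-EDGE API), with one pre-processing, one analytics and one storage processor type:

```python
from graphlib import TopologicalSorter

def preprocess(values, *_):        # pre-processing processor: drop bad samples
    return [v for v in values if v is not None]

def average(*upstreams):           # analytics processor: mean over all inputs
    merged = [v for stream in upstreams for v in stream]
    return sum(merged) / len(merged)

SINK = []
def store(result, *_):             # storage processor (here an in-memory sink)
    SINK.append(result)
    return result

# AM sketch: node -> (processor function, upstream nodes feeding it).
am = {
    "p1": (preprocess, ["cps1"]),
    "p2": (preprocess, ["cps2"]),
    "p3": (average, ["p1", "p2"]),
    "p4": (store, ["p3"]),
}
sources = {"cps1": [1.0, None, 3.0], "cps2": [2.0, 4.0]}

def run(am, sources):
    """Execute the manifest in dependency order, as an EA-Engine would."""
    order = TopologicalSorter({n: deps for n, (_, deps) in am.items()}).static_order()
    results = dict(sources)
    for node in order:
        if node in am:
            func, deps = am[node]
            results[node] = func(*(results[d] for d in deps))
    return results

print(run(am, sources)["p4"])  # 2.5: the average of the cleaned streams
```

The graph constraint mentioned above (processors feeding other processors, no cycles) is exactly what makes a topological order of execution well defined.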
Vendors and integrators of industrial automation solutions can take
advantage of the versatility of the EA-Engine in two ways:
• First, they can leverage existing processors of the EA-Engine towards
configuring and formulating analytics workflows in line with the needs
of their application or solution.
• Second, they can extend the EA-Engine with additional processing
capabilities, in the form of new reusable processors.
In practice, industrial automation solution integrators will use the EA-
Engine in both the above ways, which are illustrated in the following
paragraphs.
the example involves two devices (CPS1, CPS2), each of which generates a
data stream under a topic named after its ID. We therefore need to:
• Apply some pre-processing to each one of the two streams (by Processor
1 and Processor 2).
• Apply an Analytics algorithm (Processor 3) to the pre-processed
streams.
• Persist the result to a Data storage (i.e. the Data Storage).
Figure 6.4 illustrates the steps required to register a new processor, build
the Edge Analytics configuration (AM), register it to the EA-Engine and
instantiate the appropriate Analytics Processors. In particular:
• The user of the EA-Engine (e.g., a solution integrator) registers the
new Processors required in the Model Repository. To this end, an API
or a visual tool can be used.
• In order to set up an AM, all the available processors are discovered
from the Model Repository and all the available Data Sources (DSMs)
are discovered from the Distributed Ledger.
• The user has all the required information and with the help of the
Configuration Dashboards can now set up a valid AM flow for the four
Analytic Processors.
• The AM is set up based on a proper combination of device data
streams and processors. In this example, the AM includes the required
configurations for Processor 1 (APM1), Processor 2 (APM2), Processor
3 (APM3) and Processor 4 (APM4).
replicated across all the peer nodes of the system – the data model of
such state is shaped in code by the Ledger Service implementation itself.
Practically speaking, the data store of a Ledger Service is initialized
according to a specific data model by a special code section when the
instance is first deployed. Once initialized, no structural changes in the
data model occur.
• Defining and executing business logic. Application logic is coded in
software and exposed on the network as a number of application-specific
service endpoints, which can be called by clients. These service calls
represent the API of the Ledger Service. Through them, callers can
query and change the global state of the Ledger Service. The API can be
invoked by any authorized client on the network following some well-
documented calling conventions of the DL-Engine. Moreover, we have
implemented an additional layer of software in order to simplify the
development of client applications: each Ledger Service implemented
in the project comes with its own client software library – called Ledger
Client – which an application can embed and use as a local proxy of
the actual Ledger Service API. The Ledger Client provides an in-process
API with simple call semantics.
• Enforcing (and possibly also defining) fine-grained access and/or
usage policies. This capability is optional, as a basic level of access control
is already provided by the DL-Engine, which requires all clients to
have a strong digital identity and be approved by a central authority.
When a more fine-grained control is required – e.g. an Access Control
List (ACL) applied to individual service endpoints – the Ledger Service
implementation is required to manage it as part of its code.
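The Ledger Client pattern described above (a thin in-process proxy hiding the DL-Engine calling conventions) can be sketched as follows; the endpoint names and transport are invented stand-ins, with the network and identity layers stubbed out:

```python
import json

class DLEngineTransport:
    """Stand-in for the DL-Engine network layer: in reality this would
    sign the request with the client's digital identity and submit it
    to a peer node resolved from the Ledger Service endpoints."""
    def __init__(self):
        self.state = {}                              # replicated global state
    def invoke(self, endpoint, payload):
        request = json.loads(json.dumps(payload))    # simulate serialization
        if endpoint == "state.set":
            self.state[request["key"]] = request["value"]
            return {"ok": True}
        if endpoint == "state.get":
            return {"ok": True, "value": self.state.get(request["key"])}
        raise ValueError(f"unknown endpoint {endpoint!r}")

class LedgerClient:
    """In-process proxy of a Ledger Service API: callers use plain
    method calls and never see endpoints or calling conventions."""
    def __init__(self, transport):
        self._t = transport
    def set(self, key, value):
        self._t.invoke("state.set", {"key": key, "value": value})
    def get(self, key):
        return self._t.invoke("state.get", {"key": key})["value"]

client = LedgerClient(DLEngineTransport())
client.set("line1/status", "running")
print(client.get("line1/status"))  # running
```

The design choice mirrors the text: the application embeds the proxy and stays decoupled from how the Ledger Service is reached and authenticated.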
In the specific context of the FAR-EDGE Platform, peer nodes are
usually – but not mandatorily – installed on Edge Gateway servers, together
with Edge Tier components. This setup allows for DL clients that run on Edge
Gateways, like the EA-Engine, to refer to a localhost address by default when
resolving Ledger Service endpoints. However, this is not the only possible
way to deploy the Ledger Tier in a FAR-EDGE-enabled system: peer nodes
can easily be deployed on the Cloud Tier to make them addressable from
anywhere or even embedded in Smart Objects on the Field Tier to make them
fully autonomous systems. In complex scenarios, peer nodes can actually be
spread across all the three physical layers of the FAR-EDGE architecture
(Field, Edge and Cloud), exploiting the flexibility of the DL enabler to its
full extent.
construct of the FAR-EDGE data models. The processors that are set up
include: (i) A Processor for hourly average calculation for values from
a MongoDB and (ii) A Processor for persisting results in a MongoDB.
The above information is stored at the Data Model repository, which
resides on the cloud.
• Distributed Analytics Installation & Registration: The specified data
models are used to generate the Analytics Processor Manifest (APM) for
each required Processor and are registered to the Cloud. The following
processors are registered: (i) A Processor for hourly average calculation
from the TotalRealPower parameters for all IBs based on information
residing in the (global) MongoDB in the cloud; (ii) A Processor for
hourly average calculation from TotalRealEnergy for all IBs based on
information residing in the (global) MongoDB in the cloud; and (iii) A
Processor for persisting results in the (global) MongoDB in the cloud.
An Analytics Manifest (AM) will be generated for combining data
from the instantiated Processors. The AM will be registered and started
through the Open API of the DA-Engine.
6.7 Conclusions
Distributed data analytics is a key functionality for digital automation in
industrial plants, given that several automation and simulation functions rely
on the collection and analysis of large volumes of data (including streaming
data) from the shopfloor. In this chapter, we have presented a framework
for programmable, configurable, flexible and resilient distributed analytics.
The framework takes advantage of state-of-the-art data streaming frameworks
(such as Apache Kafka) in order to provide high-performance analytics.
At the same time, however, it augments these frameworks with the ability
to dynamically register data sources in a repository and, accordingly, to use
the registered data sources in order to compute analytics workflows. The latter are
also configurable and composed of three types of data processing functions,
including pre-processing, storage and analytics functions. The whole process
is reflected and configured based on digital models that reflect the status of
the factory in terms of data sources, devices, edge gateways and the analytics
workflows that they instantiate and support.
The analytics framework operates at two levels: (i) An edge analytics
level, where analytics close to the field are defined and performed, and
(ii) A global factory-wide level, where data from multiple edge analytics
deployments can be combined in arbitrary workflows. We have also presented
two approaches for configuring and executing global-level analytics: one
following the conventional edge/cloud computing paradigm, and another that
supports decentralized analytics configurations and computations based on the
use of distributed ledger technologies. The latter approach holds the promise
of increasing the resilience of analytics deployments while eliminating single
points of failure, and is therefore one of our research directions.
One of the merits of our framework is that it is implemented as open-
source software/middleware. Following its more extensive validation and
the improvement of its robustness, this framework could be adopted by
the Industry 4.0 community. It could be really useful for researchers and
academics who experiment with distributed analytics and edge computing, as
well as for solution providers who are seeking to extend open-source libraries
as part of the development of their own solutions.
Acknowledgements
This work was carried out in the scope of the FAR-EDGE project (H2020-
703094). The authors acknowledge help and contributions from all partners
of the project.
References
[1] H. Lasi, P. Fettke, H.-G. Kemper, T. Feld, M. Hoffmann, ‘Industry 4.0’,
Business & Information Systems Engineering, vol. 6, no. 4, pp. 239,
2014.
[2] J. Soldatos (editor) ‘Building Blocks for IoT Analytics’, River
Publishers Series in Signal, Image and Speech Processing, November
2016, ISBN: 9788793519039, doi: 10.13052/rp-9788793519046.
[3] J. Soldatos, S. Gusmeroli, P. Malo, G. Di Orio ‘Internet of Things Appli-
cations in Future Manufacturing’, In: Digitising the Industry Internet of
Things Connecting the Physical, Digital and Virtual Worlds, Editors: Dr.
Ovidiu Vermesan, Dr. Peter Friess. 2016. ISBN: 978-87-93379-81-7.
[4] M. Isaja, J. Soldatos, N. Kefalakis, V. Gezer ‘Edge Computing and
Blockchains for Flexible and Programmable Analytics in Industrial
Automation’, International Journal on Advances in Systems and Mea-
surements, vol. 11 no. 3 and 4, December 2018 (to appear).
[5] T. Yu, X. Wang, A. Shami ‘A Novel Fog Computing Enabled Temporal
Data Reduction Scheme in IoT Systems’, GLOBECOM 2017 - 2017
IEEE Global Communications Conference, pp. 1–5, 2017.
200 Model Predictive Control in Discrete Manufacturing Shopfloors
7.1 Introduction
Part of the Daedalus project is dedicated to the design and implementation
of the Software Development Kit (SDK) that provides automation system
engineers with helpful tools to develop, implement and deploy advanced
control systems within a distributed IEC-61499-based control framework.
To this aim, the optimal orchestration of distributed IEC-61499
applications is investigated, and advanced control techniques such as optimal
control and model predictive control are considered.
The main features of aggregated Cyber-Physical Systems (CPS) are
evaluated in order to realize an advanced optimal control system: in
particular, both continuous and discrete variables are needed to represent an
aggregated CPS. Hybrid systems will therefore be considered, and the various
modelling techniques are investigated in Section 7.2.
Another important feature of the optimal orchestration of aggregated CPS is
compliance with system constraints on both output variables, i.e., physical
limits, and manipulated variables, e.g., actuator saturations and limits. The
optimization of a measure of the performance of the system, i.e., the
minimization of a cost function, is by now a well-established approach in
academia and in certain industries, like the chemical and aerospace sectors,
and deserves to become widespread in every industrial sector. Therefore,
optimization-based control algorithms are investigated for the SDK. Among
these, Model Predictive Control stands out as the most promising, considering
that the receding-horizon approach offers a way to compensate for disturbances
acting on the system and for model mismatch.
7.1.2 Requirements
The investigation of the orchestration of hierarchically aggregated CPS
control problems has led to different needs. The basic development tools,
to be compliant with IEC-61499 [1] and to have a platform-independent
Figure 7.2 Conceptual map of the used software. At the centre is the object-oriented
programming language that best supports easy development and management across the
different applications' needs.
are denoted as hybrid models; they switch among many operating modes,
each described by differential equations, and mode transitions are triggered
by events such as states crossing pre-specified thresholds.
Another kind of system that is conveniently represented by a hybrid model
is the non-linear system. Indeed, a non-linear system can be represented by a
piecewise-linearized model, obtained by sequentially linearizing the system's
model around consecutive operating points (see Figure 7.3).
This kind of model representation is presented in Section 7.2.1, where its
behaviour is also shown: within every working mode the relationship is
linear, with a slope that changes from region to region. The result is a
linearized model of the non-linear system, which can be represented as a
hybrid system that switches its operating mode.
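The idea of approximating a non-linear system by switching among linearizations can be sketched as follows (an illustrative toy: the non-linear map, the operating points and the switching rule are arbitrary choices, not taken from the project):

```python
# Non-linear static characteristic to be approximated.
f = lambda x: x**3

# Operating points around which the model is linearized.
points = [-1.0, 0.0, 1.0]

def linearize(f, x0, h=1e-6):
    """First-order Taylor model of f around x0 (numerical derivative)."""
    slope = (f(x0 + h) - f(x0 - h)) / (2 * h)
    return lambda x, x0=x0, slope=slope: f(x0) + slope * (x - x0)

models = [linearize(f, x0) for x0 in points]

def pw_model(x):
    """Hybrid behaviour: switch to the sub-model whose operating
    point is closest to the current state."""
    i = min(range(len(points)), key=lambda i: abs(x - points[i]))
    return models[i](x)

for x in (-1.1, 0.05, 0.9):
    print(f"f({x}) = {f(x):+.4f}, piecewise-linear model = {pw_model(x):+.4f}")
```

Each sub-model is linear, and the overall map changes slope from region to region, exactly the switching behaviour described above.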
Figure 7.4 Model Predictive Control scheme (reference r, error e, control input u, plant
output yp, model output ym, state x).
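The cost function that the following paragraph refers to is of the standard finite-horizon form; a sketch consistent with the symbols defined below (the exact formulation in the original may differ in details such as reference signals and horizon indexing):

```latex
\min_{u(0),\ldots,u(P-1)} \; J \;=\;
\sum_{k=0}^{P-1}
  \left\| Q_{y}\,\bigl(y(k)-r(k)\bigr) \right\|_{N}
  + \left\| Q_{u}\,u(k) \right\|_{N}
  + \left\| Q_{\Delta u}\,\Delta u(k) \right\|_{N}
```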
where N ∈ {1, 2, ∞} represents the norm type that defines the type of
minimization problem: the problem is linear if N ∈ {1, ∞} and quadratic if
N = 2. P is the prediction horizon that will be considered. Qy, Qu and QΔu
are positive definite matrices, also called the weight matrices of the different objectives
of the controller: by tuning these weights, the behaviour of the controller can
be shaped. For example, if it is not important to control the first output y1, it
is possible to simply set Qy1 = 0, and the same applies to the other weights.
Overall, the flow of computation for a typical MPC problem (determining
the controlled subset of the process, then solving a constrained Quadratic
Programming optimization) is represented in Figure 7.6.
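To illustrate how the horizon P and the weights enter the optimization, here is a deliberately minimal sketch: a scalar plant, a quadratic cost (N = 2), no constraints, and a dense least-squares solve in place of a proper QP solver. This is not the SDK's implementation; all parameter values are arbitrary:

```python
import numpy as np

def mpc_step(x0, a, b, r, P, qy, qu):
    """One step of a toy MPC for the scalar plant x[k+1] = a*x[k] + b*u[k]
    (output y = x). Minimizes sum_k qy*(y[k]-r)^2 + qu*u[k]^2 over a
    horizon of P moves and returns only the first input, following the
    receding-horizon principle."""
    G = np.zeros((P, P))      # maps the input sequence to the predicted outputs
    f = np.zeros(P)           # free response of the plant from x0
    for k in range(1, P + 1):
        f[k - 1] = a**k * x0
        for j in range(k):
            G[k - 1, j] = a**(k - 1 - j) * b
    # Stack tracking and effort terms into one least-squares problem:
    # minimize ||sqrt(qy)*(G u + f - r)||^2 + ||sqrt(qu)*u||^2
    A = np.vstack([np.sqrt(qy) * G, np.sqrt(qu) * np.eye(P)])
    rhs = np.concatenate([np.sqrt(qy) * (r - f), np.zeros(P)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return u[0]               # apply only the first move

# Closed loop: drive x towards r = 1 for the plant x+ = 0.9 x + 0.5 u.
x = 0.0
for _ in range(30):
    x = 0.9 * x + 0.5 * mpc_step(x, a=0.9, b=0.5, r=1.0, P=10, qy=1.0, qu=0.1)
print(round(x, 3))
```

Increasing qu makes the controller more cautious at the price of slower, slightly offset tracking, which is the weight trade-off discussed above.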
where x(t) ∈ R^n, u(t) ∈ R^m and y(t) ∈ R^r denote the state, input
and output vectors, respectively. {χi}, i = 1, . . . , s, is a convex polyhedral
partition of the state and input space (see, e.g., Figure 7.8). Each χi is given
by a finite number of linear inequalities.
7.2 Hybrid System Representation 211
where x(k) = [x_r(k)^T, x_b(k)^T]^T with x_r(k) ∈ R^nr and x_b(k) ∈ {0, 1}^nb;
y(k) = [y_r(k)^T, y_b(k)^T]^T with y_r(k) ∈ R^mr and y_b(k) ∈ {0, 1}^mb;
u(k) = [u_r(k)^T, u_b(k)^T]^T with u_r(k) ∈ R^qr and u_b(k) ∈ {0, 1}^qb.
z(k) ∈ R^rr and δ(k) ∈ {0, 1}^rb are auxiliary variables that are used to
represent the switching between different operating modes.
The inequalities have to be interpreted component-wise, and they define
the switching conditions between the different operating modes. The
construction of these inequalities is based on tools able to convert logical
statements involving continuous variables into linear inequalities (for more
details, see [17]). These tools will be used to express relations describing
the evolution of systems where physical laws, logic rules and operating
constraints are interdependent.
Equation (7.4) assumes linear discrete-time dynamics in the first two
equations. It is possible to build an alternative formulation describing the
continuous-time version by substituting x(k + 1) with the derivative of x(t),
or a non-linear version by replacing the linear equations and inequalities
in (7.4) with non-linear functions. However, in this way the problem becomes
hardly tractable from a computational point of view; more in general, the
MLD representation allows a wide class of systems to be described.
MLD models are successful thanks to their good computational performance.
The main claim behind their introduction was the easy handling of non-trivial
problems, namely the formulation of Model Predictive Control for hybrid
and non-linear systems. The formulation performs well when used together
with modern Mixed-Integer Programming (MIP) solvers for synthesizing
predictive controllers for hybrid systems, as described in Section 7.4.1.
Note that the class of Mixed Logical Dynamical systems includes the
following important system classes:
• Linear systems;
• Finite state machines;
• Automata;
• Constrained linear systems;
• Non-linear dynamic systems.
In fact, the next section introduces the equivalence between different hybrid
system representations and it underlines the potential of MLD models (in
Figure 7.9, it is possible to see the interconnection between MLD and other
system representation models).
Figure 7.9 Graphic scheme of the links between the different classes of hybrid models. An
arrow from class A to class B indicates that A is a subset of B.
7.3 Hybrid Model Predictive Control 213
MILP and MIQP problems are much more difficult to solve than a linear
or quadratic programming problem (LP or QP), and some properties like
convexity are lost (see ref. [19] for a more detailed description).
The computational load for solving an MIP problem is a key issue, as a
brute-force approach consists of the evaluation of every possible combination:
one would solve the QP or LP associated with each feasible combination
of the discrete decision variables, and the overall optimum is the minimum
over all the computed QP/LP solutions. For example, if all the discrete
decision variables are Boolean, then the number of possible LP/QP problems
is 2^nb. Fortunately, there exists an entire research field
on this topic and, nowadays, there is a wide range of commercial solvers
able to deal with MIP problems very quickly. These tools are mainly based
on branch-and-bound methods [20]; the best known and most widely used
are CPLEX (ILOG Inc. [3]), GLPK (Makhorin [21]) and GUROBI [2], for
which APIs for many programming languages are available.
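The brute-force view can be made concrete on a toy MIQP (pure Python, no solver; the objective and constraint are invented for illustration): each of the 2^nb Boolean assignments leaves a continuous subproblem, here trivial because the Booleans fix x, and the best result over all assignments is kept.

```python
from itertools import product

def solve_toy_miqp():
    """min (x - 2.3)^2 + 0.5*(d1 + d2)  s.t.  x = d1 + 2*d2, d1, d2 in {0,1}.
    Enumerate the 2^2 Boolean combinations; each one fixes the continuous
    variable, so the remaining subproblem is immediate to evaluate."""
    best = None
    for d1, d2 in product((0, 1), repeat=2):
        x = d1 + 2 * d2                        # continuous part, fixed by the combo
        cost = (x - 2.3) ** 2 + 0.5 * (d1 + d2)
        if best is None or cost < best[0]:
            best = (cost, x, (d1, d2))
    return best

cost, x, booleans = solve_toy_miqp()
print(booleans, x, round(cost, 3))
```

Branch-and-bound solvers reach the same optimum while pruning most of the 2^nb combinations, which is what makes realistic hybrid MPC problems tractable.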
The application of Model Predictive Control to hybrid systems arose in
the 1990s. One of the first fundamental studies was made by Bemporad and
Morari [14]: they proposed a rigorous approach to the mathematical modelling
of hybrid systems that yields a compact representation of the system, called
Mixed Logical Dynamical (MLD, see Section 7.3.2). Following the
optimization step, it is then possible to synthesize an optimal constrained
receding-horizon controller. This methodology is helpful to optimize and
orchestrate both large systems with mixed variables and non-linear systems
linearized around sequential operating points.
As at the birth of MPC, the first implementations were in the field of refinery
and chemical processes. In these fields, Model Predictive Control was already
a standard, and the possibility of building a single mathematical model
representing the whole system, e.g., a plant with all its components, and
of synthesizing a single controller able to find the optimal solution respecting
every specified constraint was a revolution. In the next section, we explore
in depth the issues and limits of Hybrid Model Predictive Control, which
can be roughly summarized as computational time and computational power.
In that period, this problem was overcome by using off-line optimization,
also called Explicit MPC. This control method is able to work properly only
in a predetermined range of system states: the on-line optimization is replaced
by an off-line optimization, summarized in a lookup table. Using this
methodology, the application of Hybrid MPC could be extended to mechanical
and mechatronic systems, where the cycle time can be very small. Some
applications are summarized in refs. [10–12].
plant behaviour within the prediction horizon. As known, there are basically
two ways to construct a mathematical model of the plant:
• Analytic approach, where models are derived from first-principle
physics laws (like Newton’s laws, Kirchhoff’s laws, balance equations).
This approach requires an in-depth knowledge and physical insight into
the plant, and in the case of complex plants, it may lead to non-linear
mathematical models, which cannot be easily expressed, converted or
approximated in terms of hybrid linear models;
• System identification approach, where models are derived and validated
based on a set of data gathered from experiments. Unlike the analytic
approach, the model constructed through system identification has a
limited validity (e.g., it is valid only at certain operating conditions
and for certain types of inputs) and it does not give physical insights
into the system (i.e., the estimated model parameters may have no
physical meaning). Nevertheless, system identification does not need,
in principle, in-depth physical knowledge of the process, thus reducing
the modelling efforts.
In this project, hybrid linear models of the process of interest will be
derived via system identification, and physical insight into and knowledge
of the plant will be used, if needed, to assist the identification phase:
for instance, in choosing the appropriate inputs for the experiments, choosing
the structure of the hybrid model (defined, for instance, in terms of the
number of discrete states and the dynamical order of the linear subsystems),
debugging the identification algorithms and assessing the quality of the
estimated model.
The following two classes of hybrid linear models will be considered,
which mainly differ in the assumption behind the switches among the
(linear/affine) time-invariant sub-models:
• Jump Affine (JA) models, where the discrete-state switches depend on
an external signal, which does not necessarily depend on the value of
the continuous state. The switches among the discrete states can be
governed, for instance, by a Markov chain, and thus described in terms
of state transition probabilities. Alternatively, in deterministic jump
models, the mode switches are not described by a stochastic process, but
are triggered by or associated with deterministic events (e.g. gear
or speed selectors, evolutions dependent on if-then-else rules, on/off
switches and valves). In this chapter, we will focus on the identification
of deterministic jump models. Stochastic models might be considered at
a later stage, only if necessary.
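A deterministic jump affine model of the kind just described can be sketched in a few lines of Python. The sub-model coefficients and the external switching signal below are invented for illustration; the point is that the active sub-model is selected by an external discrete signal, not by the continuous state.

```python
# Hypothetical deterministic jump affine (JA) model with scalar state: the
# active sub-model (a_s, b_s, f_s) is selected by an external discrete signal
# s(t) (e.g. an on/off switch), not by the continuous state. All numbers are
# illustrative, not taken from the text.
modes = {
    0: (0.9, 0.1, 0.0),   # mode 0: x+ = 0.9 x + 0.1 u
    1: (0.5, 0.4, 0.2),   # mode 1: x+ = 0.5 x + 0.4 u + 0.2
}

def step(x, u, s):
    """One update x(t+1) = a_s x(t) + b_s u(t) + f_s of the active sub-model."""
    a, b, f = modes[s]
    return a * x + b * u + f

# Simulate with a mode switch triggered by a deterministic external event.
x = 1.0
trajectory = [x]
switch_signal = [0, 0, 0, 1, 1, 1]   # external on/off selector
for s in switch_signal:
    x = step(x, 0.5, s)
    trajectory.append(x)
```

Replacing the fixed `switch_signal` list with draws from a transition-probability matrix would turn this sketch into the stochastic (Markov chain) variant mentioned above.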
Model Predictive Control in Discrete Manufacturing Shopfloors
1. let C_s ← ∅, s = 1, . . . , s̄;
2. for t = 1, . . . , N do
   2.1. let e_s(t) ← y(t) − Θ_s x(t), s = 1, . . . , s̄;
   2.2. let s(t) ← arg min_{s=1,...,s̄} ‖e_s(t)‖²₂;
   2.3. let C_{s(t)} ← C_{s(t)} ∪ {x(t)};
   2.4. update Θ_{s(t)} using recursive least-squares;
3. end for;
4. end.
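The recursive procedure above can be sketched in Python as follows, assuming scalar outputs and s̄ sub-models; the recursive least-squares update is the standard one, and the data in the usage example are synthetic.

```python
import numpy as np

def cluster_and_estimate(X, Y, s_bar, lam=1.0):
    """Illustrative sketch: assign each sample to the sub-model with the
    smallest squared prediction error, then refine that sub-model's
    parameters with recursive least-squares."""
    n = X.shape[1]
    rng = np.random.default_rng(0)
    Theta = rng.normal(scale=0.1, size=(s_bar, n))      # initial guesses
    P = np.array([np.eye(n) * 100.0 for _ in range(s_bar)])
    clusters = [[] for _ in range(s_bar)]               # C_s <- empty set
    for t in range(len(Y)):
        x, y = X[t], Y[t]
        errs = y - Theta @ x                            # e_s(t) = y(t) - Theta_s x(t)
        s = int(np.argmin(errs ** 2))                   # arg min ||e_s(t)||^2
        clusters[s].append(t)                           # C_s(t) <- C_s(t) U {x(t)}
        # recursive least-squares update of Theta_s(t)
        Px = P[s] @ x
        k = Px / (lam + x @ Px)
        Theta[s] = Theta[s] + k * (y - Theta[s] @ x)
        P[s] = (P[s] - np.outer(k, Px)) / lam
    return Theta, clusters

# Synthetic smoke test: data generated by a single linear regression.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
Y = X @ np.array([1.0, -1.0])
Theta, clusters = cluster_and_estimate(X, Y, s_bar=2)
```

In practice, the quality of the partition depends on the parameter initialization; the literature cited in this chapter discusses refinements of this basic scheme.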
¹ A collection {X_s}, s = 1, . . . , s̄, is a complete partition of the regressor domain X if ⋃_{s=1}^{s̄} X_s = X
and X_s◦ ∩ X_j◦ = ∅, ∀s ≠ j, with X_s◦ denoting the interior of X_s.
ϕ(x) = x′ω^s    (7.14)

ϕ(x) = x′ω^s        ∀x ∈ C_s, s = 1, . . . , s̄
ϕ(x) ≥ x′ω^j + 1    ∀x ∈ C_s, s ≠ j    (7.16)
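Once weight vectors ω^s satisfying the separation conditions above are available, attributing a regressor x to its region reduces to picking the sub-model whose linear function x′ω^s is largest. A minimal Python sketch with made-up weights (none of the numbers come from the text):

```python
import numpy as np

# Illustrative weight vectors omega^s; in a real problem these are the
# solution of the separation conditions, not hand-picked values.
omega = np.array([
    [1.0, 0.0],    # omega^1: dominant where x1 > 0
    [-1.0, 0.0],   # omega^2: dominant where x1 < 0
])

def mode_of(x):
    """phi(x) = max_s x' omega^s; the maximizing s identifies the region."""
    return int(np.argmax(omega @ x))

m = mode_of(np.array([2.0, 1.0]))   # attributed to the first region
```

The "+1" margin in (7.16) guarantees that, on the training clusters, the maximizing index is unique, which is what makes this argmax classification well defined.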
Figure 7.13 exec SPChange algorithm from the valve basic FB.
input event and an optional Boolean expression over input and/or internal
variables. Upon evaluation, a transition is considered to be enabled if the
respective guard condition evaluates to true. The ECC will then transition to
the next state by taking the enabled egress transition from the source state to
the corresponding target state.
An algorithm is a finite set of ordered statements that operate over the
ECC variables. Typically, an algorithm consists of loops, branching and
update statements, which are used to consume inputs and generate outputs.
The IEC 61499 standard allows algorithms to be specified in a variety of
implementation-dependent languages. As an example, the implementation
from nxtControl allows the development of custom algorithms in Structured
Text (ST).
The exec SPChange algorithm from the valve basic FB is presented in
Figure 7.13; it uses the ST language as defined in IEC 61131-3. Here, the
IF–THEN–ELSE construct is used to update the output value of cp based on
the value of the input isMan.
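The ST code of Figure 7.13 is not reproduced here; the following is a hypothetical Python analogue of the described logic. Only `cp` and `isMan` appear in the text, so the setpoint inputs `man_sp` and `auto_sp` are invented for illustration.

```python
def exec_sp_change(is_man, man_sp, auto_sp):
    """Hypothetical analogue of the algorithm described for Figure 7.13:
    an IF-THEN-ELSE selects the value driving the output cp, depending on
    whether the valve is in manual mode (isMan)."""
    if is_man:
        cp = man_sp      # manual mode: operator setpoint drives the output
    else:
        cp = auto_sp     # automatic mode: controller setpoint drives it
    return cp

cp = exec_sp_change(True, 30.0, 55.0)   # manual mode selected
```

In the actual FB, this logic would run inside an ECC state whose algorithm is invoked when the corresponding input event fires.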
Figure 7.14 A composite function block with an encapsulated function block network.
• the number of data values that are associated with the input and output
events;
• the data type associated with the data values.
In addition to the description of the input/output events and data, the custom
code used in a generic DLL function block has to define a precise set of
functions that the IEC 61499 runtime uses to interact with the DLL when the
distributed control application needs to execute the custom code. The most
relevant of these functions are those used to register/unregister an FB DLL
instance with the appropriate DLL, the one used to execute the code associated
with a specific input event, and the one dedicated to signalling the triggering
of an output event. In addition to those, there is also a logging function
that the code in the DLL can use to report diagnostic information to the
IEC 61499 runtime.
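The interface contract just described can be sketched as follows. The actual mechanism exposes a C interface; the sketch below uses Python for brevity, and all names are illustrative rather than the real runtime API.

```python
from abc import ABC, abstractmethod

class FbDllInterface(ABC):
    """Hypothetical sketch of the functions a DLL has to expose so that the
    IEC 61499 runtime can interact with it. Names are invented."""

    @abstractmethod
    def register_fb(self, fb_instance_id: str) -> None:
        """Register an FB DLL instance with the library."""

    @abstractmethod
    def unregister_fb(self, fb_instance_id: str) -> None:
        """Unregister an FB DLL instance."""

    @abstractmethod
    def execute_event(self, fb_instance_id: str, input_event: str) -> None:
        """Run the code associated with a specific input event."""

class Runtime:
    """Stands in for the runtime side of the contract: it receives output
    events triggered by the DLL and collects its diagnostic log messages."""
    def __init__(self):
        self.log_lines = []
        self.output_events = []

    def signal_output_event(self, fb_instance_id, event):
        self.output_events.append((fb_instance_id, event))

    def log(self, message):
        self.log_lines.append(message)

rt = Runtime()
rt.signal_output_event("FB_1", "CNF")   # DLL confirms completion to the runtime
```

The split mirrors the text: registration and event execution flow from runtime to DLL, while output events and log records flow back from DLL to runtime.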
platform where the DLL will run. This means that an appropriate software
toolchain is needed to generate a binary code that can run on the controller
platform selected.
The main constraints that characterize this approach are:
• All the algorithms that define the behaviour of the FB DLL have to
be compiled as a dynamic loadable library (DLL) with a binary format
compatible with the architecture of the controller, where the DLL will
have to be installed;
• The DLL has to expose a C interface corresponding to the template
imposed by the generic DLL function block mechanism;
• In the case where the FB DLL is conceived to provide an output event to
confirm the completion of the elaboration performed by the FB before a
new input event can be processed, the elaboration performed by the DLL
must not take too much time before generating the output event; otherwise,
that elaboration can negatively affect the controller's real-time
performance;
• When the elaboration to be performed requires significant computational
resources and a long time to produce a result, another approach should be
used: for example, running elaborations in parallel and generating output
events asynchronously is a valid alternative;
• One of the aspects that needs to be considered at design time is that a DLL
can be shared by all the FB DLL instances that make use of that library.
As a consequence, the current number of function blocks registered with a
DLL has to be managed appropriately, in order to keep track of the code
portions that need to be executed for each FB DLL instance.
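The last consideration, a shared DLL serving several FB instances, can be sketched as a registry that counts registrations per library. This is an illustrative Python sketch with hypothetical names, not the actual runtime mechanism.

```python
class DllRegistry:
    """Tracks which FB instances are registered with each shared DLL, so a
    library is unloaded only when its last function block unregisters."""

    def __init__(self):
        self._instances = {}   # dll name -> set of registered FB ids

    def register(self, dll, fb_id):
        self._instances.setdefault(dll, set()).add(fb_id)

    def unregister(self, dll, fb_id):
        fbs = self._instances.get(dll, set())
        fbs.discard(fb_id)
        if not fbs:                      # last instance gone
            self._instances.pop(dll, None)
            return True                  # safe to unload the DLL
        return False

reg = DllRegistry()
reg.register("filter.dll", "FB1")
reg.register("filter.dll", "FB2")       # two FB instances share one library
```

A per-library count like this is what lets the runtime dispatch each event to the right code portion while keeping a single copy of the library in memory.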
The compact approach
The first approach enabled by the use of the generic DLL function block
consists in exploiting the mechanism to implement a fully customized basic
function block, where the constraint of using an execution control chart
(ECC) no longer applies. In this case, the developer can freely design the
finite state machine governing the function block's logic states using any
preferred development tool (Figure 7.16).
By means of this approach, the logic algorithms that need to be executed
when the associated input events are received by the FB DLL instance can be
designed and implemented following a customized approach that satisfies the
developer’s preferences and needs. At the same time, this mechanism enables
Figure 7.16 Illustration of the compact approach based on exploitation of generic DLL FBs.
Figure 7.17 Illustration of the extended approach based on exploitation of generic DLL FBs.
In order to make this approach applicable, all the DLLs that are going
to be exploited within the code of a generic DLL function block have to be
compiled for the specific architecture of the controller that will run that
code.
That constraint can be limiting in certain scenarios, where the DLLs
referenced by the custom code are not available for the selected platform,
which makes the use of those libraries impossible. On the other hand, that
limitation is not to be ascribed to the generic DLL function block mechanism,
but to the lack of a compatible version of the third-party libraries.
All the considerations made for the basic approach of exploiting the
FB DLL are also valid for this extended case.
The distributed approach
The most general and flexible way of exploiting the generic DLL
function block mechanism consists not only in leveraging FB DLLs
to integrate custom-made and/or third-party software algorithms, but
also in expanding the distributed computational network with additional
7.6 Conclusions
An in-depth review of the state of the art regarding solutions for controlling
aggregated CPS has been carried out, focusing on Model Predictive Control
and especially on Hybrid Model Predictive Control. The analysis
delves into Hybrid System representation and modelling, showing different
Acknowledgements
This work was achieved within the EU-H2020 project DAEDALUS, which
has received funding from the European Union’s Horizon 2020 research and
innovation programme, under grant agreement No. 723248.
References
[1] V. Vyatkin, “The IEC 61499 standard and its semantics”, IEEE Indus-
trial Electronics Magazine, vol. 3, 2009.
[2] Gurobi Optimization, “Gurobi optimizer reference manual”, URL:
http://www.gurobi.com, vol. 2, pp. 1–3, 2012.
[3] ILOG CPLEX, Reference Manual, 2011.
[4] B. Meindl and M. Templ, “Analysis of commercial and free and open
source solvers for linear optimization problems”, Forschungsbericht
CS-2012-1, 2012.
[5] J. Lunze, F. Lamnabhi-Lagarrigue, Handbook of hybrid systems control:
theory, tools, applications, Cambridge University Press, 2009.
[6] S. A. Nirmala, B. V. Abirami, and D. Manamalli, “Design of model
predictive controller for a four-tank process using linear state space
model and performance study for reference tracking under distur-
bances”, in Process Automation, Control and Computing (PACC), 2011
International Conference on, 2011.
[7] J. G. Ortega, E. F. Camacho, “Mobile robot navigation in a partially
structured static environment, using neural predictive control”, Control
Engineering Practice, vol. 4, pp. 1669–1679, 1996.
[8] M. Kvasnica, M. Herceg, Ľ. Čirka, M. Fikar, “Model predictive control
of a CSTR: A hybrid modeling approach”, Chemical Papers, vol. 64,
pp. 301–309, 2010.
[9] J. Richalet, A. Rault, J. L. Testud, J. Papon, “Model predictive heuris-
tic control: Applications to industrial processes”, Automatica, vol. 14,
pp. 413–428, 1978.
This chapter presents the results of the conception effort carried out under
Daedalus to transfer the technological results of IEC-61499 into the indus-
trial domain of Human–Robot collaboration, with the aim of deploying
the concept of mutualism in next-generation, continuously adaptive
Human–Machine interactions, where operators and robots mutually comple-
ment their physical, intellectual and sensorial capacities to achieve an
optimized quality of the working environment, while increasing manufacturing
performance and flexibility. The proposed architecture envisions a future
scenario where Human–Machine distributed automation is orchestrated through
the IEC-61499 formalism, to empower worker-centred cooperation and to
capitalize on both the worker's and the robot's strengths to synergistically
improve their integrated effort.
8.1 Introduction
Personnel costs in Europe are higher than in other industrial regions;
hence, EU industry today competes in the global market by offering high
Modular Human–Robot Applications in the Digital Shopfloor
Figure 8.1 Life-cycle stages to achieve human–robot symbiosis from design to runtime
through dedicated training.
Figure 8.4 Qualitative representation of the technological key enabling concepts of the
Mutualism Framework.
Daedalus, the open and interoperable IEC-61499 standard is taking the lead
in solving this need.
With the Mutualism approach, it is recognized that a collaborating team
of human and robotic symbionts is, in fact, an extension of the above-
mentioned concept of distributed intelligence, to encompass the hybrid nature
of shopfloors where operators and machines work shoulder-to-shoulder. This
means that the concept of Mutualism must be developed towards its technical
dimension of (soft) real-time orchestration of Symbionts, designed, deployed
and then executed at runtime thanks to the usage of an IEC-61499-based
engineering tool-chain.
The design stage of a Mutualistic manufacturing process will consider
both the conceptual definition of the specifications of the process itself, and
the engineering of the automation logics (through the IEC-61499 formalism
and programming language) that will control orchestration of the distributed
intelligence of Symbionts.
Conception of the Mutualism is where the principles of Lean Manu-
facturing are applied towards a new production model that considers the
opportunities of human–robot collaboration and therefore exploits them.
It originates from the key principles of the “Toyota Way” (especially the
kaizen) to implement Mutualism keeping the Human operator at the centre
of the process.
Leveraging the state of the art of R&D on “Lean Automation”, it is
possible to focus mostly on the implementation of those design-support tools
that can simplify the definition of requirements for Mutualistic tasks, help
assemble the most appropriate team of Symbionts for those tasks and
support the generation of specifications for the corresponding IEC-61499
orchestration.
As far as the engineering of orchestration logics is concerned, the IEC-61499
IDE and runtime developed in Daedalus make it possible to: (i) guarantee
ease of interfacing and functional wrapping of the lower-level automation
architecture of specific robotic symbionts; (ii) use the 61499 formalism to
consider the 3D performance of Symbionts; and (iii) integrate with an adequate
perceptual learning platform.
8.6 Conclusions
During the coming decades, the whole European manufacturing sector will
have to face important social challenges. While shop floor operators
are usually considered a “special” community, with their job regarded as
one of the most disadvantaged in terms of workplace healthiness and
safety [26], Europe's ageing population will inevitably lead workers
to postpone their retirement age. With this prospect, and without a concrete
solution, European industry is condemned to lose the qualified workers
needed for manufacturing high-quality products, while the national assistance
frameworks of the EU-27 will have to assist retired workers, who need to be
kept active to balance the new demographic distribution of the population.
Through the Mutualism Framework based on the IEC-61499 platform
of Daedalus, we answer the popular belief, currently promoted by many
opponents of Industry 4.0, that automation wipes out many jobs. Indeed,
several experts have recently and repeatedly proven this thesis groundless,
demonstrating the mutually virtuous coexistence of humans and machines
interacting in industrial environments [27]. Even in advanced automated
scenarios, where machine learning can support adaptation to variable and
unpredictable situations, interaction with humans is still essential in the
process of reacting to contextual information (thus, machines need workers).
At the same time, automation encompasses not only repetitive tasks, but also
sophisticated and high-performance functionalities that human senses and
capacities are not suited to; moreover, machines can compensate for human
knowledge gaps, extending the opportunity to provide actual support to junior
workers (thus, workers can benefit from machines).
The deployment of these technologies may reduce the mental and physical
strain on human operators, lowering the number of injuries related to
manufacturing work. In the medium term, this will also improve shopfloor
workers' perception of how their job negatively influences their health
status (currently 40% [28]). This will be mirrored within society, improving
the general perception that the population has of shopfloors and increasing
the social acceptance of this profession.
At the same time, there is an opportunity to bring new skills to the role of
the shopfloor worker, raising the social reputation of operators. The new
types of tasks that operators will have to perform will create job
opportunities at the shop floor level for more qualified profiles such as
technicians, increasing the appeal to younger workers. This is in line with
the current FoF Roadmap 2020 goals of achieving sustainability and social
acceptance for this sector and of strengthening the global position of the
EU manufacturing industry.
Finally, the implementation of Mutualism dynamically distributes tasks
between operators and robots and coordinates their collaborative execution
according to their strengths and weaknesses. This approach offers new job
opportunities to people with disabilities, as the automation can overcome
functional limitations, facilitating the inclusion of this community.
Acknowledgements
The work hereby described was achieved within the EU-H2020 project
DAEDALUS, which was funded by the European Union’s Horizon 2020
research and innovation programme, under grant agreement No. 723248.
References
[1] Brown, A. S. “Worker-friendly robots”, Mechanical Engineering 136.9,
pp. 10–11, September 2014.
[2] Spring et al., “Product customisation and manufacturing strategy”, Int. J.
of Operations & Production Management, 20(4), pp. 441–467, 2000.
[3] EFFRA, Factories of the Future: Multi-annual roadmap for the contractual
PPP under Horizon 2020, 2013.
[4] European Working Conditions Surveys, http://www.eurofound.europa.
eu/surveys
[5] EUROSTAT: ec.europa.eu/eurostat/statistics-explained/index.php/
Population structure and ageing
[20] Duguay et al., “From mass production to flexible/agile production”, Int. J.
of Operations & Production Management, 17(12), pp. 1183–1195, 1997.
[21] Krüger, J., Lien, T. K., Verl, A., Cooperation of Human and Machines in
Assembly Lines, Annals of the CIRP, 58/2, pp. 628–646, 2009.
[22] Sawaragi et al., “Human-Robot collaboration: Technical issues from a
viewpoint of human-centred automation”, ISARC, pp. 388–393, 2006.
[23] Billings, Human-Centered Aircraft Automation Philosophy, NASA
Technical Memorandum 103885, NASA Ames Research Center, 1991.
[24] Eeva Järvenpää, Pasi Luostarinen, Minna Lanz and Reijo Tuokko.
Adaptation of Manufacturing Systems in Dynamic Environment Based
on Capability Description Method, Manufacturing System, Dr. Faieza
Abdul Aziz (Ed.), ISBN: 978-953-51-0530-5, 2012.
[25] Koren, Y., Heisel, U., Jovane, F., Moriwaki, T., Pritschow, G., Ulsoy, G.,
et al. Reconfigurable manufacturing systems. CIRP Annals: Manufac-
turing Technology, 48(2), pp. 527–540, 1999.
[26] Occupational Safety and Health Administration, https://osha.europa.eu.
[27] Autor, D. H. “Why are there still so many jobs? The history and future
of workplace automation.” The Journal of Economic Perspectives 29.3,
pp. 3–30, 2015.
[28] European Working Conditions Surveys, http://www.eurofound.
europa.eu/surveys
PART II
9
Digital Models for Industrial
Automation Platforms
This chapter presents the role and uses of digital models in industrial
automation applications of the Industry 4.0 era. Accordingly, it reviews a
range of standard-based models for digital automation and their suitability for
the tasks of plant modelling and configuration. Finally, the chapter introduces
the digital models specified and used in the scope of the FAR-EDGE automa-
tion platform, towards supporting digital twins and system configuration
use cases.
9.1 Introduction
The digital modelling of the physical world is one of the core concepts
of the digitization of industry and the fourth industrial revolution (Indus-
try 4.0). It foresees the development of digital representations of physical
world objects and processes as a means of executing automation and control
operations, based on digital operations functionalities (i.e. in the cyber
rather than in the physical world) [1]. The motivation for this stems from the fact that
digital world operations can be flexibly altered or even undone at a low cost,
while this is impossible in the physical world. Hence, plant operators can
experiment with operations over digital models, run what-if scenarios and
ultimately derive optimal deployment configurations for automation opera-
tions, while also deploying them on the field based on IT applications and
tools, such as Industrial Internet of Things (IIoT) tools.
• Section 9.4 introduces the proprietary FAR-EDGE data models that are
used for configuring the distributed data analytics functionalities of the
platform;
• Section 9.5 presents a methodology for linking the FAR-EDGE proprietary
data models with standards-based data models used for digital twins'
representations in the platform's simulation domain;
• Section 9.6 is the final and concluding section of the chapter.
The synchronization between the physical and digital worlds can be also
used to improve the results of digital simulations based on the so-called
digital twins. In particular, it allows digital simulation applications to operate
not only based on simulated data, but also with real data stemming from the
synchronization of the physical and digital worlds. This can facilitate more
accurate and realistic simulations, given that part of them can rely on real data
that are seamlessly blended in the simulation application. The development
of such realistic simulations is therefore based on dynamic access to plant
information, as illustrated in the following figure.
Figure: The digital automation platform obtains dynamic information from the plant and updates the digital models (the “mini-world”), comprising the plant representation (schema) and the field status (instance), through field synchronization with the automation and analytics operations.
9.3.1 Overview
For over a decade, various industrial standards have been developed that
include information models for factory automation. Several of these standards
come with a set of semantic definitions, which are typically used for
modelling and exchanging data across systems and applications. These
standards include, for example, the IEC 62264
standard that complies with the mainstream ISA-95 standard for factory
automation. IEC 62264 boosts interoperability and integration across differ-
ent/heterogeneous enterprise and control systems. Likewise, ISA-88 for batch
processes comes with IEC 61512, and IEC 62424 supports exchange of data
between process control and productions tools, while IEC 62714 covers engi-
neering data of industrial automation systems [5]. Several of these standards
are referenced and/or used by the RAMI 4.0 reference model [6], which is
driving the design and development of several digital automation platforms.
In the following paragraphs, we briefly describe some of these standards.
its activities, the interface content and associated transactions within MoM
level and between MoM and Enterprise level. Examples of entities that are
modelled by the standard include materials, equipment, personnel, prod-
uct definition, process segments, production schedules, product capabilities,
production performance and more.
Note that IEC 62264 is among the standards referenced and used in
RAMI 4.0. Due to its compliance with RAMI 4.0, IEC 62264 meets several
of the requirements listed in the previous paragraph. However, it is focused
on Level 3 and Level 4 entities of the ISA-95 standard and hence it is not
very appropriate for use cases involving Levels 1 and 2.
Management (SCM) systems), with control systems (e.g. SCADA, DCS) and
manufacturing execution systems (MES). This holds not only for B2MML
compliant business systems (i.e. systems that support directly the interpre-
tation of B2MML messages), but also for legacy ERP/SCM systems which
can be made B2MML-compliant based on the implementation of relevant
middleware adapters that transform B2MML to their own semantics and
vice versa.
The language can be considered RAMI 4.0-compliant, given that
RAMI 4.0 uses ISA-95 concepts and references of relevant standards (such
as IEC 62264). It is also important that the B2MML schemas provide support
for the entire ISA-95 standard, rather than a subset of it.
B2MML is characterized by compatibility with enterprise systems
(e.g. ERP and PLM systems), which makes it appropriate for supporting
information modelling for use cases involving enterprise-level entities and
concepts. Furthermore, B2MML can boost compatibility with a wide range
of available ISA-95-compliant systems, while at the same time adhering
to information models referenced in RAMI 4.0. Therefore, B2MML could
be exploited in the scope of use cases involving enterprises systems and
entities, as soon as it is used in conjunction with additional models supporting
concepts and entities for the configuration of an automation platform (e.g.
like edge node, edge gateways and edge processes in the scope of an edge
computing platform like FAR-EDGE).
9.3.8 AutomationML
AutomationML is an XML-based open standard, which provides the means
for describing the components of a complex production environment. It has a
hierarchical structure and is commonly used to facilitate consistent exchange
and editing of plant layout data across heterogeneous engineering tools.
AutomationML takes advantage of existing standards such as PLCopen XML
or COLLADA. It provides the means for modelling plant information and
automation processes based on objects structured in a hierarchical fashion,
including information about geometry, logic, behaviour sequences
and I/O connections. AutomationML comprises different standards that
support modelling for various entities and concerns. In particular, it relies
on the following standards:
• CAEX (IEC 62424), in order to model topological information.
• COLLADA (ISO/PAS 17506) of the Khronos Group in order to model
and implement geometry concepts and 3D information as well as Kine-
matics (i.e. the geometry of motion). Support for Kinematics ensures
9.6 Conclusions
This chapter has analysed the rationale behind the specification and inte-
gration of digital models in emerging digital automation platforms, which
included a discussion of the main requirements that drive any relevant digital
modelling effort. Moreover, it has presented a range of standards-based
digital models, notably models that are used for semantic interoperability and
information exchange in Industry 4.0 systems and applications. Following
this review, it has illustrated why AutomationML is suitable for supporting
the digital simulation functionalities of the FAR-EDGE platform.
The chapter has also introduced a proprietary model for representing
and configuring the analytics part of the platform. This model provides the
means for modelling and representing data sources and analytics workflows
based on appropriate manifests. The respective models are implemented and
persisted in a models repository, which is provided as a set of schemas and
open source libraries as part of the FAR-EDGE digital automation platform.
Hence, they can serve as a basis for using the FAR-EDGE digital models
in analytics scenarios, as well as for implementing similar digital modelling
ideas.
As part of this chapter, we have also outlined how globally unique
identifiers can be used to link different models that refer to the same entity
or object in the factory through their own local identifiers. The use of such global
identifiers permits the association of entities referenced and used in both
the AutomationML models of FAR-EDGE simulation and the FAR-EDGE
models of the analytics engine. As part of our implementation roadmap, we
also plan to implement a Common Interoperability Registry (CIR) that will
keep track of all global identifiers and their mapping to local identifiers used
by the digital models of the simulation, analytics and automation domains.
This will strengthen the generality and versatility of our approach to digital
model interoperability.
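The planned registry can be sketched as a bidirectional mapping between one global identifier and per-domain local identifiers. This is a hypothetical Python sketch, not the FAR-EDGE implementation; the domain names and identifier formats are invented.

```python
import uuid

class InteropRegistry:
    """Hypothetical sketch of a Common Interoperability Registry: one
    globally unique identifier per factory entity, mapped to the local
    identifiers used by the simulation, analytics and automation models."""

    def __init__(self):
        self._map = {}   # global id -> {domain: local id}

    def new_entity(self):
        gid = str(uuid.uuid4())
        self._map[gid] = {}
        return gid

    def bind(self, gid, domain, local_id):
        self._map[gid][domain] = local_id

    def local_id(self, gid, domain):
        return self._map[gid].get(domain)

    def global_of(self, domain, local_id):
        # reverse lookup: which global entity carries this local identifier?
        for gid, locals_ in self._map.items():
            if locals_.get(domain) == local_id:
                return gid
        return None

cir = InteropRegistry()
gid = cir.new_entity()
cir.bind(gid, "simulation", "aml:Robot_01")   # AutomationML-side local id
cir.bind(gid, "analytics", "ds-42")           # analytics data-source id
```

The reverse lookup is what allows, for instance, an analytics workflow to locate the AutomationML element describing the same physical asset.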
Overall, this chapter can be a good start for researchers and engineers who
wish to start working with digital modelling and digital twins in Industry 4.0,
as it presents the different use cases of digital models, along with the
specification and implementation of a digital model for distributed data
analytics in industrial plants.
Acknowledgements
This work was carried out in the scope of the FAR-EDGE project (H2020-
703094). The authors acknowledge help and contributions from all partners
of the project.
References
[1] H. Lasi, P. Fettke, H.-G. Kemper, T. Feld, M. Hoffmann, ‘Indus-
try 4.0’, Business & Information Systems Engineering, vol. 6, no. 4,
pp. 239, 2014.
[2] G. Di Orio, A. Rocha, L. Ribeiro, J. Barata, ‘The prime semantic
language: Plug and produce in standard-based manufacturing production
systems’, The International Conference on Flexible Automation and
Intelligent Manufacturing (FAIM 2015), 23–26 June 2015.
[3] W. Lepuschitz, A. Lobato-Jimenez, E. Axinia, M. Merdan, ‘A sur-
vey on standards and ontologies for process automation’, in Industrial
Applications of Holonic and Multi-Agent Systems, Springer, pp. 22–32,
2015
[4] R. S. Peres, M. Parreira-Rocha, A. D. Rocha, J. Barbosa, P. Leitão
and J. Barata, ‘Selection of a data exchange format for industry 4.0
manufacturing systems,’ IECON 2016 - 42nd Annual Conference of
the IEEE Industrial Electronics Society, Florence, pp. 5723–5728,
doi: 10.1109/IECON.2016.7793750, 2016.
[5] ‘IEC 62714 – Engineering data exchange format for use in industrial
automation systems engineering – Automation Markup Language – Parts 1
and 2’, International Electrotechnical Commission, 2014–2015.
[6] K. Schweichhart, ‘Reference Architectural Model Industrie 4.0 - An
Introduction’, Deutsche Telekom, April 2016 online resource: https://
ec.europa.eu/futurium/en/system/files/ged/a2-schweichhart-reference
architectural model industrie 4.0 rami 4.0.pdf
10
Open Semantic Meta-model
as a Cornerstone for the Design and
Simulation of CPS-based Factories
10.1 Introduction
In order to empower simulation methodologies and multidisciplinary tools
for the design, engineering and management of CPS-based (Cyber Physical
Systems) factories, we need to target the implementation of actual digital
continuity, defined as the ability to maintain digital information all along the
factory life cycle, despite changes in purpose and tools.
A Semantic Data Model for CPS representation is the foundation to
achieve digital continuity, because it provides a unified description of the
CPS-based simulation models that different simulation tools can rely on
to operate.
Cyber Physical Systems are engineered systems that offer close interac-
tion between cyber and physical components. CPS are defined as the systems
that offer integrations of computation, networking, and physical processes,
or in other words, as the systems where physical and software components are
deeply intertwined, each operating on different spatial and temporal scales,
exhibiting multiple and distinct behavioural modalities, and interacting with
each other in a myriad of ways that change with context [2, 3]. From this
definition, it is clear that the number and complexity of features that a CPS
data model has to represent are very high, even if limited to the simulation
field. Moreover, many of the aspects that contribute to defining a CPS for
simulation (3D models, kinematic structures, dynamic behaviours, etc.) have
already been investigated and formalized by many well-established data models
that are, or can be considered to all extents, data exchange standards.
For these reasons, the goal of an effective CPS Semantic Data Model is
providing a gluing infrastructure that refers existing interoperability standards
and integrates them into a single extensible CPS definition. This approach
reduces the burden on the simulation software applications to access the new
data structures because they mainly add a meta-information level whereas
data for specific purposes is still available in standard formats.
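The "gluing" idea can be sketched as a thin meta-information level over files kept in existing standard formats; the asset roles, formats, file names and dictionary layout below are invented for illustration, not part of the model's specification.

```python
# Thin meta-information level over data kept in existing standard
# formats; the asset roles, formats and file names are invented.

cps_definition = {
    "id": "CPS4",
    "assets": [
        {"role": "3d_model",   "format": "STEP",         "uri": "cps4_geometry.stp"},
        {"role": "kinematics", "format": "AutomationML", "uri": "cps4_kinematics.aml"},
    ],
}

def assets_in_format(cps: dict, fmt: str) -> list[str]:
    """A simulation tool uses the meta level only to locate the data it
    can already parse with its existing standard importers."""
    return [a["uri"] for a in cps["assets"] if a["format"] == fmt]

step_assets = assets_in_format(cps_definition, "STEP")
```

A tool that understands only one standard can thus ignore every asset it cannot parse, while still navigating the common meta level.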
10.2 Adoption of AutomationML Standard
10.3 Meta Data Model Reference
10.3.1.1 Property
Property is an abstract class derived from IdentifiedElement and represents the runtime properties of every resource and prototype. These properties carry relevant information that can be dynamically assigned and read by the simulation tools.
10.3.1.2 CompositeProperty
CompositeProperty is a class derived from Property and represents a composition of different properties of a resource or prototype. This composition is modelled to create a list of simple properties of the resource, or even a multilevel structure of CompositeProperty instances. Figure 10.2 shows a possible application of the base model classes to represent properties, meta-information and documentation of a sample CPS. A resource (in this case, CPS4) can have many property instances associated with it, and these properties can be simple (such as ToolLength, EnergyConsumption and TempCPS4) or composite, which allows creating structured properties (such as CurrProd).
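As an illustration of the composite structure just described, the following Python sketch mirrors the Figure 10.2 example. The attribute names beyond those in the text (`value`, `unit`, `children`) and all concrete values are assumptions made for this sketch; the actual model is a semantic meta-model, not code.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Property:
    """Runtime property that simulation tools can read and assign."""
    id: str

@dataclass
class SimpleProperty(Property):
    value: float = 0.0
    unit: str = ""

@dataclass
class CompositeProperty(Property):
    """A Property that groups other Properties, possibly multi-level."""
    children: list[Property] = field(default_factory=list)

# Mirroring the Figure 10.2 example: CPS4 carries simple properties
# and one composite property (CurrProd) with invented child values.
cps4_properties = [
    SimpleProperty("ToolLength", 120.0, "mm"),
    SimpleProperty("EnergyConsumption", 3.2, "kWh"),
    SimpleProperty("TempCPS4", 41.5, "degC"),
    CompositeProperty("CurrProd", children=[
        SimpleProperty("PartsDone", 250.0, "pcs"),
        SimpleProperty("CycleTime", 12.4, "s"),
    ]),
]
```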
10.3.2.1 ExternalReference
ExternalReference is abstract and extends IdentifiedElement. This class represents a generic reference to a data source that is external to the Meta Data Model (e.g. a file stored on the Central Support Infrastructure (CSI, see Chapter 13)). The external source can contain any kind of binary data.
10.3.2.2 Asset
Asset is an extension of ExternalReference. This class represents a reference to an external relevant model, expressed according to an interoperable standard or a binary format, that behavioural models want to use. An important feature that the CPS data model should support is the possibility to create links between runtime properties and properties defined inside assets, and between properties defined by two different assets. Assets can be considered the static data of the CPS, because they represent self-contained models (e.g. 3D models) that should be slowly changing.
10.3.2.3 Behaviour
Behaviour is an extension of ExternalReference. This class represents a reference to runnable behavioural models that implement: (i) functionalities and operative logics of the physical systems and (ii) raw data stream aggregation and processing functions. Simulation tools should be able to use the former directly to improve the reliability of simulations, whereas the latter should run inside the CSI to update the runtime properties of the CPS model.
The data model aims at natively supporting the same efficient re-use approach by implementing classes that describe "ready to use" resources, called "prototypes", and "instances" of such elements, which are the actual resources composing plants. The relationship between prototypes and instances is the same that, in OOP, exists between a class and an object (instance) of that class.
A prototype is a Resource model that is complete from a digital point
of view, but it is still not applied in any plant model. It contains all the
relevant information, assets and behaviours that simulation tools may want
to use and, ideally, device manufacturers should directly provide Prototypes
of their products ready to be assembled into production line models.
As shown in Figure 10.4, a Resource instance is a ResourcePrototype that has become a well-identified, specific resource of the manufacturing plant. Each instance shares with its originating Prototype the initial definition but, during its life cycle, its model can diverge from the initial one, because properties and even models change. Therefore, a single ResourcePrototype can be used to instantiate many specific resources that share the same original model.
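The prototype/instance relationship can be sketched in Python as follows; the `instantiate` helper, the attribute layout and the example values are illustrative assumptions, not part of the MAYA specification.

```python
import copy
import uuid
from dataclasses import dataclass, field

@dataclass
class ResourcePrototype:
    """A digitally complete Resource model, not yet applied in a plant."""
    id: str                      # UUID, unique within the framework deployment
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class Resource:
    """A well-identified resource of a plant, derived from a prototype."""
    id: str
    prototype_id: str
    properties: dict

def instantiate(proto: ResourcePrototype) -> Resource:
    # Each instance starts from the prototype's definition; during its
    # life cycle its properties may diverge from the original model.
    return Resource(str(uuid.uuid4()), proto.id,
                    copy.deepcopy(proto.properties))

fence_proto = ResourcePrototype(str(uuid.uuid4()), "SecurityFence",
                                {"height_mm": 2000})
fence_a = instantiate(fence_proto)
fence_b = instantiate(fence_proto)
fence_b.properties["height_mm"] = 2200   # one instance diverges over time
```

The deep copy is the point of the sketch: instances share the original definition but own their state, exactly as an object owns state distinct from its class.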
Different disciplines may need to group the same resources in different ways (e.g. in automotive, a cell safety group contains the robot and the surrounding fences, but from an electrical point of view, fences are not represented at all). For this reason, the Meta Data Model provides an aggregation system that is based on two levels:
• a first main hierarchy structure that is implemented in the two base
classes for prototypes and instances, AbstractResourcePrototype and
AbstractResource (Figure 10.6);
• a second level, discipline-dependent, that is defined in parallel to the
main one and that should be contained inside domain-specific Assets.
The former hierarchy level is meant to provide a reference organization
of the plant that enables both simulation tools and the CSI to access resources
in a uniform way. In fact, the main hierarchy has the fundamental role
of controlling the “visibility level” of resources, setting the lower access
boundaries that constrain the resources to which the secondary (“parallel”)
hierarchies should be associated.
Figure 10.5 shows an example of application of the main resources hierar-
chy and the secondary, domain-specific one. The main hierarchy organizes the
two robots and the surrounding security fence with a natural logical grouping
since Robot1, Robot2 and SecurityFence belong physically to the same
production cell, Painting Cell1. Even if this arrangement of the instances is functional from a management point of view, it does not directly correspond to the relationships defined in the electrical schema of the plant, for which the only meaningful resources are the two robots. Assuming that an electric connection exists between the two robots, a secondary, domain-specific schema (in this case, the domain is the electric design) needs to be defined separately. The Painting Cell1 resource acts as the aggregator of the two robot CPS; therefore, it has "visibility" on the two resources of the lower level (Level 1), meaning that it knows they exist and how to reference them. For this reason, the electrical schema that connects Robot1 and Robot2 is defined at Level 2 as the "ElectricConnections" Asset associated with Painting Cell1. This asset, if needed, is allowed to make references to the electric schema of each lower-level resource.
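A minimal sketch of this two-level aggregation, under the assumption that children and assets can be represented as plain attributes; the attribute names (`children`, `assets`, `members`) are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    id: str
    children: list = field(default_factory=list)   # main hierarchy (Level 1)
    assets: dict = field(default_factory=dict)     # domain-specific, Level 2

# Main hierarchy: the cell has visibility on its lower-level resources.
robot1, robot2 = Resource("Robot1"), Resource("Robot2")
fence = Resource("SecurityFence")
cell = Resource("PaintingCell1", children=[robot1, robot2, fence])

# Secondary, domain-specific hierarchy: the electrical schema is an Asset
# of the aggregator and references only the resources meaningful for the
# electric-design domain (the fence is not represented at all).
cell.assets["ElectricConnections"] = {"members": ["Robot1", "Robot2"]}

# The asset may only reference resources the aggregator has visibility on.
visible = {r.id for r in cell.children}
assert set(cell.assets["ElectricConnections"]["members"]) <= visible
```

The final assertion encodes the "visibility level" constraint: a secondary hierarchy can only be attached to resources that its aggregator can reference.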
10.3.3.3 AbstractResourcePrototype
AbstractResourcePrototype is abstract and extends IdentifiedElement (see Figure 10.6). It is the base class containing attributes and relationships that are common to prototypes of intelligent devices, to prototypes of simple passive resources and to aggregations of prototypes. The main difference between prototype and instance classes is that the former do not have any reference to a Plant model, because they represent "not-applied" elements.
10.3.3.4 ResourcePrototype
ResourcePrototype extends AbstractResourcePrototype. This class represents
the prototype of a generic passive resource of the plant that does not
have any electronic equipment capable of sending/receiving data to/from
its digital counterpart, or an aggregation of multiple resource prototypes.
Examples of simple resources are cell protection fences, part positioning
fixtures, etc.
The Resource class is the direct instance class of a ResourcePrototype.
Since a ResourcePrototype must be identifiable within the libraries of
prototypes, its ID attribute should be set to a valid UUID that should be
unique within an overall framework deployment.
10.3.3.5 CPSPrototype
CPSPrototype extends AbstractResourcePrototype. This class represents the prototype of an "intelligent" resource, that is, a resource equipped with an electronic device capable of sending/receiving data to/from its digital counterpart. A CPSPrototype defines the way its derived instances should connect to the physical devices to maintain synchronization between the shop floor and the simulation models. The CPS class is the direct instance class of a CPSPrototype. Since a CPSPrototype must be identifiable within the libraries of prototypes, its ID attribute should be set to a valid UUID that should be unique within an overall framework deployment.
10.3.4.1 AbstractResource
AbstractResource is abstract and extends IdentifiedElement (Figure 10.7). This class represents the generalization of the concept of plant resource. As noted at the beginning of the section, a plant is a composition of intelligent devices (e.g. machines controlled by PLCs, IoT-ready sensors, etc.) and passive elements (fences, fixtures, etc.). Even if such resources are semantically different, from a simulation point of view they have a certain number of common properties. This fact justifies, from a class hierarchy perspective, the definition of a base class that the CPS and Resource classes extend.
10.3.4.2 CPS
CPS extends AbstractResource. This class represents each “intelligent”
device belonging to the plant equipped with an electronic device capable
of sending/receiving data to/from its digital counterpart. A CPS can be
connected with the physical device to maintain synchronization between
shopfloor and simulation models. A CPS can be an aggregation of other
CPSs and simple Resources, using its Assets and Behaviours to aggregate
lower-level models and functionalities.
Each CPS must be identified by a string ID that must be unique within
the plant.
10.3.5.1 Device
Device is an IdentifiedElement and represents a piece of electronic equipment of the physical layer that can be connected to the digital counterpart to send/receive data.
10.3.5.2 DeviceIO
DeviceIO represents a map of input and output signals that can be exchanged
with a specific device. Moreover, the DeviceIO represents the communication
between CPS on IO-Level.
10.3.6.1 Project
A project is an IdentifiedElement. It can be considered mainly as a utility
container of different simulation scenarios that have been grouped together
because they are related to the same part of the plant (e.g. different scenarios
for the same painting cell of the production line).
10.3 Meta Data Model Reference 303
10.3.6.2 Plant
Plant is an extension of IdentifiedElement and represents an aggregation
of projects and resources. A plant instance could be considered as an
entry point for simulation tools that want to access models stored on the CSI. It contains references to all the resource instances that are the subject of SimulationScenarios. In this way, it is possible to have different simulation scenarios, even of different types, bound to a single resource instance.
Note: the fact that different simulations of different nature can be set up for the same resource (be it a cell, a line, etc.) is not related to the concept of multi-disciplinary simulation, which is instead implemented by the Simulation Framework and refers to the possibility of running concurrent, interdependent simulations of different types.
The ID of the Plant must be unique within the overall framework
deployment.
10.3.6.3 SimulationScenario
SimulationScenario is an extension of IdentifiedElement and represents the
run of a SimModel producing some SimResults. A simulation scenario refers
to a root resource that is not necessarily the root resource instance of the
whole plant, because a simulation scenario can be bound to just a small part
of the full plant. A simulation scenario can set up a multi-disciplinary simu-
lation, defining different simulation models for the same resource instance to
be run concurrently by the Simulation Framework.
10.3.6.4 SimModel
SimModel is an IdentifiedElement and represents a simulation model
within a particular SimulationScenario. Each model can assemble different behavioural models of the root resource into a specific simulation model, creating scenario-specific relationships that are stored inside simulation assets. These assets can be expressed both in an interoperable format (e.g. AutomationML), when there is a need for data exchange among different tools, and in proprietary formats.
The ID of a SimModel instance must be unique within a Simulation
Scenario.
The object diagram in Figure 10.10 shows a possible application of the Project Model: a set of simple resources and CPS is organized into two hierarchies, one representing the actual demo line and a second one modelling a hypothesis of redesign of the demo plant. All the Resource and CPS instances belong to the plant model Plant1 (the relationships in this case have not been reported, to keep the diagram tidy). The user wants to perform two different simulations, one for each root resource. For this reason, he/she sets up two SimulationScenario instances: MD-DESScenario1 and DESScenario2. Each one refers to a different root resource. The former is a multi-disciplinary scenario of the DemoPlantNew that will use a combination of a DES model and an Energy Consumption model, while the latter represents a simple DES-only scenario of the original DemoPlant. These scenarios are aggregated in a Project instance (BendingCellProject) that belongs to Plant1 and is meant to compare the performance of the plant using two different setups of the bending cell. For DESScenario2, simulation results (Result2) are already available.
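The object graph just described can be approximated with plain Python objects; the constructor shapes and attribute names are assumptions made for this sketch, not the model's actual serialization.

```python
from dataclasses import dataclass, field

@dataclass
class SimModel:
    id: str          # unique within its SimulationScenario

@dataclass
class SimulationScenario:
    id: str
    root_resource: str
    models: list = field(default_factory=list)

@dataclass
class Project:
    id: str
    scenarios: list = field(default_factory=list)

@dataclass
class Plant:
    id: str          # unique within the overall framework deployment
    projects: list = field(default_factory=list)

# Multi-disciplinary scenario on the redesigned line: a DES model and an
# energy model are meant to run concurrently in the Simulation Framework.
md_scenario = SimulationScenario("MD-DESScenario1", "DemoPlantNew",
                                 [SimModel("DESModel"),
                                  SimModel("EnergyModel")])
# DES-only scenario on the original line.
des_scenario = SimulationScenario("DESScenario2", "DemoPlant",
                                  [SimModel("DESModel")])

project = Project("BendingCellProject", [md_scenario, des_scenario])
plant = Plant("Plant1", [project])
```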
In the following, the classes of this part of the model are reported, with a particular focus on the validation points that have been reviewed by experts. To describe this part of the model, each class is treated separately, and clusters of functional areas have been created for simplicity. All attributes, cardinality indications and relationships are described with respect to the single entity and in the general data model perspective.
1 ISO 14649. http://www.iso.org/iso/catalogue_detail?csnumber=34743
10.3.7.2 Workpiece
The Workpiece class (Figure 10.11) represents the part or product that needs to be machined, assembled or disassembled. Each schedule realizes at least one workpiece, but it may also realize different product variants, with various features. Each product variant is a different instantiation of the class Workpiece, which extends the IdentifiedElement class. Being a central entity of the data model, the workpiece also has a side that concerns production scheduling and product routing. Manufacturing methods and instructions are not contained in the workpiece information but are determined by the operations themselves.
10.3.7.3 ProgramStructure
ProgramStructure determines how the different operations are executed for
a specific work piece, i.e. in series or parallel (see also Figure 10.12).
10.3 Meta Data Model Reference 307
10.3.7.4 ProgramStructureType
Enumeration representing the allowed types of a ProgramStructure instance
(Figure 10.12).
10.3.7.5 MachiningExecutable
Machining executables initiate actions on a machine and need to be arranged in a defined order. They define all those tasks that cause a physical transformation of the workpiece. The MachiningExecutable class extends the IdentifiedElement class and is a generalization of machining working steps and machining NC functions, since both of these are special types of machining executables. Hierarchically, it is also a sub-class of program structures, being their basic units, as it constitutes the steps needed for the execution of the program structure. Starting from the machining executable, the connected classes are represented in Figure 10.12.
10.3.7.6 AssemblyExecutable
AssemblyExecutable also extends the IdentifiedElement class. AssemblyExecutables are specializations of program structures and generalizations of working steps or NC functions. As in the case of machining executables, they initiate actions on a machine and need to be arranged in a defined order: assembly executables include all those operations that allow creating a single product from two or more work pieces. Starting from the assembly executable, the connected classes are represented in Figure 10.12.
10.3.7.7 DisassemblyExecutable
DisassemblyExecutable is derived from IdentifiedElement. DisassemblyExecutables are generalizations of working steps or NC functions. As in the case of machining and assembly executables, they are also a specialization of program structures, being their basic units, as these three classes constitute the steps needed for the execution of the program structure. Thus, a program structure can be imagined as composed of one or more machining executables, one or more assembly executables and one or more disassembly executables. Disassembly executables also initiate actions on a machine and need to be arranged in a defined order: disassembly executables perform the opposite activity with respect to assembly, which means that from a single part they obtain more than one part. Starting from the disassembly executable, the connected classes are represented in Figure 10.12.
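How a ProgramStructure could order its executables can be sketched as follows; the enum values mirror the serial/parallel distinction of Section 10.3.7.3, while the class shapes and the duration logic are invented for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

class ProgramStructureType(Enum):
    SERIAL = "serial"
    PARALLEL = "parallel"

@dataclass
class Executable:
    """Stand-in for Machining-, Assembly- and DisassemblyExecutable."""
    id: str
    duration_s: float

@dataclass
class ProgramStructure:
    structure_type: ProgramStructureType
    executables: list = field(default_factory=list)

    def total_duration(self) -> float:
        # Serial steps add up; parallel steps overlap completely
        # (a deliberately simplistic scheduling assumption).
        durations = [e.duration_s for e in self.executables]
        if self.structure_type is ProgramStructureType.SERIAL:
            return sum(durations)
        return max(durations, default=0.0)

program = ProgramStructure(ProgramStructureType.SERIAL, [
    Executable("MachiningWS1", 30.0),      # machining working step
    Executable("AssemblyWS1", 20.0),       # joins two work pieces
    Executable("DisassemblyWS1", 10.0),    # extracts parts again
])
```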
10.3.7.8 MachiningNcFunction
MachiningNcFunction is an IdentifiedElement and a specialization of MachiningExecutable (Figure 10.13) that differs from the machining working step in that it is a technology-independent action, such as a handling or picking operation or a rapid movement. It has a specific purpose and given parameters. If needed, other parameters regarding speed or other technological requirements can be added as attributes.
10.3.7.9 MachiningWorkingStep
MachiningWorkingStep is an IdentifiedElement that is also a specialization of MachiningExecutable, the most important one for the purpose of this work. It is the machining process for a certain area of the workpiece.
10.3.7.10 MachiningWorkpieceSetup
MachiningWorkpieceSetup has a direct reference to the workpiece and is defined for each machining working step, since it defines the workpiece position for machining, which may change according to the position of the single machining feature on the workpiece. The reference to the manufacturing feature for which it is defined is also unique: a single workpiece setup refers to only one machining working step, which is meant to realize a defined feature.
10.3.7.11 MachiningSetupInstructions
For each single operation in time and space, precise setup instructions may be specified and connected to the workpiece setup, such as operator instructions and external material in the form of tables, documents and guidelines. The MachiningSetupInstructions class extends the IdentifiedElement class.
10.3.7.12 ManufacturingFeature
ManufacturingFeature is an IdentifiedElement that represents a characteristic of the workpiece which requires specific operations. For 3D simulation and Computer-Aided Design, it is fundamental to have the specifications of the physical characteristics: as shown in Figure 10.13, the workpiece manufacturing features are a relevant piece of information for modelling and simulation, as they determine the required operations.
10.3.7.13 MachiningOperation
MachiningOperation is an IdentifiedElement that specifies the contents of a
machining working step and is connected to the tool to be used and a set
of technological parameters for the operation. The tool choice depends on
the specific working step conditions (Figure 10.13). The more information is
specified for tool and fixture, the more limited the list of possible matches is.
Therefore, only the relevant, necessary values should be specified.
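The narrowing effect of additional specifications can be illustrated with a small filter over an invented tool catalogue; the attribute names and data are purely illustrative.

```python
# Each additionally specified parameter narrows the candidate list, which
# is why only relevant, necessary values should be given. Data invented.

tools = [
    {"type": "drill", "diameter_mm": 8, "coating": "TiN"},
    {"type": "drill", "diameter_mm": 8, "coating": None},
    {"type": "mill",  "diameter_mm": 8, "coating": "TiN"},
]

def match(candidates, **specified):
    """Keep only tools whose attributes equal every specified value."""
    return [t for t in candidates
            if all(t.get(k) == v for k, v in specified.items())]

drills = match(tools, type="drill")                     # two candidates
coated_drills = match(tools, type="drill", coating="TiN")  # one candidate
```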
10.3.7.14 MachiningTechnology
MachiningTechnology collects a set of parameters, such as feed rate or tool
reference point. The addition of new attributes would expand the possibilities
of technological specifications.
10.3.7.15 Fixture
The Fixture class is an IdentifiedElement that represents the fixtures required by machining operations, if any. Given that the same operation may be performed under different conditions, the choice of a fitting fixture is made for each single working step.
Figure 10.16 Class diagram for the security section of the Meta Data Model.
10.4 Conclusions
Multidisciplinary simulation is increasingly important for the design, deployment and management of CPS-based factories. Many challenges arise when exploiting the full potential of simulation technologies within Smart Factories, where a substantial technological barrier is the lack of digital continuity. Indeed, this chapter targets the fundamental issue of the lack of common modelling languages and rigorous semantics for describing interactions – physical and digital – across heterogeneous tools and systems, towards effective simulation applicable along the whole factory life cycle.
The data model described in this chapter is the result of the joint effort of different actors from European academia and industry. From the reference specifications presented in this chapter, which should be considered as a first release of a broader collaboration, a model has indeed been developed and subsequently validated within both an automotive industry use case and a steel carpentry scenario.
Acknowledgements
This work was achieved within the EU-H2020 project MAYA, which received
funding from the European Union’s Horizon 2020 research and innovation
programme, under grant agreement No. 678556.
References
[1] www.automationml.org, accessed on March 24, 2017.
[2] S. Weyer et al., "Towards Industry 4.0 – Standardization as the crucial challenge for highly modular, multi-vendor production systems," IFAC-PapersOnLine, vol. 48, no. 3, pp. 579–584, 2015.
[3] T. Baudisch, V. Brandstetter, J. C. Wehrstedt, M. Weiß, and T. Meyer, "Ein zentrales, multiperspektivisches Datenmodell für die automatische Generierung von Simulationsmodellen für die Virtuelle Inbetriebnahme," Tagungsband Automation 2017.
11
A Centralized Support Infrastructure (CSI) to Manage CPS Digital Twin, towards the Synchronization between CPS Deployed on the Shopfloor and Their Digital Representation
11.1 Introduction
The main purpose of the CSI is to manage CPS Digital Twins (DTs) allowing
the synchronization between CPS deployed on the shopfloor and their digital
representation. In particular, during the whole factory life cycle, the CSI will
provide services (via suitable API endpoints) to analyze the data streams
coming from the shopfloor and to share simulation models and results
among simulators.
In this chapter, we present the implementation of a distributed middleware developed within the frame of the MAYA European project, tailored to enable scalable interoperability between enterprise applications and CPS, with special attention paid to simulation tools. The proposed platform strives to be the first solution based on both the Microservices [1, 2] and Big Data [3] paradigms to empower shopfloor CPS along the whole plant life cycle and realize real-to-digital synchronization, ensuring at the same time the security and confidentiality of sensitive factory data.
11.2 Terminology
Shopfloor CPS – With the expression "Shopfloor CPS" we refer to Digital-Mechatronic systems deployed at shopfloor level. They are physical entities that intervene in various ways in the manufacture of a certain product. For the scope of this chapter, Shopfloor CPS (referred to as Real CPS or simply CPS) can communicate with each other and with the CSI.
CPS Digital Twin (or just Digital Twin) – In the smart factory, each
shopfloor CPS is mirrored by its virtual alter ego, called Digital Twin (DT).
The Digital Twin is the semantic, functional, and simulation-ready rep-
resentation of a CPS; it gathers together heterogeneous pieces of infor-
mation. In particular, it can define, among other things, Shopfloor CPS
performance specifications, Behavioral (simulation) Models, and Functional
Models.
Digital Twin is a composite concept that is specified as follows:
CPS Prototype (or just Prototype) – Chapter 12 proposes a meta-model that paves the way to a semantic definition of CPS within the CSI. Following the Object-Oriented Programming (OOP) approach, we distinguish between a Prototype (or class) and its derived instances. A CPS prototype is a model that defines the structure and the associated semantics for a certain class of CPS.
11.3 CSI Architecture
The CSI follows the microservice architectural style: each microservice exposes a small set of functionalities and runs in its own process, communicating with other services mainly via HTTP resource APIs or messages. Four groups of services can be identified and are addressed in what follows.
1 https://github.com/Netflix/zuul/wiki
2 https://github.com/Netflix/ribbon
3 https://martinfowler.com/bliki/CircuitBreaker.html
4 www.mysql.com
5 https://github.com/Netflix/eureka/wiki
Configuration Server
The main task of this service is to store properties files in a centralized way for all the microservices involved in the CSI. This is a task of paramount importance in many scenarios involving the overall life cycle of the platform. Among the benefits of having a configuration server, we mention here the ability to change a service's runtime behavior in order to, for example, perform debugging and monitoring.
Monitoring Console
This macro-component, made of three services, implements the so-called ELK stack (i.e., Elasticsearch, Logstash, and Kibana) to provide log collection, analysis, and monitoring services. In other words, logs from every microservice are collected, stored, processed, and presented in graphical form to the CSI administrator. A query language is also provided to enable the administrator to interactively analyze the information coming from the platform.
FMService
This service communicates with the Big Data platform; its main task is to submit the Functional Models to Apache Spark and to monitor, cancel, and list their executions.
Updater MS
This service is designed to interact with the Big Data platform (in particular
with Apache Cassandra) to retrieve data generated by the Functional Models.
Simulations MS
This service is in charge of managing the persistence of simulation-related data within a suitable database.
Designing and setting up a Big Data environment, here in the form of the Lambda Architecture (Figure 11.2), is a complex task that starts with making some structural decisions. In what follows, some high-level considerations about the technological choices made are presented:
6 https://thrift.apache.org/
7 http://oryx.io/
11.4 Real-to-Digital Synchronization Scenario
Figure 11.3 describes in UML the main actions carried out by the CPS and by the CSI in the scenario at hand. In particular, the CPS connects by logging in on the platform; at that point, it is associated with a WebSocket endpoint and it can start sending data up to the CSI. The CSI, on the other hand, launches the execution of the Functional Model associated with the CPS.
A deeper insight is given by Figure 11.4, in which the interactions among services within the CSI are highlighted. The CPS connects to the CSI via the API Gateway. In the current version of the CSI, the Gateway is in charge of checking whether the CPS asking to be served is legitimate (it must have been created within the platform beforehand). To do this, the Gateway interrogates the Models MS service. The Gateway then creates a WebSocket endpoint for the CPS, redirects the incoming workload to Kafka, and notifies the Orchestrator. This, in turn, is in charge of running the Functional Model(s) associated with the CPS. The Functional Models are executed within the Big Data platform (in the Apache Spark cluster) and in particular they use Kafka not only as the source of data but also as the endpoint where to post the results of the computation. Meanwhile, the Orchestrator has scheduled a recurrent job on the Scheduler that picks up the updates from the output Kafka topic and uses them to update the nameplate values of the CPS Digital Twin.
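The synchronization round just described can be mimicked in-process with a toy sketch, where queues stand in for the Kafka topics, a function plays the Functional Model, and a plain dict plays the Digital Twin's runtime values; every name here is illustrative and not part of the CSI API.

```python
from queue import Queue

shopfloor_topic: Queue = Queue()   # data pushed by the CPS via WebSocket
output_topic: Queue = Queue()      # results posted by the Functional Model

digital_twin = {"id": "CPS4", "avg_temp": None}

def functional_model() -> None:
    """Aggregates the raw stream and posts the result (Spark's role)."""
    readings = []
    while not shopfloor_topic.empty():
        readings.append(shopfloor_topic.get())
    if readings:
        output_topic.put(sum(readings) / len(readings))

def scheduler_job() -> None:
    """Recurrent job: picks updates from the output topic and refreshes
    the Digital Twin's runtime values."""
    while not output_topic.empty():
        digital_twin["avg_temp"] = output_topic.get()

# One synchronization round: the CPS emits readings, the Functional
# Model aggregates them, the scheduled job updates the Digital Twin.
for reading in (40.0, 42.0, 44.0):
    shopfloor_topic.put(reading)
functional_model()
scheduler_job()
```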
During the whole process, security is present in the form of SSL connections, CPS login via OAuth2, and service-to-service authorization and authentication. We outlined the real-to-digital synchronization in Figure 11.5, wherein the reader can spot all the players present in the sequence diagram plus the UAA Service, which is in charge of the authentication and authorization tasks. The actions performed by this service are pervasive and would have made the sequence diagram unintelligible.
11.5 Enabling Technologies
11.5.1 Microservices
The Microservices approach proposes to have numerous small code bases managed by small teams, instead of a giant code base that eventually every developer touches, which makes the process of delivering a new version of the system more complex, slow, and painful.
In a nutshell, the microservice architecture is the evolution of the classical Service-Oriented Architecture (SOA), in which the application is seen as a suite of small services, each devoted to a single activity. Each microservice exposes an atomic functionality of the system and runs in its own process, communicating with other services via HTTP resource APIs (REST) or messages.
The adoption of the microservice paradigm provides several benefits, but also presents inconveniences and new challenges. Among the benefits of this architectural style, the following must be enumerated:
Agility – Microservices fit into the Agile/DevOps development methodology [2], enabling businesses to start small and innovate fast by iterating on their core products without incurring substantial downtime. A minimal version of an application, in fact, can be created in a shorter time, reducing time-to-market and up-front investment costs and providing an advantage over competitors. Future versions of the application can be realized by seamlessly adding new microservices.
Isolation and Resilience – Resiliency is the ability to self-recover after a failure. A failure in a monolithic application can be a catastrophic event, as the whole platform must recover completely. In a microservice platform, instead, each service can fail and heal independently, with a possibly reduced impact on the overall platform's functionalities. Resilience is strongly dependent on compartmentalization and containment of failure, namely isolation. Microservices can be easily containerized and deployed as single processes, thus reducing the probability of cascading failures of the overall application. Isolation, moreover, enables reactive service scaling and independent monitoring, debugging, and testing.
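Containment of failure between services is commonly realized with the circuit-breaker pattern (the pattern referenced via martinfowler.com in the footnotes above); a minimal sketch, with invented thresholds and API:

```python
class CircuitBreaker:
    """Toy circuit breaker: after repeated downstream failures it
    "opens" and callers fail fast instead of piling up on a dead service."""

    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.state = "closed"

    def call(self, func, *args):
        if self.state == "open":
            # Fail fast: isolate the broken downstream service.
            raise RuntimeError("circuit open: downstream service isolated")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state = "open"
            raise
        self.failures = 0
        return result

def flaky_service():
    raise ConnectionError("service down")

breaker = CircuitBreaker(failure_threshold=2)
for _ in range(2):
    try:
        breaker.call(flaky_service)
    except ConnectionError:
        pass
# After two failures the breaker is open and subsequent calls fail fast.
```

Production implementations (e.g. the Hystrix-style breakers used with the Netflix stack cited in the footnotes) add timeouts and a "half-open" probing state; those refinements are omitted here.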
Each container has access to its own file system and libraries, but it shares the underpinning kernel with other containers. This approach is defined para-virtualization because, unlike virtualization systems that emulate hardware for whole virtual machines to run atop, there is no need to emulate anything. Moreover, Docker does not depend on specific virtualization technologies and, therefore, it can run wherever a Linux kernel is available. The overall approach is lightweight with respect to more traditional hypervisor-based virtualization platforms, allowing for a better exploitation of the available resources and for the creation of faster and more reactive applications. In the light of these considerations, Docker fits perfectly with microservices, as it isolates containers to one process and makes it simple and fast to handle the full life cycle of these services.
The current version of the CSI is provided with a set of scripts for the
automatic creation of Docker images for each of the services involved
in the platform. Deployment scripts, which rely on the Docker Compose
tool, are provided as well to streamline deployment on a local
testbed. A similar approach can nonetheless be used to execute the platform
on the major cloud container services (e.g. Amazon ECS, Azure Container Service).
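As a rough illustration of such a Docker Compose deployment descriptor (the service names and images below are hypothetical, not the actual CSI services), each microservice is declared as its own container:

```yaml
# Illustrative only: hypothetical services, not the CSI's actual ones.
# Each microservice runs in its own container, one process per container.
version: "3"
services:
  message-broker:
    image: rabbitmq:3
  data-store:
    image: mongo:4
  simulation-api:
    build: ./simulation-api     # image built from a local Dockerfile
    depends_on:
      - message-broker
      - data-store
    ports:
      - "8080:8080"             # expose the API on the local testbed
```

A single `docker-compose up` then starts the whole platform on a local testbed, which is the kind of streamlining the CSI deployment scripts aim at.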
11.6 Conclusions
This chapter presented the Centralized Support Infrastructure built within
the H2020 MAYA project: an IoT middleware designed to support simulation
Acknowledgements
The work hereby described has been achieved within the EU-H2020 project
MAYA, which has received funding from the European Union’s Horizon 2020
research and innovation program, under grant agreement No. 678556.
Building an Automation Software Ecosystem on the Top of IEC 61499
12.1 Introduction
Technological innovation is the main engine behind economic development
that aims at supporting companies in adapting to the rhythm of the market
dictated by globalization [1, 2]. According to Stal [3], innovation is the
development of new methods, devices or machines that could change the way in
which things happen. The fourth industrial revolution, pursuing the extensive
adoption of innovative technologies and systems, increasingly impacts almost
every industry. According to Bharadwaj et al. [4], “digital technologies
(viewed as combinations of information, computing, communication, and
connectivity technologies) are fundamentally transforming business strate-
gies, business processes, firm capabilities, products and services, and key
interfirm relationships in extended business networks”.
The automation industry has historically played a leading role in experimenting
with and pushing this transformation, with technological and process-related
innovation being assimilated all-inclusively across the whole value network [5].
However, the characteristics that the automation market has acquired in the last
decades, where complex value networks support standard-based technologies
relying on legacy systems, mean that purely technological advancements are no
longer enough to satisfy the need for innovation expressed by the market. As
proposed in the Technology-Organization-Environment Framework [6], the
propensity of companies to adopt innovations indeed depends not only on the
technology per se, but also on the technological context, the organizational
context, and the environmental context. The
technological context includes the internal and external technologies that are
relevant to the firm. The organizational context refers to the characteristics
and resources of the firm. The environmental context includes the size and
structure of the industry, the firm’s competitors, the macroeconomic context,
and the regulatory environment [6–8]. These three elements present both
constraints and opportunities for technological innovation and influence the
way a firm sees the need for, searches for, and adopts new technology [9].
In this context, the European initiative Daedalus supports companies in
facing the opportunities and challenges of Industry 4.0, starting from
overcoming the rigid hierarchical levels of the automation pyramid. This is
done by supporting CPS orchestration in real time through the IEC-61499
automation language, to achieve complex and optimized behaviors impos-
sible with other current technologies. To do so, it proposes a methodology
and the related supporting technologies that, integrated within an “Industry
platform1 ” and brought to the market by means of a Digital Marketplace, are
meant to foster the evolution of the automation value network. The desired
evolution is expected not only to impact how companies manage their
production systems, providing extended functionalities and greater flexibility
across the automation pyramid, but also to broadly impact the automation
ecosystem, creating and/or improving connections, relationships and value
drivers among the automation stakeholders.
In the following sections, the principal characteristics of the current
automation domain are analysed by focussing on the stakeholders (hereinafter
complementors) that populate the ecosystem and on the structure of the
relationships among them. The Daedalus platform is therefore presented by
highlighting, beyond technological aspects described in Chapter 5, the value
exchanges managed by the digital marketplace. The impact that the creation
of such an ecosystem has on the complementors is finally discussed through
an analysis of the to-be business networks.
1 An industry platform is defined as: foundation technologies or services that are essential
for a broader, interdependent ecosystem of businesses [17, 18].
describing its characteristics, its players and the relation that they have
established over time.
overview of the current automation value chains with a particular focus on:
(i) Type of relation, (ii) Distribution Channels and (iii) Value-proposition.
These elements, drawn in boxes among complementors, are not intended to be
univocal: for example, the value proposition of one player can vary considerably
according to the customer it serves.
The main interactions that can exist among the automation players are
presented here with the aim not of covering all possible interactions
(the biggest players frequently group more than one of the proposed
stakeholders’ functions under their umbrella; similarly, companies frequently
establish partnerships exposing a single contact point with the customer),
but of describing the most common ones. The resulting schema highlights the
linearity of the current automation ecosystem, where automation solutions,
i.e. manufacturing lines, are the result of a “chronological” (even if very
complex) interaction among players that goes from the granularity of low
intelligence components, to the high integration and desired smartness of full
manufacturing lines.
In the existing value chains, the automation solution to be purchased is
still selected merely by considering its hardware elements. Despite the great
commitment to software development for creating integrated and versatile
automation solutions, and despite the high impact that software has in terms
of costs and implementation effort, software is still not a primary decision-
making parameter. The choice between one solution and another depends
first on the hardware (the component, the control system, the machine, the
equipment, the production line); only in the second instance is the software
to integrate, coordinate, and/or use the entire system selected. For this reason,
the schema does not underline the relevance of software, which is considered
a player in the background.
Their main customers are E&MBs, and sensors, drives, panels, I/O clamps,
etc. are typical “deliverables”. Production is usually oriented towards a
make-to-stock approach in large numbers, aiming at a wide application scope.
Their business model mainly focuses on the premium segment and/or on
customization, where it is possible to obtain the largest profits, with
a strong emphasis on their home country [10]. CSs usually try to grow
through joint ventures and cooperation, exploring adjacent businesses also
with horizontal integration.
It is also necessary to consider that customers can be both plant owners, who
directly purchase the machines and equipment, and SIs, who purchase
them for a third party. Another element influenced by E&MBs is machine
integration. Some E&MBs provide this service, while others provide only
the product and leave the integration to a third-party actor or to the
customer. Depending on the customer’s needs and the type of equipment and
machine, builders can produce basic, highly standardized, high-volume machines
or customized ones, involving the customer in the development through close
collaboration between customer and supplier.
transaction costs and facilitating exchanges that otherwise would not have
occurred [11]. The main value that platforms create is the reduction of
the barriers of use for their customers and suppliers. A platform encourages
producers and suppliers to provide content, removing hurdles and constraints
that are part of traditional businesses. As well as for suppliers and producers,
platforms create significant value for consumers, providing ways to
access products and services that they could not even have imagined before.
Platforms allow users to trust strangers, letting them into their rooms
(Airbnb), riding in their cars (Uber) and using their applications (phone
and PC marketplaces). Platforms vouch for and guarantee users’ reliability
and quality. New sources of supply can introduce undesirable content if not
filtered, but thanks to the reliability and quality mechanisms that platforms
integrate, this issue becomes irrelevant.
The platform developed within the Daedalus project follows this trend by
extending platform logics to the automation domain. It exploits as a
foundation the new evolution of the IEC 61499 standard, which provides the
technology on top of which additional value drivers for the automation
complementors can be set up. The standard allows: (i) the design
and modelling of distributed control systems and the execution of applications
on distributed resources (not necessarily PLCs); (ii) the creation of portable
and interchangeable data and models and the re-use of code; (iii) the
seamless management of the communication between the different function
blocks of an application (independently of the hardware resource they run
on); and (iv) the decoupling of the elements of the CPS (its behavioural models)
from the physical devices, allowing them to reside (and be designed, used and
updated) in the Cloud, within the “cyber-world”, where all the other software
tools of the virtualized automation pyramid can access them and exploit their
functionalities.
Among others, code modularity, reusability and the reconfigurability of
systems are the main features advertised as practical benefits of
applying this standard [12]. The final result is the ability to design more
flexible and competitive automation systems, providing the functionality to
combine hardware components and software tools of different vendors within
one system, as well as the reuse of code [13, 14].
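IEC 61499 function blocks are specified in the standard's own graphical/XML notation, not in a general-purpose language; purely as an illustration of the event-driven, portable function-block concept described above, a hypothetical Python sketch of the idea might look like this (class and event names are illustrative, except the REQ/CNF event names, which follow the standard's convention):

```python
# Illustrative sketch, not IEC 61499 itself: events trigger an internal
# execution function that reads input data, writes output data and emits
# output events, and blocks can be wired together independently of the
# hardware they are deployed on.

class FunctionBlock:
    """A minimal event-driven function block abstraction."""
    def __init__(self, name):
        self.name = name
        self.inputs = {}          # data inputs
        self.outputs = {}         # data outputs
        self.listeners = []       # (event_name, target_block) connections

    def connect(self, event_name, target_block):
        """Wire an output event of this block to another block's input."""
        self.listeners.append((event_name, target_block))

    def emit(self, event_name):
        """Propagate an output event, with output data, to connected blocks."""
        for ev, target in self.listeners:
            if ev == event_name:
                target.receive(event_name, dict(self.outputs))

    def receive(self, event_name, data):
        self.inputs.update(data)
        self.execute(event_name)

    def execute(self, event_name):
        raise NotImplementedError

class Doubler(FunctionBlock):
    """Example block: doubles its 'x' input on any input event."""
    def execute(self, event_name):
        self.outputs["x"] = self.inputs.get("x", 0) * 2
        self.emit("CNF")  # confirmation event, mirroring IEC 61499 naming

# Two blocks chained: the same block code runs wherever it is deployed.
a, b = Doubler("A"), Doubler("B")
a.connect("CNF", b)
a.inputs["x"] = 3
a.execute("REQ")   # request event
print(b.outputs["x"])  # 12
```

The point of the sketch is the decoupling: block `B` neither knows nor cares where block `A` runs, which is what makes code re-use and distribution across heterogeneous resources possible.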
The Daedalus platform is therefore meant to bring together automation
complementors and give them the infrastructural support (technologies,
services and skills) essential for systems improvement through CPS
integration and orchestration [15]. This is done by suitably adapting the
functional model already implemented successfully within the IT world
for mobile applications, developing a digital place (i.e. a marketplace),
The designed data model has been divided into five sections, each
corresponding to one of the five packages of the structure presented in the
figure above:
1. User Characterization package: it contains all data entities related to
the user description and characterization. This part of the data model
deals with the representation of the User, be it a developer (Developer
class), a manufacturer (Manufacturer class) or a customer (Customer
class), and all the related information.
2. Product Description package: it contains the data objects needed to
describe the products (hardware, applications and services) hosted by
the Marketplace. This package groups the set of entities needed to
formalize the data structure for describing the hosted products in terms
of features, possible relationships with other products, product contract
configurability, and product validation and certification.
3. Contract Definition package: it contains all entities needed to
formalize the possible configuration options of the contracts that regulate
the economic aspects between the Marketplace and the customers
regarding the use of the products.
4. Validation and Certification package: this part of the data model
formalizes those entities meant to support the validation of the
submitted products and the optional product certification.
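The package structure above can be sketched with a few Python data classes. Only the class names User, Developer, Manufacturer, Customer and the product/contract concepts come from the text; every field shown is a hypothetical illustration, not the project's actual schema.

```python
# Hypothetical sketch of the marketplace data model; field names are
# illustrative assumptions, not the Daedalus schema.
from dataclasses import dataclass, field

@dataclass
class User:                       # User Characterization package
    user_id: str
    name: str

@dataclass
class Developer(User):
    published_products: list = field(default_factory=list)

@dataclass
class Manufacturer(User):
    company: str = ""

@dataclass
class Customer(User):
    purchases: list = field(default_factory=list)

@dataclass
class Product:                    # Product Description package
    product_id: str
    kind: str                     # "hardware", "application" or "service"
    related_products: list = field(default_factory=list)
    certified: bool = False       # Validation and Certification package

@dataclass
class Contract:                   # Contract Definition package
    contract_id: str
    product: Product
    customer: Customer
    fee_percentage: float         # the provider pays a percentage fee

dev = Developer(user_id="d1", name="Acme Dev")
app = Product(product_id="p1", kind="application")
dev.published_products.append(app)
print(len(dev.published_products))  # 1
```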
12.3.2.1 Customers
The main relationships that customers have with the marketplace are: the
purchase of products/services, agreements with product/service providers
mediated by the marketplace, and the rating of the delivered products/services.
To this
end, customers are meant to start the interaction with the marketplace by
performing a registration that allows them to store and retrieve data related to
their buying experience. By browsing the hardware and software catalogues,
customers can select the product/service they are interested in and visualize
the software/hardware products or services associated to the selected product.
Figure 12.4 High-level definition of marketplace interactions with main Daedalus stake-
holders.
The selection of one product enables the customer to access the contractual
area, where the contract between the customer and the marketplace is
agreed, and to invoke the payment service. In its interactions with the
marketplace, the customer is not charged for the services provided: it is
always the product/service provider that pays a percentage fee.
Once the purchase is completed, the customer can exit the marketplace and
will then receive the products/services according to the modalities agreed
in the contract. The customer will receive notifications about software
updates, in order to improve the customer experience and keep the
hardware/software functionalities up to date.
sold, the marketplace will also provide specific contract templates supporting
the characteristics of an application sale (in-app purchase, period-based
licence, etc.).
The marketplace will also be the place where developers can find, in
dedicated spaces, the quality procedures and the SDK required to develop
applications compliant with the ecosystem. These services are provided at no
additional cost to developers.
solution, they have a direct vision of their main needs. In this context, SIs are
realizing, more than any other automation player, that customers require
more flexible and reconfigurable solutions, capable of increasing production
performance and providing more advanced functionalities.
On the other hand, in the current automation environment, SIs have a
marginal role in adding value for customers and have low technological
competences. They usually integrate different components, equipment and
machines to provide functional and ready-to-use solutions. They mainly
perform the operative part, which does not allow them to fully cover customer
needs, but only to satisfy their functional requirements.
By adhering to a platform-based ecosystem, SIs will no longer be simple
assemblers, but will have the opportunity to add relevant value to
the automation solution. This could be done by developing SW for their
customers and proposing dedicated solutions that add functionalities,
improve performance and manage orchestration and distributed architectures
across the different factory levels. SIs therefore have the opportunity not
only to deliver functional solutions meeting customer requirements, but also
to add functionalities to the systems, increasing the value of the overall
proposed solution. Moreover, thanks to the reduced hardware dependence, code
re-usability and modularity achieved through the adoption of IEC 61499 logics,
SW use can be extended to different contexts and different customer
applications.
The first opportunity for SIs will be the update of existing legacy automation
systems. In the first adoption of platform principles, CPS-izers (systems
meant to act as adapters between legacy and IEC 61499 technologies)
play a fundamental role, allowing SIs to transform solutions tied to old
legacy systems into compliant ones. The higher integrability of components,
equipment and machines will allow SIs to reduce the effort needed to provide
ready-to-use solutions and to ease the integration of new functionalities by
developing dedicated SW. This becomes a relevant activity that is expected
to be mainly internalized by SIs. Thanks to the platform and the related
marketplace, they will have the opportunity to re-use libraries and algorithms
developed by third-party developers to improve or speed up the development
of their SW solutions.
All these elements are meant to increase the value proposed to customers,
allowing solutions’ functionalities to be extended and more resources to be
dedicated to the development of high-level applications and SW, while
reducing the effort and resources spent on the integration of components,
equipment and machines and on basic-function programming.
It is necessary to consider that SIs are the players that can achieve the
highest benefits from a platform-based automation ecosystem, but they are
also required to make the main transition effort. In this kind of domain,
the SI becomes a more advanced player, requiring more technological
competences: no longer a simple consultant, but also a product (SW)
developer. SIs are expected to expand their know-how and competences
from the low operative level to higher ones, with the objective of providing
more added value to their customers, not only through integration, but also
through the improvement of performance and functionalities, maintained
throughout the whole solution’s life cycle.
12.3.3.2 Opportunities for equipment and machines builders
E&MBs, by adhering to the ecosystem and adopting the related technologies,
have the opportunity to release more advanced products, able to work in
flexible and orchestrated production systems. E&MBs can produce complex
manufacturing systems as aggregations of CPS, focusing their effort on the
assembly and orchestration of the automation tasks of these composing
elements. The adoption of platform technologies allows an E&MB to
develop products that can take advantage of all the IEC 61499-compliant
components (control software, applications and services).
For E&MBs, the platform will become a relevant resource, being one of
the structural technologies on which their products will be designed and
produced. This resource should be managed with particular attention, in
order to reap all the possible benefits and to maximize products’ performance.
12.3.3.3 Opportunities for components suppliers
The platform-based ecosystem also generates opportunities for CSs, which
must become capable of releasing more functional, intelligent and independent
components. Components can be designed and developed as more complex
elements (such as CPS), already equipped with on-board distributed intelligence.
A CS should not focus only on reliability, quality, price and lead time: it
should innovate its products by adding functionalities. CSs will therefore
have the opportunity to provide not only hardware, but also SW, adding
value to their solutions, increasing revenue opportunities, and creating a
closer relationship with their customers.
12.4 Conclusions
In the last decades, the automation domain has been characterized by an
ecosystem ruled by legacy technologies, where the dominant role of the
chosen hardware solutions strongly constrains reusability, upgradability and
orchestration of manufacturing systems. This situation led to the rise of
Acknowledgements
The work hereby described has been achieved within the EU-H2020
project DAEDALUS, that has received funding from the European Union’s
Horizon 2020 research and innovation programme, under grant agreement
No. 723248.
References
[1] W. B. Arthur, The Nature of Technology - What It Is and How It Evolves,
2011.
[2] F. B. Mussi and K. C. Canuto, “Percepção dos usuários sobre os atributos
de uma inovação,” REGE Rev. Gestão, vol. 15, pp. 17–30, 2008.
[3] R. da S. Pereira, I. D. Franco, I. C. dos Santos, and A. M. Vieira, “Ensino
de inovação na formação do administrador brasileiro: contribuições para
gestores de curso,” Adm. Ensino e Pesqui., vol. 16, no. 1, p. 101, March
2015.
[4] A. Bharadwaj, O. A. El Sawy, P. A. Pavlou, and N. Venkatraman, “Dig-
ital Business Strategy: Toward a Next generation of insights,” vol. 37,
no. 2, pp. 471–482, 2013.
[5] M. Müller-Klier, “Value Chains in the Automation Industry.”
[6] R. Depietro, E. Wiarda, and M. Fleischer, “The context for change:
Organization, technology and environment,” in The processes of tech-
nological innovation, Lexington, Mass, pp. 151–175, 1990.
[7] J. Tidd, “Innovation management in context: environment, organization
and performance,” Int. J. Manag. Rev., vol. 3, no. 3, pp. 169–183,
September 2001.
[8] J. Tidd, J. Bessant, and K. Pavitt, Integrating Technological, Market and
Organizational Change. John Wiley & Sons Ltd, 1997.
[9] Z. Arifin and Firmanzah, “The Effect of Dynamic Capability to Tech-
nology Adoption and its Determinant Factors for Improving Firm’s
Performance; Toward a Conceptual Model,” Procedia - Soc. Behav. Sci.,
vol. 207, pp. 786–796, 2015.
[10] McKinsey & Company, “How to succeed: Strategic options for European
Machinery,” 2016.
[11] P. Muñoz and B. Cohen, “Mapping out the sharing economy: A configu-
rational approach to sharing business modeling,” Technol. Forecast. Soc.
Change, 2017.
[12] V. Vyatkin, “IEC 61499 as Enabler of Distributed and Intelligent
Automation: State-of-the-Art Review,” IEEE Trans. Ind. Informatics,
vol. 7, no. 4, pp. 768–781, November 2011.
[13] M. Wenger, R. Hametner, and A. Zoitl, “IEC 61131-3 control applica-
tions vs. control applications transformed in IEC 61499,” IFAC Proc.
Vol., vol. 43, no. 4, pp. 30–35, 2010.
Migration Strategies towards the Digital Manufacturing Automation
Today, industries are facing new market demand and customer requirements
for higher product personalization, without jeopardizing the low level of
production costs achieved through mass production. The joint pursuit of these
objectives of personalization and competitiveness on costs is quite difficult for
manufacturers that have traditional production systems based on centralized
automation architectures. Centralized control structures, in fact, do not guar-
antee the system adaptability and flexibility required to achieve increasing
product variety at shorter time-to-market. In order to avoid business failure,
industries need to quickly adapt their production systems and migrate towards
novel production systems characterized by digitalization and robotization.
The objective of this chapter is to illustrate a methodological approach
to migration that supports decision makers in addressing the transforma-
tion. The approach encompasses the initial assessment of the current level
of manufacturing digital maturity, the analysis of priorities based on the
business strategy, and the development of a migration strategy. Specifically,
this chapter presents an innovative holistic approach to develop a migration
strategy towards the digital automation paradigm with the support of a set of
best practices and tools. The application of the approach is illustrated through
an industrial case.
13.1 Introduction
In recent years, a lot of research has been devoted to the improvement of
control automation architectures for production systems. The latest advances
in manufacturing technologies converge under the Industry 4.0 paradigm
to transform and readapt the traditional manufacturing process, in terms of
automation concepts and architectures, towards the fourth industrial
revolution [1]. The increasing frequency of new product introductions
and new technological developments pushes industries to become more
competitive, efficient and productive in order to meet volatile market demands
and customer requirements.
The Industry 4.0 initiative promotes the digitalization of manufacturing in
order to enable a prompt reaction to continuously changing requirements [2].
The envisioned digitalization is supported by innovative information and
communication technologies (ICT), Cyber-Physical Systems (CPS), Inter-
net of Things (IoT), Cloud and Edge Computing (EC), and intelligent
robots. The control architecture is a key factor for the final performance
of these application systems [3]. Therefore, new automation architectures
are required to enhance flexibility and scalability, enabling the integration
of modern IT technologies and, consequently, increasing efficiency and
production performance.
For this purpose, within the last years, many decentralized control
architectures have been developed in different research projects, highlighting
the benefits of decentralized automation in terms of flexibility and
reconfigurability of heterogeneous devices [4]. However, after years of
research, reality today shows the dominance of production systems based
on the traditional approach, i.e. the automation pyramid based on the ISA-95
standard, characterized by a hierarchical and centralized control structure.
The difficulty in adopting new architectural solutions can be summarized
in two main problems:
• Enterprises that are reluctant to make the decision to change;
• Projects that fail during the implementation or take-up.
Manufacturers are reluctant to adopt decentralized manufacturing
technologies due to their past large investments in their current production
facilities, whose lifetime is long and for which the required changes are
therefore sporadic and limited. In addition, methods and guidelines on how to
integrate, customize, and maintain the new technologies within the existing ICT
infrastructure are unclear and often incomplete. Nevertheless, with the advent
of future technologies and with current market requirements, changes during
the whole life cycle of the devices and services are necessary.
These changes lead to the transformation of the existing production
systems and their migration towards the digital manufacturing of the Industry
4.0 paradigm. The term “migration” refers to the process of changing from an
existing condition of a system towards a desired one. Here, specifically,
migration is considered as a progressive transformation that moves the
existing production system towards digitalization. Migration strategies
are thus essential to support the implementation of digital technologies in the
manufacturing sector and the decentralization of the automation pyramid, in
order to achieve a flexible manufacturing environment based on rapid and
seamless processes in response to new operational and business demands.
Aligned to this vision, the aim of the EU funded project FAR-EDGE
(Factory Automation Edge Computing Operating System Reference Imple-
mentation) [5] is twofold: it intends not only to virtualize the conventional
automation pyramid, by combining EC, CPS and IoT technologies, but
also to mitigate manufacturers’ conservatism in adopting these new tech-
nologies in their existing infrastructures. To this end, it aims at providing
them with roadmaps and strategies to guarantee a smooth and low-risk
transition towards the decentralized automation control architecture based
on FAR-EDGE solutions. Indeed, migration strategies are expected to play
an essential role in the success of the envisioned virtualized automation
infrastructure. To this end, FAR-EDGE is studying and providing smooth
migration path options from legacy centralized architectures to the emerging
FAR-EDGE-based ones.
This chapter aims at describing the migration approach developed within
the FAR-EDGE project. After this brief introduction, the state-of-the-art
migration processes, change management approaches and maturity models
are presented in Section 13.2, providing the founding principles of the FAR-
EDGE migration approach presented in Section 13.3. An industrial use case
application scenario is presented in Section 13.4, which is assessed and ana-
lyzed in Section 13.5, providing an example of migration path alternatives.
Finally, Section 13.6 gives an outlook and presents the main conclusions.
resistance, Lewin proposed a three-step process: (i) unfreezing, (ii) moving,
and (iii) freezing. The first step aims at destabilizing the equilibrium
corresponding to the status quo, so that current behaviours become uncomfortable
and can be discarded, i.e. unlearnt, opening up space for new behaviours. In
practice, unfreezing can be achieved by provoking some emotional response, such
as anxiety about the survival of the business. The second step consists in a
process of searching for more acceptable behaviours, in which individuals
and groups progress in learning; the third step aims at consolidating the
conditions of a new quasi-stationary equilibrium [15].
Lewin’s work, by providing insight about the mechanisms that rule human
groups and operate within the organizations, and by delivering guidance
about change management strategies, has opened the way for subsequent
studies. In recent decades, several frameworks and approaches have been
defined in order to successfully undertake transformation processes and
overcome possible resistance. Starting from the analysis of why change
efforts fail, Kotter [16] has identified a sequence of eight steps for enacting
changes in organizations: (i) creating a sense of urgency, e.g., by attracting
the attention on potential downturn in performances or competitive advan-
tage and discussing the dramatic implications of such crisis and timely
opportunities to be grasped; (ii) building a powerful guiding coalition, i.e.,
forming a team of people with enough power, interest and capability to work
together for leading the change effort; (iii) creating a vision, i.e., building a
future scenario to direct the transformation; (iv) communicating the vision,
including teaching by the example of the new behaviours of the guiding
coalition; (v) empowering others to behave differently, also by changing the
systems and the architectures; (vi) planning actions with short-term returns,
i.e., limited changes that bring visible increases in performance and, through
acknowledgment and rewarding practices, can be used as examples; (vii)
consolidating improvements, developing policies and practices that reinforce
the new behaviours; and (viii) institutionalizing new approaches, by struc-
turing and sustaining the new behaviours. Another quite famous framework
for managing changes is the Prosci ADKAR Model [17], which suggests to
pursue changes through a sequence of five steps corresponding to the initial
letters of ADKAR, i.e. (i) awareness about the need for change; (ii) desire
to support the change; (iii) knowledge about how to change; (iv) ability
to demonstrate new behaviours and competencies; and (v) reinforcement to
stabilize the change.
The focus of some researchers and practitioners has shifted from
episodic to continuous change.
[Figure: The five CMM maturity levels as a staircase: Initial (1); Repeatable (2); Defined (3), a standard, consistent process; Managed (4), a predictable process; Optimizing (5), a continuously improving process.]
The practices that describe the key process areas are organized by com-
mon features. These are attributes that indicate whether the implementation
of a key process area is effective, repeatable and lasting.
Finally, each process area is described in terms of key practices. They
define the activities and infrastructure for an effective implementation and
institutionalization of the key process area. In other words, they describe what
to do, but not how to do it.
In 2002, the CMMI was proposed [28]. It is considered an improvement
of the CMM, but in contrast to that model, which was built for software
development, the purpose of CMMI has been to provide guidance for improving
organizations’ processes and their ability to manage the development,
acquisition, and maintenance of products or services in general [28]. Furthermore,
the focus of this model lies on representing the current maturity
situation of the organization/process (coherently with the evolutionary model)
and on giving indications on how a higher maturity level can be achieved
(as proposed by the evolutionist model). For these reasons, considering also the
FAR-EDGE purposes, the CMMI can be considered the most appropriate
reference model for implementing a blueprint migration strategy
and for supporting the analysis of the current and desired situations of the
manufacturing systems before defining their migration path.
Inspired by the migration process defined in [12], a methodology to
define and evaluate different architectural blueprints has been developed within
the FAR-EDGE project to support companies in investigating the possible
technology alternatives towards digital manufacturing automation with a
positive return on investment.
First, there is a preparation phase [12] that aims at analyzing the current
domain of the company, as well as the long-term business vision. Through
questionnaires and workshops with the people involved in the manufacturing
process (i.e. production and operations management, IT infrastructure, and
change management), the migration goal and starting point are defined,
as well as the possible impact of the FAR-EDGE solution and the typical
difficulties it may encounter.
The scope of this phase is to obtain a clear picture of what should
be changed in the company’s business by investigating the technology and
business-process points of view simultaneously and deriving the implications
along the technical, operational and human dimensions in a holistic approach.
In fact, it is important to keep in mind that the implementation of smart
devices, intelligent systems, and new communication protocols has a big
impact not only on the technological dimension of the factory but also on
the system’s performance, work organization, and business strategy [32]. Therefore,
a questionnaire of circa 60 questions about the technical, operational,
fashion, and the system can reconfigure itself in the event of changes and
continue the production process without additional adjustments of the overall
control.
Applying this vision to the considered use case, each piece of physical
equipment becomes a single “Plug-and-Produce” module able to configure
and deploy itself without human intervention. The plugging of the module
could be implemented at the edge automation component of the platform
(see Chapters 2 and 4). An adapter for controlling and accessing
information about the single equipment should be developed as part of the
communication middleware. Data will flow to the edge automation compo-
nent, which will interact with the CPS models database of the platform in
order to access and update information about the location and status of the
single equipment. The synchronization and reconfiguration functionalities of
the platform will trigger changes to the configuration of the stations, which
will be reflected in the CPS models database. The ledger automation and
reconfiguration services could also be used for automating the deployment
and reconfiguration of the shopfloor.
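To make the plugging flow concrete, the following minimal sketch models, under invented names (the equipment ids, the `CPSModelsDB` stand-in and the `plug`/`unplug` methods are illustrative assumptions, not the FAR-EDGE implementation), an adapter that announces a piece of equipment to the edge automation component and records its location and status in the CPS models database:

```python
from dataclasses import dataclass, field


@dataclass
class CPSModelsDB:
    """Toy stand-in for the platform's CPS models database."""
    equipment: dict = field(default_factory=dict)

    def upsert(self, equipment_id: str, location: str, status: str) -> None:
        self.equipment[equipment_id] = {"location": location, "status": status}


class EquipmentAdapter:
    """Adapter exposing one piece of equipment as a Plug-and-Produce module."""

    def __init__(self, equipment_id: str, location: str, db: CPSModelsDB):
        self.equipment_id = equipment_id
        self.location = location
        self.db = db

    def plug(self) -> None:
        # On plugging, the module announces itself; the edge automation
        # component records its location and status in the CPS models DB.
        self.db.upsert(self.equipment_id, self.location, "ready")

    def unplug(self) -> None:
        self.db.upsert(self.equipment_id, self.location, "offline")


db = CPSModelsDB()
adapter = EquipmentAdapter("press-01", "station-A", db)
adapter.plug()
print(db.equipment["press-01"]["status"])  # ready
```

In a real deployment, the database update would in turn trigger the synchronization and reconfiguration functionalities described above, rather than a simple dictionary write.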
Table 13.1 AS-IS situation of the use case for the automation functional domain (FAR-EDGE maturity Levels 1–5)

Equipment/Machinery connectivity and communication protocols
  Level 1: N.A.
  Level 2: Basic connectivity through RS232/RS485
  Level 3: Local network (LAN/WAN)
  Level 4: Networked with vendor-specific API
  Level 5: Networked with standard communication protocols

Security and access control mechanisms
  Level 1: N.A.
  Level 2: Basic security or local access control
  Level 3: Basic security and local access control
  Level 4: Vendor-based access control for each system
  Level 5: Full security and global access control

Production data monitoring and processing
  Level 1: N.A.
  Level 2: Locally, per station/equipment/machinery
  Level 3: Centrally available through SCADA
  Level 4: Available and analyzed through MES at factory level
  Level 5: Available and analyzed through the Cloud

3D layouts, visualization and simulation tools
  Level 1: CAD systems not related to production data
  Level 2: CAD systems manually fed with production data
  Level 3: CAD systems interfaced with other design systems
  Level 4: CAD systems interfaced with intelligent systems for fast development
  Level 5: Fully integrated CAD systems with intelligent tools for an interactive design process

Reconfiguration of production equipment and processes
  Level 1: Manual
  Level 2: Locally managed at machine level (PLC)
  Level 3: Centrally managed from SCADA
  Level 4: Centrally managed by MES
  Level 5: Centrally managed according to ERP

Product optimization
  Level 1: N.A.
  Level 2: Rare offline optimization based on manual data extraction
  Level 3: Offline optimization based on simulation data
  Level 4: Manual optimization based on simulation services
  Level 5: Automatic optimization

Availability of production process models
  Level 1: N.A.
  Level 2: Models defined (Excel-based) with limited use
  Level 3: Models defined with limited specific functions
  Level 4: Models defined and integrated with business functions
  Level 5: Models defined and integrated with several different functions

IT operator
  Level 1: N.A.
  Level 2: External service provider
  Level 3: Internal for traditional IT systems
  Level 4: Internal for specific digital systems
  Level 5: Internal for all systems from field to cloud

Impact on operator, product designer and production engineer
  Level 1: Still unclear
  Level 2: Identified in general terms
  Level 3: Analyzed
  Level 4: Defined
  Level 5: Implemented in continuous improvement
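A migration matrix of this kind lends itself to a simple programmatic representation. The sketch below encodes a few of the dimensions with invented AS-IS, intermediate and target levels (these numbers are illustrative, not the use case's actual assessment) and computes the remaining maturity gap:

```python
# Each dimension of the migration matrix maps to five maturity levels;
# the matrix records the AS-IS level and the intermediate/final targets.
migration_matrix = {
    "Connectivity and communication protocols": {"as_is": 2, "intermediate": 4, "target": 5},
    "Production data monitoring and processing": {"as_is": 2, "intermediate": 3, "target": 5},
    "Reconfiguration of production equipment": {"as_is": 2, "intermediate": 3, "target": 5},
}


def migration_gap(matrix: dict) -> int:
    """Total number of maturity steps still to be climbed across dimensions."""
    return sum(row["target"] - row["as_is"] for row in matrix.values())


print(migration_gap(migration_matrix))  # 9
```

Such an encoding makes it straightforward to compare alternative blueprints, e.g. by weighting dimensions according to business priorities.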
The latter consideration enables the integration between design and production,
in terms of processes and systems, increasing product quality and
process efficiency. This convergence is the source not only of the product but also
of the process definition. On one side, the Bill of Process (BoP) provides traceability
to the Bill of Materials (BoM) to leverage PLM’s configuration and effectiveness
controls, defining the correct sequence of operations to guarantee a
high level of product quality. On the other side, manufacturing process
management carries out the documentation and the follow-up of processes
in the MES, which reshapes theoretically designed processes to make them
fit the reality of the shopfloor, ensuring process efficiency. Considering
this, the proper integration of systems is vital; otherwise, data related to the
introduction of a new machine or the adjustment of a process would have to be
passed “manually” to the MES (which coordinates and monitors the process execution).
Following this consideration, the evolution to a Plug-and-Produce production
system has to go through the harmonization of information between engineering
and manufacturing, coherently with a stepwise approach. To this aim, the
first step is to realize an overall data backbone for all processes and products.
This means centralizing the databases and the information systems in order to integrate
the information flow between the manufacturing and engineering domains.
In the next step, the MES will automatically provide execution data to
ensure holistic and reliable product information that, being documented and
available in both systems, can be considered a strategic asset to improve
the maintenance, repair, and optimization processes.
In this context, the deployment of an event-driven architecture (‘RT-SOA’,
or Real-Time Service-Oriented Architecture) could facilitate the information
exchange and, therefore, the seamless reconfiguration of machinery and
robots as a response to operational or business events.
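The event-driven idea can be sketched with a minimal publish/subscribe bus. The topic name, payload fields and `reconfigure_cell` handler below are invented for the example; a real RT-SOA deployment would use a middleware message broker, with events originating from MES/ERP systems:

```python
from collections import defaultdict


class EventBus:
    """Minimal publish/subscribe bus illustrating the event-driven (RT-SOA) idea."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the event to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(payload)


log = []


def reconfigure_cell(event):
    # In a real deployment this would push a new configuration to the
    # machinery/robot controllers; here we just record the reaction.
    log.append(f"reconfiguring {event['cell']} for product {event['product']}")


bus = EventBus()
bus.subscribe("order.changed", reconfigure_cell)
bus.publish("order.changed", {"cell": "assembly-2", "product": "variant-B"})
print(log[0])  # reconfiguring assembly-2 for product variant-B
```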
The migration matrices depicted for the two MPs represent two specific
improvement scenarios and not the production system as a whole. In both
matrices, the maturity levels of the current situation are represented in
red, while the migration steps are represented in yellow (the intermediate
migration step) and green (the final step).
MP 1: Implementation of reconfigurability. According to the business
strategy, the deployment configuration should give priority to the Cloud,
since the factory already planned to implement cloud technologies in the
production automation control. The collection and integration of information
through the Cloud will support the reconfigurability of plug and produce
equipment. In fact, PLM provides the planning information about how the
product will be produced and the MES serves as the execution engine to
realize the plan and BoP. As a second step, the information provided by PLM
stuck in local optimization extremes and are able to find the global maximum
and minimum, which results in high performance. Therefore, in addition to
the migration steps described in MP 1, the integration of digital models must
be considered. Firstly, the existing CAD systems will be interfaced with each
other and, secondly, they will be fully integrated to enable the optimization of
equipment reconfiguration through intelligent simulation tools. In the same
way, the production will be optimized based on the integrated information
derived from the CAD designs and then automatically implemented
through the intelligent tools. To this end, the production process models and
their different layout versions will first be integrated with business functions,
in order to align the process parameters with cost deployment and profitability
measures. From an organizational perspective, the main implications affect
the roles of product designers and production engineers: they need to increase
their level of cooperation to model all the relevant aspects of the manufacturing
processes in the CAD. Furthermore, the production engineers
have to ensure that the CAD models are connected to the models of the
actual production facilities, so that production can be simulated, planned
and monitored. Therefore, the competences of the above-mentioned roles
need to be enhanced with new skills concerning digitalization, modeling
and simulation, and the tasks and responsibilities of these roles have
to be updated accordingly.
The migration matrices support manufacturers by providing them with
a holistic view of the steps required for migration towards the Industry 4.0
vision along the different dimensions of the factory, i.e. technical, operational, and
human. Based on this information and according to the business goals,
the manufacturer will select the optimal scenario as the first step of migration
towards the long-term goal of complete digitalization of the factory.
The solution identified within the selected scenario will then be designed
in detail, implemented and deployed according to the subsequent process phases
described in [12].
13.6 Conclusion
In conclusion, this chapter has shown how the FAR-EDGE migration approach
can lead a manufacturing company to achieve an improvement towards a new
manufacturing paradigm following a smooth, low-risk transition approach
with a holistic overview.
In fact, the use case scenario points out that every part of an organization
– including workforce, product development, supply chain and
Acknowledgements
The authors would like to thank the European Commission for the support,
and the partners of the EU Horizon 2020 project FAR-EDGE for the fruitful
discussions. The FAR-EDGE project has received funding from the Euro-
pean Union’s Horizon 2020 research and innovation programme under grant
agreement No. 723094.
References
[1] Acatech - National Academy of Science and Engineering, “Recommendations
for implementing the strategic initiative INDUSTRIE 4.0 - Final
report of the Industrie 4.0 Working Group”, pp. 315–320.
[2] Acatech - National Academy of Science and Engineering, “Cyber-
Physical Systems: Driving force for innovation in mobility, health,
energy and production”.
[25] R. L. Nolan, “Managing the crises in data processing”, Harv. Bus. Rev.,
vol. 57, pp. 115–127, March 1979.
[26] P. B. Crosby, “Quality is free: The art of making quality certain”,
New York: New American Library. p. 309, 1979.
[27] P. Fraser, J. Moultrie, and M. Gregory, “The use of maturity mod-
els/grids as a tool in assessing product development capability”, in IEEE
International Engineering Management Conference, 2002.
[28] CMMI Product Team, “Capability Maturity Model® Integration
(CMMI SM), Version 1.1”, CMMI for Systems Engineering, Software Engineering, Integrated Product and
Process Development, and Supplier Sourcing (CMMI-SE/SW/IPPD/SS, V1.1), 2002.
[29] R. Wendler, “The maturity of maturity model research: A systematic
mapping study”, Inf. Softw. Technol., vol. 54, no. 12, pp. 1317–1339,
2012.
[30] M. Kerrigan, “A capability maturity model for digital investigations”,
Digit. Investig., vol. 10, no. 1, pp. 19–33, 2013.
[31] M. Rother, “Toyota Kata: Managing people for improvement, adaptiveness,
and superior results”, 2010.
[32] A. Calà, A. Lüder, F. Boschi, G. Tavola, M. Taisch, P. Milano, and V. R.
Lambruschini, “Migration towards Digital Manufacturing Automation -
an Assessment Approach”.
[33] M. Macchi and L. Fumagalli, “A maintenance maturity assessment
method for the manufacturing industry”, J. Qual. Maint. Eng., 2013.
[34] M. J. F. Macchi M., Fumagalli L., Pizzolante S., Crespo A. and
Fernandez G., “Towards Maintenance maturity assessment of main-
tenance services for new ICT introduction”, in APMS-International
Conference Advances in Production Management Systems, 2010.
[35] A. De Carolis, M. Macchi, E. Negri, and S. Terzi, “A Maturity Model for
Assessing the Digital Readiness of Manufacturing Companies”, in IFIP
International Federation for Information Processing 2017, pp. 13–20,
2017.
[36] Atos Scientific and C. I. Convergence, “The convergence of IT and
Operational Technology”, 2012.
[37] F. Boschi, C. Zanetti, G. Tavola, and M. Taisch, “From key business
factors to KPIs within a reconfigurable and flexible Cyber-Physical System”,
in 23rd ICE/IEEE ITMC International Technology Management
Conference, January 2018.
[38] M. C. Paulk, B. Curtis, M. B. Chrissis, and C. V. Weber, “Capability
Maturity Model SM for Software, Version 1.1”, 1993.
14
Tools and Techniques for Digital
Automation Solutions Certification
14.1 Introduction
In the context of Industry 4.0 and Cyber-Physical Production Systems
(CPPS), markets, business models, manufacturing processes, and other challenges
along the value chain are all changing at an increasing speed in
an increasingly interconnected world, where the future workplace will feature
increased mobility and collaboration across humans, robots and products with
built-in plug & produce capabilities. Current practice is such that a production
system is designed and optimized to execute the exact same process over and
over again.
The planning and control of production systems has become increasingly
complex with regard to flexibility and productivity, as well as the decreasing
predictability of processes. The full potential of open and smart CPPS is
yet to be realized in the context of cognitive autonomous production
systems. In an autonomous production scenario, such as the one proposed by the
Digital Shopfloor Alliance (DSA) [1], manufacturing systems will have
the flexibility to adjust and optimize for each run of a task. Small and
medium-sized enterprises (SMEs) face additional challenges in the implementation
of “cloudified” automation processes. While the building blocks
for digital automation are available, it is up to the SMEs to align, connect,
and integrate them to meet the needs of their individual advanced
manufacturing processes. Moreover, SMEs face difficulties in making decisions
on the strategic automation investments that will boost their business strategy.
Within the AUTOWARE project [3], new digital technologies, including
reliable wireless communications, fog computing, reconfigurable and collaborative
robotics, modular production lines, augmented virtuality, machine
learning, cognitive autonomous systems, etc., are being made ready as manufacturing
technical enablers for their application in smart factories. Special
attention is paid to the interoperability of these new technologies with
each other and with legacy devices and information systems on the factory
floor, as well as to providing reliable, fast-to-integrate, and cost-effective
customized digital automation solutions. To achieve these goals, the focus
has been set on open platforms, protocols, and interfaces, providing a
Reference Architecture for factory automation, and on a specific certification
framework for the validation not only of individual components
but of deployed solutions for specific purposes, to help SMEs and other
manufacturing companies access and integrate new digital technologies
in their production processes.
This chapter aims to review the certification framework, tools and
techniques proposed within the global vision of DSA ecosystem, with a clear
manufacturers need to develop a safety and security concept for their equipment
and confirm that their equipment complies with legal requirements.
Modular certification and self-assessment schemes are critical elements in
the operation of autonomous equipment variants. They ensure that, when a
module is replaced or a new line configuration is set by integrating autonomous
equipment into a modular manufacturing setting, the equipment is automatically
certified and thus continues to conform with the legal requirements
and/or the standard.
Currently, industrial automation is a consolidated reality, with
approximately 90 per cent of machines in factories being unconnected.
These isolated and static systems mean that product safety (functional safety
and security) can be comfortably assessed. However, the connected world of
Industry 4.0’s smart factories adds a new dimension of complexity in terms
of machinery and production line safety challenges. IoT connects people and
machines, enabling a bidirectional flow of information and real-time decisions.
Its diffusion is now accelerating with the reduction in the size and price of
sensors, and with the need to exchange large amounts of data. In
today’s static machinery environment, the configuration of machines and
machine modules in the production line is completely known at the starting
point of the system design. However, if substantial changes are made, a new
conformity assessment may be required. It is an employer’s responsibility to
ensure that all machinery meets the requirements of the Machinery Directive
and the Provision and Use of Work Equipment Regulations (PUWER), of which
risk assessments are an essential ingredient. Therefore, if a machine undergoes
a substantial change, a full CE marking and assessment must be
completed before it can be returned to service. Any configuration change in
the production line requires re-certification of the whole facility.
However, the dynamic approach of Industry 4.0’s autonomous
robotic systems means that, with a simple press of a button, easily
configurable machinery and production lines can be instantly changed.
As it is the original configuration that is risk-assessed, such instant updates
to machinery mean that the time-hungry, traditional approach of “risk
assessment as you make changes” will become obsolete. The risk assessment
process therefore needs to be modified to meet the demands of the more
dynamic Industry 4.0 approach. This would mean that all possible configurations
of machines and machine modules would be dynamically validated
during the change of the production line. Each new configuration would be
assessed in real time, based on digital models of the real behavior of each
configuration, which would be based upon the machinery manufacturer’s
correct (and trusted) data. The result would be a rapidly issued digital
compliance certificate.
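The real-time conformity flow just described (validate each new configuration against manufacturer-supplied digital models, then issue a digital certificate) can be sketched as follows. The module parameters, the single `max_speed` check and the certificate fields are invented placeholders, not part of any actual certification scheme:

```python
from dataclasses import dataclass
import hashlib
import json
import time


@dataclass
class ModuleModel:
    """Manufacturer-supplied digital model of one machine module."""
    module_id: str
    max_speed: int  # illustrative safety-relevant parameter


def assess_configuration(modules, line_speed):
    """Check every module's digital model against the requested configuration."""
    return all(line_speed <= m.max_speed for m in modules)


def issue_certificate(modules, line_speed):
    if not assess_configuration(modules, line_speed):
        return None  # configuration rejected; the line must not restart
    record = {
        "modules": [m.module_id for m in modules],
        "line_speed": line_speed,
        "issued_at": time.time(),
    }
    # A digest over the assessed configuration stands in for a signed certificate.
    record["digest"] = hashlib.sha256(
        json.dumps(record["modules"]).encode()).hexdigest()
    return record


line = [ModuleModel("robot-arm", 120), ModuleModel("conveyor", 90)]
cert = issue_certificate(line, line_speed=80)
print(cert is not None)  # True
```

A production-grade scheme would of course validate far richer behavioral models and cryptographically sign the result, but the control flow (assess, then certify or block) is the essence of the dynamic approach.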
This section discusses the challenges that such an approach would entail in
the context of the safe operation of modular manufacturing, reconfigurable cells,
and collaborative robotic scenarios.
ISO 10218-1:2011 [5] specifies requirements and guidelines for the inherent safe design,
protective measures, and information for use of industrial robots. It describes the
basic hazards associated with robots and provides requirements to eliminate,
or adequately reduce, the risks associated with these hazards.
ISO 10218-2:2011, “Robots and robotic devices – Safety requirements
for industrial robots – Part 2: Robot systems and integration,” specifies
safety requirements for the integration of industrial robots and industrial
robot systems as defined in ISO 10218-1 with industrial robot cell(s) [6].
The integration includes the following:
• the design, manufacturing, installation, operation, maintenance, and
decommissioning of the industrial robot system or cell;
• necessary information for the design, manufacturing, installation,
operation, maintenance, and decommissioning of the industrial robot
system or cell; and
• component devices of the industrial robot system or cell.
ISO 10218-2:2011 describes the basic hazards and hazardous situations
identified with these systems, and it also provides requirements to eliminate
or adequately reduce the risks associated with these hazards. It also specifies
requirements for the industrial robot system as part of an integrated manufacturing
system. The design of experiments in the AUTOWARE JSI reconfigurable
robotics cell will take these two standards into account.
[Figure: network and service layers of the AUTOWARE architecture: Factory Network and Factory Services, Workcell Network and Workcell Services, Enterprise Services.]
and services and dedicated data management systems that will contribute to
meeting the real-time visibility and timing constraints of the cloudified planning
and control algorithms for autonomous production services. Moreover, at the
smart service level, AUTOWARE provides secure CPS capability exposure
and trusted CPPS system modeling, design, and (self-)configuration. In this
latter aspect, the incorporation of the TAPPS CPS application framework,
coupled with the provision of a smart automation service store, will pave the
way towards an open service market for digital automation solutions that
will be “cognitive by design.” The AUTOWARE cognitive operating system
makes use of a combination of reliable M2M communications, human–robot
interaction, modelling and simulation, and cloud- and fog-based
data analytics schemes. In addition, taking into account the mission-critical
requirements, this combination is deployed in a secure and safe environment,
which includes validation and certification processes in order to guarantee its
correct operation. All of this should enable a reconfigurable manufacturing
system that enhances business productivity.
Figure 14.9 Main characteristics of CPPS solutions that are desired by SMEs.
[Figure: components (Component 1 … Component n) of a solution mapped against the certificate dimensions: critical, security level, technology, and commercial.]
14.5.2.2 Strategy
An appropriate strategy must be determined depending on the specific product/solution
and the data obtained during the data collection process. For this
purpose, the following questions must be answered:
• Which tools are the most appropriate?
• How far does the certification process have to go?
• What types of tests should be defined?
Depending on the data obtained in the data collection process, an appropriate
series of tests must be defined, covering as far as possible all the
different possibilities: functional tests per component, integrity tests, unit
tests per component, complete functional tests, etc.
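One possible way to organize the resulting test series, using invented component names and a minimal structure rather than any actual DATV tooling, is sketched below:

```python
from dataclasses import dataclass, field


@dataclass
class CertificationPlan:
    """Collects the test series chosen during the strategy phase."""
    product: str
    tests: dict = field(default_factory=dict)

    def add_tests(self, component: str, kinds: list) -> None:
        # Accumulate the kinds of tests selected for each component.
        self.tests.setdefault(component, []).extend(kinds)


plan = CertificationPlan("digital automation solution")
for component in ("edge-gateway", "analytics-service"):
    plan.add_tests(component, ["unit", "functional", "integrity"])
# The deployed solution as a whole also gets a complete functional test run.
plan.add_tests("deployed-solution", ["complete functional"])

print(len(plan.tests))  # 3
```

The point of the structure is that the strategy answers (tools, depth, test types) become explicit, reviewable data rather than ad hoc decisions.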
The DSA approach, based on access to DATV Core Products and
Solutions and to DSA expert professionals and services, will considerably
reduce the integration and customization costs of validated deployments.
Through the proposed certification framework and DATV tools, the DSA
aims to maximize the Industry 4.0 ROI and ensure the future scalability/extendibility
of digital shopfloors, through the implementation of a
capability development framework (shown in Figure 14.13) and a service
deployment path (shown in Figure 14.14) that guide SMEs in their Digital
Transformation strategy in order to leverage the visibility, analytics,
predictability and autonomy of their automation solutions.
14.8 Conclusion
This chapter has presented the foundation of the DSA and the associated
validation and verification framework as the basis for developing a manufacturing-driven
multi-sided ecosystem. The DSA originated as a means for SMEs
to navigate and exploit the large set of tools and platforms available for the
development of digital solutions for the digital shopfloor. The chapter has
discussed how the DSA approach can nurture synergies across multiple stakeholders
for the benefit of SME digitization and the gradual integration of
digital abilities in the digital shopfloor with a business impact. It has
presented the main standardization and compliance drivers, for instance digital
shopfloor safety in advanced robotic systems, as one of the multipliers for
adoption, and the need for a DSA ecosystem that facilitates navigation across
standards, platforms, and services with a focus on business competitiveness.
It has also presented the fundamental services envisioned for such a
DSA and the dimensions that need to be validated to ensure that digital
abilities such as automatic awareness can be fully realized in the context of
the cognitive manufacturing digital transformation.
Acknowledgments
This work has been funded by the European Commission through the
FoF-RIA Project AUTOWARE: Wireless Autonomous, Reliable and Resilient
Production Operation Architecture for Cognitive Manufacturing (No.
723909).
References
[1] Max Blanchet, The Industrie 4.0 transition. How it reshuffles the economic,
social and industrial model. Available at: http://www.ims.org/wpcontent/uploads/2017/01/2.02 Max-Blanchet WMF2016.pdf
[2] Digital Shopfloor Alliance website. Available at: https://digitalshopflooralliance.eu/, last accessed September 2018.
[3] H2020 AUTOWARE project website. Available at: http://www.
autoware-eu.org/, last accessed September 2018.
[4] Kollaborierende Robotersysteme. Planung von Anlagen mit der Funktion
“Leistungs- und Kraftbegrenzung”, [Online], Available at: https://
www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=
0ahUKEwjdlry0wNrZAhUuyqYKHXb0BWcQFghAMAE&url=http%
3A%2F%2Fwww.dguv.de%2Fmedien%2Ffb-holzundmetall%2Fpubli
kationendokumente%2Finfoblaetter%2Finfobl deutsch%2F080 roboter.
pdf&usg=AOvVaw1UxaEcsQ9K4lWZXq3W3UDy, last accessed:
March 2018.
[5] ISO 10218-1:2011, “Robots and robotic devices – Safety require-
ments for industrial robots – Part 1: Robots” Available at: https://www.
iso.org/standard/51330.html
[6] ISO 10218-2:2011, “Robots and robotic devices – Safety requirements
for industrial robots – Part 2: Robot systems and integration”. Available at: https://www.iso.
org/standard/41571.html
[7] H2020 DAEDALUS project website. Available at: http://daedalus.
iec61499.eu, last accessed September 2018.
[8] H2020 FAR-EDGE project website. Available at: http://www.faredge.
eu/#/, last accessed September 2018.
[9] Acatech - National Academy of Science and Engineering. Available at:
https://en.acatech.de/, last accessed September 2018.
[10] Industry 4.0 Smart Service Welt initiative. Available at: https://
www.digitale-technologien.de/DT/Navigation/EN/Foerderprogramme/
Smart Service Welt/smart service welt.html, last accessed September
2018.
15
Ecosystems for Digital Automation
Solutions: An Overview and the
Edge4Industry Approach
15.1 Introduction
The advent of the fourth industrial revolution (Industrie 4.0) is enabling a
radical shift in manufacturing operations, including both factory automation
operations and supply chain management operations. CPS (Cyber Physical
Systems)-based manufacturing facilitates the collection and processing of
(MSP), which will bring together supply and demand for digital factory
automation services based on the edge-computing paradigm. A wide range
of solutions and services will be provided by FAR-EDGE to its ecosystem
community, including industrial software and middleware-related services
(e.g., automation and analytics solutions), as well as business and technical
support services (e.g., support for solutions migration).
This chapter aims at providing insights into IIoT ecosystems in general
and the FAR-EDGE ecosystem in particular. The presentation of the existing
ecosystems provides a comprehensive overview of the different types of
services that they provide, as well as of their business models. Likewise,
the presentation of the FAR-EDGE ecosystem portal (www.edge4industry.eu)
provides an overview of the solutions, services, and knowledge base that
are provided as part of the project and made available to the community.
The chapter is structured as follows:
• Section 15.2, following the chapter’s introduction, presents a review of
some of the most representative Industry 4.0 and IIoT ecosystems and
their services;
• Section 15.3 provides a comparative analysis of the presented ecosystems,
including a description of their business models;
• Section 15.4 introduces the Edge4Industry ecosystem portal and
describes its structure and services; and
• Section 15.5 is the final and concluding section of the chapter.
• Security to protect the factory brownfield from the outside network;
and
• Integration of data from the business systems.
IIC testbeds are privately funded by member companies or publicly funded by
government agencies, while hybrid models involving both public and private
funding are also possible.
sponsors and platform providers. FIWARE provides one of the most promi-
nent operational Future Internet platforms in Europe. Its platform provides a
rather simple yet powerful set of open public APIs that ease the development
of applications in multiple vertical sectors. The implementation of a FIWARE
Generic Enabler (GE) becomes a building block of a FIWARE instance. Any
implementation of a GE is made up of a set of functions and provides a con-
crete set of APIs and interoperable interfaces that are in compliance with open
specifications published for that GE. The FIWARE project delivers reference
implementations for each defined GE, where an abstract specifications layer
allows the substitution of any Generic Enabler with alternative or
custom-made equivalents.
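To make the notion of a GE's open API concrete, the sketch below builds a context entity in the NGSIv2 format accepted by the Orion Context Broker, one of the best-known FIWARE GEs. The entity id, type, and attributes describe a hypothetical production cell and are illustrative only; the endpoint in the comment is the broker's conventional default, not a FAR-EDGE deployment.

```python
import json

def make_ngsi_entity(entity_id: str, entity_type: str, attrs: dict) -> dict:
    """Build an NGSIv2 entity payload (the format used by the Orion
    Context Broker, a reference implementation of a FIWARE GE)."""
    entity = {"id": entity_id, "type": entity_type}
    for name, (value, attr_type) in attrs.items():
        entity[name] = {"value": value, "type": attr_type}
    return entity

# Hypothetical shopfloor entity: a production cell publishing its state.
cell = make_ngsi_entity(
    "urn:ngsi-ld:Cell:001", "ProductionCell",
    {"temperature": (21.7, "Number"),
     "status": ("active", "Text")})

payload = json.dumps(cell)
# The entity would typically be registered with:
#   POST http://<orion-host>:1026/v2/entities
#   Content-Type: application/json
```

Because every GE publishes such open specifications, an application written against this API can swap one broker implementation for another without code changes, which is precisely the substitutability the abstract specification layer is meant to provide.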
FIWARE’s main contribution is the gathering of the best available design
patterns, emerging standards and open source components, putting them all
to work together through well-defined open interfaces. There is a lot of
knowledge embedded, lowering the learning curve and mitigating the risks
of bad architecture designs. The scope of the platform is also very wide,
covering the whole pipeline of any advanced cloud solution: connectivity to
the IoT, processing and analyzing Big data, real-time media, cloud hosting,
data management, applications, services, security, etc. But FIWARE does not
only accelerate the development of robust and scalable cloud based solutions,
it also establishes the basis for an open ecosystem of smart applications.
In the FIWARE sense, being smart means being context-aware and able to
interoperate with other applications and services, and this is where
FIWARE excels.
FIWARE has over the years developed an ecosystem of developers,
integrators and users of FIWARE technologies, which includes several SMEs.
An instrumental role for the establishment and development of the FIWARE
ecosystem has been played by the FIWARE Acceleration Programme, which
promoted the take up of FIWARE technologies among solution integra-
tors and application developers, with special focus on SMEs and start-ups.
Around this programme, the EU has also launched an ambitious campaign
where SMEs, start-ups and web entrepreneurs can get funding support
for the development of innovative services and applications using FIWARE
technology. This support intends to be continuous and sustainable in the
future, engaging accelerators, venture capitalists and businesses who believe
in FIWARE.
The FIWARE ecosystem is supported and sustained by the FIWARE
Foundation, which is the legal independent body providing shared resources
to help achieve the FIWARE mission. The foundation focuses on promoting,
[Figure 15.3: Overview of services offered by various IIoT/Industry 4.0 ecosystems and communities. The figure maps the ecosystems discussed in this section (ThingWorx and IIoT/cloud platforms, IIC testbeds, I4.0 testbeds, the EFFRA Innovation Portal, FIWARE, and standards bodies) against service categories: hosting and support of an ecosystem of IIoT solutions; solution design and integration services; training and education services; advisory and consulting services; solution experimentation and validation services; standardization services; access to software/middleware libraries; and information and news updates/services.]
15.4.1 Services
The Services section can be accessed through the main menu by clicking
on the Services button; it presents to the user community all the
available FAR-EDGE services. At this stage, the following services
are available:
• FAR-EDGE Datasets: Provides access to open datasets that can be used
for experimentation and research. The first datasets provided include
data related to individual production modules, such as their power
consumption, status, and operating mode (maintenance, active, etc.). The
datasets include all module production-related information, including
module ID, module description, production status, conveyor status,
operating status, error status, uptime information, power consumption,
order number, process time, etc.
• Migration Services: The FAR-EDGE Migration Services support
manufacturers, plant operators and solution integrators in planning and
realizing a smooth migration from conventional industrial automation
systems (such as ISA-95-based systems) to emerging Industry 4.0 systems
(such as edge computing systems). The service provides a Migration Matrix Tool,
which includes all the essential improvement steps and plans needed to
enable a smooth migration from traditional control production systems
towards the decentralised control automation architecture based on edge
computing, CPS, and IoT technologies.
• Training Services: This service delivers technical, architectural, and
business training to Industry 4.0-related communities, as a means of
raising awareness about digital automation in general and FAR-EDGE in particular.
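The dataset fields listed above suggest simple tabular records. As a minimal sketch of how such module data might be consumed, the snippet below parses CSV records and aggregates power consumption; the column names are assumptions for illustration, not the actual FAR-EDGE dataset schema.

```python
import csv
import io
from dataclasses import dataclass

# Assumed columns, loosely modelled on the fields named in the text
# (module ID, operating status, uptime, power consumption).
@dataclass
class ModuleRecord:
    module_id: str
    operating_status: str
    uptime_s: int
    power_w: float

SAMPLE = """module_id,operating_status,uptime_s,power_w
M-01,active,36000,420.5
M-02,maintenance,120,15.0
"""

def load_records(text: str) -> list:
    """Parse module production records from CSV text."""
    return [ModuleRecord(r["module_id"], r["operating_status"],
                         int(r["uptime_s"]), float(r["power_w"]))
            for r in csv.DictReader(io.StringIO(text))]

records = load_records(SAMPLE)
# Example analysis: total power draw of modules that are currently active.
active_power = sum(r.power_w for r in records
                   if r.operating_status == "active")
```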
15.4.2 Solutions
Similar to the Services section, the Solutions section presents all the
available FAR-EDGE solutions; it too can be accessed through the main
menu, by clicking on the Solutions button. At this stage, the available
FAR-EDGE solutions are as follows:
• Analytics Engine: The Analytics Engine is a middleware
component for configurable distributed data analytics in industrial
automation scenarios. Its functionalities are accessible through an
Open API, which enables the configuration and deployment of various
industrial-scale data analytics scenarios. It supports the processing of
large volumes of streaming data at both the edge and the cloud/enterprise
layers of digital automation deployments. It is highly flexible and
configurable, based on the notion of Analytics Manifests (AMs), which
obviate the need for tedious data analytics programming. AMs support
various analytics functionalities and can be manipulated through visual
tools. Note that the Analytics Engine is provided under an open source license.
• Automation Engine: This solution provides the means for executing
automation workflows based on an appropriate Open API. It enables
lightweight high-performance interactions with the field for the purpose
of configuring and executing automation functionalities. It provides
field abstraction functionalities and therefore supports multiple ways
and protocols for connecting to the field. It also facilitates the execu-
tion of complex automation workflows based on a system-of-systems
approach. It offers reliable and resilient functionalities at the edge of the
plant network, based on Arrowhead’s powerful local cloud mechanism.
Finally, it leverages a novel, collaborative blockchain-based approach to
synchronizing and orchestrating automation workflows across multiple
local clouds.
• Distributed Ledger Infrastructure: This solution provides a runtime
environment for user code that implements decentralized network
services as smart contracts, which are used for plant-wide synchroniza-
tion of industrial processes. It enables the synchronization of several
444 Ecosystems for Digital Automation Solutions
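To make the Analytics Manifest (AM) idea from the Analytics Engine description above more tangible, the sketch below shows what such a declarative configuration could look like. The field names and structure are purely hypothetical, since the text does not give the actual FAR-EDGE schema; the point they illustrate is that a manifest, rather than hand-written analytics code, specifies the pipeline.

```python
import json

# Hypothetical manifest: source stream, processing step, and output sink
# for an edge-layer analytics task. Field names are illustrative only.
manifest = {
    "id": "am-energy-01",
    "scope": "edge",                     # run at the edge or cloud layer
    "source": {"stream": "module/power", "window_s": 60},
    "processors": [{"op": "avg", "field": "power_w"}],
    "sink": {"topic": "analytics/energy/avg"},
}

def validate_manifest(m: dict) -> bool:
    """Minimal structural check before handing a manifest to the engine."""
    required = {"id", "scope", "source", "processors", "sink"}
    missing = required - m.keys()
    if missing:
        raise ValueError(f"manifest missing fields: {sorted(missing)}")
    return True

validate_manifest(manifest)
serialized = json.dumps(manifest)   # ready to submit via an Open API
```

A visual editor could generate exactly such a document, which is why manifests lend themselves to visual tooling rather than programming.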
15.4.4 Blog
The blog section presents to the ecosystem community publications about
topics that are related to the industry, including those that have been published
by members of the Edge4Industry community as well as other sources such as
other blogs and electronic magazines. Similar to the Knowledgebase section,
access to the Edge4Industry Blog section publications is user-friendly and
dynamic.
15.5 Conclusions
In the era of digitization, the development of proper ecosystems is as
important as the development of digital platforms. In many cases, most of
the value of a digital platform lies in its ecosystem and the
opportunities that it provides to stakeholders to collaborate and advance
the digital transformation.
Acknowledgments
This work has been carried out in the scope of the FAR-EDGE project
(H2020-703094). The authors acknowledge help and contributions from all
partners of the project.
16
Epilogue
At the dawn of the fourth industrial revolution, the benefits of the digital
transformation of plants are gradually becoming evident. Manufacturers and
plant operators are already able to use advanced CPS systems in order
to increase the automation, accuracy, and intelligence of their industrial
processes. They are also offered opportunities for simulating processes based
on digital data as a means of evaluating different scenarios (i.e. “what-if”
analysis) and taking optimal automation decisions. These capabilities are
empowered by the accelerated evolution of digital technologies, which is
reflected in rapid advances in areas such as cloud computing, edge computing,
Big Data, AI, connectivity technologies, blockchains and more. These
digital technologies form the building blocks of state-of-the-art digital
manufacturing platforms.
In this book, we have presented a range of innovative digital platforms,
which have been developed in the scope of three EU projects, namely the
AUTOWARE, DAEDALUS, and FAR-EDGE projects, which are co-funded
by the European Commission in the scope of its H2020 framework programme
for research and innovation. The presented platforms emphasized the use
of edge computing, cloud computing, and software technologies
as a means of decentralizing the conventional ISA-95 automation pyramid
and enabling flexible production plants that can support mass customization
production models. In particular, the value of edge computing for
performing high-performance operations close to the field was presented,
along with the merits of deploying enterprise systems in the cloud towards
high performance, interoperability, and improved integration of data and
services. Likewise, special emphasis was placed on illustrating the
capabilities of the IEC 61499 standard and the related software
technologies, which can essentially allow the implementation of automation
functionalities at the IT rather than the OT part of production systems.
Special emphasis has been placed on the presentation of some innovative
and disruptive automation concepts, such as the use of cognitive
technologies for increased automation intelligence and the use of
trending blockchain technologies for the resilient and secure
synchronization of industrial processes within a plant and across the
supply chain. The use of these technologies in automation provides some
characteristic examples of how the evolution of digital technologies will
empower innovative automation concepts in the future.
In terms of specific Industry 4.0 functionalities and use cases, our
focus has been placed on systems that boost the development of flexible
and high-performance production lines, which support the mass
customization and reshoring strategies of modern manufacturers. A
distinct part of the book was devoted to digital simulation systems and
their role in digital automation. It is our belief that digital twins
will play a major role in enhancing the flexibility of production lines,
as well as in optimizing the decision-making process for both production
managers and business managers.
Nevertheless, the successful adoption of digital automation concepts in
the Industry 4.0 era is not only a matter of deploying the right technology.
Rather, it requires investments in a wide range of complementary assets, such
as digital transformation strategies, new production processes that exploit
the capabilities of digital platforms (e.g., simulation), training of workers in
new processes, and many more. Therefore, we have dedicated a number of
chapters to the presentation of such complementary assets, including
migration strategies, ecosystem building efforts, training services,
development support services, and more. All of the presented projects and
platforms place emphasis on the development of an arsenal of such assets
as a means of boosting the adoption, sustainability and wider use of
these solutions.
Even though this book develops the vision of a fully digital shopfloor,
it should be noted that we are only at the beginning and far from the
ultimate realization of this concept. In particular, we have only
marginally discussed integration and interoperability issues, which are
at the heart of a fully digital shopfloor. Moreover, we have not
presented how different components and modular solutions can be used to
address the different needs of manufacturers and plant operators. Our
Digital Shopfloor Alliance (DSA) initiative
(https://digitalshopflooralliance.eu/) aims to bring these issues into
the foreground, but also to create critical mass for successfully
confronting them.
Industry 4.0 will be developed in a horizon that spans across the next
three to four decades, where digital platforms will be advanced in terms of
Index

L
Ledger technologies 71, 73, 169, 196

M
Migration strategy 365, 368, 374, 388
Modular manufacturing systems 27, 29, 46
Multi-side platforms 107, 339, 438, 446

S
SDK 200, 204, 355, 358
Smart factory 74, 318, 376, 396
Streaming analytics 170, 172
System identification 199, 204, 217, 222
About the Editors
served on the Future Internet Advisory Board and the Sherpa Group on the
5G Action Plan.
Dr. Oscar Lazaro has been one of the three experts appointed to the
high-level group supporting the EC in the analysis of the 15 national
initiatives in Digitising European Industry. He has been supporting the
activities of the I4MS Programme since its very beginning and leads the
I4MS Competence Centre for Advanced Quality Control Services in the Zero
Defect Manufacturing DIH at the Automotive Intelligence Centre in the
Basque Country. He is also part of the Smart Industry 4.0 Technical
Committee of the FIWARE Foundation and a regular contributor to the
activities of the Industria Conectada 4.0 DIH and Platform working
groups. Since January 2018, he has been coordinating the European
lighthouse initiative BOOST 4.0 on Big Data Platforms for Industry 4.0.