
18th International Conference on

Computer and IT Applications in the Maritime Industries

COMPIT’19
Tullamore, 25-27 March 2019

Edited by Volker Bertram

Sponsored by

www.dnvgl.com
www.ssi-corporate.com
www.siemens.com/marine
www.cadmatic.com
www.foran.es
www.aveva.com
www.krs.co.kr
www.abb.com
www.friendship-systems.com
www.altair.com
www.prostep.com
www.benntec.de

18th International Conference on Computer and IT Applications in the Maritime
Industries, Tullamore, 25-27 March 2019, Hamburg, Technische Universität Hamburg-
Harburg, 2019, ISBN 978-3-89220-709-2

© Technische Universität Hamburg-Harburg


Schriftenreihe Schiffbau
Schwarzenbergstraße 95c
D-21073 Hamburg
http://www.tuhh.de/vss

Index

Volker Bertram, Tracy Plowman 7


A Hitchhiker’s Guide to the Galaxy of Maritime e-Learning

Mario Gehrke, Volker Köhler 24


Modular and Interactive Simulation Training Environment (SimTE) for Customer-Specific
Maritime Training

Stig Eriksen 33
Autonomous Ships – Changing Perceptions and Expectations

Babak Ommani, Lasse Bjermeland, Vegard Aksnes, Neil Luxcey, Svein-Arne Reinholdtsen 50
Application of Continuous Integration in Decision Support and Integrity Management Systems
of Offshore Structures

Myeong-Jo Son, Sang-Yeob Kim, Yeon Hwa Jo, Gap-Heon Lee, Min-Jae Oh, Myung-Il Roh 67
Big Data Analysis Application: Brake Power and Fuel Oil Consumption Estimation based on
Public History Voyage Data of Ships

Jesus A. Muñoz Herrero, Rodrigo Perez Fernandez 78


A.I. Technologies Applied to Naval CAD/CAM/CAE

Svein David Medhaug 94


Future of Autonomous Shipping from an Administration Point of View

Carmen Kooij, Robert Hekkenberg 104


Towards Unmanned Cargo-Ships: The Effects of Automating Navigational Tasks on Crewing
Levels

Stephen Hollister 118


OpenCalc - An Open Source Programming Framework for Engineering

Alina Colling, Robert Hekkenberg 132


A Multi-Scenario Simulation Transport Model to Assess the Economics of Semi-Autonomous
Platooning Concepts

Joanna Sieranski, Carsten Zerbst 146


Automatic Geometry and Metadata Conversion in Ship Design Process

Xinping Yan, Feng Ma, Jialun Liu, Xuming Wang 156


Applying the Navigation Brain System to Inland Ferries

Marcus Bole 163


A Strategy for Closely Integrating Parametric Generation and Interactive Manipulation in Hull
Surface Design

Woo-Sung Kil, Seokho Byun, Jeong-yeol Lee, Myeong-Jo Son 180


Development of Real-Time Emergency Response Training Simulator for Collective Ship Crews
Based on Virtual and Mixed Reality

Mikael Wahlström, Deborah Forster, Antero Karvonen, Ronny Puustinen, Pertti Saariluoma 191
Perspective-Taking in Anticipatory Maritime Navigation – Implications for Developing Autonomous Ships

Tom Goodwin, Alan Dodkins 201
Simulation Driven Structural Design in Ship Building

Marije Deul, Bernardes Hoek, Sietske Moussault, Anna-Louise Nijdam, Gerrit Alblas, Robert Hekkenberg 213
An Expert System for Cost Estimation of Shipyard Steel Assembly

Stefan Harries, George Dafermos, Afroditi Kanellopoulou, Madalina Florean, Scott Gatchell, Eero Kahva, Paulo Macedo 224
Approach to Holistic Ship Design – Methods and Examples

Charles-Edouard Cady 246


Microservices to Reduce Ship Emissions?

Harry Linskens, Hans van der Tas 254


Communicating Ship Designs via Virtual Reality

Matthias Steidel, Axel Hahn 261


MTCAS – An Assistance System for Collision Avoidance at Sea

Laura Walther, Britta Schulte, Carlos Jahn 274


Shore-side Assistance for Remote-controlled Tugs

Jitte van Dijk, Peter de Vos, Rolf Boogaart 286


Automatic Selection of an Optimal Power Plant Configuration

Christopher-John Cassar, Nick Bradbeer, Giles Thomas 301


The Implementation of Virtual Reality Software for Multidisciplinary Ship Design Revision

Svein P. Berge, Marianne Hagaseth, Per Erik Kvam 314


Hull-to-Hull Positioning for Maritime Autonomous Ship (MASS)

Ken Sears, Dejan Radosavljevic, Jan van Os 324


A Model-Based Approach to Modular Ship Design

Dogancan Uzun, Yigit Kemal Demirel, Andrea Coraddu, Osman Turan 332
Life-Cycle Assessment of an Antifouling Coating Based on a Time-Dependent Biofouling Model

David Drazen, Alysson Mondoro, Benjamin Grisso 344


Use of Digital Twins to Enhance Operational Awareness and Guidance

Thomas Porathe 352


Autonomous ships and the COLREGS: Automation Transparency and Interaction with Manned
Ships

Jarosław Nowak, Morten Stakkeland 359


Implementation of a Data Driven, Iterative Approach to Building Digital Twins

Adam Sobey, Jeanne Blanchard, Przemyslaw Grudniewski, Thomas Savasta 374


There’s no Free Lunch: A Study of Genetic Algorithm Use in Maritime Applications

Erik Stensrud, Torbjørn Skramstad, Christian Cabos, Geir Hamre, Kristian Klausen, Bahman Raeissi, Jing Xie, André Ødegårdstuen 391
Automating Inspections of Cargo and Ballast Tanks using Drones

Ludmila Seppälä 405
Drawingless Production in Digital Data-Driven Shipbuilding

Tapio Hulkkonen, Teemu Manderbacka, Kei Sugimoto 415


Digital Twin for Monitoring Remaining Fatigue Life of Critical Hull Structures

Konstantinos Chatzikokolakis, Dimitris Zissis, Marios Vodas, Giannis Tsapelas, Spiros Mouzakitis, Panagiotis Kokkinakos, Dimitris Askounis 428
BigDataOcean Project: Early Anomaly Detection from Big Maritime Vessel Traffic Data

Luca Antognoli, Simone Ficini, Marco Bibuli, Matteo Diez, Danilo Durante, Salvatore Marrone, Angelo Odetti, Ivan Santic, Andrea Serani 438
Hydrodynamic Design Procedure via Multi-Objective Sampling, Metamodeling, and
Optimisation

Yong-Kuk Jeong, Huiqiang Shen, Youngmin Kim, Young-Ki Min, Jong Gye Shin, Philippe Lee, Jong Hun Woo, Yong-Gil Lee 451
Discrete Event Simulation for Strategic Shipyard Planning

Stein Ove Erikstad 458


Designing Ship Digital Services

Joseph W. Donohue, Conner J. Goodrum, Michael J. Sypniewski, David J. Singer, Colin P.F. Shields 470
A Method for Generation and Analysis of Feasible General Arrangement and Distributed
System Configurations in Early Stage Ship Design

Henrique M. Gaspar 485


A Perspective on the Past, Present and Future of Computer-Aided Ship Design

Robert Spencer, Jeremy Byrne, Paul Houghton 500


The Future of Ship Design: Collaboration in Virtual Reality

Kenneth Goh 505


Use of Virtual Reality Tools for Ship Design

Gabriel D. Weymouth 512


Roll Damping Predictions using Physics-based Machine Learning

Index of authors 520

Call for Papers for next year

A Hitchhiker’s Guide to the Galaxy of Maritime e-Learning
Volker Bertram, Tracy Plowman, DNV GL, Hamburg/Germany,
volker.bertram@dnvgl.com, tracy.plowman@dnvgl.com

Abstract

This paper surveys techniques for e-learning, discussing characteristics, suitability and cost aspects. While the techniques and the employed software are generic, the discussion is set in the maritime context, with a focus on technical and regulatory content and a relatively small and scattered customer base.

1. Introduction

1.1. Resistance is futile – Get Digital (also in training)

Digitalization (a.k.a. Digital Transformation and Digitization) is a magic word of our times. Search the COMPIT 2018 proceedings, http://data.hiper-conf.info/compit2018_pavone.pdf, and you will find 37 hits for these terms. Go to the world's largest maritime fair SMM, http://www.smm-hamburg.com/en/, and you will find a dedicated Digital Route. While there is no clear definition of what exactly "Digitalization" is, the general idea is widely understood and shared. It concerns the next wave of automation, not just increasing efficiency but also offering new and better services – in theory at least. And all companies in the industry want to be part of it and "do it", including our company DNV GL, see e.g. https://www.dnvgl.com/article/dnv-gl-s-digital-journey-94148. Resistance is futile – get digital!

Fig.1: All maritime companies claim to embark on a digital journey – the concept remains hazy

While we might first think of Industry 4.0 or autonomous ships, the digital transformation affects virtually all functions of the company – including training and competence management. These functions are the responsibility of the MCLA (Maritime Competence Learning & Academy) in the Maritime Division of DNV GL; see Appendix A for more details on MCLA. Digitalization is central to DNV GL's strategy and consequently, we embarked on a digital journey in training several years ago. Digital training solutions are on an exponential rise, not just in our company. The term "digital training solutions" encompasses a much wider choice of training techniques than just self-paced e-learning, as we shall elaborate in this paper.

In principle, digital solutions make us an offer we can’t refuse: flexibility. You can have training:

• When you want
Traditional classroom training required a critical mass of participants to happen, e.g. 6 paying participants to break even with the cost of a trainer and possibly venue and catering. In a highly fragmented industry, where the work force is often scattered globally, classroom courses were often not conducted because there were not sufficient registrations for a given date and location. The problem is aggravated for a classification society like DNV GL, where certain tasks in surveying and auditing may only be performed if formal training and re-training is proven. If you need a surveyor with a certain training element for a customer in your port next week, and you don't have one on site, you either need to train him quickly (not an option with classroom training) or fly in a qualified surveyor from some other station (involving extra cost and unproductive travel time).
• Where you want
DNV GL Maritime had 3617 employees in mid-2018, spread around the globe over 190 stations. Major hubs like Hamburg, Høvik and Piraeus have larger concentrations of employees; small stations may have only a handful. And they all need training. Traditionally, one tried to cluster trainings regionally, but travel was unavoidable for many employees. The challenge is similar for all global classification societies. Digital solutions now allow training anywhere, as long as you have a computer and internet access (for most solutions). If there is no internet access with sufficient speed at affordable cost, as so far typical on ships at sea, digital solutions can be adapted, e.g. with download in port and offline training at sea.
• What you want
Digital solutions are generally faster than classroom training. Why is this? Traditional elements of classroom training, such as a round of introductions for social bonding, cease to apply. But the main contribution is that the trainee can skip parts at will, e.g. because he knows the material already or because it is not relevant to his work. The trainee can also self-pace the progress, advancing faster if he is fast in reading and processing the offered material. Classroom training by necessity has been a compromise between the interests, abilities and learning targets of a group of participants. In contrast, digital training comes with the option to tailor competence schemes to individuals.

It sounds like the best thing since sliced bread. Do we hear some critical voices in all that hype? Yes, we do. We all use digital solutions in our individual quest for knowledge, both at work and at home. Where we used to grab an encyclopedia, an atlas, or a manual, we now google, use Wikipedia or find a "how to" on YouTube. But in work-related training, digital solutions (especially 'next click' e-learning courses, where poorly designed courses resemble digital page turning) have acquired a bad reputation in the work force:

• "Most people hate e-learning. Or perhaps more accurately, they hate e-learning at work," Burrough (2016).
• E-learning software supplier Articulate surveyed 500+ people, finding that "38% say they get bored with [e-learning] courses", https://articulate.com/what-people-love-and-hate-about-elearning. One might be tempted to ask "only 38%?".
• Thalheimer (2017) confirms that "[…] elearning has had a reputation for being boring and ineffective at the same time it is wildly hyped by vendors and elearning evangelists".

But, as the vendors and e-learning evangelists are quick to point out, that is because the training solutions were inappropriate and mistakes were made. In any case, resistance is futile – the future is digital, also in training. But we can not only be part of this future, we can shape it. In our case, it means getting digital training solutions right for DNV GL colleagues and customers. For that, we need to better understand the reasons for the widespread disenchantment.

1.2. From collective disappointment to (cost) effective training

The maritime industry doesn't place high value on training. If companies spend money on training, it is more often for compliance with international regulations (e.g. IMO's ISM Code, https://en.wikipedia.org/wiki/International_Safety_Management_Code, forcing them to have a Designated Person Ashore) or contractually imposed quality standards, such as within IACS, http://www.iacs.org.uk/, for classification societies. Placing a relatively low focus on training is not particular to the maritime world; it is typical for mature industries.

As a result, most of the work force in maritime companies, including top and middle management, has little focus on training and even less expertise. They weren't trained for it and weren't much exposed to it in normal business operations. But here lies a root cause for many shortcomings in the current training "eco-system" in the maritime industry. Lack of knowledge (about training, digital training in particular) leads to false expectations. And false expectations invariably lead to disappointment.

The situation resembles a Russian drama: in the end, everyone is unhappy, at least a bit, because everyone started with false expectations:

• Management
Digital training – invariably referred to as e-learning by management – is part of the overall drive towards a lean organization that has trimmed away all unnecessary fat, in this instance cutting the cost of (unfortunately) imposed training. The more enlightened management at least hopes that digital training will give the same or better learning results. But then the savings don't materialize (quickly). Self-paced, flexible, mobile learning already existed before the Internet – we had books. But few managers contemplated having their experts write books for training in maritime topics. They were familiar with books and thus able to instinctively estimate the effort involved and the savings to be expected. For digital training solutions, this instinct is missing. See Appendix B for a payback analysis of a digital training solution.

• Training providers / Training department
Digital training is exciting if you are concerned primarily with training and not with financial issues. Here we have new training tools, and the demos from vendors and at professional fairs are impressive. There is a new-found optimism, a spirit of a new beginning, where we will leave the drab, underfunded old world of classroom training in dull engineering/regulatory topics behind us, and enter a new world of exciting training options, with videos, Virtual Reality and the latest pedagogy. But then, there is no funding to boil the ocean. Just enough for an instant coffee. This kind of disappointment is common in technology hypes and mirrored in many Digital Transformation projects: "The idea is that we get excited with all the buzz and potential of the technology that our implementation of these technologies also follows the Hype Cycle. Many companies start by attempting to boil the ocean and not by focusing on something smaller and attainable," Denis Morais in his technology blog, http://blogs.ssi-corporate.com/waveform/2018/technology/compit-2018/.

• Customer/Trainee
Most trainees are not half as excited about e-learning as the managers or the training departments. Do you remember a really good training/lecture? Most people will: There was this teacher I had in high school. Professor Schimmöller's course on fracture mechanics was brilliant. Oh, the course on certification of materials and components was great, such a charismatic trainer, a real expert. We struggle to separate the course and the trainer, and mostly remember the trainer (who invariably also developed the training). But who would answer the question quoting an e-learning? Yet if we ask about a really good digital solution, be it a webinar, an e-learning or an instructional video, most people will come up with a good example they remember. And it will invariably be professionally produced material for a mass market. BBC's documentary "The Blue Planet" was nice; we enjoyed it ourselves and learnt a lot. (Production cost estimates range between 8 and 21.5 million Euro.) Or can we have some professional training involving gamification, preferably in Virtual Reality, for our new range of cruise ships? (Production costs for high-end video games after 2010 range 40-200 million USD, https://en.wikipedia.org/wiki/List_of_most_expensive_video_games_to_develop.) In our private lives, we are part of the mass markets (learning English, getting a great documentary to watch, playing the hobby ship master, etc.). In our professional lives, we are the niche markets, where training is developed mostly in the twilight zone between low budget and no budget.
In addition, training has never been just about learning something. It always had a social component: getting out of your office, spending time with like-minded people, networking over a coffee. Forgetting or underestimating that function would be a mistake.

We may now understand better why each stakeholder involved in training is a bit unhappy with digital training in maritime topics. We may not be able to change the size or fragmentation of our industry, but we can at least mitigate the gaps between expectations and offered training solutions. The way forward starts with a guiding principle that is simple to state, simple to understand and hell to follow:

Disappointment is best avoided by having a realistic understanding of what is feasible
and reconciling budget limitations with customer (trainee) needs.

In the following, we survey various digital training techniques with their pros & cons, giving a snapshot of our experience. Follow us, fellow hitchhiker, through the galaxy of digital training options, from pdf to Virtual Reality, along the parts we have explored ourselves or heard of from other explorers.

2. Key technologies for digital learning solutions

In the following, we will discuss a multitude of options for digital training. The large galaxy of digital training options can be decomposed into five major approaches:

- E-notes and e-books
- Online training
- E-learning
- Simulations including AR/VR
- Social media

The borders between these approaches sometimes become fuzzy. Web-based teaching may be recorded
and downloaded for offline teaching (on board ships without cheap and fast internet access), online tests
may be transferred to downloadable pdf and uploaded again, etc. The structure is meant to help in
getting a clearer picture, not to start a philosophical argument.

2.1. E-notes, e-books and wikis

Pdf, really? You call that digital training? The e-learning aficionado may despair, but pdf files are often
a great and cost-effective option:

• Short instructions
“Sometimes it makes more sense to deliver new training content in the form of a job aid. Don’t
stretch out a small amount of content in order to create an hour elearning course,” Ferriman
(2013). If you have nothing to say, keep your mouth shut. If you have little to say, put it on one
page.

• Reference knowledge as add-on
A cardinal sin in training (classroom and digital alike) is 'slidumentation', Duarte (2008), the mixing of slides with documentation ("We will use the PowerPoint handout as documentation; therefore, all the tables and text need to be on it."). The result is poor presentation and poor documentation. This tradition of poor classroom training makes its way into e-learning. It is time to break with this bad habit! Much of our traditional training material contains reference knowledge. Nobody can seriously expect trainees to retain this knowledge after brief exposure: catalogues of welding defects, Fig.1, pages of regulations applicable if A exceeds this threshold and B that. All the trainee should learn is where to find that documentation and how to work with it. Transferring classroom training to digital solutions, we often include links to pdf files or websites where the reference knowledge is found, and focus on the learning goals "I know this resource exists", "I know where to find it" and "I know how to work with it". This allows focusing on a realistic learning goal for trainees to retain in memory. Reference knowledge (manuals, diagrams to work with, catalogues, etc.) is best attached as pdf files or kept for reference on a website. However, linked websites should not be short-lived; few things frustrate trainees more than clicking on hyperlinks and getting error messages. Websites under your own control and rather stable links (Wikipedia, IMO regulations, ISO standards, etc.) work well, though. Linking to such public knowledge sources has become best practice in digital training: "A wise approach to workplace learning will harness all these materials as part of constructing an overall learning environment", Taylor (2017).
3d pdf, www.3dpdf.com, is a special case. It allows 3d models with interactivity (such as blending in/out parts of an engine) and can be read by the standard Adobe Acrobat reader. While 3d modelling and documentation is discussed in the context of class approval in our company, the technology has not been used by our training department, as it generally does not match our training needs. It may be different for training applications (including instructions such as user guides) in mechanical engineering.

• "Lecture notes"
Traditional self-studies were based on books. In some cases, having an e-book or lecture notes as pdf for self-studies with an online quiz may work much better than a 'next slide' e-learning. The lecture notes can be frequently and cost-effectively updated for small changes, much faster and cheaper than programming an e-learning, and the quiz may be kept constant online (e.g. with random variations from a pool of quiz questions). In one case, we converted an older e-learning, which was text-heavy with many technical drawings, into a pdf attachment of lecture notes (96 pages) and a lean e-learning consisting essentially of a page for the download of the e-notes and a quiz to ensure that trainees had studied the material.
Pdf files may also be used for interspersed tasks or case studies. For example, after an e-learning has presented material for 20 minutes, you may attach a pdf with a cross-word puzzle reflecting the presented material, Fig.2. Time for a coffee, a pen and let's crack that cross-word. Such media breaks work well and generally receive positive feedback from the customers.
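
The "random variations from a pool" idea is simple to implement. A minimal sketch, assuming the question pool is plain data; this is our illustration, not the quiz engine of any particular learning platform:

```python
import random

# Hypothetical question pool; in practice this lives in the learning
# platform, not in source code.
POOL = [
    {"q": "Who issues the ISM Code?",
     "options": ["IMO", "IACS", "ISO"],
     "correct": "IMO"},
    # ... more questions ...
]

def draw_quiz(pool, n=5, seed=None):
    """Draw n random questions and shuffle each answer list, so that
    repeated attempts see a different-looking quiz."""
    rng = random.Random(seed)
    for item in rng.sample(pool, min(n, len(pool))):
        options = item["options"][:]
        rng.shuffle(options)  # only works if no option reads "all of the above"
        yield item["q"], options, options.index(item["correct"])
```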

Fig.1: Typical reference knowledge in pdf
Fig.2: Cross-word puzzle as media break

Pdf files come with some inherent advantages:

• They can be downloaded and printed. We get a lot of reading done during our commute to and from work. And we often prefer reading a paper version, where we can work with a pen or a highlighter, and where the strain of reading seems less after hours spent in front of computer screens.
• They rely on standard software from a major supplier and an open format based on ISO 32000. As such, it can be expected that for decades to come we will still be able to open and read pdf files with free and easily available software.
• The standard pdf reader comes with a search function, which is particularly useful in large documents.

For quick reference, we have become so accustomed to Wikipedia that it is hard to find an old-fashioned printed encyclopedia in our households. For corporate training, wikis have been suggested for training and competence building, also within our company. (A wiki is a website on which users collaboratively modify content and structure directly from the web browser.) However, wikis need a critical mass of competent contributors to be built up and maintained, and they need governance to ensure that uploaded material is acceptable (following company guidelines, not biased, etc.). In practice, this makes wikis problematic for small and medium enterprises, i.e. most of the maritime industry. We have never used them in our work.

2.2. Live online training (Webinars & Co.)

Short "conference-style" presentations of 20-30 minutes have often been converted into online webinars. As with classroom/conference presentations, there are good ones and bad ones. Bad ones are of the format "you look at PowerPoint slides while the expert drones on". Participants often zone out, doing other things like checking their emails, passively absorbing the audio and tuning back into the webinar every once in a while. The good ones are relatively brief, focused on a single, tangible topic with a clear take-home message, and have strong user interaction. Key lessons learnt in our experience are, Bertram and Plowman (2017):

• Subject matter experts (SMEs) are much more willing to take the time for a webinar than for the development and wide-scale delivery of classroom training or e-learnings. Webinars are often enthusiastically embraced, as they are quick to develop, allowing rapid response to new developments in technology or regulations.
• SMEs are generally neither communication experts nor webinar technology experts. Raw material (PowerPoint) sometimes needs extensive reworking for a webinar, and delivery is similar to being on the radio: SMEs need technical support and possibly some coaching on how to speak during a webinar.
• Webinars should be designed for a maximum of 20-30 minutes presentation time. Beyond that, audience attention cannot be maintained and the message is lost. They may be combined with prior or follow-up emailing of pdf files with more detailed information.
• PowerPoint slides used for webinars should have even less text than the classroom version and rely much more on visual language to convey the message, Fig.3.
• After 5-10 minutes of speaking time, an interactive element ("poll" in the jargon of webinar designers, Fig.4) should stimulate the audience to refocus on the topic. Otherwise, the temptation to multi-task (i.e. read incoming emails, etc.) becomes overwhelming for most people.
• While recordings of webinars can be offered after the broadcast, in our experience virtually nobody has downloaded these recordings. Consequently, webinars for a global audience need to be offered several times "live" to cover different time zones. Extra resources then need to be allocated for the repeats.

Technically, there are many platforms to support webinars; we have experience with Citrix and Adobe Connect. The functionalities of the assorted webinar software packages are similar; it is not the platform, but the content and design that decide the impact of the webinar. Typical functionalities that we use frequently are:

• Webinars always come with live audio. That makes them easy to prepare and generally more lively than e-learnings, but introduces the accent challenge. Listening to a voice from another nationality requires more concentration than listening to someone from your own language background. For the first few minutes, our brain tunes in to the different accent, and often native English speakers are the hardest to understand for non-native speakers. Key words should then appear on the screen to help the listeners. Simple vocabulary also helps, where most engineering terms count as simple for an engineering audience.
• Presentations can be broadcast with or without webcam video of the speaker. Having an inserted window with a speaker makes a webinar more personal, but runs the risk that participants get distracted from the content slides.
• Webinars allow "polls" (the functionality comes under different terms in different software), where participants typically click a box and automatic statistics of the feedback can be shared.
• There is a chat function for instant messages, typically used to collect questions, which may be addressed immediately, but mostly are collected and screened at the end. Selected questions may then be answered live, and the rest via individual or collective emails or on a website with FAQs (frequently asked questions).
• Webinars can be recorded to a video file, typically in mp4 format.
• Webinars track user behavior and allow exporting the statistics, e.g. to Excel. Information gathered includes: registration information (name, email address, company, etc.), time of joining, time of leaving the webinar, attention rate (percentage of time when the webinar window was active; you check your email and the system knows it and reports it), questions asked, and what was answered in polls. Both for training and marketing purposes, these statistics are highly interesting.
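
Such exports lend themselves to simple automated analysis. A minimal sketch, assuming the export is a spreadsheet with the hypothetical columns "joined", "left" and "attention_rate" — the actual column names depend on the webinar platform:

```python
import pandas as pd

# Column names below are assumptions; adapt to your platform's export.
df = pd.read_excel("webinar_attendance.xlsx", parse_dates=["joined", "left"])

df["minutes_attended"] = (df["left"] - df["joined"]).dt.total_seconds() / 60

print({
    "participants":       len(df),
    "mean_minutes":       round(df["minutes_attended"].mean(), 1),
    "mean_attention_pct": round(df["attention_rate"].mean(), 1),
    "left_within_10_min": int((df["minutes_attended"] < 10).sum()),
})
```

A summary like this shows, for instance, how many participants left within the first ten minutes — useful feedback for the next delivery.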

Webinars have become a standard tool for many companies, including DNV GL. The problem is that we all get flooded with emails touting upcoming webinars. We could sit at the computer all day and watch these things, but who would then do our jobs? As a simple self-defense, webinar invitations often land in the spam folder. To avoid this fate, it is best to use a specific invitation from a known colleague/manager and find a title for the webinar that raises curiosity or motivation to join.

Fig.3: Typical webinar slide
Fig.4: "Polls" stimulate audience to think

For small-group knowledge transfer and higher interactivity, video conferences (e.g. based on Skype) with the option of mutually sharing screens may be used, Fig.5. This approach allows rapid, virtually ad-hoc knowledge transfer sessions. It may be considered as a flanking measure, e.g. in software implementation schemes for super-user groups, etc.

More recently, "virtual classrooms" have been suggested as a variation or evolution of the webinar idea. Virtual classrooms are essentially webinars with extended functionality, where participants can interact more with each other (and not just with the trainer, as in webinars), e.g. in chats, audio and video conferencing between (subgroups of) the participants. In principle, this allows small-group work (buzz-groups, etc.) as a direct digital counterpart to classical classroom training. The participants may appear as in real life (as in a video conference) or slip into the role of an avatar, as e.g. in AULA, an immersive 3d learning environment, http://www.vcomm.ch/en/home/, which DNV GL has employed, Fig.6. Problems are similar to those of larger video conferences: participants may (mentally) wander off, different people trying to speak at the same time make communication difficult, and there is always someone with a technical/IT-competence problem. Slipping into the role of an avatar is fun initially, but distracts from the learning goal. With proper design of the training material, it seems possible to use standard webinar software instead of virtual classroom software to implement online training. The added value of current virtual classroom technology does not seem to justify the added effort (software licenses, training of technical support).

Fig.5: Videoconference for small-group knowledge transfer, Harries et al. (2015)
Fig.6: Avatar-based virtual classroom AULA, www.vcomm.ch

2.3. E-learning

These days, most people think of self-paced, click-through e-learning when they hear terms like "computer-based training", "digital training solutions" or "digital transformation of classroom training". Alas, it is just one of our tools, albeit a powerful and useful one if properly used. We distinguish by the duration of the training:

• Web courses/e-learnings have durations of anything between 20 minutes and several days (then typically subdivided into modules, which in turn are subdivided into chapters; each chapter typically has a duration of 10-20 minutes). Due to their longer duration, web courses generally employ a wider range of techniques to avoid fatigue.
• Nano-learnings are short web courses, with typical durations of 5-10 minutes. They are often employed for quick one-off instructions, e.g. when a new software is rolled out inside the company, or when a short safety instruction is needed and a one-page pdf instruction is ruled out, e.g. because some short video clip is needed.

Both web courses and nano-learnings use techniques akin to PowerPoint presentations – text (sometimes animated), images, embedded videos. In addition, e-learning allows information on demand (e.g. mouse-over pop-up explanations for abbreviations, magnifying of images, links to websites or pdf documents), which allows decluttering slides and faster progress for those who don't need that information detail at that time. You can also include "go to" buttons to jump to a chapter of choice (e.g. jump to a quiz or skip a quiz). In principle, all good advice for designing PowerPoint presentations for classroom training also applies to designing e-learnings.

A time- and cost-effective way to arrive quickly at acceptable e-learnings is having an SME prepare a good training in PowerPoint, with explanatory text in the PowerPoint Notes pane. This way, the SME can work with a tool he is already familiar with and in which he usually already has at least half the material. The PowerPoint can then be uploaded to the e-learning tool (in our case Storyline Articulate) and voice-over techniques can be used without extra time burden for the SME. Fancier techniques (adding pedagogical value) may be added at a later stage by e-learning experts.

While e-learnings should always be strongly visual, there is always text information that needs to be transmitted. Various options exist:

• Full text
All text information is given as full text (as in a pdf). This is the easiest and cheapest way to produce; trainees do not need audio, i.e. they can use the training also in crowded areas (commuting, open-plan office, etc.) without headphones. But then learning becomes more fatiguing, as the eyes have to do all the work. A good option for nano-learnings.
• Keywords on slides + audio narrative
This is now our standard option for regular e-learnings. Subtitles are generally recommended in a multi-national context (accent challenge). For the spoken narrative, there are various options:
- Text-to-voice
Some e-learning programming software, like the Storyline Articulate used by DNV GL, has the option to generate the voice narrative by converting typed text automatically, offering the choice between US and British English and female and male speakers. Subtitles can be automatically generated. This is fast and cost-effective, and the e-learning is easy to maintain in future updates, as the "speaker" stays with the software. The disadvantage is that the automated voice sounds tinny and doesn't have the speech rhythm a human speaker would use, e.g. to stress something or pause for dramatic effect. It is understandable, but irritates some trainees, who then switch off the sound and just use the subtitles (with earlier fatigue, as the eyes then have to do all the work).
- Professional speaker audio
Professional native speakers may be hired for the narrative. This is an expensive and disruptive option: external services need to be sourced, with the associated paperwork; a studio production needs to be arranged, typically with a representative from our company; and the resulting audio files need to be embedded properly in the e-learning. Internal and external cost taken together, budget 4000-6000 € for adding professional speaker audio to one hour of e-learning. Production of subtitles is additional manual labour that roughly doubles the costs. If the e-learning needs to be changed, the same speaker needs to be found or the whole e-learning needs to be recorded again to avoid obvious patchwork.
- Subject matter expert audio
Using the in-house expert adds a personal touch and gives credit to the expert who most often has represented the company's expertise so far in classroom training. It is his/her expertise that the trainees ultimately are willing to pay for. The downside is that the labour costs for SMEs are even higher than for professional speakers. Even if free audio recording software (e.g. Audacity, https://www.audacityteam.org/) is used, it still needs to be installed and its handling trained, and often several takes are required to get the audio right. Another issue is that subject matter experts are not selected or trained for particularly clear pronunciation, leading to more fatigue in trainees concentrating to follow the audio. As for professional speaker audio, subsequent maintenance of the audio files and considerable extra costs for subtitles should be kept in mind.
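
For orientation, the text-to-voice mechanism itself is a small affair in open-source tools. A sketch using the pyttsx3 package — purely illustrative; Storyline Articulate's built-in feature works inside the authoring tool and needs no code:

```python
import pyttsx3  # offline text-to-speech engine; pip install pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # words per minute; slower helps non-native listeners

narrative = "Reference knowledge is best attached as a pdf file."
engine.save_to_file(narrative, "slide_07_narration.wav")
engine.runAndWait()  # blocks until the audio file is written
```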

Videos are frequently used in e-learning, but to a widely varying degree, from small add-on to complete lecture recording. Basic options are:

• Recorded lecture
This option gives high focus on the SME, making the perception much closer to classroom training. Blue-screen or green-screen recordings, https://en.wikipedia.org/wiki/Chroma_key, of the speaker may be overlaid with slides (PowerPoint) while the trainer synchronizes the narrative with the slide advance himself, Fig.7. While blue or green screens are quite cheap, you need a quiet room with proper lighting, a good camera on a tripod, good microphones, etc. for the recording. The set-up or studio rent needs to be considered as a cost factor. Alternatively, at significantly lower cost, the expert may be recorded by a webcam, clicking through the slides and running his natural narrative. Speakers should wear the same clothes if recordings are made on different days. Maintenance/update costs are high, as the same speaker needs to record a whole section of slides between break points (which are typically 10-15 minutes apart). Typically, 2-3 takes are necessary to get a useful recording.
• Technical video
For special, usually company or large-project promotional purposes, video production is outsourced. Google "DNV GL" with the video option and you will find assorted examples. While not quite up to par with BBC documentaries, they are high quality and high price. Prices always depend on content and length, but the order of magnitude is 1000-3000 € per minute of video, Bertram and Plowman (2017). For most training purposes, the production of such videos is prohibitively expensive, but for both classroom and e-learning, videos may be recycled, embedded in part or via hyperlink.
In order to use videos in professional training, we either need to own the copyright or have a copyright waiver for our training purposes from the owner. This invariably disappoints subject matter experts who come with YouTube links. A legal work-around is giving pointers to finding these videos ("searching for A and B, you may find useful videos for further studies on websites like YouTube"), but grabbing such videos with screen-capturing or similar techniques leads into legal areas that are grey for individuals and absolute no-go zones for a classification society.
• Animation video
Rather dry (technical or regulatory) material may be made more entertaining by using animated, cartoon-type videos. We use Vyond (ex-GoAnimate), www.vyond.com, Fig.8. One may get tired of the style of the "anime" cartoons, but such videos are much easier to produce, maintain and update than videos with real persons, who, in addition, may no longer be available when updates are required. Costs depend on many factors, but as a rule of thumb, 1 minute of such animated video costs 200-400 € to produce. This includes the time to develop a script similar to a movie script, but excludes costs for specification creep, where the customer changes/adds specifications after seeing the first prototype.

Fig.7: Trainer filmed with blue-screen technology giving narrative to PowerPoint presentation

Overall, producing new videos adds significant costs. It should thus be considered in each case whether a video is "nice to have", "important" or "essential" in the context of the learning goals. On the other hand, it is recommended to re-use existing video (in full or in part) wherever this supports the learning goals: pay once (for the development), use many times. For video formats, wmv and mp4 seem to give the least problems. Often, it is advisable to split longer videos into shorter chunks to embed them in classroom or e-learning training. Beyond 30 s, the mind wanders…

Fig.8: Stills from e-learning videos merging cartoon-like animations with tailored image elements

A key risk with self-paced learning is that the trainee does not study, whether it is an old-fashioned
book or a programmed e-learning. The answer to “motivate” trainees is often the stick, rather than the
carrot: You will have a text at the end and must pass the test in order to get your certificate (in the widest
sense). It may not follow a feel-good modern view on pedagogy, but it has been proven to work and our
industry is used to it. For such an assessment, there are various options:

• Ungraded quiz
The softest option: have a quiz (usually programmed in the e-learning software) with tasks, most often multiple-choice questions, and give immediate feedback to the trainee whether the answer was correct or incorrect, possibly giving additional explanations on the correct answer. This type of quiz is intended to give the trainee a voluntary self-assessment of how much or how little was learnt. The SME has to furnish a list of questions, possible answers and correct answers. The programming is straightforward and not a major cost item. Answers should be given in a form that allows shuffling them around, i.e. avoiding "all of the above" (cf. the quiz sketch in section 2.1).
• Graded quiz
As above, but this time there is an overall grade at the end, most often without additional details. The trainee gets the final score and whether this was enough to pass. The assessment result is entered automatically in the learning platform and possibly an e-certificate (e.g. a pdf file with name, course and result) is issued and emailed or offered for download; a minimal sketch of this step follows this list. This is our standard option for normal courses. However, if the certificate is important (e.g. a university degree, a formal license, etc.), this approach makes it difficult to ensure the identity of the person taking the test.
• Classroom quiz after e-learning
In cases where the identity of a candidate has to be checked and it must be ensured that no external help was received, we have not found an alternative to classroom testing under supervision. The knowledge acquisition may be based on digital solutions, but the knowledge assessment in the classroom makes the approach "blended learning".
• Human evaluation of free text
Similarly, in some cases the assessment may be in the form of a free text (essay). This can be avoided in most engineering and regulatory topics, but is used e.g. in our joint post-graduate diploma courses with the World Maritime University, https://www.wmu.se/distance-learning. Again, this "blended learning" approach comes with significant added cost and burden for the subject matter expert.
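
For illustration, the grading-and-certificate step of the graded quiz option might look as follows; the pass mark and the fpdf2 package are our assumptions, not features of any particular learning platform:

```python
from fpdf import FPDF  # pip install fpdf2; illustrative choice only

PASS_MARK = 0.7  # assumed threshold

def grade_and_certify(name: str, course: str, correct: int, total: int) -> bool:
    """Compute the final score and, on success, write a simple
    one-page pdf e-certificate with name, course and result."""
    score = correct / total
    if score < PASS_MARK:
        return False
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Helvetica", size=20)
    pdf.cell(0, 20, "Certificate of Completion", align="C", ln=1)
    pdf.set_font("Helvetica", size=12)
    pdf.cell(0, 10, f"{name} passed '{course}' with {score:.0%}.", align="C", ln=1)
    pdf.output(f"certificate_{name}.pdf")
    return True
```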

At DNV GL, we have used Lectora, www.trivantis.com, and Storyline Articulate, articulate.com, as e-learning software. The two are not compatible and do not support a neutral interface to export and import e-learnings from one to the other. If need be, a virtually complete re-programming is required to convert an older Lectora training to the more modern Storyline Articulate. Storyline Articulate, however, can import PowerPoint files with text boxes, images and videos as separately manageable items. This saves a lot of time in practice, as SMEs are familiar with PowerPoint and mostly have usable material from classroom training already.

In general, e-learnings require high effort. The costs are similar whether the e-learning is programmed in India or in Germany, as the added man-time for specification and quality control eats up the advantage of the lower hourly rates in India. The required high effort (mainly man-time; software licenses are almost negligible compared to labor cost) may be justified if there is a "return on investment", typically savings in terms of man-time of your own staff, or customers willing to pay for the added value. E-learnings should then be targeted at large numbers of users and avoid frequent updates (as a rule of thumb, an e-learning should be used 3-5 years before an update is necessary). See Appendix B for an example of a payback calculation. Because of the long-term perspective with e-learning, the programming should be based on standard software of major suppliers to ensure continued support.
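
A back-of-the-envelope version of such a payback calculation (all figures below are invented placeholders; Appendix B gives a real analysis):

```python
def breakeven_trainees(dev_hours: float, hourly_rate: float,
                       saving_per_trainee: float) -> float:
    """Number of trainees at which e-learning development pays off.

    dev_hours:          development effort; 40-70 h per hour of simple
                        e-learning, Defelice (2017)
    hourly_rate:        loaded cost of SME/developer time, EUR/h
    saving_per_trainee: avoided classroom cost (trainer share, travel,
                        venue, unproductive time) per trainee, EUR
    """
    return dev_hours * hourly_rate / saving_per_trainee

# Invented example figures, not DNV GL numbers:
n = breakeven_trainees(dev_hours=55, hourly_rate=100, saving_per_trainee=500)
print(f"Break-even at about {n:.0f} trainees")  # -> 11
```

With a few hundred trainees over the 3-5 year life of a course, the investment pays off; for a course taken by a dozen people, it rarely does.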

2.4. Simulations including AR/VR

Simulations mimic the real world in computer models. In the context of training, simulations denote any application where the user changes some interactive control and sees the outcome. Often, gamification of training is connected to simulations in digital solutions.

There are various maritime applications in simulation-based training:

• (engineering) simulations
Computer simulations with suitable graphics can help trainees to learn and retain qualitative relations, e.g. between ship form parameters and stability, Fig.9. The underlying simulation model should be as simple as possible for quick response time and as complex as required to give realistic behavior. A simple simulation may manipulate only one parameter; more complex multi-parameter simulation models offer more flexibility and achieve different learning objectives with one simulation model. (E.g. in Fig.9, the effect of heel angle or ship breadth may be studied to understand the effect of design on ship stability and the behavior of capsizing separately.) Chaves and Gaspar (2016) present a 3d ship simulator intended for training ship designers as an example of a more recent and more sophisticated multi-parameter simulation, Fig.10. While DNV GL uses simulation extensively in its engineering services, e.g. Fach (2006), Peric and Bertram (2011), we are not aware of any such application in our training. At least in part, this can be explained by our portfolio of training subjects, which does not lend itself easily to physical simulations.

Fig.9: Simple simulation for ship stability e-learning, Bronsart and Müsebeck (2007)
Fig.10: Sophisticated simulation model for ship design, Chaves and Gaspar (2016)
• Virtual Reality (VR)
Virtual Reality for us means a computer-generated 3d space to navigate through, with control devices allowing manipulation, operation, and possibly control of items in this 3d space. Bertram and Plowman (2018) reviewed VR maritime training applications, concluding that the high costs of VR-based training limit applications severely. Even if we don't expect a high-end video game, think in budgets of several 100,000 € to create a complex ship training scenario. DNV GL uses VR-based training exclusively on VR models that were developed within an R&D project with significant external funding. New developments or extensions seem unlikely in the current financial circumstances. Besides the cost issue, there are aspects to consider with VR-based training:
- Symptoms akin to motion sickness may occur, especially when using head-mounted displays, https://en.wikipedia.org/wiki/Virtual_reality_sickness
- Loss of trainee group coherence due to varying IT savviness. Much is intuitive for video gamers, nothing for digital immigrants aged 50+.
Both issues can be overcome by using projected (2d) images and a guided tour by a trainer. VR-based training does not seem suitable for self-paced learning without support.
• Augmented Reality (AR)
AR combines the real world with overlaid computer-generated images. A typical application is nautical simulators, which combine a real bridge with a simulated outside world. At DNV GL, we don't develop Augmented Reality applications, but cooperate with professional nautical simulator centres as required. The approach is ideal for scenario-based learning, where a given task in a scenario has to be solved, e.g. handling a rudder failure without causing an accident.

Simulation-based training is generally well received by trainees and effective, but comes at high costs.
Realistically, the only option is re-using existing models.
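
To make the "simple simulation" idea of Fig.9 concrete: a one-parameter stability model can be as small as the sketch below, using standard hydrostatics for a box-shaped barge (our illustration, not the tool of Bronsart and Müsebeck (2007); small heel angles only):

```python
import math

def righting_arm(beam_m: float, draft_m: float, kg_m: float,
                 heel_deg: float) -> float:
    """Small-angle righting arm GZ of a box-shaped barge:
    KB = T/2, BM = B^2/(12 T), GM = KB + BM - KG, GZ = GM sin(phi)."""
    kb = draft_m / 2.0
    bm = beam_m ** 2 / (12.0 * draft_m)
    gm = kb + bm - kg_m
    return gm * math.sin(math.radians(heel_deg))

# Let the trainee vary a single parameter, the beam:
for beam in (8.0, 10.0, 12.0):
    gz = righting_arm(beam_m=beam, draft_m=4.0, kg_m=3.5, heel_deg=10.0)
    print(f"B = {beam:4.1f} m  ->  GZ = {gz:5.2f} m")
```

Even this toy model lets a trainee discover that the 8 m beam variant has negative initial stability, i.e. it capsizes – exactly the kind of qualitative relation such simulations are meant to convey.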

Simulations are best integrated in blended learning concepts where theoretical basics are covered in
classroom and/or e-learning. Our experience confirms Sitzmann (2011): “Trainees receiving instruction
via a simulation game had higher levels of […] knowledge […] and retention […] than trainees in the
comparison group. […] Learning from simulation games was maximized when trainees actively rather
than passively learned work-related competencies during game play, trainees could choose to play as
many times as desired, and simulation games were embedded in an instructional program rather than
serving as stand-alone instruction.”

2.5. Social media & Co.

In response to an invitation to a webinar, one of our customers sent us the following reply: "I shall not register to the webinar, whatever the topic, for the simple reason that I do not see the point… no networking, no coffee, no time out of the office. Just for sharing…" Variations on the theme are many: people miss the exchange of experience, the maritime gossip, the networking. We may then add digital elements to bring in the social component, even if it will never be the same as sharing a joke over a nice cup of coffee with a colleague. Can social media step in and help? At DNV GL, we employ two technologies (besides the video conferences covered above) to "reach out and stay in touch" within groups of people interested in a certain theme:

• Yammer, www.yammer.com
This is a social networking service used for private communication within organizations. For training purposes, it is thus limited to internal training. There are mixed feelings about using a platform like Yammer as an add-on in digital training solutions:
- Does it serve a specific purpose? "Want people to hate eLearning? Try including social media tools that have no purpose," Rosenberg (2018). How purposefully you can employ social media for your training depends on the case. As a posting platform for occasional nuggets of information ("I was recently at a conference on our theme and the proceedings can be found here") or specific questions ("Has anybody any experience with…?"), it works well in relatively small and coherent groups.
- Does it work for your trainee pool? You may want to ask members of the training target group what they think about including social media, making sure that you get one from the age group 25-35, one from 40-50 and one from 55-65. Don't include training elements that will alienate a sizable part of your trainee pool.
- Beware of the Yammer spammer dilemma. The more Yammer groups you subscribe to, the more messages pop up. Soon people react in mental self-defense and no longer open any of them.
In some cases, only time will tell whether a social media channel like Yammer works for its intended training purpose or not.

• Email as follow-up
Email may work as an electronic hotline in fields where there is a designated person or department in charge of the topic. Often, specific questions come up when you have to solve a specific problem. If it is so specific that it is of little interest to the rest of the trainees, individual emails work much better than a broader discussion via Yammer.

3. Conclusions

Digital training solutions are more than the (in)famous e-learning. Digital solutions have a few key
characteristics to remember (call it an executive summary):

• It is flexible
The big advantage is the flexibility to train where you want, when you feel like it, and to pick what you learn at that time. In essence, it allows individually tailored training.
• It ain't cheap
There is a widespread misconception that digital automatically means cheap. It requires cooperation between (digital) training experts and SMEs. And it takes longer than writing (a book or an article). The estimate of Defelice (2017) of 40-70 h of work for 1 hour of simple, passive e-learning should be kept in mind. Double this if adding higher-end elements with more interactivity, videos, etc.
• It ain't better / worse than classroom training
"[E-]Learning often produces better results than classroom instruction, often produces worse, often similar results. […] It's the learning methods that matter, […] NOT whether [it] is elearning or classroom instruction. […]," Thalheimer (2017). Poor classroom training does not turn into good training by programming it into some training software tool. "Taking a poor training experience and putting it online only creates a poor online training experience," Goldberg (2017). But sometimes poor classroom training turns into good digital training, because the conversion prompts some long-overdue thinking about what can and shall be achieved by the training; the old training material is then better structured, key learning tasks are stressed, and reference knowledge is relegated to attachments or deleted.

The next decade will see digital training solutions on the rise, no doubt. But we will see classroom training (possibly improved by adopting some of the brain-friendly training techniques that came with the new wave of digital training), a lot of blended learning and fully digital solutions side by side, as they address different training needs and each has its justification.

Acknowledgements

We thank our colleagues: Bernhard Löbermann, for many stimulating discussions and for being the driving force behind our quest for good digital training solutions at DNV GL; Burkhard Kohne, for pioneering thoughts on computer-based training; and Ulrike Schodrok, for writing most of Appendix A.

References

BERTRAM, V., PLOWMAN, T. (2017), Maritime training in the 21st century, 16th COMPIT Conf.,
Cardiff, pp.8-17, http://data.hiper-conf.info/compit2017_cardiff.pdf

BERTRAM, V.; PLOWMAN, T. (2018), Virtual Reality for maritime training – A survey, 17th COM-
PIT Conf., Pavone, pp.7-21, http://data.hiper-conf.info/compit2018_pavone.pdf

BRONSART, R.; MÜSEBECK, P. (2007), E-learning for higher education in naval architecture and
ocean engineering, 6th COMPIT Conf., Cortona, pp.488-495, http://data.hiper-conf.info/compit2007_
cortona.pdf

BURROUGH, S. (2016), Why does everyone hate e-learning?, Transform eLearning, https://www.trans
form-elearning.com/everyone-hate-e-learning/

CHAVES, O.; GASPAR, H. (2016), A web based real-time 3D simulator for ship design virtual proto-
type and motion prediction, 15th COMPIT Conf., Lecce, pp.410-419, http://data.hiper-conf.info/compit
2016_lecce.pdf

DEFELICE, R. (2017), How Long to Develop One Hour of Training? Updated for 2017, https://www.
td.org/insights/how-long-does-it-take-to-develop-one-hour-of-training-updated-for-2017

DUARTE, N. (2008), Slide:ology: The Art and Science of Creating Great Presentations, O'Reilly &
Assoc.

FACH, K. (2006), Advanced simulation in the work of a modern classification society, 5th COMPIT
Conf., Oegstgeest, pp.34-44, http://data.hiper-conf.info/compit2006_oegstgeest.pdf

FERRIMAN, J. (2013), 9 things people hate about elearning, LearnDash, https://www.learndash.com/9-things-people-hate-about-elearning/

GOLDBERG, M. (2017), Does eLearning Work in the Maritime Sector?, Marine Learning Systems,
https://www.marinels.com/elearning-work-maritime-industry/

HARRIES, S.; MacPHERSON, D.; EDMONDS, A. (2015), Speed-power optimized AUV design by
coupling CAESES and NavCad, 14th COMPIT Conf., Ulrichshusen, pp.247-256, http://data.hiper-conf.
info/compit2015_ulrichshusen.pdf

KAPP, K.; DEFELICE, R. (2009), Time to Develop One Hour of Training, https://www.td.org/news-
letters/learning-circuits/time-to-develop-one-hour-of-training-2009

PERIC, M.; BERTRAM, V. (2011), Trends in industry applications of CFD for maritime flows, 10th
COMPIT Conf., Berlin, pp.8-18, http://data.hiper-conf.info/compit2011_berlin.pdf

ROSENBERG, M. (2018), Marc My Words: Why I Hate eLearning, Learning Solutions Mag, https://
www.learningsolutionsmag.com/articles/marc-my-words-why-i-hate-elearning

SITZMANN, T. (2011), A meta-analytic examination of the instructional effectiveness of computer-based simulation games, Personnel Psychology 64/2, pp.489-528

TAYLOR, D.H. (2017), Learning Technologies in the Workplace, Kogan Page, London

THALHEIMER, W. (2017), Does elearning work? What the scientific research says!, http://willthal-
heimer.typepad.com/files/does-elearning-work-full-research-report-final.pdf

Appendix A: DNV GL Maritime Competence, Learning and Academy

DNV GL’s Maritime Competence & Learning and Academy unit comprises the internal and external
training business. It is the first point of contact for all matters concerning training and competence
qualification. Internal training ensures that DNV GL employees are qualified and fully trained for their
tasks. The globally distributed work force makes classical classroom training cumbersome and costly.
In response, digital training solutions have been deployed to a larger extent than for external training.
DNV GL’s Maritime Academy, https://www.dnvgl.com/maritime/maritime-academy/index.html, is in
charge of external training for customers. The Academy provides one of the broadest portfolios of train-
ing courses for the maritime industry (> 120 courses). A network of training coordinators in 16 key
locations offers classroom courses and serves as first point of contact for digital solutions, Fig.A-1. The
local Academy in Gdynia/Poland runs a Virtual Reality Training center offering courses based on the
survey simulator SuSi. The Academy has its own line of webinars, called Smart-Up, and cooperates
with the World Maritime University in post-graduate diploma courses for distance learning, which com-
bine several digital training techniques.

Fig.A-1: Digital training solutions offered by DNV GL Maritime’s Academy

Appendix B: How much do we save by going from classroom to digital training?

How much does e-learning cost? The correct answer is the time-honored “that depends”. However, it
is not cheap:

• "Industry research suggests that a basic, but professionally produced, hour of eLearning re-
quires about 185 hours." https://community.articulate.com/discussions/building-better-courses/
cost-of-developing-1-hour-of-elearning
• Defelice (2017) updates an older estimate [of 90-240 h] for the development of 1 hour of e-learning by Kapp and Defelice (2009), giving 42-71 h for passive or limited-interactivity e-learning and 130-143 h for complex digital training techniques. The reduction in required time may be explained by more user-friendly development tools, but also by a more experienced e-learning developer workforce.
• Bertram and Plowman (2017) give costs for converting 40 slides of PowerPoint into e-learning that translate roughly into 50-100 h; 40 slides may roughly equate to 1 hour of e-learning.

For our calculation, we assumed:

• 60 h for the development of 1 hour training (realistic for plain-vanilla basic e-learning).
• 2 days = 10 h effective classroom training (the rest is lost to late arrivals, introduction rounds, safety instructions, etc.)
• 10 participants in a classroom training, 4 local, the others flown in from the region.
• The travel time is two half-days, i.e. 1 day.
• 3 nights in a hotel, flight, local transport, meals, etc. are grouped together and expressed in
hours at a typical company rate: 5h.
• Self-paced e-learning saves time: no breaks, skipping of familiar material or material not deemed useful for one's work. Instead of a working day (7.8 h), training then takes 2/3 of the face-to-face time, i.e. 4 h.

In this case, an accumulated 40-45 trainees (or 4-5 course deliveries at 8-10 participants each) are needed to break even. Frequent updates before this number is reached would destroy any business case for e-learning.

On the other hand, opportunity costs incurred because an employee is not trained in time are not
considered here and would shift the break-even point to lower numbers of trainees.

Modular and Interactive Simulation Training Environment (SimTE) for
Customer-Specific Maritime Training
Mario Gehrke, benntec Systemtechnik, Warnemünde /Germany, mario.gehrke@benntec.de
Volker Köhler, benntec Systemtechnik, Warnemünde / Germany, volker.koehler@benntec.de

Abstract

This paper presents an interactive Simulation Training Environment (called iSi-Frame®) in combination with other digital training aids like e-learning lessons. The iSi-Frame® is used for the design of customer-specific training solutions for operation, troubleshooting or maintenance training of complex technical systems. It provides close-to-real-time simulation of technical and non-technical processes, which makes it well suited for interactive operation training. Simulation Based Training (SBT) combines the transfer of theoretical and practical knowledge by interconnecting the simulation with e-learning lessons: after learning theoretical details via e-learning lessons, the trainee deepens and tests the newly acquired knowledge by being guided directly to a corresponding simulation exercise.

1. Introduction

For more than 25 years, MarineSoft® in Rostock-Warnemünde, since 2015 part of benntec Systemtechnik Bremen, has been developing real-time simulation systems for training. This work has been supported by several funded projects such as WESSP, DIAG-SIM and VIRTUSIM. The focus of the developed simulation solutions is the support of knowledge transfer for sophisticated technical systems like those installed on board modern navy vessels. The main objectives of benntec's simulation solutions are improved system understanding, safe operation in all operating modes, and the handling of exceptions and failures – including those with potentially catastrophic consequences if wrong decisions or actions are taken.

In recent years, customers from Germany and abroad have installed such solutions for the training of technical crews on board their naval vessels or as part of their land-based training facilities.

benntec has now developed the 5th generation of its simulation solution, the iSi-Frame®, and delivered it to the first customers.

The iSi-Frame® is a very flexible, modular simulation environment supplemented by an integrated Simulation-based Training Environment. The main features are:

• Modular Framework,
• Provides close-to-real-time simulation,
• Features distributed computing,
• Supports various interfaces, e.g. HLA, CORBA, Sockets...,
• Integrated User Administration,
• Different User rights (Roles),
• Record / Replay / Management of iSi-Frame® exercises,
• Assessment / Debriefing of iSi-Frame® exercises,
• Team training via network interface,
• Supports connection of both OEM software (e.g. automation systems) and OEM hardware such as operating handles.

The aim is to reproduce system behaviour and design as closely as possible to the original; thus, high-fidelity physical-mathematical models are combined with realistic software replication of on-board operating panels and user interfaces.

2. Challenges for Customer-Specific Maritime Training

Today's onboard systems are becoming more and more complex in terms of system design, operation and exception handling, including maintenance and repair. This demands deep system understanding and operational capability from the crew members, who must be able to handle the vessel and its systems safely, including in dangerous and emergency situations. Depending on the trainees' prior knowledge and the educational objectives, different types of training (with and without computer-based solutions) can be used to reach the required training objectives. All types of computer-supported education and training have advantages and disadvantages, Table I.

Table I: Types of computer-supported methodical-didactical approaches (incl. blended learning)

                                Hands-on lessons  Local CBT   WBT / Apps  Part-Task    Full Mission
                                and classroom                             Simulation   Simulation
Basic knowledge                 +++               ++          +           +            -
Generic knowhow                 +++               +++         ++          ++           ++
Type-specific knowhow           ++                +           ++          ++           +++
SPOT ON training                -                 ++          +++         +            -
Complexity of exercises         ++                +           +           ++           +++
Empirical realism               -                 +           +           ++           +++
Direct and individual feedback  +++               ++          ++          ++           +++
Team training                   ++                ++          ++          +            +++
Actuality of content            ++                ++          +++         ++           ++
Invest costs                    Medium            Low-Medium  Low-Medium  Medium-High  High
Operational costs               High              Medium-Low  Low         Medium       High

Very often, it is impossible to teach or train at the technical limits of systems, because the behaviour in case of malfunctions or wrong operator actions can cause serious damage to the equipment or injure persons. To avoid this, simulation using physical-mathematical models with near-real-time calculation is a way to improve the knowledge and awareness of the trainees without such risks.

Furthermore, the availability of trainers and instructors limits conventional classroom training. An alternative is SPOT ON (self-paced, on time, on need) training, with or without trainer support. Where support is needed, the common approach is to establish an internet connection between trainer and trainee; depending on distance and time, this can be rather costly, even with modern ship-shore communication tools.

It is not only the availability of instructors that limits training; the availability of equipment for operator and maintenance training is also restricted. This includes the training of emergency procedures and repair activities.

The proposed technical approach – part-task as well as full-mission simulation, both combined with CBT or WBT – addresses these challenges:

• Different knowhow levels of the trainees at the beginning (basic to refresher training)
• Training / Education on board as well as on shore with same or very similar tools
• Operation and system behaviour is close to reality
• Individual or team training possible with the same tools
• Integration with other training methods possible (e.g. to prepare demonstrations for classroom training)

The following use cases will illustrate this in more detail.

Fig.1: Types of training

3. Use cases for Modular and Interactive Simulation Training Environment

3.1. Use for operation and control of technical ship systems

For the simulations implemented in the interactive environment, the iSi-Frame® provides several powerful features and functions to trainees and instructors, all serving one primary objective: to enable trainees to acquire process know-how, even of sophisticated technical systems, within a short period of time.

This overall objective is ensured by different key features:

• High-fidelity physical-mathematical models,
• Realistic software replication of on-board operating facilities,
• Interactive process flowcharts,
• Load/Save of defined system conditions,
• Definition environmental conditions,
• Predefined malfunctions,
• Integrated Simulation-based Training Environment (iSi-Frame®).

All technical simulation models are designed to imitate the behaviour of the real systems to a good approximation. Thus, high-fidelity physical-mathematical models are evaluated at a high calculation frequency: the complete simulation, which can cover tens of thousands of values, is recalculated several times per second.
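
To illustrate what such a recalculation cycle means in practice, the following minimal sketch (not iSi-Frame code; the model and all constants are invented for illustration) advances a single first-order model – a heated tank – with a fixed time step at 10 Hz, kept close to real time:

// Minimal fixed-time-step simulation loop (illustrative sketch only, not iSi-Frame code).
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    const double dt = 0.1;           // 10 recalculations per second
    double tankTemp = 20.0;          // state variable [deg C]
    const double heaterPower = 5.0;  // [K/s] temperature rise at full power
    const double ambient = 20.0, lossCoeff = 0.05;

    auto next = std::chrono::steady_clock::now();
    for (int step = 0; step < 100; ++step) {
        // explicit Euler step of a first-order heating/cooling model
        tankTemp += dt * (heaterPower - lossCoeff * (tankTemp - ambient));

        if (step % 10 == 0)
            std::cout << "t=" << step * dt << " s, T=" << tankTemp << " C\n";

        next += std::chrono::milliseconds(100);  // keep the loop close to real time
        std::this_thread::sleep_until(next);
    }
    return 0;
}

In the real system, many such models for different subsystems run in parallel as separate processes and exchange tens of thousands of values each cycle.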

By means of realistic software replication of on-board operating facilities, the trainee is able to carry out operating procedures in exactly the same manner as later in practice. These might be simple units, e.g. pump starters, or very complex PLCs with a number of different display pages. This creates genuine recognition value when the trainee operates the real equipment for the first time.

All levels of control can be simulated with an iSi-Frame®-based training simulation: starting from the automation system, which can be connected as an OEM software module, continuing with the upper control level (e.g. bridge), down to local operating panels and even manual operations such as valve control and filter operation.

With 'process flowcharts', a very powerful training tool is provided to the trainees for acquiring process know-how. A 'process flowchart' is a dynamic visualisation of a technical system with all its details, such as pipes, tanks, pumps, fans, heaters etc. The visualisation is connected to the physical-mathematical models: system status is indicated, and manual operations can be carried out directly in these interactive flowcharts, Fig.2.

Every state of the simulation system can be stored or loaded at any time. Thus, instructors can prepare
lessons in advance to explain specific system behaviour without time-consuming preparation in the
lesson itself.

Simulated systems are influenced not only by operating procedures and malfunctions, but also by environmental conditions, which are defined within a special dialog window (a possible data-structure sketch follows the list below). In particular, the instructor is able to configure the following options:

• Sea area, sea state,
• Wave direction,
• Water temperature,
• Salinity,
• Wind state, wind direction, and
• Air temperature.
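
The following data-structure sketch illustrates one way such an environment configuration could be represented (purely illustrative; the field names and units are assumptions, not the actual iSi-Frame interface):

// Illustrative sketch of an environmental-conditions record
// (field names and units are assumptions, not the actual iSi-Frame API).
#include <string>

struct EnvironmentalConditions {
    std::string seaArea;        // e.g. "Baltic Sea"
    int         seaState;       // e.g. Douglas scale 0..9
    double      waveDirection;  // [deg]
    double      waterTemp;      // [deg C]
    double      salinity;       // [PSU]
    int         windState;      // e.g. Beaufort 0..12
    double      windDirection;  // [deg]
    double      airTemp;        // [deg C]
};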

Fig.2: Example of process flowchart

Predefined malfunctions alter the simulated systems' behaviour in a realistic manner. Instructors are able to select malfunctions from a list and send them via the network interface to a selected trainee or to all trainees. Malfunctions are defined during the design phase of a project in close contact with the customer.

The training simulation is supplemented by an integrated Simulation-based Training Environment (iSi-Frame®); its key features are:

• Integrated User Administration,
• Different User rights (Roles),
• Load / Save of user-defined System states,
• Record / Replay / Management of simulation exercises,
• Assessment / Debriefing of simulation exercises,
• Team training via network interface.

Creating exercises in iSi-Frame® is quite easy. User actions can be recorded and stored together with
an appropriate initial state. All user actions and system events are registered and can be modified
afterwards. Exercises can be replayed as demonstrations, trainings and tests. Targets for trainings and
tests are automatically generated from recorded actions. All exercise targets can be modified
individually by the administrator. Debriefing and assessment of exercises are implemented in iSi-
Frame® as well, Fig.3.
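
Conceptually, such an exercise can be thought of as a saved initial state plus a time-stamped list of recorded user actions, from which training and test targets are generated. The sketch below is purely illustrative; all types and names are assumptions, not the actual iSi-Frame data model:

// Illustrative sketch of exercise recording and target generation
// (all types and names are assumptions, not the actual iSi-Frame data model).
#include <iostream>
#include <string>
#include <vector>

struct UserAction {
    double      time;     // seconds since exercise start
    std::string element;  // operated element, e.g. "CoolingPump1.StartButton"
    std::string value;    // resulting state, e.g. "ON"
};

struct Exercise {
    std::string             initialState;  // saved simulation state to load first
    std::vector<UserAction> recording;     // registered user actions / system events
};

// Every recorded action becomes a target "element must reach value";
// the administrator may then edit, reorder or delete individual targets.
std::vector<std::string> generateTargets(const Exercise& ex) {
    std::vector<std::string> targets;
    for (const UserAction& a : ex.recording)
        targets.push_back(a.element + " -> " + a.value);
    return targets;
}

int main() {
    Exercise ex{"coldship.state",
                {{12.5, "CoolingPump1.StartButton", "ON"},
                 {47.0, "SeaWaterValve101", "OPEN"}}};
    for (const std::string& t : generateTargets(ex))
        std::cout << t << '\n';
    return 0;
}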

Fig.3: Simulation Exercise Editor

The tool set iSi-Frame® supports team training via the network. Special simulation exercises are used for creating target-oriented team-training sessions. 'Instructor' users can access the team setup dialog and are thus able to influence each trainee's simulation at any time, Fig.4. They can:

• Load the actual simulation state of a student,
• Send the actual simulation state to a student,
• Load the actual system view of a student,
• Send the actual system view to a student,
• Lock a trainee's simulation,
• Block a trainee's station for user inputs.

3.2. Use in communication training

Besides training for technical operation and control, iSi-Frame allows the creation of realistic communication training applications, including full operation of voice communication terminals, mobile radios and additional communication equipment, whether generic or type-specific. Training scenarios are defined based on the system design of the installed communication solution and the related fleet map. The configuration as well as the operational features of digital radios are the focus of communication training applications; direct calls, group calls and emergency calls, for example, can be realised with iSi-Frame. The real software behaviour and functions of the different types of radio equipment form the basis of practical classroom training. An integrated communication matrix – the fleet map, mirroring the real-world installation – allows sophisticated communication training via the network.

Fig.4: Team setup dialog

4. Setup of exercises combining Simulation and eLearning

The interactive simulation provides additional intelligent functions for combination with computer-based training. As described in section 3.1, user actions and system events can be recorded together with an appropriate initial state, modified afterwards, and replayed as demonstrations, trainings and tests; targets are generated automatically from the recorded actions and can be modified individually by the administrator, and debriefing and assessment are implemented as well. All users connected to the network and holding a 'Student' role may work in one single world, affecting each other's simulations. Alternatively, all students may work in their individual worlds, trying to solve the same task. Users holding 'Instructor' roles are able to influence each trainee's simulation at any time. This Role-World concept is the central administration philosophy of iSi-Frame®.

Fig.5: SBT classroom

Simulation Based Training (SBT) is a training solution that combines the transfer of theoretical and practical knowledge by interconnecting the iSi-Frame® with e-learning lessons. After learning theoretical details via e-learning lessons, the trainee deepens and tests the newly acquired knowledge by being guided directly to a corresponding simulation exercise. Thus, SBT is a unique training approach for establishing skills for decision-making and responsibility, Fig.5.

SBT lessons represent an innovative didactical approach for the trainer as well as the trainees. SBT itself is based on the concept of computer-based training, Fig.6.

Fig. 6: CBT single screen solution

The SBT lessons fully support the SPOT ON (self-paced, on time, on need) approach for self-learning and include interactive elements as well as a mix of modern media such as text, 2D and 3D graphical elements, animations, audio and video. The concept, well known from CBT, has been extended to simulation-supported lessons with self-controlled exercises.

Technical documentation, operator manuals, service and maintenance documentation and training manuals form the basis for the SBT lessons. The methodical and didactical preparation of these documents is included in the SBT development. All these documents are part of the SBT and are available at all times as "background information" within the SBT lessons if needed.

The SBT modules also include monitoring of the trainees' performance regarding the level of knowledge and new skills. Tests with exercises to be solved, and performance monitoring by different didactical means, are likewise part of the SBT modules. These are based on the content provided by theoretical input or practical exercises, and all tests have unambiguous solutions. Tests are performed without any help functions or similar aids. The precise evaluation of the results and the feedback to the trainees can be supported by a points-based system.

SBT is a solution for two screens, Fig.7. The CBT part on one screen guides the trainee through the learning content or the whole lesson. An interface between the CBT and the simulation gives the CBT part access to the simulation on the second screen: demonstrations, exercises or tests can be started to develop the trainees' capabilities through practical activities. Through this link between teaching (imparting knowledge) and its practical application (in the simulation in different operation modes), training results are significantly improved.

Fig.7: SBT dual screen solution

5. Interactive Simulation as “System of the Systems”

The simulation framework shown is an in-house development that serves as the basis for different kinds of simulations. It has a state-of-the-art software structure with the following main features:

• Modular software architecture
• Scalability of the solution
• Distributed interoperable network
• Implementation in C++
• Internal interfaces based on CORBA
• RTI modules as the backbone
• Segmentation between simulation models, user interface and communication / control
functions of the framework
• SBT modules provide functions for simulation-based exercises
• Bridge modules provide interfaces to external solutions

The software is designed according to a three-layer architecture:

• Data management layer
• Logic layer
• Presentation layer

Each layer consists of independent processes, which work together in this distributed system. The RTI (Real Time Infrastructure) processes are responsible for time synchronisation, data storage, data distribution and supervision of all processes – this is the data management layer. Data communication is completely separated from the logic layer and is handled exclusively by the RTI.

The physical-mathematical models of the different real processes form the so-called logic layer. The simulated world and its processes are implemented in this layer as pure console processes without user interfaces. The system is thus prepared for user-specific modules, such as customer-specific calculations, to be implemented without any knowledge of the internal structure, provided the agreed interface conventions are fulfilled. The timing and synchronisation of these physical-mathematical modules are also part of this layer.

The GUI (Graphical User Interface) modules are the presentation layer and form the interface to the
user.
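
The layer separation can be illustrated with a toy example in which the logic layer only reads and writes named values, a data-management component (standing in for the RTI, which in the real system also handles time synchronisation and supervision) owns storage and distribution, and the presentation layer only reads. This is a conceptual sketch, not benntec's implementation:

// Toy illustration of the three-layer separation (conceptual, not benntec's implementation).
#include <iostream>
#include <map>
#include <string>

// Data management layer: owns storage and distribution of all simulation values.
class DataStore {
    std::map<std::string, double> values_;
public:
    void   publish(const std::string& key, double v) { values_[key] = v; }
    double read(const std::string& key) const { return values_.at(key); }
};

// Logic layer: a physical-mathematical model as a "console process" without UI.
// It communicates with the rest of the system only through the data store.
void pumpModelStep(DataStore& ds, double dt) {
    double rpm = ds.read("pump.rpm");
    double target = ds.read("pump.cmd") > 0.5 ? 1800.0 : 0.0;
    rpm += dt * (target - rpm) * 0.5;  // first-order run-up/run-down
    ds.publish("pump.rpm", rpm);
}

// Presentation layer: a GUI would only read values and send operator commands.
void render(const DataStore& ds) {
    std::cout << "pump rpm: " << ds.read("pump.rpm") << '\n';
}

int main() {
    DataStore ds;
    ds.publish("pump.rpm", 0.0);
    ds.publish("pump.cmd", 1.0);  // operator starts the pump
    for (int i = 0; i < 5; ++i) { pumpModelStep(ds, 0.1); render(ds); }
    return 0;
}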

All these features, together with the experience gained in applying iSi-Frame, allow it also to be integrated into tactical or total-ship training. iSi-Frame is ready for current and future challenges in modern training and simulation environments.

Acknowledgements

The iSi-Frame 5.0 Training Environment has been designed and developed in close cooperation with Thyssenkrupp Marine Systems. Further input for the development also came from other industrial partners such as MTU Friedrichshafen. Furthermore, various national and local project funds have supported the development substantially.

References

BERGER, A. (2008), Bericht zu WESSP – Webfähige skalierbare Simulatorplattform, BMFT, Förderkennzeichen IW 061278

KUCHARZEWSKI, H. (2011), Projektbericht DIAG-SIM (Simulatorbasiertes Trainingssystem für Diagnostik und zustandsbasiertes Management von Schiffsdieselmotoren), ZIM des BMWi, Förderkennzeichen KF 2472801PR9, 2010-2011

LUCAS, U.v.; KÖHLER, V. (2012), Machbarkeitsstudie: Simulationsbasierte Ausbildung in virtuellen Welten, TBI für WiMi Mecklenburg-Vorpommern, Förderkennzeichen V-630-2-063-2011/180, 2011-2012

Autonomous Ships – Changing Perceptions and Expectations
Stig Eriksen, Svendborg International Maritime Academy, Svendborg/Denmark, ser@iti.sdu.dk

Abstract

This paper investigates the perception of what autonomous ships are and whether this perception has
changed over time. Research project material, scientific material and news articles on autonomous
ships are analysed to investigate how the concept of the autonomous ship and its benefits are perceived.
Some common understanding is found, but also considerable uncertainty as to whether autonomous
operation implies unmanned operation. In the scientific material, a tendency to clearly distinguish
between the autonomous and unmanned is becoming more prevalent. The implications of this ambiguity
in relation to the expected benefits of the autonomous ship are discussed.

1. Introduction

Automation has been used on ships since probably before the introduction of the steam engine.
Technological innovations allow tasks previously carried out manually to be automated, enabling fewer
crew members to operate ever larger and more complicated vessels more safely and with greater efficiency.
With technology advancing faster than ever, it is a small mental leap to extrapolate this evolution to a
scenario where automation takes over completely, and crew members become obsolete altogether. This idea of the fully automated and unmanned ship is nothing new; rather, it has been around for at least 30
years, Bertram (2003). In the 1980s, it was called the intelligent ship, see e.g. Noma (2016), but it is
now often referred to as the autonomous ship. While the concept of the fully automated, or at least highly
automated, ship is not new, using the term autonomous to describe the ship is relatively recent. The term
“autonomous ship navigation” was used as early as 1991, Stamenkovich (1991), but the earliest instance
of autonomous being used to describe the vessel as an entity found in the literature search for this paper
was an article from 2004, Young-il (2004). Autonomy is proclaimed as a disruptive technology with the
potential to revolutionise the maritime transport business, Jokioinen et al. (2016), and autonomous ships
have become an important research topic. Despite the interest in the topic, there is considerable
uncertainty regarding what actually constitutes an autonomous ship, Pico (2017). Much work has been
done to define the term. IMO has begun work to define the autonomous ship within their regulatory
framework, IMO (2019). Lloyds Register and other classification societies have described autonomy
levels that can be assigned to ships, Lloyd's Register (2016). Academic work has also been done on
defining the autonomous ship, including Rødseth and Nordahl (2017) in “Definitions for Autonomous
Merchant Ships”.

The Oxford lexical definition of autonomy as “the ability to act and make decisions without being
controlled by anyone else”, Hornby et al. (2005), may not at first glance appear very different from the
common perception of autonomy in shipping. To “act and make decisions”, however, implies the ability
of an entity to generate desire and act on that desire as opposed to acting on the desires of the creator,
Intel (2018). A truly autonomous system would act on its own laws and objectives and thus be inherently
non-deterministic. In the words of Rødseth and Burmeister (2012), “the more autonomy that is assigned
to a robot, the less controllable it is. The ultimate autonomous robot is the fully intelligent robot which
in principle is not controllable at all, except by very high level objectives.” Such a system is obviously
not desirable and quite far from even the most futuristic versions of the autonomous ship.

The word autonomy has clearly come to mean something else in the general discussion of autonomous
ships. This is not in itself a problem: words change meaning or are used in different contexts all the
time. For example, if an object was said to be artificial in the 14th century, this meant that it was artfully
created by someone with great skill, @listverse (2016). The problem arises when the new use of the
word is not clearly defined or if there are conflicting definitions, perceptions and uses. An unclear
definition is problematic since a statement or projection may lead to unrealistic expectations if the
reader's understanding of the term does not match that of the author. It may on the other hand also lead the reader to conclude that the author is making unsubstantiated claims. Ambiguity in the term also
makes it hard to challenge statements made by authors if exact definitions are not stated in every case.

The focus of this article is not to question the existing definitions of autonomy or the autonomous ship.
Instead, it will explore how the autonomous ship is perceived and defined by those working with the
concept. Research projects, scientific literature and maritime news sources are analysed to determine
what characterises the autonomous ship in the broader maritime community as well as if and to what
extent there is agreement on this perception. Perceived benefits of the autonomous ship and the expected
challenges in its development are also collected from the three different sources and analysed.

2. Autonomy versus automation

Automation is the use of machines or computers instead of people to do a job, Cambridge Dictionary
(2019). We are surrounded by automation in our daily lives, from the simplest mechanical radiator
thermostats to the complex algorithms running internet search engines. The terms automation and
autonomy certainly sound similar but, while a refrigerator is able to maintain a stable temperature
without human interaction, most would agree that a refrigerator is not an autonomous system, even if
the definition of the word may have changed. Autonomy and automation are two different things, but
they certainly overlap. Both can be described as existing on a scale with manual operation on one end
and autonomy or automation respectively on the other. An autonomous system within the realm of
technology must surely require a high level of automation. The question is whether the level or complexity of automation is in itself enough to make a system autonomous and, if so, where the dividing line lies.

Modern ships have an enormous amount of automation on board without being what is commonly
considered autonomous. Most systems are standalone systems designed to automate a single piece of
equipment such as a boiler, purifier or automatic filter. Some systems are connected together in groups
or even into one integrated control system encompassing many different subsystems. On many ships,
generators are started and stopped without any human interaction on signals given by the power
management system, for example. Engine control systems are becoming increasingly more complex,
regulating fuel injection and valve opening timing within milliseconds for the purpose of optimising
fuel efficiency and exhaust gas quality.

The same engine control systems have existed in cars for many years, making these vehicles highly
automated but still no nearer to autonomous. In the automotive world, the term autonomous suffers the
same issue of not really meaning autonomy any more. In the words of one Nissan engineer, “a truly
autonomous car would be one where you request it to take you to work and it decides to go to the beach
instead,” Autotrader (2019). The common understanding is that the term autonomous car refers to one
where the driving is automated. This idea also seems to apply to ships, but the issue of self-driving gets
a little more complicated here. The autopilot controls the ship’s heading for most of the ship’s voyage
under constant monitoring by the navigating officer. Even simpler autopilots now offer the feature of
following a track as laid out on the electronic chart and automatically changing course at waypoints. Of
course, the autopilot normally controls only the rudder and not the machinery or thrusters. Ships with
Dynamic Positioning (DP) systems go further than this and are able to maintain a stationary position
and even carry out planned manoeuvres with great precision, totally independently of human interaction.
None of these systems, however, are able to “look” out of the window, perceive other ships and avoid
collisions based on those inputs.

If self-driving is the dividing line between autonomous and non-autonomous, then the only missing
piece is the ability to detect and avoid other ships or objects in accordance with the International Regula-
tions for Preventing Collision at Sea (COLREGS). If unmanned operation is, however, part of what
defines an autonomous ship, then self-driving is only part of the puzzle. After all, it must be remembered
that, unlike in road transport, driving the vessel is just a small portion of the work carried out on board
large cargo ships. Autonomy, as already mentioned, is sometimes described as being on a scale. One
commonly used scale is Lloyd's autonomy levels, which range from AL 0, “Manual – no autonomous functions”, to AL 6, “Fully autonomous”, Lloyd's Register (2016). The problem presented by these scales, in the context of understanding what defines an autonomous ship, is that all ships, no matter how primitive, essentially fall onto the spectrum and could be designated as autonomous in some form.

Where does that leave us in the understanding of the autonomous ship? In summary, the definition of
autonomy found in the dictionary is not what is meant by autonomy in shipping. Automation is crucial
for vessel autonomy, but the level or complexity of automation may not in itself be what enables
autonomy. Self-driving may be a crucial ability, but if unmanned operation is part of the perception of
autonomy, then this is only part of the solution.

3. Methodology and data collection

The basis of this article is literature on the subject of autonomous surface ships, with particular emphasis
on merchant vessels. The sources can be divided into three categories: research projects, discussed in section 3.2, and scientific literature and news articles, both discussed in section 3.3. The material used has
been gathered through an exploratory literature search. All project literature that could be found and publicly accessed has been used. Of the scientific literature and news articles, a selection believed to constitute a representative part of the total available material has been made. In none of the categories does the list of materials claim to be exhaustive.

3.1. Methodology

The material for this article was read and analysed with the purpose of investigating how the texts
discuss the concept of autonomous ships. The focus is on how words or phrases are used in the texts and
how they describe the subject in general. The material was processed with a focus on how a
knowledgeable reader would understand the text. Understanding a text, however, is highly subjective
and different readers will form different understandings of the texts based on their specific knowledge,
attitude and focus. Effort has been put into not extrapolating or interpreting statements and not reading
preconceived understandings into the text beyond what the author intended. The material was largely
read and analysed by the same person. Selected material has been analysed by more than one person to
validate the method and results.

3.2. Research projects

The available material was studied and analysed with a focus on the following categories:

• How is autonomy defined, and what characterises the autonomous ship?
• How is the term automation used in relation to autonomy?
• What are the expected benefits of the autonomous ship?
• What are the expected challenges in developing and operating the autonomous ship?

The analysis of the projects is presented in sections 4.1 and 4.2. In the category of autonomous and/or
unmanned ships, two projects stand out: “Maritime Unmanned Navigation through Intelligence in
Networks”, MUNIN (2018), and “Advanced Autonomous Waterborne Applications”, AAWA (2016).
Other commercial projects exist, most notably the Yara Birkeland project, Kongsberg (2019), but also
the planned NYK (2019) test of an autonomous container vessel. The publicly available material on
these projects is, however, very limited. The material published by MUNIN and AAWA provides the
most comprehensive project descriptions available and is also referenced frequently in other sources.
The two projects are briefly described in the following.

3.2.1 MUNIN

The MUNIN project, MUNIN (2018), was a three-year collaborative research project that completed its
work in 2016. The project was mainly funded by the European Commission under its Seventh Framework Programme. There were eight partners in the project, consisting of educational institutions,
private research institutions and technology companies.

3.2.2 AAWA

The AAWA initiative was led by Rolls-Royce and ended in 2017, Jokioinen et al. (2016). The project
was funded by the “Finnish Funding Agency for Technology and Innovation”. Contributors to the
project included several of Finland’s largest universities as well as industry partners such as DNV-GL
and Inmarsat.

3.3. Scientific literature and news articles

The literature included in this paper comprises articles published in scientific journals and articles from
conference proceedings, including five from previous COMPIT conferences. One working paper is also
included. Articles and presentations from the MUNIN and AAWA projects are discussed under the
research projects category. The focus of processing scientific literature in this paper is on understanding
how researchers not associated with these projects perceive the concept of autonomous ships. Material
with different focuses, such as legal, business, technological and human-nature aspects, has been included to get as broad a view of the perception of autonomous ships as possible. In all, twelve scientific
papers were analysed.

News coverage should not be treated as an accurate source of scientific material, but news articles are
included in this article as they convey how the concept of the autonomous ship is perceived by the
public. To the reader with a cursory interest in the subject, maritime and business-oriented news sources
are likely to be their main source of information. How the definition, benefits and prospects of
autonomous ships are portrayed in the news is likely to form the perception of the subject in the broader
public. The news articles mostly originate from open source online magazines. One press release is also
included in this category. The search was conducted as one normally would if looking for news on
autonomous ships online, through searching well-known online maritime news sites, such as
worldmaritimenews.com and shippingwatch.com, and through generic web search engines.

A quantitative approach was adopted in the analysis of both scientific literature and news articles. One
matrix for the scientific literature and one for the news articles, as seen in Tables I and II, were constructed with categories based on what was found in the research project analysis. The matrices in Tables I and II are similar but not identical. Some terms or statements are prevalent in the scientific
literature but not in the news articles and vice versa.

Date of publication is noted, and for the news articles also which sources are quoted. The use of the term
autonomy versus automation and unmanned versus autonomous is assessed to determine if there is a
“clear separation” or “no or unclear separation” between the terms. “No or unclear separation” means
that the terms are explicitly used synonymously, or the connection is clearly implied such as “[…] a
challenge for an autonomous vessel designed to operate safely without any crew onboard”, Willumsen
(2018).

The analysis treats the article or paper as one collective piece but allows for conflicting statements in
the text. If one source refers to a benefit of autonomous vessels, for example, while another source, the
journalist or the author presents an opposing statement, both views are registered for that article. If a
benefit is only mentioned for the purpose of opposing it, only the opposing view is recorded. The same
is the case for relationships between statements. In one news article, two sources are interviewed that
each has their own take on autonomy. The first source explains “[…] there are many perceptions of what
"autonomous" actually means and whether the term refers to unmanned or manned vessels”, Pico
(2017), separating the two terms. Later in the article, the other source states that “in general if we
compare manned and autonomous vessels, the savings for autonomous vessels is roughly 23 percent
[…]”, clearly implying that the autonomous vessel is unmanned. In this case, both categories “no or
unclear separation” and “clear separation” are registered.

4. Analysis

4.1. MUNIN

The material studied in this analysis consists of twelve papers, eleven deliverable reports and four
brochures completed by twenty-two different authors or editors. All the material is available on
MUNIN’s website (2018).

4.1.1. MUNIN’s perception of the autonomous ship

MUNIN investigated potential concepts for partially and fully unmanned ships by exploring the specific
case of a bulk carrier in intercontinental trade. The focus was on unmanned operation, but the vessel
was intended to operate autonomously or automatically for large parts of the voyage. When operating
close to land, the ship was either to be controlled remotely or operated by an onboard crew. The vessel
must be able to call for assistance by remote control in situations where the onboard control systems
cannot cope. If a connection to land cannot be established, the vessel must enter a fail-to-safe mode,
which could mean maintaining a stationary position, essentially requiring a dynamic positioning system,
Rødseth and Burmeister (2012). The ship must have a large degree of mechanical redundancy and would
require more advanced automation and more sensors than conventional ships.

The MUNIN project was initiated by the Waterborne TP, which defines the autonomous ship in its
“Strategic Research Agenda” (2011) as a vessel incorporating: “next generation modular control
systems and communications technology [that] will enable wireless monitoring and control functions
both on and off board. These will include advanced decision support systems to provide a capability to
operate ships remotely under semi or fully autonomous control.”

The article “Developments Towards the Unmanned Ship” by Rødseth and Burmeister (2012) sets out
the initial position and rationale regarding the development of an unmanned ship within MUNIN. In the
article, the relationships between the terms automatic, autonomous, and intelligent control are discussed.
It is explained that full autonomy as defined by Waterborne TP is not desirable within the MUNIN
project since such a system with no constraints would be non-deterministic and “[…] it cannot a priori
fully know what the possible outcomes of the decision will be.” This level of autonomy, which
Waterborne TP calls fully autonomous, is described as intelligent within MUNIN. Somewhat
confusingly, intelligent constitutes an autonomy level above autonomous, as seen in Fig.1. Automatic is
below autonomous on the scale of autonomy.

Fig.1: Autonomy versus determinism in MUNIN, Rødseth and Burmeister (2012)

Within MUNIN, autonomous control is defined as “the ability to make complex decisions that may not
be easily described through mathematical or logic formulas, but which still are constrained within certain
predefined limits”, Rødseth and Burmeister (2012). The authors explain that “most systems that claim
to have autonomous control functions […] are mostly automatic rather than truly autonomous or even intelligent” and go on to explain that “MUNIN will develop the principles for a basically automatic
ship, but with some capability to handle certain unplanned situations within defined constraints.”

4.1.2. Autonomy versus automation in MUNIN

As seen from the statement in section 4.1.1, what is meant by autonomy in MUNIN is to a large extent an advanced form of automation. This is supported by Fig.1, where autonomy and automation are
presented on the same scale.

The term automation is used infrequently in the material. When it is, the notion that automation will
lead to autonomy is mostly supported, such as here: “it is assumed that gradual automation will step by
step lead the way from today’s conventional shipping to truly autonomous shipping in the future”. In
other places, it appears that automation in itself will not lead to autonomy, such as: “as there is normally
no crew aboard during autonomous operation, the unmanned vessel not only has to be equipped with
high fidelity automation and various additional sensor systems, it also needs facilities for autonomous
operation”, Kretschmann et al. (2015).

4.1.3. Autonomy versus unmanned in MUNIN

In one of the project's earlier publications, the terms autonomous ship and unmanned ship are defined
in the contexts of MUNIN: “An autonomous ship is navigating and making evasive maneuvers based
on an automated software system. The system and the ship are under constant monitoring by a Shore
Control Center (SCC). An autonomous ship does not have to be unmanned but can contain maintenance
or service crews, while the bridge and/or the engine control room is unmanned. An unmanned ship is a
ship with no humans onboard. An unmanned ship does not have to be autonomous; it can be under
autonomous control but it can also be under remote control from a SCC, or from other places (e.g. a
pilot or tug boat, or a mooring supervisor)”, Porathe et al. (2013).

This definition clearly separates the two terms: an autonomous ship can be manned, and an unmanned
ship is not necessarily autonomous. Despite this clear separation, the two terms are often used
interchangeably and sometimes synonymously throughout the material. Some examples: “obviously
with the autonomous vessel there is no Master and crew on board.”; “it is not considered that the liability
risks of the autonomous ship will differ greatly from the manned ship save for the obvious exception of
there being an eliminated risk of personal injury/illness and death of crew”, Kretschmann et al. (2015).

Use of the terms unmanned and autonomous varies greatly throughout the different publications, which
sometimes favour one term or the other but often use a mix of the two interchangeably. Unmanned is
used both to describe the vessel as an entity and the operation of the vessel. Autonomous is used to
describe the ship as an entity, its mode of operation or navigation as well as the bridge and engine room
systems controlling the autonomous ship.

A detailed analysis of where the terms unmanned and autonomous are used has not been carried out, but
unmanned tends to be used more in sections discussing legal issues, for example, while autonomous is
used more frequently in parts where control systems are described. It is also possible that the different
authors favour one term over the other in different contexts.

4.1.4. Benefits and challenges of autonomous ships in MUNIN

The main benefit of the autonomous ship expected in the MUNIN project is a reduction in crew costs.
Improved safety of both the ship and people due to a reduced risk of human error is also an important
advantage. Human error is the cause or a contributing factor in the majority of marine accidents, Allianz
(2017). Autonomy is expected to improve ship safety by replacing the human operator with automation.
By eliminating the crew, the risk of harm to people on board also disappears. The risk of fire onboard
or pollution from the ship is also expected to be reduced since it will be possible to fill enclosed spaces
with inert gas without having to consider the safety of a crew.

Regarding operational factors, MUNIN explains that autonomous operation would promote slow
steaming and reduce off-hire time. The vessel’s construction could be optimised if there were no need
to house a crew. There would be no need for an accommodation, which would reduce the weight of the
ship, reduce wind resistance and leave room for more cargo. Hotel systems for the crew, such as sewage
and air conditioning, could also be eliminated. This would make the ship simpler and more reliable.
Eliminating the accommodation and hotel systems would also reduce the building cost of the ship as
well as the fuel consumption and therefore also the emissions. MUNIN also considers it necessary to
run the engines on diesel instead of heavy fuel oil, which would simplify the fuel oil system, making it
cheaper and more reliable.

Ship intelligence is what MUNIN calls the benefits originating from increased data collection and
processing. Increasing the intelligence of the ship is expected to optimise operation, route planning and
weather routing as well as allowing for better condition monitoring and management of machinery.
MUNIN’s main focus is on unmanned operation and it explains that the benefits originating from ship
intelligence are not exclusive to unmanned ships. Traditionally manned ships or vessels with reduced
manning could incorporate these technologies and reap the benefits of increased ship intelligence.

The challenges in developing and operating autonomous ships, as perceived within MUNIN, include achieving sufficient technical robustness and coping with breakdowns of non-redundant machinery and communication link failures. Also mentioned is the difficulty of participating in rescue operations and interacting with
conventional ships in general. The lack of legal and contractual frameworks around autonomous ships
is also mentioned. Autonomous ships may need longer port stays to allow time for maintenance.
Regarding human factors, the difficulty of transferring “ship sense” to the control centre is mentioned
along with the issue of automation awareness where the operator is not fully conscious of what the
automated system is doing.

4.2. AAWA

This analysis is based on the document entitled “AAWA Whitepaper” or “Remote and Autonomous
Ships – The next steps” published in 2016 by Jokioinen et al. (2016). The document spans 84 pages and
is a collaboration between 16 authors.

4.2.1. AAWA’s vision of the autonomous ship

The object of AAWA’s research is not a specific concept of one type of ship but rather the development
of remote and autonomous ships in general. Some general prerequisites are, however, mentioned for a
ship to safely operate autonomously. The autonomous ship must be under constant supervision from a
shore control centre that is alerted and able to take over control if a situation that the onboard system
cannot cope with is encountered. If the connection is lost and the onboard system is not able to resolve
a situation, the ship must be able to proceed to a safe location and maintain its position, essentially
requiring dynamic positioning capabilities. More mechanical redundancy is required, along with more
advanced automation and more sensors.

Autonomy, or the autonomous ship, is not explicitly defined in the AAWA material. Autonomous is
generally used to describe the ship as an entity but the term is also used synonymously or inter-
changeably with smart and intelligent. “Autonomous shipping is the future of the maritime industry. As
disruptive as the smart phone, the smart ship will revolutionise the landscape of ship design and
operations.” Mikael Makinen, President Rolls-Royce Marine, Jokioinen et al. (2016). “Two years ago
talk of intelligent ships was considered by many as a futuristic fantasy. Today, the prospect of a remote
controlled ship in commercial use by the end of the decade is a reality”.

4.2.2. Autonomy versus automation in AAWA

Automation is not frequently used as a term in the material. When it is used, it generally supports the
notion that automation leads to autonomy. “From a technology point of view, autonomous vessels and self-driving cars may have many things in common […] For example, ships are more likely to be
operated by companies than by private individuals and automation is more focused on remote control
than complete automation, at least in the early phases”; “safety and security impose essential
constraining requirements that need to be fulfilled in the design and implementation of ship automation.
In principle, autonomous or tele-operated ships are required to be, at least, as safe as conventional
vessels in similar service”, Jokioinen et al. (2016).

4.2.3. Autonomy versus unmanned in AAWA

The object of the AAWA project is the autonomous ship but in the context of AAWA in general this is
an unmanned ship. Many of the benefits of autonomous ships proposed in the material originate from
there being no crew on board. Autonomous vessels are also juxtaposed with manned vessels in the text,
such as here: “however, there will always be manned vessels sailing along with autonomous ships”.
Autonomous vessels are also referred to as having no crew on board: “autonomous ships involve greater
legal challenges than remotely operated ones. The latter ones still have a crew, even if not on board
[…]”; “lack of permanent crew on-board the autonomous ships would emphasise the role of port
operators in accepting the cargo […]”. In a few places in the text, the autonomous ship is discussed as
having a crew: “the crew members need to be trained in any case to fulfil all functional tasks and
capabilities left for the crew in autonomous ships,” Jokioinen et al. (2016).

Autonomous is generally used more frequently than unmanned but use of the two terms varies greatly
throughout the text. Autonomous is used almost exclusively to describe the ship as an entity except in
the chapter discussing legal issues, where unmanned is mostly used.

4.2.4. Benefits and challenges of autonomous ships in AAWA

Many of the same benefits and challenges mentioned in MUNIN are also proposed in the AAWA
project. AAWA also considers the main benefits to be reduced crew costs and improved safety for the
ship and people due to the elimination of human error. The elimination of accommodation and hotel systems, with all the resulting benefits, is also frequently mentioned. Building and operational costs are expected to fall,
while productivity, reliability and eco-efficiency are expected to improve.

More data is expected to optimise the ship's operation, enable more effective pooling for the shipping
operator, open the door to new leasing opportunities and allow for better online cargo services. It is also
expected to optimise route planning and weather routing as well as allowing machine diagnostics,
improving the maintenance schedule and enabling the ship owner to obtain a better fuel price.

The challenges in developing and operating autonomous ships as perceived within AAWA are a
shortage of internet bandwidth, the dangers of hacking, signal latency in remote operation, difficulties
in obtaining satisfactory fusion between sensors and conflicting sensor data in general. There is also the
danger of skill degradation in operators, since they may not get sufficient experience if the autonomous
system is in control most of the time. Skill degradation, combined with reduced situational awareness,
is reported as especially critical when operators are required to take over control at short notice in
emergency situations. Poor situational awareness and poor automation awareness also pose a risk
especially when the operator is expected to monitor multiple ships. It is expected to be difficult to
incorporate the concept of “good seamanship” into an autonomous system, making it a challenge to
comply with COLREGs. The fact that existing legislation and contracts do not accommodate
autonomous ships is also mentioned. AAWA also believes the risk of cargo-related incidents may rise.

4.3. Scientific papers

The scientific papers covered in this article focus on different aspects of autonomous ships, such as
legal, commercial and human aspects, or on the development or testing of specific technical systems.
How precisely the concept of the autonomous ship is defined varies depending on the focus of the article.
Papers that explore the broader implications of the introduction of autonomous ships, for example, tend to be very specific in their definition. Articles that describe technical systems tend to describe the system
and how it was developed or tested in great detail, while the general concept of ship autonomy tends to
be only vaguely defined if at all, since it is not important for the understanding of the system described.
The scientific papers used in this article are considered to be of high quality and the analysis is not an
assessment of this in any way. Nor should the analysis be taken as a comment on whether the concept
of the autonomous ship is considered to be adequately defined or not. The analysis serves only to
investigate how the papers present the concept of the autonomous ship within their own frameworks.

Automation is very rarely used as a term in the scientific material in general and only in one article is it
used synonymously with autonomy. The term smart is used as either an overarching term that
encompasses the autonomous ship or synonymously with autonomous in three articles.

In four articles, there is a clear separation between the terms autonomous and unmanned. One example:
“note that the terms unmanned and autonomous ships are often interchanged, but they are not the same.
An unmanned vessel could be remotely operated, and it’s therefore not autonomous; while an
autonomous ship could be manned”, Mediavilla et al. (2016). In six articles, there is no or unclear
separation between the terms, such as here, where manned and autonomous are juxtaposed: “future work
will deal with: conflicting rules, interaction of autonomous and manned vessels […]”, Mediavilla et al.
(2017). Sometimes, the two terms are used interchangeably: “it is established that not all commercial
goods will be suitable for autonomous transport by sea. Unmanned vessels are expected to carry cargoes
that are stable and non-hazardous,” Hogg and Ghosh (2016). In other cases, it is stated or implied that
there is no crew on board the autonomous vessel, such as: “one could think that the autonomous ship
would be the solution to this kind of unlucky events, because it is unmanned”, Ahvenjärvi (2016); “First,
the lack of human presence on‐board may render the proposed autonomous ships unseaworthy[…]”,
Carey (2017). In only two articles is the separation between the terms not discussed or only insufficiently discussed.
Five articles state that autonomous does not necessarily mean unmanned. In four of these five, there is
clear separation between autonomous and unmanned in general. The last of these five articles makes
statements elsewhere in the text implying that autonomous is unmanned. There is a clear tendency for
articles published later to clearly separate autonomous from unmanned, while the earlier articles
generally do not.

Two papers consider remote controlled operation to be a level of autonomy that is not fully autonomous,
while two papers do not distinguish between remote controlled operation and autonomous operation.
Three articles do not consider remote controlled ships to be autonomous.

4.3.1. Benefits and challenges of autonomous ships in scientific papers

Of the expected benefits of autonomous ships, improved safety for the ship and people is mentioned in
six out of the twelve papers. Four papers state that accidents due to human error will be reduced on
autonomous ships, while three papers challenge this notion. One paper refers to statements that support
both views. Three papers expect autonomous ships to result in improved fuel efficiency, while increased
operational efficiency is only mentioned once. Lower crew costs are mentioned as a benefit in five
papers and another expects lower operational costs in general, which could encompass crew cost,
although this is not specified. The possibility of eliminating hotel systems due to the absence of crew
on board is mentioned three times, and the possibility of carrying more cargo due to not having an
accommodation is stated in two papers. The benefit of increased reliability on autonomous ships is
mentioned only once. Three papers mention the autonomous ship having a positive impact on the
expected shortage of seafarers in the future.

The most frequently mentioned challenge in the development of autonomous ships is that maritime
legislation does not allow for autonomous operation. This issue is mentioned in eight of the twelve papers,
while data security is mentioned as a challenge in four. The uncertain business case of the autonomous
ship is mentioned as an issue in two papers. The man-machine interface between the autonomous
ship and remote or onboard operators is also mentioned as problematic in two papers. Three papers state
that the building cost of an autonomous ship will be higher than that of a conventional ship, while one
expects it will be lower. The public's perception of autonomous ships as being unsafe and the perception
that autonomous ships will put people out of jobs are both mentioned in three papers.

4.4. News sources

Popular news articles contain views, opinions and statements from many different sources. The short
format of a news article does not allow for exhaustive elaboration on or definitions of all the topics
discussed. Journalists may quote out of context or paraphrase sources, giving an inaccurate
representation of the originally intended message. Perceptions and statements on autonomous ships
presented in news articles should therefore be treated with some caution. The analysis shown in Table
II, however, presents some interesting trends when all twenty-three news articles are considered
together. Of the twenty-three articles, ten quote, paraphrase or contain interviews with representatives
from Rolls-Royce. Six articles contain statements from shipping companies and nine contain statements
from the remaining sources: Kongsberg, Yara, MacGregor, Wärtsilä, Norled, labour unions and authorities.

In three articles, the term smart is used either as an overarching term that encompasses the autonomous
ship or synonymously with autonomous ship. The term intelligent ship is used in two.

Ten articles use the words automation or automated in the discussion of autonomous ships. In none of
these ten articles is there a clear distinction between the two terms. In some articles, the words seem to
be used synonymously, such as here: “despite the fact that ULCVs [Ultra Large Container Vessels]
would not opt for full automation they are likely to adopt many parts of R&A [Remote and Autonomous]
technology”, World Maritime News (2018b). In other places, it appears that autonomous and/or
unmanned operation is the end result of increased automation, such as “the ongoing push toward
automation of ships is not likely to result in crewless containerships anytime soon […]”, World Maritime
News (2018a). From the analysis, it does not appear that using autonomy and automation synonymously
is becoming less prevalent with time. There is unclear separation between the terms in six out of the
nine articles published in 2018. Of the articles published before 2018, there is only unclear separation
in four out of fourteen.

There is generally very little distinction between the terms autonomous and unmanned in the news
sources. In only two articles is it clear that autonomy does not mean unmanned. In one of these two
articles, a quote from another source ties unmanned to autonomous elsewhere in the text, meaning the
article displays both “clear” and “unclear separation”. In five cases, however, it is indicated that an
autonomous ship is not necessarily unmanned all the time, and that autonomous technologies could be
applied on manned vessels. In all these five articles, statements implying that autonomous is unmanned
are found elsewhere in the text.

In five articles, the term “fully autonomous” is used in a context meaning both unmanned and not remote
controlled. The term is used often when referring to the Yara Birkeland project: “Yara Birkeland will
initially operate as a manned vessel, moving to remote operation in 2019 and expected to be capable of
performing fully autonomous operations from 2020,” World Maritime News (2017b). A vessel under
fully autonomous operation is an unmanned vessel when presented in this way.

Whether remote control is autonomous operation or not is a matter of some contention. Five articles make
this distinction, while three treat remote control as being some form of autonomy, such as here:
“Japanese shipping company Nippon Yusen Kabushiki Kaisha (NYK) intends to test an autonomous
containership in the Pacific Ocean in 2019. […] The boxship, the size of which has not been specified
yet, would be remotely controlled”, World Maritime News (2017a).

4.4.1. Benefits and challenges of autonomous ships in news sources

When considering the perceived benefits of autonomous ships, safety for ship and crew is mentioned
most often, appearing in eleven articles. However, one source also questions the truth of this claim. In
eight cases, it is specified that this improvement in safety is the result of reduced human error as the
human operator is removed. This claim is contested in two articles. A reduction in fuel consumption is
mentioned seven times, while optimising operational efficiency is highlighted in six articles but also
challenged in one. Regarding other benefits, reduced crew costs have four mentions, while higher cargo
capacity and a reduction in the number of hotel systems because of not having an accommodation are
mentioned two and five times respectively.

Regarding challenges in the development of autonomous ships, the fact that the business case is not yet
clearly proven is mentioned most frequently. This issue is pointed out in eight articles, while the issue
of maritime legislation not accommodating autonomous ships is mentioned in five. Other highlighted
challenges are data security at two mentions, and operator skill degradation and public perception at one
mention each.

5. Discussion

From the analysis, a general agreement on some aspects of the autonomous ship is seen. The autonomous
ship is expected to require a large degree of mechanical redundancy, it will require more advanced
automation and more sensors, and it must be possible to monitor it constantly from shore. Dynamic
positioning is mentioned as a probable necessity in both the MUNIN and AAWA projects. There is also
agreement on the need to be able to control the vessel remotely from an on-shore control centre, although
whether remote controlled operation constitutes autonomy is a point of contention.

Whether the autonomous ship is unmanned or not is another aspect, and a vital one, where there is little
agreement. This issue is crucial since most of the expected benefits of the autonomous ship originate
from there being no people on board. Some say that autonomous means unmanned, while others say
that it definitely does not. Many sources, however, are either vague about the distinction or say that
autonomous may be manned in one place in the text but then use autonomous and unmanned
synonymously in another. This ambiguity is found in all three categories of research projects, scientific
literature and news articles. In the scientific literature, there is a trend of the latest papers being clearer
in their separation of autonomous and unmanned. It could be that there is a growing awareness of the
need to clearly define autonomy in the context of these papers due to the term's perceived ambiguity.
This trend is not seen in the news articles. In the news, there is a tendency towards increasingly unclear
separation between automation and autonomous. It could be that the definition of autonomy has
increasingly come to mean something that is closer to automation and that the two terms are now used
synonymously. It could also mean that there is a growing awareness that what has previously been
discussed as or attributed to autonomy is in fact automation.

The trends seen in Tables I and II relating to the separation of the terms automation and autonomous,
and in particular between autonomous and unmanned, are not reflected in the expected benefits. There
is a big difference in how many and which benefits and challenges are mentioned in the papers and
articles, likely depending on the subject. Some benefits seem to group together: the lack of qualified
seafarers only appears in the first half of Table I, but it is hard to see any clear trends in general.
Regarding the challenges, one development stands out. An unproven business case appears only in the
second half of both tables but is mentioned frequently, in eight out of thirteen articles, in Table II.

Much of what autonomy is expected to bring is related to unmanned operation. Removing the
accommodation and eliminating the hotel systems is only possible if the vessel is completely unmanned.
Without doing this, the benefits of a simpler, more reliable ship that is cheaper to build, has more room
for cargo and is more fuel efficient due to lower weight and less wind resistance cannot be claimed.

Some benefits can only be partially claimed if the crew is merely reduced. What can be saved in terms
of crew costs, and with it the increased incentive for slow steaming mentioned in MUNIN, is uncertain
if it is not specified to what extent the crew size is reduced. If there
is still a crew on board, they will also still be subjected to the hazardous working conditions at sea and
there is still an expected shortage of seafarers in the future to account for.

With humans in the operating loop, the benefit of increased safety for the ship and people due to reduced
human error is also uncertain. Without specifying exactly which processes will be made autonomous it
cannot be determined if and how autonomy will result in reduced human error. Increased safety is the
most frequently mentioned benefit of the autonomous ship, but it is also the most contested.

Other expected benefits of the unmanned and/or autonomous ship are effects derived from having to
make the vessel simpler, with more redundancy and a greater number of sensor points, which could arguably
be achieved equally well on a conventional manned vessel. One example is the simpler and more reliable
fuel system due to running on diesel instead of heavy fuel. Optimised route planning and weather routing
along with better condition monitoring and management of machinery are the result of more sensors
and better data processing, which does not require the ship to be unmanned or even autonomous in any
way. The same is the case for more effective pooling, new leasing opportunities and better online cargo
services as a result of greater data flow as mentioned in AAWA.

This does not in fact leave many of the perceived benefits of the autonomous ship. Increased operational
efficiency and better operational reliability remain, although it is not always clear from the material
exactly how autonomy will contribute to this.

6. Conclusion

There is some common understanding of the overall concept of the autonomous ship but there is no
consensus as to whether an autonomous ship is unmanned or not. There does seem to be a growing
awareness of the necessity to separate the terms within the scientific material, although this
understanding has not permeated through to the news media. The changing perception of the
autonomous ship not necessarily being unmanned, however, has not changed the expectations of what
autonomy has to offer the maritime world. Whether an autonomous ship is unmanned or not has a huge
impact on the benefits that can be expected. Most benefits of autonomous ships found in the analysed
material are in fact related to the lack of crew on board.

What an autonomous ship is if it is not unmanned is not clear from the analysed material. IMO,
classification societies and researchers have already done much to define the autonomous ship and
autonomy levels. This paper emphasises the importance of this work, but it is also clear that these
definitions have not been fully absorbed by the broader public yet. Efforts should be made to precisely
define which kind of autonomy is being discussed in each case to avoid supporting or creating
unfounded expectations. Perhaps the term autonomy should be used with greater caution in general,
especially when describing systems that could more accurately be defined as automated.

Acknowledgments

This paper was written in connection with the PhD project entitled “Autonomous Ships from the Per-
spective of Operation and Maintenance”. The project has been generously funded by The Danish Mari-
time Fund, Lauritzen Fonden, A/S D/S Orient’s Fond and Svendborg International Maritime Academy.
The author would like to thank PhD Project Supervisor Marie Lützen and Co-Supervisor Mads Bruun
Larsen for their support and contributions.

Table I: Analysis of scientific material

Table II: Analysis of news articles

LISTVERSE (2016), 10 Words That Originally Meant Something Really Different, Listverse,
https://listverse.com/2016/02/12/10-words-that-originally-meant-something-really-different/

AHVENJÄRVI, S. (2016), The Human Element and Autonomous Ships, TransNav, 10/3, pp. 517-521

ALLIANZ (2017), Safety and Shipping Review 2017, annual review, Allianz Global Corporate &
Specialty SE, Munich, Germany

AUTOTRADER (2019), Automated vs. Autonomous Vehicles: Is There a Difference, Autotrader,
https://www.autotrader.com/car-news/automated-vs-autonomous-vehicles-there-difference-273139

BENSON, C.; SUMANATH, P.; COLLING, A. (2018), A Quantitative Analysis of Possible Futures of
Autonomous Transport, INEC 2018 Conf., Glasgow

BERTRAM, V. (2003), Cyber-Ships - Science Fiction and Reality, 2nd COMPIT, Hamburg, pp.336-349

CAREY, L. (2017), All Hands Off Deck: The Legal Barriers to Autonomous Ships,
NUS Law Working Paper 2017/011, Singapore

CAMBRIDGE DICTIONARY (2019), Cambridge Business English Dictionary, Cambridge University
Press Online, https://dictionary.cambridge.org/dictionary/english/automation

HOGG, T.; GHOSH, S. (2016), Autonomous merchant vessels: examination of factors that impact the
effective implementation of unmanned ships, Australian J. Maritime & Ocean Affairs, 8/3, pp.206-222

HOOYDONK, E.V. (2014), The law of unmanned merchant shipping, an exploration, J. Int. Maritime
Law 20, pp.403-423

HORNBY, A.S.; ASHBY, M.; WEHMEIER, S. (2005), Oxford advanced learner's dictionary of
current English, Oxford, Oxford University Press

IMO (2019), IMO takes first steps to address autonomous ships, press briefing 08 25/05/2018,
http://www.imo.org/en/mediacentre/pressbriefings/pages/08-msc-99-mass-scoping.aspx

INTEL (2018), Autonomous Technologies, video,
https://www.intel.com/content/www/us/en/education/highered/autonomous-technologies-video.html

JOKIOINEN, E.; POIKONEN, J.; HYVÖNEN, M.; KOLU, A.; JOKELA, T.; TISSARI, J.; PAASIO,
A.; RINGBOM, H.; COLLIN, F.; VILJANEN, M.; JALONEN, R.; TUOMINEN, R.; WAHLSTRÖM,
M. (2016), Remote and Autonomous Ships the next steps, Whitepaper, AAWA project

KOBYLINSKI, L. (2018), Smart ships – autonomous or remote controlled, Scientific J. Maritime Univ.
of Szczecin 53/125, pp.28-34

KONGSBERG (2019), Autonomous ship project, key facts about YARA Birkeland - Kongsberg
Maritime, https://www.km.kongsberg.com/ks/web/nokbg0240.nsf/AllWeb/4B8113B707A50A4FC125811D00407045?OpenDocument

KRETSCHMANN, L.; RØDSETH, Ø. J.; SAGE-FULLER, B.; NOBLE, H.; HORAHAN, J.;
McDOWELL, H. (2015), D9.3: quantitative assessment, MUNIN project deliverable

LLOYD'S REGISTER (2016), Cyber-enabled ships ShipRight procedure – autonomous ships, Lloyd's
Register guidance document, Southampton

MEDIAVILLA, J.; CAHARIJA, W.; SMITH, R.; BHUIYAN, Z.; NAEEM, W.; CARTER, P.;
RENTON, I. (2016), Autonomous COLREGs Compliant Ship Navigation, Using Bridge Simulators and
an Unmanned Vessel, 15th COMPIT, Lecce, pp.280-287

MEDIAVILLA, J.; HIRDARIS, S.; SMITH, R.; SCIALLA, P.; CAHARIJA, W.; BHUIYAN, Z.;
MILLS, T.; NAEEM, W.; HU, L.; RENTON, I.; MOTSON, D.; RAJABALLY, E. (2017), MAXCMAS
Project - Autonomous COLREGs Compliant Ship Navigation, 16th COMPIT, Cardiff, pp.454-464

MERENLUOTO, J. (2018), One Sea: Steps Towards Autonomous Maritime Operations, 17th COMPIT,
Pavone, pp.331-340

MUNIN (2018), MUNIN website, http://www.unmanned-ship.org/munin/partner/

NOMA, T. (2016), Existing conventions and unmanned ships - need, Master Thesis, World Maritime
University, Malmö

NYK (2019), NYK to Participate in Demonstration Project to Remotely Operate a Ship, NYK Line
Website, https://www.nyk.com/english/news/2018/1191211_1687.html

PICO, S. (2017), Svitzer still exploring the benefits of autonomous sailing, Shippingwatch Website,
https://shippingwatch.com/secure/carriers/article10042149.ece

PORATHE, T.; BURMEISTER, H.-C.; RØDSETH, Ø.J. (2013), Maritime Unmanned Navigation
through Intelligence in Networks - the MUNIN project, 12th COMPIT, Cortona, pp.177-183

RAHAV, A. (2017), Case Study - Totem Fully Autonomous Navigation System, 16th COMPIT, Cardiff,
pp.97-100

RØDSETH, J.Ø.; NORDAHL, H. (2017), Definitions for Autonomous Merchant Ships, report,
Norwegian Forum for Autonomous Ships

RØDSETH, Ø.J.; BURMEISTER, H.-C. (2012), Developments toward the unmanned ship, Int. Symp.
Information on Ships, pp.30-31

SINGH, Y.; SHARMA, S.; SUTTON, R.; HATTON, D. (2017), Path Planning of an Autonomous
Surface Vehicle based on Artificial Potential Fields in a Real Time Marine Environment, 16th COMPIT,
Cardiff, pp.48-54

STAMENKOVICH, M. (1991), An Application of Artificial Neural Networks for Autonomous Ship
Navigation Through a Channel, Vehicle Navigation and Information Systems Conf., Michigan/USA,
pp.475-481

WATERBORNE TP (2011), Strategic Research Agenda Waterborne transport & operations Key for
Europe's development and Future, WATERBORNE TP Route map issue 2,
http://www.waterborne.eu/media/20002/wirmplus2011plusprint-2-.pdf

WILLUMSEN, T. (2018), A legal reality check for autonomous shipping in 2018, Seatrade Maritime
News, http://www.seatrade-maritime.com/news/asia/a-reality-check-for-autonomous-shipping-in-2018.html

WORLD MARITIME NEWS (2017a), NYK to Test Autonomous Boxship in 2019, World Maritime
News, https://worldmaritimenews.com/archives/228202/nyk-to-test-autonomous-boxship-in-2019

WORLD MARITIME NEWS (2017b), VIDEO: World's First Autonomous, Zero Emissions Ship to Be
Ready by 2020, World Maritime News,
https://worldmaritimenews.com/archives/219669/video-worlds-first-autonomous-zero-emissions-ship-to-be-ready-by-2020

WORLD MARITIME NEWS (2018a), Maersk CEO: Unmanned Containerships Not in My Lifetime,
World Maritime News,
https://worldmaritimenews.com/archives/244613/maersk-ceo-unmanned-containerships-not-in-my-lifetime

WORLD MARITIME NEWS (2018b), Unmanned Ships – Are We There Yet?, World Maritime News,
https://worldmaritimenews.com/archives/247204/interview-unmanned-ships-are-we-there-yet

WRÓBEL, K.; MONTEWKA, J.; KUJALA, P. (2017), Towards the assessment of potential impact of
unmanned vessels on maritime transportation safety, Reliability Engineering & System Safety 165,
pp.155-169

YOUNG-IL, L. (2004), A collision avoidance system for autonomous ship using fuzzy relational
products and COLREGs, Lecture Notes in Computer Science, vol 3177, Springer, Berlin, Heidelberg

Application of Continuous Integration in Decision Support and Integrity
Management Systems of Offshore Structures

Babak Ommani, SINTEF Ocean, AMOS-NTNU, Trondheim, Norway, babak.ommani@sintef.no


Lasse Bjermeland, SINTEF Ocean, Trondheim, Norway, lasse.bjermeland@sintef.no
Vegard Aksnes, SINTEF Ocean, Trondheim, Norway, vegard.aksnes@sintef.no
Neil Luxcey, SINTEF Ocean, Trondheim, Norway, neil.luxcey@sintef.no
Svein-Arne Reinholdtsen, Equinor ASA, Trondheim, Norway, sverei@equinor.com
Timothy Edward Kendon, Equinor ASA, Trondheim, Norway, tike@equinor.com

Abstract

Simulation-based decision support systems play an important role in safe operation of offshore float-
ing platforms. These systems, which consist of digital models, simulators, and analysis procedures,
are subject to continuous change. The conventional engineering practices for developing models and
performing analysis are not designed to implement these changes, and track their consequences, effi-
ciently. Therefore, ensuring the reliability and quality of models and analysis results becomes a chal-
lenge. In software engineering, development methods such as Continuous Integration (CI) are de-
signed to handle complex development processes with high frequency of modification. This process is
adopted here to present the modelling and analysis procedures needed for station keeping of floating
offshore platforms. It is shown that this adaptation can increase the efficiency and reliability of analy-
sis, facilitate communication, and shorten the time needed for adopting the outcome of new research.

1. Introduction

Application of digital solutions, in particular in modeling and simulation, is well established in differ-
ent areas of research and engineering. Innovative ideas and new solutions in most areas of science and
engineering must first be translated into software before they can be tested and adopted in practice.
Computer models and simulators play an important role in transferring the state-of-the-art knowledge
and methodologies from a researcher in a university or an institute to where it is going to be used,
whether it is an engineering company, a medical facility, a part of government, or an international
organization establishing long term policies and strategies. Computer models help us to investigate
and understand the behavior of complex systems. Using them, it is possible to predict the outcome of
different scenarios in order to increase safety and reliability of systems. This is of particular im-
portance in areas such as oil and gas, where unforeseen events and accidents can have devastating
environmental and human consequences.

A reliable and up-to-date computer model, together with simulators armed with state-of-the-art
knowledge in each area, is desirable to increase the reliability of systems and reduce the possibility for
accidents. This is achievable by continuous evaluation of the systems' integrity, i.e. Integrity Man-
agement (IM), and making sure they satisfy the limits put forward by regulations. Moreover, such
systems could act as Decision Support Systems (DSS) and be used during planning and execution
phases of an operation, as well as providing technical support during the design phase.

To answer these needs, a wave of digitalization is on the horizon, in particular for the oil and gas sec-
tor. Concepts such as digital twins, Grieves and Vickers (2017), and machine-learning-based models,
Shalev-Shwartz and Ben-David (2014), are developed and adopted by different companies at a rate
considerably higher than before. Hence, rethinking conventional analysis and assessment procedures
to better benefit from the advantages of digitalization is important.

2. Station keeping of offshore floating platforms

A brief overview of the issues related to offshore floating platforms' station keeping is given here.
Fig.1 shows a typical view of a floating offshore platform. Types and applications of such platforms
can be found in textbooks such as Faltinsen (1990), among others. These structures are commonly
used for drilling and producing oil and gas from offshore fields when fixed structures are not applicable,
for example due to high water depth. Similar structures have in recent years been adopted for
offshore wind, Sclavounos et al. (2008), and aquaculture plants, Bjelland (2017), as well.

Floating platforms are subject to harsh weather conditions combining waves, wind and current. A
combination of mooring lines and an active thruster-assisted dynamic positioning system, Sørensen
(2011), can be utilized to maintain a platform's position. When it comes to semi-submersibles, spread
mooring systems are by far the most common way of ensuring that the platform keeps its intended
position. Risers are pipes that connect the platform to the sub-sea system on the sea bottom. For pro-
duction platforms, they are used to bring oil and gas to the surface; whilst for drilling platforms, they
act as a conduit for the drilling fluids and cutting returns, and provide a barrier for drill string, casing
and downhole tools against the sea. In either case, it is important to make sure that the motions of the
platform stay within the designed limits, otherwise riser breakage and leakage may occur which can
cause environmental catastrophes. The life-span of offshore floating platforms typically varies be-
tween 20 to 50 years. Therefore, to increase safety and operability, continuous inspection, evaluation,
and maintenance of all equipment onboard is a must. Regulations play a key role in this respect,
DNV-GL (2010).

An offshore platform is a very complex system. As an example, the aspects relevant for station keep-
ing are chosen as the main focus here. The main goal of station keeping evaluation, as defined in
regulations such as DNV-GL (2010) and ISO (2018), is to ensure that the platform's mooring system can
withstand the environmental conditions it may face with an acceptable level of reliability during the plat-
form's lifetime. The requirements are usually categorized under ultimate, accidental and fatigue limit
state analyses. These criteria consider different platform states and different severities of environmen-
tal conditions. For example, Ultimate Limit State (ULS) analysis concerns the intact operational con-
dition of an oil and gas producing floating platform. The regulations require that under 100-year re-
turn period environmental conditions, i.e. a one percent probability of exceedance in any given year, the plat-
form can withstand the environmental loads. For each scenario a group of acceptance criteria, such as
mooring line safety factor or remaining fatigue life, is considered. In the ULS case, for instance, the
most probable safety factor, Naess and Moan (2012), for the mooring lines is one of the criteria.
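
As a minimal illustration (a paraphrase, not a verbatim criterion from the cited regulations), the
mooring line check can be summarised as requiring

\mathrm{SF} = \frac{S_{\mathrm{MBL}}}{T_{\mathrm{MPM}}} \geq \mathrm{SF}_{\mathrm{req}}

where $S_{\mathrm{MBL}}$ is the minimum breaking load of the line, $T_{\mathrm{MPM}}$ the most
probable maximum tension in the design condition, and $\mathrm{SF}_{\mathrm{req}}$ the required
safety factor prescribed by the applicable rule.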

Fig.1: Schematic view of a semi-submersible (front) and a ship-shaped (back) floating production
platform, Norsk-Hydro (2018)
Fig.2: Digital model of an FPSO vessel in the simulation workbench SIMA®

To be able to evaluate a station keeping system the following components are considered. First, a
dynamic model of the platform is needed. This model contains the mathematical representations of all
key factors in determining a platform's response at sea. For instance, the platform's mass distribution,
models representing forces on the platform due to wind and ocean current, models describing platform
interactions with waves, and models for mooring lines and risers (e.g. Faltinsen (1990) for details of
such models). Models for representing the stochastic nature of the environment, that is wind, waves
and ocean current, come in addition. By combining these models, it is possible to predict a platform's
responses at sea. Rules and regulations guide the selection of reliable models. Moreover, they put
forward a reliable method for performing the analysis and evaluating the outcome. Using the two, it is
possible to evaluate the station keeping integrity and reliability in terms of statistical safety factors.
With this information, it can be ensured that the operation of the platform is within the limits of the
relevant rules and regulations for the station keeping system. Similarly, this information can be used
during a design phase.

In addition to the regular evaluation of the platform's safety, special analyses can be performed for
special conditions. For instance, the model can be used together with a weather forecast to assess the risk
of operating in a storm. The same model can also be used to plan for performing an efficient and safe
operation. Repositioning the platform, changing the pretension of the mooring lines, connecting or
disconnecting risers, equipment maintenance, and crew transportation are among these operations.

3. Simulation-based integrity management and decision support systems

Application of simulation-based design and operation is well known in all fields of engineering. How-
ever, the procedures to take advantage of the potentials of such methodologies are developed differ-
ently in different fields. Here, a view of the essential components of such a system, applied to the
station keeping of floating offshore platforms described in Sec. 2, is presented. A similar system can be
adopted for issues of a similar nature in other areas. Later, the associated opportunities and
challenges are addressed.

3.1. Definition

Having a reliable digital model of an important asset, e.g. a floating offshore platform, is essential in
ensuring efficient operation and safety. Fig.2 shows a view of a digital model for an FPSO in SIN-
TEF's marine simulation workbench (SIMA). The digital model contains data, models, and proce-
dures representing the platform and its dynamic behavior.

The digital model is used in simulation-based decision support systems. The latter is defined as a digi-
tal system which can be used to predict the consequence of changes in environmental or operational
factors. This system can be used to assess the outcome of different scenarios in a virtual environment
and provide input to the decision-making process. These scenarios, for instance, can include planning
for a lifting operation, or what to do before an approaching storm arrives.

Adopting a digital duplicate, or a digital twin, of a real asset in a virtual environment for supporting
the decision-making process is not a new idea. The term digital twin was originally defined in the con-
text of Product Lifecycle Management, Grieves and Vickers (2017), for a similar purpose. In the pre-
sent context, this term is loosely equivalent to a decision support system. A digital twin implies that
all aspects of a real asset are presented digitally. Although it may be desirable in some cases, for a
complex system such as a floating offshore platform, it would include many unrelated models. For ex-
ample, pressure valves in the process system, and chain winches in the mooring system are essentially
parts of the same digital twin. However, they have little to do with each other and operate on very
different time scales. Therefore, it seems logical to try and simplify the models by creating digital
models of specific aspects of the real asset. Further, it should be possible to combine the models when
needed to study possible interactions between different aspects. For example, a decision support sys-
tem for station keeping should contain a digital model covering all aspects of the real asset relevant
for the station keeping problem.

Another core part of the decision support system is the simulation engine which receives the environ-
mental factors and predicts the outcome. For instance, the outcome can be the most probable safety
factor for the mooring lines of a floating oil platform in the next three days, or the most probable maxi-
mum acceleration on a gangway in the next three hours. This information could be used by the crew
to make decisions on whether to continue or stop production, or a certain operation.

3.2. Components

Several components are needed in a reliable simulation-based decision support system. Fig.3 presents
a schematic view of these components, which includes models and their relevant data, simulators, and
procedures.

Each ‘Model’ includes a mathematical representation of a component in the system. The representa-
tions may be analytical, numerical, or empirical. The ‘Data’ completes the models to represent a real
asset: for instance, the mass and stiffness coefficients in a dynamic solver, or the shape and boundary
conditions in a Computational Fluid Dynamics (CFD) solver. Moreover, ‘Data’ represents the state of
the asset, which can be obtained from live sensors, measurements, or simulations.

Fig.3: Components of a simulation-based decision support system

Fig.4: Example for the components in a simulation-based decision support system for station keeping
of a moored floating offshore platform

Models representing the components of a system are put together in ‘Simulators’. Simulators ensure
proper communication between different models and with the world outside the system. This outside
world could be the real world, for instance when an operator controls a crane model, or a virtual world.

The most common form of simulators is the time-domain solver, which may represent the passing of
time in the real or a virtual world. For example, the dynamic behavior of a simple mass-spring-damper
system can be solved in time using a time-domain solver.

The ‘Procedures’ describe the analysis methodology and the process of providing data and performing
simulations, as well as how to interpret the results. For instance, in the dynamic system example, the
procedures clarify how long the simulation should be in order to achieve the steady-state solution
based on the system's natural period and damping.
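
As an illustration of both points, the following minimal Python sketch implements a time-domain
solver for a one-degree-of-freedom mass-spring-damper system and derives the simulation duration
from the natural period and damping. All numerical values are hypothetical, and the semi-implicit
Euler scheme is only one of many possible integration choices.

import numpy as np

# Minimal time-domain simulator for a 1-DOF mass-spring-damper system:
#   m*x'' + c*x' + k*x = F(t)
m, c, k = 1.0e6, 5.0e4, 2.0e5              # mass [kg], damping [Ns/m], stiffness [N/m]
force = lambda t: 1.0e5 * np.sin(0.5 * t)  # hypothetical external load [N]

omega_n = np.sqrt(k / m)                   # natural frequency [rad/s]
zeta = c / (2.0 * np.sqrt(k * m))          # damping ratio [-]
t_end = 5.0 / (zeta * omega_n)             # ~5 decay time constants -> steady state
dt = 2.0 * np.pi / (50.0 * omega_n)        # 50 time steps per natural period

x, v = 0.0, 0.0                            # initial displacement and velocity
for t in np.arange(0.0, t_end, dt):
    a = (force(t) - c * v - k * x) / m     # acceleration from the equation of motion
    v += a * dt                            # semi-implicit (symplectic) Euler step
    x += v * dt

print(f"Displacement after {t_end:.0f} s: {x:.3f} m")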

An example of such a decision support system for integrity management of an offshore floating
platform's station-keeping system is given in Fig.4. Here, the models
include mathematical representations of the platform dynamics at sea and the environmental conditions.
They range from simple dynamic models for mass-spring-damper systems, to more complex hydro-
dynamic models for describing the loads on a platform in stochastic waves. Examples are models for
environmental forces and platform's global responses, an empirical model for viscous forces on col-
umns (Morison formula), and a simple model for considering the changes in drift force due to plat-
form's low-frequency horizontal motions (wave drift damping model), among others. The Finite Ele-
ment Method (FEM) is usually used to present a numerical model of the mooring lines. Stochastic
models for environmental factors are also needed. Interested readers are referred to Faltinsen
(1990) for a better view of the different models needed to simulate a floating platform's dynamics
at sea.
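
For reference, the Morison formula mentioned above gives, in its simplest form for a fixed vertical
cylinder, the horizontal force per unit length as

f = \rho C_M \frac{\pi D^2}{4} \dot{u} + \frac{1}{2} \rho C_D D \, u|u|

where $\rho$ is the water density, $D$ the column diameter, $u$ and $\dot{u}$ the undisturbed
horizontal fluid velocity and acceleration, and $C_M$ and $C_D$ the empirical inertia and drag
coefficients; the drag coefficient referred to below enters through the second term.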

All these models require additional inputs, i.e. model data. The ‘Data’ usually depends on the plat-
form and its location, and is obtained through experiments, simulations, or field measurements. For
instance, the shape and hydrodynamic coefficients of the platform are needed for the wave-body interaction
models. Moreover, the specification of the mooring lines is needed in the finite element model, while
a drag coefficient is needed in the Morison equation and a damping coefficient in the wave drift damping
model. The metocean models for wave, wind, and current also need specification. The model data can
be considered dynamic, meaning that it changes over time, e.g. environmental conditions, or static,
such as the platform's mooring line lengths, which still may be subject to changes but less frequently.

Simulators combine the models to realize a scenario. The outcome of a simulation will be platform
responses and mooring line loads for a specific environmental condition. Thus, a simulation outcome
depends on the properties of the adopted models as well as simulation configuration, e.g. simulation
duration and time step.

Having the models with data and the simulator, we have a digital twin of the platform which can be
used to establish the outcome of a certain change. However, the model is still incomplete. Depending
on what we are interested in, how to utilize the model to reliably predict the outcome needs clarifica-
tion. Clear ‘Procedures’ are needed to establish models, perform simulations, and evaluate the out-
come in order to increase the reliability of the conclusions. In this context, suggesting models for represent-
ing different hydrodynamic forces on the platform, defining methods to select environmental condi-
tions, and to obtain statistically reliable results, are among the examples. For example, to estimate the
most probable safety factor for a mooring line in the next 3 days, a series of simulations with different
environmental conditions and duration must be performed. Then the results should be combined with
a proper statistical model to address the sample variability in the environmental condition. Such pro-
cedures are usually complicated, and without them it is difficult to set up a valid simulation and evalu-
ate its outcome. Therefore, they are considered a part of a digital twin, or a decision support system
for station keeping of offshore platforms. Proper guidelines for such procedures are usually presented
in regulations, such as DNV-GL (2010) and ISO (2018).
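
As a sketch of such a procedure, assuming the sample of simulated maxima follows a Gumbel
distribution, the most probable maximum (MPM) line tension could be estimated as below. The
run_simulation() function is a hypothetical stand-in for a full time-domain solver run, and all
numbers are synthetic.

import numpy as np
from scipy import stats

# Hypothetical stand-in for one time-domain simulation of a line tension
# signal [kN]; a real implementation would launch the simulator with one
# random realization of the environmental condition per seed.
def run_simulation(seed: int) -> np.ndarray:
    rng = np.random.default_rng(seed)
    return 1000.0 + 200.0 * rng.standard_normal(10_000)

# Extract the maximum tension from each realization
maxima = np.array([run_simulation(seed).max() for seed in range(20)])

# Fit a Gumbel distribution to the sample of maxima; its location parameter
# is the distribution mode, i.e. the most probable maximum (MPM).
loc, scale = stats.gumbel_r.fit(maxima)
mbl = 5000.0  # hypothetical minimum breaking load [kN]
print(f"MPM tension: {loc:.0f} kN, implied safety factor: {mbl / loc:.2f}")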

3.3. Challenges

The system depicted in Fig.3 has the advantage of encompassing all the necessary parts to create a reliable
digital twin of an asset which can be used to predict its behaviour. The challenge lies in the fact that
each one of these components can and must eventually change. Keeping track of these changes, and
what they mean for the decision-making process, is not an easy task. The changes can be categorized into
physical and non-physical. The physical changes are directly coming from the changes in the asset,
for instance

• The state of the asset can change, e.g. losing a mooring line, or a change in the platform's in-
clination angle due to ballasting.
• The environmental input to the model can change, e.g. due to a revised metocean data or
weather forecast.

Non-physical changes are related to changes in modelling, simulators, or regulations. For instance,

• The mathematical and numerical models representing the physics behind the asset’s dynamics
could change due to the introduction of new methods, for instance a new model for describing the
wave drift force on a floating platform.
• The empirical coefficients representing the platform behaviour can change considering new
measurements or experimental data, e.g. a new set of wind coefficients from a new wind tun-
nel test.
• The analysis method can change due to changes in the regulations.

Some modifications are the direct result of changes in the real asset, while others are made to
investigate the outcome and are therefore temporary, such as changing the configuration of the ballasting
system or changing the pretensions in the mooring lines. These changes are usually imposed on the
digital twin to help with planning of an operation or to decide on the optimum configuration. In other
words, in the first group the change is dictated, while in the second group we would like to decide
what the change should be, so we can get the desired outcome. Although both groups use similar
procedures, the difference between the nature of the two is important. The second type of changes
results in many hypothetical situations, while the first group is real.

The need for reliability leads to the requirement that a decision support system, or a digital twin, should
be alive and responsive to changes. Fig.5 shows various components which interact with the system.
The model needs to represent the actual asset, which is in a state of constant change. Therefore, the
model data must be changed in a controlled way to keep up with the reality. These changes can be
made by the operator, or they may directly come from the sensors and live measurements (see for
instance wireless sensor networks (WSN), Raghavendra et al. (2006)).

The same can be said for the mathematical models describing the governing physics concerning the
platform. The main rules of physics are not subject to change in this context. However, due to the
problem's complexity, the models are rarely based on first principles. Simplifications and techniques
such as linearization, decomposition, and perturbation are heavily used to derive applicable models.
The model's validity is then tested against reality using experiments. This means, with advances in
computation technologies there is always room for creating applicable and more reliable models.
Moreover, changes in the environment, and birth of new concepts and designs may raise questions on
the validity of old models. A good example of this is the need for research on new models which can
better describe a floating platform's behavior in extreme seas (e.g. the ExWave JIP, Ommani et al.
(2017)). Ideally, the work of researchers who are developing new models needs to be regularly put into
use in a controlled and reliable way. In this way, we ensure that the decision support system is con-
stantly using the outcome of the latest studies to represent the important effects. Other tools such as
artificial intelligence, machine learning, and software analytics may be adopted to automate and
improve the process of updating and handling live changes in the model as well (see e.g. Pollock et al.
(2018)).

Any piece of software requires maintenance and updates. Simulators are no exception. In addition,
depending on the adopted models, simulators may need to use new forms of data communication or
time-marching methods. Rules and regulations, i.e. procedures in this context, are no different when it
comes to changes either. Unforeseen events or accidents may initiate research which preferably re-
sults in more reliable models, as well as new regulations. The main goal for the regulations is, and
correctly so, safety. Hence, it is natural to expect an updated version of regulations with each change
in the system, whether it is new environmental predictions, or better models, or a new design concept
for floating platforms.

Fig.5: Various factors which play important roles in the reliability of a decision support system

Now that it is established that the components of a reliable decision support system are constantly under
development, a principal issue arises in combining all the different components: how to ensure that the
latest changes to different models work properly in the system? Hence, changes in all components
need to be tracked, and the influence of each change on the outcome of the system must be document-
ed. The conventional process of developing and using digital models in engineering, in particular with
respect to offshore platforms, is not designed to this end. In the following, the Continuous Integration
process from software development is briefly introduced and adopted for this purpose.

4. Continuous integration in software development

Continuous Integration (CI) is a widely used methodology in software development, Duvall et al.
(2007). It has been shown that it reduces development cost, increases productivity, and improves
communication, Hilton et al. (2016). The essence of the continuous integration process is regular in-
tegration. In this method, team members integrate their work frequently, usually several times per
day. Automatic building and testing is used to verify each integration and detect integration errors as
quickly as possible, Fowler (2006). Here we try to give a brief introduction to the methodology.

Fig.6 shows a schematic view of the continuous integration process. In this scenario, the product,
which will be deployed at the end, is a software package. The action usually starts when one or sever-
al developers introduce changes in the source code. The changes are tracked incrementally using a
version control system. GIT, Loeliger and McCullough (2012), is a well-known and widely used ex-
ample of a distributed version control system.

The version control system tracks changes and integrates them with the previous and
other ongoing modifications. The process of branching and merging plays a key role to this end. The
changes are kept on separate branches and integrated frequently. The integration is usually carried out
on a remote server which all different developers can access. This service provides the possibility for
the developers to see the changes made by others and communicate regarding development plans.

Fig.6: Schematic view of continuous integration process in software development

After integrating changes, the software needs to be built and tested. The tests usually include unit
tests, to ensure software internal integrity, and targeted tests, which are defined by the developers to
verify the outcome of simulations. If the software does not build and test successfully, the process
goes back to the developer with a report of the problems. The software which passes the tests is made
available to the user through the deployment process, for example sending an installation file, or di-
rectly deploying it on the web. If deployment is done automatically, the term continuous deployment
is used together with continuous integration.

The building, testing, and deploying process is usually done using a build-server. Submitting changes
to the integration service is usually set to trigger the building and testing service. What kind of change
and on which repositories should trigger a response from the build server, and what kind of building
and testing should be performed in response, are decided by the developer who sets up the process. For
instance, a software package may use different libraries for different purposes. A proper build service
trigger must be set up in response to changes in each library where related tests are considered.
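
As a tool-agnostic sketch (each CI server has its own configuration format, and the repository and
step names here are purely hypothetical), such trigger rules could be expressed as:

# Hypothetical build-plan table: which repository changes trigger which steps
BUILD_PLANS = {
    "hydrodynamics-lib": {
        "triggers": {"push", "pull-request"},
        "steps": ["compile", "unit-tests", "regression-tests:waves"],
    },
    "mooring-lib": {
        "triggers": {"push"},
        "steps": ["compile", "unit-tests", "regression-tests:mooring"],
    },
}

def on_change(repository, event):
    """Return the build steps to execute for a change event, if any."""
    plan = BUILD_PLANS.get(repository)
    if plan and event in plan["triggers"]:
        return plan["steps"]
    return []

print(on_change("hydrodynamics-lib", "push"))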

In larger projects, a separate issue tracking system can be used to better plan for the required devel-
opment and tie all stages of the production together (see e.g. the Jira issue tracking system (Atlassian)).
Agile software development methodology, Martin (2002), is usually used for planning and managing
development projects adopting Continuous Integration. SCRUM, Schwaber (1997), is a widely used
example of such methods. In this method the systems development process is assumed to be an un-
predictable, complicated process that can only be roughly described as an overall progression. There-
fore, the system's development process is defined as a loose set of activities that combines known,
workable tools and techniques with the best that a development team can devise to build systems.

5. Continuous integration in decision support systems

Modern software development processes acknowledge the constant state of change in a product
(Sec.4). The progress in development is measured, not in completing carefully planned steps, but in
roughly described overall progress. Experience has shown that a similar statement can be made with
respect to many aspects of a simulation-based decision support system. In the following sections we
show how we utilized the Continuous Integration process to create a decision support system which
handles changes in different components in a reliable and efficient way. The ideas are presented in the
form of examples related to station keeping of offshore floating platforms but can be applied in differ-
ent fields as well. The tools are realized in SINTEF Ocean's simulation workbench for marine appli-
cations (SIMA).

The various parts of a simulation-based decision support system are presented in a CI process in
Fig.7. As discussed before, a decision support system can be
divided into several components, Fig.3. These components are the model and data (model state), sim-
ulators, and procedures for performing simulations and evaluating the outcomes. Changes to all these
components need to be tracked, similar to software source code.

Fig.7: Continuous Integration process for simulation-based decision support systems

The models for the offshore platform and its mooring system are presented in the simulation work-
bench SIMA (SINTEF-Ocean). This workbench includes two simulators, SIMO, Ormberg (2012), for
the platform dynamics and RIFLEX, Ormberg and Passano (2012), for the dynamics of slender struc-
tures, i.e. mooring lines and risers in this case. The mathematical models for the main components of
the problem are presented in SIMO and RIFLEX. These two simulators have their own development
cycle based on Continuous Integration. Any change in the mathematical models or the simulators
introduced by the developers is tracked using a distributed version control system, GIT in this con-
text, which is followed by automatic integrating, compiling, testing and deploying processes. In addi-
tion to the mathematical models included in the mentioned simulators, the simulation workbench aims
at providing the possibility of developing and coupling new models to the existing ones. The Functional
Mock-up Interface (FMI) standard, Blochwitz et al. (2011), is adopted for this purpose, together
with automatic code generators to speed up the development process.

The platform's data is taken from the structured representation in SIMA and serialized in text format
for version controlling using the same distributed version control system. A view of the platform's
model presentation in SIMA is shown in Fig.8. The data for each platform is kept on a separate re-
pository. When the data is large, for instance the results of a Computational Fluid Dynamics (CFD)
simulation, a database system is adopted, while the versioning of the data is linked to the version control
system. The platform state can be changed by an operator through the SIMA interface. The live meas-
urements by the sensors could also change the platform state. When the change is committed to the
version control system the process of evaluating the model will be started automatically.
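
A minimal sketch of this idea is given below, assuming the model state can be serialized to JSON;
the file name and fields are hypothetical and do not reproduce SIMA's actual serialization format.

import json
import subprocess

# Hypothetical extract of the platform state (a real model holds far more)
platform_state = {
    "draft_m": 23.5,
    "dp_system": {"active": True},
    "mooring_lines": [{"id": i, "pretension_kN": 1200.0} for i in range(12)],
}

# Serialize deterministically so that text diffs between versions stay small
with open("platform_model.json", "w") as f:
    json.dump(platform_state, f, indent=2, sort_keys=True)

# Committing the change (this assumes the working directory is a clone of
# the model repository) is what triggers the automatic model evaluation.
subprocess.run(["git", "add", "platform_model.json"], check=True)
subprocess.run(["git", "commit", "-m", "Update platform state"], check=True)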

Fig.8: Digital representation of a semi-submersible floating platform model in SIMA

A data flow Visual Programming Language (VPL), Burnett et al. (1995), known as Workflow is im-
plemented in SIMA to represent the simulation and analysis procedures derived from regulations.
This VPL is adopted with the intention to make quality control of the implemented analysis easier,
and to simplify the process of modifying or introducing new procedures by the domain experts. An
advanced user, i.e. a domain expert, can use Workflow to create processes for setting up and perform-
ing one or many simulations, post-processing the results, extracting the required indicators, and
producing a report. An example of a workflow for ULS analysis (see Sec. 2) is shown in Fig.9.

Fig.9: Part of the ULS analysis procedure developed using Workflow

Fig.10: A view of a Customized Graphical User Interface for running ULS calculations on a semi-
submersible floating platform developed by an advanced user.

To better control the analysis procedures implemented by Workflow, the functionality to create cus-
tomized graphical user interfaces is provided. Using this feature, an advanced user can create a dedi-
cated GUI for each analysis. Through this customized GUI, the operator can easily access, change,
and perform analysis without the need to go into the details of each analysis. An example of such
a customized GUI for ULS analysis is presented in Fig.10. The procedures defined by Workflow are
also serialized and kept on a separate version-controlled repository. The user can directly change and
submit new procedures in the form of workflows.

A complete system requires integration of models, data, simulators and procedures. The detailed pro-
cess of integrating, compiling, testing and deploying is different depending on what kind of change
has triggered the process. In this respect, an advanced user, who has knowledge about the rela-
tions between various parts of the model, can define the operations related to each change, in the same
way a developer creates a build plan. A dedicated interface has been designed for this purpose where
an advanced user can describe the actions related to a submitted change to each repository. For exam-
ple, when a change is submitted to the platform model repository, the actions include fetching the
repositories, combining and connecting them in one workspace, running the analyses defined as
Workflows, and deploying the obtained reports.

Implementing the CI process shown in Fig.7 requires a suitable software infrastructure. Remote serv-
ers and reliable backup systems, issue tracking capabilities, and build servers with distributed resource
management are among the required features of such a platform. At present, the software develop-
ment and collaboration tools from Atlassian (Atlassian) are used for managing most of the software de-
velopment projects at SINTEF Ocean, including SIMO, RIFLEX, and SIMA. Since a similar CI process
is used for decision support systems, the same tool is adopted to build it. Nevertheless, the pro-
cess itself is not tool dependent and can be implemented using other tools as well.

6. Decision Support Scenarios and CI

In this section, a selection of scenarios relevant for station keeping of offshore platforms is presented,
followed by the related CI process for each scenario. It is shown how the automated CI process
facilitates the decision-making process with increased efficiency and reliability.

6.1. Deactivating Dynamic Positioning System

A Dynamic Positioning (DP) system is a combination of thrusters and control units which is used to
control the position and heading of floating platforms such as ship-shaped floating production, storage
and offloading units (FPSOs), Sørensen (2011). In certain conditions, it may be desirable to shut
down this positioning system, e.g. to save cost and energy. In order to do so, it must be documented
that the vessel can operate safely without the DP system according to regulations.

Fig.11 presents the CI process for evaluating the deactivation of a dynamic positioning system. As in the context of version control of source code, a new branch of the platform model is created by the platform engineer, and the DP system is deactivated in it. The change is submitted to the version control system, which integrates the change into the remote repository and notifies the build server. The build server executes the action list provided by the domain expert to test the integrity of the provided model and to run the analyses required by the regulations. This includes, for instance,

• fetching the stable version of the simulators and the simulation workbench
• fetching and importing the active version of all necessary models, e.g. metocean models, mooring and positioning system, and the newly updated platform model
• fetching and importing relevant analysis procedures, for example ULS analysis
• performing the analysis on a distributed compute server and generating reports according to regulations

Fig.11: CI process for evaluating the dynamic positioning (DP) deactivation on a floating platform

If an error arises during the build process, for example a failed simulation, the platform engineer is notified to take proper action. Finally, the results of a successful analysis are deployed, which includes sending the reports and archiving them. Dedicated deployment procedures could be introduced to handle special deployment scenarios. For instance, in the present example, the platform engineer will be notified of the consequences of deactivating the DP system. If these appear acceptable, an approval request can be issued. The request is then received by the platform-responsible together with the model modifications and the analysis reports, and the platform-responsible can make an informed decision to approve the change or not. This is equivalent to issuing the so-called "pull request" in software development (Atlassian).

Approving a change on the model results in merging the modifications into the active branch of the model. This branch, called the "master" branch in software development, represents the current state of the platform at all times. By using the active version of the platform, engineers guarantee that they are using what actually exists at sea and not a hypothetical model of the platform. The modifications, analysis results, and decisions are all linked together through the issue tracking system, so the changes and the consequent actions can be retrieved and understood in the future. Issue tracking is also common practice in software development when introducing new features or fixing bugs.

6.2. Metocean design basis update

A metocean design basis describes the statistical models for the metocean conditions. These models
are constructed based on the available measurements, hindcast simulations and common practices,
and they are updated regularly. The steps of the process are as follows:

• A new design basis is submitted by a metocean engineer to the version control system.
• The system integrates changes to the remote repository and notifies the build server.
• The build server executes the action list provided by the domain expert to test the integrity of
the provided metocean data and produce a report.
• The generated report is sent to the metocean engineer and an approval request is issued.
• The person responsible for metocean data receives the approval request with the reports and
approves/rejects the change.
• Approving the request merges the changes to the active branch of metocean design basis.

A basic but crucial task for the person responsible for ensuring the integrity of a floating platform’s mooring system, herein referred to as the platform-responsible, is to make sure that the platform always satisfies the requirements for safe operation, considering the updates to the metocean design basis. Therefore, a change in the active branch of the metocean design basis for a certain offshore field will trigger the integrity-analysis CI process for the platforms in that field. This includes, for instance,

• fetching the stable version of the simulators and the simulation workbench
• fetching and importing the active models for all the platforms in that field, and the newly updated metocean model
• fetching and importing relevant analysis procedures, for example ULS analysis
• performing the analysis on distributed computing resources and generating reports according to regulations
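The fan-out described above, from one metocean update to all platforms operating in the affected field, could be expressed as in the following sketch. It is hypothetical: the field and platform identifiers are illustrative, and in the real system these calls would be queued as build-server jobs rather than printed.

# Hypothetical sketch: a metocean update for one field triggers the
# integrity-analysis CI process for every platform operating in that field.
PLATFORMS_BY_FIELD = {"field-A": ["platform-1", "platform-2"]}

def enqueue_integrity_analysis(platform, field):
    """Placeholder for submitting a build-server job for one platform."""
    print(f"queueing ULS analysis for {platform} with updated metocean of {field}")

def on_metocean_update(field):
    for platform in PLATFORMS_BY_FIELD.get(field, []):
        enqueue_integrity_analysis(platform, field)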

Finally, the results of a successful analysis are deployed. In the present case, a dedicated deployment procedure could be introduced to notify the platform-responsible if the analysis shows that the platform is not fit to continue operation based on the new metocean data. In that case, a separate process is started by the platform-responsible to investigate mitigating actions. The actions and the results of the analysis are archived and linked together with the registered issue on the tracking system. In this context, approval of the results means that the platform-responsible has reviewed and is aware of the consequences of the present change in the metocean design basis.

6.3. Planning a mooring line replacement

Planning any operation requires an extensive amount of analysis. The CI process helps automate the analysis procedures and ensures that the necessary aspects have been studied. For planning such operations, the platform engineer can create branches from the active model and apply different modifications to each of them separately. Submitting the changes automatically triggers the analysis procedures. The outcomes of the different branches can then be compared to find the most suitable modifications, for which an approval request could be issued.

For example, during the replacement procedure, one mooring line will be disconnected from the system. The goal is then to determine a new pretension configuration for the remaining lines that keeps the platform within the position limits and at acceptable line tensions. Different configurations can be submitted and compared with each other. Finally, a complete set of analyses can be ordered for the selected condition. When the platform-responsible receives the approval request, it will contain the results of all necessary analyses for the final configuration, provided by the build server.

6.4. Model for drift force in extreme seas

As presented in Sec. 3.3, the mathematical models representing the platform are subject to change. New models can be created based on new theoretical or experimental investigations. Moreover, existing models can be improved, for instance using machine learning algorithms, Pollock et al. (2018); Shalev-Shwartz and Ben-David (2014). As an example, a new model for calculating wave drift forces in extreme seas was developed during the ExWave JIP, Ommani et al. (2017). The CI process presented here could be used to facilitate adopting the new model for practical applications in a quality-controlled and reliable way.

Introducing the new model in a separate branch of the model repository will notify the build server. The build server then starts the list of actions defined for validating a new model. These include, for instance, accessing an experimental database and collecting relevant validation cases, running simulations with the new model and comparing the results to the old model and to experiments, and producing reports. The reports are reviewed by the researcher and possible modifications are made. When the model is validated, an approval request is issued. The request triggers the platform's verification procedure, which includes performing the routine analyses prescribed by the regulations. The platform-responsible then receives the results of the validation together with the analysis results in the approval request.

6.5. Simulator update

Simulators may receive regular updates due to maintenance fixes or new developments. It is important to document the effect of these changes on the results of platform analyses. For example, a change in a simulator's time-marching scheme can introduce a meaningful change in a mooring line's most probable safety factor. These changes must be documented. In this way, the developer can make sure that the changes in the results are intended or can be described as improvements. Further, the platform-responsible must be notified about the improved results and the consequences they may have for future decisions.

7. Benefits and potential of CI in engineering

The benefits of continuous integration tools are well known in software development, Hilton et al.
(2016). Similar to a computer program, a reliable decision support system is constantly changing.
When it comes to development and maintenance of such systems, the benefits are of the same nature
but with different implications.

7.1. Benefits of CI in decision support systems

Adopting the CI process and tools for keeping track of the digital model of a platform has several benefits. The version control system ensures that engineers in all disciplines work with the latest model. In this way, improvements by one discipline, e.g. platform dynamics, will be used by another discipline, e.g. riser design, reliably and efficiently. This facilitates communication between the disciplines considerably. Moreover, the improvements and modifications of the models are tracked and well documented. Therefore, at each stage, the history, reason, and consequences of modifications are available to a platform engineer. This increases the reliability of the models and consequently the safety of personnel.

Besides the models, the modifications of analysis procedures are tracked and controlled, which increases the quality and reliability of the simulations. The results of these simulations are important input to the decision-making process. Automatic assessment of the safety of offshore platforms using the automated build system is another crucial improvement. The process from modifying a model, for example the metocean model, or an analysis procedure, to rerunning the simulations for all platforms which may be affected by the change is not simple. Using this automated system, such evaluations are performed automatically upon introducing a change, and the responsible personnel are notified of the results. Another strong point of such a system relates to the improvement of simulators and mathematical models. The path from research and development to application on real platforms is usually long. Using this system of automated integration and testing, the developers and platform engineers have a common space to validate new models and document their improvements. This considerably reduces the time needed to move a new technology or methodology from research to deployment, while ensuring quality and reliability.

7.2. Future applications

Several applications of a decision support system for station keeping of offshore platforms were presented, and it was shown how they can benefit from the CI process. Other applications for such a system include, but are not limited to, decision support for cranes, pumps and ballast tank systems, free-fall lifeboats, sub-sea operations, helicopter landing, and maintenance operations.

For instance, a continuous integration process can be invoked every hour to re-evaluate a crane's operational condition based on the weather forecast for the coming hours. The crew can actively use this information, obtained for instance from a web interface, to execute the operation safely. Another example is the continuous evaluation of free-fall lifeboat applicability based on the weather forecast, which is directly relevant for personnel safety.
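A minimal sketch of such an hourly re-evaluation is given below. It is purely illustrative: the significant-wave-height limit, the forecast source, and the status-publishing mechanism are assumptions, standing in for the real weather-forecast service and crew-facing web interface.

# Hypothetical sketch: re-evaluating a crane's operability every hour
# against the weather forecast; limit and data source are illustrative.
import time

HS_LIMIT = 2.5  # assumed significant-wave-height limit for the crane [m]

def fetch_forecast_hs():
    """Placeholder for a query to the weather-forecast service."""
    return 1.8

def publish_status(operable):
    """Placeholder for updating the crew's web interface."""
    print("crane operable" if operable else "crane operation on hold")

while True:
    publish_status(fetch_forecast_hs() <= HS_LIMIT)
    time.sleep(3600)  # invoke the evaluation every hour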

The application of the continuous integration process to modeling and analysis, with the aim of increasing safety and reliability, is not limited to offshore platforms. The procedures described here could easily be generalized to other fields of a similar nature, such as process plants, marine operations, and shipping.

8. Conclusions

In the present paper, the applicability of the Continuous Integration process to evaluating the station keeping of floating offshore platforms was investigated. It was described how a digital twin of a floating platform is adopted to evaluate the operability criteria based on regulations. Moreover, a brief introduction to the Continuous Integration process and tools used in software development was presented.

The definition, components, and requirements of a reliable simulation-based decision support system (DSS) were outlined. The challenges in developing and maintaining such a system were discussed. It was shown how a reliable DSS, similar to a software product, is in constant development. It was argued that the present development and modelling procedures for engineering purposes do not support such frequent modifications efficiently. Therefore, the Continuous Integration process and tools were adopted to represent the analysis and development procedures relevant for decision support systems.

Station keeping of an offshore floating platform was chosen as an example. A brief introduction to the station keeping problem was presented. The relevant analysis procedures were represented as a Continuous Integration process, together with several practical scenarios. The CI tools for version control, building and testing, usually adopted for software development, were further developed and used for implementing these processes. The SINTEF Ocean Simulation Workbench for Marine Applications (SIMA) was used for modelling and simulation of the offshore platform. Moreover, it was further developed to serve as the user interface for establishing and running analysis procedures using the CI tools.

It was concluded that many of the analysis procedures relevant for offshore platforms can be represented using the Continuous Integration process and tools. This adoption increases the efficiency and reliability of analyses, facilitates communication, and shortens the time needed for adopting the outcome of new research.

Acknowledgements

This research activity is financially supported by the Research Council of Norway through a Strategic
Institute Program at SINTEF Ocean.

References

ATLASSIAN, Jira Service Desk | IT Service Desk & Ticketing

ATLASSIAN, Atlassian | Software Development and Collaboration Tools

ATLASSIAN, Pull Requests | Atlassian Git Tutorial

BJELLAND, H.V. (2017), Exposed Aquaculture Operations, Annual Report 2017 (SINTEF Ocean).

BLOCHWITZ, T.; OTTER, M.; ARNOLD, M.; BAUSCH, C.; CLAUSS, C.; ELMQVIST, H.;
JUNGHANNS, A.; MAUSS, J.; MONTEIRO, M.; NEIDHOLD, T.; et al. (2011), The Functional
Mockup Interface for Tool independent Exchange of Simulation Models, pp. 105–114.

BURNETT, M.M.; BAKER, M.J.; BOHUS, C.; CARLSON, P.; YANG, S.; ZEE, P.V. (1995), Scaling up visual programming languages, Computer 28, pp.45-54

DNV-GL (2010), DNV-OS-E301: Position Mooring.

DUVALL, P.M.; MATYAS, S.; GLOVER, A. (2007), Continuous Integration: Improving Software
Quality and Reducing Risk, Pearson Education

FALTINSEN, O.M. (1990), Sea loads on ships and offshore structures, Cambridge University Press

FOWLER, M. (2006), Continuous Integration

GRIEVES, M.; VICKERS, J. (2017), Digital Twin: Mitigating Unpredictable, Undesirable Emergent
Behavior in Complex Systems, Transdisciplinary Perspectives on Complex Systems, Springer, Cham,
pp.85-113

HILTON, M.; TUNNELL, T.; HUANG, K.; MARINOV, D.; DIG, D. (2016), Usage, Costs, and
Benefits of Continuous Integration in Open-source Projects, 31st IEEE/ACM Int. Conf. on Automated
Software Engineering, New York, pp.426-437

ISO (2018), ISO 19901-7:2013 - Petroleum and natural gas industries - Specific requirements for offshore structures - Part 7: Stationkeeping systems for floating offshore structures and mobile offshore units, Int. Standard Org.

LOELIGER, J.; McCULLOUGH, M. (2012), Version Control with Git: Powerful Tools and
Techniques for Collaborative Software Development, O’Reilly Media

MARTIN, R.C. (2002), Agile Software Development, Principles, Patterns, and Practices, Upper
Saddle River, Pearson

NAESS, A.; MOAN, T. (2012), Stochastic Dynamics of Marine Structures, Cambridge Univ. Press

NORSK-HYDRO (2018), Norsk Hydro oilfield project, North Sea Northern, https://www.offshore-technology.com/projects/njord/

OMMANI, B.; FONSECA, N.; STANSBERG, C.T. (2017), Simulation of Low Frequency Motions in Severe Seastates Accounting for Wave-Current Interaction Effects, ASME, p.V001T01A088

ORMBERG, H. (2012), SIMO Theory Manual V4.0, rev 1, MARINTEK

ORMBERG, H.; PASSANO, E. (2012), RIFLEX theory manual, MARINTEK

POLLOCK, J.; STOECKER-SYLVIA, Z.; VEEDU, V.; PANCHAL, N.; ELSHAHAWI, H. (2018),
Machine Learning for Improved Directional Drilling, Offshore Technology Conf.

RAGHAVENDRA, C.S.; SIVALINGAM, K.M.; ZNATI, T. (2006), Wireless Sensor Networks, Springer

SCHWABER, K. (1997), SCRUM Development Process, Business Object Design and Implemen-
tation, Springer, pp.117-134.

SCLAVOUNOS, P.; TRACY, C.; LEE, S. (2008), Floating Offshore Wind Turbines: Responses in a
Seastate Pareto Optimal Designs and Economic Assessment, pp.31-41

SHALEV-SHWARTZ, S.; BEN-DAVID, S. (2014), Understanding Machine Learning: From Theory to Algorithms, Cambridge University Press

SINTEF-Ocean SIMA, Simulation Workbench for Marine Applications

SØRENSEN, A.J. (2011), A Survey of Dynamic Positioning Control Systems, Annu. Rev. Control 35,
pp.123-136

Big Data Analysis Application: Brake Power and Fuel Oil Consumption
Estimation based on Public History Voyage Data of Ships
Myeong-Jo Son, Korean Register, Busan/Korea, mjson@krs.co.kr
Sang-yeob Kim, Korean Register, Busan/Korea, sangyeobk@krs.co.kr
Yeon Hwa Jo, Korean Register, Busan/Korea, yhjo@krs.co.kr
Gap-heon Lee, Korean Register, Busan/Korea, ghlee1@krs.co.kr
Min-Jae Oh, Seoul National University, Seoul/Korea, mjoh80@snu.ac.kr
Myung-Il Roh, Seoul National University, Seoul/Korea, miroh@snu.ac.kr

Abstract

At KR, based on AIS data and ocean weather data, we acquired ship information and processed it to establish an environment for big data analysis. On this basis, we developed a variety of applications that can analyze a ship's past operation profile, fuel oil consumption (FOC), statistics of the ocean weather conditions encountered during operation, and the sea margin by sea state. To verify the usability of this big data analysis environment, we acquired, processed and analyzed 2017 data for about 4000 ships of various types, including container ships, LNG carriers, bulk carriers, and tankers, among them 1400 KR-registered vessels.

1. Introduction

In the maritime industry, big data is mostly data related to ship operation, so it is not easy to obtain data for big data analysis. Measuring, sensing, collecting and processing these data takes time and budget. In addition, since this series of activities takes place on individual vessels, the ownership of the data belongs to the ship owners. Therefore, from the point of view of shipyards, which design and construct ships, and classification societies, which conduct various surveys and certifications, it is difficult to obtain data about the operation of ships without the consent or provision of shipping companies.

On the other hand, ocean-going vessels are obliged by IMO (2002), IMO (2003) to install AIS (Automatic Identification System) transmission equipment, which broadcasts the real-time position of each ship. If there is receiving equipment in the coastal area, this data can be collected and utilized, Rakke (2016), Cepeda et al. (2018). On the open ocean, however, such transmissions cannot be received onshore, so AIS data can be accessed only through a satellite service provider. In other words, it is possible to obtain AIS data of internationally sailing vessels from all over the world through these satellite providers, even without being the ship owner.

In addition, ships are affected by sea conditions such as currents, waves, and surface water temperature, and by weather conditions such as wind. These data are mainly utilized in forecast form for route planning. Ocean weather data are also gathered and processed for research purposes and released in the form of hindcast data. However, these data are subject to varying coverage, different update periods, and varying degrees of accuracy (size of the ocean zone/grid), depending on the providing organization and its main purpose of data usage.

The previously mentioned AIS data and ocean weather data can be linked by time and latitude/longitude. Therefore, we introduce the establishment of a big data analysis environment that can utilize and process AIS data and ocean weather data, both of which are public big data, as correlated data, and we explain the corresponding applications.

Rakke (2016) suggested a method to estimate the fuel oil consumption (FOC) based on collected AIS data with the Holtrop-Mennen method, Holtrop and Mennen (1982). There have been many approaches using the Holtrop-Mennen and the Holtrop (1984) methods to estimate power, FOC, and related environmental aspects of ships for the calm-water condition, Prpić-Oršić and Faltinsen (2012), Gaspar (2017), Lakshmynarayanana and Hudson (2017), Ebrahimi et al. (2018), Fonseca et al. (2018). As a big data analysis approach utilizing a combination of AIS data and ocean weather data, Kim (2018) presented an estimation method for the EEOI (Energy Efficiency Operational Indicator) based on AIS data, with the calm-water resistance calculated using Holtrop-Mennen and the added resistance calculated using ISO 15016 (2015). On the basis of Kim's approach, we have developed an FOC estimation method that can be applied in the big data analysis environment, which can analyze a year of operation data for thousands of ships. We also consider the sea surface water temperature for the added resistance, which was not considered in Kim's approach. In addition, the block coefficient has been approximated from L, B, D and Gross Tonnage (GT) for each ship type. Furthermore, we utilize ocean weather data from multiple sources such as NOAA (National Oceanic and Atmospheric Administration), ECMWF (European Centre for Medium-Range Weather Forecasts), and HYCOM (HYbrid Coordinate Ocean Model).

The purpose of this study is to introduce a big data processing and analysis methodology by which any ship-related organization can acquire information related to ship operation, and to present the applications developed by KR as examples of it. In this regard, this study differs from existing related research. Fig.1 shows the overall outline of the big data analysis in this study.

Fig.1: Overview of big data analysis based on the public history voyage of ships

2. Establishment of Big Data Analysis Environment

2.1. Introduction of Big Data Analysis Technology

Big data analysis is suitable when data in the form of tables is accumulated every day and has to be correlated with other types of data of different periods according to various connection relationships. Hadoop, https://hadoop.apache.org/, an open-source big data framework based on recent developments in distributed processing technology, makes it possible to configure multiple servers as one cluster. In addition, development in the Hadoop environment has become easier with scripting languages (natural-language-like interpreted languages) such as Python, https://www.python.org/, or Scala, https://www.scala-lang.org/. The packages of these scripting languages developed by many users can also be reused, making them well suited to big data processing. For example, the Pandas package, https://pandas.pydata.org/, makes it easy to read various types of data (CSV, JSON, Excel sheets, text, etc.) and organize them in the so-called 'Dataframe' form of big data. In a 'Dataframe', data can be easily extracted, processed, merged, modified, and output in the same variety of file formats as supported for input. This data processing can be sped up on Hadoop's distributed file system, HDFS (Hadoop Distributed File System). Furthermore, by performing the above data processing and ETL (Extract, Transform, and Load) operations in the Spark environment, which supports in-memory operation, https://spark.apache.org/, the speed of big data analysis is accelerated, which has helped popularize big data analysis to its present level.
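To make the workflow concrete, the following is a minimal sketch of such 'Dataframe'-based pre-processing with Pandas. The file name and column names are assumptions for illustration; the actual AIS schema depends on the data provider.

# Minimal sketch (hypothetical file and column names): reading raw AIS
# records into a Pandas 'Dataframe' and writing them out for the cluster.
import pandas as pd

# Read one day of raw AIS messages; Pandas also accepts JSON, Excel, text, etc.
ais = pd.read_csv("ais_2017-01-01.csv", parse_dates=["timestamp"])

# Basic ETL: keep only the columns needed downstream and drop unusable rows.
ais = ais[["imo", "mmsi", "timestamp", "lat", "lon", "sog", "draft"]]
ais = ais.dropna(subset=["imo", "timestamp", "lat", "lon"])

# Persist in a columnar format that Spark/HDFS can ingest efficiently
# (writing Parquet requires an engine such as pyarrow).
ais.to_parquet("ais_2017-01-01.parquet")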

2.2. Linkage between AIS and Ocean Weather Data for Big Data Analysis

We have conducted research on obtaining and pre-processing the dynamic big data to make it ready for analysis: both AIS data, including position, speed, course, destination, ETA (Estimated Time of Arrival) and draft of a ship, and ocean weather data, including wind speed and direction, significant wave height, wave direction and period, current speed and direction, surface water temperature, and water depth, for each latitude and longitude.

Since AIS data arrives aperiodically, depending on the operating condition of the ship, it is organized into one-hour-period data through interpolation and filtering (see the sketch below). Voyage-related fields such as draft, ETA, and destination, which do not change during a single voyage, are entered manually on board; therefore, the same value was used throughout a voyage when data were missing or inconsistent. We used the IMO number and the MMSI (Maritime Mobile Service Identity) together as the unique identification of a ship. However, since the MMSI may change due to the sale of the ship or the replacement of the communication equipment, the IMO number is prioritized.

The other dynamic data is the ocean weather data. Ocean weather data is stored and shared in various formats, but recently it is commonly organized in the netCDF (NC; Network Common Data Form) file format, https://www.unidata.ucar.edu/software/netcdf/. In order to utilize the data in an NC file, a dedicated Python package must be used. Even within the NC file format, the components and the ways of accessing them differ, and the types of the variables and the time basis differ according to the organization defining and providing the file. Therefore, the data structure of an NC file must be understood from its user manual or by programmatically querying each file. After that, in order to access the desired data, it is necessary to convert latitude and longitude into the array indices of the NC file and find the cell value for the corresponding time, as sketched below. NC files can be understood as three-dimensional arrays in which a two-dimensional global array over latitude and longitude is stored per time step. In this study, we construct a 'Dataframe' of hourly data with a zone resolution of 1 degree by 1 degree, into which the above-mentioned ocean weather data is extracted. Where data is masked at a specific time, data from other sources is used. In this study, NOAA data is utilized as the basis, from which water depth (m), wind speed (knots), wind direction (degrees), significant wave height (m), wave direction (degrees), wave period (s), wave length (m), current velocity (knots) and current direction (degrees) are extracted. If the wind velocity or the significant wave height is masked in NOAA at a certain time, valid data is obtained from the adjacent 1-degree zones; this case is frequently observed in sea areas near the shore. If the adjacent zone data is also masked, the ECMWF data is used for the Mediterranean region. The HYCOM data is used when the NOAA current velocity is masked or zero. The sea surface water temperature is obtained from ECMWF's temperature data (degrees Celsius); if this value is zero, HYCOM data is used. The selection among these data sources is based on comparison with actual operation data measured on board, choosing the source showing the most similar tendency, and it can be replaced with more accurate source data in the future.

In addition, we have gathered and stored in a database the static data of each ship: principal dimensions, engine specification, ship type, build date, ship owner, shipbuilder, classification society, and the SFOC (Specific Fuel Oil Consumption) of the installed engine. All of these big data were collected, processed, stored in the big data cluster, and combined according to interrelationships such as time, latitude, longitude, and IMO number. As a result, table-shaped data including the voyage route and the corresponding ocean weather for each ship is generated, as shown in Fig.2.
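The combination step could be sketched as below, continuing the earlier sketches; the data-set names, the rounding to 1-degree zones, and the join keys are illustrative assumptions.

# Minimal sketch: joining hourly AIS records with the 1-degree hourly ocean
# weather grid on time and zone, and with static ship data on IMO number.
import pandas as pd

weather = pd.read_parquet("weather_2017.parquet")          # hourly, 1-degree grid
particulars = pd.read_parquet("ship_particulars.parquet")  # one row per IMO number

hourly = hourly.reset_index()
hourly["lat_zone"] = hourly["lat"].round().astype(int)
hourly["lon_zone"] = hourly["lon"].round().astype(int)

voyage = (hourly
          .merge(weather, on=["timestamp", "lat_zone", "lon_zone"], how="left")
          .merge(particulars, on="imo", how="left"))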

Fig.2: Big Data Pre-Processing for Analysis

3. Development of Big Data Analysis Applications

We have developed various applications so that the processed data can be visualized on the web at a glance: the past voyage route of a ship, the ocean weather situation at the time, and the changes in speed. Furthermore, we implemented applications that can estimate the average speed, the number of days of operation in rough seas, the DFOC (Daily Fuel Oil Consumption) and the brake power for each vessel, as shown in Fig.3.

Fig.3: Overview of Big Data Analysis Applications

3.1. Past Voyage Review

In order to review the processed big data effectively from the perspective of past voyages, we developed several visualization approaches, as shown in Fig.4. The first is a web visualization method using daily location data to grasp the whole route over a long period of more than one year. The main specifications of the ship are provided as a pop-up marker on the web, and a pop-up marker with voyage information is generated at the ship's position on each day on the global map. In other words, when we select a position of interest, the information of the vessel (position, speed, draft, destination, ETA) and the surrounding environment information (wind and wave, water temperature) at that time pops up. Another visualization method uses cluster markers to grasp at a glance the frequency of calls at major ports and the speed variation during a long voyage, rather than the daily positions. It expresses as a number the density per unit area of markers, each representing one hour, through zoom in/out operations. To visualize environmental information at a glance, we also developed a method to display the hourly positions together with the ocean weather environment at the time, as shown in Fig.4-(C). In this case, the significant wave height at the date and time is represented by a heat map, the wind intensity by the length of an arrow, and the wind direction by the direction of the arrow. All three methods are suitable for publishing and service purposes because they generate an HTML file operable on the web from a pre-processed big data file as input. For multi-level detailed analysis of various routes, we have developed an in-house program in which, after reading the pre-processed big data file and classifying the data by voyage, the path and the weather environment of the selected voyages are represented directly by a heat map and arrows on the GUI.
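The HTML generation above is done with an in-house toolchain; purely as an illustration, the following sketch shows how such a web map with pop-up markers could be produced in Python using the folium package (an assumed choice, not the tool named by the authors).

# Illustrative sketch (folium is an assumed choice): generating an HTML map
# with one pop-up marker per daily position from the pre-processed data.
import folium
import pandas as pd

daily = pd.read_parquet("daily_positions.parquet")  # hypothetical input file

m = folium.Map(location=[30.0, 0.0], zoom_start=3)
for _, row in daily.iterrows():
    popup = f"speed {row['sog']} kn, draft {row['draft']} m, dest {row['destination']}"
    folium.Marker([row["lat"], row["lon"]], popup=popup).add_to(m)

m.save("voyage_route.html")  # an HTML file operable on the web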

Fig.4: Effective Voyage Review with Web Visualization

Given the ship's digital twin model, a function to preview the AIS path and the weather and sea conditions at that time in a VR (Virtual Reality) environment is shown in Fig.5. This function is an exploratory development that examines whether actual operation data and metocean data can be utilized as environmental scenarios for a ship's VR simulator. We consider that the pre-processed 'Dataframe'-type big data can be utilized readily in the VR environment.

Fig.5: VR Visualization using MetOcean Data

With this public big data, we can not only show the route on the map but also examine the speed variation during a voyage together with the metocean data. In the example of Fig.6, a speed drop was observed, but it is difficult to understand what happened from this graph alone. However, with the past route visualized on the map, we can see that the ship was waiting to enter the port.

Fig.6: AIS Data-based Past Route Tracking and Analysis for Single Vessel

In this way, we could observe "rush to wait" behavior when examining a single vessel that operated the same route at different times during 2017. Fig.7 shows a 10,000 TEU class vessel with a design speed of 25.6 knots sailing from Portugal to the eastern USA, reaching its destination in about 7.5 days. The upper left shows the entry to the port of Miami in the south-eastern United States, and the remaining three show entries to the port of New York in the US Northeast. The voyages on the left were operated at relatively high speed, and the vessel then waited about 1.5 days before entering port. For these voyages, high DFOCs of 99.5 and 81.2 t were estimated for the sailing time excluding the waiting time. The voyages on the right were sailed at around 17.5 knots, and the vessel entered port immediately after 7.5 days. For these voyages, the ocean weather conditions were worse than those of the lower-left voyage in April, yet the vessel was estimated to have operated with better fuel mileage.

Fig.7: Comparison Analysis for Same Voyages of the Container Carrier

3.2. FOC Estimation

In addition to this simple statistical processing of the data, we have combined shipbuilding and marine engineering knowledge with big data analysis. As a result, we obtained an algorithm for estimating the required power and fuel consumption of a ship, considering added resistance, using only public data. Based on the speed through water, which accounts for the current, we calculate the calm-water resistance with the Holtrop-Mennen method, and using ISO 15016, the added resistance due to waves and wind is estimated for every hour, as shown in Fig.8. From the estimated total resistance, we calculate the brake power considering hull and propulsion efficiencies. As we have already established the ships' engine specifications, including the SFOC, the hourly FOC can then be computed.
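The chain from resistance to hourly FOC can be sketched as follows. This is illustrative only: the efficiency values are assumptions, and the Holtrop-Mennen and ISO 15016 computations behind the two resistance inputs are abbreviated rather than reproduced.

# Minimal sketch of the estimation chain; efficiencies are illustrative
# placeholders, not the values used in the actual implementation.
def hourly_foc(v_tw, r_calm, r_added, sfoc, eta_hull=1.05, eta_prop=0.65):
    """Estimate one hour of fuel consumption in tons.

    v_tw    : speed through water [m/s]
    r_calm  : calm-water resistance per Holtrop-Mennen [N]
    r_added : added resistance due to wind/waves per ISO 15016 [N]
    sfoc    : specific fuel oil consumption [g/kWh]
    """
    r_total = r_calm + r_added                     # total resistance [N]
    p_effective = r_total * v_tw / 1000.0          # effective (towing) power [kW]
    p_brake = p_effective / (eta_hull * eta_prop)  # brake power [kW]
    return p_brake * sfoc / 1.0e6                  # fuel for one hour [t]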

3.3. Big Data Analysis Applications for KR registered Vessels

To verify the usability of this big data analysis environment, we acquired, processed and analysed 2017 data for about 4000 ships of various types, including container ships, LNG carriers, bulk carriers, and tankers, among them 1400 KR-registered vessels. The ocean weather data for the same period was processed and analysed together in the big data analysis cluster. As shown in Fig.9, the sea conditions actually encountered by each vessel type were plotted as wind speed (Beaufort scale, BF) and WMO sea state. Looking at the wind speed, most operations took place in BF 3-4 for all vessel types; in general, bulk carriers operated in a slightly windier environment than container ships. Looking at the wave height, most operations took place in WMO sea state 3, i.e. 0.5-1.25 m, for all vessel types, and we observed that bulk carriers and LNG carriers operated more often in rougher sea conditions.

Fig.8: FOC Estimation Using AIS Data and MetOcean Data

Fig.9: Beaufort Scale of Voyages for BC/Container/LNG/Tanker in 2017

We have calculated the DFOC for each ship type; Fig.10 shows its distribution among the ship types. The estimated DFOC may be smaller than actual measurements because it only considers the brake power of the main engine; in actual operation, the auxiliary engines add around 10% of consumption. However, when we consider the "ton/nm" index on the right of Fig.10, i.e. how much fuel was consumed per nautical mile (nm) sailed, as a measure of operational efficiency, we can tell whether a certain ship sailed efficiently or not compared to other ships of similar size.

For a single vessel, we can derive the speed-power curve for the actual operation of the ship based on the FOC estimation algorithm and the Beaufort scale. Fig.11 shows the result of a comparative analysis of two container ships with the same design speed and cargo capacity but different shipyards and ship owners. The reason for comparing these two vessels was the large difference in their average DFOC in 2017: 50.6 t/day for one versus 81.5 t/day for the other. There was a difference in performance between the two ships in head weather conditions, but the main reason was high-speed operation: for the vessel on the right-hand side of Fig.11, the frequency of high-speed operation was higher than for the vessel on the left, and correspondingly more fuel was consumed.

Fig.10: Average FOC of Voyages for BC/Container/LNG/Tanker in 2017

Fig.11: Sea Margin Analysis Based on the FOC Estimation Algorithm (13,000 TEU Comparison)

4. Conclusion and Future Works

In order to overcome the limitations of obtaining big data regarding ships, we have established an environment that can analyze the operational performance of actual voyages using only public big data. Using the established big data analysis cluster, we processed and combined the acquired big data (AIS, ocean weather, ship specifications) as 'Dataframes' for easier and quicker utilization by the applications. We have developed past voyage analysis applications such as route visualization, average speed computation, travel distance statistics, Beaufort (wind, wave) classifications, and brake power and FOC estimation. In conclusion, through establishing this big data analysis environment, we have found that big data and its analysis technology are effective tools for examining data related to the ship itself, to the classification society, and to other ship-related parties.

In the future, we plan to enhance the applications by connecting and verifying them with actual operational measurement data, and to support customized vessel-specific or ship-type-specific analyses according to customers' requests. The application of deep-learning technology to this big data analysis environment is ongoing research.

Acknowledgements

This research was performed as a part of the research project below and supported by the organization indicated. We acknowledge and appreciate the support provided: 'In shipbuilding design, 3D development and commercialization of safety education and training VR contents for sailors using virtual reality technology', a project funded by the Ministry of Science and ICT of Korea (No. S0602-17-1016).

References

CEPEDA, M.A.F.; MONTEIRO, G.; MOITA, J.V.M.O.; CAPRACE, J.D. (2018), Estimating ship emissions based on AIS big data for the port of Rio de Janeiro, 17th COMPIT Conf.

EBRAHIMI, A., BRETT, P.O.; GARCIA, J.J. (2018), Fast-track vessel concept design analysis, 17th
COMPIT Conf.

FONSECA, I.A.; GASPAR, H.M.; RYAN, C.F.; THOMAS, G.A. (2018), An open and collaborative
object-oriented taxonomy for simulation of marine operations, 17th COMPIT Conf.

GASPAR, H.M. (2017), JavaScript applied to maritime design and engineering, 16th COMPIT Conf.

HOLTROP, J. (1984), A Statistical Reanalysis of Resistance and Propulsion Data, Int. Shipbuilding
Progress 31, pp.3-5

HOLTROP, J.; MENNEN, G.G.J. (1982), An approximate power prediction method, Int.
Shipbuilding Progress, pp.166-170

IMO (2002), SOLAS Chapter V - Safety of Navigation

IMO (2003), Guidelines for the installation of a Shipborne Automatic Identification System (AIS)

ISO 15016 (2015), Guidelines for the assessment of speed and power performance by analysis of speed trial data

KIM, S.H. (2018), A Study on the Method for the Estimation of Energy Efficiency Operational
Indicator of a Ship Based on Technologies of Big Data and Deep Learning, Master Thesis, Seoul Nat.
Univ.

LAKSHMYNARAYANANA, P.A.; HUDSON, D.A. (2017), Estimating Added Power in Waves for
Ships Through Analysis of Operational Data, 2nd Hull Performance & Insight Conference

PRPIĆ-ORŠIĆ, J.; FALTINSEN, O.M. (2012), Estimation of ship speed loss and associated CO2
emissions in a seaway, Ocean Engineering 44(1), pp.1-10

RAKKE, S.G. (2016), Ship emissions calculations from AIS, Master Thesis, NTNU

A.I. Technologies Applied to Naval CAD/CAM/CAE
Jesus A. Muñoz Herrero, SENER, Madrid/Spain, jesus.munoz@sener.es
Rodrigo Perez Fernandez, SENER, Madrid/Spain, rodrigo.fernandez@sener.es

Abstract

Artificial Intelligence is one of the key enabling technologies of digital transformation in industry, and it is also one of the technologies spreading most rapidly into our daily activities. Elements and devices that integrate artificial intelligence features appear increasingly in our everyday lives. These features differ depending on the devices that integrate them and the aims they pursue. The methods and processes of Marine Engineering cannot be left out of this technology, but the peculiarities of the profession and of the people that take part in it must be taken into account. There are many aspects of our profession to which artificial intelligence can be applied. The management of, and access to, all the information necessary for the correct and efficient execution of a naval project is one aspect where this technology can have a very positive impact. Accessing all the rules, design guides, good practices, lessons learned, etc. in a fast and intelligent way, understanding people's natural language, identifying what is most appropriate to the process being carried out and, above all, learning as the design progresses are the characteristics that will drive the adoption of this technology in the professional field. This article describes the evolution of this technology and its current situation in the different areas of application. Some proposals for the future are also highlighted to support the integration of the technology with design systems and work methodologies. This integration will be based on the needs of the shipyard's users, considering the constraints of the business and framed in the current reality: the digital transformation of the industry, the spread of new technologies in today's society, and the incorporation of the Millennial generations into the labor market. This technical paper is ground-breaking for the shipbuilding industry and presents an innovative way to integrate new disruptive technologies into marine and ocean engineering projects. It is an integrated proposal from companies that encourages collaboration between the different industrial stakeholders.

1. Introduction

There are many aspects in which Artificial Intelligence (A.I.) can be applied in ship design and ship
production. The management and access to all the information necessary for the correct and efficient
execution of a ship project is one of the aspects where A.I. can have a strong impact:

• accessing all the rules, design guides, best practices, lessons learned, etc., in a fast and intelligent way,
• understanding the natural language of the people,
• identifying the most appropriate approach to the process being carried out, and above all,
• learning as we go through the design.

These benefits will increase the application of this technology in our field. Gartner's (2017) report on the 10 most strategic technology trends concluded that those in the field of Artificial Intelligence occupied the first three positions, https://www.gartner.com/smarterwithgartner/gartners-top-10-technology-trends-2017:

1. A.I. applied
2. Smart Apps
3. Smart things

A brief analysis of their position in the expected maturity cycle suggested that they would reach maturity within five to ten years, at which point they could be exploited effectively.

Fig.1: Gartner Hype Cycle, Paneta (2017)

The 2018 edition seems to conclude that, while the technologies related to virtual assistants have matured faster than expected, other technologies related to A.I. have disappeared from the chart, particularly those that explicitly mentioned cognitive capabilities. This does not mean that these elements have disappeared; rather, they have found application within others that continue to evolve in maturity thanks to this application.

Fig.2: Gartner Hype Cycle 2018 Paneta (2018)

This can be better understood by noting that, once again, the technologies related to A.I. occupy the first positions of the strategic technology trends for the year 2018.

Leaving aside the criticisms that can be made of the Gartner Hype Cycle, as explained in Mullany (2016), and the suspicions that reading the referenced article may raise, it seems sensible to think that the technologies related to A.I. are going to have a strong impact on our lives, and this is something that can already be seen.

There is no doubt that technological evolution is approaching a future that a few years ago existed only in the imagination and in the movies. The concept of A.I. opens a wide field of study about which one could write for days and talk for hours. This paper analyzes what these technologies are, how they are influencing society, how they will influence it in the future, and what we can do now to extract value from them, seeking their application to the shipbuilding sector and more specifically to maritime design.

2. Background

It can be said that there is no single origin of the concept known as A.I., and therefore there is no consensus on its definition; but to understand any of the definitions that apply, it is convenient to know at least briefly the most relevant facts and some of the milestones in its history.

It can be considered that A.I. was born as a philosophical study of human intelligence, based on man's desire to imitate the behavior of other beings with capabilities beyond human reach (such as flying or diving), reaching the point of trying to imitate himself. In this sense, A.I. can be described as the quest to imitate human intelligence. Clearly this has not yet been fully achieved, but it is equally true that we are getting closer.

2.1 A short history

• The first man who became aware of his own existence and was able to think surely wondered how his thought worked and would arrive at the idea of a superior creator, an intelligent being capable of creating another one. In this sense, the idea of a virtual design for intelligence is as old as human thought.
• In 1920, the Czech writer Karel Čapek, Capek (2017), published a science fiction stage play called "Rossum's Universal Robots". The play is about a company that builds artificial organic humans in order to lighten the workload of other people. Although in the play these artificial men are called robots, they have more to do with the modern concept of the android or clone: creatures that can pass as humans and have the gift of being able to think.
• The English mathematician Turing (1950) published an article entitled "Computing Machinery and Intelligence" that opened the doors to A.I. The article began with the simple question: "Can machines think?" Turing then proposed a method to evaluate whether machines can think, which became known as the Turing test. The test, or "imitation game" as it was called in the document, was presented as a simple test to judge whether machines could think.
• In 1956, the Dartmouth conference convened by McCarthy coined the term "Artificial Intelligence". The conference was attended by researchers from Carnegie Mellon University and IBM, including Minsky, Newell and Simon. At this conference, extremely optimistic forecasts for the next ten years were made that were never fulfilled, which caused the almost total abandonment of research for fifteen years, known today as the A.I. winter, Aggarwal (2018).
• Later, in the second half of the 1970s, A.I. resurfaced with the appearance of expert systems. Expert systems are programs that answer questions and solve problems in a specific domain. They emulate an expert in a specific branch and solve problems by rules. There are two types of engines in expert systems: first, the knowledge engine, which represents facts and rules about a specific topic; second, the inference engine, which applies the rules and facts of the knowledge engine to new facts. To picture this: in 1981, an expert system called SID® (Synthesis of Integral Design) designed 93% of the logic gates of the VAX® 9000 CPU. The SID® system was built around 1000 hand-coded rules. The final design of the CPU took around 3 hours of calculation and surpassed human experts in many ways. As an example, SID® produced a 64-bit adder faster than the one designed manually. The error rate of the human experts was 1 error per 200 gates, while that of SID® was around 1 error per 20000. However, these expert systems required large computing capacity, and the rise of personal desktop computers made investors lose interest in them, causing the fall of the companies dedicated to building hardware and software for these systems and giving rise to what is known as the second A.I. winter.
• On May 11, 1997, the IBM computer Deep Blue defeated Garry Kasparov by 3.5 to 2.5, with two wins for Deep Blue, one for Kasparov and three draws.
• In 2011, IBM's Watson® system beat two of the most successful human contestants on the American television quiz show Jeopardy!, a game which requires participants to phrase a question in response to general-knowledge clues. In the event, Watson® marked a breakthrough in A.I. with its understanding of natural language and its ability to make sense of a large amount of written human knowledge.
• In June 2018, the IBM Watson® system participated in a debate to demonstrate the progress of Project Debater, developed by IBM since 2012. In one of these debates, the IBM computer debated with Noa Ovadia, a former Israeli national debate champion, on the statement: "Should we subsidize space exploration?" If you are interested in finding out who won, check out some of the many references on the internet, such as "What it is like to see an IBM A.I. successfully debate with humans".
• Since the beginning of the 21st century, the advance of A.I. has been unstoppable, driven by hardware improvements that make it possible to handle huge amounts of data in ever shorter times, as well as by the efficient use of neural networks and the full connectivity of devices through high-speed internet. What previously required a lot of time is now almost immediate.

Fig.3: Stages of computing. Source: IBM

Fig.3 shows the history of computing from the perspective of IBM, which is undoubtedly the leading
company in A.I. development, with its set of solutions called Watson®. According to Juan Ramón
Gutiérrez, responsible for industrial solutions at IBM, the evolution of A.I. can be divided into three
stages and we are now entering the cognitive era.

2.2 Present

In recent years, the most important companies have begun to position themselves in the use of A.I., and they do so by acquiring companies, startups and technologies with the relevant knowledge, and developing them in search of applications that can add value to their investments.

It is interesting to review the acquisitions of companies, many of them startups, made to gain a position in this field. Fig.4 shows the purchases of companies by large corporations that opted for A.I. up to 2017.

Fig.4: Acquisitions of A.I. startups. Source: CBINSIGHTS

The development is so clear that we can speak of a real race to appropriate the startups with the most potential within A.I. technologies. As stated in the CBinsights report: "Large corporations in all industries, from retail to agriculture, are trying to integrate machine learning into their products. But at the same time, there is an acute shortage of A.I. talent", CBINSIGHTS (2018).

This positioning in the technology has made itself known through virtual assistants. Large technology companies such as Google, Apple, IBM, Facebook, Amazon and Microsoft market their assistants as a banner to attract the attention of companies, and thus also of the markets: Apple with Siri®, Google with Google Assistant®, Microsoft with Cortana®, Amazon with Alexa®, and IBM with its Watson® project, which goes far beyond what we would expect from virtual assistants.

Many well-known personalities are betting on the future of A.I. and trying to ensure some standardization of the technologies. Elon Musk promoted a project called 'OpenAI', which seeks to unify A.I. developments in a single project that, being free and open, can overcome the restrictions of commercial products. Under this project we can find interesting research, but where it has found a particularly fertile field to grow is in the Multiplayer Online Battle Arena (MOBA), online games among multiple players that are changing the rules of leisure and even of sports, as stated in Martín (2018). However, it is still far from being able to make decisions within the minimal reaction times that many of these games require in certain circumstances. It should be mentioned that Elon Musk decided to leave this initiative to avoid "future conflicts of interest", http://fortune.com/2018/02/21/elon-musk-leaving-board-openai.

3. Basic Concepts

While A.I. and machine learning are by now widespread, people know very little about them. We hear about many concepts and tend to simplify, understanding A.I. as an agglomeration of concepts and technologies that can mean different things to different people: virtual assistants, robots that pretend to do what people do, machine learning, automata that drive cars, etc.; and its applications are everywhere we look. It is therefore necessary to identify the basic elements that make up A.I.

3.1 What is A.I.?

What definition can be adopted for A.I.? As pointed out by Vicenç Torra in one of his articles, the first definition of artificial intelligence was given in the document prepared by J. McCarthy, M. Minsky, N. Rochester and C.E. Shannon in preparation for the meeting held in Dartmouth (USA) during the summer of 1956, in which the term 'Artificial Intelligence' appears for the first time. According to the author of the article, it seems that this name was given at the behest of J. McCarthy. The proposal cited above for the meeting organized by J. McCarthy and his colleagues includes what can be considered the first definition of artificial intelligence: the document defines the problem of artificial intelligence as that of building a machine that behaves in such a way that, if the same behavior were carried out by a human being, it would be called intelligent, Torra (2011).

“There are, however, other definitions that are not based on human behavior. They are the following
four:
1. Act like people. This is McCarthy's definition, where the model to follow for the evaluation of
programs corresponds to human behavior. The so-called Turing Test (1950) also uses this point
of view. The Eliza system, a natural-language bot (software program), is an example.
2. Reason like people. The important thing is how the reasoning is carried out and not the result
of this reasoning. The proposal here is to develop systems that reason in the same way as people.
Cognitive science uses this point of view.
3. Reason rationally. In this case, the definition also focuses on reasoning, but here we start from
the premise that there is a rational way of reasoning. Logic allows the formalization of
reasoning and is used for this purpose.
4. Act rationally. Again the objective is the results, but now evaluated objectively. For example,
the goal of a program in a game like chess will be to win. To achieve this goal, the way to
calculate the result is indifferent.”

One might ask where the assistant systems available on our mobile phones fit, or whether one can apply the famous (although somewhat obsolete) Turing test to determine if, when talking to Siri® on a mobile phone and getting a reasonable response, we are in a position to affirm that we are facing an A.I. According to Kirkpatrick (2018), A.I. is like an umbrella covering multiple technologies designed to supply computers with human-like capabilities in listening, vision, reasoning and learning. These techniques, which include Machine Learning (ML), Deep Learning (DL), Computer Vision (CV) and Natural Language Processing (NLP), unmask hidden patterns in large data sets and later, using complex algorithms, can relate the findings between apparently unrelated variables. All of this is visualized in Fig.5.

Fig.5: A.I. Techniques, Kirkpatrick (2018)

Leaving aside ‘Big Data’ and analytical techniques and focusing exclusively on A.I., three fundamental elements can be distinguished:

• A.I. covers everything that allows computers to behave like humans. The techniques included are, among others: machine learning, natural language comprehension (NLP), speech synthesis, computer image recognition, robotics, signal and results analysis, optimization and simulation, etc.
• Machine Learning (M.L.) is the subset of A.I. which deals with the extraction of patterns from data sets. In this subset we find: deep learning, support vector machines, decision trees, Bayesian learning, k-means clustering, learning of association rules, regression algorithms, etc.
• Deep Learning is a specific class of M.L. algorithms that use complex neural networks. In a sense, it is a group of related techniques, comparable to a group of ‘decision trees’ or ‘support vector machines’. They are the engine of the applications that use them, and thanks to advances in parallel computing they have become accessible for common use. Its components include: artificial neural networks, convolutional neural networks, recurrent neural networks, long short-term memory (LSTM) networks, deep belief networks and many more (a minimal sketch contrasting a classical M.L. model with a small neural network is given below).
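
To make the distinction concrete, the following minimal sketch (assuming Python with numpy and scikit-learn available; the data and the model sizes are purely illustrative) trains a classical M.L. model and a small neural network on the same synthetic data set:

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                     # four illustrative features
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(int)  # hidden pattern to learn

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Classical M.L.: a decision tree extracts explicit rules from the data set
tree = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
# Deep learning (in miniature): a neural network with two hidden layers
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("decision tree accuracy:", tree.score(X_te, y_te))
print("neural network accuracy:", net.score(X_te, y_te))

Both models learn the same hidden pattern; the difference lies in the representation (explicit rules versus layered weights), which is what separates classical M.L. from deep learning.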

3.2 Why is A.I. important?

There are many reasons to underpin the importance of A.I., but focusing on the most relevant one, we want to share the view of DataRobot (2018), for which “A.I. systems are fundamental for companies that seek to extract value from data by automating and optimizing processes or producing actionable knowledge. A.I. systems powered by machine learning enable companies to leverage their vast amounts of available data to discover insights and patterns that would be impossible for one person to fathom, allowing them to deliver more targeted and personalized communications, predict critical care events, identify likely fraudulent transactions, and more.”

Companies that do not adopt A.I. and machine learning technologies are doomed to be left behind:

• World expenditure on A.I. will grow 50% annually and reach 50 billion euros by 2021.
• Industries such as retail, marketing, health, finance and insurance will not only benefit from A.I. and machine learning; those that do not adopt them will disappear.
• From 2020 onward, companies that bet on data management within A.I. will take 1.2 trillion dollars a year from those that do not, because the latter lack the same vision.
• 83% of early adopters are already gaining value from A.I. and machine learning initiatives.
• The net gain in jobs resulting from the adoption of A.I. will be above 5 million.

These are overwhelming figures. And as McAfee and Brynjolfsson (2017) point out, “The effects of A.I. will be magnified in the coming decade, as the manufacturing industry, retail, transportation, finance, health, advertising, insurance, entertainment, education and virtually every other industry transform their core processes and business models to take advantage of machine learning.”

4. Applications of Artificial Intelligence

After everything written in this article, anyone can understand how A.I. relies on three fundamental pillars: data, algorithms and computing power. This explains why the greatest development of applications is in those sectors where there is a lot of data that can be analyzed, and from which conclusions can be drawn that serve, in time, a specific purpose. In short, the application of A.I. to the analysis of data must provide significant value to the sector, company or individual that uses its results.

There are countless sectors in which we can find applications of A.I. as it is currently built. Companies around the world are trying to take advantage of A.I. to optimize their processes and obtain higher revenues and profits. How are they doing it and in what sectors? It is known that some of the applications we use every day rely on A.I. for their operation, such as Netflix®, Spotify® and Siri®, among others. To illustrate the breadth of the applications, some use cases are collected below:

• Chat-bots: “A computer program designed to simulate conversations with human users, especially through the Internet.” They are applications that interact with A.I. programs and provide a human-like conversation, answering frequently asked questions from users. Chat-bots save time and effort by automating the first line of customer service; Gartner predicts that by 2020 more than 85% of customer interactions will be handled without a human. However, the opportunities provided by chat-bot systems go beyond answering customer inquiries. The Chinese ‘WeChat’ bots can schedule appointments, call a taxi, send money to friends, check in for a flight and much more. They are also used for other business tasks, such as gathering information about users, helping to organize meetings and reducing overhead costs. It is no wonder that the chat-bot market is growing exponentially. Chat-bots matter because they are the interface between machine and human, i.e. a tool through which A.I. materializes (a minimal sketch of the retrieval logic behind such a bot is given after this list).
• Electronic commerce: e-commerce programs that include A.I. label, organize and search content visually, allowing buyers to discover associated products, whether by size, color, shape or even brand. This technology allows companies of any size to reach an extraordinarily broad market.
• Human resource management: A.I. and machine learning are used in companies that have advanced in the management of human resources through specific software. The reasons why it has spread so much in this area are twofold: first, the amount of data handled in human resources, and second, the need to increase efficiency in an essential area of the company. A.I. takes over the most laborious work of Human Resources (HR) (screening, paperwork, data entry, reports, etc.), in addition to offering powerful analysis tools that automatically generate high-quality data for HR departments.
• Medicine: A.I. programs can take advantage of data collected from patients to provide support for clinical decisions during critical medical situations, as well as document those events electronically in real time. A.I. improves the reliability, predictability and consistency of the data and results of clinical trials. It also constitutes a tool for augmenting clinical decisions.
• Communication and collaboration: A.I. can be integrated into communication and collaboration to improve employee interaction with data, providing real-time translation, improving the management of calendars, activating electronic meetings, etc.
• Energy: interconnected power plants obtain data on operation, consumption and the climatic circumstances that influence energy needs or energy generation.

• Automotive: autonomous vehicles will make use of A.I. for their operation, and this is something that will be seen in the medium term. However, it is not necessary to wait for some capabilities, such as the assistants integrated in the vehicle that anticipate the needs of the driver and passengers, or the monitoring of mechanics and driving to increase safety.
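
To illustrate the first of these use cases, the sketch below (plain Python; the FAQ entries and the overlap threshold are invented for illustration, and a production bot would use NLP techniques rather than simple word overlap) shows the retrieval logic at the core of a first-line FAQ chat-bot:

# Minimal FAQ chat-bot sketch: keyword-overlap retrieval, no external service.
FAQ = {
    "what are your opening hours": "We are open 08:00-17:00, Monday to Friday.",
    "how do i track my order": "Use the tracking link in your confirmation e-mail.",
    "how do i return a product": "Request a return label from the customer portal.",
}

def answer(question: str) -> str:
    words = set(question.lower().split())
    # Score each known question by word overlap and pick the best match
    best, score = None, 0
    for known, reply in FAQ.items():
        overlap = len(words & set(known.split()))
        if overlap > score:
            best, score = reply, overlap
    # Escalate to the second line (a human) when no question matches well
    return best if score >= 2 else "Let me connect you to a human agent."

print(answer("When do you open?"))          # escalated to a human
print(answer("How can I track my order?"))  # matched FAQ answer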

These are just some of the fields of application, but we do not want to miss the opportunity to mention others in which the application of A.I. is generating good results or arousing enormous expectations: intelligent cybersecurity, logistics and supply, leisure, sports betting, etc. These are just some of the areas where A.I. is applied today, but very soon we will see it applied to things we would never have thought of.

4.1 A.I. applications in the industry

Looking at the applications of A.I. in industry, it is difficult to separate what is purely industrial from all the other areas that help the development of the industrial business. The digitalization of industrial processes, robotization, improvements in the collection of data and in its analysis that allow better decisions and reduced risks, etc.: all of this, without a doubt, constitutes indirect applications of A.I. in the development of industry. But what A.I. can actually be found in industry, and in particular in the naval industry?

In order to find more direct applications in industry, the focus must be on the phases of the life cycle of the manufactured products, looking for opportunities to apply A.I. in them. If we consider the phases of the life cycle of a product to be specification, design, operation and withdrawal, applications of A.I. can be found in all of them to a greater or lesser extent.

During the definition or specification phase, A.I. tools linked to Big Data and analytics allow us to better define what the client wants, help predict the behavior of the market and build a strong business case. Several of the applications explained above apply perfectly to this phase.

The design phase is, in our opinion, the one most lacking in A.I. applications with a direct impact on generating value. It must be borne in mind that the design phase is the one that most commits the resources necessary for the development and operation of the product. There are two points of view when considering the possibilities of applying A.I. in the design phase: A.I. applications that help to make a design, and A.I. applications that help to make a good design.

If A.I. helps to make a design well, it will save time and probably reduce the number of errors. This means adding value to the product by decreasing its cost, an improvement that can be passed on to the consumer, giving a more competitive product. Given that designs are currently made with computer tools, whether Computer Aided Design (CAD) or office automation and management tools such as Product Lifecycle Management (PLM) and Product Data Management (PDM), it is in these tools that possible applications of A.I. must be looked for, and honestly, at present it is not easy to find them. As Naoyuki et al. (2017) state, “The technology of A.I. applied to the design of products in Monozukuri (the Japanese way of manufacturing) has as its objective to provide computerized support for diverse tasks in the development of products that at the moment depend on human experience.” Given that this has its limitations, they propose a cloud platform that collects data and manages learning models extracted from those data, in order to take advantage of them in the design of new products. In their article they propose use cases and an implementation plan; as of today there are no results yet.

Mechanical and general-purpose CAD companies have tried to secure their position by announcing A.I. tools integrated into CAD. Solidworks® recently announced feature-recognition capabilities, although their practical application within the CAD is as yet unknown. More interesting is the proposal of EXALEAD OnePart, which offers a product that recognizes similarities between parts and informs the user in order to avoid duplication. This product is integrated with 3DEXPERIENCE®; however, although the gain from reducing and simplifying a model is evident, it is a tool that works during later stages. The powerful Autodesk has a research project known as Dreamcatcher that seeks to facilitate the design of hundreds of alternatives that meet the specifications of the designer. In their own words, “Dreamcatcher is a generative design system that allows designers to develop a definition of their design problem through objectives and limitations. This information is used to synthesize alternative design solutions that meet the objectives. Designers can explore the trade-offs between many alternative approaches and select design solutions for manufacturing.” This project is based on various ideas such as the DreamSketch interface, which “combines the expressive qualities of free-form sketching with the computational power of generative design algorithms”, Habib et al. (2017). Finally, we mention the interesting approach proposed by the Artificial Intelligence Laboratory for Design (LAI4D): “LAI4D is an R&D project whose objective is to develop an A.I. capable of understanding the user's ideas regarding spatial imagination.” On its web page there is a web application that allows one to experience the level they have reached.

Fig.6: LAI4D sketch analyzer

A.I. also has a field of application in helping to make a good design, and this requires being able to parameterize the designs, collect usage and performance data, and evaluate them. This, however, is the great difficulty facing the expansion of A.I. in this field, since not enough information from previous designs has been collected or parameterized.

In terms of development and production, there are innumerable possible applications. It is precisely in this area where the implementation of work methodologies that use A.I. to obtain improvements in the production process has advanced the most. Governments are funding the digital transformation of industry through similar programs known as Industry 4.0, and this has made many initiatives materialize. However, not all initiatives are purely A.I., nor do all those that are produce value for the product. There are repetitive tasks, or tasks requiring a certain programmable logic, that can be automated through the use of A.I. applications. Industrial production can improve through the collection of performance data, the analysis of results and the use of applications with A.I. algorithms that detect inefficiencies in the production chain. For example, evaluating defective products can reveal which specific machines produce them, or whether the production design is wrong (a minimal sketch of such an analysis is given below).
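
The following sketch (Python with pandas; the production records and the flagging threshold are invented for illustration) shows the kind of analysis meant here, attributing defect rates to individual machines:

import pandas as pd

# Invented production records: units produced and defects found per run
records = pd.DataFrame({
    "machine": ["M1", "M1", "M2", "M2", "M3", "M3", "M3", "M2"],
    "units":   [200,  180,  210,  190,  205,  195,  200,  200],
    "defects": [2,    3,    1,    2,    9,    11,   12,   1],
})

by_machine = records.groupby("machine")[["units", "defects"]].sum()
by_machine["defect_rate"] = by_machine["defects"] / by_machine["units"]

overall = records["defects"].sum() / records["units"].sum()
# Flag machines whose defect rate is well above the plant-wide rate
suspects = by_machine[by_machine["defect_rate"] > 2 * overall]
print(suspects)   # here machine M3 stands out

In a real plant the same grouping would run over thousands of records and more dimensions (shift, material batch, operator), but the principle is identical.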

The great leap of A.I. in industry has come hand in hand with what is known as the Internet of Things (IoT) and with the momentum of Industry 4.0 to improve the operation of products. All companies have launched into a race to equip their products with sensors and with connectivity to platforms that collect a multitude of data, in order to draw conclusions about their functioning, improve them and extend their operational life. Companies that provide products in this way sell what they call an industrial operating system. Thus, the giant Siemens offers MindSphere®, “the open IoT operating system based on the Siemens cloud that connects its products, plants, systems and machines, which allows it to take advantage of the large amount of data generated by IoT with advanced analytics”, Siemens (2018). Note that ‘open’ does not necessarily mean that any other product can be connected to that cloud unconditionally, and this is one of the aspects that significantly hinders the progress and spread of A.I.: the absence of open standards that anyone can use for their products and applications. Not only Siemens offers this approach; PTC and IBM, among others, have also developed their own connectivity and analytics platforms powered by A.I. applications.

We should not forget the entire waste treatment and recycling industry, which like any other industry can make use of A.I. applications to optimize its processes.

5. A.I. applications in the marine industry

The naval industry has always been very traditional and always seems to be at the tail end of implementing improvements that have matured previously in other industries. This fact has a reasonable explanation in the difficulty the naval industry has in converting investments into profits. The naval industry is no friend of risk, especially when the simple fact of building ships is a huge risk in itself. On the other hand, the naval industry draws, to a greater or lesser extent, on all the other industries, and it may be thought that what is good for the others must also be good for it.

Focusing on the shipbuilding industry, A.I. must address important limitations, among which are the lack of data and its confidentiality. The marine industry has focused on immediate results; it looks very quickly for a solution and does not store data and results in a systematic way that would allow them to be reused in similar scenarios. The development of powerful algorithms requires that they can be applied in similar conditions with some recurrence. The data must be correctly structured and reasonably clean so that they can be used with advantage, Muñoz et al. (2018). In the most successful cases there may be limited series of ships that share the same characteristics. It is difficult to find systematic use of the data, but there are still interesting contributions in the different phases of the life cycle of a naval project. For example, it is possible to find a recurrent pattern in the characteristics of the steel parts of a ship, so A.I. systems can assess whether they are correctly defined.

5.1 A.I. in marine design

The first stage of the life cycle of ships and naval artifacts is design. We can find interesting approaches to the use of A.I. in this phase, and it must be said that some of them are quite old and date back to the time of the first explosion of interest in A.I. In 1989, the Defense Advanced Research Projects Agency (DARPA) of the United States promoted a workshop held at Rutgers University, New Brunswick, NJ, to support research initiatives in the hydrodynamic design of ships. One of the objectives was to clarify the relationships between the hydrodynamic design problems of ships and the areas of A.I. research related to the design and analysis of complex systems. The results cannot be said to have been very promising, since they concluded with the need to acquire computational fluid dynamics (CFD) analysis tools and integrate them into the design processes, and with the need for effective control of design processes, focusing on concurrent design and including approaches to explore feasible design-space configurations and to systematically record and store results, Amarel and Steinberg (1990). The expectations, however, remained open and unspecified. Later approaches applied A.I. to the resolution of complex design problems through expert systems, and to the appropriate selection of such systems for certain problems such as structural dynamics or vibrations, Díez de Ulzurrun (1992).

Where it is possible to find a greater variety of proposals is in the task of optimizing designs using A.I. algorithms that analyze the design space of certain vessels for which a systematic parameterization of the variables defining the different design alternatives has been made. One example can be found in Abramowsky (2013) on the application of A.I. to the design of cargo ships (a minimal sketch of such a design-space search is given below). However, it is not easy to find applications of A.I. to really systematic processes in real use, beyond purely academic or research attempts that have not finally materialized in the field of design. The reason can be found in the difficulty of developing these tools and the low return that companies and organizations derive from them.
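
The sketch below (plain Python/numpy) illustrates the kind of design-space exploration meant here: a toy evolutionary loop over three parameterized hull variables. The cost function, bounds and constraint are invented stand-ins, not a hydrodynamic model:

import numpy as np

rng = np.random.default_rng(1)
# Design variables: length L [m], beam B [m], block coefficient Cb [-]
low, high = np.array([90.0, 14.0, 0.55]), np.array([140.0, 22.0, 0.85])

def cost(x):
    L, B, Cb = x
    draught = 0.45 * B                            # illustrative T/B ratio
    displacement = L * B * draught * Cb           # simple volume proxy
    proxy = 0.02 * L * B + 500.0 * Cb**2          # invented steel/resistance proxy
    shortfall = max(0.0, 20000.0 - displacement)  # required minimum displacement
    return proxy + 0.1 * shortfall                # penalty keeps the search guided

pop = rng.uniform(low, high, size=(40, 3))        # random initial designs
for generation in range(100):
    scores = np.array([cost(x) for x in pop])
    parents = pop[np.argsort(scores)[:10]]        # keep the 10 best designs
    children = parents[rng.integers(0, 10, size=30)]                  # copy parents
    children += rng.normal(0.0, 0.02, children.shape) * (high - low)  # mutate
    pop = np.vstack([parents, np.clip(children, low, high)])

best = pop[np.argmin([cost(x) for x in pop])]
print("best design (L, B, Cb):", best.round(3))

Replacing the invented cost with calls to a real hydrostatics or CFD tool turns the same loop into the kind of design-space optimization discussed by Abramowsky (2013).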

5.2 A.I. in naval products and companies

It is possible to find somewhat more tangible applications, with a certain validity, in the field of ship operation, that is to say, the operation and management of transport. Here the initiatives are much more numerous; some are at the research or prototype stage, while others are at more advanced stages of implementation.

Thus, it is possible to find prototypes of unmanned vehicles that are used in very hostile environments and that require the support of A.I. This is well described by the MIT professor Henrik Schmidt in his course “Unmanned Marine Vehicle Autonomy, Sensing and Communications”. For these types of artefacts in difficult environments, such as under ice, where communication is practically impossible, the role of A.I. is crucial, O'Leary (2017).

Another interesting field of application of A.I. is described in an article about the use of A.I. techniques for the detection of small vessels, Del-Rey et al. (2017). The approach is interesting because it considers a situation in which the vessel is the subject of the observation but can also be the owner of the application. Having on-board systems for the detection of other vessels, based on A.I., opens the horizon of unmanned vessels and their possibilities of realization.

A.I. is also being integrated into the combat systems of modern ships as essential for identifying threats. Thus, the software STARTLE® of the company Dstl was selected by the Royal Navy for the management of threats; it is described as software that continuously monitors the ship's environment at short and medium range, processes the data it receives and, through A.I. techniques, helps crews to make decisions. “It is inspired by the way the human brain works, emulating the conditioned fear response mechanism of mammals. By quickly detecting and evaluating potential threats, the software significantly increases the situational awareness of the human operator in complex environments,” Mathews (2016). More recently, the company Rolls-Royce has signed a partnership agreement with Google to use the latter's machine learning engine to improve the company's intelligent awareness systems, Kingsland (2018).

It is also possible to find A.I. applications in management systems for the exploitation of energy at sea, and in proposals from companies dedicated to energy on board ships. Recently the company Eco Marine Power announced that it would start using the Neural Network Console provided by Sony Network Communications Inc. as part of a strategy to incorporate A.I. in various technological projects related to the ship, including the further development of the patented Aquarius MRE® (Marine Renewable Energy) propulsion system and EnergySail®, MI News Network (2017).

One of the great references in the naval field is the marine area of Rolls-Royce, which is trying to promote the application of A.I. in ships along two lines: the intelligent management of assets, covering energy, health, data and fleet management, and a second business line of remote and autonomous operations. The latter includes intelligent detection and recognition, remote operations, autonomous navigation systems and connection with ships, Rolls-Royce (2018). As can be seen on their website, not all lines of work are in operation, but some are in development.

5.3 The future of A.I. in marine business and industry

The exploitation of the marine business offers an undeniable field of growth for A.I. There are many computer solutions that, based on the operating data of the different ship systems, can help manage assets in a more optimal way. The application of IoT to ships provides both data collection and the ability to act on assets to obtain their best performance, Muñoz and Pérez (2017). For this, some essential elements are needed. In the first place, we must have comprehensive solutions that cover all aspects of connectivity and integrate them in a coherent manner. It is necessary to have the appropriate signal, connectivity and representation models to provide interactivity with the end users. IBM with its MAXIMO® program and SENER with FORAN® are developing a proposal that integrates the virtual model made during the initial design stage with IBM's solution, in order to merge asset management with the power of the IoT data obtained from sensors, devices and people and to have visibility of them in real time. Having the model in a single database allows virtual reality and augmented reality images to be obtained on which the collected data can be superimposed and compared with the technical performance expected for each element of the monitored vessel. In this way it will be possible to act in the way each situation advises.

Another interesting field of work for the future of A.I. is image recognition. Placing the focus on the marine industry, two fields of application appear. The first is the identification of images in autonomous vehicles, which can support their mission and operation. But it does not only apply to ships and unmanned devices; it can also be used in surveillance systems and in the detection of possible threats or risks on manned ships. Part of this is covered by one of the projects mentioned above, Mathews (2016).

The recognition of images through A.I. is also of interest in the design stages. The need to have virtual models of the objects that are part of a project means that real models can be scanned and then, ideally, recognized to create the virtual model. This is of particular interest in ship revamping and retrofit, where a virtual model of the real ship is needed in order to evaluate the possibilities of retrofitting, including the processes and manoeuvres necessary to carry out such operations. While cloud applications are able to work with that amount of visual information, they have not yet passed the threshold of identifying the elements that appear in the scene and converting them into analytical geometric representations on which measurements can be taken or which can be manipulated as a whole. An extension of this can be applied to the component models used in the design stages by CAD applications. Currently, it is increasingly common for components to be modeled in CAD from external files that in most cases were produced for marketing purposes. These formats are surface representations made of many faces that have no geometric and parametric representation. This, which is useful simply for viewing a model, makes it useless or even a problem for carrying out projects, since the necessary metadata only exist when the models have a formal geometric representation. That is to say, handling the six faces of a cube is not the same as handling the cube as a whole. This limitation opens a field of action for A.I. programs able to recognize that certain faces form a given surface, and in turn that certain surfaces form a given solid (a minimal sketch of such face grouping is given below). At the moment the available programs help the user, but it is finally the user who validates the conversion. However, A.I. programs can keep improving this type of recognition so that the human being becomes less and less necessary each time.
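
A minimal sketch of the first step of such recognition is given below (plain Python/numpy; the tiny cube mesh is illustrative): triangular faces of an imported triangle soup are grouped into planar patches by region growing over shared edges, merging faces whose normals agree (up to orientation):

import numpy as np
from collections import defaultdict

# Illustrative mesh: two faces of a unit cube, two triangles each
V = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],   # bottom square (z = 0)
              [0,0,1],[1,0,1]], float)
F = [(0,1,2), (0,2,3),    # two triangles of the bottom face
     (0,1,5), (0,5,4)]    # two triangles of a side face (y = 0)

def normal(f):
    a, b, c = V[f[0]], V[f[1]], V[f[2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

# Faces are neighbours when they share an edge (two vertices)
edge_to_faces = defaultdict(list)
for i, f in enumerate(F):
    for e in [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])]:
        edge_to_faces[frozenset(e)].append(i)

patches, seen = [], set()
for seed in range(len(F)):
    if seed in seen:
        continue
    patch, stack = [], [seed]
    seen.add(seed)
    while stack:                      # region growing over shared edges
        i = stack.pop()
        patch.append(i)
        for fs in edge_to_faces.values():
            if i in fs:
                for j in fs:
                    if j not in seen and abs(normal(F[i]) @ normal(F[j])) > 0.999:
                        seen.add(j)
                        stack.append(j)
    patches.append(patch)

print(patches)   # [[0, 1], [2, 3]]: two planar patches recovered from the soup

Recognizing that each planar patch bounds part of the same solid, and fitting analytic surfaces to curved patches, would be the following steps of the conversion described above.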

The realization of a naval project is certainly complex, not only because of its transversality but also because of the number of tools that must be handled and the limitations imposed by design rules and standards of various kinds: construction, safety, etc. CAD systems provide more and more tools, but they are also increasingly complex to use optimally. In this scenario, marine engineering companies and shipyards face very demanding deadlines with staff who are either very young and, although familiar with new technologies, do not yet know the art of naval architecture and marine engineering, or very senior and more reluctant to work with the CAD and its new capabilities. Therefore, it would be interesting to have a virtual assistant that can provide all the information necessary to do the job correctly. SENER and IBM are developing a project that integrates the cognitive abilities of Watson® with the functionalities of the CAD in the different stages of design.

The Watson® platform will have a corpus of information and data integrating everything needed to use the FORAN® system correctly and optimally. Furthermore, it will also include all regulations applicable to the different types of vessels: IMO regulations, pollution prevention, safety regulations, etc. It can even integrate the design and work rules of the shipyards themselves, in such a way that CAD users can produce designs complying with all the regulations and applicable standards at the different levels: administration, construction and ship-owners. The system will be trained to give the correct information for each type of request and will be able to learn new things that may affect the design. The integration can be done at different levels, completely decoupled from the CAD or coupled to it to perform certain operations: the CAD launches events that are captured by a listener system, which in turn links with the virtual assistant to provide the information or data associated with those operations (a minimal sketch of this listener pattern is given below). The interaction with the system may also be on demand, taking advantage of the natural language processing capabilities of the Watson® system. This project allows the capabilities of the FORAN® system to be extended with the cognitive and analytical capabilities of the Watson® system and to make them available to the marine industry, placing it in the field of digitalisation 4.0.
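
The following sketch (plain Python) illustrates the event/listener coupling described above. All class, event and rule names are hypothetical stand-ins, not the actual FORAN® or Watson® interfaces:

from typing import Callable
from collections import defaultdict

class CadEventBus:
    """Lets CAD operations publish events to registered listeners (hypothetical)."""
    def __init__(self):
        self._listeners = defaultdict(list)
    def subscribe(self, event: str, handler: Callable[[dict], None]):
        self._listeners[event].append(handler)
    def publish(self, event: str, payload: dict):
        for handler in self._listeners[event]:
            handler(payload)

class DesignAssistant:
    """Stand-in for a cognitive service holding a corpus of design rules."""
    RULES = {"pipe_routed": "Check minimum bend radius against class rules.",
             "plate_defined": "Verify thickness against the applicable standard."}
    def advise(self, payload: dict):
        hint = self.RULES.get(payload["event"], "No guidance for this operation.")
        print(f"[assistant] {payload['item']}: {hint}")

bus, assistant = CadEventBus(), DesignAssistant()
for evt in ("pipe_routed", "plate_defined"):
    bus.subscribe(evt, lambda p, e=evt: assistant.advise({**p, "event": e}))

# A CAD operation would then fire something like:
bus.publish("pipe_routed", {"item": "PIPE-0042"})
bus.publish("plate_defined", {"item": "PL-0107"})

The decoupled variant described in the text corresponds to running the assistant behind a network service instead of an in-process handler; the event flow is the same.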

Fig.7: IBM Cognitive Engine in FORAN. Courtesy of IBM

As we have found possibilities for the application of A.I. in the field of design, it will also be possible to see A.I. realities in manufacturing processes. Perhaps the first steps will be seen in the ability of machines to select the material for a manufacturing job: machines that select cutting plates so as to make the best use of the material according to the remaining pieces (a toy sketch of such plate selection is given below), or robots capable of organizing the movements of intermediate products in the workshop. The novelty with A.I. will be that these machines and robots will not have to be programmed; they will receive the data or the objects with which they have to work and they will know how to act. “In the future, robots will no longer have to be expensively programmed in a time-consuming way with code pages that provide them with a fixed procedure for assembling parts; we just have to specify the task and the system will automatically translate these specifications into a program,” Wurm (2017).
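
As a toy illustration of material selection, the sketch below (plain Python; plate and piece areas are invented) assigns required pieces to remnant plates using a greedy best-fit rule on area alone. Real nesting is a 2D cutting-stock problem, so this one-dimensional area heuristic is a deliberate simplification:

# Greedy remnant-plate selection by area (toy 1-D stand-in for 2D nesting)
remnants = {"R1": 6.0, "R2": 3.5, "R3": 9.0}       # leftover plate areas, m^2
pieces = [4.2, 2.8, 1.5, 3.0, 0.9]                 # required piece areas, m^2

assignment = {}
for piece in sorted(pieces, reverse=True):          # biggest pieces first
    # Best fit: the remnant that leaves the least waste but still fits
    candidates = {r: a - piece for r, a in remnants.items() if a >= piece}
    if not candidates:
        assignment[piece] = None                    # needs a fresh plate
        continue
    best = min(candidates, key=candidates.get)
    assignment[piece] = best
    remnants[best] -= piece                         # shrink the chosen remnant

print(assignment)   # piece area -> remnant id (or None)
print(remnants)     # remaining areas after cutting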

6. Conclusions

A.I. is one of the enabling technologies of digital transformation with the greatest potential among those that make up the fourth industrial revolution. Knowing its characteristics and possibilities is essential in deciding its application to certain processes and products, especially industrial ones, and very particularly those related to the marine sector. It is important to identify the value that A.I. can contribute in each use case where it can be applied.

A.I. automates the learning of repetitive tasks and the discovery of relationships through data. It is necessary to feed A.I. systems with accurate and reliable data, and to provide sufficient information that is well structured and correctly tagged. A.I. highlights the importance of the data.

It is necessary that those who use A.I. know how to make themselves understood and to ask the right questions. A.I. must be correctly fed, with questions and answers. An A.I. system is only as intelligent as the individuals who prepare it.

A.I. adds intelligence to products, which means that it improves the technologies that incorporate it, but we must not forget that it is those technologies that provide the core of the value of work processes and methodologies.

The use of A.I. in industrial environments such as the naval one is just beginning. There is still a long way to go in the fields of design, project optimization, and the maintenance of data and results. Fields such as the recognition of images for their conversion into models, automatic intervention in the validation of requirements, and the optimal exploitation of the processes inherent to naval engineering are still practically unexplored.

With or without debate, the truth is that A.I. is present day by day in our environment, and its development will keep growing. We will see A.I. in applications that we would not have imagined weeks ago: musical compositions that become the most listened to, the most curious and tasty cooking recipes, intelligent cars without a driver, or the best doctor capable of correcting diagnosis and treatment. But in spite of all these advances, we will continue to need intelligent people, people who are smarter than machines and who stay ahead of them, because there are capacities of the human being that can never be embedded in an artificial intelligence.

References

ABRAMOWSKY, T. (2013), Application of Artificial Intelligence Methods to Preliminary Design of Ships and Ship Performance Optimization, https://www.researchgate.net/publication/259361068_Application_of_Artificial_Intelligence_Methods_to_Preliminary_Design_of_Ships_and_Ship_Performance_Optimization

AGGARWAL, A. (2018), The Birth of AI and the first AI Hype Cycle, https://www.kdnuggets.com/
2018/02/birth-ai-first-hype-cycle.html

AMAREL, S.; STEINBERG, L. (1990), Artificial Intelligence and Marine Design, AI Magazine 11 (1), pp.14-17

CAPEK, K. (2017), R.U.R. (Robots Universales Rossum), Books Mablaz

CBINSIGHTS (2018), The Race for AI: Google, Intel, Apple In A Rush To Grab Artificial Intelligence Startups, https://app.cbinsights.com/research/top-acquirers-ai-startups-ma-timeline

DATAROBOT (2018), Artificial Intelligence (AI), www.datarobot.com/wiki/artificial-intelligence

DEL-REY, N.; MATA, D.; JARABO, M.P. (2017), Artificial intelligence techniques for small boats detection in radar clutter. Real data validation, https://www.sciencedirect.com/science/article/pii/S0952197617302610

DÍEZ DE ULZURRUN, I. (1992), Aportaciones al diseño dinámico de buques, Universidad Politécnica de Madrid

KINGSLAND, P. (2018), Ship Technology, https://www.ship-technology.com/features/rolls-royce-teams-google-ai-driven-ship-awareness

KIRKPATRICK, K. (2018), Considerations for getting started with AI, TRACTICA LLC (Cray Inc.)

MARTÍN, R. (2018), Open AI, la inteligencia artificial que nos daría una paliza en cualquier MOBA,
https://www.esportsunlocked.com/especiales/open-ai-la-inteligencia-artificial-que-nos-daria-una-
paliza-en-cualquier-moba

MATHEWS, A. (2016), Artificial intelligence to play key role in maritime combat, https://aerospacedefence.electronicspecifier.com/marine/artificial-intelligence-to-play-key-role-in-maritime-combat

McAFEE, A.; BRYNJOLFSSON, E. (2017), What's driving the machine learning explosion, Harvard Business Review

MI NEWS NETWORK (2017), Marine Insight, https://www.marineinsight.com/shipping-news/eco-marine-power-study-use-artificial-intelligence-research-projects

MULLANY, M. (2016), 8 Lessons from 20 Years of Hype Cycles, https://www.linkedin.com/pulse/8-lessons-from-20-years-hype-cycles-michael-mullany

MUÑOZ, J.A.; PÉREZ, R. (2017), Design of smart things for the IoT, 2nd Int. Conf. Internet of Things,
Data and Cloud Computing, New York

MUÑOZ, J.A.; PÉREZ, R.; GUTIERREZ, J.R. (2018), Design Rules Evaluation through technologies
of treatment of Big Data, 17th COMPIT, Pavone

NAOYUKI, N. et al. (2017), Application of Artificial Intelligence Technology in Product Design, Fujitsu Science Technology 53 (4), pp.43-51

O'LEARY, M.B. (2017), MIT News, http://news.mit.edu/2017/unlocking-marine-mysteries-artificial-intelligence-1215

PANETA, K. (2017), Smarter with Gartner, https://www.gartner.com/en/newsroom/press-releases/2017-08-15-gartner-identifies-three-megatrends-that-will-drive-digital-business-into-the-next-decade

PANETA, K. (2018), Smarter with Gartner, https://www.gartner.com/smarterwithgartner/gartner-top-10-strategic-technology-trends-for-2018

ROLLS-ROYCE (2018), Ship Intelligence, https://www.rolls-royce.com/products-and-services/marine/ship-intelligence.aspx#

HABIB, R.; et al. (2017), DreamSketch: Early Stage 3D Design Explorations with Sketching and
Generative Design, ACM Symp. User Interface Software & Technology

SIEMENS (2018), MindSphere - The Internet of Things (IoT) Solution, https://www.siemens.com/global/en/home/products/software/mindsphere.html

TORRA, V. (2011), La inteligencia Artificial, LYCHNOS - Cuadernos de la Fundación General CSIC (7), pp.14-18

TURING, A.M. (1950), Computing Machinery and Intelligence, Mind 59, pp.433-460

WURM, K. (2017), Prototype Robot Solves Problems without Programming, Siemens, https://www.siemens.com/innovation/en/home/pictures-of-the-future/industry-and-automation/the-future-of-manufacturing-autonomous-assembly.html

Future of Autonomous Shipping from an Administration Point of View
Svein David Medhaug, The Norwegian Maritime Authority, Haugesund/Norway, sdm@sdir.no

Abstract

This paper presents the future of digitalization from an administration point of view: what is the administration's responsibility, and how do we cope with innovation and new technology within this scope in Norway?

1. Introduction

The national authority has a social, financial and ethical responsibility towards its citizens, and the maritime authority is no exception. The maritime authority is the administrative and supervisory authority in matters related to safety of life, health, material values and the environment on vessels flying the national flag. The administration also needs to ensure that foreign ships in the flag state's waters act safely, securely and in an environmentally sound way, in accordance with international agreements. In addition, the administration has a responsibility for ensuring the legal protection of the national fleet, also when vessels sail under a flag of convenience.

2. The Authority’s responsibility

2.1 Adviser

National and international interests will always be complex, and due to the various maritime perspectives the administration may have different prioritized agendas; depending on what kind of clients or customers it has, the flag state takes a stand. However, the authority will always have a duty to provide guidance to its customers. Regulations and instructions need to be available to the relevant stakeholders. Guidance to governmental organizations is of course part of this.

2.2 Driving Force

The authority should also be a driving force for the industry and the political authorities in safety and environmental work. It should encourage the maintenance and development of a strong national flag. The administration should be a visible and clear participant in national and international regulatory work; in my opinion, this is one of the clearest indicators of a quality flag! Attitudinal measures should also be central to its work. Research, innovation, risk assessments and lessons learned from accidents and incidents should form the basis for its priorities.

2.3 Supervisory authority

In most cases the administration also has the supervisory authority pursuant to the national

• Ship Safety and Security Act,
• Product Control Act, and
• Act relating to Recreational and Small Craft.

The supervision includes certification, document control, inspection and auditing to ensure compliance with the legislation. It contributes to the creation of strong behavioural attitudes with regard to health, safety and the environment.

2.4 Register

To maintain control over the flagged vessels, the administration needs to manage the function of a real property register: a registration which ensures the legal protection of registered rights and is maintained through correct and updated registers of the national ships. In Norway, this duty is laid down by law, in the Norwegian Maritime Code.

Fig.1: The cog-wheels of a quality Flag

3. The preferred maritime administration

To become a center of attraction and a preferred flag - both for shipping and for the maritime industry - the administration needs to make a difference. It cannot be passive. If you want to be a preferred stakeholder, you cannot be a free rider; if you are, your efforts will limit themselves. The future of shipping holds new segments which flags of convenience will have great difficulty entering if they are not dedicated and active. It is not enough to run a management regime.

The administration needs to be:

• Available to its customers, 24/7. In the long run, it will not be enough just to have protocols, regulations and acts.
• Competent and experienced, able to make safe and secure decisions on deviations from established routines and regulations, as owners need, now that the maritime domain is moving into a period of significant change. The administration needs to hold relevant expertise and be recognized for its competence in the maritime cluster.
• Active: the administration needs to act and contribute to the development of new and advanced technology, to make it safe and secure. We need to be committed and serious.
• Influential: administrations with significant influence need to use their channels and work together with other administrations to establish common ground.
• Reputable: hopefully a good reputation is worth something, and quality will conquer simplicity and greed. Safety needs to be part of the development.
• Decisive: countries with the ability to take decisions and create sustainable ways ahead will be preferred partners, as they can set the premises.
• Innovative: administrations with the ability to think anew, turn around, and make innovation and new technology available to their users will become attractive in the new shipping era.
• Proactive, dynamic and forward-looking, seeking technology which can make operations on the oceans more reliable, safe and secure, with low environmental risks.

Of course, the economics also need to be in line: the flag needs to offer competitive services so that the industry chooses to register its vessels under the respective flag. We need a blue economy in a green and secure shipping environment!

4. Contributing to making national innovation into international standards

In Norway, we have the benefit of being a small but complete cluster within the maritime domain. We have a chain of participants from industry, research and development (R&D), manufacturers, makers, insurance, management, unions, class societies and authorities working together, and we have arenas of cooperation capable of reaching conclusions. The Norwegian Maritime Authority encourages the industry in the following ways:

• We stimulate innovative thinking. We strive to be open-minded, always looking for better and safer ways to operate.
• We coordinate and facilitate ideas which seem to be sustainable.
• We actively participate in international forums, promoting safe, secure, environmentally sound and sustainable ideas with beneficial potential.
• We invite the industry to participate in the Authority's international work.
• We even take the initiative in many ways which earlier were unthinkable.
• We actively participate in many development projects.

4.1 Innovation and new technology

Norway sees itself as one of the big contributors to international shipping and the maritime industry. We have therefore taken an enterprising initiative with regard to sustainable new technology. The megatrends within ‘innovation’ which we see today are mostly triggered by new technology and environmental requirements; these are seen as the main drivers of new maritime innovation. Within the terminology of new technology, we have concepts such as:

• e-Navigation
• Digitalization
• Automation
• Autonomous ships

Increased environmental requirements are also drivers of these megatrends - concepts which will minimize the human environmental footprint. Through regulations for ballast water management and the reduction of greenhouse gases, through the establishment of the Energy Efficiency Design Index (EEDI) and the Ship Energy Efficiency Management Plan (SEEMP) to reduce emissions and create more efficient and cleaner shipping, and through the creation of designated Emission Control Areas (ECA) to reduce Sulphur content, we have already achieved great environmental benefits from these megatrends.

4.2 New Technology - Driving Forces

Due to these trends, Norway has developed many new achievements in the name of innovation. To mention a few, we were the first to build:

• the LNG tug boat Borgøy, with gas engines delivered by Rolls-Royce;
• the very first UECC LNG car carrier;
• the first methanol tanker;
• the first battery ferry in operation - “MF Ampere”;
• the first small battery-powered fishing smack;
• the hybrid “Vision of the Fjords”;
• the first autonomous zero-emission container vessel - “Yara Birkeland”.

4.3 Automation, digitalization and remote functionality

One of these megatrends is driven by the new technology of the global digitalization revolution. This technology has already started making man superfluous and obsolete in many ways; machines have started to take intelligent decisions on their own. In processes which earlier depended on human intervention, machines are now able to make their own decisions based on calculated algorithms or pre-programmed behaviour patterns. The digitalization trends which we see on the near horizon have an autonomous approach, where the human-machine interface is moved from decision making and operation to the supervisory level.

• Artificial intelligence (AI)

AI seems to be the goal. We are moving from Siri to self-driving cars; self-learning technology is progressing rapidly. The technology can now encompass anything from Google's search algorithms to autonomous driving. Even though today's self-learning technology is designed to perform a narrow task, e.g. facial recognition, internet searches or driving a car or vessel, it seems that the long-term goal is to create technology which will outperform humans at whatever the specific task is. Soon, if we let it, the technology will be able to outperform humans at nearly every cognitive task there is.

Areas of expertise where man is already outperformed in many ways include banking, insurance, broking, travel agencies and cashiers. Also, automated industrial assembly work, which earlier was tantamount to poor safety, or work whose repetitive nature causes fatigue or inattention, has been taken over by robotics.

• Why the eagerness towards autonomous systems

Nevertheless, and despite these circumstances, we still see an eagerness towards digitalization from industry in general, and now also in the maritime industry. We can see new business segments which are eager to make an early move, and we actually see the potential of new business models emerging which can compete with traditional trade as we know it today.

This may be one of the reasons why we do not see the biggest enthusiasm towards autonomous systems from the owners' side. Another reason is probably the fact that the shipping industry is deep in a low conjuncture and, for the time being, has enough to do just keeping the wheels running.

The maritime industry has been shielded from many of these new technology developments, for better or worse. Digitalization and automation in the maritime segment have been regulated through IMO and strong type-approval regimes, and have therefore been held back. The technology has not been prioritized for commercial use even though it has been available for some time.

5. Benefits

When it comes to the potential of autonomous systems - in my opinion, few industry segments can achieve bigger benefits than the maritime industry and shipping. Benefits include:
• Cost efficiency
− When constructing autonomous vessels, there will be lower building costs due to less superstructure and no accommodation; there will be less to maintain.
− The reduced superstructure will improve aerodynamics and stability.
− Digitalization and optimization of the power management will reduce fuel consumption and emissions.
• Safety
− With fewer people on board there will be fewer accidents involving humans and fewer people to evacuate in case of emergency.
• Security
− Both cyber-security and general security will be at a higher standard than today. Because we can build the system around the concept and from scratch, we can adapt the interfaces and cyber-security for a digital approach.
− Physical security will be better: making vessels unboardable makes them of no interest to hijackers or stowaways.
• Environment
− These vessels will most likely have an environmental footprint with zero emission as a goal. That means less emission to air and water.
• Efficiency
− The administration on board is digitalized and taken care of ashore, or automated on board. This enormously reduces the administrative burden.
• Digitalization
− All systems on board will be harmonized and interfaced. The systems will work together and make the navigation more exact and safer.
• Reliability
− Because all operational scenarios must have been gone through thoroughly, the reliability is better. The fact that autonomous vessels need compensating measures to achieve better safety and reliability gives these vessels an advantage.
− Redundant systems make them more reliable.
• Reduction of human errors
− The reduction of human errors is obvious.
− Continuity - a non-stop process - is also a benefit: machines do not need breaks as humans do; there is no fatigue, and operations can go on 24/7.
• Automated logistics
− Logistics are calculated and put into transportation chains, automated and available to all relevant stakeholders.
• Shore-based control stations
− Control stations onshore will be big contributors: they will take care of incidents which may occur and, if necessary, take over control of the vessel.

Fig.2: Cost efficiency equals quality

6. From concept to project

Even though there is a lot of enthusiasm around the autonomous trends today, I think it will still take some time until the world is ready for a general fleet of commercial autonomous ships. The future is still unknown! All future views are tentative assumptions based on development trends, even given the rapid development we see around the subject today. I think general automation and digitalization of navigation and communication equipment will come first. New technology will pave the way for the new era of automation. When the bridge is harmonized, integrated and automated, we will harvest the experience and statistics that will make the change possible. R&D will play an important role in this development. Project-based experience will provide showcases demonstrating that autonomous systems can be made safe and secure in a cost-beneficial manner.

7. When and Where

7.1 Test areas for autonomous and remote ships

The future will hold an interaction between unmanned and manned vessels in operation - vessels which navigate safely, securely and in accordance with COLREG! The autonomous vessels will behave as if they were manned, and the manned vessels, which will be highly automated, will navigate in accordance with the traffic pattern and COLREG as today.

At the start of this maritime autonomous era, we will need dedicated areas to operate these vessels: dedicated areas where we can supervise and take the necessary precautions for a period, to run tests and to collect data from them.

Due to these circumstances, Kongsberg Seatex in Trondheim took the initiative in August 2016 to establish such a location, where new and innovative technology can operate under restricted conditions and under the authorities' supervision.

On 30.09.2016, the Norwegian Maritime Authority, the Norwegian Coastal Administration, NTNU, Kongsberg Seatex, Kongsberg Maritime, MARINTEK, Maritime Robotics and the Port of Trondheim signed an agreement of intent to make the Trondheim fjord available for testing of remotely controlled and autonomous systems. This became the start of the new era for autonomous shipping in Norway: the first test area of its kind in the whole world was established! New test areas will follow as the need arises; the Grenland fjord in the southeast of Norway is most likely the next test area to be considered. These test areas will be the first locations where we will see autonomous vessels in action. Quite likely, these areas will also grow into commercial areas.

7.2 When

When will this take place? When will the first autonomous vessel sail? Well, there are stages to climb, complexity to define and regulations to make. Even though Norway has sovereignty over its own territorial waters, it does not have this in international waters. Due to these circumstances and the nature of the international conventions, national trade will be opened up first.

Nevertheless, the unmanned autonomous vessel is the goal. We will first see a growth of automation in general: existing vessels and newbuildings will be equipped or built with new and more complex equipment, leaving an efficiency footprint. We will see:

• More advanced DP systems with sailing modes, functional for voyages from A to B and operating with an eco-efficient pattern.
• Automated crossing systems based on:
− advanced autopilots,
− DP (dynamic positioning) systems,
− collision avoidance systems with integrated algorithms making safe real-time decisions based on the vessel's manoeuvring characteristics, local traffic, topography and weather conditions (the geometric core of such an algorithm is sketched below),
− complex sensor-fusion functionality going beyond GNSS reference systems, e.g. gyro, radar, laser, spot track, RADius (the relative position reference system of Kongsberg), hydro-acoustic positioning systems, etc.
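
The core of such a collision avoidance algorithm is the closest-point-of-approach computation sketched below (plain Python/numpy; the positions, velocities and the 1 nm threshold are invented for illustration). A real system would layer COLREG logic and manoeuvring constraints on top of this geometry:

import numpy as np

# Own ship and target: positions in metres, velocities in m/s (illustrative)
own_pos, own_vel = np.array([0.0, 0.0]), np.array([6.0, 0.0])        # heading east
tgt_pos, tgt_vel = np.array([3000.0, -2000.0]), np.array([0.0, 5.0])  # heading north

r = tgt_pos - own_pos          # relative position
v = tgt_vel - own_vel          # relative velocity

if v @ v < 1e-9:
    tcpa = 0.0                 # same velocity: the range never changes
else:
    tcpa = max(0.0, -(r @ v) / (v @ v))   # time to closest point of approach, s

dcpa = np.linalg.norm(r + v * tcpa)       # distance at that time, m
print(f"TCPA = {tcpa:.0f} s, DCPA = {dcpa:.0f} m")
if tcpa > 0 and dcpa < 1852.0:            # closer than 1 nm: act per COLREG
    print("Risk of collision: evasive action required.")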

We will also see:

• Automated docking systems, based on sensor fusion, taking the vessel safely and securely from the end of the voyage to all fast at the berth.
• Harmonized bridges (e-navigation / INS) built for a digital and interfaced purpose.
• De-centralized navigation tasks.
• Wider communication carriers, capable of transmitting and receiving:
− IoT (Internet of Things) - or ‘Internet of the Sea’ if you like,
− big data and the like.

Interfacing of the data received from shore, and interoperation between the equipment on board, will make the steps towards autonomous and remote operations available (a minimal sketch of the sensor fusion underlying several of the systems above is given below).
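
The following sketch (plain Python/numpy; the noise and drift figures are invented) shows sensor fusion in its simplest form: a complementary filter blending a drifting but smooth gyro heading with noisy but absolute GNSS-derived headings. A production system would use a Kalman-type estimator over many more sensors:

import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.1, 600                       # 60 s at 10 Hz
true = np.deg2rad(45.0) + 0.02 * np.sin(0.05 * np.arange(n))  # true heading, rad

gyro = np.gradient(true, dt) + np.deg2rad(0.05)   # rate with a constant drift bias
gnss = true + rng.normal(0, np.deg2rad(2.0), n)   # absolute but noisy heading

def wrap(a):                           # map an angle difference to [-pi, pi)
    return (a + np.pi) % (2 * np.pi) - np.pi

est, gain, err = gnss[0], 0.02, []     # start from the first GNSS fix
for k in range(n):
    est = est + gyro[k] * dt                 # predict with the gyro (smooth, drifts)
    est = est + gain * wrap(gnss[k] - est)   # pull towards GNSS (noisy, absolute)
    err.append(abs(wrap(est - true[k])))

print(f"mean fused heading error: {np.degrees(np.mean(err)):.2f} deg "
      f"(raw GNSS noise: 2.00 deg)")

The fused estimate is both smoother than the GNSS input and free of the gyro's drift, which is exactly what the integrated bridge systems described above need.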

7.3 Sustainable Innovation

In Norway, the authorities have acknowledged that the digital world is imminent and already here. There is no use trying to stop it. Why should we prevent new technology and innovation if it is safe, secure, beneficial and sustainable? Of course, there are social aspects to this matter which we need to consider. However, I think this is an issue that will be taken care of, and standards and limitations will be defined as we go.

We think that safety at sea in general depends on co-working between governmental organizations, industry and R&D institutions with regard to automation and autonomous systems. We need to work in close relation and take advantage of the complete maritime chain to achieve a sustainable result. The Norwegian Maritime Authority is working in parallel with national and international organizations and maritime governments worldwide to make a safe approach towards autonomous shipping.

We are still at the conception stage of these innovations, but we are preparing. The NMA has an obligation and a corporate social responsibility towards safety of life at sea, health, material values and the environment on vessels flying the Norwegian flag, and it is therefore important that we make the process safe, secure and sustainable.

In parallel with these obligations, we also see a responsibility to maintain Norway's reputation for maritime quality, and we would like to see the maritime industry succeed in turning safe Norwegian innovation into international standards.

8. Norway’s adoption of new technology

8.1 The leading innovative maritime nation

Due to the maritime cluster in Norway, we need to take a proactive role. As one of the major maritime flag states in the world, we - together with others - have a responsibility to prepare the future with a safe and sustainable maritime industry. The Norwegian government has therefore made a political statement of being “the leading innovative maritime nation”, giving a significant amount of funding to research and development (R&D) within innovation and new technology.

8.2 Strategy

Key documents for Norway’s maritime strategy are:

• Ocean Strategy 2017 (Havstrategi 2017): https://www.regjeringen.no/contentassets/097c5ec1238d4c0ba32ef46965144467/nfd_havstrategi_uu.pdf
• Maritim21 strategy project: https://maritim21.no/prognett-Maritim21/Om_Maritim21/1254006265213
• Maritime Strategy 2016: https://www.regjeringen.no/contentassets/05c0e04689cf4fc895398bf8814ab04c/maritim-strategi_web290515.pdf
• National Transport Plan 2018-2029: https://www.regjeringen.no/contentassets/7c52fd2938ca42209e4286fe86bb28bd/no/pdfs/stm201620170033000dddpdfs.pdf

8.3 Funding

• MAROFF – 137.5 mill. NOK; 10 of 19 approved projects are automation-related
• ENOVA – 2.3 billion NOK (2017); financial support for innovative, environmentally
friendly and sustainable solutions
• Innovation Norway – ~930.2 mill. NOK (R&D)

In Norway, there is substantial activity concerning autonomous and remote shipping and automation
in general. Several projects have multimillion funding from the Norwegian Research Council,
Innovation Norway and ENOVA. There is a considerable turnover of funding in the Norwegian
maritime cluster. The funding is meant to stimulate the industry during a phase of significant change.

Fig.3: Norway’s R&D funding

9. Projects in progress

9.1 Connection studies

• SIMAROS - Safe Implementation of Autonomous and Remote Operations of Ships (DNV GL)
• ROMAS - Remote Operation of Machinery and Automation Systems (DNV GL)

• Open Bridge (VARD)
• ASTAT - Autonomous Ship Transport at Trondheimsfjorden (Kongsberg Seatex)
• SAREPTA (SINTEF OCEAN)
• EGNSS H2H (SINTEF OCEAN)
• Autonomous zero-emission container ship (DNV GL), from the Green Coastal Shipping
programme on the west coast of Norway

Fig.4: Green coastal shipping – DNV GL

9.2 Newbuilding projects for autonomous or remote ships in Norway

There are several newbuilding projects scheduled for autonomous and remotely controlled vessels.
They are not yet up for approval at the administration, but are concept studies at consultancies in
which the Norwegian Maritime Authority is involved, and we see their potential in the near future.

10. Regulation Framework

As of today, we do not have any regulations that cover vessels sailing without an officer on watch
(OOW). Before we have a national regulation framework to build autonomous and remotely con-
trolled operation upon, we need instructions and precautions made available to set the premises. These
premises are the Authority’s responsibility.

• National regulations
In national waters, Norway can decide where and how to operate its vessels. Nevertheless,
even in national waters we have international traffic, which we need to pay attention to. So,
purely national regulation is not preferable. Still, national regulations or temporary instruc-
tions can be an option in the vacuum of existing regulations. The goal, however, is to achieve
an international regulation regime for autonomous shipping as well. To achieve this, we need
to go through the United Nations’ International Maritime Organization (IMO).
• International regulations
International regulation will take time. Based on experience, it takes between 10 and 15 years
to get a new regulation regime established at IMO, if there is a need for one! I think bilateral
agreements between coastal states will be established before we see a full-scale international
regulation and standards regime. Because the process at IMO is very slow, we have already
started the approach towards international regulation of autonomous shipping. A strategic
adjustment was notified to the IMO Council last year, and this was acknowledged. The
Council decided that “Automation and remote operations” should be included in the IMO's
strategic framework for 2018-2023; it will therefore be a considerable agenda item for MSC
and the sub-committees and was put in the High-Level Action Plan accordingly.

Fig.5: Regulations will move from national to regional to international level

Fig.6: IMO decided to include “Automation and remote operations” in its strategic framework

11. Conclusion

We will see a change! It will make shipping more digital and automated. The extent of the automation
depends on the trade area and the type of vessel. Small national coastal trade with container vessels
and general cargo ships will have an easier path to approval than a passenger ship. International
regulations will take time, while national adjustments may be possible in a shorter period. Even
though national legislation is easier to achieve, we need to push for international legislation despite
any national regulation regime.

Success depends on a safe and secure way to approach autonomous and remote operations, and on
cooperation between R&D, industry and authorities. Norway will keep on supporting innovation that
is beneficial, safe, environmentally friendly and sustainable. We see the potential of digitalization and
automation. The maritime authority will facilitate and prepare for blue shipping in a green environ-
ment. And the industry in Norway is pushing forward.

Towards Unmanned Cargo-Ships:
The Effects of Automating Navigational Tasks on Crewing Levels
Carmen Kooij, Delft University of Technology, Delft/The Netherlands, C.Kooij@tudelft.nl
Robert Hekkenberg, Delft University of Technology, Delft/The Netherlands,
R.G.Hekkenberg@tudelft.nl

Abstract

This paper presents a method to analyse the required crew composition on board of a short sea cargo
vessel. By using a purpose-built tool it is possible to assign all tasks on board to the most appropriate
crew member. This tool is used to analyse the changes to crew composition when the navigational
tasks on board of the ship are removed from the workload of the crew. The analysis shows that during
the normal sailing and arrival and departure phase, this results in a decrease of the required crew
size of respectively 3 and 1 crew members. This reduction can, however, only be realised if the
procedures during the loading and unloading phase are changed too, since this is the normative
phase of the voyage for the crew size.

1. Introduction

In recent years, the shipping industry has strongly increased its research into unmanned and
autonomous shipping, driven on the one hand by the availability of new technology and on the other
hand by a reduction in the availability of skilled personnel (Lloyds Register et al., 2017). Within the
challenge of unmanned or autonomous vessels, navigation, route following and collision avoidance
have been identified as a few of the key challenges that need to be solved. This in turn has led to a
significant amount of research in this area. For example, the MUNIN project (Burmeister et al.,
2014), the AAWA project (Poikonen et al., 2017) and Lloyds Register (Lloyds Register et al., 2017)
have investigated this as part of their projects. There has also been a significant amount of research
that is not directly related to these major projects. Much of the research presented before 2003 has
been summarised by Fossen (2000) and Roberts et al. (2003). Recently, the main focus of research
into the automation of navigational tasks has progressed to incorporating collision avoidance by
programming the International Regulations for Preventing Collisions at Sea, or COLREGS (Beser &
Yildrim, 2018; Kuwata et al., 2014; Perera, 2018).

The shift towards more automation and autonomy does not only happen within the shipping industry.
In the aerospace and automotive industries significant strides have already been made. Within cars
and aeroplanes the navigation problem is the main challenge towards unmanned or autonomous
vehicles. For ships, this is different. On a ship, the crew is larger and no crew member is only tasked
with performing navigational tasks, as is the case for a car or plane. This makes it more difficult to
reduce the crew size of ships. This paper aims to identify the crew reduction that is possible on board
of a short-sea container vessel due to the removal of the navigational tasks from the crew’s workload.
To this end a crew analysis method and tool have been set up that can be used to analyse the effects of
this change on the crew composition of a ship.

1.1 The history of crew reduction

In the past, crew reductions have mostly taken place under the influence of increasing technical
capabilities or the introduction of completely new technologies. E.g. the introduction of the diesel
engine meant that the required crew in the engine room could be decreased significantly since it was
no longer necessary to manually shovel coal into the engine (Bertram, 2002). The introduction of the
radar and other navigational equipment meant that specialised crew for the purpose of location
keeping was also no longer required.

1.2 The history of automation of navigational tasks

The automatic steering system was introduced on merchant ships in the early 1920s. Before that, the
ship was steered by dedicated crew members for whom steering the ship was their sole responsibility
(Bhattathiri, 2017). Even before this, seafarers such as fishermen used ropes to temporarily fix the
rudder in position in order to perform other tasks on board. In more recent years, the course keeping
of the 1920s has evolved into dynamic positioning and waypoint following and, more recently, linear
and non-linear ship control (Fossen, 2000; Roberts et al., 2003).

2. Approach

Bertram (2002) states that in order to progress with automation, the following questions should be
answered for all crew members:

• What are the functions (tasks) of this crew member?
• Can the functions (tasks) be performed on shore or via communication from shore?
• Can the functions (tasks) be performed by a machine (computer) as well or better?

In this paper, the answer to the first question, changed slightly to look not at a specific crew member
but at the crew as a whole, is used as input for the crew analysis tool. The basis of the task analysis of
the crew members is formed by a functional breakdown of the functions of the ship created in
previous research (Kooij et al., 2018). Using a variation of Watson’s (1998) approach, the ship’s main
functions and their sub-functions have been identified. These functions are then linked to a breakdown
of the systems that are present on the ship to fulfil these functions. From these systems, the tasks of
the crew that are required for the fulfilment of the ship’s functions can be determined. This is
translated into a task list of all the tasks that need to be performed, in the current situation, for the ship
to operate smoothly.

The second and third steps of Bertram’s analysis are performed, focussed specifically on solutions for
the navigational tasks. However, the focus is not only on automation but on the question: “How can
the task be made unnecessary?” This could be achieved by increasing the automation level, but also
by selecting different equipment or by changing the process.

2.1 Main principles of the analysis method

The aim of the crew analysis method is to analyse the effects that specific changes in the workload of
the crew can have on the crew composition. To this end, the crew analysis tool starts off as a
representation of the current situation. The information used as input is gained from expert interviews
with experienced seafarers and observations made by the first author during a voyage on a short sea
container ship. This information will be discussed further in Paragraph 3.1. The validity of the input
data is checked by calculating the crew size in a conventional workload configuration. The outcome
in this scenario matches the expected outcome and has been validated by industry experts.

With a verified and validated starting situation, it is possible to make changes to the workload of the
crew to analyse the effect this can have on the crew composition. By finding specific solutions for
specific (groups of) tasks, the new situation can be analysed by removing the original task from the
task list or replacing it by a replacement task if required.

The tool can also assist with deciding where changes to the ship or operating procedures are required
to decrease the size of the crew. The normative phase with regards to the crew size can be determined
by individually analysing each of the travel phases. The travel phase in which the required crew is the
largest, is the normative phase. To reduce the crew size, the first changes need to be made in this
phase.

3. Crew analysis method

To analyse the effects of the changes in workload, a crew analysis tool (CAT) is used. This purpose-
built tool resembles a crew design tool created for the Dutch Defence Materiel Organisation (DMO)
by a consortium led by The Netherlands Organisation for Applied Scientific Research (TNO) (van
Diggelen et al., 2016). That tool is used by the Dutch defence to optimise crew composition and task
assignment. However, the ships it is used for are naval vessels with crews that are significantly larger
than those of a cargo ship. This has resulted in a tool that incorporates much more detail than is
required for an analysis of a small container vessel. For that reason, a tool tailored to cargo vessels
has been built.

The main aim of the tool is to distribute the required tasks as efficiently as possible over the crew
members. It does this by running through a task assignment algorithm for each task. Fig.1 shows a
high-level overview of the Crew Analysis Tool. Each of its important elements is discussed in this
paragraph.

Fig.1: Basic overview of the crew analysis tool

3.1 Input

The backbone of the analysis is formed by the two databases that provide the input to the tool. The
first database consists of an overview of all the tasks that the crew needs to perform and the
capabilities the crew members have with respect to these tasks. The second database contains the
relevant details of these tasks. Both are discussed in more detail below.

The database about the crew capabilities shows which crew members have the capability to perform
each of the identified tasks. This data was collected by using a combination of observations by the
author and expert interviews, combined with the findings from the previously mentioned functional
breakdown. This analysis resulted in a list of 66 tasks. Some of these tasks were grouped together for
the purpose of this analysis, finally resulting in a list of 29 tasks that are analysed. These tasks can be
performed by 10 different crew members, ranging from a captain to a deck boy. For the skill level a
three-tiered scale has been set up:

0 : The crew member is not able to perform this task
1 : The crew member is able to perform the task under supervision
2 : The crew member is capable of performing this task without supervision

For each of the tasks, each crew member is assigned a skill level. Fig.2 shows a small excerpt of the
crew capability database.

The second database specifies the task details per task. In this database information such as the
number of involved crew members, the required skill level and the time it will take to complete a task
is included. Fig.3 shows a short excerpt from this database.

Task                                                    Task no.  Capt.  Ch.Eng.  Ch.Off.  2nd Eng.  2nd Off.  Bosun  Cook  ABS  OS  DB
Maintenance on main engine/maintenance during loading      60       0      2        0        2         0        0     0     0   0   0
Maintenance paperwork                                      61       0      2        0        1         0        0     0     0   0   0
Responsibility for ship                                    62       2      0        0        0         0        0     0     0   0   0
Responsibility for engine room                             63       0      2        0        0         0        0     0     0   0   0
Fig.2: Excerpt of the crew capability database (columns ordered as in Table I; ABS = Able Bodied
Seaman, OS = Ordinary Seaman, DB = Deck Boy)

As discussed, the information for this database was collected using observations and expert
interviews. The tasks are sorted in descending order according to the paygrade/rank of the cheapest
crew member that can perform the task. This ensures that the method leads to the cheapest crew
composition, as is explained later on in the method section.

Fig.3: Excerpt from the task database (columns: Tasknumber, Total time in hrs, Number of crew,
Required skill level, Normal sailing, Arrival and departure, Loading and unloading, Communal?,
Location, Split?)
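To make the structure of these two input databases concrete, the following minimal Python sketch
models them under our own naming assumptions (RANKS, CAPABILITIES and Task are illustrative
identifiers, not those of the actual tool). The capability entries are taken from the Fig.2 excerpt; the
task attributes mirror the Fig.3 columns, with the hour values for task 60 chosen purely for
illustration:

    from dataclasses import dataclass

    # Ranks ordered from most to least expensive, as in Table I.
    RANKS = ["Captain", "Chief Engineer", "Chief Officer", "Second Engineer",
             "Second Officer", "Bosun", "Cook", "Able Bodied Seaman",
             "Ordinary Seaman", "Deck Boy"]

    # Crew capability database: task number -> {rank: skill level},
    # where 0 = unable, 1 = under supervision, 2 = unsupervised.
    CAPABILITIES = {
        60: {"Chief Engineer": 2, "Second Engineer": 2},
        62: {"Captain": 2},
    }

    @dataclass
    class Task:                  # one record of the task database
        number: int
        hours: float             # total time in hrs (illustrative values)
        crew_needed: int         # number of crew members involved
        skill_required: int      # minimum skill level (1 or 2)
        phases: set              # travel phases in which the task occurs
        splittable: bool         # may the hours be divided over crew members?
        communal: bool = False   # needs a representative of each department?

    tasks = [
        Task(60, hours=8.0, crew_needed=1, skill_required=2,
             phases={"loading/unloading"}, splittable=True),
        Task(62, hours=0.0, crew_needed=1, skill_required=2,  # responsibility
             phases={"normal sailing", "arrival/departure",
                     "loading/unloading"}, splittable=False),
    ]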

A single trip of a ship can be split into three distinct phases:

• The loading and unloading phase
• The arrival and departure phase, during which the ship sails in busy and shallow waters
• The normal sailing phase, which is the entire voyage outside shallow waters

The model uses this distinction to first calculate the cheapest crew composition per phase before
determining the cheapest overall solution.

3.2 Task assignment algorithm

The next step of the program is the main element in which the tasks are assigned to different crew
members. In general terms the algorithm can be split into three main parts. These three parts can be
found in Fig.4 along with a more detailed visualisation of the steps that are taken in the task
assignment.

3.2.1 Step I: Data preparation

In the first step the relevant data is prepared for use. From the task list the most expensive task, i.e. the
task requiring the most expensive crew member(s), that still needs to be assigned is selected. This is
done to ensure that the crew composition that is found by running this tool is in fact the cheapest

option. How this works is explained in the assignment of tasks in step II. The second input that is used
is the list of available crew. The tool keeps track of the crew members on board, to which tasks they
are assigned, what skills they have and how much time they have left to perform more tasks. Using
the task information database which has information regarding the required skill of the crew members,
the list of available crew is updated to only consist of crew members that meet the requirements.

From here a first division is made. In some cases, tasks can be split among crew members (e.g.
allowing two different crew members to each work on a task for an hour). These tasks end up in
section II. In other cases, this is not a possibility (e.g. it takes one woman nine months to make a
baby; having nine women each work for one month will not work). These tasks end up in section III.

Fig.4 Detailed flow diagram of the task assignment algorithm for one task

3.2.2 Step II: Tasks that can be split

In the second part of the program tasks that can be split between crew members are assigned. The first
step is to see if there are any crew members on board who could perform this task. If this is the case
the crew member is assigned as many hours as possible. If there are multiple crew members already
on board that could perform the task the most expensive crew member is chosen. This is an effect of
sorting the tasks from the most expensive to the cheapest. The tasks that can solely be performed by
the most expensive crew member on board are assigned first. This ensures that the crew is as cheap as
possible and the assigned tasks match their training level as best as possible. If, after assignment of
the tasks that the crew member is trained for, this crew member still has time left for other tasks, it is
much more economical for this highly paid crew member to perform tasks that are the most closely
related to his own pay level (i.e. a captain taking over a task from the first officer) than if the crew
member performs tasks that can also be done by crew members with a much lower salary (e.g.
cleaning the deck).

When the maximum number of hours is assigned to the crew member the program loops back to the
list of available crew for as long as there are hours that need to be assigned. If there is no crew
member available to perform the task, a new crew member needs to be created. This is always the
cheapest crew member that has the ability to perform the task. After all, there is no possibility that a
more expensive crew member is required to perform other tasks since the tasks are sorted most
expensive to cheapest.

3.2.3 Step III: Tasks that cannot be split

The process for tasks that cannot be split is mostly identical to that of tasks that can be split. The only
difference is that an additional update is added for the list of available crew. Since the task cannot be
split the crew member that is assigned the task must have enough hours available to perform the
whole task. From there, the same loops are run through as for section II. If multiple crew members are
required to perform the task, the whole loop is completed the required number of times before ending.
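As a compact illustration, the sketch below implements the greedy logic of Steps I-III in Python,
building on the data structures sketched in Paragraph 3.1. It is our own condensed reading of the
algorithm, not the tool's actual code; the salary table, shift length and helper names are assumptions,
and zero-hour responsibility tasks as well as non-splittable tasks longer than one shift would need
special cases that are omitted here:

    SHIFT_HOURS = 12.0  # assumed allowed working hours per crew member

    def can_do(rank, task):
        # Capability lookup: skill level must meet the required level.
        return CAPABILITIES.get(task.number, {}).get(rank, 0) >= task.skill_required

    def assign(tasks, salary):  # salary: rank -> cost per day
        crew = []               # each member: {"rank": ..., "left": hours free}
        # Step I: most expensive tasks first, i.e. those whose cheapest
        # capable rank carries the highest salary.
        order = sorted(tasks, reverse=True,
                       key=lambda t: min(salary[r] for r in RANKS if can_do(r, t)))
        for task in order:
            for _ in range(task.crew_needed):
                hours = task.hours
                while hours > 0:
                    need = 1e-9 if task.splittable else hours
                    onboard = [m for m in crew
                               if can_do(m["rank"], task) and m["left"] >= need]
                    if onboard:
                        # Steps II/III: prefer the most expensive member already
                        # on board, so his paid hours are used up first.
                        m = max(onboard, key=lambda c: salary[c["rank"]])
                    else:
                        # Otherwise create the cheapest capable crew member.
                        rank = min((r for r in RANKS if can_do(r, task)),
                                   key=salary.get)
                        m = {"rank": rank, "left": SHIFT_HOURS}
                        crew.append(m)
                    part = min(hours, m["left"])
                    m["left"] -= part
                    hours -= part
        return crew

Calling assign() once per travel phase with a salary table per rank yields the cheapest crew for that
phase; the phase requiring the largest crew is then the normative one.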

3.3 Communal tasks

Within the assignment of the tasks a distinction is made between communal tasks and other tasks.
Communal tasks are tasks that require different levels of crew members, for example a work planning
meeting. In the current conventional manning situation, this would require the chief officer, the chief
engineer and the bosun as representatives from the three different departments on board of the ship.
As long as there are crew members on board of the ship these meeting have to take place in some
form or other. This means that in any given scenario, a representative from each department, with the
required skill will be assigned this task. The separate assignment of the communal tasks takes place
before the regular tasks are assigned. This is due to the fact that these crew members need to be on
board to perform this task. By assigning the communal tasks first, a high occupation rate can be
achieved for these crew members. If these tasks were assigned later in the process, it is possible that
the crew members would have a low occupation rate, which is not the aim of the program.

3.4 Output

The final element of the tool is the output. The three most important outputs of the tool are:

• The task list per crew member
• The crew composition
• Occupation per crew member

The first main output is the list of tasks that are assigned to each crew member. This list shows every
task and all the hours of each task that have been assigned to this crew member. It can be used to

identify which tasks should be automated in order to remove a specific crew member from the ship.
The second output is the crew composition. This output shows the user how many, and which, crew
members are needed to complete all of the tasks on board. This outcome can also be used to calculate
the total crew cost, thus allowing for a quantification of the benefits of the changes. The final output is
the occupation percentage of each of the crew members. In an ideal case, each of the crew members
works a full shift, in order to make optimal use of the resources. With the removal of the tasks it is
very well possible that the resources are not utilised in an optimal way. This data shows which of the
crew members are not utilised fully and allows the user of the tool to identify possible other tasks that
need to be automated in order to reduce the crew size.
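Continuing the earlier sketch (and again under its hypothetical data structures), the third output can
be derived in one line from the assignment result:

    def occupation(crew, shift_hours=SHIFT_HOURS):
        # Percentage of the allowed working hours actually assigned.
        return [100.0 * (shift_hours - m["left"]) / shift_hours for m in crew]

Crew members with a low percentage are the natural candidates when looking for further tasks to
automate.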

4. The case study: automating navigational tasks

To explain the workings of the tool in more detail a case study has been performed. In this case the
effects of automating the navigational tasks on the crew composition are studied. For this case study a
container feeder is analysed. The typical crew on a ship this size consists of approximately 10 to 12
crew members, depending on the cargo type, route and ship operator. In this case, the original ship has
a crew that consists of 12 crew members: a bridge crew of 3, an engine room crew of 2, a deck crew of
6 and a cook. Within this case study, three scenarios are compared to each other:

1. The conventional manning situation without any changes to the workload
2. The manning situation when the navigational tasks during the normal sailing period are
automated
3. The manning situation in which the navigational tasks during arrival and departure are
automated

4.1 Tasks that are removed from the workload

For this case study it is assumed that all tasks that pertain to the navigation of the ship are no longer
required. On a conventional ship the crew performs the following navigational tasks during the
normal sailing phase: watch keeping on the bridge and look-out duties at night. The watch keeping
task consists of maintaining situational awareness, communicating with other ships, keeping track of
the ship’s route and making changes to said route if required. For this case study it is assumed that all
these tasks are taken over by a computer. Therefore, these tasks have been bundled into one
overarching task.

For the arrival and departure phase, the following tasks have been identified as part of the
navigational tasks: manoeuvring the ship, preparing the bridge for arrival or departure and watch
keeping on the bridge. During the normal sailing period, the ship does not require active steering;
most of this is done by the steering computer by way of waypoint following. This is different for the
arrival or departure. In these cases, the captain manoeuvres the ship by hand. In this case study it is
assumed that all of the above-mentioned tasks are performed by a navigation system and that the
crew does not need to perform these tasks anymore.

4.2 Results

The results for this case study can be split into two parts: an analysis of the changes to the crew
composition and an analysis of the occupation percentage of the crew members and their qualification
with respect to the tasks they perform. Both will be discussed in the subsections below. The results
show the difference between a conventional ship (the original input) and an automated ship.

4.2.1 Changes in crew size

The main aim of this paper is to analyse the effect of removing the navigational tasks from the crew’s
workload on the crew size. Table I shows the required crew members per travel phase in the current
situation. The required number of crew members is the highest for the normal sailing and loading and

unloading phases. The required crew members are nearly identical, but for the loading and unloading
an ABS (Able Bodied Seaman) is required, while for normal sailing the 12th crew member is a
deck boy. This shows that the loading and unloading phase is normative for determining the minimum
number of crew members on the ship. This also means that any changes in the workload of the crew
in the other travel phases will not have an impact on the crew size. A change in crew size would also
require a change in workload during the loading and unloading phase. However, regardless of this, it
is still possible to analyse the effects of the reduced workload on the required crew size during the
other travel phases. Fig.5 shows the changes in the crew composition between the conventional
situation and the abovementioned scenarios.

Table I: Required crew members per travel phase in scenario 1

Required number of crew per travel phase, Conventional ship
Crew member Loading & unloading Arrival & departure Normal sailing
Captain 1 1 1
Chief Engineer 1 1 1
Chief Officer 1 0 1
Second Engineer 1 0 1
Second Officer 1 1 1
Bosun 1 2 1
Cook 1 0 1
Able Bodied Seaman (ABS) 2 0 0
Ordinary Seaman (OS) 0 0 0
Deck boy (DB) 3 4 4
TOTAL 12 9 11

Fig.5: Occupation and qualification per crew member for the conventional situation (left) and the
automated situation (right)

The bottom two sub-figures of Fig.5, which show the change in crew composition for normal
sailing, reveal that the required crew drops from 11 to 8 crew members. While the tasks are removed
from the higher-ranking (e.g. the captain, the first and the second officer), and thus the more
expensive, crew members, the crew members that become superfluous in this case are mainly the
deck boys, of which three are no longer required. In addition, the second officer is no longer required.
In the top two figures, the change in crew composition for the arrival and departure phase can be
found. For this phase, the second officer is also no longer required, reducing the required crew from
9 to 8.

4.2.2 Change in crew occupation and crew qualification

A second element to investigate is the occupation of the crew during the different travel phases. The
cheapest possible solution is a solution in which the crew members are occupied for all of their
allowed working hours. Having crew members work for significantly less than their allocated working
hours means that a change in policy could be warranted to keep them occupied for a longer period of
time. Due to the set-up of the program, the crew occupation is relatively high, as each task is
continuously assigned to an available crew member. This can also be seen in Fig.5.

The qualification of the crew members is more interesting. Even during the conventional crewing
situation, a significant part of the crew members is assigned a task that could be done by a cheaper
crew member. This is mainly due to the hierarchical structure on the ship, where crew members have
different responsibilities but also perform the same jobs. A prime example of this is the bridge crew,
where the bulk of the work is the same for all three members, watch keeping on the bridge, even
though the captain is significantly overqualified to perform this task. He is on the ship due to his
responsibility tasks and administrative tasks and has time to also perform watch keeping duties.

The fact that the program does not take the hierarchical situation into account also explains why, in
the conventional manning situation, the calculated crew shows a minor deviation from practice. The model
calculates a deck crew consisting of 5 deck boys and one bosun. The capabilities of the deck crew are
mostly identical across the different ranks of crew members. The distribution is mostly dependent on
their years of experience, but a higher rank does not mean a higher-level capability in most cases.
Since these crew members all perform the same tasks, the program simply assigns the task to the
cheapest option available, in this case, the deck boy.

In the conventional situation a ship would most likely have a more distributed crew in the lower
ranks. For example, one bosun, two able bodied seamen, two ordinary seamen and one deck boy. The
current outcome of the tool does not represent that. This is due to the way the task assignment has
been set up. The difference in skill between the different members of the deck crew (ABS, OS and
DB) is small. Due to the fact that a task is always assigned to the cheapest possible crew member, the
outcome shows an overrepresentation of the number of required deck boys.

4.3 Workload distribution

To remove a specific crew member from the ship all of their tasks (i.e. the jobs that they are qualified
for) need to be removed from their workload. For this to be possible it is important to know which
tasks this workload includes. In Fig.6 all tasks for which the crew members are qualified are plotted.
In general, it can be seen that in the conventional situation the percentage of tasks that are assigned to
the cheapest crew member is higher than for the automated situation.

During the arrival and departure only one or two tasks need to be automated per crew member. These
tasks are also relatively small, and the phase only lasts for a short amount of time. It should be
possible to find solutions for these tasks.

During the normal sailing phase, there is also only a small number of tasks per crew member.
However, in this case the task clusters are much longer and encompass more different tasks within
one common name (i.e. maintenance and repair encompass many aspects).

Fig.6 Distribution of qualified workload per crew member for the conventional situation (left) and the
automated situation (right)

In Fig.6, in the top right graph it can be seen that both the captain and the chief engineer only perform
tasks for which they are overqualified. This is due to the fact that both these crew members are also
assigned the task of responsibility for the ship and for the engine room respectively. While this task
does not require any hours to be spent on it as long as no unexpected events occur, it is necessary that
someone carries the responsibility. Due to these tasks, these crew members are present on the ship
while the rest of their tasks could also be performed by lower ranking crew members.

4.4 Adhering to traditional crew roles

In the previous section it was noted that the higher-ranking crew members perform a lot of tasks that
can also be performed by lower-ranking crew members. This is due to the set-up of the program,
where task distribution is done according to who can do a task, not who normally does the task. In
this section, the crew capabilities have been restricted more strictly: crew members are only able to
perform tasks that are originally assigned to their department. This means that, for example, the task
Maintaining deck and superstructure can only be performed by the deck department and can no
longer be assigned to the officers from the bridge department, even if they would have time available
to perform the task. This additional analysis sheds light on the reason for the crew reduction. It might
be possible that the decrease in crew size is only achieved due to the rather liberal assignment of
tasks, while in practice some of these assignments might never happen. E.g. traditional captains will
not scrub the deck or paint the superstructure.

Fig.7 Workload distribution and qualification distribution for a stricter task assignment regime in
the conventional situation (left) and the automated situation (right)

Fig.7 shows the workload, crew size and qualification of the crew members in this situation. When
comparing these figures with Table I and Fig.5, several changes can be observed. The first change is
the difference in the required crew members. For the conventional situation, an extra crew member is
required for the loading and unloading phase due to the different assignment of tasks. However, this
crew member is required to perform less than 10% of the total possible workload, making it a
negligible change.

A larger change can be found for the automated ship. The deck crew is freed of a significant part of
its workload due to the automated steering. However, due to the restrictions placed on the task
assignment, they are not assigned additional tasks to perform, as most of these tasks came from the
deck department. This results in a reduction of one crew member in both cases, meaning the crew
reduction within the normal sailing phase is now significantly smaller. This reduction also means
that there are now two crew members with a very light workload.

4.5 Case study conclusion

The most important conclusion that can be drawn from the case study is that while automating the
navigational tasks leads to a smaller required crew in the normal sailing phase and the arrival and
departure phase, it does not immediately help to reduce the size of the overall crew. This is due to the
fact that the loading and unloading phase determines the required number of crew members on board
of the ship, as became clear in Table I. This means that before it is useful to automate the navigational
tasks, the required crew for loading and unloading also needs to decrease.

While there are solutions for the large number of crew members required during the loading and
unloading of the ship, they might be costly. For example, using an automated terminal could make a
significant part of the ship’s crew redundant, just like using stevedores would, but especially for
short sea ships, which call at ports frequently, both these options are likely to cost considerably more
than several additional crew members for the deck department according to industry experts. It has
been found that the cost of stevedores in ports drives companies to sail with larger crews than strictly
necessary, as a cost reduction measure. This practice, especially in combination with the findings in
this case study that the loading and unloading phase is so important to the size of the crew, could form
a significant obstacle towards low-manned and unmanned ships.

The second part of the analysis, where a more traditional distribution of tasks is used, shows the
importance of letting go of traditional roles when moving towards low-manned, unmanned and
autonomous ships. When choosing to hold on to the traditional roles, the difference between the
conventional situation and the automated situation is significantly smaller than when these traditional
roles are ignored.

5. Discussion

The case study performed shows that the crew analysis tool can be used to estimate the changes in
crew composition that can occur. However, there are a few things that need to be taken
into account. The first thing to note is that the tool is only as good as its input. For the case study a
rather coarse distribution of tasks has been used. A finer distribution was not necessary since the tasks
that were removed from the workload were rather big themselves. If the effects of removing a smaller
task from the workload are to be identified, due to e.g. the installation of a main engine that requires
less maintenance, the crew tasks will need to be refined.

One of the most interesting simplifications in this case is the assumption that with the automation of
the navigational tasks, the communication with other ships is no longer required. For this case study,
this is assumed to be the case, however, in practice this might not be so simple. As long as there are
manned ships that can come into contact with the automated ship, communication is required.
Currently, this communication takes place during watch keeping on the bridge. If the navigational
tasks can be automated and the communication task remains it could be the case that a crew member
is required to pay attention to communication from other ships. This would severely limit this crew
member in the tasks he is capable of performing and might also limit the effects of automating the
navigational tasks.

Paragraph 4.4 has shed an interesting light on the task assignment algorithm that has been used in this
paper. By assuming that each crew member will perform any and all tasks that they are capable of,
large reductions in crew can be realised. However, in real life this method of assigning tasks might

lead to some resistance from the crew. In that case, changes in the training of the crew might be
required to allow for a distribution of tasks that both matches the skill each crew member has and
allows for an even distribution of the workload.

The analysis method that has been created in the form of the crew analysis tool is an invaluable aid in
determining the effects of implementing specific automation solutions or procedural changes on board
of ships. The tool provides information on the changes in crew composition that are important for a
further analysis of the effects and potential of these changes. The tool can be used as a set-up towards
a further economic analysis to assess not only the possibility of removing crew members from the ship
but also the economic viability of doing so.

At this point the input requirements are taken to be constants. However, this might not always be the
case, some tasks take longer than expected while others might be done quicker than scheduled. This
could be taken into account by adding some form of uncertainty to the length of the tasks where this
could be the case. This would make for a more truthful representation of the current situation.
However, as this has not been attempted in this case study, it is not known if the effects that are
achieved by adding the uncertainty are enough to cause significant changes to the results.

Acknowledgements

We would like to thank the industry experts who have graciously agreed to answer all the questions
we had concerning work and life at sea: Alco Weeke from STC Rotterdam, and Harmen van der Ende
and his colleagues from Maritiem Instituut Willem Barentsz. We would also like to thank JR
Shipping for allowing us to travel on one of their vessels, and the crew of the MS Endurance for
answering all our questions.

References

BERTRAM, V. (2002), Technologies for Low-Crew / No-Crew Ships, Forum Captain Computer,
Hamburg

BESER, F.; YILDRIM, T. (2018), COLREGS Based Path Planning and Bearing Only Obstacle
Avoidance for Autonomous Unmanned Surface Vehicles, 8th Int. Congr. Information and
Communication Technology (ICICT-2018), pp.633-640

BHATTATHIRI, N. (2017), 10 Things to Consider While Using Auto-Pilot System on Ships, https://www.marineinsight.com/marine-navigation/10-things-to-consider-while-using-auto-pilot-system-on-ships/

BURMEISTER, H.; BRUHN, W.C.; RØDSETH, Ø.; PORATHE, T. (2014), Can unmanned ships
improve navigational safety? Proc. Transport Research Arena

FOSSEN, T.I. (2000), A Survey on Nonlinear Ship Control: From Theory To Practice, IFAC
Manoeuvring and Control of Marine Craft, 33(21), pp.1-16

KOOIJ, C.; LOONSTIJN, M.; HEKKENBERG, R.G.; VISSER, K. (2018), Towards autonomous
shipping: operational challenges of unmanned short sea cargo vessels, Int. Maritime Design Conf.

KUWATA, Y.; WOLF, M.T.; ZARZHITSKY, D.; HUNTSBERGER, T.L. (2014), Safe maritime
autonomous navigation with COLREGS, using velocity obstacles, IEEE J. Oceanic Eng. 39(1),
pp.110-119

LLOYDS REGISTER; QINETIQ; UNIVERSITY OF SOUTHHAMPTON (2017), Global Marine Technology Trends 2030 - Autonomous Systems

PERERA, L.P. (2018), Autonomous Ship Navigation Under Deep Learning and the Challenges in
COLREGs, Volume 11B: Honoring Symp. for Professor Carlos Guedes Soares on Marine Technology
and Ocean Engineering

POIKONEN, J.; HYVÖNEN, M.; KOLU, A.; JOKELA, T.; TISSARI, J.; PAASIO, A. (2017),
Technologies for Marine Situational Awareness and Autonomous Navigation, AAWA

ROBERTS, G.N.; SUTTON, R.; ZIRILLI, A.; TIANO, A. (2003), Intelligent ship autopilots - A
historical perspective, Mechatronics, 13(10), pp.1091-1103

VAN DIGGELEN, J.; JANSSEN, J.; VAN DEN TOL, W. (2016), Crew Design Tool, Int. Naval Eng.
Conf.

WATSON, D.G.M. (1998), Practical Ship Design, Vol.1, Elsevier

OpenCalc - An Open Source Programming Framework for Engineering
Stephen Hollister, New Wave Systems Inc., Jamestown/USA, shollist@newavesys.com

Abstract

OpenCalc is an open source framework that allows users and programmers in engineering to create
cross-industry interactive and automated design solutions based on common batch calculation
interfaces and XML data files. While OpenCalc is open source and free, developers can add in their
own proprietary components that allow users, software developers, students, and researchers to
independently work together in complex markets not properly supported by commercial programs. This
paper explains the basic parts with application to interactive CAD programs and multi-discipline
optimization (MDO) processing of single and multiple wrapped calculation components. OpenCalc is
a flexible and agile framework that can adapt to any design optimization, lifecycle management, or
digital twin methodology.

1. Introduction

The interactive application (“app”) model of computer programming, where independent developers
create and link together user interfaces, calculations, and proprietary data definition and file formats, is
not adequate for the evolving demands of users and subject matter experts (SMEs) who want automated,
cross-industry calculation and optimization tools. Traditional interactive app control prevents
automated connection and processing of calculations from different sources - a requirement for multi-
discipline engineering and optimization (MDE/MDO). The evolution of the internet increases the
demand for collaboration between all engineers and designers, especially in the conceptual and
preliminary design phases. This is hindered by independent and proprietary app development with
incompatible data definitions and two-step neutral file conversions.

OpenCalc offers an open source framework that solves these problems. Apps are split into three
separately developed and tested external programming objects: batch calculations or Calc Engines
(CEs), User Interface Frameworks (UIFs), and layered and cross-industry open source data file
definitions (XML) with input/output (I/O) code that can store any program data or structure in a
common file. This split organization better fits the needs of all stakeholders in the software development
world: subject matter experts (SMEs), computer scientists, and industries. OpenCalc allows these
groups to work independently on calculations and data definitions that can be later combined by users
to create completely new tools with no extra programming. New solutions can no longer be the domain
of competing software companies that write applications or suites for “users.” Engineers will become
creators of new solutions and programming will become scripting at a much higher level using common
calculation components and data definitions.

OpenCalc components are independent objects that can be combined to meet any analysis approach,
whether it’s multi-objective programming, set-based design, critical path methods, digital twin or
anything else. These driving solution systems (UIFs) can be either interactive hands-on or automated
and batch. The low level calc engines and independent XML data files can be used for any higher-level
processing methodology with no reprogramming or custom containerization. In general, calc engines
and XML data definitions are created once with a common and open source programming template that
allows many user and UIF combinations with no additional coding.

2. Historical Development

OpenCalc evolved from the author’s work in computer-aided ship design software development over
the last 45 years, from card programming on mainframes to PCs and DOS, then Windows and now the
internet. Many calculations were lost at each change in technology, especially when DOS evolved into
Windows. Calculations might stay the same, but the cost of rebuilding new applications with new user

interfaces was and is too expensive, thereby losing much good software. The author created ‘The
Nautilus System’ in DOS as a spiral ship design system with a modular calculation approach for a
common user interface with graphical output and a common design database. However, only the
NURB-based hull design and fairing CAD software (ProSurf) justified conversion to Windows. Useful
code, such as longitudinal strength calculations, Wageningen propeller optimization and a Velocity
Prediction Program (VPP) for sailboats, never got updated to Windows-based user interfaces. Maritime
and other industries can’t support these technological changes with the traditional and costly all-in-one
application approach to computer software development. This is a market and cost issue as much as it
is a technology issue.

Hollister (1996) combined modified Lackenby hull variation, hydrostatics, and Holtrop resistance
calculations into one DOS program to search for an optimum hull shape starting from a parent hull.
This required all the source code and a custom user interface front end that needed a lot of development
and testing. That this could be done was of no real benefit since the market would not support the cost.
Also, that was when DOS was replaced by Windows and the recoding cost was no longer viable for the
new technology and size of market. Now, twenty years later, the internet is causing further disruptions
in software development. Some now believe that the solution to ship design is to build custom
calculations inside of a general interactive CAD program, but the goal is not to hope for one CAD
program to win the market battle. A different approach was sought.

Hollister (2014) described tests that showed how one can launch large reusable calculations from a
spreadsheet user interface in a general way using comma separated value (CSV) files. This turned into
SNAME Project 114 and culminated in the first release of the software at the World Maritime
Technology Conference (WMTC) - Hollister (2015). The second release of the software occurred at the
SNAME Maritime Convention (SMC 2016) in Seattle where multiple wrapped calc engines (recreating
the author’s work in 1996 on hull variation and optimization) were shown using a more general XML
database format for all program variables and structures - Hollister (2016). A status paper was given in
Hollister (2017). A third revision was in Hollister (2018). It generalized the basic components to create
an open source framework that can be applied to all areas of programming. Three underlying objects
were formalized, documented, and separated from Project 114. These programming components
became the foundation of the Tri-Split Programming Architecture (TSPA) (www.trisplit.com). The
maritime components were built on top of TSPA and became the newly named OpenCalc System. This
hierarchical structure allows for compatibility and automation of calculations from many different
industries.

3. Splitting the App

The traditional programming model in computer science is the interactive app where one software
developer or team creates the user interface, calculations, and the data definition and file format.
Separate calculations from multiple apps cannot be automated and the data must go through a two-stage
filter process to and from a neutral file format, Fig.1.

Fig.1: Traditional programming architecture

TSPA splits this structure into three separate “external” programming objects: User Interface
Frameworks (UIFs), Calculation “Engines” (CEs), and open source hierarchical levels of cross-industry
and specific-industry defined variables and data structures (XML) that can be defined separately by
computer scientists, subject matter experts, and industries.

These new external (as opposed to internal class-based) programming objects can be mixed and
matched in many new and creative ways by users and software developers, and each component can be
located anywhere on the web. This will be explained in more detail later in the paper.

Fig.2: New Tri-Split external programming objects

As illustrated in Fig.2, not only can one calc engine have more than one user interface, but one user
interface can launch many different calc engines. The glue holding these pieces together is common
data and structures built using open source XML files. Note that the UIFs can be either interactive
programs or batch command files. They can also be open source or proprietary Process Integration and
Design Optimization (PIDO) systems that meet specific needs.

The fundamental change defined by this work involves splitting traditional interactive “apps” into three
separately developed and tested parts. Calculations and data processing tools can be written separately
by subject matter experts (SMEs), tested once, have long lives, and be used and reused for many
interactive and automated applications.

OpenCalc currently offers a UIF using an Excel spreadsheet with open source VBA code that can launch
any external calc engine. The spreadsheet prompts for all input, launches the external CE or sequenced
CEs, reads the results back in, and then displays, prints, and graphs the results. This UIF works with
any CE without any changes to the spreadsheet or the calc engine, i.e., no extra programming or
containerization process.

The spreadsheet launches the CEs in the background and the user never knows that the calculations are
done externally from the spreadsheet. There is no delay. Results are displayed immediately even for a
CE as large as a full damaged stability calculation. The benefit is that the user can now add in additional
custom calculations, say for a specific ABS or USCG stability rule, and save the changes to a new
spreadsheet. Better yet, each rule could be a separate calc engine that a user wraps with a stability calc
engine to search for a hull that meets specifically-selected rules. That is how separate calculation CEs
can be mixed and matched by users to solve custom problems without writing additional code. A key
element in this process is the use of common data and data structure formats using XML. Industries
need to exert influence over their data to enable compatible and low-cost tools.

The next sections discuss each of these three external TSPA objects in more detail.

4. Calc Engines (CE)

The first TSPA object type is a "Calc Engine" (CE) executable batch program that has no user interface.
It is a stand-alone program that reads an XML text file of input, processes it, and writes the results out
to the same or different text file.

Calc engines can be written in any computer language, validated separately by subject experts, used for
many purposes, located anywhere on the web, and have a long life not affected by computer technology
changes. It's just a simple EXE (or other type of executable program) batch file that reads and writes a
text file and acts like a subroutine - one that is tested and validated and can be used automatically in
many ways. The following UIF section will describe various classes of user interface program frame-
works (UIFs) that can be constructed to launch any calc engine automatically.

A calc engine is like well-known Unix filters such as "grep" (a batch program to search for a string in
a text file), except that the calculations can be as complex as computational fluid dynamics (CFD)
analyses and support any program variables and data structures - not just strings in text files. Also, calc
engines do not just filter out data from one text file to another. They treat the text files as random-access
databases of any program variable or data structure. This is described in the section on the layered XML
definition of data.

To turn separate calculations into reusable stand-alone tools for any XML data file, TSPA requires that
CEs include an XML text file that defines all input and output data (like a subroutine argument list)
using the TSPA XML schema for variables and data structures. Fig.3 shows a TSPA/XML subroutine
argument list definition for a simple calc engine (Add2.exe) that adds variable ‘A’ to variable ‘B’ to
produce variable ‘C’. (This is like a “Hello World” example for TSPA.) The UIF spreadsheet and macro
code that comes with this system can read this definition file to prompt for all input and show all output
without knowing any more about the external calc engine. The CE definition file allows any UIF
program to use that calc engine without any custom programming or containerization.

Fig.3: XML CE/subroutine definition file

These variable (VAR) definitions will be discussed in the XML section.

Like Unix filters, batch CE programs can be launched from the command line, a script file, or from
inside other programs and passed “arguments” just like a subroutine. For this Add2.exe calc engine, the
command line string that would start the batch program looks like this:

Add2 (A,B,C) [io=MyDataFile.xml]

Note that all batch programs (like Unix filters) can include string data after the name of the executable
file. That string is passed to the “main” routine of the program for processing. For calc engines, the
arguments are put in parentheses and the options are put in brackets.

The open source I/O code of TSPA/OpenCalc reads and parses the Add2 argument string to know what
variables to get from the XML file. Also, like standard subroutines, arguments are passed by position,

so one could launch Add2 with “(X,Y,Z)” and the code would read the input variables (X,Y) from the
“io” file (MyDataFile.xml) and assign them to “A” and “B” in the calc engine. Writing out the “C”
value would go into the “Z” variable in the file, creating one if it didn’t already exist. If the argument
list becomes long, which is not uncommon, it can be put into a text file and referred to like this:

Add2 [args=add2args.txt io=MyDataFile.xml]

If no argument list is given, the CE will use its own variable names to access data in the XML data file.
That might be useful for common data names and carefully designed and coordinated CEs.
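The parsing behaviour just described can be approximated in a few lines of Python. This is only a
rough sketch of what the open source I/O code does, under our own function names; it is not the
TSPA code itself:

    import re

    def parse_ce_command(argstr):
        # Positional arguments sit in parentheses, options in brackets.
        m_args = re.search(r"\(([^)]*)\)", argstr)
        m_opts = re.search(r"\[([^\]]*)\]", argstr)
        args = [a.strip() for a in m_args.group(1).split(",")] if m_args else []
        opts = dict(kv.split("=", 1)
                    for kv in m_opts.group(1).split()) if m_opts else {}
        return args, opts

    # The CE binds the file's variable names positionally to its own names:
    file_names, options = parse_ce_command("Add2 (X,Y,Z) [io=MyDataFile.xml]")
    binding = dict(zip(("A", "B", "C"), file_names))
    print(binding, options)  # {'A': 'X', 'B': 'Y', 'C': 'Z'} {'io': 'MyDataFile.xml'}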

TSPA allows calc engines that define any complex sequence of input and output variables and structures
equivalent to any programming subroutine. A TSPA calc engine is an external executable batch
program that works just like an internal subroutine with the XML data file acting as memory. However,
calc engines will typically perform more complex tasks than adding two numbers. For example, the
included Savitsky Planing Hull Resistance calc engine has about 50 inputs and 100 outputs. With
modern computer speeds and fast static data file storage, the overhead is minimal, and the approach is
more flexible than connecting to a dynamic link library (DLL). The benefits of unlinking, open and
flexible definitions, separate development and validation, and generality offer many more possibilities.

TSPA/OpenCalc provides open source tools, templates, and data schemas to create, read, and write any
program variable and data structure. To create a new CE, a subject-expert programmer opens a TSPA calc engine program template in the programming language of their choice (more languages will be supported over time) that includes all the necessary I/O code, adds in the calculation or processing routines, builds the program, and then defines the input/output XML text file subroutine argument list. The CE can then be used immediately by anyone on the web through a common UIF framework.

There is no reason to combine and bind one set of calculations or data processing to one custom user
interface or process. There is also no reason to require separate custom containerization of apps and
data for each new PIDO or UIF methodology that comes along. That increases complexity, validation
problems, computer and content expert interaction issues, and cost. It also makes those calculations less
available for other uses. Even if the calculations were isolated in software libraries, their attachment to
and validation with new custom user interfaces is not justified, especially in smaller markets.

Many subject experts build apps using spreadsheets, but the calculations are tied to the cell locations of
the user interface and cannot be combined with other calculations automatically. Comma Separated
Value (CSV) files can be used to transfer data to other applications, but users still need to manually
transfer those files and agree upon common data definitions with other users. In addition, Panko (1998, 2008) has shown that 88% of all spreadsheets contain errors. Many find that it is not worth
the time and effort to understand spreadsheet formulas to be able to modify or use the calculations for
other purposes. TSPA/OpenCalc provides subject matter experts with a general and flexible model for
non-trivial spreadsheet applications with separate and reusable calculations.

Calc engines can also be combined or “wrapped” together with a script or command file to create a new
calc engine. This was demonstrated in Hollister (2016), which showed the combination and sequencing of
three separate calc engines: a modified Lackenby hull variation CE, a hydrostatics and stability CE, and
a Holtrop ship resistance CE. This was done with the simple batch command file shown in Fig.4.

Fig.4: Batch file wrapped sequence of three separate calc engines

This batch file works just like a regular calc engine with the combined argument and options list passed
in as “%2.” The individual calc engines require no changes and they all run off data from a common
XML database file of variables and data structures. Users can combine calc engines from any source
(and web location) into a batch file like this or use a more powerful scripting language with additional
custom code. This defines a programmable way to create new calc engines using combinations of
existing calc engines without having the source code or adding more code. Note that this also allows
for concurrent processing of tasks.
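
In script form, such a wrapper can be as small as the sketch below; the executable names follow the three CEs named above, while the launch convention is an assumption carried over from the Add2 illustration:

import subprocess

def run_h3(io_file):
    # Behaves like a new calc engine: runs three existing CEs in
    # sequence against one shared XML data file, so the output of
    # each step becomes the input of the next automatically.
    # Independent steps could equally be dispatched concurrently.
    for ce in ("LackenbyVary", "Hydrostatics", "HoltropResistance"):
        subprocess.run([ce, f"[io={io_file}]"], check=True)

run_h3("MyDataFile.xml")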

You can also have a calc engine that programmatically launches other CEs. An example of this could
be a static or dynamic Free-Body Diagram (FBD) solver that would read a list of calc engines - one for
each force and moment. It would then launch each one, resolve the system and produce the results. This
FBD CE solver could be launched by a web-based UIF front end and the individual force and moment
calc engines could be located anywhere on the web and provided by different people. This is a way for
a diverse, web-based group to work on a common problem.

The sailboat Velocity Prediction Program (VPP) analysis is a classic FBD solution process, and this
FBD arrangement would allow for new and varied force and moment calc engines to be swapped in
with no additional programming. It would be a simple task to plug in a new sail force or keel lift and
drag calc engine. This system could even evolve to using CFD for the hull resistance force and adding
in hull generation or variation for an automated optimization loop. The author is currently investigating
the conversion of the OpenFOAM CFD program into a calc engine format.
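
A toy sketch of that balancing idea in VPP terms is given below, with the external force and moment CE launches stubbed as local functions; all numbers, names, and the bisection scheme are invented for illustration:

def sail_heeling_moment(heel_deg):      # stand-in for a sail force CE
    return 1000.0 * (1.0 - heel_deg / 60.0)

def righting_moment(heel_deg):          # stand-in for a hydrostatics CE
    return 40.0 * heel_deg

def solve_equilibrium(lo=0.0, hi=60.0, tol=1e-6):
    # Bisect on heel angle until the heeling and righting moments
    # balance; a real FBD solver CE would launch each moment CE
    # against the shared XML file instead of calling a function.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sail_heeling_moment(mid) > righting_moment(mid):
            lo = mid                    # still over-powered: heel further
        else:
            hi = mid
    return 0.5 * (lo + hi)

print("equilibrium heel: %.2f deg" % solve_equilibrium())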

The author once received a call from a design office that created a proprietary sailboat hull generator
macro for a CAD program and they wanted to automate the connection to a meshing routine and then
to a CFD program. They found that the cost of custom development would be too great. With OpenCalc,
however, their hull generator CE would be simple to create and they could plug it into the above system
with no additional programming. The other calc engines might not be free, but they would be much
more affordable than a custom-built collection.

TSPA will provide program templates for different languages that include all the necessary source code
for data file I/O. The code also includes a large set of Scalable Vector Graphics (SVG) routines for calc
engines to output graphics. UIFs could display or animate the results for one or more automated
calculations. All one needs to do to create a new calc engine is to add in the calculation routines,
compile, link, and test. The CE can be tested with simple XML data files and used immediately for
many different purposes by users anywhere on the web.

5. Open and Layered Data Schemas (XML)

The second and key TSPA object type is open source text data files built in layers of XML schemas.
This is the glue that ties everything together. The days are over when independent software developers
define and control industry data and structures in proprietary file formats that work with only one
program or suite and require two-step and bi-directional neutral file conversions. The Web is built on
separate and compatible programs that work off a common HTML data file definition, and HTML has
now been defined with XML and is governed by the World Wide Web Consortium (W3C) and the International Organization for Standardization (ISO). Web programmers do not define their own data file
formats. HTML defines a file definition for interactive data (text, graphics, audio, video) with display
and format information, while TSPA defines a layered XML data file framework for any program
variables and structures. It’s no longer necessary to have a separate custom data file definition, format,
and I/O routines for each application. TSPA offers an open source way to store that data in a common
XML file. These I/O routines replace the Document Object Model (DOM) style of XML file access, and they can be used stand-alone even if one does not adopt any other part of OpenCalc.

This single TSPA/XML text file data definition acts like a general database for program variables and
structures for all programs. There is no need for separate data files and two-step neutral file format
conversion. Industries need to assert some level of influence to ensure better compatibility and data flow between calculations from different sources. In naval architecture, that might involve naming and
organization recommendations for things like different hull definitions, weights, and operating
conditions. TSPA offers open source I/O code in many languages so that software developers can access
any program variable or data structure by name from a common XML file without knowing anything
about the file format, and one text data file can contain data from all programs. When separate calc
engines are launched in sequence, they can pass data via this common data file automatically.

XML is not a file format in a traditional sense. It is a way to surround data with opening and closing
tags in a text file. TSPA/XML wraps each program variable and data structure in tags so they can be
put into the file in any order or combination. Access is made by name and not position, so one might
think of it as program memory in a file. Programmers don’t know where a variable is located in memory.
They just use the data by name. It works the same way for TSPA/XML, which reads in the entire XML
file as a string, builds a list of tags and variables, and then offers routines to get/put any variable by
name. At any time, the programmer can write that entire string back to the file without affecting any other tag.
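
A minimal sketch of that access pattern using Python's standard library is shown below; the real open source I/O code, its unit handling, and its tag layout are richer, and only the <VAR> tag name follows the text:

import xml.etree.ElementTree as ET

class XmlMemory:
    # Toy "program memory in a file": read the whole XML file once,
    # then get/put variables by name and write the file back out.
    def __init__(self, path):
        self.path = path
        self.root = ET.parse(path).getroot()

    def get(self, name, cast=float):
        node = self.root.find(f".//VAR[@NAME='{name}']")
        return cast(node.text)

    def put(self, name, value):
        node = self.root.find(f".//VAR[@NAME='{name}']")
        if node is None:                 # create the variable if absent
            node = ET.SubElement(self.root, "VAR", NAME=name)
        node.text = str(value)

    def save(self):
        ET.ElementTree(self.root).write(self.path)

# An Add2-style calc engine then reduces to three lines:
# mem = XmlMemory("MyDataFile.xml")
# mem.put("C", mem.get("A") + mem.get("B"))
# mem.save()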

TSPA data schemas are built in layers to define everything from simple variables to complex geometric
data structures. Since XML is a text markup language that surrounds data with tags that are accessed
by name, anyone can add in their own custom data tags without affecting any program or the open
TSPA I/O code. This is a very important concept that allows both common and custom data content at
the same time in a single file using the standard I/O code. Custom tag data just gets passed along using
the common tag format. It’s up to each CE to use or ignore the data. This is a key element that should
satisfy users, industries, and independent software developers.

XML data definitions are defined by schemas, not file organizations or formats. Text files are just
streams of characters and in many cases, the tag data can be in any order and broken into any number
of separate files to fit the needs of a database or solution process. This is quite different than the
traditional method where independent programmers define specific data files and line formats for each
program. Data in TSPA form is more like a random-access database definition than a sequential
collection of data useful for only one program. Fig.5 shows how data definitions form layers that are
built on top of each other.

Fig.5: Layered XML schema example

The first layer of TSPA data definition builds on top of a base XML tag layer to define any program
variable, array, matrix, or data structure - linked lists, tables, tree structures, network organizations - to
match any that can be created in a computer program. This makes it a simple matter to save and retrieve
any variable and data structure organization to/from a file. This variable tag layer also allows pointer
and link variables.

Fig.6 shows a small TSPA XML file that contains four variable <VAR> tags: two single variables and two arrays. The order of the <VAR> tags does not matter. Notice that all variables can have units defined, and the I/O routines will translate units on the fly. This allows different units for each
component: CE, XML file, and UIF framework. The CE units are chosen by the programmer, the XML
file/database units by the organization, and the UIF units by the user.

Fig.6: Variable and array definitions

TSPA also allows for common program “structure” definitions. Structures collect a group of variables
together as a single entity. A structure can be used as a single variable or as an array. Fig.7 shows a
variable (HULL2) that has a structure data type (DT) called STATION2 in the NAME (Naval Architecture and Marine Engineering) namespace, consisting of the variables that define one station of a hull's station definition. In this example, HULL2 consists of two stations with offset points and
knuckle/curve indicators (IND). Each station is surrounded by <I> … </I> index tags. This definition
contains enough information to allow a program to use the open I/O code to get or put any number or
array using a name like “HULL2[1].Z[3]”. This is the 4th Z-value of the second station of HULL2
(zero-based arrays). Note that the variable HULL2 defines a 100 ft long box barge with two stations -
one at each end. Programmers no longer need to do any file I/O programming.

Fig.7: Structure definition for a hull with two stations
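
To make this concrete, a minimal sketch of what such a two-station file could contain is given below; the tag names (VAR, DT, I, IND) follow the description above, while the exact attribute spelling, the units, the offset values, and the semicolon array separator are illustrative assumptions:

<VAR NAME="HULL2" DT="NAME:STATION2">
  <I>  <!-- station 0, at X = 0 ft -->
    <X>0.0</X>
    <Y>0.0;0.0;10.0;10.0</Y>
    <Z>0.0;10.0;10.0;0.0</Z>
    <IND>0;0;0;0</IND>  <!-- 0 = knuckle/polyline point -->
  </I>
  <I>  <!-- station 1, at X = 100 ft -->
    <X>100.0</X>
    <Y>0.0;0.0;10.0;10.0</Y>
    <Z>0.0;10.0;10.0;0.0</Z>
    <IND>0;0;0;0</IND>
  </I>
</VAR>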

However, instead of defining a polyline (or path) definition specifically for hull stations, this definition
could use a general or cross-industry definition of a path curve. It is the goal of TSPA/OpenCalc to
define and submit to W3C/ISO a common cross-industry definition for all geometry, from base entities
like points, curves, and surfaces, to parts and assemblies, to relationships like bonded edges, unions,
intersections, and joints. All the files will be open XML schemas and open source code will be provided
for all I/O. Users and software developers may add in their own custom tags and the I/O code will be
able to access them by name without affecting any other application. This will eliminate all two-step
neutral file geometry conversions and custom data containers.

Programmers modelling complex geometry should not be defining their own polyline definitions as
incompatible sub-tags of custom XML schemas. A polyline tag should be part of a common cross-
industry XML schema (namespace) that can stand by itself. A user who needs to work with polyline
data should only have to learn that one simple tag definition. Software developers in different industries will no longer need to reinvent the wheel with data file formats for things like polylines, curves, or surfaces that are usable by only one program.

Fig.8 shows a layered variation of the structure shown in Fig.7, where the 2D stations are defined using
a lower level “GEOM” namespace structure DT called “PATH2D.” This is a general 2D geometric
object that defines a combination polyline/curve path using indicators (zero indicates a knuckle or
polyline point rather than a curve-fit point).

Fig.8: Layered structure definition for a hull

Note that the offsets (OFF) of each station point (P) to a separate named variable called "Box2010", which is a cross-industry definition of any 2D path shape. Since both stations are the same, they can
point to the same definition. A calc engine could get the ‘Y’ array of the second station by passing the
string “HULL1[1].OFF.Y” to the open source I/O routines. The routine would return a set of 4 doubles:
0.0;0.0;10.0;10.0.
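
A toy resolver for such dotted paths is sketched below with Python's standard library, over a file modeled on the description of Fig.8; the tag spellings, the pointer attribute P, and the ';' separator are assumptions:

import re
import xml.etree.ElementTree as ET

XMLDATA = """
<TSPA>
  <VAR NAME="Box2010" DT="GEOM:PATH2D">
    <Y>0.0;0.0;10.0;10.0</Y>
    <Z>0.0;10.0;10.0;0.0</Z>
  </VAR>
  <VAR NAME="HULL1" DT="NAME:STATION2">
    <I><X>0.0</X><OFF P="Box2010"/></I>
    <I><X>100.0</X><OFF P="Box2010"/></I>
  </VAR>
</TSPA>
"""

def get(root, path):
    # Resolve a dotted path such as 'HULL1[1].OFF.Y' to a list of
    # floats, following <I> index tags and pointer (P) references.
    head, *rest = path.split(".")
    name, idx = re.match(r"(\w+)(?:\[(\d+)\])?$", head).groups()
    node = root.find(f".//VAR[@NAME='{name}']")
    if idx is not None:
        node = node.findall("I")[int(idx)]   # zero-based station index
    for field in rest:
        child = node.find(field)
        if child is not None and "P" in child.attrib:
            node = root.find(f".//VAR[@NAME='{child.attrib['P']}']")
        else:
            node = child
    return [float(v) for v in node.text.split(";")]

root = ET.fromstring(XMLDATA)
print(get(root, "HULL1[1].OFF.Y"))           # -> [0.0, 0.0, 10.0, 10.0]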

This allows other, non-naval-architecture programs to access that data, because they know what a PATH2D tag is even if they do not know what NAME:STATION2 Y and Z arrays are. One could have a CAD program read, edit and store PATH2D tag data without knowing that it has anything to do with a hull. Common cross-industry data definitions in this form are much more flexible than the use of neutral files.

Higher TSPA schema layers can be defined by industries that want to take control over their variables,
data structures, and geometries so that subject experts can create compatible calc engines and
combinations. Since many geometries can be built on top of the lower level geometry tags, calc engines
developed for a lower level tag will work for that tag when it’s used in any higher-level schema
definition in any industry. Low level calc engines will be able to pull out and operate on the lower level
data tags in any higher-level XML data structure file. One could write a calc engine that is applicable
(sold) to all areas of engineering, thereby lowering costs and increasing market size.

For example, a general calc engine that creates a 2D airfoil section (Path2D) curve could be used as
part of a larger airplane XML definition or a larger maritime hydrofoil definition. The open TSPA I/O code allows any calc engine to find the tag variables it needs from any XML file, process the tag
information, and put it back into that same file without ever knowing about or affecting any other tag
data. A 2D airfoil variation calc engine could be combined with a 3D airplane analysis calc engine or a
3D hydrofoil analysis calc engine to automatically search for the best foil sections to use. Up until now,
users have been happy enough to manually connect data between separate interactive apps, but that will
fundamentally not work when the goal is to automatically search through thousands or millions of shape
variations. Interactive and manual apps are not the only programming need anymore. Separate calc
engines are needed along with common industry data definitions.

6. User Interface Frameworks (UIF)

The third TSPA/OpenCalc object type is a "User Interface Framework" (UIF) program or suite that
can launch one or more calc engines without writing any code. Most processing falls into specific
classes that can be generalized and offered for large numbers of calc engines. Each one of these can be
a separate UIF that may be open source, proprietary, or fee-based. The UIF is the component that binds
everything together into an interactive or automated process or service.

The most common form of processing is launching and analysing any calc engine. OpenCalc currently
provides an open-source (macro) UIF using Excel that can launch any calc engine that performs a "black
box” calculation of Y=f(X). A very large class of calc engines takes a set of input variables (X),
processes them, and then writes out the resultant numeric variables (Y) back to the same data file. The
calculations could be as simple as C = A + B [C=f(A,B)], or they could be as complex as a finite element
analysis with forces and a large mesh as input and stresses as output. Note that a calc engine (large or
small) does just one set of calculations for one set of input. This is a key point. The OpenCalc UIF front
end provides all the data input, looping, display, printing, graphing, searching, and visualization. The
UIF does this for any calc engine of this class without coding by reading the XML list of inputs and
outputs of each CE (see Fig.3). It can launch one calc engine or it can start a batch (BAT) or command
(COM) file that launches many different calc engines, automatically feeding the results of one
calculation into the next using the common XML data file.
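
The skeleton of such a generic front end fits in a few lines; below is a sketch in Python rather than Excel macro code, in which the INPUT/OUTPUT tag names of the definition file are assumptions based on the description of Fig.3:

import subprocess
import xml.etree.ElementTree as ET

def run_black_box(defn_file, exe, io_file="MyDataFile.xml"):
    # Generic Y=f(X) launcher: everything it knows about the calc
    # engine comes from the CE's XML definition file.
    defn = ET.parse(defn_file).getroot()
    data = ET.Element("TSPA")
    for var in defn.findall(".//INPUT/VAR"):        # prompt for each input
        v = ET.SubElement(data, "VAR", NAME=var.get("NAME"))
        v.text = input(var.get("NAME") + " = ")
    ET.ElementTree(data).write(io_file)
    subprocess.run([exe, f"[io={io_file}]"], check=True)
    result = ET.parse(io_file).getroot()
    for var in defn.findall(".//OUTPUT/VAR"):       # display each output
        name = var.get("NAME")
        print(name, "=", result.find(f".//VAR[@NAME='{name}']").text)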

Fig.9 shows two Excel sheets of this UIF analysis for the wrapped “H3” calc engine (see Fig.4) that
combines three separate and independent calc engines: hull variation, hydrostatics, and Holtrop
resistance.

Fig.9: Input sheet and contour graph of EHP over a range of LWL and BWL at a constant displacement,
LCB, Cp and Cm.

The open macro Excel UIF performed this hull variation and calculation search using a combination of
base CEs with no extra programming or containerization. A user can mix and match CEs from multiple
sources to quickly create completely new analyses.

A variation of this UIF could take two different data sets (say, for two different hulls) and run them
through the same sequence of calc engines. Printed and plotted results could be compared side-by-side
or on the same graph. Yet another variation UIF might run one data set through two different sequences
of calculations. A calc engine writer could compare the results of an old version to a new version without
custom programming or piecing together output from two different runs.

A different application of OpenCalc might create sensor calc engines that are located anywhere on the
web - even on cell phones with GPS or other attached sensors. These CEs would output the data to an
XML file in the cloud that is monitored by a UIF that collects data from many sensors and
displays/analyses the information. This could form the basic components of an Internet of Things (IoT)
system.

As businesses combine and organize complete definitions of product and lifetime data, Digital Twin
and Product Lifecycle Management (PLM) UIF systems can be built with everyone contributing
different types and levels of calc engines and report generators. Generalized CE tools and independent
data schemas can be created and integrated that would work for many large-scale systems. These
systems can grow organically from the bottom up rather than being pre-defined from the top down. In many cases, pre-defined means "analysis paralysis" or simply wrong. Bottom-up generated systems can turn
out to be wrong, but restructuring from the bottom up ends up being easier and more flexible than
restructuring from the top down. For general industry and cross-industry applications, however, top-
down is not even a possibility.

The currently offered UIF will expand the contour plotting to include the ability to search for the optimum of any user-defined merit function. Others could build on this to create a UIF that offers multi-
objective optimization to define a Pareto Front. Another variation could apply Set-Based Design or
critical path methods. The low-level CEs and XML files are agnostic and can adapt to higher-level UIF
design and analysis approaches. Any Process, Integration, and Design Optimization (PIDO) system
could be a separate UIF built on top of the common base CE and XML components of TSPA/OpenCalc.
No one PIDO analysis approach will satisfy all needs, but TSPA/OpenCalc can form a compatible base
of component CEs and XML data files that are built and tested independently and work with all higher-
level analysis methodologies.

The possibilities for calc engine UIF front ends are endless, and the TSPA goal is to encourage them to
be open and free. Note that spreadsheets provide a perfect platform for many of these UIF frameworks.
It might seem odd to use a spreadsheet to launch external calculations, but users can add in their own
customizations or calculations in the spreadsheet to alter the standard UIF for each use case. Instead of tying one user interface to one set of calculations, users can customize the standard UIF spreadsheet
that launches one calc engine or they can use the standard UIF to launch many calc engines wrapped in
a command file script.

An important class of UIF deals with interactive CAD software. That is discussed next.

7. Interactive Computer-Aided Design (CAD)

A current trend in interactive CAD is the move away from stand-alone calculations for ship design to
those custom-built inside of a general-purpose program. This divides the market into smaller communities and isolates those who use other CAD programs or cannot support the extra CAD costs and learning
curves. It increases the complexity and development costs for independent software developers. The
tools created for one CAD environment are not available to other CAD environments without expensive
reprogramming. Also, these CAD-specific tools are used interactively and not available for combined
or automated use, as with searching for optimum solutions of geometry variations.

OpenCalc, however, defines a way to create stand-alone calculations that can be run separately by any UIF or launched from any CAD program using the open source I/O interface. In this way, a naval
architect/programmer could create and test one standard calc engine that would work with any
compatible CAD or other UIF program. In general, a batch calc engine should be usable by any system
in any industry that needs those calculations without custom programming or containerization.

There are three types of external CE processing a CAD program can do. The first is to output geometry
and perform one or more calculations. This could be as simple as calculating the area of a surface or as complex as generating a hull mesh and running a CFD calculation. A second type is to use an
external CE to generate a geometric shape, such as a hull shape or a cross-industry 2D/3D foil shape,
and then return it to the CAD program.

The third type is to output selected geometry, launch an external shape variation CE, and return the
result back to the CAD program. It is also possible to offer a CAD program that can perform these shape
variations dynamically in the background using slider bars or user-defined shape handles for input. All
these techniques are currently being implemented and tested using the author’s Pilot3D NURB surface
CAD program. The goal of OpenCalc is to encourage the implementation of these CE connection
techniques by all CAD programs and all independent UIF frameworks.

In Hollister (2016), three separate, standard calc engines (hull variation, hydrostatics, and Holtrop resistance) were combined to search for an optimum hull shape without writing custom code and without having the source code for any of the calculations. With a compatible CAD program, one could launch
this search sequence using the current “parent” hull shape being designed in the CAD program,
ultimately replacing it with the optimum shape determined by the external search. All types of global
and local hull variation CEs could be swapped into this sequence by any user. This organization
combines the best of both interactive and automated worlds.

8. Conclusion

"There continues to exist competition between the proponents of self-contained, relatively closed
turnkey systems of increasing functional scope and the advocates of open systems and a modular, in
part heterogeneous approach to system growth. This argument will probably go through many more
cycles, but by and large, the long-range trend has been in favor of more and more open systems."
...
"Such openness tends to lie in the interest of not only the user communities, but in the long run also of
the system vendors," Nowacki (2010).

As a student and project programmer of Professors Horst Nowacki and Bertram Herzog in the 1970’s,
and after 45 years of frustrations as an independent software developer dealing with changing technology in a small, technically complex market, the author developed TSPA and OpenCalc as the solution.
After losing programs when the world changed from mainframe computing to PCs (programs were put
on 9-track tape and forgotten), and then after losing a huge amount of program development when DOS
evolved to Windows (many still keep old computers for those programs), and now viewing more code
being lost in the change to cloud computing, a new programming model was created to isolate basic
components and provide separate tools for computer scientists, subject experts, users, and industries.
This change is made more viable with the open source movement and Web standards like HTML and
XML.

TSPA/OpenCalc, now in a third working version, has shown how this is possible and how it offers ways
to build automated solutions that can’t be built any other way unless one has all the source code or uses
a complex containerization process. Calculations are separated from interactive user interfaces and
common data is defined with open definitions and code. Everyone can independently add their own
compatible piece to large web-based solutions. Computing will no longer be dominated by all-in-one
apps, proprietary data formats, and suites that divide markets and limit cooperation. Industries will
assert more control over their data and subject experts will have flexible tools to create completely new solutions without programming. New, high-level design and analysis methodologies can evolve over
time from the bottom up using common base calc engines and XML data formats. A Kindle-like market
is envisioned for calc engines written by subject matter experts. They can be used with the standard
open source UIF front end to create a traditional “app” or used with other CEs to create automated
processes with no additional code.

For any automated or optimization process, one needs non-interactive batch versions of calculations.
OpenCalc has shown that they can be written once and used with both interactive and automated
processes (UIFs) with no additional coding. If one is starting a new project, OpenCalc provides all the
tools needed to create batch calc engines using programming language templates that provide XML text
file I/O code for any variable and data structure. Just drop in the calculations and test. If an application
already exists and the calculations are being converted to a batch form, OpenCalc makes this conversion
simple. The result is a batch program that doesn’t need extra containerization to work with any
compatible UIF processing methodology. Specialized container code wrappers could be added to
OpenCalc objects, but that’s an unnecessary extra layer. Higher-level processing systems (UIFs) should
adapt to common low-level components, not the other way around. In addition, data should be thought
of as a separate and independent programming object rather than one defined and controlled by different
software developers. External data is the domain of industries and cross-industries. Internal object-
oriented class encapsulation of data and methods does not deal with the external needs of real-world
stakeholders. As programming opens up to the needs of industries, subject matter experts, users, and
the internet, data and methods must become separate objects.

TSPA/OpenCalc is being developed as an open framework with many tested and open source tools.
Some might create calc engines that are not open or free, but the XML layered data schemas and I/O
code will be free along with a number of UIF interactive and automated processing front ends. After
more than 50 years of rapid computer developments, lost programs, and ignoring the needs of small
markets, OpenCalc defines an open, flexible, independent, agile, and external object-oriented approach
that is web-based, cross-platform, cross-language, and cross-industry. Subject matter experts can and
should become creators of completely new computer solutions without becoming computer scientists.
TSPA/OpenCalc gives them that direct creative control over their subject domain.

A new independent web site will be available for all TSPA/OpenCalc open source downloads seeded
with free calc engines for hydrostatics, stability, Holtrop hull resistance, Savitsky planing boat resistance, modified Lackenby hull variation, and the wrapped H3 hull variation/resistance combination. An online version of this system is also being developed. The web site will include a library of
public domain hull shapes in various definition file formats that will allow user contributions. A
community support forum will be offered to encourage use, new solutions, and framework development, and a Kindle-like system for CE contributors is being planned.

References

HOLLISTER, S. (1996), Automatic Hull Variation and Optimization, 1996 meeting of the New
England Section of SNAME, http://www.newavesys.com/hullvary.htm

HOLLISTER, S. (2014), SNAME Marine Computing Initiative, SNAME white paper, http://www.sname.org/project114/home

HOLLISTER, S. (2015), A New Paradigm for Ship Design Calculations, SNAME Maritime Convention (WMTC) Conf., http://www.sname.org/project114/home

HOLLISTER, S. (2016), Automatic Hull Variation and Optimization using SNAME’s OpenCalc
System, SNAME Maritime Convention, Seattle, http://www.sname.org/project114/home

HOLLISTER, S. (2017), SNAME OpenCalc: History, Status, Future, 2017 Meeting of the NYC
Metropolitan section of SNAME, http://www.sname.org/project114/home

HOLLISTER, S. (2018), The SNAME OpenCalc System, SNAME Maritime Convention, Providence,
http://www.sname.org/project114/home

NOWACKI, H. (2010), Five decades of Computer-Aided Ship Design, Computer-Aided Design 42, pp.956-969

PANKO, R. (1998, 2008), What We Know About Spreadsheet Errors, J. End User Computing's Special Issue on Scaling Up End User Development 10/2, http://panko.shidler.hawaii.edu/SSR/Mypapers/whatknow.htm

SNAME (2015), Open Source Computing Tools, http://www.sname.org/project114/home

TSPA (2018), The Tri-Split Programming Architecture, http://www.trisplit.com/

A Multi-Scenario Simulation Transport Model to Assess the
Economics of Semi-Autonomous Platooning Concepts

Alina P. Colling, Delft University of Technology, Delft/NL, A.P.Colling@tudelft.nl
Robert G. Hekkenberg, Delft University of Technology, Delft/NL, R.G.Hekkenberg@tudelft.nl

Abstract

The Vessel Train (VT) concept aims to increase the level of autonomy of ships in order to develop a competitive low-manned waterborne transport concept. A transport model has been developed to determine the concept's performance. In this paper, that model is used to assess the impact of the lead vessel concept on the overall performance of the VT. If the cost of the lead vessel is known, the required benefit for the follower vessels can be determined. The model uses multi-scenario simulation to gather data for a sensitivity study of the LV features. The insights gained into the behavioural properties of the VT lead to recommendations on boundary conditions for a profitable implementation of the VT.

1. Introduction

The NOVIMAR (NOVel Iwt and MARitime transport concepts, https://novimar.eu/) project develops
a waterborne transport system called the Vessel Train (VT) that is based on the platooning principle
that is also researched in the trucking industry. The train is commanded by a lead vessel (LV) that is
fully manned and takes over navigation, communication and situational awareness responsibilities for
the follower vessels (FV), Fig.1. The aim of the concept is to create a transportation solution for the
European transport sector that makes use of the existing waterborne transport potential to help expand
the transport chain up and into the urban environment. Although the concept is being considered for
both the Inland Water Transport (IWT) and the Short Sea (SS) shipping sectors, this paper will mainly focus on its application to inland navigation.

Fig.1: Vessel train (VT) Concept, https://novimar.eu/concept/

A FV needs to be equipped with the technology to make it possible for the LV to monitor and control
the navigation and parts of the machinery systems. This increased automation on board the FVs makes it possible to reduce crew and, thereby, operational cost. Such a cost reduction will especially allow smaller class II inland vessels, where crew cost may be as high as 56% of the fixed cost (Beelen, 2011), to become more
attractive. This should lead to increased use of these small vessels and increased use of small
waterways.

The VT is a means to achieve increased autonomy of ships, without having to address the big
challenges of autonomous navigation and communication in confined and busy waters.

The concept for the LV is just as important for the VT as that of the FVs, since it determines the additional costs that have to be overcome by the cost savings of the FVs. Therefore, the LV is the main focus of the study in this paper. For the case study, a purpose-built model in which the mentioned lead vessel features are embedded is used to calculate costs for various scenarios. The data obtained allows an impact assessment to be made that helps understand the economic viability of the VT concept.

This paper first explains the background and method according to which the transport model is set up. It then describes the input data used for this specific case study and states the scenarios that are used to help assess variations within the cost. This is followed by the presentation of the results and a discussion section that addresses particularities of the assumptions and the application of the concept in different sectors. The final section summarizes the main conclusions drawn from the simulations and outlines the next steps in the research on the VT's viability.

2. Modelling the Lead Vessel Cost

This section starts by introducing the different LV types, then describes the cost features that influence the LVs and explains how these cost features change depending on the application of the LV. The last part of this section explains the structure of the cost model and the type of data it provides for analysis purposes.

2.1. LV Vessel Types

The role of the LV may differ depending on the desired business application. The focus of this paper's case study is placed on the LV being either a dedicated vessel or a cargo vessel. The dedicated
vessel refers to a vessel whose sole purpose is to provide a service of leading other vessels. It can be
any type of vessel, e.g. a refit cargo vessel or a vessel that may have been designed for speedy
transportation of people. Its only restriction is that it needs to be able to meet the operating speed of
the fastest FV and support the required control systems as well as the additional crew on board. By
using a vessel that has been designed to only carry people, the vessel operating cost can be reduced,
since the hull shape can be optimized for speed instead of for cargo carrying. It is yet to be decided
whether the dedicated LV will be specific to a sector or can operate for both the IWT and the SS sector. For the sake of comparison, to demonstrate the property differences between the dedicated and the cargo vessel, the specs of the Damen FCS2610 fast crew supplier, http://products.damen.com/en/ranges/fast-crew-supplier/fcs-2610, have been chosen as a sample base-case LV in this paper. This would theoretically allow the LV to lead at SS operating speeds as well.

Table I: Benefits and Drawbacks of Different LV Types

LV Type: Dedicated
Benefits:
• Available when needed (suitable for both liner and tramp services)
• Flexibility in choice of sector (IWT or SS) application, since operating speeds can adapt to any vessel type
Drawbacks:
• Costlier for the user, since the total LV operating cost has to be compensated for by the FVs

LV Type: Cargo
Benefits:
• Lower FV contribution cost, since the income from cargo partially covers the operating cost of the LV
Drawbacks:
• Availability restricted by the loading of the cargo (not suitable for liner service)
• Less attractive to FVs due to more restrictions in destination and departure
• Space required on board for the VT monitoring personnel

The cargo vessel refers to a vessel that has normal income from transporting cargo and the added benefit of providing a service as an LV. For the simulations in this specific research, a class V inland vessel of 110 m length and 11.4 m beam has been chosen as the vessel type. The reasoning is that such ships are fast enough to lead any inland VT without restricting its speed. If a vessel were chosen that is not able to operate at the speed of the larger inland vessels, a disadvantage could be created for the business case, since the VT operating speed might be restricted by the speed of the LV. The cargo LV only leads other vessels if they fit in the LV operator's schedule. In
essence, the lead vessel acts as a normal cargo ship but allows others to tag along to generate
additional income. As a result, only the additional cost of the monitoring & control equipment and
associated crew need to be charged to the followers.

Both business cases have their benefits and drawbacks, Table I. Choosing between these two business
cases is a trade-off between service reliability and cost.

2.2. Cost Features of the Dedicated and the Cargo LV

The two vessel types make up an important part of the cost elements. However, as can be seen from
Fig.2, there are four other main factors that influence the LV cost. The five main cost elements are
identified to be: 1) extent of automation, 2) vessel type, 3) operating times, 4) manning, 5) investment.
Within these five factors there are two dominant features:

I) Vessel type: Depending on the type of vessel, the costs are influenced differently:
a. fuel cost will differ depending on the size and performance properties of the LV. This cost influence is considered within the vessel type sphere in Fig.2. The fuel cost is only of relevance to the dedicated LV, since for the cargo LV this cost element falls under the standard operating cost that is covered by the cargo transport income.
b. manning requirements and the resulting crew composition differ for both types of vessel as well as per operational regime (14, 18 or 24 h/day). Hence this cost is considered under its own category 'manning'.
c. investment costs for a modified cargo ship are likely to be lower than for a dedicated ship, since the ship does not need to be constructed, just refitted.
d. operating time of the LV can be limited for the cargo vessel, since it needs to (un)load cargo.

II) Extent of automation/monitoring: identifies how much of the monitoring and control tasks are transferred from FV to LV. This influences:
a. manning requirements, since different crew members may be needed to monitor and control the automated tasks of the follower vessels.
b. investment cost, which may differ depending on the functionality and type of technologies used for the monitoring and control of the vessel.

This description shows a strong interrelation between the different factors. The investment cost element is composed of the capital cost requirement for the ship construction and the cost of the VT technology investment, including its installation on board.

Similarly, the manning cost of the LV is split into the crew that allows the sailing operations to be
performed and the crew that allows the LV service to be performed, which is referred to as the monitoring crew.

As seen from the dominant feature description, the two vessel types create different costs. For the VT concept to be economically viable, the FVs need to compensate for any cost created by the implementation of the VT and simultaneously benefit from sailing in the VT. To make it possible to compare the two vessel types, it is thus important to have a clear understanding of which costs actually apply to the specific LV type.

For the cargo vessel, the cost created by the implementation of the VT directly comprises only the VT technology cost and the monitoring crew cost, since the general operating costs are covered by the income from cargo transportation. Yet, the investment in ship and/or control system leads to several costs that have to be regarded within the breakdown of the total cost. These are all time-dependent costs: depreciation, insurance and interest. Even though the maintenance cost is technically separate from the investment cost, in this case it is calculated as a function of the investment cost, which is why it is counted under the same category.

The dedicated vessel has all the same cost elements as the cargo vessel and more, since the FVs also have to compensate for the general operation of the vessel. The depreciation, insurance, interest and maintenance costs are significantly higher than for the cargo LV. They are based not only on the investment cost of the VT technology, but also on the investment cost of the ship. The two operational cost elements that are added for the dedicated vessel are the operating crew and the fuel cost. Both of these cost elements are influenced by the properties and size of the dedicated vessel chosen. A summary of all costs considered in either business case application is provided in Table II.

Fig.2: Influence of LV Type on LV Features

Table II: Cost Element Breakdown for LV Type


Cost                         Dedicated   Cargo
Ship Investment              ✓           X
VT Technology Investment     ✓           ✓
Operating Crew               ✓           X
Monitoring Crew              ✓           ✓
Fuel                         ✓           X
Depreciation                 ✓           ✓
Insurance                    ✓           ✓
Interest                     ✓           ✓
Maintenance                  ✓           ✓

A cost feature that has deliberately not been mentioned is overhead cost. It is disregarded since it is largely dependent on factors that are not directly linked to the LV's technical features.

2.3. Model Structure

A transport model has been developed to help assess the overall performance of the VT. Not all
elements that are calculated in the model are addressed in this paper. This section aims to explain the part of the VT transport model that provides the data used for the multi-scenario analysis of the LV. Fig.3
shows the model set-up. It is split into three different entities: external data, decision steps, and actions performed to calculate the cost. The external data are case-study dependent and their values are presented in the next section, while the other two entities are further elaborated upon in this section.

Fig.3: Flow Chart of LV Cost Calculations of the VT transport Model

The calculations of the hourly monitoring crew cost and the travel time (A) are independent of the
vessel type. It is only after the decision step, that the calculation for each LV type differ. The cost
estimation of the dedicated LV requires the determination of all operating cost elements i.e. crew, fuel,
depreciation, maintenance, interest and insurance cost per hour (B.2.). All but the fuel cost are
calculated based on a constant percentage from the input data, referred to as the ‘LV operating cost
estimations’. The fuel consumption is estimated on the basis of the vessel’s engine data, assuming the
engine never runs at more than 85% MCR, and a cubic power-velocity relationship. When combining
the power data that is deduced from the speed-power curve, with the fuel consumption data of the
ship’s engines (Caterpillar Marine Populsion Engine 3406E), the fuel cost and the trip time, the hourly
fuel consumption of the dedicated LV can be calculated.
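
As an illustration of this estimate, below is a short sketch using the engine data of Table IV; the anchoring of the cubic law at the design point and the fuel price are assumptions:

P_INSTALLED = 2237.5          # kW, installed power (Table IV)
SFC = 208.0                   # g/kWh, specific fuel consumption (Table IV)
V_DESIGN = 20.0               # kn, design speed (Table IV)
FUEL_PRICE = 0.60             # EUR/kg, placeholder value

def fuel_cost_per_hour(v_kn):
    # Cubic power-velocity law, capped at 85% MCR as in the text.
    power = min(P_INSTALLED * (v_kn / V_DESIGN) ** 3, 0.85 * P_INSTALLED)
    fuel_kg_per_h = power * SFC / 1000.0        # kW * g/kWh -> kg/h
    return fuel_kg_per_h * FUEL_PRICE

print("%.2f EUR/h at the 7.5 kn VT speed" % fuel_cost_per_hour(7.5))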

A high availability of the dedicated LV is expected. It is assumed that the dedicated LV's service is immediately available and hence provides an availability of 100% for 360 days a year. However, there may be instances where the LV has to wait for all FVs to be ready to depart, since the exact operations surrounding the train are unknown. These waiting times, or unavailability, create extra costs the FVs have to compensate for. So, a variation of availability is built into the model (C.2.). The waiting time
is simply deduced from the input data, ‘percentage of time leading’. Once the total trip time is known,
one can use it together with all the previously calculated hourly cost elements to calculate the total cost
per trip.

The costs that the FVs need to compensate for the cargo vessel (B.1.) are fewer than for the dedicated case. The main difference between the two calculation paths is that some cost elements are omitted. Furthermore, the waiting times of the cargo vessel (C.1.) also comprise the time spent in port, where it might be bunkering or (un)loading cargo. Port time in particular is extremely relevant for inland vessels, since IWT vessels are not given priority by terminal operators and therefore have to wait significantly longer than SS vessels (Malchow, 2010).

The final step (F.) of the model's first stage is to determine the added cost each FV has to pay for the use of the VT. Shipping is a highly competitive market, where profit margins are low (Blauwens et al., 2012). It is, therefore, assumed that the operating cost of the LV, or the additional cost of VT operations for a cargo vessel, needs to be compensated for by the FVs. This ensures a profitable scenario for the VT operator. Thus, the simplest representation of the VT dues is the total cost (or added cost) of leading for the respective LV, divided by the number of FVs.

Simultaneously, these individual VT dues have to be less than the savings achieved by sailing in the
VT. Hence, a minimum VT length has to be found at which both of these conditions are met.

Up to now, the description has only covered the cost calculation stage of the model. The costs alone are not useful unless they are put into the perspective of the overall VT concept by comparing them to the cost savings of the FVs. Fig.4 describes the reasoning behind the determination of the cost savings and the
minimum viable VT length calculations.

The main cost saving that the VT concept aims to achieve is a reduction of the FVs' crew cost. Comparing the current base case conditions to the FV scenario with crew reduction allows the cost savings for the entire trip to be calculated. This includes not only the time the vessel spends sailing in the VT, but also the time it spends in port. Making use of the VT should allow fewer crew to be on board for the same operating conditions, while sailing both inside and outside of the VT.

With an increasing number of FVs in the VT, the required VT dues of each individual FV decrease. The point at which the cost saving of the FV is larger than or equal to the VT dues is identified as the minimum required VT length to make the concept economically viable.

Fig.4: Flow Chart of Minimum VT Length Determination
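
The break-even logic of Fig.4 reduces to a few lines; below is a sketch with illustrative numbers taken from this case study (the one-third VT time share and the 184.5 €/h dedicated LV cost appear in the results section):

LV_COST_PER_VT_HOUR = 184.5   # EUR/h, dedicated LV cost while leading
SHARE_OF_TIME_IN_VT = 1 / 3   # sailing time as a share of total trip time
CREW_WAGE = 8.84              # EUR/h, class IV IWT average (Table V)

def min_vt_length(crew_reduced, max_fv=30):
    # Crew savings accrue over all operational hours, but dues are
    # charged per hour spent in the VT, hence the division by the share.
    saving_per_vt_hour = crew_reduced * CREW_WAGE / SHARE_OF_TIME_IN_VT
    for n_fv in range(1, max_fv + 1):
        if LV_COST_PER_VT_HOUR / n_fv <= saving_per_vt_hour:
            return n_fv                  # smallest economically viable VT
    return None                          # not viable within max_fv followers

print(min_vt_length(crew_reduced=2))     # -> 4 under these assumptions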

3. Case study

This section describes the input data that is used in the model and explains the specific scenarios that
are set up to allow a spread of results to be analyzed and conclusions to be drawn from them. The last
section has already described some of the differences in data requirements depending on the role of the LV. The underlying values that are used, as presented in Table IV, are based on existing inland
navigation cost models (Beelen, 2011; Hekkenberg, 2013; van Hassel, 2011).

3.1. Input Data

The data in Table IV provides the information referred to as external data in the flow chart of Fig.3. The input differences between the dedicated and the cargo vessel are also presented in Table IV. Most of the data is self-explanatory. Some information, however, requires further commentary.

The hourly crew costs for the LV are based on the crew costs provided in Table III (data is currency-converted and inflation-adjusted from Stopford (2009)). Since the exact specs of the dedicated LV are not known, it is assumed that the operating crew also falls under the SS vessel crew cost, which for certain roles is higher than the average expected for IWT vessels. It is also not known what size the LV will have. Hence, Table III also provides the crew composition assumptions that have been set for a crew requirement of 4, 6, 7 or 8 operating crew on board the dedicated vessel.

The monitoring-crew cost is considered equivalent for the cargo and the dedicated vessel, but since the
job description does not yet exist in practice, the cost has to be estimated. It is set to 13.32 €/h per person, which is equivalent to the cost of a seagoing chief officer, i.e. a highly skilled crew member.

Table III: Crew Cost for a Seagoing Crew Member

Crew Role          Wage (€/h)   Crew at operating crew level 4 / 6 / 7 / 8
Master             17.54        1 / 1 / 1 / 1
Chief Engineer     17.11        1 / 1 / 1 / 1
Chief Officer      13.32        1 / 1 / 1 / 1
Second Engineer    13.32        1 / 1 / 1 / 1
Second Officer      8.11        0 / 0 / 0 / 1
Cook/Bosun          4.22        0 / 2 / 3 / 3

Table IV: Input Data for the base case scenario

Trip Information
  Distance: 325 km (Antwerp to Duisburg)
  Current: 4 km/h

I    LV Type                         Dedicated                        Cargo
     VT Monitoring Crew Level        2                                2
     VT Monitoring Crew Cost         13.32 €/h/person                 13.32 €/h/person
     Operating Speed of VT (kn)      7.5                              7.5
     VT Tech Investment Cost         60 000 €                         60 000 €
     LV Specification                Design speed: 20 kn              Not applicable, since fuel
                                     Installed power: 2237.5 kW       cost falls under standard
                                     sfc: 208 g/kWh                   operational cost
     LV Operating Crew Requirement   6                                4
     LV Ship Investment Cost         3 000 000 €                      0 €
     LV Operating Days               360 (99%)                        128 (35%)

II   LV Cost Estimation
     Insurance                       0.75% annually of total          0.75% annually of VT
                                     investment                       technology investment
     Depreciation                    5% annually of ship investment   20% annually of VT
                                     plus 20% annually of VT          technology investment
                                     technology investment
     Interest                        5% annually of total             5% annually of technology
                                     investment                       investment
     Maintenance                     2% annually of total             Not applicable, falls under
                                     investment                       standard operational cost

III  Number of FVs                   5
     Type of FV                      IWT Class IV
     Number of Crew Reduction on FV  2

In Table IV, the percentage behind 'LV operating days' denotes the share of the total year that these operating days make up. This percentage will later be varied to analyse the effects of changes in the percentage of time leading. It includes waiting times that have to be attributed to the leading of a VT.

Special attention also has to be paid to the different depreciation rates of the ship and the VT technology. The steel of a ship hull is more durable than technology, which is constantly evolving and will need updates. Hence, it is not surprising that the ship investment is depreciated over a period of 20 years (5% per year), while the technology investment is depreciated over 5 years (20% per year).
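
A short sketch of how those annual rates translate into hourly cost elements for the dedicated LV is given below; the rates and investments are taken from Table IV, while the 24 h/day utilisation over 360 operating days is an illustrative assumption:

SHIP_INVEST = 3_000_000.0     # EUR (Table IV)
TECH_INVEST = 60_000.0        # EUR (Table IV)
OPERATING_HOURS = 360 * 24    # operating days x hours per day (assumed)

annual = {                    # annual cost from the Table IV percentages
    "depreciation": 0.05 * SHIP_INVEST + 0.20 * TECH_INVEST,
    "insurance":    0.0075 * (SHIP_INVEST + TECH_INVEST),
    "interest":     0.05 * (SHIP_INVEST + TECH_INVEST),
    "maintenance":  0.02 * (SHIP_INVEST + TECH_INVEST),
}
hourly = {k: v / OPERATING_HOURS for k, v in annual.items()}
print({k: round(v, 2) for k, v in hourly.items()})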

The data that allows the cost savings of the FVs to be calculated is based on the crew costs provided in Hekkenberg (2013). In contrast to the seagoing crew costs, the IWT crew costs in this data source are not differentiated by role, since there is no information available yet that indicates which crew members will be taken off board or even what their jobs will entail. Taking the average crew cost is therefore the best starting point. The crew cost also changes with the class of the vessel. The corresponding hourly costs are presented in Table V.

Table V: Crew Cost for an IWT Crew Member


IWT Vessel Type Average Crew Cost per Member
Class V 9.45 €/h
Class IV 8.84 €/h
Class II 8.29 €/h

The last input data to mention is the time not spent leading, which is identified as (un)loading time, waiting time in port, or times during which there are no FVs following the lead vessel. For this specific case study, the travel time for the distance between Antwerp and Duisburg (see Table IV) of 23 h (at 7.5 kn) is approximately equal to the time spent in port at the start or end of the trip. Hence, the total port time was simply set to equal twice the travel time, to mimic both the pre-departure and post-arrival time needed. Assuming that the LV is always leading followers when it is sailing, the sailing time in the VT thus makes up only one third of the total trip time.

3.2. Scenarios

Some of the input data provided in Table IV varies depending on the assessment scenario. These variations are based on the following factors:

• Investment Cost
• Crew Cost
• % of Time Leading

To keep an overview of the variations of each of these factors, three scenarios have been set: an expected, a best-case and a worst-case scenario.

All factor variations are run from the base case, which is the ‘expected’ scenario. This means that if,
for instance, the investment cost is varied, only those values are altered from the expected case, all
others stay as they are indicated in the expected scenario.

Hoekstra (2014) identifies the cost of a tug boat of a similar size to the parent vessel used for the dedicated vessel to be between 3.5 and 5 million €. A tug boat has a large amount of installed power on board to be able to tow other ships. A dedicated LV does not require this much power. Hence, a first cost estimation for the vessel is set at 3 million €. The best-case scenario is set to half and the worst case to twice this cost estimation.

The estimation of the technology investment cost was given by experts from the NOVIMAR consortium, who provided a preliminary rough estimate of 60 000 € for an LV. The best-case and worst-
case scenarios are set in the same way as for the ship investment cost, i.e. half and double the
reference value.

Concerning the crew cost, the operating manning is set to six, which is the minimum number of crew members needed to operate a small short-sea vessel continuously. This excludes the crew for cargo handling. The best-case scenario takes two crew members off that value and the worst case adds two. The same principle is applied to the monitoring crew, where three crew members are needed to cover 24 h monitoring with 8 h shifts per crew member. The last variation is made in the monitoring crew cost, where the worst case assumes a master-level skill set instead of a chief officer, and the best case assumes a crew cost lowered by about 20% from the expected value.

Table VI: Scenario Set-up for Investment and Crew Cost Variation

Scenario   Ship Investment   Tech Investment   Operational Manning   Monitoring Manning   Monitoring Crew Cost per Operator
Expected   3 000 000 €       60 000 €          6                     3                    13.32 €/h
Best       1 500 000 €       30 000 €          4                     2                    10.50 €/h
Worst      6 000 000 €       120 000 €         8                     4                    17.54 €/h

The availability, or the so-called 'percentage of time leading' of the LV, is a matter of the operator's business case. Thus, values have been picked to be representative of a range of possible operations. The reasoning behind the chosen percentage lead times is as follows:

• The expected lead time for the dedicated vessel assumes 10% of time spent waiting for all FVs to gather at the departure location before sailing operations can be started.
• The expected lead time for the cargo vessel assumes the same 10% of time spent waiting for all FVs, but also factors in the port time required to (un)load the vessel. If the class V vessel were to lead full time directly when it leaves port, it could achieve an availability of 45%, which is why that value has been set as the best-case scenario. This percentage is based on the assumption that the vessel continuously operates on this trip with 70% utilization of cargo space and a cargo handling rate of 400 t/h.

Note that 100% availability is assumed to be equivalent to 360 days of operations per year.

Table VII: Variations of the LV % of Time Leading

Scenario   Dedicated Service   Cargo Service
Expected   90%                 35%
Best       100%                45%
Worst      80%                 25%

Finally, the input data provided in section III of Table IV varies with every simulation case. The
model calculates all values for 1 to 30 FVs, with a crew reduction from one to all crew members on
the specific FVs, for IWT class II, IV and V vessels. This is done to provide the data presented in
the results section. The last point to be noted in this case study is that, for the purpose of
simplification and to place the focus on the LV features only, all modelled VTs are composed of FVs
of the same class.
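
As a minimal sketch of this search (in C#; a reconstruction consistent with the figures in Tables VIII
and IX, not the actual NOVIMAR implementation, with all cost inputs taken as simple hourly rates):

using System;

class VtViability
{
    // Smallest number of FVs whose combined crew savings cover the hourly
    // cost of the LV; returns -1 if no VT length up to the cap is viable.
    static int MinimumVtLength(double lvCostPerVtHour, int crewReducedPerFv,
                               double crewCostPerHour, double shareOfTimeInVt,
                               int maxFollowers = 30)
    {
        for (int followers = 1; followers <= maxFollowers; followers++)
        {
            // Crew savings accrue during all operational hours, while the
            // LV cost is only charged per hour spent in the VT, hence the
            // division by the share of time spent in the VT.
            double savingsPerVtHour = followers * crewReducedPerFv
                                      * crewCostPerHour / shareOfTimeInVt;
            if (savingsPerVtHour >= lvCostPerVtHour)
                return followers;
        }
        return -1;
    }

    static void Main()
    {
        // Expected scenario, dedicated LV, class IV FVs, 2 crew reduced:
        // 184.5 €/h LV cost, 8.84 €/h crew cost, 1/3 of time in the VT.
        Console.WriteLine(MinimumVtLength(184.5, 2, 8.84, 1.0 / 3.0)); // 4
    }
}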

4. Results

The first part of the results demonstrates the cost breakdown of the two LV types, Figs.5 and 6. Both
assume the best-case availability for the respective LV type. The values are given as the hourly rate of
each specific cost element per hour of operation in the VT. The sums of the pie charts differ between
the two LV types: the cargo vessel clearly creates less cost for the FVs to compensate. The cost
breakdown comparison makes clear that the time-related cost impact is dominated by the crew cost
rather than by the combined depreciation, interest, insurance and maintenance costs.

Fig.5: Cargo LV Additional Cost Breakdown
Fig.6: Dedicated LV Cost Breakdown

To provide a picture of what these costs represent in terms of required FV crew reduction, the
individual cost structures have been translated into crew-reduction equivalents, assuming the crew
cost of a standard class IV IWT crew member. The total cost for the dedicated vessel in Fig.6 adds up
to 184.5 €/h. The FVs only operate one third of their time in the VT, but profit from the crew
reduction during the rest of the time as well. Therefore, the required savings for a FV per operational
hour drop to 61.5 €/h. To achieve these savings, at least seven crew members need to be taken off in
the VT. This value applies across the entire VT, not just one FV, and is of course highly dependent on
the wage of the crew member that is removed. Table VIII summarizes this calculation procedure for
both LV types.

Table VIII: Sample of Required Crew Reduction for a Dedicated LV VT

 Required savings in VT                184.50 €/h
 % of time FVs spend in VT             33%
 Savings per operational h of FV       61.5 €/h
 Average crew cost                     8.84 €/h
 Required crew reduction while in VT   7
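
The arithmetic behind Table VIII can be condensed into a few lines. The following C# sketch only
illustrates the rounding logic described above; the method name and structure are our own, not
NOVIMAR code:

using System;

class CrewReductionEquivalent
{
    // Smallest whole number of 'average' crew members whose hourly wage
    // covers the savings a follower must generate per operational hour.
    static int RequiredCrewReduction(double lvCostPerHour,
                                     double shareOfTimeInVt,
                                     double avgCrewCostPerHour)
    {
        // Crew taken off saves money in all operational hours, so only the
        // VT share of the LV cost must be recovered per operational hour.
        double savingsPerOperationalHour = lvCostPerHour * shareOfTimeInVt;
        return (int)Math.Ceiling(savingsPerOperationalHour / avgCrewCostPerHour);
    }

    static void Main()
    {
        // Expected scenario, dedicated LV: 184.50 €/h, 1/3 of time in VT,
        // 8.84 €/h average class IV crew cost -> 7 crew members.
        Console.WriteLine(RequiredCrewReduction(184.50, 1.0 / 3.0, 8.84));
    }
}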

The results from the model simulation in Tables IX to XI are all presented in the same manner. At the
top, two sections denote the difference between the dedicated and the cargo vessel. The next line
indicates the number of crew members that were reduced in that particular simulation. The different
scenario descriptions are lightly shaded, while the class types of the FVs in the VT are given at the
start of each row. The values presented give the minimum number of FVs needed in the VT to make
it an economically viable solution. A dash indicates that there is no relevant value for that category,
since the vessel class does not have that number of crew members on board.

The comparison between the three scenarios of Table IX shows that the maximum variation in
investment cost between the best and the worst case causes an increase in the VT length of at most
three FVs for the dedicated LV. The cargo LV, on the other hand, barely undergoes any changes
across the variations in technology investment, implying that the impact of a cheaper or more
expensive control system on the viability of the VT will be limited.

Table IX: Multi-Scenario Analysis Results of the Impact of LV Investment Cost Variations on the Minimum VT Length

                      Dedicated LV Investment Cost      Cargo LV Investment Cost
 Crew Reduction       1   2   3   4   5                 1   2   3   4   5
 Best Scenario        1 500 000 € + 30 000 €            30 000 €
 Class V FV           6   3   2   2   2                 5   3   2   2   1
 Class IV FV          6   3   2   2   -                 6   3   2   2   -
 Class II FV          7   4   3   2   -                 6   3   2   2   -
 Expected Scenario    3 000 000 € + 60 000 €            60 000 €
 Class V FV           7   4   3   2   2                 6   3   2   2   2
 Class IV FV          7   4   3   2   -                 6   3   2   2   -
 Class II FV          8   4   3   2   -                 6   3   2   2   -
 Worst Scenario       6 000 000 € + 120 000 €           120 000 €
 Class V FV           9   5   3   3   2                 6   3   2   2   2
 Class IV FV          9   5   3   3   -                 6   3   2   2   -
 Class II FV          10  5   4   3   -                 6   3   2   2   -

The scenarios that vary the crew cost, Table X, demonstrate a larger impact on the minimum VT
length. The direct comparison between the two LV types makes visible that the cargo vessel is much
more affected by a variation in crew composition. While the FV requirements for the cargo vessel at
least treble between the best-case and worst-case scenarios, the requirement for the dedicated vessel
less than doubles. This is an expected outcome, since the cost breakdown showed that the crew cost
makes up a much larger part of the cargo vessel's cost than of the dedicated LV's.

Table X: Multi-Scenario Analysis Results of the Impact of LV Crew Cost Variations on the Minimum VT Length

                      Dedicated LV Crew Cost                  Cargo LV Crew Cost
 Crew Reduction       1   2   3   4   5                       1   2   3   4   5
 Best Scenario        4 Operating Crew and 2 Monitoring       2 Monitoring Crew at 10.50 €/h
                      Crew at 10.50 €/h
 Class V FV           6   3   2   2   2                       3   2   1   1   1
 Class IV FV          6   3   2   2   -                       3   2   1   1   -
 Class II FV          6   3   2   2   -                       3   2   1   1   -
 Expected Scenario    6 Operating Crew and 3 Monitoring       3 Monitoring Crew at 13.32 €/h
                      Crew at 13.32 €/h
 Class V FV           7   4   3   2   2                       6   3   2   2   2
 Class IV FV          7   4   3   2   -                       6   3   2   2   -
 Class II FV          8   4   3   2   -                       6   3   2   2   -
 Worst Scenario       8 Operating Crew and 4 Monitoring       4 Monitoring Crew at 17.54 €/h
                      Crew at 17.54 €/h
 Class V FV           9   5   3   3   2                       9   5   3   3   2
 Class IV FV          9   5   3   3   -                       10  5   4   3   -
 Class II FV          10  5   4   3   -                       10  5   4   3   -

The last collection of simulation results concerns the availability of the LVs, Table XI. Even though
the changes in the scenario results for the availability are less impactful than for the crew variation,
the need for FVs still increases by two vessels for the cargo LV at a single crew reduction per vessel.

Table XI: Multi-Scenario Analysis Results of the Impact of LV Availability on the Minimum VT Length

                      Dedicated LV Availability       Cargo LV Availability
 Crew Reduction       1   2   3   4   5               1   2   3   4   5
 Expected Scenario    90%                             35%
 Class V FV           7   4   3   2   2               6   3   2   2   2
 Class IV FV          7   4   3   2   -               6   3   2   2   -
 Class II FV          8   4   3   2   -               6   3   2   2   -
 Best                 100%                            45%
 Class V FV           6   3   2   2   2               4   2   2   1   1
 Class IV FV          7   4   3   2   -               5   3   2   2   -
 Class II FV          7   4   3   2   -               5   3   2   2   -
 Worst                80%                             25%
 Class V FV           8   4   3   2   2               8   4   3   2   2
 Class IV FV          8   4   3   2   -               8   4   3   2   -
 Class II FV          9   5   3   3   -               8   4   3   2   -

5. Discussion

The higher the required number of FVs, the less advantageous it is for the business case, since there is
a higher risk that the minimum required number of vessels is not met. Not having enough FVs could
mean a loss for the LV operator, an increase in VT dues or the cancellation of the VT, implying that
the followers would have to make their journey on their own.

A small dependence on the number of FVs in the VT improves the business case for routes with
smaller and more sporadic cargo flows. Routes known to have a large and constant cargo flow are,
however, ideal for the dedicated LV business, since it can provide high availability and prompt
departures. This line of thought leads to the contemplation of choosing a style of operational service,
such as tramp or liner shipping, for different LV types to achieve the most benefits in certain areas.
Even though this aspect is highly important for the successful application of the VT concept, it is not
directly related to the vessel's properties and is therefore not elaborated further in this paper.

As discussed briefly in the input data section, the effectiveness of reducing crew members on the FVs
depends on the type of crew member taken off board. This case study assumes the average crew cost
of all roles on board an IWT vessel. In reality, however, the removal of a deckhand results in much
smaller cost savings than the removal of, e.g., a helmsman, thus increasing the required number of
followers in a commercially attractive VT. This is an especially important aspect when looking into
applying this concept in the short sea sector, which has large differences in cost between the different
crew members on board: the crew costs vary between 4 €/h and 17 €/h (Stopford, 2009), as shown in
Table III. The FV requirement can thus either double or halve depending on the cost of the reduced
crew member. This implies that the identification of the correct crew role to reduce is highly
important in determining the concept's viability. Such a characteristic falls under the FV features and
is not investigated in this paper. It is, however, important to be aware of this point of influence on the
obtained LV results.

The results presented are representative of the economic viability of the concept. There may, however,
also be technical reasons why exceeding a certain number of FVs may not be viable, especially when
considering the IWT sector. The economic and technical limitations are likely not equivalent to one
another. For instance, two to four FVs can reasonably be expected to follow one another when
navigating along busy waterways. However, eight or even nine FVs could create some technical
challenges. Not only would overtaking manoeuvres reach kilometres in length, but the navigational
awareness of the LV may also be impacted by a possible absence of a line of sight between the LV
and the FVs towards the end of the train. This demonstrates that even though the values indicate the
economic viability of the concept, the physical constraints the concept will have to deal with are yet
to be elaborated and may further impact the obtained results.

The evaluation of the costs related to investment, crew and availability shows that a reduction of more
than three ‘average crew members’ has very little or no additional benefit for the VT's economic
viability. This shows that full automation of inland vessels is not necessary to achieve a competitive
concept.

6. Conclusions

The assessment performed emphasises the complexity of the challenges that the development of the
VT concept brings along. Comparing the multiple scenarios made it possible to see that the cost
priorities for the two vessel types differ from one another, as seen in Table XII.

Table XII: Cost Prioritization

 Dedicated              Cargo
 1) Crew Cost           1) Crew Cost
 2) Investment Cost     2) Availability
 3) Availability        3) Investment Cost

Minimizing the human effort for the monitoring and control system of the VT is the most important
aspect in the development of the VT concept with regard to the LV. Doing so will also reduce the cost
created by unavailability, since the crew with no task during those unavailable times will be reduced.
A further conclusion that can be drawn from this analysis is that the development effort for the VT
concept should focus especially on adjusting the roles of the crew members on board to create a
smaller multi-purpose crew. Such a crew should be able to perform tasks for the VT control but also
vessel operational tasks if need be. Thus, while a very accurate cost estimation for the required VT
technologies is of limited importance, understanding the operation of the control system is vital.

In case of misestimations in any of the cost factors, the results show a maximum increase in the
required number of FVs of four compared to the expected scenario. For the application in the IWT
sector, required follower numbers of approximately eight vessels may become questionable due to
technical challenges. Even though these boundary conditions of minimum VT length will still change
with the further assessment of the VT features, it is expected that at least two crew reductions will be
needed for a successful implementation of the concept in the IWT sector.

It also became apparent that the step from semi- to full automation, i.e. removing the last crew
member from the FV, does not, economically speaking, make a large difference. Most scenarios have
the same FV requirement for an unmanned FV as for a single crew member on board. The results
from the multi-scenario assessment provide an underlying understanding that the concept will be
dealing with roughly half a dozen followers forming the platoon. The next step in the viability
research of the VT is to gain an understanding of the FV features. Special emphasis will be placed on
the identification of the most suitable crew role to be taken off board without impacting the
independent capabilities of the FVs on their lone travel legs.

Acknowledgement

The research leading to these results has been conducted within the NOVIMAR project (NOVel Iwt
and MARitime transport concepts) and received funding from the European Union Horizon 2020
Program under grant agreement n° 723009.

References

BEELEN, M. (2011), Structuring and modelling decision making in the inland navigation sector,
Universiteit Antwerpen, Faculteit Toegepaste Economische Wetenschappen

BLAUWENS, G.; DE BAERE, P.; VAN DE VOORDE, E. (2012), Transport Economics, De Boeck

HEKKENBERG, R. (2013), Inland Ships for Efficient Transport Chains, TU Delft

HOEKSTRA, T.J. (2014), Optimizing Building Strategies for Series Production of Tugs under Capital
Constraints, Gorinchem

MALCHOW, U. (2010), Innovative Waterborne Logistics for Container Ports, Port Infrastructure
Seminar 2010, 17

SCHUTTEVAER (2011), Onafgebouwde binnenvaartcasco’s blijven nog jaren in de hoek liggen,
Edition August 27th

STOPFORD, M. (2009), Maritime Economics, Allen and Unwin

VAN HASSEL, E. (2011), Developing a Small Barge Convoy System To Reactivate the Use of the
Inland Waterway Network

Automatic Geometry and Metadata Conversion in Ship Design Process
Joanna Sieranski, PROSTEP AG, Hamburg/Germany, joanna.sieranski@prostep.com
Carsten Zerbst, PROSTEP AG, Hamburg/Germany, carsten.zerbst@prostep.com

Abstract

Choosing your ship design toolset leaves you with two possible strategies: use the best available tool
for each domain (best of breed) or use one tool which covers all domains (best of suite). The best of
breed approach undoubtedly has its merits within each discipline, but bears the risk of a
discontinuous design process. Only a tight integration spanning the different tools will enable
companies to leverage the benefits of the best of breed strategy; otherwise these are lost in translation.
This paper discusses the benefits and challenges of such a solution and presents integration concepts
based on a productively used installation.

1. Introduction

The shipbuilding design process from the initial idea to the production-ready design is typically divided
into different phases, Fig.1. All these phases are performed in a stepwise manner and need the result
of their predecessor as input. Each design phase aims to achieve a certain result and is associated with
specific tasks. Giving feedback to enhance the design using insight gained from a later step is common
practice. To make things more complicated, most yards have multiple streams for the same ship
project. These siblings concentrate on different domains like steel, piping or HVAC. Nevertheless,
they are not completely disconnected, as e.g. a hole request from piping needs to be considered in the
steel stream. Managing these streams and achieving the production-ready design on time and within
cost is one of the most demanding topics at a yard.

In today’s world, all tasks are performed with the help of dedicated software. But pure dedication to a
certain task is not enough by itself; receiving information from the previous step and providing it to the
successor is vital. The best software to perform detailed design would make life much more difficult if
the result were available only as a set of paper prints.

Keeping that in mind, yards face two different options when choosing their software tools:

• Best of suite approach: purchase all tools from the same vendor, because he promises to support
all design phases and provide a seamless data stream
• Best of breed approach: purchase the best tool available on the market for a certain task, knowing
that the integration with its neighbours needs to be solved

Fig.1: Typical phases of the design process

From the yard’s business perspective, the best of suite approach certainly has some merits. Suite
vendors promise a seamless data stream without duplication of data or manual transfer. This should
make it easier to go forward from step to step and even to apply feedback in an iterative approach.
Cross-domain suites add the promise of one common data model available to all participants, so
collaboration between steel and outfitting should become easy. Additional promises include the same
user experience and thus less training, less switching between different tools and therefore better user
acceptance and proficiency.

But there is a certain downside to the best of suite approach. Ship design contains a lot of very specific
tasks. Providing the same level of support for all tasks is hard, so compromises must sometimes be
made by the suite vendors. This applies especially if the yard has higher needs for one topic, e.g.
because it offers a special type of vessel or has a unique way of manufacturing. Such needs are hardly
considered by the suite vendors, as they aim to offer an overall coverage suiting the needs common to
all their customers. A second issue with the best of suite approach is the risk of vendor lock-in, making
it quite hard to switch to another vendor due to the investment already made.

The best of breed approach, on the other side, promises to choose the best available tool for each task
performed at the yard. This is typically a tool designed specifically to achieve the best possible result
for a certain task with the least effort spent on it. As this tool has no need to also support other tasks, it
fits the task much better than its counterpart from the best of suite approach. Using multiple tools also
means that the whole process may be changed more easily later if new challenges turn up. This also
reduces the risk of running into vendor lock-in.

The most prominent argument against this approach is exactly the distribution of those tasks over
multiple tools. This surely could have an impact on the designers themselves, as they may need to learn
two or more tools to perform their tasks. More importantly, those tools are developed independently,
and their underlying data models are fitted to their own purposes. Some work has to be done to achieve
a seamless data stream throughout the yard's design process. Only when the time and effort spent on
interfacing the chosen best of breed tools are less than the benefit gained by using the dedicated tools
is this approach attractive. This applies both to the initial effort of implementing such an integration
as well as to the costs of running it in daily business.

The implementation of these integrations thus plays a crucial role, both for the decision on which
approach to take and for its success at the yard. In this paper, we want to highlight some of the
challenges and solution strategies we applied in the several integration projects we ran in the last
years, based on one example project.

[Figure: Best of Suite – one vendor covering basic and detailed design with a common data model;
Best of Breed – different vendors with specific data models and an integration with
predecessor/successor]

Fig.2: Best of Suite vs. Best of Breed approach

2. Requirements and Challenges

No matter which tools are to be integrated, there is always a similar set of needs to investigate from the
business and IT perspective. The questions below are the most important ones to ask both the involved
IT department and the business side:

• Is this a uni-directional or bi-directional integration?
• Who is the master of the data? E.g., is the data designed only on one side, is the mastership
transferred from one tool to the other, or is there even concurrent design on both sides?
• Is it necessary to transfer data as a native model, or is it enough to have only a geometrical
representation on the receiving side?
• Is all information needed in the target tool available in one source tool?
• Are catalogue data involved, e.g. for valves?
• Are standard export/import interfaces available in the involved tools?
• Are the data available in a similar data model?
• How often is the integration used, e.g. permanently, daily, weekly, only occasionally?
• How big is the transferred data volume, e.g. a complete steel design or only some surfaces?
• Is it enough to run the transfer only once, or is it necessary to run updates?
• Is it necessary to permanently monitor the integration?
• …

After those questions have been answered, there is a certain set of challenges one meets in most projects.
Typical challenges are:

• There is no data model to cover the needed data, demanding an extension on either side
• There is no support for a common high-level format (e.g. AP227 or PCF)
• There is no support for a common geometry format (e.g. AP214 or DXF)
• There are no (open) APIs to export/import the necessary information
• The granularity differs, e.g. different sizes of assemblies or blocks
• The data is catalogue based, but the tools do not use the same catalogue data
• The target tool does not support a delta/update import, but the source data changes over time
• The mastership changes over time, but the tools cannot switch from master to slave mode or
vice versa
• Export and import happen on different sites, which have no direct link
• Only a subset of the data should be available on the target side
• The source information is distributed over multiple systems and needs to be integrated

Both the requirements and the challenges need to be investigated thoroughly. The target solution needs
to strike a good balance between the bare minimum possible with the available source and target
systems and full-blown gold plating.

3. Example Solution

Right at the start of the ship design process comes the design of the hull form and the hydrostatic and
-dynamic calculations. It is a logical next step to also perform the initial structural design in the same
tool, as the shell surface and rooms are available anyhow. The detailed steel design, on the other side, is
performed in another tool. This leaves a gap in the design process, in our example between NAPA Steel
and AVEVA Marine Hull. As long as this gap is open, the design work performed in NAPA Steel has
to be transferred manually to AVEVA Marine Hull (AM). Existing, pure geometry interfaces like STEP
AP214 do not solve the problem, as the design data is needed as a native model in AM to allow refining
the design from initial design to detailed design.

The solution is to export the complete steel model from NAPA with geometry, metadata and
topology information using a dedicated extension based on the new plugin system. This data is then
interpreted and imported as native AVEVA Marine Hull data. As a topologically defined panel from the
source system becomes again a topologically defined panel in the target system, there is no loss in the
transfer process. The initial scope of the project was to transfer the planar and knuckled panels; this is
currently being extended to also cover curved panels.

3.1. Requirements

The requirement given by the business departments was to provide a lossless integration between NAPA
Steel and AVEVA Marine Hull (AM). The complete steel design performed in the source system needs
to be available in AM as a native model. Unlike a pure geometry transfer, this enables the user to further
detail the design until the complete production model is available inside AM. There is no need for a bi-
directional exchange; one initial transfer on block level is enough for a start. As this involves only a
small number of transfers, there is no need for a 100% automated transfer. Nevertheless, a designer
needs to see the quality of the transfer on the import side.

3.2. Challenges

As both systems are designed for ship steel design, they not only use similar terminology (stiffeners,
blocks, panels, …) but also have, at least at first glance, a similar data model. This is not always the
case, as even tools used for the same purpose, like ship steel modelling, may differ in their data
representation and data maintenance, Dusch et al. (2017). For this specific combination we found
a lot of smaller differences, but nothing impossible to overcome.

A different topic was the missing common high-level transfer format. Most shipbuilding tools support
pure geometry export file formats like DXF, STEP or IGES. Needed information on topology or
material grade is lost; the block/panel relationship is only covered by naming and numbering. Several
neutral higher-level formats like AP218 (shipbuilding steel) have been defined, but none of them is
implemented in the available tools. This means that using the built-in export and import mechanisms
between the different tools in a best of breed strategy leads to a loss of important data and information.
Thus, the potential of the tools cannot be fully utilized unless the gap is closed with a conversion
software between the tools. Using the existing APIs of the source and target systems during the
development of such a solution is more robust than writing input format files that would be imported later.

A closer look at the available data reveals important topics in the data itself. The data provided
by NAPA Designer does not care about the order of the neighbouring limit elements or their number,
so there may be a few big pieces or several smaller ones. In AVEVA Marine Hull, on the other side,
there exists an upper limit on the number of limits, and they need to build a closed contour for best
results. This means that we had to close the gaps between neighbours and sew smaller pieces together
into bigger ones where possible. Especially the cases where the contour of such an element runs
clockwise in one neighbour panel but counter-clockwise in the other, Fig.3, must be recognized and
handled properly.
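
To make the winding problem concrete, the orientation of a closed 2D contour can be determined from
the sign of its shoelace (double-area) sum, and a contour can be reversed to harmonise neighbours
before sewing. The following C# fragment is an illustrative sketch, not the productive converter code:

using System.Collections.Generic;

static class ContourUtil
{
    // True if the closed polygonal contour runs counter-clockwise,
    // based on the sign of twice its signed area (shoelace formula).
    public static bool IsCounterClockwise(IReadOnlyList<(double X, double Y)> pts)
    {
        double doubleArea = 0.0;
        for (int i = 0; i < pts.Count; i++)
        {
            var a = pts[i];
            var b = pts[(i + 1) % pts.Count]; // wrap around to close the loop
            doubleArea += a.X * b.Y - b.X * a.Y;
        }
        return doubleArea > 0.0;
    }

    // Flip the winding direction if it does not match the wanted one.
    public static void Harmonise(List<(double X, double Y)> contour, bool wantCcw)
    {
        if (IsCounterClockwise(contour) != wantCcw)
            contour.Reverse();
    }
}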

Another challenge when using the API directly is the geometry handling. Mathematics allows
describing the same geometry in different ways, and each tool uses the representation that best fits its
purpose. Unnecessary data is not stored. A simple example is the diameter, radius and circumference of
a circle. If one of the variables is known, one can easily calculate the other two, as the diameter is twice
the radius and the circumference is equal to pi times the diameter.

If one of the systems provides the radius and the other needs the diameter, our solution can easily convert
one piece of information into the other. But the tasks of the design tools are much more complex than
that, and soon enough one has to start searching for clues about transformations from one mathematical
description into another. Often a problem seems easy at the start and becomes more difficult as more
test data becomes available. For example, the limit contour for objects in the target system was at the
beginning mostly taken as-is from the neighbour object, as shown on the left in Fig.3. This worked just
fine in the first stages, as long as our examples had objects with edges pointing in the right direction. If
the contour of the neighbour objects looks like the right side of Fig.3, the overall limit contour of the
main object is no longer recognized as closed. If one then has the case where the source data also allows
for gaps between the limit objects and the neighbours themselves are not ordered, building the right
limit contour needs more consideration.

Fig.3: Limit contour depending on neighbour contour direction

Beneath the different data models, there is also the possibility for the yard to customize the tools. This
also may need to be considered during the conversion. For example, many yards have their own naming
rules or a set of parts that are not the tool standard. Nevertheless, such parts must be converted to the
corresponding customized data between the source and the target system. To allow such cases, a mapper
may allow defining rules for how to process the data at runtime. As identifying all possible rules up
front is nearly impossible due to limited resources, providing an extensible mapper with some flexible
configuration possibilities may cover a wide range of cases that may even be unknown when the
mapper is developed. On the other hand, some example input for the mapping and concrete requirements
are needed to start the implementation of more general rules.

3.3. Structure of the Solution

In this example, NAPA Designer is the source system and AVEVA Marine Hull the target system. Both
allow direct access to their underlying geometry and metadata using their respective application
programming interfaces (APIs) in C#. These APIs are used to write two separate plugins, one for each
system, as shown in Fig.4.

Due to naming differences, e.g. in panel names and stiffener cross-section identifiers, we added a
mapping component which allows the yard to adapt the solution to its needs. For example, it is
possible to influence the naming of panels during the conversion so that the resulting panels in AVEVA
follow the yard conventions used in this tool. The yard-specific mapping rules are defined in a separate
XML configuration file. The mapper never overwrites an existing NAPA attribute but adds new
attributes that are used during import by the AVEVA Marine Hull plugin.
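
A minimal sketch of this non-destructive mapping idea is shown below; the XML rule format, element
names and attribute keys are invented for illustration, and the productive mapper is considerably richer:

using System.Collections.Generic;
using System.Xml.Linq;

class AttributeMapper
{
    private readonly List<(string Source, string Target, string Prefix)> rules =
        new List<(string, string, string)>();

    public AttributeMapper(string configPath)
    {
        // Example rule: <rule source="napaName" target="yardName" prefix="S-"/>
        foreach (var r in XDocument.Load(configPath).Descendants("rule"))
            rules.Add(((string)r.Attribute("source"),
                       (string)r.Attribute("target"),
                       (string)r.Attribute("prefix") ?? ""));
    }

    public void Apply(IDictionary<string, string> attributes)
    {
        foreach (var (source, target, prefix) in rules)
        {
            // Never overwrite: only add the mapped attribute if it is new,
            // so the original source attributes stay available at import.
            if (attributes.TryGetValue(source, out var value)
                && !attributes.ContainsKey(target))
                attributes[target] = prefix + value;
        }
    }
}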

There is no direct connection between the source and target tool at run time; the information is transferred
using dedicated, XML-based files. These files contain high-level information on the source steel
structure; the IGES files are only used as reference surfaces for curved panels.

[Figure: The NAPA Designer plugin exports via the mapper (with its configuration) to .nsbx, .nsgx,
.nspx, .nstx and .iges files, which the AVEVA Marine Hull plugin imports into the target system]

Fig.4: Architecture of the solution

3.4. Conversion Process

From the users’ point of view, the conversion process has two parts: the export of the data from the
source system and the import of the resulting files into the target system. As the solution was developed
as a plugin for each tool, the user may work with the tools as he is used to and needs only a few additional
interactions to get the current steel model into the target system.

The first step in the conversion is starting the plugin in the source system using the proper command.
As discussed in the previous section, there are three file formats that always export the same data about
the whole project. Thus, the user may decide to omit exporting this information, for example if another
block must be exported without changes to the grid. There is also a possibility to skip the export of the
hull IGES data. The most interesting part is the selection of the block system and the blocks that should
be exported. This allows converting a ship step by step or just to a reasonable level, or replacing an
existing import in the target system later easily and faster compared to running an export of the whole
ship. At this stage, the user may also provide a configuration file for the already mentioned mapper, as
the mapping is done in the source system plugin.

When the user confirms the export settings, the plugin takes some time to convert the knuckled and
planar panels, plates, holes, stiffeners, seams and brackets. For this purpose, we use the NAPA Designer
API to obtain the needed information and convert it directly into a format convenient for use in the
AVEVA Marine Hull import. Whereas some of the data, like the grid or block definition, is fast
and easy to process, there is much data that must be calculated from the given native format. Especially
the conversion of the geometry into a more AVEVA Marine Hull friendly form and the retrieval of limit
information are time-consuming tasks. The challenges we encountered there are described in section 2.

When the data of each object has been properly preprocessed for writing out, the mapper takes it and
applies the rules defined in the provided configuration file. The mapping possibilities are limited to cases
identified during development and testing with a partner yard. The mapper basically uses a subset of the
raw attributes of the object in the source system. To preserve the original data for later usage during the
import into the target system, the mapper does not overwrite existing attributes but adds new ones instead.
The target plugin recognizes and uses these if possible, or just takes the original information if no
mapping was applied.

Fig.5: Conversion Process

After the requested information (block definition, steel objects etc.) has been successfully exported, user
interaction becomes necessary. The user starts AVEVA Marine Hull as usual and starts the import
of the data with a new UI element added by the target plugin. There, the files to be imported must be
selected from the folder previously defined during the export from the source plugin. Here again, the
user may import only a subset of the files, for example all files or just the ones with changed
steel objects.

After confirmation by the user, the target plugin uses the exported files to perform the import of the data
into AVEVA Marine Hull. This import is performed stepwise, starting from a simple, geometry-based
import up to a full-blown, topologically defined panel. The target plugin is also the right place to decide
which attributes of the objects must be considered during the final handling of the converted objects
(like, for example, the object naming), so the mapping results are evaluated and applied there. At the end,
the converted planar and knuckled steel is stored in the target system and may be used from here for
further work.

4. Example Transfer

This section shows an example export, starting with the user interface that a constructor sees in NAPA
Designer when starting the export. As depicted in Fig.6, the UI uses the same fonts and colours as
NAPA Designer to provide a uniform experience to the user. On the left side of the figure, the block
system and the blocks for export are determined. As described in the previous section, the user may also
further customize the export and choose what will be exported; for example, he may decide to skip the
export of the grid, IGES surfaces or even some parts of the geometry like cut-outs or notches to reduce
export time. After hitting the “Export” button, the plugin starts to do the job and a progress bar is shown
to the user. It contains information about the currently exported block and panel name.

To import the block in AVEVA Marine Hull, a new UI element, shown in Fig.7 on the left, was
introduced in the tool; it is installed with the target plugin. It allows the user to choose an nstx
file with the block that should be imported. As depicted in Fig.8, there is also a possibility to convert
T-bars into panels with flanges, which is more common in AVEVA Marine Hull. No further
customization of the import is needed, as all other information was provided during the export.
In particular, there is no need to provide the mapping rules again, as these will be applied automatically
when the information is available in the exported file. Depending on the used block size, the import may
run several minutes; the current status is available to the user as shown in Fig.8. After the import, the
user may check the model and work with it.

Fig.6: Starting and customizing NAPA Designer to AVEVA Marine Hull export in source system

Fig.7: AVEVA Marine Hull UI elements to start import plugin for steel from NAPA Designer

Fig.8: Progress information shown to the user during the import in AVEVA Marine Hull

Fig.9: Exported block “1008” in NAPA Designer

Fig.10: Imported block “1008” in AVEVA Marine Hull

An example transfer can be seen in Fig.9 (NAPA Steel) and Fig.10 (AVEVA Marine Hull); it shows
the complete steel as defined in the source system transferred to the target system. In reality, there are
always topics where the transfer does not achieve 100% quality but only a somewhat reduced quality.
One example would be a stiffener which is not referencing another panel but is simply limited by the
equivalent coordinate, or other such topics. As nobody really looks into log files, we apply an extensive
colouring scheme which helps designers to know about those items.
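
The idea can be sketched as a simple mapping from achieved transfer quality to a display colour; the
levels and colours below are illustrative, not the scheme actually used in the productive solution:

static class TransferFeedback
{
    internal enum Quality { Full, ReducedLimits, GeometryOnly, Failed }

    // Colour signalling the quality level achieved for an imported object,
    // so the designer sees problems directly in the model, not in a log.
    internal static string ColourFor(Quality q)
    {
        switch (q)
        {
            case Quality.Full:          return "Green";
            case Quality.ReducedLimits: return "Yellow"; // e.g. limited by coordinate
            case Quality.GeometryOnly:  return "Orange";
            default:                    return "Red";
        }
    }
}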

As already described in the challenges, differing granularity sometimes causes problems in the transfer.
In the shown example block “1008”, the green deck from Fig.9 looks like two objects due to the block
boundaries. In the source system, this is one deck with a hole, as shown in Fig.11.


Fig.11: Bigger part of the deck beneath block “1008” shows that it is a single panel

Recognizing such cases and handling the limits and geometry properly was again one of the tasks that
had to be identified during the development, as the source data provided such information only implicitly.
We first had to identify such panels and treat them as separate panels to enable a smooth transfer.

5. Conclusion

The best of breed approach offers designers the perspective of using the perfect tool to perform their
task, whereas a certain integration hell is predicted by the proponents of the best of suite approach. When
faced with the decision to go either the one or the other way, it makes sense to carefully balance the
benefits and risks of either way. The benefits are usually already known; we provided an initial list of
questions in chapter 2 to estimate the effort and risks when integrating two different tools.

As could be seen in chapter 4, implementing an integration can be worth the effort. It enables
designers to smoothly transfer an initial steel model to another tool to apply further detail. The overall
process is much faster compared to manually transferring the model using, e.g., an intermediate drawing.

References

DUSCH, T.; FRANKE, B.; GRAU, M.; ZERBST, C. (2017), Intent-driven CAD vs. Mechanical CAD
in Shipbuilding – A review and Solution Outline, ICCAS 2017

Applying the Navigation Brain System to Inland Ferries
Xinping Yan, Wuhan University of Technology, Wuhan/China, xpyan@whut.edu.cn
Feng Ma, Wuhan University of Technology, Wuhan/China, martin7wind@whut.edu.cn
Jialun Liu*, Wuhan University of Technology, Wuhan/China, jialunliu@whut.edu.cn
Xuming Wang, Wuhan University of Technology, Wuhan/China, ted@whut.edu.cn

Abstract

This paper describes the Navigation Brain System (NBS) and its prototype application to a ferry
across the Yangtze River. The NBS combines Virtual & Augmented Reality with Artificial Intelligence
for enhanced situation awareness, allowing remote control of a ferry crossing a busy waterway. The
key components of NBS are explained, experience of component trials is reported and an outlook for
the future of autonomous ferry operation in China is given.

1. Introduction

Since 2012, research on smart, autonomous and/or unmanned ships has begun to flourish worldwide.
In September 2015, Lloyd’s Register of Shipping (LR), the QinetiQ Group and Southampton University
jointly published the “Global Marine Technology Trends 2030” report, Shenoi et al. (2015), in which
the topic of intelligent ships is listed as one of the eight key marine technologies of the future. In June
2017, the 98th session of the Maritime Safety Committee (MSC) of the IMO was held at the IMO
headquarters in London, http://www.imo.org/en/MediaCentre/MeetingSummaries/MSC/Pages/MSC-
98th-session.aspx. This meeting put forward the concept of “Maritime Autonomous Surface Ships
(MASS)” on the basis of various names like “unmanned ships”, “smart ships”, “intelligent ships”,
“remote-control ships”, “autonomous ships”, etc. MASS is defined in such a way that the ships
can operate independently with varying degrees of interaction with humans. In May 2018, the 99th
session of the MSC was held to discuss the objectives, definitions, scope, methods and work plans of
MASS in depth and to pass a series of regulatory decisions, http://www.imo.org/en/
MediaCentre/PressBriefings/Pages/08-MSC-99-MASS-scoping.aspx. However, differences and argu-
ments still exist regarding the technical routes, methods and application scenarios for future smart ships.

According to the Rules for Smart Ships issued by the China Classification Society in 2015, a ‘smart
ship’ uses sensors, communications, the Internet of Things and other technical means to
automatically perceive and obtain data on the ship itself, the marine environment, logistics, ports and
other aspects. Using advanced computer technology, automatic control techniques and big data
processing, the ship can be intelligently operated in terms of navigation, management,
maintenance, cargo transportation, etc., so that the ship is safer, more environmentally friendly, more
economical and more reliable than current ships. To fulfil these functions, an integrated system is
needed to perceive and understand the environment and data, evaluate the risk, plan routes for collision
avoidance, and dynamically control the ship instead of the crew members. This integrated system is
meant to represent the thinking and decision-making process of the captain, more specifically, the
brain of the captain. Thus, we propose the concept of the ‘Navigation Brain System’ for
contemporary smart ships and the autonomous ships of the future.

2. Navigation Brain System

2.1. Components of the NBS

To make ships “think” like captains, the idea of developing an artificial intelligence platform, the
Navigation Brain System (NBS), has been proposed for the implementation of smart ships. The
system structure of the NBS is shown in Fig.1. The NBS comprises a vision system, an
auditory system, a tactile system, a positioning system, an interactive system, a motion system, and a
decision-making system using artificial intelligence to process rules, regulations and experience.

The vision system uses contemporary sensors, including, but not limited to, video, RADAR and
LIDAR, to perceive and reconstruct a virtual world of the environment the ship sails in. The auditory
system is meant to monitor the working condition of engines and other machines that may make noise,
and to hear the sirens and whistles of other ships. The tactile system is to enable the ship to feel wind,
waves, current and swell like a human. The positioning system is to locate the ship by the Global
Positioning System, the BeiDou Navigation Satellite System (BDS), the Galileo satellite navigation
system and so on. The interactive system is set up to communicate with other ships or VTS through
VHF, LoRa or satellite.

The motion system is constructed based on mathematical modelling of the ship motions and feedback
data from sensors like an IMU, which provides the basis for route planning, decision making and motion
control. Navigation experience and rules lay the foundation for the artificial intelligence system
to make independent decisions about route changes, collision avoidance, and other functions that
normally require human thinking.
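
As an illustration of the kind of mathematical ship-motion model such a system builds on, consider a
first-order Nomoto steering model under a simple proportional heading controller; the gains and time
constants in this C# sketch are arbitrary illustration values, not NBS parameters:

using System;

class HeadingLoop
{
    // First-order Nomoto steering model: T * rDot + r = K * delta
    const double K = 0.2;    // turning gain, 1/s (illustrative value)
    const double T = 10.0;   // time constant, s (illustrative value)

    static void Main()
    {
        double psi = 0.0, r = 0.0;   // heading (rad) and yaw rate (rad/s)
        const double psiRef = 0.3;   // commanded heading, rad
        const double kp = 2.0;       // proportional rudder gain
        const double dt = 0.1;       // integration step, s

        for (int step = 0; step < 600; step++)   // simulate 60 s
        {
            double delta = kp * (psiRef - psi);  // P-controller rudder angle
            double rDot = (K * delta - r) / T;   // Nomoto yaw dynamics
            r += rDot * dt;                      // explicit Euler integration
            psi += r * dt;
        }
        Console.WriteLine(psi);                  // settles towards psiRef
    }
}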

[Figure: NBS components – vision system (video, millimetre-wave radar, forward sonar, side-scan
sonar, laser radar), auditory system (ship whistle, noise), tactile system (posture, current, wind
direction, hull structure state), positioning system (BeiDou positioning system), interactive system
(VHF), motion system (speed controller, heading controller), and a computing server hosting driving
experience, navigation rules, behaviour and posture cognition, information understanding and
transformation, route planning and independent decision]

Fig.1: Components of the Navigation Brain System

2.2. Developments of the NBS

The NBS provides an easy and friendly interface for captains to get information from different
sensors and devices, such as RADAR and AIS, as shown in Fig.3. Furthermore, a remote-control
platform and a prototype VR simulator have been developed. Using these devices, the trimaran shown
in Fig.2 can be controlled on the East Lake from the WUT campus via the 4G mobile phone network.
The vessel can be switched between manual and remote control during the trials. The operator, sitting
on a six-degree-of-freedom motion platform and wearing VR glasses, can change the speed and course
of the ship from the lab. Furthermore, it is possible to perform remote-control operations through
satellite, for instance Inmarsat; such experiments are ongoing.

Based on the structure of the NBS, its functions and applications can be expanded. During navigation,
sailors must pay close attention to the navigation situation, which may tire them and result in maritime
perils. A module can be built to operate the ship remotely as auxiliary navigation. In a relatively safe
environment, unmanned control can be chosen to reduce the mariners' workload, with only monitors
or VR used for supervising the navigation; in case of emergency, the control mode can be changed to
manual control. Additionally, a new 7 m long KVLCC2 model ship has been constructed for further
remote-control and autonomous navigation tests, as shown in Fig.4.

Fig.2: Test platform for remote control and autonomous navigation

Fig.3: A human-machine interface for remote-control and autonomous navigation

Fig.4: A new 7 m long KVLCC2 model ship for remote-control and autonomous navigation tests

3. Application of the NBS to a ferry

3.1. Practical challenges of inland ferries in the Yangtze

There are over 3000 inland ferries, which differ slightly from each other, along the Yangtze River.
The navigation environment is much more complex than that of seagoing ships. Most of the ferries in
the downstream reaches have to sail 24 hours per day, even on foggy or rainy days. Furthermore, the
traffic density is very high; the distance between ships can be smaller than 10 m, as they want to take
advantage of the current to save fuel. For the ferries in the upstream reaches, the impact of the harsh
navigation environment, for instance currents, is stronger than downstream. Zooming into the case of
the Nanjing Banqiao Ferry, the challenges of navigation are as follows:

• The ferry is highly affected by the current and the loading condition.
• The ferry is equipped with two diagonally located full-direction thrusters to improve its
manoeuvring performance. However, this demands a lot of navigation experience, and the
ferry is hard to control.
• The ferry crosses the busy waterway of the Yangtze River. The crossing situation with
passing ships is complicated and easy to misjudge, potentially causing collisions.
• The ferry has a busy schedule and continuously sails in complex conditions, which easily
causes fatigue of the captain.

3.2. Ship-Shore coordination

The NBS for the Nanjing Banqiao Ferry is constructed based on ship-shore coordination, as shown in
Fig.5. Sensor devices like CCTV, RADAR, an AIS station, a DGPS receiver and so on have been
installed on shore, while similar devices are installed on the ferry as well. Both the ship station and
the shore station can work independently, but they are set up to work together through a data link over
LoRa, 4G or satellite. Thus, the data of both the shore side and the ship side can be collected
simultaneously and fused for a robust environment perception. In this way, the pitfalls and limitations
of shore-based-only or ship-based-only stations can be resolved. For instance, the shore-based
RADAR may not fully capture the ships in the river, as small ships may be shadowed by large ships,
while the ship-based RADAR may not perceive ships at long range due to its limited height.

Fig.5: Ship-shore coordination system
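
A hedged sketch of the fusion step (merging target lists by spatial proximity so that a vessel shadowed
for one station is still kept in the common picture; the gate distance, types and names below are
illustrative assumptions, not the NBS implementation):

using System;
using System.Collections.Generic;

class Target
{
    public double X, Y;          // position in a common local frame, metres
    public string SourceStation; // e.g. "ship" or "shore"
}

static class TrackFusion
{
    const double MergeRadius = 30.0; // assumed association gate, metres

    public static List<Target> Fuse(List<Target> shipSide, List<Target> shoreSide)
    {
        var fused = new List<Target>(shipSide);
        foreach (var t in shoreSide)
        {
            // Detections closer than the gate are assumed to be the same
            // vessel; otherwise the shore-side target fills a blind spot.
            bool duplicate = fused.Exists(f =>
                Math.Sqrt((f.X - t.X) * (f.X - t.X)
                        + (f.Y - t.Y) * (f.Y - t.Y)) < MergeRadius);
            if (!duplicate)
                fused.Add(t);
        }
        return fused;
    }
}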

At present, the NBS has been developed for three inland river ferries in Wuhan and Nanjing, two
major cities along the Yangtze River, and has been updated continuously since August 2018. In early
2019, the NBS will be installed on another six ferries in Nanjing. The interfaces for the captains and
the maritime administration are shown in Fig.6 and Fig.7, respectively. These interfaces can be
switched at any time, both onshore and on board. Through the NBS, the vision of the captains is no
longer affected during nights or on foggy days; the 3D full-direction visualization in the NBS is based
on multiple sensors which do not have the limitations of human eyes.

Fig.6: Interfaces from the captain view in the NBS system

Fig.7: Interfaces from the maritime administration view in the NBS system

4. Further extension of the NBS system

4.1. Intelligent new-energy inland container vessels

Huzhou is a city about 200 km away from Shanghai. Its annual throughput is over 400,000 standard
containers, and currently more than 25 container ships are in service. Furthermore, 90% of the
construction material for Shanghai is transported by inland cargo vessels from Huzhou. The ship route
(the yellow line with red edges in Fig.8) is about 250 km long. The goal of the project is to build
intelligent new-energy inland container vessels to replace the current low-efficiency ships. The new
ships will be fully electric, using containerized batteries as power sources; each container may hold
batteries of 1000 kWh. Furthermore, the intelligent environment awareness and navigation control of
the NBS will be developed further so that these ships can be used as waterborne AGVs in the future.
Additionally, these ships will use full-circle rim-driven thrusters instead of traditional propellers and
rudders to achieve good manoeuvring and propulsion performance.

Fig.8: Ship route from Huzhou to Shanghai (the yellow line with red edges).

Fig.9: The 64 TEU smart inland vessel with new energy designed by WUT

4.2. Construction of the world's largest testbed for smart ships

The foundation of the world's largest and Asia's first testbed for smart ships, the Zhuhai Wanshan
testbed for smart ships, took place on 10.2.2018; the construction of the testbed started on 3.11.2018.
A test certificate has been issued by the China Classification Society (CCS) to the Zhuhai testbed,
which is also the only one that has been officially approved. The testbed will be built in two phases,
as shown in Fig.10 and Fig.11. The first stage will be located among four islands in Wanshan and
covers an area of 21.6 km2. The second stage will be set to the south of the first stage and will cover
an area of 750 km2. The NBS will be the key technique used to construct a mixed-reality test platform
for smart ships.

Fig.10: Locations of the Wanshan testbed for smart ships

Fig.11: Components of the Wanshan testbed for smart ships.

5. Conclusions

This paper explained the concept of the Navigation Brain System (NBS). The NBS is a whole
solution to implement smart/autonomous ships based on ship-shore coordination. The NBS
system has been applied to ferries in Nanjing and Wuhan and will be further applied to other
ferries in the Yangtze Rivers. The NBS will be further expended to a 64 TEU smart inland
container ship with containerized batteries. Additionally, the NBS will lay the technology
foundation for the test area in Zhuhai Wanshan,

References

SHENOI, R.A.; BOWKER, J.A.; DZIELENDZIAK, A.S.; LIDTKE, A.K.; ZHU, G.; CHENG, F.;
ARGYOS, D.; FANG, I.; GONZALEZ, J.; JOHNSON, S.; et al. (2015), Global Marine Technology
Trends 2030, Technical Report, Univ. Southampton, Qinetiq, Lloyd’s Register

A Strategy for Closely Integrating Parametric Generation and
Interactive Manipulation in Hull Surface Design
Marcus Bole, AVEVA Solutions Ltd, Gosport/UK, marcus.bole@aveva.com

Abstract

Parametric Hull Surface Generation methods have been around for over a century but even with
modern algorithms and software the number of commercially available tools exploiting this technique
is exceptionally small. However, the capability is frequently requested by ship designers and new
methods for generating hull surface geometry are always emerging. Interest in these techniques
remains high but this has never been matched by everyday utilisation. Interactive hull surface design
requires experience and knowledge to be productive. Parametric methods should have simplified this
but codifying these processes into flexible tools which less confident designers can use effectively is a
significant challenge. This paper exposes the difficulties developers and designers face with both
techniques when used in isolation and builds on previous research to propose a strategy for combining
them to improve the design experience regardless of skill level. This creates opportunities for new
forms of Design Intent, allows style to be defined interactively and simplifies parametric models. The
paper concludes with recommendations for those that want to follow this approach.

1. Introduction

In the last century, hull surface design evolved slowly from a physical art to a completely electronic
experience. Hull surface design used to be conducted on a very large drawing board, in 2D, but today
is conducted on much smaller electronic visual displays, in 3D. Throughout this evolution, designers
have consistently created tools to form the shape of hull surfaces. Initially, these were tools based on
templates (squares, ship's curves), deformation of physical materials (spline battens) or mechanical
mechanisms (compasses, planimeters), but they have been replaced today by the mathematical formulas
and sequences of instructions implemented by different software applications.

Design is an experiential activity and, regardless of the implementation, modelling tools continue to be
developed to increase quality, accuracy and productivity and to create insight. The transition to
electronic tools has significantly improved the capability of the designer, but it remains a discipline that
requires skill and experience. The prevalent design experience still involves manipulating, distorting
and bending a surface representation into the desired shape; the difference is that today designers
manipulate NURBS surfaces using a mouse rather than physically manipulating weights and battens.
For novice and intermediate users, it remains a time-consuming challenge to acquire the skills and
experience required to be accurate and productive.

Parametric Hull Generation continues to offer an alternative to the traditional interactive, skills-based
approach of creating a hull form. Here, an expert creates a process which produces a hull surface
based on much simpler, tailored inputs. It can be used without requiring the expertise originally used to
devise the procedure. But hull surface design is a complex domain; designers will readily break
conventions in the search for more efficient, economical or environmentally friendly designs. In this
respect, the closed black-box routines as which Parametric Hull Generation is usually characterised are
of limited utility. There is a need for customisation. Parametric methods are being exploited today, but
as open and configurable solutions to build models and automate analysis. However, as before, these
solutions are aimed at experts with the capability to build models using the abstract methods provided
to codify the desired engineering process.

To improve productivity, software designers increasingly look to the layout and feedback methods
employed in the user interface to improve the communication in the human-computer workflow. In
interactive design software, productivity can be improved by giving the designer the ability to
represent their ideas, rules, concepts and processes. This is termed Design Intent. Using Design Intent
representations, software can minimise the amount of user interaction when the configuration of a
model is changed. Fundamental to the process is that the software mechanisms used to represent Design
Intent should be in context with the designer's thoughts and not so abstract that the designer's flow is
disturbed. This is achieved by making the application of these rules simple and instinctive single-
interaction events. These mechanisms can be considered parametric, but the user rarely experiences
them as numerical concepts.

Examples of Design Intent can be found in most modern interactive hull surface design applications,
but they are often limited to simple geometric relationships. The Design Intent capability of these tools
could be extended by utilising the powerful procedures and shaping processes that have been published
in Parametric Hull Generation studies, making these more accessible to both novice and intermediate
users as well as creating productivity processes for time-constrained experts. This paper looks at the
modern state of both parametric and interactive hull surface design techniques to suggest where
developers interested in parametric hull generation should focus attention if looking to improve the hull
surface designer's effectiveness regardless of skills and experience.

2. Parametric Hull Surface Generation Approaches

Parametric Hull Generation evolved from the art of producing custom drafting tools for hull design. If
you are prepared to investigate, the customisation interfaces of many modern software applications
provide a variety of ways for a designer to develop their own tools and procedures. Anyone can create
their own routines, but it has become harder to push the boundaries of this art in a way that contributes
improvements to tools used by all designers. It is normal for complexity to increase with the number
of geometric constraints applied to control hull shape, which limits flexibility and frustrates the
developer as they search for their own revolutionary methodology. Today, two main strategies prevail:
the process either builds up the hull representation by combining separate models of hull features, often
modelled in 2D, together using intersection and sweeping, or a 3D framework is built to manipulate the
surface representation directly.

2.1. Sweeping Intersection

Hull surface generation methods based on sweeping of 2D curves predate 3D surface generation by
several decades because these methods evolved with the manual practices used on 2D draughting
boards. These techniques evolved, initially, to improve the productivity of this process, Benson (1940).
With the introduction of computers, the 2D draughting methods were codified and the task of ensuring
3D correspondence eliminated as a human task. Capturing the hull shape using a suitable mathematical
function that could be exploited electronically remained a challenge. Before the introduction of
parametric curves, shapes were captured using high order polynomials, Kerwin (1960), or conformal
mapping techniques, von Kerczek (1961), the latter offering insight into the hydrodynamic
characteristics of the hull form. However, neither method produces curves which are efficient to
calculate or have desirable shape characteristics.

The introduction of parametric curves such as Cubic Splines and NURBS (B-Splines, Bezier Curves
etc) changed this, as these representations are easy to calculate and easily transferred between different
software systems. In addition, the control points which define NURBS behave as natural ‘handles’ to
manipulate the curve using devices like computer mice. Using these representations, defining curves
to represent the properties of hull forms was greatly simplified, becoming a design rather than a mathematical exercise. The ease with which NURBS curves could be manipulated contributed to a significant growth in the development of interactive design and CAD (Computer Aided Design) software. This has maintained NURBS as the dominant method for hull surface design since the first tools
appeared in the 1980s.
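
As a minimal illustration of this 'handle' behaviour, the following Python sketch (the control polygon and knot vector are invented for the example) evaluates a clamped cubic B-spline before and after a single control point is dragged; only the nearby span of the curve moves:

import numpy as np
from scipy.interpolate import BSpline

k = 3                                     # cubic
ctrl = np.array([[0.0, 0.0], [2.0, 3.0], [5.0, 4.0], [8.0, 3.5], [10.0, 0.0]])
t = np.concatenate(([0.0]*k, np.linspace(0.0, 1.0, len(ctrl) - k + 1), [1.0]*k))

before = BSpline(t, ctrl, k)(0.5)         # point on the curve at parameter 0.5
ctrl[2] += [0.0, 1.0]                     # 'drag' the middle handle upward
after = BSpline(t, ctrl, k)(0.5)
print(before, "->", after)                # endpoints unchanged, mid-span moves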

Interest in generating hull shaped curves parametrically remained and a great number of curve
generation algorithms have been devised. Harries (1997) presented a method for generating 2D B-Spline curves based on the application of fairness criteria with constraints on position, tangent vectors, area and centroids, Figures 1a and 1b. Unlike other methods, it is not restricted in the number of control points, constraints or formulation. The B-Spline curves produced are directly compatible with a CAD or hull design environment. The method is comparable to published B-Spline least-squares fitting and fairing algorithms, the solution being generated quickly using iterative solvers such as Newton-Raphson.

[Figure 1: three panels (a), (b), (c); curve annotations include Deck, Waterline, Flat of Side (Stern) and Flat of Bottom (Stern)]

Fig.1: Section generated by B-Spline with (a) varying underwater area, (b) varying waterline
breadth with constant area, and (c) applied as a multi-segment curve through 3D Design Curves.
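
The flavour of this form-parameter approach can be sketched with a deliberately simplified stand-in (this is not Harries' actual formulation): a discrete fairness measure over the control ordinates is minimised subject to endpoint and area constraints using an off-the-shelf SLSQP solver; all values are illustrative.

import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import BSpline

k, n = 3, 7                                # cubic curve, seven control points
xc = np.linspace(0.0, 10.0, n)             # abscissae fixed; optimise ordinates
t = np.concatenate(([0.0]*k, np.linspace(0.0, 1.0, n - k + 1), [1.0]*k))

def samples(y, m=200):                     # sample the B-spline curve
    return BSpline(t, np.column_stack([xc, y]), k)(np.linspace(0.0, 1.0, m))

def fairness(y):                           # crude surrogate fairness criterion
    return np.sum(np.diff(y, 2) ** 2)

def area(y):                               # trapezoidal area under the curve
    p = samples(y)
    return np.sum(0.5 * (p[1:, 1] + p[:-1, 1]) * np.diff(p[:, 0]))

cons = [{"type": "eq", "fun": lambda y: y[0]},            # start on baseline
        {"type": "eq", "fun": lambda y: y[-1] - 8.0},     # end at 'deck' height
        {"type": "eq", "fun": lambda y: area(y) - 60.0}]  # prescribed area
res = minimize(fairness, np.linspace(0.0, 8.0, n), constraints=cons,
               method="SLSQP")
print(np.round(res.x, 3))                  # faired control ordinates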

Curve generation algorithms may be used to model longitudinal hull properties as well as the hull sections that result from intersecting the longitudinal models at a specific position. By sweeping the hull
section along the longitudinal models a hull surface is implied, Figure 1c. Subsequently, these curves
may be used to fit a CAD surface representation to the hull form. For hull forms with specific features,
like ships with parallel middle body, it can be effective to represent the hull sections using several curve
segments. In this scenario, curve generation algorithms may be used to create each segment although
in practice it may only be necessary to use it on a single segment. The FORAN system uses an
alternative strategy in the FORMF module, generating longitudinal curves through section shapes. This
has the advantage of making it easier to imply a fair surface, but it becomes harder to control the
volumetric properties of the hull form as a Section Area Curve can no longer be used in a typical way.
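
A toy sketch of the sweeping idea is given below; the longitudinal 'models' are invented analytic functions rather than parametrically generated curves, but the structure, instantiating a section shape at each station and stacking the results into an implied surface grid, is the same.

import numpy as np

stations = np.linspace(0.0, 100.0, 21)       # x-positions along the hull

def half_beam(x):                            # toy longitudinal model
    return 8.0 * np.sin(np.pi * x / 100.0) ** 0.4

def section(x, depth=6.0, n=15):             # instantiate a section at station x
    z = np.linspace(0.0, depth, n)
    y = half_beam(x) * np.sqrt(np.clip(1.0 - (1.0 - z / depth) ** 2, 0.0, 1.0))
    return np.column_stack([np.full(n, x), y, z])

# stacking the swept sections implies the surface as a 21 x 15 point grid,
# through which a CAD surface representation could subsequently be fitted
wireframe = np.stack([section(x) for x in stations])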

2.2. Direct Hull Surface Generation

It is possible but more challenging to generate a CAD representation of the hull form directly. Methods
need to devise some framework through which the control points of the surface may be placed or
generated. Smooth shapes like yacht surfaces, Khan et al. (2017), or planing hulls, Orvieto (2014),
with chines can be successfully produced but it is much harder to produce ship surfaces with flats,
radiuses, knuckles and bulbs. Unlike the Sweeping method discussed previously, the generation
procedure is less well defined but this does allow greater opportunities for innovation.

Early methods, Standerski (1988), attempted to generate the hull surface with B-Splines using an
analytical solver. To reduce the number of parameters, a rectangular grid of control points was defined
in profile and the y-coordinates varied. While a solution could be found, the shape was not typical of a
hull form. Other surface representations have been explored. Bloor (1990), for example, experimented
with partial differential equations. These can overcome the need to use four sided surfaces but still
exhibit the same smoothing challenges as the previously mentioned technique. A limitation of using
blending is that, as the variation of curvature across the surface is minimised, it is evenly distributed
across the surface in a way that could be said to resemble a “bubble” rather than a shape typical of a
hull form.

Avoiding this can only be achieved by imposing some framework, structure or topology on the hull
surface to constrain the flexibility of the surface or reduce the number of free parameters. Unfortunately,
this also means that styling characteristics become embedded in the generation algorithm and difficult
for a user to influence. Curves are a logical solution to represent the topological structure of the surface
which somewhat blurs the distinction between this and the sweeping methodology. Since these curves
often bound the different regions of surface shape this style of definition corresponds to the Cross-
Sectional Design technique used by many ship-orientated commercial hull surface design tools. This
opens the opportunity to use the available customisation features to extend these tools by using macros
(Fastship, AVEVA Lines), parametric definitions (NAPA, CATIA), visual programming tools
(Grasshopper) and custom transformations (ShipGen). However, these methods are restricted to the
capabilities of these commercial tools and this limits the opportunity to implement more advanced
techniques.

3. Challenges: Hull Generation as an Agile Design Tool

Despite offering a more effective and precise approach to generating a hull surface representation, these
techniques have remained on the fringes of popular hull form design, exploited for hydrodynamic
optimisation and research studies. There are some powerful design tools, but they are failing to make
the leap into popular design. Why is this?

3.1. A Black Box Process

The typical idea of a Parametric Hull Generation solution is a standalone process which produces a
CAD Surface from a set of parameters. Since the range of designs and styles is so varied, it is
impractical to find a single solution that can accommodate all possibilities. Therefore, most solutions
focus on a single role or style of hull form. But even this specialisation can still be too broad. Specific
styles and features can be challenging to capture parametrically, increasing the complexity of the
generation procedure and producing a surface representation that has too much definition to modify
productively using interactive methods. A generation process that is effectively a 'Black-Box' is of limited use to all designers except those for whom it produces a surface that exactly matches the design requirements. The solution is to open the 'Box' and, with customisation, provide the user with the ability
to influence the generation process.

3.2. Codifying and Customising Solutions

Many CAD tools provide macro interfaces to allow codification of processes which create and modify
geometry, exploiting the representations, data structures and algorithms already implemented in the
software. Unless it is a specialist tool it is unlikely to contain domain specific curve generation
algorithms such as the methods discussed in 2.1. However, coding is not the typical way designers
convey their ideas although they are likely to reuse a successful procedure if it can be quickly integrated
into their design process.

Fig.2: Visual interfaces for controlling Hull Generation processes: left, Excel; right, Grasshopper.

The barrier to coding, i.e. typing syntax, may be somewhat overcome using user-friendly notation, i.e.
using symbolic or other familiar process-capturing interfaces. Jorde (1997) demonstrates how Excel can be used to build a curve-based hull surface; Orvieto (2014) demonstrates how Grasshopper, part of the Rhinoceros software, can be used to build a generation process by visually assembling components, Figure 2.

Fig.3: Tree representations of object-orientated parametric models from, left, Caeses, and, right, Paramarine.

A more recent solution is typified by applications like Caeses and Paramarine. These tools extend the
ability to couple together formulas, functions and graphics found in the spreadsheet concept using
object-orientated modelling. Instead of a grid of cells, objects with unique capabilities are connected
using explicit relationships. Individual objects may represent parameters, graphical elements,
algorithms, external processes or a conglomeration of any of these objects. This creates a very powerful
tool for modelling complex multi-discipline problems, integrating 3rd party solvers and simulations to
solve engineering challenges. However, as these applications rely on dynamic models, the
configuration of the software relies on parameters and connections rather than interactive manipulation
of geometry found in traditional hull design software. It can be an abstract experience to work with this
software since design becomes a thought-process of model building rather than interactive exploration.
Not all designers may be capable of this activity and many would find that the need to configure
software detracts from the act of direct design.

These tools are great examples of how Design Intent can be captured, but it is an abstract model building
experience. Many software tools, including mainstream mechanical CAD, now offer the ability to
create similar but less extensive relationships in a context sensitive way and initiate configurations with
default data. In this approach, the User Interface becomes more critical to the design experience by
actively offering Design Intent opportunities without interrupting the design flow. Increasingly, this
style of definition is becoming available in interactive hull surface design tools, offering parametric
design capability without the need to define any mathematical configurations.

4. Integrating Design Intent into Interactive Hull Surface Design

Interactive surface design tools operate by providing the designer with a visual structure which is used to directly manipulate or generate the definition of a mathematical surface. These structures are defined by 3D control points which are interactively repositioned with the computer mouse. Once learnt, the designer's challenge is to manipulate these points into the correct positions to achieve the desired dimensions, styling, hydrostatics, hydrodynamic performance and surface qualities. It takes time to
learn the skills required to make this a productive activity.

NURBS have become the prevalent representation for marine design due to the simple rectangular grid of
control points used for definition and the easy implementation of the algorithm used to calculate the
surface. These two features have resulted in a mathematical surface that is easy to manipulate, code
and transfer definitions between software applications. Practically all standards of surface data
exchange use NURBS as a foundation. However, an unfortunate downside of working so directly with
a surface definition is that the user is responsible for deforming the rectangular definition into the shape
of the hull form. As more complex surfaces are modelled the risk of bad distortion increases. The
designer needs to understand what control point configurations cause bad shapes and plan to avoid
them. As the definition is a simple rectangular definition, Design Intent has little opportunity to improve
the design experience. Non-rectangular surface definitions such as T-Splines, Sederberg et al. (2003), and Subdivision Surfaces, Greshake and Bronsart (2018), are slightly easier to organise into the shape of a hull and can
support the definition of discontinuous features as a basic level of Design Intent. For complex hull
forms, such as ships, where there are clear features with specific curvature characteristics it can be
challenging to be productive as a novice user where the only way to learn is to spend time in front of
the screen.

An alternative approach, Cross Sectional Design, has been adopted by most tools focused on ship type
hull surfaces. Here the definition methodology focuses on defining hull shape using a network of curves
which an algorithm then interpolates to produce a mathematical surface. The resulting representation
may be defined by multiple NURBS, Bezier, Coons, Transfinite, Multisided patches or subdivision
surfaces. Unlike the NURBS control point mesh, the curve network is constructed sequentially. If the
implementation supports dynamic connections between curves, i.e. Relational Geometry, Letcher et al. (1996), a hierarchical definition structure is constructed which allows for change propagation and
provides a great opportunity for implementing Design Intent.
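
The sketch below illustrates the change-propagation mechanism in the spirit of Relational Geometry; the class names are illustrative, not those of any actual implementation. A 'bead' records its parent curve and a parametric location on it, so edits to the parent automatically update the child.

class Entity:                      # base for anything that can have dependents
    def __init__(self):
        self.children = []
    def invalidate(self):
        for c in self.children:
            c.invalidate()

class Line(Entity):                # stands in for any parametric parent curve
    def __init__(self, p0, p1):
        super().__init__()
        self.p0, self.p1 = p0, p1
    def point_at(self, t):
        return tuple(a + t * (b - a) for a, b in zip(self.p0, self.p1))
    def move_end(self, p1):
        self.p1 = p1
        self.invalidate()          # the change propagates to all dependents

class Bead(Entity):                # a point constrained to lie on a parent curve
    def __init__(self, parent, t):
        super().__init__()
        self.parent, self.t = parent, t
        parent.children.append(self)
        self.invalidate()
    def invalidate(self):
        self.xyz = self.parent.point_at(self.t)
        super().invalidate()

stem = Line((0.0, 0.0, 0.0), (0.0, 0.0, 10.0))
b = Bead(stem, 0.35)               # 'snapped' onto the stem line
stem.move_end((2.0, 0.0, 10.0))    # bead position updates automatically
print(b.xyz)                       # -> (0.7, 0.0, 3.5)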

In Cross Sectional Design, as the name suggests, the designer defines intersecting cross sections of the
hull surface. Since a Lines Plan also represents the hull surface as a sequence of cross sections in the principal planes, it is very easy to translate information between the two representations. The design language and thought process are similar. The approach promotes the use of 2D Design curves, particularly in principal planes, simplifying the definition. This often unnecessarily restricts the designer to curves representing Sections, Waterlines and Buttocks, although with experience Diagonals orientated approximately normal to the hull surface perform better. Hull definition usually starts with
boundary curves and primary form shapes such as the midship section, Figure 4a. Next, curves
representing major features and styles are defined, with reference to the initial set of curves defined in
the previous stage, Figure 4b. The definition is finished by adding curves which control how surface
shape is blended between these features, Figure 4c.


Fig.4: Stages of creating a hull definition with a curve network: (a) Outer Boundaries and major
changes in shape, (b) Curves representing characteristic shapes and features, (c) General shape
control curves.

Implementing Cross Sectional Design using Relational Geometry creates the greatest opportunities for
Design Intent. The resulting hierarchy allows both changes and information to propagate down through
the network. Tangent conditions applied to parent curves are automatically inherited by child curves
so that both position and tangency are constrained as the child intersects the parent. The application of tangent constraints to primary form curves, Figure 4b, allows the designer to capture the shape structure
of the hull form, the Form Topology, discussed in 5.1, with the child curves inheriting the intended
shape as they reference parents. Constraints can also be applied to the control points of individual
curves to create knuckles and relaxations, Figure 5a. Straight and arc segments can be created by
synchronising the tangents between pairs of points. The application of Design Intent, as a constraint,
removes the need to understand how to arrange the control polygon to create these features.

[Figure 5: two panels (a), (b); annotations: Knuckle, Tangent]

Fig.5: Examples of constraints that can be applied at point level and at curve level. Combining
these tools when building a curve network allows hull definition to be created quickly.

Relational Geometry and Constraints both require code to be executed to reposition control points and
can be considered parametric routines without the need for numerical input. How the designer
experiences these parametric tools is determined by the User Interface. In Multisurf, the first
implementation of Relational Geometry, it is necessary to explicitly define a ‘bead’, a point on the
parent curve, on which to attach the child curve. In Napa, tangent constraints are created by typing in
a symbolic text definition. However, the flow of the design process in the User Interface may be
streamlined by implicitly creating the Relational Geometry definition when a control point is ‘snapped’
to a parent curve, or by displaying available constraints in a context sensitive way when curve control
points are selected, both implemented in X-Topology which is introduced in 5.3. This demonstrates
that parametric definitions can be applied without code or abstract representations and without
interrupting design flow by careful crafting of the user interface.

While Cross Sectional Design offers a wider range of definition opportunities it remains a challenging
experience to produce surfaces with complex features that are defect free. Unlike direct NURBS
manipulation, where it is possible to defer small features later to the detailed design stage, Cross
Sectional Design requires far more explicit definition to generate the mathematical surface. Often small
features must be defined correctly to avoid undesirable knuckles or warping. Consequently, Cross
Sectional Design is more challenging to learn than other interactive surface design techniques and
without careful thought, challenging to embrace as a foundation for Parametric Hull Generation.

5. IntelliHull and X-Topology

The comparison of parametric and interactive approaches implemented in modern software highlights
that it is desirable for parametric methods to have a degree of interactive input and that parametric
relationships are being exploited by interactive geometry tools by constraining definition and capturing
Design Intent. This suggests that there is an opportunity to more deeply integrate both to embrace
Design Intent and develop a user experience so that non-experts can be productive. This challenge was
explored by the author through the development of IntelliHull, Bole (2005), which implements
Parametric Modification, Design Intent constraint of B-Spline Curves and morphing of surface shapes.
Although the technique uses a simplistic curve definition and lofting process to generate a hull surface,
IntelliHull introduced two concepts, Form Topology and Parametric Modification, that measurably
increased the productivity of the hull design process. These concepts were subsequently introduced into
X-Topology, a hull surface design approach based on Cross Sectional Design.

5.1. Form Topology

The extensive glossary of terms used to characterise certain types of hull forms and features indicates
that there is commonality in the styles and shapes in a hull surface. These commonalities are
experienced strongly when using Cross Sectional Design because this approach has an ordered
definition sequence. Since certain types of hull form have characteristic features which influence how
the surface definition should be constructed, this knowledge, termed Form Topology, may be used to
improve productivity by mapping modelling tools and analysis directly to the elements that define the
style of these characteristics. Form Topology is precisely represented in a Cross Sectional Design curve
network due to the need for curves to bound the outside of the surface and internal features. Defined like
this, Form Topology is a sub-network of the full curve network and each region or ‘face’ in this network
can be characterised as having a specific type of shape, i.e., planar, cylindrical, blended etc. and these
shapes have a direct relationship to the tangency information found on the curves that bound these
regions. Three core types of Form Topology are identified:

Monohull Ships A ship style hull form, where there is a clearly defined midship section, with planar
(flat) sides and bottom connected by a cylindrical turn of bilge, Figure 6. This configuration is driven
by the need to ‘boxify’ the vessel to ease handling for cargo, port, fabrication and dry docking.

Fig.6: The Form Topology structure of a Monohull ship annotated with descriptions of local surface style

Knuckled or Chined A hull surface where for hydrodynamic, fabrication or cost reasons there is a
need to have large areas of low curvature surface, broken up by corners. In the case of hydrodynamics,
these flat areas generate lift and in the case of fabrication and cost there is a need to work with sheet
materials and simpler construction methods.

Round Bilge These hull surfaces typically have no large discontinuities running across the surface,
examples of which are yachts and small craft, but also large ships where sea performance is prioritised
over other characteristics, e.g. naval vessels.

Although IntelliHull does not use a curve network, through code it implements Monohull Ship Form Topology to automatically identify the role of curves in the definition set and uses this information to identify any missing elements, creating the potential for additional definition to be automatically
generated. Form Topology is also used to automatically assign the applicable tangent conditions that
should be applied to a curve and to identify where in the definition design parameters should be
measured.
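
One possible, purely illustrative encoding of Form Topology is a set of faces tagged with a shape class, from which behaviour such as tangent conditions can be assigned automatically; the names below are invented for the example and are not an actual API.

from dataclasses import dataclass, field

@dataclass
class Face:
    name: str
    shape: str                        # 'planar', 'cylindrical' or 'blended'
    bounding_curves: list = field(default_factory=list)

monohull = [
    Face("flat_of_side", "planar", ["deck_edge", "fos_boundary"]),
    Face("flat_of_bottom", "planar", ["centreline", "fob_boundary"]),
    Face("turn_of_bilge", "cylindrical", ["fos_boundary", "fob_boundary"]),
    Face("bow_blend", "blended", ["fos_boundary", "stem_profile"]),
]

# the shape class lets tangent conditions be assigned without user input
for f in monohull:
    if f.shape == "planar":
        print(f.name, "-> bounding curves inherit in-plane tangency")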

5.2. Parametric Modification not Parametric Generation

Incorporating interactive modification into the typical Parametric Generation approach is challenging
because it requires the algorithms that generate geometry to account for user changes in geometric
configuration. This kind of approach is more likely to introduce conflict between the algorithm and the
user since a generation algorithm that can accommodate user changes would be exceptionally
complicated. Rather than expose an interface where the user creates their own generation algorithms,

170
a simpler strategy that avoids this complexity is to implement parametric modification, through
selective transformation of user-defined geometry, rather than hard-coding specific shapes of hull features. As a philosophy, this suggests that hull style be controlled through interactive manipulation and hull
dimensions through parameters which implement transformations. This approach decouples specific
geometry configurations from algorithms and allows the Interactive and Parametric approaches to
coexist within the same environment. It does not exclude geometry from being generated from
parameters initially, but it does suggest that afterwards a tight connection between the algorithm and
produced geometry should be avoided.
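
As a concrete, hedged example of parametric modification: a Lackenby-style longitudinal shift can relocate the stations of user-styled section curves to change volumetric properties without regenerating the styling. The shift function below is a simple quadratic stand-in for illustration only.

import numpy as np

def station_shift(x, L, amplitude):
    # quadratic shift, zero at both ends: a crude stand-in for a Lackenby-
    # type transformation that redistributes fullness along the hull
    return amplitude * 4.0 * x * (L - x) / L**2

L = 100.0
stations = np.linspace(0.0, L, 11)
new_stations = stations + station_shift(stations, L, amplitude=1.5)
# each user-styled section curve is simply re-positioned to new_stations,
# so the styling survives while the volumetric distribution changes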

5.3. X-Topology: Applying these concepts to Cross Sectional Design

IntelliHull demonstrates how Form Topology, Parametric Modification and Constraints can be used to
design a ship hull surface through interactive styling and parametric control. However, the approach is
limited by the simplistic longitudinal lofting method used to generate the surface. It requires all design
curves to have the same number of points and has limited control over surface tangency in the transverse
direction. These limitations restrict Intellihull from being used as a flexible hull surface design tool.
X-Topology was developed as a typical Cross Sectional Design implementation upon which the ideas
expressed in IntelliHull could be explored. It uses multi-part B-Spline design curves to form the curve
network, which is then used to generate a B-rep (boundary representation) of surface patches. The
generated surface is based on NURBS patches although other mathematics are available in the
implementation. Control points can perform as typical B-Spline points or as interpolation points and
mixed within the same curve definition. Design Intent is implemented using curve constraints and
Relational Geometry is implemented so that curves are connected through ‘snapping’, as described in
4. Design curve definition isolates the designer from the B-Spline representation by processing the
control points into segments based on the constraints applied to the curve or inherited from parent curves
via Relational Geometry. Parametric modification is implemented, Bole (2010), driven by Form
Topology.

Although inspired by IntelliHull, the requirements of Cross Sectional Design mean that X-Topology
needs just as much expertise as other software implementations based on this technique. Much can be
done through sensitive UI design to improve the experience but the need to actively manage the surface
topology, to define a correctly connected curve network with well-defined junctions and faces that are easily filled by 4-sided patches, still makes it difficult for novice users to pick up. The challenging nature
of hull surface design, especially for novices, continues to promote Parametric Hull Generation as a
desirable alternative even though the IntelliHull and X-Topology concepts go some way towards
combining both Parametric and Interactive approaches. But the requirement to quickly generate an
acceptable mathematical Surface definition remains the critical challenge.

6. A Philosophy for integrating Parametric and Interactive Hull Design

IntelliHull demonstrates how taking alternative perspectives on typical approaches used for Parametric
Hull Generation can produce a solution which integrates parametric and interactive experiences
together. However, when this approach is explored with a more capable surface design framework it
often evolves into a solution for experts. The principal challenge when developing hull surface design
tools is how to provide a capability that has the breadth of detail required for experts yet can be easily
manipulated by the novice, en route to becoming an expert. Solutions aimed at experts often rely on
the configuration of many ‘small elements’ to maximise flexibility while novices desire a smaller
number of ‘larger elements’ which can be configured to produce typical results. This suggests that,
depending on experience, different users require various levels of granularity of hull surface design
elements. Moreover, it is critical to allow users to easily refine the initial level of granularity to evolve
a design without losing consistency.

Critical to any approach is the need to avoid conditions or scenarios that destroy design flow by
requiring the designer to perform detailed configuration in a way that disturbs the thought process. As
detailed previously, the challenges that need addressing are summarised as follows:

Surface Topology All mathematical CAD surfaces require significant planning and management to
distort the definition into an acceptable representation. This effort needs to be avoided as it offers no
value to the design process and frequently challenges novice users who have yet to acquire skills to
achieve this productively. Consider using of simplified or alternative surface representations that can
be converted into a CAD surface later.

Avoid Abstract Modelling To achieve a wider variety of configuration options, modern Parametric software solutions are embracing object-orientated representations to model reconfigurable generation processes. Since the elements in these models are often non-graphical, it is necessary to represent them
using some abstract form. This is not “in-context” with the design problem and introduces a need for
training before users can understand how to manipulate the software. This approach can be avoided by
reflecting on the “granularity” of the modelling elements by sensitively selecting how to package the
Parametric process and expose interactive modification.

Connectivity The need for configurable modelling elements introduces a requirement that these
elements connect parametrically. Thought should be given to whether connections can be made
automatically or implicitly in-context with the design flow. Valid connections of parameters can be
achieved by standardising terminology and avoiding abstract named parameters, something that can be
minimised by allowing interactive modelling. This approach strengthens the value of Design Intent as
something which is applied as a condition or constraint rather than something that requires detailed
configuration or coding.

Avoid Hard Coded Generation Hard coding geometry generation either internally in the software or
by user customised macros prevents definition from being modified graphically. It is more flexible to
implement parametric modification, decoupling the geometry definition from the generation algorithm.
However, closed generation algorithms can be effective at initialising the configuration of assemblies
of elements acting as if they were a single definition. In this case, thought needs to be given to how to
decouple the assembly when deeper refinement and configuration is desired by the user.

Exploitation of Models Parametric Hull Generation has modelled a considerable number of hull
characteristics and styles. These models may be used to provide initial configuration, or insight into
typical shapes or volumetric distribution. Therefore, they may be used to augment definition of a hull
surface when the user supplied definition is of low ‘granularity’. Since these models are often empirical
they can be incompatible with other definition elements if operating independently. A typical example
of this can be see with Section Area Models which do not account for the style of the flat of bottom or
sides, Figure 11b. Because adding more parameters to the model increases complexity, alternatives
should be considered in keeping with the ideas of parametric modification or exploiting interactive
manipulation of the model if it can be represented graphically.

7. Removing Surface Topology Configuration with Alternative Hull Surface Representation

For both parametric generation and interactive hull surface design, managing the Surface Topology of
the hull definition is a major overhead that introduces complexity for both designer and software
developer. NURBS surfaces have become so prevalent that alternatives are rarely considered. While
it is an irrevocable requirement that a CAD Surface needs to be produced to support later design
activities and analysis, the actual shaping of a hull form does not have to use a CAD Surface if there is
a more design-sensitive solution.

AVEVA Lines uses a curve network over which hull contour curves are lofted, implying a hull form
surface. Contour groups (Sections, Waterlines and Buttocks) are generated by intersecting the curve
network and previous contour groups with a principal plane at each contour location, producing points through which a new curve is fitted. Definition of the curve network and execution of the lofting process
is conducted by manually interacting with the software operations although the fitting sequences can be
captured interactively to a macro. Once a good lofted hull representation is produced, patches are fitted
to the curve network to produce the required CAD Surface.
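
The sketch below shows the core of such a contour-lofting step in simplified form: sampled network curves are intersected with a principal plane and a new contour is fitted through the resulting points. The toy curves and the plane position are invented for the example.

import numpy as np
from scipy.interpolate import CubicSpline

def plane_intersection(polyline, x0):
    # linear interpolation where a sampled 3D curve crosses the plane x = x0
    for p, q in zip(polyline[:-1], polyline[1:]):
        if (p[0] - x0) * (q[0] - x0) <= 0.0 and p[0] != q[0]:
            return p + (x0 - p[0]) / (q[0] - p[0]) * (q - p)
    return None

network = [                                   # three toy network curves
    np.array([(0.0, 0.0, 0.0), (50.0, 0.0, 0.0), (100.0, 0.0, 2.0)]),   # keel
    np.array([(0.0, 2.0, 1.0), (50.0, 6.0, 1.5), (100.0, 3.0, 4.0)]),   # bilge
    np.array([(0.0, 4.0, 8.0), (50.0, 8.0, 8.0), (100.0, 5.0, 8.5)]),   # deck
]

pts = [plane_intersection(c, 60.0) for c in network]
pts = np.array([p for p in pts if p is not None])
section = CubicSpline(pts[:, 2], pts[:, 1])   # contour at x = 60: y as f(z)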

An automated version of the lofting process was implemented using X-Topology, Bole (2012), to
explore the benefits of this approach. It was found that inconsistencies in the defining curve network
are highlighted far more clearly than when using surface patches, the blending action of the surface
mathematics filtering these out through the combination of U and V shapes. Furthermore, there is no
requirement to manage Surface Topology which even allows surfaces to be generated from
disconnected or sparse networks of curves although this is not recommended if the definition is intended
to be the source of a CAD surface definition later. The approach gives the designer the ability to explore
the shape of a hull form without concern for Surface Topology, remove inconsistencies from the curve
network and then generate the CAD Surface using the same definition after the design process is completed,
Bole (2014). Subsequently, lofted surface and CAD surface are reviewed in parallel to identify further
inconsistencies and improve the surface quality.

This approach is ideal for Parametric Hull Generation. The need to consider Surface Topology is
eliminated, which allows a greater breadth of surface shapes and Form Topology to be supported and
opens the opportunity toward a more generic tool. It can be shown that the Curve Lofting process can
generate a functional hull form with a small number of curves. This can be limited to the curve network
representing the Form Topology. Further influence of shape can be achieved by generating section
curves using Harries’ method, incorporating Section Area models if required.

8. Minimising Definition by Constraining Shape using Surface Tangency

Since the objective is a process for generating a wider range of Hull Forms, it is reasonable to specify
that the basic definition should be at least the Curve Network representing the Form Topology. This
encapsulates the general style of the hull form as well as surface tangent information in key places. This
is also a definition that novice users can understand and define through self-taught exercises. However,
as the Form Topology only captures location specific characteristics, it will often lack the subjective
information that describes how characteristics should be blended across the Curve Network.

When generating a CAD surface from an interactively defined Curve Network, tangency is interpolated
along individual curves of the network by applying fitting algorithms to the tangent vectors of crossing
curves. Curves are added to refine shape and tangency, increasing the complexity of the definition. But
this is not a successful strategy for Parametric Hull Generation because the decision process that places
additional discrete definition is subjective and must account for neighbouring curves and hull form
style. Parametric methods typically solve this challenge by introducing models of surface tangency
along Form Topology curves. Harries (1998) uses models of surface tangency captured from existing
hull surfaces. While this approach is applicable for hull surfaces based on a parent design, models can
be easily invalidated if the designer subtly changes characteristics in the Form Topology. This challenge
highlights that there are areas of the hull surface where shape is not universally characterisable.
Parametric models of surface tangency are subjective and an obvious target for refinement by the
designer. Consequently, surface tangency may be better controlled through interactive modelling.

Interactive manipulation of tangent vector concepts in 3D has typically been a challenge due to the
limitations of 2D visualisation and manipulation, i.e. screen and mouse. It is difficult to find an
interactive manipulation concept that doesn’t interrupt the design flow. AVEVA Lines implements an
Angle Curve concept which controls the tangency of contour curves at 3D Design curve intersections.
For an ordinary design curve, up to three angle curves may be associated, one each for sections, waterlines and buttocks, and six if the design curve is a knuckle. It can be shown that using a basic Form Topology Curve Network and Angle Curves, a smooth hull can be produced that does not require refinement with additional design curves. These Angle Curves, however, are not easy to configure; they cannot be
edited or visualised in 3D and do not offer interactive feedback from the hull surface when manipulated
in the software.

Fig.7: Tangent control over a disappearing knuckle is achieved by developing a visual tool to
constrain the surface along the curve with vector control handles. This removes the need for
additional definition curves and broadens the variety of Design Intent.

Vectors offer an improvement over angles. A single vector can be used as a control handle to describe
surface tangency about a design curve, two if the design curve represents a knuckle. A vector control
handle can be turned into a 2D concept for easier interactive manipulation by constraining it to be
perpendicular to the curve tangent at the origin of the handle. Further Design Intent rules may be
devised to constrain tangency by allowing the user to numerically specify the inclination angle with
respect to an axis or plane. This configuration allows surface tangency to be edited interactively as a
group of vector control points displayed in 3D, Figure 7, as an interactive curve graph in 2D or as a
table, all of which can be appreciated regardless of the level of expertise the designer has.

Fig.8: Applying tangency control to a planing hull form allows the subtle shapes required to change ride comfort to be applied without any additional design curves.
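
A minimal sketch of the perpendicular-handle construction described above follows; the frame-building logic is a standard vector construction, not taken from any particular hull design package, and all values are illustrative.

import numpy as np

def handle_from_angle(tangent, angle):
    # build an orthonormal frame (u, v) spanning the plane normal to the
    # curve tangent, then place the handle with a single 2D angle in it
    t = tangent / np.linalg.norm(tangent)
    ref = np.array([0.0, 0.0, 1.0]) if abs(t[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(t, ref)
    u /= np.linalg.norm(u)
    v = np.cross(t, u)
    return np.cos(angle) * u + np.sin(angle) * v   # unit vector normal to tangent

h = handle_from_angle(np.array([1.0, 0.2, 0.0]), np.radians(30.0))
print(np.dot(h, [1.0, 0.2, 0.0]))                  # ~0: handle stays perpendicular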

This approach was implemented on the X-Topology Curve Network using multiple segments and curve
fitting equivalent to the method used to fit X-Topology Curves. However, end conditions and
disappearing knuckles require special attention in the fitting algorithm to avoid loss of continuity in the
surface tangent ribbon. Similar solutions may exist in other hull design software, but the key is to get the
manipulation and visual feedback to support the designer rather than interrupt them.

9. Integrating the Parametric Generation

Parametric Hull Design solutions have typically presented an experience where users modify a fixed
selection of parameters or must start by assembling the parametric model. With respect to the act of design,
the first approach does not give an impression of great flexibility and the second suggests that design
can only be achieved through complex configuration.

In this work, the parametric philosophy is based on presenting an efficient User Interface where the
designer is not forced to work with design concepts that are not directly relevant to the hull design process, and where the software will present, automatically connect and initialise components of the
parametric process when activated. The software should guide novice users through the generation
process by automatically sequencing typical components but also expose them for removal or
refinement as part of design exploration and insight. This approach allows for coarse granularity of
definition for novices by automating the generation process, progressing to a finer composition as user
skill increases, by supporting process refinement, all the way through to a completely interactive
definition. Furthermore, this approach avoids the need to completely redefine the hull surface in another
hull design tool for detailed production and fairing.

The removal of the need to manage Surface Topology by implying the hull surface using curve lofting
makes this achievable. Furthermore, since the surfaces produced by curve lofting are capable of closely
approximating hull forms with very little information, the general hull generation process only needs to
focus on refining shape by constraining it to conform to typical design parameters or models. The
designer may choose to use parametric modelling to create the outline definition of the hull form, i.e.
the Form Topology. Since this also encapsulates style, it is critical that these models can be converted to
and coexist with interactive definition without loss of performance. This is the challenge in this
solution.

9.1. Parametric Component Integration Rules

X-Topology uses a base Class (Object-Orientated software development) to manage relationships, model queries and sequential updates of all derived Classes used to interactively define and generate a
hull surface. It is also the foundation of all components and parameters which implement parametric
processing allowing integration into a single Curve Network hierarchy where all definition elements
can interact. To create a seamless modelling experience that mirrors the implicit connectivity of
Relational Geometry the following rules are adopted.

• Parameters are identified by a name. Components publish parameters making them available
to all other components. Therefore, a parameter and its value is a global definition.
• In addition to the hierarchy formed when components reference others, they are organised by
their role in the generation process by tagging each Class with a constant index value.
Components are then ordered using a sorting algorithm. It would be possible to develop an
analytical solution to infer order but since the number of parametric components is small this
additional complexity is not deemed necessary.
• A parameter published with the same name as an existing parameter in the hierarchy is
considered to represent the same concept. The parameter is automatically mapped to the
existing parameter, retrieving its value as an input when the component it belongs to is
evaluated. If the value of the original parameter changes, mapped parameters and components
are automatically invalidated.
• The automatic connection of parameters with the same identifier enforces standardisation of
value and concept names. Consistent behaviour of parameters with the same name makes for
easier understanding of the software particularly for users new to hull design terminology.
• Components take responsibility for initialising default values. A reconfiguration update stage
is triggered prior to main updates to allow this to occur when components are added or removed
from the generation process.
• A parametric component uses prior information in the sequence by retrieving parameter values,
Form Topology and querying the current state of the geometric model to generate and publish
further parameters and geometry for use by subsequent components. In this respect, parameters
and curves are the principal classes of information communicated between components.
• Parametric components also support planar intersection to allow any geometry to be included
in the lofting process that, for practical reasons, cannot be represented using X-Topology
curves.
• In the final stage of the sequence a Curve Lofting process is applied, by intersecting all
geometric and parametric components.

The behaviour of this whole process is orchestrated by two main control objects: a generic 'Mediator', Gamma et al. (1994), which collects all components and geometry, provides the index of parameters, sequences the update and performs the final Curve Lofting; and a 'Factory', which effectively operates as a Wizard by publishing options that instantiate parametric components into the process. Consequently, the Factory encapsulates a specific style of Form Topology and the components available from the Factory generate models that are appropriate to this configuration. Different 'Factories' may be used to implement the other configurations of Form Topology.

Fig.9: The componentised generation process illustrates that Parameters and Curves are the primary
data types. Boxes with dashed outlines are optional.
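
A minimal sketch of these rules, with invented class names rather than the actual X-Topology classes, might look as follows: components publish parameters into a shared, name-keyed store, a constant role index orders them, and a Mediator sequences the update.

class Component:
    role = 0                              # constant index used for ordering
    def update(self, params): ...

class PrincipalDimensions(Component):
    role = 10
    def update(self, params):
        params.setdefault("LOA", 100.0)   # components initialise defaults
        params.setdefault("Beam", 18.0)

class SectionAreaModel(Component):
    role = 20
    def update(self, params):
        # mapped automatically to earlier values because the names match
        params["DisplacedVolume"] = 0.65 * params["LOA"] * params["Beam"] * 7.0

class Mediator:                           # collects, orders and sequences updates
    def __init__(self, components):
        self.components = sorted(components, key=lambda c: c.role)
        self.params = {}
    def run(self):                        # a real system would end by lofting
        for c in self.components:
            c.update(self.params)
        return self.params

print(Mediator([SectionAreaModel(), PrincipalDimensions()]).run())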

9.2. Modification of Parametric Models

The use of simple check box options in the UI to include parametric components, which self-organise
and automatically connect, creates a process that is easy for the novice user to reconfigure. However,
it relies on models implemented in the software code that cannot be modified. Many of these models
can be considered subjective, containing style parameters or empirical or heuristic rules.
Consequently, these models may not produce exactly the desired results when combined with the other
models that compose the generation process. Rather than change these models by altering the
mathematical functions an alternative approach is suggested. Since all these models produce
information that can be visualised graphically, customisation can be achieved by interactively
modifying the shape rather than changing the mathematical model itself. With a sensitively designed
UI this will provide the user with better feedback than altering code. If the model does not represent a
typical 3D concept it can be converted to a fitted curve model providing the designer with control points
that can be manipulated. Transformation functions can be applied to change these models, potentially
driven by the same parameters the original function relied upon. This approach does, however, have a
significant impact on the implementation as it requires that the geometric representations must persist
for the life span of the parametric components. Using this approach, the defects highlighted later in
Figure 11 can be corrected.
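
A hedged sketch of this idea follows: the output of an invented 'empirical' Section Area Curve model is fitted with a B-spline, after which the designer edits the fitted control points rather than the model code itself.

import numpy as np
from scipy.interpolate import splrep, BSpline

x = np.linspace(0.0, 100.0, 25)
sac = np.exp(-((x - 50.0) / 28.0) ** 2)   # output of an 'empirical' SAC model

t, c, k = splrep(x, sac, k=3, s=1e-4)     # fit -> an editable curve model
c = np.array(c)                           # copy, so the original fit survives
c[len(c) // 4] *= 1.10                    # the 'user drags' a control point
edited = BSpline(t, c, k)                 # modified SAC; model code unchanged
print(edited(25.0))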

9.3. Example Output

Figure 10 illustrates some example hull forms generated by the implementation. The figure shows two
hull forms generated from interactive Form Topology Curve Networks and two completely parametric
hull forms. These examples were produced in minutes with no time spent on detailing. Since the hull
forms are discretely implied by curves, the small detailed areas that introduce challenges for a continuous mathematical surface definition do not occur. Figures 10b and 10c are examples of fully parametric hull forms using an Initial Sizing tool based on well-published rules and models based on Tribon Form.
set of curves representing Form Topology are then generated by algorithms that have a small range of
style configurations covering different bulb shapes and transom format.

10. Reflection and Recommendations

Following on from IntelliHull, this development has continued to search for an approach which allows
a Designer to combine both parametric and interactive modelling techniques in a constructive way. It
has taken some time to identify that, to be effective, both modelling techniques need to have the
opportunity to reference the other. The only way this can be achieved is if the hull generation process is open to reconfiguration at any stage, and this is not possible if it is treated as a 'black box' system.
Integrating Parametric and Interactive techniques opens the opportunity for mainstream hull surface design tools to incorporate algorithms and models that were published years ago but are yet to
realise their full potential.


Fig.10: Generated hull forms based on (a) Interactive curves defined for a platform supply ship in
2008, see Figure 4, (b) Outline curves based on the YachtLINES parametric Hull Generator, Bole
(1997), (c) Parametrically Generated Curves based on Initial Sizing models originally used in the
Tribon Form application for a speed of 12 knots and (d) 16 knots with a different selection of
styles.

At present, many parametric components in the implementation are immature. Individual functions
have been prototyped and tested in isolation but not fully integrated into an interactive hull design UI.
Without this, the design flow is disturbed because it is often not possible to resolve surface defects due
to the inability to access parametric models in the right way. There are still many bugs to be investigated
and a review for code optimisation is needed. The examples, Figure 10, are either generated entirely
from parameters or from the Form Topology Curve Network of old hull forms defined interactively
several years ago. In all cases, the resulting hull surface is available within less than a minute. All have
surface defects but none serious enough to prevent refinement into a mathematical surface definition.

[Figure 11: two panels (a), (b), annotated 'Too Full' and 'Too Tight']

Fig.11: Presently, immaturity in the software is preventing defects in the generated models from
being corrected. (a) requires modification to either the hull geometry or section area curve, (b)
requires a more ‘s’ shaped flat of bottom curve to allow hull sections to be more ‘u’ shaped.

An observation from earlier developments, where the generation process is closed, was that style
combinations could easily be created that were incompatible with parametric model characteristics. For
example, subtle shapes in the style of the Form Topology Curve Network would be incompatible with
the traditional bell-shaped Section Area curve resulting in sections that were too full in some areas of
the hull and too tight in others, Figure 11a. This suggests that the design curves, while geometrically
fair, may not produce smooth volumetric characteristics. Figure 11b shows a hull form with prominent
V-shaped sections in the bow due to the tight flat-of-bottom curve generated by a simple parametric
model. With the opportunity to interactively modify this curve, giving it a more s-shaped style, the forward sections of the bow would become more 'U' shaped while the section area model remains
constant. The integration of Parametric Modelling provides the opportunity for insight into hull shape
far earlier, potentially minutes, compared with the days of modelling it might take to develop a hull
shape and check hydrostatics using interactive techniques. Consequently, change is a lot easier to accept
when less time is invested in the definition.

For someone looking to explore parametric hull design it is normally necessary to develop their own
library of code. This is a necessary part of understanding the underlying processes involved before they
can begin to combine parametric tools into their own process. In this respect the following
recommendations are given.

• Review shape generation algorithms developed in this field. Although it may be necessary to
learn advanced mathematical techniques there are benefits to be had in speed and capability.
• Consider interactive customisation of models instead of extending the number of parameters. It can be challenging to get a feel for what specialist parameters control, and interactive
manipulation allows the designer to change geometry directly.
• The greatest challenge when developing Parametric Hull Generation tools is to avoid the
increase in complexity that occurs when additional constraint is applied to the hull surface and
supporting models. Consider simplifying or using alternative representations, as they can
sometimes provide the same information with less effort.
• Black-Box Parametric Hull Generation processes typically have a narrow range of variation. Consider using open generation processes so that the designer can make configurations beyond the
models the developer has time or interest to create.
• Consider who your user is. As the developer you have expert knowledge of your system. Others
will need to understand it before they can use it.

11. Concluding Remarks

Parametric and Interactive Hull Surface Design techniques are both powerful approaches, but it has
always been challenging to integrate them together. Even when used individually these challenges can
slow progress and require the user to tend to detail that detracts from their design objective. By taking
some pragmatic steps it is possible to achieve the integration and use each technique to resolve challenges
in the other. The two primary actions are: using an implied surface to avoid managing the surface
topology of CAD surfaces, and using an open, configurable generation process where parametric
algorithms are treated as optional components that can be used alongside interactively defined
geometry. The integration of parametric generation algorithms raises the opportunity to extend the use
of Design Intent to make the hull design tool more productive for novice users while still allowing a
level of reconfiguration that will support the refinements that expert users require.

References

BENSON, F. W. (1940), Mathematical Ships’ Lines, Trans R.I.N.A. 82

BLOOR, M.I.G.; WILSON, M.J. (1990), Using Partial Differential Equations to Generate Free-Form
Surfaces, Computer Aided Design 22

BOLE, M. (1997), Parametric Generation of Yacht Hulls, Final Year Project, University of Strathclyde,
Glasgow

BOLE, M. (2005), Integrating Parametric Hull Generation into Early Stage Design, COMPIT,
Hamburg

BOLE, M. (2010), Interactive Hull Form Transformations using Curve Network Deformation,
COMPIT, Gubbio

BOLE, M. (2012), Revisiting Traditional Curve Lofting to Improve the Hull Surface Design Process,
COMPIT, Liège

BOLE, M. (2014), Regenerating Hull Design Definition from Poor Surface Definitions and other
Geometric Representations, COMPIT, Redworth

BOLE, M. (2016), X-Topology Surface Design, http://www.polycad.co.uk/xtopology.php

GAMMA, E.; VLISSIDES, J.; JOHNSON, R.; HELM, R. (1994), Design Patterns: Elements of
Reusable Object-Oriented Software, Addison Wesley

GRESHAKE, S. H.; BRONSART, R. (2018), Application of subdivision surfaces in ship hull form
modelling, Computer-Aided Design 100, pp.79-92

HARRIES, S. (1998), Parametric Design and Hydrodynamic Optimization of Ship Hull Forms, Ph.D.
Thesis, TU Berlin; Mensch & Buch Verlag

HARRIES, S.; ABT, C. (1997), Parametric Curve Design Applying Fairness Criteria, Int. Workshop
on Creating Fair and Shape-Preserving Curves and Surfaces, Berlin/Potsdam

JORDE, J-H. (1997), Mathematics of a Body Plan, The Naval Architect, January

KERWIN, J. (1960), Polynomial Surface Representation of Arbitrary Ship Forms, J. Ship Research
4/12

KHAN, S.; ERKAN, G.; DOGAN, K.M. (2017), A novel design framework for generation and
parametric modification of yacht hull surfaces, Ocean Engineering 136, pp.243–259

LETCHER, J.S.Jr.; SHOOK, D.M.; SHEPARD, S.G. (1996), Relational Geometric Synthesis: Part 1
– framework, Computer-Aided Design 27/11, pp.821-832

ORVIETO, A. (2014), Development of Parametric Planing Hull Design Features, COMPIT, Redworth

SEDERBERG, T. W.; ZHENG, J.; BAKENOV, A.; NASRI, A. (2003), T-splines and T-NURCCs,
ACM Trans. Graph. 22/3, pp.477-484

STANDERSKI, N. E. (1988), The Generation and Distortion of Ship Surfaces Represented by Global
Tensor Product Surfaces, Dissertation, TU Berlin

VON KERCZEK, C. (1961), The Representation of Ship Hulls by Conformal Mapping Functions,
J. Ship Research 13/4

Development of Real-Time Emergency Response Training Simulator for
Collective Ship Crews Based on Virtual and Mixed Reality
WooSung Kil, Korean Register, Busan/Korea, wskil@krs.co.kr
Seokho Byun, Korean Register, Busan/Korea, shbyun1@krs.co.kr
Jeong-yeol Lee, Korean Register, Busan/Korea, jylee@krs.co.kr
Myeong-Jo Son, Korean Register, Busan/Korea, mjson@krs.co.kr

Abstract

We present an emergency response ship training simulator for collective ship crews based on virtual and mixed reality in the ship environment. Our simulator provides methods to perform familiarization training of the emergency response regulations specified in the ISM Code and shows how collective crews deal with emergency situations and collaborate with each other in a virtual ship environment in real time. In this paper, we describe how to build training scenarios, apply them in a virtual reality ship environment, and control the training session for the evaluators when performing multi-participant crew emergency response training. In particular, by presenting a method of training control and monitoring using a mixed reality device, we examine how the rapidly developing mixed and virtual reality devices can be applied to crew training simulators.

1. Introduction

Recently, the maritime industry has continued its efforts to combine ship construction, maintenance drills, and inspection, which last for the lifetime of a ship, with virtual reality and mixed reality training systems. Virtual reality has already become an important tool for design verification at the ship construction stage and for maintenance and familiarization drills at the ship operation stage. Workers in the maritime industry are also gradually increasing their understanding of the use of virtual reality content and experiences. At the 5th HTW sub-committee of the International Maritime Organization, it was reported in "Information on effectiveness of closed area training using virtual reality" (IMO, 2018) that training through VR was sufficiently effective in the experiment, and the information document "Introduction to virtual reality-based simulator for sailor training and analysis of functional requirements" showed that a virtual reality-based ship crew training simulator is compatible with the functional requirements for simulators required by STCW, IMO (1978).

In this paper, we introduce a collective ship crew training system for onboard emergency response training that uses virtual and mixed reality technology, building on the industry trends and awareness described above. The system is intended to perform its functions equivalently to real onboard training conducted by collective ship crews, and the training is compatible with the International Safety Management (ISM) Code, which regulates the management systems that address maritime safety, marine accident prevention and marine environment protection.

To accomplish these objectives, we use a common framework for building a ship-based virtual reality environment from 3D CAD data, Kil et al. (2018a). The common framework covers the steps needed to build a realistic virtual ship environment: geometric error correction, simplification, image texturing, and pre-processing such as global lighting and grouping of the 3D CAD model. After establishing the virtual reality ship environment, we configure a network message processing server so that collective ship crews can access the training stage simultaneously. In addition, emergency response training scenarios were established to enable the collaboration required for the onboard emergency response training mandated by the ISM Code.

In addition, we present a mixed reality (MR) based training control module that lets evaluators and reviewers collaborate in real-world space to monitor and control the training session using the latest MR devices.

2. Related Work

In the shipbuilding and offshore plant industries, efforts are being made to apply VR technology for various purposes, such as cost reduction during the construction process and training for safe operation. Korean Register developed a survey training simulator that visualizes ship construction rules, inspection guidelines and inspection records for each part of the ship based on virtual reality, Kil et al. (2018b). The ship survey training simulator provides a detailed model containing hull members, equipment, pipes and other components derived from the 3D CAD model, and supports operations such as custom mark-up, lantern lighting, spraying, and camera shooting. A 3D CAD model of a ship has also been built for use in the shipbuilding process with the FORAN solution, Fernández and Alonso (2015). In our study, we used 3D CAD models from a shipyard that had been created for the design and construction of an actual ship.

Regarding the operational aspect of a ship, VR simulation technology has been studied in various ways. OMS-VR provides familiarization and operation training for on-board crews based on virtual reality, http://oms-vr.com/. In an advanced graphics environment, trainees can manipulate on-board equipment and carry out firefighting drills using VR controllers. LR, focusing on crew safety, developed a simulator that lets ship crews train in identifying hazards during ship operation and visually examine cases of safety failure, LR (2017). The system is designed so that a hypothetical character can visualize the consequences of a hazard in physical form and be alerted to dangerous situations. RENK provides educational content with which users can examine the internal layout and the manual of a ship propulsion system; it is designed so that many users can receive instruction in the virtual space under the guidance of a trainer, RENK (2018). NewGen I&S explains the operating concept of a ballast water treatment system (BWTS) based on virtual reality, provides functions to operate the BWTS, and visualizes the manual for each part of the device, Park et al. (2018).

Fig.1: System Operation Concept

From the standpoint of VR technology itself, the design of a multi-player platform for VR gaming, Liszio and Masuch (2016), and facility management using a multiuser shared VR environment, Shi et al. (2016), have been presented. In the following, we explain our immersive ship VR environment, derived from actual shipyard 3D CAD models, and how related parties such as the ship operator, the crew and surveyors of classification societies can interact with each other in this environment.

3. System Overview

3.1. System Operation Concept

The operating concept of our real-time emergency response training simulator is shown in Fig.2. Each trainee wears a headset attached to a networked computer and connects to the virtual training target ship. The network control server processes multiple user connections and receives, transmits, and broadcasts the messages originating from each user. Training information such as each user's movement path, the training records, the scenarios, and the 3D data is stored in the data server. Various kinds of scenarios can be selected through the training control server, such as personal vessel familiarization training and group collaboration training. On the training control server, the training evaluator can perform training debriefing and evaluation by referring to the training information stored in the data server.

Meanwhile, the training status and information are sent to the training control server, where they can be observed by training evaluators in real time. By wearing a mixed reality device, the training evaluators can identify and control the positions and motions of the trainees on a model of the training vessel placed in real space. Several evaluators can also review the whole training situation together with their peers.

Fig.2: System Operation Concept

3.2. Building Virtual Ship Environment

In this paper, the pre-processing pipeline proposed by Kil et al. (2018a) was used to construct the virtual reality ship environment for crew emergency response training. Converting a ship 3D CAD model into a virtual reality model involves pre-processing steps such as geometric correction, simplification, material processing, and global light source rendering. The ship 3D model consists of millions of objects, such as structural members and equipment, and processing them manually would take far too long, so batch operations are required to process large amounts of data at once. To this end, we used the scripting tools of Blender3D, an open-source 3D graphics tool, to perform semi-automatic tasks such as geometric correction, grouping, material handling, and texturing.
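The paper does not reproduce the actual scripts. Purely as an illustration of such a batch step, the following minimal sketch uses Blender's Python API (bpy) to decimate every mesh object and assign a shared material; the material name and decimation ratio are hypothetical, not the authors' settings.

# Minimal sketch of a Blender batch pre-processing pass (assumed workflow,
# not the authors' actual script); run inside Blender's Python environment.
import bpy

# Hypothetical shared material for a group of hull parts
mat = bpy.data.materials.new(name="HullSteel")

for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    # Simplification: add a decimate modifier to reduce the face count
    mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
    mod.ratio = 0.3  # keep roughly 30% of the faces (illustrative value)
    # Material handling: assign the shared material to every mesh
    if obj.data.materials:
        obj.data.materials[0] = mat
    else:
        obj.data.materials.append(mat)

A pass of this kind can be run over thousands of objects at once, which is what makes the semi-automatic workflow described above feasible for models of this size.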

Fig.3: Virtual Reality Based Ship Environment

3.3. Training Scenario

In our system, the training scenarios were defined around the emergency response training required by the ISM Code. The scenarios include firefighting and evacuation, helicopter rescue operation, stranding, accommodation fire, flooding, and cargo fire drills. Each scenario defines roles, places, possessed items, motions, and scripts. A defined scenario is stored in the data server, from which it is read by the scenario parser of the training control server, as shown in Fig.4, to provide the information used in actual training.

Table I: Scenario Definition Structure
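The content of Table I is not reproduced here. As an illustration only, a scenario record with the fields named in the text (role, place, possessed item, motion, script) might be serialized as follows; all field names and values are hypothetical, not the authors' actual schema.

# Hypothetical scenario record matching the structure described in the
# text; illustrative only, not the authors' schema.
fire_drill_scenario = {
    "name": "firefighting_and_evacuation",
    "steps": [
        {
            "role": "fire_team_leader",
            "place": "engine_room_entrance",
            "item": "fire_extinguisher",
            "motion": "spray",
            "script": "Report the fire location to the bridge.",
        },
        {
            "role": "crew_member",
            "place": "muster_station_A",
            "item": "lifejacket",
            "motion": "assemble",
            "script": "Confirm the head count at the muster station.",
        },
    ],
}

A record of this shape can be stored in the data server and walked step by step by the scenario parser, assigning each participant the role, place, item and scripted action for the current stage of the drill.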

Fig.4: Scenario Processing Concept

3.4. Collective Crew Processing

In order to perform emergency response training for collective ship crews simultaneously in real time, as described in Section 3.3, we need a processing scheme that handles the event messages produced and consumed by each user.

In this system, we built a network control server that processes messages in a non-blocking, asynchronous manner, so that message processing does not stall under heavy communication between users. The Unity3D engine used in the basic rendering module communicates with the network control server through the Socket.IO library, which provides an interface to Node.js. Each user requests information from, and sends events to, the network control server, which broadcasts each message to the other clients or generates relevant messages according to the message received.
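The authors' server runs on Node.js with Socket.IO. Purely to illustrate the broadcast pattern of Figs.5 and 6 below, the following sketch uses the python-socketio package instead (a substitution for illustration only), and the event name is hypothetical.

# Illustrative broadcast server; the actual system is Node.js-based,
# this Python sketch only demonstrates the same pattern.
import socketio
from aiohttp import web

sio = socketio.AsyncServer(async_mode='aiohttp')
app = web.Application()
sio.attach(app)

@sio.event
async def update_player(sid, data):
    # Hypothetical event: a trainee reports a new position/pose.
    # Broadcast to every other connected client, skipping the sender,
    # so all crew members see each other's avatars move in real time.
    await sio.emit('update_player', data, skip_sid=sid)

if __name__ == '__main__':
    web.run_app(app, port=3000)

Because the server never blocks on a single client, a slow or disconnecting trainee does not stall the message flow for the rest of the crew, which is the property referred to in the text as non-blocking, asynchronous processing.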

Fig.5: Basic Communication Structure of Network Control Server

Fig.6: Collective Crew Processing in Case of Updating Player

3.5. Mixed Reality Based Training Control and Monitoring

In our crew training system, a mixed reality system is applied instead of a conventional display monitor to visualize the training status and to monitor or review the training situation in real time. By wearing a mixed reality device, the evaluator can analyse and monitor the training situation together with his or her colleagues, and can explore the 3D model space matched to the real-world space, which increases immersion. Table II describes the specifications and features of the Microsoft HoloLens used in this paper.

Table II: Microsoft HoloLens Spec. and Features

Fig.7: Conceptual Picture Which Describes Peer Evaluation

4. Development Result

The virtual reality-based collective crew training simulator introduced in this paper provides the features and functions listed in Table III. Fig.8 shows screenshots from part of the firefighting drill scenario. Trainees perform tasks by interacting with characters and other users, solving pop-up quizzes during training, and acquiring relevant information. In addition, as shown in Fig.9, trainees can carry out self-directed ship familiarization drills, with the ship structure and the associated instructions or guidelines matched to the virtual reality ship model in the form of text, images and video.

Table III: Features of the VR/MR-based Collective Crew Training Simulator

5. Conclusion

Virtual reality and mixed reality technologies, which have recently become prominent, are expanding into training simulators that support maritime safety and the protection of the marine environment. In this paper, we presented the results of implementing the emergency response training simulator required by the ISM Code using virtual and mixed reality. In future work, we will identify the training effects and improvement requirements of the virtual reality system and develop it further so that it can be readily applied to actual maritime training.

Acknowledgements

This research was performed as a part of the research project below and supported by the organizations
indicated. We acknowledge and appreciate the support provided.

- “Development and commercialization of safety education and training VR contents of sailor using virtual reality technology” project funded by the Ministry of Science and ICT of Korea (No. S0602-17-1016)

Fig.8: Activities during firefighting and evacuation drill

Fig.9: Ship familiarization drill

References

FERNÁNDEZ, R.P.; ALONSO, V. (2015), Virtual Reality in a shipbuilding environment, Advances in Engineering Software 81(1), pp.30-40

IMO (1978), International Convention on Standards of Training, Certification and Watchkeeping for
Seafarers (STCW)

IMO (2018), Sub-Committee on Human Element, Training and Watchkeeping (HTW 5)

KIL, W.S.; SON, M.J.; LEE, J.Y. (2018a), Development of Multi-Purpose VR Simulator for a Ship from 3D CAD Model, 17th COMPIT Conf., Pavone

KIL, W.S.; SON, M.J.; LEE, J.Y. (2018b), Development of VR Ship Environment for the Educational Training of Ship Survey, J. Society of Naval Architects of Korea 55(4), pp.361-369

LISZIO, S.; MASUCH, M. (2016), Designing Shared Virtual Reality Gaming Experiences in Local
Multi-platform Games, Cham, pp.235-240

LR (2017), Results of Lloyd's Register's (LR) Virtual Reality (VR) Safety Simulator and gaming experience at SPE Offshore Europe proves more needs to be done in training, https://www.agcc.co.uk/news-article/results-of-lloyds-registers-lr-virtual-reality-vr-safety-simulator-and-gaming-experience-at-spe-offshore-europe-proves-more-needs-to-be-done-in-training

PARK, S.K.; SONG, C.U.; LEE, D.J. (2018), Achievement of Digital Twin using Shipyard's 3D Design Model, Conf. Korean Society of Marine Engineering

RENK (2018), Propulsion Simulator, https://www.renk-ag.com/fileadmin/Unternehmen/Aktuelles/Presse/2018/RENK_P6_WindEnergy_EQ-Gear_de.pdf

SHI, Y.M.; DU, J.; LAVY, S.; ZHAO, D. (2016), A Multiuser Shared Virtual Environment for Facility
Management, Procedia Engineering 145/1, pp.120-127

Perspective-Taking in Anticipatory Maritime Navigation –
Implications for Developing Autonomous Ships
Mikael Wahlström, VTT Technical Research Centre of Finland, Espoo/Finland,
mikael.wahlstrom@vtt.fi
Deborah Forster, UCSD, San Diego/USA, dforster@eng.ucsd.edu
Antero Karvonen, University of Jyväskylä, Jyväskylä/Finland, antero.karvonen@icloud.com
Ronny Puustinen, University of Jyväskylä, Jyväskylä/Finland, ronny.y.puustinen@student.jyu.fi
Pertti Saariluoma, University of Jyväskylä, Jyväskylä/Finland, pertti.saariluoma@jyu.fi

Abstract

We consider the development of autonomous ships and programming-based navigation by exploring and describing communicative and predictive elements in navigational risk-minimization. Maritime
pilots (n=6) and other professional mariners (n=2) were interviewed and observed actualizing
navigational tasks at a ship simulator. The results suggest that expert navigators envision the possible
future locations of nearby ships in view of assessment regarding the other ships’ communicative
capabilities, situational awareness, tasks, manoeuvrability and predictability. Overall, navigation
involves perspective-taking, which is a basic feature of social cognition. In discussing design
implications, we propose that similar features could be implemented to autonomous ship navigation.

1. Introduction

Maritime traffic consists of various actors – ships for leisure, transport, security, and beyond – each
with their own aims and capabilities to operate and communicate in varying weather conditions. In this
potpourri of changing elements and crisscrossing vessels, seafarers manage to bring their ships from
the open seas to the safety of harbours.

Remote-controlled and autonomous ships, with no crew on board, have been the focus of active research and development in recent years, by both governmental and industry actors. Key arguments in support of this technology transition include: 1) increased safety (if accidents happen, the crew will not be in danger), 2) fuel saving and emission reduction (sleeker and lighter ships can be designed when there is no need for life-support infrastructure, such as ventilation, heating, and sewage systems), and 3) the creation of new, interesting careers (the possibility to go home after work, instead of long voyages at sea) (Levander, 2017).

When designing these remotely controlled and monitored ships, the division of labour between human and technology is an open question. Impressive navigational AI (artificial intelligence) is already familiar from computer games, and real-life progress is taking place in the automobile industry, where lane assist and automatic braking pave the way to fully autonomous cars. Connectivity, sensor technologies and machine vision (i.e., AI-based object detection) are some of the key technologies enabling this expected technological change. The question therefore is: when should the human make operational decisions and intervene, and when should the machine make the decisions? More responsibility for the machine would imply a stronger business case for autonomous ships: with less human intervention involved, an operator or a group of operators could monitor a larger fleet, implying somewhat reduced hiring costs. While the answer to this question will perhaps ultimately reveal itself only after sufficient real-life trials, our approach is to study human operators' work and envision the division of labour by identifying aspects of navigation that could be hard to implement through programming and the technological sensory arrangements available.

Some studies already touch on what is lost in transitioning from on-board control and monitoring to remote operations in the maritime domain. Without the presence of physical bodies, there is a lack of 'ship sense' that allows operators to feel wave and wind conditions as well as engine noises, and to make navigational decisions accordingly (Man et al., 2014; Porathe et al., 2014).

In this study, we look at an issue that has been considered in the development of autonomous cars (Müller et al., 2016) for a while, but not so much in the maritime domain: how does an intelligent vessel adjust its behaviour by considering the intentions and other features of the other voyagers? This adjustment of one's own navigation by anticipating the actions of others is a relevant focus of study in the maritime domain. Sea traffic can be seen as a social system consisting of actors that consider and anticipate each other for the purpose of navigational decision-making. In human behaviour, this relies on innate social capabilities, but present-day AI solutions do not enable human-like perspective-taking, that is, viewing a situation from the point of view of another person.

Our study explores the communicative and predictive elements of navigational risk-minimization as practised by maritime pilots and other professional mariners. Maritime pilots are seasoned shiphandlers who assist ships of varying types into harbour.

Our study indicates that pilots strive to minimize risks in navigational decision-making by envisioning where the other ships could be in the near future: the aim is to arrange ample distances between one's own ship and the others. More distance is generally better, provided that the voyage to the destination is not compromised. This anticipation and arrangement of sufficient distances includes different forms of communication and perspective-taking. We see that these considerations pave the way for developing technologies that facilitate navigation in the maritime domain, which could benefit the development of autonomous ships.

1.1. The act of marine navigation

Understanding how professional sailors navigate their ships has implications for the design of autonomous vessels. By understanding this task, we are better placed to consider the proper division of labour between intelligent machine technologies and human remote operators, that is, to understand which human behavioural and cognitive functions could (and should) be carried out through machine-based solutions.

The existing literature (Prison et al., 2013; Wahlström et al., 2016) suggests that ship operations can be viewed as a joint cognitive system (Hollnagel and Woods, 2005): the crew, the technology and the environment operate as a single operative unit, and looking at the separate parts of the ship system is insufficient. Since decision-making and perception manifest as a technology-mediated group effort, the concept of distributed cognition is also apt here: Hutchins (1995) describes how the navigation of a large military ship takes place through various work roles among the crew and through the interpretation of various representational tools (e.g., landmark sightings, radar and maps).

Indeed, perceiving the environment is essential in navigation. This consists of combining the map-based bird's-eye view of the environment with the point of view from on board the ship, interpreting a landscape of navigational aids such as lighthouses, marked beacons and buoys. The capability to note relevant information at sea is sometimes called the 'seaman's eye' (Crenshaw, 1965). Prison et al. (2013), however, also emphasize the importance of embodied feel in so-called ship sense, wherein visual perception is only one element. The rocking of the ship and the vibration of the engine are felt in the bodies of the sailors. This haptic feedback may indicate problems, as 'wrong kinds of' noises and vibrations are felt by the mariners. With a well-developed ship sense, some ships can also be steered along the waves, for a more efficient, safe and pleasant journey (Porathe et al., 2014).

Prison et al. (2013) note that maritime navigation includes balancing acts. For instance, sufficient but not excessive speed is necessary: too little and the ship is impossible to steer, while too much could cause unwanted rocking for the crew and difficulties in evading obstacles. These authors suggest that the notion of balance, combined with the joint cognitive nature of operations, implies that marine navigation could be described as a striving for harmony. The environment, the ship and the crew all have to be in a harmonious balance with one another. In the model offered by Prison et al., the components of 'harmony' can be categorized as follows: 1) the environment consists of the broad context and the more specific situation, 2) the vessel consists especially of inertia and navigational instruments, and, finally, 3) the human element consists of spatial awareness, theoretical knowledge and experience. Theory and experience go hand in hand, since experience provides understanding of the physical forces affecting the ship in naval shiphandling (Crenshaw, 1965). Generally, ship-handling theory involves lay theories of physics.

Drawing from a description of recreational sailing as a skill (Murphy, 2010), harmony could also be seen as handling complexity with aesthetic feeling. A sailing ship progresses through hydro- and aerophysical phenomena that are arguably so complex that the underlying mathematics cannot be fully grasped by the shiphandler in each specific situation. If it feels good and looks good, however, there is a good chance that the sails are set efficiently as well. Indeed, when the sails are not flapping unpleasantly and the ship heels at an appropriately pleasant angle, the ship usually also proceeds efficiently. In engine-driven cargo ships, which tend to be noisy work environments, harmony as an aesthetic feeling manifests perhaps less as pleasantness and more as a lack of distinctive unpleasantness.

Overall, the existing literature recognizes marine navigation as an activity in which perception, theoretical knowledge, expertise, tacit feeling and collaboration fuse together to achieve balance between human, environment and technology.

1.2 Marine traffic coordination and perspective-taking

It was discussed above that technology and crew operate as a unified entity in coordination with the dynamically changing environment. The ship is a distributed cognition system working as one, but could marine traffic as a whole also be viewed as a relatively unified and 'harmonious' system in itself? We will first identify the basics of how marine traffic coordination manifests and then consider the issue of perspective-taking in how mariners take each other's ships into account in their navigational decision-making.

Firstly, ships' movements are coordinated through collision regulations (COLREGs) that somewhat resemble automobile traffic rules (e.g., give way to the one coming from the right, in right-sided traffic). However, contextual understanding and the shiphandler's judgement are required: while there are specific rules, more generic guidelines exist as well. As indicated by COLREGs Rule 8 a) (Navy JAG, n.d.): “Any action to avoid collision shall be taken in accordance with the Rules of this Part and shall, if the circumstances of the case admit, be positive, made in ample time and with due regard to the observance of good seamanship”.

Secondly, there are various forms of ship-to-ship communication, such as flags, speech via VHF radio, lights and the horn. Research on ship-to-ship and ship-to-shore communication has especially explored the use of language in the highly international context of nautical activity (Bocanegra Valle, 2011). Standard Marine Communication Phrases exist to facilitate communication between actors without high proficiency in English.

Thirdly, reading other ships' intentions is a distinctive aspect of marine navigation. Communication modalities entail elements that facilitate this act of reading: marine flags indicate ship intentions, and the expression of intent is an explicit part of the Standard Marine Communication Phrases. Furthermore, COLREGs (Navy JAG, n.d.) Rule 8 b) instructs that “[a]ny alteration of course and/or speed to avoid collision shall, if the circumstances of the case admit, be large enough to be readily apparent to another vessel observing visually or by radar; a succession of small alterations of course and/or speed should be avoided.” This implies that a ship's movements should be sufficiently distinctive to express intention (e.g., instead of small gradual turns towards a certain direction, one big distinctive turn expressing the intention to turn).

However, there may be more than meets the eye in between-ship interaction. One may draw a contrast with how humans interact. There has been plenty of research on how individuals coordinate their joint efforts through explicit communication, shared rules and procedures, but also through subtle cues such as nods and bodily orientation (Heath and Luff, 2000). Teams of operators work as one coherent entity as individuals anticipate one another's doings both through explicit communication and through more peripheral and implicit cues.

In this study, we apply the concept of 'perspective-taking' to exploring marine navigation. It is a psychological term signifying the capability of viewing the world from the vantage point of another individual. We choose this concept because it encompasses the clearly present phenomenon of 'reading intentions' but, being wider in scope, also opens avenues for considering other ways of taking the other ships into account in marine activities. In the context of board games and in studies of expertise, perspective-taking seems to increase the likelihood of predictive reasoning, i.e., taking into account the possible future moves of the other players (Saariluoma, 1995; Zhang et al., 2012). Something similar could take place in marine navigation, which is partly map-based, resembling many board games. The phenomenon of perspective-taking has been theorized and identified by a number of influential psychologists, such as Köhler (1917/1957), Wertheimer (1945), Piaget and Inhelder (1967), Gibson (1979) and Neisser (1976), the message being that perspective-taking develops in early childhood as infants realize that other people have their own minds and mental experiences. It is also something that is learnt and a component of expertise (De Groot, 1965; Newell and Simon, 1972). It is a clearly human cognitive capability and therefore an interesting vantage point for inquiry in the context of a possible future technological transition to intelligent autonomous ships.

Overall, how anticipation and coordination with the other ships take place seems to be a relevant focus of research, and it can be viewed by applying the concept of perspective-taking. The perspective itself can be explicated by investigating the mental representations of the sailors.

1.3. Remote control and monitoring of autonomous ships

In remote operation (or teleoperation), operation takes place without direct human sensory contact with the machine. Video camera feeds, sensors and GPS tracking are among the usual means of remote operation (Sheridan, 1989). The benefit of remote operation is that it provides a safer and more pleasant work environment by avoiding noise, extreme temperatures, radiation, vibration or other possible hazards caused by the machines or their surroundings. This can be highly beneficial in certain fields, such as mining and space operations.

When it comes to seafaring, there are different concepts and ideas for remote operation. For instance, in a large EU-funded research programme, it was presumed that commercial cargo ships could be unmanned and remotely monitored during the trans-oceanic phases of their voyages, but would carry crew for harbour operations (Rødseth et al., 2014).

There is no single established solution for the way in which such ships would be operated. Three categories of control for autonomous ships have been distinguished by Porathe et al. (2014). First, indirect control refers to updating the voyage plan during the voyage; this could be necessary, say, due to weather changes. Second, direct control refers to ordering specific manoeuvres, such as giving way to officials during a rescue operation. Third, situation handling refers to bypassing the autonomous system, so that the rudder and thrusters are controlled directly by a remote operator.
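Purely as an illustration of this taxonomy (not part of Porathe et al.'s work), the three categories could be encoded as a control-mode dispatcher in a remote operation interface; all names and payload fields below are hypothetical.

# Illustrative encoding of the three control categories distinguished by
# Porathe et al. (2014); names and command payloads are hypothetical.
from enum import Enum, auto

class ControlMode(Enum):
    INDIRECT = auto()   # update the voyage plan (e.g. weather re-routing)
    DIRECT = auto()     # order a specific manoeuvre (e.g. give way)
    SITUATION = auto()  # bypass autonomy; steer rudder/thrusters directly

def handle_command(mode: ControlMode, payload: dict) -> str:
    if mode is ControlMode.INDIRECT:
        return f"voyage plan updated: {payload.get('waypoints')}"
    if mode is ControlMode.DIRECT:
        return f"executing manoeuvre: {payload.get('manoeuvre')}"
    return f"manual control: rudder angle {payload.get('rudder_deg')} deg"

# Example: a remote operator orders a give-way manoeuvre
print(handle_command(ControlMode.DIRECT, {"manoeuvre": "give_way_stbd"}))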

It has been suggested that ship teleoperation might involve certain challenges (Wahlström et al., 2015). These include, first, limited situation awareness due to a reduced sense of the ship (Porathe et al., 2014). In teleoperation there is no bodily feeling of the ship rocking, and the view outside, even if relayed via camera feeds, may provide only a limited understanding of the conditions. Second, there could be information overload (Porathe et al., 2014) due to the plurality of ships and ship sensors. The shore control centre (SCC) workers could be exposed to so much information that they would no longer be able to understand the situation at sea. Third, if the ships were monitored from a far distance, there might be skill attrition (Jalonen et al., 2017): in the course of time, the remote operator, even with an extensive maritime background, might experience deskilling and lose some of the tacit knowledge of seafaring.

The topic discussed in this paper relates to the third challenge above: if the remote operator has only limited (recent) experience of being on board a ship, there might be limited knowledge of how ships are operated on board, which might translate into an inability to perform maritime perspective-taking.

1.4. Research aim

Drawing from the introduction above, this study explores how 'ships' (i.e., their crew or the shiphandler present on the bridge) consider the other ships in ensuring safe passage. Our working hypothesis is that perspective-taking takes part in marine navigation and allows relatively coordinated marine traffic, because it provides the capability to anticipate other ships' movements. Reflecting the exploratory and qualitative nature of our study, this working hypothesis serves as a rough guiding notion for interpreting and analysing the data, rather than as a precise assumption about an unequivocally defined research question.

2. Methods, data and analysis

A simulator setting was applied for data collection, since this allows challenging situations to be studied more efficiently than observations at sea. The study subjects were experienced maritime pilots (highly experienced shiphandlers, former ship captains) (n=6) and other professionals (n=2), all high-grade licensed mariners. On average, they had 20 years of experience (between 4 and 40 years) in professional shiphandling.

The data collection procedure in the simulator has been reported elsewhere (Saariluoma et al., 2019). In this study, however, we use the interviews that took place after the simulator sessions; every participant was interviewed in a semi-structured manner. The interview scheme was inspired by the core-task analysis method (Norros, 2004) in that the participants were asked about uncertainty, complicated situations and aspects of work, the need for quick decision-making, as well as challenges and good practices in general. The transition to autonomous ships and the potential challenges in the operation of these machines (Wahlström et al., 2015) were considered in the interviews as well.

In practice, contextual interviews (Holtzblatt and Jones, 1993) were applied, as the interviews were tied to the events that took place in the simulator. The data was analysed through fact-identification (Alasuutari, 2007) with an element of the so-called grounded theory approach (Flick, 2018). In other words, the working hypothesis of 'perspective-taking' emerged from the joint discussions between participants and interviewers; the data was then considered in view of how and why 'perspective-taking' manifests in navigation. The interviews were conducted in Finnish and were transcribed into text by professional transcribers. For the purpose of this publication, snippets of the transcriptions were translated into English by the authors.

3. Results and discussion: perspective-taking in anticipatory marine navigation

Our contextual interviews expose different ways in which perspective-taking seems to take part in marine navigation. It is needed for anticipating the movements of the others and it influences communication as well. Anticipation includes comprehensively imagining where the other ships will be, or ought to be, at a near-future point in time; this is also something to be influenced through negotiation. A pilot (interviewee #3) explains:

“Yeah, and I would like to figure out the overall general view of a situation as soon as possible.
If there is someone coming, I'd like to take contact as soon as possible to figure out where you
are going, what’s going on, lets meet there and there, is it okay that we meet and pass like this.
When we both know well ahead of time there is seldom any panic in the end.”

Another pilot (#8) describes a direct link between anticipation and knowledge of the others. An
additional description of navigation as involving negotiation and planning is given as well.

“And with them usually, especially with encountering traffic, quite often we agree on the
meeting point, or it is at least it is agreed where we will not meet. And if I’m having a difficult
ship, like we now [at the simulator], usually it is, I would say, so that I negotiate with the other
piloted ships. Then it is really easy, as we know each other and know what who is thinking, we
have our own individual practices.”

The reasons why planning and negotiation are important in navigation include the slow manoeuvrability of large ships: ”Everything happens extremely slowly, so we should be able to anticipate from very far away“, as explained by the pilot (#8). Furthermore, the ships are not always well known to the maritime pilots, as they board and guide many different kinds of vessels to harbour. As the ship's behaviour is not fully known to the pilot, there is a need to play it safe through planning, which involves enlarging the distances between ships. The interviewees explained to us that although the captain on board in principle retains the final responsibility for the ship, in practice it is common that the ship's crew gives total command to the pilot, even though he or she only visits the ship for a short period. In addition to knowing the local area, the pilot can be much more experienced than anyone among the ship's crew.

We propose that ship navigation includes profiling the other ships in the sea area: their type, manoeuvrability, situational awareness, communicative capabilities and aims are all considered. For instance, recreational sailing ships were discussed: it is usual that they do not listen to the radio, they manoeuvre differently from other ships, and they do not necessarily follow the situation as meticulously as professional sailors.

Profiling, however, is only part of perspective-taking, as the experienced seafarers also imagine the situations of the other ships. For instance, it was discussed that sailing ships involved in a competition could have diminished overall situational awareness: they may focus on the competition-relevant aspects of seafaring and less on generic safety. They might be looking at the competing sailboats, but not so much in directions irrelevant to the competition.

While ship-profiling provides insight on how to communicate (e.g., with sailboats, you tend to communicate with the horn rather than with the AIS-radio), perspective-taking, in including situational considerations, also guides when to communicate. A pilot (#8) explains that in hurried situations communication might not be feasible, because in such a situation, with a stressed crew, mishaps may take place. Knowing when to communicate, not too soon and not too late, is part of expertise.

”Yeah, it is more like that you are trying to prevent the [difficult] situation, so that there would
not be any situation. When the [difficult] situation is going on, then communication no longer
works. If both ships are a bit nervous, there is lots of possibility for misunderstandings, so then
you should at least be able to very clearly express your own intentions and wishes from the
neighbour [the other ship], as unambiguously as possibly. But communication when the
[difficult] situation has not yet been actualized, it is pretty good there. But if you shout way too
early, nobody really cares, or they are like, yeah, yeah.”

Inability to understand the other ships' intentions and behaviour creates uncertainty. This manifests in the interviews in somewhat varying ways. As described above (see the second interview snippet, on negotiation and planning), an interviewee views positively the notion that he or she knows how the other pilots think and is familiar with their work practices. In contrast, the concept of autonomous ships sparks uncertainty. The following comment is an explanation of why the interviewee (#8) would maintain a long safety distance to autonomous ships.

”No but with other ships you can discuss and agree something, and then you have the courage
to go much closer, but if you can’t discuss with that fellow, or you know that a computer makes
decisions there then you don’t necessarily wanna go very close to any that kind of place where
there is not enough space to turn strongly to some direction.”

Another interviewee (#2) states that illogical behaviour of the other ships is one of the main sources of uncertainty in marine activities.

“In fact it is logicality, how one drives, that provides most certainty [in marine navigation], and
that illogicality how they are driven that provides most uncertainty, let’s say so.”

Similarly, when discussing what kinds of situations could be difficult for an intelligent autonomous ship, interpreting ambiguously behaving ships was mentioned. The interview snippet below also portrays the researcher bringing forth the idea of perspective-taking (although at that point it was not yet the focus of the research); the interviewee (#3) agrees that this is important and elaborates it further by giving an example that relates to the situation at the simulator.

Interviewee: “It is like that, of course, those are probably difficult, as well as, for human, those
when they behave ambiguously, those like that motor boat, comes towards you. Although there
is no threat to you, you are afraid, pieces of fibreglass. When you can see that [the other ship]
does not behave in the usual manner.”

Researcher: “So, can we summarize as such: when you have to empathize with the other and
try to imagine its intentions?”

Interviewee: “Yeah, and, for example, a good example was that when there was two trawlers
[fishing boats]. First question was that do they have a seine [a fishing net] in use. Because I
know that then they are extremely unwieldy and bad at turning. Usually a trawler can
immediately turn 180 degrees but when they have the fishing net in use, then it is just mandatory
to take that into consideration.”

Finally, it is discussed that understanding and interpreting the other ships' behaviour and intentions is a skill acquired through experience in localized sea areas. Arguably, personal experience of handling various ship types is a central source of this skill, although it can also be conveyed through explanations (as in the excerpt below). Furthermore, the researcher leads another interviewee (#4) to think of navigation in terms of perspective-taking (or 'empathy', more specifically), and the interviewee agrees with this notion by using the Finnish word 'samaistumiskyky', which roughly translates as the ability to adopt the perspective of another person.

Researcher: “It was clear to you that it was trawling [fishing with a fishnet]?”

Interviewee: “Well I don’t know. Like I said, those fishing ship are to me, well occasionally I
identify trawlers, actually from the movement that they do [laughs].”

Researcher: “Yeah, it is like trawling in pairs.”

Interviewee: “Yeah. The Norwegians have taught me how it looks like. But like I said that I did
not [know] the notion that they did trawling in pairs, I didn’t actually think about it. I just
checked that a fishing boat. To me it was just a fishing boat in that situation. Two fishing ships
who were in that group.”

Researcher: “Do they talk about that in your training at all, that how do the others, or I mean
that, this is a fishing boat and that is different kind of a fishing boat, or something like this?”

Interviewee: “Not in training in Finland. I have gained the knowledge from those who have
been fishermen, so they have been able to interpret those fishing ships, but then it is, it relates
to that in Norway where there’s lots of fishermen, and many colleagues have been a fisherman
before, so they have that kind of knowledge that the usual Finnish sailor does not have
necessarily.”

Researcher: “Exactly. And that too what we are now talking about the behaviour of the
human being, in a sense, empathy, or that you will be able to understand what the other is
doing.”

Interviewee: “The ability to adopt the perspective of the other and to empathize, that kind.”

Fig.1 below summarizes the results; the aspects in Fig.1 are described above. The figure distinguishes between 'ship profiling' and 'situational assessment', although it is uncertain whether or how this distinction manifests in shiphandler cognition; it is a logical distinction rather than something that clearly manifests in our data. The causal links, represented by arrows in Fig.1, are likewise logical postulations reflecting the qualitative interview findings; that is, they are not statistical links based on well-operationalized and defined concepts. Maritime perspective-taking is learnt through experience on board ships or through stories of those experiences, and it feeds into communication, anticipation, and a sense of certainty. All of these, in turn, arguably take part in navigational decision-making. Communication with the other ships provides understanding of their situational status, which explains the causal arrow back to the 'situational assessment' box.

Fig.1: Model of maritime perspective-taking

4. Conclusions and design implications

In summary, a good navigator does not just plan a pathway to the destination. She or he aims at risk minimization, because failures and surprises are always possible at sea. This implies the aim of increasing the distances between ships where possible, which in turn requires considering not only your own pathways and aims but also where other ships will be, or might be, under different conditions. A good navigator, so to say, interprets the traffic setting at sea as a 'playing field of varying actors', all of which have their different aims, different degrees of freedom and perceived levels of reliability, and which can be communicated with in different ways; perspective-taking arguably takes part in these assessments. Navigation takes place by reading the traffic situation comprehensively rather than by merely planning one's own route.

Our study suggests that anticipatory and risk-minimizing navigation draws from assessments of the other ships' 1) communicative capabilities, 2) situational awareness, 3) tasks, 4) manoeuvrability and 5) predictability (i.e., how logically they are behaving). This raises an interesting question as to what degree an intelligent machine could actualize similar assessments. Perspective-taking is a characteristic of human social cognition, which could be challenging to fully implement with the machine learning tools and technology available today. However, ship identification and trajectory prediction for pre-categorized ships could arguably be actualized with the help of machine learning. These predictions and ship categories could be utilized by intelligent autonomous ship agents in their navigational decision-making and communication, perhaps in collaboration with a remote operator who could use such estimates as a navigational planning tool. An on-board shiphandler could find such navigational aids useful as well. In general, ship categories generated by expert sailors, featuring information content reflecting the elements in Fig.1, could help novice seafarers in ship-to-ship communication and in interpreting the overall marine traffic in the nearby sea area, which could facilitate anticipatory navigational decision-making.
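As an illustration only, the five assessment dimensions above could be encoded as a simple ship-profile record that a planning tool consults when enlarging safety distances. All class names, categories and numbers below are hypothetical and not part of the study.

# Hypothetical ship profile reflecting the five assessment dimensions
# named in the text; an illustrative sketch, not the authors' design.
from dataclasses import dataclass

@dataclass
class ShipProfile:
    ship_type: str                 # task/category, e.g. "trawler"
    communicative: float           # 0..1, likelihood of answering calls
    situational_awareness: float   # 0..1, how closely traffic is followed
    manoeuvrability: float         # 0..1, low for a trawler with nets out
    predictability: float          # 0..1, how logically it has behaved

def safety_distance(profile: ShipProfile, base_nm: float = 0.5) -> float:
    """Enlarge the planned passing distance for ships that are hard to
    reach, unaware, unwieldy or erratic (illustrative heuristic only)."""
    risk = 4.0 - (profile.communicative + profile.situational_awareness
                  + profile.manoeuvrability + profile.predictability)
    return base_nm * (1.0 + risk)

# Example: a racing sailboat that rarely listens to the radio gets a
# considerably larger passing margin than the 0.5 nm default.
sailboat = ShipProfile("sailboat", 0.2, 0.4, 0.6, 0.5)
print(round(safety_distance(sailboat), 2))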

Acknowledgements

This study is part of the D4Value research programme funded by TEKES (a Finnish governmental funding organization now known as 'Business Finland'). Special thanks also to Rolls-Royce Marine (Iiro Lindborg, Anssi Lappalainen, Rami Leponiemi, Anton Westerlund and Anu Peippo) for collaboration in the research, for setting the basic research dilemma, for co-planning the simulator sessions and for acquiring the research subjects. Funding was also provided by VTT Technical Research Centre of Finland Ltd, which also provided the simulator research setting, and by the University of Jyväskylä.

References

ALASUUTARI, P. (2007), Laadullinen tutkimus [Qualitative research] (6th ed.), Vastapaino

BOCANEGRA VALLE, A. (2011), The Language of Seafaring: Standardized Conventions and Discursive Features in Speech Communications, Int. J. English Studies 11/1, p.35

CRENSHAW, R. (1965), Naval Shiphandling, US Naval Institute

DE GROOT, A. (1965), Thought and choice in chess, Mounton

FLICK, U. (2018), Doing grounded theory, SAGE Publications

GIBSON, J.J. (1979), The Ecological Approach to Visual Perception, Houghton Mifflin

HEATH, C.; LUFF, P. (2000), Technology in Action, Cambridge University Press.

HOLLNAGEL, E.; WOODS, D.D. (2005), Joint Cognitive Systems - Foundations of Cognitive Systems
Engineering, CRC Press

HOLTZBLATT, K.; JONES, S. (1993), Contextual inquiry: A participatory technique for system design, Participatory Design - Principles and Practice, Erlbaum Associates, pp.177-210

HUTCHINS, E. (1995), Cognition in the wild, MIT Press

JALONEN, R.; TUOMINEN, R.; WAHLSTRÖM, M. (2017), Safety of Unmanned Ships - Safe
Shipping with Autonomous and Remote Controlled Ships, Aalto University,
https://aaltodoc.aalto.fi/handle/123456789/28061

KÖHLER, W. (1917), The mentality of apes, Penguin Books

LEVANDER, O. (2017), Autonomous ships on the high seas, IEEE Spectrum 54/2, pp.26-31

MAN, Y.; LUNDH, M.; PORATHE, T. (2014), Seeking Harmony in Shore-based Unmanned Ship
Handling From the Perspective of Human Factors, What Is the Difference We Need to Focus on from
Being Onboard to Onshore? Advances in Human Aspects of Transportation: Part I, pp.231-239

MÜLLER, L.; RISTO, M.; EMMENEGGER, C. (2016), The social behavior of autonomous vehicles,
2016 ACM Int. Joint Conf. on Pervasive and Ubiquitous Computing Adjunct, New York, pp.686–689

MURPHY, D. (2010), Plain sailing: learning to see like a sailor: a sail trim manual, Burford Books

NAVY JAG (n.d.), COLREGS - International Regulations for Preventing Collisions at Sea, http://www.jag.navy.mil/distrib/instructions/COLREG-1972.pdf

NEISSER, U. (1976), Cognition and Reality, Freeman

NEWELL, A.; SIMON, H. (1972), Human problem solving, Prentice-Hall

NORROS, L. (2004), Acting under uncertainty, The Core-Task Analysis in Ecological Study of Work,
http://www.vtt.fi/inf/pdf/publications/2004/P546.pdf

PIAGET, J.; INHELDER, B. (1967), The Child’s Conception of Space, Routledge

PORATHE, T.; PRISON, J.; MAN, Y. (2014), Situation awareness in remote control centres for
unmanned ships, Proc. Human Factors in Ship Design & Operation, London, pp.93–101

PRISON, J.; DAHLMAN, J.; LUNDH, M. (2013), Ship sense—striving for harmony in ship
manoeuvring, WMU J. Maritime Affairs, 12(1), pp.115–127

RØDSETH, Ø.J.; TJORA, Å.; BALTZERSEN, P. (2014), MUNIN Deliverable D4.5 Architecture Specification, http://www.unmanned-ship.org/munin/news-information/downloads-information-material/munin-deliverables/

SAARILUOMA, P. (1995), Chess players' thinking: A cognitive psychological approach, Psychology Press

SAARILUOMA, P.; KARVONEN, A.; WAHLSTRÖM, M.; HAPPONEN, K.; PUUSTINEN, R.; KUJALA, T. (2019), Challenge of Tacit Knowledge in Acquiring Information in Cognitive Mimetics, Intelligent Human Systems Integration - Advances in Intelligent Systems and Computing 903, pp.228-233

SHERIDAN, T. B. (1989), Telerobotics, Automatica, 25(4), pp.487–507

WAHLSTRÖM, M.; HAKULINEN, J.; KARVONEN, H.; LINDBORG, I. (2015), Human Factors
Challenges in Unmanned Ship Operations – Insights from Other Domains, Procedia Manufacturing 3,
pp.1038–1045

WAHLSTRÖM, M.; KARVONEN, H.; NORROS, L.; JOKINEN, J. (2016), Radical Innovation by
Theoretical Abstraction–A Challenge for The User-centred Designer, The Design Journal, 19(6),
pp.857–877

WERTHEIMER, M. (1945), Productive Thinking, Harper

ZHANG, J.; HEDDEN, T.; CHIA, A. (2012), Perspective-Taking and Depth of Theory-of-Mind
Reasoning in Sequential-Move Games, Cognitive Science 36/3, pp.560–573

Simulation Driven Structural Design in Ship Building
Tom Goodwin, Altair Engineering Ltd., Leamington Spa/UK, tom.goodwin@uk.altair.com
Alan Dodkins, BAE Systems Maritime (Retired)

Abstract

The traditional naval ship design process relies on limited design data on the major structural design
drivers when making key decisions in the Concept and early Preliminary Design phase of a project.
This largely subjective approach, albeit using the best engineering judgement, can result in inefficiency
and sometimes even significant structural problems being locked-in from the start, with the consequence
of increased weight and unnecessary complexity, as well as higher design and manufacture cost in the
end product, compared with one where the design has been optimised. Simulation driven design acts to
solve these problems by providing naval architects with a greater and more in-depth understanding of
the design drivers at the concept phase, thus enabling more informed design decisions to be made at
this critical stage. It is facilitated using structural simulation, such as finite element analysis (FEA)
working in conjunction with optimisation technology, that yields ‘right first time’ designs. This paper
highlights the problems of the traditional design process and discusses the merits of the simulation
driven design process. This is supported by examples of how it has been applied to local structures on
the UK’s Queen Elizabeth Class (QEC) Aircraft Carrier. The paper also discusses how the process can
be applied at an earlier stage and expanded to whole ship design.

1. Introduction

1.1. Traditional Naval Ship Design Process

Naval ship structural arrangements are rarely optimised for weight and cost from the outset because for
these large and complex vessels the initial focus is entirely centred on defining a hull envelope, general
arrangement and powering solution that will meet the operational requirements and budget of the
customer. Although the major whole life cost and performance-driving decisions are made during
Concept and early Preliminary Design phases, Barlow and Shanks (2010), Andrews (2010), their impact
on the structural arrangement tends to be assessed subjectively and from a top-down perspective, by
undertaking manual design iterations with limited data or by adapting existing designs in order to obtain
a realistic weight estimate of the eventual structural arrangement. As structural weight can typically
form 40-50% of the total lightship for a naval vessel this is the minimum necessary in order to
demonstrate feasibility of a concept design. However, this approach can lead to the following problems:

• Undesirable constraints placed upon the structural arrangement by high-level design decisions
being locked-in as the project advances in design maturity from contract award to Preliminary
Design and into Detailed Design.
• A higher likelihood of costly iterative change being required in the later design and build phases
with consequent pressures on resources and programme.
• The resulting structural design being sub-optimal in terms of weight and cost to manufacture
and a poor compromise of design parameters.

Solving these problems would result in more efficient, low cost platforms.

Within the commercial sector, a well-defined operational profile and a strong learning curve, achieved as a result of a frequent turnover of designs, typically solve the above problems. The naval sector is rarely able to benefit from a similar learning curve, for several reasons:

• The turnover of designs is much less frequent.
• There is an increasing trend for naval vessels to be designed for a multiplicity of roles, due to acquisition and operating costs limiting the number of platforms and thus putting pressure on designers to achieve maximum leverage.
• There is the added requirement to protect the vessel against a variety of above surface and
underwater weapon threats as well as the normal seagoing environmental loads, ISSC (2006).
• The required design solutions tend to be such significant extrapolations of historical and
published data that they have effectively been started from a ‘blank sheet of paper’ to produce
a bespoke solution, as is the case with some recent UK naval ship programmes.

All these factors invariably lead to difficulties in converging on optimal solutions in the early stages of
the engineering lifecycle, with the result of considerable change or ‘design churn’ and higher
engineering costs in the later stages.

Therefore, in order to accommodate the above issues and work towards the same efficient, low-cost platforms achievable in the commercial sector, an alternative to the traditional design approach is required: one that does not rely on time-consuming manual design iterations and instead informs designers so that they can focus on high-level engineering decisions.

1.2. An Alternative Approach

A means of automatically selecting the optimum ‘right first time’ design from all possibilities is
required. This can be achieved using optimisation algorithms that consider the overall design objective
and the design constraints and then determine the optimum values for the design variables in order to
satisfy them. This approach has been proven to work effectively in industries such as aerospace and
automotive and is readily applicable to the ship building industry. For example, competition in the
cruise ship market and the increasing number of novel concepts, has prompted a more sophisticated
approach to the design of structures for these ships, with optimisation techniques being applied to
demonstrate weight savings in the order of 10% compared with designs produced by the most
experienced engineers using the traditional approach, Zanic et al. (2000).

Using optimisation-based simulation early on is the fundamental basis of the simulation driven design
process. This paper demonstrates how the process of simulation driven design has already been
successfully used to produce local design solutions on the QEC Aircraft Carrier and provides discussion
on how it can be expanded to the whole ship level with the aim of solving the previously stated
problems.

2. Simulation Driven Design Process & Methods

2.1. Exploring the Design Space

When designing any structure, the end product will be required to meet single or multiple objectives
(e.g. minimise mass) and a set of constraints (e.g. stress must not exceed an allowable). This is achieved
through the manipulation of design variables (e.g. geometry). Different combinations of design
variables produce different design solutions and the space in which all these variable combinations
reside is known as the design space. Within all design spaces there are combinations of variables that
form the optimal solutions where the constraints are met whilst satisfying the objective functions. The
challenge to any designer is finding these optima.
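As a toy illustration of the interplay between an objective, a constraint and a design variable (not an example from the paper), consider sizing a plate thickness to minimise mass subject to a stress limit. The sketch below solves this with SciPy's SLSQP optimiser; all dimensions, loads and allowables are invented for the example.

# Toy sizing optimisation: minimise plate mass subject to a stress
# constraint. All numbers are illustrative, not from the paper.
from scipy.optimize import minimize

RHO = 7850.0          # steel density, kg/m^3
L, B = 2.0, 1.0       # plate dimensions, m
P = 2.0e5             # hypothetical in-plane design load, N
SIGMA_ALLOW = 235e6   # allowable stress, Pa

def mass(x):
    t = x[0]                       # design variable: thickness, m
    return RHO * L * B * t         # objective: plate mass, kg

def stress_margin(x):
    t = x[0]
    sigma = P / (B * t)            # crude uniaxial stress estimate, Pa
    return SIGMA_ALLOW - sigma     # constraint: must stay >= 0

res = minimize(mass, x0=[0.02], method='SLSQP',
               bounds=[(0.0005, 0.05)],
               constraints=[{'type': 'ineq', 'fun': stress_margin}])

print(f"optimal thickness: {res.x[0]*1e3:.2f} mm, mass: {res.fun:.1f} kg")

Here the optimiser drives the thickness down until the stress constraint becomes active, which is the same behaviour an optimisation algorithm exploits when searching a much larger design space of plate fields and stiffeners.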

The traditional structural design process relies on the optimal solutions being found through prior
knowledge, engineering experience and simple trial and error. For complicated design problems this
approach can lead to lengthy and costly design cycles in which the end product may never be optimal
and may not be as efficient in terms of cost and performance as may be desired. The fundamental
problem with the traditional design approach is that it is impossible to consider all design possibilities,
either because there is insufficient time to do so or because some design possibilities simply cannot be
conceived based on existing engineering knowledge and experience. This inability to fully explore the
entire design space means that the optimal solutions to complex design problems will rarely be found
by traditional design methods.

The simulation driven design process utilises structural simulation combined with optimisation
technology to intelligently explore, mathematically and logically, the design space in order to identify
the optimal design. This process results in fewer design iterations, which in turn results in a shorter
design cycle, whilst ensuring an optimum structural solution. A faster turnaround of design ideas also
allows a greater number of design starting points to be considered and enables trade-off studies to be
undertaken rapidly.

2.2. Simulation Methods

This part of the design process can take the form of simulating the ship structure using a set of
rules-based design calculations, of using finite element analysis (FEA) of the structure, or indeed
both. Within an optimisation algorithm the variables can be passed through both hand calculations and
FEA at the same time in order to achieve as much fidelity as possible.

Traditionally, analytical methods such as FEA have only been used to validate and refine existing
designs, but this is an extremely poor use of a very powerful technology. In simulation driven design,
FEA can be enabled at the very beginning of the design process to ensure that the detailed assessments
of structural concepts that result from FEA can be used in assisting with the design rather than just
validating and refining it.

Whatever method is used, the cornerstone of the process is optimisation. It is optimisation technology
that transforms structural simulation tools into design tools that can explore the design space.

2.3. Optimisation Technology

The optimisation technology within structural simulation driven design revolves around manipulating
the geometry of the structure. Four techniques presently exist for doing this:

• Free form or ‘topology’ optimisation
• Free-size optimisation
• Size optimisation
• Shape optimisation

Free form or ‘topology’ optimisation automatically identifies which areas of a package space (the space
in which the structure can reside) are structurally important and structurally redundant under a given
set of design loads, objectives and constraints. This method is typically used early in the design process
where the overall package space is defined and maximum freedom of design within that space is still
allowed.

As an example, topology optimisation could be used to identify the best location for structural bulkheads
(structurally important areas within a package space) and the best locations for openings within those
bulkheads (structurally redundant areas within a package space).

Fig.1 illustrates the topology optimisation principle where load paths are ‘grown’ and redundant
material ‘removed’ between two loaded hard-points within a package space to form an optimal
structure.

Free-size optimisation acts in a similar way to topology optimisation, indicating areas of structural
importance and redundancy within a package space. The difference is that free size optimisation acts to
freely vary the thickness of existing structural panels rather than physically removing or ‘growing’
structure within a package space. An example of this method can be seen in Fig.2, where an aircraft rib
structure has been optimised using the free-size method.

Size optimisation allows for pre-defined structures to be optimised by automatically changing
dimensional values (e.g. plate thickness or stiffener dimensions) in the structure.
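
As a minimal sketch of this idea (the bending-stress formula, loads and limits below are simplified
placeholders rather than actual design rules), a single-variable size optimisation could be set up with
a general-purpose optimiser:

# Minimal size optimisation sketch: minimise plate mass subject to a
# simplified bending-stress constraint. All numbers are placeholders.
from scipy.optimize import minimize

RHO = 7850.0           # steel density, kg/m^3
AREA = 4.0             # plate area, m^2 (assumed)
M_BEND = 50.0e3        # bending moment per unit width, Nm/m (assumed)
SIGMA_ALLOW = 235.0e6  # allowable stress, Pa (placeholder)

def mass(t):
    return RHO * AREA * t[0]          # t[0]: plate thickness in m

def stress_margin(t):
    sigma = 6.0 * M_BEND / t[0]**2    # thin-plate bending stress estimate
    return SIGMA_ALLOW - sigma        # feasible when >= 0

res = minimize(mass, x0=[0.02], bounds=[(0.005, 0.05)],
               constraints=[{"type": "ineq", "fun": stress_margin}])
print(f"optimum thickness: {res.x[0]*1000:.1f} mm, mass: {res.fun:.0f} kg")

In the same loop, the constraint function could equally call an FEA solver instead of a hand
calculation, which is how the two fidelities can be combined as described above.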

Fig.1: Defining Optimum Structural Layout using Topology Optimisation

Fig.2: Free-Size Optimisation of an Aircraft Rib

Shape optimisation is similar to size optimisation in that it varies the dimensions of structural features.
However, instead of changing a dimensional value within an equation or FE model it acts to change the
shape of an FE mesh used to describe a structural feature such as a radius corner on an opening.

Both size and shape optimisation can build on the structure defined by topology or free-size
optimisation or be used to refine an existing design concept that has originated from other means.

Size optimisation can be applied to both FEA and non-FEA based simulation methods as it involves
setting dimensional values as variables. Topology, free-size and shape optimisation are FEA-dependent
methods as they rely on structural package spaces being discretely divided up with finite elements.

Although optimisation methods can be used at any stage in a design process their application early in
the design process results in a more efficient distribution of material and load. This has the knock-on
effect of minimising stress concentrations, which is one of the main causes of remedial work later in
the design process.

3. Existing Alternatives to the Traditional Design Process

Currently the only alternative to the traditional design process that is known to exist in the ship building
industry, that also makes use of optimisation and structural simulation technology, is one that focuses
on the ‘fine tuning’ of pre-conceived structural arrangements. The ‘fine tuning’ is akin to the size
optimisation technology stated previously, where, for example, the plate thickness and stiffener
dimensions are changed in order to reduce mass and fabrication cost whilst meeting design constraints.
MAESTRO is an example of a tool that promotes this alternative approach to the traditional design
process.

Although the existing applications can be extremely powerful design tools, they are unable to address
the following key questions:

• What should the starting structural layout be to achieve an optimum solution?
• What is the optimum shape of structural features?

A simple example would be that of an opening in a bulkhead: the surrounding plate thickness can be
fine-tuned to achieve an ‘optimum’ solution, but what if the true optimum involves moving the opening
a couple of metres from its current position or changing the corner radii of the opening? Solving this
problem requires more than just size optimisation.

The simulation driven design process as described in Section 2 addresses the above questions in the
following manner:

• Firstly, a design optimisation stage is introduced before the ‘fine tuning’ stage that makes use
of the ‘free form’ optimisation technology (topology and free-size) to determine optimum
structural layout.
• Secondly, a shape optimisation process is introduced that can be used in parallel with size
optimisation to provide maximum flexibility in ‘fine tuning’ the design.

4. Current Industry Applications

For many years simulation driven design has been successfully applied in the aerospace and automotive
industries primarily to reduce product mass in order to improve product efficiency. This need to reduce
mass has been driven by increasingly stringent governmental environmental legislation and customer
demand.

Within the aerospace industry Airbus has been a high-profile adopter of simulation driven design,
adopting the process as early as 2002, Krog et al. (2002), for design work on wing ribs for the A380.
The company has continued to increase its use of the process over the years and during development of
its most recent aircraft, the A350, had an optimisation centre dedicated to applying the process to the
design of structural components.

Within the automotive industry Jaguar Land Rover have been keen adopters of optimisation technology
in the design of vehicle components, extending the technology to account for design robustness to
ensure a robust optimum solution, Zeguer and Bates (2007). The fast-growing car industry in China has
also recognised the benefits with SAIC having adopted the process for the development of their
vehicles, Husson and Burke (2009).

Within the aforementioned companies, simulation driven design has been introduced into the design
process through the use of the following software tools: Altair OptiStruct for free form, size and shape
optimisation and Altair HyperStudy for FEA solver independent optimisation and optimisation
involving robustness. Within the ship building industry, the demands for design improvement are less
clear cut and the problems that need to be solved require a more complicated solution than simply
saving mass. Within this industry cost is a primary driver and it is the raw material and fabrication cost
where there is significant room for improvement.

Simulation driven design can reduce raw material cost by enabling lighter more efficient structures and
can also reduce fabrication cost by reducing structural complexity through making more efficient use
of the ship structure and minimising the need for remedial work late on in the design process. It can
also help reduce engineering design costs through minimising the number of design cycles and helping
to eliminate the need for costly, late design changes that may have resulted from ill-informed design
decisions early on.

5. Application to the Queen Elizabeth Class Aircraft Carrier

The UK’s Queen Elizabeth Class (QEC) Aircraft Carriers have been produced and delivered by the
Aircraft Carrier Alliance (ACA), in one of the largest engineering projects undertaken in the UK in
recent times. The ACA is an innovative partnering arrangement between BAE Systems, Thales UK,
Babcock and the Ministry of Defence.

The above track-record of the use of simulation driven design provided the ACA with the confidence
to fund a pilot exercise on a section of structure of the vessel, during which time Altair engineers had
the opportunity to familiarise themselves with the principles of steel ship design.

The main driver for exploring the capabilities of this technology was the lack of background data in the
UK on naval vessels of this size (i.e. lack of pre-formed ideas of what structural solutions should look
like) and the sheer number and complexity of load cases, which together made intuitive design decisions
difficult to make.

The following sections detail examples of how simulation driven design has been used on the QEC
Aircraft Carrier.

5.1. Double Bottom Structural Arrangement

The double bottom structure of the ship is required to support significant loads from external hydrostatic
pressures and dynamic loads induced by large equipment items that reside on the double bottom.

As part of the QEC project’s “Confined Space Access and Escape Arrangements Policy” there was a
requirement to route access openings through the floors in the double bottom. In order to identify the
optimum locations for these openings, topology optimisation was employed to identify areas of
redundant structure that could accept access openings without compromising structural performance.

Fig.3 illustrates an example topology optimisation result for a double bottom floor along with a
conservative design interpretation of the topology result.

Fig.3: Topology Optimisation and Resulting Design Interpretation for a Typical Double Bottom Floor

Following the interpretation of the topology optimisation, size and shape optimisation was employed to
fine tune the shape of the openings to improve the stress response of the structure.

In addition to aiding in the placement of openings the topology optimisation acted to minimise the
steelwork mass required to meet the design targets. The final proposed design was 9% lighter than the
baseline design (despite a conservative interpretation of the topology results) and met all design targets,
while the baseline design failed to meet the stress targets. This combination of meeting project policy
needs in terms of access openings and reducing mass was a ‘win-win’ solution for the project as at the
time there was also a drive to reduce overall steel mass.

5.2. Flight Control Module

The QEC flight control (FLYCO) module is located on the aft island and is home to the equipment and
personnel that assist in the control of aircraft operations, Fig.4. The FLYCO structure comprises a large
glazed area supported between an upper and lower sponson structure. These sponson structures are
required to meet natural frequency and deflection targets and are therefore subject to the interactions of
mass and stiffness.

Fig.4: Aft Island with FLYCO Module

Fig.5: Global and Local Topology Optimisation Results (global and local topology with corresponding design interpretations)

In order to satisfy the design requirements, simulation driven design was employed to achieve a ‘right
first time’ solution to the internal structural arrangement of the FLYCO module. Topology optimisation
was first employed to identify the optimum global positioning of stiffening webs within the package
envelope of the module. This was then followed by a further round of topology optimisation to identify
the optimum load paths within those webs, such that openings could be cut without compromising
structural performance, Fig.5.

Finally, size and shape optimisation was employed to fine-tune the plate thicknesses and opening sizes
to minimise mass and design complexity whilst meeting design targets.

The outcome was a structure that met the natural frequency, deflection, stress and buckling targets
whilst being 16% lighter than a traditional design and using fewer piece-parts, resulting in reduced
fabrication cost, Fig.6.

Fig.6: Final Internal Structure for the FLYCO Module

5.3. Stern Platform

The QEC stern platform is a cantilevered structure subject to significant slamming loads and therefore
needs to be integrated into the main ship structure in such a way as to minimise stress concentrations
and the risk of buckling, Fig.7.

The ACA conceived three design solutions, Fig.8, involving the use of insert plates and wanted to
identify the solution that would result in the thinnest and least number of inserts. Ordinarily such a task
would require many trial and error analyses, with no guarantee of achieving an optimal solution.

FE models of the three concepts were created followed by size and shape optimisation of the various
piece parts that formed the different solutions. The process enabled rapid identification of the solution
that resulted in both the minimum number of inserts and the minimum insert thickness.

The results of the optimisation are summarised in Table I.


Fig.7: FE Model of QEC Aft End Incorporating the Stern Platform

Fig.8: Stern Platform – Three Proposed Design Solutions (‘Cruciform’, ‘Chamfer’, ‘Radius’)

Table I: Stern Platform Optimisation Study Results

Design variant   % Reduction in peak stress    Maximum insert    Minimum number
                 compared to ‘Cruciform’       thickness (mm)    of inserts
‘Cruciform’      -                             45                43
‘Chamfer’        13%                           58                37
‘Radius’         26%                           31                36

As indicated in Table I, the ‘radius’ design proved the best solution, exhibiting peak stresses 26% lower
than the ‘cruciform’ design and having the fewest and thinnest inserts. Reducing the number of piece-
parts and the mass of material required resulted in a design that is less costly to manufacture through
the savings made in raw material and fabrication cost.

5.4. Transverse Bulkheads

The transverse bulkheads (known as ‘bents’) that run between the flight deck and 2 deck above the 30m
wide hangar bay are subject to high stresses under ship racking loads. The problem is compounded by
the need for multiple, large access openings through the bulkheads. Fig.9 illustrates an
exaggerated FEA deflection plot result of a typical ‘bent’ under racking loads, showing the multiple
openings. The large hangar space can be clearly seen in the middle of the plot.

Fig.9: Typical Racking Load Deflection Plot

Given that simulation driven design was not employed from the outset for these structures, it was found
that multiple stress concentration issues arose in way of openings and structural discontinuities that
required a rapid solution.

In order to rapidly identify what size and shape of door opening or plate insert to employ across the
bulkheads, simulation driven design was used in the form of structural size and shape optimisation on
FE models of the bulkheads. For a given bulkhead, multiple stress concentration issues could be solved
simultaneously enabling the interactions between the different stress concentrations to be accounted
for. This approach resulted in all the stress concentration issues being solved with the minimum possible
insert plate thickness.

For the example shown in Fig.9, eight stress concentration issues were identified following an FE
analysis of the structure. These eight issues were then simultaneously solved using a single optimisation
run that involved changing plate thicknesses and corner radii in order to reduce stresses to acceptable
levels. The optimisation took six iterations to converge on a solution and ran in approximately 20
minutes on a desktop PC.

This example illustrates how simulation driven design can be used to rapidly solve problems late on in
the design process. At the stage in the design programme when the analysis was undertaken, the access
openings were fixed in location and the general arrangement had developed without optimisation of the
opening locations. A great deal of work took place in the early stages to prove the feasibility of placing
access openings in these highly loaded structures. However, had a simulation driven design approach
been used earlier then perhaps an alternative solution would have been revealed that would have resulted
in less detailed design work having to be undertaken to achieve an acceptable design.

Very often the general arrangement development progresses with inadequate input of structural
considerations and without quantifying the cost and complexity of having to live with those decisions,
which only emerge later in detailed design.

6. Proposed Future Applications

The work undertaken to date would provide an ideal foundation for a research programme to identify
how best the simulation driven design process can be applied to whole ship design starting at the concept
phase and running through to the detailed design phase.

The previous examples have illustrated that simulation driven design has merit in solving local
structural design problems. One objective of any future work would be to demonstrate that the same
processes can be applied to whole ship design. The first task would be to firmly establish the capabilities
and tools that are currently in use such that they can be included in any simulation driven design process.
The focus here will be to ensure that the product of previous research work is not unnecessarily repeated.

The follow-on task would be to look at toolset integration, in particular how rule-based design tools can
be interfaced with optimisation software, FEA, CAD, CFD, cost models and shipyard operational
enablers and constraints. Including all these factors would enable all relevant variables to be included
and all stakeholders to have a sense of input to the design process and ownership of the resulting
solutions. This will all help contribute to ‘right first time’ design decisions and provide an auditable log
of all design parameters and constraints.

It is envisaged that topology and free-size optimisation could be used in conjunction with whole ship
FE models to conduct trade-off studies at the concept phase to identify optimum positioning of
bulkheads and primary structure. In parallel, size and shape optimisation could be wrapped around
rules-based hand calculations in order to identify optimum rules-based structures.

The application of such technology starting early in the design process is reliant on whole ship FE
models being built quickly and efficiently. Advanced FE model pre-processing tools such as Altair
HyperMesh facilitate this. These tools can also be made to integrate with ship concept structural
definition tools in order to make the building of whole ship FE models a highly automated and therefore
fast process.

Introducing simulation driven design into the design process is not aimed at replacing the experience
of naval architects but to complement that experience. The process aims to provide naval architects at
the concept phase with as much information as possible to enable them to make more informed design
decisions at a crucial time.

7. Conclusions

Within this paper the shortcomings and problems associated with the traditional ship structural design
process have been highlighted. In addition, this paper has outlined the concept of simulation driven
design and the technology that underpins that process.

The paper has demonstrated, by example and through discussion, how the technology can provide
benefits to ship structural design and manufacture in terms of design and fabrication cost reduction,
mass reduction and improved structural performance and efficiency.

It is recognised that the tools and techniques still require further development in order to be successfully
applied to whole naval ship design due to the considerable size of the design space involved and further
research work would be needed to facilitate this.

Overall the benefits of simulation driven design to the naval marine industry can be summarised as
follows.

• Reduced structural weight, by minimising redundant material.
• Reduced cost per tonne of fabricated steel, by minimising the need for complex local solutions
to address stress concentrations and discontinuities (i.e. inserts, brackets and tapered sections).
• Reduced risk of through-life problems such as fatigue cracking, through developing simpler
structures with fewer ‘complicating features’.
• Reduced analysis effort in the detailed design phase as a result of there being fewer emergent
problems that have to be solved in order to make the design work in practice.
• Improved ability to cope with increasingly complex and sometimes conflicting design
requirements that may continue to evolve during the early life of a project.

It is the authors’ belief that simulation driven design using the approach described, with its logical,
automated ‘explore all design options’ capability represents the future of naval ship design in the UK.
It should be noted however that it is not intended to replace experienced naval architects but instead
provide them with the tools and freedom to make more informed design decisions at the critical concept
phase of a project. It is also recognised that for the technology to be adopted a move away from
traditional or established design approaches is required and a greater degree of structural modelling
undertaken in the early design stages. This means the deployment of more design resources in the early
project stages, demanding a funding commitment that needs to be recognised and acknowledged by
projects as a worthwhile up-front investment.

Acknowledgements

The support of Thales Naval UK Ltd., an ACA Industrial Partner, is acknowledged for recognising the
potential benefits of and supporting the simulation driven design approach to the QEC Aircraft Carrier.
TEX ATC Ltd. is acknowledged for supporting the use of the process for the structural development of
the FLYCO module, for which they are responsible.

References

ANDREWS, D.J. (2010), Marine Requirements Elucidation Revisited, RINA Int. Conf. Systems
Engineering in Ship and Offshore Design, Bath

BARLOW, G.J.; SHANKS, A.J. (2010), Systems and Safety Engineering – a Combined Approach
during Concept Design and Beyond, RINA Int. Conf. Systems Engineering in Ship and Offshore
Design, Bath

HUSSON, D.; BURKE, A. (2009), The Application of Process Automation and Optimisation in the
Rapid Development of New Passenger Vehicles at SAIC Motor, Altair HyperWorks Technology Conf.

ISSC (2006), Naval Ship Design, 16th Int. Ship and Offshore Structures Congress, Committee V5,
Southampton

KROG, L.; TUCKER, A.; ROLLEMA, G. (2002), Application of Topology, Sizing and Shape
Optimisation Methods to Optimal Design of Aircraft Components, Altair HyperWorks Technology
Conf.

ZANIC, V. et al (2000), Structural Design Methodology for Large Passenger and Ro-ro/Passenger
Ships, IMDC

ZEGUER, T.; BATES, S. (2007), Signpost the Future: Simultaneous Robust and Design Optimisation
of a Knee Bolster, Altair HyperWorks Technology Conf.

An Expert System for Cost Estimation of Shipyard Steel Assembly
Marije L. Deul, Delft University of Technology, Delft/NL, m.l.deul@student.tudelft.nl
Bernardes J. Hoek, Delft University of Technology, Delft/NL, b.j.hoek@student.tudelft.nl
Sietske R.A. Moussault, Delft University of Technology, Delft/NL, s.r.a.moussault@student.tudelft.nl
Anna-Louise A. Nijdam, Delft University of Technology, Delft/NL, a.a.nijdam@student.tudelft.nl
Gerrit Alblas, Delft University of Technology, Delft/NL, g.alblas@tudelft.nl
Robert G. Hekkenberg, Delft University of Technology, Delft/NL, r.g.hekkenberg@tudelft.nl

Abstract

The current cost estimate applied by shipyards in the pre-contract phase is insufficiently valid and
accurate for “one-off” ships. Most shipyards use the unit man-hours/metric ton for the estimation of
man-hours in steel assembly. This key figure does not take blocks’ properties and the required actions
for assembly into account. It is evident that e.g. welding above head by hand or Submerged Arc Welding
(SAW) and the amount of lift and turn movements influence the lead-time and number of man-hours.
The estimation of man-hours in steel-assembly can be improved by the identification of production
patterns and the resulting actions. Therefore, the rationale, the reasoning and decision-making
processes of the production engineers need to be captured. This paper focuses on rationale capturing,
codifying and storing for (re)use. First, it is discussed how the rationale is captured through expert
interviews, based on the reactive knowledge capturing method. Thereafter, it is explained how the
captured rationale is stored in an Expert System, linking construction plans to a fitting assembly-order.
This resulting assembly-order can be used to derive the amount and type of actions required. This
enables a better substantiated cost estimate. The developed system in this research is generally
applicable and proves the concept. The paper concludes with recommendations for future research,
which includes further automation of this Expert System and integration of man-hour calculations.

Nomenclature
The definitions as used in this research are stated:

Plate          Flat plate with secondary stiffeners
Panel          A combination of plates in the same plane
Subassembly    A combination of panels in different planes that is not a block
Block          Combination of panels, defined in the block-division plan of the ship
Side-shell     A double hull block on the side of a vessel

1. Introduction
Most Western European shipyards use a statistical analysis of historical data to estimate the construction
costs of a new-build vessel (Shetelig, 2013). Yard-specific rules of thumb, like man-hours per metric
ton, are used and supplemented with expert opinions. This statistical analysis offers key figures which
are based on the assumption of a (linear) correlation between the weight and the number of hours needed
to process a metric ton. When the new vessel is very different from the previously built vessels, these
key figures lose their validity. It is crucial for profit-motivated shipyards to offer a competitive but
realistic price, in order to win a tender without big financial risks (Shetelig, 2013). In order to offer this
competitive price for unique vessels, this research presents an improved cost estimation based on the
expert opinion on assembly patterns.

1.1. Literature Review


In the thesis of Shetelig (2013) a parametric top-down cost estimation method is described. Using cost
estimation relationships, Shetelig describes a method to find a parametric formula to estimate prices for
different technology groups: hull, propulsion and machinery, cargo containment & handling and, lastly,
common systems. These prices are based on the specifications of a group of vessels of the same type
and on historical construction data. This parametric approach gives a different formula for each
technology group. Thus, using historical data, one can make an estimation of the price of a new ship.
For steelwork of the hull, this can e.g. be an answer in the unit €/mt-steel.

Visseren (2017) developed a new cost estimation method using case-based reasoning. Using a graph
database, construction costs can be estimated more accurately for constructions that have fewer or more
parts than the parametric or statistical methods assume.

In scheduling, heuristic rules are quite often applied. In the dissertation of Rose (2017) a planning
method is developed in which all relevant portions of the shipbuilding process: erection, block building
and outfitting, are integrated. As written: “this method is not developed to replace the current shipyard
planners but merely to support them in the decision making process”. A more in depth planning method
for the outfitting processes in shipbuilding was developed by Wei (2012). In addition to this dissertation,
Gregory (2015) developed an activity loader and planning generator to improve the pre-outfit strategy
in the pre-contract phase. These three reports focused merely on the outfitting aspects of the shipbuilding
process; however, they proved very useful for gaining insight into the improvements that are possible in
the planning of ship construction and for comparing the different methods and tools used to set up these
planning tools.

Further literature research shows that Colthoff (2009), Kim et al. (2002) and De Bruijn (2017)
contributed to the body of knowledge with regard to the planning of the steel-related aspects in
shipbuilding. Colthoff (2009) generated a model for the automatic scheduling of block building during
the pre-contract phase. Kim et al. (2002) considered a specific scheduling problem on a shipyard. The
designed algorithm focused on the constraint satisfaction problem; the approach offered insights into the
general process. However, this approach is not applicable to the current research, because at this stage
no constraint satisfaction problem is apparent.

De Bruijn (2017) developed an automatic block division generator. To gather more insight into the
decision-making process, expert interviews were conducted with a block division engineer. Block
division engineers determine the block division of a ship based on the General Arrangement (GA) of
the ship in the pre-contract phase. The approach of De Bruijn (2017), conducting expert interviews, is
highly useful for this research. Expert interviews will be conducted with production engineers, as the
production engineers are responsible for the division of blocks into subassemblies and the resulting
production plan. This research is not about division of the GA into blocks but looks at the steel-assembly
one step earlier in the process and thus covers the division of the blocks into subassemblies.

The estimation of man-hours in steel-assembly can be improved by the identification of production
patterns and the resulting actions. Identifying the optimal order of assembly is required for the
identification of these production patterns. Finding the optimal order of assembly is complex and
calculating this order is time consuming, especially when limited information is available. In the pre-
contract phase, when the estimation of the man-hours is needed, the information available is the general
arrangement of the ship. Therefore, information about the production process of subassemblies into
blocks is required. Colthoff (2009), Wei (2012), Gregory (2015) and Rose (2017) proved that the
planning process can be generalized and automated. Continuing on the research of Visseren (2017) and
De Bruijn (2017) this research zooms in on the production process of subassemblies into blocks, whereas
this previous work focused on the production process of blocks into a ship. Contrary to block division
where quantitative data is widely available, the data for subassemblies is less tangible. This is because this
data is primarily composed of the knowledge and experience of the production engineers. In conclusion,
the gap in the existing literature concerns the production process of subassemblies into blocks. This
information is needed to establish the optimal order of assembly, in order to identify production patterns
needed for the man-hours calculation.

For a shipyard the man-hour calculations can be vital information that allows the offering of a realistic
price. This reduces the risk of financial loss or of not being granted a tender and therefore contributes
to a future-proof and healthy shipyard.

1.2. Problem Definition


For highly unique new vessels the current man-hour estimation loses validity and accuracy, due to the
lack of comparable historical data. For shipyards it is crucial to have an accurate man-hour estimation
in order to remain financially healthy. The main assumption of this research is that, whilst the ship
and its construction differ from earlier vessels, the assembly patterns and the rationale behind it at block
level allow for accurate man-hour estimations and comparison with historical data. As the present
literature does not offer an improvement for the man-hour estimation that takes the assembly patterns
into account, this research opens a new field. The identified knowledge gap is to be narrowed by means
of an Expert System with a basis in the knowledge of the production engineers.

The method of analysis of expert rationale in the maritime industry is developed by DeNucci (2012).
Continuing on the work of Visseren (2017) and De Bruijn (2017), this research will zoom in on the
production process of the steelwork of subassemblies into blocks, whereas previous work focused on
the division into blocks and the analysis of all parts present.

2. Method
Rationale capturing is the basis for developing the Expert System to allow for a substantiated man-hour
estimation. The scope of this research will be addressed in paragraph 2.1. Paragraph 2.2. will explain
rationale capturing, after which paragraph 2.3. presents the method as adapted for this research.

2.1. Scope
This research covers the part of the production process in which subassemblies are combined to blocks.
It will only address the steel work and does not cover outfitting as this has been covered in the work of
Gregory (2015). Changes made to the design during the construction process will be neglected.

Steel assembly of subassemblies consists mainly of welding, grinding and positioning. To determine the
required man-hours it is essential to know the order in which these activities take place, since this e.g.
determines the amount of overhead welding versus the amount of easier and quicker underhand welding.
To determine the order of activities, it is key to know how a block is divided into subassemblies and in
which order these subassemblies are constructed into blocks. This knowledge is primarily knowledge
and experience of the work-planners. Therefore this research will solely map the rationale and decision-
making-mechanisms of the work-planners of steel-assembly.

2.2. Rationale Capturing


The type of knowledge captured in this research is rationale. Rationale is the reasoning- and decision
making process (Lee and K-Y. Lai, 1991). DeNucci (2012) presents a method for capturing configuration
rationale in complex ship design. DeNucci develops a Rationale Capturing Tool (RCT), for which
Reactive Knowledge Capturing (RKC) is used. RKC as introduced by DeNucci is a novel approach to
trigger the expression of design rationale of maritime experts. In this approach, several unconventional
ship layouts are generated of which a few do not meet the expectations of a naval architect. Consequently
all designs are shown to an expert, a naval architect in this case, and this expert is asked for feedback on
these designs. From the reaction, remarks and reflections on the ‘faulty’ designs the rationale or logic
behind the design of a ship becomes evident. As the research is about capturing the rationale of
production engineers, DeNucci’s RCT is a very useful approach and the complete dissertation is a
guideline for this research.

2.3. Steel-Assembly Rationale Capturing Method
The work of DeNucci is modified to suit this research and a new method is developed. As
determined before, there is a gap in the academic literature: the data concerning the production
of blocks from subassemblies is predominantly stored in the minds of the production engineers. The fact
that this knowledge is stored in their minds raises the question: how can knowledge be extracted from a mind?
There are different ways to capture the steel-assembly rationale. Two possible options are: to gain
experience as a work-planner or to conduct expert interviews. Given the significant time and effort
required to gain experience as a work planner, the research instrument of expert interview is selected to
capture the steel-assembly rationale. The goal of this method is to extract the decision mechanisms of
the experts from their minds, and to construct an Expert System of their knowledge. The Steel-Assembly
Rationale Capturing Method (SRCM) is developed. Based on DeNucci’s principles random assembly-
orders of one side-shell are generated and shown to an expert to trigger the expression of steel-assembly
rationale. A side-shell is chosen to enable alignment with the research of Visseren (2017) and because
of it straightforward, rectangular shape. Before generating random assembly-orders more in-depth
research into the subassembly division is conducted. Since there is not much quantitative data or
previous research about the subassembly division in academic literature, this more in-depth research
will also largely be based upon expert opinion, obtained through a first orientation interview.

The first aspect of the method is the formation of random assembly-orders. Based on DeNucci’s
principles these assembly-orders are unconventional. This is done deliberately to provoke the expression
of all steel-assembly rationale. A number of random assembly-orders of the selected side-shell are
developed manually, this is done by starting with four possible building positions. These starting
positions are the side-shell, bulkhead, deck, and bottom. Variations in the assembly-orders are in the
orders of placing material, as well as in the amount of lift and turn movements. The generated assembly-
orders are all feasible from a constructional perspective, but not the most time- or cost efficient. The
second part of the method is the set-up for the extraction of the rationale. For the purpose of objectivity
all interviewees are presented with the same set of questions regarding a different set of assembly-orders.
At the start of each interview the reactive knowledge capturing method is explained. This explanation
is identical every interview, and an example assembly-order is discussed in accordance with the
developed legend (of which an explanatory part is shown in Fig.1. as used for unambiguous
communication). Once the concept is clear to the interviewee, the expert is left in silence for 45 min to
complete the RKC by filling out the presented form.

Fig.1: Part of legend of block with plate-numbering of longitudinal material (L) and bulkheads (D)
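
To illustrate the generation step only (the panel names follow the four starting positions named above,
while the structure of the output is a hypothetical simplification of what SRCM produces), random
assembly-orders could be produced along the following lines:

# Simplified sketch of generating random (deliberately unconventional)
# assembly-orders, in the spirit of the SRCM; names are illustrative only.
import random

PANELS = ["side-shell", "bulkhead", "deck", "bottom"]

def random_assembly_orders(n_orders=3, seed=None):
    rng = random.Random(seed)
    orders = []
    for _ in range(n_orders):
        start = rng.choice(PANELS)           # one of the four start positions
        rest = [p for p in PANELS if p != start]
        rng.shuffle(rest)                    # random order of placing material
        turns = rng.randint(0, 3)            # random number of lift/turn moves
        orders.append({"start": start, "sequence": [start] + rest,
                       "lift_and_turns": turns})
    return orders

for order in random_assembly_orders(seed=42):
    print(order)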

2.3.1. Interview Process
As it was vital for the success of the RCT of DeNucci to adjust the designs during the knowledge
capturing process, the developed method also anticipated adaptations being made during the process.
As the SRCM developed in this research is not an automated tool like DeNucci’s, these adjustments are
made manually between the two interview rounds. These adaptations to the assembly-orders are made
based on the rationale captured in the first interview round. The two interview rounds are scheduled
several days apart, to enable these manual adjustments as part of the SRCM.

For this reason the set-up of the implementation of the rationale capturing is as follows:

• Orientation interview, to get familiar with the yard and the process of production engineering
• Test Case SRCM, to test whether the SRCM as designed is suitable for the research
• Round 1 SRCM (3 production engineers)
• Round 2 SRCM (3 production engineers)
• Concluding Interview

The concluding interview was conducted with all interviewed production engineers and aimed to trigger
a discussion between the experts and to confirm the list of lessons learned. The experts were asked to
offer both an individual and a consensus ranking of the two most favored assembly-orders as
presented in the two interview rounds and two additional orders that were handed to the researchers by
the experts. The individual and consensus ranking differed, but after the discussion the experts agreed
on the final ranking. This discussion offered additional insights in the rationale that were used in the
constructed Expert System.

2.4. Ranking Assembly Orders


To support the answers given when using SRCM, the AHP mathematical model will be used (Saaty,
2008). AHP, the Analytic Hierarchy Process, is a decision model which uses comparisons between
different criteria and the different possibilities to make a decision. Two criteria are compared and given
an importance factor from 1 to 9, which is used to mathematically assign a weight to a criterion. This AHP
model will be used to check if the assembly-order suggested through the RKC method is the same as
what the experts described as being important in an acceptable assembly order. This is done by
conducting a questionnaire at the end of each interview. In this questionnaire the experts are asked how
much influence certain criteria have on the required man-hours. The experts can choose between: No
influence, minor influence, medium influence, major influence, extreme influence. The building criteria
are specified as:

• Welding Method
This includes the position in which welding is done, such as overhead, underhand, sideways,
and which method is used, such as SAW or by hand.
• Lifting and Turning Movements
This includes the required amount of work with regards to lifting and turning the intermediate
constructions.
• Working Space
This includes both the space around the worker and the surface the worker has to work on.
Working at heights is included in this criterion.
• Intermediate Construction Stiffness
This criterion includes the necessary amount of work to make a construction rigid enough for
lifting and turning.

These criteria are all described as important in an assembly order. Knowing the comparative influence
on the lead time is important to weigh the possible assembly-orders.
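
As an illustration of the weight-derivation step of AHP (the pairwise comparison values below are
invented for demonstration and are not the experts’ actual judgements), the criteria weights follow
from the principal eigenvector of the comparison matrix:

# Illustrative AHP weight calculation: the principal eigenvector of a
# pairwise comparison matrix yields the criteria weights (Saaty, 2008).
# The comparison values below are invented for illustration only.
import numpy as np

criteria = ["welding method", "lift/turn", "working space", "stiffness"]
A = np.array([[1.0, 5.0, 3.0, 7.0],      # welding method vs. the others
              [1/5, 1.0, 1/3, 3.0],
              [1/3, 3.0, 1.0, 5.0],
              [1/7, 1/3, 1/5, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalise to weights summing to 1
for name, weight in zip(criteria, w):
    print(f"{name:>15s}: {weight:.2f}")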

3. Results
3.1. Lessons from the Expert Interviews
The results of the expert interviews are both the answers to the RKC questions, and the answers to the
open questions. From the open questions the following lessons are deduced, which were validated in the
concluding interviews by all experts.

• Order of bulkhead placement not relevant
The order in which the bulkheads are placed is not relevant for the man-hour estimation. In
practice this is decided by the assembly yard, and not bound to rules of thumb.
• No L-shaped connections for bulkheads
To allow for the physical feasibility when placing a bulkhead, L-shaped connections due to the
secondary stiffeners are not allowed, this is illustrated in Fig.2.

Fig.2: Explanation constraint by secondary stiffeners

• Consider panels, not plates
When constructing an assembly-order, the elementary parts should be the panels and not plates.
In this research the combination of plates in the same plane is called a panel. In this, the first
step should be to construct panels below hand with the Submerged Arc Welding (SAW)
machine, as this is the quickest and cheapest way for the yard to create a welded connection.
• Preferable: SAW and below hand
The preferable weld is with the SAW machine and below hand. All production engineers chose
additional rotational movements over welding above head.
• Close block at the end of the process
In order to reduce risks for the production personnel, and to create speed due to increased
accessibility, the final step of the block assembly should be to close the block.
• Build on largest horizontal surface
The largest flat surface should be used as the basis to build on. This is to minimize the heights
to work at, which is related to lead time and safety, as well as to reduce the amount of struts
needed to ensure the position of the block during production.

3.2. Expert System


Aligned with the lessons as presented in paragraph 3.1, an Expert System is constructed. This is a
flowchart and is structured with the idea that every block can be input, where the blocks that are
fundamentally different from the one used for the interviews currently end in “Further research is
recommended”. In this flowchart there are several choices that result in different building orders. To
illustrate, these questions can include, but are not limited to:

• Are there panels that consist of more than one plate?
• Does the block contain curved plates?
• Does the building order contain subassemblies?

The questions are integrated in the flowchart. All questions refer to the geometric characteristics and all
can be answered with “yes” or “no”, allowing for the possibility of an automated answering system.
The input for this Expert System is the block of a ship and the outcome is either an assembly-order or
the recommendation for further research. An abstract representation of the flowchart is presented in
Fig.3. In this figure the letters represent the assembly-orders as deduced from the interviews, and the
numbers indicate the questions.

Fig.3: Abstract representation Expert System
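
Because every node in the flowchart is a yes/no question on block geometry, the system maps naturally
onto nested conditionals. The fragment below is a hypothetical sketch only: the questions are taken
from the list above, while the order labels “A”, “B” and “C” are placeholders, not the orders actually
deduced in this research.

# Hypothetical encoding of the Expert System flowchart: every node asks a
# yes/no question about the block geometry; leaves are assembly-orders or
# a recommendation for further research. Order labels are placeholders.
def assembly_order(block):
    if block["contains_curved_plates"]:
        return "further research is recommended"
    if block["multi_plate_panels"]:
        # first weld plates into panels below hand with the SAW machine
        if block["has_subassemblies"]:
            return "order A"
        return "order B"
    return "order C"

side_shell = {"contains_curved_plates": False,
              "multi_plate_panels": True,
              "has_subassemblies": True}
print(assembly_order(side_shell))   # -> order A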

3.3. Man-hour Estimation


The outcome of the Expert System is an assembly-order. Two options for further research are drawn up
to connect this generated assembly-order to a more substantiated estimation of man-hours. This can be
done by means of Action Analysis or by means of Historical Data. Both options are elaborated upon
below.

3.3.1. Action Analysis


Action Analysis means identifying the characteristics of the assembly of the block by counting the
parameters. The assembly-order is analyzed by means of counting all actions per step. These actions are
classified as the different welding methods, the different welding orientations, and 90° and 180°
rotations. All actions are summed per action or type and length of weld, and presented in a table. When
the yard key figures are known (hours per type of weld, hours per rotation), these can be used to make
a weighted summation and calculate the total time for all actions required for the construction of the
block. An example of the implementation of this Action Analysis is shown in Fig.4. This option for the
man-hour calculation indicates how the weighted summation of different sorts of steel work operations
(welding meters, crane movements, etc.) gives an ideal estimation of the hours needed to build the block.

Fig.4: Action Analysis of an example assembly-order
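
A minimal sketch of such a weighted summation is given below; the action counts and yard key figures
are invented for illustration, since the real key figures are yard-specific:

# Illustrative Action Analysis: man-hours as a weighted summation of
# counted actions. Counts and yard key figures below are invented.
action_counts = {"SAW_below_hand_m": 120.0,   # metres of weld
                 "hand_overhead_m": 8.0,
                 "hand_sideways_m": 25.0,
                 "rotations_90": 2,
                 "rotations_180": 1}

key_figures = {"SAW_below_hand_m": 0.1,       # man-hours per metre / per move
               "hand_overhead_m": 1.2,
               "hand_sideways_m": 0.6,
               "rotations_90": 3.0,
               "rotations_180": 5.0}

man_hours = sum(action_counts[a] * key_figures[a] for a in action_counts)
print(f"estimated block man-hours: {man_hours:.1f}")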

3.3.2. Historical Data


A man-hour estimation can also be made based on a parametric approach. This approach estimates the
man-hours of a to-be-built block by using previously built blocks with the same assembly-order and
known man-hours. A step-by-step description of this approach:

1. Find ship blocks
Find ship blocks that have the same assembly-order from a historical data set.
2. Compare
Compare the found ship blocks to generate a parametric formula for man-hour estimation.
3. Calculate
Use the block specifics of the to-be-built block to calculate man-hours using the parametric
formula.

With the parametric analysis the data from the action analysis can be weighted and, if needed, multiplied
by a “practical factor”. This option for the man-hour calculation does take irregularities, like walking
distances for the production workers and downtime, into account.
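
A small sketch of the “Compare” and “Calculate” steps is given below, assuming (purely for
illustration) a linear formula in welding metres and rotations; the historical data are invented:

# Sketch of the parametric step: fit man-hours = a*weld_m + b*rotations + c
# to historical blocks sharing the assembly-order. All data are invented.
import numpy as np

# historical blocks: [welding metres, rotations], observed man-hours
X = np.array([[100.0, 2], [150.0, 3], [120.0, 2], [180.0, 4]])
y = np.array([52.0, 78.0, 60.0, 95.0])

A = np.column_stack([X, np.ones(len(X))])      # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_block = np.array([130.0, 3, 1.0])          # to-be-built block specifics
print(f"estimated man-hours: {new_block @ coef:.1f}")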

3.3.3. Combined Man-hours Estimation


The combination of both options offers the answer to the question of how to use the rationale of experts
for a more substantiated man-hour estimation: by identifying the assembly-order of a block. This
assembly-order can be divided into several actions, as well as be compared to the assembly-orders of
previously built blocks. The combination of both offers a substantiated estimation of the man-hours,
because it is based upon a detailed action analysis of the block and takes irregularities into account.

3.4. Validation and Verification


The results of the AHP questionnaire as explained in paragraph 2.4 are shown in Table I. All criteria
were tested for their influence on the lead time from none (1) to medium (3) to extreme (5). If one expert
ranked two (or more) criteria the same, he was asked to make a relative ranking between them. This
relative ranking is shown in brackets, one being the relatively most important. The preference that was
expressed in the AHP method was found to comply with the preferred assembly-order that is the result
of the Expert System. From the AHP it is concluded that the two most important influences on lead time
are the welding method and the working space. From the production engineers it is learned that the
welding method used is of greatest influence regarding lead time. Working space is regarded to have a major
influence as well, but less than the welding method.

Table I: Absolute weighted criteria with relative ranking when equal

In order to verify the Expert System as constructed, ten blocks are used as input for the Expert System.
This is to test whether the assembly-order that is output matches the expected order. The expected
order is the one that, with the knowledge as deduced from the expert interviews and the AHP ranking in
mind, is chosen by the researchers. For the ten blocks (eight side shells, one double bottom, one bilge)
all outcomes were as expected and thus the system is considered to be constructed in line with the
research findings.

The validation is done by means of expert opinion on the outcome of the Expert System. A production
engineer of the yard was asked to validate whether the outcome of the flowchart is in line with the actual
assembly-order. The resulting assembly-order was accepted, which constitutes a solid validation of the
Expert System.

4. Discussion of results
This section offers an analysis of the results and explains the future possibilities for the implementation
of this research. First, paragraph 4.1 explains the novelty of the approach, after which paragraph 4.2
defines the scope of the developed Expert System and at last paragraph 4.3 elaborates on the discussion
of the research results.

4.1. Novel Approach


This research opens the door for a new way of estimating the man-hours in steel assembly. It breaks
with the current manner in which the (estimated) steel weight of a block determines the man-hours,
which is a unit that influences the man-hours but does not directly imply them. This novel
approach is about the identification of production patterns and the resulting actions, it does not include
the weight of a block. The amount and sort of actions required enable a better substantiated cost estimate.

4.2. Flowchart Usability


The flowchart is suited for all blocks of a ship. Depending on the type of block an assembly-order or a
specific recommendation for further research is the outcome. As all the questions result in a “yes” or
“no” answer, the next step, automation, is a relatively small one. When this Expert System is automated,
it is possible for a shipyard to enter the yard preferences and key figures, resulting in a yard-specific
assembly order. In conclusion, the developed system is usable for different blocks and shipyards.

4.3. Discussion
During the research there were several assumptions and practical limitations to the knowledge gathering.
This paragraph will highlight the most critical points of discussion with regards to the practical
limitations.

• Number of experts
This research is based upon the answers as given by ten experts of the yard, of which six were
interviewed for the specific SRCM. This number is driven by the available resources for this
research, and has provided the research with answers in the extent that is deemed reasonable
and acceptable by the researchers.
• One block used for rationale capturing
For the interviews, only one ship block was used: the side-shell of the selected ship. This means
that the captured rationale was with regard to this specific block type. The Expert System aims
to extrapolate the results of these interviews to as many blocks as possible. It has to be
considered that for this, more interviews are needed into the rationale with respect to other block
types.
• Complex knowledge caught in “binary” rules
This research aimed to capture the knowledge of the experts, and to express it in rules that
are as generic as possible. In this process, the knowledge that contains a lot of conditions (”That
depends on the amount of X.”, ”That depends on Y.”) is compressed into straightforward rules
for the flowchart.

5. Conclusion
The goal of this research is to present a way to improve the man-hour estimation at shipyards, especially
for those with a profit objective. This paper presents a method by means of expert interview, to integrate
the knowledge of the production engineers in the early-design cost estimations. The knowledge of the
experts is used to generate an Expert System that determines the assembly-order of a block. With this
assembly-order two options are presented to find the man-hours related: the first option uses yard key
figures to calculate the hours needed using weighted summation of particulars like “above head welding
meters”, “90° rotations” or “below hand SAW meters”. The second compares the patterns included in
the assembly-order with historical patterns to find blocks of which the registered man-hours can be used
as a justified estimation. Using this comparison of patterns, the subassemblies can be related to historical
constructions, which altogether leads to a justified man-hour estimation of the ship.

This novel approach for man-hour estimation offers a quick and substantiated estimation in the early-
design stage of a ship. For a shipyard this is important information that allows the offering of a realistic
price, thereby reducing the risk of financial loss or of not being granted a tender. It therefore
contributes to a future-proof and healthy shipyard. With the outcome of the flowchart validated by the
shipyard expert, it is proven that the developed method was successful in capturing the rationale of
experts and leads to a reliable suggestion for the assembly order, making it a promising starting point
for further research.

Recommendations for further research comprise the expansion of the Expert System by means of
including the missing parts that mention “further research required”, the automation of the system and
generating historical data to allow assembly-order comparison.

Acknowledgements
This research was conducted with the support of Royal IHC, which allowed the researchers to interview
ten of its employees; for this we would like to express our gratitude.

References

COLTHOFF, J. K. (2009), Schedule generation for section construction activities. Master thesis, Delft
University of Technology

DE BRUIJN, D. P. (2017), A model based approach to the automatic generation of block division plans
- on the effective usefulness in ship production optimization algorithm, Master thesis, Delft University
of Technology

DEN OUDEN, G.; HERMANS, M. (2009), Welding Technology, VSSD, Delft

DENUCCI, T. (2012), Capturing Design - Improving conceptual ship design through the capture of
design rationale, Ph.D. thesis, Delft University of Technology

GREGORY, C. (2015), Improving the pre-outfit strategy for a shipbuilding project – Generation of a
more detailed outfit schedule in the pre-contract phase, Master thesis, Delft University of Technology

KIM, H.; KANG, J.; PARK, S. (2002), Scheduling of shipyard block assembly process using constraint
satisfaction problem, Asia Pacific Management Review 7/1, pp.119-138

LEE, J.; LAI, K-Y. (1991), What’s in Design Rationale, Human-Computer Interaction 6, pp.251-280

ROSE, C. (2017), Automatic Production Planning for the Construction of Complex Ships, Ph.D. thesis,
Delft University of Technology

SAATY, T. (2008). Relative measurement and its generalization in decision making why pairwise
comparisons are central in mathematics for the measurement of intangible factors, the analytic
hierarchy/network process, RACSAM 102/2, pp.251-318

SHETELIG, H. (2013), Shipbuilding Cost Estimation, Master thesis, Norwegian University of Science
and Technology

VISSEREN, S. (2017), Case Based Reasoning: A Cost Estimation Method for the Ship Building
Industry, Master thesis, Delft University of Technology

WEI, Y. (2012), Automatic Generation of Assembly Sequence for the Planning of Outfitting Processes
in Shipbuilding, Ph.D. thesis, Delft University of Technology

223
Approach to Holistic Ship Design – Methods and Examples
Stefan Harries, FRIENDSHIP SYSTEMS, Potsdam/Germany, harries@friendship-systems.com
George Dafermos, NTUA, Athens/Greece, dafermos@deslab.ntua.gr
Afroditi Kanellopoulou, NTUA, Athens/Greece, afroditi@deslab.ntua.gr
Madalina Florean, Cadmatic, The Netherlands, madalina.florean@cadmatic.com
Scott Gatchell, HSVA, Hamburg/Germany, Gatchell@hsva.de
Eero Kahva, Elomatic, Finland, eero.kahva@elomatic.com
Paulo Macedo, Rolls-Royce, Norway, paulo.macedo@Rolls-Royce.com

Abstract

The paper describes developments within the European R&D project HOLISHIP. The bottom-up
approach taken to build a flexible platform based on CAESES® as a Process Integration and Design
Optimization environment is discussed. The integration mechanisms are explained, the use of
surrogates is motivated and examples from various application cases and design tasks are given. The
examples stem from a RoPAX ferry, a double-ended ferry, a service operation vessel for offshore wind
farms and an offshore supply vessel for deep-sea missions. Systems that have been successfully
coupled within CAESES® during HOLISHIP so far include CADMATIC, MARS, NAPA, NEWDRIFT+, ν-Shallo and FreSCo+, ShipX, COSSMOS, and LDPA.

1. Introduction

The design of ships comprises many disciplines and the aim is to establish a good compromise
between opposing objectives. Structural integrity and safety, energy efficiency and environmental
impact, building costs and life-cycle performance are all important aspects which need to be
considered in an approach to holistic ship design. In order to cope with the complexity of design and,
ultimately, optimization, parametric models, low- and high-fidelity simulations, surrogates, data and
variant management etc. have to be considered.

The paper describes parts of the work carried out within the R&D project HOLISHIP
(www.holiship.eu), bringing together 40 European partners. In particular, the bottom-up approach
taken to build a flexible platform on the basis of CAESES® as a PIDO environment (Process
Integration and Design Optimization) will be discussed. The integration mechanisms will be
explained, the utilization of surrogates will be motivated and examples from various application cases
will be given, illustrating the diversity of design phases and design tasks encountered.

The examples stem from application cases (AC) within HOLISHIP and cover a RoPAX ferry for a
24h round-trip service in the Mediterranean, a double-ended ferry for short-sea shipping in
Scandinavia, a service operation vessel (SOV) for the maintenance of offshore wind farms and an
offshore supply vessel (OSV) for deep-sea missions. Systems that have been successfully coupled
within CAESES® during the HOLISHIP project so far are CADMATIC, GES, MARS, NAPA,
NEWDRIFT+, ν-Shallo and FreSCo+, ShipX, COSSMOS, LDPA and many more tools. Coupling
mechanisms, data transfer and storage will be discussed and selected results will be presented.

2. Background

HOLISHIP is a large European R&D project within HORIZON 2020, bringing together 40 partners
over the course of four years (2016-2020). It aims at the design and optimization of maritime assets,
in particular, ships, considering many, ideally all, key performance aspects in a holistic manner. The
adjective ‘holistic’ stems from the Greek word ‘holos’ (ὅλος) which has the meaning of all, whole,
entire. The noun ‘holism’, according to Wikipedia, describes the idea that systems (physical,
biological, chemical, social, economic etc.) and their properties should be viewed as wholes, not just
as a mere collection of parts.

The project is made up of quite a few work packages, six of which are concerned with the
development and improvement of software systems (WP1 to WP6) while two are dedicated to the
integration of tools for design and operation (WP7 and WP8, respectively). The remaining work
packages focus on application cases (WP9 to WP17) as well as on dissemination and management. A
thorough description of the project can be found in Papanikolaou (2019) while a quick overview can
be gained from Harries et al. (2017).

3. Integration approach

3.1. Coupling of systems and data management

In principle, the coupling of systems and the management of data can be approached from two sides,
namely top-down or bottom-up. Both approaches have their advantages and disadvantages as
elaborated in Harries and Abt (2019):

A large and unifying system is typically built via a top-down approach in which all potential data
items and all possible scenarios are accommodated and/or anticipated. Ideally, once defined and
established, the top-down approach is quick to apply to any new design situation. However, the bulk of
the work has to be done upfront when agreeing on how to introduce and approve (new) data items.
In addition, a critical mass of data items needs to be made available before one can start to utilize such
a system productively. The huge efforts put into establishing standardized exchange formats, for
instance STEP (Standard for the Exchange of Product model data), are representative of such a top-
down approach. As can be easily imagined, with a lot of players in the field – system vendors,
classification societies, design offices etc. – and with the many diverse maritime assets encountered, this
is a formidable and long-term undertaking. (Nevertheless, within HOLISHIP a top-down approach is
worked on within WP8, building on the experience previously gained in the aerospace industry. For
lack of space it will not be further covered within this paper.)

On the other side, a bottom-up approach aims at an ad-hoc integration of systems as needed to tackle a
particular design task. The data items to be processed, exchanged and stored are defined during the
course of a design project. Typically, the data base is continuously growing with additional items that
are taken into account. The advantage of a bottom-up approach is that an integrated system, even if
not yet complete, is operational from the start. Complexity is added while working with the system
and while the design task evolves. However, the disadvantage is that there may be an uncontrolled
growth of data items, potentially making it difficult to keep an overview. In addition, a completely
new project would normally have to be started from scratch unless it is similar to a project previously
run that could be copied and subsequently adjusted. (Within HOLISHIP this bottom-up approach is
pursued within WP7, based on the PIDO environment CAESES® by FRIENDSHIP SYSTEMS. This
is, admittedly, a practical way to go ahead as will hopefully become clear within this paper.)

Fig.1 shows the synthesis model for the design of a RoPAX ferry (see section 4) while Fig.2 depicts
the synthesis model for the design of a double-ended ferry (see section 5). Even though there are
similarities it can be readily seen that the systems working together, the tool providers and parties
involved as well as the data items exchanged and the sequence of executing simulations (or surrogates
of them) are not the same. For a synthesis model of an OSV see de Jongh et al. (2018).

The coupling of systems – in order to establish synthesis models as illustrated in Figs.1 and 2 – is
realized by exchanging only those data items that really need to be shared. These data items may be
simple parameters such as length, beam, draft and displacement of the ship, given as floating point
numbers. They may also be lengthy data files such as a watertight STL-description of the hull
geometry as input to a grid generator for viscous flow analyses.

All data items that are used by more than one system are stored in a central repository (within a
CAESES project) while tool-specific inputs, control data and pieces of information that do not need to
be shared are kept locally. An exemplification of this is given in Fig.3 for the connection of three

tools. Within the hexagon there are the data from the intersecting set(s) plus the tool-specific data that
shall be considered within the design project, i.e., all data the design team wants to work with. The
data items outside the hexagon are tool-specific data that are stored, too, but not exchanged.

Fig.1: Instance of a synthesis model for a RoPAX ferry

Fig.2: Instance of a synthesis model for a double-ended ferry

The generic mechanism of coupling a single tool to CAESES® is depicted in Fig.4. In principle, any
tool that can be executed in batch-mode can also be coupled to and run out of CAESES®. All it takes
is the provision of input file(s) and output file(s) as templates plus the description of the location of
the executable that shall be launched when data from the tool are called for. Data items within
templates are either replaced or read by CAESES® when changes to the design are made either
manually in an interactive process or automatically during an optimization campaign. Further details
on tool coupling are given in Harries and Abt (2019). Elaborated examples can be found in Harries,
MacPherson and Edmons (2015) and in Albert et al. (2016).
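
As a rough illustration of such a batch-mode coupling, the following Python sketch substitutes design parameters into an input-file template, launches a (hypothetical) executable and parses scalar results back. The file names, the placeholder syntax and the tool itself are assumptions for illustration only; the actual CAESES Software Connector works with its own template mechanism.

```python
# Minimal sketch of coupling a batch-mode tool to a design platform.
import re
import subprocess
from pathlib import Path

def run_coupled_tool(parameters: dict, template: Path, workdir: Path) -> dict:
    # Replace placeholders such as @LPP@ or @BEAM@ by the current values
    text = template.read_text()
    for name, value in parameters.items():
        text = text.replace(f"@{name}@", str(value))
    (workdir / "tool.inp").write_text(text)

    # Launch the tool in batch mode (hypothetical executable and flags)
    subprocess.run(["analysis_tool", "-batch", "tool.inp"], cwd=workdir, check=True)

    # Parse scalar results of the form "NAME = VALUE" from the output file
    results = {}
    for line in (workdir / "tool.out").read_text().splitlines():
        match = re.match(r"\s*(\w+)\s*=\s*([-+.\dEe]+)", line)
        if match:
            results[match.group(1)] = float(match.group(2))
    return results

# Hypothetical usage:
# resistance = run_coupled_tool({"LPP": 162.85, "BEAM": 27.6},
#                               Path("tool.inp.tpl"), Path("run01"))
```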

Fig.3: Data management in a bottom-up approach
Fig.4: Coupling of tools within CAESES

If more than one tool is connected to CAESES within one project, forming a synthesis model, the
coupling procedure follows the same principle for each individual tool, naturally with tool-specific
input(s) and output(s). Simulations are run in parallel if no output from one tool is required as input to
the next tool. The tools “know” of each other by means of shared parameters (and/or free variables)
that define model dependencies. This means that within the hierarchical model each object is aware of
its supplier(s) and client(s). For instance, if NAPA is asked to provide a damage stability assessment
for a new hull variant, it would request the current geometry from the integration platform along with
further input data and then execute the analysis in batch-mode on the basis of scripts written by an
expert beforehand (see also section 4).
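
The supplier/client bookkeeping can be pictured as a small dependency graph: each tool declares the shared data items it consumes and produces, suppliers are executed before their clients, and independent tools may run concurrently. The following Python sketch, with purely illustrative tool names and data items, shows the idea.

```python
# Minimal sketch of scheduling tools in a synthesis model by their shared data.
from graphlib import TopologicalSorter  # Python >= 3.9

tools = {
    "hull_cad":  {"needs": set(),                "yields": {"geometry"}},
    "hydro":     {"needs": {"geometry"},         "yields": {"power"}},
    "stability": {"needs": {"geometry"},         "yields": {"a_index"}},
    "economics": {"needs": {"power", "a_index"}, "yields": {"npv"}},
}

# A tool depends on every producer of its inputs
producers = {item: name for name, t in tools.items() for item in t["yields"]}
graph = {name: {producers[i] for i in t["needs"]} for name, t in tools.items()}

order = list(TopologicalSorter(graph).static_order())
print(order)  # e.g. ['hull_cad', 'hydro', 'stability', 'economics'];
              # 'hydro' and 'stability' have no mutual dependency and may run in parallel
```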

3.2. Surrogate models

A synthesis model that comprises many systems, by nature, copes with all sorts of issues associated with
licenses, software versions, hardware, operating systems, network availability etc. Furthermore, some
of the systems require a lot of computational resources, for instance viscous Computational Fluid
Dynamics codes take a long time to run and may need, at least temporarily, quite a lot of memory and
storage. Therefore, it is rather beneficial to replace some of the systems with suitable surrogate
models. This has two major advantages: Firstly, data are returned much faster (split seconds vs. hours)
when needed during the design work. Secondly, all potential bottlenecks of running the more
complicated systems are shifted upfront and are taken care of by the experts who know their software
and hardware really well.

A surrogate model (also known as a meta-model, a response surface or a reduced order model) is an
approximation of what the actual simulation would yield. It is created on the basis of many
simulations, typically produced by means of a design-of-experiment (DoE) within the anticipated
bounds of the free variables of the succeeding design task. Typical surrogate models are based on
ANN (Artificial Neural Networks), Kriging and/or polynomial regression.
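
As a minimal illustration of the idea, the following Python sketch fits a second-order polynomial response surface to DoE samples by least squares; the sample data are fabricated stand-ins for simulation results, and production surrogates (Kriging, ANNs) are of course more powerful.

```python
# Minimal sketch: a quadratic response surface fitted to DoE samples.
import numpy as np

def design_matrix(x):
    """Quadratic basis in two variables, e.g. length and beam."""
    l, b = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(l), l, b, l * b, l**2, b**2])

# DoE samples (inputs) and fabricated "simulation" responses
rng = np.random.default_rng(1)
X = rng.uniform([110.0, 17.5], [135.0, 22.0], size=(50, 2))   # length, beam
y = 0.8 * X[:, 0] + 2.5 * X[:, 1] + 0.01 * X[:, 0] * X[:, 1]  # stand-in for CFD

coeffs, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

def surrogate(length, beam):
    """Split-second approximation replacing the expensive simulation."""
    return (design_matrix(np.array([[length, beam]])) @ coeffs)[0]

print(surrogate(122.6, 19.6))
```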

Surrogates can be interpreted as numerical series, for instance, a dedicated numerical hull series for
resistance and propulsion of a specific ship type, say a RoPAX ferry (see section 4) or a double-ended
ferry (see section 5). Fig.5 shows a comparison between the results from the simulation (abscissa) and
the approximation from the surrogate model (ordinate). For concept design the encountered error, here
within ±1% of the simulation, is acceptably small. Again, for a more comprehensive discussion see
Harries and Abt (2019). Fig.6 illustrates the estimates from a surrogate model for the change of
resistance of a double-ended ferry (see section 5). Increasing beam raises the resistance monotonously
while a change in length yields a noticeable minimum in the middle between the length’s lower and
upper bound. For visualization purposes only the length and beam are shown as free variables but
weight and speed are included in the surrogate model, too.

Fig.5: Comparison between simulation data and surrogate model for calm-water resistance of a RoPAX ferry
Fig.6: Surrogate model for the change of resistance of a double-ended ferry

3.3. Application cases

The ships chosen for research and development within HOLISHIP are representative of contemporary
and anticipated European ship design and ship building activities. For details about market conditions,
mission requirements and operational profiles see Yrjänäinen et al. (2019).

The examples discussed here – a RoPAX ferry (HOLISHIP WP7/WP16), a double-ended ferry
(WP17), a SOV (WP7) and an OSV (WP9) – stem from several of the application cases, all of which
were still ongoing within HOLISHIP at the time of writing this paper. Therefore, the idea here is to
present a flexible and extendable approach to holistic ship design. (The intention of this paper is not,
however, to propose any final designs for specific operating scenarios.)

4. RoPAX ferry

Let us start with an optimization study for a RoPAX ferry representative of short-sea shipping in
European waters, in particular the Mediterranean, the North Sea and the Baltic Sea. As a baseline a
RoPAX ferry by FINCANTIERI S.p.A., designed within the former EU project GOALDS, was
chosen. The ship is a twin screw, medium size vessel for short (international) voyages with a length
between perpendiculars, maximum beam and subdivision draft of 162.85 m, 27.6 m and 7.1 m,
respectively. The route between Piraeus (mainland Greece) and Heraklion (Crete) was chosen as an
operational scenario with a day trip of 6.5 hours at 27 kn and a nightly return trip of 8.3 hours at
21 kn. The design features a main and an upper vehicle deck and a lower hold. For details see Harries
et al. (2017).

4.1. Design task and parametric model

CAESES® was used as the HOLISHIP integration environment (PIDO) and, in addition, as the CAD
software to generate the hull form, exploiting the built-in methods of surface modeling and parametric
transformations. A stable connection between CAESES® and NAPA as the primary analysis software
was one of the important prerequisites to realize the optimization study in which the probabilistic
damage stability and the financial performance of the investment were to be assessed concurrently.

The software coupling is achieved via text files exported from CAESES’ Software Connector to be
used as input to NAPA, along with an IGES-file, containing the hull surface representation as
parametrically generated within CAESES. A series of calculations regarding several aspects of each
design variant can then be performed by executing macros within NAPA. The generated output, also
as text file(s), is collected by CAESES and subsequently utilized to build surrogate models.

The NAPA parametric model requires input about the main dimensions of the vessel and the
propulsion power, which was estimated from CFD analysis using ν-Shallo and FreSCo+ by HSVA.
The positions of the transverse and longitudinal bulkheads are determined based on the change of the
hull’s main dimensions. The current hull geometry is imported to NAPA from an IGES-file,
overwriting any existing hull surface. This procedure ensures that the hull form is updated and that the
main limiting surfaces, inside the hull, are well defined. Subsequently, the compartments are defined
by referencing the updated limiting surfaces and snapping onto them, leading to a set of rooms which
are linearly scaled in the three Cartesian directions, covering the entire internal volume of the hull up
to the car deck’s ceiling. Openings and cross-flooding devices are defined, according to the geometry
changes, and the lane meters available for cars and trailers are estimated via the volume change of the
RoRo spaces, utilizing the baseline as reference.

Next, the lightship weight is estimated as the sum of the steel weight, the machinery weight and the
outfitting weight. The steel weight is volume-dependent and it is approximated as the initial steel
weight of the baseline, corrected by a term accounting for the change of the hull volume up to the car
deck’s ceiling. The machinery weight depends on the propulsion power, received from a surrogate
model for the hydrodynamic performance via CAESES as input, while the outfitting weight is
considered as constant during the optimization.
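
A minimal sketch of this weight build-up is given below; the linear volume correction and the machinery coefficient are assumed forms for illustration, not the actual NAPA implementation, and all numbers in the usage line are hypothetical.

```python
# Minimal sketch of the lightship estimate (assumed correction forms).
def lightship(volume, power_kw, baseline_volume, baseline_steel, outfitting,
              machinery_per_kw=0.02):
    """Lightship = steel + machinery + outfitting (weights in tonnes)."""
    steel = baseline_steel * (volume / baseline_volume)  # volume-corrected steel
    machinery = machinery_per_kw * power_kw              # driven by propulsion power
    return steel + machinery + outfitting                # outfitting kept constant

print(lightship(52_000.0, 36_000.0, 50_000.0, 5_600.0, 2_300.0))  # illustrative
```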

A number of loading conditions are defined based on the estimation of certain mass loads and the
anticipated filling degree of the tanks, assigned for fuels, fresh water etc. For each loading condition
the intact stability criteria (2008 IS-Code) for passenger vessels are evaluated, considering passenger
crowding, ship turning and weather criteria in addition to the general criteria that are applicable to all
merchant ships. Finally, damage stability assessment and financial analysis are performed.

4.2. Damage stability analysis

Damage stability assessment is a crucial design aspect for RoPAX ferries. The recently revisited
damage stability regulatory framework (MSC.421(98)) sets a higher standard of safety, as introduced
by the new Required Subdivision Index (R-index), while it aims to better account for water-on-deck
through stricter criteria (s-factor) in damage cases involving RoRo spaces. Consequently, the
regulations constrain the feasible design space, making the achievement of the required safety levels a
challenge.

Fig.7: Subdivision arrangement for a RoPAX ferry

Therefore, optimization studies are needed to reveal possible design trends driven by the regulations.
In order to perform damage stability calculations, a subdivision is parametrically selected taking into
account the main structures that hinder the flooding process, Fig.7. The three intact loading conditions
(light service, partial subdivision and deepest subdivision draft) need to be defined according to the
regulations. The next step is the damage generation and the calculation of the s-factor for each
damage scenario, i.e., a combination of initial condition and damage case. Finally, for each draft a
cumulative (partial) Attained Index (Ai) is determined and the final Attained Subdivision Index (A-
index) is calculated as a weighted average of the partial attained indices. A vessel complies with the
regulations if the A-index is higher than the Required Index (R-index) while each of the partial
indices separately exceeds 90% of the R-index value. Macros performing the aforementioned tasks
were programmed in NAPA and used for the automated damage stability assessment.
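
The bookkeeping behind the index calculation can be sketched as follows; the p- and s-factors per damage scenario come from the NAPA calculations, and the 0.2/0.4/0.4 weighting of the light service, partial and deepest subdivision drafts follows the probabilistic framework referenced above.

```python
# Minimal sketch of the attained-index bookkeeping.
def partial_index(damage_cases):
    """A_i = sum of p*s over all damage scenarios at one draft."""
    return sum(p * s for p, s in damage_cases)

def attained_index(a_light, a_partial, a_deep):
    """Weighted average of the partial indices (0.2/0.4/0.4 weighting)."""
    return 0.2 * a_light + 0.4 * a_partial + 0.4 * a_deep

def complies(a_light, a_partial, a_deep, r_index):
    """A-index above R and every partial index above 90% of R."""
    a = attained_index(a_light, a_partial, a_deep)
    return a >= r_index and all(ai >= 0.9 * r_index
                                for ai in (a_light, a_partial, a_deep))

print(complies(0.78, 0.82, 0.80, r_index=0.79))  # illustrative values -> True
```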

4.3. Net Present Value

The economic performance of each design variant is expressed as the difference in Net Present
Value (NPV) between the variant and the baseline. The changes in the NPV are determined from an
investment scenario and an operating scenario. The investment scenario is defined by the building
costs, the year of delivery and the interest rate. The building costs are decomposed into the costs of
steel, outfitting and machinery. The costs of each element are calculated using appropriate cost
coefficients and weight and power deviation from the baseline design, rather than using absolute cost
figures. The operating scenario is used for the evaluation of the annual income and operating costs
throughout the vessel’s lifetime in comparison to the baseline along with the selling price after the end
of service. The annual income is assumed to consist of earnings from passenger and vehicle fares. The
operating costs consist of the annual costs for port, fuel, crew, maintenance and various other cost
items. The impact of design modifications on building and operating cost as well as annual revenues
are calculated first and, based on them, the variation of the NPV for a specified lifetime is estimated.

The ship is expected to be operated year-round, considering a high season of seven weeks with seven
round-trips per week, a medium season of twenty-four weeks with five round-trips per week and a low
season of twenty-two weeks with three round-trips per week, resulting in a total of 235 round-trips per
year. Appropriate occupancy rates for passengers, cars and trucks for each of these three periods were
assumed for the calculation of annual revenues. Since there are always limits in the demand for
transport work, a gradual reduction of the occupancy rates for ships with larger transport capacity was
taken into account. The ship’s earnings per year are calculated assuming an income of € 7.189 per
lane meter and € 12.392 per passenger, with port fees of € 103 per gross tonnage. The prices for steel
and power are € 7.080 per ton and € 413 per kW, respectively. The assumed oil prices are € 242 per
ton of FO, € 767 per ton of DO and € 1062 per ton of LO.

A lifetime of 20 years has been used, with an interest rate of 5%. An inflation rate and a fuel price
escalation pattern are also applied, considering a potential shift to low sulfur fuels in order to comply
with the anticipated regulations turning the Mediterranean Sea into an ECA (Emission Control Area)
in 2020.
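
A minimal sketch of the delta-NPV evaluation under these settings is given below; inflation and fuel-price escalation, which are included in the actual study, are omitted for brevity, and the cash-flow figures in the usage line are hypothetical.

```python
# Minimal sketch of the delta-NPV evaluation vs. the baseline design.
def delta_npv(delta_capex, delta_annual_cashflow, lifetime=20, rate=0.05):
    """NPV difference of a variant vs. the baseline over the ship's lifetime."""
    discounted = sum(delta_annual_cashflow / (1.0 + rate) ** year
                     for year in range(1, lifetime + 1))
    return -delta_capex + discounted

# Hypothetical variant: 1.2 MEUR extra building cost, 0.3 MEUR/year extra earnings
print(f"delta NPV = {delta_npv(1.2e6, 0.3e6) / 1e6:.2f} MEUR")
```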

4.4. Selected results for the RoPAX ferry

A thorough case study for the RoPAX vessel was conducted, with a design space exploration of 500
variants generated by CAESES® by means of a Sobol as a standard DoE. The hull forms were
transferred to NAPA and each variant was evaluated using the tools and procedures for damage
stability assessment and financial analysis as described above. Out of these 500 designs, 116 proved
to be feasible. The DoE served as basis for a multi-disciplinary and multi-objective optimization in
which the Net Present Value of the designs was to be maximized (as an economic performance index)
while the fuel consumption per round-trip was to be minimized (as an ecological performance index).
A genetic algorithm, namely NSGA-II, available within CAESES®, was used, resulting in 924 feasible
and 76 infeasible designs.
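
For illustration, a Sobol DoE over two free variables might be generated as follows, here with scipy's quasi-Monte Carlo module as a stand-in for the built-in DoE of CAESES; the bounds and the feasibility test are illustrative assumptions.

```python
# Minimal sketch of a Sobol design-of-experiment with a feasibility check.
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=False)
unit = sampler.random(500)                                # 500 variants in [0,1)^2
variants = qmc.scale(unit, [150.0, 25.0], [175.0, 30.0])  # Lpp [m], beam [m]

feasible = [v for v in variants if v[0] / v[1] > 5.5]     # stand-in L/B constraint
print(f"{len(feasible)} of {len(variants)} designs feasible")
```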

In Fig.8 a scatter diagram of Attained Subdivision Index difference compared to the Required Index
(herein denoted as A-Index margin) versus the Net Present Value difference of each variant compared
to the baseline (herein denoted as delta NPV) is presented. A set of constraints was introduced in this
study, according to which all feasible designs should have positive delta NPV and A-Index margin
higher than 0.01 (as a safety performance index). The Pareto frontier of the non-dominated designs is
shown in the upper right region of the first quadrant, namely where highest economic success (as
expressed in delta NPV) meets highest safety (as expressed in A-index margin).

Fig.8: Attained Index Margin vs. NPV variation for a RoPAX ferry

5. Double-ended ferry

A double-ended ferry (DE ferry) is typically utilized to connect an island to the mainland or to span
fjords, lakes or rivers where bridges would not be (economically) appealing. Often, a round-trip
would be pretty short, typically around one hour or less, and it would be too time-consuming to turn
the vessel or the maneuver itself would be too cumbersome. Hence, a DE ferry normally is a small yet
complex vessel that operates more or less the same way in both directions, resulting in special
requirements for arrangements, hull shapes and propulsion systems. Within the HOLISHIP project the
segment of double-ended ferries has been subdivided into three different size classes based on data
derived from more than 100 existing vessels. For details see Yrjänäinen et al. (2019).

5.1. First design task

Within HOLISHIP the first design task for the double-ended ferry commences with a pre-selected
topology, defining the type of hull form (currently, a monohull with central skegs at either end), the
propulsion system (presently, one propeller at either end) and the general layout (here, an open car
deck spanned by a central superstructure), Fig.9. It is anticipated to investigate alternative topologies
and concepts at a later stage of the project, for instance replacing Diesel engines with electric drives.
Eventually, separate optimizations shall be run and compared with each other to identify the best
overall design for a particular route and for specific owner’s requirements.

The Hull Structure module of the CADMATIC Hull system was chosen to be at the core of the design
task while CAESES® again (as in the application case of the RoPAX ferry) acts as both the integration
platform and the provider of the hull shape. In principle, a model within CADMATIC Hull is used for
3D-modeling of the entire structure from basic design via detailed design to production engineering of
hull blocks, assemblies, panels and parts.

Within CADMATIC Hull there are several parameters that influence the main dimensions of the
vessel, see Fig.10. These parameters can be given either as fixed values or as mathematical formulas.
A parameter can also be defined in terms of other parameters (e.g. a reference distance between decks)
and can be modified at any given time. Changing these parameters will then result, for instance, in a
ship with different stability and damage characteristics.
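
The behavior described above can be mimicked in a few lines: parameters are either fixed values or formulas in terms of other parameters, re-evaluated whenever a driving value changes. This is only an illustration of the concept, with hypothetical names and values, not the CADMATIC implementation.

```python
# Minimal sketch of fixed-value and formula parameters.
params = {
    "LOA": 122.6,                                        # fixed value
    "deck_height": 2.8,                                  # fixed value
    "n_decks": 3,
    "depth": lambda p: p["n_decks"] * p["deck_height"],  # formula parameter
    "frame_spacing": lambda p: p["LOA"] / 175.0,         # formula parameter
}

def value(p, name):
    """Evaluate a parameter: call it if it is a formula, else return the number."""
    v = p[name]
    return v(p) if callable(v) else v

params["deck_height"] = 3.0    # change one driving parameter ...
print(value(params, "depth"))  # ... and dependent values follow: 9.0
```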

Fig.9: Reference surface model (top) and steel model (bottom) for DE ferry within CADMATIC

The parametric model for the internal arrangement was created within CADMATIC Hull and stored
in the form of surfaces with associated properties, so-called “reference surfaces.” Such a reference
surface is an invisible surface that is used as a topological structure to define plates. The actual steel
structures refer to these reference surfaces and are realized by inheriting user-defined values such as
thickness and material type. As soon as any parameter of a reference surface changes the steel
structure is updated. This is called “property topology” within CADMATIC Hull. Since the plane of a
future plate and its common properties are predetermined by its reference surface, the process of
creating a plate is considerably faster. In addition, CADMATIC Hull allows establishing relationships
between one design element and another, extending the property topology beyond the concept of
reference surfaces. In this sense, a new design element can be created in relation to an existing design
element. When changing the element that is being referenced the dependent design element is
updated, too. As a result, it is possible to define an entire system where modifying just one parameter
changes every key element in the ship while automatically taking all conditions and constraints into
account.

5.2. Steel weight estimates

Within CADMATIC there are two options for weight estimations at an early design stage:

The first option is to use unit weights. The unit weights are calculated from the mid-ship section and
then allocated to other structural elements. The unit weights can be computed, for example, from a
rule-based calculator such as Mars2000 by BV. This is rather simple for longitudinal structures but
typically requires more effort for other structures. The mid-ship section defines the unit weights in the
first phase. These unit weights are subsequently used for similar structures throughout the vessel.
Naturally, there are additional requirements to be observed, for example collision bulkheads have to
be reinforced according to the rules and the ends of the vessel need special treatment. The goal is,
however, to keep the process as simple as possible in order to calculate the steel weight rapidly with
very few pieces of information.

The second (and preferred) option for steel weight estimates is to determine the weight via a new
function, currently being developed by CADMATIC. The steel weight in CADMATIC Hull is based
on summing up the data of all parts, naturally making a quick estimation based on just a few parts
impossible. The mainframe and the main layout are mandatory to define the basic design and the
general arrangement. With this information the scantlings are determined so that the ship follows the
rules. An existing tool, Flexiweight, is therefore being extended to estimate the weight and center of
gravity (COG) of the ship based on so-called “reference” frames.

Fig.10: Parameters of a DE ferry computed in CADMATIC Hull (based on fixed values or other parameters via mathematical formulas)

For the application case of the double-ended ferry presently only the mainframe is utilized. A naval
architect designed the parallel mid-ship section and created a completely loaded 3D space.
Flexiweight calculates the actual steel weight and COG of the mid-ship section, the volume and the
weight of the completely loaded space. Using this information, Flexiweight then calculates the weight
per volume ratio [kg/m3] taking into account the ship shape and utilizes this weight for the other
frames that are similar to the mid-ship. The tool is not limited to the mid-ship section. Rather, several
frames could be employed as reference for weight estimations. Obviously, the more reference frames
are considered the better but the greater the effort. Eventually, Flexiweight will also take the weight of
the shell plates into consideration which should finally result in even better weight estimations.
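
The reference-frame idea can be sketched as follows; all numbers are illustrative, not Flexiweight data.

```python
# Minimal sketch of weight extrapolation from a fully modelled reference space.
def weights_from_reference(ref_steel_kg, ref_volume_m3, frame_volumes_m3):
    """Apply the weight-per-volume ratio of the reference space to similar frames."""
    density = ref_steel_kg / ref_volume_m3          # [kg/m3] of the reference space
    return [density * v for v in frame_volumes_m3]  # estimated weight per frame

frames = weights_from_reference(185_000.0, 4_200.0, [3_900.0, 4_100.0, 3_750.0])
print([round(w / 1000.0, 1) for w in frames])       # tonnes per similar frame
```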

5.3. Calm-water hydrodynamics

A surrogate model for the resistance prediction was established for the double-ended ferry on the
basis of an educated guess for the design variations of interest. Fig.11 shows geometry variations
(primarily length and beam) and operational variations (draft and speed).

The computations regarding steel weight are performed within CADMATIC Hull as explained above.
Naturally, design decisions regarding the main dimensions and the structure affect the weight and,
consequently, the vessel’s target displacement. Since no draft limit needs to be enforced for the
operating region of interest, the draft was regarded as a dependent variable for adjustments. A change
in length and beam leads to a change in weight which again leads to a change in draft and, finally, a
change in resistance. This readily couples the free variables, say length and beam in early design, with
derived values such as weight, draft and resistance.
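
This coupling chain can be sketched as a simple bisection problem: the draft is adjusted until the displacement balances the current weight estimate, after which the resistance surrogate would be queried. The block-coefficient hydrostatics below are a crude assumption for illustration.

```python
# Minimal sketch of balancing draft against the weight estimate.
RHO = 1.025  # t/m3, sea water

def displacement(length, beam, draft, cb=0.55):
    """Crude block-coefficient hydrostatics in tonnes (assumed form)."""
    return RHO * cb * length * beam * draft

def balanced_draft(length, beam, weight, t_lo=1.0, t_hi=6.0):
    """Bisection on draft until displacement matches the weight estimate."""
    for _ in range(50):
        t = 0.5 * (t_lo + t_hi)
        if displacement(length, beam, t) < weight:
            t_lo = t          # too little buoyancy -> deeper draft
        else:
            t_hi = t
    return 0.5 * (t_lo + t_hi)

print(round(balanced_draft(122.6, 19.6, 4200.0), 3))  # illustrative weight [t]
```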

Fig.11: Variations of the double-ended ferry

A common procedure to get an acceptable resistance prediction at an early design stage is to utilize a
standard hull series. However, for a double-ended ferry this would not be easy to find. As an
alternative, the design team, here Elomatic, would have to come up with another way of predicting the
resistance for each design variant. This would normally be done by running suitable tools, such as the
potential flow code ν-Shallo, possibly in connection with a viscous flow solver like FreSCo+.
simulations would either be undertaken locally by the designers themselves, necessitating licenses and
experience alike, or contracted to a consultant that would analyze individual designs, introducing a
time lag and limiting the number of designs that can be considered.

Within HOLISHIP a different approach is proposed: The expert, here HSVA for hydrodynamics,
creates a surrogate model (see also section 3.2) and allows the design team to utilize it via the
HOLISHIP platform built on CAESES®. A clear advantage is that the design team can conduct many
investigations independent of any additional time-consuming computations. Another advantage is that
the experts that provide the surrogate are able to ensure high quality since they are more experienced
users of the codes and, as is the case at HSVA, are even the developers of the codes with an entire
database of model tests at its disposal. To this end, HSVA used CAESES®, together with the built-in
software package Dakota, and created a four-dimensional search space for the -Shallo computations
by means of a Latin Hypercube as a DoE. The length, beam and displacement of the ship were varied
and then computed over a range of speeds.

For calibration, a selection of a few candidates from the design space was analyzed with the RANS
solver FreSCo+. The comparison between panel code results and RANS computations provided a
correction factor that gives the panel code a more realistic prediction. For a comparison of the results
see Fig.12. These high-fidelity computations were performed on the computing cluster at HSVA, a
hardware resource which could not be easily matched by a design team or easily replaced by cloud
computing. Fig.13 shows the influence of beam on resistance for given length-over-all (LOA) at the
same displacement. It does not come as a surprise that the wider-beamed vessel is also the one with
higher resistance. This would be qualitatively known by the design team. On the basis of the surrogate
model it can now be readily considered quantitatively, too.

Fig.12: Comparison of hydrodynamics data from potential flow and viscous simulations of a DE ferry
Fig.13: Effect on resistance by changing the beam for the same displacement

5.4. Selected results for the DE ferry

As can be appreciated, CADMATIC already represents a powerful parametric modeling system. By
integrating it with additional tools and systems, as shown by the synthesis model illustrated in Fig.2,
the versatility is further extended. Within a first application CADMATIC Hull has been successfully
connected to the HOLISHIP platform. CAESES® produces a hull form and then runs CADMATIC
Hull in batch-mode. For the hydrodynamics analysis the surrogate model is utilized (as discussed).

The first implementation of the design and optimization process comprises the following elements, a
minimal sketch of which is given in code after the list (subject to further extensions within the
HOLISHIP project, for instance a more elaborate system for the assessment of life-cycle performance):

• For each variant a new hull form is generated by CAESES for the free variables of length and beam. CADMATIC Hull is called and (re)calculates the 3D model.
• CADMATIC then checks via MARS2000 whether the scantlings are acceptable (and not over-dimensioned).
• Thereafter, CADMATIC Hull gives an estimation of the hull weight and returns the value to the integration platform.
• CAESES calculates the resistance for the current hull form from the surrogate model, provided from CFD simulations, taking into account the draft that corresponds to the weight estimate.
• Changes to CAPEX and OPEX are computed based on steel weight (CAPEX) and resistance (OPEX).
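
In the sketch below, every helper is a crude stand-in for the actual coupled tool (CAESES, CADMATIC Hull, MARS2000, the resistance surrogate) and all coefficients are illustrative assumptions.

```python
# Minimal sketch of the per-variant evaluation loop listed above.
def steel_weight(length, beam):            # stand-in for CADMATIC Hull
    return 0.55 * length * beam            # tonnes, assumed areal factor

def resistance(length, beam, weight):      # stand-in for the CFD surrogate
    return 80.0 + 0.08 * weight + 4.0 * beam - 0.5 * length  # kN, illustrative

def car_capacity(length, beam):            # cars from available deck area
    return int(0.85 * length * beam / 12.5)   # assumed ~12.5 m2 per car

def evaluate_variant(length, beam):
    if car_capacity(length, beam) < 150:   # inequality constraint
        return None                        # variant discarded as infeasible
    w = steel_weight(length, beam)         # (scantling check omitted here)
    r = resistance(length, beam, w)
    capex = 7.0e3 * w                      # EUR, assumed cost per tonne of steel
    opex_per_year = 3.0e3 * r              # EUR, assumed fuel-cost proxy
    return capex, opex_per_year

print(evaluate_variant(122.6, 19.6))
```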

So far the optimization work has focused on three different key performance indicators: steel weight,
number of cars and resistance. The optimization loop tries to minimize the steel weight while keeping
the number of cars as close to 150 as possible. An inequality constraint was set up so that, if the number
of cars falls below 150, the variant is discarded from the optimization run as infeasible. Length and
beam were varied as free variables since they affect the deck area the most. The length was chosen to
vary between 110 m and 135 m while the beam was allowed to vary between 17.5 m and 22 m. (In the
next phase additional free variables such as depth and deadrise as well as additional tools shall be
considered.)

A lifetime of 25 years with an interest rate of 6 % was assumed. For the first optimization campaign
the oil price was set to be € 500 per ton of DO (subject to further refinement), the number of cars per
day is supposed to be 1260 with a ticket price of € 8 per car. The number of runs is expected to be 15
per day over a round-trip of 10 nm at a service speed of 13 kn.

Fig.14: Excerpt of optimization history for DE ferry

Dakota within CAESES® was employed for the optimization runs, utilizing a response surface model
for the steel weight (section 5.2) and a response surface model for the resistance (section 5.3, Fig.6).
A maximum Net Present Value (NPV) of € 12.4M was obtained with 122.6 m of length, 19.6 m of
beam and 1731 tons of total weight. Fig.14 gives the history of this first study for the DE ferry while
Fig.15 shows some selected correlations. The green dots are feasible designs while the red triangles
mark infeasible designs (for instance, since not enough cars can be accommodated). The blue dots
show designs that are optimal with regard to the chosen objectives.

Fig.15: Correlations between NPV and total weight as key performance indicators and
length and beam as free variables for a DE ferry

6. SWATH Service operation vessel (SOV)

In Europe the market of OMS (Operation, Maintenance and Service) for offshore wind farms is highly
competitive, with an expectation of significant growth in the coming years. Ship-owners are
constantly pursuing lower costs, increased efficiency and profitability for their vessels. Service
operation vessels (SOV) are ships dedicated to OMS, providing transit, accommodation and shelter as
well as means of transfer to the turbines. The harsh conditions often encountered in the North Sea are
particularly challenging, typically leading to pretty large vessels in order to yield acceptable motions
at sea. This results in large capacity for cargo and crew, including technicians, but also in high
building and operating costs.

Consequently, the question came up if a shorter non-conventional vessel could also serve as an SOV.
A design task was set up between HOLISHIP partners Rolls-Royce, NTUA and FRIENDSHIP
SYSTEMS to investigate the potential of replacing a conventional monohull with a SWATH (Small
Water-plane Area Twin Hull), Fig.16.

Fig.16: Rendered view of the SWATH SOV concept

SWATHs, in general, are developed for superior seakeeping in relatively high sea states when
compared to conventional hulls of similar size. Other advantages of SWATHs are relatively low
power requirements at high speeds, good maneuverability, ample transverse stability and large deck
areas. Nevertheless, SWATHs also have their drawbacks such as large wetted surface areas
(detrimental at low speeds), sensitivity to weight changes (necessitating draft and trim control), pitch
instabilities (calling for stabilizing fins) and unique structural challenges. For a more comprehensive
discussion see Papanikolaou et al. (1991).

6.1. Feasibility study

The primary aim of a first study was to compare various designs with regard to zero-speed seakeeping
performance, for details see Macedo (2018). Four design concepts were considered for the SOV: Two
existing monohulls (a 62 m and a 82 m vessel already designed by Rolls-Royce) and two new
SWATH configurations (one with symmetric and one with asymmetric demi-hulls) to be developed
and optimized. Again, CAESES® was utilized as the HOLISHIP integration platform. It was applied
for modeling, mesh generation, tool integration and optimization. The SWATH hulls were designed
parametrically and optimized for low motions. Finally, the most favorable designs were compared to
the monohulls.

6.2. Geometric modeling

A fully-parametric model (FPM) was developed since, firstly, no initial SWATH design was available
to start from and, secondly and very importantly, the efficacy of a fully-parametric model in
generating design variants and supporting the simulation effort is much higher than that of a partially-
parametric model (PPM). See Harries et al. (2015) for a thorough comparison of FPM and PPM. The
parametric model was divided into four main parts as depicted in Fig.17: The parallel mid-body, the
bulb region, a transition between the parallel mid-body and the bulb and, finally, the design waterline
defining the geometry of the strut.

Fig.17: Parametric model for the asymmetric SWATH

Within the model the length between perpendiculars was regarded as a design variable while the
length-over-all was kept constant by automatically adjusting the extension of the bulb. In this way the
length of all SWATH variants would be the same as the length-over-all of the shorter monohull.
Similarly, the width of the demi-hull’s mid-body was automatically adapted to match the shorter
monohull’s displacement volume. We felt that in this way the vessels could be compared on equal
terms.

6.3. Seakeeping analysis

For the seakeeping analysis NEWDRIFT+ was utilized. NEWDRIFT+ has been developed at NTUA
as a six degrees-of-freedom (6 DOF), frequency domain, 3D panel code for seakeeping analysis of
ships and arbitrarily shaped floating structures subject to incident regular waves. 3D panel methods
are a special category of solvers for the wave-body interaction problem that combine acceptable
accuracy and computational speed, making them very attractive to assess the seakeeping behavior of
floating structures within an optimization campaign. For details see Papanikolaou (1985), Papanikolaou and Schellin (1992) and Liu et al. (2011).

NEWDRIFT+ is based on Green’s integral theorem to calculate the potential flow around a floating
body that interacts with regular waves. To this end, a suitable Green function is employed, satisfying
all boundary conditions of the wave-structure interaction problem (free surface, bottom and radiation
conditions), except for the body boundary condition. In order to fulfill the body boundary condition a
distribution of pulsating sources over the wetted body surface is assumed. The unknown source
strength distribution and related flow characteristics can be evaluated as the solution of an integral
equation. For the numerical realization of the method, the wetted surface must be discretized into planar
triangular and/or quadrilateral panels. This is realized within CAESES® and forms part of the fully-
parametric model, see Fig.18. Note that the panels are not evenly distributed but a stretch factor was
applied to the quadrilateral mesh so as to increase accuracy for any given number of panels.
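
As an illustration of such a stretched distribution, a cosine spacing clusters stations towards the ends of an interval, increasing resolution there for a fixed panel count; the actual mesh generation in CAESES is more elaborate, and this is only a sketch of the principle.

```python
# Minimal sketch of a stretched station distribution for panel meshing.
import numpy as np

def cosine_spacing(x0, x1, n_panels):
    """n_panels+1 stations on [x0, x1], clustered towards both ends."""
    theta = np.linspace(0.0, np.pi, n_panels + 1)
    return x0 + 0.5 * (x1 - x0) * (1.0 - np.cos(theta))

print(np.round(cosine_spacing(0.0, 10.0, 8), 2))
```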

Fig.18: Mesh generated by CAESES® of the underwater body for NEWDRIFT+

Wave induced loads, consisting of the incident wave and diffraction loads, and the radiation terms,
expressed as the added mass and damping coefficients, can be calculated by integration over the
wetted surface. Furthermore, absolute and relative motions, velocities and accelerations can be cal-
culated and monitored for selected points in the body’s coordinate system. Post-processing the results
from regular waves at different frequencies, also the seakeeping behavior in irregular waves can be
analyzed via spectral characteristics, utilizing the vessel’s Response Amplitude Operators) (RAOs).
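
This spectral post-processing step can be sketched as follows: the response spectrum equals |RAO|² times the wave spectrum, and the RMS response is the square root of its zeroth moment. A Bretschneider-type spectrum and a fabricated RAO serve as placeholders for the NEWDRIFT+ output.

```python
# Minimal sketch of spectral post-processing of RAOs in irregular waves.
import numpy as np

def bretschneider(omega, hs, tp):
    """Two-parameter wave spectrum for significant height hs and peak period tp."""
    wp = 2.0 * np.pi / tp
    return (5.0 / 16.0) * hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega) ** 4)

omega = np.linspace(0.2, 2.5, 400)              # wave frequencies [rad/s]
rao = 1.0 / (1.0 + (omega / 0.8) ** 4)          # fabricated heave RAO [-]
s_resp = rao**2 * bretschneider(omega, hs=3.25, tp=8.5)         # response spectrum
m0 = np.sum(0.5 * (s_resp[1:] + s_resp[:-1]) * np.diff(omega))  # zeroth moment
print(f"RMS heave = {np.sqrt(m0):.2f} m")
```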

6.4. Calm water resistance analysis

The calm water resistance was analyzed based on a second panel code, also developed at NTUA, see
Papanikolaou (1985). The methodology is considered applicable to preliminary design as it estimates
viscous resistance and computes wave resistance, validated by experimental data. The operational
speed profile of the vessel must be carefully studied since the wave resistance shows pronounced
humps and hollows. The inputs required for the computations are main dimensions such as length,
strut width and the distance between struts while the actual geometry is represented by the same panel
mesh as used for the seakeeping simulation with NEWDRIFT+.

6.5. Selected results for the SOV

The feasibility study was divided into several steps:

• Build the parametric model and run preliminary analyses.
• Identify and eliminate variables that would lead to instabilities in the simulation tool.
• Run an optimization campaign with the chosen set of design variables, in this case utilizing a Dakota response surface algorithm provided via CAESES®.
• Investigate the impact of and optimize further design variables that have less influence on the geometry but might still change hydrodynamic responses favorably.
• Fine-tune the SWATH by means of a local search, utilizing the T-Search algorithm embedded within CAESES®.
• Check and compare selected designs for additional performance characteristics, e.g. resistance.

Fig.19 gives a comparison of the heave RAOs for the two monohulls and the two most favorable
SWATH vessels stemming from the optimization campaigns. It can be seen that the RAO increases
monotonously with the ratio of wavelength to length-over-all (λ/LOA) for the shorter monohull while
the longer monohull features a small peak for shorter waves. The two SWATHs, in general, have
smaller RAOs for heave but, relatively speaking, slightly more pronounced peaks. Around the peak of
energy of the spectrum, here λ/LOA between 1.2 and 1.6, the asymmetric SWATH shows a similar
RAO as the short monohull while the symmetric SWATH features rather low values. Interestingly,
the optimized asymmetric SWATH has the lowest response of all for very long waves while the
optimized symmetric SWATH has lower heave responses for all other wavelengths.

Fig.20 displays polar plots of Root Mean Squared (RMS) for heave forces (left side), roll moments
(center) and pitch moments (right side). It can be noticed that the RMS forces in the vertical axis (z-
axis) are smaller for the SWATHs than for the shorter monohull for bow and quartering seas. Only for
beam seas the shorter monohull yields slightly smaller values than the asymmetric SWATH. The
longer monohull has the highest values for all wave directions. The center of Fig.20 shows that the
RMS moments around the longitudinal axis (roll motion around the x-axis) are highest for the
asymmetric SWATH for all heading angles. The longer monohull has higher roll motions than the
shorter one. Most interestingly, the moments are significantly smaller for the symmetric SWATH,
indicating that it has less roll motions than all other vessels. On the right side of Fig.20, it can be seen
that the pitch motions of both SWATH vessels are much smaller than those of both monohulls.

When studying the body excitation forces relevant to dynamic positioning (DP), i.e., surge, sway and
yaw (not shown here), the required power for station keeping could be larger for SWATH vessels,
especially if the efficiency of the propulsion system is lower than that of a monohull, even though the
propulsion arrangement of SWATHs eases maneuvering. For a more thorough discussion see Macedo
(2018).

Fig.19: Heave RAOs for 180° heading angle

When checking the best designs from the seakeeping optimization for calm water resistance, a few
additional conclusions can be drawn: For the SOV with a design speed of 13 kn, a fuller waterline
shape and reduced waterline length led towards a lower overall resistance at that speed (about 4.5%
less towing resistance than the average of the designs), but increased resistance between 4 kn and
8 kn. A slender and longer waterline shape lowers the wave resistance coefficient between 4 kn and
8 kn but creates peaks of increased resistance at 4 kn and 11 kn. The bulb shape did not impact the
results tangibly, most likely since it is placed rather far below the waterline. In general, at design
speed, the SWATH variants have about 40% higher resistance, primarily due to the much
higher wetted surface area than a monohull of similar size would feature.

To summarize, optimized SWATHs have high potential to yield a larger operational window than
longer monohulls since they heave, roll and pitch less in several different sea conditions.
Notwithstanding superior seakeeping performance, SWATHs are likely to be more susceptible to
wave drift forces, therefore possibly requiring higher energy in dynamic positioning. They will also
require higher propulsion power to reach the same speed as a conventional monohull of the same size.

Fig.20: Polar plots for heave RMS forces (left), roll RMS moments (center) and pitch RMS moments (right) for a sea state with Hs = 3.25 m and Tp = 8.5 s at zero forward speed

If SWATH configurations are to be further pursued for SOVs several additional (and important)
elements would need to be taken into account, namely weight and weight distribution for stability,
propulsive efficiency as well as cost estimations for both construction (CAPEX) and operation
(OPEX). These elements were not considered at any depth within this preliminary study but could be
incorporated by suitably (and readily) extending the HOLISHIP platform in a subsequent phase, i.e.,
by adding further simulation tools and increasing the relevant data sets (Fig.3).

7. Offshore supply vessel (OSV)

The final example covers the design of an offshore supply vessel (OSV). OSVs are utilized for
demanding operations under challenging conditions in deep to ultra-deep waters, such as subsea
construction and operation of ROVs (Remotely Operated Vehicles). They are regarded as highly
specialized vessels and their design requires the synchronization of a multitude of disciplines. Thus, a
holistic approach, considering key performance indices (KPIs) for many different disciplines, was
developed and utilized from a conceptual design level to the power system concept verification. This
section gives an overview and an update on the work performed so far. For details see de Jongh et al.
(2018) and Torben et al. (2019). As with the other application cases reported here, the work is still
ongoing within the HOLISHIP project. For an artist's impression of the OSV designed by Rolls-Royce,
see Fig.21.

Fig.21: Offshore supply vessel for crane operations in deep seas

7.1. Design task

The mission of the vessel under investigation is to perform subsea installation of heavy modules in
ultra-deep water using a subsea crane. Therefore, the main purpose of the vessel is to transport the
heavy module from shore to the installation site and, subsequently, to serve as a stable platform for
the lifting operations over the side of the vessel using the subsea crane. The aim is to find the
combination of vessel size and crane type capable of performing the mission at the lowest possible
cost considering both CAPEX and OPEX.

In order to structure the tasks required for such a challenging design, three main phases were
distinguished, gradually feeding data from one phase to the next:

• Phase 1: High-level conceptual design
• Phase 2: Power system conceptual design and optimization
• Phase 3: Power system concept verification

Based on results of phase 3, it may become necessary to return to phase 2 or even to phase 1, closing
the loop for holistic design during the concept phase. The steps would traditionally be done by manual
evaluations but, by utilizing CAESES® as the central hub, it was possible to set up a multi-parameter
design space and to evaluate several objectives.

7.2. High-level conceptual design

Defining the main dimensions for an OSV requires significant effort and review by quite many
experts. In phase 1 of the design process the size of the vessel, the power consumption and its
efficiency to perform the required task are identified. Similarly to the coupling examples presented for
the RoPAX and the DE ferry (Figs.1 and 2), the first phase of the OSV design contains several
major connections that progressively exchange data and, finally, provide input for phase 2, the
initial design of the power system:

• Hull lines import and transformation: Based on a reference vessel with re-scaling of the main dimensions, the design waterline level and the bulb shape (CAESES).
• Lightship and steel weight: Based on data of a reference vessel, considering main dimensions, equipment required on-board and accommodation area (empirical formulas realized as a CAESES feature).
• Stability: Checking IMO (International Maritime Organization) regulations of intact stability, crane drop-load maximum heeling angle and a simplified initial evaluation of the damage stability, similarly to what is described in section 4.2 for the RoPAX ferry (NAPA).
• Vessel motions: Evaluating maximum significant wave height for safe operations and operability over the year for each hull design (ShipX).
• Station keeping and dynamic positioning: Estimating the required installed power for a particular sea state and heading angle of interest (ShipX).
• Resistance and propulsion: With a nested optimization of the bulb shape and Lackenby transformation for wave resistance reduction of a particular hull size (Shipflow).
• Machinery systems: Based on the required installed power and a provided database of available thrusters and propulsion powertrains (COSSMOS).
• CAPEX and OPEX: Based on steel weight, equipment and operational profile of the vessel (empirical formulas realized as a CAESES feature).

For each variant in the design space, constraints might be violated and feasible designs will not be
evenly distributed. A design space exploration was therefore undertaken to understand the system’s
behavior, offering the chance of adapting the design space or of reconsidering the limits set for the
constraints. After having identified the most promising feasible designs, it is possible to squeeze out
further performance by applying local optimization algorithms.

7.3. Power system conceptual design and optimization

The purpose of this second phase is to verify the performance of multiple different power system
configurations for a given operational profile. Life-cycle costs, safety and emissions were chosen as
the objectives (KPIs).

Each vessel operational task includes subtasks that have different power demands and durations,
independent of the power producer and distribution solution. Thus, the combined optimization of the
size of the power sources and controlling mechanisms is crucial. A large number of combinations of
machinery and power system components is possible with differences in investment and operational
costs. The combinations are evaluated with Rolls-Royce’s MPSET (Marine Power System Evaluation
Tool) which is a time-simulation tool of the operational profile in order to estimate the fuel
consumption and emissions for each subtask. The results are further analyzed with a RAM tool
(Reliability, Maintainability and Availability) for assessment of maintenance and repair costs.

7.4. Power system concept verification

To verify and further optimize the power system concept defined in phases 1 and 2, dynamic
simulations of operational scenarios, including control and monitoring systems, are planned to be
performed in phase 3. The simulation shall be able to reproduce a realistic load profile in a critical
operational scenario, including transient environmental effects and possible faults to investigate the
robustness of the system.

The outcome of the final phase is the dynamic load profile for critical operational scenarios,
performance of the vessel and the performance of the power system. Further KPIs can be calculated
and used, together with the outcomes of the dynamic profiles, to update technical specifications.

7.5. Selected results for the OSV

Fig.22 shows a preview of the results obtained from phases 1 and 2, highlighting a few of the possible
KPIs for such a complex vessel.

Fig.22: First optimization results for OSV

Late modifications of initial design parameters, such as the vessel's main dimensions, are highly costly
and can be avoided with a holistic design approach early in the project. Equally important is the
holistic approach to machinery and power system design. Here, the optimal combination of
components yields the highest efficiency of a vessel in performing given tasks.

As can be seen from this example, there are practically no limits to tool integration, KPI selection and
optimization when designing very specialized vessels. It is important to remark that the setup also needs
to remain practical and usable in daily routine, making it necessary to define design variables and
constraints smartly in order to reach a holistic design scenario efficiently.

8. Conclusions

The paper has shown how key aspects of design and optimization for sophisticated ship types can be
integrated so as to create synthesis models in a bottom-up approach. The application cases discussed
have reached different levels of maturity. While the RoPAX ferry design and optimization is already quite advanced, the project work on the double-ended ferry has only just started. The Service Operation Vessel has been deliberately limited to a pure feasibility study, while the Offshore Supply
Vessel has reached a comprehensive status of tool integration. The design examples serve to illustrate
that for different maritime assets with rather dissimilar operational tasks the data to be generated,
shared, managed and studied are quite diverse, favoring a flexible mechanism of tool integration as
well as the ad-hoc (and implicit) definition of the necessary data sets.

Importantly, both the tools integrated and the data exchanged can be flexibly extended as work progresses. The design team basically has two options: it can either utilize a tool for direct simulations after integration, or it can replace simulations with surrogate models, for instance by employing a dedicated numerical hull series. The selected results show that studying many variants during a design campaign deepens the understanding of important relationships and allows selecting design variants that are particularly good with regard to various opposing objectives.
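
As a minimal illustration of the surrogate idea (the sample points and the choice of a quadratic response surface are hypothetical, not the HOLISHIP implementation):

```python
# Replace a direct simulation by a cheap surrogate fitted to pre-computed
# variants; querying the polynomial costs microseconds instead of CPU hours.
import numpy as np

cb = np.array([0.60, 0.65, 0.70, 0.75, 0.80])       # block coefficients (samples)
rt = np.array([520.0, 545.0, 590.0, 660.0, 760.0])  # "simulated" resistance, kN
surrogate = np.poly1d(np.polyfit(cb, rt, 2))        # quadratic response surface
print(surrogate(0.68))                              # evaluate a new variant cheaply
```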

Acknowledgements

HOLISHIP is funded by the European Commission within the HORIZON 2020 Transport Programme
(www.holiship.eu). Among the many partners and experts contributing to the project only a few can
be mentioned here. We would like to thank Apostolos Papanikolaou (HSVA and NTUA) as spiritus rector of the project and Jochen Marzi (HSVA) as project coordinator, along with Sverre Torben (Rolls-Royce), Antti Yrjänäinen (Elomatic) and George Zaraphonitis (NTUA) as work package leaders.

References

ABT, C.; HARRIES, S. (2016), Pushing processes and performance by parametric design of boats
and yachts, 24th Int. HISWA Symp. Yacht Design and Yacht Construction, Amsterdam

ALBERT, S.; HARRIES, S.; HILDEBRANDT, T.; REYER, M. (2016), Hydrodynamic Optimization
of a Power Boat in the Cloud, High-Performance Marine Vehicles (HIPER 2016), Cortona

DE JONGH, M.; OLSEN, K.E.; BERG, B.; JANSEN, J.E.; TORBEN, S.; ABT, C.; DIMOPOULOS,
G.; ZYMARIS, A; HASSANI, V. (2018), High-level Demonstration of Holistic Design and
Optimisation Process of Offshore Supply Vessel, 13th Int. Marine Design Conf., Helsinki

HARRIES, S. (2014), Practical shape optimization using CFD, Whitepaper, https://www.friendship-systems.com/company/papers

HARRIES, S.; ABT, C. (2019), CAESES – The HOLISHIP Platform for Process Integration and
Design Optimization; A Holistic Approach to Ship Design – Vol. 1: Optimization of Ship Design and
Operation for Life Cycle, Springer 978-3-030-02809-1, pp.247-293

HARRIES, S.; ABT, C.; BRENNER, M. (2015), Upfront CAD – Parametric modeling techniques for
shape optimization, Int. Conf. Evolutionary and Deterministic Methods for Design, Optimization and
Control with Applications to Industrial and Societal Problems (EUROGEN), Glasgow; in Advances in
Evolutionary and Deterministic Methods for Design, Optimization and Control in Engineering and
Sciences, Springer 978-3-319-89986-2 (2018)

HARRIES, S.; CAU, C.; MARZI, J.; KRAUS, A.; PAPANIKOLAOU, A.; ZARAPHONITIS, G.
(2017), Software Platform for the Holistic Design and Optimization of Ships, Jahrbuch der Schiffbau-
technischen Gesellschaft

HARRIES, S.; MACPHERSON, D.; EDMONDS, A. (2015a), Speed-power optimized AUV design by
coupling CAESES and NavCad, 14th Conf. Computer and IT Applications in the Maritime Industries
(COMPIT), Ulrichshusen

LIU, S.; PAPANIKOLAOU, A.; ZARAPHONITIS, G. (2011), Prediction of added resistance of ships in waves, Ocean Engineering 38, pp.641-650

MACEDO, P. (2018), SWATH SOV Hull Concept and Optimisation for Seakeeping, Chalmers
University of Technology

PAPANIKOLAOU, A. (Ed.) (2019), A Holistic Approach to Ship Design – Vol. 1: Optimization of Ship Design and Operation for Life Cycle, Springer 978-3-030-02809-1

PAPANIKOLAOU, A. (1985), On Integral-Equation-Methods for the Evaluation of Motions and Loads of Arbitrary Bodies in Waves, Ingenieur-Archiv 55, pp.17-29

PAPANIKOLAOU, A.; SCHELLIN, T. (1992), A Three Dimensional Panel Method for Motions and Loads of Ships with Forward Speed, Ship Technology Research 39/4, pp.147-156

PAPANIKOLAOU, A.; ANDROULAKAKIS, M. (1991), Hydrodynamic optimization of high-speed SWATH, 2nd Int. Conf. FAST Sea Transportation, Trondheim

TORBEN, S.; DE JONGH, M.; HOLMEFJORD, K.E.; VIK, B. (2019), Modelling and Optimization of Machinery and Power System, A Holistic Approach to Ship Design – Vol. 1: Optimization of Ship Design and Operation for Life Cycle, Springer 978-3-030-02809-1, pp.413-432

YRJÄNÄINEN, A.; FLOREAN, M. (2018), Intelligent General Arrangement, Marine Design XIII,
Vol.1: 13th Int. Marine Design Conf. (IMDC 2018), Helsinki

YRJÄNÄINEN, A.; JOHNSEN, T.; DÆHLEN, J.S.; KRAMER, H.; MONDEN, R. (2019), Market
Conditions, Mission Requirements and Operations Profiles, A Holistic Approach to Ship Design –
Vol.1: Optimization of Ship Design and Operation for Life Cycle, Springer 978-3-030-02809-1, pp.
75-121

Microservices to Reduce Ship Emissions?
Charles-Edouard Cady, SIREHNA (Naval Group), Nantes/France, charles-edouard.cady@sirehna.com

Abstract

What happens if you bring the latest & greatest web technologies to the more traditional naval
world? We built a web app to help ship captains reduce their carbon footprint. The functionalities are
available on the bridge as soon as we push them, without action from the user. Performance
indicators and other features can be added in real-time and updated on the go. Although the
challenges we face are quite different from those of the big internet companies, we ended up using
some of their tools to solve our problems. This approach creates challenges regarding certification,
safety regulations, and security.

1. Building a modular decision support system

1.1. Context

During the European H2020 project LEANSHIPS, SIREHNA has designed and built a software tool to help ship captains reduce their fuel consumption and carbon footprint. This tool allows a captain to simulate a voyage and evaluate the ship's fuel consumption, taking wind, waves (similarly to the ADOPT DSS, Günther et al. (2008)) and ocean currents into account. Building this software required us to integrate many components, some of them not built by us:

Fig.1: Some of the domains in the LEANSHIPS decision support system

The traditional approach to integrating these components is to have a single component orchestrate the calls to all the others. This architectural style can be called “monolithic”. For this project, however, we decided to make our software less coupled and more modular using a very different approach, inspired by a trend in the software world called “microservices”. This decision has had some interesting side effects, which we explore in this article.

1.2. Microservices

According to Sam Newman in ‘Building microservices’, microservices are “small, autonomous services that work together”. To refine this description, we can say that microservices are an architectural style promoting the decomposition of a system into focused, independently releasable servers with bounded context.

Fig.2: From monolith to microservices

• focused: each microservice should perform one task and perform it well, and it should not contain too much code (i.e. it should be easily replaceable),
• independently releasable: each microservice should be deployable at its own pace. Ideally, different teams should be able to work on different services. This means the services should be loosely coupled. Each microservice can have its own technology stack,
• bounded context: by mapping our services tightly to a technical domain (e.g. propulsion, sensors, weather), as defined by Eric Evans in Domain-driven design, we make each service more manageable. More focused services are easier to reason about, test and maintain.

Instead of decomposing the application into horizontal slices (persistence, various layers of middleware, user interface), the decomposition was made vertically, in that each microservice is responsible for its own persistence and user interface. This reduces (but does not eliminate) coupling between the services, and each team can build and deploy its part of the system without relying on any other team. Moreover, it allows each microservice to match an underlying technical domain (and therefore a team).

This style has been used successfully since the mid-2000s by web giants such as Netflix, Spotify, Google, Amazon and eBay. The adaptation of this idea to the maritime world is the subject of this article.

Fig.3: Twenty-year shift in software architectural practices

2. Herding tailored technology stacks

2.1. Different technology stacks for different domains

One of the problems we face when trying to integrate software components from various technical domains (hydrodynamics, propulsion & thermodynamics, weather…) is that each domain works with its own set of tools, using its own conventions and vernacular. In a monolithic architecture, we either have to force the domain experts to use a single language and development stack, or have a separate team translate the work of each domain into a common framework: both options seem inefficient and wasteful. In the microservices world, by contrast, the idea is to map each microservice to a single domain and make the services independent, so the experts can use the tools they are most familiar and productive with. How can we allow each service to use its own set of tools without the system becoming a huge entangled mess?

2.2. Docker containers to the rescue

To ensure independence between our services, we run each of them in a separate Docker container. A container is a unit of software that wraps up an application and its dependencies and isolates it from the rest of the system. It differs from virtualization in that containers rely on the host operating system's kernel. Containerized software always runs the same, regardless of the infrastructure: containers isolate software from its environment, ensure that it works uniformly, and prevent one container from contaminating another. For instance, we can use one version of a library in one container and another version in a second container, and the two versions will not conflict in any way. Using containers gives us several benefits:

• Composable: containers can be stacked, which facilitates code reuse (containers can be built on top of other containers). This means that services do not need to reinvent the wheel, as configured images of virtually all tools and frameworks are readily available.
• Replaceable: like their physical counterparts, containers are designed to be manipulated in a homogeneous way. All the tooling used to test, deploy, store and version containers can be used whatever the contents of a container. Therefore, one service can be replaced by another as long as the API is preserved and it is embedded in a container: no prerequisites are required, because the container is self-contained.
• Team alignment: having each service built by at most one team and delivered as a container means that the service team can focus on the innards of the service, while the deployment team can focus on the deployment, regardless of the technologies used inside the containers (no more provisioning).
• Resilience: the failure of a container will not bring the whole application down, because by definition each container is isolated from all others. Also, Docker implements a health check mechanism to avoid querying an unhealthy service (which can be set to restart automatically).
• Tailored technology stacks: as all containers are isolated, the technologies used by one do not impact another. This means each team can use the tools it deems best for the job: no contamination of a service by another's dependencies (e.g. different versions of the same library can coexist), and no memory corruption across services is possible.
• Documented dependencies: containers are built from Dockerfiles, which therefore provide an executable specification of a service's dependencies and of how to build it.

Having chosen how we isolate the services, we need to work out how they can communicate with
each other.

2.3 Communication style

2.3.1. Synchronous communications

Traditionally, communication between services mimics function calls: we send some parameters and
wait for a result. This style of communication is synchronous because we have to wait for a response
before the rest of the processing can take place.

This style has several flaws:

• Errors tend to propagate: if the target service fails, the clients must take responsibility for handling the error and either absorb it (e.g. by returning a default value) or propagate it. When the server is working again, the clients are responsible for resending the request, but this may not be possible if the request changes the state of the service (non-idempotent requests).
• If the service gives a response but the client then fails, the operation is lost.
• The services need to know the addresses of the other services they are interested in: for example, if the propulsion model service needs the weather, it needs to know the address of the weather service to run its requests.

A more detailed description of these protocols, especially REST, is beyond the scope of this article
and may be found in Fielding (2000), but these drawbacks go against our objective of making our
system more modular, which is why we chose to use asynchronous message passing instead.

2.3.2. Asynchronous communications using message queues

Message queues are a communication mechanism whereby a producer sends a message to a message broker, which then forwards it to one or several receivers. They provide an asynchronous communication protocol, meaning that the sender does not usually expect an immediate response (or any response, for that matter). The sender neither knows nor cares which consumers will receive the message, as this is handled by the message broker. In fact, producer and consumer do not even need to be online at the same time. The situation can be represented by the following diagram:

Fig.4: Message queue principle

This has several implications:

• clients can be added dynamically,
• there is no need for a service discovery mechanism, as the producer of a message does not need the addresses of the consumers,
• the broker can handle disconnected consumers by storing undelivered messages,
• clients are completely decoupled,
• requests and actions are separated,
• load balancers are no longer necessary: simply spinning up another instance of a service will naturally distribute the load, as the message broker distributes messages across instances.

See Videla and Williams (2012) for a more thorough description.
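
A minimal sketch of this pattern is shown below with the Python pika client; RabbitMQ is assumed as the broker (suggested by the reference to Videla and Williams (2012)), and the queue name and payload are hypothetical. Producer and consumer would normally live in separate containers:

```python
import pika

# --- producer: publish a voyage-evaluation request and forget about it ---
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="voyages", durable=True)  # broker keeps messages for offline consumers
channel.basic_publish(
    exchange="",
    routing_key="voyages",
    body=b'{"waypoints": "..."}',                     # hypothetical payload
    properties=pika.BasicProperties(delivery_mode=2), # persist the message
)
connection.close()

# --- consumer (typically another process/container): process and acknowledge ---
def on_message(ch, method, properties, body):
    print("received", body)            # a real service would enrich and republish
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="voyages", durable=True)
channel.basic_consume(queue="voyages", on_message_callback=on_message)
channel.start_consuming()
```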

3. Benefits

This architecture has several benefits in terms of performance and resilience, but also had a rather
unexpected positive impact on the way we think about our systems.

3.1. Performance

When the performance of a monolithic application is insufficient, we only have one choice: run it on better (more expensive) hardware. Using containers and message queues, we have better options. Additional instances of a heavily loaded service can be spawned, manually via docker-compose's scale option or automatically via a container orchestrator. The message queue (and the fact that our services have no internal state) guarantees that the load will be balanced evenly across the instances. We can therefore fine-tune how much hardware we need for each particular service. As the message broker is naturally distributed, we can run those instances on any number of physical machines. We can also limit memory and CPU usage on a per-service basis.

3.2. Resilience

As explained earlier, using an asynchronous communication scheme means that failures do not propagate: failures are literally “non-events”, in that no event is sent on the message queue. For instance, if our fuel consumption model stops working, the rest of the system keeps operating (failures do not snowball catastrophically), albeit with less functionality. Moreover, Docker has so-called “health checks”, which allow it to monitor the state of each service and restart it if it becomes unhealthy.
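
As an illustration of the health-check idea, a service can expose a trivial endpoint which the container's health check (e.g. a periodic HTTP probe configured in the image) can query. This sketch is hypothetical, not taken from the actual services:

```python
# Minimal health endpoint: a Docker health check probing GET /health would
# see HTTP 200 while the service is alive, anything else counts as unhealthy.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self.send_response(200)   # healthy: the container keeps running
        else:
            self.send_response(404)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), Health).serve_forever()
```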

3.3. Impact on our engineering processes

Our traditional approach to writing decision support systems is to have a single model perform all the
calculations: we strive to make our model represent all aspects of the system as closely as possible,
the Holy Grail being a single holistic model that we can use to predict the ship’s behaviour perfectly.
This ideal resembles the situation depicted by Jorge Luis Borges in 1946 in Del rigor en la ciencia,
where the science of cartography becomes so exact in an empire that only a map on the same scale as the
empire itself will suffice:

“... In that Empire, the Art of Cartography attained such Perfection that the map of a single
Province occupied the entirety of a City, and the map of the Empire, the entirety of a
Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers
Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided
point for point with it. The following Generations, who were not so fond of the Study of
Cartography as their Forebears had been, saw that that vast map was Useless, and not without
some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In
the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by
Animals and Beggars; in all the Land there is no other Relic of the Disciplines of
Geography.”

Indeed, when using a holistic model, the model becomes the central (and critical) part of the system
and each performance indicator we wish to add to the system adds complexity to that model until the
model’s complexity approaches that of the thing being modelled.

Building this decision support system using microservices has shown us a different approach: we no longer have one model but many, all independent, not necessarily made by the same teams, and with possibly contradictory underlying hypotheses, depending on the aspect of the problem we are interested in. We view each model as a way to answer a specific question, rather than as a perfect digital twin of the ship. The “voyage” data structure that we use to transmit information from one service to another is filled progressively by each service it passes through: each model adds its outputs to it. This means all services have the same inputs and outputs, namely a list of voyages, which means we can chain them whichever way we want. We found this approach to be more modular, as new models can be dynamically added and compared, and it makes it easier to stay within the applicability domain of each one.
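
The enrichment pattern can be sketched as follows; the field names and the fuel model are hypothetical, chosen only to show how each service appends its outputs to the voyages it receives:

```python
# Each service consumes a list of voyages and returns it with its own
# outputs appended, so services can be chained in any order.
import json

def enrich_with_fuel_estimate(voyage: dict) -> dict:
    sfc = 0.00019                             # t/kWh, assumed constant
    for leg in voyage["legs"]:
        power_kw = 2.5 * leg["speed_kn"] ** 3     # illustrative power law
        hours = leg["distance_nm"] / leg["speed_kn"]
        leg["fuel_t"] = sfc * power_kw * hours
    voyage["fuel_total_t"] = sum(leg["fuel_t"] for leg in voyage["legs"])
    return voyage

voyages = [{"legs": [{"speed_kn": 12.0, "distance_nm": 120.0},
                     {"speed_kn": 10.0, "distance_nm": 80.0}]}]
voyages = [enrich_with_fuel_estimate(v) for v in voyages]
print(json.dumps(voyages, indent=2))
```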

Another unforeseen positive effect is that we can now easily tailor a specific solution based on each
user’s needs: we just give them access to the services they need, no more, no less. Repackaging for a
different user may only involve changing or adapting the front-end and choosing a different set of
microservices. Thanks to Docker containers and message queues, we are bridging the gap between
our domain experts and our end-users.

4. Safety and security

4.1. Disclaimer

The first and most important fact about the decision support system regarding safety and security is that it is a non-critical system: failure or incorrect results from the decision support system will not put the ship or human lives at risk. The system is operated on a network distinct from the propulsion and safety-critical systems and is not type-approved as an ECDIS: it is meant to be used as a navigation aid only. That said, certain precautions are taken to ensure the system behaves properly.

4.2. Data ownership model

The ship's database is not part of the system: it is secured by other means. The decision support system merely runs requests against that database, and it is up to the database to check the relevant permissions. Data is not stored in the system: all voyages are kept in the user's browser cache, which means different users will not see each other's voyages.

Unlike some other tools on the market, the decision support system is deployed on our customer’s
infrastructure, which means our customer has complete control over the data flow and access to the
system: our customer keeps his data in-house. Indeed, SIREHNA uploads the Docker images of each
service on private repositories on Docker Hub. This means that the Docker images can only be
accessed by authorized users. SIREHNA has complete control over the images themselves, the list of
authorized users and the permission level for each user, on a per-image basis. The customer’s IT
department can then choose the infrastructure they want to run the system on: they have complete
control over what the system has access to, which ports are opened and how much memory and CPU
can be allocated to each service. The system is designed to run completely offline, i.e. without
internet access.

4.3. Authorizations

The decision support system runs on a private network, on a specific server: access to that server is
regulated by our client’s IT department and we do not duplicate this permission scheme.

5. Conclusion

Using a microservices architecture has provided very clear benefits to the quality of our decision support system (performance, resilience, safety and security), but it has also improved our understanding of when and how to do model-based engineering. Is this approach always applicable?

5.1. Limits

The microservices architecture, with all its promises, does incur a cost. Working with distributed databases implies, by virtue of the CAP theorem, that we have to embrace eventual consistency: we must tolerate that the system can be temporarily in an inconsistent state. Performance is not predictable: our messages will be processed by any number of services, which may or may not fail, network latency can vary, etc. It therefore seems unwise to use such an architecture in a hard real-time system or in mission-critical embedded software.

A good microservices architecture requires a good domain decomposition: during the course of the project, we actually had to rewrite whole parts of the software several times, because our domain boundaries were inadequate, which meant each service did not have the information it needed at hand, and coupling forces brought our monolith back to life. A microservices architecture is therefore perhaps not the right choice if the domain or domains are not yet well understood (e.g. in a greenfield project).

Moreover, microservices require extra tooling for monitoring, logging and testing, which must be learnt: when the number of people and the number of domains are small, a microservices architecture may not pay for itself.

Another challenge we face with microservices is that they are harder to test than a monolithic system
and testing at scale requires other strategies, such as chaos engineering.

5.2. Prospects: chaos engineering

Organizations such as Netflix operate microservices at such a scale that they are unable to know how many microservices are running at any given time. As all these services may interact with each other, it becomes intractable to test all combinations before going live. Indeed, as the environment is highly dynamic, it is in fact counter-productive to delay the deployment of a microservice until all its possible interactions have been tested. Instead, Netflix tests each service independently, of course, but also generates failures in the system on purpose: this is called “chaos engineering”.

The idea is that programs (called “chaos monkeys”) run during office hours and cause random failures in the system, i.e. at the times when the engineering teams are most available and effective at fixing them. If the teams cannot fix a failure, the chaos monkeys can revert it. In principle, this is the same as fire evacuation training: the engineering teams are constantly trained to detect and fix any number of problems, so that if an actual failure occurs they will be more likely to detect and fix it.

Organizations that embrace chaos engineering find there is more value in repairing defects quickly than in ensuring there are no defects to begin with: as they become stronger each time they fail, Nassim Taleb calls them “antifragile organizations” in his book Antifragile.

Acknowledgements

This project has received funding from the EU (H2020-EU.3.4). Thank you Alan Guégan and Olivier
Langrand for your help on the project.

References

BORGES, J.L. (1975), A Universal History of Infamy (translated by Norman Thomas di Giovanni), Penguin Books

EVANS, E. (2004), Domain-driven design: Tackling complexity in the heart of software, Addison
Wesley

FIELDING, R.T. (2000), Architectural styles and the design of network-based software architectures,
PhD thesis, University of California, Irvine. AAI9980887

GÜNTHER et al. (2008), ADOPT DSS - Ocean Environment Modelling for Use in Decision Making
Support, 7th COMPIT Conf.

NEWMAN, S. (2015), Building microservices: Designing fine-grained systems, O’Reilly

TALEB, N.N. (2012), Antifragile: Things That Gain From Disorder, Random House

VIDELA, A.; WILLIAMS, J.W. (2012), RabbitMQ in action - distributed messaging for everyone,
Manning Publ.

Communicating Ship Designs via Virtual Reality
Harry Linskens, DEKC Maritime, Groningen/The Netherlands, harry@dekc.nl
Hans van der Tas, DEKC Maritime, Groningen/The Netherlands, hans@dekc.nl

Abstract

In the past decades, naval architecture has transitioned from paper drawings and hand calculations to
extensive use of CAE. In doing so, the 3D model of the ship has become the basis of all information,
from which 2D drawings are derived and delivered for approval. Recent developments focus on
changing how the design is communicated: instead of 2D drawings, now the 3D model itself can be
easily shared across platforms. By combining this with VR technology, the design can be presented in
a much more immersive and intuitive way. Current developments include the coupling of design
information to the 3D model. Future steps could cover full simulation of the model in this environment.

1. Introduction

Designing and building ships and other watercraft is one of the oldest professions known to humankind.
Throughout most of its history, the basic approach remained unchanged: the ship would be designed
from a number of 2D plans, after which it was up to the craftsmanship of the shipwright to make the
design a reality.

In the last several decades, however, naval architecture has made the transition from paper drawings
and hand calculations to CAD files and advanced digital analysis tools. Nonetheless, the result of this
hard work is usually nothing more than it has been for the last century: a large stack of 2D drawings.
More often than not, this stack of drawings is also what the approval process of the vessel is based on.

With the developments in computer technology in the past decade, it has become possible to use 3D
information in the design process. In this philosophy, the 3D model of the ship has become the basis of
all information, from which the 2D drawings are derived and delivered for approval. This is valid from
the general arrangement in the initial design phase to all construction plans and workshop drawings for
the shipyard. This alone represents a significant improvement over the traditional approach of all-2D
drawings.

Nonetheless, communicating the 3D model is still primarily done using 2D plans, reports with still
images, and annotated screenshots. Therefore, in this paper, various techniques for improving the
communication of a vessel design that are applied by DEKC Maritime will be explored. Focus will be
on both the communication between the design team and client as well as the communication between
technical experts, both within the design company and towards external parties. A brief outlook towards
future developments will also be given.

First, an overview will be given of current engineering practices involving the use of 3D models. Then,
the techniques that can be used to improve how well the vessel design is communicated will be detailed
further. Following this, several use cases of these techniques will be examined. Finally, an outlook
towards future developments and opportunities for application of these techniques will be given.

2. Current Approach to Communicating Ship Designs

To understand the possible areas of improving communication during the design process of a vessel, a
brief summary will be given of the design process. Then, the relevant channels of communication to
different parties will be identified, along with how this is generally achieved.

2.1 Design Process

To streamline the process of designing a ship, this process is subdivided into a number of distinct phases.
First, during the concept design phase, the main characteristics of the vessel are determined and the
overall feasibility of the design is assessed. Then, during the basic engineering phase, the necessary
calculations are performed to optimize the design, and the class-approval plans are set up. Finally,
during the detail engineering phase, all construction and system details are worked out to a production
level, after which the design package is delivered to a shipyard to be built.

2.1.1 Concept Design Phase

During the concept phase, a 3D ship model is made in CAESES as soon as the first discussions with
the client for a vessel have taken place and the main characteristics of the hull shape have been sketched.
This may be an existing hull form that has been parametrically scaled to the proper dimensions, or a
fully-parametric hull developed in-house. Creating a clean 3D model during this first concept phase
allows many of the calculations and checks usually reserved for a later phase to be performed on the
concept with minimal extra cost. Examples of this include stability calculations in NAPA or CFD
calculations in FINE/Marine. Furthermore, the parametric set-up allows simple variation of the
principal dimensions of the vessel.

While the hull shape is developed in 3D very early on, the general arrangement is typically created in
2D, using hull lines obtained from the 3D model. This 2D general arrangement is still one of the most
important drawings in the early design phase. However, during the last stages of the concept phase, the
general arrangement is used to set up the plate model, providing a rough overview of the vessel’s layout
in 3D for the client.

2.1.2 Basic Design Phase

Once the concept has been finalized, the plate model is used as a basis to set up the construction model
of the vessel. After the construction has been set up, FEM calculations using ANSYS Mechanical can
be performed on the developed 3D vessel model. Additional stability and CFD calculations are also
performed, to optimize the more detailed aspects of the design. Finally, a presentation model is
developed, based on the plate model and the general arrangement. All relevant information is then sent
to class for approval.

In the basic design phase, most of the translation between the 2D general arrangement to a full 3D
model is executed: both the developed hull shape and all relevant ship internals are inserted in Cadmatic,
from which the sections of the general arrangement are obtained. This is also the case for all
construction plans and other class-approval drawings.

2.1.3 Detail Design Phase

During the detail design phase, all details necessary for the production of the ship are worked out. This
includes structural details, equipment placement, pipe routing, and outfitting. Typically, this is all done
in the 3D model using various components of the Cadmatic software suite. After this is completed, a
full 3D engineering model is available, effectively forming an as-built digital twin of the ship.

Once the full engineering model is completed, it usually sees little further use. Exceptions are when some form of operational support is provided during the lifetime of the vessel. This can include complex mobilizations or retrofits, which can be conveniently engineered using the as-built engineering model.

2.2 Communication Channels

During the different design phases, the design of the ship must be communicated to a number of
different parties, each with a different perspective and a different level of technical involvement. The

most important communication channels, both internal and external, are summarized in this section.

2.2.1 Communication Channels during Concept Design Phase

During the concept phase, the most important external channel of communication is between the design company and the client. Good communication on this channel is essential for client satisfaction, as it ensures the client receives the desired product. Traditionally, this is accomplished through nothing more than a general arrangement and hand sketches. As it can be quite challenging to form an accurate perception of the completed vessel from a single plan, this is identified as a channel that can be improved upon.

An additional channel of communication that is critical to a design project is that between the client and their financiers. Although this does not directly involve the design team, it is essential that the usually non-technical financiers are able to grasp the concept of the vessel. Here, a general arrangement plan leaves something to be desired as a communication tool.

2.2.2 Communication Channels during Basic Design Phase

During the basic design phase, communicating the design to the client is not as driving as in the concept
design phase, as by this phase the concept has been (more or less) frozen. Nonetheless, the client is
typically still interested in more detailed aspects of the design of the ship. These include the
arrangements of technical spaces, equipment on deck, and the wheelhouse.

In addition to the client, other external parties become important communication partners. During the
approval process, the most often-used communication channel is between the design team and the
classification society. Across this channel, technical details regarding the design of the ship must be
shared, which is typically done in the form of 2D plans and the occasional 3D geometry model.
Interestingly enough, classification societies still typically provide approval based on these 2D plans
and reports of the performed calculations, BV (2018), DNVGL (2018).

Aside from this, there are usually a number of equipment suppliers. These parties also often provide information in the form of 2D drawings to the design team, who are then left with the challenge of integrating the system. For system integration, it could be very beneficial to have a platform allowing the effective sharing of 3D information between both parties.

2.2.3 Communication Channels during Detail Design Phase

During the detail design phase, the shipyard tasked with building the vessel becomes the most important
party to communicate with. Workshop drawings and cutting files are traditionally the output of the
design team in this phase. Furthermore, this phase signals the typical end of the development of the
engineering model, as it reaches the as-built state.

3. Developments in Communicating Ship Designs

Key to the developments in communicating ship designs presented here is the coupling of the engineering model made in Rhino or Cadmatic to the Unity platform, originally developed for the video game industry. This chapter gives a short summary of the Unity platform, an overview of how and what type of data is transferred from the engineering model to this environment, and a description of the methods in which this is currently used to improve various communication channels.

3.1 The Unity Platform

Unity is a real-time 3D game development platform, used to create half of the world's video games. Beyond the video gaming industry, it has also found applications in the film and automotive industries. The platform offers a multitude of features designed for making video games, which lend themselves very well towards developing new ways of interacting with any type of 3D model, Unity (2018).

Apart from tools purely intended to improve the visual representation of the 3D model, a number of
more advanced options are available. Specifically, real-time animation and simulation tools provide the
possibility to perform complex dynamic analyses within the Unity environment.

Finally, Unity has a large and active community of amateur and (semi-)professional developers, who maintain an active presence on various discussion boards. Through this knowledge base, many different tools are available for any user. This allows complex Unity models to be constructed with relatively little effort, as many of the desired tools have already been implemented.

3.2 Creation of the Unity Model

To create the Unity model, several types of information can be extracted from the engineering model.
Most importantly, this is the geometrical information of the engineering model, such that it can be
visualized in the gaming environment. Additionally, database information relating to the exported parts
can be included, as well as calculation results from external software.

3.2.1 Geometrical Information

The exchange of geometrical information from different CAD packages to Unity can be done in a number of different ways. Supported file formats include: FBX (MotionBuilder), OBJ (Wavefront), DAE (Collada), 3DS (Autodesk 3ds Max) and ASCII (a simple text file with vertices, mesh face nodes and vertex colours). This makes it quite straightforward to transfer geometry from the engineering model to the Unity platform.
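
For illustration, writing one of these formats is almost trivial; the following sketch exports a (hypothetical) single-triangle mesh to Wavefront OBJ:

```python
# Write a minimal Wavefront OBJ file: "v" lines for vertices, "f" lines for
# faces; OBJ face indices are 1-based. Geometry here is a placeholder.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(1, 2, 3)]

with open("panel.obj", "w") as f:
    for x, y, z in vertices:
        f.write(f"v {x} {y} {z}\n")
    for a, b, c in faces:
        f.write(f"f {a} {b} {c}\n")
```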

3.2.2 Database Information

Within Unity's asset system, each Unity object is given a unique identifier. This identifier can be used, via an SQL database connection or a CSV/text file with IDs and attributes, to link geometrical objects to any kind of information database. In addition, this information can easily be shown on the user interface.
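
A minimal sketch of such an id/attribute file is shown below; the part identifiers and fields are hypothetical, the point being that the same unique identifier keys both the Unity object and its metadata row:

```python
# Export one CSV row per part so the Unity front end can join geometry
# to metadata via the shared identifier. Contents are illustrative.
import csv

parts = [
    {"id": "HULL-PL-0042", "name": "Side shell plate", "material": "AH36", "mass_kg": 412.0},
    {"id": "EQ-PUMP-0003", "name": "Ballast pump",     "material": "-",    "mass_kg": 230.0},
]

with open("part_attributes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "material", "mass_kg"])
    writer.writeheader()
    writer.writerows(parts)
```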

3.2.3 Calculation Results

The results of CFD calculations made by FINE/Marine can be transferred to Unity. Using routines developed for this purpose, these results are prepared in mesh format and simply exchanged to Unity as geometry. On the hull surface, a colour map can show, for instance, the friction and pressure fields, with a colour bar showing the value ranges of these parameters. Streamlines and tell-tales show the direction and speed of the water flow around the hull. Finally, a wave-pattern colour map and height lines around the hull express the waves generated by the hull moving through the water.

Similar to the CFD results, calculation results from FEM mesh models can be imported into Unity. This allows the results to be explored at real size. Again, the mesh colour map expresses the values of the stresses or deformations at each position. In addition, the deformed model can be shown. An advantage of the deformed model is that parts that are not connected can be found when exploring the model in VR.

3.3 Interaction with the Unity Model

As a gaming platform, Unity provides a number of methods with which users can interact with the Unity
model. Most basic among these is compiling an executable desktop app, which can be used on any
computer. Slightly more costly, but also far more immersive, is to use a virtual reality (VR) headset.

3.3.1 Desktop App

The desktop app shows the complete ship or object environment on-screen, with the same possibilities as in a VR environment except for the experience of space. All implemented user interface actions are key- and mouse-driven, depending on what is implemented in the environment. Possible actions include movement, rotation, transfer to a predefined position, showing/hiding objects or systems, clipping of objects, and movement/rotation of objects. In addition, different types of actions common to video games, like gravity and collision with objects, are possible, but need to be tailored to a client's particular needs.

A further advantage of using the desktop app for data transfer is that it is easy to use, requiring no additional software installation on the recipient's machine. Furthermore, the data is encapsulated in a single executable, which makes sharing it both more convenient and safer.

3.3.2 Virtual Reality

The VR headset is an extension of the desktop game environment. Instead of keys and mouse, the headset and controllers are used. Movement is done with the controller by ray-casting to the next position; rotation and limited camera movement are driven by the headset within the real-walk area (5x5 m). Using the controller buttons, different actions can be triggered, such as walking through or flying around the vessel and hiding/showing or selecting/deselecting objects. Furthermore, actions can be triggered by a collision of the VR controller with a virtual object.

However, the real advantage of VR is the experience of space and depth, and the feeling of actually being part of the design. This immersion is key to many of the possible improvements in communication that can be achieved using this method. Besides this, it is also possible to guide the VR user through the environment using the key or mouse options.

3.4 Improvement to Communication Channels

Being able to interact with the Unity model, whether this is done through the desktop app or a VR
headset, already represents major improvements in communicating certain aspects of the design.
Examples of this are presented in this section.

3.4.1 Improving Communication during Concept Design Phase

The ability to create a Unity model quickly in the concept phase can significantly improve the
perception of the design by the client. By creating a very immersive experience, the client can quickly
become familiar with the proposed design in a virtual environment, allowing him or her to express his
or her wishes to the design team more effectively. This, in turn, leads to a design that better fulfils the
wishes and expectations of the client.

3.4.2 Improving Communication during Basic Design Phase

During the basic design phase, Unity provides a highly versatile platform for sharing the results of
complex calculations, such as FEM analyses, to the classification society. Rather than delivering a
lengthy calculation report with numerous screenshots of the calculation results, instead the results can
all be included in a Unity model and compiled to an executable. In this way, the approval engineer can
view the full dataset of results in 3D, including tools such as clipping boxes and variable color scales
to facilitate assessment of the calculation.

Furthermore, as mentioned previously, the compiled Unity model is a stand-alone application, requiring
no specific software to interact with the model. This results in a dataset that is very easy to share across
platforms.

3.4.3 Improving Communication during Detail Design Phase

During the detail design phase, the Unity model can be used towards the shipyard in much the same way as it is used towards the client in the concept phase. As the construction supervisor can experience the vessel as if he or she were actually present on board, elements of the vessel's construction that may cause difficulties during building can be more easily identified. This feedback can then be incorporated by the design team, to make the vessel easier and cheaper to build.

4. Outlook

The current developments have already shown much promise towards being able to improve
communication of the design of a ship. However, these developments are focused primarily on static
Unity models, and mostly consider only communication towards those parties that a design company is
typically in contact with. Several key possible future developments will be highlighted in this chapter.

4.1 Dynamic Unity Models

A significant step forward would be to make the developed Unity model dynamic. This can concern the 3D geometry of the model itself, as well as the model reacting to its environment in some way. Several promising examples are summarized below.

4.1.1 Dynamic Vessel Layout

An important first step would be to make the layout of the vessel dynamic in some way. With this, it is
implied that the user can influence the arrangement of the 3D geometry directly within the Unity
environment. This could range from the user defining positions of certain components in the model to
allowing the user to actively move bulkheads around or alter the hull shape.

Advantages of implementing such a development would be to allow clients and shipyards to more
consciously assess the placement of equipment or the accessibility of certain compartments.
Wheelhouse arrangements and visibility lines can also be judged in a much more effective manner than
on a 2D arrangement. Furthermore, it would allow the designer to adapt the design quickly to the wishes
of the other party. In addition to this, it would allow vessel operators to configure cargo layouts more
easily and with more insight, which could be interesting for the offshore wind industry, for example.

4.1.2 Dynamic Simulation

Unity provides many opportunities for implementing simulations of varying degrees of complexity in
the platform. The first step would be to set up the Unity model to allow basic vessel responses. This
would require mass, gravity, buoyancy, and perhaps friction models to be implemented in the overall
Unity model. This would allow the vessel to respond to changing mass on-board, such as
loading/unloading or crane handling.

Once mass, gravity, and buoyancy models are included in the overall framework, it would become
possible to add additional environmental conditions to the model. Examples such as waves and wind
could make the resulting Unity model interesting for the simulation of complex offshore operations,
such as wind turbine installation or dredging. This could in turn be used for assessing the feasibility of
a given operation or for training the crane operators.
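
The physics involved is modest; as a hedged illustration (a 1-DoF linearized heave model of a box-shaped barge with hypothetical dimensions and damping, not the Unity implementation, which would live in the game engine's update loop):

```python
# Linearized heave of a box barge: restoring force from buoyancy change,
# light linear damping, explicit Euler integration. Values are illustrative.
rho, g = 1025.0, 9.81
awp = 200.0                  # waterplane area, m^2
mass = rho * awp * 2.0       # displacement at 2 m draft, kg
b = 5.0e4                    # linear damping coefficient, kg/s
z, w, dt = 0.3, 0.0, 0.01    # heave offset (m), heave velocity (m/s), step (s)
for _ in range(3000):        # 30 s of simulated time
    a = (-rho * g * awp * z - b * w) / mass   # restoring + damping acceleration
    w += a * dt
    z += w * dt
print(round(z, 3))           # oscillation decays towards equilibrium z = 0
```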

4.2 Enhanced Visuals

Currently, the visual representation of the 3D geometry is very basic, with simple colors distinguishing
construction, equipment, and outfitting. However, within the Unity platform, many tools are present to
create a more life-like representation of the 3D geometry. Implementing more realistic visuals would
help improve the immersion perceived by the user, literally bringing the model to life.

4.3 Augmented Reality (AR) for Ship Building, Repair, and Retrofit

As an as-built digital twin, the Unity model can be beneficial in various stages of construction, repair,
or retrofit of the vessel, particularly when combined with AR technology. For ship construction,
building orders could first be tested in a virtual environment, after which an AR tool could be used to
help the builders assemble the necessary sections. For repair or retrofit, the Unity model could be used
to assess what the actual state of the vessel is by overlaying the engineering model on top of the actual
vessel for the engineer on-board.

4.4 Support for Unmanned or Autonomous Vessels

As a final possible application, the use of the digital twin in Unity could be used for the support of
unmanned or autonomous vessels, Kooij et al. (2018), Kooij and Hekkenberg (2019). This would
perhaps be the ultimate goal of a digital twin: by combining the ability to transfer data from an
unmanned vessel to the Unity model in real time, a virtual representation of the vessel could be made
on-shore, while the vessel itself is in transit. Thus, a captain could steer an unmanned ship (or multiple
unmanned ships) from a single shore control station, or a technician could inspect the technical spaces
of an autonomous ship without being physically present.

5. Conclusion

In this paper, techniques for improving the communication of the design of a ship across different
channels and in different project phases are presented and discussed. To support this, an overview of
the current design process used at DEKC Maritime is given, along with a summary of the different
channels of communication that are driving during the design. Then, a method for transferring the ship
design to a more versatile platform for communication, Unity, is presented, along with current and
possible future improvements to the communication process during the design of the vessel.

The Unity platform has proven to be a very powerful tool with much potential for improving
communication in ship design. In simply transferring the static engineering model to Unity, much
greater understanding of the design can be created in an intuitive way for clients, classification societies,
and shipyards. Future possibilities such as implementing a dynamic Unity model would allow ship
owners or operators to simulate complex offshore operations in an immersive environment, and can
facilitate the development of unmanned or autonomous vessels.

References

BV (2018), NR467 Steel Ships – July 2018 edition, Pt. B, Ch.1, Sec. 3 Documentation to be Submitted,
Bureau Veritas

DNVGL (2018), RU-SHIP Edition July 2018, Pt. 1 Ch. 3 Documentation and certification
requirements, general, DNV GL

KOOIJ, C.; COLLING, A. P.; BENSON, C. L. (2018), When will autonomous ships arrive? A
technological forecasting perspective, 14th INEC/iSCSS, Glasgow

KOOIJ, C.; HEKKENBERG, R. (2019), Towards unmanned cargo-ships: the effects of automating
navigational tasks on crewing levels, 18th COMPIT, Tullamore

UNITY (2018), Unity User Manual (2018.3-002P), Unity Technologies

MTCAS – An Assistance System for Collision Avoidance at Sea
Matthias Steidel, OFFIS Institute for Information Technology, Oldenburg/Germany,
matthias.steidel@offis.de
Axel Hahn, University of Oldenburg, Oldenburg/Germany, axel.hahn@uni-oldenburg.de

Abstract

This paper introduces a Maritime Traffic Alert and Collision Avoidance System (MTCAS). MTCAS is
an assistance system for pro-active collision avoidance. Its core is a new approach for calculating the
Closest Point of Approach (CPA), which incorporates a context-sensitive vessel behaviour prediction.
Here, information about typical vessel movement, bathymetric and routing information are used.
MTCAS also supports navigators in critical ship-to-ship encounters by applying an approach for evasive manoeuvre negotiation. Finally, the integration of MTCAS into two vessels and subsequent field tests are described.

1. Introduction

Alarm management on today's ship bridges requires a lot of effort from the crew. Nautical officers are supported by assistance systems, such as the Automatic Radar Plotting Aid (ARPA), in assessing potentially hazardous ship-to-ship encounters. If such an encounter is detected, an alarm is raised. Today's alarm management, however, suffers from a huge number of false and exaggerated alarms. As a consequence, navigators ignore many of these alarms, because most of the time they are useless. In addition, there are problems with the coordination of evasive manoeuvres. Such manoeuvres are often selected and carried out on the basis of observations. Due to the slow dynamics of ships, estimating the direction of movement and selecting evasive manoeuvres based on these observations is rarely correct. Moreover, navigators rarely coordinate the manoeuvres by radio. Consequently, evasive manoeuvres may contradict each other. This leads to misperceptions of critical ship-to-ship encounters and to misunderstandings in the process of collision avoidance, and these misunderstandings result in an increased collision risk.

The problem of false and exaggerated alarms stems from the use of the Closest Point of Approach (CPA), which is today's standard for assessing ship-to-ship encounters. The CPA is calculated between the own ship and all target ships in its vicinity. For this purpose, a linear motion vector is calculated for each target ship based on its current speed, position and course. Subsequently, the vector of the own ship is compared with those of the target ships with regard to their CPA. However, this is where the weakness of this approach lies: depth restrictions, waterways and typical motion patterns all influence the behaviour of a vessel, but are not considered in the CPA calculation. A linear motion model is thus not suitable for collision avoidance. Its use results in unrealistic and exaggerated collision warnings, which must therefore always be checked for plausibility by the navigators. Especially in areas with high traffic density, this increases the workload for the crew on the bridge.
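
For reference, the conventional linear CPA/TCPA computation criticised here can be sketched in a few lines (a minimal example with flat-earth coordinates, not MTCAS code):

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Linear CPA: extrapolate both tracks as straight lines at constant speed.

    Positions in metres (local plane), velocities in m/s.
    Returns (time to CPA in s, distance at CPA in m).
    """
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]  # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]  # relative velocity
    v2 = vx * vx + vy * vy
    if v2 < 1e-9:                                # same velocity: range never changes
        return 0.0, math.hypot(rx, ry)
    t_cpa = max(-(rx * vx + ry * vy) / v2, 0.0)  # clamped: past CPA means diverging
    d_cpa = math.hypot(rx + vx * t_cpa, ry + vy * t_cpa)
    return t_cpa, d_cpa

# Example: reciprocal courses at 12 kn each, target ~2 nm ahead, 100 m offset
print(cpa((0.0, 0.0), (0.0, 6.17), (100.0, 3704.0), (0.0, -6.17)))
# -> roughly (300 s, 100 m): an alarm would be raised, although both ships may
#    in reality follow a bent fairway, which is exactly the weakness described.
```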

In order to tackle these problems, the Maritime Traffic Alert and Collision Avoidance System (MTCAS) was developed. MTCAS is an innovative e-Navigation assistance system aiming at reducing human workload and misperceptions of traffic situations. For this purpose, MTCAS provides functionalities for pro-active collision avoidance, including methods for intelligent vessel behaviour prediction and an approach for cooperative collision avoidance. MTCAS is designed to support mariners during collision avoidance; it therefore does not perform manoeuvres itself. MTCAS was developed and modelled by adopting the basic idea of the Traffic Alert and Collision Avoidance System (TCAS), an implementation of the Airborne Collision Avoidance System (ACAS) used in commercial aircraft, Holdsworth (2003). Fig.1 illustrates the problems addressed by MTCAS.

Fig.1: MTCAS principles and functionalities

MTCAS was developed to support navigators with improved collision alarms and to reduce possible misunderstandings in the assessment and resolution of critical ship-to-ship encounters. In order to improve collision warnings, a concept for the calculation of an intelligent CPA applying a context-sensitive behaviour prediction was developed. Additionally, the concept of the Ultimate Action Alarm helps mariners in the assessment of potentially hazardous ship-to-ship encounters.

By providing a concept for cooperative Manoeuvre Negotiation and methods for Critical Situation Resolution, MTCAS makes its contribution to the reduction of potential misunderstandings in the process of collision avoidance.

This paper gives an overview of the concepts developed for MTCAS and is structured according to Fig.1: after an overview of existing performance standards regarding alarm management, the related work used during development is presented. Then the Escalation States, a concept for assessing the criticality of ship-to-ship encounters, are described. Following this, the methods for improving collision warnings are described; for this purpose, the concept of the Critical Ship Pose (CSP) is introduced. This section concludes with a description of an approach for predicting vessel behaviour and the introduction of the Ultimate Action Alarm.

Following this, the concepts for reducing misunderstandings during critical ship-to-ship encounters are presented. This section focuses in particular on the approach for cooperatively negotiating evasive manoeuvres.

The integration of MTCAS into a Vessel Traffic Service (VTS) is then described, followed by a description of the validation process.

2. Performance Standards

As described above, MTCAS shall contribute to improving alarm management. As an assistance system, MTCAS performs the task of auto tracking, as ARPA normally does. In addition, MTCAS was designed as a retrofit system, so that it can be added to existing ship bridges. For this reason, the International Maritime Organization (IMO) performance standards for alarm management of modern Integrated Bridge Systems (IBS) and for auto tracking must be taken into account.

The requirement regarding IBS and alarm management states that the number of alarms should be kept to a minimum, IMO (1996). By improving the collision warning with a context-sensitive behaviour prediction, the number of exaggerated and false alarms can be minimized. This improves the existing alarm management and relieves the navigator.

Auto tracking was introduced as a standard by the IMO in order to improve collision avoidance. ARPA
was intended to reduce the workload for mariners and to ensure a better situation evaluation. As
described in the introduction, ARPA often fails in this context. In MTCAS, the concept of Escalation
States for assessing a ship-to-ship encounter is introduced. This improves the situation evaluation,
since a colour coding for the respective Escalation States is used. The concepts for improved collision
warnings also reduce the workload in this context, IMO (1996).

3. Relevant Work

MTCAS is an assistance system that is intended to support mariners in collision avoidance. Addressed
research areas in this context are encounter analysis and classification, historic ship-to-ship accidents
and near-miss detection approaches. This section gives an overview of relevant work for MTCAS.

3.1 Encounter Analysis and Classification

The work of Kamijo et al. (2000) detects traffic patterns and accidents in video images of road
intersections. With a hidden Markov Model (HMM), the system learns various event behaviour patterns
of each vehicle.

Xue et al. (2009) present a method for collision-free trajectory planning that takes regulations and
rules into account. Their method is a three-step procedure: identifying the target and the current
position of the own vessel, detecting potential collision situations and controlling the ship
automatically. Trajectory planning requires a manoeuvring model; for calculating a possible route, a
three-degree-of-freedom (DoF) model is used. For route finding, Xue et al. (2009) use the potential
field method. For the purpose of collision avoidance, they consider the Convention on the
International Regulations for Preventing Collisions at Sea (COLREGs). This work is based on the work
of Hilgert and Baldauf (1997).

According to Yang et al. (2007), the main driver of collision potential is that some navigators do not
adhere to the COLREGs. With proper behaviour, the number of collisions can be reduced. This is the
result of a study involving intelligent decision-making and the simulation of several encounter
situations.

Tam and Bucknall (2010) concentrate on close-range encounters. They develop a method for assessing
collision risk by determining the encounter type, with a COLREG-compliant encounter evaluation as
the focus of the method. Afterwards, they calculate the dimension of the safety area.

3.2 Ship-to-ship Accidental Situations

A new criterion to evaluate the probability of a collision is presented by Montewka et al. (2012).
The shortest distance at which ships are still able to avoid a collision is called the Minimum Distance
to Collision (MDTC). The required formula is derived by analysing accident statistics.

Youssef et al. (2014) take the work of Montewka et al. (2012) as a basis for developing a probabilistic
approach to select relevant sets of ship-to-ship collision accident scenarios. For this purpose, they
built up a database containing 21 years of historic collisions and near-miss situations.

3.3 Near-miss detection approaches

A Vessel Conflict Ranking Operator (VCRO) is presented by Zhang et al. (2015). The decisive factors
are the distance between the vessels, their relative speed and the relative angle between their headings;
a preceding analysis shows how these factors relate to near-misses. With this approach, they derive
near-miss ship-to-ship encounters from AIS data.

Van Iperen (2015) introduces two ways to detect near-misses: one based on the DCPA calculation, the
other on ship domains. The results are based on the analysis of historic Automatic Identification
System (AIS) data. The author takes the number of collisions that occurred as the main indicator for
the level of safety in a specific area. Then, a ship domain is generated with statistical methods from the
0.5%, 1% and 5% closest vessels in encounter situations. This yields a ship domain similar to the Fujii
ship domain, Fujii and Shiobara (1971).

3.4 Conclusion

The presented works in the context of collision avoidance reveal important factors for collision
potential. The biggest problem in collision avoidance is the human factor. To prevent collisions, the
uncertainty of another vessel's behaviour therefore needs to be considered. The risk and probability of
a collision increase with uncertainty.

For assessing the risk level of a ship-to-ship encounter, the following factors need to be considered:
distance, speed and orientation angle between the two vessels. Using the distance between two vessels
is not new: the CPA calculation considers the distance as the main factor for collision detection, and
the ship domain approaches use the distance as well. Newer approaches propose the manoeuvrability
of the vessels as an important indicator of the risk potential. The manoeuvrability can be derived from
the ship type and size of the vessel. This dynamic information is also important for addressing the
uncertainty problem by generating the set of potential states of the target vessel. These findings were
taken into account during the creation of the MTCAS concepts.

4. Improved Collision Warnings

Existing assistance systems like ARPA generate collision warnings based on the comparison of two
linear movement vectors. However, the CPA calculated in this way is in many cases unrealistic and
exaggerated, since external conditions and typical movement patterns are not included in the
calculation. As a result, the alarms are frequently ignored by the mariners. MTCAS contains two
concepts for calculating improved collision warnings. The first is called Critical Ship Pose (CSP) and
is an extension of the traditional CPA. In addition to this, MTCAS provides the mariners with a feature
for predicting the most probable behaviour of target ships. These concepts are intended to avoid
unnecessary alarms. Furthermore, MTCAS provides the mariners with a new concept for evaluating
ship-to-ship encounters: Escalation States. In the following sections the Escalation States, the CSP and
the behaviour prediction are described.

4.1 Escalation States

MTCAS provides mariners with a new concept for assessing ship-to-ship encounters, similar to TCAS.
Fig.2 shows the concept of Escalation States used in MTCAS. Basically, the further right an Escalation
State is plotted in the figure, the more critical the respective state is. In order to determine the different
states, the CSP must first be calculated. Then, two factors are decisive for the final classification of an
Escalation State: the time and the distance a vessel needs to reach the calculated CSP. The thresholds
for each Escalation State are obtained from nautical expert knowledge.

Fig.2: Escalation State concept in MTCAS

• Clear State
If the CSP is more than 24 nautical miles away, the situation is labelled as Clear State. Here, no
danger exists from the target ship.
• Recommendation State
When both ships continue to move towards each other and require less than 12 minutes to reach
the CSP, the ships are in the Recommendation State. In this state, MTCAS displays the
calculated CSP to the mariners. Furthermore, the mariners have the opportunity to request a
behaviour prediction of the target ship. This prediction gives an estimation of how a vessel will
travel based on the analysis of historic AIS data. In addition to this, the most probable resolution
of a critical ship-to-ship encounter is predicted. The knowledge required is also obtained by
analysing historic AIS data. Both analysis and prediction methods are described later in this
paper.
• Danger State
Ships are located in the Danger State when the time is less than six minutes and the distance to
the CSP is less than six nautical miles. During this state, MTCAS provides the mariners with
the opportunity to resolve the hazardous encounter by applying a cooperative negotiation of
evasive manoeuvres.
• LMM State
The Last Minute Manoeuvre (LMM) State is the last possibility for navigators to prevent the
collision. When this state is entered, an Ultimate Action Alarm is generated. In this state, the
vessels are less than 30 s and 1 nm away from the CSP. Here, MTCAS calculates the manoeuvre
which is necessary to avoid the collision. The manoeuvre can still be performed until a certain
point is reached. This point is indicated to the navigator on the display and is called the Last
Line of Defence (LLoD). For the purpose of manoeuvre calculation, the hydrodynamic
properties of the own ship are taken into account. If the manoeuvre is not carried out by the
LLoD, a collision is inevitable.
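
Taken together, the Escalation States form a decision cascade over the time and distance to the CSP.
The following minimal Python sketch (not the MTCAS implementation; the thresholds are taken from
the description above, whereas the real values stem from nautical expert knowledge) illustrates such a
classification:

    def escalation_state(time_to_csp_s, dist_to_csp_nm):
        # Evaluate from most to least critical.
        if time_to_csp_s <= 30 and dist_to_csp_nm <= 1.0:
            return "LMM"             # Ultimate Action Alarm, LLoD displayed
        if time_to_csp_s <= 6 * 60 and dist_to_csp_nm <= 6.0:
            return "DANGER"          # cooperative manoeuvre negotiation offered
        if time_to_csp_s <= 12 * 60:
            return "RECOMMENDATION"  # CSP displayed, prediction on request
        return "CLEAR"               # e.g. CSP more than 24 nm away

The colour coding mentioned in Section 2 can then be attached directly to the returned state.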

4.2 Critical Ship Pose

In the maritime domain, the most common way of assessing and identifying hazardous ship-to-ship
encounters is the calculation of the CPA between two vessels. For this purpose, a linear vector for each
vessel is calculated based on its current position, speed and course. These vectors are then compared
regarding their closest point, i.e. the CPA. This yields a CPA for each of the involved ships, which
describes a geographical point. The CPA is extended by the calculation of two additional values: the
Distance to CPA (DCPA) and the Time to CPA (TCPA). Finally, the navigators receive a warning if
the TCPA and the DCPA fall below threshold values defined by them. The CPA and the corresponding
values TCPA and DCPA are displayed to the navigator.
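
For reference, the conventional linear CPA calculation sketched above can be written down in a few
lines (a simplified flat-grid illustration, not the MTCAS implementation):

    import math

    def cpa(own_pos, own_speed_kn, own_course_deg,
            tgt_pos, tgt_speed_kn, tgt_course_deg):
        # Positions as (east, north) in nm on a local flat grid, courses in
        # degrees, speeds in knots. Returns (TCPA in hours, DCPA in nm).
        def vel(speed, course):
            rad = math.radians(course)
            return speed * math.sin(rad), speed * math.cos(rad)

        vox, voy = vel(own_speed_kn, own_course_deg)
        vtx, vty = vel(tgt_speed_kn, tgt_course_deg)
        rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]  # relative position
        vx, vy = vtx - vox, vty - voy                              # relative velocity
        v2 = vx * vx + vy * vy
        if v2 == 0.0:                  # identical motion: the range never changes
            return math.inf, math.hypot(rx, ry)
        tcpa = -(rx * vx + ry * vy) / v2
        return tcpa, math.hypot(rx + vx * tcpa, ry + vy * tcpa)

A negative TCPA indicates that the closest point of approach already lies in the past.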

As described above, this procedure has some drawbacks, which are mainly caused by the abstraction
of vessel motion to a linear motion. Context-related information is not considered in this calculation.
In addition, the sensors used for measuring the own course, speed and position and the sensors for
measuring the same values for the target ships usually have inaccuracies. Since only a snapshot is used
for calculating the CPA, these errors combine and yield an imprecise CPA result.

In order to address these drawbacks, a new concept for a context-sensitive CPA calculation combined
with the consideration of sensor inaccuracies was developed. For this purpose, a new term for describing
the result of this calculation is introduced: the Critical Ship Pose (CSP). In contrast to the CPA, the CSP
is defined by two values: the position at which the own ship has the smallest distance to the target ship
and the pose of the own ship at this position. Fig.3 illustrates the CSP concept. On the left side, the own
ship is depicted. The straight line running from the own ship in the direction of travel is its course; the
same holds for the target ship on the right side of the figure. Sensor inaccuracies are represented by the
two funnels on the ships. More precisely, a funnel describes the probability of the ship's position in the
future, taking the sensor errors (speed, heading, position) into account.

Fig.3: Calculation of CSP considering PNT information

When looking at Fig.3, one can notice that the funnel of the own ship is smaller than the funnel of the
target ship. The reason for this is that the own ship is equipped with additional reference sensors so that
the sensor inaccuracy can be determined. Thus, the position, speed and course of the own ship can be
determined with a higher accuracy, and the future position of the own ship can be predicted more
precisely. In order to calculate the closest point of the two ships, the two funnels are compared regarding
their closest point. Due to the sensor inaccuracies, the ships may also be located at other positions within
their respective funnels. For the CSP, the minimum distance between the funnels is taken. This
pessimistic assumption should help to better clarify the criticality of the situation to the navigator.

The problem of imprecise and exaggerated alarms is addressed during the calculation by taking No-Go-
Areas and routing information into account. A No-Go-Area is an area whose water depth is insufficient
for the draught of the respective vessel. The consequence for the calculation is that a ship will not pass
through a No-Go-Area; thus, a calculated closest point cannot lie within a No-Go-Area. As soon as a
vessel sends route information via AIS, MTCAS uses this information to calculate the CSP. Moreover,
the route information of the own ship can be used. The advantage of the CSP concept over the
conventional method is obvious: unnecessary and exaggerated alarms based on a linear CPA calculation
are avoided. This yields less stress and workload for the crew in areas with high traffic density.
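
To make the pessimistic funnel comparison concrete, each funnel can be approximated very roughly as
a circle around the dead-reckoned position whose radius grows with time. In the sketch below, the
growth rates are stand-ins for the combined effect of the sensor errors, and No-Go-Areas and routes are
ignored:

    import math

    def pessimistic_distance(p_own, v_own, p_tgt, v_tgt,
                             grow_own_nm_h, grow_tgt_nm_h, t_h):
        # p_* are (east, north) positions in nm, v_* velocities in kn on a
        # local flat grid; grow_* model how fast the position uncertainty
        # grows (nm per hour). The own-ship funnel typically grows more
        # slowly thanks to the additional reference sensors.
        dx = (p_tgt[0] + v_tgt[0] * t_h) - (p_own[0] + v_own[0] * t_h)
        dy = (p_tgt[1] + v_tgt[1] * t_h) - (p_own[1] + v_own[1] * t_h)
        # Worst case: both ships sit on the funnel edges facing each other.
        return max(0.0, math.hypot(dx, dy) - (grow_own_nm_h + grow_tgt_nm_h) * t_h)

Sampling this function over time and taking its minimum yields a pessimistic estimate of the closest
approach; the actual CSP calculation additionally excludes points within No-Go-Areas and follows
transmitted routes.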

4.3 Vessel Behaviour Prediction

As described above, when vessels are in the Recommendation State, the mariners have the opportunity
to request a prediction of the most probable behaviour of a vessel. The problem of maritime behaviour
prediction is addressed by using a rule-based approach, which consists of two levels. On the first level,
a prediction of the most probable behaviour of a vessel is made based on the most common behaviour
of similar vessels. A possible influence of other ships in the proximity is deliberately ignored. This
results in the most probable maritime situation picture in the proximity of the ship, which can then be
used to estimate potential collision risks.

In order to support the mariners in the pro-active resolution of those ship-to-ship encounters, those
situations have to be classified according to the COLREGs as either a Head-On, Overtaking or Crossing
situation. This classification is done on the second level of the developed behaviour prediction
approach. Following this, the most probable resolution of the situation is predicted based on the
resolution of similar historic ship-to-ship encounters. Since the most likely resolution is predicted, the
predicted resolution is not always COLREG-compliant. Such deviations from the recommended
behaviour were observed repeatedly during the review of historical AIS data.

The most common vessel behaviour for both prediction approaches was obtained by analysing historic
AIS data in the German Bight. The analysis of such data and a more detailed description of the two
rule-based behaviour predictions are given in the following.

4.3.1 State of the Art

Mazzarella et al. (2015) extract typical vessel behaviour from historic AIS messages. They then predict
the future track by applying a Particle Filter, based on the association of the current track with historic
tracks. For this purpose, the mean values for course and speed of the matching historic track are taken
as future values for the prediction.

Using an Artificial Neural Network (ANN), Daranda (2016) predicts the most probable behaviour of
vessels in the Baltic Sea. The prediction result is a track consisting of so-called Turning Points. A
Turning Point is defined as a point where vessels perform a course change of at least four degrees.
These points are extracted by analysing historic AIS data.

Pallotta et al. (2014) extract historic traffic patterns from AIS data. This extraction is based on their
previous work, Pallotta et al. (2013). Based on the association of a historic pattern with the current
track, the predicted vessel behaviour is modelled as an Ornstein-Uhlenbeck process. This enables
modelling the uncertainty of the movement due to the hydrodynamics of a ship.

In the work of Wijaya and Nakamura (2013), the authors use a simple rule-based approach for
predicting vessel behaviour. For this purpose, they extract typical tracks from historic AIS data. The
rule-based prediction is based on the association between a historic track and the current track.

In contrast to this approach, Ristic et al. (2008) present a method for statistically extracting traffic
patterns. They then use a Particle Filter in combination with the traffic patterns for predicting the future
movement of a vessel.

With the help of an ANN, Zissis et al. (2016) predict the most probable behaviour in the Ionian Sea.
The ANN is trained with extracted traffic patterns. In contrast to the previously described approaches,
they do not predict a track but the next vessel position in 15 minutes.

The approaches described so far have in common that they first extract typical behaviour; the prediction
is then based on this extracted behaviour by applying different methods.

Xiao et al. (2017) present an approach for the probabilistic forecasting of marine traffic. To this end,
they extract historic traffic patterns in the Strait of Singapore by applying a lattice-based DBSCAN
algorithm. The behaviour of a vessel is then modelled by using Kernel Density Estimation and can be
predicted for a timespan of five, 30 or 60 minutes ahead.

In order to extract and learn traffic patterns, Rhodes et al. (2005) propose to divide the considered sea
area into a grid of regions. The typical traffic patterns in a region are then learned by applying a
modified version of the Fuzzy ARTMAP classifier.

This work was later extended in order to predict the behaviour 15 minutes ahead; more precisely, the
presence in a grid cell is predicted. By applying the Neural Associative Learning method, the authors
learn the patterns and predict the future behaviour based on them, Bomberger et al. (2006).

Looking at the related work on behaviour prediction, it becomes clear that all approaches give a
prediction based on historical movement patterns; the method used for the prediction differs between
the approaches.

MTCAS follows the same scheme: historic traffic patterns are extracted, and a rule-based approach is
then used for the prediction.

4.3.2 Prediction of the Most Probable Behaviour

As described above, the prediction algorithm on the first level requires the most common behaviour as
a basis. In order to predict the most probable vessel behaviour in MTCAS, a rule-based system is used.
A rule-based system is a method from Artificial Intelligence for deciding based on previously derived
knowledge. The required knowledge is obtained by analysing historic AIS data in order to extract the
traffic patterns and the rules for generating the behaviour prediction.

The extraction and representation of the traffic patterns is inspired by Oltmann (2015) and Pallotta et
al. (2013). As a result, the extracted behavioural patterns are modelled as a graph. One special
characteristic of this graph is that there are two kinds of nodes: one kind represents geographical points
where ships manoeuvre in the near vicinity; the other represents ports or marinas. For each of those
destination nodes, a frequency distribution is calculated which describes the ship types and ship lengths
observed at this point. The first-level prediction algorithm uses this information in order to decide for
which destination a vessel is headed. The Destination field in an AIS message might be another, even
more precise information source for the destination a vessel is headed to. But as Harati-Mokhtari et al.
(2007) state, this field is often erroneous. Besides typing errors, using wrong abbreviations for harbours
or simply not updating the destination of the current journey are reasons for errors in this field. Thus,
the use of this information requires proper data cleansing and error handling.

The rule-based approach for predicting vessel behaviour on the first level is divided into two parts. The
first part aims at predicting the most probable destination of the vessel. For this purpose, the length and
type of the considered vessel are compared to the frequency distribution of each destination node in the
sea area. The node with the highest accordance is selected as the destination. Following this, the path
over the graph from the current vessel position to the predicted destination is calculated.
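
A minimal sketch of the first part could look as follows; the data layout of the destination nodes and
the additive scoring are purely illustrative, whereas MTCAS derives its knowledge from historic AIS
data:

    def predict_destination(vessel_type, vessel_length_m, destination_nodes):
        # destination_nodes maps a node id to the frequency distributions
        # observed there, e.g.
        #   {"type": {"cargo": 0.7, "tanker": 0.2},
        #    "length": {(0, 100): 0.3, (100, 200): 0.6}}
        def length_share(dist, length):
            for (lo, hi), share in dist.items():
                if lo <= length < hi:
                    return share
            return 0.0

        def accordance(node):
            # Simple additive score; MTCAS may weight the factors differently.
            return (node["type"].get(vessel_type, 0.0)
                    + length_share(node["length"], vessel_length_m))

        return max(destination_nodes,
                   key=lambda nid: accordance(destination_nodes[nid]))

The second part, the path over the traffic-pattern graph from the current position to the predicted
destination node, can then be computed with a standard graph search (e.g. a shortest-path algorithm).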

4.3.3 Prediction of Evasive Manoeuvres

In contrast to the method for predicting the most probable behaviour, this method does not require
typical traffic patterns. Instead, it requires predicted behaviour as an input. Based on this prediction,
potential hazardous ship-to-ship encounters with the own ship are estimated.

Here, the CSP between the predicted track and the own ship's route is calculated. If the distance between
the two ships at the CSP falls below a threshold, the encounter is examined further. According to the
COLREGs, the encounter situation is labelled as either Head-On, Overtaking or Crossing. In the data
analysis phase, ship-to-ship encounters for each of these situations were extracted. From these, a rule
for the creation of evasive manoeuvres was statistically extracted based on the length ratio between the
two vessels. This rule is then applied to the ship-to-ship encounter identified by the first-level
prediction. As a result, the evasive manoeuvres are predicted. The most important finding in this context
is that vessels do not always perform evasive manoeuvres that are compliant with the COLREGs.
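
The COLREG labelling step can be illustrated with the customary textbook angle thresholds (reciprocal
courses within 6°, approach from more than 22.5° abaft the beam); whether MTCAS uses exactly these
values is not stated here:

    def classify_encounter(own_course_deg, bearing_to_target_deg, tgt_course_deg):
        def ang_diff(a, b):
            d = abs(a - b) % 360.0
            return min(d, 360.0 - d)

        # Bearing of the own ship as seen from the target, relative to her bow.
        rel = ang_diff((bearing_to_target_deg + 180.0) % 360.0, tgt_course_deg)
        if rel > 112.5:      # coming up from more than 22.5 deg abaft the beam
            return "Overtaking"
        if ang_diff(own_course_deg, (tgt_course_deg + 180.0) % 360.0) <= 6.0:
            return "Head-On"  # nearly reciprocal courses
        return "Crossing"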

4.3.4 Evaluation

The developed prediction algorithms were evaluated with historical AIS data covering a period of three
months in the Innenjade region in the vicinity of Wilhelmshaven, Germany. The Innenjade is
characterized by the existence of several harbours, a deep-water port and a port of the German Navy.
For evaluation purposes, the related AIS data was grouped into tracks. Then, a track and a position
within this track are selected randomly. Based on this position, the destination and the corresponding
track are predicted. Afterwards, the distance between the historic and the predicted track is calculated.

In order to evaluate the prediction algorithm on the second level, a route for an own ship is generated.
Several ship-to-ship encounters with a predefined evasive manoeuvre are then created. Starting from a
point on the track, the second algorithm is used to calculate the evasive manoeuvre. The predicted
manoeuvre can then be compared to the real evasive manoeuvre. Table I summarizes the prediction
accuracy of the two rule-based prediction algorithms.

Table I: Prediction results
             Predicted Destination    Predicted Give-Way Vessel
Correct               70%                       84%
Incorrect             30%                       16%

As one can see, the algorithm for predicting the Give-Way Vessel performed better than the prediction
of a vessel's destination. For determining the distance between the predicted track and the historic track,
the median of the distances between these two tracks is used. This yields a median distance of 0.204
nautical miles.
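
The destination-prediction part of this evaluation procedure can be condensed into a short sketch;
predict_track and point_to_track_dist_nm are placeholders for the first-level prediction and a
point-to-track distance measure, while the loop logic follows the description above:

    import random
    import statistics

    def evaluate_destination_prediction(tracks, predict_track,
                                        point_to_track_dist_nm, runs=1000):
        errors = []
        for _ in range(runs):
            track = random.choice(tracks)        # pick a historic track ...
            i = random.randrange(len(track))     # ... and a random position in it
            predicted = predict_track(track[i])  # predict destination and track
            # Median distance between the historic remainder and the prediction.
            errors.append(statistics.median(
                point_to_track_dist_nm(p, predicted) for p in track[i:]))
        return statistics.median(errors)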

5. Reduction of Misunderstandings

Before performing evasive manoeuvres, it is important that there is a consistent operational picture on
each of the involved ships. Misunderstandings in this context can lead to erroneous actions, which can
result in collisions even though the crews are acting in order to avoid a collision. To address this
problem, MTCAS provides the mariners with a mechanism for the consistent assessment of ship-to-ship
encounters. This mechanism is then extended by an algorithm for the negotiation of evasive
manoeuvres. A description of this mechanism follows.

5.1 Evasive Manoeuvre Negotiation

When in the Danger State, MTCAS offers the mariners a mechanism for resolving the situation by
negotiating evasive manoeuvres. The calculation of the evasive manoeuvres and the negotiation
principles are based on the work of Kim et al. (2017). The algorithm is extended in order to consider
the COLREGs. At the beginning, the ship-to-ship encounter is classified according to the COLREGs
(Head-On, Crossing or Overtaking). This is an important parameter for the negotiation algorithm, since
different kinds of evasive manoeuvres are recommended by the COLREGs depending on the classified
encounter.

For the following procedure, it is important to determine whether all involved vessels are equipped with
MTCAS or just the own ship. In the prototype, three cases are considered, which are illustrated in Fig.4.

Fig.4: Considered MTCAS equipment combinations in the prototype


• 1 Ship
The 1-Ship combination describes the case in which only the own ship is equipped with an
MTCAS. In this case, the negotiation algorithm generates a COLREG-compliant manoeuvre
based on the classified ship-to-ship encounter. The proposal is then displayed on the screen,
accompanied by an on-screen request to the navigator to either accept or reject the proposal. If
he accepts the proposal, MTCAS monitors the resolution of the situation. As soon as a deviation
from the accepted proposal is detected, MTCAS raises an alarm that encourages the navigator
to contact the other vessel via radio in order to resolve the situation. This alarm is also raised
when the initial manoeuvre proposal is rejected.

• 2 Ship
When both vessels are equipped with MTCAS, the negotiation procedure also starts with the
generation of an evasive manoeuvre proposal based on the classified ship-to-ship encounter.
The generated proposal is then displayed on the MTCAS screen on both vessels. If both
navigators have accepted the proposal, the agreement is shown on the screen and the manoeuvre
is displayed. Similar to the first case, MTCAS monitors the resolution of the situation and raises
an alarm if a deviation is detected. MTCAS also calls on the navigators to coordinate via radio
in the event that one of the two navigators rejects the proposed manoeuvres.

• n Ship
The third case considered in the MTCAS prototype is that n vessels are equipped with MTCAS.
Here, the procedure is almost the same as in the previously described 2-Ship scenario. The
difference is that manoeuvres are exchanged between n ships. If one of the involved vessels
declines the manoeuvre proposal, the negotiation is cancelled for all vessels. Here, too, MTCAS
advises resolving the situation by radio.
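
The negotiation round common to the 2-Ship and n-Ship cases can be condensed into a small sketch;
the vessel objects and their methods are hypothetical stand-ins for the actual MTCAS components:

    def negotiate(proposal, vessels):
        # Each vessel answers the manoeuvre proposal with True (accept)
        # or False (reject).
        answers = [v.review(proposal) for v in vessels]
        if all(answers):
            for v in vessels:
                v.monitor(proposal)       # raise an alarm on any deviation
            return "AGREED"
        for v in vessels:
            v.cancel(proposal)            # one rejection cancels it for everyone
            v.advise_radio_contact()      # fall back to coordination via radio
        return "CANCELLED"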

The use of such a collaborative approach to negotiating evasive manoeuvres yields several benefits.
The negotiation algorithm classifies the ship-to-ship encounter, and the result is exchanged between
the systems. This prevents possible misunderstandings in the assessment of ship-to-ship encounters.
Furthermore, this enables a distinct classification into Stand-On and Give-Way vessel, as defined in
the COLREGs. By requesting the navigators to accept or decline the manoeuvre proposal, it becomes
clear to each vessel whether the other has recognized the proposal and the situation assessment. This
also prevents misunderstandings and reduces the workload of the navigators.

6. Vessel Traffic Service Integration

The MTCAS approach is, however, not limited to ships. One result of the project is the integration of
MTCAS into VTS software. By using MTCAS in the VTS, the intelligent behaviour prediction and the
improved alarms allow early detection of potential conflict areas in the sea area. Furthermore, the
resolution of potentially dangerous situations can be detected in the VTS. The results of the negotiations
are visualised in the VTS.

This clearly shows the advantage of integrating the shore side: not only the crews at sea have a common
operational picture, but also the VTS operators. In this way, further potential misunderstandings can be
avoided, which generally increases safety at sea.

7. Validation

MTCAS was developed and tested with the eMaritime Integrated Reference Platform (eMIR), Hahn
and Noack (2016). This infrastructure includes a ship simulator and the research boat Zuse. In addition,
a sea container equipped with a ship bridge and sensors such as AIS is part of eMIR.

The ship simulation of eMIR was used during the development process to test MTCAS. For this
purpose, test scenarios were generated which cover different ship-to-ship encounters. In addition,
scenarios were extracted from real ship collisions, such as the collision in October 2015 between
Flinterstar and Al Oraiq. The iterative testing and development of MTCAS enabled quick integration
and validation. Besides testing MTCAS in simulation, the research boat Zuse was used to test the
MTCAS prototype. In a total of three tests, each lasting one week, the MTCAS prototype was tested at
different stages of development. Besides integrating MTCAS into the Zuse, the research cutter
Senckenberg was also equipped with an MTCAS. Both vessels are shown in Fig.5.

Fig.5: Zuse (left) and Senckenberg (right) performing an evasive manoeuvre recommended by MTCAS
during the field test in the Innenjade

The Senckenberg is a research cutter with a length of approximately 30 m and belongs to the
Senckenberg Institute. In Fig.5, the research boat Zuse is moored to the Senckenberg. The Innenjade
near Wilhelmshaven was selected as the test area. The Innenjade is the connection between the
Jadebusen and the German Bight and is less frequented than the Elbe or Weser. This gives a lot of
flexibility when testing such a system. The tests in the Innenjade correspond to the scenarios used in
the simulator.

The last test took place in September 2018. In this context, the general functionality of the features
described above was successfully tested and demonstrated.

During the tests, the sea container was positioned in the port of Wilhelmshaven. By integrating the VTS
components of the MTCAS, the corresponding functionality could also be successfully tested.

8. Conclusion

Within the context of the MTCAS project, concepts were developed to support navigators in collision
avoidance. A method for a more intelligent hazard assessment was developed, taking into account
external information such as No-Go-Areas, route information and sensor inaccuracies. The intelligent
prediction of ship behaviour enables navigators to assess possible collision risks at an early stage. A
function for the evaluation of ship encounters and the cooperative negotiation of evasive manoeuvres
offers further support to navigators in collision avoidance. In addition, such a function can contribute
to the reduction of possible misunderstandings in the assessment of the situation. The integration of
MTCAS into the VTS enables the creation of a consistent situation picture at sea and on land. All
developed concepts were successfully tested in simulation and under real conditions with two ships.

Acknowledgement
The work was conducted within the projects “MTCAS – electronic maritime collision avoidance” and
“Step-Up!CPS”. MTCAS was funded by the Federal Ministry for Economic Affairs and Energy, and
“Step-Up!CPS” was funded by the Federal Ministry of Education and Research (Germany).

References

BOMBERGER, N.A.; RHODES, B.J.; SEIBERT, M.; WAXMAN, A.M. (2006), Associative learning
of vessel motion patterns for maritime situation awareness, 9th Int. Conf. Information Fusion

DARANDA, A. (2016), A Neuronal Network Approach To Predict Marine Traffic, Technical Report,
Vilnius University

EUROPEAN MARITIME SAFETY AGENCY (2017), Annual Overview of Marine Casualties and
Incidents 2017

FUJII, Y.; SHIOBARA, R. (1971), The analysis of traffic accidents, J. Navigation 24(4), pp.534–543

HAHN, A.; NOACK, T. (2016), eMaritime Integrated Reference Platform, Deutscher Luft- und
Raumfahrtkongress 2016

HARATI-MOKHTARI, A.; WALL, A.; BROOKS, P.; WANG, J. (2007), Automatic Identification
System (AIS): Data Reliability and Human Error Implications, J. Navigation 60, pp.373-389

HEXEBERG, S.; FLATEN, A.L.; ERIKSEN, B.-O.H.; BREKKE, E.F. (2017), AIS-based vessel
trajectory prediction, 20th Int. Conf. Information Fusion

HILGERT, H.; BALDAUF, M. (1997), A common risk model for the assessment of encounter situations
on board ships, Deutsche Hydrographische Zeitschrift 49(4), pp.531–542

HOLDSWORTH, R. (2003), Autonomous In-Flight Planning to replace pure Collision Avoidance for
Free Flight Aircraft using Automatic Dependent Surveillance Broadcast, Swinburne University

IMO (1996), Resolution MSC.64(67): Adoption of New and Amended Performance Standards, Int.
Maritime Org.

KAMIJO, S.; MATSUSHITA, Y.; IKEUCHI, K.; SAKAUCHI, M. (2000), Traffic monitoring and
accident detection at intersections, IEEE Trans. Intelligent Transportation Systems 1(2), pp.108–118

KIM, D.; HIRAYAMA K.; OKIMOTO T. (2017), Distributed Stochastic Search Algorithm for Multi-
Ship Encounter Situations, J. Navigation 70(4), pp.699–718

MAZZARELLA, F.; ARGUEDAS, V.; VESPE, M. (2015), Knowledge-based vessel position prediction
using historical AIS data, Sensor Data Fusion: Trends, Solutions, Applications (SDF), Bonn

OLTMANN, J.-H. (2015), ACCSEAS North Sea Region Route Topology Model (NSR-RTM)

MONTEWKA, J.; GOERLANDT, F.; KUJALA, P. (2012), Determination of collision criteria and
causation factors appropriate to a model for estimating the probability of maritime accidents, Ocean
Eng. 40, pp.50–61

PALLOTTA, G.; HORN, S.; BRACA, P.; BRYAN, K. (2014), Context-enhanced vessel prediction
based on Ornstein-Uhlenbeck processes using historical AIS traffic patterns: Real-world experimental
results

PALLOTTA, G.; VESPE, M.; BRYAN, K. (2013), Vessel Pattern Knowledge Discovery from AIS Data:
A Framework for Anomaly Detection and Route Prediction, Entropy 15(6), pp.2218-2245

RHODES, B.J.; BOMBERGER, N.A.; SEIBERT, M.; WAXMAN, A.M. (2005), Maritime situation
monitoring and awareness using learning mechanisms, Military Communications Conf., pp.646-652

RISTIC, B.; LA SCALA, B.; MORELANDE, M.; GORDON, N. (2008), Statistical analysis of motion
patterns in AIS data: Anomaly detection and motion prediction, 11th Int. Conf. Information Fusion

TAM, C.; BUCKNALL, R. (2010), Collision risk assessment for ships, J. Marine Science and
Technology 15(3), pp.257–270

VAN IPEREN, E. (2015), Classifying Ship Encounters to Monitor Traffic Safety on the North Sea from
AIS Data, TransNav, Int. J. Marine Navigation and Safety of Sea Transportation 9(1), pp.51–58

WIJAYA, W.M.; NAKAMURA, Y. (2013), Predicting Ship Behavior Navigating through Heavily
Trafficked Fairways by Analyzing AIS Data on Apache HBase, 1st Int. Symp. Computing and
Networking, pp.220-226

XIAO, Z.; PONNAMBALAM, L.; FU, X.; ZHANG, W. (2017), Maritime Traffic Probabilistic
Forecasting Based on Vessels’ Waterway Patterns and Motion Behaviors, IEEE Trans. Intelligent
Transportation Systems 18(11), pp.3122–3134

XUE, Y.; LEE, B. S.; HAN, D. (2009), Automatic collision avoidance of ships, J. Eng. for the Maritime
Environment 223(1), pp.33–46

YOUSSEF, S.; KIM, Y.; PAIK, J.; CHENG, F.; KIM, M. (2014), Hazard identification and
probabilistic scenario selection for ship-ship collision accidents, Int. J. Maritime Eng. 156(Part A1),
pp.61–80

ZHANG, W.; GOERLANDT, F.; MONTEWKA, J.; KUJALA, P. (2015), A method for detecting
possible near miss ship collisions from AIS data, Ocean Eng. 107, pp.60–69

ZISSIS, D.; XIDIAS, K.; LEKKAS, D. (2016), Real-Time Vessel Behavior Prediction, Evolving
Systems 7(1), pp.29-40

Shore-side Assistance for Remote-controlled Tugs
Laura Walther, Fraunhofer Center for Maritime Logistics and Services, Hamburg/Germany,
laura.walther@cml.fraunhofer.de
Britta Schulte, Fraunhofer Center for Maritime Logistics and Services, Hamburg/Germany,
britta.schulte@cml.fraunhofer.de
Carlos Jahn, Fraunhofer Center for Maritime Logistics and Services, Hamburg/Germany,
carlos.jahn@cml.fraunhofer.de

Abstract

In particular, busy international ports may experience bottlenecks caused by growing ship sizes,
increased transport volumes and limited port basins. To increase safety and efficiency of ship
navigation in ports, the German research project FernSAMS aims to develop an unmanned remote-
controlled tug. In order to enable proper remote control, shore-side assistance by an adequate system
is required. This assistance system needs to ensure data transfer and communication with the tug, allow
line handling and propulsion control and integrate a situational awareness system. The functionalities
of the system are elaborated within this paper with particular focus on the situational awareness.

1. Introduction

Busy international ports may experience bottlenecks caused by growing ship sizes, increased transport
volumes and limited port basins. Thus, ensuring and increasing the safety and efficiency of ship
navigation in ports plays an important role. Often, one or more assistance tugs are utilized to support
seagoing vessels by pushing or pulling operations, Hensen (2003). However, tugs are among the five
types of vessels with the highest total losses, accounting for almost sixty total losses over the last ten
years, Allianz Global Corporate & Specialty SE (2018). This risky work environment, in combination
with rather long idle times and the requirement of highly qualified personnel during tug operations,
suggests potential for remote-controlled operation of tugs. Advanced communication and modern
control techniques offer new possibilities for water- or land-based manoeuvre coordination and control
of the assistance tugs as well as for optimization of the manoeuvres.

In order to enhance safe and efficient ship navigation in ports and thus contribute to competitiveness
and environmental sustainability, the FernSAMS research project aims to develop a remote-controlled
tug for berthing and unberthing manoeuvres of large merchant ships. Funded by the Federal Ministry
of Economic Affairs and Energy, the project was started in September 2017 by the seven partners
Voith, MacGregor, MediaMobil, MTC Marine Training Center Hamburg, Fraunhofer Center for
Maritime Logistics and Services CML, the Institute for Fluid Dynamics and Ship Theory (Hamburg
University of Technology) and the Federal Waterways Engineering and Research Institute. Compared
to other recent tests with remote-controlled tugs, such as the “Svitzer Hermod” by Rolls-Royce (2018)
or the “Borkum” by KOTUG (2018), which rather concentrate on performing navigational tasks
remotely, FernSAMS aims to provide a holistic approach. Accordingly, the three-year project not only
focusses on designing an innovative remote-controlled harbour tug but also on developing all the
components required for its remote operation assisted by autonomous functionalities.

Functionalities addressed within the holistic approach refer to automated line handling, communication
and data exchange as well as to shore-side assistance, training of mariners and manoeuvre optimization
by a simulation model. In particular, shore-side assistance requires an adequate system. This assistance
system as the human-machine-interface needs to ensure data transfer and communication with the tug,
allow line handling and propulsion control and integrate a situational awareness system. The specific
requirements and the derived concept of the system including the mentioned functionalities are
elaborated upon in the following with particular focus on situational awareness.

2. Requirements for Remote Operation of Tugs

The development of a remote-controlled harbour tug including all components required for its operation
necessitates a prior analysis and specification of operations, which is presented in detail by Walther et
al. (2018). The considered operations between the tug being ordered and being moored again range
from leaving berth and berthing through transiting, waiting, convoying and taking position to pulling
or pushing operations with or without establishing a line connection. This analysis not only provides
the basis to determine the degree of automation or remote control, but also to specify features of the
tug and its assistance system required for remote-controlled operation assisted by autonomous
functionalities. Hence, to specify necessary features, refine the requirements and ensure acceptance by
mariners, a workshop with approximately forty stakeholders as well as a survey with thirty participants
have been conducted.

The survey included questions about the eligibility of manoeuvres for remote control, the location of
the remote control station, personal responsibility and training needs of personnel, the potential degree
of automation of different processes, dangers and risks, necessary assistive technology, acceptance of
changes on board, and the challenges and opportunities regarding remote-controlled tugs. The results
of the assessment identified (in order of frequency of mentions) firefighting, push/pull, escorting,
(un-)berthing, lock passages and indirect towing as suitable manoeuvres for remote-controlled tugs.
Concerning the location of the remote control station, respondents preferred remote control from a
shore station (45% of responses) and remote control from the bridge of the towed vessel (35% of
responses).

Regarding the potential degrees of automation of different processes, Fig.1 lists the results as assessed
by the respondents, who rated six different processes on a scale from low (1) to high (5). The processes
comprise leaving berth, transit, waiting, tug operations with line, tug operations without line and
berthing. According to these results, all processes offer at least a medium potential for automation.
The operations wait and transit were assessed with the highest potential, while the lowest potential is
assigned to processes with or without line connection, i.e. pushing or pulling operations. It can be
concluded that a lower potential degree of automation is associated with more complex processes. In
other words, according to the survey results, the lower the complexity of a process, the higher its
suitability for automation. Referring to the development, however, the trade-off mentioned by Walther
et al. (2018) should be kept in mind. Correspondingly, a higher level of autonomy may increase
development efforts but also robustness against communication drop-outs. Nevertheless, remote
control may enhance the operator’s situational awareness in safety-critical situations. This may not be
the case when the operator is solely occupied with monitoring tasks in autonomous operation.

Fig.1: Potential degrees of automation of different processes from low (1) to high (5) (N=13), Walther
et al. (2018)

In order to specify necessary features of the remote-controlled tug and its assistance system, one
question of the survey addressed the types of assistance features and technical aids that mariners expect
to be integrated into a remote control system. The results of the survey are visualized in Fig.2. Out of
the six means of providing assistance mentioned most often, three can be considered technical aids.
These include onboard sensors and the display of measured forces, cameras for visual monitoring and
the display of the situation on a chart. In contrast, the mention of incorporating mariners’ best practice
and considering navigation by landmarks represents an interest in incorporating traditional maritime
navigation methods, allowing trained personnel to transfer their knowledge from a traditional tug to the
remotely controlled tug. Lastly, respondents expect a transfer of situational awareness to simulate the
“feeling” of being on board the tug, possibly through the incorporation of Augmented Reality (AR)
or Virtual Reality (VR) technology. AR/VR technology presents the chance of providing a link between
traditional maritime routine and technical innovations, as immersive 360° visuals can be enhanced by
overlaying assistive information like chart displays and additional sensor data.

[Chart data: onboard sensors and display of measured forces 20%; cameras for visual monitoring (e.g.
of lines) 20%; incorporation of mariner’s best practice in remote control 19%; display of situation on
chart 13%; transfer of situational awareness (e.g. by AR/VR) 11%; other 9%; consideration of
navigation by landmarks 8%]

Fig.2: Expected assistance for remote control (N=13), Walther et al. (2018)

3. Concept of the Tug Assistance System

On the basis of the previously specified requirements, a concept for the Tug Assistance System (TAS)
consisting of the modules shown in Fig.3 is derived. The tug is linked with the TAS through the Data
and Communication System (DCS); at this stage, terrestrial communication with a satellite backup is
assumed. To enable line handling of the remote-controlled tug, the Autonomous Line Handling System
(ALHS) is integrated on board the tug and in the TAS. The Propulsion Control System (PCS) is also to
be integrated both on the tug and in the TAS. This will allow steering the tug by controlling its engine
and Voith Schneider Propellers. Onboard sensors to measure forces or distances are covered by the
Onboard Sensor System (OSS). These sensors, particularly visual sensors such as cameras, provide
inputs among others to the Situational Awareness System (SAS), which is the fourth module of the
TAS and central to this paper. Most importantly, the SAS shall transfer the “feeling” of being on board
the tug to the operator and therefore provide maximal situational awareness, which is essential to enable
trained personnel to remotely control the tug. In addition, the SAS provides an interface for the
initiation, termination and observation of the autonomous operation of the tug. Thus, the SAS needs to
closely interact with the other modules of the TAS to display, send and receive information and
commands.

Fig.3: Concept of the Tug Assistance System for Remote Operation, Walther et al. (2018)

4. Situational Awareness and Augmented Reality in the Maritime Field

Situational awareness (SA) is defined as “the perception of the elements in the environment within a
volume of time and space, the comprehension of their meaning and the projection of their status in the
near future”, Endsley (1988). Endsley further describes three levels of situational awareness: the correct
perception of the situation, the correct interpretation of all elements, and the prediction of future
developments based on the first two levels. The influence of situational awareness has been studied in
various fields including aviation and driving and has been deemed an important factor in maritime
operations. A study analysing maritime accident reports found that 71% of all human errors could be
attributed to SA-related problems, Grech et al. (2002). In the maritime field specifically, SA has been
studied in the context of remote control centres for the control of unmanned ships, Porathe et al. (2014).
Porathe et al. mention three elements that provide most of the necessary SA for a remote operator in a
shore control centre: a camera system, the electronic chart and radars.

In order to promote situational awareness, Augmented Reality (AR) may be applied, as addressed with
regard to the field of aviation among others by Foyle et al. (2005). AR is a variation of Virtual Reality
(VR). While VR completely immerses a user inside a synthetic environment, AR allows the user to see
the real world around him. This world can be superimposed by or composited with virtual elements.
Ideally, the user perceives the situation as if the real and virtual elements coexisted in the same space,
Azuma (1997). This enhances the possibilities for applications on remote-controlled tugs to simulate
the “feeling” of being on board the tug.

AR displays can be divided into two classes. See-through AR displays are characterized by the ability
to see through the display medium directly to the world surrounding the observer. Computer-generated
graphics are then superimposed over this image. Monitor-based displays are non-immersive and
therefore do not allow a direct view of the environment. The live or stored video images of the
environment are displayed on the monitor and then overlaid with computer-generated graphics,
Milgram et al. (1995). Additionally, a distinction can be drawn between a head mounted display
(HMD), a handheld display and a spatial display. HMDs are display devices worn on the head that place
both images of the real and virtual environment over the user’s view of the world. HMDs can either be
video-see-through or optical-see-through. While the former needs two cameras to provide both the real
and the virtual environment, the latter allows views of the real environment by employing a half-silver
mirror technology. Handheld displays (e.g. tablet PCs) can be held in the hands of the user and use
video-see-through techniques to overlay graphics onto the real environment, Furht (2011). Spatially
Augmented Reality (SAR) employs projection-based display devices that provide a projected image of
the virtual objects in or onto the user’s immediate environment, Raskar et al. (1998).

Considering these technologies, AR may be applied in different ways to increase the safety and
efficiency of tug navigation. Two options to employ AR technology on board the tug as well as two
options for its use on shore are briefly discussed in the following paragraphs.

The first variant is the use of AR on the tug. In this scenario, situational awareness information may be
projected onto the bridge window. This allows the master to maintain the view of the surrounding
environment while supporting him with additional information. Advantages include a direct perception
of the situation on board and little required familiarization for the master. Challenges arise because
several persons can be on the bridge and move around. Although this approach may pose some
challenges and is rather unsuitable for remote-controlled unmanned tug operation, it may present an
intermediate step towards this objective by providing additional assistance and increasing acceptance
at an earlier stage. Another use of AR technology directly on the bridge of the tug features an HMD as
a display. The master wears a helmet or glasses with optical-see-through technique. Situational
awareness information is superimposed on the master’s view of the surrounding environment. This
variant has similar advantages and disadvantages as the previous one, but trades smaller display size
and capital investment for higher mobility. Furthermore, multiple persons may be on the bridge.

With regard to shore-side utilization of AR for remote operation, approaches using several monitors
can be differentiated from those using an HMD. A monitor-based approach in a shore-side remote-
control station is implemented and tested by Rolls-Royce (2018) in collaboration with the Danish tug
operator Svitzer. Live footage from a camera mounted on board the tug is displayed and superimposed
with situational awareness information. However, it is noted that a “significant difference from
controlling the vessel from the bridge was that the tug operator was not able to feel any of the sensations
of being in control”, Wingrove (2018). Looking at this requirement, the HMD option may be
advantageous. Furthermore, an HMD to simulate the “feeling” of being on board the tug may not only
be used in a shore station but may also be integrated into a mobile version in case of remote control
from the bridge of the towed vessel, which corresponds to the respondents’ preferences resulting from
the previously addressed survey. Since a shore centre is seen as the most suitable location to set up and
test the tug assistance system before potentially going for a mobile version, the concepts presented in
the following are based on the assumption of a shore-side approach.

5. Interface Concept

Considering a shore-side approach and the integration of an HMD for improved situational awareness,
this section elaborates the conceptual design of the SAS interface, particularly its technical
specifications, input needs and visualization options.

5.1. Architecture and Specification of the Situational Awareness System

The Situational Awareness System consists of a visualization system and input modules to control the
tug, which are provided both as a desktop application and an AR application. Both clients communicate
with each other via a network connection. The simplified software architecture is shown in Fig.4.

Fig.4: Simplified software architecture of the Situational Awareness System

The visualization system uses a camera view provided by one or several cameras on board the tug,
including at least one 360° stereoscopic camera streaming live 3D video to the AR module, while 2D
non-immersive video is streamed to the desktop application. Additional information including an
ECDIS chart display, line information from the ALHS and information from the onboard sensors is
also integrated, using several screens in the desktop application and annotated information in the AR
application. Table I summarizes other potential types of sensors that could be added on board the tug
to improve situational awareness. The input modules communicate with the PCS to send control
commands to the tug. Several potential input methods, ranging from standard mouse and keyboard
input to position-tracked Virtual Reality controls, are analysed and described below.

Table I: Potential sensors on board of a tug

Usage        Sensor Types
Engine       Level, flow
Navigation   Accelerometer, GNSS, gyroscope, magnetometer, radar, camera, AIS (IR, 360), INS
Distances    Echo-sounder, optical (LED, laser/LiDAR), ultrasonic
Environment  Anemometer, barometer, echo-sounder, hygrometer, laser (LiDAR), temperature
Towing       Force transducer, hydraulic pressure, oil level, tachometer (for drum and hydraulic
             pump speed), temperature (for oil)

The AR module of the SAS is implemented using the Unity3D engine. The Oculus Rift CV1 HMD is
used as a Virtual Reality display and the Oculus Rift Touch controllers can be used as input devices.
The Oculus Rift is a Virtual Reality headset using two displays, which provide a resolution of
1080x1200 pixels per eye and a 110° stereo field of view in total. Two tracking sensors are placed
parallel to each other on a desk 1-2 m apart. With these sensors, the position and rotation of the HMD
and the controllers can be measured and provided to the system. Thus, the system can use six degrees
of freedom (6-DOF) through movement and rotation of the controllers and the HMD as input. Any
movements of the HMD, and therefore the head, are directly translated into camera movement. Any
other interactions (mainly controlling the tug, but also communication features and switching between
different camera views) require another input device.

5.2. Classification of the Interface

To classify the SAS interface, Fong and Thorpe’s work regarding vehicle teleoperation interfaces can
be drawn upon. Four categories of teleoperation interfaces are described: direct, multimodal/multi-
sensory, supervisory control and novel. Direct interfaces use the traditional controls of the operated
vehicle and simple video feedback. Multimodal interfaces provide different modes of operation and
combine data from several sensors. Supervisory control interfaces only transfer commands from the
operator to the vehicle, which is at least semi-autonomous. Lastly, the term novel interface aggregates
all types of interfaces that employ novel methods (e.g. gesture interfaces) or are used for novel
purposes, Fong and Thorpe (2001). Consequently, the SAS is a combination of a supervisory control
and a multimodal interface, although it could also be classified as a novel interface when considering
the use of AR/VR technology and motion controls.

The desktop client represents a supervisory control interface, being primarily used to initiate and
monitor autonomous operation of the tug. The AR client can be defined as a multimodal/multisensory
interface as it combines data from various sensors and data sources, including camera or simulator
imagery, and allows the user to switch between visualization methods and viewpoints. As noted by
Terrien et al. (2000), multisensory interfaces can “improve situational awareness, facilitate depth
judgement, support decision making and speed command generation”.

This classification highlights the advantages of the developed architecture with regard to the system
objective of facilitating situational awareness and therefore increasing safety during teleoperation.

5.3. Analysis of Conventional Maritime Control Elements

In order to efficiently integrate the AR client and provide the best possible assistance to the operator,
the selected input device should be compatible with the Unity3D engine. To determine the most suitable
device to simulate the controls of a real tug, the control handles of common ships are analysed regarding
their interaction mechanisms and briefly described below:

• Lever (single or combined)
Levers are commonly used to control the engine or thrusters. They can be aligned parallel to
the ship’s centreline, i.e. the longitudinal x axis, so that the lever may be used forwards and
backwards along the x axis (e.g. to control the force of propulsion ahead and astern). Otherwise
it can be positioned perpendicular to the centreline, i.e. in the direction of the ship’s y axis.
Here, the lever moves left and right, e.g. to control bow or stern thrusters. Combined levers can
be used to set two elements (e.g. forward and aft engine) to the same value simultaneously.
• Joystick (2 or 3 degrees of freedom)
Another option is the control via a joystick. Moving the joystick along its horizontal axis
changes the direction/rudder position, while moving it on the vertical axis adjusts the propulsion
force. If the joystick has a third degree of freedom in addition to the horizontal and vertical
axes, i.e. the rotation about the z axis (yaw), a corresponding rotational motion of the controlled
ship may be induced.
• Azimuth lever
Ships equipped with an Azimuth propulsion system are typically controlled with specialized
levers which can be rotated about the z- and y-axis (yaw & pitch). Their rotation directly
controls the orientation of the Azimuth propellers.
• Steering wheel and tiller
Other options to control the rudder orientation are steering wheels and tillers. While steering
wheels return to a neutral position when let go (aligned to the centreline), tillers can have the
option of staying in position.
• Button panel
Occasionally, some systems can be addressed via buttons. If, for example, continuous
adjustment of the RPM is not necessary, it can be set in steps with buttons corresponding to
specific RPM values.

Different types of maritime vessels utilize differing combinations of the aforementioned control
elements. The emphasis of this paper is on the control of a RAVE tug that is equipped with two
longitudinally aligned Voith-Schneider Propellers (VSP) at the front and aft of the vessel for high
manoeuvrability due to larger indirect steering forces, as presented by Oggel (2018).

The propellers consist of several controllable blades attached to a rotating circular disk. To control the
vessel, the rotational speed of this disk and the blade angles are regulated. This controls magnitude and
direction of the propeller’s thrust, respectively. Such a propulsion system is usually controlled with two
joysticks, for forward and aft propellers. These control the thrust ahead or astern on their vertical axis
(parallel to the vessel’s centreline) and thrust starboard or portside on the horizontal axis. The joysticks
are non-centring (i.e. do not return to a neutral position when let go) and snap to both axes when moved,
providing tactile feedback when the thrust is set to zero. To enable very precise movements, the
propeller RPM can be controlled additionally, either with a double lever for continuous control or a
button panel to set discrete values. Consequently, the following aspects have to be controlled by input
elements of the TAS:

Table II: Aspects of the controls of a VSP tug


Forward propeller Aft propeller Controlled on tug via
Steering pitch Steering pitch Joystick (horizontal axis)
Driving pitch (propulsion) Driving pitch (propulsion) Joystick (vertical axis)
RPM RPM Double lever or buttons

5.4. Survey of Potential Input Devices

Apart from the conventional maritime control elements, specific devices may be suitable in the context
of AR. Some viable devices for generating input within the AR system are described in the following;
the layout of the Oculus Touch controllers is shown in Fig.5:

Fig.5: Oculus Touch controller layout, https://developer.oculus.com/documentation/unity/latest/concepts/unity-ovrinput/

• Oculus Touch controller
The Oculus Touch controllers (two, for left and right hand) are available as an add-on to the
Oculus Rift VR HMD. It is possible to use either both controllers simultaneously (one in each
hand) or just one controller (one-handed). The controllers are connected wirelessly to the PC
and feature the buttons and joysticks seen in Fig.5. The joysticks (thumb sticks) are self-
centring and return to their neutral position when let go. The triggers provide continuous values,
depending on how much they are pressed down. The other buttons have three states: not touched
(no contact with the user’s finger), touched, and pressed down. The two sensors used for the Oculus
Rift HMD also track the controllers’ position and rotation. Thus, moving the controllers in 6-DOF
can provide input in addition to buttons and joysticks (a minimal input-reading sketch is given
after this list).
• Xbox One gamepad
The Xbox One gamepad developed to be used with the Xbox One video game console can also
be wirelessly connected to a PC. Like the Oculus Touch controllers, it features discrete and
continuous buttons and two self-centring joysticks. Unlike the Oculus Touch controllers, its
movement cannot be tracked. The controller can be held singlehandedly, but both hands have
to be used to reach all buttons. Gamepads by other manufacturers can be used as well, although
the Xbox One gamepad is the most common. Its predecessor, the Xbox 360 gamepad, has been
used before to control unmanned surface vessels, using its left joystick to control direction and
speed of the vessel, Osga and McWilliams (2015).
• Radio control system
Radio control (RC) systems are commonly used to control quad copters, model airplanes and
also model ships. As RCs are not constructed to be used with one specific vehicle and instead
should be interchangeable between uses they offer a wide variety of buttons, joysticks (both
centring and non-centring) and switches. With an adapter to receive and forward the radio
transmissions such systems can be connected to a PC. Just like the gamepad, an RC can be
temporarily held in one hand but needs both to enable use of all buttons.
• USB Joysticks
Joysticks of various manufacturers can be connected to the PC via USB. Usually these are used
to provide input to flight simulators but can also be used for various other applications. Like the
radio control systems, these joysticks offer a wide variety of buttons to be compatible with
various applications. The central joystick itself usually has three degrees of freedom and is self-
centring. Multiple joysticks can be connected to a PC (provided there are enough USB ports).
They usually have a large base so that they remain stationary on a desktop.
• Maritime simulator handles
Professional ship handling simulators generally integrate handle boxes which can contain
various kinds of control elements equivalent to real maritime control elements, like those
specified in the previous section. They are usually connected to the simulator through Modbus
and can also be connected to an ordinary PC. These boxes are often large and heavy and are
mostly used as a stationary part of a simulator setup.
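
To illustrate how the Oculus Touch states described above can be queried, a minimal input-reading sketch for the Unity3D engine is given below. It assumes the Oculus Unity integration (an OVRManager in the scene); the class name and the forwarding of values to the PCS are illustrative, only the OVRInput calls are part of the Oculus API.

using UnityEngine;

// Illustrative reader for the Oculus Touch inputs; requires an OVRManager
// in the scene (Oculus Unity integration). Names are hypothetical.
public class TugInputReader : MonoBehaviour
{
    void Update()
    {
        // Self-centring thumbsticks: 2D axes in [-1, 1] per hand
        Vector2 leftStick  = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, OVRInput.Controller.LTouch);
        Vector2 rightStick = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, OVRInput.Controller.RTouch);

        // Continuous triggers in [0, 1], depending on how far they are pressed
        float indexTrigger = OVRInput.Get(OVRInput.Axis1D.PrimaryIndexTrigger, OVRInput.Controller.RTouch);
        float handTrigger  = OVRInput.Get(OVRInput.Axis1D.PrimaryHandTrigger,  OVRInput.Controller.RTouch);

        // Three-state buttons: touched (capacitive contact) vs. pressed down
        bool aTouched = OVRInput.Get(OVRInput.Touch.One,  OVRInput.Controller.RTouch);
        bool aPressed = OVRInput.Get(OVRInput.Button.One, OVRInput.Controller.RTouch);

        // 6-DOF pose of a controller, tracked by the two Rift sensors
        Vector3    rightPos = OVRInput.GetLocalControllerPosition(OVRInput.Controller.RTouch);
        Quaternion rightRot = OVRInput.GetLocalControllerRotation(OVRInput.Controller.RTouch);

        // ... the values would then be translated into control commands
        // and forwarded to the PCS.
    }
}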

5.5. Graphical User Interface

In addition to accepting input to control the tug, the Situational Awareness System also has to provide
feedback regarding the input and visualize information from other modules, e.g. charts, onboard sensor
information and live video from one or more cameras.

Arranging this array of information in a way that is intuitive and quick to understand is mostly a
challenge for the AR client. Designing graphical user interfaces (GUIs) is a well-understood discipline for
traditional desktop interfaces, but those design rules apply only partially to AR. The 360° immersive
display of an AR setup provides new opportunities as well as new limitations. Compared to a standard monitor the
HMD has a lower resolution and screen space has to be shared between the environment and
information displays. On the other hand, information can be placed around the user instead of just in
front of him and can be visualized in 3D space, as three-dimensional objects instead of just flat 2D
displays. The goal is to design an interface that is as intuitive as possible, as there are no standard AR
interaction paradigms yet that users are accustomed to.

Because of the shared space with the 360° view, the best option to position the information displays is
to create smaller previews of the different displays to enable a clear overview of all available
information. If information exists that is thematically similar, e.g. several different camera views or
different charts, just one preview is necessary. The user can then switch between views as needed. Since
all information is now only presented in a small scale, a zoom feature has to be provided to make a
detailed inspection of information possible. An idea of this concept may be gained from Fig.6.

Fig.6: Exemplary mock-up of the 3D visualization of the tug’s environment

There are several ways to implement such a zoom feature. One option is to let the user zoom a display
of his choice by touching it with the controller. This has the advantage of utilizing the 3D placement of
interface elements and the disadvantage of possibly interfering with any other hand-movement based
interactions. Another option is to have the user select the display to zoom by looking at it and then
pressing a button – either on the controller or a 3D button placed in the virtual space. It is also possible
to place virtual buttons for every display in the scene.
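
A minimal Unity3D sketch of the gaze-plus-button variant is given below; a ray is cast along the user's line of sight and the display it hits is enlarged while a controller button is held. The tag name, zoom factor and button choice are assumptions for illustration, not the FernSAMS implementation.

using UnityEngine;

// Hypothetical gaze-and-button zoom for the preview displays.
public class GazeZoom : MonoBehaviour
{
    public Camera hmdCamera;              // camera driven by the HMD
    public float zoomScale = 3f;          // enlargement factor (assumed)
    private Transform zoomedDisplay;
    private Vector3 originalScale;

    void Update()
    {
        bool zoomHeld = OVRInput.Get(OVRInput.Button.One); // e.g. the 'A' button

        // Cast a ray along the user's line of sight
        Ray gaze = new Ray(hmdCamera.transform.position, hmdCamera.transform.forward);
        if (zoomHeld && zoomedDisplay == null &&
            Physics.Raycast(gaze, out RaycastHit hit, 10f) &&
            hit.transform.CompareTag("InfoDisplay"))       // assumed tag on displays
        {
            zoomedDisplay = hit.transform;
            originalScale = zoomedDisplay.localScale;
            zoomedDisplay.localScale = originalScale * zoomScale;
        }
        else if (!zoomHeld && zoomedDisplay != null)
        {
            zoomedDisplay.localScale = originalScale;      // restore preview size
            zoomedDisplay = null;
        }
    }
}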

Another aspect to be taken into account is the placement of interface elements. Assuming that the user
is usually looking forward, the most frequently used elements and the most critical information should
be placed in front. Some tug operations require the user to look backwards for longer periods of time,
or frequently change the line of sight. Keeping this in mind, it might be beneficial to change the
placement of interface elements. This can be done dynamically every time the user turns his head a
certain amount or on demand, either by letting the user place the interface himself or by letting him
switch between pre-defined positions (forward, backward, etc.).
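
The dynamic variant could be sketched as follows in Unity3D: the interface root re-orients itself once the head yaw deviates beyond a threshold. The threshold and follow speed are assumed values for illustration.

using UnityEngine;

// Hypothetical re-anchoring of the interface root: when the user's head yaw
// deviates by more than a threshold, the panels glide around to face the
// new viewing direction.
public class FollowHeadYaw : MonoBehaviour
{
    public Transform head;                 // HMD transform
    public float thresholdDeg = 60f;       // assumed trigger angle
    public float followSpeedDeg = 90f;     // degrees per second

    private bool following;

    void Update()
    {
        float headYaw = head.eulerAngles.y;
        float delta = Mathf.DeltaAngle(transform.eulerAngles.y, headYaw);

        if (Mathf.Abs(delta) > thresholdDeg) following = true;
        if (following)
        {
            float step = followSpeedDeg * Time.deltaTime;
            float newYaw = Mathf.MoveTowardsAngle(transform.eulerAngles.y, headYaw, step);
            transform.rotation = Quaternion.Euler(0f, newYaw, 0f);
            if (Mathf.Abs(Mathf.DeltaAngle(newYaw, headYaw)) < 0.5f) following = false;
        }
    }
}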

AR technology offers another option for arranging information. Information that is associated with
certain objects in the 360° view, e.g. the towed vessel or harbour structures, can be placed in relation to
these objects. Any information about the line can be positioned directly on the line itself, so it becomes
more intuitive for the user where to look when searching for this information. For instance, the
minimum distance to the towed vessel can be shown on the water surface so the user does not need to
apply any 2D information to 3D space himself.
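
As an illustration of such world-anchored annotations, the minimum-distance example could be sketched as follows in Unity3D; the object and field names are assumptions:

using UnityEngine;

// Hypothetical world-anchored annotation: shows the distance to the towed
// vessel on the water surface between the two ships, billboarded to the user.
public class DistanceLabel : MonoBehaviour
{
    public Transform tug, towedVessel;
    public TextMesh label;                 // 3D text placed in the scene
    public Camera hmdCamera;

    void Update()
    {
        Vector3 a = tug.position, b = towedVessel.position;
        a.y = 0f; b.y = 0f;                          // project onto the water surface
        label.text = Vector3.Distance(a, b).ToString("F1") + " m";
        label.transform.position = (a + b) * 0.5f;   // midpoint on the water
        label.transform.rotation =                   // face the operator
            Quaternion.LookRotation(label.transform.position - hmdCamera.transform.position);
    }
}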

6. Prototype

Considering the concepts described in detail above, the most promising ideas concerning the AR client
are combined in a first prototype. This prototype is connected to a simulator, from which the SAS
application receives data about the movement of the tug and the towed vessel. As input device,
the Oculus Touch controllers are integrated in the first test set-up. Based on the possibilities elaborated
above, the controllers are suitable for the SAS AR application because they provide a sufficient number
of buttons of the right types to enable the necessary basic interactions. Moreover, the positional tracking
adds a functionality that makes the controllers stand out from the other devices. The ease of integrating
the controllers into the Unity3D application with the OculusVR framework is another advantage.

Input for the tug controls is implemented mimicking their real counterparts – i.e. two joysticks, one for
the left and one for the right hand, controlling steering and driving pitch for forward and aft propellers
and a dual lever controlling RPM of both propellers, as shown in Fig.7. By using 3D visualizations of
the actual control elements of the RAVE tug, the controls within the AR client aim to be as easy as
possible to understand and adjust to for trained personnel. The positioning of the virtual control
elements also provides quickly legible feedback concerning the current status of the propellers.

Fig.7: Prototype set-up in Unity3D Engine

The fact that the Oculus Touch controllers’ joysticks are centring and those of a tug are not creates a
discrepancy in the controls. This is solved by having the user control the virtual joysticks indirectly, so
that the virtual joysticks stay in place while the controllers’ joysticks snap to centre. The RPM lever is
controlled by holding a button, the Index Trigger for the left lever handle and the Hand Trigger for the
right handle, as shown in Fig.5, and then moving the right controller forwards and backwards to increase
or decrease RPM. When advancing the system from a prototype to an industrial application, specific
modifications of the controllers may be realized by customization with non-centring joysticks.
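
A minimal sketch of this indirect, rate-based mapping is given below, assuming the Oculus Unity integration; the class name and the rate constant are illustrative only.

using UnityEngine;

// Hypothetical rate-based mapping: the self-centring Touch thumbstick nudges
// the virtual, non-centring VSP joystick instead of setting it directly, so
// the virtual stick stays in place when the thumbstick snaps back to centre.
public class VirtualVspJoystick : MonoBehaviour
{
    public OVRInput.Controller hand = OVRInput.Controller.LTouch;
    public float rate = 0.8f;        // full deflection sweeps the range in 1/rate s
    public Vector2 pitchSetting;     // persistent steering/driving pitch in [-1, 1]

    void Update()
    {
        Vector2 stick = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, hand);
        pitchSetting += stick * rate * Time.deltaTime;   // integrate the deflection
        pitchSetting.x = Mathf.Clamp(pitchSetting.x, -1f, 1f);
        pitchSetting.y = Mathf.Clamp(pitchSetting.y, -1f, 1f);
        // pitchSetting.x -> steering pitch, pitchSetting.y -> driving pitch,
        // forwarded to the tug/simulator and mirrored by the 3D joystick model.
    }
}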

For the first prototype a 3D visualization of the environment based on the simulator data is chosen. The
environment shows the tug, the towed vessel and any traffic ships as 3D models on a water surface.
Two information displays are situated on the bridge of the tug, one showing chart data received from
the desktop client and one showing a live video image from a connected webcam, serving as a stand-in
for future remote camera feeds.

7. Conclusions

To address safety and efficiency needs of modern ports the project FernSAMS aims to develop a
remote-controlled unmanned tug, including a human-machine interface utilizing AR technology to
facilitate situational awareness during remote and autonomous operations. The analysis of the requirements
of mariners and the specification of necessary features of the remote-controlled tug and its assistance
system influence the design of the so-called Tug Assistance System (TAS). The TAS needs to ensure
data transfer and communication with the tug, enable line handling and propulsion control and allow
situational awareness. The design of this Situational Awareness System (SAS), including its
architecture and input devices, is elaborated in this paper. An introduction to situational awareness and
AR in the maritime field provides the basis to analyse conventional control elements and AR related
input devices to identify advantages and disadvantages. These are relevant when discussing options
regarding interaction methods in the context of remote-controlled tugs. Based on the compiled concepts,
including those for the graphical user interface, a prototype of the SAS is developed in the Unity3D engine
using the Oculus Rift as AR device and integrating situational data from a ship handling simulator.

The SAS as well as the other modules of the TAS are under continuous development. One of the next
steps within the development process in the project FernSAMS is related to the involvement of mariners
for further testing of the prototype using the simulation-based test-bed at Fraunhofer CML. This aims
to meet the requirements of mariners and thus to enhance the acceptance of remote-controlled tugs and
contribute to their future success.

Acknowledgements

Parts of the research leading to these results have received funding from the Federal Ministry of Economic
Affairs and Energy. Moreover, the authors gratefully acknowledge the cooperation with the project
partners and would like to thank Srikanth Shetty, Hans-Christoph Burmeister, Julia Hertel and Jannik
Peters for their contributions at Fraunhofer CML.

References

Allianz Global Corporate & Specialty SE (2018), Safety and Shipping Review 2018

AZUMA, R. (1997), A Survey of Augmented Reality, Presence: Teleoperators & Virtual Environments
6/4, pp.355-385

ENDSLEY, M. R. (1988), Design and evaluation for situation awareness enhancement, Human Factors
Society Annual Meeting 32/2, pp.97-101

FONG, T.; THORPE, C. (2001), Vehicle teleoperation interfaces, Autonomous robots 11/1, pp.9-18

FOYLE, D.C.; ANDRE, A.D.; HOOEY, B.L. (2005), Situation awareness in an augmented reality
cockpit: Design, viewpoints and cognitive glue, 11th Int. Conf. Human Computer Interaction Vol.1,
pp.3-9

FURHT, B. (2011), Handbook of augmented reality, Springer Science & Business Media

GRECH, M.R.; HORBERRY, T.; SMITH, A. (2002), Human error in maritime operations: Analyses
of accident reports using the Leximancer tool, Human Factors and Ergonomics Society Annual Meeting
46/19, pp.1718-1721

HENSEN, H. (2003), Tug Use in Port: A Practical Guide, The Nautical Institute, London

KOTUG (2018), KOTUG demonstrates remote controlled tugboat sailing over a long distance,
https://www.kotug.com/newsmedia/kotug-demonstrates-remote-controlled-tugboat-sailing-over-long-
distance

MILGRAM, P.; TAKEMURA, H.; UTSUMI, A.; KISHINO, F. (1995), Augmented reality: A class of
displays on the reality-virtuality continuum, Telemanipulator and Telepresence Technologies 2351,
Int. Society for Optics and Photonics, pp.282-293

OGGEL, J. (2018), Global Shipping Trends and the Carrousel Rave Tug: Connecting the Dots,
White Paper, Novatug

OSGA, G.A.; McWILLIAMS, M.R. (2015), Human-computer interface studies for semi-autonomous
unmanned surface vessels, 6th Int. Conf. Applied Human Factors and Ergonomics and the Affiliated
Conferences, Las Vegas

PORATHE, T.; PRISON, J.; MAN, Y. (2014), Situation awareness in remote control centres for
unmanned ships, Human Factors in Ship Design & Operation, London, p.93

RASKAR, R.; WELCH, G.; FUCHS, H. (1998), Spatially augmented reality, 1st IEEE Workshop
on Augmented Reality (IWAR’98), pp.11-20

Rolls-Royce (2018), Rolls-Royce demonstrates world’s first remotely operated commercial vessel,
https://www.rolls-royce.com/media/our-stories/press-releases/2017/20-06-2017-rr-demonstrates-worlds-first-remotely-operated-commercial-vessel.aspx

TERRIEN, G.; FONG, T.; THORPE, C.; BAUR, C. (2000), Remote driving with a multisensor user
interface, SAE Technical Paper 2000-01-2358

WALTHER, L.; HARTMANN, A.; BURMEISTER, H.-C.; JAHN, C. (2018), Mariners in the Context
of Remote-controlled Tugs, ISIS - MTE 2018, Berlin, Germany, 27 - 28 September 2018

WINGROVE, M. (2018), Svitzer tackles tug remote control challenges, https://www.marinemec.com/news/view,svitzer-tackles-tug-remote-control-challenges_50509.htm

Automatic Selection of an Optimal Power Plant Configuration
Using Client Preferences and Time-based Operational Profiles

Jitte van Dijk1, Peter de Vos2, Rolf Boogaart1


1. Nevesbu, Alblasserdam/The Netherlands, J.S.vandijk@nevesbu.com
2. TU Delft, Delft/The Netherlands, P.deVos@tudelft.nl

Abstract

For economic and environmental reasons commercial vessels increasingly need to be optimized
towards their mission. At the same time it becomes ever more challenging to find the most suitable
power plant during early-stage design as the number of alternative fuels, system components and
power plant configurations increase exponentially. Thus, there is an increasing need amongst system
designers for advanced methods of concept or design space exploration in order to better understand
how design requirements, constraints, technical solutions and performance characteristics relate. In
this paper a methodology is developed which compares a large set of potential power plant
configurations (>25000) based on fuel consumption, emissions, system mass and volume. The
developed methodology enables automatic selection of an optimal power plant configuration using
client preferences and time-based operational profiles. In this paper the developed methodology is
applied to three different ship types.

1. Introduction

Growing environmental awareness, ITF (2018), regulations, IMO (2018), and economic reasons
encourage ship owners to optimize ships towards their mission. This optimization mainly influences
the onboard power plant, since it is a large contributor to the operational costs of a ship and the
main source of emissions on board.

Nowadays, an increasing number of possible components are considered feasible and attractive
during the design and optimization of power plants. This increases the number of possible power
plant concepts, making a manual comparison impractical, if not impossible. This comparison is
complicated further by the fact that every ship owner (or client) has different preferences with respect
to the subject of fuel consumption, system volume/mass and emissions. Throughout this paper these
properties are referred to as the performance indicators of a power plant.

To investigate the influence of the owner’s preferences on the design and selection of the optimal
power plant configuration a design space exploration tool is developed. This tool uses a pre-
determined library of components to create a large set of feasible power plant configurations. The
performance of these configurations is then estimated for the pre-defined mission of the vessel using a
performance simulation.

In this paper the mission of a ship is represented by the operational profile, which is defined using a
time-based description of three power demands: the propulsive, auxiliary and mission specific (or
operational) power demand. For the purpose of designing the considered propulsion systems the
propulsive power is to be complemented with a description of the propeller design.

Following the performance simulation an optimal configuration is then selected using a multi-criteria
analysis. To do so the multi-criteria analysis uses weight factors which represent the client’s
preferences and these factors are therefore part of the required input. The described process is shown
in Fig.1, which shows a process flow diagram of the developed tool.

Fig.1: Schematic Process flow diagram of the concept exploration tool
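
As an illustration of the selection step, a minimal sketch of a weighted multi-criteria selection is given below in C#. The Concept type, the normalization by the per-criterion best value and the "lowest weighted sum wins" rule are assumptions for illustration; the paper does not state the exact scoring scheme used by the tool.

using System;
using System.Linq;

// Illustrative multi-criteria selection: performance indicators are
// normalized per criterion and combined with the client's weight factors.
class Concept
{
    public string Name;
    public double[] Indicators;  // e.g. fuel [t], emissions, mass [t], volume [m3]
}

static class MultiCriteriaSelection
{
    public static Concept SelectBest(Concept[] concepts, double[] weights)
    {
        int n = weights.Length;

        // Best (lowest) value per criterion, guarded against division by zero
        double[] best = Enumerable.Range(0, n)
            .Select(i => Math.Max(concepts.Min(c => c.Indicators[i]), 1e-9))
            .ToArray();

        // All indicators are "less is better": the lowest weighted sum wins
        return concepts
            .OrderBy(c => Enumerable.Range(0, n)
                .Sum(i => weights[i] * c.Indicators[i] / best[i]))
            .First();
    }
}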

2. Setting up the Library

The Library of Feasible Concepts (LFC) defines the different power plant configurations that are to be
designed and tested. Prior to the creation of such a library a modelling methodology has to be
established.

In this research a power plant configuration is considered to be a network of interconnected system
components. The connections between these components are not changed and therefore the topology
of the system is fixed.

Different power plant concepts are then created by varying the presence, the type and the capacity of
the components. Before the different components can be introduced it is first necessary to determine
the level of detail of the information within the LFC.

For the performance simulation, rather detailed information (for early stage design) about each
system component (e.g. number of cylinders per engine, cylinder geometry etc.) is required. It is
possible to include this information in the library. However, this means the library size will grow
exponentially due to the sheer number of parameters that have to be defined (and varied), as is known
from combinatorial mathematics, e.g. de Vos (2018a).

To limit the size of the LFC a set of Intermediate Design Algorithms (IDeA’s) is developed. This is
achieved by comparing similar components (e.g. a 4-stroke diesel engine with 5 and 7 cylinders) and
then selecting the most suitable variation (according to the client preferences) for that specific
component. Both approaches are shown in Fig.2: method ‘A’ depicts the approach where all
information is specified in the library and method ‘B’ shows the IDeA approach.

The IDeA approach only requires a broad description of each system (e.g. 4-stroke diesel engine) to
be included in the LFC. The IDeA’s thus work as a filter to prevent too much calculation time being
spent on similar “inner-loop” detailed performance simulations. With the required level of detail
specified, it is now possible to define the system components for which the presence, type and
capacity is to be varied, i.e. the different options for each component.

Fig.2: Different concept definition methods

The complete topology (i.e. with all considered connections) and the different components (or nodes)
that are considered during this research are shown in Fig.3. This figure also contains a numerical
reference for each node, which corresponds with the numbering found in Table I. This table
summarizes the different options that are considered for each node. In this table similar nodes have
been combined for the sake of readability.

With the modelling methodology and the different system options introduced it is possible to
construct the library of feasible concepts which is the topic of the following chapter.

Fig.3: Complete power plant topology and system boundaries (including node numbering)

Table I: Definition of nodes and system options

Node                                          Options (System / Type)
1        Shaft Configuration                  Single Shaft
                                              Power Take In (PTI) (i.e. two power inputs to a single propeller shaft)
2 & 3    Propulsive Power Generation System   2- & 4-stroke Diesel Engines (DE)
                                              4-stroke Dual Fuel engines (DF)
                                              Permanent Magnet Synchronous Machines (PMSM)
                                              (all with and without gearbox)
                                              ‘No engine’ (only for node 3, in case of a single shaft configuration for node 1)
4        Electrical Power Storage             Li-Ion Batteries
                                              Lead-Acid Batteries
                                              None
5 to 11  Fuel Tank(s)                         5/7/9: main / pilot fuel tanks: MDO / HFO
                                              6/8/10: secondary fuel tanks: LNG / CNG / NH3
                                              11: fuel for fuel cell: pressurized hydrogen
                                              all tanks: empty tank
12       Electrical Power Generation          DE and DF driven Generator (Genset)
                                              Proton Exchange Membrane Fuel Cell (PEMFC)
                                              DE Genset & PEMFC
                                              DF Genset & PEMFC
                                              No E-power generation
13 & 14  Exhaust Gas Treatment Systems (EGTS) Open Loop Wet Scrubber
                                              Selective Catalytic Reduction (SCR)
                                              No EGTS (both nodes)

3. Building the Library

Given the chosen modeling methodology the number of power plant concepts that can be created is
equal to the product of the number of options per node (as shown by Eq.3.1). With the
aforementioned options this results in ~35.8 million possible power plant configurations.
\#concepts = \prod_{node=1}^{14} \#options_{node} \qquad (3.1)

\#concepts = 2 \cdot 8 \cdot 9 \cdot 3 \cdot 3 \cdot 4 \cdot 3 \cdot 4 \cdot 3 \cdot 4 \cdot 2 \cdot 6 \cdot 2 \cdot 2 \approx 35.8 \cdot 10^{6}

Although the number of considered concepts is low compared to other design space exploration
libraries (for example the one developed by de Vos (2018b)), it is still rather large and
bound to contain power plant concepts which are not able to perform the tasks required of a power
plant (delivering power) or ones which are impractical.

During this research, power plants which cannot perform the required tasks, or which are not allowed
to do so because of regulations, are considered infeasible. Configurations which are impractical
(according to “common sense”) are not removed; instead, it is expected that they will not be selected
as the suggested optimal solution due to their impracticality. A set of predefined constraints is used to
eliminate the infeasible configurations from the library and these constraints are discussed in the
remainder of this chapter.

The first set of constraints ensures that the considered configurations are allowed. In this research the
only reason for a power plant to not be allowed is related to the maximum NOx and SOx emissions.
These emission limits are defined for several different Emission Control Areas (ECA’s) and the type
of ECA in which the ship has to operate is therefore added to the input parameters of the tool. The
following ECA types are considered: a SOx emission control area (SECA), a NOx emission control
area (NECA), a complete emission control area (ECA: both SOx and NOx) or only the global emission
limits.

The developed constraints enforce the application of an exhaust gas treatment system when it is
required to meet the IMO emissions regulations for SOx and NOx emissions, which are
summarized/presented in DNV-GL (2016) and Klein Woud (2008).

These constraints effectively remove configurations which are not allowed to operate in the specified
ECA and configurations which contain exhaust gas treatment systems while they are not needed (to
comply with the aforementioned limits) from the library.

The second set of constraints is implemented to ensure that the considered power plant configurations
are capable of delivering both propulsive and electric power, without unnecessary components.

These constraints are formulated as follows: “A power plant configuration is deemed to be infeasible
when …”
• there are fuel tanks and/or systems which are not needed,
• the required fuel is not present in the corresponding tank,
• the configuration cannot perform the tasks required of a power plant (delivering power), e.g.
there is no system which supplies electrical power.

The implemented constraints remove a total of 99.93 [%] of all possible power plant concepts,
leaving 26,818 configurations which are to be considered by the exploration tool.
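
Schematically, such constraint filtering can be pictured as a set of predicates over a configuration, as in the C# sketch below. The Config fields and the rule bodies are placeholders for illustration, not the tool's actual data model.

using System;
using System.Collections.Generic;
using System.Linq;

// Placeholder configuration: one chosen option per node (cf. Table I);
// the fields below are illustrative only.
class Config
{
    public bool HasFuelCell, HasHydrogenTank, HasElectricalPowerSource, NeedsScr, HasScr;
}

static class LibraryFilter
{
    // Each rule names one reason a configuration is infeasible
    static readonly Func<Config, bool>[] InfeasibleRules =
    {
        c => c.HasFuelCell && !c.HasHydrogenTank,   // required fuel is missing
        c => c.HasHydrogenTank && !c.HasFuelCell,   // tank without a consumer
        c => !c.HasElectricalPowerSource,           // cannot deliver E-power
        c => c.NeedsScr != c.HasScr                 // ECA compliance, no unneeded EGTS
    };

    // A configuration survives only if no rule fires
    public static List<Config> Feasible(IEnumerable<Config> library)
        => library.Where(c => !InfeasibleRules.Any(rule => rule(c))).ToList();
}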

4. Intermediate Design Algorithm

In this chapter the working principle behind the IDeA’s is discussed in more detail. Several
algorithms have been developed, one for each larger component with one or more additional degrees
of freedom, e.g. the main propulsive engine. However, only a single one will be presented in more
detail, since the working principle behind the other IDeA’s is similar.

This specific IDeA creates a set of engine designs by varying the number of cylinders per engine
along a range of pre-determined values (which is related to the engine type defined in the LFC). This
number of cylinders is then combined with a set of (average) manufacturer parameters to obtain a
power per engine. This power is then used to determine the number of engines necessary to meet the
power demand specified by the operational profile, resulting in the dimensions of the complete
propulsion system. This is then complemented with an initial performance estimate (for this case a
methodology developed by Jalkanen et al. (2012) is used).

From the set of engine designs one is selected using the weight factors that describe the client
preferences. The working principle is schematically shown in Fig.4.
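
A minimal C# sketch of this cylinder-variation principle is given below; the cylinder range, the kW-per-cylinder figure and the score function are placeholders, not actual manufacturer data or the tool's implementation.

using System;
using System.Linq;

// Illustrative engine-sizing IDeA: vary the cylinder count over a permitted
// range, derive the power per engine, size the plant against the peak power
// demand, then pick the variant that scores best.
class EngineDesign
{
    public int Cylinders, EngineCount;
    public double PowerPerEngineKw;
}

static class EngineIdea
{
    // 'score' embodies the performance estimate weighted by client preferences
    public static EngineDesign Select(double peakDemandKw, Func<EngineDesign, double> score)
    {
        const double kwPerCylinder = 450.0;           // placeholder value

        var designs = Enumerable.Range(5, 5)          // assumed range: 5..9 cylinders
            .Select(cyl =>
            {
                var d = new EngineDesign
                {
                    Cylinders = cyl,
                    PowerPerEngineKw = cyl * kwPerCylinder
                };
                d.EngineCount = (int)Math.Ceiling(peakDemandKw / d.PowerPerEngineKw);
                return d;
            });

        return designs.OrderBy(score).First();        // lowest score = most suitable
    }
}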

5. Definition of the case studies

To demonstrate and verify the working principles of the tool, three case studies are defined. These
cases consist of generalized ship types for which a typical mission (in the time domain) could be defined
with reasonable accuracy and for which information about typically used power plant configurations
is available, the latter being useful when verifying the results produced by the tool.

The considered cases are: a general cargo vessel, a Trailing Suction Hopper Dredger (TSHD) and a
harbor tug. The operational profile for each ship is estimated using reference vessels and a description
of each profile is found in Figs.5, 6 and 7.

Fig.4: Propulsive Combustion Engine Design algorithm

Fig.5: Operational profile general cargo vessel

Fig.6: Operational profile trailing suction hopper dredger

Fig.7: Operational profile harbor tug

In addition to the operational profile the tool requires the following other input parameters:

• the indication of the client preferences
• the Emission Control Area (as discussed in chapter 3)
• the representation of the propeller design

Client Preferences

The client preferences are defined using four weight factors (one per performance indicator). Each of
these weight factors is allowed to have a value between 0 and 10. The values for each case study have

been estimated by considering what the owner of such a vessel could prefer and are found in Table II.

Emission Control Area

The emission control areas are selected by considering typical operational areas for each vessel and
then determining which emissions are controlled in that area. The result of these considerations is
included in Table II.

Both the selected weight factors and Emission Control Areas solely reflect the authors’ opinion on
these cases, and they are all a matter of discussion.

Propeller Design

The final input parameter is the indication of the propeller design. During this research the propeller
design is represented using a relationship between the propulsive power and the propeller speed. This
relationship is defined using the propeller law constant C4 (see also Eq.5.1), Klein Woud (2008), and this
value is added to the required input.

P_{prop}\,[\mathrm{W}] = C_4 \cdot n_{prop}^{3}\,[\mathrm{Hz}] \qquad (5.1)

The value for this constant can be determined by reverse engineering from reference vessels or by
using details from the propeller design (as shown by Eq.5.2, Klein Woud (2008)). The values for
each case study were determined using reference vessels and are included in Table II.
C_4 = \frac{2\pi\,\rho\,D_{prop}^{5}}{\eta_r} \cdot K_Q \qquad (5.2)
ρ : Density (sea) water [kg/m3]
Dprop : Propeller Diameter [m]
ηr : Relative Rotative Efficiency [-]
KQ : Torque Coefficient (from open water diagram) [-]
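
For the reverse-engineering route, Eq.5.1 can be inverted directly: from a known operating point (P_prop, n_prop) of a reference vessel, C_4 = P_{prop} / n_{prop}^{3}. As a purely illustrative example with assumed numbers, a vessel absorbing 8 MW at a propeller speed of 2 Hz yields C_4 = 8 \cdot 10^{6} / 2^{3} = 10^{6}\,\mathrm{W/Hz^{3}}.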

Table II: Additional input parameters

Parameter                            Cargo Carrier   Harbor Tug      TSHD
Weight    Fuel Cons.                 10              8               6
Factors   Emissions                  0               6               8
          System Mass                5               5               3
          System Volume              7               5               5
Misc.     C4 [kW/RPM^3]              0.4 * 10^-4     6.0 * 10^-4     2.8 * 10^-4
          Emission Control Area      SOx + NOx ECA   Global limits   SOx + NOx ECA

Remarks

Both the propeller law constant and the emission control area are defined as time-independent constants.
Doing so simplifies the algorithms behind the tool and reduces the computational time required.
However, it also excludes several conventional systems and/or (ship) management strategies, such as
controllable pitch propellers and/or the switching of fuel types during a voyage. The fact that these
options are often used in practice gives rise to an important recommendation for future research
(given at the end of this paper).

6. Results of the case studies

In this chapter the results of the three different case studies are presented and discussed. The
calculated values for each performance indicator and the components of the selected power plant
configuration are found in Table III and Table IV respectively. Note that the fuel consumption found
in Table III is the total amount of consumed fuel (i.e. fossil and non-fossil (NH3 and H2) fuel
together). Additionally, the list of selected components found in Table IV is complemented with an
indication of the installed power (where applicable).

Table III: Numerical results, all ship types

              Cargo Carrier   Harbor Tug   TSHD   Unit
Fuel Cons.    447             0.55         59     ton
NOx           7.24            0.35         0.17   gram/kWh
SOx           2               0.02         0.01   gram/kg fuel
CO2           1370            0.68         44     ton
Mass          614             48           1053   ton
Volume        1153            56           1380   m3

Table IV: Optimal power plant configuration per ship type

Case name                             Cargo Carrier           Harbor Tug                 TSHD
Shaft configuration                   Single Shaft            PTI                        PTI
1st propulsive engine                 2-stroke DE, Direct     PMSM, Direct               PMSM, Direct
                                      (11.9 MW)               (1.2 MW)                   (8.8 MW)
Main fuel, 1st propulsive engine      MDO                     -                          -
Secondary fuel, 1st propulsive engine -                       -                          -
2nd propulsive engine                 -                       Dual Fuel Engine, Geared   Dual Fuel Engine, Geared
                                                              (1.2 MW)                   (2.2 MW)
Main fuel, 2nd propulsive engine      -                       MDO                        MDO
Secondary fuel, 2nd propulsive engine -                       LNG                        LNG
E-power generation system             PEM Fuel Cell           PEM Fuel Cell              PEM Fuel Cell
                                      (618 kW)                (1.2 MW)                   (23 MW)
Main fuel, E-power generation         -                       -                          -
Secondary fuel, E-power generation    -                       -                          -
Hydrogen storage                      Pressurized Hydrogen    Pressurized Hydrogen       Pressurized Hydrogen
E-power storage                       -                       -                          -
NOx reduction system                  SCR                     -                          -
SOx reduction system                  Scrubber                -                          -

Most of the components inside the selected power plants are expected given the provided input,
modelling methodology, the implemented power management strategy and some general marine
engineering principles. Nonetheless, there are some components which were not expected.

One example is that HFO, a rather conventional fuel, is not selected for any of the three cases.
This stems from the more severe emission limits that were used, which caused after-treatment
systems to be required in all emission control areas. Another cause could be that the largest
advantage of HFO, its relatively low price, is not considered in this research.

Another remarkable trend is the selection of fuel cells in each of the three cases. This is likely caused
by the fact that fuel consumption was judged based on the consumed fuel mass, and since hydrogen is
a fuel with a relatively high energy density (kJ/kg) (excluding its storage facilities) it has a strong
advantage over other fuels in this aspect. Additionally, typical disadvantages of fuel cells, like
purchasing costs and poor lifetime, were not included.

7. Weight Factor Variation Study

In addition to the three cases discussed earlier, the influence of the weight factors is investigated by
varying them, thus representing varying client preferences. This study is performed using the ‘cargo
vessel’ case (defined in Chapter 5) as a benchmark. From this benchmark case a single weight factor
is varied per case and for each weight factor 3 additional values are determined.

This created 12 (3 values x 4 weight factors) additional cases for which the complete exploration
process was executed. The variation study therefore considers a total of 13 cases (1 original case + 12
additional ones).

The results for the performance indicators are normalized with respect to the original benchmark case
to obtain a deviation expressed in percentages for each performance indicator.
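
A plausible reading of this normalization (the exact formula is not given in the paper) is

\Delta x_{i,k} = \frac{x_{i,k}^{case} - x_{i,k}^{bench}}{x_{i,k}^{bench}} \cdot 100\,\%

where x_{i,k} denotes performance indicator i of configuration k, evaluated in the variation case and in the benchmark case respectively.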

The complete set of results is not included in this paper, due to the large amount of data (12 normalized
cases, each containing 26,818 configurations which all have several performance indicators). Instead,
some highlights are discussed and an example of the results is included in Figs.8 and 9.
These figures show the same dataset, with Fig.9 showing a zoomed-in version of the results shown in
Fig.8 by omitting the largest spikes.

In both figures each color marks a different case in the variation study and each dot represents a
single configuration. Fig.9 also includes a vertical line marking the division between single shaft and
power take in propulsion systems.

In the presented figures the change in consumed fuel mass is shown for three variations of the weight
factor related to fuel consumption. These cases are created by changing the original weight factor for
fuel consumption (10) to 1, 4 and 7, while keeping the other weight factors the same. The legends of
both figures include the set of weight factors (per case) in the same order as found in Table II
(Fuel consumption, Emission, System Mass, System Volume).

The results presented in Fig.9 show limited deviations between 0 and 10 [%], although there are some
extreme deviations (up to 120 [%]), as can be seen in Fig.8. Both of these deviations are discussed
separately in the following paragraphs.

From the presented deviations (both Fig.8 and 9) it can be seen that, overall, when compared to the
benchmark case, the consumed fuel mass increases when the weight factor is decreased. This is
expected, since decreasing the weight factor implies that low fuel consumption is deemed less
important.

Note that the case with the lowest weight factor (1) (blue-colored) is not visible. This is due to the fact
that the cases with a weight factor of 1 and 4 have very similar results, effectively hiding the former
set from sight.

The observed extreme deviations (visible in Fig.8) are found at configurations where electrical
propulsion is applied in combination with a varying power division between the PEMFC and the

engine driven genset(s). This can be verified by examining the components of a single configuration,
which is included in Table V. In this table both a large electric motor used for propulsion and a
changing power division between electricity generation systems can be observed.

Fig.8: Example results of the weight factor variation study

Fig.9: Weight factor variation study example results (zoomed in on y-axis)

The list of components shows two possible causes of the extreme increase in fuel consumption. The
first is that, when comparing the two cases, more HFO will be consumed in the alternative
case. This causes the consumed fuel mass to increase, since HFO has a significantly lower
energy density (kJ/kg) than hydrogen (again excluding storage).

The other cause for the increase in fuel consumption is that an SCR system is required because
the combustion engines deliver a larger portion of the required power, which causes the NOx emission
limits to be exceeded. The addition of an SCR not only increases the total system mass (which is
not shown/discussed further in this paper), but also requires reagent. This reagent is added to the
consumed fuel, resulting in another increase in fuel consumption.

Table V: Detailed design data of a single configuration that shows extreme deviations

                                      Original Case               Alternative Case
Shaft configuration                   Power Take In               Power Take In
1st propulsive engine                 Dual Fuel Engine (844 kW)   Dual Fuel Engine (844 kW)
Main fuel, 1st propulsive engine      MDO                         MDO
Secondary fuel, 1st propulsive engine LNG                         LNG
2nd propulsive engine                 PMSM, Geared (11.4 MW)      PMSM, Geared (11.4 MW)
Main fuel, 2nd propulsive engine      -                           -
Secondary fuel, 2nd propulsive engine -                           -
E-power generation system             PEMFC & DE Genset           PEMFC & DE Genset
                                      (9795 kW & 1347 kW)         (1108 kW & 12354 kW)
Main fuel, E-power generation         HFO                         HFO
Secondary fuel, E-power generation    -                           -
Hydrogen storage                      Pressurized Hydrogen        Pressurized Hydrogen
E-power storage                       Li-Ion Batteries            Li-Ion Batteries
NOx reduction system                  -                           SCR
SOx reduction system                  Wet Scrubber                Wet Scrubber

To gain more insight into the results of the weight factor variation study, or more specifically; the
reason for the large deviations in Fig.8, it is necessary to have an understanding of the structure of the
Library of Feasible Concepts. For a certain propulsive engine system (including fuel type) every
possible electrical power supply system is considered before the propulsive engine type is changed.
With that new engine type the different electrical power supply systems are again considered and the
cycle repeats.

With this knowledge in mind, the patterns observed in Figs.8 and 9 can be used to deduce that the
influence of the varying client preferences changes depending on the type of the propulsive system(s).
This deduction implies that the operational profile has an influence on how the tool responds to the
varying client preferences.

That the operational profile influences the design of a power plant is not new for a marine engineer
(Klein Woud (2008)). However, the fact that the resulting performance indicators respond differently
to changes in the client preferences as a function of the operational profile is unexpected, i.e. if the
operational profile changes (an apparently independent input parameter of the tool) the deviation of
e.g. fuel consumption as a response to varying client preferences would change as well. This is
certainly worth investigating in the future.

In addition to the numerical values of the performance indicators, the change in selected power plant
configuration is also monitored. Two specific power plant configurations are selected multiple times,
for different preferences, and these are presented in Table VI.

The fact that the same power plant is selected multiple times indicates that the selection of the optimal
power plant is not as sensitive to a variation of the weight factors as the performance indicators are.

Table VI: Most dominant concepts, weight factor variation

System                                Selected for 5 of the 13 cases   Selected for 5 of the 13 cases
Shaft configuration                   Single Shaft                     Single Shaft
1st propulsive engine                 Dual Fuel Engine, Geared         Dual Fuel Engine, Geared
Main fuel, 1st propulsive engine      MDO                              MDO
Secondary fuel, 1st propulsive engine LNG                              LNG
2nd propulsive engine                 -                                -
Main fuel, 2nd propulsive engine      -                                -
Secondary fuel, 2nd propulsive engine -                                -
E-power generation system             Dual Fuel Genset                 PEM Fuel Cell
Main fuel, E-power generation         MDO                              -
Secondary fuel, E-power generation    LNG                              -
Hydrogen storage                      -                                Pressurized Hydrogen
E-power storage                       -                                -
NOx reduction system                  -                                -
SOx reduction system                  -                                -

8. Conclusions & Recommendations

8.1. Conclusions

The results of the cargo carrier case indicate that the newly developed tool selects an expected power
plant configuration (given strict emission regulations) and that the estimated performance indicators
are in the correct order of magnitude. This provides confidence in the working principles behind the
tool.

The other two case studies also result in configurations for which the selection can be explained given
the operational profile and basic marine engineering considerations. However, there are several
different power plant configurations applied in practice (this is especially true for trailing suction
hopper dredgers, Stapersma (2017)). This makes these cases less suited for the verification of the tool,
but they remain useful for the purpose of verifying its working principles.

The case studies and the weight factor variation study demonstrate that the tool has a tendency
towards the selection of fuel cells and hydrogen storage, which was unexpected but provides a
valuable learning experience. This tendency is likely caused by the fact that fuel consumption was
judged on a mass basis, which hugely favors the application of hydrogen. However, an important
disadvantage of fuel cells is that they are expensive. This is not included in the tradeoff, since costs
were not considered in this research. Additionally, the weight of the hydrogen storage system was not
included.

The weight factor variation study showed that the developed tool is indeed sensitive to the
preferences of a client. The performance indicators showed an average deviation of around 5%,
although some extremes were observed as well. These extremes were caused by a combination of an
electric propulsion system and a generation system which contains both an engine driven genset and a
fuel cell.

From the combination of the different studies it was also deduced that the value of the weight factors
is not the only parameter influencing the response of the tool. Instead, the operational profile also
influences how the tool responds to a variation of a weight factor.

The selection of the most optimal concept shows less variation under influence of a changing weight
factor.

Based on these discussions it can be concluded that this research has provided new insight into the
relationship between design requirements (such as the client preferences), system performance and
technical solutions from a large set of possible solutions.

Nonetheless, there are some topics that could be investigated in order to improve the tool; these are
discussed in the following section.

8.2. Recommendations

The first recommendation for future research is the inclusion of more components. Especially the
inclusion of controllable pitch propellers (possibly in combination with a power take off) and other
methods to store hydrogen or usage of a reformer could produce interesting results.

Another recommendation is related to the implemented (power) management strategies. As discussed
in the remarks of chapter 5, the strategy did not yet include the switching of fuel types during a voyage.
This strategy is often used in practice for ships which operate in different areas around the world. To
include such a strategy the Emission Control Area would have to become a time-based parameter.

In addition to including more components/strategies, it is recommended to include more performance
indicators, and thus design choices. Especially the tradeoff between capital costs and operational costs
is often an important factor in the selection of a certain power plant.

Another recommendation concerning the performance indicators is to replace or complement the
indicator ‘fuel consumption’ with total system efficiency. Doing so reduces the effect of the specific
mass of a fuel, allowing a fairer comparison between the different systems. It might also reduce the
tendency towards fuel cells and hydrogen.

Furthermore it is recommended to further investigate the influence of the operational profile on the
reaction of the tool to changing weight factors (for example by performing the same study for
another ship type).

The final recommendation for future research is investigation of the impact of the pre-defined
technological parameters, since these are bound to have an influence on the results. An example of
such a parameter is the sulphur content of the applied fuels. This might change in the near future due
to the upcoming global sulphur cap DNV-GL (2016).

References

DE VOS, P.; STAPERSMA, D.; DUCHATEAU, E.; VAN OERS, B. (2018a), Design Space
Exploration for onboard Energy Distribution Systems, COMPIT, Pavone

DE VOS, P.; STAPERSMA, D. (2018b), Automatic Topology Generation for early design of on-board
energy distribution systems, Ocean Engineering 170, pp.55-73

DNV-GL (2016), Maritime Global Sulphur Cap 2020, DNV GL, Hamburg

IMO (2018), UN body adopts climate change strategy for shipping, Int. Maritime Organization

ITF (2018), Decarbonizing Maritime Transport – Pathway to zero-carbon shipping by 2025, Int.
Transport Forum, Paris

JALKANEN, J.; JOHANSSON, J.; KUKKONEN, J.; BRINK, A.; KALLI, J.; STIPA, T. (2012),
Extensions of an assessment model of ship traffic exhaust emissions for particulate matter and carbon
monoxide, Copernicus Publications, Helsinki

KLEIN WOUD, H.; STAPERSMA, D. (2008), Design of Propulsion and Electric Power Generation
Systems, IMarEST, London

STAPERSMA, D. (2017), Main Propulsion Arrangement and Power Generation Concepts,
Encyclopedia of Maritime and Offshore Engineering, John Wiley & Sons Ltd.

The Implementation of Virtual Reality Software for Multidisciplinary Ship
Design Revision
Christopher-John Cassar, UCL, London/UK, christopher-john.cassar.16@ucl.ac.uk
Nick Bradbeer, UCL, London/UK, n.bradbeer@ucl.ac.uk
Giles Thomas, UCL, London/UK, giles.thomas@ucl.ac.uk

Abstract

This paper proposes integrating Virtual Reality (VR) into the ship design process to support Human
Factors Engineering (HFE). Virtual reality tools have been proposed for various areas of application
in ship design, but the literature includes few detailed investigations that provide evidence of efficient
implementation. The first version of a VR-HFE design process and revision tool, based on Unity and
C# scripting, is presented. The current functionality includes the ability to import design files into the
VR environment at full scale in a straightforward and fast way. The tool allows rapid transition from
the model space to VR visualisations with minimal additional workload. The applicability of the tool is
demonstrated for a ship design revision application of HFE focused compartments.

1. Introduction

Virtual Reality (VR) for ship design has garnered interest due to benefits such as increased immersion
and spatial awareness for graphics visualisation. The development of VR is proposed as the next step
in advanced visualisation methods allowing the design engineer to focus on a greater amount of detail
within the design (Šikić and Bistričić, 2015).

The use of VR within the ship design process is suggested to have benefits that can increase error
mitigation for design models from both a naval and a commercial perspective (Lukas, 2010;
Rosenblum et al., 1996). It may also lower the dependence on physical models for detailed aspects
of the design due to the designer obtaining access to a full-scale version of the design within the virtual
environment. As a result the designer is able to look at an up-to-date concept version of the model rather
than wait periodically for physical models for further design analysis. These virtual models may be
more cost effective due to avoiding the validation of concept design physical model stages and other
associated model costs (Torruella, 2014).

This paper presents a VR design revision tool for multi-disciplinary design collaboration. The tool was
developed using the Unity game engine with C# scripting. A series of ship design process models,
developed to find areas of implementation, is also demonstrated in this paper. These design process
models were based on the Design Research Methodology (DRM) used for engineering design research.

2. Maritime Design Applications

Direct applications of VR to ship design open opportunities for further concept exploration.
Qualities such as access to multi-user based environments and first person perspective design revision
offer the engineers a greater amount of communication opportunities and user-focused design elements
to improve the design.

2.1. Off-shore Structure Design

Offshore structural design often requires knowledge in structural arrangement and technical analysis.
The increased level of focus and spatial awareness that is offered by VR design based tools can help
less experienced designers understand the concept and requirements. The first-person environment of
VR and the ability to create multi-user environments make it especially useful for design revision and
contract design stages (Kaye et al., 2017; Streuber and Chatziastros, 2007). A past major issue for
utilising VR tools was the difficulty of interfacing VR with CAD, but

recent developments in CAD have allowed for file formats and application plugins to be employed
(Larkins et al., 2013). Developments within the software building environments have opened avenues
for developers to create user-friendly applications for offshore structural design (Cook et al., 1998).

Design review and visualisation within VR make effective management planning possible
through scenario simulations. These applications can be designed to simulate a design of interest during
a specific event (Chapman et al., 2001). The information obtained from this event can later be used for
lifecycle management and also design refinement if done at an early stage.

2.2. Vessel Design

The use of VR within the ship design process is suggested to have benefits that can mitigate errors from
both a naval and a commercial perspective (Lukas, 2010; Rosenblum et al., 1996). During the early
stages of design, rendering into VR environments has had noted issues, such as lack of design clarity
and detail, but as the technology has improved the benefits have begun to outweigh the drawbacks
(Morais et al., 2017). This has seen the U.S. Navy implement advanced visualisation systems within
their facilities (Koolonavich, 2018; ProQuest, 2004), and also the development of new tools for existing
CAD software (Šikić, 2017). Using these virtual environments lowers the dependence of physical
models for detailed aspects of the design, because the designer will have access to the full scale version
of the design within the virtual environment. The designer is able to look at an up-to-date concept
version of the model without waiting for a physical mockup to be constructed. These virtual models are
cost effective because they reduce the expense of producing physical models and remove the space
required to store such models (Torruella, 2014).

VR within the design process offers a new avenue for design editing and innovation. Understanding the
design from a full scale and immersive perspective gives the design team involved a more accurate
depiction of the model dimensions and ergonomics of the arrangement (Alonso et al., 2012). VR allows
the designer to find faults in the design in a much easier fashion than traditional CAD (Jamei et al.,
2017). Virtual team environments are an important benefit of VR as this gives the designers involved a
greater amount of shared visual information to use for illustrative purposes during design revision. This
can also make it simple to illustrate design alternatives through annotations as it is important to review
information consistently (Cebollero and Sánchez, 2017).

Added functionalities within the VR environment, such as first-person object manipulation or design
modification, are proposed to make design revision communication easier to implement between
design departments (Martin and Connell, 2015); this allows the design modelling aspect of VR to
become a less cumbersome process. A VR application for general arrangement designs was proposed
(Ahola et al., 2014; Reina Magica, 2014) which uses VR for deck plan modification and other levels
of annotation. The stated benefit of this is that it helps with the early stages of vessel general
arrangement, as this stage relies on the development and analysis of new concept ideas. Applying VR
within ship design can allow for thorough navigation of design concepts, which aids in design error
mitigation by increasing the focus of the designer within the vessel (Perez et al., 2015).

There have been proposals to use VR technology for 3D model testing during design revision. Soares (2011) suggests that the use of virtual environments for presenting model simulations increases the realism and clarity of the results. Fernández and Alonso (2014) suggest that VR technology within ship design revision allows for more comprehensive predictions about the characteristics of a vessel design, owing to the increased amount of detail shown in the visualizer. Giving the design team more visual information to work with will enhance the accuracy of predictions of the impact that design changes can have on the vessel's lifecycle performance.

VR in ship design revision and collaboration is an important addition to the overall ship design process. Creating a VR environment that encourages technical communication establishes a shared representation of the design (Menck et al., 2012), which can be supplemented by the implementation of work scenarios. This can be used to take into account the needs of the crew members involved in the vessel design (Nordby et al., 2016; Thérisien and Maïs, 2008). The end user is also able to learn from this collaborative environment, which can increase the transfer of relevant knowledge between both parties (Pynn, 2017).

3. Ship Design Process Applications

There are logical areas of application for VR within the maritime industry. Placing the technology into an array of sub-fields demonstrates the possibilities for implementation and leads to the work proposed in this paper.

3.1. Design Research Methodology Application

Design Research Methodology (DRM) is a research framework that is used for engineering based
design projects. The reason behind using the DRM as a basis for this project is its clear method of
application for design research projects, and its implementation of design research guidelines. The
details in the stage deliverables, as presented by the DRM, offer more applicable information for this
research. This is presented in the breakdown involved in the different stages of the DRM framework as
shown in Fig.1.

The aim of DRM is to support engineering design research by creating an avenue for better understanding of both the process and the product. This aim leads to a series of questions that form the foundation of this research, presented by Blessing and Chakrabarti (2003) as follows:

• When is a product considered successful?
• What is the process for the creation of a successful product?
• What can be improved to increase the probability of success?

These questions open up the opportunity to look at the factors involved in an engineering design. Considering the measures of success, an efficient process for achieving this success, and the elements involved in the design process makes it easier for the design engineer to categorize the requirements of the product and increase its chances of solving the problem it is derived from. This analysis is then expanded upon to develop the basis of the direction for design research.

Fig.1: DRM framework process model (Blessing and Chakrabarti, 2009)

This guides the project towards analyzing the process involved in ship design. The aim at this stage is to understand the stages involved in the ship design process and identify where VR could be applied. These factors depend on breaking down the benefits of VR and establishing the criteria for the different stages in the ship design process.

3.2. VR-HFE Concept / Preliminary Design Process

In a recent study conducted by the author, a series of ship design processes was developed, compiled from literature and from information gathered through interviews with Naval Architects and Human Factors Engineers. The first task was to develop a generic ship design process. After the interviews, the design process was discussed with external Human Factors Engineers; this led to a first version of the Human Factors Engineering (HFE) ship design process. From this point, specific tasks in the design process were recommended, based on their ergonomic requirements, for VR-HFE implementation. This led to the first iteration of the VR-HFE ship design process being drafted.

There are HFE considerations that are taken into account during a design review. Some of these
conditions rely on observational analysis using specialist knowledge and regulated criteria.
Considerations for internal design must look at the following conditions in order to better benefit the
crew members (UK MoD, 2006):

• Human factors considerations:
  o The motion of the vessel with regard to the placement of the design spaces.
  o Rate of traffic flow within passageways and other busy work spaces.
  o Requirements for hazard mitigation, firefighting, and other damage control aspects.
  o Escape routes and casualty routing methods.
  o Recreational criteria for social activities within the vessel.
  o Environmental issues such as vibration, noise and working temperature.
• Access and egress arrangements:
  o Focus on areas of work traffic flow, including areas for embarkation and disembarkation.
  o The requirements necessary for storage and removal of goods and other equipment.
  o Acknowledgement of essential escape routes.
• Requirements for common equipment:
  o Emergency breaker positioning.
  o Location of alarms and other warning systems.
  o Placement of light switches and their operation requirements within vessel spaces.
  o Internal communication equipment location and complexity.
  o Inventory style and location for compartments, including legend design and placement.
  o Location of emergency escape equipment.
• System routing:
  o Location of intakes and uptakes.
  o Design of pipe joints with respect to end-user requirements.

Among these considerations there are some where VR can offer benefits. Areas such as traffic flow design, escape route design, casualty routing methods, and hazard mitigation design specifications can see an increase in design observation resulting from VR first-person design revision. There are also aspects such as the positioning of emergency equipment, alarms, communication equipment, and general inventory that would benefit from the increased design focus offered by VR environments. VR offers the design engineer the ability to re-enact the process of using such equipment, which will aid their design decisions.

The routing of systems with regard to end-user requirements is also an important criterion the design engineer must recognize.

In order to develop a ship design process applicable to industry standards it was important to obtain information both from literature research and from interviews. The interviews focused on the order of the ship design process tasks and on HFE practices, and the information gathered was used to enrich the ship design processes being developed. This gave the developed ship design process an opportunity for feedback on areas that the literature did not cover. The process models were designed in the Cambridge Advanced Modeller tool, which was made for engineering design process model research.

Defining the stages of VR implementation was necessary for the scope of this project, as shown in Fig.2. The concept design stage was chosen as the initial area of investigation, based upon the level of flexibility involved and the public information available. Moreover, decisions made at the concept design stage often have an impact that carries through the entire design cycle. Implementing a tool that improves the ability to make accurate decisions early in the design process can make later-stage design decisions less burdensome.

Fig.2: Ship Design Process

Through information gathered in the interviews, general arrangement, machinery arrangement, and
payload definitions were recognised as areas that benefit from ergonomic human factors analysis. The
design of these areas directly impacts the crew. These areas have a degree of dependence on
visualization to make certain HFE design choices. The naval architects involved in this stage will have
the opportunity to take into account work conditions from the perspective of the end-user. This means
that VR can offer a level of immersion that can impact design decisions.

Fig.3 shows a machinery arrangement sketch during the concept stage. At this stage only the spatial requirements and the basic layout design for primary components are known. There is also an HFE analysis which focuses on combining the HFE components into the machinery space design. This takes into account information such as the tasks performed in the space (frequency and detailed nature of the task), the number of people required, the space required (e.g. removal routes, lay-down areas), the range of target-audience body sizes adjusted for Personal Protective Equipment (PPE), the tools and other equipment required, and the different needs for PPE (UK MoD, 2006). All of this data will assist in designing a machinery arrangement that acknowledges the end user, which will then be validated using VR within the machinery arrangement design. This allows for detailed design revision while taking into account HFE criteria.



Fig.3: Concept stage – VR machinery arrangement tasks with relation to proposed HFE tasks


The preliminary design stage is a prerequisite to the basic stage, focusing on its preparation. In this part of the design process there is a substantial amount of information available about the final vessel, but there is still a level of flexibility for necessary amendments without excessively prolonging the design process. The Human Systems Activities Analysis focuses on aspects of HFE-orientated tasks (Lamb, 2003; NSWC, 2012). Fig.4 shows a representation of the tasks involved at this stage. UK MoD (2006) recommends that the following should be taken into account during human factors based tasks:

• Skills and knowledge of the desired crew members.
• Physical characteristics of the members, if available, to ensure users can move and work safely within the vessel without having to adopt unsafe body and limb positions.
• Personnel factors, which cover a range of issues such as job satisfaction, acceptable work conditions, and rotation of service to mitigate repetitive tasks.
Areas such as the initial general arrangement, concept general arrangement, machinery arrangement sketch, payload definition, habitability and crew service diagrams, and the basic design stage all rely on a degree of visualization for HFE design revision. The connection proposed between HFE and VR applications within ship design is based upon the visual benefits that VR offers and the ergonomic considerations that HFE design revision must fulfill. This includes aspects such as areas of work traffic flow (including areas for embarkation and disembarkation), the requirements necessary for storage and removal of goods and other equipment, acknowledgement of essential escape routes, and the space required for maintenance. Interviews and design process analysis led to the suggestion of implementing VR-HFE tasks in areas of the ship design process that have a higher degree of HFE ergonomic design criteria. The result was the development of process models such as Fig.3, covering the design of the machinery space during the concept stage, and Fig.4, covering the human systems activities in the basic design stage.

Although the data are less mature during the concept stage, there are still HFE criteria that can be investigated using a VR design tool. The data at this stage would help guide the allocation of tasks later in the ship design process. During the preliminary design stage there is more visual design information to work with, so the VR-HFE design revision tasks would assist in making more permanent design changes. This correlation has shown itself to be an opportunity for VR implementation. Due to the level of immersion involved in VR, the naval architect is able to make design decisions that are in line with HFE conditions.

4. Proposed VR-HFE Software Application

This section focuses on explaining the process of developing the VR design software. The building environment for this program was a combination of Unity and Microsoft Visual Studio. C# was chosen as the programming language due to the availability of Software Development Kits (SDKs) written in this language.

4.1. Unity Development Process

Creating the frame for the Graphical User Interface (GUI) was the first step in this aspect of the project. This involved designing the fundamental input and output parameters that present the information produced by the background functions. This was done in Unity as it offers a variety of customizable assets. Developing VR applications in Unity is also simplified by the 'OpenVR' SDK (developed by Valve for SteamVR), making it easier to focus on the functionality of the software rather than the difficulty of integrating VR into the Unity game engine. This section describes the development of the event system, GUI, camera system, and briefly the start menu.

Fig.4: Preliminary (Late Concept) Design Stage - VR HFE Human System Tasks

Before setting up the GUI it was important to make sure that interaction between the hardware inputs and the software is enabled. This was done by utilizing Unity's Event System component. The event system allows the program to route inputs to the corresponding functions in an orderly fashion. Fig.5 illustrates a simple example of a Yourdon and Coad data flow chart for an application of the event system (Cooling, 2003).

Fig.5: Simple data flow chart for Event System application
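
As an illustrative sketch of this input routing, and not the tool's actual code, a Unity component can receive pointer events dispatched by the Event System as follows (the class and handler body are assumptions):

using UnityEngine;
using UnityEngine.EventSystems;

// Minimal sketch: the Event System raises OnPointerClick when the pointer
// (mouse, or a VR controller raycast) clicks the GameObject this component
// is attached to; the handler then dispatches to a background function.
public class UploadButtonHandler : MonoBehaviour, IPointerClickHandler
{
    public void OnPointerClick(PointerEventData eventData)
    {
        Debug.Log("Upload Obj clicked at " + eventData.position);
        // ...call the corresponding background function here.
    }
}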

Once the GUI framework was completed, the next stage was implementing the backend functions. The first step was breaking down the desired functions of the software, which can be summarized as follows for the first stages of this research:

• Uploads a design using a file browser.
• Tool's upload functionality is independent of the Unity driver system.
• Allows for real-time design viewing and movement.
• Closes a design without restarting the tool.
These functions were deemed a basic starting point for creating the fundamental functionality of the tool. The first task was to create a live file browser which responds to the 'Upload Obj' button.
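
The importer behind this button is presumably built on an open-source file browser such as Unity3DFileBrowser (Poišs, 2015). As a hedged illustration of the OBJ loading step only, and not the tool's actual importer, the following minimal C# sketch parses vertices and fan-triangulated faces into a Unity Mesh; all names are illustrative:

using System.Collections.Generic;
using System.Globalization;
using System.IO;
using UnityEngine;

public static class ObjLoader
{
    // Loads a Wavefront OBJ file at runtime: vertices ("v x y z") and
    // faces ("f v1[/vt/vn] v2 v3 ...") only; materials, normals and
    // negative (relative) indices are ignored for brevity.
    public static Mesh Load(string path)
    {
        var verts = new List<Vector3>();
        var tris = new List<int>();
        foreach (var line in File.ReadLines(path))
        {
            var p = line.Split(new[] { ' ' }, System.StringSplitOptions.RemoveEmptyEntries);
            if (p.Length == 0) continue;
            if (p[0] == "v")
                verts.Add(new Vector3(
                    float.Parse(p[1], CultureInfo.InvariantCulture),
                    float.Parse(p[2], CultureInfo.InvariantCulture),
                    float.Parse(p[3], CultureInfo.InvariantCulture)));
            else if (p[0] == "f")
                for (int i = 2; i < p.Length - 1; i++)   // fan triangulation
                {
                    tris.Add(Index(p[1]));
                    tris.Add(Index(p[i]));
                    tris.Add(Index(p[i + 1]));
                }
        }
        var mesh = new Mesh { indexFormat = UnityEngine.Rendering.IndexFormat.UInt32 };
        mesh.SetVertices(verts);
        mesh.SetTriangles(tris, 0);
        mesh.RecalculateNormals();          // OBJ normals are not parsed here
        return mesh;
    }

    static int Index(string token) =>       // OBJ indices are 1-based
        int.Parse(token.Split('/')[0]) - 1;
}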

Using an open source SDK package, a movement function was added to the camera to allow the user
to move around the design.
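
A minimal keyboard-based stand-in for such a movement function, assuming the script is attached to the camera rig, might look as follows (the actual tool uses an open-source SDK for locomotion):

using UnityEngine;

public class FlyCamera : MonoBehaviour
{
    public float speed = 2.0f;   // movement speed in metres per second

    void Update()
    {
        // Translate in the camera's local frame from the standard input axes.
        var move = new Vector3(Input.GetAxis("Horizontal"), 0f,
                               Input.GetAxis("Vertical"));
        transform.Translate(move * speed * Time.deltaTime, Space.Self);
    }
}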

Closing the design allows the designer to introduce another design for validation. It is a necessary
function when wanting to swap designs instantly. This was implemented using the ‘LoadScene’
function.
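
A sketch of this 'LoadScene' usage, assuming a single-scene tool layout, is simply to reload the active scene:

using UnityEngine.SceneManagement;

public static class DesignSession
{
    // Reloading the active scene clears the current design and resets the
    // tool state without restarting the application.
    public static void CloseDesign() =>
        SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex);
}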

Fig.6: VR design tool data flowchart

4.2. Basic Software Demonstration

The development of a basic VR design tool has been presented. This tool, at the current stage, allows CAD files to be viewed and examined in VR, which allows the implementation of VR as an interdisciplinary design revision tool to be tested in a case study scenario. The functionality of the software, as it currently stands, allows for visualization of the 3D model in VR. With the simple models tested within the program there has not been any data loss or corruption; this may change upon the implementation of more complex designs. Fig.7 shows a simple offshore patrol vessel design being viewed in the VR environment.

Fig.7: GUI and simple hull form shown in VR environment

5. Future Investigation Method

The next steps in this project will include further functionality development of the tool, and developing
test-case scenarios. This will take the project closer to understanding the positives and negatives of
using VR-HFE within the ship design process.

A series of design process models, for the concept and preliminary stages, were developed. Each design process model used data collected from literature and from interviews with Naval Architects and Human Factors Engineers to build towards the first iteration of the examples shown in this paper. The process models were used to search for areas of application for VR-HFE design revision. A combination of MoD HFE considerations and interviews with industry will be used to further expand on key areas of VR-HFE application. This will be done using a scoring mechanism that highlights activities with ergonomic implications. Interviews with industry will further contribute towards this finalization.

Once further necessary functionalities are added to the tool, there will be a VR-HFE case study investigation into the improvement of ergonomics in ship design. This will focus on comparing a standard HFE analysis method to the VR-HFE approach.

The VR-HFE case study will also investigate the impact on multi-disciplinary communication, comparing the traditional approach to project communication with the VR-HFE method.

6. Summary

This paper presented an introduction to the approach being taken for this work, together with a brief demonstration of the software. Insight into the design process analysis has been explained, with results presented based on interviews with industry. The results have shown possible areas of implementation for VR-HFE tasks within the concept and preliminary design stages. There will be further investigation into specific tasks involved in ship design, such as machinery space and mission area arrangement design scenarios.

The approach explained in this paper allows possible areas of application for VR-HFE within the ship design process to be identified using qualitative data. Using this approach to analyse tasks within the ship design process will highlight the tasks that VR-HFE will impact the most.

References

AHOLA, M.; MAGICA, R.; REUNANEN, M.; KAUPPI, A. (2014), Gameplay Approach to Virtual
Design of General Arrangement and User Testing, RINA

ALONSO, V.; PEREZ, R.; SANCHEZ, L.; TRONSTAD, R. (2012), Advantages of using a virtual
reality tool in shipbuilding, SENER, Madrid

BLESSING, L.; CHAKRABARTI, A. (2003), DRM: A Design Research Methodology, Berlin, pp.1-15

BLESSING, L.T.M.; CHAKRABARTI, A. (2009), DRM, a Design Research Methodology, Springer

BOULOS, S. (2003), B1. Object Files (.obj) (Technical description), Wavefront

CEBOLLERO, A.; SÁNCHEZ, L. (2017), Virtual Reality Empowered Design, Int. Conf. Computer
Applications in Shipbuilding, Singapore

CHAPMAN, P.; STEVENS; WILLS, D.; BROOKES, G. (2001), Real-time visualization in the offshore
industry, IEEE Computer Graphics and Applications 21, pp.6–10

COOK, J.; HUBBOLD, R.; KEATES, M. (1998), Virtual reality for large-scale industrial applications,
Future Generation Computer Systems 14, pp.157–166

COOLING, J.E. (2003), Software engineering for real-time systems, Addison-Wesley

FERNÁNDEZ, R.; ALONSO, V. (2014), Virtual Reality in a shipbuilding environment, Advances in Engineering Software, pp.30–40

JAMEI, E.; MORTIMER, M.; SEYEDMAHMOUDIAN, M.; HORAN, B.; STOJCEVSKI, A. (2017),
Investigating the Role of Virtual Reality in Planning for Sustainable Smart Cities, Sustainability 9

KAYE, T.; WAGSTAFF, S.; BLACK, F. (2017), Multi-User VR Solutions for Enterprise Deployment

KOOLONAVICH, N. (2018), US Navy eyes virtual reality application with Moback CRADA, VR Focus

LAMB, T. (2003), Ship design and construction, SNAME

LARKINS, D.; MORAIS, D.; WALDIE, M. (2013), Democratization of Virtual Reality in Ship-
building, COMPIT, Cortona, pp.316–326

LUKAS, U.F. von (2010), Virtual and augmented reality for the maritime sector, 8th IFAC Conf.
Control Applications in Marine Systems, Rostock-Warnemünde, pp.196–200

MARTIN, J.; CONNELL, A. (2015), Accessible Immersive Visualisation for Shipbuilding, Int. Conf.
Computer Applications in Shipbuilding, Bremen

MENCK, N.; YANG, X.; WEIDIG, C.; WINKES, P.; LAUER, C.; HAGEN, H.; HAMANN, B.; AU-
RICH, J.C. (2012), Collaborative Factory Planning in Virtual Reality, Procedia CIRP 3, pp.317–322

MORAIS, D.; WALDIE, M.; LARKINS, D. (2017), The Evolution of Virtual Reality in Shipbuilding,
COMPIT, Cardiff, pp.128–138

NSWC (2012), The Navy Ship Design Process, Naval Surface Warfare Center

NORDBY, K.; BØRRESEN, S.; GERNEZ, E. (2016), Efficient Use of Virtual and Mixed Reality in
Conceptual Design of Maritime Work Places, COMPIT, Lecce, pp.392–400

PEREZ, R.; TOMAN, M.; SANCHEZ, L.; KERAUSCH, M. (2015), The latest development in CAD/
CAM/CIM. The Virtual Reality in Shipbuilding, 29th Asian-Pacific Technical Exchange and Advisory
Meeting on Marine Structures, Vladivostok pp.105–112

POIŠS, S. (2015), Unity3DFileBrowser

PROQUEST (2004), U.S. Navy Opens Center for Concept Visualization for New Ship Design Using SGI Onyx Advanced, PR Newswire Association LLC

PYNN, W. (2017), Minimising the Designer / End User Knowledge Gap using Virtual Reality, Int.
Conf. Computer Applications in Shipbuilding, Singapore

REINA MAGICA (2014), Proteus: A Cruise Design Tool for the Future, MA thesis, Aalto Univ.

ROSENBLUM, L.; DURBIN, J.; OBEYSEKARE, U.; SIBERT, L.; TATE, D.; TEMPLEMAN, J.;
AGRAWAL, J.; FASULO, D.; MEYER, T.; NEWTON, G.; SHALEV, A.; KING, T. (1996), Shipboard
VR: from damage control to design, IEEE Computer Graphics and Applications 16, pp.10-13

ŠIKIĆ, G. (2017), Using Virtual Reality Paradigm to Present Ship Structures in CAD Environment, Int.
Conf. Computer Applications in Shipbuilding, Singapore

ŠIKIĆ, G.; BISTRIČIĆ, M. (2015), Stereo 3D Presentation of Ship Structures Using Low Cost
Hardware, Int. Conf. Computer Applications in Shipbuilding, Bremen

SOARES, C.G. (2011), Marine technology and engineering, CRC Press

STREUBER, S.; CHATZIASTROS, A. (2007), Human Interaction in Multi-User Virtual Reality, Int. Conf. Humans and Computers, pp.1-7

THERISIEN, Y.L.; MAÏS, C. (2008), Virtual Reality – Tool of Assistance to the Design of the
Warship’s Complex Systems, COMPIT, Liege, pp.460–466

TORRUELLA, A. (2014), Augmented reality labs: Seeing the future of design, Jane’s Int. Defence
Review 47, pp.32–33.

UK MoD (2006), MAP-01-011 HFI Technical Guide (formerly STGP 11) (Maritime Acquisition
Publication No. 4), MAP-01-011, Defence Procurement Agency, MoD

Hull-to-Hull Positioning for Maritime Autonomous Ship (MASS)
Svein P. Berge, SINTEF Ocean AS, Trondheim/Norway, svein.berge@sintef.no
Marianne Hagaseth, SINTEF Ocean AS, Trondheim/Norway, marianne.hagaseth@sintef.no
Per Erik Kvam, Kongsberg Seatex AS, Trondheim/Norway, per.erik.kvam@km.kongsberg.com

Abstract

This paper presents the concept of hull-to-hull (H2H) positioning and uncertainty zones to assist navigators and operators in performing safe navigation of objects in proximity to each other. Data from position sensors and geometry (2D/3D) data will be shared amongst vessels or other objects to calculate hull-to-hull distance, avoiding physical contact (e.g. steel-to-steel contact). The H2H solution will utilize a variety of positioning sensors, including the European GNSS systems Galileo and EGNOS, and aims to develop open interfaces such that any H2H compliant equipment provider or user can use the H2H services provided in the planned pilot. Data exchange protocols will be based on existing standards as far as possible, including the IHO S-100 standard for describing geometry, operational zone descriptions and bathymetry data.

1. Introduction

Moving from manned to fully autonomous unmanned ship operations requires very accurate and reliable ship navigation systems. Normally, ship navigation is based on several onboard sensors such as GNSS, echo-sounder, speed log and navigational radar, as well as the electronic chart system (ECDIS), in addition to visual observations by the officer on watch. In manned operation, sensor fusion, situational awareness and control are all done with a human in the loop. In the absence of human perception and observation, there is a need for additional sensors and new intelligent sensor fusion algorithms for autonomous navigation. During maritime proximity operations, such as simultaneous operations with several ships, automatic docking and manoeuvring in inland waterways, the relative distances and velocities between the different objects are of major importance.

The H2H (hull-to-hull) concept, initially proposed by Arne Rinnan at Kongsberg Seatex, will provide exchange of navigation data supporting both relative positioning and exchange of geometry data between objects using a secure maritime communication solution (e.g. a maritime broadband radio system). The H2H solution will be based on existing open standards such as the IHO S-100 standard and is prepared to support autonomous navigation. The protocol will preferably be open, such that any H2H compliant system from any vendor can connect and start using the services provided in the standard.

2. The H2H project

The H2H project is funded by the European GNSS Agency under the Horizon 2020 programme. The project is coordinated by Kongsberg Seatex (NO), and the participants are SINTEF Ocean (NO), SINTEF Digital (NO), KU Leuven (BE) and Mampaey Offshore Industries (NL). The project started in November 2017 and will run for three years. It will develop the H2H concept, propose standardization and study safe and secure communication solutions. An H2H pilot will be built and demonstrated in three use cases: simultaneous operation, inland waterways and auto-mooring.

3. The H2H Concept

The core functionality of H2H is to provide the hull-to-hull distance between vessels, and to use the concept of an uncertainty zone to visualize the uncertainty of the distance calculation. Calculating the hull-to-hull distance requires knowing the location of one hull relative to the other. In H2H this is obtained by the exchange of position sensor data and 2D and 3D geometric vessel models between H2H objects, Fig.1. The geometric vessel models will be used to generate digital twins representing the vessels, and the sensor data will then allow positioning the digital twins relative to each other. Each H2H object will be represented by a digital twin implemented in an H2H Engine.

Fig.1: Uncertainty Zone

In addition to hull-to-hull distances, the hull-to-hull velocities are essential information for navigation. The H2H Engine will therefore also estimate the relative motion between the digital twins, and from this derive hull-to-hull velocities.

The position sensors can be of different types, including systems providing two- and three-dimensional positions (for example GNSS) and systems providing range measurements and angle measurements, as well as inertial systems. In the H2H pilot we will include the European GNSS systems Galileo and EGNOS. Galileo will be used in relative mode, providing high-accuracy relative positions, whereas EGNOS will provide integrity. The uncertainty zone will be derived on the basis of the accuracy of the positioning sensors and the accuracy of the geometric model. The concept is extended to provide not only hull-to-hull distance, but also the distance between a hull and static objects, for example a quay.

As shown in Fig.2, the H2H system has two external interfaces: 1) The H2H Engine User Interface
and 2) The H2H Vessel-to-vessel Interface. Both interfaces will be based upon existing standards as
far as possible such that different vendors can connect their own proprietary applications and systems
following the H2H framework.

3.1 H2H Engine User Interface

The user interface will allow use case applications external to the H2H Engine to interact with the
H2H Engine. Those applications would use information from H2H and provide additional
functionality, for example for control systems or for ECDIS display. In addition, H2H will also
include its own HMI, which is limited to the functionality provided by the H2H Engine.

The H2H Engine user interface allows for setting up the H2H Engine and provides outputs from the
H2H Engine. The user interface will primarily be accessed by other on-board systems. This could for
example be an operator console, or an autonomous control system. However, in general, the user
interface could also be accessed externally to the vessel, over a radio communication system.

The H2H Engine User Interface allows external applications to connect to H2H and obtain navigation
information, for example hull to hull distances and velocities and uncertainty zones. Typical output
data will be motion measurements, uncertainty zone, relative distances/velocities between different
objects and support for ECDIS or other systems.

Real-time motion data for control applications will also be provided through the interface, and the necessary Quality of Service (QoS) measures (latency, data rate, etc.) will be supported.
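
As a hedged sketch of how such an interface might appear to an on-board application (the actual H2H interface definitions are still to be standardized; all type and member names below are assumptions), written here in C#:

using System;
using System.Collections.Generic;

public interface IH2HEngine
{
    // Relative navigation outputs for a given target object.
    double HullToHullDistance(string targetId);       // metres
    double HullToHullVelocity(string targetId);       // m/s, closing rate
    IReadOnlyList<(double X, double Y)> UncertaintyZone(string objectId);

    // Real-time motion stream for control applications; the consumer states
    // its QoS needs, e.g. a maximum acceptable latency.
    IDisposable SubscribeMotion(string objectId, TimeSpan maxLatency,
        Action<(double X, double Y, double Heading)> onUpdate);
}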


Fig.2: H2H Basic Modules (green boxes) and connection to external applications (blue boxes)

3.2 Example of display system – ECDIS

ECDIS provides continuous position and navigational safety information. The system generates audible and/or visual alarms when the vessel is in proximity to navigational hazards. For inland waterway operations there is a dedicated Inland ECDIS Standard 2.4, Inland ENC Harmonization Group (2015), based on edition 2.4 of the Product Specification for Inland Electronic Navigation Charts (IENC). For inland waterway operations, the bathymetric data are of special interest. Inland ECDIS also provides the basis for other River Information Services (RIS), e.g. Inland AIS. Inland ENCs must be produced in accordance with the bathymetric Inland ENC Feature Catalogue and the Inland ENC Encoding Guide. Typical information needed for Inland ECDIS is:

• Position of own vessel including uncertainty zone
• Bathymetric data
• Navigational hazards (operational zones)
• Inland AIS
  − River Information Services (RIS)
  − NMEA data

Typical standards that are supported in ECDIS systems are:

• IEC 61174 Ed.2.0, ECDIS Operational and Performance Requirements, Methods of Testing and Required Test Results
• IMO Resolution A.817(19), Performance Standards for Electronic Chart Display and Information Systems
• IEC 60945 Ed.3.0, Marine Navigational Equipment, General Requirements, Methods of Testing and Required Test Results
• NMEA 0183:2008 version 4.00, Standard for Interfacing Marine Electronic Devices
• IEC 61162-450 Ed.2.0, Maritime navigation and radio communication equipment and systems - Digital interfaces
• IEC 529 Second edition (1989-11), Degrees of protection provided by enclosures (IP code)
• AIS interface compatible with ITU-R M.1371 and IEC 61993-2

3.3 Standardization

There is a need to look at the standardization of the data exchange, both between two vessels and from the H2H engine to the applications on board, including the presentation in ECDIS format. The usage of existing standards needs to be considered in cases where such standards already exist, for instance regarding the exchange of navigation information onboard the vessel, IEC (2018), NMEA (2008), and regarding the exchange of GNSS data, RTCM (2016).

For the vessel-to-vessel communication, the exchange of 3D models suitable for calculating relative distances and speeds is important. The data exchange definition must also take into account the possibly restricted bandwidth and the actual accuracy needed, depending on the distance between the two vessels.

For data to be presented in the applications, and possibly in ECDIS systems, extensions of the S-101 ENC standard will be considered, especially related to the presentation of uncertainty zones and operational zones in 2D. Generally, we will investigate which parts of the IHO S-100 framework of standards are relevant for the data exchanges in H2H. The S-100 standard is based on several ISO 19100 standards covering spatial and temporal schema, imagery and gridded data, profiles, portrayal, encoding and so forth. Since S-100 has been selected by IMO as the basis for its e-Navigation architecture and the Common Maritime Data Structure, IMO (2013), this standard will be an important starting point for the H2H standardization.

The H2H data exchange will cover both static vessel information (vessel particulars and 3D models) and dynamic data (positioning and inertial data, uncertainty and operational zones). The static part of the H2H data should be aligned with the new Reference Data Model proposed to IMO FAL 42, SAFETY4SEA (2018), covering data elements required for electronic port clearance according to the IMO FAL Convention, IMO (2018). The H2H data exchange should also relate to recently proposed S-100 standards, for instance S-421 on Route Exchange, IHO (2019), and S-211 on Port Clearance, IALA (2019).

3.4 Vessel-to-vessel interface

The vessel-to-vessel interface is used for data exchange between H2H objects. The basic types of data to be exchanged are sensor data and geometric models, complemented with other navigation-related data. The communication channel could be any wireless system providing the required performance (bandwidth, latency, reliability, etc.). Different communication solutions will have different bandwidth capacity and latency performance, which needs to be taken into account in the framework. To avoid cyber-attacks on an open wireless communication protocol, reliable mechanisms to reduce cyber risk must also be implemented. In 2017, IMO initiated the Guidelines on Maritime Cyber Risk Management to raise awareness of maritime cyber risk threats and vulnerabilities, IMO (2017).

Typically, new signature and encryption systems for digital data and the use of a public key infrastructure can protect against cyber-attacks on critical safety and operational information. There is currently no functionality or registry in the S-100 standard supporting cyber-security issues. Due to limited bandwidth, data can be serialized with less overhead using, for example, Google Protocol Buffers, which support a variety of programming languages (Java, Python, Objective-C and C++). Examples of data exchanged between objects are given in Table I. The data will, where applicable, be accompanied by estimated accuracy.

Table I: Data exchanged in vessel-to-vessel communication

Data content                                                        Proposed standards
Object metadata, e.g. position, heading, speed, name, ID           NMEA 0183 / IEC 61162-450 / S-100
Vessel model: 3D model, sensor reference points, lever arms        NMEA 0183 / IEC 61162-450 / S-100
GNSS raw data, including pseudo-ranges                             RTCM 10403 and 10410
Position data for given reference points                           NMEA 0183 / IEC 61162-450 / S-100
Inertial sensor data                                               NMEA 0183 / IEC 61162-450 / S-100
Range measurements, e.g. radio-based (R-mode), radar-based
and optical systems                                                NMEA 0183 / IEC 61162-450 / S-100
Direction (angle) measurements                                     NMEA 0183 / IEC 61162-450 / S-100
Uncertainty data                                                   NMEA 0183 / IEC 61162-450 / S-100
Raster data, including radar images and photographs                IEC 61162-450 / S-100
Video                                                              IEC 61162-450 / S-100
Operational data, for example text messages and operational zones  NMEA 0183 / IEC 61162-450 / S-100
Environmental (bathymetry) data                                    NMEA 0183 / IEC 61162-450 / S-100
Other data as required by applications                             NMEA 0183 / IEC 61162-450 / S-100

4. Uncertainty Zone

The uncertainty zone represents the uncertainty in the outer boundary of the geometry of vessels or objects of interest within an operational area at a given time, Fig.1. Fig.3 illustrates in more detail the different error sources contributing to the uncertainty zone. The uncertainty zones will be calculated by the H2H engine, based on the accuracy of the geometric vessel models and the sensor inputs from own vessel and from other vessels and objects involved in an operation. The integrity requirement for the uncertainty zone will be expressed as the requirement that the actual position of a point on the hull lies inside the uncertainty zone with a probability of 95%. The extent of the uncertainty zone from the hull then represents this probability.

Fig.3: Errors contributing to the uncertainty zone. Orange is the physical vessel; the green line is the 2D vessel geometry model; light pink is the uncertainty zone, here shown at a fixed distance from the 2D vessel geometry.

4.1 Error calculation of uncertainty zone

In order to calculate the uncertainty zone, estimates of the various error components must be
combined into an overall estimated accuracy. Each point on the hull will be considered independently,
and according to the error model as described in the following and illustrated in Fig.3:

• Sensor error: Any measurement is made with a physical sensor located at a specific point on the vessel. The sensor measures with a certain accuracy; the corresponding error is the sensor measurement error, denoted ε_sensor. When we use relative positioning, combining sensors on own vessel and on the target vessel/object, the accuracy of the sensors on the target will impact the accuracy of the overall position solution, and hence the size of the uncertainty zone.
• Sensor installation error: The physical location of the sensor in the ship is represented by a modelled point in the 3D vessel model. This could include a lever arm offset. The installation error represents the error between the actual location of the sensor and the modelled location. This error will also include any offset in the phase centre of the sensor antenna, if applicable. The size of the installation error depends upon how well the physical sensor installation has been measured relative to the vessel's 3D model. The installation error is denoted ε_installation.
• Reference position offset: This is the offset vector between the reference point and a specific point on the 3D model. As the reference position is relative to the 3D model, the error in the offset vector is by definition zero.
• 3D model error: The 3D model error is the difference between the 3D model and the actual physical hull, denoted ε_3Dmodel.

With these error contributions, the total hull estimate error, which will be the basis for the uncertainty zone, is:

ε_hull = ε_sensor + ε_installation + ε_3Dmodel

Initially these errors will be considered independent and Gaussian distributed. Hence, in a simplified model, the overall variance of the hull location will be given as:

σ²_hull = σ²_sensor + σ²_installation + σ²_3Dmodel

The total error will in general vary over the outer boundary of the hull. Hence, when calculating the total error, the various error sources must be projected onto the part of the hull for which the error is calculated. For example, a heading error will contribute differently to the size of the uncertainty zone at the fore and aft of a vessel.
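
As a hedged numerical illustration (not the project's implementation), the following C# sketch combines the independent error terms into a one-sigma hull uncertainty for a single hull point, adds a lever-arm-dependent heading term as discussed in the considerations below, and scales the result to a 95% zone extent using a Gaussian approximation:

using System;

public static class UncertaintyModel
{
    // One-sigma hull location uncertainty for a point at distance leverArm
    // from the rotation axis; sigmaHeadingRad is the heading error in
    // radians (small-angle approximation).
    public static double HullSigma(double sigmaSensor, double sigmaInstallation,
                                   double sigma3dModel, double sigmaHeadingRad,
                                   double leverArm)
    {
        double sigmaRotation = leverArm * sigmaHeadingRad;
        return Math.Sqrt(sigmaSensor * sigmaSensor
                       + sigmaInstallation * sigmaInstallation
                       + sigma3dModel * sigma3dModel
                       + sigmaRotation * sigmaRotation);
    }

    // 95% uncertainty zone extent, one-dimensional Gaussian approximation.
    public static double ZoneExtent95(double sigmaHull) => 1.96 * sigmaHull;
}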

The following considerations need to be made when implementing this model:

• The sensor error depends upon the type of error and how the sensors are fused.
• The installation error depends upon the installation and should be provided as an input to the processing in the form of uncertainty in sensor location.
• The 3D model error depends upon the quality of the 3D model and how well it fits the actual build. This error might be different for different parts of the hull.
• Errors in sensors providing orientation will cause the 3D model to rotate relative to the true hull, and hence cause an error which depends upon the distance from the rotation axis.
• When using relative sensors, the total hull-to-hull error will also depend upon the errors of the target vessel. It needs to be carefully considered to which degree those errors need to be taken into account.

The standard deviation for the location of the hull will then be transformed into the uncertainty zone. In doing this, the uncertainty zone must be scaled to represent the desired level of accuracy or integrity.

If an alarm limit has been specified, it would also be possible to include a check of the uncertainty zone against the alarm limit. This could even be visualized, showing, for example in a separate colour, that the uncertainty zone of a vessel extends beyond the alarm limit.

The uncertainty zone can be modelled as a polygon in either 2D or 3D. A polygon is defined as a plane figure (2D) or volume (3D) bounded by a finite chain of straight-line segments closing in a loop to form a closed polygonal chain or circuit. Each corner (vertex) is defined by its coordinates, including a position uncertainty which can be modelled by a parametrized ellipsoid or a sphere.
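
To make the geometry concrete, the following hypothetical C# sketch computes the minimum 2D distance between two hull outline polygons and reduces it by each hull's uncertainty-zone extent, yielding a conservative clearance that could be checked against an alarm limit. All names are illustrative, not the H2H Engine implementation, and the outlines are assumed to be disjoint:

using System;
using System.Collections.Generic;

public struct Pt { public double X, Y; public Pt(double x, double y) { X = x; Y = y; } }

public static class H2HDistance
{
    // Shortest distance from point p to the segment a-b.
    static double PointToSegment(Pt p, Pt a, Pt b)
    {
        double dx = b.X - a.X, dy = b.Y - a.Y;
        double len2 = dx * dx + dy * dy;
        double t = len2 > 0 ? ((p.X - a.X) * dx + (p.Y - a.Y) * dy) / len2 : 0;
        t = Math.Max(0, Math.Min(1, t));
        double cx = a.X + t * dx, cy = a.Y + t * dy;
        return Math.Sqrt((p.X - cx) * (p.X - cx) + (p.Y - cy) * (p.Y - cy));
    }

    // For disjoint closed outlines, the minimum distance is attained between
    // a vertex of one outline and an edge of the other.
    public static double MinDistance(IList<Pt> hullA, IList<Pt> hullB)
    {
        double min = double.MaxValue;
        for (int i = 0; i < hullA.Count; i++)
            for (int j = 0; j < hullB.Count; j++)
            {
                min = Math.Min(min, PointToSegment(hullA[i],
                    hullB[j], hullB[(j + 1) % hullB.Count]));
                min = Math.Min(min, PointToSegment(hullB[j],
                    hullA[i], hullA[(i + 1) % hullA.Count]));
            }
        return min;
    }

    // Conservative clearance: geometric distance minus both zone extents;
    // an alarm would be raised when this falls below the configured limit.
    public static double Clearance(IList<Pt> a, IList<Pt> b, double uzA, double uzB)
        => MinDistance(a, b) - uzA - uzB;
}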

H2H will be a flexible framework that allows using all available position sensors. The achievable accuracy, and hence the size of the uncertainty zone, depends upon the accuracy of the sensors, the installation and how well the geometric model represents the physical hull. The size of the uncertainty zone, being steered by the accuracy, thus depends upon the quality of the sensors and the geometric models. A vessel well equipped with high-quality sensors, including relative GNSS, and a precisely calibrated geometric model could achieve uncertainty zones down to metre or even decimetre level.

5. Operational zone

Operational zones are any zones other than the uncertainty zone which need to be taken into account when navigating. The H2H concept focuses on defining and providing uncertainty zones related to the position accuracy, whilst the use case applications will define and implement operational zones related to different aspects of safe navigation. Hence, the operational zones are not part of the H2H concept itself.

Fig.4: Example of operational zone for inland waterways


The uncertainty zone represents the uncertainty of the position of the hull at a given time. However, when it comes to safe distance between vessels and objects, the vessel's dynamics and manoeuvrability also need to be taken into account. Additionally, when navigating relative to a map, the map accuracy must be considered. Further, additional margins might be required to further reduce the risk of accidents. As an example, if several H2H vessels are performing simultaneous operations, common safety zones or escape zones need to be transmitted to all interested H2H objects so that the same zones are used for navigation. Safe navigation will also differ depending upon the type of operation and the vessels involved, and could include distance, speed, course, manoeuvrability, etc. This will be included in the operational zones.

Other examples of zones are escape zones for offshore operations and different zones for inland wa-
terways dependent on the vessel's ability to stop completely.

Fig.5: Escape Sector Zone

Some of these zones are illustrated in Fig.4 and Fig.5. As part of the next phase of the project, the H2H standardization work will define a format for representing the uncertainty zones and the operational zones. Each zone can also be given a specific contextual meaning, e.g. a colour representing the level of a warning or an alarm, which can be used to give guidance to the ship master or the control system on which navigational actions to take. An operational zone can be defined on top of an uncertainty zone or, in some cases, independently of one. Operational zones are calculated by the Use Case Specific Applications and have specific semantics defined by those applications, for instance:

• Warnings to be raised when two zones are overlapping or in close contact, or when an object is entering a zone.
• Recommendations for specific actions to be taken by autonomous systems related to a zone, for instance for auto-mooring, where a new phase starts when an overlap is identified.
• Access restrictions to the zone, also linked to specific time periods, vessel types, vessel sizes and geographical areas. This can be used to indicate locks (including closing/opening times), quays or VTS areas. Another example is to use this zone to indicate fixed obstacles that must be passed at a certain distance, for instance the riverbank or navigation marks.
• Navigational zones to ensure that the vessel keeps a safe distance to other vessels and obstacles during navigation. Examples are the Waypoint Operational Zone defined for inland waterway passages, indicating that the vessel should stay within this zone to ensure safe passage, and the safe zones defined for two vessels approaching on a passage (collision avoidance).
• Safe zones and other zones related to safe operations, used to indicate safe operations for vessels in close proximity to each other or to a fixed object, as well as escape zones, no-go zones, stand-by zones and responsibility zones.
• Communication zones, for example an area of interest/first communication zone defining that a vessel moving into this zone should be made known to the object that has defined the zone, and should start communication with this object where both objects are H2H compliant and able to communicate. This type of operational zone can be used to define what information is to be exchanged at what time, based on how several operational zones relate to each other or on an object entering or leaving an operational zone.
• Regulations: an operational zone can be defined based on requirements given in maritime regulations, for instance related to piloting, tug usage, reporting and VTS areas.

Each operational zone can be defined by a set of parameters, listed in the following; a minimal data-structure sketch follows the list:

• Shape: This is the geometrical shape of the operational zone (polygon, circle, etc.) and whether it is 2D or 3D. The shape (circle, ellipse, square, polygon) is determined by the Use Case Specific Application. The shape of an operational zone for a certain vessel can change during the different phases of an operation or navigation action. An example is a situation where two vessels are approaching and passing at close distance: when the relative distance between the vessels is large, a circular or rectangular shape, or just a point, may be enough; when the vessels are moving in closer proximity, the shape of the operational zone may be based on the shape of the hull.
• Size: This is the size of the operational zone. The size of the shape must be determined, either by the diameter, the length of the sides or by other means.
• Time: This is the time period in which the operational zone is valid, in the case of time-varying information, for instance opening hours for locks and quays, bridge opening hours and mooring gear availability.
• Information: This is the information related to the operational zone, described by the following dimensions:
  o What information is transferred? This can be information related to the vessel or fixed object: position data, geometric model, vessel dimensions, intended routes, already calculated uncertainty zones and operational zones, among other kinds of information needed by the Use Case Specific Applications. It can also be operational information, for instance warnings, recommendations, information about restricted waters or regulatory requirements.
  o When is the information transferred? For an operational zone, the trigger for exchanging information can be defined to be, for instance, the time when two operational zones meet or intersect. It can also be when an object or vessel enters an operational zone, or when an uncertainty zone meets or intersects with an operational zone. The timing of the operational zone information can be defined by the Use Case Specific Application user.
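
The minimal data-structure sketch announced above could look as follows in C#; it is purely hypothetical and not the format to be defined by the H2H standardization work:

using System;

public enum ZoneShape { Point, Circle, Rectangle, Polygon }

public class OperationalZone
{
    public ZoneShape Shape;              // geometrical shape (2D for brevity)
    public (double X, double Y) Centre;  // centre point, e.g. for circles
    public double Radius;                // used when Shape == Circle
    public DateTime ValidFrom;           // validity window, e.g. lock or
    public DateTime ValidTo;             // bridge opening hours
    public string Semantics;             // e.g. "no-go", "escape", "communication"

    // Is the zone in force at time t (the Time parameter above)?
    public bool IsActive(DateTime t) => t >= ValidFrom && t <= ValidTo;

    // Containment test for the circular case; a Polygon shape would use a
    // standard ray-casting point-in-polygon test instead.
    public bool Contains((double X, double Y) p) =>
        Shape == ZoneShape.Circle &&
        Math.Sqrt((p.X - Centre.X) * (p.X - Centre.X) +
                  (p.Y - Centre.Y) * (p.Y - Centre.Y)) <= Radius;
}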

6. Summary

The H2H project supports the move from manned to fully unmanned autonomous navigation. Three pilots will be developed within the next year and used to demonstrate three different operations using the H2H concept: open-sea operation (simultaneous operations), inland waterway operations with hard constraints, and auto-mooring (ship-to-shore operation).

In the next phase of the project, the standardization work will focus on developing and supporting interfaces with which different equipment providers and other stakeholders can interconnect to the H2H solution (interoperability). Two different interfaces, for the application and the communication solutions, will be developed, building on the S-100 standard and other maritime standards as far as possible. An open standard is preferable to proprietary solutions, especially for smaller companies that can deliver third-party services (both autonomous functions and HMI).

In any case, care should be taken with regard to maritime safety. The H2H concept does not yet focus on unmanned operations; safety and risk are still managed by the operators. Even if the overall goal of the project is to increase the safety of close-proximity operations, failures in the H2H system might have undesired consequences and reduce safety. The safety aspect should therefore be the backbone of the further development of autonomous navigation when there is no human in the loop. A safe design rule is to develop new autonomous navigation systems with at least the same level of safety as for dynamic positioning (DP). For the H2H solution, we also need to consider cyber-security as an extra dimension. As far as we can see, there is not yet any standard or regulation specifying how this should be done for MASS applications.

Acknowledgements

This paper is based on several preliminary reports submitted as project deliveries to the European GNSS Agency (GSA), covering the initial concept definition, user requirements and a gap analysis of current state-of-the-art technologies and standards related to autonomous navigation. The project reports are based on a cooperation between Kongsberg Seatex AS (NO), SINTEF Ocean AS (NO), SINTEF Digital (NO), Mampaey Offshore Industries (NL) and KU Leuven (BE). The H2H project has its own project web-site, https://www.sintef.no/projectweb/hull-to-hull/. The H2H project has received funding from the European GNSS Agency under the European Union's Horizon 2020 research and innovation programme, grant agreement No 775998.

References

IALA (2019), Data Modelling, https://www.iala-aism.org/technical/data-modelling/iala-s-200-development-status/

IEC (2018), Maritime navigation and radiocommunication equipment and systems - Digital interfaces - Part 450: Single talker and multiple listeners, Standard IEC 61162-450:2018

IHO (2019), Information about S-421 Route Plan Exchange, https://www.iho.int/mtg_docs/com_wg/ENCWG/ENCWG3/ENCWG3-7.2_Information_about_S-421_Route_Plan_Exchange.pdf

IMO (2013), NAV 59/6, Development of an e-Navigation Strategy Implementation Plan, Report of the Correspondence Group on e-Navigation to NAV 59

IMO (2017), IMO Guidelines on Maritime Cyber Risk Management, http://www.imo.org/en/OurWork/Security/Guide_to_Maritime_Security/Documents/MSC-FAL.1-Circ.3%20-%20Guidelines%20On%20Maritime%20Cyber%20Risk%20Management%20%28Secretariat%29.pdf

IMO (2018), IMO FAL Convention, http://www.imo.org/en/OurWork/Facilitation/ConventionsCodesGuidelines/Pages/Default.aspx

INLAND ENC HARMONIZATION GROUP (2015), Inland ECDIS Standard 2.4, http://ienc.openecdis.org/?q=content/inland-ecdis-standard-24

NMEA (2008), Standard for interfacing Marine Electronic Devices, NMEA 0183:2008 version 4.00

RTCM (2016), Differential GNSS Services - Version 3, Standard RTCM 10403.3

SAFETY4SEA (2018), IMO FAL 42 Outcome, https://safety4sea.com/imo-fal-42-outcome/

A Model-Based Approach to Modular Ship Design
Ken Sears, Siemens Industry Software, UK, ken.sears@siemens.com
Dejan Radosavljevic, Siemens Industry Software, UK, dejan.radosavljevic@siemens.com
Jan van Os, Siemens Industry Software, Netherlands, jan.van_os@siemens.com

Abstract

The model-based enterprise (MBE) approach to product design has evolved to the point where it is accepted that
digital twins can improve product quality while increasing design and manufacturing productivity. As
part of this trend, a 3D model-based approach to the conceptual design of ships has become
increasingly common. This paper describes how to move beyond current approaches to 3D concept
design by utilizing MBE to better support conceptual design for a family of modular ships. These ideas
can be combined with automated simulation-driven design, creating a powerful framework for early
concept design. A subset of the technology introduced herein can also be generalised to support the
extensive reuse of system designs between arbitrary ships. This paper examines the technology needed
to support the reuse and design of modular families of ships. The approach will be illustrated using the
capabilities of existing commercial computer-aided design (CAD), computer-aided engineering (CAE)
and product lifecycle management (PLM) software.

1. Introduction

In the late 1970s and 1980s, the cost of naval ships rose at an alarming rate. In an effort to control costs, shipyards in several countries started working to create modular or flexible ships. The idea was to have common hull forms and systems which could be combined with an array of payload options to suit various missions. Since then, different approaches to modularity and the reuse of design information have been investigated by both naval and commercial shipyards.

This paper reviews a number of approaches to modular or flexible ship design and the principles of
model-based enterprise. It also describes how PLM systems can be employed to streamline and reuse
modular design in different ways. Finally, it looks at the way PLM and parametric 3D modelling can
be used to implement and optimize modular designs.

2. Modular design strategies

Designing modular ships continues to be a challenge for both common platform and mission-specific
modules. The platform design is crucial to the success of a flexible/modular ship approach. It must
satisfy inherently different requirements to accommodate the vision of flexible ships. For example, the
platform must have the capacity to support current as well as future sets of mission-specific modules
within standard requirements for power, cooling, heating, space and weight. To achieve a truly flexible
ship, design challenges for mission-specific modules include standardised module volume and weight
as well as rigorous interface control between the various modules and platform services. Additionally,
mission-specific system integrators must accept and design their equipment and systems to be
physically and functionally compatible with the platform and standardised module interfaces. System-
level requirements must be managed by the shipyard throughout the lifecycle of the asset to properly
support upgrades and mission conversions.

Partitioning and distributing the systems and mission-specific modules within the hull and superstructure must not diminish overall ship stability. Finally, one of the key challenges is equipping interchangeable mission-specific modules with common computing, data and communications infrastructure based on open and commercial standards. The ability to switch common and interchangeable mission-specific modules between ship families is extremely valuable.

Historically there have been several other approaches to flexible naval ship design, e.g. Sorensen and Christiansen (2019), Ballé and Sutherland (2019). These continue to evolve as they are adopted in new programmes. In addition to these long-running approaches, more naval shipbuilders are planning to deliver flexible families of ships, for example the proposed BAE Type 31e frigate.

These flexible platforms for naval ships provide a range of options to satisfy both domestic and
international programme requirements. They have delivered more affordable ships with larger
production bases to amortise the nonrecurring research and development (R&D) costs, larger
production runs to achieve lower unit costs for components and systems, and improved construction
productivity across many hulls and programmes.

The success of these flexible ship programmes has pointed towards the need for different approaches
and processes for design and engineering. Requirements management, systems engineering,
configuration and change management, as well as class/hull effectivity are much more demanding in
flexible ships. Additionally, it is imperative that rigorous interface control management processes and
disciplines are maintained between platform and mission-specific modules throughout the design and
construction of ships.

2.1 Engineer to order in shipbuilding

When buying a new car, the customer first selects a model, then the type of vehicle (coupe, saloon,
station wagon) and finally options. Car dealers are equipped by the manufacturer with configurator
tools which ensure they only offer combinations of model, type and options that can be supplied. The
configurations offered are all modelled in the vendor’s PLM system, using what is referred to as a configured bill of materials (BOM) approach, http://beyondplm.com/2019/01/14/bomusings-video-blog-update-revision-effectivity-150-bom/. The configured BOM contains representations of all the parts that can be set up to define a specific vehicle.
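
To make the configured BOM idea concrete, the following minimal Python sketch shows how a configurator can reject unsupported combinations before resolving an order to parts. The part names, option rules and data structure are invented for illustration and do not come from any particular PLM product.

# Minimal configurator sketch: a configured BOM maps options to parts, and
# validity rules reject combinations that cannot be supplied. All names and
# rules here are illustrative assumptions.
from dataclasses import dataclass

CONFIGURED_BOM = {
    # option -> parts it pulls into the order-specific BOM
    "tow_bar": ["hitch_assembly", "wiring_loom_13pin"],
    "sunroof": ["roof_panel_glass", "drain_kit"],
}

RULES = [
    # (predicate, message); a real system stores such rules in the PLM backbone
    (lambda cfg: not (cfg.body == "coupe" and "tow_bar" in cfg.options),
     "tow bar not available on coupe"),
]

@dataclass
class Configuration:
    model: str
    body: str          # "coupe", "saloon" or "station_wagon"
    options: tuple

def resolve(cfg: Configuration) -> list:
    """Check the validity rules, then expand the options into a parts list."""
    for predicate, message in RULES:
        if not predicate(cfg):
            raise ValueError(f"invalid configuration: {message}")
    parts = [f"{cfg.model}_{cfg.body}_platform"]
    for option in cfg.options:
        parts += CONFIGURED_BOM[option]
    return parts

print(resolve(Configuration("modelA", "saloon", ("tow_bar",))))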

In the shipbuilding world, this configured BOM-based approach can be hard to justify because the lower production volume does not warrant pre-modelling all the components required in every possible
configuration. Research into the configurator-based approach to ship specification, e.g. Nieuwenhuis
(2013), suggests that an engineer to order (ETO) approach could allow the use of standard system
definitions between multiple ships and potentially lower costs. Some shipyards have developed families
of ships where system designs are reused across multiple designs. For example, Royal IHC developed
the IHC Supporter® class of ships, IHC (2013), which included several variants of offshore support
vessels and reused various standard structural and mission-specific system modules as illustrated in
Fig.1.

Fig.1: IHC Supporter® Class (image copyright Royal IHC)

The IHC Supporter® class modular and multi-purpose offshore support and construction vessels are
based on the recognition that offshore contracting companies earn their money with the mission
equipment, not the vessel platform. Therefore, this platform does not have to be redesigned for every
new ship and can adhere to several basic layouts as a platform to transport the mission equipment.

Vessel design is largely based on the reuse of pre-designed modules: fore-ship, midship and aft-ship
modules based on a set of fixed breadths, along with standard above and below deck modules. The fore-
ship modules include standard engine rooms, variable size accommodation layouts, bow thrusters, etc.
The aft-ship modules include integrated azimuth propulsion thrusters.

During the proposal stage for a Supporter class vessel, a configurator allows the customer to step through the sets of standard mission equipment, such as offshore cranes, cable reels, helidecks, moonpools, diving equipment, accommodation sizes, length requirements and more, and to visualise the vessel with only a few mouse clicks, just like a customer might select options for a new automobile.

The modular approach continues through all design stages and manufacturing. Mission equipment is
concentrated into skids, manufactured as a whole and, after hoisting on board, connected directly to
hydraulic and electric power, along with other interface types.

For this type of approach to modular design, the interfaces between the pre-designed or pre-constructed
modules must be worked out and managed at a very detailed level. Otherwise, as in conventional
designs, bottlenecks can form, resulting in increased engineering costs and project delays.

2.2 Class-based design

As the approaches to flexible/modular ship design were maturing in Europe, the U.S. Navy sponsored
a programme aimed at reducing ship acquisition and maintenance costs through the specification of the
capabilities of computer systems used in the design, manufacture and lifecycle support of ship
programmes, NSRP (2008). The core of this work was the development of what became the
requirements for an Integrated Product Development Environment (IPDE) that could be used to support
the entire lifecycle of multiple ship programmes. The specification defines a set of requirements for an
effective IPDE, focusing on product data management capabilities and interfaces with
CAD/CAM/CAE, enterprise resource planning (ERP) and catalogue systems used for the design,
construction and in-service support of navy ships and submarines. The goal was to cover all aspects of
the product development environment used to create, manage and disclose ship product information
throughout the ship’s lifespan. The requirements contained in the IPDE specification were intended to
enhance the capabilities of software tools currently in use to manage product configuration and enable
efficient product change.

Central to this work was the idea of a class-level design along with configuration management to enable
the representation of differences between vessels in a class without the clone-and-modify approach that
was in practice at the time. Large naval shipyards that are adopting the IPDE concept are building on
PLM systems that can support effectivity-based configuration management. This allows shipyards to
use configuration rules to represent design changes between the different ships in a class by tracking only the changes between individual hull designs. Fig.2 shows a simple example of effectivity-based
configuration management, used to manage design changes between the different hulls in a class of
ships. In this example, it was found during sea trials that there was insufficient cooling in the engine
room resulting in the need for the air conditioning system to be updated. An engineering change order
(ECO) was created for the cooling system and three different solutions applied to the ships in the class
depending on where in the design lifecycle they were. Effectivity-based configuration management
allowed the design changes for each hull or group of hulls to be made to the equipment in the HVAC
system, duct routing and other systems without having to repeat the changes in a separate model for
each ship.
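
A minimal sketch of how such effectivity could be resolved in code, assuming a simple hull-range representation; the part, revisions and hull ranges below are invented to mirror the air conditioning example, not taken from any real programme.

# Effectivity sketch: one class-level design, with part revisions scoped to
# hull ranges instead of per-ship model copies. All data values are invented.
EFFECTIVITY = {
    # part -> list of (first_hull, last_hull, revision); None = open-ended
    "engine_room_AC_unit": [
        (1, 2, "rev_A_as_built"),           # to be retrofitted at next availability
        (3, 5, "rev_B_uprated_unit"),       # change applied during construction
        (6, None, "rev_C_redesigned_duct"), # incorporated in the baseline design
    ],
}

def resolve_part(part: str, hull: int) -> str:
    """Return the revision of a part that is effective for a given hull number."""
    for first, last, revision in EFFECTIVITY[part]:
        if hull >= first and (last is None or hull <= last):
            return revision
    raise LookupError(f"no effectivity for {part} on hull {hull}")

for hull in (2, 4, 7):
    print(hull, resolve_part("engine_room_AC_unit", hull))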

Fig.2: Example of effectivity-based configuration management

3. Model-based enterprise

A model-based enterprise (MBE) approach relies on the creation of a managed 3D model (or digital twin) very early in a programme and on having this model evolve throughout the programme’s lifecycle.
The relevance of this concept of front-loading a design project through the creation of a digital twin is
described in detail in Stachowski and Kjeilen (2017). An important aspect of MBE is the focus on
modelling as a generic term; MBE is often thought of as the use of 3D models and product and manu-
facturing information (PMI) to replace drawings but this is just one small aspect of MBE. The key to
MBE is to have integrated models that cover all disciplines and can be used in downstream lifecycle
stages.

4. PLM as an enabler of reuse

The effective implementation of both flexible and class-based design requires robust change and
configuration management capabilities. PLM tools that were originally developed for the automotive
and aerospace industries as a backbone to handle product information throughout all stages of a
product’s lifecycle can provide many of the necessary capabilities. This PLM implementation needs to
be able to handle information from different systems spanning from office programmes to CAD, CAE
and computer-aided manufacturing (CAM). It must display all managed information in a user-friendly
fashion independently of devices, location and disciplines. The implementation also needs to be able to
manage content, revisions, versions, user access and configurations of all data.

A key aspect of a successful PLM implementation is a product breakdown structure (PBS), kept separate from any single design tool, which represents every unique component within a ship. This PBS provides the information structure
that supports delivering the ship according to specifications, on time and within budget.

A PBS provides a model of a ship that is broken down in several views. These views could be functional
requirements, systems and locations. It is important that these views or breakdowns are independent of the way the parts of the ship are modelled by the design tools used in different disciplines. Fig.3 shows
a typical PBS where functionality, systems and location all refer to system parts in the product
definition. A system part is not a CAD model; instead, it links multiple representations of the object it
represents. These representations could be drawings, datasheets, a reference to a library part, or part
designs created in multiple CAD tools for different disciplines. Statuses and maturities are assigned to
the system parts, enabling the ship model to support change management processes if required.

Fig.3: Example product breakdown structure (PBS)

The PBS above also enables functional requirements to be associated with system parts, thereby
providing data that can be used to ensure that interfaces between modules are correctly maintained.
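
The intentionally simplified sketch below, with invented names, illustrates this separation: the functional, system and location views all reference the same managed system part, which in turn only links representations rather than owning geometry.

# PBS sketch: three views point at the same system part; the part links
# tool-specific representations. Structure and names are illustrative only.
system_parts = {
    "SP-0042": {
        "description": "seawater cooling pump",
        "status": "released",
        "representations": {              # links, not geometry
            "diagram":   "cooling_system_diagram_node_117",
            "3d_model":  "pump_0042_cad_part",
            "datasheet": "vendor_doc_8812.pdf",
        },
    },
}

views = {
    "functional": {"provide cooling": ["SP-0042"]},
    "system":     {"seawater system": ["SP-0042"]},
    "location":   {"engine room, frames 40-44": ["SP-0042"]},
}

# Every view resolves to the same managed object, so a change made once is
# visible to all disciplines.
for view, breakdown in views.items():
    for node, parts in breakdown.items():
        print(view, "->", node, "->",
              [system_parts[p]["description"] for p in parts])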

A product breakdown structure like this can potentially represent multiple ships using configuration
management techniques. For example, using the option-and-variant approach would make it possible
to support a modular design approach while using an effectivity-based configuration would support the
class-based approach common in naval shipbuilding.

5. Automating the design process

When developing ship concepts, shipyards and design houses traditionally use multiple tools. These
can include a surface design tool, a 2D drafting tool to create general arrangement (GA) drawings and
a visualisation tool to generate high-quality images to communicate the overall design to the customer.
In addition, simulations and calculations are performed by hand or in separate systems for each design
discipline. The multi-tool approach results in several unconnected data sets that can be expensive and
error-prone.

Experienced naval architects often extract data from previous projects and adjust them to fit the new
project. The goal of this process is to acquire concept design data as fast as possible. It is not very
transparent and thus creates a risk of failure. In addition, the unconnected data sets have no proof of
compatibility to each other. At one level the copying from previous projects seems closely related to
the modular or flexible ship approaches that have been discussed above. However, the lack of
integration in the design environment and no practical management of the interface specifications of
any reused modules make it easy for the architect to accidently break the modular architecture or to
have inconsistent design and simulation results.

Stachowski and Kjeilen (2017) describe an approach to concept design based on the early creation of a
digital twin that provides a single 3D model where integrated applications can perform the tasks needed
to create and evaluate the concept. Having a single source of data also makes it easier to ensure that the
various simulations have been performed on consistent data, and minimise the risk of inconsistent results. Arens et al. (2018) extended the ideas of Stachowski and Kjeilen (2017) to directly use a simula-
tion-based approach to optimise the concept model and ensure the design of a more efficient and cost-
effective vessel. The work of Arens et al. (2018) aimed to demonstrate consistency between the
simulation results and concept design through the use of the digital twin approach that relied on a
parametrised model of the ship hull and superstructure, and a 3D general arrangement model. This
model was further linked to 2D drawings, aesthetic views and different model representations suitable
for analysis and simulation, Fig.4. The resulting digital twin holds solid, surface and simplified repre-
sentations, which are available for different use cases and decision making.

Fig.4: Example of a Digital Twin

The digital twin was created within Siemens PLM Software’s NX™ for shipbuilding CAD tool, which
enables an end-to-end design approach from initial concept to production. Parameterisation techniques
can be utilised, so that updates of dimension/parameter values in the model result in the automated
regeneration of the CAD model to reflect the updated values. By parameterising the design instead of
building the model as a traditional one-off approach, geometry modifications in the model can be
updated automatically and driven with ease to evaluate potentially infinite design possibilities. General
arrangement drawings and aesthetic views can be automatically updated as can all computational fluid
dynamics (CFD) simulation input data. For CFD, Simcenter STAR-CCM+™ software was used,
enabling the performance of a virtual prototype of the vessel to be simulated at full-scale. Finally,
HEEDS™ software was used to automate the process of changing variables in the parametrised CAD,
update the virtual prototype in the CFD simulation, execute the analysis, and intelligently search for
better designs. Utilising this approach allows designers and engineers to focus on gaining insight and
discovery rather than the laborious efforts associated with traditional manual approaches to ship design.
Now, better designs can be found, faster.
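
The sketch below shows the shape of such an automated loop. It is a simplified stand-in: regenerate_cad() and run_cfd() are placeholders for calls to the actual CAD and CFD tools (they are not real APIs), and an exhaustive sweep stands in for the intelligent design space search.

import itertools

def regenerate_cad(length, breadth):
    # placeholder for the parametric regeneration of the CAD model
    return {"length": length, "breadth": breadth}

def run_cfd(model):
    # placeholder objective; in the real workflow this step evaluates a
    # full-scale virtual prototype in the CFD tool
    return 0.1 * model["length"] + 0.3 * model["breadth"]

best = None
for length, breadth in itertools.product([170.0, 179.0, 188.0], [26.0, 28.0, 30.0]):
    candidate = regenerate_cad(length, breadth)
    objective = run_cfd(candidate)
    if best is None or objective < best[0]:
        best = (objective, candidate)

print("best candidate:", best[1], "objective:", best[0])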

6. Modelling and optimising modular ships

The techniques that Arens et al. (2018) used to optimise the design of ships can be combined with the
PLM-based technology that we have discussed above to enable new families of ships and new members
of existing families to be created efficiently. A digital twin for a new member of a family of ships can
be created using a CAD model where the parameterisation only allows the creation of models that
conform to the family’s constraints. For example, Fig.5 shows three hulls created by a parametric model
where the bow and stern sections are common, and the mid-section length and breadth can be picked
from a fixed set of values. As the hull form for the new family member is created, the aspects of the
General Arrangement that are common to the family automatically update to fit at the appropriate place
within the new hull, Fig.6.
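
As a hedged illustration of such a constrained parameterisation, family membership can be enforced directly when a new hull is instantiated, for example by accepting only mid-body dimensions drawn from the fixed sets; the allowed values and module names below are invented.

# Family-constrained parameterisation sketch: only mid-body dimensions from
# the fixed sets are accepted, so every generated hull remains a valid family
# member. All values and names are illustrative.
ALLOWED_MIDBODY_LENGTHS = (40.0, 52.0, 64.0)   # m
ALLOWED_BREADTHS = (24.0, 28.0)                # m

def make_family_hull(mid_length: float, breadth: float) -> dict:
    if mid_length not in ALLOWED_MIDBODY_LENGTHS:
        raise ValueError(f"mid-body length {mid_length} m is not in the family")
    if breadth not in ALLOWED_BREADTHS:
        raise ValueError(f"breadth {breadth} m is not in the family")
    # bow and stern blocks are common to all family members
    return {"bow": "family_bow_v3", "stern": "family_stern_v3",
            "mid_length": mid_length, "breadth": breadth}

print(make_family_hull(52.0, 28.0))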

The decision to use an effectivity-based configuration management approach to model a family of ships
partially depends on the differences between the members of the family. In naval shipbuilding, it is
common to have a class of very similar ships or submarines built for a specific navy. In this situation,
the differences between members of the class are usually relatively small: the hull design and principal
dimensions are the same but aspects of the arrangement and equipment on the ship may change.

Fig.5: Creating hulls for family members

Fig.6: Arrangements for different family members

In this case, the effectivity-based approach is quite attractive. However, in scenarios where there may
be large differences between family members the effectivity approach becomes less attractive.

A PBS-based PLM implementation can deliver value in a modular design environment even without
using a configuration management approach. For example, systems may be common between family
members, which allows all the equipment that makes up a system and even system diagrams to be
copied to the PBS of a new family member. In areas of the new ship where the GA is common across
family members it is also possible to locate equipment in the appropriate room and perhaps even the
appropriate 3D location for a new ship.

Beyond creating new family members, design automation can also be used to define the key parameters
in a family of ships. For example, when creating new breadth or length options in a family like the one
illustrated in Fig.5, it would be beneficial to use the approach described in Arens et al. (2018) to select
combinations of length and breadth to optimise both capital expense (CAPEX) and operating expenses
(OPEX) before defining possible dimensions for new family members.

7. Conclusions

A Digital Twin built in a combination of PLM and CAD tools can help shipyards implement effective
processes for the design of modular ships. Starting with a product breakdown structure in a PLM tool
and building a parameterised 3D concept model that includes the ship surfaces, a solid representation
and a general arrangement allow the rapid creation of new family members. Adding automation and
intelligent design exploration expands design possibilities to a level where completely new ways of
value creation are enabled.

References

ARENS, E.A.; AMINE-EDDINE, G.; ABBOTT, C.; BASTIDE, G.; STACHOWSKI, T.H. (2018),
Utilizing process automation and intelligent design space exploration for simulation driven ship design,
Marine Design XIII, Taylor & Francis, pp.983-993

BALLÉ, J.; SUTHERLAND, K.A. (2019), The Meko Platform, Marine Technology, January, pp.34-39

IHC (2013), Innovative vessels IHC Packhorse and IHC Supporter Class, IHC Merwede Insight,
Autumn 2013, pp.33-36

NIEUWENHUIS, J.J. (2013) Evaluating the appropriateness of product platforms for Engineered-to-
order Ships, PhD Thesis, TU Delft

NSRP (2008), Integrated Product Data Environment Specification, National Shipbuilding Research Program, Navy Product Data Management Initiative

SORENSEN, K.; CHRISTIANSEN, K.G. (2019), Toward more flexible warships, Marine Technology,
January, pp.26-33

STACHOWSKI, T.H.; KJEILEN, H. (2017), Holistic ship design – How to utilise a digital twin in
concept design through basic and detailed design, Int. Conf. Computer Applications in Shipbuilding,
Singapore

Life-Cycle Assessment of an Antifouling Coating Based on a
Time-Dependent Biofouling Model
Dogancan Uzun, University of Strathclyde, Glasgow/UK, dogancan.uzun@strath.ac.uk
Yigit Kemal Demirel, University of Strathclyde, Glasgow/UK, yigit.demirel@strath.ac.uk
Andrea Coraddu, University of Strathclyde, Glasgow/UK, andrea.coraddu@strath.ac.uk
Osman Turan, University of Strathclyde, Glasgow/UK, o.turan@strath.ac.uk

Abstract

This paper presents a novel life cycle assessment (LCA) for antifouling coatings based on a time-
dependent biofouling prediction model. The life cycle assessment covers environmental and monetary
effects arising from paint production and application, hull maintenance and added fuel consumption due to biofouling on the ship hull. The calculations related to the production and application of the paints were made using data provided by shipyards and coating manufacturers. The added
frictional resistance due to biofouling accumulation and hence the added fuel consumptions during
ship operations were predicted by the time-dependent biofouling model proposed in the literature and
then implemented into the overall life cycle assessment. The effects of ship operating profile and route
on the fuel penalty due to biofouling accumulation on the antifouling coating were investigated for
three case studies. The results were presented in terms of differences in increases in effective power,
fuel oil consumption, fuel oil consumption costs, total costs and CO2 emissions due to different ship
operating profiles and routes.

1. Introduction

The use of fouling control coatings is the most effective method to keep ship hulls clean. There are
several types of fouling control coatings mainly categorised into two groups in terms of their working
principles: biocidal and non-biocidal coatings. Biocidal coatings, for example, controlled
depletion polymer (CDP), self-polishing copolymer (SPC), hybrid SPC type coatings, release biocides
to delay biofouling accumulation on a ship hull. On the other hand, non-biocidal coatings, i.e. foul-release coatings, provide a comparatively smooth surface and hence notably decrease the adhesion strength of fouling organisms (Judith et al. 2003). Although it is possible to find a large number of
coating products within different types in the market, there is no scientifically grounded approach or
method to select the most suitable coating for specific ships (Swain et al. 2007).

IMO (2011) published a guideline for the control and management of ships’ biofouling to minimise
the transfer of invasive aquatic species. As mentioned in this guideline there are factors to consider
while choosing an antifouling system. These factors can be listed as follows:

• Planned maintenance schedule (dry-docking periods): schedule of underwater hull cleaning and
dry-docking operations may influence fouling control coating selection. For instance, CDP
type paint could be more cost effective compared to SPC type paint for a ship which undergoes
dry-docking every 3 years of operation (Lejars et al. 2012).
• Ship speed: fouling control coating selection may rely on the ship speed which is also a definitive
parameter for ship type. Non-biocidal, foul-release coatings provide a self-cleaning feature for a
certain range of ship speed (Yebra et al. 2004).
• Operating profile: the ship operating profile may vary based on the type of ship and
contract. Ship type and operational behavior play a significant role in coating selection as well.
For example, slow polishing antifouling and foul-release coatings are more suitable for high-
speed vessels with short idle time periods, while fast polishing coatings are more effective to keep
the hull surface smooth for slow vessels with long idle time periods (Yebra et al. 2004).
• Any legal requirements for the sale and use of the antifouling system.

According to the International Organization for Standardization (ISO), a life cycle perspective is required to assess the whole of the consecutive and interlinked stages of a product system, from raw material acquisition or
generation from natural resources to final disposal (ISO 2006). As described by Curran (2006) life
cycle assessment is a way to evaluate a system or product by taking everything into account in
relation to the subject in question from beginning to end. This approach covers all processes for a
product or system from manufacturing to the disposal or recycling of it (Wang and Zhou 2018).
According to Curran (2017), there are four interactive steps to conduct LCA on a system or product.
The first step should explain the goal and scope of the conducted LCA analysis. The next step, namely
Inventory analysis, presents the collected information about materials, energy and emissions. The
environmental impact is assessed in Impact assessment step, and finally, the results are presented to
decision makers in the Interpretation step.

Although it is well-established and extensively used in many industrial sectors, its application in the marine industry is recent and limited, as a ship is a comparatively more complex system than typical industrial products. Life cycle studies in the marine industry were introduced by Fet (2002) and the
study showed that LCA method could be employed for ships, however boundary selection plays an
important role and may lead to conflicting results. Shama (2005) presented the detailed ship’s life
cycle and stated the importance of applying LCA in the marine industry. Based on a life cycle
perspective, the design-software was used to assess various green technologies along with their
environmental impacts (Tincelin et al., 2010). Chatzinikolaou and Ventikos (2015) conducted a
detailed life cycle impact assessment on the hull subsystem of the ship. They estimated life cycle air
emissions of a Panamax oil tanker by employing a mathematical framework (Chatzinikolaou and
Ventikos 2015). Mountaneas et al. (2014) made a comparison for the environmental impacts of a
tanker, bulk carrier and container ship. Wang et al. (2017) applied the LCA process to a short-route ferry to investigate the optimum maintenance and construction choices by considering life cycle cost
and environmental impacts. Dong and Cai (2019) compared the environmental impacts of two
different design solutions, i.e. light hull and heavy hull, for a Panamax bulk carrier. Demirel et al.
(2018) developed a model for life cycle assessment of antifouling coatings, and two different coatings
were compared in terms of life cycle costs and environmental impacts.

However, there exists no study investigating the effects of ship operating profile on fuel penalty due
to biofouling accumulation on the antifouling coatings and hence on Greenhouse gases (GHG)
emissions in the framework of a life cycle assessment.

In this study, the effect of ship operating profile on fuel penalty due to biofouling was evaluated using
life cycle assessment (LCA) method. The LCA model used by Demirel et al. (2018) was adapted by
employing the time-dependent biofouling growth model proposed by Uzun et al. (2018). Then, three
case studies were carried out to investigate the effects of operating profile and route on the fuel
penalty due to biofouling accumulation on the antifouling coating. The results were presented in terms of increases in effective power due to biofouling, fuel oil consumptions and costs, overall costs and CO2 emissions for 30 years of the life cycle of a bulk carrier ship.

2. LCA Modelling

2.1. Goal and scope definition

This study aims to investigate the effects of different ship operating profiles on the added fuel
consumption due to biofouling accumulation and hence extra GHG emissions by using LCA method.
The implementation of the model provides interpretations to end users, such as naval architects and
ship owners/operators, to decide if the performance of the coating is satisfactory for the operation in
question. The scope of the LCA consists of three life phases for the antifouling coatings including the
application of the coating, operation and maintenance (renewal or cleaning). Construction,
dismantling and maintenance for other reasons are ignored since they are not directly related to
the life cycle of the antifouling coating. The performance categories to be compared in the case
studies are fuel oil consumptions and costs, total costs and CO2 emissions.

2.1.1. System boundary, assumptions and limitations

The boundary of developed LCA and series of assumptions and limitations are outlined as below:

• The maintenance schedule is limited to dry-docking for hull maintenance every three years; neither machinery nor other maintenance is considered in this study.
• Each hull maintenance is assumed to restore a clean hull surface, so that the initial fuel consumption values apply again after dry-docking.
• The emissions occurring during paint applications were ignored. However, the cost of each
action related with paint application was considered.
• CO2 emissions due to paint production are taken to be equivalent to the CO2 emission due to
the electricity consumption, and the conversion factor of 0.53936 kgCO2e / kWh is used for
calculations according to Defra and DECC (2010). CO2 emissions due to main engine fuel
consumption are calculated according to emission factors given by (IMO 2015). Emissions
due to paint application in dry-docks are ignored.
• Average vessel life for handy max bulk carrier is taken as 30 years.
• The increases in fuel consumption due to biofouling during the operation phase are calculated based on the time-dependent biofouling growth model presented in Uzun et al. (2018).
• The average market price for heavy fuel oil is taken as 390 $/ton, according to January 2019 prices, https://shipandbunker.com/prices/av/global/av-g20-global-20-ports-average
• Environmental impact assessment is made by comparing the CO2 emissions in the case studies, taking the smooth condition as a benchmark.
• The increment in fuel oil consumption due to biofouling is assumed to be proportional with
the increase in effective power.
• The loss of the paint during application is assumed to be 30% for each paint application
process.

Fig.1: Boundary of the LCA system

The approach used in Dong and Cai (2019) was adapted and altered according to the demands of the present study. Fig.1 illustrates the LCA system boundary along with the energy, fuel, cost and emission flows. As shown in the figure, ship construction and dismantling are excluded.

2.2. Life cycle inventory analysis

The available data for LCA is presented in this section. The required data is divided into three
categories to conduct this life cycle analysis for evaluating the performance of antifouling coatings.
These are listed as follows. It is of note that, from this point onward, the data related to paint
applications and costs are not shown explicitly due to confidentiality issues. However, the
examples of the required data are shown in Table III, Table IV and Table V.

2.2.1 Ship and Operation data

A handy size bulk carrier was taken to be operated in three different real bulk carrier operations with
varying routes and operating profiles. The ship characteristics and required parameters are given in
Table I.

Operating profiles are named as Operation 1, 2, 3, and the details are given in Table II. As can be seen
in Table II idle days, average ship speed, sailing days and operating days are available for each
operation. In addition to this, three years of real noon reports were used for ship operating profile. The
30-year ship operating profile was generated by repeating the operating profile data of the first 3 years, assuming that the operating profile remains the same throughout the life cycle. Figs.2 to 4 represent
the observed ship routes of operations which are plotted by using GPS coordinates of ship reported in
noon data. These figures show differences and similarities between sailing routes of operations as well
as the regions of the ports where the ship spends time for loading or unloading operations.

Table I: Ship characteristics

Vessel type: Bulk carrier
Deadweight: 40000 ton
Length: 179 m
Breadth: 28 m
Design draft: 10.6 m
Wetted surface area: 7350 m²
Engine power: 6.6 MW
Endurance: 25000 NM
Fuel type: HFO
FO consumption @ design draft and speed: 20.4 t/day

Table II: Ship operation data

Data | Operation 1 | Operation 2 | Operation 3
Idle days including port stays in 3 years (day) | 326 | 507 | 284
Average speed (knot) | 14 | 14 | 14
Sailing days in 3 years | 769 | 588 | 811
Operating days in 3 years | 1095 | 1095 | 1095

Fig.2: Ship route of Operation 1

Fig.3: Ship route of Operation 2

Fig.4: Ship route of Operation 3

2.2.2. Antifouling coating production

The production of the paint generates emissions indirectly due to the energy consumed in acquiring and processing the raw materials. However, only the emissions due to electricity consumption are taken into consideration as output for this LCA. It is assumed that the paints are produced using purchased electricity, and the conversion factor of 0.53936 kgCO2/kWh is used according to Defra and DECC (2010). Besides, the selling price to the ship owner is taken into consideration since the life cycle costs are also to be evaluated.

Table III: CO2 emission conversions during the production stage of antifouling coating

Paint | litre/m² | kWh/litre | kWh/m² | Conversion factor (kgCO2/kWh) | kg CO2/m²
Anticorrosive
Tie-coat
Antifouling (1st coat)
Antifouling (2nd coat)
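
As an illustration of the conversion chain behind Table III (the actual per-coat figures are withheld), the sketch below multiplies assumed paint consumption and energy-intensity placeholders by the Defra and DECC (2010) electricity factor quoted above; only that factor and the wetted surface area come from the text.

# Illustrative paint-production CO2 calculation following the Table III
# columns: litre/m2 -> kWh/m2 -> kgCO2/m2. The consumption and energy figures
# are invented placeholders.
ELECTRICITY_FACTOR = 0.53936       # kgCO2 per kWh, Defra and DECC (2010)
WETTED_SURFACE_AREA = 7350.0       # m2, from Table I

coats = {                          # coat: (litre/m2, kWh/litre), placeholders
    "anticorrosive": (0.20, 1.5),
    "tie-coat":      (0.10, 1.5),
    "antifouling_1": (0.15, 1.8),
    "antifouling_2": (0.15, 1.8),
}

total_kg_co2 = 0.0
for coat, (litre_per_m2, kwh_per_litre) in coats.items():
    kg_per_m2 = litre_per_m2 * kwh_per_litre * ELECTRICITY_FACTOR
    total_kg_co2 += kg_per_m2 * WETTED_SURFACE_AREA
    print(f"{coat}: {kg_per_m2:.3f} kgCO2/m2")

print(f"one full application: {total_kg_co2:.0f} kgCO2")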

2.2.3 Antifouling coating application

As a part of coating application on a ship, initial paint application in shipyards and maintenance
processes in dry-docks include a series of surface operations listed in Table IV. These surface
operations may vary in price due to the usage of different materials and equipment. Surface
operation costs are taken into account for the initial application and all maintenance operations in 30
years of the life cycle period.

Table IV: Cost of surface treatment operations

Surface operation type | Cost per unit area ($/m²)
High pressure fresh water washing
Wash down after the first coat
Grit blasting 1st
Grit blasting 2nd
Anticorrosive
Tie-coat
Antifouling (1st coat)
Antifouling (2nd coat)

Table V shows an example of the required data on the costs of coating products and the required amount of these products per m² of hull surface.

Table V: Cost of paint products

Paint | Cost ($/litre) | Amount (litre/m²)
Anticorrosive
Tie-coat
Antifouling (1st coat)
Antifouling (2nd coat)

The costs of each action of the initial and dry-dock paint application stage, as well as the paint costs,
are considered.

3. Results

3.1. Increases in Effective power

Increases in effective power (PE) due to biofouling accumulation at the design speed of 14 knots were calculated via the time-dependent biofouling prediction model proposed by Uzun et al. (2018) and then employed in the life cycle model. Three different sets of 3-year ship operation data were used in the model, and at the end of each three-year period the ship underwent hull maintenance. This process repeats itself over the 30 years of the life cycle.

As seen from Fig.5, the biofouling accumulated during Operation 2 caused the most considerable effect on PE, with an 86% increase with respect to the clean condition. This significant increase in PE can be attributed to the fact that 47% of the total time of Operation 2 was stagnant. In addition, it was observed that ~90% of the idle days were spent in a region between 20° and 30° latitude, which can be assessed as a medium fouling risk region. Operation 2 represents an extreme condition and is used for comparison in this LCA; under normal conditions, the maintenance interval for such an operation would be 1 year.

The increase in PE for Operation 1 due to biofouling was predicted to be 37% with respect to the clean hull condition. As can be seen in Fig.5, the ship was not active during ~30% of the total operation time in 3 years. In addition, ~44% of the idle days took place in the region between 0° and 10° latitude, which can be assessed as a high fouling risk region. On the other hand, a considerable percentage of the idle days occurred in comparatively cold regions at latitudes higher than 30°, where biofouling growth is slow.

Fig.5: Increase in effective power for 3 years ship operations along with the relative frequency of idle
time occurrence in these operations

The results presented in Fig.5 indicate that the increment in PE due to biofouling during Operation 3 was predicted to be 20%. This comparatively low increase can be attributed to the fact that Operation 3 is the most active operation, with only 26% stagnant time over the three years of ship operation, compared to Operation 1 and Operation 2. In addition, the ship spent only ~20% of its total idle days in the region between 0° and 10° latitude, which can be assessed as a high fouling risk region. The figure indicates that a substantial portion of the idle time in Operation 3 occurred in relatively cold regions where biofouling growth is restricted by low temperature.

3.2. Fuel oil consumption and costs

Fig.6 demonstrates the fuel oil consumptions in clean (Benchmark clean) and fouled (Operation
fouled) conditions as well as the difference (Operation difference) between these two conditions for
each operation over 30 years of the life cycle.

It is seen from the comparison in Fig.6 that the fuel oil consumption in clean condition shows a considerable difference for Operation 2 compared to the other operations. As can be seen from Fig.5, the idle period of Operation 2 is comparatively longer than in the other operations, which leads to lower fuel consumption. On the other hand, the fuel oil consumption values in clean condition for Operation 1 and Operation 3 are similar, since the numbers of sailing days of these operations are close to each other. Fuel oil consumptions for the clean condition are ~153.4×10³ ton for Operation 1, ~117.6×10³ ton for Operation 2 and ~158.4×10³ ton for Operation 3.

The results illustrated in Fig.6 showed that fuel oil consumptions in fouled condition were predicted to be ~180.4×10³, ~171.8×10³ and ~172.1×10³ ton for Operation 1, 2 and 3 respectively over 30 years of the life cycle. The fuel penalties due to biofouling were predicted to be ~26.6×10³, ~54.3×10³ and ~13.7×10³ ton for Operation 1, 2 and 3 respectively. It is interesting to note that the ship in Operation 2 burned much less fuel oil overall, as it has fewer sailing days; however, the longer stagnant times led to a higher increase in effective power and hence a larger extra fuel oil consumption due to biofouling accumulation.

Fig.6: Fuel oil consumptions over 30 years of life cycle

Fig.7: Fuel oil consumption costs for 30 years life cycle

Fig.7 compares the fuel consumption costs for clean and fouled conditions and the differences
between these conditions.

The fuel oil consumption costs were predicted to be ~$60 million for Operation 1, ~$45.8 million for Operation 2 and ~$61.8 million for Operation 3 in clean condition, whereas these values rose to ~$70.4 million, ~$67 million and ~$67.1 million in fouled condition for Operation 1, 2 and 3 respectively. The results presented in Fig.7 indicate that the fuel penalty costs due to biofouling were calculated to be ~$10.4 million for Operation 1, ~$21.2 million for Operation 2 and ~$5.3 million for Operation 3.
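
These cost figures follow directly from the fuel penalties reported above and the assumed heavy fuel oil price of 390 $/ton, as the short cross-check below (a sketch using only numbers quoted in this paper) shows.

HFO_PRICE = 390.0                          # $/ton, January 2019 average
fuel_penalty_tons = {"Operation 1": 26.6e3,
                     "Operation 2": 54.3e3,
                     "Operation 3": 13.7e3}

for operation, tons in fuel_penalty_tons.items():
    print(f"{operation}: ~${tons * HFO_PRICE / 1e6:.1f} million")
# prints ~$10.4, ~$21.2 and ~$5.3 million, matching the values quoted above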

3.3. Total costs

Fig.8 illustrates the total costs, including initial paint application, maintenance and fuel oil costs, over 30 years of the life cycle for each operation. As can be seen from the figure, the same paint application and maintenance procedure was assumed for each operation. Although this does not differentiate the operations, these costs were taken into account since the aim is to find the total costs over 30 years of the life cycle.

Fig.8: Paint application and maintenance costs together with total costs for 30 years of life cycle

The results presented in Fig.8 show that the total paint and maintenance costs were predicted to be ~$1.2 million for each operation, whereas the total costs were predicted to be ~$71.6 million, ~$68.2 million and ~$68.3 million for Operation 1, 2, and 3, respectively.

3.4. Life Cycle Impact Analysis

The life cycle impact analysis was conducted by comparison against the benchmark condition, i.e. the clean and ideal hull. Since the study does not aim to quantify the Global Warming Potential (GWP) of the operations, that sort of analysis was not conducted.

Fig.9 illustrates the CO2 emissions due to fuel oil consumption as well as the emissions due to paint production for each operation over 30 years of the life cycle. As can be seen from Fig.9, CO2 emissions due to fuel oil consumption were found to be ~479×10⁶ kg, ~430×10⁶ kg and ~493×10⁶ kg for Operation 1, 2, and 3, respectively, in clean condition, whereas these values changed to ~562×10⁶ kg, ~535×10⁶ kg and ~536×10⁶ kg, respectively, in fouled condition.
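
For Operations 1 and 3, the clean-condition values are consistent with applying the IMO (2015) carbon factor for HFO of 3.114 t CO2 per ton of fuel to the consumption figures of Section 3.2, as the brief check below illustrates.

CF_HFO = 3.114   # t CO2 per t fuel, IMO (2015) factor for HFO
for operation, fuel_tons in {"Operation 1": 153.4e3,
                             "Operation 3": 158.4e3}.items():
    co2_million_kg = fuel_tons * CF_HFO / 1e3   # 10^3 t CO2 = 10^6 kg
    print(f"{operation}: ~{co2_million_kg:.0f} x10^6 kg CO2")
# ~478 and ~493 x10^6 kg, in line with the clean-condition values above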

The results shown in Fig.9 indicate that the CO2 emissions due to paint production were calculated to be only ~4×10⁶ kg, while the total emissions were found to be 566×10⁶ kg for Operation 1, 539×10⁶ kg for Operation 2 and 540×10⁶ kg for Operation 3.

The CO2 emissions due to paint production are negligible when compared with those due to fuel oil consumption. It was observed that the highest amount of CO2 emissions due to biofouling accumulation occurred in Operation 2, followed by Operation 1 and Operation 3, respectively. Since this study focuses only on CO2 emissions, other emissions were not assessed as part of the life cycle impact analysis. However, emissions such as CH4, N2O, NOx, CO and NMVOC can also be calculated with the emission factors provided by IMO (2015).

Fig.9: CO2 emissions due to fuel oil consumptions along with total emissions including paint and
maintenance emissions

4. Conclusions and Discussions

The effects of ship operating profiles on the effective power of a ship and on the fuel penalties due to biofouling were investigated via a life cycle assessment based on a time-dependent biofouling growth prediction model. Three different sets of real 3-year ship operation data were used to predict the increases in effective power and fuel oil consumption due to biofouling. The increases in effective power were
obtained at a design ship speed of 14 knots. The costs for paint applications and maintenance
operations as well as the costs for paint productions were also taken into account. In addition, CO2
emissions due to fuel oil consumption and paint production were included for the life cycle. The fuel
oil consumptions, fuel oil costs, paint and maintenance costs, total costs and CO2 emissions were
presented over 30 years of the life cycle.

The increases in the effective power for the ship were predicted to be 37% for Operation 1, 86% for
Operation 2 and 20% for Operation 3. It was shown that these increases in effective power due to
biofouling caused extra fuel costs of $10.4 million, $21.2 million and $5.3 million for Operation 1, 2,
and 3, respectively.

The total costs were predicted to be ~$71.6 million, ~$68.2 million and ~$68.3 million, whereas the total CO2 emissions were found to be 566×10⁶ kg, 539×10⁶ kg and 540×10⁶ kg for Operation 1, 2, and 3, respectively.

Having shown the applicability of the LCA method for investigating the effect of ship operational
profiles on fuel penalty due to biofouling accumulation on ship hulls, this approach can be used to
decide maintenance intervals for the specific ship and operation in question. In this way, paint application, maintenance and fuel oil consumption costs can be compared in order to have cost-effective and environmentally friendly maintenance strategies.

By including the GHG emissions due to maintenance processes, the model can be updated so that an environmental impact assessment evaluating GWP, an important indicator for the marine sector, can be conducted. This study also suggests that LCA is an applicable method to evaluate the performance of an antifouling coating in terms of additional fuel oil consumption costs and GHG emissions.

Acknowledgements

The authors gratefully acknowledge that the research presented in this paper was generated as part of
the project ‘Time-Based Biofouling Model For Ships’ funded by The Carnegie Trust for the Universities of Scotland, grant agreement number RIG007452.

References

CHATZINIKOLAOU, S.D.; VENTIKOS, N.P. (2015), Holistic framework for studying ship air emissions in a life cycle perspective, Ocean Engineering 110, pp.113-122

CURRAN, M. (2006), Life Cycle Assessment: Principles and Practice, http://www.cs.ucsb.edu/~chong/290N-W10/EPAonLCA2006.pdf

CURRAN, M.A. (2017), Overview of goal and scope definition in life cycle assessment, Goal
and Scope Definition in Life Cycle Assessment, Springer

DEFRA & DECC (2010), Guidelines to Defra/DECC’s GHG conversion factors for company reporting: Methodology paper for emission factors, http://archive.defra.gov.uk/environment/business/reporting/pdf/101006-guidelines-ghg-conversion-factors-method-paper.pdf

DEMIREL, Y.K.; UZUN, D.; ZHANG, Y.; TURAN, O. (2018), Life Cycle Assessment of Marine Coatings Applied to Ship Hulls, Trends and Challenges in Maritime Energy Management, Springer

DONG, D.T.; CAI, W. (2019), A comparative study of life cycle assessment of a Panamax
bulk carrier in consideration of lightship weight, Ocean Engineering 172, pp.583-598

FET, A. (2002), Environmental reporting in marine transport based on LCA, J. Marine Design and Operations B (B1), pp.1476-1556

IMO (2011), Annex 26, Resolution MEPC.207(62), 2011 Guidelines for the control and
management of ships' biofouling to minimize the transfer of invasive aquatic species, IMO,
London

IMO (2015), Third IMO Greenhouse Gas Study 2014, IMO, London

ISO (2006), ISO 14040:2006 Environmental management - Life cycle assessment - Principles and framework; ISO 14044:2006 Environmental management - Life cycle assessment - Requirements and guidelines, ISO, Geneva

JUDITH, S.; TRUBY, K.; WOOD, C.D.; STEIN, J.; GARDNER, M.; SWAIN, G.; KAVANAGH, C.; KOVACH, B.; SCHULTZ, M.; WIEBE, D.; HOLM, E.; MONTEMARANO, J.; WENDT, D.; SMITH, C.; MEYER, A. (2003), Silicone foul release coatings: Effect of the interaction of oil and coating functionalities on the magnitude of macrofouling attachment strengths, Biofouling 19, pp.71-82

LEJARS, M.; MARGAILLAN, A.; BRESSY, C. (2012), Fouling release coatings: A nontoxic alternative to biocidal antifouling coatings, Chem. Rev. 112 (8), pp.4347-4390

MOUNTANEAS, A.; GEORGOPOULOU, C.; DIMOPOULOS, G.; KAKALIS, N. (2014), A model for the life cycle analysis of ships: Environmental impact during construction, operation and recycling, Maritime Technology and Engineering, CRC Press

SHAMA, M. (2005), Life cycle assessment of ships, Maritime Transportation and Exploitation of Ocean and Coastal Resources, 11th Int. Congress of the Int. Maritime Association of the Mediterranean, pp.1751-1758

SWAIN, G.W.; KOVACH, B.; TOUZOT, A.; CASSE, F.; KAVANAGH, C.J. (2007),
Measuring the Performance of Today's Antifouling Coatings, J. Ship Production 23, pp.164-
170

TINCELIN, T.; MERMIER, L.; PIERSON, Y.; PELERIN, E.; JOUANNE, G. (2010), A life
cycle approach to shipbuilding and ship operation, Int. Conf. on Ship Design and Operation
for Environmental Sustainability, pp.10-11

UZUN, D.; OZYURT, R.; DEMIREL, Y.K.; TURAN, O. (2018), Time based ship added resistance prediction model for biofouling, Int. Marine Design Conf., Helsinki

WANG, H.; OGUZ, E.; JEONG, B.; ZHOU, P. (2017), Optimisation of Operational Modes of Short-Route Hybrid Ferry: A Life Cycle Assessment Case Study, Maritime Transportation and Harvesting of Sea Resources

WANG, H.; ZHOU, P. (2018), Systematic evaluation approach for carbon reduction method
assessment – A life cycle assessment case study on carbon solidification method, Ocean
Engineering 165, pp.480-487

YEBRA, D.M.; KIIL, S.; DAM-JOHANSEN, K. (2004), Antifouling technology - past, present and future steps towards efficient and environmentally friendly antifouling coatings, Progress in Organic Coatings 50, pp.75-104

Use of Digital Twins to Enhance Operational Awareness and Guidance
David Drazen, NSWC Carderock Division, Maryland/USA, david.drazen@navy.mil
Alysson Mondoro, NSWC Carderock Division, Maryland/USA, alysson.mondoro@navy.mil
Benjamin Grisso, NSWC Carderock Division, Maryland/USA, benjamin.grisso@navy.mil

Abstract

In this paper, we will describe the concept of a system-of-systems digital twin where multiple twins for
a representative surface combatant are used to provide increased situational awareness to the ship’s
crew. We describe the approach for calculation of fatigue damage and how multiple simple frequency
domain tools could be used for course and speed recommendations at a range of timescales via a multi-
objective routing tool. Extension of the system to include models of the propulsion train and insight into
seakeeping guidance will be discussed.

1. Introduction

We are currently in the midst of what is being called a fourth industrial revolution, where society
leverages vast amounts of data to derive new insights into the world. Recent advances in high
performance computing, advanced data analytics, and artificial intelligence have resulted in numerous
commercial deployments of “digital twin” systems with a focus on proactive condition‐based
maintenance (CBM). Blends of physics-based models and data-driven models allow for proactive
identification of the likelihood of failure and for better management of the logistics tail associated
with ship maintenance. While the use of CBM can realize cost savings for end-users, there are also
significant operational benefits that can arise from the use of digital twins.

The concept of a digital twin as a technology trend has been on the rise for a number of years. It has
appeared in Gartner’s top technology trends of 2017, 2018, and 2019. Figure 1 is a screen shot from the
IBM Watson News Explorer showing 96 articles referencing the term “digital twin” that were published over an approximately two-month period in late 2018.

Fig.1: Screen shot of IBM Watson’s News Explorer showing the number of articles referencing “digital
twin”.

We are starting to reach a point where we understand the advantages and limitations of
digital twins. Gartner’s 2018 Hype Cycle for Emerging Technologies puts “digital twin” at the Peak of
Inflated Expectations. While we might be nearing this peak from an industry perspective, the advantages
of a digital twin are evident to the authors and we maintain a healthy skepticism over what can and can’t
be achieved. This paper will present a description of how a digital twin can be utilized to enhance

344
situational awareness for ship operators. The focus of the discussion will be on naval ships, but
analogues can be drawn for commercial applications.

2. Definition of a digital twin

One issue we need to face is that different people have differing opinions of what a digital twin is.
Morais (2018) defines a digital twin as a “digital representation of a physical object” and highlights that
there should be a single digital version of each physical ship. Throughout the lifecycle, artifacts associated
with the twin should be included to ensure it represents the as-is condition of the platform. These
artifacts include such items as service records, repair and retrofit changes, class surveys, documentation
of fouling removal, plus actual sensor data of what is going on throughout the vessel. Much of this falls
under the concept of the digital thread, a linkage of data and artifacts across the lifecycle.

The digital twin, however, should only be as detailed as needed for the task at hand. Overly complex
models could become expensive to build and maintain. West (2017) looked at the cost of an aircraft
digital twin as envisioned by Tuegel (2011): an ultra-realistic, O(trillion) Degree of Freedom (DOF)
model of each individual plane as well as the attendant digital thread. While a lot of assumptions are
made, the cost estimates for the digital thread reach $80-$180 billion along with $1-2 trillion for
development and sustainment of the digital twin, including the computational power needed to exercise
such a twin. This, obviously, isn’t sustainable and it drives home the point that the twin needs to be
affordable enough for a return-on-investment based on the application of the twin.

We feel that the twin itself isn’t necessarily a single item, but rather a system-of-systems model where
the individual twins exchange information amongst one another. The real key is that the twin provides
insight into real-time system health for individual assets, capturing differences in wear and tear for each
platform. These insights allow for an assessment of the “as-is” condition, which can be used by the
operators to predict expected performance over a range of timescales. For operations, in-situ
measurements of the ocean environment might lead to recommended changes in course and speed, near-
term weather forecasts could identify more optimal ship routes, and climatological descriptions of the
ocean environment could provide insights on expected impacts to a platform during an upcoming
deployment. Figure 2 provides an overview of how this might be put into action.

Fig.2: Example of a digital twin that highlights potential impact to operations based on environmental
data over a range of timescales.

The definition that we’ve taken here is in line with Morais (2018) and follows that of Erikstad (2018). We
agree that a twin is not an end product in itself, but rather an intermediary step that provides users with
improved insights into the platform. Erikstad (2018) defines a number of design patterns for twins
heavily influenced by the approach laid out within software engineering by Gamma et al. (1994). Our
views align with many of the structural patterns that he lays out: a baseline twin based on physics-based
model behavior, a load-based twin that uses the operating context rather than the asset response, a ML
proxy where behavior is based on data-driven modeling, and a benchmark twin where we use a model
of the asset in conjunction with actual data to monitor the system for expected behavior.

3. Approach

The work described in this paper is still in progress and in the subsequent sections we will describe the
process by which fatigue damage is calculated, how the ship routing would be conducted, and how we intend to utilize modelling of Hull, Mechanical, and Electrical (HM&E) systems to provide additional
insights to the ship operators. Where needed, we have used a representative naval hullform for our
studies, Naval Surface Warfare Center Carderock Division (NSWCCD) Model 5415. While this model
only defines the outer lines of the hull, we can use the approach taken by Gheriani (2012) and use values
representative of a destroyer for structural details and machinery.

First, we will describe, in general, the response of a ship in a seaway via frequency domain calculations.
The elevation of the ocean surface is a stochastic process and computations are simplified by assuming
it to a be stationary Gaussian random process. As the ocean forces ship motions, the response of the
ship can also be modelled as a stationary Gaussian random process. If we limit the analysis to small
wave amplitudes, this allows for the formation of a Response Amplitude Operator (RAO), which
defines a linear relationship in the frequency domain between the forcing (ocean) and the response (ship
motion). These approaches have been in use for many years (St. Denis and Pierson, 1953) and can be
computed quickly with any modern computer hardware. Linear strip theory approaches are common
for the generation of RAOs due to their advantage of low computational cost. RAOs can also be
generated using non-linear 6 Degree-Of-Freedom (DOF) time domain tools such as the Large
Amplitude Motion Program (LAMP, Shin et al. 2003).

Decomposing the wave elevation into its Fourier components allows for the generation of a wave
spectrum (forcing) and for calculation of the ship response spectrum via
𝑆𝑗(𝜔) = |𝐻𝑗(𝜔, 𝜃; 𝑈)|² 𝑆𝜂(𝜔)    (1)

where 𝑆𝑗 (𝜔) is the spectrum of a given ship response j, 𝑆𝜂 (𝜔) is the spectrum of the wave elevation,
𝐻𝑗 (𝜔, 𝜃; 𝑈) is the RAO for that ship motion, 𝜔 is the radian wave frequency, 𝜃 is the wave direction,
and U is the ship speed. For non-zero forward speed, the wave frequency in (1) should be replaced with
the encountered wave frequency,

𝜔𝑒 = 𝜔 − (𝜔² 𝑈 cos 𝜃)/𝑔    (2)

which takes into account the effect of ship speed and relative wave heading in deep water. Here 𝜔𝑒 is
the encountered frequency, and 𝑔 is the acceleration due to gravity. As we will discuss in Section 4, use
of the expected ship speed based on the current state of the propulsion train will allow for calculation
of encounter frequencies that can be used to better estimate ship motions and fatigue damage.
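
A minimal numerical sketch of Eqs. (1) and (2) is given below; the RAO and wave-spectrum shapes are simple placeholders rather than output from a strip-theory or LAMP computation, and 7.2 m/s corresponds to the 14 knots used later in the paper.

import numpy as np

g = 9.81                                       # m/s2

def encounter_frequency(omega, U, theta):
    # Eq. (2): deep-water encounter frequency for speed U and wave heading theta
    return omega - omega**2 * U * np.cos(theta) / g

omega = np.linspace(0.2, 2.0, 200)             # wave frequencies, rad/s
S_eta = np.exp(-((omega - 0.6) / 0.15)**2)     # placeholder wave spectrum
H_heave = 1.0 / (1.0 + (omega / 0.8)**4)       # placeholder heave RAO magnitude

# Eq. (1): response spectrum; with forward speed the RAO would be evaluated
# at the encountered frequency from Eq. (2)
S_heave = np.abs(H_heave)**2 * S_eta
omega_e = encounter_frequency(omega, U=7.2, theta=np.pi)   # head seas

print("encountered frequencies: %.2f to %.2f rad/s" % (omega_e.min(), omega_e.max()))
print("heave response variance: %.4f" % np.trapz(S_heave, omega))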

3.1 Fatigue Damage

Fatigue damage caused by interaction of the ship with the environment can have impacts on service
life, impose operational restrictions, and potentially result in increased maintenance costs (both money
and schedule). Issues are similar to those encountered by the USAF as described by Glaessgen and
Stargel (2012) and insights on crack modelling from those platforms could be useful for naval ships.
Aircraft, however, return to a base or aircraft carrier post-mission which allows for access to data
typically inaccessible by decision makers during a mission. Naval ships have much longer deployment
durations and longer time periods between access to these data or to the ship itself. Furthermore, as we continue to build naval ships from materials other than steel, managing fatigue life becomes more important due to the nature of the failure modes.

Schirmann et al. (2019) looked at the use of a digital twin to inform cumulative fatigue damage
assessments and evaluated four different routes in the Pacific Ocean. The authors found that one of the
routes accumulated large amounts of damage along one segment, which they attributed to large head
wind seas in that part of the simulation. They use this example as an indication of the importance
of modeling damage and how these insights can be used to balance operational needs (deployment vs
servicing) across the fleet. Thompson (2018a) discusses the use of ship location, environmental data,
and spectral fatigue approaches to estimate fatigue damage using “virtual” sensors, i.e. without in-situ
hull strain or acceleration measurements. Using this virtual approach, Thompson (2018b) looked at the
variation in estimated fatigue damage from ten naval vessels within the same class, but with half of the
ships on the East and West Coasts of the United States. An operational profile and environmental
exposure were based on ship speed and input from operators and then combined with environmental data
from BMT’s Global Wave Statistics database. The author found that damage estimates for the East
Coast ships were higher than those for the West Coast by about 40-50% but notes that the conclusions are
dependent on a number of assumptions and uncertainties in the data used. Improved real-time data, such
as from a digital twin, would be a first step in replacing these assumptions with facts. We take a similar
virtual sensing approach for our fatigue damage informed digital twin.

Fatigue encompasses the formation and growth of cracks that may occur under repeated loads. As such,
fatigue is a major concern in the long-term performance of ship structures since they are exposed to
cyclic loading due to the operational use of the ship. There are two main approaches for evaluating the
fatigue of a structural detail: (1) the approach based on the Stress-Number of cycles (S-N) curve and (2)
the fracture mechanics (FM) approach. This work focuses on the S-N approach and the assumption that
fatigue damage accumulation is a linear phenomenon and can be modelled using Miner’s rule (Miner
1945). Miner’s damage accumulation assumes that a complex load sequence can be decoupled into
cycles of constant amplitude and that each cycle contributes to the total cumulated damage. One of two
methods could be used to determine the number of cycles at a given stress range that a structural detail
is subjected to. A cycle counting approach can be used if time history data is available or a frequency-
based approach if the stress response spectrum, Sj (ω), is known. For this work, since Sj (ω) is available,
the single-moment (SM) method described by Larsen and Lutes (1991) is used. For further details on
SM and a comparison with alternative approaches, the reader is directed to Larsen and Lutes (1991); a
brief summary is provided below.

The damage accumulation rate, as defined with the SM method, for a given operational condition i takes
the form

𝐷̇𝑖 = (1/(2𝜋)) (2^(3𝑚/2)/10^𝐾) 𝛤(1 + 𝑚/2) (𝜆2/𝑚)^(𝑚/2) (3)

where,

𝜆2/𝑚 = ∫₀^∞ 𝜔^(2/𝑚) 𝑆𝑗 (𝜔) 𝑑𝜔 (4)

where 𝑆𝑗 (𝜔) is evaluated with 𝐻𝑗 (𝜔, 𝜃; 𝑈) taken as the RAO of the stress range at the detail of interest. The damage accumulation
rate must therefore be calculated each time there is a change in operational condition (i.e. change in
heading, speed, or seaway). The total damage accumulated over a route is

𝐷 = ∑_(𝑖=1)^(𝐼) 𝐷̇𝑖 𝑡𝑖 (5)

where ti is the time spent in operational condition i, and I is the total number of operational conditions
along the route.
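
A minimal numerical sketch of Eqs.(3)-(5) is given below. It assumes a one-slope S-N curve whose
constants enter as the 10^K and m of Eq.(3); all inputs are illustrative, not taken from this work.

```python
# Minimal sketch of the single-moment (SM) fatigue calculation, Eqs.(3)-(5);
# S-N constants m and K and the stress spectra are illustrative inputs.
import numpy as np
from math import gamma, pi

def sm_damage_rate(omega, s_stress, m, K):
    """Damage accumulation rate for one operational condition, Eqs.(3)-(4)."""
    lam = np.trapz(omega**(2.0 / m) * s_stress, omega)  # single moment, Eq.(4)
    return (1.0 / (2.0 * pi)) * (2.0**(1.5 * m) / 10.0**K) \
        * gamma(1.0 + m / 2.0) * lam**(m / 2.0)

def route_damage(conditions):
    """Total damage over a route, Eq.(5): sum of rate times time in condition.

    conditions: iterable of (omega, stress_spectrum, m, K, time_in_condition)."""
    return sum(sm_damage_rate(om, s, m, K) * t_i
               for (om, s, m, K, t_i) in conditions)
```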

3.2 Hull, Mechanical, and Electrical (HM&E) Digital Twins

Our work in modelling of the propulsion train is still ongoing, but we will describe the motivation and
the approach that we are planning to take. As we’ve discussed, the power of a digital twin comes into
play when we can leverage our best understanding of the ship’s current condition and use that to
evaluate future performance. Most work in the literature surrounding digital twin focuses on the
application to Condition-Based Maintenance (CBM), where components are serviced only when
needed, not just because an arbitrary date has been reached. Lazakis et al. (2018) used a combination of
a Fault Tree Analysis (FTA) and a Failure Mode and Effects Analysis (FMEA) to identify critical
machinery components in a Panamax size container ship. Once these critical items were identified, a
neural network was built to predict future states of those items.

We are working on what Erikstad (2018) terms a “benchmark twin” where a twin of a piece of
machinery, say a main propulsion engine, is developed and run in parallel with the data coming off
the system; see Fig.3 for an example.

Fig.3: Example of data flow for a benchmark twin of a shipboard engine.

This paradigm allows for not only detection of anomalous behavior from the engine, but also prediction
of output variables, e.g. shaft RPM and torque. Cipollini et al. (2018) used this approach to build a data-
driven model (DDM) of a naval ship’s propulsion plant. Use of the DDM avoided having to build a
complex state model of the machinery. The authors found success using simulated data for a COmbined
Diesel ELectric And Gas (CODELAG) plant with a variety of machine learning approaches but found
the best correlation with neural networks. Based on the data available, these Benchmark Twin
approaches may or may not be sufficient for predictive fault analysis and we may need to pursue more
“traditional” predictive analytics approaches to address those needs.
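
As an illustration of this paradigm, the sketch below casts a benchmark twin as a regression model (a
small neural network, echoing the model class Cipollini et al. (2018) found to correlate best) that runs
in parallel with the live signals and flags large residuals. The signal names, model size and threshold
logic are illustrative assumptions, not a description of any deployed system.

```python
# Minimal sketch of a benchmark twin: a data-driven model trained on healthy
# operation, run in parallel with live data; large residuals flag anomalies.
# Inputs/outputs (e.g. fuel index -> shaft rpm) and threshold are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

class BenchmarkTwin:
    def __init__(self, residual_threshold):
        self.model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
        self.residual_threshold = residual_threshold

    def fit(self, X_healthy, y_healthy):
        # Train only on data known to represent healthy operation.
        self.model.fit(X_healthy, y_healthy)

    def check(self, x_now, y_measured):
        """Return expected output, residual, and an anomaly flag."""
        y_expected = self.model.predict(np.atleast_2d(x_now))[0]
        residual = y_measured - y_expected
        return y_expected, residual, abs(residual) > self.residual_threshold
```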

These predicted values (e.g. shaft RPM and torque), along with the current angle of the controllable pitch propeller (if relevant), the current
ballast condition, biofouling condition of the hull and prop, and the associated characteristics of the
propeller (J-KT curves) allow for estimation of what the expected ship speed would be in a given
environment. These updated estimates of capability can be incorporated into the ship routing algorithm
as well as be used to update the operating profile of the ship for near-term and climatological
forecasting. The updated ship speed can also be used in (2) to calculate the actual encounter frequency
and in any subsequent RAO calculations for fatigue damage or ship motion calculations.

3.3 Ship Routing

Orlandi et al. (2018) describe the impact of different added resistance calculations on fuel consumption
estimates for ship routing based on simplified models and forecasted environmental conditions. While
the paper isn’t clear on the specifics, it appears that they use a modified form of Dijkstra’s algorithm
for the selection of an optimal route. The environmental models have also included ensemble
predictions of the environment, which allows for variability in the forecasted weather to be used for
probabilistic routing options. The authors ran a number of simulated cases using an approximately 167m
long ro-ro ship as a test case. The intent was to determine what the impact of various combinations of
wind and wave added resistance calculations was on the coefficient of added resistance, Caw. They
found that the added resistance in waves calculation was the largest contributor to variation in fuel
consumption (16%) when compared to their calm water baseline. They also found that the wind added
resistance could impact the solution, but less so than the wave effects.

Orihara and Yoshida (2018) describe a method for not only weather routing a ship, but a means of
comparing measured performance with computed estimates based on the encountered environment.
Comparisons were made for 20-minute time histories of Speed Over Ground (SOG) and fuel
consumption as well as standard deviations of pitch and roll. Via their “Sea-Navi” tool suite, end users
can evaluate potential routes not only for weather routing but also for motion limits or for fuel
conservation.

Much of the drive within the commercial shipping industry is focused on fuel efficiency as well as
regulatory compliance as outlined in regulatory guidance (IMO 2016). For naval ships, these
requirements are replaced with readiness and availability. Fuel consumption is taken into consideration
when routing ships, but any conserved fuel is more likely used for increased range. To address the
multiple needs of the Navy (and commercial industry), Sidoti et al. (2016) developed a tool that provides
ship routers with the ability to balance multiple objectives (time, distance, fuel consumption, safety,
etc.). A summary of the approach is given here, but the reader is directed to that paper and its references
for a full description of the approach.

The authors used a multiobjective shortest path algorithm with time windows allowing for ships to
pause at a given location via a waiting decision variable before continuing on to the final location at the
next time step. Furthermore, their approach takes into account variability in the ocean environment via
ensemble forecasts similar to the approach of Orlandi et al. (2018). The approach is bounded by an
envelope that is chosen prior to starting the calculations. The full set of Pareto solutions are obtained
when the algorithm is complete and solutions that lie along the Pareto front are presented to the end-
user for down selection. The power needed to achieve a desired speed is

𝑃𝑇𝑜𝑡𝑎𝑙 = 𝑃𝐶𝑊 + 𝑃𝑆𝑒𝑎 + 𝑃𝑆𝑤𝑒𝑙𝑙 + 𝑃𝑊𝑖𝑛𝑑 (6)

where PTotal is the total power required, PCW is the power required to achieve the desired speed in calm
water, and PSea, PSwell, and PWind represent the power needed to overcome the added resistances from the
wind sea, swells, and wind itself. PCW includes the calm water drag from testing, but also includes the
power needed to overcome drag due to hull and propeller fouling as well as drag due to the surface
current. The authors used strip theory calculations via the Ship Motions Program (SMP, Conrad 2005)
to determine added resistance based on the ocean environment and relative wave heading. These values
are provided in a Look-Up Table (LUT) in order to reduce the computational needs of the algorithm at
run-time. Once the total power required is known, the equivalent calm water speed is determined and
the fuel needs are calculated based on the fuel consumption relationship for a given engine.
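
A minimal sketch of how Eq.(6) could be evaluated at run-time from a precomputed LUT is given
below; the table layout, interface and the final fuel-rate relation are our own illustrative assumptions,
not the implementation of Sidoti et al. (2016).

```python
# Minimal sketch of Eq.(6) with LUT-based added-resistance terms; the table
# keys, calm-water model and fuel relation are illustrative assumptions.
def total_power(speed_kn, sea, swell, rel_wind, lut, p_calm_water):
    """P_Total = P_CW + P_Sea + P_Swell + P_Wind, Eq.(6), in kW."""
    p_sea = lut["sea"][(speed_kn, sea)]          # precomputed, e.g. by strip theory
    p_swell = lut["swell"][(speed_kn, swell)]
    p_wind = lut["wind"][(speed_kn, rel_wind)]
    return p_calm_water(speed_kn) + p_sea + p_swell + p_wind

def fuel_rate_kg_per_h(p_total_kw, sfoc_g_per_kwh):
    """Fuel rate from total power and a specific fuel consumption figure."""
    return p_total_kw * sfoc_g_per_kwh / 1000.0
```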

4. Summary

Increases in computational power and access to relevant data have enabled use of digital twins for greater
insight into asset performance. We feel that a system-of-systems approach where multiple twins of
subsystems provide increased awareness of “as-is” condition can bring multiple benefits to naval ships.
We’ve described an approach where we are working towards implementing twins of HM&E systems
to inform propulsion system health and predict ship speed, twins of fatigue life to inform the impact of
the ocean environment on ship service life, and a multiobjective routing tool to provide guidance over
a short-term (0-5 day) time window. Extensions to in-situ guidance based on the current environment
and insights on expected damage based on wind and ocean wave climatology were discussed, but not
explored in depth. We are continuing to refine and integrate these subsystems and to validate
them against data collected during sea trials.

Distribution statement

Approved for Public Release. Distribution is Unlimited.

Acknowledgements

The authors would like to thank Dr. Thomas Fu of Office of Naval Research for funding this work.

References

CIPOLLINI, F.; ONETO, L.; CORADDU, A.; MURPHY, A.J.; ANGUITA, D. (2018), Condition-
Based Maintenance of Naval Propulsion Systems with Supervised Data Analysis, Ocean Eng. 149,
pp.268-278

CONRAD, R.E. (2005), SMP95: Standard Ship Motion Program User Manual, NSWCCD
Hydromechanics Dept. Technical Report, NSWCCD-50-TR-2005/074

ERIKSTAD, S.O. (2018), Design Patterns for Digital Twin Solutions in Marine Systems Design and
Operations, 17th Conf. Computer and IT Appl. Maritime Ind., Pavone, pp.354-363

GAMMA, E.; HELM, R.; JOHNSON, R., VLISSIDES, J. (1994), Design Patterns – Elements of
Reusable Object-Orientated Software Code, Addison-Wesley

GLAESSGEN, E.H.; STARGEL, D.S. (2012) The Digital Twin Paradigm for Future NASA and U.S.
Air Force Vehicles, 53rd AIAA Conf. Structures, Struct. Dyn., and Matl, pp.1-14

GHERIANI, E. (2012), Fuel Consumption Methodology for Early Stages of Naval Ship Design, Master's
Thesis, MIT, Dept. of Naval Architecture and Marine Engineering, Cambridge, 83 pp.

IMO (2016), 2016 Guidelines for the Development of a Ship Energy Efficiency Management Plan
(SEEMP), MEPC.282(70), Annex 10, London

LARSEN, C.E.; LUTES, L.D. (1991), Predicting the Fatigue Life of Offshore Structures by the Single-
Moment Spectral Method, Stochastic Structural Dynamics 2, pp.91-120

LAZAKIS, I.; RAPTODIMOS, Y.; VARELAS, T. (2018), Predicting Ship Machinery System
Condition Through Analytical Reliability Tools and Artificial Neural Networks, Ocean Eng. 152,
pp.404-415

MINER, M.A. (1945), Cumulative Damage in Fatigue, J. Appl. Mech. 12

MORAIS, D.; WALDIE, M.; LARKINS, D. (2018), The Digital Twin Journey, 17th Conf. Computer
and IT Appl. Maritime Ind., Pavone, pp.98-105

ORIHARA, H.; YOSHIDA, H. (2018), Weather Routing Simulation as a Tool for Evaluating Ship’s
Performance in Operation, 17th Conf. Computer and IT Appl. Maritime Ind., Pavone, pp.391-402

ORLANDI, A.; BENEDETTI, R.; MARI, R.; COSTALLI, L. (2018), Sensitivity Analysis of Route
Optimization Solutions on Different Computational Approaches for Powering Performance in the
Seaway. 17th Conf. Computer and IT Appl. Maritime Ind., Pavone, pp.341-353

SCHIRMANN, M.; COLLETTE, M.; GOSE, J. (2019), Ship Motion and Fatigue Damage Estimation
via a Digital Twin, Life-Cycle Anal. and Assess. in Civil Eng., pp.2075-2082

SHIN, Y.; BELENKY, V.; LIN, W.-M.; WEEMS, K.; ENGLE, A. (2003), Nonlinear Time Domain
Simulation Technology for Seakeeping and Wave Load Analysis for Modern Warship Design, SNAME
Trans.

SIDOTI, D.; AVVARI, G.V.; MISHRA, M.; ZHANG, L.; NADELLA, B.K.; PEAK, J.E.; HANSEN,
J.A.; PATTIPATI, K.R. (2016), A Multiobjective Path-Planning Algorithm with Time Windows for
Asset Routing in a Dynamic Weather-Impacted Environment, IEEE Trans. Sys., Man, and Cybernetics

ST. DENIS, M.; PIERSON, W.J. (1953), On the Motions of Ships in Confused Seas, SNAME
Transactions, Vol 61., pp.280-357

THOMPSON, I.M. (2018a), Virtual Hull Monitoring: Continuous Fatigue Assessment without
Additional Instrumentation, Intl. J. Maritime Eng., A-293 – A-298

THOMPSON, I.M. (2018b), Fatigue Damage Variation within a Class of Naval Ships, Ocean Eng. 165,
pp.123-130

TUEGEL, E.; INGRAFFEA, A.; EASON, T.; SPOTTSWOOD, S. (2011), Reengineering Aircraft
Structural Life Prediction Using a Digital Twin, Int. J. of Aerospace Eng. 2011, 154798

VAN OS, J. (2018), The Digital Twin throughout the Lifecycle, 17th Conf. Computer and IT Appl.
Maritime Ind., Pavone, pp.482-488

WEST, T.; BLACKBURN, M. (2017), Is Digital Thread/Digital Twin Affordable? A Systemic
Assessment of the Cost of DoD’s Latest Manhattan Project, Procedia Computer Science 114, pp.47-56

Autonomous ships and the COLREGS:
Automation Transparency and Interaction with Manned Ships
Thomas Porathe, Norwegian University of Science and Technology, Trondheim/Norway,
thomas.porathe@ntnu.no

Abstract

Maritime Autonomous Surface Ships, MASS, are now on the agenda of the International Maritime
Organization. In many countries, research on autonomous navigation is being conducted. A very
important piece in this puzzle is the algorithms underpinning collision avoidance. The primary goal is
that an autonomous vessel should behave just like any other ship and follow the International
Regulations for Preventing Collisions at Sea, COLREGS. This paper takes a look at the COLREGS and
points at some problems facing programmers translating these qualitative rules into program code. The
author advocates predictability and automation transparency as an interaction concept. An example of
automation transparency in the MASS case is also given.

1. Introduction

Large autonomous merchant vessels are still on the drawing board. However, in Norway the building
contract is already signed for ‘YARA Birkeland’, the first Maritime Autonomous Surface Ship
(MASS), an unmanned container feeder, scheduled to start tests in 2020, https://www.km.kongsberg.
com/ks/web/nokbg0240.nsf/AllWeb/4B8113B707A50A4FC125811D00407045?OpenDocument.
Lacking IMO regulations, the tests will have to commence in national waters, which in this case means
the Grenland area of Porsgrunn and Larvik in southern Norway, with complex, narrow, inshore
archipelago navigation. It is a busy industrial area where a large portion of the ship traffic consists of
gas carriers and vessels with hazardous cargo and, in summertime, an abundance of small leisure craft
and kayaks. The sea traffic in the area is monitored by the Brevik VTS, which in 2015 made 623
“interventions,” meaning that the VTS asked for some alteration from the planned sailing route, https://
www.ssb.no/191461/various-indicators-from-the-operational-area-of-vts-centres. Conducting autono-
mous navigation in such an area is a huge challenge.

The project is ambitious: the 80 m long, unmanned, autonomous vessel, taking 120 containers and with
a fully electric propulsion system, will replace some 40,000 truck journeys every year, thus moving
heavy traffic from road to sea, and from fossil fuel to hydro-generated electricity. The plan is currently that
she will start tests in 2020, first with a manned bridge onboard, then with the same bridge lifted off to
the quay side, remotely controlling the vessel, before finally attempting to go autonomously in 2022.

1.1. Unmanned, automatic and autonomous

We may think of traditional ships of today as “manual.” However, many ships navigate automatically.
With an autopilot in ‘track-following’ mode, set so that the ship can execute turns without
acknowledgment from the Officer of the Watch (OOW), the ship can follow a pre-planned route from
A to B without support - given that the plan is correct and does not pass over any rocks or shallow
water. This is the way the Norwegian coastal express Hurtigruten navigates during most of its inshore
route from Bergen to Kirkenes (personal communication). But the OOW still has to be present on the
bridge to look out for, and handle, encounters with other vessels. What is needed to remove the operator
completely are sensors that can see and identify moving obstacles in the sea, and a connected
autopilot with collision avoidance algorithms based on the International Regulations for Preventing
Collisions at Sea (COLREGS), IMO (1972). With such a system a ship may navigate autonomously.
But such an “autonomous ship” does not need to be unmanned. It may carry a maintenance crew, or
even a reduced number of navigators who take manual watches during difficult conditions, or maybe
daytime watches in good conditions, saving the automation for the long boring night watches or
uneventful oversea passages. With such a partly manned bridge the ship would have a “periodically
unattended bridge” according to IMO’s latest definitions, IMO (2018).

The watch can also be handed over to a Shore Control Centre (SCC) that can access the ship’s sensors
and communication, ready to wake up the OOW if something unexpected happens (in which case the
ship is remotely monitored). Or, the SCC could be granted access to the autopilot, in which case the
ship will be remote controlled. It is reasonable to think that this will be a gradual evolution towards
higher and higher levels of automation, maybe a combination of remote monitoring and control, and
autonomy.

It can also be useful to consider the concept “Operational Design Domain” (ODD) used by the self-
driving car industry, Rodseth and Nordahl (2017). In the maritime domain, it would mean that there
will be certain shipping lanes and fairways where the automation has been specifically trained and which
have been specifically prepared, maybe with designated lanes, or by specific technical infrastructure.
In these areas, a ship may navigate autonomously, while in other areas the ship must navigate manually
with a manned bridge or be remote controlled from the shore.

For the discussion in this paper the focus will be on autonomy, whether permanent or periodic: a ship
in autonomous mode is one where a computer program is navigating and taking decisions, regardless of
whether the captain is in his cabin onboard or in a remote centre ashore. The crucial point here is how
the ship handles interaction with other ships, and particularly how it follows the rules of the road, the
COLREGS.

2. The COLREGS

For several centuries, ships came and went, sailing with the same wind and with the same tidal current
and it was not until the steam ships turned up that collision regulations became vital, Crosbie (2006).
In 1840 the London Trinity House drew up a set of regulations, one of which required a steam vessel
passing another vessel in a narrow channel to leave the other on her own port hand. The other regulation
relating to steam ships required steam vessels on different crossing courses, so as to involve risk of
collision, to alter course to starboard and pass on the port side of each other. The two Trinity House
rules for steam vessels were combined into a single rule and included in the Steam Navigation Act of
1846. Over the years, a number of iterations and internationalisations, through what is now the
International Maritime Organization (IMO), led to the latest revision of the International Regulations
for Preventing Collisions at Sea (COLREGS) at an international conference convened in London in
1972.

2.1. Qualitative rules

The collision regulations are, like legal text often is, written in a general manner so as to be applicable
in as many situations as possible. The precise interpretation has to be made in the context of the actual
situation judged not only on knowledge of the rules, but also on experience and culture, what the rules
call “the ordinary practice of seamen” as is stated already in the second rule.

The qualitative nature of the COLREGS is a problem for the programmers who are to write the code for
the anti-collision algorithms of autonomous ships. I will in this section point to some of these “soft”
clauses.

2.2. Rule 2: Ordinary practice of seamen

Rule 2 of the COLREGS is about responsibility. It has two sections. Section (a) states “Nothing in these
Rules shall exonerate any vessel, or the owner, master or crew thereof, from the consequences of any
neglect to comply with these Rules or of the neglect of any precautions which may be required by the
ordinary practice of seamen, or by the special circumstances of the case.” Section (b) states that “In construing
and complying with these Rules due regard shall be had to all dangers of navigation and collision and
to any special circumstances, including the limitations of the vessels involved, which may make a
departure from these Rules necessary to avoid immediate danger.”

353
What this rule basically says is that you must always follow these rules, but that you must also deviate
from these rules when necessary to avoid an accident. In essence, if there is an accident, there is a good
chance that you have violated one or both of these sections. The problem for the navigator is how long,
or close into an encounter, he or she should follow the Rules and when it is time to skip the rules and
do whatever is necessary to avoid a collision. The answer is: it depends on the circumstances. The Rules
give no hint as to the number of cables or miles, minutes or seconds. It does not even try to define the
“ordinary practice of seamen.” Similar soft enumerations are found for instance in Rules 15, 16 and 17.

2.3. Rule 15 to 17, risk of collision

Rule 15 of the COLREGS talks about “crossing situations”: “When two power-driven vessels are
crossing so as to involve risk of collision, the vessel which has the other on her own starboard side shall
keep out of the way and shall, if the circumstances of the case admit, avoid crossing ahead of the other
vessel.” Calculating when a crossing situation may lead to a collision is pretty straightforward given that
present course and speed can be extrapolated. (This is, however, not always the case as the intentions
of the other ship may not be known.) If the bearing to the other ship is constant over time, it can be
assumed that there exists a risk of collision. Rule 15 also defines which vessel should take action to
avoid collision. “The one which has the other on her own starboard side.”
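
As an illustration, the constant-bearing test of this paragraph can be cast quantitatively as a closest-
point-of-approach (CPA) calculation on the extrapolated tracks. The sketch below uses an illustrative
1 NM limit; no such figure is prescribed by the Rules.

```python
# Minimal sketch: risk-of-collision test via CPA/TCPA on extrapolated tracks
# (positions in metres, velocities in m/s); the 1 NM limit is illustrative.
import numpy as np

def cpa_tcpa(p_own, v_own, p_other, v_other):
    """Closest point of approach distance [m] and time to it [s]."""
    dp = np.asarray(p_other, float) - np.asarray(p_own, float)
    dv = np.asarray(v_other, float) - np.asarray(v_own, float)
    denom = float(np.dot(dv, dv))
    t_cpa = 0.0 if denom < 1e-9 else max(0.0, -float(np.dot(dp, dv)) / denom)
    d_cpa = float(np.linalg.norm(dp + dv * t_cpa))
    return d_cpa, t_cpa

def risk_of_collision(p_own, v_own, p_other, v_other, d_limit=1852.0):
    d_cpa, _ = cpa_tcpa(p_own, v_own, p_other, v_other)
    return d_cpa < d_limit   # near-constant bearing with decreasing range
```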

The following rule then defines how this action should be done by the “give-way” vessel (Rule 16):
“Every vessel which is directed to keep out of the way of another vessel shall, as far as possible, take
early and substantial action to keep well clear.” The action could be a change of speed or a change of
course, but the problematic keywords here are “early and substantial”. There is no suggestion in miles
or minutes what constitutes “early”, neither how large course change or speed change constitutes
“substantial”.

Rule 17 defines the actions of “the stand-on” vessel: “(a), (i) Where one of two vessels is to keep out
of the way the other shall keep her course and speed. (ii) The latter vessel may, however, take action to
avoid collision by her manoeuvre alone, as soon as it becomes apparent to her that the vessel required
to keep out of the way is not taking appropriate action in compliance with these Rules. (b) When, from
any cause, the vessel required to keep her course and speed finds herself so close that collision cannot
be avoided by the action of the give-way vessel alone, she shall take such action as will best aid to avoid
collision. (c) A power-driven vessel which takes action in a crossing situation in accordance with
subparagraph (a)(ii) of this Rule to avoid collision with another power-driven vessel shall, if the
circumstances of the case admit, not alter course to port for a vessel on her own port side. (d) This Rule
does not relieve the give-way vessel of her obligation to keep out of the way.”

This rule adds to the complexity by using qualitative definitions like “as soon as it becomes apparent,”
“finds herself so close that collision cannot be avoided by the action of the give-way vessel alone,”
“action as will best aid to avoid collision” and “if the circumstances of the case admit.”

For a programmer writing the collision avoidance module of navigation software, the difficulty
is not only in judging which action to take, but also when to execute it “early” and “substantially”. The answer
will be the same as it was in the previous section: it depends on the circumstances. If only two
ships meet alone on the ocean, the task is relatively simple, but at the other end of the spectrum, in a
high complexity situation, e.g. in a constrained area like the Straits of Malacca and Singapore, the task
is of an entirely different dimension. Not only does the large number of ships in a limited space change
the value of variables like “early” and “substantial,” but an evasive manoeuvre for one ship may lead
into a close quarters situation with another ship and so on, in a cascading interaction effect with
unpredictable results.

A possible strategy for a programmer trying to capture “the ordinary practice of seamen” for a specific
area (an ODD) could be to study large amounts of AIS (Automatic Identification System) data for the
specific area in question and from that data deduce limits of “early” and “substantial action”. A useful
concept is ships’ “safety zones”, the zone around one’s own ship that navigators tend not to let other
ships within: “A zone around a vessel within which all other vessels should remain clear unless
authorised,” https://www.iala-aism.org/wiki/dictionary/index.php/Ship_Safety_Zone. This zone tends
to be larger on the open sea than in narrow waters or in a port and can be studied using AIS data. Using
such AIS studies, establishment of a zone outside which an action can be considered “early” could be
attempted. But remember that the context is important, not only the static geographical context, but also
the time-dependent traffic density context.
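
A minimal sketch of such an empirical estimate is given below; it assumes that minimum passing
distances have already been mined from historical AIS tracks in the area, and the percentile used is an
illustrative choice.

```python
# Minimal sketch: estimate the empirical "safety zone" radius for an area
# from observed minimum passing distances (mined from AIS); the percentile
# is an illustrative choice, and context (traffic density, time) still matters.
import numpy as np

def safety_zone_radius(min_passing_distances_m, percentile=5):
    """Radius other navigators rarely entered in the studied encounters."""
    return float(np.percentile(np.asarray(min_passing_distances_m), percentile))

observed = [820.0, 1210.0, 950.0, 1530.0, 700.0, 1100.0]  # example values [m]
print(f"empirical safety zone: {safety_zone_radius(observed):.0f} m")
```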

If all ships in such a complex situation were autonomous and governed by clever algorithms, there is a
chance that such a collision avoidance application could be successful, but in a mixed situation where
most or many of the ships are controlled by humans, who are less predictable, the risk of a bad
outcome is evident.

2.4. Rule 19, restricted visibility

The final rule that I want to bring up here is Rule 19, Conduct of vessels in restricted visibility. This is
a quite lengthy rule, which says: “(a) This Rule applies to vessels not in sight of one another when
navigating in or near an area of restricted visibility. (b) Every vessel shall proceed at a safe speed
adapted to the prevailing circumstances and conditions of restricted visibility. A power-driven vessel
shall have her engines ready for immediate manoeuvre. (c) Every vessel shall have due regard to the
prevailing circumstances and conditions of restricted visibility when complying with the Rules of
Section I of this Part. (d) A vessel which detects by radar alone the presence of another vessel shall
determine if a close-quarters situation is developing and/or risk of collision exists. If so, she shall take
avoiding action in ample time, provided that when such action consists of an alteration of course, so far
as possible the following shall be avoided: (i) an alteration of course to port for a vessel forwards of the
beam, other than for a vessel being overtaken; (ii) an alteration of course towards a vessel abeam or
abaft the beam. (e) Except where it has been determined that a risk of collision does not exist, every
vessel which hears apparently forwards of her beam the fog signal of another vessel, or which cannot
avoid a close-quarters situation with another vessel forwards of her beam, shall reduce her speed to the
minimum at which she can be kept on her course. She shall if necessary take all her way off and in any
event navigate with extreme caution until danger of collision is over.”

The Dutch Council of Transportation has added an amplification to this rule for Dutch mariners:
“During a period of reduced visibility unexpected behaviour of other vessels should be anticipated. The
speed and the correlated stopping distance must correspond with this situation,” van Dokkum (2016).

The big difference between this rule and Rule 15 above is that in restricted visibility both vessels are
suddenly give-way vessels and the responsibility for avoiding a collision is shared. The problems here
for a quantitative approach lie in soft terms like “safe speed,” “due regard to the prevailing
circumstances and conditions of restricted visibility” and “take avoiding action in ample time,” but also
in the problem of defining “restricted visibility.” As a meteorological phenomenon “restricted” is not
defined, nor is “safe speed”, although an assumption might be that the vessel should be able to stop
within the distance that can be seen ahead. This is an assumption that cannot always be followed, as in many
parts of the world ships regularly navigate in conditions of visibility where even the own ship’s forecastle
cannot be seen from the bridge.

Another reflection is that “restricted visibility” refers to visibility to the human eye, which in the
autonomous case can be translated to the visibility of the day-light cameras. Section (d) in Rule 19,
which refers to when ships are detected “by radar alone”, was added in 1960 after a number of “radar
assisted accidents”. An autonomous vessel will most probably, apart from day-light cameras, AIS and
radar, also have infrared cameras and maybe LIDAR. But even if the sensor resources on an autonomous
ship could be judged as being better than the human eye, this rule makes it necessary to include visibility
sensors to decide if Rule 19, or Rules 11-18, “Conduct of vessels in sight of one another,” should apply.
A complicating factor that needs to be taken into consideration is that fog often appears in patches or
banks, so even if your own ship may be in an area of good visibility, the other vessel might be hidden
in a fog bank, in which case Rule 19 applies.
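
In its simplest form, the rule-set selection described here could look like the sketch below; the
visibility threshold and the fog-bank heuristic are illustrative assumptions, not values taken from the
Rules.

```python
# Minimal sketch of the Rule 19 vs Rules 11-18 switch; the threshold and the
# fog-bank heuristic are illustrative assumptions, not prescribed by COLREGS.
def applicable_rules(own_visibility_m, target_in_sight, range_to_target_m,
                     restricted_limit_m=3704.0):   # ~2 NM, an assumed limit
    # The target may be hidden in a fog bank even if own visibility is good:
    # it is closer than our visibility estimate, yet not in sight.
    possibly_in_fog_bank = (not target_in_sight
                            and range_to_target_m < own_visibility_m)
    restricted = own_visibility_m < restricted_limit_m or possibly_in_fog_bank
    return "Rule 19" if restricted else "Rules 11-18"
```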

A phenomenon worth taking into consideration is that while an autonomous vessel will weigh its
different sensor inputs in an objective manner, resulting in a sighting with such and such probability, the
human operator on a manual vessel has a cognitive system that prefers visual egocentric input through
the eyes over exocentric images from radar and electronic charts, which need to be mentally
rotated to be added to the inner mental map, Porathe (2006). An example of this is the allision of the
container vessel ‘Cosco Busan’ in 2007 with the San Francisco-Oakland Bay Bridge in heavy fog but
with fully working radar and GNSS/AIS support, NTSB (2009).

3. Quantitative COLREGS

From a computer programmer’s point of view, it would be good if all qualitative, soft enumerations of
the COLREGS could be quantified into nautical miles, degrees and minutes. This would greatly facilitate
the development of the necessary algorithms that will govern future collision avoidance systems.
However, such a regulatory text would have to be very lengthy and it would still not cover all possible
situations. Instead the COLREGS, like other legal texts, have a general format that is open to interpretation
in a court of maritime law, and the opposite of “the ordinary practice of seamen,” i.e. “good seamanship,”
includes juridical notions such as “negligence” and “gross negligence”, van Dokkum (2016). Ships’
technical performance and manoeuvrability, and the experience and training of seamen, all evolve with time,
but for the rules of the road to remain valid they must be written in a general manner.

Instead it is the algorithms of collision avoidance applications that need to be precise and quantitative.
By using AIS data and large-scale simulations, applications can be made to learn the most effective and
efficient way of manoeuvring in different situations, still following the COLREGS. It would probably
be beneficial if such machine learning was ongoing “lifelong” for the AI (Artificial Intelligence) on the
bridge, which then would become more and more experienced through the years. However, it is unlikely
that the IMO would accept an AI on the bridge which was not certified and which did not behave in a precisely
predetermined way for a specific situation (even if this could be defended by comparing the AI to a
trained and licenced third mate working his way up through the ranks to a more and more experienced
master).

Another important point to pay attention to is that, as long as there are manual ships governed by humans
on the sea, the actions of autonomous ships have to be predictable for these humans. Autonomous
navigation, supported by artificial intelligence on the bridge, has a number of advantages compared to
human, manual navigation: improved vigilance, improved sensing and perception, longer endurance,
and also an ability to look further into the future by keeping more alternative options open during the
decision-making process. For instance, by keeping track of all ship movements at very long range, an
AI might be able to predict a possible close quarters situation several hours ahead of a human navigator,
but may therefore make manoeuvres which might not make sense to an OOW on a manual ship in the
vicinity. Therefore, it is of utmost importance that autonomous ships are predictable and transparent
to humans.

4. Automation transparency

All of us who struggle with the complexity of digital tools know that they do not always do
what we want or assume they will do. They “think” differently from us. An innate tendency of human
psychology is to attribute human traits, emotions, or intentions to non-human entities. This is called
anthropomorphism. We do so because it gives us a simple (but faulty) method to understand machines.

The assumption above is that if autonomous ships always follow the COLREGS their behaviour will be a
hundred per cent predictable. But as we have seen above, this might change if e.g. the spectrometers
onboard the autonomous ship do not interpret “restricted visibility” the same way I do (and therefore
Rule 19 should or should not be used). Another important issue is understanding intentions. That
the intentions of other manual or autonomous ships are interpreted rightly is imperative to rule following
as well. An old accident in the English Channel in 1972 can serve as an example of what misinterpreted
intentions (and therefore applying the wrong rules) may lead to: The ferry ‘St. Germain’, coming from
Dunkirk in France and destined for Dover, was turning slowly to port, away from the straight westerly
course to Dover. Instead her captain intended to take her SW, down on the outside of the Traffic
Separation Scheme (TSS), in the Inshore Zone, in order to find a clearer place to cross the TSS at a
“right angle” according to Rule 10 of the COLREGS. The bulk carrier ‘Adarte’ was heading NE up the
TSS towards the North Sea. The pilot onboard recognised the radar target as the Dunkirk-Dover ferry
and assumed, quite wrongly, that she would cross ahead of him and that there now existed a risk of
collision (Rule 15). ‘Adarte’ would then be the give-way ship and was obliged to turn starboard. The
pilot made a series of small course alternations to starboard to allow ‘St. Germain’ to cross ahead. But
instead ‘St. Germain’ continued her port turn and the two ships collided. ‘St. Germain’ sank, killing a
number of passengers, Lee and Parker (2007).

This accident is retold to illustrate the need to understand intentions, and this goes for both manned and
unmanned ships. It is important that automation shares information about its workings, its situation
awareness and its intentions. Questions like: What does the automation know about its surroundings?
Which other vessels have been observed by its sensors? These questions could e.g. be answered by a
live chart screen accessible on-line through a web portal by other vessels, VTS, coastguard etc., Fig.1.
Based on its situation awareness the automation will make decisions on how it interprets the rules of
collision avoidance. It would be a benefit if the intentions of ships could be communicated, as argued
in Porathe and Brodje (2015). Large ships fall under IMO’s SOLAS convention. A SOLAS ship (as
defined in Maritime Rule Part 21) is any ship to which the International Convention for the Safety of
Life at Sea (SOLAS) 1974 applies; namely: a passenger ship engaged on an international voyage, or a
non-passenger ship of 500 tons gross tonnage or more engaged on an international voyage, IMO
(1980).

Fig.1: An on-line chart portal showing the situation awareness of the autonomous ship, where it thinks
it is, what other ships and objects it has observed, and what intentions it has for the close future.
The AIS symbol with a designated “A” for “autonomous navigation” and the intended route
shown on demand is also a suggestion for addition in the ECDIS presentation.

SOLAS ships must transmit their position and some other information using AIS. In addition, SOLAS
ships are usually big and make good radar targets, which will provide a second source of information.
Furthermore, all SOLAS ships must make a voyage plan from port to port. Several past and ongoing
projects aim at collecting route plans and coordinating ship traffic for reasons of safety and efficiency
(e.g. EfficienSea, ACCSEAS, MONALISA, SMART navigation, SESAME, and the STM Validation
projects). These attempts at route exchange would make it possible for SOLAS ships - also MASS - to
coordinate their voyages and show intentions well ahead of time to avoid entering into a close quarters
situation where the COLREGS will apply.

Route exchange would for instance allow each ship to send a number of waypoints ahead of the ship’s
present position through AIS to all ships within radio range. All ships can then see each other’s intended
routes. In the ACCSEAS project in 2014, a simulator study was made with 11 professional British, Swedish
and Danish bridge officers, harbour masters, pilots and VTS operators with experience from complex
traffic in the test area, which was the Humber Estuary. The feedback from the participants on the benefits
of showing intentions was overall positive, Porathe and Brodje (2015).
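
As a sketch of what such an intention broadcast could contain, consider the structure below; the fields,
including the “A” flag suggested in Fig.1, are illustrative and do not follow any existing AIS message
standard.

```python
# Minimal sketch of a route-exchange broadcast: a few waypoints ahead of the
# present position plus an autonomy flag; all fields are illustrative only.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RouteIntention:
    mmsi: int                             # ship identity, as in AIS
    nav_mode: str                         # "A" = autonomous navigation
    waypoints: List[Tuple[float, float]]  # (lat, lon) ahead of own position
    leg_speeds_kn: List[float]            # planned speed for each leg

msg = RouteIntention(mmsi=257000001, nav_mode="A",
                     waypoints=[(59.02, 9.72), (59.00, 9.80), (58.97, 9.90)],
                     leg_speeds_kn=[8.0, 8.0, 6.0])
```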

5. Conclusions

I have in this discussion paper pointed at some challenges regarding the qualitative nature of the rules
of the road, the COLREGS, and the quantitative needs of programmers of collision avoidance applications. The
interaction between traditional ships in “manual mode” is from time to time problematic. The
introduction of autonomous ships which in their navigation follow a machine interpretation of the
COLREGS might lead to many problems if not implemented carefully.

It is of great importance that the manoeuvres of autonomous ships are predictable to human operators
on manual ships. The Artificial Intelligence onboard has the potential to be much “smarter” than humans
and be able to extrapolate further into the future, and thereby behave in a way that might surprise people
(“automation surprise”). Instead the AI should focus on behaving in a humanlike manner. This
automation transparency might build on the Route Exchange technology developed in recent e-
Navigation projects like EfficienSea, ACCSEAS and MONALISA, which provide a basic infrastructure
for autonomous ships.

Acknowledgements

This research is conducted within the SAREPTA (Safety, autonomy, remote control and operations of
transport systems) project funded by the Norwegian Research Council, which is hereby gratefully
acknowledged.

References

CROSBIE, J.W. (2006), Lookout Versus Lights: Some Sidelights on the Dark History of Navigation
Lights, J. Navigation 59/1, pp.1-7

IMO (1972), Convention on the International Regulations for Preventing Collisions at Sea, 1972,
(COLREGs), Int. Maritime Organization

IMO (1980), The International Convention for the Safety of Life at Sea (SOLAS) 1974,
http://www.imo.org/en/About/Conventions/ListOfConventions/Pages/International-Convention-for-
the-Safety-of-Life-at-Sea-(SOLAS),-1974.aspx

IMO (2018), Regulatory scoping exercise for the use of Maritime Autonomous Surface Ships (MASS),
MSC 100/5/6

LEE, G.W.U.; PARKER, J. (2007), Managing Collision Avoidance at Sea, Nautical Institute, London

NTSB (2009), Allision of Hong Kong‐Registered Containership M/V Cosco Busan with the Delta Tower
of the San Francisco–Oakland Bay Bridge San Francisco, California November 7, 2007, Accident
Report NTSB/MAR-09/0, PB2009-91640, National Transportation Safety Board, Washington, DC

PORATHE, T. (2006), 3-D Nautical Charts and Safe Navigation, Malardalen Univ. Press, Vasteras

PORATHE, T.; BRODJE, A. (2015), Human Factor Aspects in Sea Traffic Management, 14th COMPIT
Conf., Ulrichshusen

RODSETH, O.J.; NORDAHL, H. (2017), Definition for Autonomous Merchant Ships, Version 1.0,
Norwegian Forum for Autonomous Ships. http://nfas.autonomous-ship.org/resources-en.html

VAN DOKKUM, K. (2016), The COLREGS Guide, Dokmar Maritime Publishers

Implementation of a Data Driven,
Iterative Approach to Building Digital Twins
Jarosław Nowak, ABB Marine, Billingstad/Norway, jaroslaw.nowak@no.abb.com
Morten Stakkeland, ABB Marine, Billingstad/Norway, morten.stakkeland@no.abb.com

Abstract

This paper presents the implementation of a full system collecting data from marine equipment and
edge analytics, transferring it in a secure manner to on shore IT infrastructure in order to construct,
maintain, and improve digital twin models. We will give examples where the on shore digital models
were used to improve and even derive new on board edge analytics, making it an iterative process. The
paper will provide a description of the full data stream, from the equipment on board to vessel models
and implementation in the cloud infrastructure.

1. Introduction

Undoubtedly, there have been numerous attempts at describing and defining the concept of digital twins.
And there still seems to be a lot of dispute about what exactly different persons, companies and customers
understand by this term. The humble intention of the authors of this article is not to lay out theoretical or
to some extent futuristic visions about what digital twins can become, but rather to outline how they are
implemented and used to provide digital services and added value in marine systems. The paper will
demonstrate how the digital twins are integrated into the on board and on shore systems, and describe
the infrastructure that facilitates developing, deploying, and updating digital twins.

Three different use cases are given, where digital twins are used to add value and insights:
benchmarking and measuring performance in a DC-grid electric propulsion system, condition
monitoring of rotating machinery, and a monitoring application where machine learning was used to
build a digital twin model. The three use cases illustrate several aspects of the practical usage of digital
twins. First, they demonstrate how different applications require different models, structures
and inputs. An electrical propulsion motor is a component in all three use cases, but the
corresponding motor models share few if any common structures. The digital twin must be adapted to
the application, rather than encompassing all possible information. Second, they illustrate the second main
point of the paper: how the development, application and maintenance of digital twins is an
iterative process. In one example, it is described how several iterations and on board changes were
needed to improve the data quality to a sufficient level for the model to be accurate. In a second example,
the digital twin has been applied to modify and improve the on board system.

2. Digital twins

The treatment of digital twins in this paper will lean heavily on the systematic treatment and concepts
and classification provided by Cabos and Rostock (2018). They propose the following four business
drivers for investing into digital twins:

1) to increase manufacturing flexibility and competitiveness,
2) to improve product design performance,
3) to forecast the health and performance of products over their lifetime,
4) to improve efficiency and quality in manufacturing.

This article will focus mainly on point 3 and partially on point 2, as these relate to how the application
of digital twins is integrated into the ABB Marine business unit. In its portfolio of digital products,
ABB Marine offers systems and services that support our customers in various aspects of vessel
and machinery operations, Fig.1. The article will mainly discuss the content of the Smart Asset
Management block, giving three particular use cases.

Fig.1: Digital Service solution portfolio derived from ABB

Again, following Cabos and Rostock (2018), the three constituents of a digital twin are:

• “A. Asset representation: i.e. a digital representation of a unique physical object (e.g. a ship or
an engine or part of it)”, which in this article is discussed in section 3.1, ‘Asset model versus
system model’, covering, to some extent, the semantics of the collected data.
• “B. Behavioral model: Encoded logic to allow predictions and/or decisions on the physical
twin”, discussed in the data collection and use-case related chapters, where some examples of
modelling methods are given, from knowledge based, through system functions, to purely
data driven machine learning models.
• “C. Condition or configuration data: Data reflecting status of and changes to the unique physical
object during its lifecycle phases”, which is in fact a description of the digital twin enabler, i.e. the
computerized system of systems thoroughly described in chapter 3, ‘Infrastructure for
implementation of digital twins’.

3. Infrastructure for implementation of digital twins

In order to reap the full benefits of using a digital twin, a certain digital infrastructure needs to be
established both on premise, i.e. on board the marine vessel, as well as on the data receiving side, i.e. the
onshore infrastructure that is later referred to in this article as the cloud. There are certainly different ways of
implementing such an infrastructure and one may find different terms associated with its description -
in this article we will use the term ‘system of systems’, which in principle is a collection of task-oriented
or dedicated systems that pool their resources and capabilities together to create a new, more complex
system which offers more functionality and performance than simply the sum of the constituent systems.
The number of systems used and described in the article is finite and can be listed in the following
way:

• Diagnostic system, i.e. a data collection and analytics system deployed on board as well as in a
virtualized environment in the cloud (which goes by the name of ABB Ability™ Remote
Diagnostic System)
• Decentralized Control System having multiple layers, of which the control/field network and
client network are to be discussed
• Remote access platform software solution that implements secure remote connection and data
transfer
• Microsoft Azure cloud infrastructure enabling deployment of the virtual environment to host the on
shore digital twin as well as facilitating the storage of both raw and recalculated data in the simple
and readable format of SQL tables or flat files
• Dashboarding application for data visualization and for sharing parts of the entire on shore digital twin
with various data consumers, e.g. ship operators, vessel management companies, other vendors.

The building blocks of the digital twin infrastructure and the entire cycle of data transformation and
processing to build and iteratively update the behavioral aspect of the digital twin are presented in Fig.2. Each
aspect of this environment is tagged with the letter A, B or C corresponding to the main constituents of a
digital twin given in the ‘Introduction’ chapter. This way, we provide a context and relate the practical
implementation of certain blocks to the concept of the digital twin. In principle, what is presented in Fig.2
is a system of systems capable of collecting data on board according to different sampling regimes,
performing on premise analytics for decision support and data size reduction, compressing data before
securely transferring it from the marine vessel to cloud data centers, and finally processing data of single
assets, systems or fleets of the same assets in order to build and iteratively update digital models of the
physical assets that are deployed on board (on premise).

Fig.2: Building blocks of digital twin infrastructure

In further sections of this article, most of the blocks and aspects presented in Fig.2 will be discussed in
detail, with some highlights as presented below:

• Asset representation, i.e. conceptual modelling of assets and systems of assets on board and in
the cloud, with its practical implementation with use of structured XML documents. Here we
answer the question of what data we are going to collect.
• Data collection, i.e. insight into how interfacing with other on premise digital systems and
smart devices is implemented and what the main communication protocols of choice are. Here
we try to give an answer on how we collect data, with the strong goal of reusing existing digital
infrastructure rather than replicating data sources by adding more sensors and physical
connections. The aim is to minimize the investment cost to build the digital twin, yet not to lose
information that is critical to build proper digital models.
• Edge analytics, i.e. analytics performed on board the vessel, is the essence of the behavioural aspect
of digital twins. Here we would like to explain the process of manipulating data in order to
predict the condition of the actual physical asset.
• Data transfer from on board to on shore or cloud infrastructure involves data selection,
compression, secure transfer and a remodelling part on the consumer’s side. Up to this point, the
infrastructure for the on board twin is described, and the article then enters the description of the
on-shore twin.
• Cyber security is additionally considered in all steps presented in Fig.2; therefore we devote an
entire chapter to it.
• Once data leave the vessel, they are processed further on the cloud side. This process often
requires remodelling the structure and meta data information so that fleet-type analytics can be
applied. Having data in the cloud also opens opportunities for collaborative work of human
experts from different disciplines to improve models and analytics without the necessity of
connecting to the on board infrastructure. Improved models and recommendations can then be
applied to the on board digital twin to keep consistency between the on board and on shore digital
representations of the same physical assets.

3.1. Asset vs system representation

There are different ways of structuring the information and data that describe the digital representation
of physical assets. One way of modelling a physical asset is to provide static information that will not
change over its lifetime. These types of information are called ASSET INFOS and could for instance
be the bearing type, serial number, or rating plate information such as nominal speed of the motor or
nominal power. Another, more dynamic type of information is actual measurements, otherwise
called INPUTS. Examples of inputs are measured speed, temperature or current. INPUTS
represent measurements taken from the source of measurement (such as a sensor or other digital
system) without any pre-processing. Interesting factors in the definition of an INPUT are its type
(numerical or textual, time series or equispaced vector), its origin (e.g. information about data source
location) and the sampling rate in the case of simple data readers. The third type of information is in this
case RESULTS, which are digital information on how the INPUTS and ASSET INFOS have been
processed according to behavioural aspects of the digital twin model. RESULTS can also be numerical
values (for instance a root mean square value calculated from raw vibration data) or textual (such as
warning information that the condition of the physical asset is starting to deteriorate).
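
To make the three information types concrete, the sketch below represents them as simple records; all
names are invented for illustration and do not reproduce the actual system.

```python
# Minimal sketch of the three information types attached to a digital asset
# representation; names are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class AssetInfo:                 # static over the asset's lifetime
    name: str                    # e.g. "BearingType", "NominalSpeed"
    value: Union[str, float]

@dataclass
class Input:                     # raw measurement, no pre-processing
    name: str                    # e.g. "MotorSpeed"
    source: str                  # data source location, e.g. an OPC address
    sampling_rate_s: float

@dataclass
class Result:                    # INPUTS/ASSET INFOS processed by the model
    name: str                    # e.g. "VibrationRms" or a textual warning
    formula: str                 # the encoded behavioural logic

@dataclass
class AssetModel:
    asset_infos: List[AssetInfo] = field(default_factory=list)
    inputs: List[Input] = field(default_factory=list)
    results: List[Result] = field(default_factory=list)
```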

An important note at this point is that although we typically have some predefined structure of ASSET
INFOS, INPUTS and RESULTS describing the physical asset, or in other words we have an
equipment model in place, these definitions may and will change over the course of the iterative process of
updating models and digital twins. Therefore, it is important that our infrastructure can accept multiple
changes, with minimum effort and economic impact, in the way we define e.g. INPUTS, by changing
the sampling rate of a signal, or RESULTS, by changing the underlying equations and analytics.

One language, or to be precise, one markup language that can be used to describe the physical asset in
the digital world is XML (extensible markup language), which is widely used in the IT world, mainly in
SOA (Service Oriented Architecture) types of applications. XML also found its application in describing
the configuration of the digital twin discussed in this article. This is mainly due to the fact that it combines
flexibility (anyone can define whatever tag or property they like in XML) with clear syntax checking. In
addition, although XML does not have any semantics incorporated in the standard itself, it is almost self-
explanatory for any domain expert to understand the difference between motor speed and hull number
if only they are appropriately named in XML.
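
As an illustration of this use of XML, the sketch below parses a small asset-model template in the
spirit of Fig.3 with Python’s standard library; all tag and attribute names are invented for the example
and do not reproduce the actual templates.

```python
# Minimal sketch: parse an asset-model template (invented tags, in the spirit
# of Fig.3) into ASSET INFOS, INPUTS and RESULTS with the standard library.
import xml.etree.ElementTree as ET

template = """
<Asset type="FrequencyConverter">
  <AssetInfo name="NominalPower" value="2000" unit="kW"/>
  <Input name="MotorSpeed" source="opc://drive1/AI.Speed" samplingRate="1s"
         storage="db"/>
  <Result name="SpeedRms" formula="rms(MotorSpeed)" storage="db"/>
</Asset>
"""

root = ET.fromstring(template)
infos = {e.get("name"): e.get("value") for e in root.iter("AssetInfo")}
inputs = {e.get("name"): e.get("source") for e in root.iter("Input")}
results = {e.get("name"): e.get("formula") for e in root.iter("Result")}
print(root.get("type"), infos, inputs, results)
```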

Another aspect of modelling physical assets is the way the meta model is structured. In principle,
meta models should describe relations between the digital data collected to build the digital twin, so that it
reflects the function of the modelled assets or their place in the wider hierarchy. In this article, we use
the term asset model for the definition of a single physical, repetitive object, versus system model, where
the strict criteria for encapsulating the meta model are relaxed, e.g. instead of building a meta model for an
electric motor we are more interested in the meta model of all energy producers and consumers that play a
role in the behavioural model of the vessel’s energy efficiency calculation. In the case of asset type modelling, it
is crucial that an instance of such a model can be standardized and deployed multiple times without
additional re-configuration (or engineering). Examples of such objects modelled as assets are an electric
motor, a pump or a propulsion control system. In all of those cases we may define a standard set of XML
elements and attributes and treat them as a template of the digital representation of the motor, pump
or specific local control system. In consequence, the names of those properties and attributes will be the
same for all instances of assets modelled this way. It is still possible to position such an instance of the
asset in the broader hierarchy of a subsystem or system; thus its individual associations or location will be
treated as the unique identifier of this asset.

There are many reasons why templates and predefined asset models find application, and the main one
is obviously economic. It is much quicker, easier and cheaper to engineer and deploy the digital
infrastructure of the entire propulsion machinery of a vessel if we simply do it by multiple instantiation
of standardized asset models. One example could be the deployment of an electric propulsion system
that consists of digital representations of 2 frequency converters, 2 electric motors, 2 transformers and
2 propulsion control systems, where each of them is deployed as an instance of its equipment type
template. Modelling according to the asset model has its advantages but may also lead to unnecessary
complication in case one would like to aggregate and relate base properties of multiple assets following
some specific behavioural logic. One example has been given already, i.e. building a model of the
energy flow on board the vessel. Another could be sensor fusion techniques for system-level fault
detection – here we may take an effect seen within an instance of the asset type model (for instance
high temperature of a motor winding) and correlate it with the real cause of this bad effect, which may
originate from a totally different place in the hierarchy (for instance poor performance of the control
loop for this motor). In this case, we should make a translation of asset model properties to system
model properties. This can be done by properly mapping the properties with the use of translation tables
together with a replication of the original value records. In the era of big data, cheap cloud storage and
systems that easily scale up for different calculation needs, such an approach seems much more
reasonable than, for instance, 10 years ago.
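
A minimal sketch of such a translation, with hypothetical property names on both the asset-model and
system-model sides, might look as follows:

```python
# Sketch (hypothetical names): mapping asset-model properties into a
# system model with a translation table, replicating the value records.
translation_table = {
    "PropulsionMotor.PS.WindingTemp": "EnergyFlow.Consumers.Motor1.Temp",
    "FreqConverter.PS.OutputPower":   "EnergyFlow.Consumers.Motor1.Power",
}

asset_records = [
    ("PropulsionMotor.PS.WindingTemp", "2019-01-07T12:00:00Z", 92.4),
    ("FreqConverter.PS.OutputPower",   "2019-01-07T12:00:00Z", 1250.0),
]

# Replicate each record under its system-model name; the originals stay intact
system_records = [
    (translation_table[name], stamp, value)
    for name, stamp, value in asset_records
    if name in translation_table
]
print(system_records)
```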

Fig.3 shows a very simple example of an asset model type XML template used to describe different
properties of a frequency converter, which is then translated, with the use of a translation table, into
another system model. Highlighted with red boxes are examples of ASSET INFO or INPUT elements
with various properties describing how actual values are going to be acquired (data source address and
sampling rate) and where they are going to be stored (database, flat files, volatile memory). In the
system model example, which follows the same XML syntax, there is an example of a RESULT element
with the attribute <formula> that describes the equation, in this case with the use of proprietary
calculation expressions.

Fig.3: Asset vs system models and translation table examples

One final note on the topic of modelling physical assets in a digital application is that this process is
expected to be done in an iterative way. It all depends on the business model, and we may have cases
where the system, once delivered, will never be changed. In the era of digital transformation and the
use of digital twins, however, we see growing opportunities and customer demands to add different
types of additional services, decision support systems, intelligence and analytics. All those additional
inquiries result in iterative updates of asset and system models as well as translation tables.

3.2. Data collection

Once the static definition of the physical assets has been established in the form of XML documents
and deployed within the diagnostic system runtime following a multitier software architecture paradigm
(with storage and data management, application processing and business intelligence separated into
distinct processes and nodes), it is time to deploy scenarios for the dynamics of the digital twin, i.e.
from where and how often the measurements are going to be acquired in order to fire the related
calculations. The topic is very wide in itself and we will limit its description to a few selected aspects only.

3.2.1. Communication interfaces

Nowadays, most marine applications are already digitalized to some extent. Typically, there is a VMS
(Vessel Management System) in place that allows operators to supervise and control most of the critical
operations and processes. From that perspective, we may already expect a computerized infrastructure
of sensors, controllers, operating stations etc., all connected with field or Ethernet networks into a
control and automation system. These systems exchange information over communication buses using
various protocol types: from closed, proprietary ones that are typically vendor specific, to open ones
that support agreed international standards. In this article, we discuss the use of the OPC standard (OLE
for Process Control), which is a well-established and widely used way of exchanging data for
non-critical and non-real-time applications. The OPC specification evolved over the years, starting from
the Data Access part and later extended with OPC HDA (Historical Data Access) and OPC Alarms and
Events. Technology-wise, the standard was built on Microsoft's COM, DCOM and OLE technologies.
Recent development by the OPC Foundation resulted in a new specification, OPC UA (Unified
Architecture), which focuses more on platform independence, openness, security and service-oriented
architecture. Although the OPC UA standard is undoubtedly a good answer to the market demand to
close the gap between modern IT technology trends and traditional industries, it is still not that common
for equipment and system vendors in marine applications to support OPC UA.

Fig.4: Communication topology for digital twin infrastructure

Fig.4 presents an example of the network topology that can be found on board the vessel discussed in
the use cases section of this article. From the data collection perspective, it is important to highlight
that the data provider node (in this case the OPC SERVER node in a client-server network) is physically
separated from the OPC CLIENT, which is the receiving application (in this case the on board digital
twin). Depending on the OPC CLIENT requirements for signals and refresh rates, the OPC SERVER
creates corresponding signal groups; the CLIENT subscribes to them and is notified about each change
of a signal value. The described mechanism is specific to the OPC Data Access data flow. There are
additional security aspects to be considered, but these are discussed in chapter 3.4 Cyber Security.

3.2.2. Smart data acquisition

The most basic mode of exchanging data between OPC CLIENT and OPC SERVER is a time-scheduled
request to acquire data from the server and fetch it on the client side. For the sake of simplicity, we may
call this functionality a reader. In an ideal world, the OPC CLIENT would ask the OPC SERVER for
data as frequently as we decide. In practice, there are certain limitations: more data to be stored means
higher storage costs, and there are network bandwidth limitations, OPC tunneller limitations and OPC
server implementation limitations as well, i.e. it may not be possible for the OPC SERVER to provide
data as frequently as the OPC CLIENT is asking. Further in this chapter, the authors describe different
ways of bypassing some of those limitations without a loss of information.

The first approach is to use both synchronous and asynchronous interfaces for OPC CLIENT-SERVER
communication. With the asynchronous interface, the OPC CLIENT does not poll the server at every
requested interval – instead it is the server that notifies the client when the data has exceeded a
user-specified deadband, and at that moment the client polls for the data. The advantage of such an
approach is that we can set quite a short interval and data will come only when it changes; we thus
minimize the traffic between server and client and save on storage. As an example, if the ship stays still
in the harbour and the reference speed for the propulsion motor is zero, no values will be polled and
recorded on the client side even though the time interval may be a second or a few seconds. The
drawback is that if the ship stays in the harbour for a couple of days, our digital twin will have a data
gap and a consumer of this data may not always be sure whether this is due to a failure of the recording
system or to asynchronous reading. To solve this, one can add an additional reader that uses the
synchronous interface with longer intervals (e.g. minutes or hours). With a synchronous read, the OPC
client always polls the OPC server at regular predefined intervals and the server is supposed to call
back and provide the data.
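
The combined behaviour of the two readers can be sketched as follows; the deadband value, heartbeat
interval and signal are illustrative, and the real OPC group subscriptions are abstracted away into a
simple loop:

```python
# Simplified sketch of combined deadband (asynchronous) and
# heartbeat (synchronous) recording; all values are illustrative.
def record(samples, deadband=0.5, heartbeat=3600):
    """samples: iterable of (timestamp_s, value) pairs, 1 s apart."""
    stored, last_value, last_stored_t = [], None, None
    for t, value in samples:
        exceeded = last_value is None or abs(value - last_value) > deadband
        heartbeat_due = last_stored_t is None or t - last_stored_t >= heartbeat
        if exceeded or heartbeat_due:
            stored.append((t, value))      # record on change or on heartbeat
            last_value, last_stored_t = value, t
    return stored

# A motor at standstill produces only hourly heartbeat records
signal = [(t, 0.0) for t in range(7200)]
print(len(record(signal)))  # -> 2 stored points for two hours at standstill
```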

Another approach to reading data in a smart way is to build logical scenarios where, for instance, the
OPC client uses a watchdog to monitor whether a specific variable exceeds a predefined threshold; if
it does, a separate batch of OPC client-server calls collects data at a high sampling rate, but only for a
predefined period of time, e.g. a few minutes, after which the data poll is stopped. There is yet another
approach to smart data acquisition where the data has already been received and our goal is to transform
it in a way that minimizes its size, and thus cost, while maximizing its value for use in specific analytics.
Data preparation and transformation fall under the aspect of data science that is frequently called data
engineering.

The sampling rate in general should be adapted to the dynamics of the sampled process or signal,
together with a consideration of the need to measure, analyse and model the underlying functions. For
instance, it may be sufficient to sample the propulsion power once every minute for a large tanker with
conventional propulsion, while sampling every few seconds may be needed to monitor the performance
of the components in a DP system. Then why not sample every available signal every second? The
answer is related to data volume, storage and transportation. The complexity and abundance of systems
and subsystems on board a modern vessel mean that, if sampling every second, the amount of data
would be very large, but without adding any insight or value. The communication link between the
vessel and on shore data centers is another factor that should motivate the data engineer to take some
care when setting the sampling rates. Binary signals are well suited for asynchronous sampling. A new
value is then stored only when the state of the signal changes. However, it is recommended to add
synchronous sampling at regular intervals, to ensure data integrity.

Integration and downsampling of signals are useful techniques that in some cases can be applied to
reduce the size of the data stream. For instance, consider an auxiliary pump in some subsystem on a
vessel. In this particular case, the data engineer considers two variables to be of interest: the energy
consumption and the accumulated number of running hours. When calculating the energy consumption,
the first step would be to integrate the measured power in order to calculate the accumulated energy in
units of kWh. When integrating, the sampling rate should be high in order to optimize the accuracy.
The accumulated energy consumption can then be downsampled without any loss of accuracy, the only
loss being temporal resolution. The main idea is to calculate the integrated signal in the edge analytics
and transmit the downsampled signals to shore. The average power consumption can easily be
calculated from this integrated signal. The example is illustrated in Fig.5. In this example, the power is
measured every second, integrated, and downsampled once every minute. The average power is
calculated every minute. Note that for the sake of robustness, the integration should be performed as
close to the signal source as possible, preferably on an industrial grade, redundant device. The integrated
signal then has the advantage of providing useful information also during periods where the
communication between the data collecting edge device and the device itself is lost, due to maintenance,
network issues or other problems. The accumulated number of running hours should be calculated in a
similar manner. The number of hours the pump is running should be accumulated at the edge device
and downsampled at an appropriate rate. One sample per day may be sufficient in this case.
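
As a numerical sketch of this principle (with a synthetic 1 Hz power trace), the cumulative energy can
be integrated at full rate, downsampled to one value per minute and still reproduce the average power
exactly:

```python
# Sketch: integrate 1 Hz power samples to cumulative energy (kWh),
# downsample to one value per minute, then recover average power.
power_w = [5000.0] * 60 + [7500.0] * 60          # synthetic 1 Hz samples, 2 min

energy_kwh, acc = [], 0.0
for p in power_w:
    acc += p * 1.0 / 3600.0 / 1000.0             # 1 s of power -> kWh
    energy_kwh.append(acc)

downsampled = energy_kwh[59::60]                 # one cumulative value per minute
avg_power_kw = [
    (e1 - e0) * 3600.0 / 60.0                    # kWh per minute -> average kW
    for e0, e1 in zip([0.0] + downsampled[:-1], downsampled)
]
print(downsampled)    # [0.0833..., 0.2083...]
print(avg_power_kw)   # [5.0, 7.5]
```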

Fig.5: Principle of integration and downsampling to reduce storage and bandwidth requirements

3.3. Edge analytics

Once data is collected, transformed or downsampled, it can be processed by certain analytics already
on board the vessel to derive meaningful KPIs (Key Performance Indicators) for decision support. This
type of analytics is called edge analytics, as it happens within on-premise systems. The diagnostic
system that is the main enabler within the digital twin infrastructure is capable of calculating results
automatically upon the arrival of the data defined as inputs for those results. In the use cases described
in this article we are typically discussing time series data, so the timestamps of values act as the
reference for the calculation engine to pick the right values and use them to calculate the final
result, Fig.6.

The pair Stamp-ValidFor is used in performing time-series computations. Values can be combined in
a computation only when their validity periods overlap. The resulting validity period is the intersection
(common part) of the validity periods of the arguments. Thus, the stamp of the result will be the later
of the stamps of the arguments, and the end of its validity will be the earlier of the ends of the arguments.
Fig.6 illustrates how this might happen for a sample expression A+B. Note that when argument periods
do not overlap, there is no result produced, ABB MARINE (2017).
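
A minimal sketch of this rule (the tuple layout is illustrative, not the actual data model of the diagnostic
system):

```python
# Sketch of the Stamp-ValidFor rule: combine two timestamped values
# only where their validity periods overlap.
def combine(a, b, op=lambda x, y: x + y):
    """a, b: (stamp, valid_until, value). Returns None if no overlap."""
    stamp = max(a[0], b[0])            # later of the two stamps
    valid_until = min(a[1], b[1])      # earlier of the two validity ends
    if stamp >= valid_until:
        return None                    # periods do not overlap -> no result
    return (stamp, valid_until, op(a[2], b[2]))

A = (100, 160, 2.0)   # valid from t=100 to t=160
B = (130, 200, 3.0)   # valid from t=130 to t=200
print(combine(A, B))  # -> (130, 160, 5.0)
```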

Fig.6: Validity and timestamp check for processing results

In practical applications, the formula A+B is typically a quite complicated piece of signal processing
analytics that, for instance, calculates an FFT spectrum from collected vibration data. The body of the
FFT calculation analytics may be tailor-made signal processing functions or a call to an existing library
embedded within calculation runtimes such as MATLAB or R.

Often some results become inputs for other results, and thus we may have hundreds of results derived
from only several initial inputs that represent sensor measurements. This entire batch of equations may
require regular updates and modification as we learn more about the behaviour of the physical asset
and introduce changes and improvements in the chain from data collection scenarios to edge analytics.
The point is that every update of a formula for C, e.g. from C=A+B to C=2*A+B, triggers an automatic
recalculation of the underlying results, so that the digital twin updates itself automatically. For real-time
or close to real-time applications where the criticality of the edge analytics is high, one must consider
moving calculations to the control layer and deploying them on board programmable controllers. An
example of such an approach is described in the use case for motor temperature protection.

3.4. Cyber security

Observing the marine market over the last 5-8 years, a significant increase in cyber security awareness
can be seen, especially on the regulatory side (marine classification societies) as well as among end
users, i.e. ship owners and operators. Traditional vendors of digitalized systems, especially global
companies such as ABB, have had the cyber security mindset, tools, regulations and solutions in place
for a long time – mainly because other industries such as oil & gas, power (especially nuclear) and
chemical were demanding secure solutions long before security started to become a standard in marine.
The marine industry is exposed to the digital transformation process, and the concept and
implementation of digital twins raise stronger concerns about how safe and secure all the data
transitions and accesses required to build such digital copies are. As depicted in Fig.2, the ABB solution
for both the on board and the cloud twin is embraced by cyber security frameworks. There is a number
of different techniques developed and implemented to secure the system, and describing all of them
would go beyond the main scope of this article. Therefore, we focus on selected aspects described in
the following paragraphs.

Cyber security starts on board or on premise. The way computers, smart devices, networks and
communication protocols are secured there also determines the probability of somebody being able to
breach the entire system. As shown in Fig.4, the heart of the digital twin infrastructure, which in our
case is a diagnostic system, is interconnected with all vital components and network segments on board
the ship. Therefore, it is important to control and restrict the network traffic that flows between the
non-critical Application Network and the critical automation and control network on the south side,
and the customer's network or even the open internet on the north side. For secure OPC communication
between the digital twin infrastructure and the automation system, there must be an OPC tunneller and
a firewall in place. The first solves the problem of the DCOM technology that OPC communication is
based on. In short, DCOM (Distributed Component Object Model), developed by Microsoft years ago,
has a feature that is troublesome for all network administrators: it randomly assigns and uses
communication ports from a considerably wide range (e.g. from 1024 to 5000), forcing all firewalls
between the OPC client and server side to be opened for this range. This makes the entire setup
vulnerable and practically open to cyber-attacks. As a solution, shown in Fig.4, there is an additional
software application called a tunneller that tunnels the OPC traffic through a single, deterministically
configured port. For the communication via this port, the second required component, the firewall
depicted in Fig.4 with grey fill, has to be placed and configured so that it allows communication over
the tunneller port only.

Another point exposed to potential cyber-attacks is the physical computer where the software hosting
the digital twin is running. A few of the most important practices used to protect the computers are
listed below:

• With each release of the software product, performing an Attack Surface Analysis to identify all
potential attack points
• For the entry points that have to be open, running and successfully passing security tests performed
by an authorised and independent Device Security Assurance Center (DSAC)
• Performing system hardening with use of the embedded Windows OS firewall to whitelist the ports
that are needed and block all others that are not used by the application
• Disabling USB usage for mass storage media
• Applying regular operating system updates and patches – this is governed and executed within a
cyber security service contract
• Installing and updating the antivirus application on a regular basis
• Signing all code that runs the digital twin application with PKI (Public Key Infrastructure) digital
certificates
• Strictly managing access control by introducing password policies, authorization and role-based
mechanisms on all possible levels, from the operating system to the application itself

Cyber-attacks may be executed from the inside, i.e. some person or application trying to breach the
system while being on board, or they may be external attacks from the public internet. The latter is
more likely, as there is unfortunately quite a large number of hackers who would like to try their skills
attacking digital systems running on board such critical vessels as cruise liners or LNG (Liquefied
Natural Gas) tankers. Therefore, protecting the ship-to-shore connection that facilitates, for instance,
remote access from vendors' companies to computers on board, or the data transfer itself between ship
and shore, must be handled with special attention. The solution used by ABB is called the Remote
Access Platform. In short, it consists of a software agent (RAP agent) deployed on the vessel side that
creates a secure link and communicates with the service centre – a server-side application that functions
as the core of the system, acting as knowledge repository, control centre and communication hub. An
important aspect here is that the communication between ship and shore is always initiated by the ship
side; therefore the firewall marked with red fill in Fig.4 is configured for outbound communication
only, thus restricting all inbound traffic. In addition, there are only two fixed, public IP addresses to be
opened on the firewall (for the service centre and communication server points) and everything else
can be blocked. All communication, such as file transfer or Remote Desktop Protocol, is tunnelled
through the secure link established between the RAP agent and the communication server. Prior to
initiating such a secure link, the RAP agent and the communication server perform two-way
authentication. The communication itself is encrypted using the TLS (Transport Layer Security)
protocol. RAP also provides audit and security features, including audit logs to track user and
application access.

3.5. Data transfer and data ingestion

Once the Remote Access Platform establishes a secure link, it also facilitates automatic file transfer in
batch mode. This is how the on board digital twin transfers selected measurements and analytics results
to its counterpart, the digital twin in the cloud. The amount of data required to be stored in the cloud
also changes along the iterative process of digital twin updates and improvements. On the one hand,
we try to minimize the scope of the data transferred to save on satellite communication link costs; on
the other hand, having as much data as possible stored in the cloud allows for multidimensional and
fleet-wide analysis that results in the improvement of the local models for individual vessels. So instead
of limiting ourselves in the scope of the data, we should rather minimize its size. Techniques of
downsampling and smart data acquisition have been described in chapter 3.2.2, but there is still room
for improvement with the use of high-ratio compression techniques. As has been tested and proven in
real applications, the use of high compression methods may decrease the size of the transferred data
by a factor of 8.

As soon as the data arrives on the receiving side, which is typically a virtual machine with high storage
capacity, the files are handled by a job running on the shore-side digital twin. This job first decompresses
the data and then ingests it into exactly the same software application as runs on board. The difference
is that we may set up multiple consumer instances and either use the same asset modelling method as
on board, thus creating exact copies of the on board digital twin, or, with the use of translation tables,
rearrange the whole model structure and use the arriving data to analyse aspects of a totally new nature.

4. Use case: efficiency analysis for marine DC-Grid systems

The digital twin that was developed to analyse the performance of the propulsion system on a vessel
with DC grid power distribution is a fitting example of how several iterations were needed in order to
construct the model.

A single line diagram giving a schematic representation of a vessel with a DC grid is shown in Fig.7.
The digital twin concept for this case was developed in order to analyse and compare the performance
of the vessel in different modes of operation with respect to fuel consumption. The diagram provides
sufficient information about the connectivity in the system to construct this particular digital twin. Note
that in order to simulate or model other aspects of the system, other drawings and schematics may be
needed. The digital twin in this context is understood as the functions needed to perform this specific
analysis, rather than a model encompassing each individual function of each relevant component. No
3D drawings are necessary in order to inspect, evaluate and simulate the fuel consumption related to
the modes of operation.

Fig.7: High level diagram of a DC grid energy distribution system with 6 diesel engines and electrical
generators, and 6 thrusters

The measured inputs to this particular twin model are:

• Electrical generator power, measured for each electrical generator
• Fuel consumption for each diesel engine

Other inputs are:

• SFOC (Specific Fuel Oil Consumption) curves for the diesel engines, Madsen (2014)
• Model of the PEMS (Power & Energy Management System) function with corresponding limits

As a side note, one missing component when evaluating and benchmarking fuel consumption is a
digital twin in its most literal meaning: a digital model of the considerations made by the crew when
operating the vessel. There may, for instance, be valid reasons for operating with an extra generator
online during an operation, even though it degrades the performance with respect to fuel consumption.

The digital twin model was constructed in several iterations, where the first one was related to the signal
quality of the collected signals. After assembling the model and performing an initial analysis, it became
evident that the sampling rate of most of the signals was high enough to monitor transit/steaming mode,
but insufficient for the monitoring of DP operations. The DP system follows the dynamics of the wind,
current and waves, which means that the sampling needs to be performed with intervals in the order of
seconds, not minutes. This issue was solved by modifying the on board diagnostic system, adjusting the
sampling rate of selected signals. The second iteration concerned the selection of signals. A DC grid
system can be operated with both opened and closed bus-tie breakers, which will influence how the
individual engines and thrusters are loaded. The status of these breakers was not included in the initial
set of logged data and had to be added at a later stage, by interfacing the on board automation system.

In a third iteration, the initial SFOC curves provided by the manufacturer of the diesel engines were
replaced by empirical SFOC curves extracted from over a year of operation. This gives a more accurate
picture than the generic curves. It also corresponds to the principle stated by Cabos and Rostock (2018)
that the digital twin should be updated if the physical object changes.

Completion of this stage of the digital twin then paved the way for more interactions and iterations.
Firstly, the twin enables benchmarking and measurement of the operation of the vessel, which can be
used as feedback to the on board crew. This may be reflected in future operations, which again will be
measured by the digital twin.

In addition, the digital twin has been used together with other modules to simulate the theoretical
performance of a modified or slightly different system. For instance, how much energy could be saved
by adding energy storage to the system in Fig.7? This question can be answered quite accurately by
combining the digital twin with other simulation modules, such as an energy storage module and a
PEMS module. This serves as an example of how the digital twin itself enables iterations and system
modifications; the twin model would of course need to be updated after retrofitting the vessel with
energy storage.

5. Use case: condition monitoring of rotating equipment

Another case where it was necessary to introduce an iterative process of updating the behavioural and
configuration aspects of the digital twin is also related to the vessel with the power and propulsion
system symbolically presented in Fig.7. This time, the main analytics were related to the condition
monitoring of the main rotating electric machinery, i.e. 6 electric generators and 3 selected tunnel
thrusters. In the initial approach, well-proven methods for machine condition assessment based on
spectral analysis of vibration and current measurements were used. Additional sensors such as
accelerometers and Rogowski coils were placed on the machines to measure vibration and electric
current with a high sampling rate (12.5 kHz).

Fig.8: Automatic data collection process, analysis and visualization of machine condition

The original data collection and diagnostics scenario assumed that high-frequency sampled
measurements of vibration and current would be performed at most once per day, under the condition
that the rotating speed of the monitored machine is stable during the high-frequency measurement and
that the load of the machine exceeds a certain level defined in the system (for instance more than 60%
of nominal load). These requirements are needed because:

• in order to perform effective automatic fault identification based on the vibration or current
spectrum, the corresponding spectrum must be distinct, and this can be achieved only if the
variation of the speed is as low as possible
• in order to analyse trends based on specific indicators derived from the spectrum, we should
expect at least one measurement point per day to catch the dynamics of mechanical faults that may
develop in the machine over weeks
• the higher the load on the machine, the better the signal-to-noise ratio and thus the higher the
reliability of the automatic diagnosis

There is a certain limitation in the described measurement system: although vibration and current are
sampled with a high sampling rate of 12.5 kHz for a duration of e.g. 10 s, the rotation speed itself can
only be acquired once per second. This is due to the fact that no additional tachometer has been installed
and the speed is acquired from the automation system using the OPC communication protocol described
in chapter 3.2.1. The high-frequency sampled measurements of vibration and current, together with the
average speed and load calculated while the high-frequency measurements were taken, arrive at the
diagnostic system and are checked against calculation criteria, i.e. the average load must exceed the
threshold level and the variance of the speed must not exceed a specific level. In case the criteria are
fulfilled, the input measurements are processed with multiple analytics from the domains of signal
filtering, spectral analysis and harmonics matching, and checked against the warning limits. The final
information to the users on board is presented as a graphic with traffic lights corresponding to different
machine faults, Fig.8.

The presented scenario has been successfully implemented on numerous vessels and has proved to be
very effective, especially for typical propulsion motors, AC generators or direct online motors.
However, in the discussed case it was soon discovered that the operating profile of the tunnel thruster
motors in particular requires some modification and improvement in the way data is collected and
pre-processed before the actual analysis. This is due to the fact that tunnel thruster motors are mainly
used in DP (Dynamic Positioning) mode, and the analysis of the speed and load measurements derived
from the digital twin showed that the speed may vary by more than 50% within the 10 s duration of the
measurements, Fig.9. This was mainly an effect of wave impact compensation while keeping a fixed
position of the vessel. In consequence, even though the diagnostic system was triggering the
measurements once per hour, these measurements were not processed further as they did not fulfil the
precondition related to the minimal speed variance. As a result, the trends of the main indicators
contained very few points, which did not allow the machine experts to give a reliable diagnosis and
recommendations about required maintenance actions.


Fig.9: Operating cycle of tunnel thruster and effect of data trimming on spectrum quality

In the first step of iterative improvements, the emphasis was put on increasing the number of
measurements. This was achieved by changing the triggering scenario. Instead of checking the level of
speed variance once per hour, the diagnostic scenario checked every second whether the motor was at
its peak of speed increase within a single work cycle. Measurements were triggered immediately if this
condition was fulfilled (see the triggering point marked in Fig.9). As a result, multiple measurements
were taken per day. Many of them, however, still contained the falling edge of the speed, so they were
expected to have a high speed variance and had to be excluded from the analysis. This, however, could
not be determined based only on the low time resolution measurements of the speed. The solution was
found by analysing the high-frequency sampled current data. The close-to-sinusoidal supply current of
the motor contains an approximation of the motor speed. By analysing the frequency of the supply
current wave, one could cut out the time window containing the measurement with the most stable
speed. The only criteria to be fulfilled were the minimum length of the window, to fulfil the spectrum
resolution requirement, and the variance of the speed within this window. Once the window start and
end points had been derived with the use of an optimization algorithm, the same time coordinates were
used to cut out the corresponding window of the vibration measurements (as all vibration and current
channels were sampled simultaneously by the DAU – Data Acquisition Unit).
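
The window search can be sketched as follows. This is a simplified illustration rather than the deployed
algorithm: the speed proxy here is an instantaneous-frequency estimate from zero crossings of the
current, and the optimization is a brute-force scan for the minimum-variance window:

```python
import numpy as np

def stable_window(current, fs, min_len_s):
    """Find the window of minimum speed variance in a current recording.

    current: high-frequency sampled motor supply current
    fs: sampling rate in Hz; min_len_s: minimum window length in s
    """
    # Instantaneous frequency proxy: spacing of rising zero crossings
    signs = np.sign(current)
    crossings = np.where(np.diff(signs) > 0)[0]
    periods = np.diff(crossings) / fs                 # seconds per cycle
    freq = 1.0 / periods                              # ~ motor speed proxy
    times = crossings[1:] / fs

    # Brute-force scan: pick the min-variance window of the required length
    win = int(min_len_s / np.mean(periods))           # cycles per window
    variances = [np.var(freq[i:i + win]) for i in range(len(freq) - win)]
    i_best = int(np.argmin(variances))
    return times[i_best], times[i_best + win]         # start, end in seconds

# Example: chirp-like current whose frequency stabilizes late in the record
fs, t = 1000.0, np.arange(0, 10, 1 / 1000.0)
f_inst = 50 + 10 * np.exp(-t)                         # settles towards 50 Hz
phase = 2 * np.pi * np.cumsum(f_inst) / fs
start, end = stable_window(np.sin(phase), fs, min_len_s=2.0)
print(round(start, 2), round(end, 2))                 # late, stable segment
```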

Once the high-frequency sampled data had been trimmed according to the above scenario, it was further
processed with the same analytics as used in the initial deployment. The right-hand side chart of Fig.9
shows the vibration spectrum before (black) and after (red) trimming. It is clearly seen that the spectrum
derived from the trimmed data has a much more dominant main harmonic, and thus the fault
identification part of the analytics is expected to be more reliable.

The last iteration of digital twin improvement was related to the modification of the baselining method
and the proper calculation of the warning limits. In the original approach, warning limits were based
on initial, single baseline measurements acquired for the monitored machines in the early stage of their
lifetime. In addition, those warning limits were checked against international standards such as ISO
10816-3:2009 (2009). This approach, however, turned out to be wrong because, on the one hand,
measured values such as velocity RMS were multiple times lower than the warning limit given by the
ISO standard, and on the other hand, the variance of the calculated vibration indicators for each
measurement point during the first few months of the machines' lifetime was so high that it could exceed
the baseline limit derived from a single measurement. As a result, the digital twin was producing false
alarms, which is a highly confusing and unwanted situation on board. The high variance of the resulting
indicators again originated from the high dynamics of the motors' operating profile, and even though
the diagnostic system checked the criteria for the load range, it happened that it caught measurements
with extremely high energy that exceeded the baseline warning limits.

Fig.10: Process overview where the digital twin is used to add inboard protection systems

The solution for this was to include statistical variance in the baseline and warning limit calculations.
Instead of using a single measurement as a baseline, a larger set of measurements was taken into the
analysis. The time range to collect such a sample was set to half a year, and with, on average, 4-6
measurement points per month, this gave a good set of approximately 30 observations. Following the
recommendations given by MOBIUS INSTITUTE (2017) for statistical alarm calculations, and based
on the assumption that the spread of the vibration indicators follows a normal distribution, the alarm
limit was calculated as a function of the average value and its standard deviation. An important note is
that the new approach was applied individually to each machine, which resulted in different values of
warning limits corresponding to the actual, observed vibration energy levels specific to that machine.
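
A minimal sketch of such a statistical limit follows; the three-sigma multiplier is a common convention
for normally distributed indicators, not necessarily the exact factor recommended by MOBIUS
INSTITUTE or used in the deployed system, and the baseline values are synthetic:

```python
# Sketch: per-machine statistical warning limit from ~30 baseline
# observations, assuming normally distributed vibration indicators.
import statistics

def warning_limit(baseline, n_sigma=3.0):
    mean = statistics.mean(baseline)
    std = statistics.stdev(baseline)
    return mean + n_sigma * std

velocity_rms = [0.62, 0.71, 0.58, 0.66, 0.69, 0.60, 0.73, 0.64,
                0.67, 0.70, 0.61, 0.65, 0.68, 0.63, 0.72, 0.59,
                0.66, 0.70, 0.62, 0.67, 0.64, 0.69, 0.61, 0.71,
                0.65, 0.68, 0.63, 0.66, 0.70, 0.60]   # half a year of points
print(round(warning_limit(velocity_rms), 3))          # machine-specific limit
```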

The entire analysis described above was performed with the use of the on shore digital twin, as it is
much easier to manipulate data, experiment with different equations and scale the analytics engine in
the cloud. Since the core of the on shore digital twin is based on the same software infrastructure as the
one on board, the act of updating the behavioural definition of the on board twin was a matter of a single
operation performed over the remote connection.

6. Use case: machine learning models for fault prediction

Another example of an iterative process involving digital twins is a project where machine learning
was applied to create a thermal model of a marine propulsion motor on a given class of vessels. The
generation of the digital twin model differed from the previous example in the sense that the structure
and content of the model were partially derived by machine learning. This digital twin was then used
as a basis to improve the on board protection systems, and the improved protection was implemented
in the PCU (Propulsion Control Unit) real-time controller. An overview of this process is shown in
Fig.10. Note that the feedback to the PCU is not a continuous operation.

7. Summary

The marine industry is currently going through an accelerated process of digital transformation. A very
stimulating and open-minded environment has been created, where all key players in the market, i.e.
ship owners, shipyards, system vendors and integrators, are willing to collaborate, integrate and
exchange information and data to solve various challenges together. In such an environment, there is
also a common and strong belief that building digital twins is a fundamental first step that will
eventually result in measurable business gains and provide added value. In this article, it has been
demonstrated how digital twins were used to add value in some specific use cases, and multiple lessons
learnt when integrating the models into the marine digital infrastructure have been presented. It has
been highlighted that the digital infrastructure must be in place, using proven and working building
blocks, in order to start the iterative and to some extent continuous process of improvements and
updates that digital twins require. Business-wise, such a perspective calls for certain investments on
both the customer and the vendor side, followed by continuous, advanced service efforts.

References

CABOS, C.; ROSTOCK, C. (2018), Digital Twin or Digital Model?, 17th COMPIT Conf., Pavone,
pp.403-411

ISO 10816-3:2009 (2009), Mechanical vibration – Evaluation of machine vibration by measurements
on non-rotating parts – Part 3: Industrial machines with nominal power above 15 kW and nominal
speeds between 120 r/min and 15000 r/min when measured in situ

ABB MARINE (2017), RDS4Marine Engineering Manual software version 5.2.3, manual

MOBIUS INSTITUTE (2017), CAT III Vibration Analysis training, online training course

MADSEN, K. E. (2014), Variable Speed Diesel Electric Propulsion, Technical report, Pon Power

There’s no Free Lunch:
A Study of Genetic Algorithm Use in Maritime Applications
Adam Sobey, Jeanne Blanchard, Przemyslaw Grudniewski, Thomas Savasta,
University of Southampton, Southampton/UK, ajs502@soton.ac.uk

Abstract

This paper surveys the applicability and performance of a variety of algorithms belonging to the
Genetic Algorithm (GA) family on maritime engineering problems. Nine different GAs from the
state-of-the-art are compared with the original GA. The applications are a yacht layout and a grillage
structure. The aim is to understand the GA mechanisms by visualising the behaviour, find appropriate
algorithms for common problems in maritime engineering and understand the common characteristics
of these applications. Guidance is given on the correct algorithm selection, with cMLSGA
outperforming other state-of-the-art algorithms on all the test problems.

1. Specialisation vs. generalisation

There’s no free lunch. As an algorithm is adjusted to solve a given problem more successfully, it will
inevitably degrade its performance across other types of problems. Another approach is that it can be
developed to be more “universal”, with a smaller reduction in performance across all problem types.
This means that there are specialist solvers, providing excellent performance on some problems but
much worse on others, and general solvers, capable of solving a range of problems well but never
reaching the performance spikes of the specialist solvers. A key element in ensuring success in
optimisation is therefore using the correct algorithm for the given problem.

Genetic Algorithm applications are increasing across a number of engineering and scientific problems.
These algorithms provide excellent results in training Neural Networks, feature selection in data and
optimisation to reduce the design space. Despite these advantages, a recent ISSC report (Lazakis, et al.,
2018) indicates that there is a decline in the academic literature related to their use in marine
applications. The reasons for this are unknown, certainly to the authors, and it implies that the full
benefits are not being realised. A key problem with these algorithms is that their mechanisms appear
to be a black box, and understanding more about how they relate to a given set of problems should
improve their effectiveness.

Two problems from the marine industry with potentially different characteristics, a yacht layout and a
grillage structure, are characterised by applying a range of top-performing GAs to them: U-NSGA-III,
MOEA/D, MOEA/D-MSF, MOEA/D-PSF, IBEA, HEIA, BCE, cMLSGA, the original GA and MTS.
These algorithms are selected as they are popular in the Computer Science literature but are not
pervasive in marine applications. The original GA represents an approach still common in the marine
industry and U-NSGA-III represents the best practice seen in the academic literature.

2. Review of Genetic Algorithms

2.1. A history of Genetic Algorithms

The basis for Genetic Algorithms is that if the fittest individuals in a given population mate to form a
new generation of children, this new generation will be fitter, on average, than the last. Turing provided
the initial inspiration for the Genetic Algorithm in 1950 (Turing, 1950); in this paper the potential for
a biologically inspired learning machine is proposed, with a focus on Artificial Intelligence but no
explicit methodology. In the 1960s, early group selection theories were developed and the first
successful implementation of a Genetic Algorithm, under the title of adaptive systems, was performed
by Holland and summarised in 1969 (Holland, 1962) (Holland, 1969). These initial studies inspired
others to develop algorithms for use in optimisation, or to implement additional mechanisms, often
moving away from the initial biological inspiration. For example, elitism (De Jong, 1975) is used to
improve performance by retaining a percentage of the best solutions between generations.

The first multi-objective Genetic Algorithm, the Vector Evaluated Genetic Algorithm (VEGA)
(Schaffer, 1985), was developed in 1985. This is a key implementation, as multi-objective optimisation
is a key benefit of using Genetic Algorithms, with single-objective optimisation providing the user with
much less information unless the search space is large or complex. VEGA selects a group of individuals
and assigns them an objective; this approach found some success, but the search is focused
predominantly on the edges of the solution space, providing poor convergence on much of the front.
(Goldberg, 1989) developed what can be considered the algorithm closest to a standard version of a
multi-objective Genetic Algorithm utilising classical mechanisms. This was developed alongside the
first Pareto ranking approach proposed for multi-objective optimisation, which is at the core of many
modern Genetic Algorithms, the niching algorithms, and this process is based on Pareto optimality.

(Fonseca & Fleming, 1993) later develop the Multi-Objective Genetic Algorithm (MOGA), the first
Genetic Algorithm utilising Pareto ranking and niching techniques to solve multi-objective
optimisation problems. This is integrated into MATLAB through the popular MATLAB GA Toolbox
(Chipperfield, et al., 1994), which is still currently used in many optimisation problems. In the same
year, an adaptive Genetic Algorithm (AGA) (Srinivas & Patnaik, 1994) is developed to reduce the
influence of crossover and mutation probability settings. It achieves higher performance on multimodal
problems compared to the standard GA. This is extended to the self-tuning, self-adaptive Genetic
Algorithm (SAGA) (Hinterding, et al., 1996). However, both AGA and SAGA are sensitive to a number
of hyperparameters.

Extending the ideas from Goldberg, the non-dominated sorting method (Srinivas & Deb, 1994) and
simulated binary crossover (Deb & Agrawal, 1994) are used to develop NSGA. This algorithm
improves the Pareto ranking and creates a movement away from binary to real-value encoding, on which
most current Genetic Algorithms are now based. The island model Genetic Algorithm, which divides
the population into sub-population islands and allows the migration of individuals between islands,
improves the diversity of the solutions and is among the first of the hierarchical algorithms (Whitley,
et al., 1999). The Strength Pareto Evolutionary Algorithm (SPEA) combines the characteristics of
several previous multi-objective evolutionary algorithms (Zitzler & Thiele, 1999) and is upgraded to
SPEA2 (Zitzler E, 2001); this algorithm is still used in the marine optimisation literature. To reduce
the number of function calls, the micro-GA (CA & Pulido, 2001) is introduced, which is based on small
population sizes with an external population for updating and re-initialisation. The micro-GA is
improved to develop the Adaptive Micro-GA (AMGA) (Tiwari, et al., 2009).

Modern Genetic Algorithms are considered here to be NSGA-II (Deb, et al., 2002), as the oldest
currently competitive algorithm, and those developed after it. The Multi-Island Genetic Algorithm
extends the concepts of the island model Genetic Algorithm. It is frequently used in the industrial
literature since there is commercial optimisation software based on this algorithm produced by
SIMULIA, Isight, though this software also includes NSGA-II, AMGA and NCGA. In 2004, the
indicator-based selection approach was first developed in the Indicator-Based Evolutionary Algorithm
(IBEA) (Zitzler & Simon, 2004), which improves the efficiency of finding the final Pareto optimal
front. Rather than using dominance to evaluate the achieved solutions, an indicator is used to reflect
the diversity and quality of the current Pareto optimal front in each generation and pushes the solutions
towards the true Pareto optimal front. Most of the literature relating to marine optimisation uses
algorithms from NSGA-II or before and does not adopt the more modern state-of-the-art algorithms,
which are summarised in the next section.

2.2 Mechanisms of current main Genetic Algorithms

Of the Genetic Algorithms frequently used in the optimisation literature, those considered to be
state-of-the-art in various benchmarking exercises in evolutionary computation are not visible in the
marine literature. Therefore, a brief review of the current state-of-the-art in evolutionary computation
is performed to encourage further benchmarking of these algorithms. This is done by splitting the
algorithms into 4 broad categories: niching, decomposition, co-evolutionary and multi-level selection
algorithms. Codes for cMLSGA, HEIA, NSGA-II, U-NSGA-III, MOEA/D, MOEA/D-MSF,
MOEA/D-PSF, MOEA/D-M2M, BCE, IBEA and MTS are available for benchmarking in C++ and
Python from multiple sources including https://www.bitbucket.org/Pag1c18/cmlsga.

2.2.1 Niching algorithms

The introduction of niching techniques increases the diversity of the population and helps Genetic
Algorithms to improve their ability to solve multi-peak optimisation problems. The niching technique
was first based on a preselection mechanism, meaning that a parent individual can only be replaced
when the newly created offspring individual is fitter than the parent; otherwise the parent individual is
retained (Cavicchio, 1970). Niching is exemplified by the crowding-mechanism-based niching
technique found in the most popular Genetic Algorithm, NSGA-II (Deb, et al., 2002), which uses
non-domination to sort the fittest solutions; this mechanism is illustrated in Fig.1. The algorithm's
popularity stems from it being a robust general solver with few hyper-parameters that retains a high
diversity through its diversity preservation mechanisms. This methodology has been upgraded in a
variety of versions, such as U-NSGA-III (Seada & Deb, 2015), which unifies NSGA-II and NSGA-III
to be suitable for mono-objective, multi-objective and many-objective problems, the problem type on
which it is currently the top performing algorithm.

Fig.1: Non-dominated sorting on a bi-objective minimisation example (Wang & Sobey, Under Review)
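
As a compact illustration of the sorting step for two minimisation objectives (a sketch only; full
NSGA-II additionally uses crowding distance within each front and a faster book-keeping scheme):

```python
# Sketch: non-dominated sorting of a bi-objective minimisation
# population into Pareto fronts (first front = non-dominated set).
def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(population):
    fronts, remaining = [], list(population)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Objectives (f1, f2) for five candidate designs
pop = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
print(non_dominated_sort(pop))
# -> [[(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)], [(3.0, 3.0)], [(4.0, 4.0)]]
```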

2.2.2 Decomposition algorithms

Decomposition methods are a relatively new family of algorithms originating with CS-NSGA-II
(Branke, et al., 2004). In decomposition algorithms, the population is divided into sub-groups that
search different sub-regions of the search space; the family includes a number of high-performing
algorithms such as MOEA/D (Zhang & Li, 2007), MOEA/D-M2M (Liu, et al., 2014) and DMOEA-DD
(Liu, et al., 2009) (Liu & Li, 2009). These algorithms introduce a number of additional setting
parameters, such as the weight vectors in MOEA/D, which greatly influence the solutions given by the
algorithm, and it is important to determine their optimal settings. MOEA/D is the most popular
decomposition method, with a number of different variants specialised for different problem types. It
works by creating a set of weight vectors and dividing the population into N weighted multi-objective
optimisation sub-problems which are concurrently optimised in the algorithm. For each weight vector,
𝛾, the Euclidean distances to all other weight vectors are calculated and the closest M weight vectors
of a weight vector 𝛾 are defined as its neighbourhood, illustrated in Fig.2, where the neighbourhoods
can overlap. Two individuals in the neighbourhood of weight vector 𝛾 are randomly selected to generate
the offspring solutions through crossover and mutation. The offspring solutions are then compared with
their parents and the neighbourhood of the parents. If the newly generated solutions are better than their
parents and the neighbourhood of the parents, they replace the previous solutions and the reference
point of the weight vector and its neighbouring solutions is updated. The non-dominated solutions of
each sub-problem are combined to achieve the Pareto optimal front. These methods require a priori
knowledge of the objective space, or they can result in extremely poor performance; however, it is hard
to obtain the required knowledge of the objective space for practical optimisation problems before
solving them. When solving discontinuous or constrained problems, these algorithms struggle with the
large gaps where there are no feasible solutions, as the weight vectors point straight through the gaps
and the individuals struggle to go around these spaces, resulting in a waste of computational power.
However, these algorithms exhibit excellent convergence characteristics, dominating the benchmarking
for dynamic and unconstrained problems.
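
The neighbourhood construction can be sketched in a few lines; this is illustrative only, as a full
MOEA/D implementation also needs the scalarising function, reference point updates and the
evolutionary operators:

```python
# Sketch: build the M-nearest-neighbour structure over MOEA/D weight
# vectors for a bi-objective problem with N sub-problems.
import numpy as np

N, M = 5, 3                                        # sub-problems, neighbourhood size
weights = np.array([[i / (N - 1), 1 - i / (N - 1)] for i in range(N)])

# Euclidean distances between every pair of weight vectors
dist = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=2)
neighbourhoods = np.argsort(dist, axis=1)[:, :M]   # M closest (incl. itself)

print(neighbourhoods)
# e.g. sub-problem 0, weights [0, 1], has the vectors [0, 1], [0.25, 0.75]
# and [0.5, 0.5] as its neighbourhood
```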

Fig.2: Mechanism of MOEA/D (Zhang & Li, 2007)

2.2.3 Co-evolutionary algorithms

The term coevolution was first introduced to describe the coexistence of plants and butterflies (Ehrlich
& Raven, 1964). However, the origins of this theory are older, and they are clearly described by Darwin
(Darwin, 1859) when documenting the interactions between plants and insects. Coevolution can occur
in two forms: cooperation (Potter & De Jong, 1994), where organisms coexist and "support" each other,
or competition (Hill, 1990), where an "arms race" occurs between species as only the fittest may survive
in the given environment.

In the coevolutionary approach, multiple populations of species of individuals coexist and evolve in
parallel, usually utilising distinct reproduction mechanisms, with data exchange introduced between
them. The sub-populations can operate on the same search space (De Jong, 2006), or can be divided
into several regions using additional separation mechanisms (Jia, et al., 2018). The form of data
exchange between groups depends on the type of coevolution utilised: competitive or cooperative. In
cooperative coevolution, the information is shared by different species to form a valid solution when
the problem is decomposed, e.g. along the decision variable space (Potter & De Jong, 1994), or different
sub-populations may cooperate to form the Pareto objective front with different sub-populations
focusing on different regions (Coello Coello & Sierra, 2003); a simplification of this process is shown
in Fig.3. In competitive algorithms, different groups compete in the creation of new sub-populations
(Goh & Tan, 2009) or populations (Lin, et al., 2016), with fitter sub-populations gaining a wider
proportion of children in the next generation, or via an "arms race" where losing sub-populations try to
counter the winning ones by adaptation (Rosin & Belew, 1997). There are two currently top-performing
methods that utilise this approach, the Bi-Criterion Evolution algorithm (BCE) and the Hybrid
Evolutionary Immune Algorithm (HEIA). In the Bi-Criterion Evolution algorithm (BCE) (Li, et al.,
2016), sub-populations operate on the same search space and the individuals for each group are selected
at each generation based on two distinct fitness indicators: the Pareto-based criterion (PC) and the
Non-Pareto-based criterion (NPC). In the Pareto-based criterion, standard Pareto dominance is utilised,
which rewards convergence, whereas in the Non-Pareto-based selection an additional indicator is
introduced, based on Hypervolumes (HV), which rewards diversity of solutions. This leads to an overall
improvement in diversity for the entire population, especially in many-objective cases and problems
with irregular search spaces and variable linkages; however, it is still convergence dominated. A similar
approach is utilised in the Hybrid Evolutionary Immune Algorithm (HEIA) (Lin, et al., 2016), but in
this case two distinct evolutionary computation methods are used, an Immune Algorithm and a Genetic
Algorithm, instead of separate quality indicators. This method shows excellent performance on quite a
wide range of problems, but is more convergence orientated, and its performance has not been evaluated
on highly discontinuous and constrained problems, where the performance is expected to be low, as it
only utilises crowding distance for diversity.

Fig.3: Simplification of the mechanisms of co-evolutionary genetic algorithms (Wang & Sobey, Under
Review)

2.2.4 Multi-Level selection algorithms

The Multi-Level Selection Genetic Algorithm was developed by the authors to take advantage of the
recent evolutionary theories of (Wilson & Sober, 1989). They propose that evolutionary fitness does
not just depend on the fitness of the individual but also on the collective of individuals that it is
associated with; an example might be that the survival of a wolf is not just dependent on its own fitness
but also on the fitness of its pack. So far MLSGA, and its variants, have been shown to have top
performance across a range of state-of-the-art multi-objective problems. The algorithm seems to thrive
in environments with discontinuous fronts and constrained problems, where diversity of the
mechanisms is important. Multi-level selection is unique in that it is the only diversity-first search
Genetic Algorithm.

Multi-level selection Genetic Algorithm (MLSGA) was first introduced by (Sobey & Grudniewski,
2018) and (Grudniewski & Sobey, 2018) where a collective level reproduction mechanism is
introduced, in addition to the individual level used in standard Genetic Algorithm, and the fitness
function is split between these levels. The algorithm works by randomly generating an initial population
which is classified into collectives according to the design variables. On the individual level, each
individual is evaluated through the individual objective function and the normal genetic operators are
utilised to perform individual reproduction. Simultaneously, each collective is evaluated using the
collective objective function. There is a competition among the collectives and the worst collective(s)
is eliminated. This collective is replaced by generating a copy of the best individuals from each of the
remaining collectives. The process is stopped when the termination condition is satisfied. There are two
main fitness evaluation methods, MLS1 and MLS2 (Sobey & Grudniewski, 2018). MLS1 uses the
aggregate of the individuals in the population to calculate the fitness of a collective. MLS2 calculates
different objectives using a fitness defined for the collective, with MLS2R defined as the reverse of
MLS2.
Therefore, MLS1 focuses on solutions at the middle of the Pareto optimal front and MLS2 and MLS2R
enhance the search ability at the two sides of the real Pareto optimal front separately. Based on the two
main methods, MLS-U, combining MLS1, MLS2 and MLS2R, can be utilised in MLSGA to maintain
the diversity of the search, as shown in Fig.4. This method has recently been combined with the co-
evolutionary approach, co-evolutionary Multi-Level Selection Genetic Algorithm (cMLSGA), to
increase its generality and is awaiting publication (Grudniewski & Sobey, Under Review).
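To make the mechanism concrete, a highly simplified sketch of one MLSGA generation is given below; the classification, reproduction and fitness functions here are placeholders of our own, not the published MLSGA operators:

def mlsga_generation(population, n_collectives, ind_fitness, col_fitness,
                     reproduce):
    # Classify individuals into collectives (placeholder: round-robin split;
    # MLSGA classifies according to the design variables)
    collectives = [population[i::n_collectives] for i in range(n_collectives)]

    # Individual-level reproduction within each collective, driven by the
    # individual objective function
    collectives = [reproduce(c, ind_fitness) for c in collectives]

    # Collective-level competition: the worst collective is eliminated...
    scores = [col_fitness(c) for c in collectives]
    worst = scores.index(min(scores))
    survivors = [c for i, c in enumerate(collectives) if i != worst]

    # ...and replaced by a copy of the best individual from each survivor
    replacement = [max(c, key=ind_fitness) for c in survivors]
    survivors.append(replacement)
    return [ind for c in survivors for ind in c]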

Fig.4: MLS-U Pareto optimal front showing the different variants (Sobey & Grudniewski, 2018)

3. Test case 1- Structural Optimisations

3.1. Grillage model

The structural optimisation case study is based on a simple analytical model, the Navier grillage model
(Vedeler, 1945), which is adapted to improve the performance (Blanchard, et al., In Press). The adapted
analytical model is shown to be accurate to within 5% of FEA within the range of topologies that are
considered here. The original model calculates the deflection, w, for a grillage under simply supported
boundary conditions with Eq.(1),

$$w = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} f_{mn} \sin\frac{m\pi x}{L}\sin\frac{n\pi y}{B},$$   (1)

where length, L, in the x-direction is stiffened with transverse stiffeners, NT, running perpendicular to
the x-axis and the breadth, B, in the y-direction is stiffened with longitudinal stiffeners, NL, running
perpendicular to the y-axis. The value for the coefficient fmn is calculated with Eq.(2) for odd wave
numbers m and n, in this case up to a value of 11,

$$f_{mn} = \frac{16PLB}{\pi^6 mnE} \cdot \frac{1}{\frac{m^4 (N_L + 1) I_L}{L^3} + \frac{n^4 (N_T + 1) I_T}{B^3}},$$   (2)
L B
where P is a uniform pressure applied to the panel, E are the elastic equivalent properties, IL the second
moment of area in the longitudinal stiffener and IT the second moment of area in the transverse stiffener.
The longitudinal bending moment, ML, at longitudinal position x and transverse position y is calculated
from the deflection with Eq.(3),

$$M_L = -EI_L \left(\frac{\partial^2 w}{\partial x^2}\right)_{y_i} = EI_L \frac{\pi^2}{L^2} \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} m^2 f_{mn} \sin\frac{m\pi x}{L}\sin\frac{n\pi y}{B},$$   (3)
similarly, the transverse bending moment, MT, is determined with Eq.(4),

$$M_T = -EI_T \left(\frac{\partial^2 w}{\partial y^2}\right)_{x_i} = EI_T \frac{\pi^2}{B^2} \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} n^2 f_{mn} \sin\frac{m\pi x}{L}\sin\frac{n\pi y}{B}.$$   (4)
The maximum stresses, σL,T max, on the crown element in each stiffener are derived from the moments,
ML,T, and calculated with Eq.(5), where ZL,T is the vertical distance of the centroid of an element to the
neutral axis and IL,T the second moment of area:

$$\sigma_{L,T\,\max} = \frac{M_{L,T}\, Z_{L,T}}{I_{L,T}}.$$   (5)
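For illustration, Eqs.(1) and (2) can be evaluated directly; the sketch below sums the odd-numbered terms of the series at a point (x, y). The parameter values are arbitrary placeholders, not those of the test case:

import numpy as np

def grillage_deflection(x, y, P, E, L, B, NL, NT, IL, IT, max_mn=11):
    # Evaluate the Navier series of Eqs.(1)-(2) at a point (x, y),
    # summing odd wave numbers m, n up to max_mn as in the paper
    w = 0.0
    for m in range(1, max_mn + 1, 2):
        for n in range(1, max_mn + 1, 2):
            denom = (m**4 * (NL + 1) * IL / L**3 +
                     n**4 * (NT + 1) * IT / B**3)
            f_mn = 16 * P * L * B / (np.pi**6 * m * n * E * denom)
            w += f_mn * np.sin(m * np.pi * x / L) * np.sin(n * np.pi * y / B)
    return w

# Placeholder values: 10 m x 8 m panel, 4 longitudinals, 3 transverses
w_mid = grillage_deflection(x=5.0, y=4.0, P=10e3, E=200e9, L=10.0, B=8.0,
                            NL=4, NT=3, IL=2e-4, IT=3e-4)
print(f"midspan deflection: {w_mid * 1000:.2f} mm")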

To calculate layer-by-layer stresses Classical Laminate Plate Theory is applied to the crown element of
the stiffeners, the location of the maximum stress on a grillage structure. The moments in the direction
of the stiffener, Mx,L for the longitudinal direction or Mx,T for the transverse direction, are calculated
with the grillage Eqs.(3) and (4). These are divided by the empirically derived factor, F, and the stiffener
width, a, before being implemented into the Classical Laminate Plate Theory, shown for the
longitudinal and transverse directions in Eqs.(6) and (7),

$$M_{x,L} = -\frac{EI_L}{aF}\left(\frac{\partial^2 w}{\partial x^2}\right)_{y_i},$$   (6)

$$M_{x,T} = -\frac{EI_T}{aF}\left(\frac{\partial^2 w}{\partial y^2}\right)_{x_i}.$$   (7)

The empirical factor F is calculated as shown in Eq.(8), with E1 and E2 being the longitudinal and
transverse Young's modulus of the laminate,

$$F = 0.003\left(\frac{E_1}{E_2}\right)^2 - 0.1202\left(\frac{E_1}{E_2}\right) + 3.9721.$$   (8)

The curvatures and strains are calculated using Eq.(9) from the extensional stiffness matrix, [A], the
extensional-bending coupling stiffness matrix, [B], and the bending stiffness matrix, [D]. The crown is
assumed to be in pure bending and therefore the normal forces per unit length, Nx and Ny, and shear
force, Nxy, are assumed to be negligible and set to 0. The width to height ratio of the cross section is
assumed to be small; this means that the lateral curvature is induced only due to the effects of Poisson’s
ratio and therefore transverse bending moment per unit length, My, is also set to 0. The extensional-
bending coupling matrix, [B], relates in-plane strains to bending moments and curvatures to in-plane
forces; the laminate is symmetric and therefore the [B] matrix is also set to 0,

$$\begin{Bmatrix} \varepsilon_x^0 \\ \varepsilon_y^0 \\ \gamma_{xy}^0 \\ \kappa_x \\ \kappa_y \\ \kappa_{xy} \end{Bmatrix} = \begin{bmatrix} A'_{11} & A'_{12} & A'_{16} & 0 & 0 & 0 \\ A'_{21} & A'_{22} & A'_{26} & 0 & 0 & 0 \\ A'_{16} & A'_{26} & A'_{66} & 0 & 0 & 0 \\ 0 & 0 & 0 & D'_{11} & D'_{12} & D'_{16} \\ 0 & 0 & 0 & D'_{21} & D'_{22} & D'_{26} \\ 0 & 0 & 0 & D'_{16} & D'_{26} & D'_{66} \end{bmatrix} \begin{Bmatrix} 0 \\ 0 \\ 0 \\ M_x \\ 0 \\ 0 \end{Bmatrix}.$$   (9)
The stresses in the kth layer of the crown laminate can therefore be expressed as Eq.(10),

$$\begin{Bmatrix} \sigma_x \\ \sigma_y \\ \tau_{xy} \end{Bmatrix}_k = \begin{bmatrix} \bar{Q}_{11} & \bar{Q}_{12} & \bar{Q}_{16} \\ \bar{Q}_{12} & \bar{Q}_{22} & \bar{Q}_{26} \\ \bar{Q}_{16} & \bar{Q}_{26} & \bar{Q}_{66} \end{bmatrix}_k \left(\begin{Bmatrix} \varepsilon_x^0 \\ \varepsilon_y^0 \\ \gamma_{xy}^0 \end{Bmatrix} + z \begin{Bmatrix} \kappa_x \\ \kappa_y \\ \kappa_{xy} \end{Bmatrix}\right),$$   (10)

where $\bar{Q}$ are the reduced stiffness terms, z is the ply centroidal value and τxy is the shear stress.
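A minimal numerical sketch of Eqs.(9) and (10) for this pure-bending case is given below; the stiffness values are placeholder numbers for shape illustration only, not laminate data from the study:

import numpy as np

def crown_ply_stresses(D, Qbar_k, z_k, Mx):
    # Eq.(9): with the in-plane forces and My, Mxy set to 0 and [B] = 0,
    # the curvatures follow from the bending stiffness matrix alone
    kappa = np.linalg.solve(D, np.array([Mx, 0.0, 0.0]))
    # Eq.(10): mid-plane strains are zero in pure bending of a symmetric
    # laminate, so ply stresses come from z_k * kappa only
    return Qbar_k @ (z_k * kappa)

# Placeholder stiffness values for shape illustration only
D = np.array([[120.0, 30.0, 0.0],
              [30.0, 120.0, 0.0],
              [0.0, 0.0, 45.0]])        # bending stiffness [N*m]
Qbar = np.array([[140e9, 3e9, 0.0],
                 [3e9, 10e9, 0.0],
                 [0.0, 0.0, 5e9]])      # reduced stiffness of ply k [Pa]
print(crown_ply_stresses(D, Qbar, z_k=0.002, Mx=50.0))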

Table I: Design parameters for the optimisation


Parameter               Variable   Lower Boundary   Upper Boundary
Number of Stiffeners               2                8
Width of Crown          a          150              300
Crown Thickness         b          5                20
Web Thickness           c          5                20
Web Height              d          150              300
Base Width              e          150              300
Plate Thickness         f          5                20

In this case two variants of the problem are solved, with different numbers of objectives: CLPT, where
the grillages are optimised for mass and stress, and CLPT3, where deflection is also added. Each forms
a problem with 7 design variables, Table I, Fig.5.

Fig.5: Stiffener variables

3.2. Structural Optimisation Results

In this study a total of 10 genetic algorithms are used to solve the presented cases: MOEA/D (Zhang &
Li, 2007) as the top Genetic Algorithm on constrained problems; MOEA/D-MSF (Jiang, et al., 2018)
and MOEA/D-PSF (Jiang, et al., 2018) as improved variants of MOEA/D for imbalanced and uncon-
strained problems; MTS (Tseng & Chen, 2009), as the most proficient constrained solver that utilises
local search strategies that also performs well on unconstrained problems; HEIA (Lin, et al., 2016) as
an algorithm that shows high proficiency across a diverse set of problems and is a general solver with
a bias towards convergence; BCE (Li, et al., 2016) which is another more recent algorithm designed as
a general solver but with a strong bias towards convergence; U-NSGA-III (Seada & Deb, 2015) as the
many-objective universal variant of NSGA-II (Deb, et al., 2002), which is the current state-of-the-art
seen in the marine literature; IBEA (Zitzler & Simon, 2004), which is a highly cited GA that is unused
within the marine literature; cMLSGA as the best general solver (Grudniewski & Sobey, Under
Review); and an algorithm developed to be similar to the original Genetic Algorithm, representing a
solver which is still common in the marine literature.

The tests are performed over 30 separate runs, and the termination criterion is set at 300,000 function
evaluations for each run. The results are compared using the Hypervolume (HV) and Inverted
Generational Distance (IGD) indicators, as between them they provide comprehensive information on
the convergence, accuracy, and diversity of the obtained solutions. HV is the measure of the volume of
the objective space between a predefined reference point and the obtained solutions; it has a stronger
focus on the diversity and edge points and can be calculated according to (While, et al., 2012), where
higher values indicate better results. IGD is the average Euclidean distance between the points on the
true Pareto optimal front and the closest solutions in the obtained set; it is to be minimised, where 0
indicates perfect convergence. This metric has a stronger emphasis on the convergence and uniformity
of the points and can be calculated according to (Wang, et al., 2018).
Different population sizes have been evaluated and 1000 is selected as the best value for all algorithms.
The crossover and mutation rates are set as 1 and 0.08 respectively, and the rest of algorithm-specific
operation parameters are set as in the original publications. For all cases the objective normalization
strategy taken from (Zhang & Li, 2007) is used.
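For reference, both indicators can be computed compactly; the sketch below implements IGD as defined above and the standard sweep for the 2-objective hypervolume (illustrative code, not the implementations used in the study):

import numpy as np

def igd(true_front, obtained):
    # Mean Euclidean distance from each true-front point to its nearest
    # obtained solution; 0 indicates perfect convergence
    t, o = np.asarray(true_front, float), np.asarray(obtained, float)
    d = np.linalg.norm(t[:, None, :] - o[None, :, :], axis=2)
    return d.min(axis=1).mean()

def hv_2d(points, ref):
    # Hypervolume for 2-objective minimisation: area dominated by the
    # points and bounded by the reference point (simple x-sweep)
    pts = sorted(p for p in points if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                 # dominated points add no area
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

print(igd([[0, 1], [1, 0]], [[0, 1], [1, 0]]))        # 0.0
print(hv_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)))    # 6.0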

Table II: Convergence rankings for the different algorithms on both structural optimisation prob-
lems (Green boxes indicate that the algorithm is considered to be a general solver)
                           IGD
      CLPT                                            CLPT3
Rank  Algorithm    Avg.   Min.   Max.   SD            Algorithm    Avg.   Min.   Max.   SD
1     cMLSGA       0.847  0.649  0.968  0.123         cMLSGA       3.18   2.89   3.52   0.155
2     BCE          0.898  0.688  0.939  0.043         HEIA         3.34   3.04   3.85   0.169
3     HEIA         1.07   0.951  1.10   0.025         U-NSGA-III   4.54   4.10   5.07   0.240
4     U-NSGA-III   1.50   1.43   1.65   0.05          BCE          7.40   6.12   8.65   0.55
5     MOEA/D-PSF   1.77   1.29   2.31   0.27          MOEA/D       13.3   12.1   13.8   0.397
6     MOEA/D-MSF   2.14   2.09   2.23   0.0554        MOEA/D-MSF   15.4   15.1   15.7   0.120
7     MOEA/D       4.54   4.52   4.58   0.0122        MOEA/D-PSF   54.0   48.6   56.6   1.36
8     IBEA         6.59   1.05   123    21.6          Original     122    74.6   203    34.3
9     MTS          41.9   18.2   79.4   16.3          IBEA         134    14.7   310    92.1
10    Original     123    62.6   207    34.4          MTS          136    56     209    39

The convergence results show that the general solvers have the best performance on both problems,
with the top 4 performers all being the state-of-the-art general solvers. The top performer on both
problems is cMLSGA, which shows the lowest minimum IGD and the best average performance on
both problems. The worst performer on the simplest problem, CLPT, is the original GA but surprisingly
this is not the case for the 3 objective problem, where it outperforms both IBEA and MTS. There is a
large separation in the IGD metric between the algorithms, showing that the selection of the
algorithm is important. The diversity of the solutions is also assessed, with the rankings shown in Table
III. In this case the rankings are similar to the convergence results, in that they are dominated by the
general state-of-the-art solvers. In the CLPT case, U-NSGA-III performs poorly, which is unexpected,
performing worse than the MOEA/D-MSF and MOEA/D-PSF updates. This contradicts much of the
literature which shows that NSGA-II has strong diversity on multi-objective problems. However, in the
CLPT3 case U-NSGA-III provides strong performance and therefore it is possible that the simplicity
of the CLPT results allows domination by algorithms with good convergence. The Pareto optimal fronts
are generated for the CLPT problem with an exemplar of the worst performing algorithms, Fig.6a), and
the best performing in Fig.6b) where the top performing algorithms all give a similar shape.

382
Table III: Diversity rankings for the different algorithms on both structural optimisation problems
(Green boxes indicate that the algorithm is considered to be a general solver)
                           HV
      CLPT                                            CLPT3
Rank  Algorithm    Avg.   Min.   Max.   SD            Algorithm    Avg.   Min.   Max.   SD
1     cMLSGA       0.865  0.864  0.865  3.18E-04      cMLSGA       0.877  0.877  0.879  7.28E-04
2     BCE          0.864  0.864  0.865  1.26E-04      HEIA         0.877  0.877  0.879  4.59E-04
3     HEIA         0.864  0.864  0.864  4.05E-06      U-NSGA-III   0.876  0.876  0.876  4.13E-05
4     MOEA/D-MSF   0.864  0.864  0.864  1.5E-06       MOEA/D       0.875  0.875  0.875  9.56E-05
5     MOEA/D-PSF   0.864  0.864  0.864  4.31E-05      BCE          0.875  0.874  0.875  2.83E-04
6     U-NSGA-III   0.864  0.864  0.864  7.51E-06      MOEA/D-MSF   0.874  0.874  0.874  7.23E-05
7     MOEA/D       0.864  0.864  0.864  3.84          MOEA/D-PSF   0.872  0.872  0.872  1.51E-04
8     IBEA         0.862  0.850  0.864  2.54E-03      MTS          0.865  0.857  0.868  2.48E-03
9     MTS          0.857  0.855  0.860  1.24E-03      IBEA         0.858  0.853  0.864  2.78E-03
10    Original     0.836  0.825  0.845  4.36E-03      Original     0.853  0.846  0.858  2.93E-03

There is a difference between the two fronts, with the original GA showing a limited range of points,
poor accuracy and sparse points. Fig.6b) shows that BCE can replicate the shape of the true Pareto
optimal front, with better accuracy, a high density of points and a greater range of points in the front. It
demonstrates the importance of selecting the correct algorithm for these problems. The final front is
convex and discontinuous, with small discontinuities along the low stress front. This profile is similar
to those seen in the evolutionary computation literature, e.g. ZDT1, where convergence algorithms
perform strongly, but has a higher number of discontinuities in the front than these idealised problems.

Fig.6: Examples from the best and worst Pareto optimal fronts for the CLPT problem a) original GA
b) BCE

The Pareto Sets for the CLPT3 problem are shown in Fig.7a), for the worst example, and 7b) for the
best example. This shows a similar trend to the CLPT, 2 objective problem, with the worst performing
algorithms showing poor accuracy and diversity of points, but the density is higher for the region of the
front that is found. The best performing algorithms find a wider range of points, with a bifurcation at
higher deflections, which contains a lower density of points, and the algorithm does not find all of the
solutions in this space. The additional objective removes the discontinuities from the front, but there
is a higher density of points in this region that are hard to find.

Fig.7: Examples from the worst and best Pareto optimal fronts for the CLPT3 problem a) original
GA b) cMLSGA

4. Test case 2- Boat layout

Boat layout optimisation is a common problem in the marine literature. In this case a motor yacht is
optimised to maximise the space utilisation of the cabins in the hull. The motor yacht particulars are
described in Table III.

Table III: Motor yacht particulars


LENGTH OVERALL 24.8 m
MAX BEAM 6.9 m
DRAUGHT 1.955 m
DISPLACEMENT 152.8 tonnes
LCB (FROM TRANSOM) 8.673 m

Fig.8: 3D cabin geometry

The yacht consists of a number of cabins, where each can be defined by two polygonal faces
separated by a distance L, the length of the cabin, as shown in Fig.8. Offset points, wxn, are added to
define the cabin shape in the (y, z) plane. The number of offsets is referred to as n and is the same for
both the aft and forward face (e.g. Fig.8, n = 4). The y-offsets of the forward and aft faces of the cabin
are not required to be the same, and they are implemented as variables in the GA, as are the x-position,
y-position and length. The more offset points that are used, the more the cabin face will reflect the
actual hull shape and so the better the space utilization, illustrated in Fig.8, but this results in a more
complex optimisation problem and longer computational time. A tolerance of 1% of wasted area is
obtained with 7 offset points, which is then used to define each cabin face. Using 7 offset points gives
17 variables per cabin.

The objective of the optimisation is to minimise the wasted volume space in the hull. The first fitness
is therefore given by Eq.(12):

$$f_1 = \frac{\text{Hull volume} - \sum \text{cabin volume}}{\text{Hull volume}}.$$   (12)

The second objective is to ensure that the ship is stable by matching the longitudinal centre of gravity
(LCG) with the longitudinal centre of buoyancy (LCB). The LCB is derived from the lines plan and the
design water line. It is obtained by integrating the underwater sectional areas to give the hull
displacement, and then by taking the sum of the moments of the volumes enclosed between the two
sections and dividing by the displacement. To derive the LCG, the first step is to allocate a weight
factor to each cabin to reflect its effect on the overall weight distribution on the yacht, i.e. the engine
room will weigh more than a simple cabin. Each cabin is therefore given a proportion of the total mass
displacement of the yacht. The centre of mass is assumed to be located at the centre of volume of the
cabin, which is obtained by dividing the cabin's volume moment distribution by its volume. Note that
the transverse centre of gravity of each cabin is always on the centreline (y = 0), as the cabins are
symmetrical with respect to this line. The total LCG of the yacht is obtained by summing the mass
moments of the cabins and dividing by the total mass of all cabins. The fitness value of the trimming
module is the normalised distance between the values obtained for LCB and LCG, shown in Eq.(13):

$$f_2 = \frac{LCB - LCG}{LCB}.$$   (13)
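The two objectives can be evaluated directly from a candidate layout; a minimal sketch is given below, with an assumed cabin data structure (all field names are ours, and the absolute value in f2 is our assumption so that it acts as a distance):

def layout_objectives(hull_volume, lcb, cabins):
    # Illustrative evaluation of Eqs.(12)-(13); each cabin is a dict with
    # 'volume', 'x_centroid' (centre of volume) and 'weight' (its share
    # of the displacement) -- all field names are ours
    f1 = (hull_volume - sum(c['volume'] for c in cabins)) / hull_volume

    # LCG: sum of the cabin mass moments divided by the total cabin mass,
    # with each centre of mass assumed at the cabin's centre of volume
    total_mass = sum(c['weight'] for c in cabins)
    lcg = sum(c['weight'] * c['x_centroid'] for c in cabins) / total_mass

    # Normalised LCB-LCG distance; the absolute value makes f2 a distance
    f2 = abs(lcb - lcg) / lcb
    return f1, f2

cabins = [{'volume': 40.0, 'x_centroid': 6.0, 'weight': 50.0},
          {'volume': 55.0, 'x_centroid': 11.0, 'weight': 60.0}]
print(layout_objectives(hull_volume=160.0, lcb=8.673, cabins=cabins))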

To satisfy the non-overlap constraint between two cabins, i and j, at least one of the 4 conditions in
Eqs.(14)-(17) needs to be met:

Longitudinal condition:
1. xi + li ≤ xj (14)
2. xi – lj ≥ xj (15)
Transverse conditions:
3. yi + max(wi0, wi1, wi2, … , win-1 )) ≤ yj (16)
4. yi – max(wj0, wj1, wj2, … , wjn-1 ) ≥ yj (17)

If one of these conditions is respected, the cabins will not overlap. The violation value is defined as the
minimum value by which the cabin is violating the conditions, i.e. the smallest value by which the
cabin position needs to be corrected in order to respect the constraint. Secondly, all cabins must be
within the yacht hull's boundaries. As the hull shape is defined by a lines plan, the calculation of the
maximum allowable breadth is composed of three steps corresponding to interpolations in the (x, z),
(y, z) and (x, y) planes respectively. Thirdly, all of the cabin volumes have to be within a 20% margin
of the predefined standard sizes, which are derived from a linear regression of 39 similar vessels. This
ensures that the cabins are realistically sized. The boat layout forms a more complex problem consisting
of 68 variables, with 2 objectives and 3 constraints. This problem is highly discontinuous, with a large
part of the search space being infeasible.
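A sketch of the non-overlap check and the associated violation value might look as follows (the cabin representation and the interpretation of the violation as the smallest failed margin are our assumptions):

def cabin_overlap_violation(ci, cj):
    # Margins for the four conditions of Eqs.(14)-(17); each cabin is a
    # dict with x, y, length l and offset widths w (a list)
    margins = [
        cj['x'] - (ci['x'] + ci['l']),        # Eq.(14), longitudinal
        (ci['x'] - cj['l']) - cj['x'],        # Eq.(15), longitudinal
        cj['y'] - (ci['y'] + max(ci['w'])),   # Eq.(16), transverse
        (ci['y'] - max(cj['w'])) - cj['y'],   # Eq.(17), transverse
    ]
    # Feasible if at least one margin is non-negative; otherwise the
    # violation is the smallest correction that satisfies one condition
    return 0.0 if max(margins) >= 0 else -max(margins)

a = {'x': 0.0, 'y': 0.0, 'l': 4.0, 'w': [1.0, 1.5, 1.5, 1.0]}
b = {'x': 3.0, 'y': 0.5, 'l': 4.0, 'w': [1.0, 1.5, 1.5, 1.0]}
print(cabin_overlap_violation(a, b))   # 1.0: the cabins overlap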

4.2. Results

The same optimisation process is performed as for the composite structure, with the same
hyperparameters and algorithms, which are compared using the same metrics. In this case the general
performers again perform best in terms of convergence and the original Genetic Algorithm performs
the worst, with the specialist solvers in the middle, shown in Table V. The best performer across both
metrics is cMLSGA. However, in the diversity metric BCE performs poorly, as it is unable to find the
extreme values for the front, and it only outperforms MOEA/D among the algorithms that find any
solutions.

Table V: Performance ranking for the boat layout problem (General solvers are coloured in green)

      IGD                                             HV
Rank  Algorithm    Avg.   Min.   Max.   SD            Algorithm    Avg.   Min.   Max.   SD
1     cMLSGA       0.095  0.054  0.181  0.035         cMLSGA       0.406  0.241  0.484  0.065
2     HEIA         0.101  0.011  0.257  0.057         U-NSGA-III   0.399  0      0.619  0.157
3     U-NSGA-III   0.106  0.008  0.326  0.081         HEIA         0.398  0.100  0.589  0.109
4     BCE          0.236  0.162  0.360  0.042         MOEA/D-MSF   0.148  0.120  0.181  0.014
5     MOEA/D-MSF   0.237  0.221  0.250  0.007         MOEA/D-PSF   0.144  0.121  0.193  0.015
6     MOEA/D-PSF   0.238  0.214  0.250  0.009         BCE          0.140  0      0.275  0.072
7     MOEA/D       0.299  0.216  0.474  0.075         MOEA/D       0.071  0      0.175  0.061
8     MTS          1E+30  1E+30  1E+30  0             MTS          0      0      0      0
9     IBEA         1E+30  1E+30  1E+30  0             IBEA         0      0      0      0
10    Original     1E+30  1E+30  1E+30  0             Original     0      0      0      0

The boat layout problem is more difficult to solve than either of the composite structural problems, as
there are substantially more variables and the space is more constrained. This means that many of the
algorithms with poorer diversity preservation struggle, not being able to find feasible results on each
of the 30 runs, with the robustness shown in Table VI. Some of the algorithms perform even worse,
with MTS, IBEA and the original algorithm not finding any feasible results.

Table VI: The number of times the top performing algorithms find a solution

                    cMLSGA  HEIA  U-NSGA-III  BCE  MOEA/D-MSF  MOEA/D-PSF  MOEA/D
Robustness (/30)    29      29    28          28   27          27          25

The Pareto optimal front is illustrated for one good and one bad example, as shown previously. Here
the Pareto optimal front is less interesting, as finding layout orientations where the LCB and LCG
match is simple for all of the cases, shown in Fig.9b), meaning that only the wasted volume is of
interest; finding a solution at all is the more critical element. However, Fig.9a) clearly shows that
MOEA/D cannot find good designs, finding only those cases with high LCG/LCB and with a large
quantity of wasted volume. The problem demonstrates the difficulty of solving constrained problems:
the same optimisation without the constraints would have a larger feasible search space with fewer
gaps in it, allowing the algorithms to move more easily through these zones during the search. It
highlights the importance of diversity when selecting algorithms to solve this type of problem.

Fig.9: Examples from the worst and best Pareto optimal fronts for the boat layout problem a) MOEA/D
b) U-NSGA-III

5. Discussion and limitations

Two problems, a structural optimisation and a boat layout optimisation, are solved using 10 different
algorithms: 9 covering the state-of-the-art and 1 that represents the "original" Genetic Algorithm that
is still prevalent in the literature. There is a large difference in performance between these algorithms
across both problems, with an unsurprisingly greater separation between the algorithms on the more
difficult test cases. It is interesting to note that the original GA is not always the worst performer,
showing some increase in performance over two more modern GAs when considering convergence on
the 3-objective structural problem. It is difficult to tell the dominant characteristics for the optimisation
based on this analysis alone, although the general performers focusing on a higher diversity appear to
have the best performance. The specialist algorithms, by contrast, exhibit poor performance. This could
be because many of these solvers require substantial a priori knowledge and have not been tuned
specifically to these problems. Perhaps these algorithms are the most tuned to the problems in the
evolutionary computation literature, which makes them inefficient on other kinds of problems.

This study utilises U-NSGA-III to represent NSGA-II, as it is the unified approach and has a similar
performance to NSGA-II but with a small decrease in performance on the multi-objective problems.
However, there are a range of different codes available for NSGA-II meaning that the results here do
not provide a definitive performance for NSGA-II. The provenance of the code is often not highlighted
in marine research but the use of older versions of NSGA-II would result in a poorer performance and
these older codes would fall further down the rankings provided here. To the authors knowledge the
most recent, currently available, update is version 1.1.6, https://www.egr.msu.edu/~kdeb/codes.shtml,
which should be used to ensure the best performance or U-NSGA-III. Whilst the use of U-NSGA-III
should provide similar performance to NSGA-II the results for the CLPT show an uncharacteristically
low diversity and perhaps this is related to the use of the unification code which is more specialised for
problems with higher numbers of objectives.

The range of Genetic Algorithms used within the maritime industry is limited. This appears to be with
good reason with only a few Genetic Algorithms performing better than NSGA-II on these problems,
though the increase in performance can be quite substantial. This paper only investigates two different
problems, and it would be beneficial to see more of these studies performed on a wider range of
applications; in particular, a more challenging range of problems would test the limits of what Genetic
Algorithms are capable of and show where new developments are required to ensure that these
algorithms are adequate for solving problems in the maritime domain.

6. Conclusions

Genetic Algorithms are used across a range of industries to help visualise and understand large search
spaces. However, a recent ISSC report, (Lazakis, et al., 2018), shows that there is a reducing quantity
of literature using these algorithms in the marine field, and this might be linked to a lack of synergy
with the evolutionary computation literature, which could continue to provide better algorithms as the
problems being tackled become more complex. We know that there is no free lunch, as different
algorithms perform differently on different problems, and so to understand how general and specialist
solvers perform on marine industry problems, 9 state-of-the-art algorithms are benchmarked on 2
maritime engineering problems, one structural optimisation and one boat layout, each with different
characteristics. The results show the importance of general solvers to practical problems and that there
is a large range in performance between the state-of-the-art algorithms, which are rarely seen in the
marine literature. The top performing algorithm across all of the problems is cMLSGA, which is a new
algorithm developed in 2018, and so while there is no free lunch it is shown that there is the potential
for a cheap meal, with one algorithm performing best across the different variants of the problems
selected here.

Acknowledgements

The authors would like to thank the Lloyd’s Register Foundation for their continued support of the LRF
UTC in Ship Design for Enhanced Environmental Performance at the University of Southampton and
without whom this research would not have been possible.

References

BLANCHARD, J.M.F.A.; MUTLU, U.; SOBEY, A.J.; BLAKE, J.I.R. (In Press), Modelling the
different mechanical response and increased stresses exhibited by structures made from natural fibre
composites, Composite Structures

BRANKE, J.; SCHMECK, H.; DEB, K.; REDDY, M. (2004), Parallelizing Multi-Objective
Evolutionary Algorithms: Cone Separation, Portland, pp.1952-1957

COELLO COELLO, C.A.; PULIDO, G.T. (2001), A Micro-Genetic Algorithm for Multiobjective Optimization, Zurich

CAVICCHIO, D. (1970), Adaptive search using simulated evolution, University of Michigan

CHIPPERFIELD, A.; FLEMING, P.; POHLHEIM, H.; FONSECA, C. (1994), Genetic Algorithm
TOOLBOX For Use with MATLAB

COELLO COELLO, C.A.; SIERRA, M.R. (2003), A coevolutionary multiobjective evolutionary
algorithm, Canberra

DARWIN, C. (1859), On the Origin of Species by Means of Natural Selection, or the Preservation of
Favoured Races in the Struggle for Life, John Murray

DE JONG, K.A. (1975), Analysis of the behaviour of a class of genetic adaptive systems, University of
Michigan Engineering Library

DE JONG, K.A. (2006), Evolutionary computation: a unified approach, MIT Press

DEB, K.; AGRAWAL, R.B. (1994), Simulated Binary Crossover for Continuous Search Space,
Complex Systems 9, pp.1-5

DEB, K.; PRATAP, A.; AGARWAL, S.; MEYARIVAN, T. (2002), A fast and elitist multiobjective
genetic algorithm: NSGA-II, IEEE Trans. Evolutionary Computation, pp.182-197

EHRLICH, P.R.; RAVEN, P.H. (1964), Butterflies and Plants: A Study in Coevolution, Evolution,
pp.586-608
FONSECA, C.M.; FLEMING, P. (1993), Genetic Algorithms for Multiobjective Optimization:
Formulation, Discussion and Generalization, San Mateo

GOH, C.; TAN, K.C. (2009), A competitive-cooperative coevolutionary paradigm for dynamic
multiobjective optimization, IEEE Trans. Evolutionary Computation 13, pp.103-127

GOLDBERG, D.E. (1989), Genetic algorithms in search, optimisation and machine learning, Addison-
Wesley

GRUDNIEWSKI, P.A.; SOBEY, A.J. (2018), Behaviour of Multi-Level Selection Genetic Algorithm
(MLSGA) using different individual level selection mechanisms, Swarm and Evolutionary Computation
44, pp.852-862

GRUDNIEWSKI, P.A.; SOBEY, A.J. (Under Review), cMLSGA: co-evolutionary Multi-Level
Selection Genetic Algorithm, IEEE Trans. Evolutionary Computation

HILL, J. (1990), The three C's - competition, coexistence and coevolution - and their impact on the
breeding of forage crop mixtures, Theoretical and Applied Genetics 79, pp.168-176

HINTERDING, R.; MICHALEWICZ, Z.; PEACHEY, T.C. (1996), Self-adaptive genetic algorithm
for numeric functions, Berlin

HOLLAND, J.H. (1962), Outline for a logical theory of adaptive systems, J. ACM 9, pp.297-314

HOLLAND, J.H. (1969), Adaptive plans optimal for payoff-only environments, Hawaii

JIANG, S.; YANG, S.; WANG, Y.; LIU, X. (2018), Scalarizing Functions in Decomposition-Based
Multiobjective Evolutionary Algorithms, IEEE Trans. Evolutionary Computation 22, pp.296-313

JIA, Y.H. et al. (2018), Distributed Cooperative Co-evolution with Adaptive Computing Resource
Allocation for Large Scale Optimization, IEEE Trans. Evolutionary Computation

LAZAKIS, I. et al. (2018), Committee IV.2- Design Methods, Liege

LI, M.; YANG, S.; LIU, X. (2016), Pareto or Non-Pareto: Bi-Criterion Evolution in Multiobjective
Optimization, IEEE Trans. Evolutionary Computation 20, pp.645-665

LIN, Q. et al. (2016), A Hybrid Evolutionary Immune Algorithm for Multiobjective Optimization
Problems, IEEE Trans. Evolutionary Computation 20, pp.711-729

LIU, H.; GU, F.; ZHANG, Q. (2014), Decomposition of a Multiobjective Optimization Problem into a
Number of Simple Multiobjective Subproblems, IEEE Trans. Evolutionary Computation, pp.450-455

LIU, H.L.; LI, X. (2009), The multiobjective evolutionary algorithm based on determined weight and
sub-regional search, pp.1928-1934

LIU, M.; ZOU, X.; YU, C.; WU, Z. (2009), Performance assessment of DMOEA-DD with CEC 2009
MOEA competition test instances, Trondheim, pp.2913-2918

POTTER, M.A.; DE JONG, K.A. (1994), A cooperative coevolutionary approach to function
optimization, Springer, pp.249-257

ROSIN, C.D.; BELEW, R.K. (1997), New methods for competitive coevolution, Evolutionary
Computation 5, pp.1-29
SCHAFFER, J.D. (1985), Multiple objective optimization with vector evaluated genetic algorithms,
Hilsdale

SEADA, H.; DEB, K. (2015), U-NSGA-III: A unified evolutionary optimization procedure for single,
multiple, and many objectives, Springer, pp.34-49

SOBEY, A.J.; GRUDNIEWSKI, P.A. (2018), Re-inspiring the genetic algorithm with multi-level
selection theory: Multi-level selection genetic algorithm, Bioinspiration and Biomimetics 13,
pp.852-862

SRINIVAS, M.; PATNAIK, L.M. (1994), Adaptive Probabilities of Crossover and Mutation in Genetic
Algorithms, IEEE Trans. Systems, Man, and Cybernetics 24, pp.656-667

SRINIVAS, N.; DEB, K. (1994), Multiobjective Optimization Using Nondominated Sorting in Genetic
Algorithms, Evolutionary Computation 2, pp.221-248

TIWARI, S.; FADEL, G.; KOCH, P.; DEB, K. (2009), Performance Assessment of the Hybrid Archive-
based Micro Genetic Algorithm (AMGA) on the CEC09 Test Problems, Trondheim

TSENG, L.Y.; CHEN, C. (2009), Multiple trajectory search for unconstrained/constrained multi-
objective optimization, Trondheim

TURING, A. (1950), Computing Machinery and Intelligence, Mind 49, pp.433-460

VEDELER, G. (1945), Grillage Beams in Ships and similar Structures, Grondhal & Son

WANG, Z. et al. (2018), Optimal design of triaxial weave fabric composites under tension, Composite
Structures 201, pp.616-624

WANG, Z.; SOBEY, A.J. (Under Review), A comparative review of Genetic Algorithm use in composite
materials and structural optimisation with evolutionary computation, Composite Structures

WHILE, L.; BRADSTREET, L.; BARONE, L. (2012), A fast way of calculating exact hypervolumes,
IEEE Trans. Evolutionary Computation 16, pp.86-95

WHITLEY, D.; RANA, S.; HECKENDORN, R.B. (1999), The island model genetic algorithm: On
separability, population size and convergence, J. Computing and Information Technology 7, pp.33-47

WILSON, D.S.; SOBER, E. (1989), Reintroducing group selection to the human behavioral sciences,
Behavioral and Brain Sciences 136, pp.337-356

ZHANG, Q.; LI, H. (2007), MOEA/D: A Multiobjective Evolutionary Algorithm Based on
Decomposition, IEEE Trans. Evolutionary Computation, pp.712-731

ZITZLER, E. (2001), SPEA2: Improving the Strength Pareto Evolutionary Algorithms, ETH

ZITZLER, E.; SIMON, K. (2004), Indicator-Based Selection in Multiobjective Search, Birmingham

ZITZLER, E.; THIELE, L. (1999), Multiobjective evolutionary algorithms: a comparative case study
and the strength Pareto approach, IEEE Trans. Evolutionary Computation 3, pp.257-271
Automating Inspections of Cargo and Ballast Tanks using Drones
Erik Stensrud, DNV GL, Oslo/Norway, erik.stensrud@dnvgl.com
Torbjørn Skramstad, NTNU, Trondheim/Norway, torbjorn.skramstad@ntnu.no
Christian Cabos, DNV GL, Hamburg/Germany, christian.cabos@dnvgl.com
Geir Hamre, DNV GL, Trondheim/Norway, geir.hamre@dnvgl.com
Kristian Klausen, Scout Drone Inspection, Trondheim/Norway, kristian.klausen@scoutdi.com
Bahman Raeissi, DNV GL, Oslo/Norway, bahman.raeissi@dnvgl.com
Jing Xie, DNV GL, Oslo/Norway, jing.xie@dnvgl.com
André Ødegårdstuen, DNV GL, Oslo/Norway, andre.odegardstuen@dnvgl.com

Abstract
DNV GL has performed production surveys in enclosed spaces using drones since 2016, demonstrating
cost savings and increased personnel safety. The vision is to develop autonomous inspection drones to
reduce the need to enter tanks. We expect that this will reduce survey duration and survey preparation
costs for the clients and be a major safety improvement for surveyors. A number of drone capabilities
are required to enable visual close-up inspection and non-destructive testing in enclosed, GPS-denied,
and poorly lit environments. We describe current and possible future survey scenarios and the desired
capabilities of an autonomous inspection drone for enclosed compartments. Then, we report status from
an ongoing research project managed by DNV GL and including several industry partners. We
highlight technical challenges and results on drone navigation functionalities, computer vision,
hyperspectral imaging, and ultrasonic steel thickness measurements.

1. Introduction

1.1 Drone surveys state-of-the-practice and challenges

DNV GL has performed production surveys in enclosed spaces using drones since 2016, contributing
to cost savings and increased personnel safety. These drones are off-the-shelf drones and require a
trained drone-pilot who operates the drone manually in addition to the surveyor. They are equipped
with a standard drone video camera and lighting. To reduce the consequences of crashes with the wall,
the drones have been fitted with a custom-made physical cage to protect them in case of contact.

Current commercial off-the-shelf drones lack a number of capabilities required for inspection drones in
GPS-denied and poorly lit environments such as inside ballast tanks, double hull bottoms, and oil tanks.
Also, since they are manually operated, drone crashes occur. This also implies that the survey requires
two persons instead of one, one drone-pilot and one surveyor who monitors the live video stream. With
two persons, there is a need for oral communication. The monitoring surveyor must give navigational
instructions to the pilot, shouting in a noisy environment.

The drone camera does not know where it is looking, so a photo or video stream cannot be tagged
automatically to a specific location in the 3D compartment. The surveyor needs to take notes manually
of the locations the drone has inspected, and of the locations of findings.

The drone camera lacks intelligence to alert the surveyor of a potentially suspicious area. This requires
the surveyor to monitor the video stream continuously. The quality of the video stream, with poor light
and an unstable drone, makes the piloting as well as the video monitoring a tiring exercise.

The requirement for thickness measurements, which the drone is not yet equipped to fulfil, implies that
humans must still access high structures, e.g. through climbing, scaffolding or rafting.

1.2 Future drone surveys - vision, goals, and benefits

The survey vision is to perform remote surveys to avoid the need for humans to enter tanks, and to make
the assessment process significantly more efficient. We expect that this will reduce survey duration and
survey preparation costs for the clients, and it will be a major safety improvement for surveyors. This
might also improve inspection quality, and reduce the environmental impact compared to current
surveys.

The drone vision is to develop an intelligent, autonomous inspection drone. The drone will fly by itself
in a cargo or ballast tank, track where it is, equipped with an intelligence to spot corrosion, cracks or
bad coating condition, measure steel thickness, and compare with historical data to see the development
of corrosion and cracks.

The goal of this project is to improve the inspection process, using semi-autonomous drones
instrumented with hyperspectral cameras, real-time image processing systems, and ultrasonic thickness
measurement equipment.

Among the benefits are:

• Improving ship safety through higher inspection quality, partly due to objective, uniform,
interpretation of results, and partly to improved visual senses through the hyperspectral camera
• Improving personnel safety by reducing the number of tank visits
• Reducing inspection cost, by removing the need for erecting scaffolding, water filling for
rafting, the need for oxygen and inerting the tank, and providing more stream-lined processing
and reporting of findings
• Improving efficiency, due to reduced inspection time, quicker reporting and faster decisions.
• Reducing the environmental footprint, as there is no need for rafting and hence no filling and
emptying of polluted water.
• Improving inspection transparency, because the drone can track where it is and what it has
done, and this track can be logged. The inspection scope can therefore be documented in terms
of areas inspected, as can the number and location of thickness measurements. The trust
in thickness measurement companies has been a concern in the industry, IACS (2002).

2. Present survey regime

Hull surveys are conducted to ensure the safety of the ship. IMO is the regulator and has laid out the
requirements for inspections, and IACS has added additional requirements and recommendations. On
top of that, the Class societies add their own requirements (Rules) and recommendations
(Recommended Practices).

Today hull surveys are mainly performed at class renewal surveys, i.e. every five years. The scope of
such surveys increases with age and is well described. Requirements for General Visual Inspection,
Close-up Visual Inspection, and Thickness Measurements are clearly laid out. Calibrated through many
decades of shipping, the risk level of this procedure can be regarded as satisfactory.

As required through the ISM (International Safety Management) Code and TMSA (Tanker Management
Self Assessment), and as planned by the technical ship manager, hull inspections are today performed
more frequently. Typically, ship officers inspect each ballast tank every six months.

Visual close-up inspection requires the inspection to be performed at arm's-length distance from the
structure. In practice, this means that the surveyor must be physically close to the inspection area. In
addition, thickness measurements are required to confirm or establish the thickness. This is currently
performed with ultrasound sensors in contact with the structure, and therefore termed ultrasound
thickness measurement (UTM).

To come close enough for visual inspection in 20-30 m high tanks, the inspector has to first enter the
tank and then use various means of access, as depicted in Fig.1. These include scaffolding, rafting in a
tank filled with sea water, or rope climbing. Fall accidents happen. Oxygen shortage is another risk
when entering a tank. Rafting accidents can happen inside the tank when the ship is rolling.
Overheating is another risk when inspecting a tank in, e.g., Dubai, where the temperature inside the
tank may exceed 50°C. The current drone-based inspection practice, where the drone essentially is a
flying camera, is then yet another means of access to the upper and inaccessible parts of the structure.

Fig.1: Means of access for close-up inspection: rafting and rope climbing (left); scaffolding (right)

Fig.2: Areas of attention

Fig.2 indicates the areas of the hull that the surveyor should pay particular attention to (DNV GL-IS-I-
C5.1, 2017). We observe that many of these areas are in the upper parts of the structure, and
consequently require some means of access. These areas therefore lend themselves to inspection by a
drone equipped with a camera. They include structural elements such as cargo hold shell frames and
transverse bulkheads for dry cargo ships and bulk carriers, and cargo tank deck transverses and
transverse bulkheads for double hull oil or chemical tankers; the welds of bulkheads to deck are a
critical area, and for chemical tankers with corrugated stainless steel, the cracks are usually in the
welds. A large chemical tanker may have more than 40 tanks.

The surveyor looks for a number of different damages: cracks, corrosion, indents, and buckling. Bad
coating condition can be an indication of such damages. The coating condition and corroded areas are
rated into good, fair, and poor categories, based on the percentage corrosion in an area, Fig.3.

Fig.3: Examples of coating condition and corrosion; GOOD (left); FAIR (right)
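As a toy illustration of such a rating, corroded-area percentages could be mapped to categories as below; the thresholds used here are illustrative placeholders only, the actual criteria being defined in the class society rules:

def coating_rating(corroded_pct):
    # Toy mapping from percentage of corroded area to a category; the
    # thresholds are illustrative placeholders, not the class criteria
    if corroded_pct < 3.0:
        return "GOOD"
    if corroded_pct < 20.0:
        return "FAIR"
    return "POOR"

print(coating_rating(5.0))   # FAIR under the assumed thresholds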

The total cost of a single survey can exceed 1 000 000 USD once you factor in the vessel’s preparation,
use of yard’s facilities, cleaning, ventilation, and provision of access arrangements. In addition, the
owners experience significant lost opportunity costs while the ship is inoperable. Some examples to
illustrate the cost: scaffolding erection, 200 000 USD; 1-2 days in dock, 100 000 USD for an oil tanker
on good rates; emptying tanks of methane gas and substituting inert gases and oxygen, 100 000 USD
for LNG/LPG ships. Hourly costs of the inspection itself come in addition, often including long travel.

Releasing water from oil tanks after a rafting-based inspection pollutes the sea water. In Northern
Europe, this inspection method is therefore not allowed. Furthermore, the tanks must be emptied in
open sea away from ports and shore, so the ship would have to leave port, yet another 2-3 days out of
operation. In practice, rafting in oil tanks is therefore applicable only in voyage surveys.

3. Drone usage scenarios for survey and inspection

Tank and cargo hold inspections are challenging tasks as described above. Today, drones can facilitate
access to high structures in wide spaces but still require human entry into the compartment for both
drone pilot and surveyor. Furthermore, drones are today not suitable for inspecting narrow spaces
accessible only through many manholes.

In the present drone survey regime in DNV GL, the survey is performed by two persons inside the tank,
one surveyor and one dedicated drone pilot operating the drone by remote control. The drone always
remains within their line of sight. The surveyor instructs the pilot where to position the drone, while
the surveyor is monitoring the video stream from the drone camera that is transmitted to the surveyor’s
tablet. The surveyor decides the necessary actions, such as move closer to view more detailed images,
move drone to new position, or decide that the inspected area is acceptable. The surveyor typically uses
his/her knowledge about suspicious areas in vessel structures in general (as illustrated in Fig.2), possibly
combined with knowledge from previous surveys of the same vessel, when guiding the pilot.

As autonomous drones become available, it would be interesting to understand how such devices could
support new ways of working, possibly without entering tanks.

It is evident that drone capabilities will evolve over time. In this section, we will describe scenarios at
different time frames which would allow for new survey regimes. Furthermore, high-level requirements
are listed which drones should fulfil for autonomous applications.

3.1 Full scan scenario

In this most advanced scenario, a drone which has full indoor navigational capabilities can perform full
autonomous scans of compartments. It has uploaded a flight path based on an existing 3D model of the
ship or alternatively can self-identify structural member types from 3D point cloud information. In the
latter case, no 3D model would need to be provided but the 3D tank map is generated “on the fly”
(SLAM, Simultaneous Localization and Mapping). This means that e.g. ballast tanks can be opened on
voyage when they are empty, and the onboard drone can be dropped into the tank for inspection. The
drone follows a predefined path for taking close-up inspection photos of the full inner surface of the
tank. These photos are either mapped on the existing or generated 3D model from the drone flight. An
automatic stitching process assembles a 3D image from the inspection photos to put together a textured
3D model. Based on the onboard drone and on-voyage inspection, such scans can be performed
periodically without loss of operational time for the vessel. Scans could then be uploaded to a cloud
service with protected access for sharing with relevant stakeholders.

Although the first drone systems being capable of such autonomous flight are only just appearing in the
market, the capturing mechanisms already exist: It is possible to capture a precise 3D point cloud
through a handheld device and to map inspection photos on it, Wilken et al. (2015).

Time-correlating all captured imagery for one tank, the progressing change of coating condition can be
tracked. The surveyor could use VR equipment to enter into the mapped imagery of the tank while in
office, Cabos et al. (2017). The VR solution allows scrolling back in time at specific locations in the
tank which are examined. Calculation results from design time indicate in VR specific locations to be
checked for e.g. cracks.

The above scenario is analogous to continuous sensor measurements of mechanical components
(condition monitoring). The drone becomes an optical sensor to periodically track coating condition in
tanks.

Through the continuous tracking of coating condition, early warnings can be given to the technical ship
manager so that coating can be repaired before corrosion occurs. Through image recognition techniques,
coating breakdown can automatically be highlighted by an algorithm.

In the case that corrosion is detected on images, ultrasonic thickness measurements (UTM) could
become necessary. (For results on drone-based UTM, see section 4.3). This could be performed by a
special drone – different from the drone performing the visual scan. More extensive measurements
would be handled through human tank entry.

It could be argued that the frequent full close-up scans compensate for the lack of some human senses
when examining tanks. Thereby, this procedure has the potential to replace human tank entry for
inspection if the tank is continuously well maintained. Based on analyzing the equivalent safety level,
frequent full drone monitoring of a well-maintained coating could replace the current inspection regime
at five-year intervals. This would of course need to be accompanied by respective regulatory changes.

3.2 Partial scan with surveyor onboard

This scenario is still linked with the current survey regime, i.e. periodic surveys every 5th year, and a
predefined scope of the structural members to be examined. The surveyor will enter the ship, but not
the tank, as his/her presence is needed also for non-hull topics. The drone has already scanned those
parts of the tanks required by the rules, i.e. all required structural members, including mapped photo
images, can be visualized on screen or within a VR system for the surveyor onboard. Based on the
pre-captured imagery, the surveyor can then command the drone to re-examine those spots of the tank
structure where the image quality is not adequate. Further close-ups, hyperspectral imaging, and
thickness measurements can be used to inspect the structure more closely.
Therefore, in this scenario, no human tank entry is necessary.

3.3 Single-person drone-assisted survey

Enhanced navigation and man-machine interface of the drone control would enable a single surveyor
to do the survey and also to operate the drone without a dedicated drone pilot. This could be obtained
with high-level navigation commands such as “move to position (x, y, z)”, “hold position for a specified
time or until commanded to move on", "move (x) cm to the left/right/up/down", "keep a distance (x)
from inspected object”, etc. Thereby, the surveyor controls the drone rather through specifying intent
than through direct steering with the joy-stick. In addition, anti-collision functionality needs to be
included to avoid collision with tank main structures such as walls, ceiling and infrastructure like pipes,
beams etc. So, in this scenario, human tank entry would still be necessary, but a single person could
perform the drone survey.

3.4 Derived requirements

The drone is foreseen to be capable of surveying and inspecting many varieties of ship tanks and other
structures such as ballast tanks, double hull bottoms, and oil/cargo tanks (internal as well as external
inspections). Depending on above scenario and type of tank, different requirements result. Extending
on the list given in Cabos et al. (2017) these are combinations of:

• Control of the drone would be autonomous except for high-level planning, i.e. the drone could
survey closed compartments on its own, analyse the data and report findings. The surveyor's
main task would then be to interpret the analysis results and decide which actions are necessary.
• Size of drone. For many types of tanks, the only access way to the interior is through a manhole.
This will impose limitations to drone size. A modular drone where the parts are assembled
inside the tank is a possible solution that could allow for a larger drone in operation. Still,
double bottom tanks, where bays are connected through manholes, make it necessary for drones
to be of small size.
• Localization of the drone is enabled through automatic indoor positioning technology. There
are several methodologies for this in GPS-denied environments. These are e.g. fixed beacons
in the compartment (see section 4.2) or through LIDAR or photogrammetric detection of
distance and direction of points on the steel structure.
• 3D model. Survey planning is guided through a pre-existing map which can be a 3D model,
possibly in a simplified format. This model would also serve to specify a flight route, by setting
way points, and have the drone navigate according to this pre-planned route. The model of the
structure might include information about suspicious areas from previous inspections of the
same vessel, or general knowledge about high risk structures (areas of attention, as in Fig.2) in
different types of compartments. This could enable inspections without the need of a human to
enter tank. Optionally, historical findings, experience databases, or Risk Based Inspection
(RBI) survey plans aid in programming specific flight paths.
• 3D image mapping. Through an image mapping algorithm, captured photos can be displayed
on the model, thereby comprising an updated “visual digital twin”. The model itself can be
derived from captured point clouds and image data thereby reconstructing a 3D model.
• Image recognition methods identify structural defects in photos and mark areas for necessary
follow-up and/or prioritize them for additional human assessment. Hyper-spectral imaging
(HSI) enhances the possibility of automatic degradation identification, e.g. level and type of
corrosion, (see section 4.4). It might also allow the detection of non-original coating or
corrosion under coating. Image recognition can be performed either on-board the drone, or on
a server placed inside or outside the tank and which communicates with the drone. The surveyor
then only interacts with the drone when suspicious observations are detected by the analysis
software. All observations and decisions should be marked with the drone's position.
• Remote connectivity allows a user to interact with and guide the drone for additional close-up
capturing where found necessary. Image data or higher-level information about findings,
including related position data are transmitted to the surveyor's computer screen for manual
interpretation.
• Virtual Reality (VR) techniques facilitate access to the pre-scanned tank information
• Measurement gear is carried by the drone for measurements of thickness and deformations
and/or detection of cracks
4. Inspection drone technical challenges and results

4.1 General

In this section, we report current results from an ongoing research project, ADRASSO (Autonomous
Drone-based Surveys of Ships in Operation). The project is managed by DNV GL, and it includes the
following industry partners: Scout Drone Inspections, Jotun, Norsk Elektro Optikk (NEO), Idletechs,
plus the Norwegian University of Science and Technology (NTNU). The project is partly funded by
the Norwegian Research Council for 3 years and started mid-2018.

Fig.4: Inspection drone for closed compartments, e.g. oil tanks (courtesy of Scout Drone Inspections)

4.2 Drone specifications and key challenges

The key specifications of a drone utilized for the inspections described in section 2 require a navigation
system that does not rely on satellite-based systems (such as GPS) or magnetic compasses. In addition,
it needs a system to perform on-board collision detection and avoidance, to ensure safe distances to the
object to be inspected, without constant input from the user.

To provide a robust location of the drone, the navigation system is aided by ground beacons, which
virtually work as an indoor satellite navigation system. These beacons are small and easy to set up and
configure. When the data from the ground beacons are fused with an Inertial Measurement Unit (IMU),
an accurate location can be calculated.
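As an illustration of how beacon ranges yield a position fix, the sketch below solves the linearised range equations by least squares (a simplified stand-in for the actual navigation system, which additionally fuses IMU data):

import numpy as np

def trilaterate(beacons, ranges):
    # Linearise ||x - b_i||^2 = r_i^2 by subtracting the last beacon's
    # equation, then solve the resulting linear system by least squares
    b, r = np.asarray(beacons, float), np.asarray(ranges, float)
    A = 2.0 * (b[:-1] - b[-1])
    rhs = (r[-1]**2 - r[:-1]**2
           + np.sum(b[:-1]**2, axis=1) - np.sum(b[-1]**2))
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

beacons = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 8)]
true = np.array([3.0, 4.0, 2.0])
ranges = [np.linalg.norm(true - np.array(b, float)) for b in beacons]
print(trilaterate(beacons, ranges))   # approximately [3. 4. 2.]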

For collision detection and avoidance, the control system uses different sensors placed in all directions
around the drone that measure distances to the nearest objects. The same sensors can be used to let the
operator specify a "lock-in" distance to a wall for inspection.
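A minimal sketch of such a "lock-in" behaviour is a proportional controller on the measured wall distance; the gain and limit values below are illustrative placeholders, not values from the actual control system:

def lock_in_velocity(measured_dist, target_dist, gain=0.8, v_max=0.3):
    # Proportional control toward the commanded wall distance; a positive
    # output commands motion toward the wall. Gain and limit values are
    # illustrative placeholders only
    error = measured_dist - target_dist
    v = gain * error
    return max(-v_max, min(v_max, v))   # saturate the command for safety

print(lock_in_velocity(1.5, 1.0))   # 0.3: too far away, close the gap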

The control system includes a semi-autonomous mode, where the user provides high-level movement
commands, while the system ensures safe collision-free movement. Furthermore, the drone carries the
instrumentation needed to perform the inspection. Three instruments are considered in the project:

1. RGB video camera for visual close-up inspection
2. A hyperspectral (HS) camera with necessary lighting
3. An ultrasonic sensor to provide Ultrasonic Thickness Measurements (UTM) of wall thickness.

The RGB camera is enhanced with computer vision for automatic detection of cracks and corrosion.
(See section 4.5 for computer vision results).

The HS camera is planned to be enhanced with a software pipeline for improved assessment of the
coating condition, including automated good/fair/poor rating, detection of corrosion under coating, and
detection of non-original coating that may hide damages. As the HS-camera is a line-scanning device,
the capturing motion of the camera needs to be controlled by the on-board drone control system, and
the images stitched together using the motion data. For more information about the HS-camera and
preliminary results, see section 4.4.
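Conceptually, assembling an image from a push-broom sensor amounts to stacking the line scans in the order given by the platform's motion estimate; a toy sketch follows (ignoring the attitude and distance corrections that a real pipeline must handle):

import numpy as np

def assemble_cube(line_scans, positions):
    # Stack push-broom line scans (each: pixels x bands) in the order of
    # the along-track positions from the drone's motion estimate; a real
    # pipeline must also correct for attitude and distance variations
    order = np.argsort(positions)
    return np.stack([line_scans[i] for i in order], axis=0)

scans = [np.random.rand(1800, 182) for _ in range(100)]   # dummy lines
cube = assemble_cube(scans, np.linspace(0.0, 1.0, 100))
print(cube.shape)   # (100, 1800, 182): rows x pixels x bands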

For the UTM operations, the control system also needs to ensure a proper contact force on the target
object, for the duration of the measurement. This means 1 second of contact for the sensor currently
under investigation.

Finally, the drone system will include an AR/VR user interface for easy, remote operation, reducing the
need for trained drone pilots.

As noted above, all of these different instruments require different control modes for accurate data
capture, which is a key challenge when performing these tests in an automatic or autonomous fashion.
In addition, the HS-camera has very strict requirements for lighting. The light source needs to be
uniform and temperature stable over the frequency range of the HS-imager and calibrated with proper
exposure. Halogen lights provide the most uniform lighting across the spectrum, but they can be heavy
and inefficient.

4.3 Drone-UTM results

Initial tests have been conducted to show the feasibility of a UTM sensor attached to the drone. The
UTM sensor is provided with a probe, a control board, and a tank for liquid couplant gel. The tank is
equipped with an automatic pump. The trials were conducted in a laboratory setting, where the target
was a 12 mm steel plate attached to an aluminum wall 1 m above the ground. In the tests, a skilled drone
pilot guided the drone to the steel plate and managed to ensure good contact between the probe and the
wall piece. The on-board autopilot aided in keeping altitude and heading, while the pilot had to
constantly adjust the forward and sideways motions of the drone manually. The next step is to use the
on-board sensors to automate the motion, so that the control system performs the measurement
maneuver automatically. In this case, the inspector only has to select the spot to measure.

Fig.5: Drone carrying ultrasound sensor for thickness measurements – UTM (courtesy of Scout Drone Inspections)

4.4 Hyperspectral imaging – use cases

The motivation for investigating hyperspectral imaging is that we hope it can improve the visual close-
up inspection by detecting problems with the hull that cannot be detected by the human eye. This section
presents hyperspectral imaging and what we have done so far in the project. A hyperspectral camera
essentially sees many more colours (wavelengths) than an RGB camera. While an RGB camera sees three colours, red, green and blue (Fig.6, left), an HS camera can see several hundred colours, as illustrated in Fig.6, right.

Fig.6: RGB camera (left) vs hyperspectral camera (right)

Approximately 100 test panels have been prepared at Jotun’s material laboratory. The laboratory has a
salt chamber where temperature and humidity can be controlled to corrode steel plates. The test panels
include corrosion of varying severity, varying coating types, coating thicknesses, and coating ages, and
corroded panels that have been re-coated.

Hyperspectral data were then collected by scanning the panels in NEO's laboratory, which is a controlled environment with constant camera distance, constant scanning speed, and stable and uniform lighting. We used NEO's HySpex VNIR-1800 camera for the first trials, https://www.hyspex.no/products/vnir_1800.php. It is a push-broom camera (https://en.wikipedia.org/wiki/Push_broom_scanner) with 182 bands, 1800 pixels, and a 0.4-1.0 µm spectral range. It weighs 5.0 kg, consumes 30 W, and its dimensions are 39x9.9x15 cm, Fig.7.

Fig.7: NEO’s HySpex VNIR-1800 hyperspectral camera

Fig.8 shows examples of panels that have been corroded in a salt spray chamber for 1, 4, and 7 days, respectively. To the naked eye, it may be difficult to distinguish between the three panels and assess the corrosion severity. The Principal Component Analysis (PCA) shows the colour variance of the pixels in the image on a colour scale from blue to red. Here, we show only the 1st principal component. We observe that in the most corroded panel, at the bottom, there are both blue and red pixels in quite close vicinity, whereas when we look at the same spots in the left image, it is hard to see that these areas differ so much in colour. However, we cannot conclude anything yet with respect to the assessment of corrosion severity.
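As a minimal sketch of this visualization, assuming the hyperspectral cube is available as a NumPy array of shape (height, width, bands), the 1st principal component image can be computed as follows; the use of scikit-learn and all names are illustrative choices, not necessarily the project's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def first_principal_component(cube: np.ndarray) -> np.ndarray:
    """Project each pixel spectrum of an (H, W, bands) cube onto PC1."""
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands)               # one spectrum per row
    pc1 = PCA(n_components=1).fit_transform(pixels)
    return pc1.reshape(h, w)                       # back to image layout

# The result can then be rendered on a blue-to-red colour scale, e.g. with
# matplotlib: plt.imshow(first_principal_component(cube), cmap="jet").
```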

The three panels in Fig.9, from top to bottom, have first been corroded in a salt spray chamber for 1, 4, and 7 days, respectively (as shown in Fig.8). Then, they have been coated with coatings of various thicknesses in the range 35-120 µm. Fig.9, left images, show an example of 35 µm coating thickness. Fig.9, right images, show the PCA result, visualizing the spectral variation in the image in the 1st principal component. As before, blue and red pixels are most dissimilar in terms of spectra, and yellow
and green are in between. We observe that the HS camera sees more variance in the pixel spectra in the
most corroded panel at the bottom. However, we cannot yet conclude from this whether it is able to
detect corrosion under coating.

Fig.8: Corroded panels 1, 4, 7 days (left, top to bottom); PCA of hyperspectral images (right)

Fig.9: Corroded panels with 35 µm coating hiding the corrosion: RGB images (left); PCA of HS images (right)

4.5 Automated object classification of cracks in images

4.5.1. Data collection and preparation

During 4 months in 2018, we collected 1.5 million uncategorised images from surveyors' documentation of findings, captured for use in survey reports. These images are photos taken by DNV GL surveyors during inspections to document findings, i.e. structural damages of the hull including missing coating, cracks, corrosion, indents, and buckling. From this huge dataset of 1.5 million images, 946 images containing cracks were extracted to train a machine learning (ML) model, Xie (2018).

The procedure to extract a usable dataset of 946 images of cracks from the 1.5 million uncategorised
images took 5 work-months during the remainder of 2018 and included several labour-intensive steps.

It is beyond the scope of this paper to provide a detailed account of the procedure, but the short version
is that we first extracted 600,000 images considered relevant; then manually classified 180,000 out of
the 600,000 images into three categories: crack, corrosion, and other. This was done using the open
source tool PhotoSift (https://www.rlvision.com/photosift/about.php), resulting in 11,000 images
containing cracks. From the 11,000 images, 1500 images were selected. 946 images out of the 1500
were then labelled on pixel-level, using the free tool GIMP (https://www.gimp.org/downloads/),
labelling each pixel that was deemed to be part of the crack. The descriptive statistics for the labelled images are provided in Table I. We observe that the cracks occupy only a small part of the image in terms of the number of pixels: on average only 0.3% of the total pixels in an image, with a maximum of 12.9%. There is therefore a very low fraction of positive data points, and consequently a very unbalanced dataset.

Table I: Dataset of crack images; descriptive statistics
Description                   N    mean       min     max
Images - Number of pixels     946  2,682,785  49,290  15,925,248
Cracks - Number of pixels     946  7,333      17      405,919
Image width/height ratio      946  1.3        0.7     1.8
Crack/image pixel ratio (%)   946  0.3375     0.0055  12.904

As we experiment with semantic segmentation as well as with object classification, different next steps are needed. In this paper, we report only the object classification study, and therefore provide only the data preparation steps for that study. The next step was to create a training dataset by dividing the 946 images into 311,001 smaller image patches, patch size 100x100 pixels. The main objective of this approach was to increase the training dataset. Each image patch was labelled as either crack or not-crack. We used a threshold of 50 crack pixels, labelling an image patch as crack if it contained ≥50 crack pixels, and as not-crack otherwise. Since the images are already labelled on pixel-level, we could classify the image patches automatically. We observe from Table II that the number of image patches labelled as crack constitutes only 2.7% of the total number of patches. This dataset is therefore highly unbalanced, and a model trained on it would tend to classify all test images as not-crack. Actually, if the training and test datasets were equally unbalanced, we could simply classify all test images as not-crack and still achieve a 97% accuracy. We therefore randomly selected 9,385 out of the 302,469 not-crack image patches. The descriptive statistics for this balanced dataset are provided in Table III; a minimal sketch of these patch-generation and balancing steps is given after Table III.

Table II: Unbalanced dataset of image patches for object classification; descriptive statistics
                  Patches           Number of crack pixels
Description       N        % of N   mean  min  max
All patches       311,001  100      21    0    10,000
Crack patch       8,532    2.7      773   51   10,000
Not-crack patch   302,469  97.3     0.1   0    50

Table III: Balanced dataset of image patches for object classification; descriptive statistics
                  Patches          Number of crack pixels
Description       N       % of N   mean  min  max
All patches       17,917  100      368   0    10,000
Crack patch       8,532   48       773   51   10,000
Not-crack patch   9,385   52       0.1   0    50
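The following is a minimal sketch of the patch generation and class balancing described above, under the assumption that each image comes with a same-sized binary crack mask (1 = crack pixel); all function and variable names are illustrative, not taken from the study.

```python
import numpy as np

PATCH, THRESHOLD = 100, 50   # 100x100 patches; >=50 crack pixels => "crack"

def make_patches(image, mask):
    """Cut an image and its pixel-level crack mask into labelled patches."""
    patches, labels = [], []
    h, w = mask.shape
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patches.append(image[y:y + PATCH, x:x + PATCH])
            n_crack = int(mask[y:y + PATCH, x:x + PATCH].sum())
            labels.append(1 if n_crack >= THRESHOLD else 0)
    return np.array(patches), np.array(labels)

def balance(patches, labels, n_keep, rng=np.random.default_rng(0)):
    """Randomly undersample the not-crack class to n_keep patches."""
    keep = np.concatenate([np.flatnonzero(labels == 1),
                           rng.choice(np.flatnonzero(labels == 0),
                                      size=n_keep, replace=False)])
    return patches[keep], labels[keep]
```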

4.5.2. Object classification model - training, validation, and test data

A transfer learning approach was used. The base model was Xception, Chollet (2016). Xception was chosen because it had the highest accuracy in the ImageNet competition, https://en.wikipedia.org/wiki/ImageNet, at the time we selected it. We removed the output layer as well as the last hidden layer and replaced them with one new hidden layer and an output layer with two classes, crack and not-crack. The model was then retrained with our training data, using the pre-trained weights of the Xception model as the initial weights. We used the Xception API provided by Keras (https://keras.io/), running on top of TensorFlow (https://www.tensorflow.org/).
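The following minimal sketch illustrates this transfer-learning setup with the Keras Xception API; the size of the replacement hidden layer, the optimizer, and other hyperparameters are illustrative assumptions, not the settings used in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Xception base with ImageNet weights as the initial weights; the original
# classification head is dropped (include_top=False).
base = tf.keras.applications.Xception(weights="imagenet",
                                      include_top=False,
                                      input_shape=(100, 100, 3),
                                      pooling="avg")

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),    # replacement hidden layer (size assumed)
    layers.Dense(2, activation="softmax"),   # two classes: crack / not-crack
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_patches, train_labels,
#           validation_data=(val_patches, val_labels), epochs=10)
```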

Here, we only report the results using the balanced dataset in Table III. We divided the image patches into a training set and a validation set, 80/20, randomly selected. We used 14,333 patches out of the total 17,917 for training, Table IV. We used the remainder of the image patches as validation data, Table V. The test dataset consisted of images which were not used as part of the training and validation datasets. It consisted of 144 images, Table VI, including both crack and not-crack images.

Table IV: Training dataset of image patches for object classification
Description       N       % of N
All patches       14,333  100
Crack patch       7,078   49
Not-crack patch   7,255   51

Table V: Validation dataset of image patches for object classification
Description       N      % of N
All patches       3,584  100
Crack patch       1,454  41
Not-crack patch   2,130  59

Table VI: Test dataset of whole images for object classification
Description        N    % of N
All images         144  100
Crack images       94   65
Not-crack images   50   35

4.5.3. Test Results and Discussion

Referring to Table VII, the precision = TP/(TP+FP) and recall = TP/(TP+FN) were 63.4% and 47.9%, respectively. The number of correct classifications (TP+TN) was 69 out of a total of 144 predictions (48%). The numbers of true/false positives/negatives are provided in Table VII. A precision of 63% means that the ratio of true to false crack detections is approximately 2/3 to 1/3, i.e. the algorithm flags a crack where there is none in 1/3 of the detections. A recall of 48% means that the algorithm fails to detect more than half of the actual cracks.

It is interesting to compare these results to previous results with smaller training datasets. In a previous study presented at COMPIT'18, Xie et al. (2018), the number of correct classifications, using a balanced dataset, was 44 out of 58 (76%). In the previous study, we used a smaller training dataset, 7,401 image patches vs. 17,917 in this study, Table III. So, the Deep Learning mantra of just adding more training data might seem to have some limitations.

Table VII: Crack detection test results
Description           N
True Positives (TP)   45
True Negatives (TN)   24
False Positives (FP)  26
False Negatives (FN)  49
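As a sanity check, the metrics discussed above can be reproduced directly from the confusion matrix in Table VII:

```python
TP, TN, FP, FN = 45, 24, 26, 49   # counts from Table VII

precision = TP / (TP + FP)                    # 45/71  ~ 63.4%
recall    = TP / (TP + FN)                    # 45/94  ~ 47.9%
accuracy  = (TP + TN) / (TP + TN + FP + FN)   # 69/144 ~ 48%

print(f"precision={precision:.1%}, recall={recall:.1%}, accuracy={accuracy:.1%}")
```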

The test classifies an image as containing a crack or not. It therefore indicates the existence of a crack in an image without localizing it: we do not know which pixels in the image the classifier thinks belong to the crack. This is a motivation to investigate classifiers that either put a bounding box around the crack object or classify each pixel through semantic segmentation.

5. Discussions and Conclusions

There are several preliminary learnings from this project so far:

• The more autonomy we can achieve, the higher the benefits will be in surveys with respect to cost reductions, increased personnel safety, and reduced environmental footprint. Currently, more high-level commands are under development, and we believe that the single-person drone-assisted survey scenario is within reach in the near future.
• It is feasible to perform UTM with a drone. More autonomous functionality to increase its
stability and contact force will make drone-UTM easier, faster, and more reliable.
• Preliminary hardware design studies indicate that a single drone, based on the current state of the art, cannot carry all the required instrumentation due to drone size vs. payload weight and size constraints, e.g. battery weight and size. We will probably need specialized drones, e.g. one drone for thickness measurements and another drone for hyperspectral imaging. We might also need a different kind of drone or robot for inspecting large cargo tanks vs. double hull compartments.
• We have experienced that collecting and preparing images for a machine learning (ML) algorithm for crack detection requires a large manual effort. We also experienced that the number of relevant and useful images for training the ML model is a low percentage of all the collected data. A lesson for future data collection is to put in place work processes where surveyors collect and submit data in a way that makes the images more easily usable for ML.
• As for the computer vision results on crack detection, the preliminary conclusion is that the detection of cracks is still significantly inferior to a human surveyor. More R&D is required before we can develop a production-ready "crack detection assistant". However, it should also be observed that we have only evaluated our algorithm on images captured by surveyors where surveyors had already found cracks (i.e. a 100% crack detection accuracy for the surveyors). We still do not know if the algorithm would have been able to detect cracks that the surveyors overlooked during inspection. A more realistic test would be to use a video of the whole tank as test data for the computer vision. Furthermore, this study reported on an image classification approach only. Further work includes exploring object detection (a bounding box approach) as well as semantic segmentation.
• As for the crack images, we experienced that they are very heterogeneous. Some images were
from inside dark tanks and other images were from the deck in sunlight, and they are from very
different parts of the hull structure. It should also be observed that the images used for training
the ML were never intended for this purpose but rather were captured by surveyors to document
findings and therefore do not lend themselves as well to this purpose as we had hoped. We now
hypothesize that we will need to create subcategories of crack images that are more
homogeneous within each subset to improve the detection performance.
• As for the training and test approach of the ML algorithm, it is an open question whether our approach of using image patches as training data and whole images as test data results in poor performance because the training and test data are too different, ML algorithms being poor at generalizing and abstracting.
• DNV GL is also developing crack detection algorithms for other assets like photovoltaic panels and wind turbine blades. A preliminary observation is that automatic detection of cracks in ship hulls is a much harder problem to solve than e.g. cracks in photovoltaic panels and wind turbine blades, for many reasons that are beyond the scope of this paper to explain.
• For hyperspectral imaging, it is premature to conclude whether it is able to detect corrosion
under the coating and assess corrosion severity. Among many issues, we still do not know the
limits with respect to variables such as coating thickness, coating type, coating colour, and the
severity of the corrosion. Experimenting with various types of lighting (LED, halogen, spectral
properties, polarization etc.) may also be considered.

Acknowledgements

The ADRASSO project is supported by project grant number 282287 from the Norwegian Research
Council. A number of people provided inputs to this study, including the DNV GL ship surveyors Ole
Martin Østbye, Dag Børre Lillestøl, Cezary Galinsky, and Vegar Rype. A number of other DNV GL
surveyors around the world have contributed with photos of damage findings. Kjell Olaisen and Morten
Østby, DNV GL, contributed to discussions on future inspection regimes. Øyvind Smogeli, DNV GL, reviewed the paper. Aida Kazagic, Jotun, provided numerous test panels for the analysis of hyperspectral images. Trond Løke, Norsk Elektro Optikk, provided lab facilities and equipment for collection
of hyperspectral data from the Berge Helene ship. All hyperspectral data from Jotun test panels were
also collected in Norsk Elektro Optikk’s lab. Scout Drone Inspections provided images of the drone and
preliminary experiments with drone assisted UTM measurements.

References

CABOS, C.; WOLF, V.; FEINER, P. (2017), Remote Hull Surveys with Virtual Reality, 16th COMPIT
Conf., Cardiff

CHOLLET, F. (2016), Xception: Deep Learning with Depthwise Separable Convolutions, https://arxiv.org/abs/1610.02357

DNVGL-IS-I-C5.1 (2017), Instruction to surveyors - Survey and repair guidance – hull and equipment,
http://one.dnvgl.com/internalservicedocuments/PDF/is/2017-10/IS-I-C5.1.pdf

IACS (2002), Website Tracks Thickness Measurement Companies, https://www.marinelink.com/news/measurement-thickness306345

IACS (2018), URZ, Requirements concerning Survey and Classification, http://www.iacs.org.uk/publications/unified-requirements/ur-z/

IMO (1993), A.744(18), Guidelines on the enhanced programme of inspections during surveys of bulk carriers and oil tankers, http://www.imo.org/en/KnowledgeCentre/IndexofIMOResolutions/Maritime-Safety-Committee-(MSC)/Documents/MSC.197(80).pdf

IMO (2001), Condition Assessment Scheme, Resolution MEPC.94(46)

WILKEN, M.; CABOS, C.; BAUMBACH, D.; BUDER, M.; CHOINOWSKI, A.; GRIESSBACH, D.;
ZUEV, S. (2015), IRIS - An innovative inspection system for maritime hull structures, Int. Conf.
Computer Applications in Shipbuilding (ICCAS), Bremen

XIE, J. (2018), Image data preparation & crack classification model, DNV GL Report No. 2018-1331

XIE, J.; HAMRE, G.; STENSRUD, E.; RAEISSI, B. (2018), Automated crack detection for drone-based inspection using convolutional neural network, 17th COMPIT Conf., Pavone

Drawingless Production in Digital Data-Driven Shipbuilding
Ludmila Seppälä, CADMATIC Oy, Turku/Finland, ludmila.seppala@cadmatic.com

Abstract

The paper aims to explore the complexity behind a seemingly simple question: Can vessels be built
without the use of traditional drawings? It also explores the evolution of the role CAD played in the
past and will play in the future of production and manufacturing in digital data-driven shipbuilding.
The use of foresight and socio-technological transition theories provides a broader perspective on the case.

1. Introduction: innovations in digital data-driven shipbuilding and foresight

If the question were formulated as "Can 3D models replace traditional design drawings?", the impromptu answer most people would give would be "Of course, and if not done already, it is only a matter of time before it will". There are many ongoing discussions about digitalization, digital transformation and digital manufacturing, Industry 4.0 and smart factories, AI and more. Mostly, these are vision-like predictions based on the latest technological advancements and the overall hype among influencers and future-minded professionals developing strategic directions for their business.

If the answer is so simple and straightforward, and such an obvious direction for development, one can only wonder why this change has not happened already. 3D models have been around in shipbuilding for almost 50 years; should this not be sufficient time to polish the technology and replace old artefacts, such as 2D drawings?

I argue that this question is not simple, and it does not refer to technological aspects only. In this paper, I will consider the underlying issues and apply a systematic foresight approach in the search for an answer to what the future of drawingless shipbuilding production is in the long term, and what the landscape implications are.

Firstly, for the sake of clarity and to avoid misunderstanding, the terminology used needs to be explored.
We often refer to the future as a single, clearly defined outcome of certain phenomena. It is a globally shared definition, and it allows only a narrow space for speculation on questions like: Are the root causes of the future in the past or present? Can we affect the future? Is it singular or could there be
many futures? And the most philosophical of all - can we know the future or is it just a prediction?
Leaving aside the ambivalent nature of the future, I will rely on commonly used terminology from one
of the new fields of academic studies - futures studies. It acknowledges the philosophical nature of the
subject, but offers practical methods of effectively addressing the need to speculate about the future in
decisions taken in the present.

2. Maritime industry through the lenses of transitions theory and waves of transformation

2.1 The power of creative destruction and Kondratieff's waves

This paper is based on the approach of socio-technological transitions and waves-of-transformation theories. The link between technological innovation and long cycles of economic development, first theorized by Schumpeter (1939) in his theory of economic development, is common to the most widely accepted theoretical frameworks among long-wave researchers. Kondratieff (1935) type long-wave patterns and Schumpeter's (1939) clusters of innovation are the constituting elements of the techno-
economic paradigm shift framework developed by Freeman and Perez (1988), where great surges of
development induce socioeconomic transformation effects across all economic activities and provide
the critical driving force of each long cycle of economic growth, Freeman (2009), Perez (2010).

Kondratieff's wave theory gives a broad, structured view of the history of the technological development of society. Based on rolling 10-year returns of the Standard & Poor's top 500 financial data, spikes or waves can be observed to match technological changes. Fig.1 presents the waves along a timeline. Behind every significant upswing in financial returns that moved societal development forward, we can trace a considerable step in using new technology: steam engines, railways, electricity, automobiles and petrochemicals, and information technology. None of the mentioned technologies is an isolated innovation or achievement. Each is something that society was able to adapt and use profitably. Technological breakthroughs are tightly linked to societal development in terms of adaptation and acceptance.

Fig.1: Kondratieff waves: linking rolling 10-year returns on the S&P top 500 and technological disruptions

According to this theory, the next, 6th, wave of destruction will be fueled by intelligent technologies, where "intelligence" is the keyword differentiating it from the previous wave, which was based on information technology. We can speculate whether intelligence means actual AI or the possibility to be not only digital, but digital data-driven. One practical example of this change is the ongoing development of Digital Twins. As elaborated by Cabos and Rostock (2018), a Digital Twin is a digital representation of an object, enriched with behavioral models and configurations or conditions. As Hafver et al. (2018) point out, the novelty of digital twins is not the existence and use of 3D models as assets, but how these models are bundled. In other words, it is not about the IT anymore, not about the possibility to have all data and the 3D model in a digital format, but about the intelligence behind this data.

Before proceeding with the core question, I will outline two more theories: deep transitions, for zooming into the wave cycles, and the Multi-Level Perspective (MLP), for understanding the background of innovations.

2.2 Deep transitions theory and evolution of CAD as innovation

According to the theory of deep transitions developed by Schot and Kanger (2018), there is more detail inside each wave, as it can be split more accurately into approximately 50-year cycles. Without going into theoretical aspects, an interesting result can be observed if this theory is applied to changes in shipbuilding. Fig.2 presents the main technological changes and innovations in relation to deep transitions.

Fig.2: Deep transition transformation and historical shipbuilding milestones, including CAD
development

The era of IT in shipbuilding aligns with the beginning of the commercialized use of CAD systems. Starting in 1970, CAD made its way from being an innovation to being a common and essential part of any shipbuilding project. The trigger for this development lay in the IT advancements of that time, and it took about 50 years to mature the technology, put the newest hardware to use, and change the mindset of users. In turn, the increased accuracy of design opened possibilities to handle ever bigger and more complex projects, taking shipbuilding to a whole new level. The innovation served the industry and changed it. However, it was not an isolated phenomenon; it was made possible by societal developments, namely the large number of cargo and other types of vessels needed for global trade. Here the context of innovation plays an essential role in the transition. The next chapter aims to explore what surrounds a technological innovation and what makes it possible for it to transition from a great idea into an industry standard.

2.3 Multi-Level Perspective - innovations in context

The Multi-Level Perspective (MLP) provides a useful framework for understanding how transitions happen and what makes it possible for an innovation to become viable and widely used.

The theoretical framework, the multi-level perspective approach, was originally developed by Frank Geels and presented in several research articles: Kemp (1994), Rip and Kemp (1998), Geels (2002, 2010, 2011, 2014). By original design, the framework deals with socio-technological transitions, with a focus on sustainability. It presents a transition process in the context of three main layers: landscape, regime, and niche.

The landscape layer represents the most stable structure, the existing state of things; it is a mixture of the political and economic landscape, the historically and socially stable ways of doing things, and time-proven technology that has been in use for a long time. The regime layer is more dynamic, and it consists of existing practices and the ways processes are organized. Niches are the most dynamic places for the incubation of new ideas and practices. These appear and frequently disappear, often having little impact on the system as a whole. However, when there is pressure from the landscape, the regime level develops cracks and openings, creating space for the niche to enter the regime level and reshape it. "Niche-innovations may break through more widely if external landscape developments create pressures on the regime that lead to cracks, tensions and windows of opportunity," Geels (2010).

I argue that the tension and "window of opportunity" presented by globalization and the development of IT technologies, such as increased computing power and graphics cards, allowed CAD to progress to the regime and landscape levels. A side effect of the transition was that CAD providers became significant players in the maritime industry.

Fig.3: Multi-Level Perspective (MLP) framework, adapted from Geels (2012)

The following sections aim to identify the existing landscapes and regimes in shipbuilding in the context of the question whether drawings can be abandoned, and scan for possible openings to change the existing practices of drawing use in production.
3. Summary of horizon scanning in the maritime landscape, regime, and niches

Horizon scanning is a primary foresight activity for collecting data. It aims to continuously and objectively explore, monitor and assess current developments and their potential implications for the future, Miles (2012). Using the theories outlined above, it becomes a more straightforward task to perform horizon scanning, relying on the MLP framework and focusing on the question at hand.

Fig.4 presents a symbolic division of the main actors in the maritime sector and a provisional split between the layers they mostly operate in. The division is not a precise border, since the same actor might often be involved in several distinctly different activities. However, it provides an indication as to where information can be searched for.

Fig.4: Identification of main actors on levels of the MLP framework in shipbuilding and offshore
industry

This approach is distinctly different from an unsystematic overview of possible innovations that might or might not become the regime. Instead, possible pressures and tensions are scanned on the landscape level. Based on this method and the focus question, a vast amount of information was collected and summarised. Mostly online sources and publicly available information were analyzed. Due to the significant number of references, they are not included in the reference list. The summary below lists the leading trends and their probable impact on the CAD landscape, and specifically on the possibility to replace or avoid 2D drawings in shipbuilding production.

Megatrend (change in the landscape): Industry 4.0, a blurring of the borders between design and manufacturing: robotics, new materials, 3D printing
Impact on the CAD landscape: Ensuring quality and control in manufacturing is stressed by classification societies. More accessible robotic and automation solutions with lower ROI force shipyards to look for competitive advantage in this area.
Impact on drawingless production: Automatic output from 3D models to production and CNC machines; a PLM approach to manufacturing.

Megatrend: Intelligent IT: AI, cloud computing services
Impact on the CAD landscape: The main driver of changes in the 6th wave: removing hardware restrictions, cloud services and storage, AI as a "brain" for IT. Azure and blockchain apps by MS provide broader possibilities to use computing power and IT resources. The first to benefit are logistics and warehouse storage management systems (PLM) and design optimization solutions based on data from previous projects. PLM is to become shipbuilding-specific; currently, most PLM systems on the market are generic and offer only a certain level of integration between design tools and manufacturing control systems.
Impact on drawingless production: Enables more computing power and resources for handling data as desired. For example, instead of the 3D model being locked inside CAD, it becomes easily accessible via the cloud and can be properly analyzed or even created by regression analysis.

Megatrend: Intensified role of information management and Digital Twins
Impact on the CAD landscape: As a specific part of intensified intelligent IT, information and data stop being just a digital matter and become the subject of presentation on demand, for production tasks as well as for any other function within the life cycle.
Impact on drawingless production: Vast amounts of data in 3D models will be utilized more efficiently, allowing the substitution of 2D blueprints on an on-demand basis.

Megatrend: Democratizing tech and the data-native generation
Impact on the CAD landscape: Data natives require more digitalization in all processes on a cultural level. Shortcomings are not accepted, putting additional pressure on CAD to become more intuitive and less settings-based. Many shipyards and design companies refer to the inability of the new generation of engineers to work with 2D drawings.
Impact on drawingless production: UX design becomes a paramount part of CAD/PLM; touch interfaces and VR/AR will be more familiar to new generations than 3D models or 2D drawings.

Megatrend: Design from stock, modular design
Impact on the CAD landscape: Using statistics and AI for selecting optimal designs, and reusing designs and modules from previous projects to reduce design costs and time, is the current direction of development for many yards. Less bulk work in design and modeling; intensive and repetitive reuse of designs; large numbers of standardised units with prefabrication possibilities.
Impact on drawingless production: Would reduce the overall need for drawings and partially replace them with references.

Megatrend: Saturation of the market with cruise liners/special ships/bulk carriers
Impact on the CAD landscape: Decreased demand for EU shipbuilding competence and growing expertise in specialized vessels in Asian shipbuilding. Fewer but bigger ships to cover the existing needs of the maritime industry will lead to more centralized shipbuilding areas.
Impact on drawingless production: Centralised shipyards with a limited number of standard projects would leave less space for customized design and require less production data for every separate project, as construction would mostly be repetitive.

Megatrend: Limits to the growth of vessel and shipyard sizes
Impact on the CAD landscape: For decades, the race for bigger vessel sizes and optimized production workflows in shipyards fueled the forces behind developments in the maritime industry. The maximum size of vessels is nearly achieved; bigger vessels are problematic from stability/building/management/operation points of view. However, large vessels are cost-effective and will gradually replace the aging existing fleet of smaller vessels. The building of large ships will continue, but the total number is capped.
Impact on drawingless production: Same as above.

Megatrend: The growth in floating construction varieties
Impact on the CAD landscape: Rising sea levels and the ability to use floating constructions cause an increased need for tools to model irregular structures (beams, foundations, pillars, etc.) and to connect with strength calculation systems. Floating constructions will become more diverse, covering accommodation and living centers, industrial and power generation units, etc.
Impact on drawingless production: More need for custom constructions and for integration with mechanical software. Wider exposure of shipbuilding to neighboring industrial areas will provide possibilities to use existing practices from the construction and building industries, which are significantly bigger and hence have more developed PLM practices.

Megatrend: Deglobalization
Impact on the CAD landscape: Shipbuilding benefited greatly from globalization, both in the possibility to build ships in distributed locations, involving people regardless of location, and in sharing information. With increased political tensions, this will slow down and pose a potential risk for using global knowledge and resources.
Impact on drawingless production: Wildcard: increased risks of limitations and sanctions, possible intensification of borders for subcontracting, and cultural differences in approaches to manufacturing.

Megatrend: Climate change and requirements for sustainability and emission reductions, stated goals of most policymakers
Impact on the CAD landscape: The level of retrofit projects will remain high in the next 5-10 years to adjust existing vessels to comply with tighter regulations. However, after all existing vessels are converted, the number of retrofit projects is expected to decrease, leaving a developed technology for the conversion and repurposing of vessels. The high level of innovation to reduce emissions and optimize engine operating regimes will lead to a large number of steel and mechanical constructions on board, which differ from the standard units or typical constructions for which CAD models were optimized.
Impact on drawingless production: Steel outfitting tools, the design of nonstandard constructions, and the use of 3D capture technologies to provide design models with "as is" surroundings and data will be at the highest level of development. As a result, people will stop using 2D drawings, which will boost the use of 3D models and VR/AR solutions.

4. The next wave of intelligent technologies and focus on drawingless production

Based on the summary of the collected data, typical scenarios can be built. For clarity, the scenarios are presented in a graph with two main dimensions: the intelligence of IT and the level of automation in production. Additional dimensions could be considered, such as a technology-savvy society, where the willingness to use modern intuitive 3D or VR/AR-based functionality plays the most or the least important role. However, this would double the number of possible scenarios and blur the distinction between them.

The intelligence of IT allows the provision of data in a digital format that is suitable for production. For example, based on the data collected by CADMATIC, when the first 3D viewer, eBrowser, was introduced on the market in 2000, the direct estimate from shipyards was that they were able to reduce the number of drawings needed for production by 30%. The 3D model became accessible not only to CAD users, typically designers in the office, but also to production staff, and it did not require any special skills or training to use. It provided a powerful push for reducing the number of drawings. However, there are still cases where the number and types of drawings involved in production are justified by tradition and processes in the yard rather than by a practical need for these drawings in manufacturing.

Following up on this development, after the introduction of CADMATIC's eShare as a central portal for all interlinked project information and eGo for access with offline touch devices, a further reduction of 70% of drawings was achieved. This is only one example where increased intelligence in IT technology significantly affected the number and types of drawings involved in production, but was capped by societal readiness to change the existing regime. Pioneering yards focused on innovation and effectiveness were more ready to make the change than the ones where tradition and the status quo were held in high regard. Human and societal factors conflict with technological possibilities in this case.

For the second dimension, the level of automation was selected as a variable. There are already many possibilities to automate production: steel cutting and bending, welding robotics, 3D printing, and automatic adjustments of workshop flows based on data analysis. Together with the rapid development in robotics, this becomes an essential factor for ship manufacturing. A restraining element in this development is the cost of machines and implementation.

Fig.5 presents the four main possible scenarios. They are based on a division into high and low levels of IT intelligence and automation. Two stereotypical scenarios, numbers 4 and 2, present the "business as usual" and "high hopes for change" possibilities. The other two, numbers 1 and 3, provide a possibility to see conflicting trends and tensions in the landscape, providing opportunities for innovations to grow.

Fig.5: Scenarios of the future for drawingless production in digital data-driven shipbuilding towards
2050

While all four scenarios are possible, scenario number 2 is perhaps preferable, if we want to be optimistic and disregard natural limitations in development. A combination of scenarios 1 and 3 would present a somewhat realistic picture in the medium-to-long term. In both cases, the gradual elimination of drawings from the production process is the likely outcome. Taking into account the main driver of intelligent IT, drawings are already gradually being substituted with 3D viewers and direct data transfer to production or manufacturing control systems. The key role of CAD in this process of substitution is in providing interactivity with data and faster access to it within change management. Originally, input to CAD was provided by users. However, this is slowly changing in the direction of embedded design rules and the substitution of direct parameter input with values obtained from analysis or AI.

Interaction with data distinctly differentiates the digital era. The original attempts to standardise drawings aimed to improve readability and the quality of production. For the data-native generation, this poses unnatural limitations. Instead of a static snapshot, people prefer to obtain data on demand and manipulate it.

The following use case illustrates this process. Traditionally, a large share of the drawings in shipbuilding comes from piping production data, or spool drawings. Estimates are that a big cruise liner of about 350 m has about 10,000 spools. With current practice, these drawings are automatically generated and annotated; however, about 5% still require manual work, even with effective use of CAD and settings matching the needs of production (based on data from CADMATIC customers). The process itself is quite laborious and time-consuming. However, the main culprit is the use of these drawings in production. Every drawing has to be manually examined and used as an instruction to manufacture a piece of pipe, and often the data provided on the drawing is insufficient or outdated due to design changes by the time it reaches the workshop. The possibility to generate and visualize production data on the fly would remove this disconnect between design and manufacturing.

As a practical example of such developments, CADMATIC already has customers who use the possibility to provide an online connection to design data in the production workshop and to display the data in the most suitable form in 3D viewers with annotated models or using AR with HoloLens. The foundations for this technology are set, and the direction is defined. The question remains whether the window of tension is sufficient for such innovations to progress, spread and become part of the regime.

5. Conclusions

Intelligence in IT systems goes hand in hand with an expected increase in automation and the use of robotics. However, based on the expert theories and materials studied, the leading role will belong to intelligent IT. CAD development in the last 40 years is an example of how a small innovation became the backbone of the regime. The further step in the development of CAD would be adding intelligence. Besides being a modeling tool with interfaces to calculation and production systems, it can become a cradle for all IT systems used in shipbuilding and offshore projects (calculations, optimization, 3D modeling, production data, and further information management) and serve as a platform for Digital Twins and asset management in digital data-driven shipbuilding.

Instead of a vision statement, this paper provides an analysis and outlines the background for the future of drawingless production in digital data-driven shipbuilding. The future is not fixed and exists only in the plans made for it. Thus, the main aim of the paper is to open a discussion and provide food for thought for those directly or indirectly involved in the marine industry. In the next stages of the project, the present research will be used as input for a discussion by experts. Based on guided strategic foresight principles, via participation and a wide range of opinions, a more elaborate picture will emerge. This will enable not only the formulation of a vision, but will also affect the future course of events.

References

CABOS, C.; ROSTOCK, C. (2018), Digital Model or Digital Twin?, 17th COMPIT Conf., Pavone

GEELS, F. (2002), Technological transitions as evolutionary reconfiguration processes: a multi-level perspective and a case study, Research Policy 31 (8/9), pp.1257-1274

GEELS, F. (2010), Ontologies, socio-technical transitions (to sustainability), and the multi-level
perspective, Research Policy 39, pp.495-510

GEELS, F. (2011), The multi-level perspective on sustainability transitions: Responses to seven criticisms, Environmental Innovation and Societal Transitions, pp.24-40

GEELS, F. (2012), A socio-technical analysis of low-carbon transitions: introducing the multi-level perspective into transport studies, Journal of Transport Geography, pp.471-482

GEELS, F. (2014), Reconceptualising the co-evolution of firms-in-industries and their environments: Developing an inter-disciplinary Triple Embeddedness Framework, Research Policy 43, pp.261-277

HAFVER, A.; ELDEVIK, S.; PEDERSEN, F.B. (2018), Probabilistic Digital Twins, Position paper,
DNV GL

KEMP, R. (1994), Technology and the transition to environmental sustainability - The problem of
technological shifts, Futures 26, pp.1023-1046

KONDRATIEFF, N.D.; STOLPER, W.F. (1935), The Long Waves in Economic Life, Review of
Economics and Statistics 17/6, pp.105-115

PEREZ, C. (2002), Technological Revolutions and Financial Capital: The Dynamics of Bubbles and
Golden Ages, Edward Elgar

PEREZ, C. (2010), Technological revolutions and techno-economic paradigms, Cambridge Journal of Economics, pp.185-202

PEREZ, C. (2015), Capitalism, technology and a green global golden age: the role of history in helping
to shape the future, Political Q., pp.191-217

RIP, A.; KEMP, R. (1998), Technological Change, in Human Choice and Climate Change, Battelle Press, pp.327-399

SCHOT, J.; KANGER, L. (2018), Deep transitions: Emergence, acceleration, stabilization and
directionality, Research Policy

SCHUMPETER, J.A. (1939), Business Cycles: A Theoretical, Historical, and Statistical Analysis of
the Capitalist Process, McGraw-Hill

Digital Twin for Monitoring Remaining
Fatigue Life of Critical Hull Structures
Tapio Hulkkonen, NAPA Ltd, Helsinki/Finland, tapio.hulkkonen@napa.fi
Teemu Manderbacka, NAPA Ltd, Helsinki/Finland, teemu.manderbacka@napa.fi
Kei Sugimoto, ClassNK, Tokyo/Japan, sugimoto@classnk.or.jp

Abstract

Current technology in weather and environment monitoring, combined with the open AIS information available for tracking ships in service, has opened possibilities for estimating the real environmental history a ship has experienced during her lifetime. When this big data is processed with advanced
analyses and combined with the digital twin of a real ship in operation, it will open possibilities for
estimating valuable and important information, such as the remaining fatigue life for ship structures.
This paper will present and demonstrate the different process steps and possibilities that NAPA software
enables, starting from analyses of environmental conditions, direct loads with 3D panel method and
FEM analyses of the load history. The paper will also discuss technical risks, limitations and the
accuracy of this approach.

1. Introduction

Safe operation of ships must be ensured throughout their lifetime. This is verified during the ship design
phase and ensured later, throughout operation with regular surveys. Classification societies play a
central role in specifying rules and standards for ship structural design. The classification society rules
combine the most recent research and knowledge available to ensure ship structural integrity. In many
currently used class rules and guidelines, such as the IACS CSR (Common Structural Rules), the loads dominating structural strength are specified as simplified formulae based on different ship types. These
simplified loads were developed to comprehensively cover load conditions estimated from actual ships.

There is a clear industrial need to develop better and more economic marine structures for different
purposes. Novel designs will push the classic classification society rules to their limits, and rule
development is a continuous process at classification societies. Also, the environment is changing: IPCC (2018) states with high confidence that human-induced global warming has already caused multiple observed changes in the climate system, and the current trend is that the change will continue.

When considering new structural configurations or different operational conditions, their impact on the
structural strength must be evaluated taking the particular characteristics of the ship properly into
account. One of the latest structural strength assessment methods introduced for this kind of direct strength analysis is called "Load and Structural Consistent Analyses", introduced as an optional requirement in the ClassNK (2018) "Guidelines for Direct Load Analysis and Strength Assessment". It is used in this paper as a general reference for the structural strength assessment. At a high level, similar approaches are also introduced by other classification societies; this paper also utilizes concepts presented in the DNVGL (2015) guidelines for direct strength analyses.

These procedures directly utilize the wave-induced loads acting upon ships and use direct strength
calculation to reproduce the structural stress. This method makes it possible to reproduce actual sea
states to a high degree of accuracy and to evaluate the details of each ship's characteristics correctly.

Recent developments in computer technology have made it possible to collect accurate information on the environmental conditions ships have experienced. Advanced satellite-based communication and measurement systems, together with data storage possibilities, have made it possible to record and store big data
related to the environmental conditions of a particular ship at a very high accuracy, throughout her years
in service. When this information is connected to structural analyses, it provides a good basis for
estimating the existing and future service needs of the ship.

The scientific background of these structural and wave theories is not new; they have been available for some time. However, the adoption of these theories has sometimes been limited by the lack of easy-to-use software or other computational resources. This paper demonstrates the application of these methods in creating a digital twin of a ship from a 3D NAPA product model, and how this technology could be used for estimating the structural integrity of the ship.

2. Estimation of long-term environmental conditions

In recent years, the increased availability of ship position data, together with improved global weather data for the sea areas, has made it possible to rely on publicly available data sources to reconstruct the time history of ship operations. We can obtain the ship position data from Automatic Identification System (AIS) messages.

Since 2004, all passenger ships are required to be equipped with an AIS transponder according to the
SOLAS regulations set forth by the International Maritime Organization (IMO). Besides passenger ships, an AIS transponder is required to be fitted on all ships over 300 gross tonnage on international voyages and on cargo ships over 500 gross tonnage, even if they are not engaged on
international voyages. AIS transponders automatically send messages providing information on the
ship’s identity, type, position, course, speed, navigational status, and other safety related information.
Initially, the AIS messages were intended to pass information between ships to improve safety by
providing ships and onshore centers with better awareness of the nearby navigational situation.

However, the AIS messages are publicly available, and they can be collected by terrestrial antennas reaching ships within a range of 15-40 nautical miles, depending on the height of the antenna. For
global coverage of the AIS message collection, international companies have launched microsatellites
equipped with AIS receivers, Fig.1. The number of satellites has increased to the level where today we
can have a truly global coverage of the AIS messages from the entire global merchant fleet.

Fig.1: AIS transponders. Left: terrestrial antenna (T-AIS), http://www.imo.org/en/ourwork/safety/navigation/pages/ais.aspx; Right: microsatellite AIS (SAT-AIS), https://www.esa.int/Our_Activities/Telecommunications_Integrated_Applications/SAT-AIS_for_maritime

We combine the weather information with the actual position where the ship has been observed at each time instant of its navigational history, Fig.2. We acquire the global ocean weather data from independent weather forecast providers. Ocean weather data includes wave and wind conditions, and sea currents.

The wave conditions are based on the nowcast wave data from the WAVEWATCH III (WW3) model of the National Centers for Environmental Prediction (NCEP). A nowcast uses the current weather radar and other immediately available observations to estimate global conditions. Wave conditions are provided every 180 minutes at a spatial resolution of 1.25 degrees, which means roughly a 100 km grid size.

AIS messages also contain information on the draft of the ship, which is essential for the estimation of the motion response of the ship and the consequent loads on the hull girder. Ship location, speed, heading and draft are obtained from the AIS messages, which are collected at intervals of a few minutes. The AIS location is based on the Global Navigation Satellite System (GNSS), usually the Global Positioning System (GPS). These signals can be considered reliable. However, the crew onboard the ship manually enters the draft values in the AIS message as voyage-related information, and the draft may be inaccurate for this reason, Adland et al. (2017).

Fig.2: Global ocean weather (wave, wind, and current) interpolated to the ship position

Weather nowcasts are provided on a given spatial resolution grid and at given temporal intervals, and they need to be interpolated to the ship position at each time instant a location is obtained. We do this by trilinear spatio-temporal interpolation, which is described in Haranen et al. (2017), Fig.3. Wave conditions are given for two main wave components, namely swell and wind waves. For each of these, the significant wave height Hs, the zero-crossing period Tz, and the direction are given. Also, wind speed and direction, as well as the speed and direction of sea currents, are given. The accuracy of the wave height nowcast is within 0.3 m globally and the wave period within a couple of seconds. Similar results are obtained for all the main international wave forecast providers by Bidlot (2017), who compares hundreds of globally positioned wave buoy measurements during the year 2016 with the forecasted wave conditions.
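As an illustration of this step, a minimal sketch of trilinear (latitude, longitude, time) interpolation of a gridded nowcast field to a single ship position is given below. It assumes ascending grid axes that bracket the query point; all names are illustrative and not taken from the actual implementation, which follows Haranen et al. (2017).

```python
import numpy as np

def trilinear(field, lats, lons, times, lat, lon, t):
    """Interpolate a field of shape (len(lats), len(lons), len(times))
    to a single (lat, lon, t) query point inside the grid."""
    i = np.searchsorted(lats, lat) - 1
    j = np.searchsorted(lons, lon) - 1
    k = np.searchsorted(times, t) - 1
    # Fractional offsets inside the enclosing grid cell.
    u = (lat - lats[i]) / (lats[i + 1] - lats[i])
    v = (lon - lons[j]) / (lons[j + 1] - lons[j])
    w = (t - times[k]) / (times[k + 1] - times[k])
    out = 0.0
    for di, fu in ((0, 1 - u), (1, u)):
        for dj, fv in ((0, 1 - v), (1, v)):
            for dk, fw in ((0, 1 - w), (1, w)):
                out += fu * fv * fw * field[i + di, j + dj, k + dk]
    return out
```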

Fig.3: Spatio-temporal, location-time, interpolation of the weather, Haranen et al. (2017)

3. Preparation of weather data for structure analyses

The raw data of a ship's navigational history with the weather nowcast data is available in table format, showing information at intervals of about 10 minutes. It includes the ship position from AIS together with the wave and swell characteristics interpolated to each AIS datapoint, Fig.5.

Fig.5: Raw data with ship navigation history and interpolated nowcast weather data

The most important data for this analysis are the wave and swell height, direction and period. The wave height is expressed as significant wave height and needs further processing to be used in the fatigue calculation. The data also includes the ship speed and wave direction.

The hydrodynamic analyses of the ship response with the 3D panel method have ship speed and wave direction as variables, and therefore the wave scatter diagrams are created separately for different speeds and different wave headings.

The first step of the analysis is to make wave scatter diagrams from the time series of wave conditions at the ship positions. The format is here the same as in the standard North Atlantic wave data, Rec. No.34, IACS (2001), but there are separate diagrams for different wave directions and speeds. The adopted analysis steps are:

• Collect nowcast data from a reasonably long period. In the example of Fig.6, the measured period of a real cruise ship in operation was 970 days, resulting in about 150,000 samples, i.e. an average sample distance of about 9 minutes.
• AIS information also contains arrival and departure ports, so it is possible to separate harbor conditions from the conditions at sea. In the examples below, the harbor conditions are included in the scatter diagram, because they can be considered typical for this kind of vessel.
• Wave and swell information are both included. The data is organized into different tables according to the difference between ship heading and wave direction, at 30-degree intervals. This is the same interval that is used in the stress RAO calculation.
• Sea currents also affect the hydrodynamic response, and it would be possible to include them in the analyses in a similar way to changes in ship speed. Here, they are neglected, however, as it is assumed that the effect would be small.
• The data is organized in scatter tables based on significant wave height and wave zero-crossing period; a minimal sketch of this binning step is given after this list.
Fig.6: Typical scatter diagrams of nowcast wave data

Each short-term Hs/Tz sample represents the probability of a certain combination of significant wave height and wave period in the wave scatter diagram. These are indications of real short-term wave spectra, and a common assumption is to represent the real wave spectrum of a single short-term sea state as a Pierson-Moskowitz wave spectrum, IACS (2001):

S(\omega) = \frac{H_s^2}{4\pi}\left(\frac{2\pi}{T_z}\right)^4 \omega^{-5} \exp\left(-\frac{1}{\pi}\left(\frac{2\pi}{T_z}\right)^4 \omega^{-4}\right)

The Pierson-Moskowitz spectra are then used together with the stress RAOs from the FEM analyses to define the stress response spectrum.
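For illustration, a minimal sketch of this step is given below, computing the Pierson-Moskowitz spectrum for one Hs/Tz cell and the stress response spectrum via the standard linear relation S_sigma(w) = |RAO(w)|^2 S(w); the RAO argument is a placeholder for the transfer function obtained from the FEM analyses, and all names are illustrative.

```python
import numpy as np

def pm_spectrum(omega, hs, tz):
    """Pierson-Moskowitz wave spectrum (IACS Rec. No.34 form).
    omega: array of wave frequencies [rad/s], strictly positive floats."""
    a = (2.0 * np.pi / tz) ** 4
    return (hs ** 2 / (4.0 * np.pi)) * a * omega ** -5.0 \
        * np.exp(-a / (np.pi * omega ** 4))

def stress_response_spectrum(omega, stress_rao, hs, tz):
    """Stress response spectrum from the stress RAO and the wave spectrum."""
    return np.abs(stress_rao) ** 2 * pm_spectrum(omega, hs, tz)
```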

4. Product model as a source of structure analyses

This analysis effectively uses the 3D NAPA product model as the primary source of information. In this paper, the NAPA product model shares information through several calculation modules of the NAPA software. The product model provides the information shown schematically in Fig.7 for further processing:

• Details of structure for global and local strength evaluation (ST).
• FEM mesh generated automatically from structural model (FEM).
• Structural lightweight distribution from 3D structure model (ST).
• Other lightweight components from weight calculation module (WG).
• Deadweight from loading conditions and compartments (SM and LD).
• Hull form for 3D hydrodynamic panel method (NPN).
• Calculation of pressures with 3D panel method (SHS).
The structural model is communicated from NAPA to an external FEM solver via industry-standard interface files. Provided the interface file formats and all the necessary input data are taken care of, the presented analyses are possible with most commercial FEM solvers.

Fig.7: Information from NAPA product model

The ship is a NAPA demonstration ship without any connection to a real ship in operation. This paper is intended to demonstrate the software tool, not an actual ship design. However, the ship is intended to represent a typical cruise ship with realistic main particulars, Table I.

Table I: Demo ship main particulars

Length over all 344 m
Reference breadth 36.0 m
Draft 9.5 m
Block coefficient 0.64
Displacement 80000 t

The main characteristics and typical design concepts of a cruise ship can be summarized as follows:

• The watertight hull below the main deck covers the lower part of the ship, including technical compartments like the engine room, storages, technical spaces and major tanks.
• Below the main deck, the decks and longitudinal bulkheads have only a minimum number of openings and discontinuities.
• The upper deckhouse part of the ship contains the hotel-like compartments, including passenger cabins, public spaces and recreational facilities.
• There, the decks and bulkheads have a maximum number of openings to give good access and an attractive architectural view. The structures have several discontinuities, and many of the details are sensitive to high stress concentrations and therefore also to fatigue.
• The deckhouse extends over the whole length of the ship and participates in the longitudinal strength.
• Loading condition and draft variations are small and are neglected in the fatigue analyses.
• It is assumed here that the structural detail under fatigue assessment is affected neither by corrosion nor by non-linear wave loads close to the waterline. For other details and ship types the situation is different, and this should be considered in further work.

The demonstrated analyses target a balcony opening in the deckhouse outer surface. The hotel part (deckhouse) is typically optimized towards the largest possible volume for a given size of ship hull and machinery. The weight of the deckhouse is important for ship stability, and the structure should allow impressive architectural ideas for large open spaces with a good sea view for passengers. On the other hand, it is affected by the ship strength and the flexibility of the hull.

5. Estimation of FEM loading by 3D panel method

NAPA has developed a three-dimensional potential-flow panel method for forward-speed hydrodynamic calculations. In summary, the three-dimensional seakeeping computation method was extended from zero-speed to forward-speed computations. Validation calculations for two hull forms were carried out to select the detailed way of the extension. The details of the theory are presented in Kalske and Manderbacka (2017). The developed method is implemented in the NAPA hydrodynamic module SHS and is tightly integrated with the other NAPA modules and the database.

The major results of the calculation procedure are the hydrodynamic pressures, Fig.8, on the panels of the wetted hull surface and the ship's six-degree-of-freedom motions as functions of wave period and wave height.

Fig.8: Hydrodynamic pressure from SHS panels

Fig.9: FEM pressures corresponding to panel pressures

The 3D panel pressures are interpolated to the FEM mesh wetted surface using the Shepard method proposed in Mikkola (2008), which further references Nielson (1993). The Shepard method is a general method for interpolating quantities between irregular grids. In this context, it is used to calculate the weighted pressure on FEM shell elements from nearby hydrodynamic panels:
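In its basic inverse-distance form (a minimal variant; Mikkola (2008) discusses refinements), the pressure at an element centroid $\mathbf{x}$ is

$$ p(\mathbf{x}) = \frac{\sum_i w_i\, p_i}{\sum_i w_i}, \qquad w_i = \lVert \mathbf{x}-\mathbf{x}_i \rVert^{-2} $$

where $p_i$ are the pressures of the nearby hydrodynamic panels centred at $\mathbf{x}_i$.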

6. FEM analyses and interpretation of results

The FEM mesh is created with NAPA FEM, and the result is first a bulk data file without loads and boundary conditions. In this analysis, the structural detail that reproduces the stress for the fatigue assessment is embedded in the FEM mesh. A sub-model technique is also possible as an alternative solution, and the choice between the two depends mainly on the available software tools and computer resources.

Fig.10: FEM mesh and a fatigue detail

The FEM model mass and inertia are used as input for the hydrodynamic calculation. The pressures from the hydrodynamic analyses are then applied as FEM model pressures. The FEM analyses are quasi-static, with the external pressure balanced by the FEM model inertia forces. This gives well-balanced FEM load cases, and the supporting forces from the boundary conditions are negligible.

Depending on the FEM solver, there may be an automatic inertia relief feature available for the balancing, or the possibility to include the accelerations calculated from the SHS results directly in the FEM input. In our approach, we have used a direct method, applicable to most FEM solvers, which introduces only node point forces into the FEM input. This is done by calculating the accelerations from the pressure loads and then calculating balancing node point forces from the accelerations and the node point masses.

This procedure has the benefit of zero support reactions at the constrained nodes, within numerical accuracy. The drawback is that the approximation errors become visible as differences between the FEM accelerations and the SHS accelerations. The accelerations should converge to the same values with a proper set of modelling parameters.
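As an illustration of the direct balancing step, a minimal sketch restricted to the translational rigid-body balance; the full procedure described above also balances moments through rotational accelerations about the centre of gravity, and all names here are illustrative:

```python
import numpy as np

def balancing_node_forces(node_mass, pressure_node_forces):
    """Inertia forces that balance the integrated panel-pressure load.

    node_mass            : (n,) lumped mass per FEM node [kg]
    pressure_node_forces : (n, 3) nodal forces mapped from the panel pressures [N]
    Returns an (n, 3) array of nodal inertia forces such that the total load
    case sums to (nearly) zero and support reactions remain negligible.
    """
    total_force = pressure_node_forces.sum(axis=0)   # resultant of the pressures
    accel = total_force / node_mass.sum()            # rigid-body acceleration, F = M a
    # d'Alembert forces: each node carries -m_i * a (rotational terms omitted here).
    return -node_mass[:, None] * accel[None, :]
```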

The resulting RAO for any stress component can be expressed as in DNVGL (2015):
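In the spirit of DNVGL (2015), with $\sigma_{\mathrm{re}}$ and $\sigma_{\mathrm{im}}$ the stress responses of two load cases 90° apart in phase, the amplitude of the stress transfer function per unit wave amplitude $\zeta_a$ can be written as

$$ |H_\sigma(\omega,\beta)| = \frac{\sqrt{\sigma_{\mathrm{re}}^2(\omega,\beta)+\sigma_{\mathrm{im}}^2(\omega,\beta)}}{\zeta_a} $$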

The FEM solver is run for each combination of phase, speed, heading and wave period. Depending on the selected parameters, this may result in many load cases, requiring CPU time and storage space.

Table II: FEM analyses parameters
Variable Count Range
Phase angle 10 0–360°
Speed 4 0–12 m/s
Headings 12 0–360°
Wave period 18 1.5–18.5 s

Table III: Characteristics of FEM model

Component Count Stored units
Total load cases 8640 10 x 4 x 12 x 18
Node points 19000 6 x 8 byte each
Shell elements 23000 24 x 8 byte each
Beam elements 21000 16 x 8 byte each
Total storage All load cases 50 GB

7. Fatigue analyses

DNVGL (2015) presents full stochastic fatigue analyses with a detailed description of the formulas. In this method, hydrodynamic loads are transferred directly from the 3D panel method to the FE model, and hence the method is suitable for fatigue calculations of details with complex stress patterns and loading.

ClassNK (2018) presents the concept of Dominant Load Parameters (DLP). A DLP can be, for example, the global bending moment or another load that governs one critical stress component of the fatigue detail. The basic difference to the full stochastic analyses is that DLPs are evaluated case by case, based on the evaluation target, and design waves are selected to produce these DLPs. The similar equivalent design wave (EDW) concept also underlies many modern fatigue rules, such as CSR (2016), and appears conceptually in the component stochastic method of DNVGL (2015).

The benefit of the DLP and EDW concepts is to reduce the number of calculations and speed up the design process. From a software development point of view, the basic calculation processes for the fatigue analyses are similar.

For the stress distribution and further analyses, DNVGL (2015) presents closed-form damage estimate formulas for one- or two-slope S-N curves, using either Weibull or Rayleigh distributions. Because the now-cast weather data and stress spectra are processed into Rayleigh distributions here, the fatigue damage is also estimated from Rayleigh distributions.

The wave scatter diagrams from the now-cast data are represented by Pierson-Moskowitz wave spectra, with one spectrum for each pair of Hs (significant wave height) and Tz (zero-crossing period), in the form given above, IACS (2001).

The stress response spectrum is calculated from the wave spectrum and the RAOs:
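$$ S_\sigma(\omega \mid H_s,T_z,\beta) = |H_\sigma(\omega,\beta)|^2\, S_\zeta(\omega \mid H_s,T_z) $$

(the standard linear spectral relation, consistent with DNVGL (2015))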

Spectral moments of order n are integrated from the response spectrum. When the wave spreading function fs() is not known, IACS (2001) recommends using the standard cosine function with power 2.

The load response within each short-term condition is approximated by a Rayleigh distribution:
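$$ F_{\Delta\sigma}(x) = 1-\exp\left(-\frac{x^2}{8\,m_0}\right) $$

(Rayleigh distribution of stress ranges, with $m_0$ the zeroth moment of the response spectrum, as used in DNVGL (2015))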

The cumulative distribution may be estimated by a weighted sum over all sea states and heading directions. The fatigue life is in general estimated as cumulative crack growth, as in the Palmgren-Miner rule. The cumulative sum should be kept below 1.0, which gives the precondition for estimating the remaining lifetime.
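In the Palmgren-Miner form, with $n_i$ the number of cycles experienced in stress range block $i$ and $N_i$ the number of cycles to failure at that range from the S-N curve:

$$ D = \sum_i \frac{n_i}{N_i} \le 1.0 $$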

The selected S-N curves depend on corrosion exposure and other parameters to be specified. In the fatigue calculation there are special concerns related to, for example, the thickness effect, mean stress and corrosion. These should be considered in the selection of the S-N curve and in the evaluation of the stress ranges from the FEM results. An example of a two-slope S-N curve is shown in Fig.12.

Fig.12: Two-slope S-N curves in DNVGL (2015)

Instead of the Palmgren-Miner rule, the fatigue damage can be calculated from closed-form equations for Weibull or Rayleigh distributions, with one or two slopes in the S-N curve:

Fig.13: Example of damage estimate for one-slope S-N curve and short-term Rayleigh distribution. Details of the expression in DNVGL (2015)
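For a one-slope S-N curve $N = \bar a\,\Delta\sigma^{-m}$ and Rayleigh-distributed short-term stress ranges, the closed form behind Fig.13 is commonly written (in the notation of DNVGL (2015); the exact factors should be checked against the guideline) as

$$ D = \frac{T_d}{\bar a}\,\Gamma\!\left(1+\frac{m}{2}\right)\sum_{ij} \nu_{0,ij}\, p_{ij} \left(2\sqrt{2\,m_{0,ij}}\right)^{m} $$

where $T_d$ is the exposure time, $p_{ij}$ the probability of sea state $ij$, $m_{0,ij}$ the zeroth response moment and $\nu_{0,ij} = \frac{1}{2\pi}\sqrt{m_{2,ij}/m_{0,ij}}$ the mean zero up-crossing frequency.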

Using the Rayleigh distribution fits well with the analysis of the now-cast wave information, because it is expressed as a set of short-term Rayleigh distributions. The calculated cumulative damage D in Fig.13 gives direct information about the fatigue life already consumed. D should in general be less than 1.0, and the time covered by the now-cast data in relation to the expected ship lifetime indicates the remaining fatigue life.

8. Discussion

The combination of AIS messages with weather data – besides its other uses – is also suitable for estimating the fatigue life of a ship. Even though weather predictions have their limitations and inaccuracies, they nevertheless provide a new viewpoint for further analyses. Instead of using standard predictions, it is now possible to track the whole operational history of the ship.

The proposed strength analysis technology is a generic method for estimating the fatigue damage experienced during the ship's lifetime. It can utilize not only the now-cast information from the ship's experienced lifetime but also expected wave statistics based on general historical data.

It is suitable for different kinds of structures, including vessels of novel construction or extreme size. The estimation of fatigue damage can be used in case of experienced damage, or for estimating the actual condition of the structures and future maintenance needs. The benefit in comparison to other non-destructive fatigue detection methods is that it may allow conventional field inspections or dockings to be optimized.

The example fatigue detail in a cruise ship is a special case, being affected neither by the non-linear sea pressure distribution close to the waterline nor by direct loads from cargo holds, both of which are very common in cargo ships. For the dynamic sea pressure close to the waterline, methods have already been proposed, e.g. IACS (2016), which can be used when needed.

The stress variation from changing loading conditions at the global strength level is slow and is generally not considered a problem for fatigue, but at the local level the cargo pressures have a remarkable effect. Already now, it is possible to obtain accelerations for the compartments, which affect the loads. However, without detailed information on fill rates and other compartment data, this is not enough for detailed analyses.

For local loads initiated by cargo, there is the general problem that AIS data does not contain the detailed loading condition information needed for local pressures and further fatigue analyses. One solution is to use information from the electronic logbook, which is currently gaining acceptance by many authorities. At IMO, guidelines and amendments for the use of electronic record books under MARPOL have been initiated by the Sub-Committee on Pollution Prevention and Response, and the use of electronic logbooks is already encouraged in MARPOL PPR (2018). Technically, it is possible to connect the electronic logbook to the already mandatory loading computer for tracking the loading information in real time. This is not yet commonly available on all ships, but may open new possibilities in the future.

The proposed fatigue calculation method relies on general statistical phenomena, and there are many uncertainties related to fatigue predictions in general. Based on IACS (2013), the uncertainties in the CSR (2016) may be split into three categories:

• Load and load effects
• Capacity
• Analysis methodology

The applied full stochastic method can reduce some of these uncertainties. For example, clear improvements can be achieved in the following:

• The wave environment is described realistically over the history of the ship. For future predictions, the assumption of using the North Atlantic spectra as the default remains.
• The pressure is not based on an equivalent design wave, and it is possible to include realistic pressures in the analyses. Pressures close to the waterline still need special attention.
• Global loads are based on the real wave history.
• Phasing between load components can be considered in a realistic way.
• Relative deflections and double-hull bending can be considered in a realistic way.
• The stress direction in the full stochastic analyses is realistic.

With the proposed full stochastic method, many basic uncertainties still remain, for example:

• The real fatigue life depends on the design S-N curves, which include well-defined conservatism and are affected by the testing methods.
• Mean stress effects may have a significant effect on fatigue, and estimating real residual stresses is in general difficult.
• The proposed method does not bring any new insight into corrosion effects or the workmanship of details.

The full stochastic method is an interesting option from a software development point of view, because the process is well defined and it is possible to utilize the product model information to develop an automatic process.

The full stochastic analyses require computer resources that were not easily available some decades ago. With ever-increasing computation power and the model-based approach in ship design, direct strength and load assessments are increasingly achievable today. Both the hydrodynamic panel method and the FEM analyses can effectively utilize parallel computing, and disk space for storing gigabyte- or terabyte-sized results is no longer a problem.

Even though many uncertainties remain, the proposed method can serve as a valuable first-hand tool for estimating certain service or maintenance needs of a ship and provide value to ship owners and classification societies. The digital twin approach can even enable shipyards to provide life-cycle models and, based on them, offer lifetime maintenance services.

9. Conclusions

The proposed strength analysis technology is a generic method for estimating the fatigue damage experienced during the ship's lifetime. It can utilize both the now-cast information from the ship's experienced lifetime and expected wave statistics based on general historical data.

Even though many uncertainties remain, the proposed method can serve as a valuable first-hand tool for estimating certain service or maintenance needs of a ship and provide value to ship owners and classification societies.

The proposed method still needs validation in many respects. The work at NAPA will continue with further development of the methods and evaluation of their real usability as a ship design tool.

Nomenclature

ClassNK A classification society, member of IACS
DNVGL A classification society, member of IACS
IACS International Association of Classification Societies
NAPA NAPA Group, a software house, or NAPA software, depending on context
RAO Response Amplitude Operator
FEM Finite Element Method in general or NAPA software module, depending on context
AIS Automatic Identification System
LD NAPA software module for loading conditions
ST NAPA software module for structure design, NAPA Steel
WG NAPA software module for weight calculation
SHS NAPA software module for hydrodynamic analyses
SM NAPA software module for ship model management
NPN NAPA software module for panelization of hull surface
DLP Dominant Load Parameter

References

ADLAND, R.; JIA, H.; STRANDENES, S.P. (2017), Are AIS-based trade volume estimates reliable?
The case of crude oil exports, Maritime Policy and Management 44 (5), pp.657-665

BIDLOT J.-R. (2017), Intercomparison of operational wave forecasting systems against buoys: data
from ECMWF, MetOffice, FNMOC, MSC, NCEP, MeteoFrance, DWD, BoM, SHOM, JMA, KMA,
Puerto del Estado, DMI, CNR-AM, METNO, SHN-SM January 2016 to December 2016, European
Centre for Medium-range Weather Forecasts (ECMWF), https://www.jcomm.info/index.php?option=
com_oe&task=viewDocumentRecord&docID=18333

HARANEN, M.; MYÖHÄNEN, S.; CRISTEA, D.S. (2017), The Role of Accurate Now-Cast Data in
Ship Efficiency Analysis, 2nd Hull Performance & Insight Conference (HullPIC), Ulrichshusen, pp.25-
38, http://data.hullpic.info/hullpic2017_ulrichshusen.pdf

KALSKE, S.; MANDERBACKA, T. (2017), Development of a new practical ship motion calculation
method with forward speed, 27th Int. Ocean and Polar Engineering Conf.

CLASSNK (2018), Guidelines for Direct Load Analysis and Strength Assessment, March 2018

IPCC (2018), Global Warming of 1.5 ºC, Special Report, The Intergovernmental Panel on Climate
Change (IPCC)

MIKKOLA, M. (2008), Transferring Wave Induced Loads on a Finite Element Model of a Ship, MSc
Thesis, Tampere University of Technology

NIELSON, G. (1993), Scattered data modeling, IEEE Computer Graphics and Applications 13(1), pp.60-70

DNVGL (2015), Fatigue Assessment of Ship Structures, Class Guideline, DNVGL-CG-0129

IACS (2001), No.34 Standard Wave Data, IACS Rec. 2000

IACS (2016), Common Structural rules, 2016

IACS (2013), Harmonised CSR, Uncertainties Related to the Fatigue Assessment Procedure, TB-
Report

MARPOL PPR (2018), Sub-Committee on Pollution Prevention and Response (PPR), 5th session, 5-9 February 2018

BigDataOcean Project:
Early Anomaly Detection from Big Maritime Vessel Traffic Data
Konstantinos Chatzikokolakis, MarineTraffic, London/UK,
konstantinos.chatzikokolakis@marinetraffic.com
Dimitris Zissis, University of the Aegean, Lesbos/Greece, dzissis@aegean.gr
Marios Vodas, MarineTraffic, London/UK, marios.vodas@marinetraffic.com
Giannis Tsapelas, National Technical University of Athens, Athens/Greece, gtsapelas@epu.ntua.gr
Spiros Mouzakitis, National Technical University of Athens, Athens/Greece, smouzakitis@epu.ntua.gr
Panagiotis Kokkinakos, National Technical University of Athens, Athens/Greece, pkokkinakos@epu.ntua.gr
Dimitris Askounis, National Technical University of Athens, Athens/Greece, askous@epu.ntua.gr

Abstract

This paper discusses the concept and results of the BigDataOcean project, and specifically the anomaly
detection pilot. While in the past, surveillance had suffered from a lack of data, current tracking
technologies have transformed the problem into one of an overabundance of information, with needs
which go well beyond the capabilities of traditional processing and algorithmic approaches. The major
challenge faced today is developing the capacity to identify patterns emerging within huge amounts of
data, fused from various sources and detecting outliers in a timely fashion, to act proactively and
minimise the impact of possible threats. Within this context we first define an “anomaly”, before
proceeding to present the BigDataOcean anomaly detection service; a service for the classification and
early detection of anomalous vessel patterns. The service makes use of state-of-the-art big data
technologies and novel algorithms which form the basis for a service capable of real time anomaly
detection.

1. Introduction

Today, numerous maritime systems track vessels during their voyages across the sea. One such system is the Automatic Identification System (AIS), a collaborative, self-reporting system that allows efficient exchange of navigational data between ships and shore stations, intended primarily for surveillance and safety of navigation in ship-to-ship use, ship reporting and vessel traffic services (VTS) applications, ITU (2014). Beyond simple collision detection, researchers and scientists are finding that these data sets provide a new range of possibilities for improving our understanding of what is happening or could happen at sea. As such, "anomaly detection" has been identified by operators and analysts of the operational community as an important aspect requiring further research and development, Martineau and Roy (2011). Anomaly detection can be understood as a method that supports situational assessment by building models of normal data and then attempting to detect deviations from the normal behaviour in observed data, deviations that may be of interest for further investigation, Riveiro et al. (2018), Laxhammar (2011), Brax (2011).

To date, the availability of a larger number of sensors has not guaranteed the promised reduction of risk, mostly due to the enormous volumes and velocity of the data that these systems and their operators are faced with. Forecasting emerging complex maritime situations, such as probable collisions or groundings, suspicious activities or vessels spoofing their identity, has become a challenging task for surveillance system operators. They now explore complex and heterogeneous data; a situation that may lead to undesired consequences for the system, like uncertainty or time-constraint violations in the decision making, or undesired effects for the operator, such as fatigue and cognitive overload, Riveiro et al. (2018). Time-critical computing systems are systems in which the correctness of the system depends not only on the accuracy of the result produced, but also on the time in which it was computed; such systems include avionics and marine navigation systems, defence systems, command and control systems, robotics and an ever-increasing number of Internet of Things (IoT) applications. For applications such as navigation, surveillance and others, timeliness is a top priority;

making the right decision regarding a collision avoidance manoeuvre is only useful if it is a decision
made in due time.

Unfortunately, current state-of-the-art techniques and technologies are incapable of dealing with these growing volumes of high-speed, loosely structured, spatiotemporal data streams that require real-time analysis to achieve rapid response times. In this paper we present an adaptation of the lambda architecture as developed in the context of BigDataOcean, an EU-funded project aiming to improve data sharing and linking between enterprises and entities of the maritime domain and other domains. The project focus is to increase the integration of Big Data and Data Analytics frameworks in the blue economy by exploiting the huge potential of cross-sectorial blue data applications, and to deliver out-of-the-box value-added Big Data services for maritime applications using advanced queries and analytics. In the following sections we first attempt to accurately define an "anomaly", before proceeding to present the BigDataOcean anomaly detection service; a service for the classification and early detection of anomalous vessel patterns. The service makes use of state-of-the-art big data technologies and novel algorithms which form the basis for a service capable of real-time anomaly detection. Preliminary results indicate the efficiency of the proposed methodology when detecting maritime incidents.

2. Problem definition and related literature

The International Maritime Organization (IMO) defines Maritime Domain Awareness (MDA) as “the
effective understanding of anything associated with the maritime domain that could impact upon the
security, safety, economy, or environment,” IAMSAR (2010). Situational awareness refers to the
knowledge of the elements in the maritime space necessary to make well-informed decisions as well as
processes involving knowledge and understanding of the environment that are critical to those who need
to make decisions within the complex sea area. According to NATO (2007) Maritime Situational
Awareness (MSA) is defined as “The understanding of military and non-military events, activities and
circumstances within and associated with the maritime environment that are relevant for current and
future NATO operations and exercises where the Maritime Environment (ME) is the oceans, seas, bays,
estuaries, waterways, coastal regions and ports”. Nimmich and Goward (2007) give a pragmatic view
of MDA importance for Maritime security, as well as its economic and social impact.

The concept of anomaly is an important building block for developing situational awareness of the sea environment, Snidaro et al. (2015). An anomaly can be considered a critical event to which the system is generally called to react. Usually, a threshold establishes whether input data can be considered unexpected or anomalous, thus raising an exception. The concept of anomaly has a different meaning depending on the context used as well as the requirements, Roy (2008). Roy and Davenport (2009) present a categorisation based on a taxonomy of the maritime situational facts involved in anomaly detection, identified and validated through knowledge acquisition sessions with experts. Over the last century, a large number of diverse maritime situation awareness systems have been developed, with different objectives and characteristics depending on what "an anomaly" means for the respective stakeholder and what the operational needs are. Typical technological systems used for Maritime Domain Awareness include the Automatic Identification System (AIS), satellites, long-range radars and long-range Unmanned Aerial Vehicles (UAV). Output data from these systems are used to provide the real-time status of the observed sea environment, early warnings, as well as data analytics for decision and policy making. The availability of a plethora of sensors, higher data storage capacity, cheaper devices and better database management systems has made it possible to access huge volumes of data related to Maritime Domain Awareness, Riveiro et al. (2018).

The exploitation of such volumes of data presents new opportunities in a large number of applications, including safe vessel traffic management and collision prevention, Perera et al. (2012), coastal protection, Rabasa and Chalk (2012), environment protection and safety, Roarty et al. (2013), search and rescue missions, Breivik et al. (2013), and anti-terrorism activities as well as the prevention of illegal activities such as smuggling and piracy, Axbard (2016).

The sheer amount of data that needs to be processed in order to provide real- or near-real-time anomaly detection requires the combination of various steps, including typical big data computational models with machine learning algorithms, Abielmona (2013). For instance, Li et al. (2006) indicated four steps required to tackle the problem of anomaly detection for moving objects, including micro-clustering and Support Vector Machines for classification purposes.

Laxhammar (2008) applied a Gaussian Mixture Model as the cluster model and the Expectation-Maximization algorithm as the clustering algorithm, as an extension of the model suggested by Holst and Ekman (2003). The surveillance area was divided into grid cells, and for each cell the points' velocities were modelled by a two-dimensional Gaussian distribution. Korb and Nicholson (2004) applied the CaMML machine learning tool to the AIS data from the Australian Defence Science and Technology Organisation (DSTO). Lee et al. (2007) proposed a trajectory clustering algorithm to find similar, or normal, patterns. A trajectory was divided into a series of line segments in the partition step, which were then clustered by applying the density-based clustering algorithm DBSCAN.

Zhen et al. (2017) presented a similarity measurement method between vessel trajectories based on the spatial and directional characteristics of AIS data. Zhang et al. (2017) then applied hierarchical and k-medoids clustering to model and learn the typical vessel sailing patterns within harbour waters. A Naïve Bayes classifier of vessel behaviour was built to classify and detect anomalous vessel behaviour. Jousselme and Pallotta (2015) presented the development and implementation of fusion algorithms for maritime anomaly detection, and the definition of associated criteria and measures of performance. They suggested that adequate uncertainty representation and processing is crucial for this higher-level task, where the operator analyses information by correlating it with his background knowledge.

Those systems focus on specific scenarios (e.g., collision prevention) and are mostly applied to historical data, in which they attempt to discover vessels' operational patterns. They do not focus on the solution's scalability or the model's adaptability to "new data", thus making evident the need for more transparent, interpretable and explainable machine-learning-based systems, Riveiro et al. (2018). In this paper we propose a novel architecture capable of overcoming the impediments of huge amounts of data arriving at the system with high velocity, and of detecting various types of anomalies in near real time. Furthermore, the proposed anomaly detection service is modular in the sense that the set of possible anomalies can be further extended without adding computational burden to the system.

3. Approach and preliminary results

In this section we present the design and implementation options selected for the proposed anomaly detection services, and we provide a preliminary assessment of the efficiency of those services against specific maritime incidents.

3.1. System architecture

In our approach we have deployed a modified Lambda architecture, shown in Fig.1. This scheme allows the decoupling of batch processing (usually performed upon historical data) and real-time analysis, which typically exploits the knowledge extracted from the batch processing. Specifically, in our approach the BigDataOcean batch layer performs the analysis of historical positional data of vessels and extracts port-to-port routes. This is a long-running process which takes several hours to complete. Once completed, the extracted routes are fed into the real-time layer in order to accommodate detection of vessel anomalies in real time. Upon detection, those incidents are displayed to the end user through the service layer. Furthermore, historical data are sent from this layer back to the batch layer at specific time intervals defined by the seasonality of the data, thus replacing previously processed routes with new ones.

Fig.1: Modified Lambda architecture for the Anomaly Detection service (World Fleet icons adapted from https://www.vecteezy.com)

3.1.1. Batch layer analysis

The BDO batch layer undertakes the role of identifying the "safe" route between two ports; knowledge that can be used to detect vessels travelling outside this "safe" path. This is achieved through data analysis techniques performed upon a huge amount of historical data collected through an AIS network. The batch layer calculates "safe" routes between each pair of ports globally by correlating vessels' positional information (i.e., geographical coordinates broadcast through AIS) with port geometries, so as to link all the positions transmitted by the vessels with a specific port-to-port connection. A "safe" route is a set of convex hulls; each one indicates an area with dense AIS data transmissions and is produced through clustering of the positional data of vessels travelling from the departure port towards the destination port. Specifically, in our approach we have used K-Means, a clustering algorithm which partitions the data points into groups according to their spatial proximity and density. A normal route is then created as the collection of the convex hulls formed when computing the minimum polygon that encloses all geographical positions in a group. The convex hulls represent the confidence interval, meaning that if a vessel performing the specific port-to-port voyage travels outside a convex hull, an anomaly event is raised. Upon completion of the calculation, these routes are fed to the real-time layer, which is then capable of detecting possible anomalies in streaming data. The convex hull calculation is executed at sparse time intervals, in order to update the model when a significant amount of new data has been gathered.
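A minimal sketch of this batch computation is given below, assuming scikit-learn and SciPy; the number of clusters and the data layout are illustrative choices, not the production configuration, and the correlation with port geometries described above is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial import ConvexHull

def safe_route(positions, n_clusters=20):
    """Build a 'safe' port-to-port route as a set of convex hulls.

    positions : (n, 2) array of [lon, lat] AIS fixes for one port pair.
    Returns a list of (k_i, 2) hull-vertex arrays, one per K-Means cluster.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(positions)
    hulls = []
    for c in range(n_clusters):
        pts = positions[labels == c]
        if len(pts) >= 3:  # a hull needs at least 3 (non-collinear) points
            hulls.append(pts[ConvexHull(pts).vertices])
    return hulls
```

The batch output (one list of hulls per port pair) is what the real-time layer then loads, as described next.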

3.1.2. Real Time Layer

In this layer, queries are performed on streaming and previously unseen data, thus enabling the detection of security incidents in near real time. More specifically, streaming positional and voyage-related AIS data are consumed and combined with static datasets and data mining models in near real time to detect anomalous events. This layer includes an Apache Kafka distributed platform, to which AIS messages are forwarded from the company's station network. Those messages are then processed and divided into multiple topics based on the nature of each AIS message (identified from the message type), giving separate topics for the position reports (i.e., messages of type 1, 2 or 3), for the static and voyage-related data (i.e., messages of type 5), etc. These topics are consumed by another component, responsible for performing distributed computing and identifying the maritime security incident types mentioned above. This component is based on Akka, a free open-source toolkit for building powerful, reactive and concurrent applications. It uses actors that, once a message is received, evaluate whether the conditions that characterize a situation as an "anomaly" exist.

Our platform is currently able to detect, in near real time, four distinct types of possible anomalies, namely route deviations, AIS switch-offs, imminent collisions and groundings. Below, we define each event type and discuss the algorithms used in more detail.

• A route deviation happens every time a ship is found outside the normal route it is expected to follow, according to the departure and destination ports. The model produced in the batch layer is loaded into the real-time service and used to check whether a position coming from the AIS stream is within the boundaries the convex hulls dictate. Thus, if a ship deviates in some part of its trajectory, our service detects it and reports it to the service layer as an indicator that something might be wrong (a minimal sketch of this check follows the list).
• An AIS switch-off happens when the service stops receiving data from a ship while the ship is well within the coverage of AIS base stations. Even though ships (especially large ones) are required by IMO to have their AIS transmitter on, gaps in AIS data are a very frequent phenomenon and can result from many factors, e.g. malfunction of the AIS receiver or of the vessel's AIS transmitter, signal loss due to environmental noise, and intentional switch-off. An intentional switch-off usually indicates that the ship is about to engage in an illegal activity, except in cases where it passes through areas with increased piracy, e.g. off Somalia, and the switch-off is used as a measure of protection.
• We define an imminent collision as an unsafe proximity event between at least two vessels. To handle the complex and demanding computations of determining proximity events, we have followed a distributed approach by partitioning areas of the Earth into a set of identifiable grid cells and using separate Akka actors to monitor each grid cell. Each position message consumed from the Kafka stream is forwarded to the actor responsible for monitoring the corresponding area. The actor stores previously received messages and uses the newly arrived message's timestamp to forecast the positions of the other vessels in the grid cell based on their past messages. This is achieved by projecting the most recent position of each vessel that the actor is monitoring to the exact time of the newly received message. Then, using the course over ground of those projected positions, we determine whether their course intersects with the course of the vessel from which the new message was received and, in such a case, a new imminent collision event is yielded.
• Groundings occur when a vessel has travelled in shallow sea (i.e., the sea depth was less than the ship's draught). In most cases this also means that the vessel has travelled outside the "safe route", and thus a route deviation event would be yielded before the grounding. To identify groundings, the real-time layer considers the vessel's position and navigation behaviour. The vessel may report, through the navigation status field of the AIS messages, that it is not under command, that its ability to manoeuvre is restricted, or even explicitly that it is aground. Similarly, a rapid decrease in the vessel's speed, or frequent changes of course over ground in a short time, also indicate anomalous behaviour of the vessel. This information is correlated with bathymetry data and the vessel's draught reported through AIS to increase the accuracy of detected groundings.
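As an illustration of the route-deviation check referenced in the first bullet, a minimal sketch assuming the shapely library; the function and message names are illustrative, not the production code:

```python
from shapely.geometry import Point, Polygon

def deviates(route_hulls, lon, lat):
    """True if an AIS fix lies outside every convex hull of the normal route.

    route_hulls : list of (k, 2) [lon, lat] hull-vertex arrays produced by
                  the batch layer for the relevant port-to-port connection.
    """
    fix = Point(lon, lat)
    return not any(Polygon(hull).contains(fix) for hull in route_hulls)

# Illustrative usage on a streamed position report:
# if deviates(hulls, msg["lon"], msg["lat"]):
#     raise_anomaly(msg["mmsi"], "route deviation")  # hypothetical helper
```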

In the following subsection we present preliminary results of our approach to anomaly detection. The streams of data are simulated using historical tracks of verified incidents retrieved from the EMSA reported incidents, http://www.emsa.europa.eu/. Our experiments highlight the performance of our approach by examining characteristic examples of such security incidents.

3.2. Preliminary results

In this section we provide a preliminary assessment of the proposed architecture in terms of execution speed and present two indicative cases in which our algorithms achieved early detection of the vessels' anomalous behaviour. The experimental evaluation was performed on a machine with 12 threads (6 cores), an AMD Ryzen™ 5 2600 CPU @ 3.4 GHz and 16 GB of RAM, running Ubuntu 18.04 with Java OpenJDK 8, Scala 2.12.8 and Akka 2.5.21. Real-time architectures are evaluated in terms of data ingestion delay, CPU and memory usage. The number of actors created in our system is proportional to the number of vessels from which messages are received. This leads to the conclusion that the number of actors created is finite, and thus increasing the memory and the processing power of the system will alleviate any stress. Hence, the only factor that may affect our system is the volume and velocity of the incoming data that the system should cope with. For our experiments, we used a dataset containing 20M AIS messages received from 4599 vessels sailing in the Greek seas from February 2016 until July 2017 to assess the system's capability to ingest data. In order to test the system's scalability against different volumes of incoming data, we created two more subsets containing 1/3 and 2/3 of the initial AIS messages respectively, and these were streamed into our real-time layer. We tested configurations of our system with 2, 4, 8 and 12 threads for each dataset, and the results of our experiments are given in Table I. The preliminary results demonstrate that our approach achieves sub-second ingestion delays even with 2 threads. In terms of velocity, the data were streamed from files, i.e. at orders of magnitude higher speed than in the actual system, where the data will be fed into the real-time layer from the receivers through the Internet and thus at a much slower pace.

Table I: Data ingestion results (total execution time in seconds)

#threads 1/3 Subset (seconds) 2/3 Subset (seconds) Full Dataset (seconds)
2 147 277 841
4 86 159 550
8 60 106 267
12 51 101 161

Besides the system's performance, we also evaluate the accuracy of our approach against two case studies of maritime security incidents. Fig.2 shows a visualization of part of the trajectory of the tanker Ovit. This vessel is a Malta-registered tanker built in 2011 that ran aground on the Varne Bank in the Dover Strait, England, in the early morning of 18 September 2013, while carrying vegetable oil from Rotterdam to Brindisi. The vessel was using ECDIS for navigation, and its passage plan, which was prepared by an inexperienced junior officer, was unsafe as it passed directly over the Varne Bank. The lack of experience of the Officer Of the Watch (OOW) and the incorrect configuration of the ECDIS safety settings resulted in a 19-minute delay before he realised that the vessel was aground, and in the inability to recover the vessel's historical track from the system. Furthermore, the Dover coastguard watch officer operating the Channel Navigation Information Service was also not qualified and did not issue a warning to Ovit. Under those circumstances, Ovit ran aground at 04:34 on 18 September 2013 in the Dover Strait. Fig.2 (a) highlights the convex hulls that were produced through the batch layer for this port-to-port connection (i.e., from Rotterdam to Brindisi) and the past track of the vessel. Black triangles are reported positions where the ship stopped, and white circles positions where it was moving. It is evident from Fig.2 (b) and (c) that the ship was moving either outside or on the edge of the normal route a few minutes before running aground. After that, the vessel changed its course and headed to the port of Dover.

In another example, M/V Goodfaith, a bulk carrier sailing under the flag of Cyprus, started its last voyage on the 10th of February 2015 from Elefsis, Greece, heading to Odessa, Ukraine. The vessel was sailing in ballast condition and was in perfect shape, as its special survey maintenance operations had been successfully completed on the 9th of February. Its voyage plan included standard passages followed by vessels sailing in the Aegean, Marmara and Black Sea, but bad weather conditions were prevailing in the area of South Evvoikos and the Kafireas Strait. The vessel gradually lowered the engine speed to avoid main engine overspeed, but as the vessel was reaching the Kafireas Strait the weather conditions had significantly worsened, with wind force 9 Bft and high waves, and by the time she was crossing the strait (i.e., at approximately 01:00), Goodfaith was heavily drifting towards the north-west coast of Andros island, where she ran aground at 01:28.

(a) Zoom level 1

(b) Zoom level 2

(c) Zoom level 3


Fig.2: Grounding of oil/chemical tanker Ovit while travelling from Rotterdam to Brindisi. https://www.gov.uk/maib-reports/grounding-of-oil-chemical-tanker-ovit-on-the-varne-bank-in-the-dover-strait-off-the-south-east-coast-of-england

Fig.3 highlights part of the trajectory of the Goodfaith. It is evident from Fig.3 (a) that the vessel deviated from the convex hulls of this port-to-port connection much earlier than the grounding incident. Contrary to the Ovit case, this deviation was made on purpose, to avoid the bad weather conditions. Nevertheless, the deviation yielded an early notification of a possible anomaly in our system. Finally, Fig.3 (c) shows the last part of the vessel's route a few minutes before running aground.

(a) Zoom level 1

(b) Zoom level 2


Fig.3: Grounding of bulk carrier M/V Goodfaith while travelling from Elefsis to Odessa. http://www.hbmci.gov.gr/js/investigation%20report/final/07-2015%20GOODFAITH.pdf

4. Conclusions

In this work we presented the architectural approach to building a real-time maritime anomaly detection service, as deployed in the context of the EU-funded BigDataOcean project. We show how big data challenges can be overcome to increase our understanding of maritime traffic and improve our monitoring of vessel activities in real time. In addition, we demonstrate the capability of the proposed service to ingest data of high volume and velocity, present some example case studies regarding real-world maritime anomalies, and evaluate the ability of the presented service to detect these in due time.

References

ABIELMONA, R. (2013), Tackling big data in maritime domain awareness, Vanguard, pp. 42-43

AXBARD, S. (2016), Income Opportunities and Sea Piracy in Indonesia: Evidence from Satellite Data,
American Economic Journal: Applied Economics 8.2, pp.154-94

BRAX, C. (2011), Anomaly detection in the surveillance domain, Diss. Örebro Universitet

BREIVIK, Ø. et al. (2013) Advances in search and rescue at sea, Ocean Dynamics 63/1, pp.83-88

HOLST, A.; EKMAN, J. (2003), Anomaly detection in vessel motion, Internal Report Saab Systems,
Järfälla

IAMSAR (2010), Amendments to the International Aeronautical and Maritime Search and Rescue
Manual

ITU (2014), Technical characteristics for an automatic identification system using time division multiple access in the VHF maritime mobile band, Recommendation ITU-R M.1371-5, ITU Radiocommunication Sector (ITU-R)

JOUSSELME, A.L.; PALLOTTA, G. (2015), Dissecting uncertainty-based fusion techniques for maritime anomaly detection, 18th Int. Conf. Information Fusion (Fusion), IEEE, pp.34-41

KORB, K.B.; NICHOLSON, A.E. (2004), Bayesian Artificial Intelligence, Chapman & Hall/CRC Press

LAXHAMMAR, R. (2008), Anomaly detection for sea surveillance, 11th Int. Conf. Information Fusion,
pp.55-62

LAXHAMMAR, R. (2011), Anomaly detection in trajectory data for surveillance applications, Diss.
Örebro Universitet

LEE, J.G.; HAN, J.; WHANG, K.Y. (2007), Trajectory clustering: a partition-and-group framework,
ACM SIGMOD Int. Conf. Management of Data, pp.593-604

LI, X.; HAN, J.; KIM, S. (2006), Motion-Alert: Automatic anomaly detection in massive moving
objects, IEEE Intelligence and Security Informatics Conf. (ISI 2006), Berlin, pp.166-177

MARTINEAU, E.; ROY, J. (2011), Maritime anomaly detection: Domain introduction and review of
selected literature, No. DRDC-VALCARTIER-TM-2010-460, Defence Research and Development
Canada Valcartier (Quebec)

NATO (2007), NATO Concept for Maritime Situational Awareness, MCM-0140

NIMMICH, J.L.; GOWARD, D.A. (2007), Maritime Domain Awareness: The Key to Maritime
Security, Int. L. Stud. Ser. US Naval War Col. 83: 57

PERERA, L.P.; OLIVEIRA, P.; SOARES, C.G. (2012), Maritime traffic monitoring based on vessel
detection, tracking, state estimation, and trajectory prediction, IEEE Trans. Intelligent Transportation
Systems 13.3, pp.1188-1200

RABASA, A.; CHALK, P. (2012), Non-Traditional Threats and Maritime Domain Awareness in the
Tri-Border Area of Southeast Asia: The Coast Watch System of the Philippines, Rand National Defense
Research Inst, Santa Monica

RHODES, B.J.; BOMBERGER, N.A.; SEIBERT, M.C.; WAXMAN, A.M. (2005), Maritime situation
monitoring and situation awareness using learning mechanisms, Military Communications Conf.,
Atlantic City

RIVEIRO, M.; PALLOTTA, G.; VESPE M. (2018), Maritime anomaly detection: A review, Wiley
Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8.5: e1266

ROARTY, H.J. et al. (2013), Expanding maritime domain awareness capabilities in the Arctic: High
frequency radar vessel-tracking, Radar Conf. (RADAR), pp.1-5

ROY, J. (2008), Anomaly detection in the maritime domain, Optics and Photonics in Global Homeland
Security IV vol. 6945, Int. Society for Optics and Photonics

ROY, J.; DAVENPORT, M. (2009), Categorization of maritime anomalies for notification and alerting
purpose, NATO workshop on data fusion and anomaly detection for maritime situational awareness,
La Spezia, pp.15–17

SNIDARO, L.; VISENTINI, I.; BRYAN, K. (2015), Fusing uncertain knowledge and evidence for
maritime situational awareness via Markov Logic Networks, Information Fusion 21, pp.159-172

ZHANG, Z. et al. (2017), Review on Machine Learning Approaches in Maritime Anomaly Detection
Based on AIS Data, Electrical Engineering and Automation: Int. Conf. Electrical Engineering and
Automation (EEA2016), pp.880-887

ZHEN, R. et al. (2017), Maritime anomaly detection within coastal waters based on vessel trajectory clustering and Naïve Bayes Classifier, J. Navigation 70.3, pp.648-670

Hydrodynamic Design Procedure via Multi-Objective
Sampling, Metamodeling, and Optimisation

Luca Antognoli, Roma Tre University, Rome/Italy, luc.antognoli@stud.uniroma3.it
Simone Ficini, Roma Tre University, Rome/Italy, sim.ficini@stud.uniroma3.it
Marco Bibuli, CNR-INM, Rome/Italy, marco.bibuli@cnr.it
Matteo Diez, CNR-INM, Rome/Italy, matteo.diez@cnr.it
Danilo Durante, CNR-INM, Rome/Italy, danilo.durante@cnr.it
Salvatore Marrone, CNR-INM, Rome/Italy, salvatore.marrone@cnr.it
Angelo Odetti, CNR-INM, Rome/Italy, angelo.odetti@ge.issia.cnr.it
Ivan Santic, CNR-INM, Rome/Italy, ivan.santic@insean.cnr.it
Andrea Serani, CNR-INM, Rome/Italy, andrea.serani@insean.cnr.it

Abstract

A hydrodynamic design procedure is presented, combining multi-objective sampling, metamodeling, and optimisation. A design study of a flapped surface for a passenger hydrofoil is discussed. Hydrodynamics, stability and control are optimised with focus on maximum lift, minimum drag, and manoeuvrability/stability performance during take-off and turning manoeuvres. Shape optimisation and control design are applied in combination with validated CFD simulations. Specifically, the hydrodynamic design of the foil sections is achieved through optimisation, combining automatic shape/grid modification, adaptive sampling and metamodeling, and multi-objective optimisation algorithms for maximum lift and minimum drag. A robust control scheme is designed for the optimised shape. Flaps and rudders are commanded to stabilize roll and pitch motions, as well as to steer the vessel during the desired manoeuvres.

1. Introduction

When in flight conditions, hydrofoils have low sensitivity to waves, both in terms of speed loss and seakeeping performance; furthermore, they have small wake/wave washing effects and reduced fuel consumption compared to fast displacement ships with the same payload and speed. These characteristics are particularly evident in the case of submerged-wings hydrofoils. Their initial development dates back to the '60s, when several navies began to develop submerged-wings hydrofoils for military purposes. In those years, the American Navy together with Boeing developed the first Jetfoil as a missile launcher. Later, in the '70s, the Italian Navy started the development of a Sparviero-class immersed-wing missile launcher. In the '90s, catamarans equipped with submerged wings were built in both Norway and Japan. The Norwegian design Foilcat 2900 used a pair of inverted-T wings at the bow and an inverted Greek "Pi" wing at the stern. The Japanese design by Mitsubishi, the Super Shuttle 400 Rainbow, used an inverted Greek "Pi" aft wing. In recent years, the Italian company Rodriquez, now Intermarine, developed a prototype of a fully-submerged wing hydrofoil, which requires the improvement of some aspects, including flight efficiency and safety, along with an increase of payload, to be competitive on the market. Despite these early developments, immersed-wing designs have not found extensive application on the market of fast vehicles, and their production has been limited to a few examples. This is because, alongside the undeniable advantages, this solution has presented some limitations that have not facilitated its wide commercial use. Critical issues are: the overall efficiency of the wing complex; take-off phase stability and control; efficiency of the propulsion system in both the take-off and cruising phases; maintenance of the control surfaces; payload versus installed power; safety and stability during navigation.

In the present paper, the study of a new vehicle with hybrid features is outlined. This new design aims at: maintaining the typical advantages of submerged-wings hydrofoils; overcoming problems related to in-flight stability; improving, compared to the state of the art, payload, wing efficiency, etc. Specifically, a synergetic design study of a flapped surface for a passenger hydrofoil is presented, where the vessel is characterized by immersed fore and aft foils. The fore foil is attached to the hull by a single strut that embeds a vertical rudder; the fore foil has two independent flaps for motion stabilization. The aft foil is attached by means of three vertical struts (each equipped with an independent rudder); the aft foil has two lateral and two central flaps. The foils are straight/tapered and have NACA 16-3075 sections.

Experimental and computational fluid dynamics (EFD, CFD) are used to assess design performance,
which are later optimised for maximum lift, minimum drag, and manoeuvrability/stability performance
during take-off/turning via shape design and control. The aim of EFD is the collection of data for hy-
drodynamic characterization and CFD validation of the forward fully-submerged inverted T-foil. A 1:7
scale model has been designed and manufactured at CNR-INM laboratories by rapid prototyping tech-
niques, using a performing 3D printing polylactic acid (PLA) with steel stiffening. The prototype has
been tested at CNR-INM towing tank covering a wide range of speeds and deflection angles of the flap.
An in-house finite volume method (FVM) is used for unsteady Reynolds-averaged Navier-Stokes equa-
tion (RANSE) calculations. The agreement with the EFD data indicates reliable predictions and suita-
bility for simulation-based design optimisation and control (by accurate definition of hydrodynamic
coefficients). The shape design of the foil sections is achieved though optimisation, combining auto-
matic shape/grid modification, adaptive sampling and metamodeling, and multi-objective optimisation
algorithms for maximum lift and minimum drag. Finally, the development of the control system has
been achieved, overcoming two major difficulties: the hard-nonlinear behaviour of the overall craft and
the presence of unstable dynamics that require reliable and robust stabilization schemes. A robust con-
trol scheme is designed for the optimised shape. Flaps and rudders are commanded to stabilize roll and
pitch motions, as well as steering the vessel during the desired manoeuvres.

2. Computational fluid dynamics

The numerical solution of the unsteady RANSE is achieved through an in-house developed simulation
tool. The algorithm is formulated as a finite volume scheme, with variables co-located at cell centres.
Turbulent stresses are taken into account by the Boussinesq hypothesis, and several turbulence models
(both algebraic and differential) are implemented. Here the Spalart and Allmaras (1994) turbulence
model is used. The free surface is taken into account through a single-phase level set algorithm. The 3D
wing geometry is fully discretized together with flaps and struts with a body-fitted structured grid, al-
lowing for an in-depth analysis of the flow features near the body. In order to treat complex geometries
or bodies in relative motion, the numerical algorithm is discretized on a block-structured grid with par-
tial overlap, possibly in relative motion. This approach makes domain discretization and quality control
of the calculation grid much easier than with similar discretization techniques implemented on meshes
structured with adjacent blocks. Of course, grid connections and overlaps are not as trivial as with standard multi-block approaches, and must be calculated in the pre-processing phase.
parallelization of the RANSE code is obtained by distributing the structured blocks among available
distributed memory (nodes) or shared memory (threads) processors. Pre-processing tools, which allow
the subdivision of structured blocks and their distribution among the processors, are used for load bal-
ancing, while fine tuning is left to the user. The communication between the processors for the coarse
grain parallelization is obtained using the standard message passing interface (MPI) library, while the
fine grain parallelization (shared memory) is achieved through the Open Multi-Processing (OpenMP) library. The efficiency of the parallel code has been examined in earlier research, showing satisfac-
tory results in terms of acceleration for different test cases, Broglia et al. (2007, 2014). More details on
the code implementation and application may be found in Di Mascio et al. (2001, 2007, 2009) and
Muscari et al. (2006).

The wing group at the bow region was characterized through an extensive analysis of the loads at different regimes. Four angles of attack of the whole group were considered (in addition to the built-in angle of 1.5°): 0°, 3°, 5° and 7°. For each of them, the flap was also rotated by -15°, -10°, -5°, +5°, +10° and +15°, where the plus sign (+) indicates a downward rotation of the flap.


Fig.1: Top: Pressure coefficient distribution with +5° flap angle. Angles of attack are (from top left
to bottom right): (a) 0°, (b) 3°, (c) 5°, (d) 7°. The colour map (red/blue) goes from -0.5 to 0.5.
Bottom: Streamwise velocity distributions with +5° flap angle on a domain section at the centre
of the flap. Angles (e-h) are ordered as in the top images. The colour map (white/blue) goes from -0.78 m/s to 8.55 m/s

Since the total number of simulations is quite demanding in terms of computational time, steady computations were performed. In the present numerical scheme, which is based on a pseudo-compressible technique, a steady approach is intended as an average of a time-varying solution; this is adequate for the mere evaluation of the forces acting on the body. For the flap rotation, the Chimera technique was widely exploited. For every angle of rotation of the flap, a new grid is designed to make the solution as smooth as possible. When the angle of attack of the wing group is varied, we preferred to rotate the inflow direction rather than the whole grid, so that the shown solutions are coherently counter-rotated.


Fig.2: Top: Pressure coefficient distribution with +15° flap angle. Angles of attack are (from top left
to bottom right): (a) 0°, (b) 3°, (c) 5°, (d) 7°. The colour map (red/blue) goes from -0.5 to 0.5.
Bottom: Streamwise velocity distributions with +15° flap angle on a domain section at the centre of the flap. Angles (e-h) are ordered as in the top images. The colour map (white/blue) goes from -0.78 m/s to 8.55 m/s

Figs.1 and 2 show in the two top rows the pressure distribution on the two sides of the forward wing: as can be seen, the suction side of the wing experiences increasing depression as the angle of attack increases. In Fig.1 the changes of pressure distribution and streamwise velocity field with the flap lowered to +5° can be appreciated. In general, the bow wing group is characterized by a regular pressure distribution for angles of attack of 0°, 3° and 5°, while at 7° problems arise for flap rotation angles of +10° and +15°, Fig.2. From the velocity field shown in the bottom rows of Fig.2, it is evident that the flow over the back of the profile is completely separated, indicating a stall condition.

3. Experimental fluid dynamics

In order to develop a scaled model of the bow wing group with strut and flaps, the loads acting on the body were numerically evaluated in some meaningful conditions. Specifically, a finite element simulation of the loaded model was performed in order to assess the maximum deformation of the wing. 3D printing is used, and a model at scale 1:7 with moving flaps is obtained, Fig.3a. Towing-tank experiments at different angles of attack and flap deflections were performed. Significant deformations of the wing at high speed (20 kn, 25 kn) are a concern. For this reason, more rigid models are currently in the design phase, to be tested in the near future. A preliminary comparison of numerical and experimental results is included in Fig.3b, showing lift versus speed at the built-in angle of attack of 1.5°.

Fig.3: Experimental model (a) and comparison of numerical and experimental results (b)

4. Optimisation procedure of hydrodynamic performance

4.1. Problem formulation

A 2D section is considered for this preliminary optimisation (namely a NACA 16-3075 profile),
assuming take-off conditions. The built-in angle of attack is set here to 2°, whereas the flap is deflected
by 15° (downwards). No additional pitch angle is considered. A Reynolds number of about 1×10^6 is
used. An idealised flapped profile with no gaps is considered. The optimisation aims at reducing the
drag coefficient CD, while maintaining the lift coefficient at least equal to the original value, CL*. The
optimisation problem is formulated as

Minimise C_D and maximise C_L, subject to C_L ≥ C_L*   (1)

4.2. Shape modification

The profile is modified by adding a Hicks-Henne function, e.g. Masters et al. (2015), with one bump, b(ξ), in the z direction to both the pressure and the suction side:

b(ξ) = a [sin(π ξ^(log 0.5 / log t_1))]^(t_2)   (2)

ξ is a nondimensional curvilinear coordinate (where 0 is the leading edge and 1 the trailing edge of the unflapped section), a is the maximum bump amplitude, t_1 controls the bump location, and t_2 defines its width. Two design variables are chosen, namely the amplitude a and the position t_1 of the bump, whereas t_2 is kept fixed, equal to 2. The ranges of variation are set to ±3%c (c is the unflapped chord) and 0.2-0.6, respectively. In the following, both variables are normalized between 0 and 1.

Eq.(2) is applied to both sides, therefore the section thickness remains constant. Furthermore, shape and
RANSE computational grid are automatically modified according to Eq.(2).
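To make the parameterisation concrete, the following minimal Python sketch implements the bump of Eq.(2) and applies it to a placeholder section; the baseline surface definition, the variable values and all names are illustrative assumptions, not taken from the actual design tool.

import numpy as np

def hicks_henne_bump(xi, a, t1, t2=2.0):
    """Bump b(xi) = a * sin(pi * xi**(log(0.5)/log(t1)))**t2, xi in [0, 1]."""
    return a * np.sin(np.pi * xi ** (np.log(0.5) / np.log(t1))) ** t2

xi = np.linspace(0.0, 1.0, 201)           # nondimensional chordwise coordinate
z_upper = 0.05 * np.sqrt(xi) * (1 - xi)   # placeholder baseline surfaces
z_lower = -z_upper
a, t1 = 0.02, 0.35                        # design variables (amplitude, location)
b = hicks_henne_bump(xi, a, t1)
z_upper_mod = z_upper + b                 # same offset added to both sides,
z_lower_mod = z_lower + b                 # so the section thickness is unchanged

Adding the same bump to both sides changes the camber while leaving the thickness distribution intact, consistent with the statement above.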

4.3 Multi-objective optimisation method

A multi-objective version of the deterministic particle swarm optimisation (MODPSO) method is used
to solve the problem of Eq.(1). Details can be found in Pellegrini et al. (2017).
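The MODPSO algorithm itself is described in the cited reference; as a rough illustration of the underlying mechanics only, the sketch below implements a plain single-objective deterministic PSO (randomness appears only in the initial swarm), with constriction coefficients commonly quoted in the deterministic-PSO literature. The multi-objective Pareto bookkeeping of MODPSO is deliberately omitted, and all names and values are assumptions.

import numpy as np

def dpso(f, lb, ub, n_particles=16, iters=100, chi=0.721, c1=1.655, c2=1.655):
    rng = np.random.default_rng(1)          # randomness only in the initial swarm
    x = rng.uniform(lb, ub, (n_particles, len(lb)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        v = chi * (v + c1 * (pbest - x) + c2 * (g - x))   # no random factors
        x = np.clip(x + v, lb, ub)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()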

4.4 Adaptive metamodeling

Two metamodeling techniques are used, namely stochastic radial basis functions (RBF) and Gaussian
processes (GP), and combined with adaptive sampling methods as described in the following.

4.4.1 Stochastic radial basis functions

Consider an objective function f(x), where x ∈ ℝ^N is the design variable vector and N the design space dimension. Let the true function value be known in J training points x_j, with associated objective function values f(x_j). The metamodel prediction f̃(x) is computed as the expected value (EV) over a stochastic tuning parameter of the RBF metamodel, e.g. τ ∼ unif[1,3]:

f̃(x) = EV_τ[g(x,τ)], with g(x,τ) = ∑_{j=1}^{J} w_j ‖x − x_j‖^τ   (3)

The w_j are unknown coefficients and ‖·‖ is the Euclidean norm. The coefficients w_j are determined by enforcing exact interpolation at the training points, g(x_j,τ) = f(x_j), i.e. by solving Aw = f, with w = {w_j}, a_ij = ‖x_i − x_j‖^τ, and f = {f(x_j)}.

The uncertainty U_f̃(x) associated with the prediction is quantified as four times the square root of the
variance. The maximum-uncertainty adaptive sampling (MUAS) method identifies new training points
by solving the following single-objective maximization problem:

x* = argmax_x [U_f̃(x)]   (4)

Accordingly, new training points are adaptively placed where the prediction uncertainty is maximum. Details of the methodology, implementation, and example applications are found in Volpi et al. (2015) and Pellegrini et al. (2018). Here, both CL and CD are interpolated by the RBF model, and their predictions and associated uncertainties are considered for optimisation and adaptive sampling. The latter is performed considering the largest of the uncertainties associated with the lift and drag coefficients.
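A compact numerical sketch of Eq.(3) and the MUAS criterion of Eq.(4) is given below; it is a simplified stand-in for the cited implementations, with a placeholder objective and assumed sample sizes. A least-squares solve is used to guard against the ill-conditioning of the power kernel near τ = 2.

import numpy as np

rng = np.random.default_rng(0)

def rbf_fit_predict(X_train, f_train, X_eval, tau):
    # Exact interpolation g(x_j, tau) = f(x_j) with kernel ||x - x_j||^tau
    A = np.linalg.norm(X_train[:, None, :] - X_train[None, :, :], axis=-1) ** tau
    w = np.linalg.lstsq(A, f_train, rcond=None)[0]
    K = np.linalg.norm(X_eval[:, None, :] - X_train[None, :, :], axis=-1) ** tau
    return K @ w

def stochastic_rbf(X_train, f_train, X_eval, n_tau=32):
    # Prediction = EV over tau ~ unif[1,3]; uncertainty U = 4 * std deviation
    taus = rng.uniform(1.0, 3.0, n_tau)
    G = np.stack([rbf_fit_predict(X_train, f_train, X_eval, t) for t in taus])
    return G.mean(axis=0), 4.0 * G.std(axis=0)

# MUAS, Eq.(4): add the candidate point with the largest prediction uncertainty
X_train = rng.random((5, 2))                              # initial training set
f_train = np.sin(3 * X_train[:, 0]) + X_train[:, 1] ** 2  # placeholder objective
X_cand = rng.random((1000, 2))                            # candidate new points
pred, U = stochastic_rbf(X_train, f_train, X_cand)
x_new = X_cand[np.argmax(U)]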

4.4.2. Gaussian process

A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian distribution. The mean function m(x) and the covariance function k(x,x′) of a real process f(x) are defined as, Rasmussen (2004),

m(x) = EV[f(x)]  and  k(x,x′) = EV[(f(x) − m(x))(f(x′) − m(x′))]   (5)

and f(x) may be approximated as

f(x) ∼ f̃(x) = GP(m(x), k(x,x′))   (6)

The covariance function is evaluated as k(x_{p,i}, x_{p,j}) = exp(−θ_p ‖x_{p,i} − x_{p,j}‖²), with a set of free tuning parameters θ_p, where i and j are training set indices and p is the design variable index. The parameters are defined so as to maximize the log likelihood. The mean and variance associated with the prediction are calculated accordingly, e.g. Williams (1998).

Finally, the uncertainty U_f̃(x) associated with the prediction is quantified as four times the square root of the variance. The MUAS criterion of Eq.(4) is used for the adaptive sampling procedure, considering the largest of the uncertainties associated with the lift and drag coefficients.
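For completeness, a bare-bones zero-mean GP predictor with the above squared-exponential kernel is sketched below; a single isotropic θ is used for brevity (the formulation above has one θ_p per design variable), the likelihood maximization for θ is omitted, and all names are illustrative.

import numpy as np

def gp_predict(X_train, f_train, X_eval, theta, jitter=1e-10):
    # Predictive mean and uncertainty (4 * std) for k(x, x') = exp(-theta ||x - x'||^2)
    def kern(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-theta * d2)
    K = kern(X_train, X_train) + jitter * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f_train))
    Ks = kern(X_eval, X_train)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v * v, axis=0)       # prior variance k(x, x) = 1
    return mean, 4.0 * np.sqrt(np.maximum(var, 0.0))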

4.5 Optimisation results

Five points are selected as the initial training set in the nondimensional domain, specifically [(0.5,0.5); (0.5,0); (0,0.5); (0.5,1); (1,0.5)]. Five additional points are defined based on the adaptive sampling methodology described above, Figs.4 and 5. Four points are placed at the domain corners with both RBF and GP. Although the global trends of CL and CD provided by RBF and GP are reasonably similar, their uncertainty structures are quite different. Therefore, the fifth RBF point differs from the fifth GP point.

(a) RBF (b) GP


Fig.4: Metamodel prediction (top) and associated uncertainty (bottom) for lift coefficient, CL

(a) RBF (b) GP
Fig.5: Metamodel prediction (top) and associated uncertainty (bottom) for drag coefficient, CD

The two Pareto fronts obtained by MODPSO with both RBF and GP are shown in Fig.6. It may be noted that they span the same objective-function region. Nevertheless, they are quite different from each other, indicating that convergence of the metamodel training has not been achieved yet. The training point (in red) belonging to the front is used for comparison with the original profile. Specifically, the drag coefficient is reduced by 18%, whereas the lift coefficient is increased by 10%. As a consequence, the hydrodynamic efficiency is increased by 35%. For comparison, Fig.7 shows the section along with the pressure distribution and the y-component of the vorticity vector for the original and optimised profiles.

Fig.6: Multi-objective optimisation results

(a) Original (b) Optimised
Fig.7: Comparison of the y-component of vorticity for the original and optimised profiles

Finally, and in addition to the current design space, a bio-inspired wing section (derived from owl wings), named ISHII, is also considered. Fig.8 shows a comparison between the original profile and an ISHII section. As visible, a second curvature on the pressure side allows the flow to remain more attached, giving a global efficiency 1.5 to 2 times that of the original NACA profile. The vorticity fields in the depicted configuration (7° angle of attack and 20 kn advancing speed) allow one to appreciate the lower shedding and the reattachment of the flow on the back of the ISHII profile, when compared to the NACA. As a drawback, the thickness of the ISHII profile is fairly reduced, resulting in poor manufacturing and structural performance.

Obviously, since the simulations are 2D, the vorticity intensity is significantly greater than in the corresponding 3D case. This means that the 2D simulations must be considered only as a first estimate of the global performance of an optimised design.

Fig.8: Comparison between the NACA 16-3075 original profile (left) and a bio-inspired ISHII profile
(right) at 20 kn, 7° angle of attack and 10° flap rotation. On top the pressure fields, on bottom
the vorticity fields

5. Hydrofoil control of flapped surfaces

The control system development for a hydrofoil vessel is a challenging problem to tackle for two main reasons: the first is the hard-nonlinear behaviour, Khalil (2002), of the overall craft, which makes the design of a suitable mathematical model a demanding task; the second is the presence of unstable dynamics that require the development of reliable and robust stabilization schemes, Zhou et al. (1996), so as to guarantee the regulation of the vessel motion within the operative limits.

Fig.9 shows the actual vessel sketch with the relevant parameters used to design the control system. The results of a combined take-off and turning manoeuvre are reported in Figs.10 and 11, showing the combined action of the control scheme acting on the different control surfaces (flaps and rudders) to successfully stabilize the motion on the different axes while at the same time tracking the desired references.

Fig.9: Representation of the overall vessel, used for control

The first issue requires the design of a mathematical model based on ordinary differential equations (which represent a much handier tool from a control standpoint than distributed-parameter or numerical models) that captures the main motion behaviours and embeds the controllability and observability characteristics of the system for the stabilization-oriented analysis. The second point is related to the development and performance evaluation of the control system devoted to the regulation of roll, pitch and yaw motions during the different operative phases (take-off, cruise, turning). In particular, given the unstable dynamics of the roll motion, a robust control scheme is designed to ensure a stabilized motion within the predefined operating limits and under external disturbances. A robust control scheme based on local linearization and custom shaping of the desired closed-loop dynamics is designed to command the proper driving of flaps and rudders in order to stabilize the roll and pitch motions, as well as to steer the vessel where intended.

Fig.10: Combined take-off and turning manoeuvre – Top plot depicts the speed profile, central plot
reports the immersion values of fore and aft foils, bottom plot shows the angle values com-
manded to the fore and aft central flaps for pitch control

Fig.11: Combined take-off and turning manoeuvre – From the top, the first plot depicts the angle
values commanded to aft lateral flaps for roll control; the second plot reports the commanded
rudder angle for the steering actions; the third plot represents the heading of the vessel and the fourth plot reports the yaw rate (i.e. the turning speed)

6. Conclusions

A hydrodynamic design procedure of a passenger-hydrofoil flapped surface has been presented, com-
bining adaptive multi-objective sampling, metamodeling, and optimisation. Hydrodynamics, stability
and control were assessed and optimised with focus on maximum lift, minimum drag, and manoeuvra-
bility/stability performance during take-off and turning manoeuvres. Validated CFD simulations were
used for hydrodynamic performance predictions, provided by a RANSE solver. Stochastic radial basis
functions and Gaussian processes were used as adaptive metamodels.

CFD results pertained to the 3D submerged wing under a variety of conditions, whereas current optimisation results are limited to a flapped 2D section of the foil in the take-off phase. Two design variables were used, with shape modifications provided by the Hicks-Henne function. Despite the use of an idealised 2D geometry and a low-dimensional design space, results are promising, as the drag coefficient is reduced by 18%, whereas the lift coefficient is increased by 10%. The hydrodynamic efficiency is increased by 35%.

Future work includes the use of 3D simulations in the optimisation phase, possibly under multiple conditions, the definition of higher-dimensional design spaces, and a final EFD campaign for the optimised wing.

Acknowledgements

The work is part of the CNR-INM activities within the research project IBRHYDRO, led by the Italian
shipyard “Intermarine S.p.A.” and financially supported by the Italian Ministry of Infrastructures and
Transport.

References

BROGLIA, R.; DI MASCIO, A.; AMATI, G. (2007), A Parallel Unsteady RANS Code for the Numer-
ical Simulations of Free Surface Flows, 2nd Int. Conf. Marine Research and Transportation, Ischia

BROGLIA, R.; ZAGHI, S.; MUSCARI, R.; SALVADORE, F. (2014), Enabling hydrodynamics solver
for efficient parallel simulations, Int. Conf. High Performance Computing & Simulation (HPCS), Bo-
logna

DI MASCIO, A.; BROGLIA, R.; FAVINI, B. (2001), A second order Godunov-type scheme for naval
hydrodynamics, Godunov Methods: Theory and Applications 26, pp.253–261

DI MASCIO, A.; BROGLIA, R.; MUSCARI, R. (2007), On the application of the single-phase level
set method to naval hydrodynamic flows, Computers and Fluids 36, pp.868-886

DI MASCIO, A.; BROGLIA, R.; MUSCARI, R. (2009), Prediction of hydrodynamic coefficients of ship hulls by high-order Godunov-type methods, J. Marine Science and Technology 14, pp.19-29

KHALIL, H.K. (2002), Nonlinear Systems, Prentice-Hall

MASTERS, D.A.; TAYLOR, N.J.; RENDALL, T.; ALLEN, C.B.; POOLE, D.J. (2015), Review of
aerofoil parameterisation methods for aerodynamic shape optimisation, 53rd AIAA Aerospace Sciences
Meeting, p.0761

MUSCARI, R.; BROGLIA, R.; DI MASCIO, A. (2006), An overlapping grids approach for moving bodies problems, 16th Int. Offshore and Polar Eng. Conf.

PELLEGRINI, R.; SERANI, A.; LEOTARDI, C.; IEMMA, U.; CAMPANA, E.F.; DIEZ, M. (2017),
Formulation and parameter selection of multi-objective deterministic particle swarm for simulation-
based optimisation, Applied Soft Computing 58, pp.714-731

PELLEGRINI, R.; SERANI, A.; DIEZ, M.; WACKERS, J.; QUEUTEY, P.; VISONNEAU, M. (2018),
Adaptive sampling criteria for multi-fidelity metamodels in CFD-based shape optimization, 7th European Conf. Computational Fluid Dynamics (ECFD 7), Glasgow

RASMUSSEN, C.E. (2004), Gaussian processes in machine learning, Advanced Lectures on Machine
Learning, Springer, pp.13-27

SPALART, P.R.; ALLMARAS, S.R. (1994), A one-equation turbulence model for aerodynamic flows,
La Recherche Aérospatiale 1, pp.5-21

VOLPI, S.; DIEZ, M.; GAUL, N.J.; SONG, H.; IEMMA, U.; CHOI, K.K.; CAMPANA, E.F.; STERN,
F. (2015), Development and validation of a dynamic metamodel based on stochastic radial basis
functions and uncertainty quantification, Structural and Multidisciplinary Optimisation 51(2), pp.347-
368

WILLIAMS, C.K. (1998), Prediction with Gaussian processes: From linear regression to linear
prediction and beyond, Learning in graphical models, Springer, pp.599-621

ZHOU, K.; DOYLE, J.C.; GLOVER, K. (1996), Robust and optimal control, Prentice-Hall

Discrete Event Simulation for Strategic Shipyard Planning
Yong-Kuk Jeong, KTH Royal Institute of Technology, Södertälje/Sweden, yongkuk@kth.se
Huiqiang Shen, Seoul National University, Seoul/Korea, shenjim712@snu.ac.kr
Youngmin Kim, Seoul National University, Seoul/Korea, bestofall@snu.ac.kr
Young-Ki Min, Seoul National University, Seoul/Korea, nasmin@naver.com
Jong Gye Shin, Seoul National University, Seoul/Korea, jgshin@snu.ac.kr
Philippe Lee, Xinnos Co. Ltd., Seoul/Korea, philippe_lee@xinnos.com
Jong Hun Woo, Korea Maritime and Ocean University, Busan/Korea, jonghun_woo@kmou.ac.kr
Yong-Gil Lee, Korea Maritime and Ocean University, Busan/Korea, yaleyong@kmou.ac.kr

Abstract

This paper describes a discrete event simulation system for the strategic planning of shipyards, such as expanding the capacity of drydock or gantry crane facilities. We verified several scenarios to improve the ship production system of a large Korean shipyard by using such process-centric simulations. The improvements in production and logistics load due to facility expansion and layout changes were confirmed with quantitative results.

1. Introduction

The basic elements of the shipyard's production environment are divided into six categories: product, process, facility, workforce, space, and schedule. These are interrelated and essentially organised around the process. Based on the shipbuilding process, the layout of the space elements is determined, and the facilities are constructed considering the process and product. The schedule is determined by considering constraints such as customer requirements and workforce. The relationships between production environment factors can vary slightly depending on the situation, but generally the remaining components are decided around the process, as mentioned above.

However, it is not easy to change the layout, which is determined by considering the shipbuilding process. The layout of the shipyard must account not only for the shipbuilding process but also for geographical requirements, because the yard is large and a layout change can affect the shipbuilding process itself. Therefore, in order to change these factors, a systematic strategy should be established and followed. A simulation model can be utilized for this purpose. Simulation models can be used to build virtual production environments, such as shipbuilding production environments, and to quantitatively compare the benefits and drawbacks of changing production factors.

In this study, based on an existing shipyard simulation framework and system, we compare and analyse the quantitative performance of a shipyard under changes to its production environment. The changes to the shipyard production environment were organised into scenarios based on the requirements of large shipyards in Korea.

2. Simulation framework and computational shipyard dynamics

The shipbuilding simulation framework, Fig.1, proposed by Woo et al. (2016) defines six information
models for shipbuilding simulation as product, process, schedule, facility, space, and workforce. In
the simulation platform, a simulation model is constructed, basic functions and algorithms necessary
for performing the simulation are defined, and targets for constructing the simulation system are
defined quantitatively using KPIs. The computational shipyard dynamics proposed by Kim et al.
(2018) stated that the relationship between the input variables (product, process, schedule, facility,
space, and workforce) and the output variables (KPIs) has the same characteristics as functionals in a
mathematical sense. The relationship is quite complex due to the characteristics of the shipbuilding
industry, difficult to express in an explicit manner, and can instead be captured using a discrete event simulation model. The output variables are then calculated by numerically evaluating the simulation model, rather than by an analytic method.

Fig.1: Shipyard DES simulation framework, Woo et al. (2016)

In this study, the simulation system is applied to compare quantitative performance under different shipyard strategy plans. In order to perform the simulation according to a scenario, input and output variables must be defined. The input variables can be product mix, production plan, layout, facility operation policy, available workforce condition and dispatching rule. The output variables are workforce, production cost, production volume, and so on. The relation between them can be expressed as in Fig.2.

Fig.2: Simulation procedure for strategic shipyard planning
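As a toy illustration of the process-centric DES idea (not the simulation system used in this study), the sketch below models blocks flowing through a workshop and a stockyard with the open-source SimPy library; all resource capacities, times and names are invented for the example. Re-running it with, e.g., a 10% shorter work time mimics the kind of scenario comparison described in the following sections.

import simpy

def block(env, name, workshop, stockyard, work_time, buffer_time, log):
    with workshop.request() as req:          # wait for a free workshop bay
        yield req
        start = env.now
        yield env.timeout(work_time)         # assembly work
    with stockyard.request() as req:         # move to the stockyard buffer
        yield req
        yield env.timeout(buffer_time)
    log.append((name, start, env.now))

env = simpy.Environment()
workshop = simpy.Resource(env, capacity=2)   # e.g. two assembly bays
stockyard = simpy.Resource(env, capacity=10)
log = []
for i in range(8):
    env.process(block(env, f"BLK{i:02d}", workshop, stockyard,
                      work_time=5.0, buffer_time=3.0, log=log))
env.run()
print(log)  # per-block start/finish times -> workshop load and buffer KPIs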

3. Applications

3.1. Comparison of quantitative performance with changes in shipbuilding production


environment (use case #1)

The first case uses a simulation model to change the layout and working time of the shipyard production environment. In this case, the facility and space elements generally act as variables or constraints in the simulation model. For facilities, capacity and workload per unit time are used as variables and constraints; spatial factors act as variables and constraints for roads, workshops, and stockyards. The workforce element is generally used as a variable rather than a constraint and affects the simulation result by varying the workload per unit time according to the skill of the workers. The product, process, and schedule elements are input information required to perform the simulation and can be defined mainly as master plan data, Fig.3.

Fig.3: Basic components of shipyard production environment

In this case, the simulation model consists of a geographic information model, a facility model, and a
process model. The geographical information model includes spatial information such as workshop,
stockyard, and road. The facility model includes facility information such as outdoor cranes,
transporters, etc., and the process model includes information on the preceding and following
processes in accordance with the shipbuilding process.

The geographic information model stores the shipyard layout as coordinate information of GIS objects and must include mapping information for linking with the process information. Maximum load information, such as quantity and area, should also be defined so that it can act as a constraint during simulation execution. The process model should include all shipbuilding processes according to the product type, and the product information should have the operation period, input schedule, and dispatching rule predefined, Fig.4.

Fig.4: Simulation model (use case #1)

The variables of the simulation model and the corresponding quantitative performance measures are summarized as follows. Simulation variables represented by product information and schedule information can be used to derive production quantities, including process loads, as quantitative results. The effects of new production facilities and technologies can be applied to the simulation model by updating the equipment and workforce information, and the results can be derived as production quantity, number of required workers and production cost. Finally, the effect of rearranging the yard layout can be quantified by calculating the change in logistic flow when the spatial information is changed.

In the actual case, simulation results were compared for the working time reduction due to the automation equipment of the panel block manufacturing plant (P001) and the work time saving due to the increased area of the curved block manufacturing plant (C001). In order to compare the results, we modified the master plan data, which is the simulation input information, by shortening the work time of the P001 and C001 factories by 10%. We also distributed products with specific conditions to new workshops and checked the load of those new workshops.

When changing the master plan data, the working time was modified based on the assembly block, and the schedule of the affected trailing processes and the loading date information of the upper block were updated accordingly. For example, if a specific process of the assembly block is shortened by 10%, the start date of the trailing process is advanced by the same period. The erection process start date of the parent block was also modified to correspond to the end of the last process of the sub-block, Fig.5.

Fig.5: Reduction of working time

As a result of the simulation, we compared the number of blocks placed in the workshops and stockyards, Fig.6. In the workshops, which are work spaces, the number of products decreased, while there was no significant change in the stockyards, which function as buffers. This means that although the number of products put into the workshops decreased due to the shortening of the work time, the load on the stockyards did not change much because there was no substantial change in the buffer period. Therefore, in order to substantially improve the production volume, it is necessary to also shorten the input batches and to modify the production plan associated with them.

Fig.6: Number of blocks in workshop and stockyard (use case #1)

In addition, changes in the shipyard production environment also affect logistics flow in shipyards,
Jeong et al. (2018). Logistics flow in this result has changed as Fig.7. The number of transfers of
products using transporters between workshops and stockyards was compared with the traveling
distance, but there was no significant change in the number of transfers. However, when the traveling
distance during each transporter simulation period was compared, it was confirmed that the travel
distance was shortened. This means that a single move travelled a shorter distance, and it can be
deduced that a block is placed close to the workshop.

In this simulation model, a new factory that can process products with specific conditions is added virtually. Because this factory does not yet exist, we added new logic so that products satisfying the condition can be processed there. The simulation results can be used to calculate the work load carried at the factory during the simulation period, Fig.8. This can be used as a basis for defining the maximum work quantity for a new workshop according to the production plan.

Fig.7: Transporter usage analysis

Fig.8: Work load analysis of the new factory

3.2. Comparison of workload and waiting time due to change of input order (use case #2)

The shipyard includes large and small workshops that perform various types of processes. Each workshop performs a defined process, typically characterized by its product, Fig.9. Among the various unit processes, the panel block assembly workshop has a higher level of automation, because its products are standardized compared to those of other workshops in the shipyard. Therefore, it is possible to apply a certain level of flow-type production, and the standard operation time per process is known with comparatively high accuracy. In the second case, the order in which products are put into the panel block assembly workshop is assumed to change, and the quantitative performance is compared and analysed.

Fig.9: Panel block assembly workshop

The panel block work process consists of steel plate welding, stiffener welding, sub-assembly, assembly, inspection, main assembly and so on. The detailed process and time differ depending on the characteristics of the product, and this information can be calculated using standard information. As mentioned above, unlike other processes, the panel block assembly process has a high degree of automation because the product is standardized to a certain level, and thus the time required for each unit process can be calculated relatively accurately.

Fig.10: Scheduling template of field scheduler using Microsoft Office Excel

The time required to make a panel block depends on the length of the steel plate, the number of stiffeners used, the welding length, and the assembly method. Therefore, the field scheduler must consider these factors in advance when determining the input order, so that no delay occurs. Currently, when determining the input order, the standard data collected on site and the experience of the field scheduler are used to prevent delays between processes. However, it is difficult to respond quickly and accurately when an urgent order occurs or when the input order is changed, Fig.10. To address this, we define a process-centric simulation model for the panel block assembly process and analyse the results, Fig.11.

Fig.11: Simulation model (use case #2)

The simulation model is based on the processes performed at the panel block assembly workshop. The welding time was calculated using standard information, the standard work time of each product, and the stiffener length. When the simulation runs, the steel plates are put into the process in the determined order, and the process follows the flow-type production method, so that the processing time and waiting time of each product can be analysed in detail for each process.

Through this, the work time and waiting time for each date and process are elaborated in detail. If too much waiting time occurs, the cause of the delay can be deduced from the simulation result log, Fig.12.

Fig.12: Simulation results (use case #2)
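The flow-line logic behind such an analysis can be illustrated with the following back-of-the-envelope sketch; station names and work times are invented, and inter-station buffers are assumed unlimited, so this is a simplification of the model above rather than a reproduction of it.

stations = ["plate welding", "stiffener welding", "sub-assembly", "inspection"]

def line_schedule(blocks):
    """blocks: list of per-station work times, given in input order."""
    free_at = [0.0] * len(stations)          # time each station becomes free
    waits = []
    for times in blocks:
        t, wait = 0.0, 0.0
        for s, w in enumerate(times):
            start = max(t, free_at[s])       # wait if the station is still busy
            wait += start - t
            t = start + w
            free_at[s] = t
        waits.append(wait)
    return waits

order_a = [[4, 6, 3, 1], [2, 3, 2, 1], [5, 8, 4, 1]]
order_b = list(reversed(order_a))            # alternative input order
print(line_schedule(order_a), line_schedule(order_b))

Comparing the two orders shows directly how a different input sequence shifts the waiting times between stations.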

4. Conclusions

In this study, the method of using the simulation system to quantitatively evaluate shipbuilding strategy plans was explained in detail. A shipyard defines its production environment components around the process; its facilities are large and occupy a wide area; the relationships between the production environment components are complex; and there are various constraints. Therefore, a systematic and quantitative analysis method is needed to implement a strategic plan. In this study, we applied the proposed shipbuilding simulation framework and process-centric simulation model to actual shipbuilding cases. The method provided quantitative results and a basis of analysis that can support decision making when establishing strategic plans. It is expected that this method will be useful for future changes in the shipyard's production environment or for establishing strategies for long-term development.

Acknowledgements

This research was jointly supported by the National IT Industry Promotion Agency (NIPA) grant
funded by the Korea government (MSIT) (S1101-16-1020, The development of manufacturing
strategy and execution simulation for shipyard manufacturing cost) and the Korea Evaluation Institute
of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (10050495,
Development of the simulation-based production management system for the middle-sized
shipbuilding companies).

References

JEONG, Y.K.; LEE, P.; WOO, J.H. (2018), Shipyard Block Logistics Simulation Using Process-
centric Discrete Event Simulation Method, J. Ship Production and Design 34/2, pp.168-179

KIM, Y.; WOO, J.H.; JEONG, Y.K.; SHIN, J.G. (2018), Computational Shipyard Dynamics, J. Ship
Production and Design 34/4, pp.355-367

WOO, J.H.; KIM, Y.; JEONG, Y.K.; SHIN, J.G. (2016), A research on simulation framework for the
advancement of supplying management competency, J. Ship Production and Design 32/4, pp.1-20

Designing Ship Digital Services
Stein Ove Erikstad, NTNU, Trondheim/Norway, stein.ove.erikstad@ntnu.no

Abstract

In this paper we present a system design driven approach to developing digital ship services. We follow
a Needs-Function-Form path towards balancing service value against installation and running cost.
We outline a model for quantifying service value and cost as a function of both data input characteris-
tics as well as computational analysis capabilities. The presented framework is not tested in a real
case, but we provide an illustrative example for an onboard service for monitoring motions and slam-
ming vibration on passenger comfort. We define alternative levels of quality and value of this service
by using what we have called a “Digital Service Maturity Index”. This index will be influenced by both
the ship sensor configuration, the availability of external operational context data streams, as well as
the characteristics of the digital twin model providing operational insight.

1. Introduction

In recent years we have seen a rapid development of sensor and communication technologies - also
known as the Internet of Things (IoT) - both in the maritime sectors and in the society as such. In the
wake of this development we have seen the installation of extensive sensor systems onboard both new
and existing vessels, as well as the development of Digital Twin technologies that provide close-to-real time, high fidelity digital models of ships.

In parallel, the maritime industry has identified digital services as a key enabler for improved
operations, both in terms of efficiency, safety and environmental impact. Digital services are also a
prerequisite for remotely controlled and autonomous navigation concepts.

These two developments should be intrinsically connected, belonging respectively to the need space
and the solution space of digital service development. Still, we see a misalignment between them that
needs to be understood on a fundamental level. Sensor systems have been developed without a proper
understanding of how the corresponding data streams can be used for supporting real-world design and
operational decision-making among shipping stakeholders. Ship owners and operators, as well as
equipment suppliers, have gathered large quantities of data over recent years, without the proper idea,
competence or capacity to use them for decision support. The consequences are wrong sensors installed
in non-optimal locations, and with arbitrary fidelity, accuracy and frequency. At the same time, suites
of digital services are developed without benefiting from the opportunities for improved insight and
better decision-making that can be achieved by exploiting both sensor data and other external data
streams.

Fig.1: The development and implementation of many existing sensor-driven services have been oppor-
tunity-driven rather than needs driven, with varying utilization and value-to-cost ratios

To alleviate this situation, we need to take a design-driven approach within a holistic “sensors-to-ser-
vice” (S2S) perspective. The basis is industry recognizable functional breakdown structures of core
shipping operations, both operational (onboard/remote), tactical (planning and scheduling) and strate-
gic (design, fleet renewal). For each of these functions, a model relating the quality of input data to the
value of improved decision making and improved vessel and fleet performance needs to be established.
For example, what is the value of real-time sensor observations of motion behaviour on crew and pas-
senger comfort and stress levels? How can these observations be turned into actionable insight that
support better decisions? Or, what is the value of reduced fuel consumption and emission levels from
real time sensor observations of combined speed-over-ground measurements (AIS), shaft torque, and
metocean data? How will this value be influenced by sensor quality parameters, such as frequency and accuracy?

Digital Twin technologies have recently received considerable attention, also within the maritime in-
dustries. A digital twin is a virtual model that captures the state and behaviour of a real asset, such as a
ship, a semisubmersible, an offshore wind turbine or a fish farm, in close to real time, based on sensor
input, Erikstad (2017). The virtual model resolution can be based on physics, e.g. structural analysis
and fluid mechanics, or artificial intelligence and machine learning, or a combination of these. A digital
twin can be considered an extension of engineering simulation and analysis models, but where the state
rendering and corresponding derived performance are based on real-time sensor observations rather
than anticipated load cases.

In the maritime industry, digital twin implementations typically complement a range of other digital asset technologies, including advanced analysis models, product lifecycle models, CFD and FEA, and increasingly big data and machine learning. In our work, we consider these different technologies not as alternatives, but rather as complementary technologies that can be combined to achieve important insights into marine systems design and operations.

More precisely, a Digital Twin (DT) can be defined as “a digital model capable of rendering state and
behaviour of a unique real asset in (close to) real time”. A DT can be defined by five core characteristics.
It has identity, by connecting to a single, real and unique physical asset, such as a ship, a semisubmers-
ible, a riser, or a wind turbine. When we observe a state on the DT, it corresponds one-to-one with a
potential observation on a particular physical asset. We need a representation of the DT, which implies
the capturing of essential physical manifestation of the real asset in a digital format, such as CAD or
engineering models with corresponding metadata. The DT captures the state of the corresponding real
asset in (close to) real time, which demarcates a DT from a traditional CAD/CAE model. It will typi-
cally comprise behaviour, using a computational model that reflects basic responses to external stimuli
(forces, temperatures, chemical processes, etc.). Lastly, it captures the prevailing operating context –
primarily physical aspects such as wind, waves, temperature, etc., but potentially also financial, regu-
latory and operational aspects.

Further, the digital twin is not an end product in itself. Rather, we should regard it as an "opportunity maker" by acting as a live, rich data source, both beyond what can be offered by point-based sensor
configurations and beyond what is directly observable. The opportunities to exploit this are many and
diverse.

2. An engineering design perspective on digital services development

Taking the position that digital services should be derived from needs, their development should follow
an engineering design process containing the same steps as you would find in a ship design process.
There are several alternative models for this process. In this paper we will use the system-based design
process as a template, Erikstad and Levander (2012). The step-by-step process of the SBD process can
be summarized as follows:

• Customer requirements - Mission statement

− Task, capacity, performance demands, range and endurance
− Rules, regulations and preferences
− Operating conditions, like wind, waves, currents, ice
• Functional requirements - Initial sizing of the ship
− Based on capacity, where the areas and volumes needed for cargo spaces and task re-
lated equipment defines the size of the vessel
− Based on weight, where the cargo weight and the weight of task related equipment
and of the ship itself defines the size of the vessel
• Form - Parametric exploration
− Variation of main dimensions, hull form and lay out of spaces on board to satisfy
the demands for both capacity and weight
• Engineering synthesis
− Calculating and optimising ship performance, speed, endurance and safety
• Evaluation of the design
− Calculating building cost and operation economics

Fig.2: The System Based Design process, Erikstad and Levander (2012)

The System Based Design process is rooted in a fundamental understanding of design as a mapping
between different representational spaces, from the needs defined by the market and key stakeholders,
via the functions required to fulfill these needs, to form elements that will provide these functions,
synthesized into the final design, Suh (1990), Pahl and Beitz (1984), Coyne et al. (1990).

Fig.3: Engineering design as a mapping from needs via functions to form

2.1. Defining the needs – deriving a service value proposition

The first stage of the engineering design process starts with capturing the needs of the involved stake-
holders and beneficiaries, Pahl and Beitz (1984), and ends up with a value proposition from which the
function and corresponding form elements can be derived.

For a digital service design, the following need elements will be the most important:

• The overall goal of the service, i.e. what is the high-level situation the service will improve
• The primary users of the service, as well as other involved stakeholders
• The scope of the service, both from a temporal perspective as well as decision-making level
• The quality of the service, which is primarily a cost-value trade-off

As an illustrative example we will use the development of a new digital service for improving passenger comfort onboard cruise vessels. This service has a large design space. At the lower extreme, it is bounded by providing no monitoring or decision support beyond the ship navigator's sensory feedback on the bridge as a basis for making operational decisions. At the high end, we can imagine a service where the navigator has real-time insight into the current comfort level of every individual passenger, such as motions, vibrations, etc., so that she will know the consequences of alternative navigation actions, such as speed and heading, within an overall vessel routing that has balanced the scheduling requirements, fuel cost and passenger comfort by simulating the vessel's voyage within the weather and metocean forecast.

The example used is primarily related to the habitability of a vessel, Rumawas (2016), as one of several human factors aspects to consider in ship design and operation. This is to some extent regulated by government statutory requirements, as well as classification society rules. This includes specific class notations, such as DNV GL's comfort class, or guidelines such as ABS' "Guidance Notes on Noise and Vibration Control for Inhabited Spaces". None of these places any specific requirements on the real-time monitoring of habitability performance aspects. For naval vessels and offshore service vessels, habitability has been found to have a significant impact on sleep, fatigue, motion sickness and task performance. For cruise ships and other passenger vessels, passenger comfort is a major performance criterion; thus, all factors, including motions, vibrations, sounds and light, are taken into consideration. In the design phase of these vessels a number of models and tools are available for supporting decisions, while for mission planning and operation there is limited support. At the same time, we know that operational decisions do have an impact on the factors influencing passenger comfort, and that these will vary depending on the situation and external operating conditions. For instance, the impact of ship motions on seasickness is much higher for passengers in a vertical position than in a horizontal one, Gahlinger (2000); thus, passenger activity and position should be taken into account in a decision support service for finding the optimal combination of comfort level, speed and heading.

2.1.1. Decision-making levels

We will need to set the scope of the service in terms of its decision-making level. There are three main
levels that are relevant:

• Strategic level, relating to long term decisions. This predominantly relates to design decisions
as well as major retrofits. For the illustrative pax comfort example, this would include service
elements such as predicting the relation between ship main characteristics and motion behav-
iour, or simulating relevant operational states and missions with respect to aggregated comfort
level.
• Tactical level, relating to medium term planning decisions. This would typically relate to voy-
age or mission planning. For example, ship routing using medium term weather forecasts would
fall into this category.
• Operational level, for decisions pertaining to the immediate operational theatre. An example would
be the choice of vessel speed and heading combinations that are feasible within the given tac-
tical and strategic boundaries, and that would minimize some measure of aggregated, close-to-
real-time pax comfort level.

These decision-making levels are intrinsically connected both ways. Tactical level decisions will be constrained by ship capabilities determined at the strategic level, while at the same time providing the relevant missions to underpin design-time vessel performance analysis. The available operational level navigation decisions are constrained by the routing and scheduling decisions taken at the tactical level, while providing the operational states from which the mission is planned.

Fig.4: The different decision-making levels and their interaction

2.1.2. Temporal perspective on service scope

Decisions influence value. Decisions are made in the present and will affect the future. Still, by understanding the past, our ability to make good decisions may be improved, through insight, experience and knowledge. Thus, we need to understand the interaction between the past, the present and the future in service development.

In a simplified model we have three main temporal perspectives on decisions support:

• Hindsight, looking at past time series or events, and by aggregation and analysis deriving mod-
els for operating conditions and corresponding system behaviour and performance
• Insight, observing and analysing the current state to provide decision support on immediate
actions
• Foresight, predicting how the relevant aspects of future will evolve to understand the implica-
tions of current decisions, typically by scenario modelling and simulations

Again, these temporal aspects are not silos, but part of a continuum, and extensively intertwined. Hindsight timeseries have limited decision-making value in themselves, but are the basis for understanding the consequences of given circumstances in the present, as well as for scenarios and statistical distributions pertaining to the future to be used in simulations.

Fig.5: The different temporal aspects of service scope and their interaction

2.1.3. Service scope map by combining the temporal and decision level perspective

If we combine these two perspectives, we obtain a map that can be used to delimit the scope of a digi-
tal service.

The scoping map of Fig.6 combines the two perspectives, with illustrative examples in each cell:

• Strategic – Hindsight: historical timeseries analytics, deriving models linking ship parameters and vessel motion behaviour; Insight: engineering models relating design characteristics and vibration and noise levels; Foresight: ship operations simulations to predict, evaluate and validate design performance.
• Tactical – Hindsight: AIS, metocean and IMU timeseries to derive models for pax comfort performance in seaway; Insight: vessel daily rerouting based on weather forecasts; Foresight: route planning service to minimize aggregated motion-induced comfort factors.
• Operational – Hindsight: voyage IMU timeseries to assess comfort-navigation performance; Insight: current motion behaviour to support ship navigation decisions; Foresight: short-horizon simulation of speed and heading impacts on pax comfort indexes.

Fig.6: Combining decision levels and temporal perspectives into a scoping map to identify and delimit the scope of digital services – with illustrative examples

2.1.4. Scaling service quality

Using the scoping map in Fig.6, we may for example identify the need for a service providing operational insight to support navigation decisions that will minimize the negative impact on passenger comfort. Still, within these boundaries there is a large range of services that may be provided. This is basically related to what we call the service quality level. For the example used here, related to passenger comfort, the lower end of the range may rely on the ship motion behaviour as perceived by the navigator, and subsequent actions to find the speed/heading combination that is assumed to minimize negative pax impact (which of course is not really a digital service); slightly more advanced, it may provide a simple IMU sensor reading to quantitatively see the effect on accelerations of adjusting operations. At the other end of the quality scale, we could perceive a service with real-time insight into the onboard position of every passenger, as well as quantified high-precision measures of the 6-dof accelerations, velocities, vibrations and noise levels at each of these positions. This will enable a quantified performance measure of both the individual and aggregated pax comfort level in close-to-real time, to form the basis of a navigational decision support service.
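As an indication of the kind of quantified measure such a service could compute at the upper end of this scale, the sketch below evaluates the motion sickness dose value (MSDV) of ISO 2631-1 from a vertical acceleration time series; the synthetic signal is purely illustrative, and the standard's frequency weighting of the signal is omitted for brevity.

import numpy as np

def msdv(a_w, dt):
    """MSDV = sqrt(integral of a_w(t)^2 dt); a_w in m/s^2, dt in s."""
    return np.sqrt(np.sum(a_w ** 2) * dt)

dt = 0.1
t = np.arange(0, 1800, dt)                    # 30 min exposure
a_w = 0.5 * np.sin(2 * np.pi * 0.15 * t)      # synthetic heave acceleration
dose = msdv(a_w, dt)
msi_percent = dose / 3.0                      # ISO 2631-1 rule of thumb (Km = 1/3)
print(f"MSDV = {dose:.1f} m/s^1.5, est. seasick share = {msi_percent:.1f}%")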

In between these two extremes we will have a number of possible intermediate solutions, forming a
service quality scale, providing value from close to zero in the case of ignorance, to what in information
theory is called the “value of clairvoyance” (VoC). Where exactly we would position ourselves on this
scale is basically a cost-benefit trade-off, in which we would need to establish both a relevant cost
model and a “value model”, using the set of quality criteria as input parameters.

For the value model, we propose the introduction of what may be termed a Digital Services Maturity
Model, capturing the level of insight the service is able to provide. The term is intentionally a rewriting
of the well-known Capability Maturity Model (CMM), Humphrey (1989), capturing the level of for-
mality and optimization of software development processes, on a step-wise scale from ad-hoc practices,
via formally defined steps, managed result metrics, to actively optimized processes. The CMM model
comprise the following levels:

1. Initial: The software process is characterised as ad hoc, and occasionally even chaotic. Few
processes are defined, and success depends on individual effort and heroics.
2. Repeatable: Basic project management processes are established to track cost, schedule, and
functionality. The necessary process discipline is in place to repeat earlier successes on projects
with similar applications.
3. Defined: The software process for both management and engineering activities is documented,
standardised, and integrated into all processes for the organisation. All projects use an approved
version of the organisation’s standard software process for developing and maintaining
software.
4. Managed: Detailed measures of the software process and product quality are collected. Both
the software process and products are quantitatively understood and controlled.
5. Optimising: Continuous process improvement is enabled by quantitative feedback from the
process and from piloting innovative ideas and technologies.

Following this pattern, we define five levels that in a similar way constitute a digital service maturity index:

1. Observe: Simple registration and presentation of sensor observations
2. Measure: Combined observations and analytics that establish a comprehensive state of the vessel
3. Model: Linking observations of vessel state and observations of external operating conditions
providing insight and understanding of behaviour and performance
4. Predict: Building on the insight from the model to predict the consequences of alternative con-
trol actions
5. Decide: Complete insight into relevant vessel state, behaviour and performance, and able to
predict with a high degree of precision and certainty the outcome and value of every relevant
action as a response to this insight.

Fig.7: The digital service maturity index capturing service level quality and capability

Climbing these maturity levels implies higher value, but also increased cost. Thus, in order to determine the correct level, we need to gain insight into the cost, which requires us to map the service needs via functions to form elements, such as sensors, communication, storage, analytics, etc., similar to what Andrews (2011) calls requirements elucidation.
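A minimal sketch of how the maturity index could be encoded when exploring this cost-value trade-off is given below; the levels follow Fig.7, while the value and cost figures are placeholders that would have to be calibrated for each service.

from enum import IntEnum

class Maturity(IntEnum):
    OBSERVE = 1   # registration/presentation of sensor observations
    MEASURE = 2   # observations + analytics -> comprehensive vessel state
    MODEL = 3     # state + operating context -> behaviour/performance insight
    PREDICT = 4   # consequences of alternative control actions
    DECIDE = 5    # full insight; outcome/value of every relevant action

# Placeholder value/cost curves (e.g. kUSD/year); pick the level with best net value
value = {m: v for m, v in zip(Maturity, (10, 40, 120, 250, 400))}
cost = {m: c for m, c in zip(Maturity, (5, 30, 90, 260, 520))}
best = max(Maturity, key=lambda m: value[m] - cost[m])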

2.1.5. The value proposition for the illustrative case

To conclude the needs definition phase, the service to be designed should be summarized in a value
proposition capturing the scope in terms of stakeholders, functions, temporal aspects and decision-
making level. This value proposition will thus serve as a starting point for identifying technical func-
tions and corresponding requirements.

For the pax comfort service used as an example in this paper, such a value proposition can be exem-
plified as follows: “To provide a high-fidelity service to improve total passenger comfort level in sea-
way, by providing onboard real-time insight into individual passenger comfort performance levels, as
well as foresight support for operational navigation decisions”.

2.2. From needs to functional requirements

From the needs, the functional requirements can be derived. This is similar to the functional structure derived in the ship design process, and is illustrated in Fig.8, splitting the functionality between data acquisition, analytics, and visualisation and user interaction.

Fig.8: Functional structure for (offshore) ship (left), vs. a functional structure for a digital ship service
(right)

The data acquisition functions can again be divided into sensor observations (e.g. IMUs, vibration sensors, sound sensors, etc.), external data acquisition (e.g. weather and metocean data), data processing and storage, and (ship-to-shore) communication. All these functions are critical for the performance and quality of the service provision, while at the same time they will have a significant impact on the total cost. In defining the functional requirements, it is important to use solution-neutral descriptions. Assume for instance that the service to be developed would monitor the level of seasickness of individual passengers in close to real time. One function would thus be the observation of the estimated position of every passenger, with corresponding requirements in terms of accuracy, frequency and latency. The functional requirement definition should not imply how this is to be realized, for instance by video image analysis, sensors placed on the passengers, crew reporting, etc. This helps to keep the design space open in terms of how the service will be realized, allowing for a more informed cost-benefit analysis in the embodiment design phase, Pahl and Beitz (1984).

2.3. From function to form

Just as we for a ship seek the optimal form elements to fulfil a certain function, e.g. the machinery to provide power or the propeller to provide thrust, we need to find the proper form elements for each of the primary functions identified in the previous steps. An aid in this process would be design catalogues such as those proposed in Pahl et al. (1984), as in Fig.9. A corresponding catalogue for service design would map between the functions identified in Fig.8 (right) and form elements such as sensors, external data services, communication and storage solutions, analytic transformations, and UX resources.

Fig.9: Design catalogues of form elements that support required functions, from Pahl and Beitz (1984)

It is the embodiment of the functions into form elements, further synthesised into a complete service
solution, that drives the cost of the solution. Since many digital services are implemented on cloud
platforms, many of the cost elements can be derived from the "Total Cost of Ownership" calculators
offered by the cloud solutions, taking into consideration computational requirements, storage,
communication bandwidth, etc.

As discussed in Section 2.1.3, the service value will depend on the service quality level, as measured
by the maturity index. On the opposite end of the lever is the associated embodiment complexity that
drives cost, both in the implementation (onboarding) phase and in operation. These complexity-related
cost drivers are typically associated with parameters that can be linked to the five major complexity
aspects: structural, behavioural, contextual, temporal and perceptional, Gaspar et al. (2012).

Fig.10: Cost driving factors of service, linked to fundamental complexity aspects

Further, these cost elements can be linked to different cost driving aspects, such as asset type, the phe-
nomena to be captured, the monitoring scope, the event types to be raised, and the hindsight capabilities.
An example for a wind turbine service installation is shown in Table I.

Table I: Examples of cost driving aspects for a wind turbine structural monitoring solution

Asset class and type
Description: Complexity, and associated cost, varies across asset classes (bridges, wind turbines, cranes, ...), and across different types within each class (suspension bridge, cantilever bridge, etc.).
Example: Asset class: wind turbine. Types: land based, offshore monopile, offshore tripod, floating, ...
Relevant service characteristics: Structural: number of parts, form, material, foundation. Behavioural: degree of (non-)linearity, number of modes, ... Contextual: load characteristics (e.g. only wind loads onshore, also wave loads offshore).
Influence on cost: Model size; number of iterations on each time step; number of sensor streams.

Phenomena
Description: Related to the type of insight the service will provide.
Example: Simple loads (e.g. bending moments); local stresses and strains; aggregated structural fatigue; soil fatigue & depletion.
Relevant service characteristics: Analysis capabilities required by the service; sensor configuration and quality; temporal aspects, hindsight capabilities.
Influence on cost: Post-processing computations; number of input and output streams.

Monitoring scope
Description: The number, placement and capability of monitoring locations provided by the service, which reflects how detailed the asset state and behavior is to be replicated.
Example: For a wind turbine, this may range from a single sensor for tower top movement to multiple sensors placed at hotspots.
Relevant service characteristics: A monitoring location will either require a sensor, or be derived by analysis, increasing the required computational capability.
Influence on cost: A relatively limited influence on both overall model size and computational resources; increased post-processing resources.

Events
Description: The types of events (alerts) that are raised, their triggering criteria, and the corresponding documentation payload for each event.
Example: A high stress event may be triggered when passing a threshold value, triggering the generation of a stress recovery time series.
Relevant service characteristics: The event rules configuration, including event types, triggering rules, and event documentation. The event payload is (currently) generated by a separate process for stress recovery.
Influence on cost: Potentially significant influence on cost, both computation (separate process) and storage (large data footprint). Payload cost dependent on grid size, frequency and time span covered.

Hindsight capabilities
Description: The capabilities of the service to view and analyze past performance data, and with which latency and level of detail.
Example: On a wind turbine, every monitoring location produces a multi-channel output stream that we may want to view in a plot or re-analyse later. By assumption, all sensor input data is stored.
Relevant service characteristics: The persistence policy for the output streams of virtual sensors and (post-processed) aggregations; also the capability to re-generate output by re-computing past time intervals.
Influence on cost: Primarily related to storage cost; influenced by performance data fidelity (frequency), as well as latency (e.g. DB vs. file). Storage capability may be replaced by computational resources for performance data regeneration.

3. Discussion and conclusion

The starting point for understanding the value of digital services comes from information economics,
observing that the economic value of information, either as "raw" sensor observations or as the services
built on these observations, is derived from the corresponding improved decision-making by in-
dividuals, making choices that yield better payoffs than decisions made without this information avail-
able.

Thus, a rational approach to digital ship services based on ship sensor installation is to start with a zero-
sensor consideration. The first sensor to be installed would be the one yielding the highest value-to-
cost ratio, and consecutive installations would continue until the point where the marginal value
contribution of one additional sensor equals the marginal (total lifecycle) cost.

As was illustrated in Fig.1, there are two opposing perspectives that can be taken in the development
of digital services:

• A service-driven perspective, starting from the end-user needs for decision support in opera-
tion, and tracing this all the way back to specific sensor-based observations of vessel behavior
• A sensor-driven perspective, starting from the sensors installed on the vessel, analyzing how
these can provide valuable input to both new and existing digital services.

The position taken in this paper is that the first of these two is preferable, and should follow the same
basic process as for engineering design processes, starting with the needs of the beneficiaries of the
service, mapping this into the functional domain with corresponding requirements, and finally to the
form domain, synthesizing a complete solution from the required form elements.

In this paper we have only outlined a hypothesis on how this process should take place – further research
into this topic is definitely needed and should be supported by a number of case studies to test and
validate the process. To be able to understand the relation between the value provided by a digital
service, and the cost of realizing it, we need to develop a research-based foundation for quantitative
measures of ship-in-service performance. The scope should cover operational, tactical and strategic
perspectives, and correspondingly, a model for the business value of digital services in terms of how
the quality of decision making will impact revenue, cost as well as other non-monetary performances.

At NTNU we intend to continue our research related to digital service development. This will include
tasks such as:

• A functional breakdown structure of ship operations, and their associated performance measures
• The identification of endogenous and exogenous variables that have an impact on these perfor-
mance measures
• The identification of specific input data that will influence the quality of operational decisions,
and further how this relates to data quality parameters such as frequency, accuracy, fidelity,
etc.
• Requirements, from an operational services point-of-view, for data and analytic capabilities to
be provided by an integration platform (e.g. DNV GL's Veracity).
• Design patterns for digital twin implementations, including sensor configurations, signal pro-
cessing, data and analytics services, and end user applications
• Human-machine interfaces for both digital twin integration platforms and end-user digital ser-
vices

We believe there is still a long way to go before this field is mature. One part is technology development,
which is likely to continue at the same fast pace as we have seen during recent years. But just as
important is a better understanding of how we are able to turn observations and data, both from sensors
and other digital resources, into actionable insight that actually generates a value higher than the
lifecycle cost of producing it. Systematically designing from needs seems like a promising starting
point.

“We have for the first time an economy based on a key resource (information) that is not only renew-
able, but self-generating. Running out of it is not a problem, but drowning in it is” (John Naisbitt)

References

ANDREWS, D.J. (2011), Marine requirements elucidation and the nature of preliminary ship design,
Trans. R. Inst. Nav. Archit. (RINA) 153 (Part A1), Int. J. Maritime Eng. (IJME)

COYNE, R.D.; ROSENMAN, M.N.; RADFORD, A.D.; BALACHANDRAN, M.; GERO, J.S. (1990),
Knowledge-Based Design Systems, Addison-Wesley

ERIKSTAD, S.O. (2018), Design patterns for digital twin solutions in marine systems design and
operations, COMPIT Conf., Pavone

ERIKSTAD, S.O. (2017), Merging Physics, Big Data Analytics and Simulation for the Next-Genera-
tion Digital Twins, HIPER Conf., Zevenwacht

GAHLINGER, P.M. (2000), Cabin Location and the Likelihood of Motion Sickness in Cruise Ship
Passengers, J. Travel Medicine 7, pp.120-124

GASPAR, H.; RHODES, D.; ROSS, A.; ERIKSTAD, S.O. (2012), Addressing Complexity Aspects in
Conceptual Ship Design - A Systems Engineering Approach, J. Ship Production and Design 28, pp.1-
15

HUMPHREY, W. (1989), Managing the Software Process, Addison-Wesley

PAHL, G.; BEITZ, W. (1984), Engineering Design, Springer

RUMAWAS, V. (2016), Human Factors in Ship Design and Operations: Experiential Learning, PhD
Thesis, NTNU, Trondheim

SUH, N.P. (1990), The Principles of Design, Oxford University Press

A Method for Generation and Analysis of Feasible General Arrangement
and Distributed System Configurations in Early Stage Ship Design
Joseph W. Donohue, University of Michigan, donohuej@umich.edu
Conner J. Goodrum, University of Michigan, cgoodrum@umich.edu
Michael J. Sypniewski, University of Michigan, mjsyp@umich.edu
David J. Singer, University of Michigan, djsinger@umich.edu
Colin P.F. Shields, CNA Corporation, Shieldsc@cna.org

Abstract

The development of general arrangements (GA) and distributed system configurations (SC) in early
stage design is critical for understanding architectural implications and the inherent interdependencies
between GAs and their supporting SCs. It is difficult, however, to develop general arrangements and
conduct meaningful analysis on them with the limited information available at early design stages. In
previous work completed at the University of Michigan, a method was established to probabilistically
map ship components to the physical vessel architecture using network mechanics and Bayes Theorem
given limited information about the way ship components are logically connected. This paper expands
the method by developing the capability to account for physical characteristics of components, enabling
designers to realistically assess the impact of their decisions on the design space.

1. Introduction

Naval ship design has been described as a “wicked problem” – a class of problems that cannot be fully
understood until an attempt at a solution has been made, Andrews (1998). This is particularly evident
when designing a vessel’s general arrangements (GA) and distributed system configuration (SC). In
early stage design, it is difficult to find a SC and GA that are both feasible and functional since design
requirements are vague and information about the vessel’s geometry is limited. Despite these difficul-
ties, developing GAs and SCs is crucial to refining design requirements and understanding design chal-
lenges, Andrews (2012).

In short, designers need to know where components (e.g. propulsion, pumps, weapons, etc.) should be
allocated within the ship early to decrease the risk of design failure as the design progresses. They need
to know how to design the physical architecture of the ship to accommodate the distributed system, and
conversely how to design the distributed system to accommodate the physical architecture. A poor de-
cision in early stage design can lead to infeasible or unnecessarily constrained solutions. In turn, it can
result in later decisions unnecessarily revolving around a single system component, undesirable and
overly complex designs, or costly redesign effort. Thus, methods to quickly develop and analyze GAs
and SCs in early stage design are extremely valuable to the larger design effort.

Previous research, Shields et al. (2018), developed a method to bridge the gap between existing auto-
mated GA tools – which required a high level of design information – and the needs of designers in early
stage design. This approach applies a stochastic process that describes a routing as a series of random
steps, known in network mechanics as random walks, to develop an ensemble of possible distributed
system routings between system components. The method couples the information from ensembles of
SCs with a network representation of the vessel’s physical architecture to elucidate the logical relation-
ships that exist between components, system connections, and the physical architecture. Ultimately, the
method provides the ability to determine the likelihood that a given component or routing will reside
within a given space in the ship based on its functional dependencies to other components. The method
provides a novel approach to developing and analyzing GAs and SCs in early stage design.

The method is limited, however, in that it does not account for the physical characteristics of compo-
nents (such as component volumes) that impact the set of possible solutions. In this paper the network-
based approach is expanded to account for a component’s physical characteristics such as volume and

pipe sizing by layering the new design information over the results from the original method. This added
capability creates a more realistic and practical design tool suitable for early stage design. As with the
original method, the presented network-based ship model can be updated with new information as de-
cisions are made. This enables designers to understand at the early stages of design what information
limits or determines design lock-in, and how decisions impact the design solution space.

2. The design space and network model of a ship

The presented method represents the design space as described in Brefort et al. (2018), by classifying
design solutions in terms of two independent categories: the physical and logical architectures. In this
representation, physical architectures (PA) consist of spatial and geometric relationships such as the
vessel’s hull form, displacement, and structure. Logical architectures (LA) consist of components that
are functionally connected, such as a radar connected electrically to an auxiliary generator. The inter-
section of the physical and logical solution spaces contains all physical solutions that meet the design
requirements. This relationship is represented in Fig.1.

Fig.1: A visualization of the breakdown of the design space

Consistent with Shields et al. (2018), a network representation is used to define both physical and
logical architectures, Gillespie et al. (2013), Rigterink (2014). Physical solutions are expressed as the
logical architecture overlaid on the physical architecture, Shields et al. (2016,2017). Fig.2 shows exam-
ples of the network representation of the physical architecture, logical architecture, and physical solu-
tion.

The logical architecture is known in network parlance as a directed network, with edges representing
connections such as electrical wiring or pipe routings. Vertices represent system components. As the
name suggests, the direction that edges are traversed in a directed network is critical. A "source"
component, such as a generator, provides some quantity – such as power – to a "sink" component, such as a
pump. The terms “source” and “sink” are relative, and a single component can be a source to one com-
ponent and a sink to another. Edges in directed networks are conventionally drawn with arrows pointing
from source to sink. Additionally, edges are assigned attributes that are associated with a system such
as an electrical distribution system or a chill water system. It is possible to have a component belong to
multiple ship systems (such as a chill water pump that requires electricity), in which case it will have
edges corresponding to both systems.

The physical architecture is represented as an undirected network, with vertices denoting physical ship
spaces (including p-ways) and edges denoting adjacencies. The physical solution contains vertices and
routings from the logical architecture mapped to locations in the physical architecture. The physical
solution shows the connectivity of components in the logical architecture within the physical architec-
ture and is an abstract representation of the ship’s general arrangements.
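
As a minimal sketch of this representation, the three networks can be held with the networkx library;
the component names, system labels and the 2x2 grid of spaces below are illustrative assumptions, not
data from the paper.

import networkx as nx

# Logical architecture: directed edges from source to sink, labelled by system
LA = nx.DiGraph()
LA.add_edge("Generator", "Pump", system="electrical")
LA.add_edge("Pump", "Chiller", system="chill water")

# Physical architecture: spaces as vertices, adjacencies as undirected edges
PA = nx.grid_2d_graph(2, 2)

# A physical solution: components of LA mapped onto locations of PA
placement = {"Generator": (0, 0), "Pump": (0, 1), "Chiller": (1, 1)}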

Fig.2: From left to right: the network representations of the logical architecture, physical architecture,
and physical solution

The following section explains how the network representation is used to develop probabilistic maps of
GAs.

3. Probabilistic Mapping

Using the network representation detailed in the previous section, the method for probabilistically
mapping logical architecture components and routings onto the physical architecture leverages a
measure from network science called current flow betweenness (also referred to as random walk
betweenness, Newman (2005)). Thus, if a designer has a defined physical architecture and logical
architecture but does not know exactly where logical architecture components should be placed in the
vessel, the probability of a component and routing being placed at each location in the ship can still be
determined. An example of a generated probabilistic map is shown in Fig.3.

Fig.3: Example of the probabilistic mapping approach. Here, the designer knows that component A will
be located at (0,0) and estimates that component C has a 25% chance of being located at (2,2)
and a 75% chance of being located at (2,1). The mapping of B and all of the routings is unknown
but can be determined using Shields et al. (2017).

Current flow betweenness can be thought of as the probability that a random walk over a network be-
tween vertices s and t will visit vertex n, Newman (2005). To illustrate, if you were in a city standing at
an intersection A and took a completely random route to get to intersection B, the current flow between-
ness of each intersection in the road network is the probability of passing through the intersection on
the way to B as shown in Fig.4. (Technically this is incorrect as M.E.J. Newman, the inventor of current
flow betweenness, originally defined current flow betweenness as the measure described here averaged
over all start and endpoints s and t, but this simplification is sufficient for our purposes and done for
clarity.)

This is used to compute the probability distribution of a routing between two locations L1 and L2 in the
physical architecture. Specifically, if components C1 and C2 were connected in the logical architecture
and located at L1 and L2 in the physical architecture, the current flow betweenness between L1 and L2
gives the probability distribution of routing locations in the physical architecture. This demonstrated in
Fig.5.

Fig.4: A conceptualization of current flow betweenness using a road network analogy. Starting at inter-
section A, a person only has two possible routes to B. Thus, the current flow betweenness is 0.5
for every vertex (or intersection) between A and B

This is further expanded to account for components with uncertain locations by combining component
location probabilities (provided by the designer) with the routing probabilities derived from current
flow betweenness using Eq.(1), Shields et al. (2018):

P_{ij}^{c_k c_l} = \sum_u \sum_v P_u^{c_k} P_v^{c_l} P_{ij}^{uv}    (1)

P_{ij}^{c_k c_l} denotes the probability that an edge (c_k, c_l) in the logical architecture includes the
physical edge (i, j) in a routing between components k and l. Similarly, P_u^{c_k} indicates the
probability that component k is located at physical location u.
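
Eq.(1) translates directly into code; in the sketch below, the dictionary layout is an illustrative
assumption, with the per-pair routing distributions P_ij^{uv} taken e.g. from a current flow betweenness
computation such as the one above, and the location distributions provided by the designer.

def edge_routing_probability(loc_prob_k, loc_prob_l, routing_prob, edge):
    """Eq.(1): marginalize the routing probability of one physical edge
    over the uncertain locations u of c_k and v of c_l.

    loc_prob_k, loc_prob_l : dict location -> probability, for c_k and c_l
    routing_prob           : dict (u, v) -> dict physical edge -> P_ij^{uv}
    """
    return sum(p_u * p_v * routing_prob[(u, v)].get(edge, 0.0)
               for u, p_u in loc_prob_k.items()
               for v, p_v in loc_prob_l.items())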

Components with completely unknown location probabilities can be mapped by adding a “super node”
to the physical architecture. This temporary node is added with edges to all possible locations. Then,
the current flow betweenness is computed between the component’s source location and the super node.
The probability of the component mapping to a location L is then simply the current flow betweenness
of L. If the source location is also unknown, another super node is added to represent the unknown
source location and the current flow betweenness is computed between the two super nodes. This is
shown in Fig.6. Probabilities for all source/sink pairs are then combined to determine the final mapping
probabilities using Bayes’ Theorem.

Fig.5: The probability distribution for a routing between components 1 and 2 when their locations are
known. This can be easily expanded to account for uncertain location mappings using Eq.(1).

Fig.6: For components with unknown mappings, a super node is added to the physical architecture.
The "current" - or probability - flowing through each edge from each possible location is equivalent
to the mapping probability of the location.

Eq.(2) shows how Bayes' Theorem can be arranged to compute the probabilities for unknown location
mappings, Shields et al. (2018):

P_u^{c_k} = \frac{\prod_{(c_l, c_m)} P_{su}^{c_l c_m}}{\sum_v \prod_{(c_l, c_m)} P_{sv}^{c_l c_m}}    (2)

P_u^{c_k} is the probability that component k maps to location u, given all edges (c_l, c_m) in the logical
architecture that contain component k. The notation is the same as that used in Eq.(1), where s denotes
the super node and u and v represent possible locations in the physical architecture. The next section details
the addition of component and physical architecture characteristics into the computation.
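
The Bayes combination of Eq.(2) can be sketched as follows, where each entry of edge_currents is
assumed to hold the super-node current distribution P_su of one logical edge (c_l, c_m) containing
component k; this data layout is illustrative, not the authors' implementation.

def location_probabilities(edge_currents):
    """Eq.(2): combine the per-edge distributions and normalize over locations."""
    locations = set().union(*edge_currents)
    unnorm = {u: 1.0 for u in locations}
    for dist in edge_currents:            # numerator: product over edges (c_l, c_m)
        for u in locations:
            unnorm[u] *= dist.get(u, 0.0)
    z = sum(unnorm.values())              # denominator: sum over locations v
    return {u: p / z for u, p in unnorm.items()} if z > 0 else unnorm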

4. Accounting for component characteristics and constraints

Characteristics like volume, weight, head loss, and required power impact the mapping probability dis-
tribution. Large components, for instance, are less likely to be located in small spaces than smaller
components because there are fewer feasible arrangements for large components.

To differentiate between types of characteristics that require fundamentally different approaches to
evaluate, they are divided into two classes called "capacity characteristics" and "loss characteristics".
Capacity characteristics, such as weight or volume, are tied to constraints imposed by the vessel’s phy-
sical architecture. The engine room of a ship, for example, has only a finite amount of space to hold
equipment. Similarly, the weight at every location in a vessel must be within a certain range to ensure
proper stability and trim.

By contrast, loss characteristics pertain to the relationship between components in the logical architec-
ture and the physical characteristics of their connections. A pump that needs to provide a certain amount
of pressure head to another component, for instance, must overcome a certain amount of head loss in
the pipe that connects the two components. The size of the pump and length and material of the routing
would directly impact the feasible arrangements just as volume limits the possible ways components
can be placed in the physical architecture.

4.1 Accounting for capacity characteristics

Capacity characteristics are accounted for after first computing the mapping distribution using the orig-
inal probabilistic mapping method as discussed in section 3. This allows the map to start from a baseline
of minimal initial design information and to update it as more information becomes available and pro-
vides initial mapping probabilities. Characteristics themselves are overlaid directly on the physical and
logical architectures as shown in Fig.7 and Fig.8.

Fig.7: Sample logical architecture for compo- Fig.8: Sample physical architecture with known
nents A, B, and C with known volumes volume information

To compute the updated probabilities, the power set – or all subsets – of the list of logical architecture
components are enumerated. These subsets represent all the ways that components could be assigned
to a location in the physical architecture without regard to feasibility. Then, for each location, the fea-
sible subset of the power set that can be accommodated by the capacity of the location is found. Next,
the initial mapping probabilities are combined to find the probability of each arrangement in the feasible
subset occurring at a location. Finally, the various arrangement probabilities are summed to arrive at
the probability that each component will map to each location. To update routing probabilities, the
Shields et al. (2018) method is repeated using the new location probabilities. A detailed description of
this process follows.

The first step of this process, enumerating the power set of the list of components, is simple in concept
but is computationally expensive. For larger ship-sized systems, this cannot be done by hand. In fact,
the size S of a power set for a set of size n is given by:

S = 2^n    (3)

Therefore, the number of ways to allocate components to a location in the physical architecture grows
exponentially with the number of components. As such, ship-sized systems could require a high-per-
formance computing cluster (or HPC) to enumerate the power set. Further, for a real application, it may
be necessary to limit the number of components considered by grouping components together, stopping
the power set enumeration algorithm when the subsets become infeasibly large, or removing compo-
nents from consideration whose location has already been determined.

Next, the power set is referenced to determine which arrangements of components are feasible at every
location. For example, if location L had 10 m3 of space, feasible arrangements are sets from the power
set that occupy 10 m3 of volume or less. Generalizing this for any capacity constraints, the following
statement gives the set of feasible arrangements for a location L:

a \in F_L \quad \text{if} \quad C_L \le \sum_{i=1}^{k} c_i \le C_H    (4)

F_L is the feasible set at location L, a is an arrangement taken from the power set, c_i the capacity
characteristic of a component in a, k the number of components in a, and C_L and C_H the lower and
upper bounds of the capacity constraint at location L.
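
The enumeration and the capacity filter of Eq.(4) can be sketched with the standard itertools power-set
recipe; the component volumes below match the later design example but are otherwise illustrative, as
is the [0, 10] m3 capacity bound.

from itertools import chain, combinations

components = {"Radar": 3, "Chiller": 4, "Pump": 3, "Main Propulsion": 8}  # m3

def power_set(items):
    """All 2**n subsets of a collection, cf. Eq.(3)."""
    items = list(items)
    return chain.from_iterable(combinations(items, r)
                               for r in range(len(items) + 1))

def feasible_set(volumes, c_low, c_high):
    """Eq.(4): arrangements whose total capacity lies within [c_low, c_high]."""
    return [a for a in power_set(volumes)
            if c_low <= sum(volumes[c] for c in a) <= c_high]

print(feasible_set(components, 0, 10))   # filters the 2**4 = 16 candidates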

After identifying the feasible set of arrangements, the probability of each arrangement occurring is cal-
culated. Using the original mapping probabilities given by Shields et al. (2018), the overall probability
of the arrangement is found via the formula for the intersection of independent events. For example, the
probability of arrangement {A, B} occurring at L for a logical architecture with components A, B, and
C is given by:

P(AB) = P(A) P(B) (1 - P(C))    (5)

P(A), P(B), and P(C) are the probabilities that components A, B, and C are located at L given by Shields
et al. (2018). In general, the probability for an arrangement is found using the equation:
P(a_i) = \prod P(c_k) \cdot \prod (1 - P(x_k))    (6)

a_i is an arrangement, c_k is a component belonging to arrangement a_i, and x_k is a component in the
logical architecture that does not belong to a_i. This calculation is repeated for every location in the
physical architecture.

Since the goal is to find the probabilities that each individual component will be located at a location L,
the arrangement probabilities must be aggregated. Because the arrangements themselves are mutually
exclusive (no two arrangements can exist at the same time), this is done by summing the arrangement
probabilities that contain each component. Mathematically, this is written as:

P(C) = \sum P(A_c)    (7)

C is a component and A_c is the set of all feasible arrangements that contain C. The routing probabilities are then
recomputed using Shields et al. (2018).
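
Eqs.(6) and (7) likewise translate directly into code; the mapping probabilities below are illustrative
placeholders for the baseline results of Shields et al. (2018) at one location L.

map_prob = {"Radar": 0.2, "Chiller": 0.5, "Pump": 0.4, "Main Propulsion": 0.3}

def arrangement_probability(arrangement, map_prob):
    """Eq.(6): intersection of independent in/out events at location L."""
    p = 1.0
    for c, p_c in map_prob.items():
        p *= p_c if c in arrangement else (1.0 - p_c)
    return p

def component_probability(component, feasible, map_prob):
    """Eq.(7): sum over the mutually exclusive feasible arrangements with C."""
    return sum(arrangement_probability(a, map_prob)
               for a in feasible if component in a)

Combined with the capacity filter sketched above, component_probability("Chiller",
feasible_set(components, 0, 10), map_prob) yields the updated mapping probability of the chiller at L.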

Because the routings themselves may have capacity characteristics that impact the final probability
distribution, they need to be included in the computation. To do this, the routings are temporarily con-
sidered as additional components.

First, the routing decision variables are related to the capacity characteristic under investigation. These
variables can be anything the designer chooses, such as pipe diameter. At this stage in design, it is
generally not known exactly how routings will be physically routed through the space. A routing could
go through the middle, along a bulkhead, etc. So, rather than computing the routing’s capacity charac-
teristic directly, it is necessary to use a measure that scales correctly with the decision variable so that
a useful comparison between diameters can be made. This representative measure can be thought of as
a pseudo characteristic (like pseudo-volume) of the routing.

Here, the issue of needing to re-enumerate the power set with the additional pipe routing component
again arises. This would require another computationally expensive program that may require the use
of HPC resources. It would be cumbersome and slow to redo this step every time a new component was
added to the system, or every time a designer wanted to experiment with a new pipe diameter. Fortu-
nately, it isn’t necessary to find the entire power set. It is only necessary to consider the feasible
arrangements for each location. Since the size of the power set scales as 2^n, adding the routing
component merely doubles the set to be enumerated: the new set is only twice the size of the feasible
subset. Enumerating it should therefore be significantly faster than enumerating the entire power set
for all components.

Fig.9: The probabilistic map for the system in the above figures before volume information is added

After this enumeration, the new set of arrangements is checked to determine the new feasible subset
with the pipe routing component added. Then, the probabilities are recalculated using the same process
as previously described.

The example depicted in Fig.7 through Fig.10 shows how the probabilistic map for a distributed system
with three components changes when volume information is considered. Unlike the original map before
the volume update, the mapping distribution for each component
does not sum to one. This is because for each component, there may be a GA that precludes the com-
ponent from fitting into the physical architecture. In other words, this probability represents the likeli-
hood that the design will be physically infeasible. This is shown by the map in Fig.10, where compo-
nents B and C have a 25% chance of not fitting in to the physical architecture.

Fig.10: The probabilistic map after volume information is added. Because components B and C cannot both
be located at (1,0) due to volume constraints, their probability of mapping there has decreased. With-
out any other changes, both B and C now have a 25% chance of not mapping to the physical archi-
tecture at all. This is useful to know, because it means the design is at risk of being infeasible.

4.2 Accounting for loss characteristics

As discussed, loss characteristics are drawn from constraints imposed by relationships between compo-
nents in the logical architecture and the physical characteristics of their connections. An example is the
pressure head supplied to a component via a pump and a pipe routing. The relative locations of the
pump and component, the length of the routing, the pipe material, and more would impact the vessel
solution space. As these variables are fixed, just as volumes were set in the capacity constraint example,
the number of ways to arrange the components and routings decreases. As more decisions are made,
the solution space may converge to just a handful of options. In some cases, just as with the imposition
of capacity constraints, there may be no practical solutions and design decisions will need to be recon-
sidered.

The loss constraint problem is handled by randomly creating a large sample of potential vessel solutions
using the probability distributions calculated in the previous sections. Each solution is then scored based
on some criterion tied to the loss characteristic under consideration. For example, if the designer wished
to limit routing length, the system score would be the path length – or the number of network edges
traversed between linked components – for every source/sink pair mapped to the physical architecture.

Then, the feasibility of each solution is checked by comparing the system score to a feasibility threshold,
which is characteristic of the loss constraint. For the routing length example, the feasibility threshold
would be the maximum allowable path length in the physical architecture. Fig.11 shows an example of
a feasible and infeasible solution.

Fig.11: The solution on the left is feasible because the routing between the components is less than or
equal to the imposed limit. The solution on the right is infeasible because it exceeds the path
length limits.

Finally, for every component, every possible location in the physical architecture is checked for validity
by looking at the generated solutions and their feasibilities. When considering component C and loca-
tion L, for instance, the set of vessel solutions where C is located at L is examined. If the vessel solutions
are only feasible below some probability threshold, then location L is eliminated from the solution space
by removing it from the network and recomputing the location probabilities using Shields et al. (2018).
To illustrate, if the set of generated solutions where component C is located at L only contains a feasible
arrangement in <1% of cases (where 1% is a chosen probability threshold) then mapping C to L is not
a valid solution. L is then taken out of the set of possible locations where C can be placed, and the
probability distribution is recomputed.
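
A Monte Carlo sketch of this screening step follows. For brevity it scores each sampled solution by the
shortest path between placed components instead of sampling random-walk routings, and it conditions
the sample on the candidate mapping; the samplers, the path-length limit and the 1% threshold are
illustrative assumptions.

import networkx as nx

def screen_location(PA, samplers, logical_edges, comp, loc,
                    max_path=6, n=10_000, threshold=0.01):
    """Return True if mapping `comp` to `loc` remains a valid option."""
    feasible = 0
    for _ in range(n):
        # draw one candidate solution from the current mapping distributions
        placement = {c: draw() for c, draw in samplers.items()}
        placement[comp] = loc                  # condition on comp being at loc
        if all(nx.shortest_path_length(PA, placement[s], placement[t]) <= max_path
               for s, t in logical_edges):
            feasible += 1
    return feasible / n >= threshold

A sampler here can be as simple as a closure drawing from a component's location distribution with
random.choices; locations that fail the check are removed from the network before the probabilities are
recomputed, as described above.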

The same process can be used to compute routing probabilities themselves. Rather than mapping com-
ponents to locations, edges are substituted in the logical architecture and edge locations are substituted
in the physical architecture. For example, rather than C mapping to L, logical edge (C, B) would map
to a routing location (L1, L2, L3) where C is located at L1 and B is located at L3.

In the next section, examples of this approach are applied in full and combined with an information
hierarchy that demonstrates how the full probabilistic mapping method might be used.

4.3 The probability distribution and analysis of design feasibility

As briefly mentioned, accounting for routing and component characteristics in the mapping probability
distribution may lead to some components or routings that have some probability of not mapping to any
location in the physical architecture. Mathematically, this probability is expressed as:

P_I = 1 - \sum_k P(L_k)    (8)
L_k is the k-th location in the physical architecture and P_I is the probability of infeasibility for a
component. For each component or routing, this probability represents the likelihood that the design will be
infeasible given the current constraints and design information. A high probability of infeasibility indi-
cates a high level of design risk, since proceeding with the design without making changes to the phys-
ical or logical architecture will likely lead to a GA and SC that is physically incapable of accommodat-
ing the component or routing in question.
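
Eq.(8) is then a one-line check on the updated distribution, assuming it is stored as a location-to-
probability dictionary (an illustrative layout):

def infeasibility(loc_probs):
    """Eq.(8): P_I = 1 - sum_k P(L_k) for one component or routing."""
    return 1.0 - sum(loc_probs.values())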

5. A design example

To illustrate how this method is used, it is applied to a simplified design problem. At the beginning of
the design process, a designer tasked with creating GAs and SCs may only have a general list of system
components and a physical architecture that has yet to be fully developed. To reflect this reality, this
example starts with a basic logical architecture and a physical architecture that contains no volume
information. The initial logical and physical architectures are shown in Fig.12 and Fig.13:

Fig.12: Initial logical architecture
Fig.13: The initial physical architecture

Fig.14: Initial probabilistic map for the system. The size of the nodes indicates the likelihood that a
component from each system maps to each location. Darker edges represent a higher likelihood
of a system routing mapping to a physical architecture edge.

Using these architectures as a baseline and providing minimal constraints on the mapping, the proba-
bilistic map is initialized using the method described in section 3. This initial map is shown in Fig.14,
and the initial location mappings are shown in Fig.15.

As the design progresses, and more information becomes known, the probabilistic map can be updated
for loss constraints. Here, path length (which is directly tied to other loss characteristics like head loss

and power loss) is used. First, a large sample of potential physical solutions is generated using the
probabilities from Fig.14. A sample solution is shown in Fig.16.

Fig.15: Initial mapping probabilities for individual components. Larger nodes represent higher mapping
probabilities.

Fig.16: A sample solution randomly generated using the initial probabilistic map. Red edges indicate
electrical routings while blue denote cooling system routings.

Next, each solution is scored to determine the path length between each source/sink pair. The scores
for each source/sink pair for the solution shown in Fig.16 are given in Table I.

Table I: System scores (path length) for the solution shown in Fig.16
Source/Sink Score
Main Propulsion/Radar 4
Main Propulsion/Chiller (electrical) 1
Main Propulsion/Pump (electrical) 8
Chiller/Pump 1
Pump/Main Propulsion (cooling) 2
Main Propulsion/Chiller (cooling) 1

Each location is then investigated to determine whether or not it is a feasible mapping for each compo-
nent. For this example, a feasible mapping is defined as a solution that contains no routings longer than
path length 6. Thus, the solution in Fig.16 is infeasible because the electrical path between the main
propulsion and the pump is greater than 6. It is intractable, however, to enumerate every possible solu-
tion to determine the feasibility of every mapping. Therefore, as discussed in section 4.2, a probability
threshold is used. Here, the threshold used is 1%. If mapping the chiller to location (0,1), for example,
is only part of a feasible solution in less than 1% of the sampled solutions, then the location (0,1) is
considered an infeasible mapping for the chiller.

After each mapping has been compared to the sample to determine feasibility, probabilities are recom-
puted using the probabilistic mapping method from section 3, while explicitly setting all infeasible
mapping location probabilities to zero. The results for the final map are shown in Fig.17.

Fig.17: The probabilistic map updated to reflect the path length constraint

Fig.18: The logical architecture updated to reflect volume information
Fig.19: The physical architecture updated to reflect volume information

The designer can also incorporate capacity constraints by overlaying them onto the architectures and
using the probabilities from Fig.17 as a starting point. Here, volume is used as a capacity constraint.
Fig.18 and 19 show the logical and physical architectures updated to reflect the known volume infor-
mation.

To update the probabilistic map, the power set of all components is enumerated. Next, the mapping
probabilities, taken from the initial probabilistic map, are combined to find the probability that a given
arrangement will occur at a location using Eq.(6). This is repeated for every location in the physical
architecture. Table II shows an example of the computation for a single location for three possible
arrangements in the physical architecture. Note that this is repeated for all sixteen possible arrangements
and at every location.

Table II: Sample of arrangement probability calculation for a single location in the physical architecture
Arrangement / Calculation
{Main Propulsion, Chiller, Pump} / P(Main Propulsion)*P(Chiller)*P(Pump)*(1-P(Radar))
{Chiller, Pump} / P(Chiller)*P(Pump)*(1-P(Radar))*(1-P(Main Propulsion))
{Main Propulsion} / P(Main Propulsion)*(1-P(Pump))*(1-P(Radar))*(1-P(Chiller))

After finding the arrangement probabilities, the feasible subset is found for every location in the phys-
ical architecture. For this example, the feasible subset for each location is the subset of arrangements
that fit at the location. Table III shows the feasible subset for a single location in the physical architec-
ture. Since the locations have uniform volume in this example, the feasible subset is the same for every
location.

Table III: The feasible subset of arrangements given a volume constraint of 10 m3
Arrangement / Total Volume (m3)
{Radar, Chiller, Pump} / 10
{Radar, Chiller} / 7
{Chiller, Pump} / 6
{Radar} / 3
{Chiller} / 4
{Pump} / 3
{Main Propulsion} / 8

Finally, for each component and location the feasible arrangement probabilities are summed to find the
individual mapping probabilities for every component. To illustrate, the probability that the chiller maps
to location (1,0) is found by summing the feasible arrangement probabilities that contain the chiller:

P(Chiller) = P({Radar, Chiller, Pump}) + P({Radar, Chiller}) + P({Chiller, Pump}) + P({Chiller})    (9)

The resulting probabilistic map, updated to reflect volume information, is shown in Fig.20.

Fig.20: The probabilistic map updated to reflect volume information

Now that volume has been accounted for, design risk can be quantified by computing the probability
of infeasibility. This is done here for each component using Eq.(8). Table IV shows the results.

Table IV: Probabilities of infeasibility
Component Probability of Infeasibility
Chiller 0.51
Radar 0.0
Main Propulsion 0.69
Pump 0.39

Because the probabilities of infeasibility for the chiller and main propulsion are high, the design has a
high likelihood of failure given the current level of design knowledge and the decisions
made to this point. Changing the physical architecture to have more available volume would increase
the number of possible arrangements, allow for more design flexibility, and decrease the risk of failure.
Alternatively, selecting different components (like a more compact propulsor) or changing the distrib-
uted system configuration could increase the likelihood of a successful design.

6. Conclusion

Using the probabilistic mapping approach, designers can develop GAs and the distributed SC without
the resource intensive process of iterating through multiple candidate solutions every time new design
information is introduced. Further, it provides a more expansive view of the solution space because it
shows the likelihood of every possible arrangement given the prevailing design knowledge. Because of
this, it can be used to understand the impact of decisions, so designers can evaluate risk before resources
are allocated to develop a detailed design. Our addition of physical and logical architecture character-
istics and constraints enhances the usefulness and practicality of the approach to produce a tool that
designers can use for real problems. Future work could focus on using the results from probabilistic
mapping to develop new metrics to allow further insight into the impact design decisions have on design
risk. A metric to quantify risk at individual locations, for example, would be useful in determining areas
of the design to allocate additional engineering resources.

References

ANDREWS, D.J. (1998), A comprehensive methodology for the design of ships (and other complex
systems), Proc. Roy. Soc. Lond.: Math. Phys. Eng. Sci. 454, pp.187-211

ANDREWS, D.J. (2012), Art and science in the design of physically large and complex systems, Math.
Phys. Eng. Sci. 468 (2139), pp.891-912

ANDREWS, D.J.; DICKS, C. (1997), The Building Block Design Methodology Applied to Advanced
Naval Ship Design, 6th Int. Marine Design Conf., Newcastle, pp.3-19

BREFORT, D.; SHIELDS, C.; JANSEN, A.H.; DUCHATEAU, E.; PAWLING, R.; DROSTE, K.;
JASPER, T.; SYPNIEWSKI, M.; GOODRUM, C.; PARSONS, M.A.; KARA, M.Y.; ROTH, M.;
SINGER, D.J.; ANDREWS, D.; HOPMAN, H.; BROWN, A.; KANA, A. (2018), An architectural
framework for distributed naval ship systems, Ocean. Eng. 147, pp.375-385

GILLESPIE, J.W.; DANIELS, A.S.; SINGER, D.J. (2013), Generating functional complex-based ship
arrangements using network partitioning and community preferences, Ocean. Eng. 72, pp.107-115

NEWMAN, M.E.J. (2005), A measure of betweenness centrality based on random walks, Social
Networks 27(1), pp.39-54

RIGTERINK, D.T. (2014), Methods for Analyzing Early Stage Naval Distributed Systems Designs,
Employing Simplex, Multislice, and Multiplex Networks, University of Michigan

SHIELDS, C.P.F.; RIGTERINK, D.T.; SINGER, D.J. (2017), Investigating physical solutions in the
architectural design of distributed ship service systems, Ocean. Eng. 135, pp.236-245

SHIELDS, C.P.F.; SYPNIEWSKI, M.J.; SINGER, D.J. (2016), Understanding the relationship be-
tween naval product complexity and on-board system survivability using network routing and design
ensemble analysis, Int. Symp. Practical Design of Ships and Other Floating Structures (PRADS), Co-
penhagen, pp.219-225.

SHIELDS, C.P.F.; SYPNIEWSKI, M.J.; SINGER, D.J. (2018), Characterizing general arrangements
and distributed system configurations in early-stage ship design, Ocean. Eng. 163, pp.107-114

A Perspective on the Past, Present and Future of
Computer-Aided Ship Design
Henrique M. Gaspar, NTNU, Ålesund/Norway, henrique.gaspar@ntnu.no

Abstract

The work here presented is an attempted answer to an invitation by the same title from V. Bertram to
the author, with the objective of proposing future paths for computer-aided ship design tools, based on
observation of past and present trends. The paper starts by selecting key timeless characteristics of these
tools, namely level of detailing, integration capabilities, 2D/3D modelling, analysis & simulations and
data size & handling. A brief relationship among them is drawn to introduce low-level and high-level
consequences. Past features and ways of handling the selected characteristics are commented on in
terms of nostalgia (e.g. the problems that are now solved) and regrets (e.g. the problems that we created
solving older problems). Present aspects focus on an incomplete screenshot of all the parts that the
current ship design toolbox offers and the vain battle to integrate these parts. Future insights are com-
mented on in terms of hopes (e.g. integration will happen), worries (e.g. but it may be expensive) and
fears (an unknown hype will doom us all). An optimistic conclusion closes the paper, inspired by a 1985
article from D. Andrews, proposing data-driven ship design as a factor toward sophisticated design
practices, a library of previous designs and models, and open access to ship design data.

1. Timeless Computer-Aided Characteristics in Ship Design

Computer-aided ship design (CASD) has been a key focus point at COMPIT and other maritime
conferences since before the author, now 40 years old, was born. Most of the material consulted for this
work presents a diachronic approach, compressing five decades of CASD into a few pages and/or slides
before focusing, in its second section, on the real point of interest of the authors, such as mesh quality,
2D and 3D drawing abilities or magical integrated analysis tools, especially in academic papers
sponsored by the people that really do and develop CASD, the software companies. In this sense, I must
disclaim that the current paper has no hidden agenda, and neither advocates for a software nor for an
approach that will solve all ship design issues. The past, present and future cited in the title are used to
remind us that problems and solutions evolve, while some key characteristics remain timeless, namely
level of detailing, 2D/3D need, analysis and simulation, integration capabilities and data size and
handling. The criteria to select these timeless characteristics are based mainly on the experience of the
author with CASD both academically, that is, teaching and developing online snippets, Gaspar (2018a),
and via industrial projects, that is, applying the tools and research, to some extent usefully, to the real
ship design daily task, Ulstein and Brett (2012), Brett et al. (2018). Needless to say, the perspective
highlighted in the title is somewhat subjective and prone to criticism, and a self-evaluation by the reader
based on her own experience is suggested.

2D/3D models seem a core point to start the discussion on CASD characteristics. While traditionally
one may say that, in concrete terms, we have 2D (paper/screen) and 3D (physical objects) representa-
tions of the ship in different phases of the design process, modern design suites now offer the possibility
of a "full 3D process" across the lifecycle, with the possibility of automatic and "limitless" 2D views
of the structure. While this is the aim of most design offices that the author has been in contact with, it
seems to be far from reality, with most of the 3D modelling being transformed into 2D somewhere in
the process, and later re-modelled in 3D with little (or no) help from the original source. As perceived
in this and the rest of the characteristics discussed here, the phase of the lifecycle strongly influences
how the 2D or 3D nature of the design is perceived, how much data is required, and how satisfying it
is to present this data in 3D or 2D. Modern 3D tools promise all the required 2D magically, in a few
clicks of a button, but in practice this process seems to require more re-design than expected. Needless
to say, companies use both, and the models rarely interchange easily.

Level of detailing seems to be intrinsically connected to the value chain phase in which the CAD
software will be used. (CASD and CAD may appear in the text as interchangeable and synonymous
concepts, but I have attempted to use the first specifically in the ship design context and the latter in a
more general sense.) While in the conceptual phase little detailed engineering is required to develop a
tender package, CAD software that focuses on sketching and, in the last two decades, realistic 3D
rendering seems to be the choice for early-stage designers. This brings us to the question of how much
detail is required to sell and understand a ship. It seems that purely CAD modelling programs, such as
Rhinoceros or AutoCAD, can produce very detailed 3D and 2D drawings during the conceptual phase,
while specific CASD software like Cadmatic or FORAN can be used for outfitting and yard documen-
tation. NAPA, MaxSurf and other more analytical tools have their own level of detailing for specific
tasks, while ignoring other important parts of the ship design (SD) process.

Traditionally, CAD may be connected to the purely geometrical (lines, surfaces, volumes and their
intrinsic properties), or to the promise that the information these geometries imply can be further
analysed. Having CASD systems as input and output for analysis and simulation was the mainstream
during the 90s, and I recall having to input hull coordinates by hand to get hydrostatics results during
my bachelor studies. Most of the serious calculation was done in spreadsheet-like codes, or "one
runners" with ASCII text input. Nowadays the most common mechanical simulations are found in the
large CAD suites (e.g. FEM and CFD), and in maritime-specific tools, like MaxSurf, that extract the
key data from the geometry to calculate stability and propulsion. Paradoxically, much more advanced
product data management (PDM) and mechanical CAD software, such as Siemens NX, does not include
(nor maintain) the basic functionality and simulation required by a ship designer, and therefore we are
back to having two models in two places, plus the Excel spreadsheet in the background with the "real"
calculation based on sea trials and previous designs. For the more advanced analyses, done in Python,
Matlab or other very specific software based on academic routines (e.g. ShipX), we observe a large
amount of effort (and time) spent constructing models disconnected from their CAD models, and large
rework every time the model is updated.

Integration capabilities are mainly connected to the capacity of CAD software to be used across the
different lifecycle stages of the SD process, and to how information from one stage can be connected
to the next with smart use and adaptation of the previous model. In an ideal situation, conceptual 3D
models for the GA could be used as the initial point for more detailed hydrodynamic and structural
analysis without the need to remodel the whole hull, and outfitting could smartly fit pipes and wires to
the correct boundaries. Documentation for the yard could be produced semi-automatically. In reality,
this level of integration is not as efficient as promised, and a collection of reliable but hard-to-integrate
software is used for each phase of the value chain. Integration is also observed, and later discussed, in
the physical data/file sense, with one software being able to open a CAD file produced in another.
Primarily, we want to access the information about the model and use it in a renewed context. Later,
we can comment on integration by seeing how the boundaries of one object interact with, decompose
and encapsulate other objects. The real advantage of successful CAD tools seems to lie in the coherence
of keeping compatibility (and therefore integration with the past), as well as in successfully delimiting
the borders and allowing the designer either to integrate a new object into the CAD or to have a tangible
in-and-out dataflow to connect with it, even if via very specific scripts or, in the latter case, CSV tables.
Note also that it is difficult to delimit when CAD becomes pure coding in the design task. The
"geometry" overall can be defined by coordinates (structural aspect), while the changes in the behaviour
are, without the aid of human-friendly visualization, changes in the coordinates over time, either
geographical or vectorial.

Data handling is related to the capacity of transforming the available data into information via the
efficient storage, access, analysis and reporting of data. Simon (1973) divides this information into
three categories: long-term memory (e.g. experience from previous designs and the distributed memory
in the experience of each of the stakeholders that contribute to the design, from engineers to operators);
external memory (from sensors and real-time logs of the operation); and instructions (from design and
operational routines). An efficient data-driven CAD should be able to incorporate and analyse the
relevant design data available. Efficient data-driven methods applied to ship design are commented on
in the last section of this paper as insights on how to properly handle SD data.

Assuming that we have an overview of what each of these characteristics means in the SD domain, a
next step is to understand how they interact with each other, distinguishing between a lower level of
interactivity between them (less is more) and a higher level of interaction (the more the better). This is
briefly presented in Table I, where the lower-left diagonal part of the table (light blue) investigates the
combination of two characteristics at a lower level of interaction, while the upper-right diagonal (light
green) investigates the same two characteristics at a higher level of interactivity. Such an exercise is
useful to evaluate which state of interaction we were/are/will be aiming for when observing the past,
present and future of CASD.

Table I: Matrix of relationships among CASD characteristics, with a low level of interaction (lower-
left diagonal) and a higher level (upper-right diagonal)

2D/3D vs. Level of Detailing
Low: Every level of detailing requires a new model.
High: Comprehensive models with the capacity to filter and extract multiple viewpoints.

2D/3D vs. Analysis and Simulation
Low: A new model for every new analysis, with minimal relevant info required in each model.
High: Model and analysis tools coupled together, with simulation on the go, promising the possibility to optimize topology/arrangement.

2D/3D vs. Integration Capabilities
Low: Single software for one task, one model for each task.
High: All-in-one software, either able to provide most of the analysis, or able to incorporate older compatible models into the larger system.

2D/3D vs. Data Handling
Low: Individual files for each model, small size, large number of files.
High: One file with multiple models, larger size.

Level of Detailing vs. Analysis and Simulation
Low: One simulation of one small part or portion of the whole, a new simulation for each time/phase/degree of hierarchy. Faster and simpler.
High: Simulation of the whole model, with filters on specific parts and analysis of overall consequences. Time consuming and complex to analyse.

Level of Detailing vs. Integration Capabilities
Low: Every level of detail has very defined borders, not connected in a flow (one file for each level).
High: Flow of detailing, with zooming and filtering, with category divisions such as taxonomy, size, ownership, spatial position and functional requirements.

Level of Detailing vs. Data Handling
Low: Separated data for every level, with a defined amount of required data/files to establish a level.
High: Few files with large data (size), and zoom/filtering capabilities inside the software to delimit boundaries at any stage.

Analysis and Simulation vs. Integration Capabilities
Low: One simulation/software for one type of behaviour, no connection with other simulations. Separated and individual inputs and outputs.
High: "All in one" software, such as FEM, CFD, dynamics and thermodynamics sharing the same set of inputs.

Analysis and Simulation vs. Data Handling
Low: Very defined boundary of inputs and outputs. Every simulation has its own data.
High: Inputs and outputs shared among simulations. A single (or fewer) large data set with multiple values and attributes to be filtered.

Integration Capabilities vs. Data Handling
Low: Separated files, with separated analysis and results. Easy to filter, hard to see multi-domain consequences.
High: Single or fewer files with simulation and results integrated. Better to see multi-domain consequences, harder to filter.

The arguments presented here are discussed in the next section in the light of the past, present and future, heavily influenced by a quote from Hausheer (1937) in a cheerful analysis of St. Augustine's conception of time: “The past is that which is no more; the future that which is not yet. And if the present were perpetually present, there would be no longer any time, but only eternity. For the present to belong to time it must pass. Hence time only exists because it tends to not-being.” Evoking the past can thus lead us to the good (nostalgic) and bad (regrets) impressions of a certain time interval, while the same good and bad in the future can be understood as hopes and fears. I invite the reader to join me in this ingenuous philosophical exercise, apply the same dichotomic assessment to her impressions of CASD experiences, and analyse what can be used as a proposal for a more optimistic future in computers and ship design.

2. Past (The past is that which is no more)

2.1. Nostalgia – the problems that we solved

When discussing the sketch of this paper with my (older) colleagues, they emphasized the documentation aspect that CAD gave to the ship and other industries. Design, in the transcendental sense of creating a form to fulfil a function, seemed secondary. CAD thus appeared as the future step to document, copy, reuse and detail the design, at first a digital alternative to the blueprint, and later a drastic way to change engineering and yard offices. The equivalent process of producing each of the parts, assemblies, blocks and other drawings was first mimicked in the new CAD systems, with one digital file for each required drawing, and multiple copies of parts due to the individual storage of each drawing in a unique file, Reffat (2006). While in the developed countries this transition happened in the late 70s and early 80s, in developing countries, such as Brazil, it took one more decade, and I still remember visiting the drawing section of a design office, with universal drafting machines using the whole area of the floor, similar to the image in Fig.1 (left).

Fig.1: Two examples of nostalgia for pre-CAD times: (left) a design office resembling my memory from an early age; (right) drawing technicians lying down during work and getting paid for it, https://www.vintag.es/2018/08/life-before-autocad.html

The developments that made the older drawing tables obsolete are, by and large, connected to the development of the computer itself, with the advent of faster and more reliable computing power, screen resolution and storage capacity, as well as the internet and portability. So we may agree that the slow, low-resolution, large, expensive and storage-limited computers on which CASD systems ran when most of us were BSc students, just learning how to design a ship, now exist only in memory, and our younger colleagues have a very different picture than Fig.1.

For the sake of exemplification, take a ship design suite used in university courses, such as MaxSurf or Paramarine. Both suites offer built-in 3D hull examples with most of the fundamental calculations at the click of a button, such as hydrostatics, propulsion and structure. Moreover, exporting the hull to another CASD program is, with some effort, possible. The advancement in fairing splines is nowadays impressive and, compared to the past, it is seamless to adjust a hull to a table of offsets or vice-versa, obtaining a very detailed table from a complex hull. I still remember, in my BSc, the need to adjust lines one by one by hand, with strange holes in the hull. This is no longer the case for my students, and they are much freer to explore the design space of new hull shapes than I recall being, Fig.2. Moreover, the more skilled ones are able to use more advanced 3D software, such as Rhinoceros or Siemens NX, and parametrize their hull there, creating a script to divide it into modules and assemblies, and even generate the GA automatically, Monteiro and Gaspar (2014). The old school may argue (with reason) that the high dependency on CASD for all the calculations may remove the student from the tactile knowledge of a ship, since the abstraction required for a 3D drawing of a hull is different from decomposing it into the 2D surfaces that are cut to physically assemble the hull from frames and plates.
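
To make the kind of parametrization script mentioned above concrete, the sketch below (Python; the classical Wigley hull stands in for a real parametric model, and all dimensions are illustrative) generates a hull from three main parameters and exports a table of offsets that another CASD tool could import:

    import csv

    # Parametric Wigley hull: y = B/2 * (1 - (2x/L)^2) * (1 - (z/T)^2)
    L, B, T = 100.0, 16.0, 6.0      # length, beam, draught (illustrative)
    stations, waterlines = 21, 7    # grid resolution

    def half_breadth(x, z):
        """Half-breadth at longitudinal position x and depth z."""
        return 0.5 * B * (1 - (2 * x / L) ** 2) * (1 - (z / T) ** 2)

    # Export a table of offsets for downstream tools.
    with open("wigley_offsets.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["x", "z", "y"])
        for i in range(stations):
            x = -L / 2 + i * L / (stations - 1)
            for j in range(waterlines):
                z = j * T / (waterlines - 1)
                w.writerow([round(x, 3), round(z, 3),
                            round(half_breadth(x, z), 4)])

Changing L, B or T regenerates the whole offsets table in milliseconds, which is exactly the freedom to explore the design space that my students now enjoy.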

Fig.2: The evolution of hull form visualization in two examples: (left) the Hullform software, in which the author modeled his first ship in a CASD system in the 90s, and (right) the Maxsurf software used by the Ship Design students at NTNU for their bachelor theses

Fig.3 exemplifies this shift of understanding with model tank tests of the past, Fig.3 (left), where the hull lines were printed on paper and carefully cut into frames and later laminated, demanding careful and tactile work from the designer and technicians, versus the modern counterpart, Fig.3 (right), where the CASD system feeds the CNC machine directly, milling diverse hull shapes much faster and requiring less interaction and manual ability. Forecasting a similar transition in the not-too-distant future, an updated version of this paper could easily substitute the nostalgia for the CNC with fast and reliable 3D printing, DNVGL (2018).

Fig.3: Workshop of a towing tank in the 60s, with the drawing feeding the hand-cut frames and plates, IPT (2019) (left); a CNC machine available at NTNU in Ålesund (right)

Connecting these fragments of the past to the characteristics presented in Table I, we can assume that CASD systems have been efficient in understanding the 2D/3D nature of a ship, with amazing developments in facilitating the work of the designer within a specific phase of the value chain. Level of detailing is also a gain compared to old CASD, with amazing capabilities that specific software can bring to very detailed problems, such as Cadmatic (outfitting) and DNVGL Sesam (structural analysis).

Itemizing in a few keywords, at the risk of sounding redundant, modern CASD systems are truly able to:

i. Speed up 2D/3D design processes; designers can create new concepts in a short time
ii. Reliably document the whole ship design process, from early design to construction and maintenance
iii. Explore a large number of configurations during early stages, smartly copying, pasting and adapting past designs into new ones
iv. Export drawings and designs between formats, as well as filter the level of detailing
v. Connect design (drawing) with performance (analysis), and visually understand cause and consequence of changes in internal (e.g. geometry) and external (e.g. environment) parameters
vi. Adjust hull lines very precisely and understand their effects on hydrodynamics and structure with fairly reasonable accuracy
vii. Incorporate script languages for the parametrization and optimization of geometry and performance

2.2. Regrets – the problems that we created

During the literature review for this paper I was impressed to read the optimistic conclusions of papers from 20+ years ago, Storch and Chirillo (1992), Ross (1993), commenting on the prosperous future of fully integrated CASD systems that would allow design, analysis and detailing in a seamless, linear process of tools that would just work together. This is not the case in a real ship design environment, and it may not be in the near future, which raises the questions: what could we have done better? Which problems did we create when we solved the issues discussed in the previous section?

Take the personal experience of the author in attempting to implement product lifecycle management (PLM) technologies in a ship design environment, Gaspar (2018b), especially the use of 3D all over the process. Ideally, the 3D concept developed during the tender package could be the starting point to feed the next phases. Software like Rhinoceros is able to produce amazingly detailed hull lines and 3D GAs, and connected to state-of-the-art engineering can develop a detailed design synthesis in a short period of time, Brett et al. (2018), Andrews (2018). Very little, however, of the original 3D file was used in the next phases of engineering analysis and detailing, with each of the design groups, such as hydrodynamics, structures and cargo systems to name a few, redoing and redrawing the hull and GA over and over again in each of their specific software (e.g. Star-CCM+, DNVGL SESAM, Cadmatic). More impressively, a major change in the design implies an amazing amount of re-work and precious engineering time spent correcting each of the non-connected engineering models. The same lack of integration was observed in the subsequent phases. Observing the development of these systems from the past, five reasons can be identified, namely: proprietary formats, lack of integration, licensing and profitability of the tool, high cost of training, and a deviation from the principle of parsimony.

Proprietary and commercial software is the status quo of modern engineering CAD systems. The ship design case, covering a much smaller spectrum of the field compared to civil or mechanical engineering, has even more specific software suites developed over time, and following the development of established software such as NAPA, Cadmatic, Friendship or FORAN one can see the stepwise improvement made by qualified engineers over the years, a constant improvement in precision and features. Specializing in a certain niche, such as structural analysis (DNVGL SESAM) or outfitting (Cadmatic), however, created a lack of priority in assuring that formats should be open and interchangeable beyond market benchmarks such as DWG, or neutral formats such as IGS. Therefore, each ship design software has its own proprietary format with reduced capabilities for importing files from competitors, and when it happens, precious information is lost. The inability to collaborate on an open standard seems to me a key component of the lack of integration observed in the past and continuing today. As noted by Brett et al. (2018), many precious engineering hours are used in converting CAD models from one format to another due to this lack of integration.

The commercial aspect of modern CAD software also keeps an amazing level of paperwork and licensing contortionism, with outdated technology from the 90s, a political approach that still requires a dedicated server to check out a license for each of the computers using the software. While this seemed a good solution to avoid piracy and gain control for the creators in the past, it looks extremely counter-productive in the face of the modern online tools, apps and pay-per-use technologies that we have available in non-CASD computer software. On top of that, count the hours that IT technicians spend installing the software on each machine, a long process of configuring client and server due to antiquated anti-piracy policies. I dare to speculate that the ship design developers would benefit from providing solutions as open as possible to install and use their software, in order to gain ground in market share. The current state of the art outside CASD licensing is pretty straightforward: enter the website, download the software, try it for 30 days and pay for a license afterwards, all online and at the click of a button. For CASD the story is different: you enter a website, fill in a contact form, wait days for the contact, explain why you want to use the software, beg for a trial, wait for the license before the trial, have problems installing the trial license, and when it works discover that the license is incomplete and able to run just a small number of cases; you beg again for a better license, which is only given with the promise that much money will be spent. It is almost as if they do not want anyone else to use their software.

Blaming the other side of the scale, we can find a resistance in the ship design culture to incorporating the new when the traditional is the norm. An engineer who spent half of his career using a standard CAD tool will not achieve the same productivity when changing to another software that will facilitate the life of his peers – why improve the next phase of the value chain if my work is damaged? As an extreme example, I encountered an old-school engineer who would still do all his isometric drawings in 2D instead of creating a 3D drawing and exporting infinite different views. He was very effective at it, and could very quickly create any exploded or isometric view of his design purely in an outdated 2D CAD system. However, he was unable to share the 3D model with his younger colleagues because such a model existed only in his mind. Therefore, when one speaks of the high cost of training for a new CAD system able to better connect the design phases, for instance using 3D all over the process, the high cost does not rely solely on paying the CAD company for training and tutorials, which can even be justified as an investment, but on the loss of productivity of the experienced engineers who need to change from one tool to another and, in more extreme cases, re-think their design procedure in a different way.

Younger engineers face the other part of the (lack of) productivity spectrum: too many parameters and choices to be made, and a difficulty in applying parsimony principles. Modern CAD software presents a large number of features that can be parametrized and optimized in a never-ending process of designing and testing minimal details. For the sake of exemplification, let's say that 20 years ago an advanced CFD analysis of a new hull would take approximately two weeks, using a MATLAB script reading the output from WAMIT – that was the case in the first offshore project I was exposed to. Today, with all the advancement in 3D, it still takes two weeks for the same results, changing mainly the way the results are presented, with a (small) improvement in precision. A common mistake when using 3D all over the process, for instance, is focusing on modelling details that are irrelevant for that part of the process and experimenting with adjustments and mesh refinements that gain little in performance and design.

3. Present: Current State of the toolbox available for Ship Designers and challenges for PLM integration

My understanding of the current state of the CASD toolbox is that we have the parts, with several standalone tools able to solve a specific problem in one value chain phase. Fig.4 illustrates this idea, with the activities in the value chain at the top, namely conceptualization, design, construction, manufacturing, assembly, commissioning, delivery, operation and scrapping, based on Ulstein and Brett (2012), and an example of the disciplines and current quality of the modelling and analysis offered by modern CASD software, presented in the text and in a collage of screenshots.

We can follow the proceedings of previous COMPIT conferences to grasp an idea of how each of these tools evolved individually, with more precise FEM and CFD analyses, more refined meshes, complicated 3D models and even, recently, virtual reality and online tools. But an agreement on the integration of these tools seems yet far in the future, and it surprised me that references from 20+ years ago also reached the same conclusion, exemplified here by Ross (1993): “The ultimate goal of integrated ship CAD/CAM is the total integration of all processes, from early design through production. Although many U.S. shipbuilders have made investments of millions of dollars in CAD/CAM systems, the various aspects of the systems tend not to be integrated with each other. For example, a shipyard may have one CAD system for structural design and a different CAD system for outfitting design.” This lack of integration was the norm before, and it seems to me still a problem to be tackled.

Fig.4: Ship design value chain at the top (based on Ulstein and Brett (2012)) and the parts available for the CASD toolbox

The reason for this slow pace can be found in the very specific segment that ship design occupies in the CAD community, requiring a large investment to reach the precision required for a relatively small number of potential buyers. Also, as each ship is one of a kind, Erikstad (1996), there is a lack of consensus on how the product and systems should be integrated, with each design office and yard having independent and confidential routines, with the rare common point of documenting for the classification society (e.g. using classification society software) or for the yard (e.g. using the SFI system, as in the Norwegian case).

But are we able to integrate ship design software? Take a look at this optimistic consideration from Ross (1993): “integrated ship design programs include flexibility for future growth, technical excellence of the modules, communication with other programs, and making the programs user friendly. Flexibility for the future has been noted as an advantage of a program that uses separate databases for each of its modules (although the use of separate databases has disadvantages as well in regard to user friendliness and efficiency of operation). The need for flexibility, even within common database and product model types of ship design programs, is recognized by a number of program developers.” Optimistic in 1993, and not yet a reality in 2019.

Modern CAD is attempting to integrate multiple tools by incorporating PLM ideas (and vice-versa), enabling access to multiple revisions of assemblies, tracking product data through the lifecycle and sharing it among designers. This CAD way of thinking requires a well-organized hierarchical structure of the product, which is not always a consensus in the ship design case. Modern CAD software suites promise a flexible working environment where the assembly definition is made to fit certain working practices, allowing designers to check out only the necessary data, which keeps the design process efficient, and to store and manage data independently, given the possibility of working with more than one taxonomy, Levišauskaitė et al. (2017).

PLM software promises an integrated design platform merging product data management (PDM) and virtual prototype concepts, such as 3D libraries of components, CAE/CFD tools and the re-use of previous designs. This platform benefits from the organizational concepts of the PLM methodology, as well as from virtual prototype techniques used to simulate the main lifecycle processes, from design visualization to construction. A well-integrated PLM system is (theoretically) able to manage product data and process-related information as one system by use of software, providing accessibility for multiple teams across the company to CAD models, documents, standards, manufacturing instructions, requirements and bills of materials.

PLM can be divided into six elements, illustrated in Fig.5, CIMdata (2011), Andrade et al. (2015). In short, Database is related to indexation tools and document management; Modelling and Simulation Tools comprises all the software used to design the vessel and for virtual prototyping; Value Chain Processes relates to the management of the processes within the ship's value chain; Product Hierarchy Management establishes the classification of all the ship systems and components; Product Management administrates all the information related to every component; and Project Management connects every process to the entire vessel life-cycle.

Fig.5: PLM elements, Andrade et al. (2015)

Every design company already has some sort of PLM, even if not integrated. Ships are designed and analyzed; yards receive all the drawings; and owners and users are able to operate and schedule maintenance of the real product. What differentiates the current systems from what Ross aimed for in 1993 is still the level of integration. Let us take, for instance, the value-chain phases between conceptualization (sales), design and construction. The final product of the sales phase is a concept able to convince the owner that she should invest in that ship; the design products are the thousands of drawings required by the yard; while the final product of the yard is the ship itself. As commented, current practice is that one department works on the concept, on an exclusive 3D model, while designers in other departments recreate the same model in several other models, with much re-work, until the final basic documentation. It is not rare for the shipyard to re-do many of the drawings to adjust the outfitting. A 3D model used in concept will usually not be used for structural analysis. 2D CAD drawings, document editors, spreadsheets and presentation tools are indeed the only tools widely used during the whole value chain. On the database side, the most common tool is the mapped drive, without modern assembly/tag/filtering options. In this sense, the same ship that was already decided in the initial phase needs to be redone and recreated in several other software packages, mainly due to the lack of integration among applications. Therefore, an integrated PLM platform is one which re-uses and builds upon former designs, with a common language among the six PLM elements observed in Fig.5.

PLM systems promise to keep control of the structuring of products' digital data, using dedicated (and expensive) software to improve the management and collaboration of the team throughout the product development process. If modern PLM systems deliver on the promise of handling the aforementioned multi-taxonomy/multi-disciplinarity challenges, it is reasonable to assume that we can start to use PLM as a foundation for handling and storing diverse data, and furthermore for developing other emergent technologies, Keane et al. (2017).

4. Future (The future that which is not yet)

4.1. Hopes – the problems that should be solved soon

The hype for full digitalization and the digital twin may not become reality in the near future, but it can intensify the need for CASD integration commented on previously. We see this integration on two fronts: first, PDM/PLM systems including ship design tools in their suites, Siemens (2015), and secondly, CASD software allowing a more flexible connection with PLM suites, Cantos (2016).

Commercial PLM software is gaining more and more attention (and clients) in the ship design industry, with the promise of presenting a common standard to create, store and share value-chain data across the ship design community. This trend is in line with the development of such methods in other industries (e.g. automotive and aeronautic), and its features aim to tackle some of the ship design integration issues discussed so far, Gaspar (2018b).

The next generation of PLM promises features from CASD tools, added to a collaborative design environment with unique and declared characteristics such as access privileges, maturity status, position in ship, set of attributes, revision history, unit effectivity, and locking status. In other words, for controlling, accessing and managing the design data, the components in an assembly do not need to be hierarchically ordered. This leaves an option for the ship designers to decide on the level of detail in the assembly by making separate parts or subassemblies the design elements in the CAD environment. This non-conventional assembly approach enables multiple organizational breakdowns of a ship, which obviates data duplication. It allows multiple taxonomies/views of an assembly, such as functional and physical, Fig.6, loading a required unit once even if it belongs to multiple views, instead of pre-determined subassemblies of a product which add duplicates. This approach facilitates day-to-day tasks by reducing complexity while loading and maintaining the design elements. A future integration of PLM with CASD is theoretically able to generate different configurations of a structure, allowing the user to configure the product with several effectivities, i.e. several configurations, Levišauskaitė et al. (2017).

Fig.6: Multiple hierarchical breakdowns promised for future integration of PLM and CASD, Siemens (2015)

On the ship design side, connection with coherent PLM systems seems to be a consensus among the bigger players, and traditional CASD tools have in recent years focused their efforts on connections and interfaces between ship data and PLM suites, exemplified here by the FORAN software, Cantos (2016). It is advertised that all the information generated in a modern CASD system, such as FORAN, may be transferred to a PLM and may be subject to all processes: control, configuration and release lifecycle, and process management, Perez (2016). The authors comment on key requirements for the integration of CASD and PLM, such as continuous synchronization, mastering standards, sharing of attributes, handling unique identification in both systems, access and visualization of model items, and bi-directional transfer of the vessel build strategy, Cantos (2016). Part of these requirements is illustrated in Fig.7.

Fig.7: CASD software integrated to PLM, Cantos (2016); Perez (2016)

From the user's point of view, that of the ship designer, Brett et al. (2018) remind us that the industry is not ignoring the issue, stating that “different activities have been carried out in Ulstein to explore the opportunities for upgrade design work processes and practices”, with the inclusion of modern product data management ideas, such as conceptual design and engineering of vessels based on a modularized and standardized approach, and a prototype design tool able to concurrently integrate the modules across the value chain, testing the efficiency gain of the proposed solution against the current status quo.

4.2. Worries – the problems that we may not solve

Reading literature favourable to the implementation of modern PLM systems in ship design, one may have the impression that the data integration problem is solved, and that designers today are able to handle, efficiently and with high flexibility, the multidisciplinary nature of the design process and efficient 3D re-modelling. But this is not what is observed in real ship design practice. CAD and PLM tools have promised for over two decades that 3D models created at the beginning of the process could be re-used during the detailing process, especially with automatic and parametrized routines able to convert 3D into 2D with a single click, as well as smartly reusing previous models to parametrize common changes and modular vessel configurations. This is not the reality observed by the author when working directly with ship design companies in industrial projects. Much of the precious engineering time is still used to convert, fix and re-use old models rather than to engineer a more efficient hull or propulsion system, Gaspar (2018b).

The drawbacks of a decision to go completely via PLM are, so far, well known. Criticism from personal field studies shows that the integration and flexibility promised by such systems are not yet delivered in everyday ship design activities, with productivity levels even lowering when fully 3D practices are implemented. It is vital to understand how complicated and time consuming the implementation of a PLM project might become, depending on the company requirements. Maritime companies often consider a PLM system too time and resource consuming before it brings benefits, and tend to avoid or postpone its implementation. Another drawback is the failure of previous implementation attempts, Keane et al. (2017). Add to this the economic price of proprietary software, with a high integration cost for purchasing and training among the many engineering and yard departments, and limited freedom to customize libraries and engineering procedures. Some of the criticism also extends to the users of the tools, unable to adapt traditional techniques to the new standard – but who would blame them, if the old technique still gets the work done?

Some of the worries when merging CASD with modern PLM philosophy can be summarized as follows,
Gaspar (2018b):

• Additional work to add existing ship data in the new format/standard/library
• Fear of losing relevant information and productivity when using a new tool
• Lack of ship design terms, integrated analyses and regulations incorporated in the available PDM/PLM tools, with the designer adapting ship systems to a more purely mechanical taxonomy
• Inability to incorporate commonly used ship design files (data types) in the database
• Lack of seamless integration with third-party ship design tools, such as classification society tools
• Proprietary and closed software packages, constraining customization and the sharing of data among traditional CASD systems
• High cost to acquire, install, train personnel and keep servers running
• Resistance from experienced engineers to using a new tool
• Risk of being locked to a system, and losing independence if features and license terms change
• Ignoring that individuals have different preferences in how they solve a problem, and that a certain methodology proposed by the new system may not be the most effective among users

It is my impression that this (lack of) compatibility problem will not solve itself soon, as some PLM or CASD vendors advertise. No one-size-fits-all software is yet able to properly cover efficient toolbox integration in ship design. On the other hand, PLM tools seem a necessary evil, presenting a great step towards efficient modelling and storage of CAD models, and a large library of models and designs running seamlessly across the whole engineering office is a reality – these models do not necessarily talk the same language, but they are there, in a common place. I remind the reader to take these comments with a grain of salt: the main criticism discussed here concerns the capacity of tools to be integrated efficiently among the value chain phases, and not the usability, quality or technical excellence of the tools, which are undoubtedly higher than in the past.

4.3. Fear – the problems that we do not know exist

Entering the realm of speculation, I suggest here another exercise: to investigate the fear of the unknown in the realm of current trendy words and see how much CASD is connected to the relevant ones. My proposal is that the reader expose herself to the hot topics from the Hype Cycle for emerging technologies, Walker (2018), and speculate on how applying an emergent and unknown technology could improve the ship design environment. Observing Fig.8, from a 2018 analysis of emergent technologies, we realize that some of these topics are already a reality at COMPIT conferences, such as the digital twin, AI, neural networks, autonomous robots and augmented reality. Can the digital twin prove itself profitable? Can the tasks of the designer and engineer be taken over by AI and deep neural networks in the near future? Will robots substitute the crew in the operational phase? Will all the money invested in augmented reality ever pay for itself? It is unknown at the moment, and we may be afraid of the answers.

Fig.8: Hype Cycle for emerging technologies, Walker (2018)

5. Proposal for an optimistic future – A Bias for Data-Driven Methods

It does not require much investigation to aim for an optimistic future. Citing again this and previous COMPIT proceedings, we realize that the quality, number of choices and reliability of the CASD tools used to handle ship design keep getting better and better. This includes not only the PLM tools already mentioned, or the computational power per se available to designers, but also the software and methods able to use it. A large number of commercial (and some free) tools are available, such as hull modelling, optimization tools, automatic rule checking, CFD, powering and propulsion, flooding simulation and space allocation, to cite a few. The challenge then seems to be a common standard and/or culture able to collect, access, analyse, quality-assure and, most importantly, connect this data among all the lifecycle phases without the need to be locked to a rigid system.
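
As a small taste of what such an open toolbox culture enables, the sketch below (Python; re-using the illustrative Wigley hull from the earlier sketch, so all dimensions are again assumptions) sweeps a design parameter and connects geometry directly to a performance quantity, here the displaced volume, computed by simple numerical integration:

    # Sweep beam B and report displaced volume of the Wigley hull
    # (closed form V = 4*B*L*T/9 is printed alongside as a check).
    L, T = 100.0, 6.0

    def displaced_volume(B, n=200):
        """Midpoint-rule integral of full breadth over length and draught."""
        dx, dz = L / n, T / n
        vol = 0.0
        for i in range(n):
            x = -L / 2 + (i + 0.5) * dx
            for j in range(n):
                z = (j + 0.5) * dz
                vol += B * (1 - (2 * x / L) ** 2) * (1 - (z / T) ** 2) * dx * dz
        return vol

    for B in (14.0, 16.0, 18.0):
        print(B, round(displaced_volume(B), 1), round(4 * B * L * T / 9, 1))

The point is not the crude hydrostatics, but that geometry, analysis and results live in the same open script, ready to be stored, versioned and re-run across lifecycle phases.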

I have in recent years been advocating the use of data-driven methods in ship design, Gaspar (2018b). CASD with integrated data-driven capabilities should re-use and build upon former designs, allowing the designer to really fetch former designs from a database, build up new concepts based on the new information, and re-use advanced 3D models across many value-chain phases (sales, concept, basic, construction). A data-driven ship design culture should act to keep the collected and analysed data as accessible as possible during the design process, focusing not only on a standalone problem but on the process as a whole, across the whole value chain. This includes access to the analyses made during the design process, and to the options and behaviour of the systems under the multiple operational scenarios studied, without the filter of a locked proprietary system. A data-driven design must smartly integrate the data used as input to and obtained as output from the available ship design tools, as well as incorporate empirical knowledge from stakeholders. Computer science seems to have understood the data compatibility problem well, and the fast-paced development of internet tools has only accelerated the need for common standards and practices.

It seems that data (and code/methods) openly available to designers and engineers, idealised as a library of previous designs, is an essential step towards improved CASD. A library of design methods and practices connected to the level of novelty and ship design types, Andrews (2018), should be implemented in design offices. The designer should understand what type of data is required (either existing or to be developed), making use of previous designs as examples. To maintain traceability, the use of modern version control of files and infrastructure, enabling collaboration and rollbacks, a common procedure in software development, should be incentivised for SD files. The popularization of #hashtags is another way to embrace multi-hierarchical data, allowing plural data labels, such as functional/spatial/economic hierarchies (as observed in Fig.6), connecting different value chain taxonomies. Thus, the main machinery can be part of a propulsion system in one division (functional) and part of the hull in another division (physical), as sketched below. The use of data formats as open as possible, including values (e.g. simulation inputs, codes and results) and models (e.g. 2D and 3D models in open formats, such as SVG or STL), is also a way not to sabotage oneself in the future, with continuous integration of collected and generated data across the lifecycle.
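
A minimal sketch of such plural labelling (in Python; component names and tags are invented for illustration): each design element carries a set of tags rather than a single parent node, so the same component can be fetched under a functional or a spatial hierarchy:

    # Each design element carries plural labels instead of one fixed
    # parent, so several hierarchies (functional, spatial, economic)
    # coexist over the same data.
    components = [
        {"name": "main engine",  "tags": {"#propulsion", "#engine-room"}},
        {"name": "ballast pump", "tags": {"#ballast", "#engine-room"}},
        {"name": "bow thruster", "tags": {"#propulsion", "#fore-ship"}},
    ]

    def fetch(tag):
        """Return all components labelled with a given tag."""
        return [c["name"] for c in components if tag in c["tags"]]

    print(fetch("#propulsion"))   # functional view: main engine, bow thruster
    print(fetch("#engine-room"))  # spatial view: main engine, ballast pump

Nothing in this sketch is tied to a proprietary system; the same labels could live in a plain JSON file under ordinary version control, in line with the traceability argument above.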

I close my ingenuous discussion with an ingenious citation from Andrews in 1985, where he concluded that “ship design is a far from simple process and furthermore the momentum behind preliminary CASD to simplify the initial design ‘synthesis’ is no longer necessary or desirable. The more sophisticated design description, provided by an integrated synthesis rightly makes the designer consciously address, as early as possibly, many of the less tangible design issues. With the developments underway in CAD, Artificial Intelligence and expert systems the ship designer (…) must mould the application to design of computer methods to provide an open, responsive and ‘softer’ approach to CAD”. By advocating an integrated synthesis (and more recently a sophisticated design practice, Andrews (2018)), Andrews touches the core point of efficiently handling existing ship data. A comprehensible design synthesis requires enough sophistication to investigate all the data available and select only the relevant information, and my perception is that such a synthesis can benefit much from an open, data-driven approach, especially if a large library of examples is available for stakeholder scrutiny, where the comprehension can be analysed in many layers. Recent works, such as Ebrahimi et al. (2018), present some examples of such syntheses applied to commercial ships, but more should be produced towards a general consensus on comprehensibility.

Acknowledgements

Holding an associate professor position at the Department of Ocean Operations and Civil Engineering at NTNU (Ålesund, Norway), I have no commercial or professional connection with the software and companies cited. Statements on usability and performance reflect solely my opinion, based on personal experience, with no intention to harm or diminish the importance of past or current commercial engineering tools. This research is connected to the Ship Design and Operations Lab at NTNU in Ålesund, which is partly supported by the EDIS Project, in cooperation with Ulstein International AS (Norway) and the Research Council of Norway.

References

ANDRADE, S.; MONTEIRO, T.G.; GASPAR, H.M. (2015), Product life-cycle management in ship
design: From concept to decommission in a virtual environment, 29th European Conf. Modelling and
Simulation (ECMS), Varna

ANDREWS, D. (1985), An Integrated Approach to Ship Synthesis, The Royal Institution of Naval Ar-
chitects

ANDREWS, D. (2018), The Sophistication of Early Stage Design for Complex Vessels, Trans. RINA,
Special Edition

BRETT, P.O.; GASPAR, H.M.; EBRAHIMI, A.; GARCIA, J.J. (2018), Disruptive market conditions require new direction for vessel design practices and tools application, 13th Int. Marine Design Conf. (IMDC), Helsinki

CANTOS, F.J.R. (2016), Integration between Shipbuilding CAD Systems and a Generic PLM Tool in
Naval Projects. МОРИНТЕХ-ПРАКТИК

CIMdata (2011), Introduction to PLM, CIMdata Inc.

EBRAHIMI, A.; BRETT, P.O.; GARCIA, J.J. (2018), Fast-Track Vessel Concept Design Analysis
(FTCDA), 17th COMPIT, Pavone

ERIKSTAD, S.O. (1996), A Decision Support Model for Preliminary Ship Design, NTNU, Trondheim

GASPAR, H.M. (2018a), Vessel.js: An open and collaborative ship design object-oriented library,
IMDC, Helsinki

GASPAR, H.M. (2018b), Data-Driven Ship Design, 17th COMPIT, Pavone

HAUSHEER, H. (1937), St. Augustine's Conception of Time, Philosophical Review 46/5, pp.503-512

IPT (2019), Década de 1960 - Construção de modelos de embarcações nas antigas dependências da Seção de Engenharia Naval (1960s - Construction of ship models in the former premises of the Naval Engineering Section), IPT - Instituto de Pesquisas Tecnológicas, São Paulo

KEANE, A.; BRETT, P.O.; EBRAHIMI, A.; GASPAR, H.M.; AGIS, J.J.G. (2017), Preparing for a
Digital Future – Experiences and Implications from a Maritime Domain Perspective, 16th COMPIT,
Cardiff

LEVISAUSKAITE, G.; GASPAR, H.M.; ULSTEIN, B. (2017), 4GD framework in Ship Design, 16th
COMPIT, Cardiff

MONTEIRO, T.G.; GASPAR, H.M. (2014), Modular Vessel Parametrized Design, NTNU, Ålesund

REFFAT, R. (2006), Computing in Architectural Design: Reflections and an Approach to New Gener-
ations of CAAD, Information Technology in Construction, pp.655-668

ROSS, J.M. (1993), Integrated Ship Design and Its Role in Enhancing Ship Production, NRSP Ship
Production Symp., Williamsburg

SIEMENS (2015), Teamcenter 11.2: 4th Generation Design, Siemens PLM Software white paper

SIMON, H.A. (1973), The Structure of Ill- Structured Problems, Models of Discovery. Boston Studies
in the Philosophy of Science 54

STORCH, R.L.; CHIRILLO, L.D. (1992), The Effective Use of CAD in Shipyards, SNAME

ULSTEIN, T.; BRETT, P.O. (2012), Seeing what’s next in design solutions: Developing the capability
to build a disruptive commercial growth, 11th IMDC, Glasgow

WALKER, M. (2018), Hype Cycle for Emerging Technologies, Gartner

The Future of Ship Design: Collaboration in Virtual Reality
Robert Spencer, Stirling Labs, Perth/Australia, robert.spencer@stirlinglabs.com
Jeremy Byrne, Stirling Labs, Melbourne/Australia, jeremy.byrne@stirlinglabs.com
Paul Houghton, Stirling Labs, Perth/Australia, paul.houghton@stirlinglabs.com

Abstract

This paper summarises findings regarding emerging best practice for effective collaboration on ship design in VR, discusses specific financial, team-building and integration, knowledge management, and workflow benefits achievable through its application, and explores where this technology will take the industry.

1. Background

The term Virtual Reality (VR) is used here to describe the experience of using a head-mounted computer display (HMD) with a highly accurate (sub-millimetre) 6-degree-of-freedom positional tracking system and computer software which imperceptibly updates the display in synchronisation with user movement, with sufficient fidelity that the user can sustain the illusion of being immersed in the virtual reality environment (VRE) displayed in the HMD.

A key factor of these HMDs is that, since they are worn on the face and all head movement is captured, when used with appropriate software the user's perspective of the VRE will always be correct, regardless of where virtual objects are or where the user is looking. This is in contrast to other technologies that are sometimes referred to as VR, such as “caves” or 3-degree-of-freedom HMDs.

The VR hardware provides a computer-generated visual experience, and the software can use this capability to present a VRE that creates what Slater (2009) calls the place illusion (PI) and the plausibility illusion (PsI). Essentially, the VRE causes the sensation of being in a real place (PI) and of the scenarios witnessed, such as colleagues conversing, actually occurring in this place (PsI). Not all VR software is intended to achieve this goal, and there is considerable variability in how well it is achieved.

2. Introduction

Throughout the long history of design and engineering, our ability to document and communicate our ideas has improved slowly. Miming shapes with hands, drawing quick pencil sketches and detailed 3D CAD modelling are all current methods used every day to convey ideas between engineers and to civilians such as fabricators or ship owners. Much of this communication fails to an extent harmful to the project, as Stacey and Eckert (2003) argued, and as the Cruz Lozano et al. (2014) model quantified for sketches.

Modern CAD and PLM systems can capture an incredible degree of detail, including metadata indicating provisionality and uncertainty. For highly trained staff, 3D design represented on a 2D screen backed by large databases is an excellent interface to this rich data. However, Vandenberg (1978) showed that individuals' capability to interpret various views, including flat 2D or 3D representations, varies significantly, while Goh and Spencer (2018) showed that using head-mounted virtual reality displays and software designed for enhanced understanding of virtual space makes CAD and PLM data more accessible to untrained users.

VR in shipbuilding adds the ability for all stakeholders to deeply understand the design before it is
built, which can have a profound effect on the way we design and build ships. This enhanced under-
standing improves designs, increases operational efficiency, and reduces costs from rework, Goh and
Spencer (2019).

3. Ingredients for Success

During the course of the development of the ShipSpace toolset and its introduction into a number of
ship design workflows, a design pattern for VR's effective integration has emerged.

3.1 Making it Real

Creating strong place illusion and plausibility illusion, sometimes grouped together under the term “presence”, is a significant factor in the efficacy of VREs in ship design, Goh and Spencer (2019). Our user testing confirmed that key aspects of achieving this immersion include:

• Preservation of real-world concepts: Up should always be up, down should be down, with simulated gravity holding you to a surface.
• Immersive vision: Wide field-of-view, so you don't feel you are looking through a window or frame.
• Stereoscopic vision: Take advantage of two eyes to gauge size and distance, as you do in real life.
• Realtime vision: The view is coordinated with head movement without noticeable lag.
• Graphical stability: The visual display is smooth, without jerkiness, stutter, or drop-out.
• Interactivity: The user should be able to interact with the virtual environment.
• Precision: Small, often imperceptible movements should be tracked and replicated.
• Consistency: The same input should always produce an identical result.
• Predictability: The user should be able to use past experience to anticipate future events.

During development of ShipSpace, we coalesced these aspects into a single philosophy of design; we
strive to create natural interactions that are familiar, consistent and exclusively human scale.

3.2 Specialist UI Development

The purpose of the user experience in this context is to reduce the barriers to meaningful communication and collaboration. All user interface elements should support the broader aims of sharing and acquiring knowledge through learning, supporting communities of practice through communication and collaboration, Wenger (1998), and supporting adult learning models through access to information, annotation and navigation through the environment, Knowles (1975). Achieving this requires deliberate choices and the collaborative expertise of a wide range of professions to execute harmoniously. User interface specialists typically bring knowledge of 2D UI paradigms and, while many of the concepts do translate to VR, it is a new discipline with its own idiosyncrasies and best practices.

3.3 Data Interchange

The authors have found that it is reasonably common for design and engineering firms to use CAD software from several vendors simultaneously. Some CAD vendors still attempt to lock their customers into their products by limiting the portability of the data created in their software, accomplished by using unpublished file formats and not providing open API access. As shown by Zhu and Zhou (2011), this is a counterproductive strategy for the vendor, and it creates significant disincentives for foresighted customers.

Forward-thinking vendors provide documented API access to data, license a data-format software development kit on a reasonable basis, or support exporting to one or more of the industry-standard file formats such as JT, STEP or IGES. This facilitates innovation that everyone benefits from: the vendor gets industry-leading capabilities, and customers can choose the stack of technologies that best suits their requirements across multiple projects and workforces. Third parties can extend and enhance CAD systems across vendor boundaries, experimenting with a flexibility and understanding of their sub-domain that a monolithic vendor may find difficult to match.
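
As a minimal illustration of what vendor-neutral access enables, the sketch below (Python, using the open-source trimesh library, one option among several; the file names are hypothetical) loads geometry exported in a neutral mesh format and re-exports it for a downstream toolchain such as a VRE:

    import trimesh

    # Load geometry exported from any CAD tool in a neutral mesh format
    # (e.g. STL); no vendor-specific API is required.
    hull = trimesh.load("hull_block.stl")

    print(len(hull.vertices), len(hull.faces))  # mesh size
    print(hull.bounds)                          # axis-aligned bounding box

    # Re-export for the next tool in the chain.
    hull.export("hull_block.obj")

The same few lines would be impossible against a closed, proprietary format without the vendor's cooperation, which is precisely the point of open interchange.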

Recent work by DNV GL and their APPROVED project, in collaboration with several CAD vendors, is a noteworthy step for the marine industry. The OCX 3D CAD interchange format promises to be a strong and capable format, and the decision to publish the specification for implementation by all parties is likely to significantly benefit the market.

3.4 Effective Change Management

As By (2005) reports, organisations make use of a number of different change management systems. Most of these systems focus on some combination of management buy-in, effective communication of the objectives, broad understanding of the need for change and a clear understanding of the path to success. Whichever process is used, the organisation should understand that simply rolling out VREs in design and engineering is likely to increase productivity, but there are many ways that the technology can be deployed across the organisation.

4. Best Practice

As Haskins et al. (2004) showed, the cost-to-fix for errors increases exponentially as a project progresses, Table I. Thus, it is best practice to fix as many problems as possible during the design phase. With effective large-scale VR collaboration tools, it is clear that a much greater proportion of problems can be resolved “on the drawing board”, as it were.

Table I: Cost Factor Escalation Through Project Life Cycle, Haskins et al. (2004)

  Phase          Method 1   Method 2    Method 3
  Requirements   1x         1x          1x
  Design         8x         3x-4x       4x
  Build          16x        13x-16x     7x
  Test           21x        61x-78x     28x
  Operate        29x        157x-186x   1615x

4.1 Listening to New Voices

The inherent difficulty of communicating design ideas has led to very strict ways of working that tend to exclude some perspectives, because the cost of considering that input is too high. Using an effective VR collaboration tool significantly reduces this cost, and can dramatically change the calculus of who should be involved to provide useful input throughout the project. A trivial example of this idea is giving cooks the opportunity to provide meaningful feedback about the useability and effectiveness of the galley and servery, given the number of meals required and anticipated staffing levels. More complex perhaps, owners and operators can be involved earlier in discussions of detailed space constraints and the compromises therein. Production teams can be involved with space grabs, adding real-world experience of the types of spaces required with current technologies. Design reviews can involve a far wider group of stakeholders, considering more perspectives and engaging additional expertise. The list of inputs depends entirely on the type of project, but in every project there are voices that can be heard using VR collaboration that were uneconomic to consider previously.

4.2 Enabling New Modes of Working

Previously, the design office was generally located in the shipyard itself, and it was common for engineers to be confronted by production workers asking for implementation details or design intent. Today, even if there is a single design office – and it is increasingly likely that several offices are involved – it is much less likely to be in the same country as the yard. It might not share a language with the yard. To make matters even more challenging, the lead design office might be in a completely different time zone from the yard. Developing modes of working which enable closer collaboration between these groups promises a tremendous productivity boost.

Telecommuting is an established concept that has become a reality for many areas of work. Holding meetings in a VRE brings this concept firmly to the design professions and removes many of the impediments that design and engineering professionals have faced when teleworking. A well-configured VRE scales well to support many forms of telecommuting, including individuals and teams, plus teams of teams. Representatives from teams all over the world can meet each other and discuss requirements, design decisions and other details in a team-of-teams configuration.

Using the VRE to hold design meetings reduces the effect of geographic dispersion of the participants
as they can all be within the design together. Using voice notes and annotation tools in the VRE also
enables time zone differences to be worked around with minimal effort.

Virtual Prototyping is replacing more traditional methods of prototyping such as scale models and
plywood mock-ups. A virtual prototype can have a much tighter review/change iteration cycle and can
cost much less than traditional methods since there is no fabrication step. It is also much easier and
cheaper to share a virtual prototype with other users.

Virtual Design Reviews again allow participants from anywhere in the world at no additional cost,
meaning that the best people for the job can be made available without disrupting their other work or
incurring travel costs. Additionally, Goh and Spencer (2018) showed that participants in virtual de-
sign reviews are more likely to identify and consider problems, due to reduced cognitive load and a
far more natural relationship with the model.

Several classification societies are moving toward a protocol for using VREs for Type Approval. DNV GL's APPROVED research project, https://www.dnvgl.com/research/review2018/featured-projects/approval-engineering-design-models.html, and Korean Register's work on 3D CAD, Son et al. (2018), are important steps toward this goal.

Management can be briefed by specialists in each discipline, no matter where they are. This active integration of management throughout the process ensures that the entire company is working towards the same goal; every department is involved and able to contribute.

4.3 Iteration / Continuous Improvement

This is still a very new field, and we are only scratching the surface of future possibilities. Experimen-
tation is key to unlocking the potential. The traditional structures of design are very heavily influ-
enced by the high cost of design communication, a hidden tax so entrenched that it is almost impossi-
ble to fully comprehend how it has twisted the systems and methods that we use. Maintaining the
benefits of tradition while engaging with the myriad potential benefits of the future is challenging, but
the flow-on to organisations able to eliminate that brake on performance is startling.

5. Future Industry Developments

There are many possibilities but some of the trends that seem likely to be caused or accelerated by the
increasing use of VR technology include:

• Increased specialisation; smaller teams of super-experts will form that have deeper skills in a
relatively narrow area.
• Virtual teams; since the individuals that make up a team can be geographically separated, small
groups who enjoy working together will create formal and informal distributed teams. Organi-
sations will increasingly rely on strike teams of experts who are geographically scattered.
• Increased outsourcing; hardly controversial, this is already a strong trend. Assuming the above
points, the authors expect that project teams will actually be a team-of-teams, which comes to-
gether to solve a specific set of problems, and then disbands to address other problems.
• Increased complexity; another trend that is happening already and is unstoppable. We do not
suggest that VR is a cause of this complexity, rather that it will help us to manage it.

• More focus on design; the authors expect that as a result of the cost benefits of a design focus,
this will become a more significant factor for ship owners. Additionally, competition at multi-
ple levels will increase the standard of design effectiveness demanded by ship operators.

6. Conclusion

The high cost of design communication can be lowered substantially through application of modern
VR collaboration tools that provide high degrees of realism, optimisation of user experiences, broad
data compatibility and effective assistance with change management. Adopting these new tools, or-
ganisations can begin listening to new voices, taking advantage of new modes of working and evolv-
ing work practices to eliminate the insidious tax of rework.

The future of shipbuilding will involve heavy use of VR in a number of ways and early adopters will
develop a significant cost benefit while also building increased capabilities. This combination of low-
er costs and better design will alter the basis of competition in the shipbuilding industry.

References

BY, R. (2005), Organizational Change Management: A Critical Review, J. Change Management 5,
pp.369-380

CRUZ LOZANO, R.; ALEMAYEHU, F.; EKWARO-OSIRE, S. (2014), Quantification of Uncertain-
ty in Sketches, ASME Int. Mech. Eng. Congress and Expositions (IMECE)

GOH, K.; SPENCER, R. (2018), Virtual Reality Human Factors Engineering, RINA Conf. Human
Factors, London

GOH, K.; SPENCER, R. (2018), Virtual Reality for Concept Design, RINA Conf. Warship, London

HASKINS, B.; STECKLEIN, J.; DICK, B.; MORONEY, G.; LOVELL, R.; DABNEY, J. (2004), Er-
ror Cost Escalation Through the Project Life Cycle, INCOSE Int. Symp. 14, pp.1723-1737

SLATER, M. (2009), Place illusion and plausibility can lead to realistic behaviour in immersive vir-
tual environments, Philosophical Trans. Royal Society of London, Series B, Biological Sciences 364

SON, M.; KIL, W.; BYUN, S.; LEE, J. (2018), Developing a 3D model-based design approval view-
er, The Naval Architect, pp.32-34

STACEY, M.; ECKERT, C. (2003), Against Ambiguity, Computer Supported Cooperative Work 12

EKWARO-OSIRE, S.; CRUZ LOZANO, R.; ENDESHAW, H.; DIAS, J. (2016), Uncertainty in
Communication with a Sketch, J. Integrated Design and Process Science 20(4), pp.1-18

VANDENBERG, S. (1978), Mental Rotations, a Group Test of Three-Dimensional Spatial
Visualization, Perceptual and Motor Skills 47, pp.599-604

WENGER, E. (1998), Communities of Practice: Learning, Meaning and Identity, Cambridge Univer-
sity Press

ZHU, K.; ZHOU, Z. (2011), Lock-In Strategy in Software Competition: Open-Source Software vs.
Proprietary Software, Information Systems Research, pp.1-10

Use of Virtual Reality Tools for Ship Design
Kenneth Goh, Knud E. Hansen, Perth/Australia, KEG@knudehansen.com

Abstract

The author reviews the importance of correct visualisation in the design process and the limitations of
CAD for design review and collaboration. The benefits and requirements that can lead to successful
outcomes using Virtual Reality (VR) technology for ship design are discussed. The investigation focuses
on implementation and use of Stirling Labs' ShipSpace™ VR visualisation and collaboration tool.
Experience gained from using ShipSpace™ in various stages of the ship design process is discussed,
as are some of the barriers to implementing such ground-breaking technology.

1. Background

Goh and Spencer (2018) investigated the varying ability of people to visualise a ship design by only
reading drawings that show views, elevations, and sections. They conclude that design and problem
solving are much better if people can see the design in a natural way instead of applying cognitive
load to imagining it. In the following section, the limitations of CAD systems with screen displays for
understanding complex 3D models like a ship are described, along with why the recent advent and
application of consumer VR hardware provides such a powerful visualisation and communication
medium for design and engineering.

1.1 Limitations of 3D CAD as a Visualisation Tool

With the increasing power of digital computing in recent decades, the ability to create realistic
digital virtual representations of the real world and design has developed tremendously. From using
computers as a simple drafting tool for traditional 2D drawings, to the ability to create geometrically
accurate 3D forms that can be viewed from any perspective, CAD replaced the traditional drawing
board in less than a decade.

At the time of this research, there are dozens of modelling and engineering CAD programs that can
create realistic 3D objects and complex virtual environments such as a ship, complete with internal de-
tails. While these CAD systems are still needed to create the 3D models and environments, CAD sys-
tems with a normal screen display and mouse and keyboard interface cannot themselves be considered
as VR and have significant limitations that prevent them from being a useful design communication and
collaboration platform.

• Navigation in 3D
The change from designing in two dimensions to designing in three is quite profound.
In 2D design, the concept of up, down, left and right is well established, and many of our paper
and computer-based tools, such as the computer desktop and mouse, are optimised for two-
dimensional navigation. Even the computer monitor is a 2D display device. When working in
three dimensions, using traditional tools designed for two dimensions becomes clumsy and time
consuming. Simply navigating around a three-dimensional environment can be difficult. New
devices such as a 3D mouse can help, but still require a new skill and a high cognitive load to
operate. The difficulty of working in a 3D virtual environment is often reported as a barrier to
expanding its use in engineering design.

• Observation in 3D
The normal method of rendering a three-dimensional environment onto a two-dimensional dis-
play monitor is akin to looking at a picture or through a window into another world. This can
make understanding the scale and size of objects in complex environments, such as a ship's
engine room, very difficult. When using a CAD system with a screen display, the field-of-view
(FOV) is generally limited to 30-45° horizontally, compared to human eyesight, which has a
FOV of about 180°. Projecting a 3D image onto a flat 2D screen distorts the view and limits the
useful FOV that can be displayed. If the user changes the zoom or FOV, this alters the 3D image
and how it needs to be interpreted. As with traditional pictures of objects or environments like
landscapes, images on a computer screen need some understanding of the context, since the
picture is normally smaller than the object would look in reality. There is also no possibility to
use human stereoscopic vision when using computer monitors to help understand size and dis-
tances.

• Rendering Limitations
A complete 3D virtual model of a ship with all its internal and external details included is an
enormous amount of data, even for modern computing hardware. Since CAD software is opti-
mised for fast editing, it often takes a large amount of time to open a scene or update the image
when the virtual camera is moved. This makes CAD systems poor at displaying the fast-chang-
ing images and other graphical nuances needed to create the illusion of motion for VR.

1.2 What Is Virtual Reality?

Virtual Reality provides a computer-generated visual experience that tricks your mind into reacting as
if you are in a real physical environment, even though you are just seeing a digital virtual construct.
Key aspects of achieving this immersion include:

• Immersive vision – Wide field-of-view, you do not feel you are looking through a window or
camera.
• Stereoscopic vision - Both eyes to gauge size and distance as you do in the real world.
• Vision tracking - The view is coordinated with your head movement without noticeable lag.
• Graphical stability - The visual display is smooth, without jerkiness, stutter or drop-out.
• Real world concepts – The notion of up and down is preserved, simulated gravity constrains
you onto a surface, etc.
• Interactive - You can interact with the virtual environment.

1.3 Implementing Virtual Reality

Theatre-type single large screen front projection is often used when many people need to view the
virtual world. A three-dimensional effect is sometimes provided using shutter or polarised glasses.
This is often proclaimed as VR; however, it is the least immersive because the field-of-view is limited
and cannot be controlled by individual observers. While this form cannot be considered proper VR, it
is included so the reader can understand the limitations of the technique.

With theatre projection the perspective is only correct for a single observer position. It is also not pos-
sible to understand the size of objects because there are multiple scaling factors, such as the size of the
screen and the distance of the screen from the observer. The technique also usually induces a high
degree of motion nausea among the observers, because navigation in the virtual world is controlled by
a third party and because of the aforementioned inaccuracies of the projection. All other observers are
'along for the ride', which quickly leads to motion nausea. Typically 10-15 min is the threshold, or even
less if motion is frequent. This makes theatre-type VR very limiting as a design collaboration tool.

Projection caves and domes are also used to provide a VR-like experience, but they are very expensive
to set up and have significant limitations. In particular, objects in the virtual environment that are close
to the observer cannot be displayed or approached, as they are removed from the image projection when
they 'enter' the cave or dome. This makes them particularly unsuitable for human factors engineering,
which is primarily interested in direct human interaction with the environment. In addition, the view in
a cave or dome is correct only for a single position. If the observer inside the cave or dome only
rotates their head, the perspective may be correct; however, if the observer translates their head by
leaning from one side to the other or squatting down, the view they see will no longer be accurate. This
also means that design collaboration by having multiple observers inside the cave or dome is made
more difficult by inaccurate visuals.

Table I: Comparison of Virtual Reality options

              Space required   Equip. cost   360° view   Close objects   View accuracy   Wearable   Motion nausea
Theatre       Medium           Medium        No          Yes             Low             Glasses    High
Cave & Domes  High             High          Yes         No              Medium          Glasses    Medium
HMD           Low              Medium        Yes         Yes             High            HMD        Low

Virtual Reality is currently best achieved with a Head Mounted Display (HMD), a highly accurate posi-
tional tracking system and computer software that updates the display as the user moves with sufficient
fidelity that the user sustains the illusion of being immersed in the virtual environment displayed. The
high-precision tracking of HMDs allows the observer to study objects in VR up close and peer
around or under objects. With an HMD, the observer's perspective of the virtual environment will
always be correct, regardless of where the objects are or where the observer is looking.

Many early adopters of VR using theatre and cave projection have a poor impression of VR due to the
aforementioned limitations of such systems. Regardless of the benefits of HMD-type VR, poor software
implementation can lead to inadequate graphics performance with stuttering and lagging image display.
Many people have been discouraged from using VR as a design tool after experiencing poor VR imple-
mentations.

As with many new things, there is a general lack of understanding of VR technology. People incorrectly
identify the HMD as the 'magic' hardware that creates the virtual worlds. What they fail to understand
is that the HMD is essentially a dumb display device. Some believe that they can just plug the HMD
into any CAD machine and get VR. A powerful computer and well-written software are needed to create
the 3D graphics at a speed fast enough to sustain the VR illusion. The display for each eye needs to be
refreshed at 90 frames per second (a budget of 1/90 s ≈ 11 ms per frame), and the time lag between
moving your head and the image updating to match needs to be less than 11 milliseconds.

While the graphical load can be carefully optimised when making a VR game or a training simulation
to keep the display fluent, such manual attention would be both time consuming and could remove
important details from the 3D model. Both are limitations to using VR as a design collaboration tool.
For VR to be useful as a design tool, the ShipSpace™ application must be able to handle massive
amounts of 3D information and be clever enough to perform graphical optimisations automatically,
keeping the display fluent without losing any detail.

2. Virtual Reality as a Design Tool

The key benefits of VR as a design tool are:

• Most realistic way to experience an unbuilt design without using any physical constructs
• No need to be able to interpret a drawing
• No need to imagine, everyone sees the correct picture
• Reduced abstraction and therefore reduced cognitive load
• Cognition can be applied more productively, i.e. solving the problem or improving the design.
• Realistic sense of scale and arrangement
• Sight lines can be checked
• Work areas can be checked
• People can collaborate on the design together even though they may be in different locations
• No need to travel in order to discuss, leading to more frequent communication on design issues
• Experts can more easily add value to the design process

The limitations are:

• Requires a 3D model, which can be expensive if not already created in the design process
• Realistic tactile interaction with the virtual environment is limited
• Navigation can be unrealistic (teleporting)
• Poor implementation can cause motion nausea and migraine
• Some people don't like using HMDs (discomfort, vanity, etc.)

Many aspects of ship design can benefit from better communication and collaboration of ideas. The use
of virtual reality for human factors engineering and design reviews is particularly useful. Some applica-
tions of VR using ShipSpace™ that have been demonstrated to be useful for ship design include:

• Arrangements
There are many spaces within a ship that demand careful arrangement in the concept design
stage. Spaces such as the bridge, operations rooms, galley and messing spaces all require a high
level of teamwork and can be tested in ShipSpace™ to ensure optimal functionality. Other
spaces, such as engine rooms can be so densely packed with equipment and services, that access
for maintenance and removal paths are often compromised. VR can be used to quickly check
dimensions, potential interferences and serviceability. Arrangement of mooring equipment, es-
pecially with the increasing use of enclosed mooring spaces, can also benefit from using VR for
optimisation and validation.

• Sight-lines
Sight lines can be accurately investigated using virtual reality, allowing ergonomic and effec-
tiveness optimisation. Traditionally, full-size mock-ups have been used to check arrangement
of equipment in the bridge. Such physical constructions, normally inside a large building /
warehouse, reveal nothing about the ability to see another vessel sailing in formation, checking
visibility while sailing into ports or viewing activities on open deck areas. These aspects can
easily be modelled and better checked within a virtual reality environment.

• Launch & Recovery


Small boat and helicopter operations require a high degree of co-ordination and are inherently
dangerous for personnel involved. Visibility between various control stations is paramount and
can be optimised in a VR environment against ergonomic, technical and other requirements.

ShipSpace™ is a design visualisation and collaboration tool developed by Stirling Labs for use in the
maritime and naval design process. ShipSpace™ reads normal CAD files directly and displays the 3D
model in virtual reality, complete with all of the detail at very high fidelity. Special techniques are used
to maintain rendering latency of less than 11ms so that immersion is maintained, and surfaces and light
are treated carefully to create a strong sensation of presence. ShipSpace™ provides a range of sophis-
ticated tools to interrogate the model in various ways, such as reading metadata, “x-ray vision”, con-
trolling object layer visibility, measuring distances and many others. The tool enables up to 64 users to
collaborate in the same virtual space.

3. Virtual Reality in the Ship Design Process

3.1 Sales

The value of using VR for sales should not be underestimated. In past projects, it has often been a single
well-executed artist's impression or a scale model that provided the driving vision for a new project, be it
a ship, aircraft or building; such is the power of visualisation. This inspiration is needed to garner
support and ultimately funding for any undertaking of significance such as a new ship. As noted above,
the use of computer-based 3D models has largely replaced the traditional work of artists and model
makers.

Typically, any existing 3D models built for exterior renders can be viewed in VR using ShipSpace.
Often some additional interior rooms of interest such as the bridge will also be modelled to provide a
more interesting experience for the viewer. These models are accurate in layout and arrangement but
lack any proper engineering details such as the structural elements, piping and so forth. Even equipment
arrangements such as navigation consoles and furnishings will be notional as opposed to accurate or
final. The idea of the sales model is to give an overall impression of the vessel with realistic details
without the details being fully resolved at that stage. This is also a fundamental aspect of the concept
design which will be discussed next.

The use of VR for sales has been the easiest and most natural implementation of this new technology.
Users are almost always keen to experience the design in VR. Initially this is partly due to the novelty
of VR, which still only a few percent of the general population have experienced. Users are then over-
whelmingly astounded by the realism of the experience; in most cases new users are in a state of disbe-
lief. The visual accuracy of the ShipSpace™ VR experience is so compelling that the brain is fooled.
New users often reach out for handrails that they see in VR when approaching the side of a
deck, or duck under low overhead objects. This demonstrates that the level of realism provided by
the ShipSpace™ VR tool is so convincing that the amygdala, the small part of the brain that pro-
vides survival instincts, is over-powering the frontal cortex, the large part of the brain responsible for rea-
soning and higher cognitive functions.

3.2 Concept Design

While the 3D models built for sales use are based on the concept design, implementing VR in the actual
process of developing the concept design is a different matter. Developing a concept design requires
people with good visualisation and imagination skills because they are creating something new. Con-
vincing these people that they need VR tools to help with their concept design can be challenging. The
common objection to using VR for concept design is the time-consuming need to create a 3D model in
the first place.

The concept design phase can be a very fluid process, with arrangements, superstructures and hull forms
changing constantly. Good concept designers have a very high level of ship knowledge and are highly
adept at understanding how various design changes will affect other aspects of the design. Currently the
methods and tools for creating 3D models can be too time consuming to be useful for concept design.
Compounding the problem is that the concept design is generally developed by individuals whose core
competency does not include construction of 3D models. This is changing, however, with younger
designers in particular recognising the benefits of VR and being more willing to create the necessary 3D models.

Where the company has also experienced good use of 3D models and VR in developing concepts is in
validation of the design. For example, utilisation of hull spaces where waterlines change rapidly in the
fore and aft ship can be evaluated in VR once the hull stability model has been developed. Sightlines
from the bridge to working deck areas can be quickly validated or tweaked once the 3D exhibition
model has been developed. This tends to come later, in the tender and contract phase of the project,
when it is important to check aspects of the design carefully.

3.3 Basic Design

In the basic design phase of a vessel, the general arrangement of the vessel is agreed, and the most
important aspects of the ship design are the development of the structure of the vessel and the arrangement
of the main machinery spaces. In this phase, structure and equipment need to come together in a convolution
of either connection with or avoidance of one another. Equipment such as engines and shaftlines needs to
marry to structural foundations while having sufficient clearance for access and maintenance. Piping
and ductwork need to snake through girders and deck penetrations to make sure the design can be made to
work when it comes to the detail design. Many of the experience-based dimensioning and space esti-
mations need to be verified in the basic design.

In this phase of the project the benefits of the ShipSpace™ VR tool are obvious. As the structural model
is being developed in 3D, equipment can be easily added to the model. Naval architects and engineers
can then experience the developing design in VR to better resolve interface and clearance issues. Ar-
ranging equipment and reticulating piping in an engine room is a complex three-dimensional puzzle.
The benefit of using VR is that it is much easier to use your intuition, since you are experiencing the
design as if it was already built. The advantage of naturally changing your viewpoint in ShipSpace™
VR by moving your body and head as you would in real life makes evaluating the design much easier,
quicker and more thorough. Users appreciate that everything they see is correct in scale and perspective.

People who have used ShipSpace™ and gone back to using a computer monitor for viewing a 3D model
often comment that it feels like they are just looking through a camera or window. It is not possible to
understand the size and scale of the 3D model on a monitor. The control of the viewpoint is also
unnatural and can quickly induce nausea, especially on larger screens.

Class Societies are moving increasingly towards using 3D models in the approval process. The use of
VR tools and the natural way the design can be understood and communicated can only aid the process.
Furthermore, the ShipSpace™ VR system enables up to 64 simultaneous users in the same VR session.
As a designer, being able to collaborate with Class experts inside the 3D model, as if it was already
built, will lead to faster and easier resolution of many design issues. We look forward to an opportunity
to trial such a system with Class.

3.4 Detail Design

One of the great benefits of using VR tools in the detail design phase is the ability to find and resolve
clashes. Typically, during the detail design process, many people are involved with developing the
piping and cabling routes from schematic or single-line drawings. A preliminary space reserve or ‘space
grab’ is often modelled by experienced designers as a first step before detail designers follow or trace
over with the final pipe spool design complete with joining flanges, valves, filters and other elements
that could be in the pipeline.

Despite the initial space reserve planning, invariably clashes (interference between two or more objects
that could not exist in the real world), occur in the detail design process that need to be resolved or de-
conflicted. Furthermore, even if there are no clashes, the design must be checked for ‘build-ability’ by
experienced production personnel and possibly modified to ensure that it is possible to construct and
install.

The current method of checking for clashes and build-ability is to use a CAD system or 3D model
viewing application on a screen display. Most CAD software has built-in clash detection; however, such
checking can often be flawed because many correct instances are flagged as clashes. There is also no
capacity to check for build-ability, such as the situation where there is no clash, but insufficient space
has been allowed for installation or disassembly.

The author has experienced on many occasions finding significant clashes while examining a model
using the ShipSpace™ VR tool, and then having great difficulty finding the same clashes using a
traditional CAD system and a screen display. The author suggests that a VR Head Mounted Display
(HMD) is much better than a screen simply because users are able to take in so much more of the
environment at any given time. When using a CAD system and screen display, the field-of-view (FOV)
is limited to 30-45° horizontally. A consumer VR HMD has a FOV of typically 110°. This amounts to
5 to 10 times more viewable area when considering a 2:1 horizontal to vertical FOV ratio.

The natural way in which the HMD user changes their view point, by moving their head and body like
in the real world, also seems to allow much faster and effective interrogation of the digital environment
compared to the typical hand operated mouse interface of a CAD and screen display system.

One barrier the author has seen to using VR in detail design is actually the shipyard. Once in contract
with the Owner, the last thing the shipyard wants is any change to the detailed design. Using VR tools
that provide the Owner with a better understanding of the design increases the opportunities for the
Owner to make changes, which is not in the interest of many shipyards whose primary concern is staying
on schedule. Changes, regardless of variation costs, will always put pressure on the schedule. Some
shipyards only begrudgingly provide Owners access to the 3D model for the same reasons, or delay
access until it becomes difficult to make any changes to the design because the module is already in
production.

The author suggests that a main issue is that multiple types of 3D models are used throughout the design
process. There is a sales and concept model, an FEM model, a detail design model, etc. Each type of
model is suited to a different stage of development. For example, for concept design the 3D model must
be very easy and fast to construct and modify. However, for detail design the 3D model must be
configured to work with the other production systems in the yard, such as the parts database and plate
cutting machines. If the yard has been contracted to develop the design from the concept stage, this
could be less of a problem.

4. Conclusion

Good spatial skills and high cognitive loads are required to imagine and interpret drawings into mental
images that are useful for solving design problems or optimising solutions. The notion of virtual reality,
its benefits for design and the requirements to effectively deliver a VR experience were presented. VR
removes the need to use cognitive load for visualisation and imagination and allows users to apply their
thinking to solving design problems. Subject matter experts who may not have good visualisation skills
can more easily contribute to the vessel design.

A number of case examples demonstrate how naval architects, engineers, designers and operators are
using the ShipSpace™ VR tool to assist in many phases of the design process. Although the overall
benefits of VR are not in question, there are barriers and resistance to using VR tools, similar to the
introduction of any revolutionary new technology.

References

GOH, K.; SPENCER, R.J. (2018a), Virtual Reality for Design of New Warship Concepts, in RINA
Warship 2018: Future Surface Vessels, London

GOH, K.; SPENCER, R.J. (2018b), Virtual Reality for Human Factor Engineering, in RINA Human
Factors, London

POPESCU, G.V.; BURDEA, G.C.; TREFFT, H. (2002), Multimodal Interaction Modeling, Handbook
of Virtual Environments: Design, Implementation, and Applications, Lawrence Erlbaum Assoc.,
pp.435-454

Roll Damping Predictions using Physics-based Machine Learning
Gabriel D. Weymouth, University of Southampton, Southampton/UK, G.D.Weymouth@soton.ac.uk

Abstract

Computational Fluid Dynamics simulations and Machine Learning models are useful prediction
tools that have the potential to work even better when used together. This paper presents a physics-
based machine learning approach that supplements standard regression basis functions, such as
polynomials, with simple physical models of the system. This mitigates the data dependence of
machine learning predictions, and the associated computational cost of generating the training set
simulations. We illustrate this method by increasing the accuracy of roll-damping power coefficient
predictions by 50 to 200% using O(10) training examples.

1. Introduction

Computational Fluid Dynamics (CFD) is increasingly used to analyse the hydrodynamic response of
ship structures, but the cost of such simulations is still too high for iterative design work. Given
practical limits on computational time and the large number of adjustable parameters for a given
design, only a small number of CFD simulations can be run, making it difficult to obtain a unique
global optimum for single objective optimization, and essentially impossible to use in mapping out a
complete optimal Pareto front (Schmitz et al., 2002).

Machine Learning appears to be the panacea of our time, but it does little on its own to address this issue.
The vast majority of state-of-the-art machine learning methods require O(10³)-O(10⁶) sets of
examples, known as training data, in order to determine the model parameters (Witten et al., 2016). In
addition, more complex physical systems typically necessitate more elaborate learning models, which
must be trained with correspondingly larger sets of examples (Evgeniou et al., 2000). Once this
learning process is complete, the evaluation of the model to predict new cases is typically negligible,
but the generation of this number of simulated training examples using CFD is completely
impractical.

In this work we will examine the utility of including CFD in the process of optimizing roll-damping
keels. A roll-damping keel is an ideal candidate system for the inclusion of CFD as viscous effects are
dominant. The particular physics of roll damping differ substantially to those of the remaining degrees
of freedom of ship motions, where motions such as heave, sway and pitch may be easily and
sufficiently accurately obtained from potential flow methods in the form of strip theory. The viscous
nature of roll-damping results in substantial non-linearity in roll responses for various hull shapes,
ship operational types and requirements, and of course, intensity of the forced roll responses due to
irregular seas (Ikeda et al., 1978).

This work focuses on two elements of viscous roll-damping predictions. First, we will investigate the
periodic flow past a rolling ship using a novel extension of 2D+T methods (Fontaine and Cointe, 1997;
Weymouth, 2013). While many previous investigations of viscous roll-damping, such as Jaouen et al.
(2011a, 2011b) and Hubbard and Weymouth (2017), use a rolling body without forward motion, this
has the inherent issue that the local vortex flow is not convected downstream, leading to modelling
and accuracy issues. The new 2D+2T approach avoids this issue and is many orders of magnitude
faster than 3D unsteady ship simulations, enabling the production of a large set of roll-damping
simulations.

Second, we use the mean and amplitude of the unsteady roll-damping power to investigate the data
dependence of machine learning methods on nonlinear fluid dynamics problems such as roll-damping,
using the open source library Scikit-learn (Pedregosa et al., 2011). In particular, Physics-based Learn-
ing Models (Weymouth and Yue, 2013) are shown to greatly mitigate the data dependence of typical
machine learning methods. As such, these models enable a rapid way to establish surrogate models for
global multi-objective design analysis and optimization.

Fig.1: Instantaneous in-plane vorticity (red/blue colouring) due to periodic roll motion of a 3D
rectangular cylinder with bilge keels (shown as negative space). The left image is a volume
rendering and the right image shows the vorticity averaged longitudinally through the domain.
From Hubbard and Weymouth (2017).

2. Roll-damping model

Roll damping is one part of the full 6DOF seakeeping problem, but the roll-motion is typically
decoupled from the other degrees of freedom in ship dynamics analysis. In addition, because the wave
damping caused by roll is linear and fairly simple to determine, viscous roll-damping is often studied
in isolation, without a free-surface, by mirroring the geometry across the waterline. Finally, in order to
simplify experimental measurements, the effect of ship taper and forward speed is neglected, reducing
the problem of viscous roll-damping to a periodically oscillating cylinder in otherwise still water.

As shown in Hubbard and Weymouth (2017), numerical methods can be used to simulate this
experimental set-up with a high degree of accuracy. However, that reference shows that this model of
roll damping has progressed too far from the original physics in a critical way. Fig.1 shows that as the
cylinder is forced to oscillate, it generates more and more vorticity in the near wake, which only very
gradually is diffused into the surrounding flow. In contrast, shed vorticity in the real ship flow is
immediately swept away by the forward motion of the ship, completely changing the near body flow
and strongly impacting the forces.

Fig.2: Two-dimensions plus time (2D+T) representation of a ship flow, from Weymouth et al. (2006).
The left shows a "fish eye view" of the flow as it sweeps past the hull, transforming a pseudo-
steady 3D plunging ship bow wave into a 2D unsteady plunging wave generated by a 'flexible
wave-maker'.

To avoid this issue while still limiting the computational effort, this work extends the classic 2D+T
method to periodic flow cases. As illustrated in Fig.2, the classical 2D+T method transforms a three-
dimensional steady-state flow into a two-dimensional unsteady flow (thus 2D+T) by restricting the
simulation to one plane that the ship passes through, an approximation that is only valid in the limit of
very slender and high-speed vessels. As shown in the figure, the cross-section geometry changes in
time as the ship travels, making Cartesian grid methods ideally suited for these simulations; see
Weymouth et al. (2006) for initial 2D+T applications and Weymouth (2013) for a detailed treatment of
the geometry.

Fig.3: Two of the 2D+T roll-damping cases illustrating two different circle ratios (r/R = 0.9 and
r/R = 0.1), two different keel angles (α = 90°, α = 72°) and the same roll magnitude
(Θ = 8°). The images show the instantaneous vorticity at the end of the 'body', i.e. x = L.

This model of the flow assumes that each stationary plane will see the same flow, since the
flow is steady state. However, in the case of a ship in roll, the flow is periodic and different
simulations must be run to capture each of these phases of motion. Since the original flow is 3D+T,
and we are still mapping a spatial dimension to time (D→T), we call this a 2D+2T simulation. Note
that each plane is a completely independent simulation, making them trivially parallel, and that only
a few such planes are required to determine the unsteady behaviour.

3. Canonical roll-damping case

While the 2D+2T model described above is capable of modelling realistic hull geometries, the focus
of the current paper is the integration of machine learning methods. Therefore a simple canonical test
case of a circular cylinder geometry with four bilge keels is developed to highlight issues with
machine learning methods for nonlinear viscous flows, Fig.3. Two geometric features are studied in
this work: the angle between the keels α, and the relative size of the keels measured by the ratio r/R,
where r is the circle radius and R the distance from the keel tip to the center of the circle. Fig.3
shows two configurations of this test case.

The geometry was prescribed to roll harmonically via the simple equation θ(t) = Θ sin ωt. The roll
amplitude Θ is proportional to the Keulegan-Carpenter number, and was studied as an additional
variable to the two geometric variables. The frequency ω sets Sarpkaya's beta,

    β = ωR²/ν

where ν is the fluid kinematic viscosity. In this work we use a constant β = 10⁵. The use of 2D+2T
means we cannot define a forward speed or boat length independently, only their ratio, the time scale
T = L/U, which is set equal to the period of motion T = 2π/ω.

Fig.4: Roll-damping power coefficient for the Fig.3 (left) case at four different phases of the motion
period T. Note the power itself has period 0.5T. The dashed line is the mean power coefficient,
and the dash-dot lines indicate mean-absolute-deviation.

Fig.5: Mean and amplitude of roll-damping power coefficient for all 1000 cases. Note the amplitude
of the power coefficient is scaled by Θ and the top two plots have a logarithmic y-scale.

The simulations used LilyPad, an open-source Navier-Stokes solver that has been heavily validated
for unsteady fluid dynamics simulations, Weymouth (2015). The same grid was used for all cases,
with domain x, y = −10R to 10R and grid size h = R/140. Twelve slices over 0.5T were used to
discretize the roll period. Each parameter was varied over 10 values, leading to 1000 independent roll-
damping cases, two of which are shown in Fig.3. The total computational time for the simulations on
a 12-core workstation was around 6 hours.
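
As a concrete illustration, the case matrix implied by this description can be generated as below. This
is a minimal sketch: the ten-point ranges for Θ, α and r/R are assumptions for illustration (the paper
only fixes the two Fig.3 examples), not necessarily the exact values simulated.

import itertools
import numpy as np

# Assumed ten-point ranges for each input; the paper states only that each
# parameter was varied over 10 values, giving 10 x 10 x 10 = 1000 cases.
thetas = np.linspace(1.0, 10.0, 10)    # roll amplitude Theta in degrees
alphas = np.linspace(9.0, 90.0, 10)    # keel angle alpha in degrees
ratios = np.linspace(0.1, 0.9, 10)     # circle ratio r/R, spanning the Fig.3 examples

cases = list(itertools.product(thetas, alphas, ratios))
assert len(cases) == 1000              # one independent 2D+2T case per combination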

The performance of each case was measured using the power P required to roll the body through the
fluid, characterized by the power coefficient

    C_P = P / (ρΩ³R⁴)

where Ω = ωΘ = |θ̇| is the amplitude of the rotation velocity. Fig.4 shows the results for a
representative case. Note that the power required increases as the flow develops along the 2D+T
'hull'. There are periods of negative powering, where the fluid is transferring energy into the body
motion, but the positive values are much larger.

Fig.5 shows the simulated mean power coefficient C̄_P and amplitude |C_P| for all 1000 cases. Note that
even after dimensionally scaling the power (and adjusting the amplitude to be scaled by acceleration
instead of velocity squared, i.e. Ω̇Ω = ω³Θ² = Ω³/Θ) the results depend nonlinearly on all three input
variables.

4. Machine Learning predictions

The data in Fig.5 was next used to study the data dependence of machine learning methods in
nonlinear fluid dynamics flows. While Deep Neural Nets are all the rage these days because of their
universal description and automatic feature selection capabilities, the data in Fig.5 is not nearly
plentiful enough to train such a method. Instead this work uses a simple linear Ridge-Regression
model, otherwise known as Tikhonov regularization (Evgeniou et al., 2000).

Fig.6: Ridge-Regression surface of the mean power coefficient trained using all 1000 cases (left) and
the intermediate model surface (right). The points are a slice of the data for Θ = 8° and are
coloured by the error of the fit at each point.

In Ridge-Regression, a linear least-squares error function is supplemented by a regularization term
proportional to the model's second derivative, so increasing the regularization strength decreases
the variance in the model, improving its generalization. This class of regression can be used with any
basis functions, but this work uses a simple polynomial kernel of the input variables, i.e.

    X = [Θ, α, r/R, Θ², Θα, Θr/R, α², αr/R, (r/R)², ⋯],   y = [C̄_P]

and the mean power coefficient was used as the target function. The method was implemented in
Scikit-learn, with a polynomial kernel up to 4th-order terms, and 10-fold cross validation was used to
determine the regularization strength.
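
A minimal Scikit-learn sketch of this regression set-up is given below. The pipeline structure, the
RidgeCV search range for the regularization strength, and the random placeholder data standing in for
the 1000 simulated cases are illustrative choices consistent with the text, not the exact implementation.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import RidgeCV

# Placeholder data standing in for the simulations: columns are (Theta, alpha, r/R).
rng = np.random.default_rng(0)
X = rng.uniform([1, 9, 0.1], [10, 90, 0.9], size=(1000, 3))
y = rng.standard_normal(1000)          # in practice, the simulated mean power coefficients

model = make_pipeline(
    PolynomialFeatures(degree=4, include_bias=False),   # polynomial kernel up to 4th-order terms
    RidgeCV(alphas=np.logspace(-6, 3, 40), cv=10),      # 10-fold CV sets the regularization strength
)
model.fit(X, y)
print("Captured variance (R^2):", model.score(X, y))    # 97.6% reported with the real data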

When trained using all 1000 cases, the learning model captures 97.6% of the variation in the data.
Fig.6 (left) shows the Ridge-Regression model applied to a slice of the input data where Θ = 8°, which
shows an excellent fit and little to no spurious variance in the model.

However, no practising engineer has time to run 1000 full 3D unsteady CFD simulations. It is much
more relevant to ship science and other engineering domains to train the learning model with only a
handful of cases. Fig.7 (blue dots and line) shows that as the number of samples in the training set is
reduced, the accuracy drops drastically, and the variability of the model increases dramatically. For
example, with 100 simulations you have between an 80% and 95% accurate model, and this
difference depends entirely on which points you happen to simulate ahead of time. Drop the training
set size to 30 and the model is between 10% and 70% accurate, and below that even the median model
of this class of machine learning methods drops to below 50% accuracy.
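
The Fig.7 study can be reproduced in outline as follows; the 20 random training subsets per size follow
the text, while the helper name, the subset sizes and the scoring on the held-out remainder are
assumptions of this sketch.

import numpy as np

def accuracy_vs_training_size(model, X, y, sizes=(10, 30, 100, 300), repeats=20, seed=1):
    """Median R^2 over `repeats` random training subsets of each size (cf. Fig.7)."""
    rng = np.random.default_rng(seed)
    medians = {}
    for n in sizes:
        scores = []
        for _ in range(repeats):
            idx = rng.permutation(len(X))
            train, test = idx[:n], idx[n:]          # fit on n cases, score on the rest
            model.fit(X[train], y[train])
            scores.append(model.score(X[test], y[test]))
        medians[n] = float(np.median(scores))
    return medians

# Usage with the pipeline and placeholder data from the previous sketch:
# print(accuracy_vs_training_size(model, X, y))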

5. Physics-based Machine Learning

The method of Physics-based modelling introduced in Weymouth and Yue (2013) limits the
dependence of the learning model on the specifics of the data set by including additional physics-
based information about the system. In particular, the method uses a set of intermediate system
models to supplement the (generic) polynomial basis. As long as this intermediate model is
functionally similar to the target function, the resulting learning model is much more robust.

Fig.7: Prediction accuracy of mean power coefficient as the training set size is varied. Each training
size was tested with 20 different random selections of training sets with the accuracy of each
model marked by a dot. The solid lines indicate the median accuracy at a given training set size.

In this case, there is no obvious pre-existing intermediate model, as the test case is such a departure
from typical ship sections. The simplest model is to assume that each variable acts independently,

    f_IM(Θ, α, r/R) = f_Θ(Θ) f_α(α) f_r/R(r/R)

Comparing the two cases in Fig.3, we know physically that the power will not depend on α > π/4,
since the keels would all be too far apart to interact, while α ∼ 0 will feature a strong negative
interference. As such, f_α = 1 + tanh 2α is reasonable. Similarly, the circle size will be very
important when r/R ∼ 1, since the keel length is small, but for r/R < 2/3 the circle is too far from
the keel tip to influence the development of the vortex. Therefore, f_r/R = tanh[4(1 − r/R)] is
reasonable. Finally, quadratic drag should dominate for large Θ, but for small roll a linear drag model
is more appropriate. A few simulations would be enough to fit f_Θ = 0.2Θ.
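
The intermediate model above translates directly into code; a sketch follows, using the tanh shape
functions quoted in the text. Appending f_IM as an extra column of the regression basis is one plausible
reading of "supplementing the polynomial basis", as the paper does not spell out the exact
implementation; the assumption of angles in radians is also this sketch's choice.

import numpy as np

def f_intermediate(theta, alpha, r_over_R):
    """Separable physics-based model f_IM = f_Theta(Theta) * f_alpha(alpha) * f_r/R(r/R)."""
    f_theta = 0.2 * theta                    # linear drag, adequate for small roll amplitudes
    f_alpha = 1.0 + np.tanh(2.0 * alpha)     # keel interaction saturates for alpha > pi/4 (radians)
    f_rR = np.tanh(4.0 * (1.0 - r_over_R))   # circle influence vanishes for r/R < 2/3
    return f_theta * f_alpha * f_rR

# One way to supplement the polynomial basis: add f_IM as an extra feature column,
# where X_poly is the polynomial feature matrix from the earlier sketch.
# X_supplemented = np.column_stack([X_poly, f_intermediate(X[:, 0], X[:, 1], X[:, 2])])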

Fig.6 shows the intermediate model prediction on the same slice of data, and while the accuracy is
significantly lower (around 54%), the shape of this model is sound. Fig.7 (purple dots and line) shows
that adding this model to the basis functions of the same Ridge-Regression model roughly doubles the
median accuracy of the resulting model and reduces the model variation by a factor of 4 for O(10)
training set sizes.

6. Conclusions

In this work, a novel 2D+2T concept was developed to model periodic ship flows. This model has
more physical realism than neglecting the forward motion of the hull, is simple to compute using
Cartesian-grid methods, and is easily extended to realistic hull geometries and even including the
effects of the free surface.

This concept was used to generate a large 1000 case canonical data-set of roll-damping power
coefficients with two geometric and one kinematic independent variables. Using this data, a broad
study was carried out on the impact of training set size on machine learning accuracy for nonlinear
fluid dynamic problems. While typical machine learning approaches, such as Ridge-Regression with
polynomial basis functions, quickly loose accuracy and consistency as the amount of training data is
reduced, Physics-based learning methods offer a fairly turn-key method to introduce simple physical
insights into the learning process, greatly improving model predictions in the limit of few examples.
In this future, we plan to combine these two approaches to model full 3D ship roll-damping
characteristics with only a few example points using 2D+2T as the simplified model.

Acknowledgements

The author would like to acknowledge the work of Geoffrey Gonzalez for his discussions on machine
learning and Scikit-learn during his MSc.

References

EVGENIOU, T.; PONTIL, M.; POGGIO, T. (2000), Regularization networks and support vector
machines, Advances in Computational Mathematics 13/1, pp.1–50

FONTAINE, E.; COINTE, R. (1997), A slender body approach to nonlinear bow waves, Phil.
Trans. Royal Society of London A 355, pp.565–574

HUBBARD, I.; WEYMOUTH, G. (2017), Physics-Based and Learning-Based Roll-Damping
Predictions, 20th Numerical Towing Tank Symp. (NuTTS)

IKEDA, Y.; HIMENO, Y.; TANAKA, N. (1978), Components of roll damping of ship at forward
speed, J. Society of Naval Architects of Japan 143, pp.113–125

JAOUEN, F.; KOOP, A.; VAZ, G. (2011a), Predicting roll added mass and damping of a ship hull
section using CFD, OMAE Vol.49085, p.2011

JAOUEN, F.; KOOP, A.; VAZ, G.; CREPIER, P. (2011b), RANS predictions of roll viscous damping
of ship hull sections, 5th Int. Conf. Computational Methods in Marine Engineering (MARINE)

MAERTENS, A.P.; WEYMOUTH, G.D. (2015), Accurate Cartesian-grid simulations of near-body
flows at intermediate Reynolds numbers, Computer Methods in Applied Mechanics and Eng. 283,
pp.106–129

PEDREGOSA, F.; VAROQUAUX, G.; GRAMFORT, A.; MICHEL, V.; THIRION, B.; GRISEL, O.;
VANDERPLAS, J. (2011), Scikit-learn: Machine learning in Python, J. Machine Learning Research
12, pp.2825-2830

WEYMOUTH, G.D.; DOMMERMUTH, D.G.; HENDRICKSON, K.; YUE, D.K.P. (2006), Advance-
ments in Cartesian-grid methods for computational ship hydrodynamics, 26th Symp. Naval Ship
Hydrodynamics

WEYMOUTH, G.D. (2013), Comparison and Synthesis of 2D+ T and 3D Predictions of Non-Linear
Ship Bow Waves, OMAE Conf.

WEYMOUTH, G.D.; YUE, D.K. (2013), Physics-based learning models for ship hydrodynamics, J.
Ship Research 57/1, pp.1–12

WEYMOUTH, G.D. (2015), Lily pad: Towards real-time interactive computational fluid dynamics,
arXiv:1510.06886.

WITTEN, I.H.; FRANK, E.; HALL, M.A.; PAL, C.J. (2016), Data Mining: Practical machine learn-
ing tools and techniques, Morgan Kaufmann

Index by Authors
Aksnes 50 Jo 67 Son 67,180
Alblas 213 Kahva 224 Spencer 500
Antognoli 438 Kanellopoulou 224 Stakkeland 359
Askounis 428 Karvonen 191 Steidel 261
Berge 314 Kil 180 Stensrud 391
Bertram 7 Kim 67,451 Sugimoto 415
Bibuli 438 Klausen 391 Sypniewski 470
Bjermeland 50 Köhler 24 Thomas 301
Blanchard 374 Kokkinakos 428 Tsapelas 428
Bole 163 Kooij 104 Turan 332
Boogaart 286 Kvam 314 Uzun 332
Bradbeer 301 Lee 67,180,451 Van der Tas 254
Byrne 500 Linskens 254 Van Dijk 286
Byun 180 Liu 156 Van Os 324
Cabos 391 Luxcey 50 Vodas 428
Cady 246 Ma 156 Wahlström 191
Cassar 301 Macedo 224 Walther 274
Chatzikokolakis 428 Manderbacka 415 Wang 156
Colling 132 Marrone 438 Weymouth 512
Coraddu 332 Medhaug 94 Woo 451
Dafermos 224 Min 451 Xie 391
Demirel 332 Mondoro 344 Yan 156
Deul 213 Moussault 213 Zerbst 146
De Vos 286 Mouzakitis 428 Zissis 428
Diez 438 Munoz 78
Dodkins 201 Nijdam 213
Donohue 470 Nowak 359
Drazen 344 Odetti 438
Durante 438 Ødegårdstuen 391
Eriksen 33 Oh 67
Erikstad 458 Ommani 50
Ficini 438 Perez 78
Florean 224 Plowman 7
Forster 191 Porathe 352
Gaspar 493 Puustinen 191
Gatchell 224 Radosavljevic 324
Gehrke 24 Raessi 391
Goh 505 Reinholdtsen 50
Goodrum 470 Roh 67
Goodwin 201 Saariluoma 191
Grisso 344 Santic 438
Grudniewski 374 Savasta 374
Hagaseth 314 Schulte 274
Hahn 261 Sears 324
Hamre 391 Seppälä 405
Harries 224 Serani 438
Hekkenberg 104,132,213 Shen 451
Hoek 213 Shields 470
Hollister 118 Shin 451
Houghton 500 Sieranski 146
Hulkkonen 415 Singer 470
Jahn 274 Skramstad 391
Jeong 451 Sobey 374

19th Conference on
Computer Applications and Information Technology in the Maritime Industries

COMPIT'20
Pontignano / Italy, 11-13 May 2020

Topics: Artificial Intelligence / Big Data / CAX Techniques / Digital Twin / Simulations /
Virtual & Augmented Reality / Robotics / Autonomous Technology
In Design, Production and Operation of Maritime Systems
Organiser: Volker Bertram (volker.bertram@dnvgl.com)
Advisory Committee:
Marcus Bole AVEVA, UK Henrique Gaspar NTNU, Norway Rodrigo Perez SENER, Spain
Andrea Caiti Univ Pisa, Italy Stefan Harries Friendship Systems, Germany Ulf Siwe Swed. Mar. Adm., Sweden
Jean-David Caprace COPPE, Brazil Darren Larkins SSI, Canada Myeong-Jo Son Korean Register, S. Korea
Nick Danese NDAR, France Kohei Matsuo NMRI, Japan Julie Stark US Navy, USA
Bastiaan Veelo SARC, Netherlands

Venue: The conference will be held in the Certosa di Pontignano in Pontignano nr Siena

Format: Papers to the above topics are invited and will be selected by a committee.

Deadlines: anytime Optional "early warning" of intent to submit paper
20.12.2019 First round of abstract selection (1/3 of available slots)
20.1.2020 Final round of abstract selection (remaining 2/3 of slots)
21.3.2020 Payment due for authors
21.3.2020 Final papers due (50 € surcharge for late submission)
Fees: 600 € / 300 € regular / PhD student – early registration (by 31.1.2020)
700 € / 350 € regular / PhD student – late registration

Fees are subject to VAT (reverse charge mechanism in Europe)
Fees include proceedings, lunches, coffee breaks and conference dinner
Sponsors: Aveva, Korean Register, Sener, Siemens (further sponsors to be announced)

Information: www.compit.info
