
Asset Condition, Information Systems and Decision Models

Joe E. Amadi-Echendu · Kerry Brown
Roger Willett · Joseph Mathew
Editors
Editors

Joe E. Amadi-Echendu, Prof.
University of Pretoria
Graduate School of Technology Management
Pretoria 0002
South Africa

Kerry Brown, Prof.
Southern Cross University
Tweed Heads NSW 2485
Australia

Roger Willett, Prof.
University of Otago
Department of Accountancy and Business Law
Dunedin 9015
New Zealand

Joseph Mathew, Prof.
Queensland University of Technology
Centre for Integrated Engineering Asset Management (CIEAM)
Brisbane QLD 2001
Australia

ISBN 978-1-4471-2923-3 e-ISBN 978-1-4471-2924-0


DOI 10.1007/978-1-4471-2924-0
Springer London Dordrecht Heidelberg New York

British Library Cataloguing in Publication Data


A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2012942608

© Springer-Verlag London Limited 2012

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be
reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of
the publishers, or in the case of reprographic reproduction in accordance with the terms of licences
issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms
should be sent to the publishers.
The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of
a specific statement, that such names are exempt from the relevant laws and regulations and therefore
free for general use.
The publisher makes no representation, express or implied, with regard to the accuracy of the
information contained in this book and cannot accept any legal responsibility or liability for any errors
or omissions that may be made.

Cover design: eStudioCalamar, Figueres/Berlin

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Foreword

I commend this second issue of the Engineering Asset Management Review
(EAMR Volume 2) to you as we consolidate the establishment of a coherent and
integrated body of knowledge to guide all elements of managing physical
engineering assets. Each volume in the EAMR Series is a thematic, peer-reviewed
collection of selected articles from our past World Congresses on Engineering
Asset Management (WCEAM) (www.wceam.com), which began in Australia in
2006 and have since been held in the UK (2007), China (2008), Greece (2009),
Australia (2010) and the USA (2011).
Engineering asset management (EAM) is gaining acceptance as a term that en-
compasses all types of engineered assets, including built environment, infrastruc-
ture, and plant and equipment. By this definition, human, financial, and informa-
tion and communication assets are emphasized only in terms of their relationship
with the specific tasks of optimizing the service delivery potential of an engi-
neered physical asset. While optimizing service delivery potential is the primary
objective, it is important to note that EAM strives to achieve this in the broader
context of maximizing value and minimizing risks and costs. Sustainability im-
peratives now also impact on EAM, broadening the optimization challenge to
incorporate maximization of natural and social capital whilst concurrently mini-
mizing ecological footprint – sometimes interpreted in terms of the corporate
social responsibility themes of our asset-intensive organizations.
Within the growing field of EAM persists the longstanding belief that there
should be separation between different types of assets in terms of how they are
managed. For example, there is a view that civil infrastructure assets should be
considered quite separately from manufacturing and process plant and equipment.
Yet the asset register in many organizations typically reflects all of these assets,
hence representing a need, from a systems perspective, to view all assets in a ho-
listic and transdisciplinary manner.
The civil, mechanical and electrical components that comprise the engineered
physical asset base of an organization do not function in isolation from each other.
Civil infrastructure is usually constructed to support the operation of various plant


and equipment, including mobile assets. For example, rail companies must man-
age both plant and equipment, such as locomotives and carriages, and rail infra-
structure, such as tracks and bridges.
Many organizations utilize corporate enterprise resource planning (ERP) sys-
tems, which are gradually driving businesses to consider all types of assets in
a strategic and integrated way for effective decisions at the highest levels of gov-
ernance. The need to have an integrated view of EAM becomes imperative as a
result – representing the next big challenge for this field.
I trust that the selected papers in this and future EAM Reviews will continue to
add to our understanding and knowledge and assist in consolidating this integrated
and holistic systems-orientated view of our developing transdisciplinary field of
endeavour.

Australia, May 2012 Professor Joseph Mathew


Chair, Board of Directors
The International Society of
Engineering Asset Management
Preface

The Engineering Asset Management Review (EAMR) Series is a publication of the
International Society of Engineering Asset Management (ISEAM) dedicated to
the dissemination of research by academics, professionals and practitioners in
engineering asset management. EAMR complements other emerging publications
and standards that embrace the wide-ranging issues concerning the management of
engineered physical assets.
The theme of Volume 2, Asset Condition, Information Systems and Decision
Models, focuses on the conversion of raw data into information that should guide
managers in making valid decisions, especially regarding the operational condition
of assets. The articles contained in EAMR Volume 2 highlight quality issues
such as the appropriateness and integrity of the data and information that describe
the condition or ‘health’ of the asset. The articles further illustrate how
multidisciplinary views of the asset influence not only the acquisition and analysis
of data and information, but also the models used in making decisions regarding
the asset.
The Editors wish to thank all the contributors for their effort and patience
through the extended review process and the delays in publishing this EAMR
Volume 2. To all readers, we invite your comments and further critique, so that we
may all benefit from an increased body of knowledge relevant to the management of
engineered physical assets.

Australia, New Zealand, May 2012 Joe Amadi-Echendu, Editor-in-Chief


Kerry Brown, Senior Editor
Roger Willett, Senior Editor
Joseph Mathew, Senior Editor

Contents

Approaches to Information Quality Management:
State of the Practice of UK Asset-Intensive Organisations .......................... 1
1 Introduction ................................................................................... 2
2 Assets and Asset Management ...................................................... 2
3 Information Quality ....................................................................... 3
3.1 Information Quality Management.................................... 4
3.2 Information Quality Management Maturity Models ........ 5
4 Assessment Process ....................................................................... 6
4.1 Selection of Cases ............................................................ 7
4.2 Selection of Respondents ................................................. 8
5 Maturity Assessment Results......................................................... 8
5.1 General Trends in Implementing Information
Quality Management Practices ........................................ 12
6 Guidelines for Improving Information
Quality Management Practices ...................................................... 15
7 Conclusion..................................................................................... 16
References................................................................................................. 17
Information Systems Implementation for Asset Management:
A Theoretical Perspective................................................................................ 19
1 Introduction ................................................................................... 20
2 Information Systems in Contemporary Asset Management .......... 22
3 Scope of Information Systems in Asset Management ................... 23
4 Barriers to Information System Implementation ........................... 26
4.1 Limited Focus of Information System Implementation ... 26
4.2 Lack of Information and Operational
Technology Nexus ........................................................... 27
4.3 Technology Push as Opposed to Technology Pull........... 29
4.4 Isolated, Unintegrated and Ad hoc Technical Solutions .. 29


4.5 Lack of Strategic View of Information
System Capabilities.......................................................... 30
4.6 Lack of Risk Mitigation for IT Infrastructure .................. 31
4.7 Institutionalisation Issues Surrounding
Information Systems ........................................................ 31
5 Defining Information System Implementation .............................. 32
6 Perspectives on Information System Implementation ................... 33
6.1 Technological Determinism ............................................. 35
6.2 Socio-technical Alignment............................................... 36
6.3 Organisational Imperative................................................ 38
7 Aligning Information System Implementation
with Strategic Orientation.............................................................. 39
8 Information Systems from an Engineering Asset
Management Alignment Perspective............................................. 45
9 Conclusions ................................................................................... 48
References................................................................................................. 48
Appendix 1 Summary of Literature Relating to Barriers
to Implementation of Information Systems ................................... 59
Appendix 2 Summary of Literature Relating to Different Theoretical
Perspectives on the Implementation of Information Systems........ 66
Improving Asset Management Process Modelling and Integration ............ 71
1 Introduction ................................................................................... 72
2 Requirements for Representing AM Processes ............................. 73
2.1 AM Process Description .................................................. 73
2.2 Symbols and Notations .................................................... 74
2.3 Trade-off Between Details and Simplicity....................... 79
3 Requirements for Implementing AM Process Modelling.............. 80
4 Requirements for Evaluating AM Processes ................................. 82
5 Requirements for Integration......................................................... 84
6 Conclusions ................................................................................... 85
References................................................................................................. 86
Utilising Reliability and Condition Monitoring Data
for Asset Health Prognosis .............................................................................. 89
1 Introduction ................................................................................... 90
1.1 Architecture of FFNN Prognostic Model......................... 91
1.2 Statistical Modelling of FFNN Training Targets ............. 92
2 Model Validation........................................................................... 97
2.1 Prognostic Modelling Using Industry Pump
Vibration Data.................................................................. 97
2.2 Analysis of Prognostic Output ......................................... 97
2.3 Model Comparison........................................................... 100
3 Conclusions ................................................................................... 102
References................................................................................................. 102

Vibration-Based Wear Assessment in Slurry Pumps ................................... 105


1 Introduction ................................................................................... 106
1.1 Pressure Pulsation, Ensuing Vibration
and VPF Component........................................................ 107
1.2 Hypothesis of This Work ................................................. 108
1.3 Summary of This Work.................................................... 108
2 Experimental Procedure for Data Acquisition............................... 109
2.1 Experimental Setup .......................................................... 109
2.2 Wear Types and Levels.................................................... 110
2.3 Procedure to Acquire Vibration Data............................... 111
3 Signal Processing........................................................................... 111
3.1 Cumulative VPF Monitoring............................................ 112
3.2 Time-domain PCA-based VPF Monitoring ..................... 113
3.3 Frequency-domain PCA-based VPF Monitoring ............. 116
4 Results and Discussions................................................................. 117
5 Conclusion..................................................................................... 122
References................................................................................................. 122
The Concept of the Distributed Diagnostic System for Structural Health
Monitoring of Critical Elements of Infrastructure Objects ......................... 125
1 Introduction ................................................................................... 125
2 Methods of Determining the Stress in Critical Elements
of Infrastructure Objects................................................................ 127
3 Distributed Diagnostic System for Structural
Health Monitoring ......................................................................... 128
4 Conclusions ................................................................................... 131
References................................................................................................. 131
Optimising Preventive Maintenance Strategy for Production Lines........... 133
1 Introduction ................................................................................... 134
2 The Concept and Methodology of SSA......................................... 135
3 Methodology for Determining an Optimal PM Strategy ............... 138
3.1 Estimation of the Reliability of Production Lines............ 138
3.2 Criteria for Optimising PM Strategies ............................. 139
4 Example......................................................................................... 141
5 Conclusion..................................................................................... 145
References................................................................................................. 146
A Flexible Asset Maintenance Decision-Making Process Model ................. 149
1 Introduction ................................................................................... 150
2 Characteristics of Asset Maintenance Decisions ........................... 152
3 A “Split” Asset Maintenance Decision Support Framework......... 154
4 A Flexible Asset Maintenance Decision-Making
Process Model ............................................................................... 155
5 Discussion and Comparison .......................................................... 159

6 Case Studies .................................................................................. 160


6.1 Case 1: Determination of an Optimal Economiser
Maintenance Strategy....................................................... 161
6.2 Case 2: Determination of the Optimal Lead Time
to Repair Leaking Tubes .................................................. 163
6.3 Case 3: Pipeline Renewal Decision Support .................... 165
7 Conclusion..................................................................................... 167
References................................................................................................. 168
Machine Prognostics Based on Health State Estimation Using SVM ......... 169
1 Introduction ................................................................................... 170
2 Prognostics System Based on Health State Estimation ................. 171
3 Health State Probability Estimation Using SVMs
for RUL Prediction ........................................................................ 173
4 Validation of Model Using Hp-LNG Pump .................................. 175
4.1 High Pressure LNG Pump................................................ 175
4.2 Acquisition of Bearing Failure Vibration Data ................ 177
4.3 Feature Calculation and Selection.................................... 179
4.4 Selection of Number of Health States for Training.......... 182
4.5 RUL Prediction of Bearing Failure .................................. 183
5 Conclusion..................................................................................... 185
References................................................................................................. 186
Modeling Risk in Discrete Multistate Repairable Systems .......................... 187
1 Introduction ................................................................................... 187
2 Reliability Model of a Single Repairable Component................... 188
3 Multistate Reliability Modeling for a Discrete-Event System....... 189
4 Transitions Between States............................................................ 191
4.1 Spare (State 8).................................................................. 192
4.2 Standby (State 7).............................................................. 192
4.3 Derated (State 6) .............................................................. 192
4.4 Full Normal Duty (State 5) .............................................. 192
4.5 Minor Fault (State 4)........................................................ 193
4.6 Major Fault (State 3) ........................................................ 193
4.7 Failed (State 2)................................................................. 193
4.8 In Repair (State 1) ............................................................ 194
5 Cost Functions ............................................................................... 194
6 Risk Modeling ............................................................................... 196
6.1 Risk After One Transition Step........................................ 196
6.2 Risk After k Transition Steps........................................... 197
7 Simple Four-State Model............................................................... 199
8 Verification.................................................................................... 200
9 Using Discrete-Event Simulation for Sensitivity Analysis
of Decision Variables in Asset Management................................. 203
10 Conclusion..................................................................................... 204
References................................................................................................. 204

Managing the Risks of Adverse Operational Requirements
in Power Generation – Case Study in Gas and Hydro Turbines ................. 207
1 Introduction ................................................................................... 208
2 Issues with Gas Turbines Operations ............................................ 209
2.1 Common Failures in Gas Turbines .................................. 209
2.2 Equivalent Operating Hours (EOH)................................. 210
2.3 Managing Risks of Operating Beyond
Maintenance Schedules.................................................... 210
2.4 Economics and Financial Risks/Gains
of Extended EOH ............................................................. 213
3 Issues with Hydro Turbines........................................................... 213
3.1 Draft Tube Pressure Pulsations ........................................ 214
3.2 High Sub-Synchronous Vibrations .................................. 215
3.3 Draft Tube Casing Stresses .............................................. 216
3.4 Potential Consequences.................................................... 217
4 Conclusion..................................................................................... 217
References................................................................................................. 218
Field-Wide Integrated Planning in a Complex and Remote Operational
Environment: Reflections Based on an Industrial Case Study .................... 219
1 Introduction ................................................................................... 219
1.1 Integrated Operations....................................................... 220
1.2 Method ............................................................................. 221
2 Integrated Planning........................................................................ 221
2.1 Operational Requirements of Integrated Planning ........... 222
2.2 Horizontal Periodic Planning ........................................... 222
2.3 Work Process Milestones and Templates
for Continuous Integrity in Planning................................ 223
2.4 Enhancing IT Environment to Suit Users’
Requirements and the Optimisation
of Integrated Planning Work Processes ........................... 224
3 Status of Integrated Planning......................................................... 225
3.1 Levels of Integrated Planning .......................................... 225
3.2 Impact of Economical Limitations................................... 227
3.3 Impact of Profit-Cost Assessment.................................... 228
4 Influence Factors for Integrated Planning ..................................... 229
4.1 Influence Factors at the Corporate Business Level .......... 229
4.2 Influence Factors at Integration Level ............................. 230
4.3 Influence Factors at System Development....................... 231
5 Conclusion..................................................................................... 231
References................................................................................................. 232
About the Editors............................................................................................. 233
Approaches to Information Quality
Management: State of the Practice of UK
Asset-Intensive Organisations

Philip Woodall, Ajith Kumar Parlikad and Lucas Lebrun

Abstract Maintaining good quality information is a difficult task, and many lead-
ing asset management (AM) organisations have difficulty planning and executing
successful information quality management (IQM) practices. The aims of this work
are, therefore, to understand how organisations approach IQM in the AM unit of
their organisation, to highlight general trends in IQM, and to provide guidance on
how organisations can improve IQM practices. Using the case study methodology,
the current level of IQM maturity was benchmarked for ten organisations in the
U.K. focussing on the AM unit of the organisation. By understanding how the most
mature organisations approach the task of IQM, specific guidelines for how organi-
sations with lower maturity levels can improve their IQM practices are presented.
Five critical success factors from the IQM-CMM maturity model were identified as
being significant for improving IQM maturity: information quality (IQ) manage-
ment team and project management, IQ requirements analysis, IQ requirements
management, information product visualisation and meta-information management.

Keywords Asset information quality, Asset information system, Asset management,
Information quality management, Information quality practices, Information
quality requirements, Information quality management maturity model

__________________________________
P. Woodall
Institute for Manufacturing, Department of Engineering, University of Cambridge,
Cambridge, CB3 0FS, UK
e-mail: phil.woodall@eng.cam.ac.uk
A.K. Parlikad
Institute for Manufacturing, Department of Engineering, University of Cambridge,
Cambridge, CB3 0FS, UK
L. Lebrun
Institute for Manufacturing, Department of Engineering, University of Cambridge,
Cambridge, CB3 0FS, UK

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information
Systems and Decision Models, Engineering Asset Management Review,
DOI 10.1007/978-1-4471-2924-0_1, © Springer-Verlag London Limited 2012

1 Introduction

Making sound asset management (AM) decisions, such as whether to replace or
maintain an ageing underground water pipe, is critical to ensuring that
organisations maximise the performance of their assets. These decisions are only as good
as the information which supports them, and basing decisions on poor-quality
information may result in great economic losses [1]. Maintaining and providing
good-quality information is a difficult task, and many leading AM organisations
therefore require guidance on how to plan and execute successful information
quality management (IQM) practices; typical practices include the identification of
IQM key performance indicators and the application of suitable information secu-
rity procedures. To develop such guidelines and ensure that they are geared to-
wards the current maturity and needs of the organisations, an understanding of the
current state of IQM performance (maturity) of AM organisations is required. The
research question for this work is therefore: how do organisations approach IQM
in the AM unit of their organisation?
To address this question, the Information Quality Management Maturity Model
(IQM-CMM) [2], developed specifically within the domain of AM, was used to
benchmark the current level of IQM performance in AM organisations. Organisa-
tions in the U.K. which have a significant portion of their expenditure and risk
associated with the management of their assets were selected for this assessment.
Asset managers from ten AM organisations were interviewed using questions
developed from the critical success factors (CSFs) contained in the IQM-CMM
model. Each organisation was placed in the model, and the maturity level was
determined by the extent to which the organisation satisfied the CSFs.
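The staged placement described above, where an organisation's level is determined by the extent to which it satisfies the CSFs, can be sketched as follows. This is an illustrative reading of staged maturity assessment, not the IQM-CMM's actual scoring rules, and the CSF names used here are hypothetical:

```python
# Illustrative staged-maturity placement: an organisation sits at the highest
# level whose required critical success factors (CSFs) -- and those of every
# level below it -- are all satisfied. (A sketch, not the IQM-CMM's rules.)

def maturity_level(satisfied, csfs_by_level):
    """Return the highest maturity level reached.

    satisfied     -- set of CSF names the organisation satisfies
    csfs_by_level -- dict mapping level number to its required set of CSFs
    """
    level = 0
    for lvl in sorted(csfs_by_level):
        if csfs_by_level[lvl] <= satisfied:  # all CSFs at this level met
            level = lvl
        else:
            break  # staged model: a gap at one level caps the assessment
    return level

# Hypothetical CSF names, for illustration only
csfs_by_level = {
    1: {"asset data captured and stored"},
    2: {"IQ needs identified", "information security procedures"},
    3: {"IQ requirements analysis", "IQ management team"},
}

org = {"asset data captured and stored", "IQ needs identified",
       "information security procedures"}
print(maturity_level(org, csfs_by_level))  # prints 2
```

The assessment in the study was interview-based rather than automated; the point of the sketch is only the staged logic, in which an unmet CSF at a lower level caps the overall level regardless of practices satisfied higher up.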
By understanding how the most mature organisations approach IQM, five CSFs
which were satisfied by only the higher-level organisations are highlighted; lower
maturity organisations can focus on these CSFs to quickly improve their IQM
practices.
This paper is organised as follows. Section 2 presents a brief background of as-
set management. Section 3 describes information quality (IQ) and IQM and re-
views the different IQM-related maturity models available. The case study meth-
odology is described in Section 4, and the results and analysis of the maturity
benchmarking exercise are presented in Section 5. Section 6 analyses these results
and describes the key CSFs which lower maturity level organisations should focus
on. Finally, Section 7 presents the conclusions of the paper regarding the current
state of IQM practices in AM-related organisations.

2 Assets and Asset Management

In this work, the term asset is used to describe physical engineering objects, and
examples of assets for the rail and utilities industries include trains, junction box-
es, rails, transformers, power cables and water pipelines. AM is defined as the

Figure 1 Asset Lifecycle [4]

“systematic and coordinated activities and practices through which an organisation


optimally manages its assets, and their associated performance, risks and expendi-
tures over their lifecycle for the purpose of achieving its organisational strategic
plan” [3]. A strategic plan in this context is “the overall long-term plan for the
organisation that is derived from and embodies its vision, mission, values, busi-
ness policies, objectives and the management of its risks” [3]. Together, these
definitions encompass the whole lifecycle aspect and the physical nature of the
assets. For a thorough review of asset management definitions see [4].
As part of the coordinated activities to optimally manage assets, organisations
must make decisions which affect the state of their assets for each of the lifecycle
stages (Figure 1) while recognising that these decisions are not independent; for
example, decisions to acquire new assets are often influenced by asset retirement
decisions – hence the asset lifecycle. Coordinating these decisions and understand-
ing the impact of one decision outcome on subsequent decisions is vital to effi-
cient AM. Effective decision-making can be achieved through monitoring and
capture of information regarding key events and factors/constraints which affect
asset performance and, consequently, organisational performance. With the advent
of the Internet, wireless sensing technologies, and the decreasing cost of data stor-
age, it is possible to offer asset managers increasing amounts of information to
support their decisions. However, more data does not necessarily mean better
information or more effective decisions. This issue is highlighted by Koronios [5],
who found that 70 % of generated data is never used by asset managers. Providing
asset managers with good quality information and ensuring that effective IQM
practices are in place are, therefore, of utmost importance.

3 Information Quality

Different definitions have been used for IQ in the past 20 years [6], and currently,
the most widely accepted definition of IQ is “fitness for use” [7, 8, 9, 10]. This
definition expresses the fact that IQ is something dependent on the context, and

therefore, information considered to be of high quality for one purpose can be


considered low quality for a different purpose. Various attempts have been made
to refine this definition by incorporating aspects such as consumer viewpoints [8,
11]. English [9] refines the definition by considering IQ to be composed of inher-
ent and pragmatic components, where inherent IQ refers to the correctness of the
information, whereas pragmatic IQ refers to the degree of usefulness of the infor-
mation. Furthermore, two similar categories are also used to define IQ as “con-
forms to specification” and “meets or exceeds customer expectations” [12].
While such definitions may capture the whole meaning of IQ, they appear
impractical for direct measurement [12, 13]. Therefore, to measure IQ in a
practical way, IQ is defined along different dimensions [8, 12, 14] such as
accuracy, completeness, consistency and timeliness [15]. To maintain high-quality information
for all relevant IQ dimensions, suitable IQM practices need to be in place and
managed correctly in the organisation.

3.1 Information Quality Management

Information Quality Management can be defined as “the function that leads the
organisation to improve information quality by implementing processes to meas-
ure, assess costs of, improve and control information quality, and by providing
guidelines, policies, and education for information quality improvement” [11], and
whose goal is to increase the organisation’s effectiveness by eliminating the costs
of poor information quality [16]. Some definitions incorporate knowledge man-
agement such as the work of Ge and Helfert [17], who defined three areas of re-
search for IQM: quality management, information management and knowledge
management. This work, however, excludes the complex area of knowledge man-
agement to focus on quality management and information management (Figure 2).
Moreover, no comprehensive framework has so far encompassed the three afore-
mentioned approaches to IQM [17], and it is still unclear exactly what IQM en-
compasses [18]. Note that another important area in IQM relates to the importance
of people and culture. Having conducted a study on business information quality
in Lithuania, Ruževičius and Gedminaitė [19] observed that a change of attitude
towards information is needed to succeed in IQM.

Figure 2 Scope of Research

Approaches to Information Quality Management 5

3.2 Information Quality Management Maturity Models

A number of IQM maturity models have been developed with different levels of
complexity, methods of development and levels of usability (Table 1). The Infor-
mation Quality Management Capability Maturity Model (IQM-CMM) was devel-
oped and validated with AM organisations and is, therefore, ideally suited to the
focus of this study. Moreover, it also has a usable and extensive set of process
areas (PAs) and CSFs which can be used as appraisal criteria for determining the
level of maturity. These CSFs are defined for each of the maturity levels in the
IQM-CMM model (optimising, managed, measuring, reactive and chaotic).
A high-level view of the model is shown in Figure 3, which illustrates the ma-
turity levels with brief descriptions of the characteristics of each level. For each
maturity level, PAs are defined, and these contain a set of CSFs. The mapping of
all PAs to CSFs is shown in the results section in Table 3. Details of the meaning
of the CSFs can be found in [2]. The aim of a maturity assessment using this
model is therefore to determine the extent to which each CSF is satisfied within an
organisation. The results for each CSF are then aggregated to determine the extent
to which each PA is satisfied and then aggregated once again to determine whether
a maturity level is satisfied.

Figure 3 High-Level View of IQM–CMM Maturity Model [2]



Table 1 Existing IQM Maturity Models

Model        Complexity                Method used for development    Usability
IQMMG [11]   6 categories              Built from QMMG                No assessment methodology
             (staged/continuous)
DGMM [20]    4 categories              Not explained                  No assessment methodology
             (staged/continuous)
DQMMM [21]   Staged: 4 levels          Built from CMMI and            CEO interview
                                       authors' experience
PAM [22]     28 categories             Built from BSI PAS 55:2008     121 questions in an Excel tool
             (staged/continuous)
IQG [23]     2 axes, 4 quadrants       Not explained                  17 criteria
IQMF [24]    Staged: 5 levels,         Built from CMMI and            190 questions split into
             14 KPAs, 33 activities,   authors' experience            3 levels of depth
             74 sub-activities
IQM-CMM [2]  Staged: 5 levels,         Inductively built from         200 appraisal criteria
             13 PAs, 48 CSFs           case studies

4 Assessment Process

The case study methodology was used to assess how organisations approach
IQM in the AM unit of their organisation. Case studies are ideal in the following
circumstances [25]:
1. The focus of the study is to answer ‘how’ or ‘why’ questions.
2. Study participants’ behaviour cannot be manipulated.
3. Contextual issues need to be addressed.
4. Boundaries between phenomena and their context are not clear.
Each of these is relevant to the characteristics of this study. The question for
this work (‘how do organisations approach IQM in the AM unit of their organisa-
tion?’) is a ‘how’-style question and therefore meets the first requirement. In
terms of manipulating the behaviour of the people involved with improving IQM,
while it may be possible to influence what will be done, it is not possible to influ-
ence what has been done to reach the current state of IQM maturity. We also
assert that IQM improvement in the AM unit of organisations must be related to
the context because IQM improvement will depend on details such as the strate-
gic direction of the organisation, the type of assets owned by the organisation
(and hence the type of data/information required), and the type of regulations
imposed on the organisation. Finally, the boundaries between contextual details
and IQM improvement are not clear, both because of the number of different
contextual details involved and because of the current lack of understanding of
how these details are linked to IQM improvement.

4.1 Selection of Cases

Organisations where AM represents a core activity of business were selected as
the ‘case organisations’. Organisations from different business sectors were
selected to ensure that the idiosyncrasies of a single business sector, such as the
need to satisfy regulatory requirements, did not bias the understanding of how
organisations approach IQM activities. The unit of analysis within the case or-
ganisations is the practices related to the improvement and management of IQ in
the AM unit of the organisations. This encompasses the AM information systems
and the procedures and people involved with AM. The spectrum of organisations
chosen encompasses utility (suppliers of water, electricity and gas), transport,
defence asset support (defence-related assets are managed via service contracts
between organisations), and facility management. A total of ten case study or-
ganisations were selected (Table 2). Confidentiality agreements were signed with
the organisations; hence the names and identifying details of the organisations are
not shown.
Within the case study methodology, semi-structured interviews were used to
determine the extent to which each organisation satisfied the CSFs of the IQM-
CMM model. The interview consisted of 40 questions, 31 of which were devel-
oped from the IQM-CMM model CSFs; the remaining questions focussed on the
organisation’s future approach to IQM.

Table 2 Business Sectors and Roles of the Interview Respondents for Each Organisation

Case  Business sector        Role of interview respondents
A     Utility                Head of asset information department;
                             manager of asset performance team
B     Utility                Business transformation manager, ex-manager of asset
                             information team
C     Defence asset support  Information specialist from information exploitation team
D     Facility management    IT programme manager
E     Utility                Asset information manager; asset manager; asset manager;
                             IS development programme manager
F     Facility management    Head of facilities department; technical services manager;
                             estates and buildings manager
G     Utility                Information delivery manager; data integrity team manager
H     Defence asset support  Supply policy manager
I     Defence asset support  Systems architect
J     Transport              Asset information manager

4.2 Selection of Respondents

To ensure suitable respondents were selected, a sample set of questions from the
interview was sent to each organisation prior to each interview. Each interview was
conducted either over the telephone (8 cases) or face-to-face (2 cases), and recorded
with the help of a Dictaphone. Notes were also taken by the interviewer during the
interview. The details of the full interview protocol are available on request from the
authors. Most organisations had respondents who were asset information specialists;
only one organisation, case G, had a dedicated IQ manager (see Table 2). Cases F
and H did not have information specialists, and cases D and I had IT specialists. In
some cases, notably the two facility management organisations, the lack of dedicated
positions related to IQM was due to resource constraints and business priorities.

5 Maturity Assessment Results

To place each organisation on a particular maturity level, the answers to the 31
maturity interview questions were used to determine the extent to which each CSF
was satisfied. The level of satisfaction was measured on an ordinal scale (not
satisfied, partially satisfied and fully satisfied). The actual levels of satisfaction of
each CSF for the ten organisations (labelled A to J) are shown in Table 3, where
‘–’ represents not satisfied, ‘P’ partially satisfied and ‘S’ fully satisfied. The table
also shows the maturity level, the process areas for each maturity level and the
groups of CSFs belonging to each process area. Note that maturity level 1 is not
shown in Table 3 because it is always satisfied. The final two columns show the
frequencies of partially satisfied (cP) and fully satisfied (cF) across all the organisations.
The processes and systems being analysed were complex, and determining
whether these processes and systems met the CSFs was not feasible beyond the
scale used. Unfortunately, partially satisfied cannot be interpreted simply as 50 %
because in some cases partially satisfied was less than 50 % and in other cases
more than 50 %. This means that the intervals between these categories are not
always equal. Therefore, calculating aggregate measures, such as the mean, using
these values for a set of CSFs would violate the restrictions imposed by ordinal
scales [26]. The following measures were therefore developed to aggregate the
values for the CSFs in Table 3 into maturity levels which could then be used to
determine the extent to which an organisation had satisfied each maturity level.
• F = Number of CSFs fully satisfied / Number of CSFs
• FP = Number of CSFs fully satisfied or partially satisfied / Number of CSFs
Table 4 shows the final maturity level of each organisation, and the values of ‘F’
and ‘FP’ for each maturity level are shown as percentages. For example, for organi-
sation A no CSFs were fully satisfied for maturity level 4, but 3 out of 13 CSFs were
fully or partially satisfied for maturity level 4, which is shown as 23 % in the FP
column for organisation A. A maturity level was deemed satisfied when F > 50 and
FP > 80; the final maturity levels of the organisations are shown in the bottom row.
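The measures and the satisfaction rule above can be sketched in a few lines of code. This is an illustrative sketch, not the authors' original tooling: the function and variable names are ours, the "highest level with no gap below it" reading of the rule is our interpretation (it reproduces Table 4), and the sample CSF values for organisation C's maturity level 4 are read off Table 3.

```python
# Sketch of the maturity aggregation described above (illustrative only).
# Each CSF is scored on the ordinal scale: '-' = not satisfied,
# 'P' = partially satisfied, 'S' = fully satisfied.

def level_scores(csf_values):
    """Return (F, FP) as percentages for one maturity level."""
    n = len(csf_values)
    full = sum(v == 'S' for v in csf_values)
    full_or_partial = sum(v in ('S', 'P') for v in csf_values)
    return 100 * full / n, 100 * full_or_partial / n

def final_maturity(levels):
    """levels maps level number -> list of CSF values for that level.
    A level is deemed satisfied when F > 50 and FP > 80; the final
    maturity is the highest satisfied level with no unsatisfied level
    below it (level 1, 'chaotic', is always satisfied)."""
    final = 1
    for lvl in sorted(l for l in levels if l > 1):
        f, fp = level_scores(levels[lvl])
        if f > 50 and fp > 80:
            final = lvl
        else:
            break
    return final

# The 13 maturity level 4 CSF values for organisation C, from Table 3:
c_level4 = ['S', '-', 'S', 'S', 'S', 'P', 'S', 'S', 'P', 'P', 'P', 'P', 'S']
f, fp = level_scores(c_level4)
print(round(f), round(fp))  # -> 54 92, matching Table 4 for organisation C
```

Here 7 of 13 CSFs are fully satisfied (F = 7/13 ≈ 54 %) and 12 of 13 are at least partially satisfied (FP = 12/13 ≈ 92 %), so organisation C satisfies maturity level 4 under the F > 50, FP > 80 rule.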
Table 3 CSFs Satisfied by the Organisations (– = Not Satisfied, P = Partially Satisfied, S = Fully Satisfied)

Level  Process Area                    CSF                                                  A B C D E F G H I J  cP cF
5      IQ Firewall                     IQ Firewall                                          – – – – – – – – – –   0  0
5      IQ Management Performance       IQ Management Metrics                                – – – – – – – – – –   0  0
       Monitoring                      Analysis and Reporting                               – – – – – – – – – –   0  0
                                       IQ Management Benchmarking                           – – P – – – P – – –   2  0
4      Continuous IQ Improvement       IQ Problem Root-Cause-Analysis                       – P S – – – – – – –   1  1
                                       IQ Risk Management and Impact Assessment             P – – P – – S P P –   4  1
                                       IQ Management Cost-Benefit Analysis                  – – S – P – – S – –   1  2
                                       Business Process Reengineering for IQ Improvements   – – S P – – – P – –   2  1
4      Enterprise Information          Enterprise Tier Management                           P P S P P P S S P P   7  3
       Architecture Management         Information Tier Management                          – P P – – – P – P P   5  0
                                       Application Tier Management                          – S S P P – P P P –   5  2
                                       Physical Tier Management                             P P S P P P P – S P   7  2
                                       Master Data Management/Redundant Storage             – P P – – – P – – –   3  0
4      IQM Governance                  IQM Accountability, Rewards & Incentives:
                                       IQ is Everyone's Responsibility                      – – P P – – – P – –   3  0
                                       IQ Benchmarking                                      – P P – – – – – – –   2  0
                                       Strategic IQ                                         – – P – P – P – – –   3  0
                                       IQ Audit Trail                                       – P S – P – – – P –   3  1
3      IQ Management Roles             IQ Management Team and Project Management            P P S – P – S – – P   4  2
       and Responsibilities            IQ Management, Education, Training and Mentoring     – – P – – – P – – –   2  0
                                       IQ Problem Reporting and Handling                    – – P – – – P – P –   3  0
                                       Scripted Information Cleansing                       – – S S P – – – – S   1  3
3      IQ Assessment                   IQ Metrics                                           – – P – – – P P – –   3  0
                                       IQ Evaluation                                        – P P P – – P P P –   6  0
3      IQ Needs Analysis               Requirements Elicitation                             P P S P P P P S P P   8  2
                                       Requirements Analysis                                – P S – – – S P – –   2  2
                                       Requirements Management                              – – S – – – S P – –   1  2
3      Information Product             Information Supply Chain Management                  – P S P – – S S P P   4  3
       Management                      Information Product Configuration Management         – S S S S – S S S S   0  8
                                       Information Product Taxonomy                         P S S S P P S S P P   5  5
                                       Information Product Visualisation                    P P S P P P S P P P   8  2
                                       Derived Information Products Management              S P S – P – – S – –   2  3
                                       Meta-information Management                          – P S – P – S P – –   3  2
2      Information Security            Security Classification of Information Products      S S S S S S S S S P   1  9
       Management                      Secure Transmission of Sensitive Information         S S S S S S S S S S   0 10
                                       Sensitive Information Disposal Management            S S S S S S S S S S   0 10
2      Access Control                  Authentication                                       S S S S S S S S S S   0 10
       Management                      Authorisation                                        S S S S S S S S S S   0 10
                                       Audit Trail                                          S S S P S – P P S S   3  6
2      Information Storage             Physical Storage                                     S S S S S S S S S S   0 10
       Management                      Backup and Recovery                                  S S S S S S S S S S   0 10
                                       Archival and Retrieval                               S S S S S S S S S S   0 10
                                       Information Destruction                              S S S S S S S S S S   0 10
2      Information Needs               Stakeholder Management                               S S S S S S S S S P   1  9
       Analysis                        Conceptual Modelling                                 S S S S S S S S P P   2  8
                                       Logical Modelling                                    S S S S S S S S S P   1  9
                                       Physical Modelling                                   S S S S S S S S S P   1  9
Table 4 Final Maturity Level of Each Organisation with Percentage Values of F and FP for each Maturity Level

                           A        B        C        D        E        F        G        H        I        J
Maturity Level           F   FP   F   FP   F   FP   F   FP   F   FP   F   FP   F   FP   F   FP   F   FP   F   FP
5 – Optimising           0    0   0    0   0   25   0    0   0    0   0    0   0   25   0    0   0    0   0    0
4 – Managed              0   23   8   62  54   92   0   46   0   46   0   15  15   54  15   46   8   46   0   23
3 – Measuring            7   33  13   67  73  100  20   47   7   53   0   20  53   87  33   73   7   47  13   47
2 – Reactive           100  100 100  100 100  100  93  100 100  100  93   93  93  100  93  100  93  100  64  100
1 – Chaotic            100  100 100  100 100  100 100  100 100  100 100  100 100  100 100  100 100  100 100  100
Final Maturity Level        2        2        4        2        2        2        3        2        2        2

5.1 General Trends in Implementing Information Quality Management Practices

Figure 4 illustrates the aggregated (for all organisations) level of satisfaction for
each CSF. The actual values (cP and cF) for this figure are shown in the rightmost
columns of Table 3, where these values are represented as percentages.

Figure 4 Aggregated Level of Satisfaction of CSFs for All Organisations
(horizontal bars, one per CSF, grouped by maturity level 2–5; x-axis:
aggregated level of satisfaction (%), 0–100; legend: fully satisfied (cF%)
and partially satisfied (cP%))

For example, all organisations (100 %) fully satisfied the ‘information destruction’ CSF,
whereas 80 % of organisations partially satisfied and 20 % fully satisfied the ‘re-
quirements elicitation’ CSF; all organisations therefore attempted the ‘require-
ments elicitation’ CSF.
The bulk of the maturity level 2 CSFs (on the left of Figure 4) were fully satis-
fied by all organisations, whereas for level 3 and above, fewer CSFs were fully
satisfied and more were partially satisfied or not satisfied. Three CSFs were not
attempted by any organisation surveyed. These are all in maturity level 5:
the IQ firewall, IQ management metrics, and analysis and reporting (IQ
management performance monitoring). Higher-maturity organisations looking to
undertake new IQM activities could start by implementing these practices.
The higher-level CSFs (level 3 and above), which were attempted by 70 % or
more of the organisations, include the following factors (see the two groups of
values in levels 3 and 4 in Figure 5):
• IP visualisation;
• IP taxonomy;
• IP configuration management;
• information supply chain management;
• enterprise tier management;
• application tier management;
• physical tier management;
• requirements elicitation.
Except for requirements elicitation, these CSFs fall into two categories defined
by the IQM-CMM model: Information Product Management and Enterprise In-
formation Architecture Management. Most organisations had partially satisfied
the IP visualisation CSF, which requires that the same information in multiple
systems be represented consistently to the user. This is because the systems used
by the asset managers contain ‘default’ forms which were designed with the sys-
tem. However, to fully satisfy this CSF requires that different systems have a
consistent look and feel for a given information product. Clearly, this is much
harder to achieve, and only the higher-level organisations have achieved this to a
certain degree. The IP taxonomy CSF concerns organising information products
into a hierarchical structure as well as identifying relationships between informa-
tion products, including aggregations, compositions and associations. IP configu-
ration management processes ensure that any changes to information are recorded
and can be rolled back. This process is managed by change requests, which are
initiated, reviewed, approved and tracked to closure. Formal audits are regularly
performed to assess compliance with the configuration management plan. The
implementation of these processes within the organisations was largely success-
ful. Information supply chain management refers to the fact that both internal and
external information suppliers have been identified and documented. Furthermore,
information flows have also been documented, and communication between in-
formation suppliers and users has been established with suitable agreements
in place.
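The hierarchical structure with typed relationships that the IP taxonomy CSF calls for can be pictured as a small data structure. This is a hypothetical sketch of ours, not part of the IQM-CMM model; the class, product names and relationship labels are illustrative, with the relationship types (aggregation, composition, association) taken from the description above.

```python
# Hypothetical information-product taxonomy node: products form a
# hierarchy, with typed links (aggregation/composition/association)
# between products (illustrative sketch only).

class InfoProduct:
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []          # hierarchical structure
        self.links = []             # (relationship_type, other_product)
        if parent:
            parent.children.append(self)

root = InfoProduct("asset register")
work_orders = InfoProduct("work orders", parent=root)
history = InfoProduct("maintenance history", parent=root)
history.links.append(("aggregation", work_orders))
print([c.name for c in root.children])  # -> ['work orders', 'maintenance history']
```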

All organisations expend significant effort on the development and use of their
information systems, and, hence, the CSFs related to enterprise information archi-
tecture feature prominently in Figure 5, despite their being at a higher maturity
level (4) than most organisations are currently at. Enterprise tier management is
about maximising information integration and interoperability, and organisations
that have satisfied this have developed and documented their information
architecture.

Figure 5 Most Commonly Attempted Higher-Level CSFs (same layout as Figure 4:
horizontal bars per CSF, grouped by maturity level; x-axis: aggregated level
of satisfaction (%))

Most organisations have some level of information integration, and the
information systems architecture is vertically integrated from the operational to the
strategic level. Satisfying physical tier management assumes that hardware and
general infrastructure provide the necessary support for the application tier, which
concerns the software infrastructure. Information tier management has typically
not been addressed to the same extent (only 50 % aggregated level of satisfaction)
due to the challenging requirement to combine heterogeneous data sources and
establish a single version of the truth for the information. Many AM systems are
used within an organisation, and for organisations that have a large number of
satellite systems alongside the main AM systems, it is very difficult to combine all
the systems and establish a single version of the truth.

6 Guidelines for Improving Information Quality Management Practices

Five CSFs were fully satisfied by the highest-maturity organisations but
were not fully satisfied by any of the lower-level (level 2) organisations. The
higher-level organisations therefore demonstrated that it is feasible to fully imple-
ment these CSFs and attain higher maturity levels (level 3 for case G and level 4
for case C). These five CSFs (Table 5) are therefore ideal candidates for level 2
organisations to focus on to improve their IQM practices.
The ‘IQ management team and project management’ CSF requires the formal
management of all IQM practices. This includes allocating the key roles for a
project, determining the scope of the work required, project deliverables, busi-
ness/technical aspects of the project, and estimating project costs and benefits [2].
In the process area of ‘IQ needs analysis’, the CSFs ‘requirements analysis’ and
‘requirements management’ received very little attention from lower maturity
level organisations.

Table 5 Key CSFs for Improving IQM Practices for Organisations in Maturity Level 2

                                                           High
                                                         Maturity         Level 2
Process Area            CSF                               C  G     A  D  E  F  B  H  I  J
IQ Management Roles     IQ Management Team and
and Responsibilities    Project Management                S  S     P  –  P  –  P  –  –  P
IQ Needs Analysis       Requirements Analysis             S  S     –  –  –  –  P  P  –  –
IQ Needs Analysis       Requirements Management           S  S     –  –  –  –  –  P  –  –
Information Product     Information Product
Management              Visualisation                     S  S     P  P  P  P  P  P  P  P
Information Product     Meta-information
Management              Management                        S  S     –  –  P  –  P  P  –  –

The precursor to these CSFs is ‘requirements elicitation’
which, in general, involves speaking to stakeholders to determine what the current
IQ problems are and then defining them. Interestingly, all of the organisations
attempted some aspect of ‘requirements elicitation’, but these organisations should
now focus on prioritising these IQ problems, mapping them to specific systems
and determining the desirable levels of IQ as part of the ‘requirements analysis’
CSF. Furthermore, for the ‘requirements management’ CSF, changes to the identified
problems should be managed, the analysis communicated effectively, and regular
quality reviews established.
The key aspect for satisfying the ‘information product visualisation’ CSF is to
ensure that the same information, in multiple systems, is represented consistently.
The maturity level 2 organisations partially satisfy this CSF by using the prede-
fined forms which exist with the various information systems used in the AM unit
of the organisation, but to take the next step, organisations need to find ways to
ensure that these are as consistent as possible across different systems.
Metadata are data describing data in AM-related information systems and com-
prise properties such as edit history, ownership and security level. The establish-
ment of a metadata registry is required for the ‘meta-information management’
CSF to be satisfied, which means that metadata are stored and managed separately
from standard AM-related data.
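As an illustration of the kind of record such a registry might hold, a hypothetical sketch follows; the class and field names are ours, based only on the metadata properties listed above (edit history, ownership, security level), not on any specific registry product.

```python
# Hypothetical metadata-registry record, stored and managed separately
# from the AM data it describes (field names are illustrative only).
from dataclasses import dataclass, field

@dataclass
class MetadataRecord:
    data_item: str                  # identifier of the AM data described
    owner: str                      # ownership
    security_level: str             # e.g. 'public', 'internal', 'restricted'
    edit_history: list = field(default_factory=list)  # (who, when, what)

record = MetadataRecord(data_item="pump-017/vibration-readings",
                        owner="asset information team",
                        security_level="internal")
record.edit_history.append(("j.smith", "2011-06-01", "created"))
print(record.security_level)  # -> internal
```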

7 Conclusion

The IQM maturity of the AM unit of ten organisations was benchmarked to de-
termine how the organisations approached IQM. Most of the organisations found
it a challenge to improve IQM and needed guidance on how to advance from their
current level of maturity. No organisation is currently at the top level of the matur-
ity model, and so there is room for improvement in all the organisations surveyed.
An analysis of how the CSFs in the IQM-CMM maturity model were satisfied
showed that five CSFs were fully satisfied by the two higher maturity level or-
ganisations, and these were never fully satisfied by any of the lower maturity or-
ganisations. It is recommended, therefore, that the lower maturity organisations
focus on these five CSFs to quickly improve their IQM practices. These five CSFs
concern IQ management team and project management, requirements analysis,
requirements management, information product visualisation, and meta-infor-
mation management. Further work is required to understand the order in which
organisations should implement the CSFs in the IQM-CMM maturity model to
improve their IQM practices and move up in the hierarchy of maturity levels.

Acknowledgments We would like to thank all the respondents for committing the time and
effort to take part in this study; their help is very much appreciated. We also thank Andy
Koronios and Jing Gao for assistance with the IQM–CMM maturity model, Alex Borek for help
with proof reading this work, and EPSRC for supporting this research.

References

[1] Gao J, Baškarada S, Koronios A (2006) Agile maturity model approach to assessing and
enhancing the quality of asset information in engineering asset management information
systems. In: Proceedings of the 9th international conference on business information sys-
tems (BIS 2006), 31 May–2 June 2006, Klagenfurt, Austria, pp. 486–500.
[2] Baškarada S (2008) IQM-CMM: information quality management capability maturity
model. PhD thesis, University of South Australia, Adelaide, South Australia.
[3] British Standards Institution (2004) PAS 55-1: Asset management. British Standards
Institution, London.
[4] Ouertani MZ, Parlikad AK, McFarlane DC (2008) Towards an approach to select an asset
information management strategy. Int J Comput Sci Appl 5:25–44.
[5] Baškarada S, Koronios A, Gao J (2006) Towards a capability maturity model for informa-
tion quality management: a TDQM approach. In: Proceedings of the 11th international con-
ference on information quality (ICIQ-06), Cambridge, MA, 10–12 November 2006.
[6] Eppler MJ (2000) Conceptualizing information quality: a review of information quality
frameworks from the last ten years. In: Proceedings of the 5th international conference on
information quality, Cambridge, MA, pp. 83–96.
[7] Juran JM (1974) Quality control handbook. McGraw-Hill, New York.
[8] Wang R, Strong D (1996) Beyond accuracy: what data quality means to data consumers. J
Manage Inf Syst 12:5–34.
[9] Strong D, Lee YW, Wang R (1997) 10 potholes in the road to information quality. IEEE
Comput 30:38–46.
[10] Lin S, Gao J, Koronios A (2006) Key data quality issues for enterprise asset management in
engineering organisations. Int J Electron Bus 4:96–110.
[11] English L (1999) Improving Data warehouse and business information quality: methods for
reducing costs and increasing profits. Wiley, New York.
[12] Kahn B, Strong D, Wang R (2002) Information quality benchmarks: product and service
performance. Commun ACM 45:184–192.
[13] Al-Hakim L (2007) Information quality management: theory and applications. IGI Global,
Hershey, PA.
[14] Redman T (1996) Why care about data quality? In: Data Quality for the Information Age.
Artech House, Boston.
[15] Batini C, Cappiello C, Francalanci C, Maurino A (2009) Methodologies for Data Quality
Assessment and Improvement. ACM Comput Surv 41:1–52.
[16] English L (2002) The essentials of information quality management. Information Manage-
ment Magazine, 1 September 2002.
http://www.information-management.com/issues/20020901/5690-1.html
[17] Ge M, Helfert M (2007) A review of information quality research. In: Proceedings of the
12th international conference on information quality, 9–11 November 2007, Cambridge,
MA.
[18] Levis M, Helfert M, Brady M (2007) Information quality management: review of an evolv-
ing research area. In: Proceedings of the 12th international conference on information qual-
ity, 9–11 November 2007, Cambridge, MA.
[19] Ruževičius J, Gedminaitė A (2007) Business information quality and its assessment. Eng
Econ 2:18–25.
[20] DataFlux (2008) The Data Governance Maturity Model.
http://www.dataflux.com/DataFlux-Approach/Data-Governance-Maturity-Model.aspx
[21] Ryu K, Park J, Park J (2006) A data quality management maturity model. ETRI J
28:191–204.
[22] Institute of Asset Management (2009) Asset information guidelines – guidelines for the
management of asset information. Woodlands Grange, UK.

[23] Délez T, Hostettler D (2006) Information quality: a business-led approach. In: Proceedings
of the 11th international conference on information quality, Cambridge, MA, 10–12 No-
vember 2006.
[24] Caballero I, Caro A, Calero C, Piattini M (2008) IQM3: information quality management
maturity model. J Universal Comput Sci 14:3658–3685.
[25] Baxter P, Jack S (2008) Qualitative case study methodology: study design and implementa-
tion for novice researchers. Qual Rep 13:544–559.
[26] Fowler FJ (1993) Survey research methods, 2nd edn. Sage, Thousand Oaks, CA.
Information Systems Implementation
for Asset Management:
A Theoretical Perspective

Abrar Haider

Abstract Asset-managing organisations implement information systems for a
variety of reasons which range from process automation to creating information-
enabled integrated views of lifecycle asset management. However, these organisa-
tions have reported inconsistent value from these systems due to an assortment of
strategic, management and operational issues. The primary factor contributing to
this variation is a technology-centric approach to information system implementa-
tion which treats these systems as passive technology constructs whose behaviour
is predictable and which provide the same level of service regardless of the con-
text within which they are deployed. However, information systems are social
systems strongly embedded in the social and physical structures of organisations,
and therefore human, organisational and social factors have a direct relationship
with the use and institutionalisation of technology. Information system implemen-
tation, therefore, is a continuous process aimed at improving organisational res-
ponsiveness to external and internal challenges by aligning these systems with
strategic business requirements. This paper explains various perspectives on in-
formation system implementation and the alignment of these systems with strate-
gic business considerations. It develops a framework aimed at aligning and
matching information system capabilities with business objectives and asset man-
agement requirements. This framework treats information as the key enabler of
asset management and emphasises that in order to achieve desired results, infor-
mation system implementation must serve organisational areas which influence
technology implementation as well as the areas which are influenced by it. This
framework treats information system implementation as a means to translate
strategic asset management objectives into operational actions by enabling asset
lifecycle processes, facilitating organisational integration, and creating a culture

__________________________________
Abrar Haider
School of Computer and Information Science, University of South Australia,
Mawson Lakes Campus, Mawson Lakes, South Australia 5095, Australia

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information Systems and Decision Models, Engineering Asset Management Review, DOI 10.1007/978-1-4471-2924-0_2, © Springer-Verlag London Limited 2012

which values information and is conducive to organisational efficiency and


growth. At the same time, it shows how information generated by these systems
could inform asset management strategies for strategic reorientation and recalibra-
tion. In this way, information system implementation becomes a generative learn-
ing process which helps in the systems’ institutionalisation and contributes to the
maturity of technical, social and organisational contexts of organisations.

Keywords Asset management, Information systems, Strategic alignment

1 Introduction

Information system (IS) implementation is a management activity which aims at


fulfilling business information requirements and aligning them with strategic busi-
ness objectives [1, 2]. Managerial expectations from the use of these systems,
therefore, relate to increased quality and quantity of output, substitution of human
effort through business automation and an enhanced cost-benefit value profile of
core business activities. Advantages of these cost benefits are often translated as
gains in terms of production/manufacturing/service provision output through op-
erational efficiency and comparative advantage over competitors. However, when
organisations fail to realise the anticipated benefits of IS investments, it is mainly
due to the way these systems are introduced and institutionalised in the organisa-
tion [3]. Institutionalisation of ISs, however, is as much a social process as it is a
management process aimed at organisational learning and continuous alignment of
these systems with business requirements and objectives.
Traditionally, engineering enterprises adopt a technology-centred approach to
asset management, where technical aspects command most resources and are
considered foremost in the planning and design stage [4]. Skills, process maturity
and other organisational factors are considered relatively late in the process, and
sometimes only after the systems are operational. However, human, organisation-
al and social factors have a direct relationship with technology [5, 6, 7], which
highlights the conceptual and operational constraints posed on effective technol-
ogy implementation. ISs are systems which are embedded in the social structure
of the context of their implementation, and their value and usefulness depend on
the interaction of social, organisational and contextual factors. Using ISs for
asset management, therefore, signifies a learning progression aimed at organisa-
tional adaptation which is shaped by the view of the technology held by technol-
ogy users and the history of IS operation, maintenance and management preva-
lent in the organisation. This legacy characterises the formal and informal
organisational structures and relationships evolved over a period of time. As a
result, the process of this interaction and the interacting factors shape IS use
through the meaning they give to the IS use and thus contribute to the systems’
maturity in the organisation.

The core objective of this research is to understand how asset-managing or-


ganisations implement and make use of ISs for the effective management of asset
lifecycles. This research uncovers various perspectives in IS implementation and
the dynamics which help shape utilisation of ISs for engineering asset manage-
ment. The overall question that this paper addresses is how ISs should be imple-
mented for asset management in such a way that they provide continuous align-
ment of ISs with strategic asset management and overall business orientation.
Related to this is the question of what factors shape and influence implementation
and institutionalisation of ISs for asset lifecycle management. The paper starts
with a discussion of the problem statement so as to put into perspective why or-
ganisations implement ISs and what issues constrain organisations to make opti-
mum use of these systems. This is followed by a detailed discussion of IS imple-
mentation in general and for asset lifecycle management in particular. The next
section explains the classic theories of IS implementation and how such systems
are related to an asset management paradigm. The paper then proposes a compre-
hensive IS-based asset lifecycle management framework.
This paper addresses the issue of IS implementation for asset management. To
do so, three domains will be discussed in the paper, i.e. information management,
ISs and information technology (IT). It is therefore important at this stage to
discuss each of these domains. Information management refers to the acquisition,
exchange and distribution of information to different stakeholders and the storage
of information. It may appear to be simply managing the lifecycle of information;
however, it is much more than that. It also includes other areas such as
organisation of information; data quality; and management and control of the
structure, aggregation, processing, security, retrieval and delivery of information
to the right stakeholders. Information management is strongly driven by an
organisation’s IT strategy and information management policy. An IS is the
combination of IT and people. An IS uses technology to support business
planning, operations, control, management and decision support. IS refers to the
interaction of people, software, business processes, data and hardware technology
to process and exchange information. Human, organisational and social factors
have a direct relationship with ISs. In this sense, an IS does not just represent
technology but also the way in which people interact with this technology to
execute, manage and improve business processes. People’s interaction with tech-
nology is, therefore, fashioned by the social structure, and this social structure
itself is persistently shaped or transformed by their actions. Thus, there is a dy-
namic relationship between technology, and the context within which information
systems are employed, and the organisational actors who interact with that tech-
nology. From an IS perspective, technology is socially and physically constructed
by human action. IT, in contrast, refers to the design, development,
implementation, support and management of software applications and computer
hardware. It deals with the use of hardware and software to acquire, store,
exchange, retrieve and secure information.

2 Information Systems in Contemporary Asset Management

Conceptually, the implementation of technology is a subjective activity which is


biased and cannot be detached from the human understanding, organisational
context and social environment within which it takes place. Implementation of ISs,
therefore, is influenced by the actors who carry out this exercise and the principles
and assumptions which they follow to implement technology. It represents the
existing meanings and interests which individuals or communities of interest asso-
ciate with the use of technology within the socio-technical environment of an
organisation. Just as human interest in the organisation and the interpretation of
information requirements is shaped and reshaped over time, the nature of expecta-
tions from technology also change from time to time. The focal point of this
change is the interactive association between people, technology and the organisational context. Action is an important element of this interaction; as structuration theory holds [8], it is facilitated and influenced by the social structure of the organisation. Therefore, when ISs are physically adopted
and socially composed, there is generally a consensus on what the technology is
supposed to accomplish and how it is to be utilised [5]. This temporary interpreta-
tion of ISs is institutionalised and becomes associated with the actors that con-
structed technology and gave it its current significance, until it is questioned again
for reinterpretation. This requirement of reinterpretation may grow owing to
changes in the technical, social, or organisational context. ISs, therefore, are not
objective entities, such that they could be implemented without considering their
interaction with technical, organisational, economic, social, and human factors.
Current ISs in operation within engineering enterprises are rooted in the past, as the methodologies employed to design these systems define, acquire and build systems for the past, not for the future [5]. For example, the maintenance IS
development which has attracted considerable attention in research and practice is
far from optimal. While maintenance activities have been carried out ever since
the advent of manufacturing, a model of an all-inclusive and efficient maintenance system has yet to be produced [4, 5]. This is mainly due to the continuously
changing and increasing complexity of asset equipment and the stochastic nature
or unpredictability of the environment in which assets operate, along with the
difficulty of quantifying the output of the maintenance process itself. For example,
current ISs employed for condition monitoring identify a failure condition when
the asset is near breakdown and, therefore, serve as tools of failure reporting better
than instruments for prewarning the failure condition in its development. On the
other hand, ISs utilised in asset management not only must provide for the decen-
tralised control of asset management tasks but also must act as instruments for
decision support. In sum, ISs for engineering asset management must provide an
integrated view of lifecycle information such that the smooth operation of assets
can be ensured and informed choices about managing the asset lifecycle made. An
integrated view of engineering asset management through ISs, however, requires
appropriate hardware and software applications; quality, standardised and inter-

operable information; appropriate process design, organisational structure, and


skill set of employees; alignment between strategic asset management and ISs; and
an organisational culture that values information.

3 Scope of Information Systems in Asset Management

Engineering enterprises mature technologically along a continuum of standalone


technologies to integrated systems and in so doing aim to achieve the maturity of
processes enabled by these technologies and the skills associated with their opera-
tion [9]. Asset-managing engineering enterprises have a twofold interest in infor-
mation and related technologies. Firstly, such technologies should provide a broad
base of consistent logically organised information concerning asset management
processes; secondly, they should make available real-time updated asset-related
information to asset lifecycle stakeholders for strategic asset management decision
support [5, 10]. This means that the ultimate goal of using ISs for asset manage-
ment is to create an information-enabled integrated view of asset management so
that asset managers have complete information about an asset available to them,
i.e. starting from their planning through to retirement, including their operational
and value profile, maintenance demands and treatment history, health assessments,
degradation pattern and financial requirements to keep them operating at near
original specifications. In theory, ISs in asset management therefore have three
major roles. Firstly, ISs are utilised in the collection, storage and analysis of in-
formation spanning asset lifecycle processes; secondly, ISs provide decision sup-
port capabilities through the analytic conclusions arrived at from the analysis of
data; and thirdly, ISs provide for asset management functional integration. In do-
ing so, ISs for asset management seek to enhance the outputs of asset management
processes through a bottom-up approach. This approach gathers and processes
operational data for individual assets at the foundation level and at higher levels
provides a consolidated view of entire asset bases (Figure 1).
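As a minimal illustration of this bottom-up consolidation (the record fields, site names and figures below are invented for the example, not drawn from the chapter), per-asset operational data can be rolled up into a consolidated view of the asset base:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class AssetRecord:
    """Operational data captured for a single asset (foundation level)."""
    asset_id: str
    site: str
    hours_run: float
    failures: int
    maintenance_cost: float

def consolidate(records):
    """Roll individual asset records up into a per-site fleet view."""
    fleet = defaultdict(lambda: {"assets": 0, "hours_run": 0.0,
                                 "failures": 0, "maintenance_cost": 0.0})
    for r in records:
        site = fleet[r.site]
        site["assets"] += 1
        site["hours_run"] += r.hours_run
        site["failures"] += r.failures
        site["maintenance_cost"] += r.maintenance_cost
    return dict(fleet)

records = [
    AssetRecord("P-001", "north", 4200.0, 1, 12500.0),
    AssetRecord("P-002", "north", 3900.0, 0, 8100.0),
    AssetRecord("C-010", "south", 5100.0, 2, 20300.0),
]
print(consolidate(records))
```

In the same spirit, site-level views could be consolidated again into a whole-of-organisation view at the strategic level.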
Theoretically speaking, ISs translate strategic asset management decisions
through the planning and management considerations into operational actions.
They achieve this by aligning ISs with asset management strategy. The planning
and management level defines the design of business processes and choice of
technology which enable these processes and align the operational level with stra-
tegic asset management considerations. Thus, in a top-down direction the ISs
‘translate’ strategic asset management considerations into action. On the other
hand, from the bottom up these ISs provide information analysis and decision
support. This decision support allows for an assessment of the effectiveness and
maturity of existing asset lifecycle processes, enabling technical infrastructure and
management controls. Top management utilises these assessments, at the strategic
level, to bridge gaps in performance or to re-engineer or re-adjust strategic asset
management considerations. Therefore, in the bottom-up direction, the ISs act as
‘strategic enablers’. In sum, ISs for asset management must allow for horizontal

[Figure 1 summarises the scope of ISs for asset management as a three-level matrix mapping IS implementation concerns to desired asset management outputs. At the strategic level, the concern is providing an integrated view of the asset lifecycle, and the desired output is integrated information to facilitate executive decision making. At the planning/management level, the concern is meeting the planning and control requirements of asset lifecycle management, and the desired output is continuous asset availability through performance analysis across design, operational, maintenance, financial and risk information. At the operational level, the concern is meeting the operational requirements of assets, and the desired output is support for asset design, operation, condition monitoring, failure notification, maintenance execution and resource allocation.]

Figure 1 Scope of information systems for asset management [10]

integration of business processes and vertical integration of functional areas asso-


ciated with managing the lifecycle of assets. An important measure of the effec-
tiveness of ISs, therefore, is the level of integration which they provide in bringing
together different functions of asset lifecycle management, as well as stakeholders,
such as business partners, customers and regulatory agencies like environmental
and government organisations.
ISs at the operational level must provide for a standardised information base
that drives the management and strategic levels. In doing so, these systems must
also provide a certain level of coupling with business processes. However, loose coupling would not properly satisfy the information needs of business processes, and tight coupling would make the process technology dependent. The minimum requirement from ISs at the operational and planning/management levels is to provide functionality that facilitates the following [11]:
a. knowing what and where the assets are that the organisation owns and is re-
sponsible for;
b. knowing the condition of the assets;
c. establishing suitable maintenance, operational and renewal regimes appropriate
for the assets and the level of service required of them by present and future
customers;

d. reviewing maintenance practices;


e. implementing job/resource management;
f. improving risk management techniques;
g. identifying the true cost of operations and maintenance; and
h. optimising operational procedures.
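A minimal asset register covering some of the operations listed above, for instance (a) knowing what and where the assets are, (b) knowing their condition and (c) maintaining a service regime, might be sketched as follows (all field names, condition grades and sample assets are hypothetical, not taken from the chapter):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Asset:
    asset_id: str
    description: str
    location: str          # (a) what and where the asset is
    condition_grade: int   # (b) 1 = as new ... 5 = unserviceable
    next_service: date     # (c) scheduled maintenance regime
    annual_om_cost: float  # (g) cost of operations and maintenance

def due_for_service(register, today):
    """Return assets whose scheduled service date has passed."""
    return [a.asset_id for a in register if a.next_service <= today]

def poor_condition(register, threshold=4):
    """Flag assets at or beyond a condition grade threshold."""
    return [a.asset_id for a in register if a.condition_grade >= threshold]

register = [
    Asset("PMP-01", "raw water pump", "station A", 2, date(2012, 3, 1), 5400.0),
    Asset("VLV-07", "isolation valve", "pipeline km 12", 4, date(2011, 11, 15), 800.0),
]
print(due_for_service(register, date(2012, 1, 1)))
print(poor_condition(register))
```

Even this toy register shows how the operational-level functions build on one shared, standardised information base.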
In engineering enterprises, strategy is often built around two principles: com-
petitive concerns and decision concerns. Competitive concerns set manufactur-
ing/production goals, whereas decision concerns deal with the way these goals are
to be met. ISs provide for these concerns through support for value-added asset
management, in terms of choices such as selection of assets, their demand man-
agement, support infrastructure to ensure smooth asset service provision and proc-
ess efficiency. Furthermore, these choices are also concerned with in-house or
outsourcing preferences so as to draw upon the expertise of third parties. The
primary expectation from ISs at the strategic level is that of an information-
enabled integrated view of the asset lifecycle so that informed choices can be
made in terms of economic tradeoffs or alternatives for asset lifecycle in line with
asset management goals and objectives and the long-term profitability outlook of
the organisation. However, according to IIMM [11], the minimum requirements
from ISs at the strategic level are to aid in the following activities:
a. predicting future capital investments required to minimise failures by determin-
ing replacement costs;
b. assessing the financial viability of the organisation to cover costs through esti-
mated revenue;
c. predicting future capital investments required to prevent asset failure;
d. predicting the decay, model of failure or reduction in the level of service of
assets or their components and the necessary rehabilitation/replacement pro-
grammes to maintain an acceptable level of service;
e. assessing the ability of the organisation to meet costs (renewal, maintenance,
operations, administration and profits) through predicted revenue;
f. modelling what-if scenarios such as
(i) technology change/obsolescence,
(ii) changing failure rates and risks these pose to the organisation, and
(iii) alterations to renewal programmes and the likely effect on service;
g. alteration to maintenance programmes and the likely effect on renewal costs;
and
h. impacts of environmental (both physical and business) changes.
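The what-if modelling of changing failure rates in item (f)(ii) can be illustrated with a deliberately simple projection (the asset counts, failure rates and unit costs are invented for the example; a real model would be far richer):

```python
def projected_renewal_cost(asset_count, base_failure_rate, unit_replacement_cost,
                           years, failure_rate_growth=0.0):
    """Crude what-if projection: expected replacement spend per year when the
    annual failure rate drifts by a fixed growth factor each year."""
    costs = []
    rate = base_failure_rate
    for _ in range(years):
        costs.append(asset_count * rate * unit_replacement_cost)
        rate *= (1.0 + failure_rate_growth)
    return costs

# Baseline versus a scenario in which the failure rate grows 10% a year.
baseline = projected_renewal_cost(500, 0.02, 8000.0, 3)
worsening = projected_renewal_cost(500, 0.02, 8000.0, 3, failure_rate_growth=0.10)
print(baseline)
print(worsening)
```

Comparing the two scenarios gives a first-order answer to the strategic question of what future capital investment is required to keep the service level acceptable.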
In practice, ISs for asset management hardly provide the benefits stated above.
An information-enabled integrated view of an asset lifecycle requires the integra-
tion of asset management core business processes and IT-related capabilities
through policies and technical choices to achieve business standardisation and
technical integration and interoperability. What we have on the ground, however, is a technical landscape replete with isolated pools of patchy and error-prone data; the ISs holding, processing and communicating these data lack integration;

there is a plethora of disparate technology platforms which make interoperability


almost impossible; and to cap it all, automation efforts are littered with task-
technology mismatch [5]. The following sections highlight some of the issues
resulting from inept implementation of ISs for asset management.

4 Barriers to Information System Implementation

Value from ISs in asset management depends upon an assortment of technical as


well as organisational and social factors. Effective IS implementation for engi-
neering asset management, therefore, demands a comprehensive implementation
plan which accounts for those aspects which can potentially influence IS institu-
tionalisation in the organisation. ISs are systems that are embedded in the social
structure of the context of their implementation and are, therefore, influenced by
the interaction of social and contextual forces. IS use signifies the learning pro-
gression which is shaped by the view of the technology and the history of IS man-
agement prevalent in the organisation. It characterises the formal and informal
organisational structures and relationships which have evolved over a period of
time. The process of interaction between the interacting structures and roles within
the cultural context of the organisation shapes the maturity of the organisation as
well as its technical infrastructure. ISs, thus, require a certain level of organisa-
tional cultural, procedural and structural maturity to produce enhanced levels of
service. Organisations need to take stock of this maturity and then select new
technologies so that their adoption by the organisation is easy and so that they
contribute to the effectiveness of the overall technical infrastructure. It is no sur-
prise that organisations fail to realise the anticipated benefits of ISs due to a lack
of appropriate planning regarding their implementation and the way these systems
are institutionalised in the organisation. This research carried out an extensive
review of the literature to expose the barriers to successful IS implementation in
the context of engineering organisations (Appendix 1). An analysis of these barri-
ers reveals some common patterns which highlight the issues and problems im-
pacting successful utilisation of ISs by the asset managing organisation. The fol-
lowing sections discuss these issues in detail.

4.1 Limited Focus of Information System Implementation

IS implementation in asset-managing organisations has a narrow focus and limited


scope, which places a strong emphasis on technical aspects and does not give due
attention to the organisational, social and human dimensions of technology im-
plementation [12, 13]. This approach to technology implementation at best serves
as process automation and does not contribute to the cultural, organisational and
technical maturity of the organisation [14]. On the technical side, it gives rise to

issues such as lack of application integration, information interoperability and data


quality [15, 16]. On the organisational side, this approach does not give due con-
sideration to issues such as business process reengineering, introduction of appro-
priate structural changes to allow enabling technical infrastructure to provide
maximum value, up-skilling of employees, training on new technologies and
change management [17, 18, 19]. As discussed previously, technology is not a passive entity; its use is shaped by the interaction of technology with organisational
and human factors. Implementation exercises that do not account for the cause-
and-effect relationship which shapes technology cannot institutionalise technology
in an organisation.

4.2 Lack of Information and Operational Technology Nexus

In the technical domain of engineering enterprises, operational technologies (OTs)


are as prevalent and important as information technologies. OTs include control
and management or supervisory systems such as supervisory control and data
acquisition (SCADA). IT and OT are inextricably intertwined, where OTs facili-
tate running of the assets and are used to ensure system integrity and to meet the
technical constraints of the system. Table 1 presents an overview of the character-
istics of IT and OT infrastructures.
OT technologies are used primarily for process control; however, they also in-
clude technologies such as sensors and actuators, which are used in many control
and data acquisition systems which perform a variety of tasks within the asset
lifecycle. Technically, OT is a form of IT as it necessarily deals with information
and is, in most cases, controlled by software. For example, asset operation is continuously monitored for developing failures or failure conditions. Numerous OT systems are used for condition monitoring at this stage, capturing data from sensors and other field devices and feeding diagnostic/prognostic systems; these include SCADA systems, CMMS and enterprise asset management systems. These systems further provide inputs to maintenance planning and execution.

Table 1 IT and OT profiles [20]

Purpose. IT: information acquisition, exchange and management; business process automation. OT: managing assets and technology; controlling processes.
Architecture. IT: monolithic, transactional, RDBMS or text. OT: event-driven, real-time, embedded software, rule engines.
Interfaces. IT: GUI, Web browser, terminal and keyboard. OT: electro-mechanical, sensors, coded displays.
Ownership. IT: CIO, managers, knowledge workers. OT: engineers and technicians.
Connectivity. IT: corporate network, IP-based. OT: control networks, hardwired.
Examples. IT: finance, accounting, ERP. OT: SCADA, PLCs, modelling, control systems.

However,
maintenance requires not only effective planning but also availability of spares,
maintenance expertise, work order generation and other financial and non-finan-
cial supports. This necessitates the integration of technical, administrative and
operational information of the asset lifecycle such that timely, informed and cost-
effective choices can be made about the maintenance of an asset. For example, a
typical water pump station in Australia is located far from major infrastructure and
has rather long pipeline assets that bring water from the source to the various des-
tinations. The demand for water exists 24 hours a day, 7 days a week. Although
the station may have an early warning system installed, maintenance labour at the
water stations and along the pipeline is limited and spare inventory is generally not
held at water stations. Therefore, it is important to continuously monitor asset
operation (which in this case constitutes equipment at the water station as well as
the pipeline) to sense asset failures as soon as possible. However, early-fault de-
tection is of little use if it is not backed up with the ready availability of excess
capacity and maintenance expertise. The expectations placed on a water station by
its stakeholders concern not just continuous availability of operational assets but
also the efficiency and reliability of support processes. IT systems or ISs therefore
need to enable maintenance workflow execution as well as decision support by
enabling information manipulation on such factors as asset failure and wear pat-
tern; maintenance work plan generation; maintenance scheduling and follow-up
actions; asset shutdown scheduling; maintenance simulation; spare parts acquisition; testing after servicing/repair treatment; identification of asset design weaknesses; and asset operation cost-benefit analysis.
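The early-warning logic discussed above, flagging a developing failure condition before breakdown rather than merely reporting a failure after the fact, can be sketched as a simple limit check (the channel names and alarm limits are invented for illustration):

```python
def check_condition(readings, limits):
    """Compare sensor readings against alarm limits and return alerts for
    channels drifting out of range, before outright failure occurs."""
    alerts = []
    for channel, value in readings.items():
        low, high = limits[channel]
        if not (low <= value <= high):
            alerts.append(f"{channel}: {value} outside [{low}, {high}]")
    return alerts

# Hypothetical alarm limits for a pump set at a water station.
limits = {"bearing_temp_C": (10.0, 80.0), "vibration_mm_s": (0.0, 7.1)}
readings = {"bearing_temp_C": 92.5, "vibration_mm_s": 4.3}
print(check_condition(readings, limits))
```

In a real deployment such alerts would feed the maintenance workflow described above, generating a work order, checking spares availability and scheduling labour, which is precisely where IT and OT need to converge.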
The lack of convergence between IT and OT is a major issue that has technical,
management and organisational dimensions. The root cause of this issue, however,
is the fact that IT and OT are managed and owned by different departments within
an organisation [21]. IT is generally governed by an IT department, whereas OT is controlled by the department within which it is deployed; in practice, IT is managed by IT professionals and OT by engineers. The absence of a common set
of rules to govern the implementation and use of OT and IT leads to the formation
of islands of isolated technologies within the organisation, which makes integra-
tion and interoperability of technologies cumbersome if not impossible. With
limited or no integration, there is poor leverage of learnings and benefits, and
decision support is unintelligible. Management of IT and OT by different func-
tions is cost and effort intensive, as this multiplicity of strategies to manage tech-
nology (which are essentially of the same stock) cannot connect properly with the
business strategy and operational plans [22]. At the same time, this multiplicity
also results in a lack of accountability around standardisation of technology and
practice and policy enforcement.

4.3 Technology Push as Opposed to Technology Pull

There is an evident lack of commitment from top management in engineering


asset-managing organisations to institutionalise technology. As a result, IT imple-
mentation in general and IS implementation in particular has been disorganised
and not driven by strategic business considerations. Most of these technologies
are implemented due to pressure from regulatory agencies. Thus, these technolo-
gies have been pushed into the IT infrastructure of an organisation, without con-
sidering the fit between business processes and technology. This lack of user or
technology stakeholders’ involvement in technology adoption hampers develop-
ment of a collaborative, creative and quality conscious organisational culture and
impedes process efficiency. A by-product of this inefficiency is the inability of
the business to collect and disseminate accurate information which might contrib-
ute to organisation-wide coordination and horizontal integration. IS implementa-
tion, thus, is heavily predisposed towards a technology push rather than a tech-
nology pull strategy.
Engineering enterprises seldom engage in taking stock of their technical infra-
structure and the business processes enabled by it [23, 24, 25]. As a result, these
organisations are unable to determine how well their business processes are per-
forming [26], how effectively these processes are coupled with technology [27]
and what the information gaps or requirements are which technology has not
fulfilled [28]. However, when a technology is selected to fill these gaps, it has a
process-requirement ‘pull’ impact and fits in well with the operating logic as well
as the enabling technical and non-technical infrastructure of the organisation. On
the other hand, when the technology is ‘pushed’ into the technical infrastructure
of the organisation, the organisation must adapt to the chosen technology. This adaptation has
technical, organisational and human dimensions. As a result, there is a task-
technology mismatch [29] and lack of technical standardisation [30], which gives
rise to issues related to, for example, information integration and interoperability
across the organisation.
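The process-requirement ‘pull’ described above can be illustrated as a simple gap analysis, in which a candidate technology is rated against explicit process requirements before adoption. The criteria, scales and names below are hypothetical illustrations rather than anything prescribed in this chapter:

```python
# Hypothetical gap-analysis sketch: rate process requirements and a
# candidate technology's capabilities on a 0-5 scale, then compute the
# shortfall per criterion. A "pull" selection would reject candidates
# with large gaps; a "push" adoption never asks the question.

def fit_gaps(requirements: dict, capabilities: dict) -> dict:
    """Shortfall (requirement minus capability) per criterion, floored at 0."""
    return {criterion: max(0, needed - capabilities.get(criterion, 0))
            for criterion, needed in requirements.items()}

# Invented example: an asset-management process and one candidate system.
process_needs = {"data_integration": 5, "condition_reporting": 4, "mobile_access": 3}
candidate = {"data_integration": 2, "condition_reporting": 4}

gaps = fit_gaps(process_needs, candidate)
print(gaps)  # non-zero entries flag unmet requirements
```

A non-zero gap marks a requirement the pushed technology would leave unfulfilled, which is precisely the task-technology mismatch discussed above.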

4.4 Isolated, Unintegrated and Ad hoc Technical Solutions

The technical infrastructure of an asset-managing organisation consists of various
off-the-shelf proprietary, legacy and customised systems and a number of ad hoc
solutions in the forms of spreadsheets and databases. Off-the-shelf systems are
developed on customised guidelines and support proprietary data formats,
whereas legacy systems are technologically weak, even though they evolve with
the organisation [31]. These systems have been in operation for more than 20
years, are developed using old technologies and are not compatible with new
technologies. Ad hoc solutions are developed by employees on their own. They
do not conform to any quality or technical standard and are naturally isolated
30 A. Haider

from the mainstream technology-based logical and physical operating model of
the organisation. As a result of these anomalies, asset lifecycle information is
hard to aggregate, lacks interoperability and has tight coupling with technology. It
therefore cannot be reused. ISs in asset-managing organisations are simply iso-
lated pools of data [32] which may serve the needs of individual departments but
do not contribute towards an integrated information-enabled view of asset lifecy-
cle management. This means that the existing technical infrastructure in general
and ISs in particular are generally not aligned with the strategic asset manage-
ment considerations [33], do not contribute to functional integration [17] and do
not conform to a unique enterprise information model.
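The aggregation problem described above can be made concrete with a minimal sketch: the same asset class recorded in an ad hoc spreadsheet export and in a legacy system must first be mapped into one shared record shape before any integrated lifecycle view is possible. All field names, formats and data below are invented assumptions:

```python
# Invented illustration of the integration problem: one asset pool is
# scattered across an ad hoc spreadsheet export and a legacy system's
# pipe-delimited format, so aggregation requires normalising both into
# a common record shape.

def from_spreadsheet(row: dict) -> dict:
    """Normalise one row of a hypothetical spreadsheet export."""
    return {"asset_id": row["Asset No."].strip(), "condition": row["Cond."].lower()}

def from_legacy(record: str) -> dict:
    """Normalise one record from a hypothetical legacy 'id|condition' format."""
    asset_id, condition = record.split("|")
    return {"asset_id": asset_id, "condition": condition.lower()}

pool = [from_spreadsheet({"Asset No.": " P-101 ", "Cond.": "GOOD"}),
        from_legacy("P-102|POOR")]
print(pool)  # a single, aggregable list of records
```

Without such an enterprise-wide record shape, each source remains an isolated pool of data in the sense of [32].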

4.5 Lack of Strategic View of Information System Capabilities

IS implementation in asset-managing organisations does not follow a linear path.
There are a number of reasons for this. Firstly, maturity of technology is not pro-
portional to the growth and maturity of the organisation’s infrastructure, culture
and intellectual capital [34, 35, 36]; secondly, there is often a lack of wider organ-
isational representation in the selection of technology [30]; thirdly, management
often harbours a distrust in technology [31]; fourthly, there is often a lack of an
evaluative culture to assess IT performance which could inform the organisation
of the value profile which technology enables and the issues associated with its
implementation and continued use [37, 38]; fifthly, cost concerns drive IS imple-
mentation rather than an approach which takes into account the existing techno-
logical infrastructure, business requirements, available skill base and operational
and strategic value of technology investment [12]; sixthly, information is often
not treated as an asset which one owns [29, 39].
Traditionally, asset managers focus on developing the technical foundation for
asset lifecycle management around OTs and leave the selection, adoption and
maintenance of information technologies to IT managers [13]. This may be at-
tributed to the propensity of asset managers to view IS utilisation in general as a
secondary or support activity to execute business processes. Their emphasis is
more on the substitution of labour through technology utilisation rather than busi-
ness automation and functional integration aimed at internal efficiency and over-
all strategic advantage. Since the level of input from asset managers regarding the
choice of IS has a narrow focus, these systems do not contribute to the organisa-
tion’s responsiveness to internal and external challenges. There is, therefore, a
need for closer interaction between the CIO (chief information officer), CTO
(chief technology officer), and CEO (chief executive officer) or the COO (chief
operating officer). Such a nexus allows for coherent planning, design, implemen-
tation of an organisation’s structure, processes and technical infrastructure and
maturity of its value chain.

4.6 Lack of Risk Mitigation for IT Infrastructure

Risk management is fundamental to asset management. Almost all asset-managing
organisations conform to some risk management strategy, standard or plan; how-
ever, their scope does not include the risks posed by or to ISs. Risk mitigation within
the IT function or department is limited to securing ISs from unauthorised access,
intrusion and malicious codes like viruses. There is no risk assessment, control or
management in terms of business losses occurring as a result of lack of information
availability, quality and integration. A related issue is the lack of information owner-
ship within asset-managing organisations [29, 30], which leads to an inability of the
organisation to assign accountability for asset management inefficiencies resulting
from wrong, fabricated, compromised and delayed information [40, 41].
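The omission noted above can be illustrated by extending a conventional likelihood and impact risk register with information risks. The entries, scales and scores below are illustrative assumptions only, not figures from this chapter:

```python
# Illustrative only: a conventional likelihood x impact register (1-5 scales)
# extended with the information risks that asset-managing organisations
# usually leave out of scope. Entries and figures are invented.

def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

register = [
    {"risk": "pump bearing failure",       "likelihood": 2, "impact": 4},
    {"risk": "condition data unavailable", "likelihood": 3, "impact": 3},  # information risk
    {"risk": "delayed work-order data",    "likelihood": 4, "impact": 2},  # information risk
]
for entry in register:
    entry["score"] = risk_score(entry["likelihood"], entry["impact"])

# Ranking mixes engineering and information risks on one scale.
ranked = sorted(register, key=lambda e: e["score"], reverse=True)
print([e["risk"] for e in ranked])
```

In this invented example an information risk outranks the engineering risk, which is exactly the kind of exposure a register limited to physical assets would never surface.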

4.7 Institutionalisation Issues Surrounding Information Systems

The issues discussed here regarding IS implementation for asset lifecycle man-
agement are diverse. These issues have technical, human and organisational di-
mensions and significant consequences for business development. IS implementa-
tion should, therefore, not be treated as a support activity in the value chain of
asset management. It should be pursued proactively and aim to continuously align
technology with the organisational structure and infrastructure, process design and
strategic business considerations so as to realise the soft and hard benefits associ-
ated with the use of these systems. Thus when ISs are physically adopted and
socially and organisationally consistent, there will be consensus on what the tech-
nology is supposed to accomplish and how it is to be utilised. These systems
would then provide a learning platform to facilitate organisational evolution and
maturity where they act as business enablers and strategic translators.
IS institutionalisation is strongly underpinned by the political, economic and
cultural context of the organisations, which bring together individuals and groups
with particular interests and interpretations and help them in creating and sustain-
ing ISs as socio-technical systems [42]. The relationship between ISs and the
context of their implementation has been the focus of many research initiatives
such as the connection between planning sophistication and IS success [43], expe-
diency of strategic IS planning [44], differences between IS capabilities and man-
agement perceptions [45], impact of inter-organisational behaviour and organisa-
tional context on the success of IS planning [46] and identification of key
dimensions of IS planning and the systems’ effectiveness [47].
IS implementation planning is an intricate task with a complex mix of activities
[48]. It is a continuous process aimed at harmonising the objectives of ISs, defin-
ing strategies to achieve these objectives and establishing plans to implement these
strategies [49]. However, as IT environments in general and IS applications in
particular are growing in their control and complexity, IS implementation is be-
coming a specialised task and requires broad organisational representation. This
broad representation ensures that all aspects of IS implementation are covered at
the planning stage. Organisations, therefore, formulate cross-functional teams
comprising business managers, IS personnel, users, unit managers and financial
managers to create an all-encompassing implementation strategy through effective
communication and interaction.
The issues discussed above range from technical issues to social, managerial
and organisational issues. However, the origin of these issues can be traced back
to two factors, i.e. inadequate organisational planning and preparation for technol-
ogy adoption and disregard of organisational and social change associated with
technology adoption. Therefore, the notion of employing ISs requires more than
just the installation of technology. It calls for consideration of organisational,
technical and structural processes and the human dimensions of IS use and the
meaning and values that the stakeholders attach to them [50]. The following sec-
tions build upon this theme and develop the case for IS implementation in engi-
neering asset management.
The following sections explain the theoretical foundations of IS implementa-
tion in general and for asset management in particular.

5 Defining Information System Implementation

IS implementation is defined as “an organisational effort to diffuse and appropri-
ate IT within a user community” [51, p. 231]. The user community has some aspi-
rations attached to the use of technology which characterise the values and inter-
ests of various social, political and organisational agents [42]. Walsham [52] notes
that IS implementation needs to cover all the human and social aspects and im-
pacts of implementation in organisations. The effectiveness of IS implementation,
therefore, is a subjective term. However, DeLone and McLean [53] argue that six
dimensions determine the effectiveness of IS implementation, i.e. system quality,
information quality, information use, user satisfaction, individual impact and or-
ganisational impact. The effectiveness of IS implementation is compromised if
relevant change management strategies are not put in place [54]. Therefore, work-
ing and learning are increasingly being blended together. Castells [55] takes the
argument further and posits that ISs, due to their information processing capabili-
ties, have the potential to bring about continuous learning and innovation in an
organisation. IS implementation is not a one-off endorsement of technology; in
fact, it is a continuing process of learning aimed at the evolving use of ISs. IS
implementation, therefore, can be defined as a continuous process aimed at organ-
isational learning through alignment between the organisation’s strategy and ap-
plication of ISs within the organisation, where the use of these systems is shaped
by the organisational context and actors and guided by the value profile that the
stakeholders of these systems attach to the implementation.
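DeLone and McLean’s six dimensions [53] can be sketched as a weighted assessment. Since effectiveness is, as noted above, a subjective term, such a score is at best a structured discussion aid; the weights and 0–10 ratings below are hypothetical:

```python
# Hypothetical sketch: DeLone and McLean's six effectiveness dimensions
# turned into a weighted average. All weights and ratings are invented
# for illustration.

DIMENSIONS = ["system quality", "information quality", "information use",
              "user satisfaction", "individual impact", "organisational impact"]

def effectiveness(ratings: dict, weights: dict) -> float:
    """Weighted average rating across the six dimensions."""
    total = sum(weights[d] for d in DIMENSIONS)
    return sum(ratings[d] * weights[d] for d in DIMENSIONS) / total

ratings = dict(zip(DIMENSIONS, [7, 5, 6, 4, 6, 3]))
equal_weights = {d: 1 for d in DIMENSIONS}
print(round(effectiveness(ratings, equal_weights), 2))
```

Different stakeholder groups would assign different weights, which restates the chapter’s point that effectiveness is shaped by the value profile stakeholders attach to the implementation.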

6 Perspectives on Information System Implementation

In computer science, implementation is considered an activity concerned with the
installation of an IT system and applications and is focused entirely on the techni-
cal aspects of an IS’s development process. On the other hand, in an IS paradigm,
implementation is a process which deals with how to make use of hardware,
software and information to fulfil specific organisational needs [56]. This per-
spective of IS implementation is generally governed by two quite opposing views.
In a technology-driven view, humans are considered passive entities whose be-
haviour is determined by technology. It is argued that technology development
follows a causal logic between humans and technology and is independent of its
designers and users. This mechanistic view assumes that human behaviour can be
predicted, and therefore technology can be developed and produced perfectly with
an intended purpose. This view may hold true for control systems such as micro-
controllers, which have a determined behaviour; however, this view has inherent
limitations for ISs due to its disregard of human and contextual elements. A cor-
ollary to this objective view is the managerial assumption that IS implementation
increases productivity and profitability. Consequently, management decisions are
governed by the expectations from technology rather than the means that enable
technology to deliver the expectations.
The opposing stance to the traditional technical view is much more liberating
and takes a critical view of the deterministic approach to the relationship between
technology and human, organisational and social aspects. This view illustrates
that technology has an active relationship with humans, in the sense that humans
are considered constructors and shapers of the use of technology. In this ap-
proach, technology users are considered active rather than passive entities, and
their social behaviour, interaction, and learning evolves continuously towards
improving the overall context of the organisation. This organisational change, as a
result of IS implementation, is not a linear process and represents intertwined
multifaceted relations between technology, people and a variety of opposing
forces, which makes the human and organisational behaviour highly unpredict-
able. This unpredictability is attracting the attention of researchers to uncover the
relationship between humans and technology to develop human-centred tech-
nologies [57, 58].
The computer science and IS perspectives on technology implementation are
quite divergent, where one considers it as structure and the other as process. Con-
sidering it as structure demonstrates that technology determines the business proc-
esses, whereas the process view argues that technology alone cannot determine the
outcomes of business processes and in fact is open to an intentional purpose.
Schienstock et al. [59] summarise various perceptions of technology implementa-
tion using different metaphors (Table 2).
When these metaphors are viewed in the light of the two views described pre-
viously, the first three metaphors, i.e. tool, automation and control instrument,
conform to the technical view. The process metaphor matches the emancipatory
view; whereas the organisation technology and medium metaphors are debatable
and can conform to either view.
A review of the literature on IS adoption reveals that researchers have at-
tempted to address implementation of these systems from a variety of different
perspectives. At the same time, it also reveals that the value profile which organi-
sations attach to IS implementation spans from simple process automation to
providing decision support for strategic competitiveness. An in-depth literature
review of IS implementation and adoption from 2000 to 2007 was carried out for
this research (Appendix 2). This literature review identifies different theoretical
perspectives which originated from diversified fields of knowledge such as busi-
ness management, organisational behaviour, computer science, mathematics,
engineering, sociology and cognitive sciences. These theories can be classified
into three broad categories: technological determinism (such as information proc-
essing, task-technology fit and agency theory); socio-technical interactions (such
as actor network theory, socio-technical theory, and contingency theory) and
organisational imperatives (such as strategic competitiveness, resource-based
view theory and dynamic capabilities theory).
Technological determinism theories adopt a mechanistic view of organisations
where technology is applied to bring about predicted or desired effects. Socio-
technical theories are focused on the interaction of technology with the social and
cultural context of the organisation to produce desired results. Organisational
imperative theories focus on the relationships between the environment in which
the business operates, business strategies and strategic orientation, and the tech-
nology management strategies to produce desired results in the organisation. The
following sections discuss these perspectives in detail and examine their role in
effective implementation of ISs for engineering asset management.

Table 2 Perceptions of Technology Implementation [59]

Metaphor | Function | Aim
Tool | Support business process | Increase quality, speed up work process, cope with increased complexity
Automation technology | Eliminate human labour | Cut costs
Control instrument | Monitor and steer business process | Adjust to changes, avoid defects
Organisation technology | Co-ordinate business processes | Increase transparency, organisational flexibility
Medium | Set up technical connections for communication | Facilitate quick and intensive exchange of information and knowledge
Process | Improve IS | Promote continuous learning

6.1 Technological Determinism

Technological determinism theories are technology centred, where organisational
or societal change is enabled by technology adoption. Technology determinists
believe that technology is the prime enabler of change and, therefore, is the fun-
damental condition which is essential to shape the structure or form of an organi-
sation. Technological determinism is also referred to as technology push, where
the organisation lets technology determine a solution rather than business need
driving the solution. It argues that social and cultural shaping of an organisation is
characterised by technology and receives minimal or no influence from human and
social aspects. Karl Marx is often cited as one of the earliest technology determi-
nists, with his dictums like ‘the hand-mill gives you society with the feudal lord;
the steam-mill, society with the industrial capitalist’ [60]. This vision takes a uto-
pian view of technology and advocates the intrinsic goodness of technology to
organisations and society at large.
Bijker [61] argues that technological determinism embodies two subtly differ-
ent principles. The first principle states that technological development follows a
progressive path, one in which older technology is replaced with new technol-
ogy; denying this progression is to intervene in the natural order. The second
principle has been attributed to Heilbroner [62], who argues that technologies act
on social interactions in a predictable way. In light of this principle, technologi-
cal determinism calls for implementation of technology to enable foreseeable
changes in business processes, organisational structure, information flows, com-
munication patterns and functional relationships. It conforms to a checklist ap-
proach and stresses that if certain steps are followed, relevant benefits from
investments in ISs can be achieved. These steps include development of technol-
ogy platforms as well as the activities that must be carried out to use them effec-
tively, such as user training, networking and data management [63]. These initia-
tives have been applied as if they were independent of the context and valid
under any conditions or circumstances. User training is one such example, where
it is often believed that training on different aspects of software or a system
enables users to handle any issue relating to their operation. In fact, humans have
varying levels of comprehension and expertise. In sum, to provide value from IS
implementation, technological determinism disregards organisational, cultural
and social aspects (which may influence or be influenced by technology adop-
tion) even though they are inherently interlinked [64]. This approach, however,
recognises that technology provides the necessary support to enable business
processes in an organisation. Technology implementation and adoption, thus,
becomes a linear process which organisations must go through to exploit the full
IS potential.
In this approach IS implementation is considered a smooth process due to as-
sumed objectives with an apolitical vision of the organisation and organisational
harmony and stability. In terms of Boulding’s theory of the hierarchy of systems,
technological determinism matches control systems, which are governed by prede-
fined targets such as those in thermostats or robots. Similarly, deterministic im-
plementation of ISs is led by critical success factors and performance indicators
embodied in the IS implementation plan. It is aimed at business automation rather
than enabling business strategy, mainly due to the way it disregards human and
other organisational aspects. In these circumstances, the underlying assumption is
the predictability of human behaviour, which implies that whole organisations can
be structured to accommodate and make use of ISs in specific and predetermined
ways. Technology, with its deterministic behaviour, thus creates new principles
and standards for business operations that compel organisations to challenge the
status quo and find solutions to questions such as what ISs do, why they do what
they do and how they accomplish what they do, which in turn makes organisations
consider alternative available technologies.
IS implementation in engineering asset management has generally followed a
technological determinism approach, where technology is considered first and
human and organisational factors are not considered until after the actual imple-
mentation of the technology. This may be attributed to the propensity of engineer-
ing organisations to exhibit a mechanistic attitude towards technology which fo-
cuses on the automation of processes rather than viewing ISs as strategic enablers
of the organisation. This also explains the heavy leaning towards maintenance
activities in the overall asset lifecycle management strategies and viewing asset
lifecycle management activities as a necessary cost rather than as the premium of
smooth asset operation. Consequently, the existing backdrop of IS implementation
in engineering asset management represents a fragmented approach aimed at ena-
bling individual processes in functional silos and fails to enable integration of
asset lifecycle management activities and processes.

6.2 Socio-technical Alignment

The socio-technical views in IS implementation originated from organisational
theory [65], institutional theory [66] and sociology [67]. The socio-technical ap-
proach was introduced in ISs as a way of maximising the value and success of IS
implementation [68]. Since then it has been applied to a variety of aspects of IS
operation (such as task-technology fit) in a broad way, chiefly through the re-
search of Enid Mumford (see for example [69]). It stresses the importance of so-
cial choices in the implementation of technology within a particular context by
employing participative techniques [57]. Socio-technical theorists regard ISs as
social systems that are shaped by people with varying interests and argue that
human, organisational and social factors have a direct relationship with ISs. This
view focuses on the change that takes place in response to IS implementation
through the interaction of various actors within the organisational context that
shape IS use. The underlying assumption of this approach is that the success of
technology implementation cannot be predetermined or predefined; it in fact de-
pends upon the way different social and human variables react to technology
adoption within the context of the organisation. Therefore, it presents IS imple-
mentation as a bottom-up approach which provides means to achieving the ends of
organisational objectives [70]. This is in contrast to the view held by technological
determinists, for whom IS implementation is an end in itself.
Orlikowski [71], with the help of Giddens’ structuration theory, discusses the
dichotomous nature of technology. The author posits that technology, on the one
hand, conforms to an intended reality through its well-established intrinsic objec-
tive features, such as hardware and software logic. On the other hand, technology
is also subjective, and organisational reality is emergently constructed through the
social interaction of humans with technology. This view is supported by Ciborra
[70], who argues that improvisation is a significant aspect which helps in building
organisational reality. This improvisation happens at all levels of the organisation
and reflects the way an organisation adjusts to technology implementation. Or-
ganisational change, therefore, becomes a dynamic activity, as the planning and
decision-making processes aim to make sense out of the continuously changing
organisational context. Walsham [64] suggests that the following areas help in
understanding the interaction between context and processes.
a. computers and cognition, which focus on the individual level and build an
understanding of technology and its relationship to human action and cognition;
b. phenomenology and hermeneutics, which treat ISs as interpretive entities hav-
ing significance and meaning from designers’ and users’ perspectives;
c. soft systems methodology, which works on the supposition that for organisa-
tional intervention to occur, it is necessary to take into account the different
contingent (but not universal) interpretations which different individuals and
groups hold;
d. critical theory, which focuses on individual emancipation by developing meth-
odologies which promote open communication and explicitly recognise the ex-
istence of structures of power and control in organisations; and
e. post-modernism, which concentrates on the closeness of events and importance
of contingent conditions and challenges future visions of progress.
Working up from the bottom, the socio-technical approach focuses on the ef-
fects of technology implementation. It focuses on the way technology-enabled
processes are managed at the operational level. This requires line managers to be
aware of the information needs of business processes; capabilities of technologies
to enable these processes; skills of employees to operate these technologies; and
the social, organisational and cultural contexts within which technology is im-
plemented. Here the manager deals with a number of uncertainties about technol-
ogy, organisational evolution and maturity, and culture. For example, even if the
relationship between technology and the context is well established and tested in
different organisational settings, the emergent and unpredictable nature of human
action may change the development, requisition and institutionalisation of tech-
nology [71]. This quagmire has been termed ‘soft-line’ determinism. From this
point of view, ISs are instruments of sense making, i.e. the perception of charac-
ter and value of information and ISs. Socio-technical approaches, therefore, are
more suited to control and governance of post-implementation issues, by describ-
ing and providing understandings of the relationship between technology on the
one hand and organisational context and actors on the other. Due to the changing
nature of interacting elements whose behaviour is unpredictable, this approach
falls short in terms of providing an all-encompassing view of how to approach
IS implementation.

6.3 Organisational Imperative

This approach to IS implementation is mainly attributed to the information proc-
essing model. The fundamental premise of this perspective is that strategic plan-
ning is the key to organisational effectiveness and efficiency. It argues that man-
agement has unrestricted control over the choice of technology and its impact in
the organisation. Organisations and the use of technology within them could thus
be viewed as a brain which induces fragmentation, routinisation and binding of deci-
sion-making practices which make it manageable. Organisational imperative
theories in ISs are strongly influenced by strategic management theories. This
influence gained momentum after Porter [72] proposed his theory on competitive
advantage. Porter’s five-force industrial analysis model and related strategies
have been used as a basis for many research endeavours on IS-based competitive
advantage [73].
Organisational imperative theories follow a top-down approach and generally
focus on activities such as the formulation of an information policy aligned with
business strategy, followed by information architecture, which is designed to cater
for the overall business as well as individual business process needs. These steps
thus provide a roadmap of IS development and implementation by taking into
consideration factors such as costs involved in the development and implementa-
tion of ISs, an organisation’s technical infrastructure, technological trends and the
risks involved in the process. In these approaches, consideration given to IS plan-
ning overshadows IS implementation, and implementation issues are believed to
originate from the post-implementation investigation of factors which hamper
successful implementation. Mintzberg [74] criticises the top-down approach and
argues that by following this approach, strategy formulation represents a con-
trolled and mindful process which is associated exclusively with top management
and that the process of strategy formulation is isolated from its implementation.
Due to this disconnect, strategy formation becomes a one-way street without any
feedback on its effectiveness, whereby strategy implementation processes do not
inform strategy formulation processes. Davenport [75] takes the argument further
and concludes that the highly structured top-down approaches do not provide an
effective method of IS implementation. The author suggests that a business envi-
ronment changes at a continuous rate, and these methodologies are not in keeping
with the pace of this change. It must also be acknowledged that information used
to formulate strategy is historic; therefore, the assumptions arrived at from the
analysis of this information have little relevance for future decisions. In most cases,
the speed with which technology updates itself renders these strategic considera-
tions obsolete. Consequently, by the time strategy is fully implemented, the pri-
mary principles adopted and assumptions made about the business are outdated,
and this approach ends up strategising for the past and not for the future.
These three theoretical perspectives encompass the existing principles em-
ployed to implement technologies within business organisations. All have their
own limitations and benefits and are further dependent on a variety of intra- or
extra-organisational factors for their success. However, for implementation of ISs
for asset management, none of these theoretical perspectives could be considered
all-encompassing or all-inclusive. Theoretically, a hybrid approach which draws
on all three of these perspectives seems most appropriate for IS implementation
for asset management. The following sections describe how ISs must be imple-
mented to align strategic asset management considerations with technology, so as
to respond to external and internal challenges.

7 Aligning Information System Implementation with Strategic Orientation

In asset management, ISs are not just business automation tools. Among the most
significant contributions of these systems are that they translate strategic objec-
tives into action and inform asset and business strategy through value-added deci-
sion support. However, the fundamental building block to enable such a value
profile is the quality of the alignment of strategic business objectives with the
physical, social and technical context of the organisation such as policies, internal
structures, systems and relationships which support business execution [76]. These
contexts and their mutual interaction help organisational maturity by shaping col-
laboration, empowerment, adaptability and learning in the organisation [77]. The
mutual interaction of these contexts depends on three critical aspects: firstly, the
design of the organisation, i.e. the organisation’s structure and functions, and the
reporting relationships that give shape to this structure; secondly, the business
processes and related information flows; and thirdly, the skills and competencies
required to execute business and operate enabling technologies, i.e. job design and
training, sourcing and management of human resources [78]. The concept of align-
ing strategic business objectives with the physical, social and technical context of
an organisation illustrates that IS implementation should be aimed at binding these
contexts together so that they contribute to the strategic advantage of the business
[79, 80]. As a result, institutionalisation of these systems contributes to the matur-
ity of these contexts and increases organisational responsiveness to internal and
external challenges [81].
Each implementation of an IS is unique, and it is not possible to follow particu-
lar theories (e.g. technological determinism, socio-technical alignment, organisa-
tional imperatives) regarding implementation in letter and spirit. For example, ISs
for asset management include operational technologies like sensors and other
40 A. Haider

condition monitoring systems whose behaviour is highly predictable and which


require minimal human intervention. On the other hand, there are other systems
like CMMS, ERP or MIS whose behaviour and use are determined by the social
interactions of the organisational actors in the organisation. At the same time, the
information demands put on ISs in some areas of engineering asset management
(such as maintenance) are quite diverse, and the available technologies are not
mature enough to address these demands. This limits the choice of technologies
and also influences their application and use. The dynamics of asset management,
therefore, suggest that for effective IS implementation there needs to be a hybrid
approach which brings together social, organisational and technical contexts of the
organisation and aligns them with strategic business orientation. Numerous at-
tempts have been made at describing IS alignment; however, two classical ap-
proaches proposed by Earl [80] and Henderson and Venkatraman [78] have been
the focus of practical and research endeavours.
Earl [80], while proposing his organisational fit framework (Figure 2), suggests
that alignment of technology is subjective and needs to be driven by the context
rather than strategic orientation of the business. This framework attempts to pro-
pose a holistic view of IS implementation and suggests four processes (i.e. clarifi-
cation, innovation, foundation and constitution processes) which provide align-
ment between the four strategic domains, i.e. business strategy, information
management strategy, IS strategy and IT strategy. Each of these domains is further
subdivided into components and imperatives.

Figure 2 Organisation fit framework [81]: organisational strategy (business,
organisation; intent, context), information systems strategy (alignment,
opportunity; SBU, group), information management strategy (roles, relationships;
formal, informal) and information technology strategy (scope, architecture;
capability, powers)

Components represent the key factors which govern the domain, whereas imperatives
illustrate the key aspects which need to be taken into account to manage the
domain. This framework pro-
vides guidelines for strategic management of IT and ISs and their integration.
Earl [80] argues that the organisation must have answers to some fundamental
questions to align the four domains. Although the framework does not answer
these questions, it formalises them into the strategic agenda of the organisation
and points to the processes through which these questions are raised and answered
regularly. These questions could be as follows:
a. What IS and IT applications should the organisation develop to improve the
competitiveness of its business strategies?
b. What technological opportunities should the organisation consider to enhance
the efficiency and quality of its business processes?
c. Which IT platforms should the organisation be developing, and what plan and
policies are required to do that?
d. What IT capabilities should the organisation develop, and how may these be
acquired?
e. How should the IS activities be organised and what is the role of ISs?
f. How should IS/information technologies be governed and what kind of mana-
gerial profile best serves these needs?
The framework has an organisational strategy domain at its core and sug-
gests its two components as being the organisational intent interpreted through
strategic choices and the organisational context shaped by the organisational in-
frastructure and culture. The components and imperatives of an organisation’s
strategy need to be accounted for while formulating IS strategy. The organisa-
tional context and business intent are subjective, and therefore the process with
which they feed into information strategy is not always clear or formalised. Earl
[80] terms the understanding of the strategic considerations which influence the in-
formation strategy domain the ‘clarification process’ and argues that familiarity
with strategic business intent and the organisational context is essential for IS
implementation and management. IS strategy is, thus, developed in response to
this process of clarification. The two key components comprising IS strategy
domain are ‘alignment’ and ‘opportunity’. Alignment is based on the clarification
process and calls for aligning IS implementation with business intent, goals and
context. The aim of alignment is to keep IS implementation aligned with business
orientation through strategic business units by employing methodologies such as
critical success factors or through steering committees [82]. The opportunity
component seeks to seize opportunities for organisational growth and maturity
through creative use of technology by actively looking out for technology-centric
business improvement enablers and thus contributing to the ‘innovation process’.
The IS strategy domain influences other domains through this innovation process,
for instance, the promise of translating or informing organisational strategy with
ISs is much greater than making structural adjustments. At the same time, the IS
strategy domain prompts changes to information management when reconfigura-
tion of the functionality of these systems necessitates business process reengineering,
or IS opportunities influence the technological scope of IT strategy as the
innovation process necessitates acquiring new technical abilities.
The domain of IT strategy deals with two components: the scope or types of
technologies which the organisation needs to use and the architecture which con-
trols the technologies used by the organisation. Imperatives in IT strategy are
capability and powers. The scope of the technological capability is determined by
the skills and competencies needed for proficient use of technology, whereas
architecture is influenced by the powers required to implement and manage the
technological infrastructure. In this way, the IT strategy domain constitutes the
‘foundation process’, which provides the management base and control of activi-
ties associated with building and developing an IT infrastructure. The fourth do-
main, information management strategy, functions as the bedrock of IS strategy
and comprises roles and relationships. The components of the information man-
agement strategy domain are the roles and relationships which need to be defined
in managing IT activities, particularly those related to IS function. Roles refer to
the formal associations which define the responsibility and the control of those in
power to manage information management resources, whereas relationships de-
fine the informal relationships between responsibility and controlling power. The
linkages that the information management strategy domain confers upon IS strat-
egy, IT strategy and organisation strategy domains are called the ‘constitution
process’. This constitution process thus influences organisational strategy, the
capabilities and effectiveness of IS strategy, and the quality of the IT-related
strategic decisions.
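The four domains and the processes which link them can be sketched as a small data structure. This is an illustrative encoding only: the component/imperative assignments follow the descriptions above where they are explicit, and the pairwise process pairings simplify Earl's framework (the constitution process, for instance, actually links information management to several domains at once).

```python
from dataclasses import dataclass

@dataclass
class Domain:
    """One of the four strategic domains in Earl's organisational fit framework."""
    name: str
    components: list   # key factors which govern the domain
    imperatives: list  # aspects to be taken into account to manage the domain

domains = {
    "organisation": Domain("organisational strategy",
                           ["intent", "context"], ["business", "organisation"]),
    "is": Domain("IS strategy", ["alignment", "opportunity"], ["SBU", "group"]),
    "im": Domain("information management strategy",
                 ["roles", "relationships"], ["formal", "informal"]),
    "it": Domain("IT strategy", ["scope", "architecture"], ["capability", "powers"]),
}

# Each process aligns a pair of domains (a simplified pairing).
processes = {
    "clarification": ("organisation", "is"),  # business intent/context feed IS strategy
    "innovation":    ("is", "it"),            # technology-centric improvement enablers
    "foundation":    ("it", "im"),            # management base for IT infrastructure
    "constitution":  ("im", "organisation"),  # roles/relationships link back to strategy
}

for name, (src, dst) in processes.items():
    print(f"{name}: {domains[src].name} -> {domains[dst].name}")
```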
The alignment modelled in this framework provides a high-level view of inte-
grating technology with business. It describes alignments in broad terms and does
not provide guidelines which could be drilled down to the operational level im-
plementation of technology. It views alignment of ISs as a linear or a mechanistic
process which follows fixed paths and interacts with ‘standard’ contexts. How-
ever, in reality, IS alignment is non-linear, takes time, and cannot be attained
through an assumed set of strategies built around roles and relationships. In
addition, the notion of assumptions provides a contradiction to what is proposed
in the framework. Viewing alignment as a mechanical process implies a deterministic
stance, which affects adaptability and also impedes the creativity and novelty
proposed by the innovation process associated with the IS strategy domain. It is
also important to note that values, roles, and their relationships are not just impor-
tant for information management, but are equally significant for the overall
alignment of technical, organisational, and social contexts. Furthermore, formal
roles and relationships could be embodied in business strategy; however, human
relationships which shape and influence these relationships are dynamic and thus
cannot be confined to the boundaries of a policy or plan. The framework also
stresses planning of associations between processes, rather than the relationship
between technology and processes in the first instance, and then using the infor-
mation thus generated to integrate business processes. Thus, this framework treats
information as a passive entity in translating strategic business considerations into
action or in informing business strategy so as to ensure strategic recalibration or
re-orientation. Using information to drive alignment facilitates the creation of
shared meaning of the use of ISs and helps in shaping the context within which
alignment is sought. For example, information enables teamwork and thus aids in
developing a culture favourable to the roles and relationships advocated as being
necessary for alignment in the framework. This framework, or the theories based
on this framework, is, therefore, inadequate to meet the requirements of IS im-
plementation for asset lifecycle management.
Henderson and Venkatraman [78] provide an alternative view of IS alignment
as illustrated in Figure 3. The authors propose two important points, the distinction
of IT strategy from IS infrastructure and processes and the distinction of strategic
fit from interdomain alignment, as the key to business transformation. The model
thus takes an intentional view of organisational transformation. It draws its value
from three types of relationships: the fit which links two domains horizontally or
vertically, interdomain alignment and alignment of all domains with strategic
business considerations. It argues that business strategy consists of three key ele-
ments – the scope of the business, which relates to the services and products which
the business offers; unique competencies, or the attributes of the organisation
which provide it with a comparative advantage over competitors; and governance,
which reflects the strategic choices, such as strategic alliances and joint ventures,
to support the unique competencies and business scope.

Figure 3 Strategic alignment model [79]: business strategy (business scope,
distinctive competencies, business governance) and IT strategy (technology scope,
systemic competencies, I/T governance) at the external level; organizational
infrastructure and processes (administrative infrastructure, processes, skills)
and IT infrastructure and processes (architectures, processes, skills) at the
internal level; the domains are linked through strategic fit, cross-domain
alignment and functional integration

Henderson and Venkatraman [78] suggest that IT strategy needs to be drawn
from business strategy. In doing so, it establishes three key areas: a definition of
the scope of IT, which illustrates the range of technical infrastructure available to
the organisation; systemic competencies, which represent the distinctive
IT-related competencies which support existing strategy as well as contribute to
the creation of new strategies; and IT governance, which are the structural
choices (such as partnerships and joint ventures) to acquire IT capabilities which
contribute to systemic competencies and scope of IT in the organisation. The
third domain in the model is IT infrastructure and processes, which represent the
IT architecture, or technological configurations and information; processes, or the
activities necessary to support IT operations such as maintenance; and skills, or
the competencies required to operate and manage IT infrastructure in the organi-
sation. Similarly, the fourth domain of organisational infrastructure and processes
represents the administrative infrastructure, including the structure, roles and
reporting relationships; processes and information flows associated with the exe-
cution of key business activities; and skills, or the capabilities and competencies
required to execute the key activities which support business strategy.
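The geometry of the model, two levels (external strategy and internal infrastructure) crossed with two sides (business and IT), can be sketched as follows; the encoding and function names are mine, not the authors':

```python
# Illustrative sketch of the strategic alignment model's structure:
# domains are indexed by (level, side).
domains = {
    ("external", "business"): "business strategy",
    ("external", "it"): "IT strategy",
    ("internal", "business"): "organisational infrastructure and processes",
    ("internal", "it"): "IT infrastructure and processes",
}

def strategic_fit(side):
    """Vertical link between external strategy and internal infrastructure."""
    return (domains[("external", side)], domains[("internal", side)])

def functional_integration(level):
    """Horizontal link between the business and IT domains at one level."""
    return (domains[(level, "business")], domains[(level, "it")])

def cross_domain_alignment(start_level, start_side):
    """A cross-domain perspective traversing three of the four domains,
    e.g. business strategy -> IT strategy -> IT infrastructure and processes."""
    other_side = "it" if start_side == "business" else "business"
    other_level = "internal" if start_level == "external" else "external"
    return (domains[(start_level, start_side)],
            domains[(start_level, other_side)],
            domains[(other_level, other_side)])

print(strategic_fit("it"))
print(cross_domain_alignment("external", "business"))
```

The point of the encoding is that strategic fit and functional integration are single-step links, while cross-domain alignment composes them, which is why the model treats it as the driver of business transformation.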
The concept of alignment demonstrated by this model is dynamic and takes
into account changes in the business environment and their implications on the
strategic and organisational development [77]. The clear distinction between
business and IT domains advocated by this model underscores the need for func-
tional integration and thus calls for aligning the choices made in relation to IT
and business at strategic as well as operational levels. However, the model does
not account for social relationships which shape technology use and thus institu-
tionalise technology. Consequently, the changes in IT strategy, IT infrastructure
and organisational infrastructure are in response to changes in the business envi-
ronment. This model treats IT strategy as a controlled process undertaken by top
management and assumes that control of IT infrastructure, skills and IT man-
agement processes provides the basis for technology alignment with the organ-
isational infrastructure. Furthermore, managerial action provides for integration
of activities within and across domains, and thus the model assumes that factors
like what skills are needed, how information flows between processes and sys-
tems, and what outputs will be achieved from certain control actions can be
determined; hence the alignment process takes a linear path. This framework
suffers from the same drawbacks as Earl’s organisation-fit framework and, there-
fore, is not robust enough to address the question of alignment of ISs with stra-
tegic asset management so that the organisation is responsive to internal and
external challenges. The framework also underplays the role of information in
achieving alignment of the social, technical and organisational contexts with the
strategic business orientation. In summary, this model may be effective in ana-
lysing the impacts of IS implementation rather than facilitating asset manage-
ment maturity by enabling alignment of strategic asset management considera-
tions with technology implementation.
Information Systems Implementation for Asset Management: A Theoretical Perspective 45

8 Information Systems from an Engineering Asset Management Alignment Perspective

IS implementation and its alignment with the organisational social and cultural
environment, structure, infrastructure and strategy do not follow a mechanistic
pattern and require time to take shape and deliver expected results. Alignment is a process
which is socially and technically engendered in the organisation and, therefore,
requires a maturity of interacting actors and infrastructure to provide an appro-
priate level of alignment. Using available IS theories along with the lessons
learnt from the alignment theories discussed in previous sections, this section
attempts to develop an alternative approach to IS implementation and its align-
ment with the technical, organisational and social contexts of the organisation.
An IS-based engineering asset management alignment framework is illustrated
in Figure 4.
This framework treats alignment as a process which is technically and socially
composed and embedded in the organisation; in addition, it highlights the role of
information in shaping alignment. Proponents of contingency theory [83, 84]
suggest that the performance of an entity is contingent upon various internal and
external constraints. These theorists highlight four important points: (1) there is
no one best way to manage an organisation, (2) the subsystems of an organisation
need to be aligned with each other and with the overall organisation, (3) success-
ful organisations are able to extend this alignment to the organisational environ-
ment, and (4) organisational design and management must satisfy the nature and
needs of the task and work groups. Contingency theory stresses the multivariate
nature of organisations and, along with systems theory, assists in understanding
the interrelationships within and among subsystems of an organisation [85]. The
framework applies systems theory [86], and instead of considering an organisa-
tion’s or its constituent domains’ properties alone, it builds upon the relationships
and understanding of the domains which collectively provide for the IS alignment
within and with the organisation. This framework embodies these relationships
and applies the theory of dynamic capabilities to address the changing nature of
the asset management business environment by stressing integration, building and
reconfiguration of competencies to address the changing business environment
[87, 88].
The framework takes a resource-based view and proposes four domains: strate-
gic orientation, operational orientation, IS design and organisational design. Ana-
logous to Henderson and Venkatraman’s model, it argues that the strategic orien-
tation of the asset-managing organisation is defined through the interaction of
business scope, unique competencies and business governance choices. The opera-
tional orientation of asset management is derived from this strategic orientation.
The framework seeks to develop alignment based on goals of asset lifecycle man-
agement processes with the organisation’s overall objectives. This means that
asset lifecycle management processes conform to the strategic asset management
orientation. The asset lifecycle management domain is strategically aligned with
the organisational design domain in the sense that not only do the organisational
and social contexts conform to asset lifecycle management objectives but they also
contribute to the responsiveness of the organisation, and in so doing help asset
lifecycle management processes to adapt to changes in the internal and external
business environment.

Figure 4 Information systems alignment with engineering asset management: strategic
orientation (business scope, comparative advantage, business governance);
operational orientation (the primary asset lifecycle of identify need, plan,
acquire, operate & maintain and dispose, with renewal and
learning/optimisation/change cycles, supported by lifecycle decisions and
tradeoffs, resource management, stakeholder relationship management, lifecycle
learning management, risk management, supply & logistics management, quality
management and lifecycle accounting); organisational design (collaborative
culture and structure development, organisational infrastructure development,
formal and informal relationship development, skill and human resource
development, competency development); IS design (data acquisition and technology
support infrastructure, information exchange & integration infrastructure,
information storage infrastructure, information analysis, business needs
definition, standardisation of technology, information value & purpose); the
domains are linked through intent alignment, goals alignment, strategic fit,
functional alignment and context alignment
In this framework, the information requirements of asset lifecycle processes
drive IS design. The framework treats operational and information technologies in
the same domain as IS design. Thus, the alignment sought between operational
orientation of asset management and IS design aims at a functional integration of
asset lifecycle. To ensure information integration and quality, the IS design do-
main takes a bottom-up approach and stresses standardised data acquisition and
technology support infrastructure, which facilitates information integration and
communication and consequently allows for information storage in a way that
makes information accessible and available throughout the organisation. This
helps with information and knowledge management and functional integration.
The analysis layer refers to both the analysis to evaluate if the existing standard
of information and information systems meets the process and organisational
objectives (hence the strategic alignment between the IS design domain and stra-
tegic orientation and operational orientation domains) and to the level of decision
support which is required at various stages of an asset’s lifecycle. The quality of
the asset lifecycle management processes strongly depends upon the quality of
information, and information quality itself is a measure of how effectively the ISs
cater for the information needs of the business processes. The analysis layer,
therefore, also measures the integration between ISs and business processes.
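One way to read this measurement role is to score the information quality delivered to each lifecycle process and flag the processes where the ISs fall short. The dimensions, weights and threshold below are illustrative assumptions, not values prescribed by the framework:

```python
# Hypothetical scoring of IS/business-process integration via information
# quality. Dimensions, weights and the threshold are illustrative assumptions.
WEIGHTS = {"accuracy": 0.4, "completeness": 0.3, "timeliness": 0.3}

def information_quality(scores):
    """Weighted information-quality score in [0, 1] for one process."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

def integration_gaps(process_scores, threshold=0.8):
    """Flag lifecycle processes whose information quality falls below threshold."""
    return [p for p, s in process_scores.items()
            if information_quality(s) < threshold]

lifecycle = {
    "plan": {"accuracy": 0.9, "completeness": 0.85, "timeliness": 0.9},
    "operate & maintain": {"accuracy": 0.7, "completeness": 0.6, "timeliness": 0.8},
}
print(integration_gaps(lifecycle))  # -> ['operate & maintain']
```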
However, technologies, whether information or operational, are passive entities.
Their use and institutionalisation are not mechanistic processes and rely on the
culture, structure and human actors in an organisation. Therefore, the framework
proposes contextual alignment between IS design and organisation design
domains.
Organisational design takes time to develop, and its alignment with the IS is
also subject to the same time constraints. Therefore, the organisation design do-
main stresses the ‘development’ of a collaborative culture and structure as the
fundamental element of organisational design. This foundation provides the build-
ing block for developing an organisational infrastructure (internal structures,
policies and procedures put in place to support the strategic orientation of the
business), which shapes formal and informal relationships and drives human
resources management and skills development. Thus, organisational design pro-
vides for the development of core competencies which aid in utilising information
and operational technologies as well as executing asset management processes for
the advantage of the organisation through alignment based on organisational in-
tent (i.e. organisational vision, mission and objectives). In doing so, the social and
organisational contexts contribute to strategic orientation and are themselves
shaped in line with the strategic orientation. In this way, the organisational design
domain improves the responsiveness of the organisation, which enables the organisa-
tion to respond to changes in the business environment. At the same time, since
the organisational design domain is strategically aligned with the operational
orientation domain, it accounts for the objectives of the overall business as well
as the asset lifecycle demands and goals. It thus provides the context within
which the ISs are employed, shaped and institutionalised. The context of the or-
ganisation is subject to change due to internal and external forces; therefore, the
framework suggests context-based dynamic alignment between the IS design and
organisational design domains.
This framework treats information as the key enabler of asset management and
emphasises that IS implementation is not a managerial process or activity. In
actual fact, it is a social process which is continuously aimed at aligning and
matching IS capabilities with business objectives and requirements. The frame-
work also highlights that to achieve the desired results, it is important to account
for those organisational areas which influence technology implementation and
those which are influenced by it. This framework thus treats IS implementation as
a means to translate strategic asset management objectives into operational ac-
tions by enabling asset lifecycle processes and utilises the information generated
by the execution of these processes to inform asset management strategy for stra-
tegic reorientation and recalibration. In this way, IS implementation becomes a
generative learning process which helps in the maturity of the technical, social
and organisational context of the organisation.
9 Conclusions

IS implementation in an asset management paradigm aims to translate strategic
objectives into action, align strategic business information requirements with ISs,
provide integration of lifecycle processes and inform asset and business strategy
through value-added decision support. This paper demonstrates that IS implemen-
tation is an intricate task with a complex mix of activities. At the same time, it
acknowledges that ISs are social systems and their use is shaped and reshaped by
organisational actors who interact with technology and the context of their imple-
mentation. The framework highlights that asset management is information driven
and IS implementation not only involves understanding of the structure of the
technology but also requires an understanding of the organisational context within
which technology is to be implemented. It thus provides a holistic view of the
theoretical and practical assumptions associated with IS implementation, which
has significant implications for asset managers in terms of establishing a robust
technology support for asset lifecycle management. This framework provides
guidance on technical, organisational and social aspects associated with IS imple-
mentation and the way they interact with each other to give shape and meaning to
the use of ISs in achieving the strategic objectives of asset lifecycle management.
The framework does not treat implementation of ISs for asset management as a
one-off endorsement of technology. It presents IS implementation as a continuous
process aimed at organisational learning through alignment between the organisa-
tion’s strategy and application of ISs, guided by the value profile shaped by re-
quirements of asset management and the organisational, social and technical con-
texts of the implementation of these systems.

References

[1] Earl MJ (1989) Management strategies for information technology. Prentice-Hall, Hemel
Hempstead, UK
[2] Galliers RD (1991) Strategic information systems: myths, realities and guidelines for
successful implementation. Eur J Inf Syst 1(1):55–64
[3] Lederer AL, Sethi V (1996) Key prescriptions for strategic information systems planning.
J Manage Inf Syst 13(1):35–62
[4] Haider A, Koronios A, Quirchmayr G (2006) You cannot manage what you cannot meas-
ure: an information systems based asset management perspective. In: Mathew J, Ma L,
Tan A, Anderson D (eds) Proceedings of the inaugural world congress on engineering as-
set management, 11–14 July 2006, Gold Coast, Australia
[5] Haider A, Koronios A (2005) ICT based asset management framework. In: Proceedings of
the 8th international conference on enterprise information systems (ICEIS), Paphos, Cy-
prus, vol 3, pp. 312–322
[6] Checkland P (1981) Systems thinking, systems practice. Wiley, Chichester
[7] Walsham G (2001) Making a world of difference: IT in a global context. Wiley, Chichester
[8] Giddens A (1984) The constitution of society: outline of the theory of structure. University
of California Press, Berkeley, CA
[9] Haider A (2007) Information systems based engineering asset management evaluation:
operational interpretations. Dissertation, University of South Australia, Adelaide, Australia
[10] Haider A (2009) Value maximisation from information technology in asset management –
a cultural study. In: Proceedings of the international conference of maintenance societies
(ICOMS), 2–4 June 2009, Sydney, Australia
[11] IIMM (2006) International infrastructure management manual. Association of Local
Government Engineering NZ, National Asset Management Steering Group, New Zealand,
Thames, ISBN 0-473-10685-X
[12] Marosszeky M, Sauer C, Johnson K, Karim K, Yetton P (2000) Information technology in
the building and construction industry: the Australian experience. In: Li H, Shen Q, Scott
D, Love PED (eds) Proceedings of the INCITE 2000 conference: Implementing IT to ob-
tain a competitive advantage in the 21st century. Hong Kong Polytechnic University Press,
Hong Kong, pp. 78–92
[13] Power D (2005) Implementation and use of B2B-enabling technologies: five manufactur-
ing cases. J Manuf Technol Manage 16(5):554–572
[14] Songer AD, Young R, Davis K (2001) Social architecture for sustainable IT implementa-
tion in AEC/EPC. In: Coetzee G, Boshoff F (eds) Proceedings of IT in construction in Af-
rica, 30 May–1 June, Mpumalunga, South Africa
[15] Stewart R, Mohamed S (2002) IT/IS projects selection using multi-criteria utility theory.
Logist Inf Manage 15(4):254–270
[16] Laurindo FJB, de Carvalho MM (2005) Changing product development process through
information technology: a Brazilian case. J Manuf Technol Manage 16(3):312–327
[17] Small MH (2006) Justifying investment in advanced manufacturing technology: a portfo-
lio analysis. Ind Manage Data Syst 106(4):485–508
[18] Zipf PJ (2000) Technology-enhanced project management. J Manage Eng 16(1):34–39
[19] Weippert A, Kajewski SL, Tilley PA (2002) Internet-based information and communica-
tion systems on remote construction projects: a case study analysis. Construct Innovat
2(2):103–116
[20] Steenstrup K (2008) EAM and IT enabled assets: what is your equipment thinking about
today? In: Energy & Utilities Summit, 7–10 September 2008, JW Marriott Grande Lakes,
Orlando, FL
[21] Marsh L, Flanagan R (2000) Measuring the costs and benefits of information technology
in construction. Eng Construct Architect Manage 7(4):423–435
[22] Gindy NNZ, Cerit B, Hodgson A (2006) Technology roadmapping for the next generation
manufacturing enterprise. J Manuf Technol Manage 17(4):404–416
[23] Haider A, Koronios A (2003) Managing engineering assets: a knowledge based approach
through information quality. In: Proceedings of the 2003 international business informa-
tion management conference, Cairo, Egypt, pp. 443–452
[24] Haider A (2008) Information systems for asset lifecycle management: lessons from two
cases. In: 3rd world congress on engineering asset management, 27–30 October 2008, Bei-
jing, People’s Republic of China
[25] Haider A (2010) Governance of IT for engineering asset management. In: 14th business
transformation through innovation and knowledge management – an academic perspec-
tive, 23–24 June 2010, Istanbul, Turkey
[26] Lee I (2004) Evaluating business process-integrated information technology investment.
Bus Process Manage J 10(2):214–233
[27] O’Brien WJ (2000) Implementation issues in project web sites: a practitioner’s viewpoint.
J Manage Eng 16(3):34–39
[28] Abdel-Malek L, Das SK, Wolf C (2000) Design and implementation of flexible manufac-
turing solutions in agile enterprises. Int J Agile Manage Syst 2(3):187–195
[29] Paiva EL, Roth AV, Fensterseifer JE (2002) Focusing information in manufacturing:
a knowledge management perspective. Ind Manage Data Syst 102(7):381–389
50 A. Haider

[30] Whyte J, Bouchlaghem D (2001) IT innovation within the construction organisation. In:
Coetzee G, Boshoff F (eds) Proceedings of IT in construction in Africa, 30 May–1 June
2001, Mpumalunga, South Africa
[31] Haider A (2010) Enterprise architectures for information and operational technologies for
asset management. In: 5th world congress on engineering asset management, 25–27 Octo-
ber 2010, Brisbane, Australia
[32] Pun KF (2005) An empirical investigation of strategy determinants and choices in manu-
facturing enterprises. J Manuf Technol Manage 16(3):282–301
[33] Stephenson P, Blaza S (2001) Implementing technological change in construction orga-
nisations. In: Coetzee G, Boshoff F (eds) Proceedings of IT in construction in Africa,
30 May–1 June, Mpumalunga, South Africa
[34] Jaska PV, Hogan PT (2006) Effective management of the information technology func-
tion. Manage Res News 29(8):464–470
[35] Love PED, Irani Z, Li H, Cheng EWL, Tse RYC (2001) An empirical analysis of the
barriers to implementing e-commerce in small-medium sized construction contractors in
the state of Victoria, Australia. Construct Innovat 1(1):31–41
[36] Gordon SR, Gordon JR (2002) Organizational options for resolving the tension between
IT departments and business units in the delivery of IT services. Inf Technol People
15(4):286–305
[37] Voordijk H, Leuven AV, Laan A (2003) Enterprise resource planning in a large con-
struction firm: implementation analysis. Construct Manage Econ 21(5):511–521
[38] Gomes CF, Yasin MM, Lisboa JV (2004) A literature review of manufacturing perform-
ance measures and measurement in an organizational context: a framework and direction
for future research. J Manuf Technol Manage 15(6):511–530
[39] Nitithamyong P, Skibniewski MJ (2004) Web-based construction project management
systems: how to make them successful? Automat Construct 13(4):491–506
[40] Alshawi M, Ingirige B (2003) Web-enabled project management: an emerging paradigm
in construction. Automat Construct 12(4):349–364
[41] Bjork BC (2002) The impact of electronic document management on construction infor-
mation management. In: Proceedings of the international council for research and innova-
tion in building and construction, Council for Research and Innovation in Building and
Construction Working Group 78 conference 2002, 12–14 June 2002, Aarhus, Denmark
[42] Bijker WE, Law J (eds) (1992) Shaping technology/building society: studies in sociotech-
nical change. MIT Press, Cambridge, MA
[43] Sabherwal R (1999) The relationship between information system planning sophistication
and information system success: an empirical assessment. Decis Sci 30(1):137–167
[44] Teo TSH, Ang JSK (1999) Critical success factors in the alignment of IS plans with busi-
ness plans. Int J Inf Manage 19(2):173–185
[45] Kunnathur AS, Shi Z (2001) An investigation of the strategic information systems plan-
ning success in Chinese publicly traded firms. Int J Inf Manage 21(6):423–439
[46] Lee GG, Pai RJ (2003) Effects of organizational context and inter-group behaviour on the
success of strategic information systems planning: an empirical study. Behav Inf Technol
22(4):263–280
[47] Grover V, Segars AH (2005) An empirical evaluation of stages of strategic information
systems planning: patterns of process design and effectiveness. Inf Manage 42(5):761–779
[48] Newkirk HE, Lederer AL, Srinivasan C (2003) Strategic information systems planning:
too little or too much. J Strateg Inf Syst 12(3):201–228
[49] Teo TSH, King WR (1997) Integration between business planning and information sys-
tems planning: an evolutionary-contingency perspective. J Manage Inf Syst 14(1):185–224
[50] Allen JP (2000) Information systems as technological innovation. Inf Technol People
13(3):210–221
[51] Kwon TH, Zmud RW (1987) Unifying the fragmented models of information systems
implementation. In: Boland RJ Jr, Hirshheim RA (eds) Critical issues in information sys-
tems research. Wiley, New York
Information Systems Implementation for Asset Management: A Theoretical Perspective 51

[52] Walsham G (1993) Interpreting information systems research in organizations. Wiley,
Chichester
[53] DeLone WH, McLean ER (1992) Information systems success: the quest for the depend-
ent variable. Inf Syst Res 3(1):60–95
[54] Benjamin R, Scott Morton M (1992) Reflections on effective application of information
technology in organizations … from the perspective of management in the 90’s program.
In: Proceedings of the IFIP 12th world computer congress on personal computers and in-
telligent systems – information processing ’92, North-Holland, Amsterdam, 3:131–142
[55] Castells M (2000) The rise of the network society. The information age: economy, society
and culture, 2nd edn. Blackwell, Malden, MA
[56] Kappelman LA, McLean ER (1994) User engagement in information systems develop-
ment. In: Levine L (ed) Diffusion, transfer and implementation of information technology.
Elsevier, Amsterdam
[57] Checkland P (1981) Systems thinking, systems practice. Wiley, Chichester
[58] Walsham G (1995) Interpretive case studies in IS research: nature and method. Eur J Inf
Syst 4(2):74–83
[59] Schienstock G (1999) Information society, work and the generation of new forms of social
exclusion. (SOWING): first interim report (literature review).
http://www.uta.fi/laitokset/tyoelama/sowing/frontpage.html. Accessed 30 May 2008
[60] Marx K (1847) The poverty of philosophy.
http://www.marxists.org/archive/marx/works/1847/poverty-philosophy/ch02.htm.
Accessed 21 August 2010
[61] Bijker WE (1995) Of bicycles, bakelites, and bulbs: toward a theory of sociotechnical
change. MIT Press, Cambridge, MA
[62] Heilbroner R (1994) Do machines make history? In: Marx L, Smith MR (eds) Does tech-
nology drive history? The dilemma of technological determinism. MIT Press, Cambridge,
MA, pp. 53–65
[63] Agarwal R, Sambamurthy V (2002) Principles and models for organizing the IT function.
MIS Q Exec 1(1)
[64] Walsham G (2001) Making a world of difference: IT in a global context. Wiley, Chichester
[65] Kraft P, Truex D (1994) Postmodern management and information technology in the
modern industrial corporation. In: Baskerville R, Smithson S, Ngwenyama O, DeGross J
(eds) Proceedings of the IFIP WG8.2 working conference on information technology and
new emergent forms of organization, Ann Arbor, MI, 11–13 August 1994, North-Holland,
New York
[66] Van Der Blonk H (2000) Institutionalization and legitimation of information technologies in
local contexts. In: Proceedings of the information flows, local improvisations and work prac-
tices, International Federation of Information Processing Working Group 9.4 on social im-
plications of computers in developing countries, Cape Town, South Africa, 23–26 May 2000
[67] Dahlbom B, Mathiassen L (1993) Computers in context the philosophy and practice of
systems design, 2000 edn. Blackwell, Oxford
[68] Bostrom RP, Heinen JS (1977) IS problems and failures: a socio-technical perspective.
MIS Q 1(3):17–32
[69] Mumford E (2000) Socio-technical design: an unfulfilled promise or a future opportunity.
In: Baskerville R, Stage J, DeGross JI (eds) Organizational and social perspectives on in-
formation technology. Kluwer, Boston
[70] Ciborra C (1996) Improvisation and information technology in organizations. In: Proceed-
ings of the ICIS. Cleveland
[71] Orlikowski WJ (2000) Using technology and constituting structures: a practice lens for
studying technology in organizations. Organ Sci 11(4):404–428
[72] Porter ME (1979) How competitive forces shape strategy. Harvard Bus Rev
57(2):137–145
[73] Porter ME, Miller VE (1985) How information gives you competitive advantage. Harvard
Bus Rev 63(4):149–160
[74] Mintzberg H (1990) The design school: reconsidering the basic premises of strategic
management. Strateg Manage J 11(3):171–195
[75] Davenport TH (1998) Putting the enterprise into the enterprise system. Harvard Bus Rev
July–August, pp. 121–131
[76] Scott Morton MS (ed) (1991) The corporation of the 1990s: information technology and
organizational transformation. Oxford University Press, Oxford
[77] Tapscott D, Caston A (1993) Paradigm shift: the new promise of information technology.
McGraw-Hill, New York
[78] Henderson JC, Venkatraman N (1993) Strategic alignment: leveraging information tech-
nology for transforming organizations. IBM Syst J 32(1):4–16
[79] Henderson JC, Venkatraman N (1992) Strategic alignment: a model for organizational
transformation through information technology. In: Kochan TA, Useem M (eds) Trans-
forming organizations. Oxford University Press, Oxford
[80] Earl M (1996) Integrating IS and the organization: a framework of organizational fit. In:
Earl MJ (ed) Information management: the organizational dimension. Oxford University
Press, Oxford
[81] Robson C (2004) Real world research, 2nd edn. Blackwell, Oxford
[82] Ward J, Griffiths P (1996) Strategic planning for information systems, 2nd edn. Wiley,
London
[83] Chin WW, Marcolin BL, Newsted PR (2003) A partial least squares latent variable model-
ling approach for measuring interaction effects: results from a Monte Carlo simulation
study and an electronic-mail emotion/adoption study. Inf Syst Res 14(2):189–217
[84] Khazanchi D (2005) Information technology (IT) appropriateness: the contingency theory
of fit and IT implementation in small and medium enterprises. J Comput Inf Syst
45(3):88–95
[85] Premkumar G, King WR (1994) Organizational characteristics and information systems
planning: an empirical study. Inf Syst Res 5(2):75–109
[86] Churchman CW (1994) Management science: science of managing and managing of
science. Interfaces 24(4):99–110
[87] Zahra SA, George G (2002) The net-enabled business innovation cycle and the evolution
of dynamic capabilities. Inf Syst Res 13(2):147–150
[88] Daniel EM, Wilson HN (2003) The role of dynamic capabilities in e-business transforma-
tion. Eur J Inf Syst 12(4):282–296
[89] Chan FTS, Chan MH, Lau H, Ip RWL (2001) Investment appraisal techniques for ad-
vanced manufacturing technology (AMT): a literature review. Integr Manuf Syst
12(1):35–47
[90] Huang C, Fisher N, Spreadborough A, Suchocki M (2003) Identifying the critical factors
of IT innovation adoption and implementation within the construction industry. In: Pro-
ceedings of the 2nd international conference on construction in the 21st century (CITC-II),
Sustainability and Innovation in Management and Technology, 10–12 December 2003,
Hong Kong
[91] Thorpe D (2003) Online remote construction management trials in Queensland department
of main roads: a participant’s perspective. Construct Innovat 3(2):65–79
[92] Stewart RA, Mohamed S, Marosszeky M (2004) An empirical investigation into the link
between information technology implementation barriers and coping strategies in the Aus-
tralian construction industry. Construct Innovat 4(3):155–171
[93] Abdel-Makoud AB (2004) Manufacturing in the UK: contemporary characteristics and
performance indicators. J Manuf Technol Manage 15(2):155–171
[94] Dangayach GS, Deshmukh SG (2005) Advanced manufacturing technology implementa-
tion: evidence from Indian small and medium enterprises (SMEs). J Manuf Technol Man-
age 16(5):483–496
[95] Adam A (2002) Exploring the gender question in critical information systems. J Inf Tech-
nol 17(2):59
[96] Aladwani AM (2002) An integrated performance model of information systems projects.
J Manage Inf Syst 19:185–210
[97] Alavi M, Leidner DE (2001) Review: knowledge management and knowledge manage-
ment systems. MIS Q 25(1):107–136
[98] Alstyne MV, Brynjolfsson E (2005) Global village or cyber-balkans? Modeling and meas-
uring the integration of electronic communities. Manage Sci 51(6):851
[99] Alter S (2001) Are the fundamental concepts of information systems mostly about work
systems? Commun AIS 5(11):1–67
[100] Anandarajan M, Arinze B (1998) Matching client/server processing architectures with
information processing requirements: a contingency study. Inf Manage 34(5):265–274
[101] Andres HP, Zmud RW (2001) A contingency approach to software project coordination.
J Manage Inf Syst 18(3):41–70
[102] Argyres SN (1999) The impact of information technology on coordination: evidence from
the B-2 “stealth” bomber. Organ Sci 10(2):162–180
[103] Atkinson CJ (2000) The Soft Information Systems and Technologies Methodology (SIS-
TeM): an actor network contingency approach to integrated development. Eur J Inf Syst
9(2):104–123
[104] Bagchi S, Kanungo S, Dasgupta S (2003) Modelling use of enterprise resource planning
systems: a path analytic study. Eur J Inf Syst 12(2):142–158
[105] Bahli B, Rivard S (2003) The information technology outsourcing risk: a transaction cost
and agency theory-based perspective. J Inf Technol 18(3):211–221
[106] Barki H, Rivard S, Talbot J (2001) An integrative contingency model of software project
risk management. J Manage Inf Syst 17(4):37–69
[107] Barrett M, Scott S (2004) Electronic trading and the process of globalization in traditional
futures exchanges: a temporal perspective. Eur J Inf Syst 13(1):65–79
[108] Barry B, Crant JM (2000) Dyadic communication relationships in organizations: an attri-
bution/expectancy approach. Organ Sci 11(6):648–664
[109] Basden A (2002) The critical theory of Herman Dooyeweerd? J Inf Technol
17(4):257–269
[110] Bausch KC (2002) Roots and branches: a brief, picaresque, personal history of systems
theory. Syst Res Behav Sci 19(5):417–428
[111] Becerra-Fernandez I, Sabherwal R (2001) Organization knowledge management: a con-
tingency perspective. J Manage Inf Syst 18(1):23–55
[112] Beckman PA (2002) Concordance between task and interface rotational and translational
control improves ground vehicle performance. Hum Factors 44(4):644–653
[113] Bobbitt LM, Dabholkar PA (2001) Integrating attitudinal theories to understand and pre-
dict use of technology-based self-service: the Internet as an illustration. Int J Serv Ind
Manage 12(5):423–450
[114] Bolt MA, Killough LN, Koh HC (2001) Testing the interaction effects of task complexity
in computer training using the social cognitive model. Decis Sci 32(1):1–20
[115] Burke K, Aytes K, Chidambaram L (2001) Media effects on the development of cohesion
and process satisfaction in computer-supported workgroups: an analysis of results from
two longitudinal studies. Inf Technol People 14(2):122–141
[116] Burkhardt ME (1994) Social interaction effects following a technological change: a longi-
tudinal investigation. Acad Manage J 37:869–898
[117] Callon M (1986) The sociology of an actor-network: the case of the electric vehicle. In:
Callon M, Law J, Rip A (eds) Mapping the dynamics of science and technology. Macmil-
lan, London
[118] Cannel E, Nicholson B (2005) Small firms and offshore software outsourcing: high trans-
action costs and their mitigation. J Glob Inf Manage 13(3):33–54
[119] Chakravarthy B (1997) A new strategy framework for coping with turbulence. Sloan
Manage Rev 38(2):69–82
[120] Chan SC, Lu M (2004) Understanding internet banking adoption and use behaviour:
a Hong Kong perspective. J Glob Inf Manage 12(3):21–44
[121] Chen ANK, Edgington TM (2005) Assessing value in organizational knowledge creation:
considerations for knowledge workers. MIS Q 29(2):279–309
[122] Chen JC, Chong PP, Chen Y (2001) Decision criteria consolidation: a theoretical founda-
tion of Pareto principle to Porter’s competitive forces. J Organ Comput Electron Com-
merce 11(1):1–14
[123] Chen Y, Chong PP, Chen JC (2000) Small business management: an IT-based approach.
J Comput Inf Syst 41(2):40–47
[124] Chin WW, Marcolin BL, Newsted PR (2003) A partial least squares latent variable model-
ling approach for measuring interaction effects: results from a Monte Carlo simulation
study and an electronic-mail emotion/adoption study. Inf Syst Res 14(2):189–217
[125] Chung WY, Fisher CW, Wang RY (2005) Redefining the scope and focus of information
quality work: a general systems theory perspective. In: Wang RY, Pierce WM, Madnick
SE, Fisher CW (eds) Advances in management information systems. ME Sharpe, Armonk,
NY
[126] Churchman CW (1994) Management science: science of managing and managing of
science. Interfaces 24(4):99–110
[127] Clemons EK, Hitt LM (2004) Poaching and the misappropriation of information: transac-
tion risks of information exchange. J Manage Inf Syst 21(2):87–107
[128] Cohen W, Levinthal D (1990) Absorptive capacity: a new perspective on learning and
innovation. Adm Sci Q 35(1):128–152
[129] Compeau D, Higgins CA, Huff S (1999) Social cognitive theory and individual reactions
to computing technology: a longitudinal study. MIS Q 23(2):145–159
[130] Cooper RB, Wolfe RA (2005) Information processing model of information technology
adaptation: an intra-organizational diffusion perspective. Database Adv Inf Syst
36(1):30–48
[131] Daniel EM, Wilson HN (2003) The role of dynamic capabilities in e-business transforma-
tion. Eur J Inf Syst 12(4):282–296
[132] Dennis AR, Garfield MJ (2003) The adoption and use of GSS in project teams: toward
more participative processes and outcomes. MIS Q 27(2):289
[133] Dennis AR, Wixom BH, Vandenberg RJ (2001) Understanding fit and appropriation
effects in group support systems via meta-analysis. MIS Q 25(2):167–193
[134] Dunn C, Grabski S (2001) An investigation of localization as an element of cognitive fit in
accounting model representations. Decis Sci 32(1):55–94
[135] Feeley TH, Barnett GA (1996) Predicting employee turnover from communication net-
works. Hum Commun Res 23(1):370–387
[136] Garicano L, Kaplan SN (2001) The effects of business-to-business E-commerce on trans-
action costs. J Ind Econ 49(4):463–485
[137] Garrity EJ (2002) Synthesizing user centred and designer centred is development ap-
proaches using general systems theory. Inf Syst Frontiers 3(1):107–121
[138] Gattiker TF, Goodhue DL (2005) What happens after ERP implementation: understanding
the impact of inter-dependence and differentiation on plant-level outcomes. MIS Q
29(3):559–585
[139] Gebauer J, Shaw MJ (2004) Success factors and impacts of mobile business applications:
results from a mobile e-procurement study. Int J Electron Commerce 8(3):19–41
[140] Ginzberg MJ (1980) An organizational contingencies view of accounting and information
systems implementation. Account Organ Soc 5(4):369–382
[141] Goodhue DL (1995) Understanding user evaluations of information systems. Manage Sci
41(12):1827–1844
[142] Goodhue DL, Thompson RL (1995) Task-technology fit and individual performance. MIS
Q 19(2):213–236
[143] Gregoire YM, Wade JH, Antia K (2001) Resource redeployment in an ecommerce envi-
ronment: a resource-based view. In: Proceedings of the American Marketing Association
conference, Long Beach, CA
[144] Griffith TL, Sawyer JE, Neale MA (2003) Virtualness and knowledge in teams: managing
the love triangle of organizations, individuals, and information technology. MIS Q
27(2):265–287
[145] Hansen T, Jensen JM, Solgaard HS (2004) Predicting online grocery buying intention: a
comparison of the theory of reasoned action and the theory of planned behavior. Int J Inf
Manage 24(6):539–550
[146] Hasan B, Ali JMH (2004) An empirical examination of a model of computer learning
performance. J Comput Inf Syst 44(4):27–34
[147] Heng MSH, de Moor A (2003) From Habermas’s communicative theory to practice on the
internet. Inf Syst J 13(4):331–352
[148] Henwood F, Hart A (2003) Articulating gender in the context of ICTs in health care: the
case of electronic patient records in the maternity services. Crit Soc Policy 23(2):249–267
[149] Hidding G (2001) Sustaining strategic IT advantage in the information age: how strategy
paradigms differ by speed. Strateg Inf Syst 10(3):201–222
[150] Hinds PJ, Bailey DE (2003) Out of sight, out of sync: understanding conflict in distributed
teams. Organ Sci 14(6):615–632
[151] Hoxmeier JA, Nie W, Purvis GT (2000) The impact of gender and experience on user
confidence in electronic mail. J End User Comput 12(4):11–20
[152] Humphreys PK, Lai MK, Sculli D (2001) An inter-organizational information system for
supply chain management. Int J Prod Econ 70(3):245–255
[153] Tanriverdi H (2005) Information technology relatedness, knowledge management capability,
and performance of multibusiness firms. MIS Q 29(2):311–335
[154] Iskandar BY, Kurokawa S, LeBlanc LJ (2001) Adoption of electronic data interchange:
the role of buyer-supplier relationships. IEEE Trans Eng Manage 48(4):505–517
[155] Jae-Nam L, Young-Gul K (2005) Understanding outsourcing partnership: a comparison of
three theoretical perspectives. IEEE Trans Eng Manage 52(1):43–58
[156] Jagodzinski P, Reid FJM, Culverhouse P, Parsons R, Phillips I (2000) A study of electron-
ics engineering design teams. Des Stud 21(4):375–402
[157] Janson M, Cecez-Kecmanovic D (2005) Making sense of e-commerce as social action. Inf
Technol People 14(4):311–343
[158] Jarvenpaa SL (1988) The importance of laboratory experimentation in information sys-
tems research. Commun ACM 31(12):1502–1504
[159] Jasperson J, Carter PE, Zmud RW (2005) A comprehensive conceptualization of post-
adoptive behaviors associated with information technology enabled work systems. MIS Q
29(3):525–557
[160] Jones M, Karsten H (2003) Review: structuration theory and information systems re-
search. WP 11/03. Judge Institute Working Papers, University of Cambridge.
http://www.jbs.cam.ac.uk/research/working_papers/2003/wp0311.pdf.
Accessed 3 December 2009
[161] Kauffman RJ, Mohtadi H (2004) Proprietary and open systems adoption in E-procure-
ment: a risk-augmented transaction cost perspective. J Manage Inf Syst 21(1):137–166
[162] Keil M, Smith HJ, Pawlowski S, Jin L (2004) Why didn’t somebody tell me? Climate,
information asymmetry, and bad news about troubled projects. Database Adv Inf Syst
35(2):65–84
[163] Kern T, Kreijger J, Willcocks L (2002) Exploring ASP as sourcing strategy: theoretical
perspectives, propositions for practice. J Strateg Inf Syst 11(2):153–177
[164] Khazanchi D (2005) Information technology (IT) appropriateness: the contingency theory
of fit and IT implementation in small and medium enterprises. J Comput Inf Syst
45(3):88–95
[165] Kim KK, Michelman JE (1990) An examination of factors for the strategic use of informa-
tion systems in the health care industry. MIS Q 14(2):201–215
[166] Kling R, McKim G, King A (2003) A bit more to it: scholarly communication forums as
socio-technical interaction networks. J Am Soc Inf Sci Technol 54(1):47–67
[167] Ko D, Kirsch LJ, King WR (2005) Antecedents of knowledge transfer from consultants to
clients in enterprise system implementations. MIS Q 29(1):59–85
[168] Kohli R, Kettinger WJ (2004) Informating the clan: controlling physicians’ costs and
outcomes. MIS Q 28(3):363–394
[169] Kuo FY, Chu TH, Hsu MH, Hsieh HS (2004) An investigation of effort-accuracy trade-off
and the impact of self-efficacy on Web searching behaviors. Decis Support Syst
37(3):331–342
[170] Lamb R, Kling R (2003) Reconceptualizing users as social actors in information systems
research. MIS Q 27(2):197–235
[171] Larsen T, Levine L, DeGross JI (eds) (1999) Information systems: current issues and
future changes. IFIP, Laxenburg, Austria
[172] Ledington PWJ, Ledington J (1999) The problem of comparison in soft systems method-
ology. Syst Res Behav Sci 16(4):329–339
[173] Leonard LNK, Cronan TP, Kreie J (2004) What influences IT ethical behavior intentions-
planned behavior, reasoned action, perceived importance, or individual characteristics? Inf
Manage 42(1):143–158
[174] Liaw SS, Chang WC, Hung WH, Huang HM (2006) Attitudes toward search engines as a
learning assisted tool: approach of Liaw and Huang’s research model. Comput Hum Be-
hav 22(2):177–190
[175] Lim K, Benbasat I (2000) The effect of multimedia on perceived equivocality and per-
ceived usefulness of information systems. MIS Q 24(3):449–471
[176] Loch CH, Huberman BA (1999) A punctuated equilibrium model of technology diffusion.
Manage Sci 45(2):160–177
[177] Madey G, Freeh V, Tynan R (2002) The open source software development phenomenon:
an analysis based on social network theory. In: Proceedings of Americas Conference on
Information Systems (AMCIS2002), Dallas, TX, pp. 1806–1813
[178] Mahaney RC, Lederer AL (2003) Information systems project management: an agency
theory interpretation. J Syst Softw 68(1):1–9
[179] Mahoney LS, Roush PB, Bandy D (2003) An investigation of the effects of decisional
guidance and cognitive ability on decision-making involving uncertainty data. Inf Organ
13(2):85–110
[180] Majchrzak A, Malhotra A, John R (2005) Perceived individual collaboration know-how
development through information technology-enabled contextualization: evidence from
distributed teams. Inf Syst Res 16(1):9–27
[181] Malhotra A, Gosain S, El Sawy OA (2005) Absorptive capacity configurations in supply
chains: gearing for partner-enabled market knowledge creation. MIS Q 29(1):145–187
[182] Markus ML, Majchrzak A, Gasser L (2002) A design theory for systems that support
emergent knowledge processes. MIS Q 26(3):179–212
[183] Massey AP, Montoya-Weiss MM (2006) Unraveling the temporal fabric of knowledge
conversion: a model of media selection and use. MIS Q 30(1):99–114
[184] McMaster TE, Mumford EB, Swanson EB, Warboys B, Wastell D (eds) (1997) Facilitat-
ing technology transfer through partnership: learning from practice and research. Chap-
man & Hall, London
[185] Melville N, Kraemer KL, Gurbaxani V (2004) Information technology and organizational
performance: an integrative model of IT business value. MIS Q 28(2):283–322
[186] Mirchandani DA, Lederer AL (2004) IS planning autonomy in US subsidiaries of multina-
tional firms. Inf Manage 41(8):1021–1036
[187] Mora M, Gelman O, Cervantes F, Mejia M, Weitzenfeld A (2003) A systemic approach
for the formalization of the information systems concept: why information systems are
systems? In: Cano JJ (ed) Critical reflections on information systems: a systemic ap-
proach. Idea Group, Hershey, PA
[188] Newman M, Robey D (1992) A social process model of user-analyst relationships. MIS Q
16(2):249–266
[189] Orlikowski WJ (2000) Using technology and constituting structures: a practice lens for
studying technology in organizations. Organ Sci 11(4):404–428
[190] Orlikowski WJ, Barley SR (2001) Technology and institutions: what can research on
information technology and research on organizations learn from each other? MIS Q
25(2):245–265
[191] Orlikowski WJ, Walsham G, Jones M, DeGross JI (eds) (1996) Information technology
and changes in organizational work, Chapman & Hall, London
[192] Palvia SC, Sharma RS, Conrath DW (2001) A socio-technical framework for quality
assessment of computer information systems. Ind Manage Data Syst 101(5–6):237–251
[193] Pawlowski SD, Robey D (2004) Bridging user organizations: knowledge brokering and
the work of information technology professionals. MIS Q 28(4):645–672
[194] Pollock TG, Whitbred RC, Contractor N (2000) Social information processing and job
characteristics: a simultaneous test of two theories with implications for job satisfaction.
Hum Commun Res 26(2):292–330
[195] Porra J, Hirschheim R, Parks MS (2005) The history of Texaco’s corporate information
technology function: a general systems theoretical interpretation. MIS Q 29(4):721–746
[196] Porter ME (2001) Strategy and the internet. Harvard Bus Rev 79(3):63–78
[197] Pozzebon M, Pinsonneault A (2005) Global-local negotiations for implementing configur-
able packages: the power of initial organizational decisions. J Strateg Inf Syst
14(2):121–145
[198] Premkumar G, Ramamurthy K, Saunders CS (2005) Information processing view of
organizations: an exploratory examination of fit in the context of interorganizational rela-
tionships. J Manage Inf Syst 22(1):257–294
[199] Qu Z, Brocklehurst M (2003) What will it take for china to become a competitive force in
offshore outsourcing? An analysis of the role of transaction costs in supplier selection.
J Inf Technol 18(1):53–67
[200] Rose J (2002) Interaction, transformation and information systems development – an
extended application of soft systems methodology. Inf Technol People 15(3):242–268
[201] Ryan SD, Harrison DA, Schkade LL (2002) Information-technology investment decisions:
when do costs and benefits in the social subsystem matter? J Manage Inf Syst
19(2):85–127
[202] Sabherwal R, Hirschheim R, Goles T (2001) The dynamics of alignment: insights from a
punctuated equilibrium model. Organ Sci 12(2):179–197
[203] Sahay S (1997) Implementation of information technology: a time-space perspective.
Organ Stud 18(2):229–260
[204] Sakaguchi T, Nicovich SG, Dibrell CC (2004) Empirical evaluation of an integrated
supply chain model for small and medium sized firms. Inf Resour Manage J 17(3):1–9
[205] Sambamurthy V, Bharadwaj A, Grover V (2003) Shaping firm agility through digital
options: reconceptualizing the role of it in contemporary firms. MIS Q 27(2):237–263
[206] Santhanam R, Hartono E (2003) Issues in linking information technology capability to
firm performance. MIS Q 27(1):125–153
[207] Schilling MA, Vidal P, Ployhart RE, Marangoni A (2003) Learning by doing something
else: variation, relatedness, and the learning curve. Manage Sci 49(1):39–56
[208] Scott J (2000) Social network analysis: a handbook, 2nd edn. Sage, London
[209] Scott SV, Wagner EL (2003) Networks, negotiations and new times: the implementation
of enterprise resource planning into an academic administration. Inf Organ 13(4):285–313
[210] Shaft TM, Vessey I (2006) The role of cognitive fit in the relationship between software
comprehension and modification. MIS Q 30(1):29–55
[211] Street CT, Meister DB (2004) Small business growth and internal transparency: the role of
information systems. MIS Q 28(3):473–506
[212] Sudweeks F, McLaughlin ML, Rafaeli S (eds) (1998) Network and netplay. MIT Press,
Cambridge, MA
[213] Sutcliffe AG (2000) Requirements analysis for socio-technical system design. Inf Syst
25(3):213–233
[214] Teo TSH, Yu Y (2005) Online buying behavior: a transaction cost economics perspective.
Omega 33(5):451–465
[215] Venkatesh V, Morris MG, Davis GB, Davis FD (2003) User acceptance of information
technology: toward a unified view. MIS Q 27(3):425–478
[216] Vessey I (1991) Cognitive fit: a theory-based analysis of the graphs versus tables litera-
ture. Decis Sci 22(2):219–240
[217] Vessey I (2006) The theory of cognitive fit: one aspect of a general theory of problem
solving? In: Zhang P, Galletta D (eds) Human-computer interaction and management in-
formation systems: foundations. Advances in Management Information Systems Series.
ME Sharpe, Armonk, NY
[218] Vessey I, Glass RL (1994) Applications-based methodologies. Inf Syst Manage
11(4):53–57
[219] Wade M, Hulland J (2004) The resource-based view and information systems research:
review, extension and suggestions for future research. MIS Q 28(1):107–138
[220] Walsham G, Sahay S (1999) GIS for district-level administration in India: problems and
opportunities. MIS Q 23(1):39–65
[221] Walsham G (2002) Cross-cultural software production and use: a structurational analysis.
MIS Q 26(4):359–380
[222] Walther JB (1995) Relational aspects of computer-mediated communication. Organ Sci
6(2):186–203
[223] Whitworth B, De Moor A (2003) Legitimate by design: towards trusted socio-technical
systems. Behav Inf Technol 22(1):31–51
[224] Ying-Pin Y (2005) Identification of factors affecting continuity of cooperative electronic
supply chain relationships: empirical case of the Taiwanese motor industry. Supply Chain
Manage Int J 10(4):327–335
[225] Yoh E, Damhorst ML, Sapp S, Laczniak R (2003) Consumer adoption of the internet: the
case of apparel shopping. Psychol Market 20(12):1095–1118
[226] Zacharia ZG, Mentzer JT (2004) Logistics salience in a changing environment. J Bus
Logist 25(1):187–210
[227] Zaheer A, Dirks K (1999) Research on strategic information technology: a resource-based perspective. In: Venkatraman N, Henderson JC (eds) Strategic management and information technology. JAI, Greenwich, CT
[228] Zmud RW (1988) Building relationships throughout the corporate entity. In: Elam J,
Ginzberg M, Keen P, Zmud R (eds) Transforming the IS organization: the mission, the
framework, the transition. ICIT Press, Washington, DC
Information Systems Implementation for Asset Management: A Theoretical Perspective

Appendix 1 Summary of Literature Relating to Barriers to Implementation of Information Systems

[21] Study of drivers and barriers of technology adoption among different industry sectors, primarily manufacturing
  Operational level: variety of disparate IT/OT platforms; ineffective application integration and information interoperability; ignorance of importance of data quality
  Planning/management level: ad hoc planning leading to improvised IT solutions; employee resistance to change; inability to justify investments in IT adoption
  Strategic level: technological conservatism; short-term business relationships hampering maturity of technology

[12] Study in Australian construction industry identifying levels of IT implementation and risk factors
  Operational level: fragmented approach to technology implementation
  Planning/management level: low level of trust among business partners
  Strategic level: narrow scope and limited vision of strategic use of IT; IT investment decisions driven by cost considerations

[27] Study of issues relating to e-commerce technology implementation in engineering enterprises
  Operational level: lack of fit of technology with the business processes; information access and usage restrictions; ill-defined information communication and exchange structure
  Planning/management level: lack of job redesign as a result of technology adoption; expectations from technology outweighing technical capability and maturity of the organisation
  Strategic level: legal and cost barriers

[18] Study of project-based e-commerce technologies in an engineering organisation
  Operational level: lack of up-skilling and training on new technology
  Planning/management level: inability to allocate financial and non-financial resources to support technology implementation; evaluation of effectiveness of IT solutions
  Strategic level: lack of technology acceptance and change management; lack of management commitment; lack of technology need assessment

[28] Study of technology implementation in manufacturing organisations
  Operational level: inability to maintain quality of information; skills and people attitude towards technology; technology acceptance and change; lack of technology integration
  Planning/management level: mismatch of technical solution with organisational infrastructure; lack of proper process control; lack of involvement of various organisational levels in technology adoption process
  Strategic level: top management not convinced of economic benefits and likelihood that these will be realised; inability to assess future requirements and information needs

[14] Study of individuals from 34 engineering organisations in USA focusing on social barriers to technology implementation, for technologies relating to 3D design and simulation, data warehouse, engineering applications and information management
  Operational level: incompatibility of OT; lack of IT/OT integration; lack of supportive organisational culture impeding employees from sharing knowledge; lack of employee motivation to up-skill
  Planning/management level: lack of awareness of the importance of information management; non-cooperative corporate culture
  Strategic level: high costs of implementation; invisibility of value from IT investment

[30] Study of issues in virtual reality application implementation among design managers in Africa
  Operational level: lack of data standards and systems support; slowness of technology; unexpected technical issues and problems; differences in actual performance and capabilities offered by off-the-shelf applications
  Planning/management level: lack of resources to support technology implementation; inability to coordinate technical and business staff; lack of coordination between in-house developers and solution providers
  Strategic level: lack of wider organisational representation in decision making for investment in technology

[33] Study focusing on organisational change aimed at successful IT implementation
  Operational level: lack of IT and OT compatibility within organisation to support cross-organisation functionality; employee resistance to change; lack of requisite skill base; lack of employee motivation to learn new technologies
  Planning/management level: lack of user involvement in technology adoption process; middle management's resistance to adopting new technology owing to uncertainties regarding output delivery; lack of organisational fit with technology
  Strategic level: lack of planning and communication of IT investment rationale to all levels in organisation; lack of strategic alignment of technology; high costs of IT investment and support

[35] Study of issues in successful implementation of e-commerce technologies in Australian engineering organisations
  Operational level: lack of appropriate IT infrastructure to enable business processes; information security issues; lack of awareness of information quality; lack of skill base and high turnover of employees; employee resistance to changing work practices
  Planning/management level: lack of information exchange between sites; inability or difficulty measuring benefits of IT investments; cost of IT maintenance and training
  Strategic level: management's expectations of achieving benefits in the short term; high indirect or hidden costs of IT investment; lack of organisational integration

[89] Study aimed at providing guidance for manufacturing companies preparing to invest in advanced manufacturing technology
  Operational level: incompatibility with existing technologies; lack of research and development into what technology suits the business; insufficient level of confidence in certain technologies
  Planning/management level: inappropriate IT evaluation techniques; high attention paid to technical development but not enough to adjustments needed to accommodate technology; inability to measure soft benefits from IT investments
  Strategic level: IT investment policies primarily driven by financial concerns; lack of awareness of strategic role of technology by management; inconsistent nature of corporate IT/OT governance

[15] Study of barriers to IT implementation in engineering organisations in developing countries
  Operational level: lack of quality IT infrastructure; lack of system compatibility; lack of information interoperability; unavailability of skill base
  Planning/management level: lack of awareness of multidisciplinary nature of IT; lack of support from middle managers; high staff workload
  Strategic level: industrial fragmentation; high cost of IT investments; decreased profit margins

[19] Study identifying IT implementation success factors
  Operational level: compatibility of technologies; information accessibility and reliability; quality and accuracy of information and data input
  Planning/management level: lack of user involvement in IT adoption choices; lack of training and technical support
  Strategic level: narrow focus of management in making choices about technology investment

[41] Study of user attitudes to electronic data management systems
  Operational level: slow processing speed; lack of data and data communication standards; employee resistance to change; varying user attitudes towards technology adoption
  Planning/management level: lack of resources for technology support and optimal utilisation
  Strategic level: organisational functional silos driving technology adoption strategies

[29] Study of importance of information to knowledge management in manufacturing organisations
  Operational level: lack of access to information; information accuracy and timeliness of information; task-technology mismatch
  Planning/management level: mismatch between information needs of organisation and information systems; lack of trust among business partners to share data
  Strategic level: inability of top management to view information as an asset

[36] Study of Dutch and US-based manufacturing organisations' IT management and structure
  Operational level: lack of requisite hardware and software infrastructure
  Planning/management level: lack of IT coordination and control; non-supportive organisational culture and structure
  Strategic level: high degree of IT centralisation and business strategy, structure and scope; IT expertise rather than business need driving IT investment decisions

[40] Study of benefits and problems of Web-enabled IT applications in engineering organisations
  Operational level: IT incompatibility; lack of information security infrastructure, skill base and competence to operate technology; inefficient information exchange and communication speed
  Planning/management level: lack of IT support for decision making and resource allocation; lack of coordination among project participants
  Strategic level: lack of collaboration among business partners; technology not contributing to organisational responsiveness to changing business needs

[37] Study of success and failure of ERP in Dutch engineering organisations
  Operational level: lack of fit between IT investments and IT infrastructure maturity
  Planning/management level: inability to match technology implementation methods and change management process
  Strategic level: lack of fit between business strategy and IT

[90] Study of essential criteria for IT adoption in engineering enterprises
  Operational level: individuals' perception of technology; lack of IT/OT compatibility; inability to keep up with changes in technology
  Planning/management level: low degree of innovativeness in the organisation; hierarchical organisational structure; organisational culture not conducive to IT
  Strategic level: lack of responsiveness to changes in competitive environment

[91] Study of IT implementation issues of online construction management
  Operational level: unreliable technology; slow speed of operation; user reluctance to adapt to technology; lacking information security and skill base
  Planning/management level: technology not mature enough to handle information needs of organisation; benefits of IT utilisation not fully perceived; lack of commitment from technology stakeholders to make it work effectively
  Strategic level: high costs of IT investments

[39] Study of Web-based project management services in engineering organisations
  Operational level: lack of information interoperability; lack of requisite features of technology; employee resistance to change
  Planning/management level: lack of information ownership; lack of accountability
  Strategic level: inability to quantify IT investment costs and benefits

[92] Study of barriers to IT implementation at industrial, organisational and project levels in construction industry
  Operational level: lacking security and privacy; poor information interoperability; employee resistance to change; lack of skills
  Planning/management level: low levels of awareness of IT benefits; lack of creative culture; inability to measure soft and hard benefits of IT investments
  Strategic level: lack of strategic focus of IT investments; technological conservatism; limited financial resources available for IT

[93] Study of relationship between shop floor technologies and organisational and environmental factors in manufacturing organisation in UK
  Operational level: inability to integrate IT and OT; lack of user involvement in technology implementation process; lack of skills to operate technology; inadequate training
  Planning/management level: non-availability of feedback on technology use and its impact on different business areas
  Strategic level: lack of information on competitive environment

[38] Study of performance measurement literature in manufacturing organisations from 1988 to 2000
  Operational level: short-term focus on process automation; inability to appreciate multidimensional nature of technology implementation
  Planning/management level: inability to take into account financial and non-financial benefits of IT investments in performance evaluation methods; inability to effect change management to adapt to technology; lack of pre- and post-implementation evaluation of IT
  Strategic level: lack of IT implementation as a means of business strategy translation; lack of matching organisational objectives, customer needs and organisational success factors with IT investments

[26] Study of a business process integrated IT evaluation methodology which integrates business strategy, business process design and supporting IT investment
  Operational level: lack of fit between IT and business processes
  Planning/management level: inability to redesign business processes to adapt to new technology; inability to properly measure process requirements and manage IT configuration
  Strategic level: lack of strategic analysis of impact of IT investments

[32] Study of Shanghai- and Hong Kong-based manufacturing organisations to identify and prioritise the strategy determinants for manufacturing enterprises
  Operational level: lack of research and development capabilities on technology investments; lack of employee skills and competencies
  Planning/management level: lack of fit of IT infrastructure with business objectives
  Strategic level: inability of technology to contribute to horizontal/vertical integration

[13] Study of manufacturing organisations to determine extent to which long-established technologies (such as electronic data interchange) have been applied across supply chains, factors influencing implementation, and future technology trends
  Operational level: technology not properly mapped to process needs
  Planning/management level: lack of understanding of impact of IT; lack of intra-organisational collaboration; inability of management to identify and manage IT risks before they become issues
  Strategic level: lack of top management commitment to institutionalise technology

[94] Study of advanced manufacturing technologies in Indian manufacturing organisations
  Operational level: lack of proper requirement analysis and conceptual design of investments in IT; inadequate training
  Planning/management level: lack of pre-/post-implementation evaluation of IT investments; inability to assess existing technological base to match investments in IT
  Strategic level: inability to view IT investments as source of strategic benefits, such as improved quality, greater flexibility and cost reduction

[16] Study of manufacturing firms aiming to link enhanced performance of product development processes with the increasing use of IT applications
  Operational level: IT applications not on par with user demands; lack of application integration; lack of information sharing; non-availability of requisite technical support
  Planning/management level: lack of fit between organisational infrastructure, processes and technology
  Strategic level: inability to assess impact of IT on strategic orientation; non-availability of an IT strategy

[17] Study aimed at justification of investments in advanced manufacturing technology at manufacturing plants in USA
  Operational level: lack of consideration of organisational changes necessitated by technology implementation
  Planning/management level: lack of functional integration
  Strategic level: inability to evaluate technology before implementation; inability of management to adopt an approach to IT implementation which accounts for operational and strategic value of IT

[34] Study aiming at value attributes related to business knowledge and competence of IT personnel within manufacturing organisations
  Operational level: ineffective operational support to back IS implementation; passive IT staff; lack of requisite IT skill base
  Planning/management level: lack of quality-conscious IT culture; lack of appropriate IT evaluation techniques
  Strategic level: lack of organisational responsiveness to make choices as to when and how to migrate to a new technology

[22] Study of an integrated technology road-mapping methodology for manufacturing organisations which enables management to define its technology requirements and to create a balanced technology project portfolio
  Operational level: lack of consensus on technology adoption between different functions
  Planning/management level: lack of integrated approach to IT/OT technology management; inability to identify gaps in technological platforms, prioritise technical issues, create action plans and communicate technology needs across organisation
  Strategic level: lack of evaluation methodologies for technology acquisition projects which incorporate organisational, financial and social factors; inability of IT to provide decision support for business responsiveness and competitiveness
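The three-level classification used throughout this appendix (operational, planning/management, strategic) lends itself to a machine-readable form for simple cross-study analysis. The sketch below is illustrative only: the record structure and field names are our own, and the two sample rows paraphrase entries from the table above rather than reproduce the surveyed studies.

```python
# Illustrative encoding of the barrier taxonomy (a sketch, not part of the
# surveyed studies): each record keeps the study scope, the barriers reported
# at each organisational level, and the literature reference number.
BARRIER_STUDIES = [
    {
        "scope": "Drivers and barriers of technology adoption across sectors",
        "reference": 21,
        "barriers": {
            "operational": ["disparate IT/OT platforms",
                            "ineffective application integration",
                            "ignorance of data quality"],
            "planning": ["ad hoc planning", "employee resistance to change",
                         "inability to justify IT investments"],
            "strategic": ["technological conservatism",
                          "short-term business relationships"],
        },
    },
    {
        "scope": "Levels of IT implementation in Australian construction",
        "reference": 12,
        "barriers": {
            "operational": ["fragmented approach to technology implementation"],
            "planning": ["low trust among business partners"],
            "strategic": ["narrow vision of strategic IT use",
                          "cost-driven IT investment decisions"],
        },
    },
]

def barriers_at(level, studies=BARRIER_STUDIES):
    """Collect all barriers reported at one organisational level."""
    return [b for s in studies for b in s["barriers"][level]]

print(len(barriers_at("strategic")))  # 4 strategic-level barriers in this sample
```

Encoding the full table this way would allow, for example, counting how often cost-related barriers recur at the strategic level across all of the studies summarised above.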
Appendix 2 Summary of Literature Relating to Different Theoretical Perspectives on the Implementation of Information Systems

Actor Network Theory
  Description: Emphasises the importance of actors (including organisations, people and objects such as hardware and software) to a social network. Order in organisations is maintained through the smooth running and interaction of these actors.
  Focus: heterogeneous network of social and technical actors
  References from IS literature: [117, 171, 184, 191, 209, 220]

Adaptive Structuration Theory
  Description: Based on Giddens' [8] structuration theory, it describes the production and reproduction of social systems through members' use of rules and resources in interaction.
  Focus: structure of IT, organisational environment and tasks aimed at efficiency
  References from IS literature: [107, 108, 144, 151, 152, 221]

Agency Theory
  Description: Study of ubiquitous agency and principal relationships, in which the principal delegates work to an agent. Agency theory addresses two issues which arise out of such a relationship: firstly, the conflict between the aims of the principal and the agent and, secondly, the inability of the principal to verify the behaviour of the agent.
  Focus: efficiency through alignment of interests, risk sharing and contracting
  References from IS literature: [105, 121, 159, 162, 168, 178, 186]

Absorptive Capacity
  Description: Emphasises establishment by organisations of internal R&D capacities which aid IS development in line with existing familiarity with technology and through evaluation and incorporation of externally generated technical knowledge.
  Focus: capabilities through amount of knowledge absorption
  References from IS literature: [96, 128, 167, 181, 193, 207]

Cognitive Fit
  Description: Developed by Vessey [218], it proposes that there is a link between information presentation and the tasks enabled by the information. This relationship defines task performance for individual users.
  Focus: problem resolution; process enhancement; task performance
  References from IS literature: [112, 134, 179, 210, 216, 217, 218]

Critical Social Theory
  Description: Suggests that social reality has historical underpinnings and is constituted and reconstituted by people. Even though people or organisations can mindfully make an effort to alter their social and economic conditions, their ability to do so is hampered by the dominant social, cultural and political structures. It focuses on the conflicts and contradictions in the social environment and seeks to be a source of emancipation to alleviate dissonance.
  Focus: learning by doing; social emancipation
  References from IS literature: [95, 98, 109, 132, 147, 148, 180]

Contingency Theory
  Description: Optimal organisational performance is contingent upon various internal and external constraints. Important postulates of this theory are:
    a. there is no one best way to manage an organisation;
    b. there must be 'fit' between an organisation and its subsystems;
    c. successful organisations extend this fit to the organisational environment;
    d. organisational design and management must satisfy the nature of tasks and work groups.
  Focus: organisational efficiency
  References from IS literature: [106, 111, 124, 140, 164, 228]

Dynamic Capabilities
  Description: Stresses integration, building and reconfiguration of organisational competencies (external as well as internal) to address a changing business environment.
  Focus: competitiveness
  References from IS literature: [131, 205, 219]

Information Processing
  Description: Suggests that learning should be approached through use of memory. It is based on two ideas proposed by Miller (1956). The first is the concept of 'chunking and the limited capacity', which posits that short-term memory can hold 5 to 9 chunks of meaningful information. The second is that information processing systems mimic human capabilities of processing information.
  Focus: learning by doing; knowledge reuse
  References from IS literature: [100, 101, 102, 115, 130, 138, 199]

Knowledge-based Theory of the Firm
  Description: Treats knowledge as the most strategically important resource of an organisation, due mainly to the social complexity and difficulty of imitation of knowledge-based resources. Organisational knowledge and competencies are therefore chief determinants of enhanced organisational performance and sustained competitive advantage.
  Focus: core competencies; sustained competitive advantage
  References from IS literature: [97, 153, 183]

Punctuated Equilibrium
  Description: In terms of organisational behaviour, this theory comprises three elements: deep structures, equilibrium periods and revolutionary periods. Deep structures are the sets of basic choices comprising a system, i.e. the fundamental parts into which its units are organised and the fundamental activity patterns maintaining the existence of the system. Equilibrium periods are the maintenance of organisational structure and activity patterns, with small-scale incremental changes made to the system for it to adapt to a changing environment without affecting the deep structures. Revolutionary periods occur when deep structures are changed, leading to a disorderly state, until choices are made to enact new structures for the system.
  Focus: strategic change
  References from IS literature: [158, 176, 188, 195, 202, 211]

Resource-based View
  Description: Business organisations possess resources which enable them to gain competitive advantage. Scarce resources lead an organisation to sustainable competitive advantage provided the organisation is able to protect against resource imitation, transfer or substitution.
  Focus: competitive advantage
  References from IS literature: [143, 149, 185, 206, 227]

Resource Dependency
  Description: Organisations should alter their behaviour and structures to acquire and maintain required resources. This includes modifying their dependent relationships to assume a status of power, that is, by minimising their dependence on other organisations or by increasing the dependence of other organisations on them.
  Focus: organisational dominance
  References from IS literature: [150, 154, 163, 204, 224, 226]

Reason-based Action
  Description: Argues that behaviours of individuals are characterised by behavioural intentions, whereas behavioural intentions are themselves derived from the attitudes of individuals towards the behaviour and the norms associated with the behavioural performance.
  Focus: system behaviour
  References from IS literature: [104, 113, 145, 155, 173, 215, 225]

Systems Theory
  Description: Instead of considering a system's properties or its parts or elements in isolation, this theory advocates understanding the relationships among the parts which collectively form the whole, i.e. the system. It includes understanding of system boundaries, input, output, processes, circumstances, hierarchy, orientation and flow of information.
  Focus: system throughput; feedback; control
  References from IS literature: [99, 125, 126, 137, 182, 187]

Social Cognitive Theory
  Description: Provides a framework for understanding, foreseeing and altering human behaviour. It acknowledges human behaviour as the interaction between individual traits, actions/behaviour and environment.
  Focus: organisational learning
  References from IS literature: [114, 120, 129, 146, 169, 174]

Social Network Theory
  Description: Views social relationships as nodes and ties. Nodes represent individual actors in networks, ties the associations between them. These relationships can take many forms; in its fundamental type a social network represents the relationship between nodes and may be used to investigate the social/intellectual capital contained at each node.
  Focus: knowledge diffusion; communication strength
  References from IS literature: [116, 135, 177, 194, 208, 212, 222]

Structuration Theory
  Description: Attempts to reconcile theoretical dualities of social systems such as agency/structure, subjective/objective and micro/macro perspectives. It does not concentrate on individual entities but focuses on the social practices ordered across space and time [8]. Such a view helps in understanding technology-enabled contemporary businesses.
  Focus: structure; social system
  References from IS literature: [160, 189, 190, 197, 200, 203]

Socio-technical Theory
  Description: Built around two organisational subsystems: the technical, which consists of tools and techniques to transform inputs into outputs, and the social, which consists of employees, skills, authority structures, knowledge, behaviours and values. Socio-technical theory is built upon achieving fit through the joint optimisation of these systems, which requires an explicit recognition of their interdependency.
  Focus: process optimisation; organisational integration
  References from IS literature: [166, 170, 69, 192, 201, 213, 223]

Strategic Competitiveness
  Description: Developed by Porter [73], it provides a roadmap of an organisation's competitiveness through five-force analysis, value-chain analysis and strategic sets, aimed at providing cost leadership, differentiation or focused advantages to the organisation.
  Focus: competitive forces; competitiveness analysis
  References from IS literature: [119, 122, 123, 167, 196]

Soft Systems Methodology
  Description: Intends to resolve soft and hard issues related to poorly structured problems having social impacts; emphasises that the investigator must take into account issues other than the merely technical. Developed by Checkland [57], it has seven stages:
    a. definition of the problem and understanding of its nature;
    b. expression of the problem through rich images;
    c. development of various perspectives of the issue through root definitions;
    d. construction of conceptual models to address the root definitions;
    e. comparison of the conceptual models with the rich images developed in step b;
    f. identification of desirable and possible changes to the problem situation;
    g. development of recommendations to improve the problem situation.
  Focus: problem resolution
  References from IS literature: [103, 110, 156, 157, 172]

Transaction Cost Economics
  Description: Argues that the total costs incurred by an organisation can be divided into two categories: transaction and production costs. Transaction costs represent all costs which arise from the processing of information to organise and synchronise the tasks performed by people and machines to accomplish the organisation's primary processes. Production costs are incurred from producing or creating goods or services through the primary processes. The organisation aims to reduce costs through efficient information processing.
  Focus: governance structure; outsourcing; interorganisational coordination and collaboration
  References from IS literature: [118, 127, 136, 161, 199, 214]

Task-Technology Fit
  Description: Use of IT is expected to have a positive effect on people's performance if the capabilities of the technology match the tasks which people must perform [143].
  Focus: technical fit; system utilisation
  References from IS literature: [133, 139, 141, 142, 175]
Improving Asset Management Process
Modelling and Integration

Yong Sun, Lin Ma and Joseph Mathew

Abstract Asset management (AM) processes play an important role in assisting enterprises to manage their assets more efficiently. To visualise and improve AM processes, the processes need to be modelled using suitable process modelling methodologies. Understanding the requirements for AM process modelling is essential for selecting or developing effective AM process modelling methodologies; however, little research has been done on analysing these requirements. This paper attempts to fill this gap by investigating the features of AM processes. It concludes that AM process modelling requires intuitive representation of processes, 'fast' implementation of the process modelling, effective evaluation of the processes and sound system integration.

Keywords Asset management processes, Process modelling, Process evaluation, Process integration

__________________________________
Yong Sun
CRC for Infrastructure and Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, Brisbane, QLD 4001, Australia
e-mail: y3.sun@qut.edu.au
Lin Ma
CRC for Infrastructure and Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, Brisbane, QLD 4001, Australia
Joseph Mathew
CRC for Infrastructure and Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, Brisbane, QLD 4001, Australia

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information Systems and Decision Models, Engineering Asset Management Review, DOI 10.1007/978-1-4471-2924-0_3, © Springer-Verlag London Limited 2012

1 Introduction

An enterprise often conducts various asset management (AM) activities which are interlinked in different logical ways, resulting in different processes. These processes are termed AM processes. Inefficient AM processes can incur significant costs for an organisation, or even cause an organisation to fail to achieve its AM goals. AM processes can be improved using process modelling and reengineering technology. AM process modelling is the documentation, analysis and design of the structure of AM processes. Process working mechanisms, required resources, external factors, constraints and their relationships with the environment in which these processes operate are also included in process modelling. AM process models can be used for visualising processes, developing data requirements, coordinating AM activities among different personnel [1], generating workflows to develop AM information systems and assisting in the integration of AM information systems with other IT systems. With improved processes, an organisation can achieve its AM goals effectively with less consumption of its resources, including time, finances, labour, IT systems and materials.
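As a concrete illustration of the workflow-generation use mentioned above, an AM process model can be represented minimally as an ordered set of activities with their required resources, from which a workflow can be derived. The class names, activity names and resource labels below are invented for illustration and are not taken from the paper:

```python
from dataclasses import dataclass, field

# Minimal sketch of an AM process model: each activity records the resources
# it needs, and the process can emit a simple linear workflow from its
# activity ordering.
@dataclass
class Activity:
    name: str
    resources: list = field(default_factory=list)  # e.g. tools, skills, IT systems

@dataclass
class AMProcess:
    name: str
    activities: list = field(default_factory=list)  # ordered list of Activity

    def workflow(self):
        """Generate a linear workflow: activity names in execution order."""
        return [a.name for a in self.activities]

inspect = Activity("Inspect pump", resources=["vibration analyser"])
assess = Activity("Assess condition", resources=["CMMS"])
plan = Activity("Plan maintenance", resources=["scheduler"])

process = AMProcess("Condition-based maintenance", [inspect, assess, plan])
print(process.workflow())  # ['Inspect pump', 'Assess condition', 'Plan maintenance']
```

A real AM process model would of course carry far richer information than this linear sketch, including events, branching, constraints and the data flows discussed later in the paper.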
Process modelling has attracted the attention of engineering researchers since the beginning of the industrial revolution [2]. During the late 1980s and early 1990s, businesses started to become more interested in processes [3]. Modelling is important as it provides managers, asset maintenance personnel, operators and users with a common understanding of each process [4]. It also visualises processes so that they can be discussed and audited more intuitively [4]. A survey conducted in 2006 [5] showed that process improvement in general is beneficial to most users. AM processes have been used to guide AM practices [6, 7]. However, these processes are modelled using flowcharts, a method that is insufficient for comprehensively describing the characteristics of AM activities. Research on AM process modelling methods has also attracted increased attention in recent times [8, 9]. The research of Ma et al. [10] shows that AM processes have common characteristics across different businesses: they are dynamic over a long time span, generally focus on engineering assets which are hierarchically structured, are closely related to decision support processes, and involve a diversity of information and data. Modelling AM processes normally involves different people in different departments or organisations, and often involves outsourcing. Noting these features, Frolov [9] studied AM process modelling and recognised that a sound foundation needed to be developed to enable effective application.
This paper addresses this issue and focuses on analysing AM process modelling requirements. The analysis considers the following aspects:

(1) process representation;
(2) process modelling implementation;
(3) process evaluation;
(4) information exchange between different IT systems.

The study is expected to assist in developing or selecting effective methods for modelling AM processes.
The rest of the paper is organised as follows. Section 2 discusses AM process representation requirements, and Section 3 presents and analyses the major requirements for process modelling implementation. Section 3 also analyses the requirements for the evaluation of AM processes, and Section 4 deals with AM-related information integration requirements. Conclusions are presented in Section 5.

2 Requirements for Representing AM Processes

AM processes should be modelled in a way that addresses AM process characteristics and their modelling goals. Firstly, AM process models should be intuitive and easy to
follow because they are often used by people with varied skill sets including busi-
ness managers, financial officers, maintenance engineers and operators. Secondly,
the models should contain sufficient information, especially AM-specific informa-
tion such as engineering assets, working time, required tools and skills as AM data
models are generated from these process models. Thirdly, they should be able to
accommodate IT requirements because the processes are generally implemented
using computer systems. Finally, AM process models should be flexible and
adaptable because of the dynamic nature of these processes.
Different business process modelling methods and techniques with software
support have been developed to address modelling requirements from different
viewpoints. They all have their advantages, but they all must address the key re-
quirements of AM process modelling [3]. This section focuses on analysing AM
process representation requirements in more depth. Major existing process model-
ling techniques are also briefly reviewed to see whether and how they can meet
the requirements.

2.1 AM Process Description

AM processes should be represented using an event-driven, activity-focused methodology because actions and their sequences are the major concerns. This methodology has been adopted by most existing process modelling techniques. Other representations tend to be less effective for meeting AM needs. For example, Swimlanes is an organisation-focused process modelling method. When using this
method to model AM processes, the data flow in the processes is hard to describe.
However, data flow is critical to developing an AM IT system. The second major
drawback is that this method does not readily represent activities shared by mul-
tiple participants.
74 Y. Sun, L. Ma and J. Mathew

2.2 Symbols and Notations

To make AM processes intuitive and easy to follow, symbols and notations should
be straightforward [11]. Process modelling symbols with specific meanings need to be learnt and can therefore be hard to understand unless viewers have an engineering background. On the other hand, notations must be comprehensive enough to represent the required AM information.
Currently, flowcharts are still widely used to model AM processes [6, 7] be-
cause they are well established, familiar to most engineers and business managers
and can be readily adopted as workflow models in developing AM systems. How-
ever, flowcharts model the relationships of activities and judgements only, with-
out presenting other important information such as data flow and participants
simultaneously.
IDEF0 (one of the Integration DEFinition methods) has also been used for
modelling AM processes [12]. IDEF0 is one of the 16 modelling methods in the
family of IDEF, which was created by the United States Air Force. IDEF0 was
released as a standard for function (activity) modelling in 1993. It is a method
designed to model the decisions, actions and activities of an organisation or sys-
tem using simple boxes and arrows (Figure 1). Effective IDEF0 models enable the
analysis of a system and promote good communication between the analyst and
the customer.
In Figure 1, the box represents an activity. Input and output arrows represent
material and information (data) flow. ‘Control’ stands for something used to im-
plement the activity such as conditions, recipes or manuals. ‘Mechanisms’ stands
for the resources or organisations required by the activity.
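The ICOM (Input, Control, Output, Mechanism) structure of an IDEF0 box can be sketched as a simple record; the pump-overhaul example below is invented for illustration and is not drawn from the IDEF0 standard itself.

```python
# Minimal sketch of an IDEF0 box as a data structure: each activity carries
# Inputs, Controls, Outputs and Mechanisms (ICOM). Example values are invented.

from dataclasses import dataclass, field

@dataclass
class IDEF0Activity:
    name: str
    inputs: list = field(default_factory=list)      # material/data consumed
    controls: list = field(default_factory=list)    # conditions, recipes, manuals
    outputs: list = field(default_factory=list)     # material/data produced
    mechanisms: list = field(default_factory=list)  # resources, organisations

overhaul = IDEF0Activity(
    name="overhaul pump",
    inputs=["worn pump", "spare parts"],
    controls=["maintenance manual", "safety regulations"],
    outputs=["overhauled pump", "work report"],
    mechanisms=["fitter", "workshop crane"],
)
print(len(overhaul.controls), overhaul.outputs[0])  # 2 overhauled pump
```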
IDEF0 is a type of graph-plus-text notation which is easier to understand and
better for AM process management, especially for developing AM IT systems.
This graph-plus-text notation has different variations such as the five views used in the Architecture of Integrated Information Systems (ARIS) [13], the architecture modelling notation (AMN) used by James Martin & Co. [4] and the Generic Activity Model (GAM) in integrated enterprise modelling (IEM) [14].

Figure 1 IDEF0 box and graphics (modified from http://www.idef.com/IDEF0.html, accessed 15 June 2006)
The presentation of process models in ARIS is very similar to IDEF0 (Fig-
ure 2). The major difference is that in ARIS, the Control view and the Mechanism
view are activity (function) self-contained and do not link to other activities using
lines. This design makes ARIS process models more readable and clearer.
ARIS was developed to attempt to model all aspects of complex businesses.
However, Green and Rosemann [15] analysed the five views in ARIS and con-
cluded that “even when considering all five views in combination, problems may
arise in representing all potentially required business rules, specifying the scope
and boundaries of the system under consideration, and employing a ‘top-down’
approach to analysis and design”.
When using ARIS to model AM processes, Ma et al. [16] noted that the influ-
ence of decisions is not reflected in the general ARIS views. Information about
assets can be included in the Output view. In this case, the information of assets
is not highlighted. However, in the asset maintenance management process, one
emphasises the influence of decision making and the layout of the asset. Asset
maintenance management is a dynamic process which is closely related to deci-
sion support and information about assets. To accommodate the requirements of
AM, the authors suggested extending current ARIS-based views to include the
views for maintenance decision support and asset technical information while
developing AM process models using ARIS, i.e. adding a Decision view and an
Asset view when modelling asset maintenance management processes. The Deci-
sion view includes all aspects for maintenance decision making. The Asset view
includes the layout and the configuration of assets. The technical specifications of
assets are also allocated in the Asset view. Because the existing Output view is often misleading, as it contains both input and output, the original Output view was divided into an Input view and an Output view. Figure 3 shows the modified ARIS
views to accommodate the requirements of AM. The authors also indicated that
the modified views are still far from being a satisfactory solution. Further re-
search is therefore required.

Figure 2 The general ARIS business process views (Data view, Control view/Process view, Activity (function) view, Output view and Organisation view)

Figure 3 The modified ARIS process views for asset management [16] (Decision, Control/Process, Input, Data, Activity (function), Output, Asset and Organisation views)

IEM also uses the concept of views. The representation method in IEM (Fig-
ure 4) is nearly the same as in ARIS [14].
A key feature of IDEF0, ARIS and IEM is that all conditions to complete an ac-
tivity are represented using separate boxes, and then these boxes are linked to an
activity box using lines. AMN is different from these three modelling methods in
that it includes an activity, the time the activity takes (metrics), the people who
complete the activity (roles) and the techniques and tools used to complete the
activity within the same box (Figure 5). The major advantage of this method of
representation is that a box contains more information so that the process models
become less messy. Another advantage is that the time used for implementing an
activity is explicitly presented. The major disadvantage of this design is that it
does not describe data flow. In addition, different properties in the same box will
create difficulties in software development.

Figure 4 Generic activity model of IEM [14, p. 23] (an activity transforms a product, order or resource object from status n to status n+1; an order controls the execution and a resource executes the activity)



Figure 5 Architect modelling notation [4, p. 52] (a single box combining Metrics, Roles, Inputs, Activity, Deliverables, Techniques and Tools)

In recent years, Business Process Model and Notation (BPMN) has become an
increasingly important standard for process modelling. BPMN is also a type of
graph-plus-text notation similar to activity diagrams used in the Unified Model-
ling Language (UML). According to documentation provided by Object Man-
agement Group “In BPMN a Process is depicted as a graph of Flow Elements,
which are a set of Activities, Events, Gateways, and Sequence Flows that define
finite execution semantics” [17]. BPMN adopts both an event-driven activity-
focused representation and Swimlanes to focus on participants (Figure 6). BPMN
is much richer than other existing notations. BPMN 2.0 has defined five basic
categories of notations: Flow Objects, Data, Connecting Objects, Swimlanes, and
Artefacts. Each category has several elements which can be further subdivided into subelements. For example, three elements including events, activities and
gateways are included in the category of Flow Objects, whereas activities are
divided into non-atomic activities which can be expanded into subprocesses and
atomic activities which are termed Tasks. Therefore, in BPMN, the terms ‘activ-
ity’ and ‘task’ are both used because they have different meanings. Tasks are
further divided into different types with different notations, including service
task, send task, receive task, user task, manual task, business rule task and script

task.

Figure 6 Business process model and notation (BPMN) (events, tasks, sequence flows, message flows, data associations and pools)

The advantage of the richness is that it can be used to deal with the complexity that is inherent in business processes. However, the richness also makes
this language more complicated to deal with. End users often have difficulty
identifying the interface between process modelling and business rule modelling
[18].
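The quoted BPMN definition of a process as a graph of flow elements can be illustrated with a toy encoding; the element kinds follow the BPMN categories only loosely, and the maintenance process itself is invented for this sketch.

```python
# Illustrative encoding of a BPMN-style process: flow elements (events, tasks,
# gateways) joined by sequence flows. Element names are invented; this mirrors
# the BPMN 2.0 metamodel loosely, not exactly.

from dataclasses import dataclass

@dataclass(frozen=True)
class FlowElement:
    name: str
    kind: str        # 'event', 'task' (atomic activity) or 'gateway'

@dataclass(frozen=True)
class SequenceFlow:
    source: FlowElement
    target: FlowElement

start    = FlowElement("fault detected", "event")
diagnose = FlowElement("diagnose fault", "task")
decide   = FlowElement("repair or replace?", "gateway")
repair   = FlowElement("repair asset", "task")
end      = FlowElement("asset restored", "event")

flows = [SequenceFlow(start, diagnose), SequenceFlow(diagnose, decide),
         SequenceFlow(decide, repair), SequenceFlow(repair, end)]

# Simple well-formedness check: every non-final element has an outgoing flow.
sources = {f.source for f in flows}
assert all(e in sources for e in (diagnose, decide, repair))
print(len(flows))  # 4
```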
A major advantage of BPMN is that it provides a mapping between the graph-
ics of notations and Web Services Business Process Execution Language
(WS-BPEL), or Business Process Execution Language (BPEL) for short. BPEL is
a standard executable language developed by OASIS for modelling actions within
business processes using Web-based services. However, BPEL cannot appropri-
ately describe the interconnection of multiple partners [19]. BPMN models can
also be mapped to the Yet Another Workflow Language (YAWL) environment
through the BPMN2YAWL component for execution [20]. YAWL was developed
by Wil van der Aalst at the Eindhoven University of Technology, the Netherlands,
and Arthur ter Hofstede at the Queensland University of Technology, Australia, in
2002, aiming to extend Petri nets’ support for various control flow patterns [20].
(Petri nets are reviewed in Section 4.) YAWL supports dynamic workflows, which
is particularly useful for modelling dynamic AM processes.
Systems thinking has also been used to model dynamic processes. A typical
system process model is demonstrated in Figure 7. The notations of systems thinking are also a type of graph-plus-text, but they are less intuitive. Process models developed using the systems thinking method have better simulation capabilities [21].

Figure 7 System process models in context [21, p. 17]

Systems-thinking-based process models allow interactions between activities to be considered. For example, an upstream activity which is accomplished in a
particular manner can affect the nature and duration of later activities [21]. This
modelling technique enables AM models to place more emphasis on the dynamic
nature of AM processes.
On the basis of the preceding analysis, it can be seen that various notations are available for representing AM processes. However, each process modelling language can meet only some of the requirements for AM process representation. For example, in terms of richness of notation and execution capability,
BPMN would be the choice. When it comes to modelling dynamic processes,
YAWL and systems thinking would work better, and for presenting activity im-
plementation times, AMN is preferable. Therefore, a combination of BPMN, sys-
tems thinking and AMN may be an effective solution.

2.3 Trade-off Between Details and Simplicity

AM process models should contain enough information. However, models become chaotic even for small processes if they contain too much information. Keeping a
balance between simplicity and completeness is needed. Simplicity is important
for human reading, and completeness is important for AM process management
and data flow design (see [16] for detailed discussions on this issue).
When modelling AM processes, the boundary of the process, the scope of each
process segment and application of atomic or non-atomic activities need to be
determined. According to BPMN, a non-atomic activity can be expanded to an-
other layer of the subprocess. A layer is a set of linked AM subprocesses which
are non-atomic activities of the processes in another layer. For example, AM sub-
processes in the second layer are expanded from the non-atomic activities of the
AM process in the first layer. An atomic activity (or task) cannot be expanded into
another layer of subprocess. During AM process modelling, one often needs to
group several detailed activities into a more ‘macro’ activity (non-atomic activity).
For example, the activity risk analysis is composed of several more detailed activities
such as failure frequency analysis and failure consequence analysis. However,
grouping activities is a skilful art. The number of the atomic activities (tasks) in an
AM process is fixed. Using ‘big’ non-atomic activities can reduce the number of
activities in a process model to make it simpler. However, at the same time the
layers of its subprocess models will be increased. Too many ‘small’ activities or
too many layers (or subprocesses) both decrease the readability of the process
models. A balance between the number of activities and layers is necessary.
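The layer/activity trade-off can be made concrete with a small sketch: a process is a list of activities, a non-atomic activity expands into a subprocess, and the number of layers is the depth of that tree. The encoding below is an assumption for illustration; the activity names follow the risk-analysis example above.

```python
# Sketch of the layer/activity trade-off: an AM process as a tree in which
# non-atomic activities expand into subprocesses (BPMN terminology).
# Grouping detailed activities into a 'macro' activity shrinks one layer
# but adds another; layers() counts how deep the expansion goes.

from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    subprocess: list = field(default_factory=list)   # empty => atomic task

    @property
    def atomic(self):
        return not self.subprocess

def layers(process):
    """Number of subprocess layers needed to reach every atomic task."""
    return 1 + max((layers(a.subprocess) for a in process if not a.atomic),
                   default=0)

risk_analysis = Activity("risk analysis", [
    Activity("failure frequency analysis"),
    Activity("failure consequence analysis"),
])
process = [Activity("inspect asset"), risk_analysis, Activity("plan maintenance")]

print(len(process), layers(process))  # 3 2 (three top-layer activities, two layers)
```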
In addition, AM process models need to have an additional navigational dimen-
sion to allow viewers to delve deeper into the details or to be able to jump from
one subprocess to another. To meet this need, the complexity of the AM process
models will have to be increased.

3 Requirements for Implementing AM Process Modelling

A major barrier to employing process modelling technology in AM is the significant investment of time, finances and human resources required for modelling before any initial benefit can be realised. A basic requirement for an AM modelling method is therefore that it must allow modellers to develop AM process models quickly to reduce costs.
An effective AM process modelling method should enable the required informa-
tion and data to be obtained quite easily. Such information and data could exist
across the whole structure of an organisation.
Modelling AM processes normally involves different skills in different depart-
ments or organisations and often involves outsourcing. Figures 8 and 9 illustrate
the people (roles) and factors commonly related to AM process modelling.

Logistics Technical Financial IT engineers


officers engineers officers

Asset Business
manufacturers managers /
/ dealers planners
AM
Regulators / processes Human
legal workers / resource
policy makers managers

Users / Consultants Operators Process


customers modellers
Figure 8 People who are likely to be involved in AM process modelling

Human Operations Data /


resources requirements information

Models / Data flow


methods
AM
AM policies / processes Business
regulations / objectives /
standards goals

Technical
Finance Inventory
manuals /
drawings
Figure 9 Factors likely to be involved in AM process modelling

The current practice of mapping AM processes often involves external experts who have BPM-specific knowledge and staff members who have a good understanding of the organisation's processes and activities. Process modellers must
capture the required information, whereas staff members need to understand BPM
knowledge. Capturing data and information for modelling can be difficult because
modellers, users and participants in a process have little common ground. Each
participant normally has partial information about the process. One traditional
approach for capturing information is via an interview or survey [1]. Another
approach is to conduct focused workshops involving all relevant experts. Fig-
ure 10 describes a conventional procedure for process modelling.
The steps ‘workshop, survey or interview’ and ‘develop/refine process models’
could be repeated several times. Compared with workshops, surveys and inter-
views are less effective and less efficient. However, a workshop involving a num-
ber of people can be very costly and time consuming, especially for widely dis-
tributed enterprises. Hence, the most important thing for AM process modelling is
to reduce the involvement of people in the overall modelling process and to make
it more automated and objective [22] using existing reference models.
Reference models/patterns are generic conceptual models that formalise rec-
ommended practices for certain domains. Existing reference models can be classi-
fied into two categories: (1) “ideal” models, which are developed using typical
business activities and used mainly for reengineering the business process of an
enterprise such as in SAP; and (2) components or patterns, each of which de-
scribes a part of the business activity which represents a common characteristic
abstracted from different business processes. The Configurable Event-Driven
Process Chains (CEDPC), BPSim++ and Micro Saint Sharp are three existing
software tools which were developed using this category of reference models.

Figure 10 Conventional procedure for AM process modelling (steps: select AM processes to be modelled; determine the people involved in the processes; workshop, survey and/or interview; develop/refine process models; documentation)

The CEDPC is a configurable reference modelling language which enables core patterns to be captured. BPSim++ is a library of components for business process simulation based on the Visual Component Library of Borland C++ Builder. It is
an extendible and reusable library of modelling components. Micro Saint Sharp is
a general-purpose, discrete-event simulation software tool.
Despite these techniques, a 'fast' modelling methodology for mapping existing AM processes visually and quickly has yet to be developed. A possible approach is
to develop a fast modelling methodology which enables different users in a com-
pany to work at their own offices and to input their requirements, activities and
outcomes independently. Each user only focuses on what he or she does and does
not need to consider the logical relationship between his or her individual activi-
ties and other people’s work. These inputs are all forwarded to a server. The links
of these users’ work will be automatically generated by the server based on their
inputs. These links are then compared with the reference models or patterns so that
the final results of the AM process models can be presented in a standardised
format. Some preliminary research on this issue has been reported [8, 9].

4 Requirements for Evaluating AM Processes

During AM process modelling, modellers and users often need to evaluate differ-
ent process models. The evaluation has two objectives: (1) to evaluate whether the
process can achieve its goals and (2) to compare different process alternatives and
determine the best one for an enterprise. An evaluation of process models is im-
portant because some ineffective processes can cause significant financial losses
to an enterprise. AM process modelling must ensure that enterprises can gain
advantages from their investment.
AM processes are dependent on an organisation's objectives/goals, structure, business scale and ready access to resources. An evaluation of an AM process
should be made by considering the application environment of the process. A poor
AM process for one enterprise may be perfect for another enterprise. To quantify
the evidence for this argument, two possible processes for a virtual asset repair are
assumed in Figure 11. The implementation time of Process A is 3 hours 45 minutes, whereas that of Process B is 2 hours 45 minutes. If the service interruption must be shorter than 4 hours, then both processes can be used. In this case, Process A is more favourable because it can be implemented by a single qualified maintenance technician, whereas Process B needs two technicians, and scheduling the workload of these two technicians is not straightforward. However, if the interruption must be shorter than 3 hours, only Process B can be selected. On the other hand, if an organisation has only one qualified technician, only Process A is possible.
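The stated timings can be reproduced with a small critical-path calculation over each process's precedence graph. The structure assumed here for Process B (activity b running in parallel with the branch c followed by d) is a guess chosen to match the durations given in the text; the graph encoding itself is illustrative.

```python
# Critical-path (implementation time) calculation for the two hypothetical
# repair processes of Figure 11. Process B's precedence structure is an
# assumption chosen to reproduce the stated durations (A: 3 h 45 min,
# B: 2 h 45 min).

def implementation_time(durations, predecessors):
    """Longest path through an activity-on-node precedence graph (minutes)."""
    finish = {}
    def t(act):
        if act not in finish:
            finish[act] = durations[act] + max(
                (t(p) for p in predecessors.get(act, [])), default=0)
        return finish[act]
    return max(t(a) for a in durations)

durations = {"a": 60, "b": 60, "c": 45, "d": 30, "e": 30}

# Process A: strictly sequential a -> b -> c -> d -> e
process_a = {"b": ["a"], "c": ["b"], "d": ["c"], "e": ["d"]}

# Process B: after a, branch b runs in parallel with branch c -> d
# (logic AND), and both join before e
process_b = {"b": ["a"], "c": ["a"], "d": ["c"], "e": ["b", "d"]}

print(implementation_time(durations, process_a))  # 225 min = 3 h 45 min
print(implementation_time(durations, process_b))  # 165 min = 2 h 45 min
```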
Currently, a methodology to evaluate AM processes systematically awaits de-
velopment. The following three critical criteria must be considered in the evalua-
tion of AM processes:

(1) effectiveness, which measures the degree to which an AM process achieves the AM goals for which it is designed. For example, an AM strategy planning process without risk analysis would not be effective;
(2) efficiency, which measures the usage rate of enterprise resources, including time, finances, labour, IT systems and materials, when implementing an AM process to achieve its business goals. An optimised AM process would enable users to achieve their goals with minimum consumption of enterprise resources;
(3) flexibility, which measures the adaptability of an AM process to frequently
changing organisational structures and dynamic business environments. The
knowledge about process changes can be captured using process-aware infor-
mation systems (PAISs) [23].
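As a sketch of how these three criteria might be combined when comparing process alternatives, the following assigns purely invented scores and weights; it is an illustration only, since, as noted below, a systematic AM process evaluation method has yet to be developed.

```python
# Hedged sketch: weighted multi-criteria comparison of process alternatives
# against effectiveness, efficiency and flexibility. All weights and scores
# are invented for illustration, not a validated evaluation method.

weights = {"effectiveness": 0.5, "efficiency": 0.3, "flexibility": 0.2}

candidates = {
    "Process A": {"effectiveness": 0.9, "efficiency": 0.6, "flexibility": 0.7},
    "Process B": {"effectiveness": 0.8, "efficiency": 0.9, "flexibility": 0.6},
}

def score(criteria):
    """Weighted sum of the three criterion scores."""
    return sum(weights[k] * criteria[k] for k in weights)

best = max(candidates, key=lambda name: score(candidates[name]))
print(best, round(score(candidates[best]), 2))
```

In practice the weights would themselves depend on the application environment, echoing the point above that a poor process for one enterprise may be perfect for another.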

Simulation is currently a common approach to evaluating business processes


[24]. One example is Petri nets (PNs), which are discussed in [10]. PN language
was first formally defined by Carl Adam Petri in the 1960s. It is a graphical and
mathematical modelling tool appropriate for modelling systems with simultane-
ously occurring events and resource sharing and hence can be used to describe
AM processes. PNs have a thorough mathematical foundation and are good for
simulation. Several variations of PN have been created.

Figure 11 Example of AM process options (Process A: activities a–e in sequence; Process B: the same activities with parallel branches joined by a logic AND)

To deal with time-dependent, probabilistic systems, stochastic Petri nets (SPNs) were developed. Two
of these SPNs are generalised stochastic Petri nets (GSPNs) and stochastic activ-
ity networks (SANs). Both of these can be used for numerical and simulation
analysis.
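To make the PN firing rule concrete, here is a minimal interpreter sketch: a transition is enabled when every one of its input places holds a token, and firing consumes those tokens and produces tokens in the output places. The repair workflow, place names and one-token arc weights are assumptions for illustration, not a published AM model.

```python
# Minimal Petri net interpreter illustrating the firing rule that underlies
# PN-based process simulation, including resource sharing (the technician
# token). All names and the net structure are invented.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1          # consume one token per input place
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet({"fault_reported": 1, "technician_free": 1})
net.add_transition("start_repair", ["fault_reported", "technician_free"],
                   ["repair_in_progress"])
net.add_transition("finish_repair", ["repair_in_progress"],
                   ["asset_restored", "technician_free"])
net.fire("start_repair")      # technician token is taken while repairing
net.fire("finish_repair")     # technician token is released again
print(net.marking["asset_restored"], net.marking["technician_free"])  # 1 1
```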
PN technology has been incorporated with other methodologies to enhance its
capability. An integration of PNs and the trace logic of the communicating se-
quential processes theory led to the event-driven GSPN-based modelling approach
for the construction of complex system models. A combination of PNs and activity
networks led to the SAN-based modelling approach, which can be used to model
timed and instantaneous activities [25]. An integration of PNs and workflow pat-
terns which are used as a benchmark for the suitability of a process specification
language led to YAWL [20]. While PN process models are too abstract to be un-
derstood by ordinary viewers including business managers and engineers, YAWL
is much more intuitive for both process designers and users.
Another example of process simulation tools is UPPAAL, which is an inte-
grated tool developed by Uppsala University in Sweden and Aalborg University in
Denmark. It can be used to model and validate real-world systems which are mod-
elled as networks of timed automata and, hence, has the potential for AM process
simulation. This tool has been used for systematic evaluation of fault trees [26].
However, the process models developed in UPPAAL cannot be easily understood
without a sound knowledge of this tool.
In general, simulation is more suitable for evaluating efficiency and flexibility
rather than effectiveness. Some analytic approaches with more specific concerns
have also been developed. Chen et al. [27] presented a data envelopment analysis
(DEA) non-linear model for measuring the impact of IT on a multistage business
process. Sarkis [28] presented an activity-based analysis methodology for the
selection or prioritisation of a set of candidate business processes or projects that
should undergo reengineering. The same concept may be applied to compare dif-
ferent AM process options. Although existing business process evaluation meth-
ods have potential for AM process evaluation, they only evaluate processes from
specific points of view. A method to evaluate AM processes systematically and
effectively has yet to be developed.

5 Requirements for Integration

AM is a part of the business activities in an enterprise. Optimal AM (local optimisation) does not always mean optimal business (global optimisation). Hence, AM
must be integrated into the whole enterprise management system to maximise the
benefits to the enterprise. In addition, commonly used systems such as SAP, Oracle, Baan and Intentia have traditionally focused on a single enterprise. With globalisation, the need for AM integration across enterprises becomes pressing. To satisfy integration requirements, an AM process modelling methodology must enable the developed process models to perform the following tasks:
(1) consolidate all aspects of AM. This includes integrating different views and
goals of an AM process. An AM process generally has various users who may
have different goals. For example, one might use the AM process model to
manage activities, whereas another might use it to extract data flow;
(2) be interoperable, i.e. be able to exchange information and services between
programs or user interfaces no matter where they are located [29]. Software
and hardware in an IT system need to collect data from condition monitoring
systems and existing databases, manipulate and analyse these data and send
the processed data or analysis results back to the database or control devices.
Two types of data need to be considered in AM process modelling: (1) data
for describing AM process models such as the location of a block and the re-
lationship between two blocks and (2) data for implementing AM processes
such as the required human resources and the locations of assets. When de-
veloping AM process models, at least the following two problems within an
AM information system need to be solved: (1) AM-related information can be
smoothly transferred between different components of an AM system and (2)
different modules for modelling and analysing AM processes can be com-
bined and decomposed when needed.
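As an illustration of the interoperability requirement, the sketch below round-trips both kinds of data distinguished above (model-description data such as block locations and relationships, and implementation data such as required skills and asset locations) through a neutral JSON payload. The schema is invented for this example; it is one common way such exchanges are done, not a method prescribed by the text.

```python
# Toy interoperability sketch: exchange AM process-model data between
# components via a neutral, serialisable JSON format. Schema and values
# are invented for illustration.

import json

model = {
    "blocks": [
        {"id": "b1", "activity": "inspect pump", "location": [120, 80]},
        {"id": "b2", "activity": "replace seal", "location": [260, 80]},
    ],
    "relationships": [{"from": "b1", "to": "b2"}],
    "implementation": {"b2": {"skills": ["mechanical fitter"],
                              "asset_location": "plant 3, bay 12"}},
}

payload = json.dumps(model)                # what one component sends
received = json.loads(payload)             # what another component consumes
assert received == model                   # lossless exchange
print(received["relationships"][0]["to"])  # b2
```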
The first requirement is relatively easy to meet. This type of integration is dis-
cussed in [10]. Meeting the second requirement is much more difficult because the
model normally needs access to all source codes of related programs – which is
unlikely to be achieved in practice. Hence, in terms of AM system integration,
more attention should be focused on the second type of integration. One such
effort is the Data Reference Model (DRM) presented by Kuhlmann et al. [29] to
support information and service exchanges between central programs and user
interfaces. Some process-oriented integrated AM systems have also been devel-
oped. For example, British Petrol (BP) developed a cross-enterprise AM system.
This system was designed based on Maximo and connects its business processes
with its suppliers and contractors to co-ordinate the maintenance, operation and
repair of its equipment [30]. However, the integration of AM information systems
has only been implemented on a case-by-case basis. A generic method for facili-
tating interoperability does not yet exist.

6 Conclusions

Process modelling plays a critical role in modern AM practices. It can be used to automate AM through information systems, increasing efficiencies and reducing costs in enterprises. To achieve these goals, AM process modelling methodologies should enable the developed process models to perform the following functions:

(1) contain sufficient AM information such as activities, time, organisation and other resources for implementing these activities while maintaining readability;
(2) be evaluated to determine the best process from multiple perspectives. Ideally,
this evaluation can be automatically conducted in the course of processes
modelling;
(3) accommodate changes in the business structure and environment. These mod-
els should be configurable from an information technology point of view;
(4) be developed relatively quickly with a minimum of human effort.

In addition, AM process modelling methodologies should make it possible for data flow to be developed from process models relatively easily – an essential task in developing AM systems – and enable the seamless integration of AM systems with enterprise IT systems.
Existing business process modelling methodologies can be used to model AM
processes. However, these methodologies cannot meet the special requirements of
AM process modelling perfectly. Further research on AM modelling methodology,
especially fast modelling, AM-specific notations and reference models, is neces-
sary. The common elements between AM processes and business processes will
enable some findings in this study to be applied to business process modelling in
selected applications.

References

[1] Weske M (2007) Business process management: concepts, languages, architectures. Springer, Berlin Heidelberg New York
[2] van der Aalst W, van Hee K (2002) Workflow management: models, methods, and systems. MIT Press, Cambridge, MA
[3] Shen H, Wall B, Zaremba M, Chen Y, Browne J (2004) Integration of business modelling
methods for enterprise information system analysis and user requirements gathering. Com-
put Ind 54(2):307–323
[4] Chesney T (2003) Competitive information in small businesses. Kluwer, Dordrecht
[5] Palmer N (2007) A survey of business process initiatives. BP Trends.
http://www.bptrends.com/members_surveys/deliver.cfm?report_id=1001&target=FINAL
PDF 1-23-07.pdf&return=surveys_landing.cfm
[6] New Zealand National Asset Management Steering Group (2004) Optimised decision mak-
ing guidelines: a sustainable approach to managing infrastructure. Thames, New Zealand
[7] International Infrastructure Management Manual 2006 Edition. Institute of Public Works
Engineering Australia, Level 12, 447 Kent Street, Sydney NSW 2000 Australia.
[8] Frolov V, et al (2009) Building an ontology and process architecture for engineering asset
management. In: Proceedings of the 4th world congress on engineering asset management,
Athens, Springer, London
[9] Frolov V, et al (2008) Identifying core function of asset management. In: Proceedings of the
3rd world congress on engineering asset management and intelligent maintenance systems,
Beijing, Springer, Berlin Heidelberg New York
[10] Ma L, Sun Y, Mathew J (2007) Asset management process and its representation. In: Pro-
ceedings of the 2nd world congress on engineering asset management and 4th international
conference on condition monitoring, Harrogate, UK
Improving Asset Management Process Modelling and Integration 87

[11] Weichhardt F (1999) Modelling and evaluation of processes based enterprise goals. In:
Scholz-Reiter B, Stahlmann H-D, Nethe A (eds) Process modeling. Springer, Berlin Heidel-
berg New York, pp. 115–131
[12] Gómez Fernández JF, Crespo Márquez A (2009) Framework for implementation of mainte-
nance management in distribution network service providers. Reliab Eng Syst Saf
94(10):1639–1649
[13] Scheer A-W (1999) ARIS – business process frameworks, 3rd edn. Springer, Berlin Heidel-
berg New York
[14] Mertins K, Jochem R (1999) Quality-oriented design of business processes. Kluwer, Boston
[15] Green P, Rosemann M (2000) Integrated process modelling: an ontological evaluation. Inf
Syst 25(2):73–87
[16] Ma L, Sun Y, Mathew J (2004) Asset management process modelling. In: Proceedings of
the international conference of maintenance societies. Maintenance Engineering Society of
Australia, Sydney, Australia
[17] Object Management Group (2010) Business Process Model and Notation (BPMN).
http://www.omg.org/spec/BPMN/2.0 (Accessed 14 March 2012)
[18] Recker JC (2010) Opportunities and constraints: the current struggle with BPMN. Bus
Process Manage J 16(1):181–201
[19] Decker G, et al. (2009) Interacting services: from specification to execution. Data Knowl
Eng 68(10):946–972
[20] Adams M (2010) YAWL – user manual.
http://www.yawlfoundation.org/yawldocs/YAWLUserManual2.0.pdf
[21] Hitchins DK (2003) Advanced systems thinking, engineering, and management. Artech,
Boston
[22] Muller J-A (1999) Automatic model generation in process modeling. In: Scholz-Reiter B,
Stahlmann H-D, Nethe A (eds) Process modeling. Springer, Berlin Heidelberg New York,
pp. 17–36
[23] Weber B, et al. (2009) Providing integrated life cycle support in process-aware information
systems. Int J Coop Inf Syst 18(1):115–165
[24] Volkner P, Werners B (2000) A decision support system for business process planning. Eur
J Oper Res 125(3):633–647
[25] Mazzocca N, Russo S, Vittorini V (1999) The modelling process and Petri nets: reasoning
on different approaches. In: Scholz-Reiter B, Stahlmann H-D, Nethe A (eds) Process model-
ing. Springer, Berlin Heidelberg New York, pp. 37–56
[26] Cha S, et al. (2003) System evaluation of fault trees using real-time model checker
UPPAAL. Reliab Eng Syst Saf 82(1):11–20
[27] Chen Y, et al. (2006) Evaluation of information technology investment: a data envelopment
analysis approach. Comput Oper Res 33:1368–1379
[28] Sarkis J, Presley A, Liles D (1997) The strategic evaluation of candidate business process
reengineering projects. Int J Prod Econ 50(2–3):261–274
[29] Kuhlmann T, Lamping R, Massow C (1998) Intelligent decision support. J Mater Process
Technol 76(2):257–260
[30] Holland CP, Shaw DR, Kawalek P (2005) BP’s multi-enterprise asset management system.
Inf Softw Technol 47(4):999–1007
Utilising Reliability and Condition Monitoring
Data for Asset Health Prognosis

Andy Chit Tan, Aiwina Heng and Joseph Mathew

Abstract The ability to forecast machinery health is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition
monitoring technologies have given rise to a number of prognostic models which
attempt to forecast machinery health based on condition data such as vibration
measurements. This paper demonstrates how the population characteristics and
condition monitoring data (both complete and suspended) of historical items can
be integrated for training an intelligent agent to predict asset health multiple steps
ahead. The model consists of a feed-forward neural network whose training targets
are asset survival probabilities estimated using a variation of the Kaplan–Meier
estimator and a degradation-based failure probability density function estimator.
The trained network is capable of estimating the future survival probabilities when
a series of asset condition readings are inputted. The output survival probabilities
collectively form an estimated survival curve. Pump data from a pulp and paper
mill were used for model validation and comparison. The results indicate that the
proposed model can predict more accurately as well as further ahead than similar
models which neglect population characteristics and suspended data. This work
presents a compelling concept for longer-range fault prognosis utilising available
information more fully and accurately.

Keywords Condition-based maintenance, Condition monitoring and prognostics, Artificial neural networks

__________________________________
Andy Chit Tan
Queensland University of Technology, Brisbane, QLD 4001, Australia
Aiwina Heng
Queensland University of Technology, Brisbane, QLD 4001, Australia
Joseph Mathew
Queensland University of Technology, Brisbane, QLD 4001, Australia

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information Systems and Decision Models, Engineering Asset Management Review,
DOI 10.1007/978-1-4471-2924-0_4, © Springer-Verlag London Limited 2012

1 Introduction

The ability to forecast asset health is essential to minimising maintenance costs, operation downtime and safety hazards. Machinery prognostics involves predicting an asset’s remaining useful life, future health or risk to operation based on
condition monitoring (CM) and reliability data. Several valuable models have
considered integrating CM data into reliability prediction for individual assets.
Goode et al. [1] calculated an asset’s time to failure based on Weibull distribution
and vibration data. The Weibull proportional hazards model (PHM) [2–5] was
applied to forecasting the reliability of equipment. PHMs assume that hazard
changes proportionately with covariates (asset condition in this case) and that the
proportionality constant is the same at all times. A Weibull delay time distribution
[6, 7] was used to model the life distribution rolling element bearing. The distribu-
tion was updated as more CM information became available. This model requires
the determination of a threshold level to indicate the defect initiation point, which
is hard to identify and seldom recorded in practice. Most of the existing models for
machinery prognostics can be divided into three main categories: physics-based
approaches, model-based approaches and artificial intelligence approaches. Reviews of these prognostic models can be found in [8–10].
Physics-based approaches basically combine system-specific mechanistic
knowledge, defect growth formulas and CM data for predicting the propagation of
a fault. They generally require fewer failure histories than data-driven models.
However, the fault propagation of assets in real-life operation is often too complex
to be modelled accurately. Data-driven approaches which derive models directly
from the acquired data may often be the more available solution. They normally
include statistical approaches [11–13], which typically involve fitting a probabilistic failure distribution to historical data. These approaches are the least complex
and may be the only alternative in not-so-critical or low-failure-rate situations. A
recent physics-based model involving a condition-based prediction method for
long-range prediction is reported by Heng et al. [14].
Model-based approaches can be accurate when a correct and accurate model is
available. However, it is very difficult to build mathematical models for complex
systems, and doing so requires system-specific mechanistic knowledge. Jantunen [15] stated
that the wear of rotating machine components is still not fully understood today.
Most model-based prognostic methods focus on the prediction of crack propaga-
tion [16, 17]. However, there is a large variety of other failure modes, and prog-
nosticians need to correctly identify the fault type in question. Even if that has
been accomplished, defect growth is not a deterministic process. It has been
shown that even under well-controlled experimental conditions, crack growths of a
set of identical components are vastly different. It is also difficult to apply crack
growth models in practice because they require the knowledge of a crack’s exact
geometry or orientation, which are usually very irregular and cannot be identified
without disassembling the machine component.

Compared to model-based models, artificial intelligence models make far fewer assumptions about the system and its operating conditions. One popular
artificial intelligence prognostic technique in the literature is artificial neural net-
works. Neural networks can be tuned using well-established algorithms to provide
desired outputs directly in terms of vibration signals. Neural networks have pro-
duced comparable and, in some cases, superior results to standard mechanistic or
statistical models in various disciplines [18, 19]. In recent years, several methods
employing neural networks have been proposed for bearing prognosis. Tse and
Atherton [20] approached bearing prognosis as a time-series prediction using a
recurrent neural network (RNN). These models perform single-step-ahead predic-
tions to output the predicted vibration signal feature(s) at the next immediate time
step. However, in reality, single-step predictions rarely raise the bar from diagnos-
tics to prognostics. In some cases, one time step in a plot of vibration feature
measurement for prognostics can be only 15 minutes. A prognostics horizon of
15 minutes or even 1 day is not much help to optimal maintenance scheduling.
Nevertheless, several aspects of the data-driven approach need to be further investigated. Firstly, both reliability information and CM data need to be effectively integrated to enable longer-range prognosis. Secondly, suspended CM data of historical units have not been directly modelled and fully utilised. Suspended CM data are the condition trending data of historical units which did not undergo failure; they are common in practice, owing to preventive replacements and to components that are still in operation at the time of study. Lastly, the nonlinear relationship between an asset’s actual survival status and the measured CM indices needs to be deduced.
This paper presents an approach for addressing the challenges mentioned
above. A feed-forward neural network (FFNN) is trained to predict the survival
probability of an operating asset utilising both reliability and condition monitoring
data. The training targets are calculated using a variation of the Kaplan–Meier
(KM) estimator [21] and a degradation-based failure probability density function
(PDF). Pump vibration data from an Irving Pulp and Paper mill were used for
model validation and comparison.

1.1 Architecture of FFNN Prognostic Model

An FFNN consists of a layer of input nodes, one or more layers of hidden nodes,
one layer of output nodes and connection weights. During training, input and tar-
get pairs are repetitively presented to a network. The network will draw the rela-
tionships between the inputs and targets and adjust its connection weights to pro-
duce outputs as close as possible to the targets. The FFNN used in this work has
one hidden layer, d + 1 input nodes (d is the number of delayed indices of asset
condition), and h output nodes (h is the desired number of time intervals to be
forecasted) (Figure 1).

Figure 1 Architecture of the FFNN used in the proposed prognostic model: input nodes Y(t), Y(t − Δ), …, Y(t − dΔ); one hidden layer; output nodes Ŝ(t + Δ), Ŝ(t + 2Δ), …, Ŝ(t + hΔ)

Let S denote the probability of survival or reliable operation, t the current or latest time, Δ the fixed time interval between measurements, and n = 1, 2, 3, …, h the index of the output node, where the nth output node represents the nth future interval. The activation of the nth output node is trained with, and interpreted as, Ŝ(t + nΔ), the probability that the item will survive up to the nth next time interval. Collectively, the survival probabilities form a forecasted survival curve for the monitored item at the time of prediction.
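The mapping just described can be sketched as a single forward pass; this is an illustration rather than the authors' implementation, and the tanh hidden activation, sigmoid output activation and random weights are assumptions (the 11–15–5 sizing matches the case study in Sect. 2.1):

```python
import numpy as np

def ffnn_forward(y_window, W1, b1, W2, b2):
    """Map d + 1 delayed condition indices [Y(t), Y(t-D), ..., Y(t-dD)]
    to h survival-probability outputs [S(t+D), ..., S(t+hD)]."""
    hidden = np.tanh(W1 @ y_window + b1)               # hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))   # sigmoid keeps outputs in (0, 1)

rng = np.random.default_rng(0)
d, n_hidden, h = 10, 15, 5                  # 11 inputs, 15 hidden, 5 outputs
W1, b1 = rng.normal(size=(n_hidden, d + 1)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(h, n_hidden)), np.zeros(h)

s_curve = ffnn_forward(rng.normal(size=d + 1), W1, b1, W2, b2)
# s_curve[n-1] is interpreted as S-hat(t + n*Delta); together the h values
# form the forecasted survival curve at time t
```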

1.2 Statistical Modelling of FFNN Training Targets

The FFNN training targets are estimates of the survival probabilities of each moni-
tored item in the training set. They are computed based on the actual survival
status of the historical item at the time of measurement, as well as on how the
health of this item compared to the health of the entire population at similar oper-
ating times. These two considerations are detailed in the following sections.

1.2.1 Kaplan–Meier Estimation of Survival Probability

Training Targets for Complete Datasets

A historical dataset is considered complete if the monitored item has reached fail-
ure when removed from operation. Let i = 1, 2, …, m and m represent the number

of monitored historical items. If item i has reached failure before repair or re-
placement, its survival probability is assigned a value of 1 up until its failure time
step, Ti, and a value of 0 thereafter:

$$S_{\mathrm{KM},i}(t) = \begin{cases} 1, & 0 \le t < T_i \\ 0, & t \ge T_i \end{cases} \qquad (1)$$
Note that we consider all functions discussed here to be the true function esti-
mated from the given degradation datasets and drop the hat “^” for notational
clarity.

Training Targets for Suspended Datasets

A historical dataset is considered suspended if the item has not reached failure but
has been repaired or removed from operation. For such suspended datasets, the
survival probability is similarly assigned a value of 1 up until the time interval in
which survival was last observed. Survival probabilities for subsequent time inter-
vals are computed using a variation of the KM estimator [21] based on the sur-
vival rate of the complete datasets from this moment onwards.
For suspended units which are overhauled/replaced due to non-deterioration
factors (e.g. calendar-time-based suspensions), the modified KM estimator tracks
the cumulative survival probability of the suspended unit i as follows:

$$S_{\mathrm{KM},i}(t) = \begin{cases} 1, & 0 \le t < L_i \\ \displaystyle\prod_{L_i \le t_j \le t} \left(1 - \frac{d_j}{n_j}\right), & t \ge L_i \end{cases} \qquad (2)$$

where dj is the number of failures up to time step tj, nj is the number of units at risk
just prior to time tj and Li denotes the time interval in which historical unit i was
last observed to be still surviving.
For suspended units which are repaired/replaced to prevent failures because a
fault has been detected (informative suspensions), the modified KM estimator
calculates the cumulative survival probability of the suspended unit i as follows:



$$S_{\mathrm{KM},i}(t) = \begin{cases} 1, & 0 \le t < L_i \\ \mu_i, & t = L_i \\ \mu_i \cdot \displaystyle\prod_{L_i \le t_j \le t} \left(1 - \frac{d_j}{n_j}\right), & t > L_i \end{cases} \qquad (3)$$

where μi is the health index estimated based on the fault severity of the unit at
repair/replacement and 0 ≤ μi ≤ 1.
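A sketch of Eqs. (1)–(3) as a target-generating routine follows; the function name, the data layout (failure steps t_j with failure/at-risk counts) and the folding of Eqs. (2) and (3) into one branch via the health index μ are assumptions:

```python
import numpy as np

def km_target(t_steps, last_obs, failed, fail_times, risk_counts, mu=1.0):
    """Survival-probability training target for one historical unit.

    t_steps     : time-step indices at which the target is evaluated
    last_obs    : failure step T_i (complete unit) or last observed step L_i
    failed      : True for a complete (failed) unit
    fail_times  : steps t_j at which failures occurred in the population
    risk_counts : dict t_j -> (d_j, n_j), failures and units at risk at t_j
    mu          : health index at suspension (1.0 for Eq. 2, < 1 for Eq. 3)
    """
    s = np.ones(len(t_steps))
    for idx, t in enumerate(t_steps):
        if t < last_obs:
            continue                      # survival observed: target stays 1
        if failed:
            s[idx] = 0.0                  # Eq. (1): 0 from the failure step T_i on
        else:
            prod = 1.0
            for tj in fail_times:
                if last_obs <= tj <= t:
                    d_j, n_j = risk_counts[tj]
                    prod *= 1.0 - d_j / n_j   # Kaplan-Meier factor (Eqs. 2-3)
            s[idx] = mu * prod
    return s
```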

1.2.2 Failure PDF Estimation Based on Degradation Data

Let Yi(t) be the condition value for item i at operating age t and Y(t) a vector con-
taining the condition values from all of the m historical items in interval t:
$$Y(t) = [\,Y_1(t);\ Y_2(t);\ \ldots;\ Y_m(t)\,] \qquad (4)$$

The PDF of condition values at interval t is denoted as f(Y | t). The overall survival probability in the case considered is defined as the probability of condition indices not exceeding the failure threshold Y_thresh:

$$S(t) = \Pr[\,Y(t) < Y_{\mathrm{thresh}}\,] = \int_0^{Y_{\mathrm{thresh}}} f(Y \mid t)\, dY. \qquad (5)$$

The preceding equation shows that the reliability function can be estimated tak-
ing into account the mechanism of change in the condition of each historical item
(Figure 2).
To estimate the specific survival probability for each historical item i, we
successively multiply the probability of the items that have survived the preced-
ing intervals having condition indices higher than the observed index of item i
but lower than the threshold. We assume that the condition value, which re-
presents the degradation of the corresponding asset, will not decrease. This is
an assumption which will yield us a conservative estimate of survival probability.

Figure 2 Instantaneous reliability based on historical degradation processes (labels: condition-value PDF f(Y | t_j) at time t_j, failure threshold Y_thresh, failure PDF f(T | Y_thresh), probability of survival R(t))
Let k = 1, 2, …; then the conditional probability of an item i surviving interval t + kΔ is

$$
\begin{aligned}
&\Pr[\,T_i > t + k\Delta \mid Y_i(t + k\Delta) \ge y_{i,t+k\Delta},\ T_i > t,\ Y_i(t) \ge y_{i,t},\ \ldots\,] \\
&\quad = \prod_{j=1}^{k} \Pr[\,T_i > t + j\Delta \mid Y_i(t + j\Delta) \ge y_{i,t+j\Delta},\ T_i > t + (j-1)\Delta,\ Y_i(t + (j-1)\Delta) \ge y_{i,t+(j-1)\Delta},\ \ldots\,] \\
&\quad = \prod_{j=1}^{k} \frac{\Pr[\,T_i > t + j\Delta,\ Y_i(t + j\Delta) \ge y_{i,t+j\Delta} \mid T_i > t + (j-1)\Delta,\ Y_i(t + (j-1)\Delta) \ge y_{i,t+(j-1)\Delta},\ \ldots\,]}{\Pr[\,Y_i(t + j\Delta) \ge y_{i,t+j\Delta} \mid T_i > t + (j-1)\Delta,\ Y_i(t + (j-1)\Delta) \ge y_{i,t+(j-1)\Delta},\ \ldots\,]} \\
&\quad = \prod_{j=1}^{k} \frac{\Pr[\,y_{\mathrm{thresh}} > Y_i(t + j\Delta) \ge y_{i,t+j\Delta} \mid y_{\mathrm{thresh}} > Y_i(t + (j-1)\Delta) \ge y_{i,t+(j-1)\Delta},\ \ldots\,]}{\Pr[\,Y_i(t + j\Delta) \ge y_{i,t+j\Delta} \mid y_{\mathrm{thresh}} > Y_i(t + (j-1)\Delta) \ge y_{i,t+(j-1)\Delta},\ \ldots\,]} \\
&\quad = \prod_{j=1}^{k} \frac{\int_{y_{i,t+j\Delta}}^{y_{\mathrm{thresh}}} f(y \mid t + j\Delta)\, dy}{\int_{y_{i,t+j\Delta}}^{\infty} f(y \mid t + j\Delta)\, dy}, \qquad (6)
\end{aligned}
$$

where $\int_{y_{i,t+j\Delta}}^{y_{\mathrm{thresh}}} f(y \mid t + j\Delta)\, dy$ is the integral of the conditional PDF between the observed degradation index of item i and the threshold, and $\int_{y_{i,t+j\Delta}}^{\infty} f(y \mid t + j\Delta)\, dy$ is the integral of the conditional PDF over all possible values equal to or higher than the observed degradation index of item i.
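As a numerical illustration of Eq. (6), the product of integral ratios can be evaluated with a kernel density estimate of f(y | t + jΔ); the Gaussian KDE over pooled population values (a simplification of the conditional density of survivors) and all names here are assumptions of this sketch:

```python
import numpy as np
from scipy.stats import gaussian_kde

def degradation_survival(pop_values, y_obs, y_thresh):
    """Eq. (6): product over future intervals of
    P(y_obs <= Y < y_thresh) / P(Y >= y_obs).

    pop_values : list of arrays; entry j holds the population's condition
                 values at interval t + (j+1)*Delta
    y_obs      : item i's observed condition indices at those intervals
    y_thresh   : failure threshold on the condition value
    """
    prob = 1.0
    for vals, y in zip(pop_values, y_obs):
        kde = gaussian_kde(vals)                   # estimate f(y | t + j*Delta)
        num = kde.integrate_box_1d(y, y_thresh)    # observed index up to threshold
        den = kde.integrate_box_1d(y, np.inf)      # all values >= observed index
        prob *= num / den
    return prob
```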

1.2.3 Final Target Outputs for ANN Training

The final estimated survival probability for historical item i is the mean of the two survival probability estimates:

$$S_i(t) = \operatorname{mean}\bigl[\,S_{\mathrm{KM},i}(t),\ S_{\mathrm{PDF},i}(t)\,\bigr]. \qquad (7)$$
The training target vector for historical item i, denoted here by Di, consists of
the estimated survival probability in the h successive intervals:

$$D_i(t) = \begin{bmatrix} S_i(t + \Delta) \\ S_i(t + 2\Delta) \\ \vdots \\ S_i(t + h\Delta) \end{bmatrix}. \qquad (8)$$
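Assuming the two per-interval estimates for a unit are stored as equal-length arrays indexed by time step, assembling the Eq. (7) mean and slicing out the Eq. (8) target vector might look like this (names are illustrative):

```python
import numpy as np

def training_target(s_km, s_pdf, t_idx, h):
    """Target vector D_i(t): survival probabilities for the h intervals
    after time step t_idx, each entry the mean of the KM-based and
    degradation-PDF-based estimates (Eq. 7)."""
    s_final = 0.5 * (np.asarray(s_km) + np.asarray(s_pdf))   # Eq. (7)
    return s_final[t_idx + 1 : t_idx + h + 1]                # Eq. (8)
```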

During training, the input and target vectors of the training sets are repetitively
presented to the neural network. The network attempts to produce output values
which are as close as possible to the target vectors. After training, when a series of
condition indices at the current time t and d previous time steps

$$y(t) = \begin{bmatrix} Y(t) \\ Y(t - \Delta) \\ Y(t - 2\Delta) \\ \vdots \\ Y(t - d\Delta) \end{bmatrix} \qquad (9)$$

are fed into the input nodes, the network will produce an output vector

$$O(t) = \begin{bmatrix} \hat{S}(t + \Delta) \\ \hat{S}(t + 2\Delta) \\ \vdots \\ \hat{S}(t + h\Delta) \end{bmatrix}, \qquad (10)$$

which can be plotted as the survival curve for that unit, estimated at time t. As the
next set of input values becomes available, a new updated output vector will be
produced, generating a new survival probability curve.

2 Model Validation

2.1 Prognostic Modelling Using Industry Pump Vibration Data

Vibration data and failure/suspension records of centrifugal pumps at the Irving Pulp and Paper mill were used for training, testing and comparison of the proposed model and three other models. The centrifugal pumps used in this work
were Gould 3175L centrifugal pumps, which are used extensively for pumping the
various liquids used in paper making from one processing station to another.
These pumps operate 24 hours non-stop, except during the bi-annual maintenance
shutdowns. Vibration signals were collected at eight locations on the pump, before
being pre-processed into five frequency bands, an overall summary of the five
bands, and an acceleration value.
In this case study, 32 historical datasets were available (10 rolling element bear-
ing failures, 6 mechanical seal failures, 14 calendar suspensions – pumps still oper-
ating normally when the data were obtained – and 2 informative suspensions with
an estimated bearing health index of 0.5 and 0.4 respectively). As the failure mode
to be considered in this study is bearing failure, the six seal failure datasets were treated as suspended datasets. The seal failures did not affect the vibration readings and were found to be completely random. Using the Exakt covariate analysis
[4], the feature P1V_Par5, which corresponds to the 5× frequency band of the verti-
cal measurement at the problematic bearing end of the pump, was found to be most
significantly related to bearing degradation. The feature values were linearly inter-
polated so that the measurement points were equally spaced at 10 days. As the un-
even and sometimes scarce measurement intervals of the original datasets might
have affected data modelling quality, time steps were not grouped in intervals in this
test, i.e. 1 time step = 1 interval. Three of the 10 failure datasets were reserved as test
sets, and the remaining datasets were assigned for modelling and network training.
The FFNNs used for this real-life data analysis had 11 input nodes, 15 hidden
nodes and 5 output nodes (predicting 5 intervals ahead) and were trained with the
gradient descent algorithm with momentum backpropagation.
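A rough sketch of such a training configuration follows; the paper does not give the loss, activations or initialisation, so the sigmoid units, mean-squared-error loss and batch updates here are assumptions:

```python
import numpy as np

def train_momentum(X, D, n_hidden=15, lr=0.05, momentum=0.9, epochs=300, seed=0):
    """Gradient descent with momentum backpropagation for a one-hidden-layer
    FFNN (11 inputs, 15 hidden, 5 outputs in the case study).

    X : (n_samples, 11) windows of interpolated condition indices
    D : (n_samples, 5) survival-probability targets
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, D.shape[1])); b2 = np.zeros(D.shape[1])
    vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)                       # forward pass
        O = sig(H @ W2 + b2)
        dO = (O - D) * O * (1 - O)                 # MSE gradient through sigmoid
        dH = (dO @ W2.T) * H * (1 - H)
        grads = [X.T @ dH, dH.sum(0), H.T @ dO, dO.sum(0)]
        for i, (p, g) in enumerate(zip((W1, b1, W2, b2), grads)):
            vel[i] = momentum * vel[i] - lr * g / len(X)   # momentum update
            p += vel[i]                            # in-place parameter step
    return W1, b1, W2, b2
```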

2.2 Analysis of Prognostic Output

As the prediction output of the proposed model is survival probabilities, the exact
predicted failure times are not represented. For evaluation purposes, the predicted
failure time was identified by noting the first output unit which predicted a survival
probability of less than 0.5; each time step is 10 days. Table 1 shows the prediction
results of the first test set, in which the actual failure time was at t = 600 days. Survival probabilities for the first 11 time steps are not presented, as the pump was still in normal operation. Figure 3 shows the interpolated input data and the
graphical representation of predicted survival probability at selected time steps.
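The evaluation rule above — failure at the first interval whose forecasted survival probability drops below 0.5, with 10-day steps — can be sketched as (function name illustrative):

```python
def predicted_failure_time(outputs, t_now, delta=10, threshold=0.5):
    """Scan a forecasted survival curve [S(t+D), ..., S(t+hD)] and return the
    predicted failure time, or None if the curve stays at or above the
    threshold. delta is the measurement spacing (10 days in the case study)."""
    for n, s in enumerate(outputs, start=1):
        if s < threshold:
            return t_now + n * delta
    return None

# At t = 530 the fifth output (0.42) is the first value below 0.5,
# so failure is forecast for t = 530 + 5 * 10 = 580 days.
print(predicted_failure_time([0.61, 0.54, 0.51, 0.50, 0.42], t_now=530))  # 580
```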

Table 1 Prediction output of proposed model for test set 1 in Assessment I (survival probability in the 1st–5th subsequent intervals, Ŝk+1(t)–Ŝk+5(t); each time step = 10 days)

          t=110 t=120 t=130 t=140 t=150 t=160 t=170 t=180 t=190
Ŝk+1(t)   0.84  0.84  0.84  0.84  0.85  0.84  0.84  0.83  0.83
Ŝk+2(t)   0.83  0.83  0.83  0.83  0.84  0.84  0.83  0.82  0.82
Ŝk+3(t)   0.83  0.83  0.83  0.83  0.83  0.83  0.83  0.82  0.83
Ŝk+4(t)   0.82  0.82  0.82  0.82  0.83  0.83  0.82  0.82  0.82
Ŝk+5(t)   0.82  0.82  0.82  0.82  0.82  0.82  0.82  0.81  0.82

          t=200 t=210 t=220 t=230 t=240 t=250 t=260 t=270 t=280 t=290 t=300 t=310 t=320 t=330 t=340 t=350
Ŝk+1(t)   0.83  0.83  0.84  0.84  0.85  0.85  0.85  0.84  0.84  0.83  0.82  0.83  0.84  0.84  0.83  0.83
Ŝk+2(t)   0.83  0.83  0.83  0.84  0.84  0.85  0.84  0.84  0.83  0.82  0.81  0.82  0.83  0.83  0.82  0.82
Ŝk+3(t)   0.83  0.83  0.83  0.84  0.84  0.84  0.84  0.83  0.83  0.82  0.81  0.82  0.82  0.82  0.82  0.81
Ŝk+4(t)   0.82  0.82  0.82  0.83  0.83  0.84  0.83  0.83  0.82  0.81  0.80  0.81  0.81  0.81  0.81  0.81
Ŝk+5(t)   0.82  0.82  0.82  0.82  0.83  0.83  0.83  0.82  0.82  0.81  0.80  0.80  0.81  0.81  0.81  0.80

          t=360 t=370 t=380 t=390 t=400 t=410 t=420 t=430 t=440 t=450 t=460 t=470 t=480 t=490 t=500 t=510
Ŝk+1(t)   0.82  0.81  0.81  0.81  0.80  0.81  0.80  0.80  0.79  0.79  0.76  0.73  0.70  0.67  0.64  0.63
Ŝk+2(t)   0.81  0.81  0.81  0.80  0.80  0.79  0.79  0.78  0.77  0.75  0.71  0.67  0.64  0.62  0.61  0.60
Ŝk+3(t)   0.81  0.80  0.80  0.80  0.80  0.80  0.79  0.78  0.77  0.75  0.71  0.67  0.63  0.60  0.59  0.59
Ŝk+4(t)   0.81  0.80  0.80  0.80  0.79  0.79  0.78  0.76  0.75  0.74  0.70  0.66  0.62  0.58  0.57  0.55
Ŝk+5(t)   0.80  0.80  0.80  0.80  0.79  0.78  0.78  0.76  0.75  0.73  0.69  0.64  0.59  0.55  0.54  0.52

          t=520 t=530 t=540 t=550 t=560 t=570 t=580 t=590
Ŝk+1(t)   0.62  0.61  0.60  0.57  0.54  0.54  0.49  0.49   (failed
Ŝk+2(t)   0.57  0.54  0.53  0.51  0.44  0.44  0.44  0.42    at
Ŝk+3(t)   0.56  0.51  0.51  0.43  0.34  0.28  0.27  0.25    t = 600)
Ŝk+4(t)   0.51  0.50  0.42  0.40  0.25  0.22  0.24  0.26
Ŝk+5(t)   0.50  0.42  0.35  0.29  0.09  0.09  0.17  0.23

Figure 3 Graphical representation of prediction output by the proposed model at selected time
steps for test set 1 in Assessment I

The predicted survival probabilities closely matched the actual degradation trend. The survival probability was high and had a stable trend during earlier service of the bearing (subplots in Figure 3, operating age under 190 days). The survival probability began to drop at an increasing rate at around day 430, suggesting
the initiation of a defect. It can also be seen in Figure 3 that, although the vibration
RMS value temporarily stopped increasing at around t = 500 days and t = 560 days,
the survival probability was still forecasted to drop at an increasing rate. This
observation suggests that the prognostic model may have learned to capture the
non-linear relationship between the condition index and the actual health state of
the monitored item. This capability makes such a model much more robust than
models which directly use the condition index to represent the asset health.
However, when a survival probability of 0.5 was used as the failure threshold for
this study, the model underestimated the failure time. The first output with a value
below 0.5 was produced at t = 530 in the fifth row (0.42, highlighted in Table 1),
which means the bearing was forecasted to fail in the fifth next interval, i.e.
t = 580 days. However, the failure did not occur till t = 600 days. The error is considered small in relation to the whole lifetime of the bearing ((600 − 580)/600 ≈ 0.033, or 3.3 %). This underestimation, however, might be due to the fact that failed units in

training sets still have a certain amount of remaining useful life at replacement. This
short period of time discrepancy may have created a slight bias in the failure data
modelling. The bearing in this test set might have been run to a higher level of defect
severity before being replaced, and therefore its failure point occurred slightly later in its lifetime than the normal failure point that the proposed ANN had learned to recognise. In fact, test set 1 indeed shows a longer period of decreasing vibration RMS value at the end of the bearing life than the training sets do.
This observation may suggest that the bearing in test set 1 might indeed have been
left running to a higher stage of damage than the bearings in the failure training sets.

2.3 Model Comparison

The prediction results of the proposed model were compared with those of the
following models:
• FFNN with the same structure and training function but trained with the false
assumption that suspension times were failure times (Model A);
• FFNN with the same structure and training function but trained using only
complete failure datasets (Model B); and
• one-step-ahead time series prediction (Model C).
The test consisted of three assessments. In Assessment I, all 6 complete data-
sets and 16 suspended ones were made available for model training. In Assess-
ment II, only 3 complete training sets and the 16 suspended ones were used. In the
last assessment, only 1 complete training set and the 16 suspended training sets were used.
The prediction results of the proposed model were also compared with those of
a recurrent neural network (RNN) which approached machine health prognosis as
a time-series prediction (Model C). RNNs are the most commonly used artificial
intelligence prognostic models reported, such as in [20]. Based on the condition
values in the failure datasets, a threshold value of 0.6 was selected. The RNN
selected for comparison here is an Elman network which had a Levenberg–
Marquardt backpropagation training function and nine hidden nodes and predicted
one step ahead. This structure was selected based on the best trade-off between
structure complexity, prediction horizon length and prediction accuracy obtained
through a post-training regression analysis.
For comparison of the proposed model with Models A, B and C, we define a
penalty function which considers the mean prediction accuracy and the prediction
horizon of a prognostic model:

$$p(y) = \frac{1}{c} \left[ \sum_{j=1}^{c} p_g(y_j) \right] + p_h(y), \qquad (11)$$

where c is the number of test sets.



The prediction accuracy function p_g measures the discrepancy between the actual failure time T and the predicted failure time T̂ in each test set:

$$p_g(y) = \begin{cases} \alpha\,(T - \hat{T}), & \hat{T} < T \\ 0, & \hat{T} = T \\ \beta\,(\hat{T} - T), & \hat{T} > T \end{cases} \qquad (12)$$

where α and β are the penalty parameters for underestimation and overestimation of the failure time, and α < β since overestimation is worse than underestimation in failure time prediction.
The prediction horizon function ph subjects penalties to exponential decay as
the length of the horizon increases:

$$p_h(y) = e^{-\lambda h}, \qquad (13)$$

where λ is the decay constant.
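A minimal sketch of the penalty computation of Eqs. (11)–(13) follows; the defaults use the α, β and λ values assigned in this test, and the time units of T and T̂ (here left to the caller) are an assumption:

```python
import math

def penalty(pred_actual_pairs, horizon, alpha=0.1, beta=0.5, lam=0.2):
    """Eq. (11): mean per-test-set accuracy penalty plus horizon penalty.

    pred_actual_pairs : list of (T, T_hat) actual/predicted failure times
    horizon           : prediction horizon h of the model
    """
    def p_g(T, T_hat):                        # Eq. (12)
        if T_hat < T:
            return alpha * (T - T_hat)        # underestimation
        if T_hat > T:
            return beta * (T_hat - T)         # overestimation, penalised harder
        return 0.0
    acc = sum(p_g(T, Th) for T, Th in pred_actual_pairs) / len(pred_actual_pairs)
    return acc + math.exp(-lam * horizon)     # Eq. (13)
```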


In this test, α, β and λ were assigned values of 0.1, 0.5 and 0.2, respectively. The penalty points of the proposed model and Models A, B and C are
presented in Table 2.
The proposed model had the lowest penalty point in all three assessments.
Model A was greatly penalised as it underestimated the time to failures. The
performance of Model B was quite good in Assessments I and II, where there was
a reasonable amount of complete failure datasets available for training. However,
when there are only suspended data available for training, Model B was totally
incapable of performing prediction. Model C received relatively consistent high
penalty points due to its short prediction horizon. In view of the available com-
plete failure data, suspended data and incomplete data, it was to be expected that
the penalty rates would vary. Also, the time that the predicted degradation index
crossed the predetermined threshold did not match the failure time. The compari-
son suggests that the proposed model provides more accurate prediction output
than the other control models in all assessments.

Table 2 Penalty for the Four Models in Each Assessment

Assessment  Proposed   A (models       B (excludes      C (one-step-ahead
                       suspensions     suspensions      time-series
                       as failures)    from training)   prediction)
I           0.868      1.568           1.101            1.119
II          1.035      1.901           1.035            1.119
III         0.785      2.001           8.168            1.152

3 Conclusions

This paper presented a non-parametric approach to predicting the remaining useful life of individual assets based on both reliability and condition monitoring data.
The test results verified that the proposed model performed better than the tradi-
tional Weibull model, which is based solely on reliability data, and the RNN time-
series prediction, which only considers condition monitoring data.
This work presented an approach with the following aims:
1. to illustrate the potential power of addressing the neglect of suspended
lifetime data in machine prognostic model training;
2. to incorporate population characteristics in prognoses;
3. to enhance the output of a neural network by including survival probability
estimation to model, measure and manage risks in the non-deterministic chan-
ges in condition indices;
4. to provide real-time long-range prediction, taking advantage of statistical mod-
els’ ability to provide a useful representation of survival probabilities and of
neural networks’ ability to recognise the non-linear relationship between a ma-
chine component’s future survival condition and a given series of prognostic
data features;
5. to minimise assumptions (e.g. about physics model coefficient values, degrada-
tion patterns, underlying failure distributions, failure thresholds) in forecasting
asset health.
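As background to aims 1 and 4, the survival probabilities that underpin the proposed model can be estimated non-parametrically from a mix of failures and suspensions with the Kaplan–Meier product-limit estimator [21]. The sketch below is an illustrative reconstruction of that estimator, not the authors' implementation; the function name and data layout are our own.

```python
import numpy as np

def kaplan_meier(times, failed):
    """Kaplan-Meier survival curve from failure times and suspensions.

    times  : observed lifetimes (failure or suspension times)
    failed : True where the unit failed, False where it was suspended
    Returns (failure_times, survival_probabilities).
    """
    times = np.asarray(times, dtype=float)
    failed = np.asarray(failed, dtype=bool)
    surv = 1.0
    out_t, out_s = [], []
    for t in np.unique(times[failed]):        # distinct failure times
        d = np.sum((times == t) & failed)     # failures at time t
        n_at_risk = np.sum(times >= t)        # units still under observation
        surv *= 1.0 - d / n_at_risk           # product-limit update
        out_t.append(t)
        out_s.append(surv)
    return np.array(out_t), np.array(out_s)
```

With `times = [2, 3, 3, 5, 8]` and `failed = [True, True, False, True, False]`, the suspended units at 3 and 8 still contribute to the risk sets of earlier failures instead of being discarded, which is precisely the point of aim 1.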
The industrial case study results also verified that the proposed model performs
better than models which do not include suspended data and population character-
istics in their prognostic modelling. This work presented a compelling concept for
longer-range fault prognosis utilising available information more fully and accu-
rately. Future work includes applying the proposed model to real-life data with
varying machine operating conditions.

Acknowledgements The authors gratefully acknowledge the financial support from the QUT
Faculty of Built Environment and Engineering and the Cooperative Research Centre for Inte-
grated Engineering Asset Management (CIEAM). Thanks are also due to the Centre for Mainte-
nance Optimization and Reliability Engineering (C-MORE) at the University of Toronto and to
Irving Pulp and Paper for generously providing the pump data and contributing to the model
improvement.

References

[1] Goode KB, Moore J, et al (2000) Plant machinery working life prediction method utilizing
reliability and condition-monitoring data. Proc Inst Mech Eng 214:109–122
[2] Jardine AKS, Anderson M (1985) Use of concomitant variables for reliability estimation.
Maint Manage Int 5:135–140
Utilising Reliability and Condition Monitoring Data for Asset Health Prognosis 103

[3] Jardine AKS, Anderson PM, et al (1987) Application of the Weibull proportional hazards
model to aircraft and marine engine failure data. Qual Reliab Eng Int 3:77–82
[4] Banjevic D, Jardine AKS (2006) Calculation of reliability function and remaining useful life
for a Markov failure time process. IMA J Manage Math 17(2):115–130
[5] Sundin PO, Montgomery N, et al (2007) Pulp mill on-site implementation of CBM decision
support software. In: Proceedings of the international conference of maintenance societies,
Melbourne, Australia
[6] Wang W (2002) A model to predict the residual life of rolling element bearings given moni-
tored condition information to date. IMA J Manage Math 13(1):3–16
[7] Wang W, Zhang W (2005) A model to predict the residual life of aircraft engines based upon
oil analysis data. Naval Res Logist 52:276–284
[8] Heng A, Zhang S, Tan ACC, Mathew J (2009) Rotating machinery prognostics: state of the
art, challenges and opportunities. Mech Syst Signal Process 23:724–739
[9] Kothamasu R, Huang SH, VerDuin WH (2006) System health monitoring and prognostics –
a review of current paradigms and practices. Int J Adv Manuf Technol 28:1012–1024
[10] Jardine AKS, Lin D, Banjevic D (2006) A review on machinery diagnostics and prognostics
implementing condition-based maintenance. Mech Syst Signal Process 20:1483–1510
[11] Vlcek BL, Hendricks RC, Zaretsky EV (2003) Determination of rolling-element fatigue life
from computer generated bearing tests. Tribol Trans 46(4):479–493
[12] Groer PG (2000) Analysis of time-to-failure with a Weibull model. In: Proceedings of the
maintenance and reliability conference, Knoxville, TN, USA, pp 59.01–59.04
[13] Schomig A, Rose O (2003) On the suitability of the Weibull distribution for the approxima-
tion of machine failure. Proceedings of the conference on industrial engineering research,
Portland OR, June 2003
[14] Heng A, Tan ACC, Mathew J, Jardine AKS (2009) Intelligent condition based prediction of
machine reliability. Mech Syst Signal Process 23:1600–1614
[15] Li Y, Kurfess TR, Liang SY (2000) Stochastic prognostics for rolling element bearings.
Mech Syst Signal Process 14(5):747–762
[16] Qiu J, Set BB, Liang SY, Zhang C (2002) Damage mechanics approach for bearing lifetime
prognostics. Mech Syst Signal Process 16(5):817–829
[17] Roemer MJ, Byington CS, Kacprznski GJ, Vachtsevanos G (2005) An overview of selected
prognostic technology with reference to an integrated PHM architecture. Proceedings of
ISHEM forum, Napa Valley, CA, Nov 7–10, 2005, 65
[18] Huang R, Xi L, Li X, Richard Liu C, Qiu H, Lee J (2007) Residual life predictions for ball
bearings based on self-organizing map and back propagation neural network methods. Mech
Syst Signal Process 21:193–207
[19] Wang P, Vachtsevanos G (2001) Fault prognostics using dynamic wavelet neural networks.
Artif Intell Eng Des Anal Manuf 15:349–365
[20] Tse P, Atherton D (1999) Prediction of machine deterioration using vibration based fault
trends and recurrent neural networks. Trans ASME J Vibrat Acoust 121(3):355–362
[21] Kaplan EL, Meier P (1958) Nonparametric estimation from incomplete observations. J Am
Stat Assoc 53:457–481
Vibration-Based Wear Assessment
in Slurry Pumps

Girindra Mani, Dan Wolfe, Xiaomin Zhao and Ming J. Zuo

Abstract Centrifugal slurry pumps are widely used in various industries, includ-
ing Canada’s oil sands industry, to move mixtures of solids and liquids, typically
from mine sites to central processing facilities. In highly abrasive applications,
such as oil sand slurry, wear of wetted components is the main failure mode of the
pumps, and impellers are often the shortest-lived components. An accurate, non-
intrusive assessment of component wear in slurry pumps has yet to be developed.
This paper outlines a non-destructive vibration-based diagnosis platform built
on a novel hypothesis that a specific pattern of vibration – resulting from wear-
induced pressure pulsation alteration – can be observed and recorded. Specifically,
this method quantifies impeller vane trailing edge damage by analysing the ampli-
tude at the vane passing frequency (VPF) of vibration data. To counter data vari-
ability, we employ a combination of three approaches to analyse the acquired
vibration data according to the hypothesis.
First, a cumulative amplitude measure was evaluated from VPF amplitudes by
employing auto-scaling of time-domain vibration data followed by fast Fourier
transform (FFT). Second, an amplitude measure was evaluated from the first
component at VPF after utilizing principal component analysis (PCA) on mul-
tichannel time-domain data. Finally, an amplitude measure was evaluated from
the first component at VPF after utilizing PCA on frequency-domain data. It was

__________________________________
G. Mani
University of Alberta, Canada
D. Wolfe
Syncrude Research Centre, Canada
X. Zhao
University of Alberta, Canada
M.J. Zuo
University of Alberta, Canada
e-mail: ming.zuo@ualberta.ca

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information
Systems and Decision Models, Engineering Asset Management Review,
DOI 10.1007/978-1-4471-2924-0_5, © Springer-Verlag London Limited 2012

found that the final measure had great potential to be used for the identification
and estimation of impeller damage due to wear since its values followed the pro-
gression of the impeller damage. A viable wear assessment method based on this
platform can potentially be used to discern the extent of wear damage on a slurry
pump impeller.

Keywords Pumps, Wear detection, Maintenance, Signal processing

1 Introduction

Centrifugal slurry pumps are widely used in mining, ore processing, waste treat-
ment, cement production and other industries. In oil sands operations, they are
crucial in moving the raw material for bitumen extraction and tailings disposal.
Maintaining and extending their useful life is thus essential to the reliable opera-
tion of these processes. Slurry pumps are subject to wear due to the existence of
solid particles in the pumped media. Consequently, they require regular mainte-
nance throughout their life, in contrast to regular centrifugal pumps, which can last
for years between repairs. Even with scheduled maintenance, undetected wear of
wetted components can result in costly unscheduled outages of slurry pumps.
Unscheduled outages cost oil sand companies millions of dollars each year.
Sophisticated on-line assessment of the wear status of wetted components in
slurry pumps thus has the potential to generate significant cost savings for slurry
pump operators. Reported studies on slurry pumps focus on improvement of their
design and understanding of wear mechanisms. As reported in [2], in a case study
conducted for a 10 × 14 in. pump in a fluid catalytic cracking unit (FCCU), the
initial cost of a fully lined pump was higher compared to conventional American
Petroleum Institute (API) pumps, but over a 6-year evaluation life, the total cost
(capital cost plus maintenance, repair and replacement parts) was 45 % lower.
Engin [3, 4] has studied the effect of solids on the performance of slurry pumps.
Liu et al. [5] investigated the erosive wear of the impellers and liner of centrifugal
slurry pumps. They studied the eroded material surfaces of impellers and liners
with a scanning electron microscope (SEM).
Some research work has been reported that deals with the investigation of dif-
ferent wetted components. Ridgway et al. [6] consider the life cycle tribology of
the slurry pump gland seal. Slurry pumps are commonly used in mineral process-
ing to transport two-phase mixtures of liquids and solid particles. The authors
concluded that the particle properties significantly influenced seal failure. They
also developed a hypothesis on gland seal failure and wear in a slurry environ-
ment, discussed alternative methods to quantify the wear including empirical and
experimental approaches, and presented some preliminary results from the work.
Khalid and Sapuan [7] focused on impeller wear patterns. They fabricated a wear
testing rig for a water pump impeller and selected a parameter that could be used
to determine the wear of slurry pump impeller as a function of operating hours.

Their main findings were that (a) erosion is the dominant type of wear, (b) the
weight loss of an impeller is due to material removal from the impeller as a result
of erosive wear, (c) the diameter loss of an impeller is attributed to the impinge-
ment of solid particles on the impeller vane trailing edge, and (d) the surface to-
pography under a microscope indicates that the region near the centre (vane lead-
ing edge) of the impeller encounters less wear compared to the region at the rim
(vane trailing edge) of the impeller.
In spite of all these findings, relatively limited research has been conducted in
the development of condition monitoring of slurry pumps [1], particularly using
non-invasive techniques. In this paper, we present a non-destructive wear assess-
ment technique based on vibration monitoring for damage assessment of impel-
lers, specifically of the vane trailing edge. Vane trailing edge wear is one of the
most important wear modes in pump impellers. The technique is based on a novel
hypothesis that connects two different phenomena: (a) pressure pulsation altera-
tion due to trailing edge wear and (b) ensuing vibration response.

1.1 Pressure Pulsation, Ensuing Vibration and VPF Component

Let us first examine previous studies on the development of non-invasive
techniques for resolving pump issues using vibration signals and pressure pulsations.
Rodriguez et al. [8] presented a theoretical method to interpret the observed vi-
bration as a consequence of modulation in the amplitudes of the rotor-stator inter-
actions in a centrifugal pump; this method was used to modify pump design to
reduce vibration. Wang et al. [9] proposed a vibration-based fuzzy classification
method for fault diagnosis of a five-plunger pump. Abbot et al. [10] observed
vibration-contributing mechanisms such as acoustic resonance in a piping system.
Srivastav et al. [11] examined the effect of the radial gap between the impeller
and the diffuser on the vibration and noise in a centrifugal pump under different
flow conditions. They concluded that an increase in the radial gap between the
impeller and the diffuser reduced vibration and noise levels with little effect on
pump efficiency.
The work of Weissgerber et al. [12] was one of the earliest instances in which
trends in pressure pulsation were examined in terms of pump faults. They
concluded that the amplitude at the pump running frequency could be limited by
controlling unbalance, whereas vane pass pulsations could be controlled by ensuring
proper clearance between the blade tip and the casing cutwater on the pump. In a
design study, Hodkiewicz [13] concluded that the pressure pulsations at the pump
discharge decreased with an increase in the radial gap between the impeller and
the volute. Guo and Maruta [14] experimentally studied the pressure fluctuations
generated by the interactions between the impeller and the volute of a centrifugal
pump with the objective of improving centrifugal pump design.

Zbroja et al. [15] formulated an experimental method to examine pump acoustic
characteristics and concluded that these characteristics depended on the location
of the pump ports and the loop acoustics. In a related study, Morgenroth [16]
reported the results of an experimental study of the pressure pulsations produced by
a centrifugal volute pump at its VPF and their amplification by acoustic resonance
in a connected piping system, and concluded that rounding the cutwater reduced
the amplitude of the acoustic resonance.

1.2 Hypothesis of This Work

Our hypothesis bridges the gap between the knowledge gained from earlier pump
research and a possible method for unobtrusive analysis of impeller wear patterns
in slurry pumps. In particular, the studies of Srivastav [11] and Hodkiewicz [13]
discussed above are relevant here. Both studies – one using vibration analysis and
the other using pressure pulsation – focused on improving pump design in terms
of the radial gap between the impeller and the volute.
In our study here, we hypothesize that vane trailing edge wear of the impeller –
a very common form of wear in slurry pumps – will cause an effective increase of
‘periodic’ radial gap between the impeller and the volute. The term ‘periodic’
refers to the VPF. This increase will cause flow alteration, leading to a reduction
of pressure pulsations at the VPF, which in turn will manifest in the outside meas-
ured vibrations. Therefore, we expect a reduction in amplitude of the VPF compo-
nent in the frequency domain when trailing edge wear occurs. Note that we as-
sume all the vanes/blades of the impeller will experience identical amounts of
damage simultaneously.

1.3 Summary of This Work

The primary aim of this work was to develop a non-invasive technique for wear
assessment of slurry pump components that could be easily implemented while the
pumps are in service. It has been well established that machinery damage or de-
fects often manifest in vibrations. Most studies of machinery vibrations focus on
vibrations generated by mechanical damage in components such as bearings,
shafts or seals. Fluid interaction with mechanical components is an additional
aspect of pumps that can have an impact on perceived vibration from outside the
impeller casing, and this is the focus of this paper. The slurry pump monitored in
the experiments presented was run with a series of impellers with different levels
of artificially created wear. The damage progression levels are considered to be
slight, moderate and severe. The vibration data are measured in a non-intrusive
manner by sensors installed at three different locations outside the pump. Ampli-
tude measures are evaluated from vane pass frequency amplitudes by employing
three different approaches.

The remainder of this paper is structured as follows. In Section 2, we describe
the experiments conducted for data collection under different degrees of impeller
wear. In Section 3, the proposed approach is thoroughly described. Analysis results
and discussions are given in Section 4. Conclusions are provided in Section 5.

2 Experimental Procedure for Data Acquisition

The experimental system for this study enabled pump speed, flow rate, slurry
density and inlet pressure to be controlled while using wetted components with
various levels of damage. The collected data include, e.g., vibration, acoustic,
pressure, flow rate and motor current. However, the focus of this paper is vibration
signal analysis.

2.1 Experimental Setup

A state-of-the-art experimental setup [17] was established, consisting of
components that can be divided into seven major categories: (i) slurry pump:
Weir/Warman 3/2 CAH slurry pump with impeller C2147 (8.4" in diameter and
5 vanes); (ii) 40 HP drive motor complete with variable frequency drive; (iii) data
acquisition system: a 12-channel National Instruments SCXI system; (iv) PLC
control panel: designed to control and monitor system operation; (v) sensors: two
thermocouples, one microphone, three tri-axial accelerometers, two pressure sen-
sors for inlet and outlet and a differential pressure sensor for flow rate measure-
ment; (vi) computer: a Dell Inspiron 9200 laptop computer for data collection via
Labview; (vii) other: inlet pressure control tank, sand addition tank, safety rupture
disk, various valves, pipes and glycol cooling system.
Figure 1 Schematic of pump loop

Figure 2 Locations of accelerometers

A three-dimensional schematic drawing of the test loop is shown in Figure 1
with key components identified. The locations of the accelerometer sensors are
shown in Figure 2. Each of these accelerometers senses vibrations in three axes,
resulting in a total of nine vibration signals. A description of the detailed locations
of all sensors, valves and other components is not relevant in the context of this
paper and is therefore omitted.

2.2 Wear Types and Levels

Based on an examination of wear patterns on impellers removed from field slurry
pumps, it has been observed that trailing edge vane damage is a common type of
impeller damage and has a large impact on pump performance and eventual failure.
Therefore, it was decided to focus on this type of damage. The damage profiles
produced in the lab impellers were designed to mimic the observed wear patterns
of worn field impellers. The vane length of an undamaged lab impeller is
approximately 12 cm. Three levels of trailing edge damage – slight, medium and
severe – were fabricated as shown in Figure 3. As illustrated in the figure, 5 mm
of vane material was removed to create the slight damage level, 10 mm to create
the medium damage level and, finally, 15 mm to create the severe damage level.

Figure 3 Schematic of trailing edge vane damage levels (Aulakh and Wu, 2006)1

2.3 Procedure to Acquire Vibration Data

Procedures were documented and strictly followed during the experiments
to ensure reproducibility.
a) System preparation: First, the necessary valves were opened and the seal water
pump was turned on. Next, the slurry pump was turned on and sand was added at a
minimum flow rate of 150 USGPM. Sand was added until the slurry density
reached the target value of 1.17 kg/L. The system was then run at a steady rate un-
til all significant entrained air had escaped, at which point data could be collected.
b) Data acquisition: Process parameters were collected for pump speeds from
1200 to 3200 RPM in 200-RPM increments. The process parameters included
pump speed, motor horsepower, pump inlet and outlet pressure, pump outlet
flow rate, and inlet and outlet slurry temperature. Vibration data were collected
at 1800, 2200, and 2600 RPM. One 5-min data sample was collected for each
case, at a sampling frequency of 9 kHz.
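For reference, the vane passing frequency used throughout the analysis follows directly from the impeller geometry and shaft speed (VPF = number of vanes × shaft rotation frequency). A trivial sketch, not part of the experimental software:

```python
N_VANES = 5  # impeller C2147 has 5 vanes (Section 2.1)

def vane_pass_frequency(rpm, n_vanes=N_VANES):
    """Vane passing frequency in Hz: vane count times shaft rotation frequency."""
    return n_vanes * rpm / 60.0

for rpm in (1800, 2200, 2600):
    print(f"{rpm} RPM -> VPF = {vane_pass_frequency(rpm):.1f} Hz")
    # -> 150.0, 183.3 and 216.7 Hz respectively
```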

3 Signal Processing

To validate the hypothesis proposed in this paper, the vibration signals obtained
from experiments were numerically processed in the time and frequency domains
to evaluate measures that are representative of impeller wear in a slurry pump. This
procedure comprised a number of stages, as depicted in Figure 4. We employed a
combination of three approaches to analyse the data. Because the system was very
complex and considerable data variability was expected, combining multiple
approaches was expected to yield superior wear identification and estimation.

1
Amit S Aulakh and Siyan Wu, Slurry Pump CBM Project, Progress Report 35 (09), Syncrude
Canada Ltd., Edmonton, Alberta, Canada, August 21, 2006.
[Figure 4 flow chart: experiment (acquire multichannel vibration data) →
preprocessing (filter/normalize) → three parallel branches: cumulative VPF
monitoring (transform to the frequency domain and add the values from each
signal), time-domain PCA-based VPF monitoring (apply PCA and use the first PC
to get the VPF amplitude) and frequency-domain PCA-based VPF monitoring (apply
PCA on frequency-domain data and use the first PC to get the VPF amplitude) →
perform confidence analysis → make decision]

Figure 4 Flow chart of signal processing procedure

3.1 Cumulative VPF Monitoring


In this approach, the vibration data were normalized (sometimes referred to as
‘auto-scaling’) [18] according to the following equation:

   x̂ = (x − μ) / σ ,   (1)

where x is the experimentally acquired (original) data, x̂ is the normalized data,
μ is the mean of the original data and σ is the standard deviation of the original
data. Normalization was performed to nullify any deviation due to experimental
uncertainty and ambient interference. It ensured that the energy of all signals
would be the same, which allowed consistent comparisons of the different cases.
Essential inherent features could then be extracted in the frequency domain.

Figure 5 Frequency components of slurry pump vibration signal (undamaged impeller,
1800 RPM, sensor 1, x direction)
Next, the vibration data were transformed into the frequency domain via the
FFT, and the amplitudes of the vane pass frequency were recorded. An example of
this transformation is illustrated in Figure 5. A cumulative measure was created by
summing the vane pass frequency amplitudes for each of the nine vibration sig-
nals. Finally, this cumulative measure for each damaged impeller was compared
with the baseline case of an undamaged impeller. As noted in the introduction, this
measure was expected to decrease with increased impeller wear.
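The cumulative measure just described – auto-scale each channel with Eq. (1), transform with the FFT and sum the amplitudes at the VPF bin over the nine signals – can be sketched as below. This is our own illustrative reconstruction with hypothetical names, not the authors' code.

```python
import numpy as np

def cumulative_vpf_amplitude(signals, fs, vpf_hz):
    """Sum the VPF spectral amplitude over all vibration channels.

    signals : (n_channels, n_samples) time-domain vibration data
    fs      : sampling frequency in Hz (9 kHz in the experiments)
    vpf_hz  : vane passing frequency in Hz
    """
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - vpf_hz))           # bin closest to the VPF
    total = 0.0
    for x in signals:
        x_hat = (x - x.mean()) / x.std()            # Eq. (1): auto-scaling
        amp = 2.0 * np.abs(np.fft.rfft(x_hat)) / n  # single-sided amplitude
        total += amp[k]
    return total
```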

3.2 Time-domain PCA-based VPF Monitoring

PCA is central to the study of multivariate data and is extremely versatile with
applications in many disciplines [19]. PCA continues to be the subject of much
research, ranging from new model-based approaches to algorithmic ideas from
neural networks. PCA has found application in fields such as face recognition and
image compression and is a common technique for finding patterns in data of high
dimension. Since such patterns can be hard to find when the luxury of graphical
representation is not available, PCA is a powerful tool for analysing this type
of data.
The steps followed to calculate the principal components are as follows:

a) Step 1: Make the mean of the acquired data zero in all dimensions.
b) Step 2: Calculate the covariance matrix. For an n-dimensional data set (an
   n × T matrix, T being the number of time indices), the covariance matrix will
   be an n × n matrix.
c) Step 3: Calculate the eigenvectors and eigenvalues of the covariance matrix.
   The highest eigenvalue represents the most significant principal component
   (PC). The eigenvectors corresponding to significant eigenvalues can be used
   to derive a new data set in a new orthogonal co-ordinate system.
d) Step 4: Derive the new data set using the ‘significant’ eigenvectors and the
   original data set.
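Steps 1–4 above can be condensed into a few lines of linear algebra. The function below is an illustrative sketch (our naming, not the authors' code) that returns the PC time series as rows, most significant first:

```python
import numpy as np

def principal_components(data):
    """data: n x T multichannel matrix.

    Returns (pcs, eigvals): PC time series as rows, sorted so the first row
    is the most significant component, plus the matching eigenvalues.
    """
    centered = data - data.mean(axis=1, keepdims=True)  # Step 1: zero mean
    cov = np.cov(centered)                              # Step 2: n x n covariance
    eigvals, eigvecs = np.linalg.eigh(cov)              # Step 3: eigendecomposition
    order = np.argsort(eigvals)[::-1]                   # largest eigenvalue first
    pcs = eigvecs[:, order].T @ centered                # Step 4: project the data
    return pcs, eigvals[order]
```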

Trendafilova et al. [20] used PCA for feature selection using frequency-domain
vibration data in an effort to detect faults in aircraft wings. Huang [21] used PCA
simply for data visualization to recognize patterns (for example, temperature at
different locations of a furnace); the idea was to visualize multivariate data as a
surface that in turn can be decomposed with PCA. Deng et al. [22] used PCA for
the detection of landscape changes over time.

Figure 6 Vibration data: 1800 RPM, undamaged impeller: (a) sensor 1, x direction, (b) sen-
sor 1, y direction, (c) sensor 1, z direction, (d) sensor 2, x direction, (e) sensor 2, y direction,
(f) sensor 2, z direction, (g) sensor 3, x direction, (h) sensor 3, y direction, and (i) sensor 3,
z direction
In view of measurement variability, we believe PCA can be very useful to de-
termine patterns from multichannel vibration data. As discussed earlier, nine-dim-
ensional vibration data (three sensors, three directions) are collected using the
same experiment from three locations on the surface of the slurry pump. The first
approach described earlier in this section takes into account the overall effect of
the multichannel data. By utilizing PCA, we intend to capture the essential pattern
of the data set, and so we consider only the most significant PC.
Figure 6 shows the vibration data acquired from the experimental system for
the 1800-RPM case with an undamaged impeller. After application of PCA, the
nine-dimensional data give rise to another nine-dimensional data set as depicted
in Figure 7. The data shown in Figure 7a are the most significant PC and the
subsequent data shown are in decreasing significance. The frequency-domain
transformation of these components is shown in Figure 8, which makes it clear
that the main frequency-domain features, such as VPF, are the highest for the first
component. In this approach, the amplitude of the VPF of the first component
will be monitored.
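In code, this time-domain PCA monitor amounts to projecting the nine channels onto the most significant PC and reading the VPF bin of that component's single-sided spectrum. A self-contained sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def first_pc_vpf_amplitude(signals, fs, vpf_hz):
    """VPF amplitude of the most significant PC of (n_channels, n_samples) data."""
    centered = signals - signals.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered))
    first_pc = eigvecs[:, np.argmax(eigvals)] @ centered   # most significant PC
    n = first_pc.size
    amp = 2.0 * np.abs(np.fft.rfft(first_pc)) / n          # single-sided spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return amp[np.argmin(np.abs(freqs - vpf_hz))]          # amplitude at the VPF
```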

Figure 7 Application of PCA on vibration data: 1800 RPM, undamaged impeller: (a) first
principal component (PC), (b) second PC, (c) third PC, (d) fourth PC, (e) fifth PC, (f) sixth PC,
(g) seventh PC, (h) eighth PC, and (i) ninth PC

Figure 8 Frequency-domain response of components after application of PCA on time-domain
vibration data: (a) first principal component (PC), (b) second PC, (c) third PC, (d) fourth PC,
(e) fifth PC, (f) sixth PC, (g) seventh PC, (h) eighth PC, and (i) ninth PC

3.3 Frequency-domain PCA-based VPF Monitoring

This approach is similar to the time-domain PCA approach described in
Section 3.2, except that PCA is applied in the frequency domain rather than in the
time domain. Again, we consider only the most significant PC. Figure 9 shows
the PCs obtained from applying PCA to the frequency-domain data derived from
the time series shown in Figure 6. It is clear that the major frequency-domain
features, such as the VPF, are the highest for the first component. As in the
time-domain PCA approach, the amplitude of the VPF of the first component will
be monitored in this approach.
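The only change from the time-domain variant is the order of operations: each channel is first transformed into an amplitude spectrum, and PCA is then applied across the nine spectra. A hedged sketch (our naming, not the authors' code):

```python
import numpy as np

def freq_pca_vpf_amplitude(signals, fs, vpf_hz):
    """VPF amplitude of the first PC of the per-channel amplitude spectra."""
    n = signals.shape[1]
    spectra = 2.0 * np.abs(np.fft.rfft(signals, axis=1)) / n  # one spectrum per channel
    centered = spectra - spectra.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered))
    first_pc = eigvecs[:, np.argmax(eigvals)] @ centered      # most significant PC
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return abs(first_pc[np.argmin(np.abs(freqs - vpf_hz))])   # sign-invariant amplitude
```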
Figure 9 Application of PCA on frequency-domain responses of acquired vibration data:
(a) first principal component (PC), (b) second PC, (c) third PC, (d) fourth PC, (e) fifth PC,
(f) sixth PC, (g) seventh PC, (h) eighth PC, and (i) ninth PC

4 Results and Discussions

This study focused on a specific spectral component of the vibration signals, at
the vane pass frequency, that, according to our hypothesis, can indicate trailing
edge vane damage. This is somewhat counter-intuitive because, while the overall
vibration generally increases with damage, the amplitude at the VPF may actually
decrease with wear. The VPF component, along with other frequency contents, is
shown in Figure 5 for a test run at 1800 RPM with an undamaged impeller.
In this first approach, the amplitude of the peak at the VPF was obtained for
each of the nine vibration signals acquired for each test scenario: 1800, 2200 and
2600 RPM. Note that a test scenario involves a run with a specific impeller
(e.g. the undamaged impeller) and a specific pump speed (e.g. 1800 RPM). The
amplitude values of these nine signals are shown in Figure 10a.
Figure 10 Amplitude of vane pass frequency component for 1800 RPM: (a) all nine signals
from the three sensors, and (b) cumulative amplitude

Figure 11 Cumulative amplitude of VPF component for different pump speeds
These values were then added to reduce variability, thereby obtaining the ‘cumula-
tive amplitude’ measure as depicted in Figure 10b. The amplitude values of dam-
aged cases and baseline cases (cases with undamaged impellers) indicate that the
trend is quite consistent. The trend can be seen even more clearly in the plot of the
cumulative amplitude measures (Figure 10b). The trend shows that a pump with a
worn impeller can clearly be discerned from one with an undamaged impeller.
This finding was validated by testing the signal processing procedure on data
collected at different pump speeds, as illustrated in Figure 11.
In the time-domain PCA approach, the amplitude of the peak VPF was obtained
for the most significant PC calculated for each test scenario. The result (Figure 12)
clearly shows the expected decreasing trend, except for the 2200-RPM case with a
severely worn impeller, which increases slightly from the moderately worn
impeller case. However, lower-level wear (undamaged or slight) can easily be
discerned from higher-level wear (moderate or severe). Frequency-domain PCA
approach results are shown in Figure 13. Similar observations can be made here.
In this case, the value for severely worn impeller cases is slightly more than that
for moderately worn impeller cases for both 2200 and 2600 RPM. In Figures 11–13,
we are unable to obtain strictly monotonic trends because the vibrations are
generated by complex fluid and impeller interactions. However, the roughly
monotonic trends still provide useful indications of impeller damage growth.

Figure 12 Time-domain PCA application – VPF amplitude of first principal component for dif-
ferent pump speeds
120 G. Mani et al.

[Figure 13: vertical axis Amplitude (0.1–0.9); curves for 1800, 2200 and 2600 RPM; horizontal axis: Undamaged, Slight, Moderate, Severe]

Figure 13 Frequency-domain PCA application – VPF amplitude of first principal component for different pump speeds

In Figures 14–16, the VPF amplitudes of the damaged cases are plotted and normalized with respect to the undamaged case. The first approach is depicted in Figure 14, where the average and standard deviation are illustrated for the cumulative amplitude approach. The average values over all pump speeds show a reduction in cumulative amplitude of 20 % for slight damage, 60 % for moderate damage and 64 % for severe damage. In Figure 15, the average and standard deviation are shown for the second approach, the time-domain PCA application. In this case, the average values show a reduction in amplitude of 20 % for slight damage and approximately 70 % for moderate and severe damage. In Figure 16, the average and standard deviation are shown for the third approach, which applied frequency-domain PCA. The observations in this third case are very similar to those for the second approach.

[Figure 14: bar chart; vertical axis Percentage Amplitude Reduction (0–100 %); horizontal axis: Undamaged, Slight, Moderate, Severe]
Figure 14 Cumulative amplitude reduction as wear progresses; each bar represents the average of VPF values at all speeds; each vertical line represents the standard deviation of those values
[Figure 15: bar chart; vertical axis Percentage Amplitude Reduction (0–100 %); horizontal axis: Undamaged, Slight, Moderate, Severe]
Figure 15 Time-domain PCA application – reduction of VPF amplitude of first PC as wear progresses; each bar represents the average of VPF values at all speeds; each vertical line represents the standard deviation of those values

[Figure 16: bar chart; vertical axis Percentage Amplitude Reduction (0–100 %); horizontal axis: Undamaged, Slight, Moderate, Severe]
Figure 16 Frequency-domain PCA application – reduction of VPF amplitude of first PC as wear progresses; each bar represents the average of VPF values at all speeds; each vertical line represents the standard deviation of those values

These three approaches clearly demonstrate that trailing edge damage has a profound effect on the sand/fluid flow and alters a specific component of the vibration of the system. This specific component is the VPF component, as predicted by our hypothesis. The VPF component can be monitored to identify the extent of wear on the vane trailing edge. In terms of estimation, higher-level damage can be clearly distinguished from lower-level damage by a significantly diminished VPF amplitude.
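The baseline normalization behind Figures 14–16 is simple to reproduce. The sketch below uses made-up amplitude values, not the measured data, purely to illustrate how the percentage reduction and its average and standard deviation across speeds are formed:

```python
import numpy as np

# Hypothetical VPF amplitudes: rows are pump speeds, columns are wear levels
# (undamaged, slight, moderate, severe). Values are illustrative only.
amplitudes = np.array([
    [1.90, 1.55, 0.75, 0.70],   # 1800 RPM
    [1.20, 0.95, 0.50, 0.42],   # 2200 RPM
    [0.60, 0.48, 0.24, 0.22],   # 2600 RPM
])

# Normalize each speed by its own undamaged baseline and express the drop
# as a percentage: 0 % for undamaged, growing with wear severity.
reduction = 100.0 * (1.0 - amplitudes / amplitudes[:, :1])

mean_reduction = reduction.mean(axis=0)   # bar heights: average over speeds
std_reduction = reduction.std(axis=0)     # error bars: spread over speeds
print(np.round(mean_reduction, 1))        # approx. [0, 19.8, 59.6, 63.8]
```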

5 Conclusion

In this study, a non-invasive vibration-based platform for identifying a specific wear type of a slurry pump impeller was reported. The wear type studied, vane trailing edge damage, is one of the most common types of wear in slurry pumps.
The experimental technique was based on a hypothesis that trailing edge wear
induces an effective increase in the gap between impeller and volute that alters
vibration patterns in a specific manner. Specifically, the alteration is the reduction
of the VPF component. The technique utilized a combination of three approaches
to analyse the VPF component extracted from experimentally obtained vibration
signals from the pump casing. The effectiveness of the procedure was demon-
strated using three pump speeds: 1800, 2200 and 2600 RPM. The analysis sup-
ports our hypothesis and can be summarized as follows:
a. Damage due to trailing edge wear on impeller vanes has a significant effect on
the vibration spectrum of a slurry pump. This effect can be attributed to the
change in pressure pulsation due to progressive shortening of the impeller
vanes and, therefore, widening of the impeller vane to cutwater gap.
b. The intensity of pressure pulsations decreases as the length of vanes is reduced,
which manifests in a reduced amplitude of the VPF component in the vibration
spectrum. This phenomenon is specific to the VPF and cannot be extended to
other frequencies such as the pump rotating frequency.
c. The amplitude of the VPF spectral component steadily decreases with the
growth of trailing edge impeller vane damage.
d. From all three approaches, it is clear that undamaged or slight wear cases can
easily be distinguished from cases of high-level wear (i.e. moderate or severe
wear).
e. Our future work will include experimental measurements of pressure pulsation
at the pump discharge and numerical simulations of the pump flow field with
undamaged and worn impellers.

References

[1] Volk MW (2005) Pump characteristics and applications, 2nd edn. CRC, Boca Raton, FL
[2] Orchard B, Moreland C, Warne C (2007) Optimizing the working life of hydrocarbon
slurry pumps. World Pumps 492:50–54

[3] Engin T, Gur M (2003) Comparative evaluation of some existing correlations to predict
head degradation of centrifugal slurry pumps. J Fluids Eng 125:149–157
[4] Engin T (2007) Prediction of relative efficiency reduction of centrifugal slurry pumps:
empirical- and artificial-neural network-based methods. J Power Energy A Proc Inst Mech
Eng 221:41–50
[5] Liu J, Xu H, Qi L, Li H (2004) Study on erosive wear and novel wear-resistant materials
for centrifugal slurry pumps. In: Proceedings of the ASME conference on heat trans-
fer/fluids engineering, 11–15 July 2004, Charlotte, NC
[6] Ridgway N, O’Neill B, Colby C (2005) The life cycle tribology of slurry pump gland seals.
In: 18th international conference of fluid sealing, 12–14 October 2005, Antwerp, Belgium
[7] Khalid YA, Sapuan SM (2007) Wear analysis of centrifugal slurry pump impellers. Ind
Lubricat Tribol 59(1):18–28
[8] Rodriguez CG, Egusquiza E, Santos IF (2007) Frequencies in the vibration induced by the
rotor stator interaction in a centrifugal pump turbine. J Fluids Eng 129:1428–1435
[9] Wang J, Hu H (2006) Vibration-based fault diagnosis of pump using fuzzy technique.
Measurement 39:176–185
[10] Abbot P, Gedney C, Morton D, Celuzza S, Dyer I, Ehlers P, Vaicaitis R, Brown J,
Guinzburg A, Hodgson W (2000) Vibration and acoustic evaluation of a large centrifugal
wastewater pump, Part 1: Background and experiment. American Society of Mechanical
Engineers, Noise Control and Acoustics Division (Publication) NCA 27:243–252
[11] Srivastav OP, Pandu KR, Gupta K (2003) Effect of radial gap between impeller and dif-
fuser on vibration and noise in a centrifugal pump. J Inst Eng India Mech Eng Div
84(1):36–39
[12] Weissgerber C, Day MW (1980) Reduction of pressure pulsations in fan pumps. TAPPI
63(4):143–146
[13] Hodkiewicz MR, Norton MP (2002) The effect of change in flow rate on the vibration of
double-suction centrifugal pumps. Proc Inst Mech Eng E J Process Mech Eng 216:47–58
[14] Guo SJ, Maruta Y (2005) Experimental investigations on pressure fluctuations and vibra-
tion of the impeller in a centrifugal pump with vaned diffusers. JSME Int J Ser B Fluids
Thermal Eng 48(1):136–143
[15] Rzentkowski G, Zbroja S (2000) Experimental characterization of centrifugal pumps as an
acoustic source at the blade-passing frequency. J Fluids Struct 14:529–558
[16] Morgenroth M, Weaver DS (1998) Sound generation by a centrifugal pump at blade pass-
ing frequency. J Turbomach Trans ASME 120(4):736–743
[17] Mani G, Wolfe D, Zhao X, Zuo MJ (2008) Slurry pump wear assessment through vibration
monitoring. In: Proceedings of WCEAM-IMS, 27–30 October, Beijing, China
[18] Sohn H, Farrar CR (2001) Damage diagnosis using time series analysis of vibration sig-
nals. Smart Mater Struct 10:446–451
[19] Jolliffe IT (2002) Principal component analysis, 2nd edn. Springer Series in Statistics. Springer, New York
[20] Trendafilova I, Cartmell MP, Ostachowicz W (2008) Vibration-based damage detection in
an aircraft wing scaled model using principal component analysis and pattern recognition.
J Sound Vibrat 313:560–566
[21] Huang X (2008) Visualizing principal components analysis for multivariate process data.
J Qual Technol 40(3):299–309
[22] Deng JS, Wang K, Deng YH, Qi GJ (2008) PCA-based land-use change detection and
analysis using multitemporal and multisensor satellite data. Int J Remote Sens
29(16):4823–4838
The Concept of the Distributed Diagnostic
System for Structural Health Monitoring
of Critical Elements of Infrastructure Objects

Jedrzej Maczak

Abstract In civil engineering structural health monitoring, various methods of technical state assessment are used on the basis of comparative dynamic, tensometric, magnetic and fibre-optic measurements. All these measurement methods
allow for stress assessment in critical fragments of structures which are vital for
the structures’ stability and durability. The evolution of defects in construction
causes measurable changes in dynamic properties along with changes in stress
distribution in critical construction joints. Additionally, materials in which fatigue wear, exceeded stress limits or the emergence of plastic deformations could threaten a catastrophe have magnetic properties that affect the local magnetic field. The latter seems to be a very promising way of assessing
global stress in ferromagnetic materials. In this paper, the concept of a distributed
diagnostic system capable of monitoring the technical state of critical elements of
large infrastructure objects like bridges, steel trusses, supermarket buildings and
exhibition halls will be discussed. Adaptation of such systems is vital for on-line
assessment of the technical state of infrastructure objects and could limit the pos-
sibility of catastrophic disasters resulting in the loss of human life.

Keywords Monitoring systems, Structural health monitoring

1 Introduction

In recent years, around the world, an increasing number of large-scale objects have been built, such as bridges, supermarkets, exhibition halls and warehouses. Such

__________________________________
J. Maczak
Institute of Automotive Engineering, Poland
e-mail: jma@mechatronika.net.pl

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information 125


Systems and Decision Models, Engineering Asset Management Review,
DOI 10.1007/978-1-4471-2924-0_6, © Springer-Verlag London Limited 2012

structures have demonstrated a tendency toward increased size and construction surface load. Additionally, designers are being pushed to reduce the costs of the
constructed objects while simultaneously increasing the variety of architectonic
concepts. This has led to a growing number of catastrophic accidents with casual-
ties including loss of human life. Among the biggest accidents of recent years, one could cite the following, all caused by snowfall:
• Bad Reichenhall, Germany, 2006 (swimming pool roof collapse, 15 dead,
32 injured);
• Katowice, Poland, 2006 (exhibition hall roof collapse, 65 dead, 170 injured);
• Moscow, 2004 (Transvaal Park swimming pool, 28 dead, 110 injured);
• Moscow, 2006 (market hall collapse, 65 dead, 32 injured).
Also worth noting are the collapses of two air terminals, in Paris (2004) and on the Spanish island of Minorca (2006). All these events indicate a
need to develop new methods of assessing the technical state of such objects.
Although many investigative methods exist which permit the detection and
definition of structural failures (damages), the above-mentioned catastrophes oc-
curred unexpectedly, causing danger to people in addition to enormous material
losses. There is clearly a lack of procedures in place for unequivocally determin-
ing the soundness of buildings and the amount of time likely to pass before the
next disaster. Such procedures would protect structures by allowing a timely ap-
plication of appropriate repair methods, thereby minimizing the possibility of
tragic accidents involving the loss of human life.
Currently, monitoring systems are used only occasionally on new large bridge
structures like suspension bridges which could suffer damage during extreme
weather conditions. The main cause of this situation is the lack of diagnostic pro-
cedures allowing on-line diagnosis of the technical condition of structures. Such
diagnostic systems are generally not installed on smaller structures due to funding
considerations and lack of reliable diagnostic methods which would allow for a
global assessment of the structures’ technical condition. The commonly used
methods of determining the technical state of structures usually focus on searching
for cracks and material heterogeneities and assessing concrete or steel degradation,
which does not allow for an assessment of stress in prestressed concrete or steel
structures. Such methods are limited to periodic maintenance strategies and are
thus not suitable for on-line diagnosis.
Objects such as those mentioned previously are created using a variety of tech-
nologies. Some are light steel structures (e.g. warehouses), others are structures made
from prestressed concrete. In every case, the methods of early defect detection
should take into account the differences in construction technologies and allow for
an assessment of the construction load (for example, from snow lying on the roof
or blowing wind) and thus an assessment of internal stress in the structures. In
particular, the proper evaluation of load in prestressed beams is important as it is
the load which determines the strength of a concrete structure. For such structures
the most important consideration is the preservation of the compression force in
the concrete.

2 Methods of Determining the Stress in Critical Elements of Infrastructure Objects

One of the most popular methods of determining stress in machine design is tensometry. Properly used, tensometry allows for stress/strain assessment at the locations of the applied strain gauges. This method could be adopted to measure the load applied to a given structure. The only problem is that tensometric measurements are relative to some base measurement, usually the first measurement taken after applying a strain gauge to the structure. This means that it is possible to obtain only incremental stress measurements, not total stress values. For a new construction, tensometric methods can of course be used, as they enable gluing strain gauges to structures with a minimal or known load applied. Alternatively, it is necessary to build a mathematical model of the construction with distributed load for static load assessment and to determine the critical elements in the construction for proper placement of strain gauges.
An extension of classical tensometry is fibre-optic tensometry. Instead of using
strain gauges, it uses Bragg gratings connected by fibre optics. Using optical lines
simplifies cabling as several gauges can be added to the same fibre-optic line.
Tensometric methods are rather inexpensive and widely used, so they are easily
adopted for automatic on-line monitoring of the load applied to steel structures.
The only problem that remains is the proper selection of critical points for installing strain gauges and determining the limit values. On the other hand, the adoption of tensometric methods for existing prestressed concrete constructions is very limited, as there is usually no way to apply strain gauges to the cables and, what is worse, for old structures the current load of these cables (the prestressing force) is usually unknown.
The prestressing force of old existing structures made of prestressed concrete is
very hard to evaluate because there are currently no ‘off-the-shelf’ methods that
one could apply. A very promising method currently in the development stage is
based on an analysis of the dynamic response of a structure such as a bridge [1].
The method is based on the analysis of amplitude modulation phenomena in the vibroacoustic signal caused by the impact of a modal hammer or any other source of excitation. Preliminary tests show that it is possible to develop a diagnostic model that, contrary to currently used models, allows one to analyse the relationships between the stress distribution in the transverse section and the parameters of the vibroacoustic signal [2]. The basis of the model is the assumption that the initial prestress in the bent beam is accompanied by dispersion phenomena that cause changes in the wave propagation parameters, mainly differences between group and phase velocities. These changes engender modulation phenomena in the spectrum of beam acceleration signals. Assuming that existing damage in a beam would decrease the stress in the transverse section, this should cause measurable changes in the modulating frequencies. Those frequency changes depend only on the beam characteristics and beam load and are independent of the excitation value of the signal [3]. The relation between the stress distribution in the concrete and the steel beams allows one to build inverted diagnostic models and, thus, to determine qualitative changes in the construction's technical state, such as the load and stress in concrete or prestressing beams.
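As a sketch of this signal-processing idea, the dominant modulating frequency can be estimated from the envelope spectrum of a measured acceleration signal obtained via the Hilbert transform. This is an illustrative implementation rather than the authors' algorithm; the carrier and modulation frequencies in the example are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

def modulation_frequency(accel, fs):
    """Dominant modulating frequency of a vibroacoustic signal.

    The amplitude envelope is obtained from the analytic signal (Hilbert
    transform); the strongest non-DC peak of the envelope spectrum is
    returned. A shift of this frequency would indicate a change of stress
    (e.g. prestress loss) in the monitored beam.
    """
    envelope = np.abs(hilbert(accel))
    envelope = envelope - envelope.mean()           # remove the DC component
    spec = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs[np.argmax(spec)]

# Synthetic check: a 200 Hz carrier amplitude-modulated at 8 Hz
fs = 2000
t = np.arange(0, 4.0, 1.0 / fs)
x = (1.0 + 0.5 * np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 200 * t)
print(modulation_frequency(x, fs))  # -> 8.0
```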
Another very promising method of determining stress in ferromagnetic materi-
als is based on measurement of the free magnetic field of the construction material
[4, 5, 6]. The magnetic field of a steel construction’s element is related to the
stress concentration and is easily measured. Because this is a free field, there is no
need to magnetize the construction. The author’s preliminary experiments using
steel material samples confirm the possibility of using this method in monitoring
systems. The problem which remains unsolved relates to the effect of disturbances
caused by external magnetic fields. This method seems very promising as it is not
limited to particular construction points, as with strain gauges, but rather allows
for assessing the stress in whole elements of the construction.

3 Distributed Diagnostic System for Structural Health Monitoring

Distributed diagnostic systems are widely used in machine diagnostics to monitor the condition of critical machines, e.g. power units and fans, allowing on-line monitoring and decision making depending on the current state of the monitored objects
[7]. The main advantage of this approach is the possibility of remote monitoring,
from a single location, of the technical state of many objects distributed over a large area, which limits costs and manpower. This approach is especially advantageous in cases involving great distances between the machines and a diagnostic technician [8]. The method is thus limited only by network availability and performance.

Figure 1 Layout of distributed diagnostic system
This concept could be easily adopted for on-line monitoring of infrastructure
objects. A distributed diagnostic system (Fig. 1) is a network of intelligent, pro-
grammable units monitoring particular construction elements or machines (Fig. 2).
These units are built in accordance with the microprocessor controller’s capabili-
ties and are equipped with signal conditioning circuits well matched to the signal
sensors, measuring values linked to the object’s technical state. All controllers are
linked to the database which stores information about changes in the construction
technical state. This database is accessible to the technical staff overseeing the
diagnosed infrastructure objects who are able to make appropriate decisions re-
garding use of the system. These local networks are easily expanded into larger
e-monitoring networks (Fig. 3).
[Figure 2: block diagram of a local diagnostic unit; sensors on the monitored structure feed diagnostic and process signal acquisition threads; an internal system bus links interthread communication services, signal analysis (calculation of signal estimates), information storage (database of estimates), diagnostic conclusion threads and signalization of the construction state; external communication threads connect via TCP/IP to a programmable automation controller and a database; where possible, actuators change construction parameters in reaction to a detected failure]
Figure 2 Programmable automation controller used for machine monitoring

Local diagnostic units usually have the ability to communicate with their environment using either TCP/IP or CAN networks, for the purpose of informing users and the managing unit about a structure's current technical state or load and the decisions made regarding use. TCP/IP networks additionally make it possible to authorize external access to the system. On such networks it is also very easy to implement an automatic messaging module (e.g. e-mail, SMS) informing authorized personnel about current problems with monitored objects. The network could also be used for communication with an external database storing processed measurement results and information about a structure's current technical state. Such a solution would release the controller from the necessity of handling a local database and reduce the limitations imposed by hardware requirements.
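The estimate-store-notify flow of such a local unit can be sketched as follows. This is a minimal illustration, not a description of any particular product: the channel name, the RMS estimate and the limit value are hypothetical, and the notification callback stands in for an e-mail/SMS gateway.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Callable, List

@dataclass
class MonitoringUnit:
    """Minimal local diagnostic unit: computes a signal estimate, stores it
    and notifies subscribers when a limit value is exceeded."""
    channel: str
    limit: float
    history: List[float] = field(default_factory=list)                  # stored estimates
    subscribers: List[Callable[[str], None]] = field(default_factory=list)

    def process(self, samples: List[float]) -> float:
        rms = mean(s * s for s in samples) ** 0.5    # simple RMS estimate of the signal
        self.history.append(rms)                     # would go to the estimate database
        if rms > self.limit:
            for notify in self.subscribers:          # e.g. e-mail/SMS messaging module
                notify(f"{self.channel}: estimate {rms:.2f} exceeds limit {self.limit}")
        return rms

alerts: List[str] = []
unit = MonitoringUnit(channel="truss-strain-07", limit=1.0)
unit.subscribers.append(alerts.append)
unit.process([0.2, -0.3, 0.25])    # below the limit: no alert
unit.process([1.4, -1.5, 1.3])     # above the limit: one alert dispatched
print(len(unit.history), len(alerts))  # -> 2 1
```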
The exact structure of the system and the number of database units depend on the type, size and number of infrastructure objects being monitored. Data from similar objects could be stored in a single database, allowing for easy comparison of the diagnostic data. If a main diagnostic centre exists, then a central database could be established. The database of stored results allows diagnostic technicians to view historical trends and to modify diagnostic algorithms. Comparison of the behaviour of objects of the same type is also possible.
Signals from different transducers can be used as sources of information about the current technical state of the monitored element. Strain gauges or fibre-optic Bragg gratings and magnetic field transducers could be used for determining the load applied to a construction. Piezoelectric accelerometers could also be used to analyse the dynamic behaviour of a construction and to determine the prestress force in concrete elements [9]. Additionally, accelerometer signals could
be used to check an object's technical condition. This is based on the assumption that the development of degradation and fatigue processes emerging in infrastructure objects causes modulation phenomena of measurable dynamic parameters, as well as a quantitative and qualitative increase in non-linear effects in systems in which static loads predominate. Application of these methods requires the use of mathematical models for diagnosing safety-critical structures, describing their static and dynamic behaviour. The models should take into account the development of degradation and fatigue processes and permit one to determine the relationships between technical state parameters and symptoms of structural wear.

[Figure 3: data flow diagram; a cRIO real-time controller acquires diagnostic signals (classic tensometry, magnetometry, acceleration) and fibre-optic FBG tensometry from the monitored structures; a DSC datalogging and supervisory control server archives historical and momentary data in an SQL database, with data backup in case of transmission errors; TCP/IP and UDP links serve operator consoles, signalization units, reporting (MS Office ActiveX), OPC export of measured values and optional HMI/SCADA structure parameter control]
Figure 3 Data flow block diagram of distributed diagnostic system

4 Conclusions

The adaptation of distributed diagnostic system technology, well proven in the diagnosis of mechanical systems, to the monitoring of critical elements of infrastructure objects holds great promise. Such technology could improve the safety of
infrastructure objects and lower the probability of catastrophic events with loss of
human life. The cost of such systems would be relatively low compared to the
losses resulting from accidents caused by extreme loads or environmental condi-
tions. Depending on the needs and the complexity of the structure, the system
could be limited to measuring the load or stress in the construction or expanded to
calculate the structure’s remaining useful life.

References

[1] Radkowski S, Szczurowski K (2006) Hilbert transform of vibroacoustic signal of prestressed structure as the basis of damage detection technique. In: Proceedings of the conference on bridges, Dubrovnik, Croatia, 21–24 May 2006, pp 1075–1082
[2] Gałęzia A, Radkowski S, Szczurowski K (2006) Using shock excitation in condition moni-
toring of prestressed structure. In: Proceedings of the international congress on sound and
vibration (ICSV), Vienna, 2–6 July 2006
[3] Gałęzia A, Mączak J, Radkowski S, Szczurowski K (2008) A method of stress distribution
assessment in prestressed structures. In: Proceedings of the VII international seminar on
technical systems degradation, Liptovsky Mikulasz, 26–29 March 2008
[4] Kusenberger FN, Barton JR (1981) Detection of flaws in reinforcement steels in prestressed concrete bridges. Final Report FHWA/RD-81/087, Federal Highway Administration, Washington, DC
[5] Sawade G (2001) Mobile SQUID-Messsysteme zur Bauwerksinspektion, Teilvorhaben Magnetisierungsvorrichtung und Signalverarbeitung. Forschungsbericht 13 N 27249/3, Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (in German)
[6] Dubov AA (2008) Principal features of the metal magnetic memory method and inspection
tools as compared to known magnetic NDT methods. Available from:
www.energodiagnostika.com. Accessed 17 March 2012
[7] Maczak J (2007) Structure of distributed diagnostic systems as a function of particular diag-
nostic task. In: Proceedings of the 20th international congress and exhibition on condition
monitoring and diagnostics engineering management (COMADEM 2007), Faro, Portugal
[8] Shuhle R, Luft M, Lebitsch F (2002) Digital and software supported tele service.
www.telediagnose.com, Issue 3. Available from: http://telediagnose.com. Accessed 17
March 2012
[9] Polder RB, et al (2009) COST Action 534 – new materials, systems, methods and concepts
for prestressed concrete structures – final report. European Science Foundation, Strasbourg
Cedex, France
Optimising Preventive Maintenance Strategy
for Production Lines1

Yong Sun, Lin Ma and Joseph Mathew

Abstract Preventive Maintenance (PM) is often applied to improve the reliability of production lines. A Split System Approach (SSA) based methodology is
presented to assist in making optimal PM decisions for serial production lines. The
methodology treats a production line as a complex series system with multiple
(imperfect) PM actions over multiple intervals. The conditional and overall reli-
ability of the entire production line over these multiple PM intervals are hierarchi-
cally calculated using SSA, and provide a foundation for cost analysis. Both risk-
related cost and maintenance-related cost are factored into the methodology as
either deterministic or random variables. This SSA based methodology enables
Asset Management (AM) decisions to be optimised considering a variety of fac-
tors including failure probability, failure cost, maintenance cost, PM performance,
and the type of PM strategy. The application of this new methodology and an
__________________________________
Y. Sun
CRC for Integrated Engineering Asset Management,
School of Engineering Systems, Queensland University of Technology,
Brisbane, QLD 4001, Australia
y3.sun@qut.edu.au, Tel.: (61 7) 3138 2442, Fax: (61 7) 3138 1469
L. Ma
CRC for Integrated Engineering Asset Management,
School of Engineering Systems, Queensland University of Technology,
Brisbane, QLD 4001, Australia
J. Mathew
CRC for Integrated Engineering Asset Management,
School of Engineering Systems, Queensland University of Technology,
Brisbane, QLD 4001, Australia

1
This research was conducted within the CRC for Integrated Engineering Asset Management,
established and supported under the Australian Government’s Cooperative Research Centres
Programme.

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information 133


Systems and Decision Models, Engineering Asset Management Review,
DOI 10.1007/978-1-4471-2924-0_7, © Springer-Verlag London Limited 2012

evaluation of the effects of these factors on PM decisions are demonstrated using an example. The results of this work show that the performance of a PM strategy
can be measured by its Total Expected Cost Index (TECI). The optimal PM inter-
val is dependent on TECI, PM performance and types of PM strategies. These
factors are interrelated. Generally, it was found that a trade-off between reliability
and the number of PM actions needs to be made so that one can minimise Total
Expected Cost (TEC) for asset maintenance.

Keywords Preventive maintenance, Decision making, Production lines, Split


System Approach, Engineering asset management

1 Introduction

The determination of optimal Preventive Maintenance (PM) strategies for production lines, especially over the whole life of these assets, is imperative for their
owners as maintenance costs can occupy a sizeable portion of the total costs of
business. The need to optimise maintenance of production lines becomes pressing
with increasing complexity of machines and competitive market pressure.
Maintenance issues of production lines have attracted much attention from re-
searchers. For example, Dallery and Bihan [1] developed an improved method for
analysing serial production lines with unreliable machines and finite buffers. Liberopoulos [2] conducted a case study for the reliability analysis of an automated
pizza production line and Miltenburg [3] investigated the effect of breakdown on
U-shaped production lines. Some literature on the optimal PM planning for pro-
duction lines has also been published. For example, see research reports presented
by Cavory [4], Percy et al. [5], and Chareonsuk et al. [6]. Two major issues need
to be addressed when making an optimal decision of PM strategy for production
lines: (1) the changes in reliability of production lines due to PM and (2) mainte-
nance-related costs. Conflicting interests exist between these two issues. More
frequent maintenance activities often need to be conducted and more resources
need to be consumed if one wishes to maintain a production line at a higher reli-
ability level. As a result, maintenance-related costs increase. On the other hand,
lowering reliability requirements can reduce the maintenance-related costs. How-
ever, a lower reliability of a production line usually means that this production line
is prone to more breakdowns and greater loss in production. A good maintenance
strategy must balance both reliability and maintenance costs.
Various maintenance optimisation models have been developed [7]. Some
analysis has revealed that maintenance cost will increase with increasing mainte-
nance frequency, whereas the cost due to breakdown of a production line de-
creases with increasing PM frequency. Hence, an optimal PM frequency exists [8].
Chareonsuk et al. [6] attempted to optimise PM intervals of production lines under
two criteria, namely, expected total costs per unit time and reliability. However,
they did not consider multiple imperfect PM actions in their model.

Optimising Preventive Maintenance Strategy for Production Lines 135

To deal with a long term PM schedule for new production lines, Percy et al. [5] postulated a new
Bayesian method based approach but did not develop an applicable algorithm. As
Reliability (or Risk) Based PM (RBPM) is generally more cost-effective than
Time Based PM (TBPM), maintenance management has shifted its focus from
TBPM to RBPM. Khan and Haddara [9] presented a risk-based main-
tenance approach composed of risk determination, risk evaluation and mainte-
nance planning for optimising maintenance/inspection strategy. The risk-based
maintenance strategy has been used for a power generation plant [10]. Fault tree
analysis and Monte Carlo simulation are the major methods for probabilistic fail-
ure analysis in maintenance decision making [9]. The effect of PM has not been
investigated adequately. As financial risk is a major issue in maintenance strategy
determination, Kierulff [11] discussed the replacement issues from the financial
point of view. To reduce decision uncertainty, the Proportional Hazard Model
(PHM) based approach has been proposed for optimising Condition-based Main-
tenance (CBM) [12]. This PHM based method is generally used to optimise the
next maintenance time. More sophisticated maintenance optimisation models have
also been developed. For example, Kallen and Noortwijk [13] proposed an adap-
tive Bayesian decision model to optimise periodic inspection and replacement
policy for structural components. A practical model for determining the optimal
PM strategy for production lines over their life-span is yet to be developed. The
major barrier to developing such a model is reliability prediction of production
lines with multiple PM actions over a long operational period. Production lines are
normally complex repairable systems and PM actions on these complex systems
are generally imperfect, i.e. the state of a production line after a PM action is be-
tween “as good as new” and “as bad as old”.
A Split System Approach (SSA) based methodology is developed in this paper
to remove this barrier. SSA was proposed by the authors [14] to predict the reli-
ability of systems with multiple PM actions over multiple intervals. In this paper,
the SSA is used to predict the reliability of production lines with multiple PM
actions. Only serial production lines are considered. A serial production line indi-
cates that the failure of any machine in this production line will cause the failure
of the whole system (production line). Serial production lines are commonplace in
manufacturing industries such as automobile manufacturing factories, food proc-
essing factories and clothes making factories.
The rest of the paper is organised as follows: in Section 2, the concept and
methodology of SSA are reviewed. In Section 3, a methodology for determining
the optimal PM strategy based on SSA is presented, and this is followed by an
example in Section 4. A conclusion is provided in Section 5.

2 The Concept and Methodology of SSA

The basic concept of the SSA is to separate repaired and unrepaired components within a system virtually when modelling the reliability of the system after PM actions.

136 Y. Sun, L. Ma and J. Mathew

This concept enables the analysis of system reliability at the component level, and stems from the fact that generally, when a complex system receives a PM action, only some of its components are repaired.
The following assumptions were made in developing the SSA based models:

(1) The failure of repaired components is independent of unrepaired components. This assumption means that when a component is repaired, the failure distribution form of the unrepaired components of a system does not change, and the conditions of the unrepaired components do not affect the reliability characteristics of repaired components.
(2) The reliability function of a new repairable system is known. The reliability functions of repaired components are also known.
(3) The topology of a repairable system is known.
(4) The repair time is negligible.
(5) The PM time is a deterministic variable.

The production lines discussed in this paper are assumed to be serial systems consisting of M components. The original multi-component serial system can be converted into a simplified serial system which contains only two virtual parts: "Part 1" includes the repaired machines and "Part 2" is the remainder of the production line, often referred to as a subsystem (see Figure 1).
In Figure 1, R1(τ)i and R2(τ)i are the reliability functions of Part 1 and Part 2 af-
ter the ith PM interval (refer to Figure 2). In this paper, the second subscript i is
used to stand for “after the ith PM action”. Subscript i = 0 stands for no PM. The
PM strategy is to repair Part 1 whenever the reliability of the production line falls
to a predefined control limit of reliability R0. A possible interpretation for this PM
strategy is that the components in Part 1 have a much shorter mean time to failure
than the components in Part 2.

Figure 1 Simplification of Production Lines: (a) the original system, a serial arrangement of machines 1, 2, 3, …, M; (b) the simplified series system comprising Part 1 (reliability R1(τ)i) and Part 2 (reliability R2(τ)i)

Figure 2 Changes to the Reliability of an Imperfectly Maintained System: the conditional reliability curves Rs(t)0, Rs(t)1, …, Rs(t)n-1, Rs(t)n each decline from a PM time towards the control limit R0; Δt1, Δt2, Δt3, …, Δtn are the successive PM intervals between the PM times t0, t1, t2, …, tn

As mentioned previously, production lines are often complex repairable systems. The states of machines after repairs in a production line can have a signifi-
cant impact on the reliability of the entire production line and must be considered
while modelling the reliability of the production line covering a series of PM ac-
tions. PM actions on a production line often involve imperfect repairs. The reli-
ability of a system after imperfect repairs declines in a manner shown in Figure 2.
Two time coordinates are used in the modelling:
Absolute time scale t: 0 ≤ t < ∞.
Relative time scale τ: 0 ≤ τ ≤ ti (i = 1, 2, …, n).
In Figure 2, R0 is the predefined control limit of the reliability level for the production line, and Δti (i = 1, 2, …, n) is the interval between the (i−1)th and the ith PM activities. Parameter ti is the ith PM time and, according to Assumption (4), also the time at which production restarts after the ith PM action.
When a system receives PM actions, two reliability concepts are involved [15]. One is the conditional reliability of the system, which indicates the survival probability of a system that has successfully been preventively maintained; it describes the reliability changes between two PM actions, as shown in Figure 2. The other is the probability of survival of the system over its whole life time, which takes into account the probability of survival of the repaired components until their individual PM times; it describes the reliability changes of the system over a given period which may cover a number of PM intervals. To distinguish the latter from the conditional reliability, it is termed the overall reliability of the system.
For a simple scenario where Part 1 is always repaired in n PM actions, the conditional reliability function of the system after the jth PM action (j = 1, 2, …, n) can be expressed as

$$R_s(\tau)_j = \frac{R_1(\tau)_j \, R_s\!\left(\tau + \sum_{i=1}^{j} \Delta t_i\right)_0}{R_1\!\left(\tau + \sum_{i=1}^{j} \Delta t_i\right)_0}, \qquad (j = 1, 2, \ldots, n). \tag{1}$$

Equation (1) can be rewritten using the absolute time scale as

$$R_s(t)_j = \frac{R_1\!\left(t - \sum_{i=1}^{j} \Delta t_i\right)_j R_s(t)_0}{R_1(t)_0}, \qquad \left(t \ge \sum_{i=1}^{j} \Delta t_i\right), \; (j = 1, 2, \ldots, n). \tag{2}$$

Note that Eqs. (1) and (2) both describe the conditional probability of survival of a system which has been preventively maintained n times. To predict the overall reliability of a system with multiple PM intervals, the cumulative effect of multiple PM actions needs to be considered, i.e. the probability of survival of the repaired components until their individual repair times should be taken into account [8].
The overall reliability function of a serial system after the first PM action is

$$R_{sc}(\tau)_1 = R_1(\Delta t_1)_0 \, R_s(\tau)_1, \tag{3}$$

where Rsc(τ)1 is the cumulative reliability of the system after the first PM action and R1(Δt1)0 is the probability of survival of Part 1 until time t1.
Generally, the overall reliability of the system over the n PM cycles can be expressed as

$$R_{sc}(\tau)_j = \left[\prod_{i=1}^{j} R_1(\Delta t_i)_{i-1}\right] R_s(\tau)_j, \qquad (j = 1, 2, \ldots, n), \tag{4}$$

where Rsc(τ)j is the overall reliability of the system after the jth PM action (j = 1, 2, …, n).
The authors have also developed a model for calculating the reliability of a sys-
tem with multiple repaired components over multiple PM cycles [16].
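To make Eqs. (1) and (4) concrete, the following sketch evaluates them numerically. It assumes, purely for illustration, that every PM renews Part 1 completely (so R1(τ)j = R1(τ)0) and borrows the Weibull and exponential baseline laws from the example in Section 4; the function names are ours, not the paper's.

```python
import math

# Baseline reliability laws borrowed from the worked example in Section 4
# (Part 1: Weibull; Part 2: exponential; tau in months).
def R1_0(tau):
    return math.exp(-((tau / 18.0) ** 2.1))

def Rs_0(tau):
    return R1_0(tau) * math.exp(-tau / 400.0)

def conditional_reliability(tau, intervals):
    """Eq. (1), assuming each PM renews Part 1, i.e. R1(tau)_j = R1(tau)_0.
    `intervals` holds the earlier PM intervals Delta t_1 .. Delta t_j."""
    shift = sum(intervals)
    return R1_0(tau) * Rs_0(tau + shift) / R1_0(tau + shift)

def overall_reliability(tau, intervals):
    """Eq. (4): survival of Part 1 over each past interval, multiplied by
    the conditional reliability of Eq. (1)."""
    survival = 1.0
    for dt in intervals:
        survival *= R1_0(dt)  # R1(Delta t_i)_{i-1} = R1(.)_0 after renewal
    return survival * conditional_reliability(tau, intervals)
```

For example, `overall_reliability(3.0, [5.0, 5.0])` is the overall reliability three months after the second PM; because Part 1 is renewed while Part 2 keeps ageing, it exceeds the no-repair value `Rs_0(13.0)`.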

3 Methodology for Determining an Optimal PM Strategy

The SSA based PM decision making methodology is composed of production line reliability prediction and maintenance cost analysis.

3.1 Estimation of the Reliability of Production Lines

As mentioned in Section 2, SSA analyses the reliability of repairable systems after PM at the component level. Hence, direct application of SSA to estimating the reli-
ability of production lines might be inconvenient because a production line often
consists of numerous components. To avoid this inconvenience, a production line
can be decomposed at different levels virtually, and then the reliability of the pro-
duction line can be analysed at these levels using SSA respectively (see Figure 3).

Figure 3 Decomposition of a Production Line: the production line is split into the repaired machine(s) and the remainder of the production line; each repaired machine is split into the repaired assemblies and the remainder of the machine; each repaired assembly is split into the repaired component(s) and the remainder of the assembly

A bottom-up approach can be used for analysing the reliability of a production line after it has been virtually decomposed as shown in Figure 3. The reliability functions of assemblies are estimated first at the component level using SSA; the reliability functions of machines can then be estimated at the assembly level. Finally, the reliability function of the production line can be estimated at the machine level. For simplicity, only the last step is demonstrated in this paper.

3.2 Criteria for Optimising PM Strategies

Both the reliability of a production line and maintenance-related costs are considered in this paper when determining optimal PM strategies for production lines.

Reliability describes the likelihood that a system operates without failure. The risk due to failure of production lines can be converted into risk-related cost, which includes loss of production, penalties for contract breach, machine damage, and additional harmful impacts on humans, products, machines and the environment. Maintenance-related cost includes material cost, maintenance labour cost and the loss of production incurred by conducting PM. Various asset maintenance cost models have been developed (e.g. see references [17−19]). In this paper, the risk-related cost of a production line is assumed to be proportional to the failure probability of the production line, and the maintenance-related cost is assumed to be proportional to the number of PM actions. Based on these two assumptions, the risk-related cost and the maintenance-related cost are expressed as
$$C_r = k_r [1 - R(T)], \tag{5}$$

$$C_m = k_m N_T, \tag{6}$$

where T is the operational period of the production line that an enterprise is inter-
ested in. Typically, T is the life span of the production line. R(T) is the reliability
of the production line at time T. Parameters kr and km are two scale constants. NT is
the required number of PM actions over the period of time T for maintaining the
production line above the reliability level of R(T).
Define the Total Expected Cost (TEC) as the sum of the expected risk-related cost and the expected maintenance-related cost, and the Total Expected Cost Index (TECI) as the TEC divided by km:

$$TEC = C_r + C_m, \tag{7}$$

$$TECI = k_{rm}[1 - R(T)] + N_T, \tag{8}$$

where

$$k_{rm} = k_r / k_m. \tag{9}$$

Parameter krm is termed the Risk-Maintenance Cost Ratio (RMCR). It represents the significance of a PM action: a higher krm indicates that a PM action is more significant, that is, more risk-related cost can be avoided through the decreased failure probability after this PM action. An advantage of using the RMCR is that this parameter is dimensionless.
TECI can be used to measure the performance of a PM strategy. The lower the
TECI, the better the PM strategy.
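To illustrate how the TECI of Eq. (8) ranks candidate strategies, the sketch below grid-searches the TBPM interval over a 24-month horizon. It assumes, for illustration only, perfect renewal of Part 1 at each PM and uses the Weibull/exponential failure laws of the example in Section 4; the candidate grid and function names are ours.

```python
import math

def R1_0(t):                      # Part 1 baseline: Weibull (Section 4)
    return math.exp(-((t / 18.0) ** 2.1))

def R2_0(t):                      # Part 2 baseline: exponential
    return math.exp(-t / 400.0)

def overall_reliability_tbpm(T, dt):
    """Overall reliability at horizon T under TBPM with fixed interval dt,
    assuming each PM renews Part 1 (Eq. (4) with R1(.)_i = R1(.)_0)."""
    n = int(T // dt)              # number of PM actions, N_T
    tau = T - n * dt              # time elapsed since the last PM
    return (R1_0(dt) ** n) * R1_0(tau) * R2_0(T)

def teci(T, dt, krm):
    """Eq. (8): TECI = krm * [1 - R(T)] + N_T."""
    return krm * (1.0 - overall_reliability_tbpm(T, dt)) + int(T // dt)

# Grid search for the best TBPM interval over a 24-month horizon:
candidates = [1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0]
best_dt = min(candidates, key=lambda dt: teci(24.0, dt, krm=100.0))
```

Under these simplifying assumptions the search returns a two-month interval for krm = 100, which is consistent with the trend reported in Figure 6; lower krm values shift the optimum towards longer intervals.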
In industry, parameters kr and km may vary significantly and unpredictably. Let Kr denote the cost per unit of failure probability and Km denote the cost per PM action. Then Kr and Km are both random variables. Assume that Kr and Km both take values in [0, ∞) and are independent of the age of the asset and the number of PM actions. If Kr has a distribution density function ƒr(kr), then conditional on Kr = kr, one has
$$C_r[R(T) \mid K_r = k_r] = k_r [1 - R(T)], \tag{10}$$

and on removing the condition, one has

$$C_r = \int_0^{\infty} k_r [1 - R(T)] f_r(k_r)\,dk_r = E[K_r]\,[1 - R(T)], \tag{11}$$

where $E[K_r] = \int_0^{\infty} k_r f_r(k_r)\,dk_r$ is the first moment of $K_r$.

Similarly, if Km has a distribution density function ƒm(km), the expected maintenance cost is given by

$$C_m = E[K_m] N_T, \tag{12}$$

where $E[K_m] = \int_0^{\infty} k_m f_m(k_m)\,dk_m$ is the first moment of $K_m$.

In this case, the RMCR can be defined as

$$k_{rm} = \frac{E[K_r]}{E[K_m]}, \tag{13}$$

so that Eq. (8) still holds.
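The expectation step behind Eq. (11) can be checked by simulation. The fragment below is a Monte Carlo sketch in which an exponential density is chosen arbitrarily for Kr (the paper assumes only that some density exists on [0, ∞)); the function name is ours.

```python
import random

def expected_risk_cost_mc(R_T, mean_kr, n=200_000, seed=1):
    """Monte Carlo version of Eqs. (10)-(11): draw Kr, apply the
    conditional cost of Eq. (10), and average over the draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        kr = rng.expovariate(1.0 / mean_kr)  # Kr ~ exponential, mean mean_kr
        total += kr * (1.0 - R_T)            # Eq. (10), conditional on Kr = kr
    return total / n

# Eq. (11) predicts E[Kr] * (1 - R(T)) = 50 * (1 - 0.9) = 5.
estimate = expected_risk_cost_mc(R_T=0.9, mean_kr=50.0)
```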


The approach to determining the optimal PM strategy for production lines pre-
sented in this section is best demonstrated using an example in the following section.

4 Example

A PM strategy is required for a period of the next two years for an automated food
production line that has been operating for some time. This production line can be
described as a simplified serial system as shown in Figure 1. Part 1 is composed of
those machines that have very short mean time to failure compared with the re-
mainder of the production line and Part 2 is composed of the remainder of the
production line. The times of critical failures of Part 1 followed a Weibull distribution and were expressed as

$$R_1(\tau)_0 = \exp\!\left[-\left(\frac{\tau}{18}\right)^{2.1}\right]. \tag{14}$$

Part 2 was assumed to have an exponential failure distribution, that is,

$$R_2(\tau)_0 = \exp\!\left(-\frac{\tau}{400}\right). \tag{15}$$
In reality, the failure distributions and the parameters of the corresponding fail-
ure distribution functions can be determined based on historical failure data and
maintenance records of the production line.
Hence, the reliability of the entire production line was

$$R_s(\tau)_0 = \exp\!\left\{-\left[\left(\frac{\tau}{18}\right)^{2.1} + \frac{\tau}{400}\right]\right\}. \tag{16}$$

Conducting PM on the machines in Part 1 can improve the overall reliability of the entire production line since Part 1 was operating in its wear-out stage. This scenario was studied in Section 2. The reliability of the entire production line with multiple PM intervals can be analysed using Eqs. (1) and (4).
Two PM strategies were considered. Strategy one is a type of Reliability Based PM (RBPM) strategy. In this strategy, Part 1 will be maintained whenever the reliability of the entire production line after PM falls to 0.9. The required minimum operational time of the production line after a PM action is 0.5 months (15 days; a calendar system of twelve 30-day months is used in this paper). The second strategy is a type of Time Based PM (TBPM) strategy. In this PM strategy, PM on the machines in Part 1 starts one month (30 days) into the operating period, after which it is conducted at fixed intervals. As mentioned in Section 3.1, the reliability of
Part 1 after maintenance can also be predicted using the SSA. However, the derived reliability formula is complicated. In this paper, the following approximate formula was used to describe the reliability of Part 1 after a repair:

$$R_1(\tau)_j = R_1(\tau + f_c \Delta t_j)_{j-1}, \qquad (j = 1, 2, \ldots, n), \tag{17}$$

where ƒc is termed the recovery coefficient, which represents the degree to which the reliability of Part 1 recovers towards its original reliability after a PM action. When ƒc = 0, the state of Part 1 after a PM action is as good as new; when ƒc = 1, the state of Part 1 after a PM action is as bad as old; when 0 < ƒc < 1, Part 1 has had an imperfect repair.
Substituting Eq. (17) into Eq. (1) gives the conditional reliability function of the production line after the jth PM action (j = 1, 2, …, n):

$$R_s(\tau)_j = \frac{R_1(\tau + f_c \Delta t_j)_{j-1} \, R_s\!\left(\tau + \sum_{i=1}^{j} \Delta t_i\right)_0}{R_1\!\left(\tau + \sum_{i=1}^{j} \Delta t_i\right)_0}, \qquad (j = 1, 2, \ldots, n). \tag{18}$$

Equation (18) indicates that Rs(τ)j (j = 1, 2, …, n) becomes smaller as ƒc increases. As a result, the required minimum PM intervals Δti (i = 1, 2, …, n) become shorter, and the overall reliability of the production line after PM becomes lower (see Eq. (4)). Therefore, to maintain the same reliability level, more PM actions, and hence more PM costs, are required over the same period.
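The shrinking-interval effect just described can be simulated directly. The sketch below implements an illustrative RBPM schedule driven by Eqs. (17) and (18): iterating Eq. (17) gives R1(τ)j = R1(τ + ƒc Σ Δti)0, which the code evaluates, and each PM is placed by bisection where the conditional reliability falls to the control limit R0. The Weibull/exponential laws of the example in Section 4 are assumed, and all function names are ours.

```python
import math

def R1_0(t):                     # Part 1 baseline: Weibull (Section 4)
    return math.exp(-((t / 18.0) ** 2.1))

def Rs_0(t):                     # whole-line baseline, Eq. (16)
    return R1_0(t) * math.exp(-t / 400.0)

def cond_reliability(tau, intervals, fc):
    """Eq. (18), with Eq. (17) iterated so that
    R1(tau)_j = R1(tau + fc * sum(Delta t_i))_0."""
    total = sum(intervals)
    return R1_0(tau + fc * total) * Rs_0(tau + total) / R1_0(tau + total)

def rbpm_schedule(R0, fc, horizon, min_interval=0.5):
    """Greedy RBPM schedule: each interval ends where the conditional
    reliability reaches R0; stop when the next interval would be shorter
    than the minimum operational time (RBPM then becomes inapplicable)."""
    intervals = []
    while sum(intervals) < horizon:
        lo, hi = 0.0, horizon    # bisection on a decreasing function
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if cond_reliability(mid, intervals, fc) > R0:
                lo = mid
            else:
                hi = mid
        dt = 0.5 * (lo + hi)
        if dt < min_interval:
            break
        intervals.append(dt)
    return intervals

schedule = rbpm_schedule(R0=0.9, fc=0.05, horizon=24.0)
```

With R0 = 0.9, ƒc = 0.05 and a 24-month horizon this yields six PM actions with strictly shrinking intervals, matching the RBPM count annotated in Figures 4 and 5, although the exact interval lengths depend on the simplifications above.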
The reliability of the entire production line under different PM strategies was predicted using SSA. Two examples of reliability prediction are shown in Figures 4 and 5. In both figures, ƒc = 0.05. These two figures show that both the TBPM and RBPM strategies improved the reliability of the entire production line. Considering the cumulative reliability under TBPM alone, one finds that the reliability of the production line is higher when the PM interval is shorter. However, this result does not mean that a PM strategy with a shorter interval is superior to one with a longer interval, because the number of PM actions required with the shorter PM interval is higher than with the longer PM interval over the same period. More PM actions often cause higher maintenance costs. An optimal decision on PM strategy should be based on both the reliability requirement and maintenance costs. A trade-off between the reliability level and the number of PM actions is necessary to keep the TEC at its lowest level.
An optimal PM interval exists (see Figure 6), and it depends on the RMCR krm. From Figure 6, it can be seen that the best PM interval is two months when the RMCR krm is 200 or 100. When krm is 10, the optimal PM interval changes to 8 months. However, when the RMCR krm is 4, the optimal PM
Figure 4 Reliability Prediction of the Production Line – Simulation 1 (R0 = 0.9; MTTF1 = 16 months; PM interval 1 = 1 month; PM interval 2 = 1.5 months; No. of PM actions: RBPM = 6, TBPM = 17; recovery coefficient = 0.05; minimum required operational time = 0.5 months; curves shown: reliability based PM, cumulative reliability with RBPM, reliability without repair, time based PM, cumulative reliability with TBPM)

Figure 5 Reliability Prediction of the Production Line – Simulation 2 (R0 = 0.9; MTTF1 = 16 months; PM interval 1 = 1 month; PM interval 2 = 5.5 months; No. of PM actions: RBPM = 6, TBPM = 6; recovery coefficient = 0.05; minimum required operational time = 0.5 months; curves shown: reliability based PM, cumulative reliability with RBPM, reliability without repair, time based PM, cumulative reliability with TBPM)

interval becomes 24 months. This result indicates that PM is no longer needed in this case because the risk-related cost is not significant compared with the maintenance-related cost.
The optimal interval also depends on the recovery coefficient ƒc. From Figure 7, it can be seen that the optimal interval increases as the recovery coefficient increases. When ƒc is greater than 0.75, the optimal PM interval becomes 24 months, that is, no TBPM is required during the scheduled operating period of the production line. This finding can be explained by the property of the recovery coefficient. As shown in Eq. (17), the recovery coefficient ƒc represents the degree to which the reliability of Part 1 after a PM action falls below its reliability before this PM. In other words, the recovery coefficient ƒc represents the effectiveness of

Figure 6 Relationship Between TECI and Preventive Maintenance Intervals

Figure 7 Relationship Between TECI and the Recovery Coefficients

a PM action. A greater value of ƒc indicates poorer PM performance. If PM performance is so degraded that it cannot improve the reliability of production lines effectively, it is better that this PM not be conducted.
The above analysis focuses on obtaining an optimal TBPM strategy. However, there are times when another type of PM strategy is preferable to this optimal TBPM. When determining an optimal PM strategy, one needs to investigate different types of PM strategies, as their effectiveness can vary across scenarios. In the scenario shown in Figure 8, the lowest TECI for TBPM is 13.7, whereas the TECI for RBPM is 13.1, i.e. in this case the RBPM strategy rather than the TBPM strategy should be applied. However, in the scenario presented in Table 1, the TBPM strategy is better than the RBPM strategy.

Figure 8 Comparison Between RBPM and TBPM

Table 1 Relationship Between TECI and the Recovery Coefficients

  fc     Optimal PM interval (months)   Lowest TECI (TBPM)   Lowest TECI (RBPM)   krm
  0.05            2                            58.6                 75.1          200
  0.1             2                            75.9                 85.5          200
  0.15            3                            92.2                108            200
  0.2             4                           106.9                inapplicable   200
  0.3             6                           129.9                inapplicable   200
  0.7            11.5                         169.4                inapplicable   200
  0.75           24                           170.6                inapplicable   200

In Table 1, the word “inapplicable” means that RBPM is not applicable because
the PM interval required by this strategy will become shorter than the required
minimum operational time of the production line.

5 Conclusion

An SSA based methodology for determining an optimal Preventive Maintenance (PM) strategy for production lines was developed in this paper. This methodology is especially useful for long term PM decision making.
The determination of an optimal PM strategy of production lines is essentially
a multiple criteria decision making issue. A number of factors can influence pro-
duction line PM decision making. The major factors include failure probability,
costs due to failure of production lines, costs relating to maintenance, PM per-
formance, and the type of PM strategy. The SSA based methodology considers all

these factors simultaneously and analyses the effects of these factors on PM deci-
sions quantitatively.
This research finds that the performance of a PM strategy can be measured by
its Total Expected Cost Index (TECI). A PM strategy with lower TECI is better.
The effectiveness of different types of PM strategies can vary in different scenar-
ios. The optimal PM interval is dependent on TECI, PM performance and the type
of PM strategy. A trade-off between reliability requirement and the number of PM
actions is often needed if one wishes to minimise the Total Expected Cost (TEC)
of using production lines.
While this paper focuses on serial production lines, the methodology developed
in the paper can be applied to other serially connected engineering systems such as
power generation units in coal-fired power stations.

Acknowledgments This research was conducted within the CRC for Integrated Engineering
Asset Management, established and supported under the Australian Government’s Cooperative
Research Centres Program.

References

[1] Dallery Y, Bihan HL (1999) An improved decomposition method for the analysis of pro-
duction lines with unreliable machines and finite buffers. Int J of Production Research
37(5):1093−1117
[2] Liberopoulos G, Tsarouhas P (2004) Reliability analysis of an automated pizza production
line. J of Food Engineering. In press
[3] Miltenburg J (2002) The effect of breakdowns on U-shaped production lines. Int J of Pro-
duction Research 38(2): 352−364
[4] Cavory G, Dupas R, Goncalves G (2001) A genetic approach to the scheduling of preven-
tive maintenance tasks on a single product manufacturing production line. Int J of Produc-
tion Economics 74(1):135−146
[5] Percy DF, Kobbacy KAH, Fawzi BB (1997) Setting preventive maintenance schedules
when data are sparse. Int J of Production Economics 51(2):223−234
[6] Chareonsuk C, Nagarur N, Tabucanon MT (1997) A multicriteria approach to the selection
of preventive maintenance intervals. Int J of Production Economics 49(1):55−64
[7] Jiang R, Murthy DNP (2008) Maintenance: decision models or management. Science Press,
Beijing
[8] Ebeling CE (1997) An Introduction to Reliability and Maintainability Engineering. The
McGraw-Hill Company Inc., New York 124–128
[9] Khan FI, Haddara MM (2003) Risk-based maintenance (RBM): a quantitative approach for
maintenance/inspection scheduling and planning. J of Loss Prevention in the Process Indus-
tries 16(6):561−573
[10] Krishnasamy L, Khan F, Haddara M (2005) Development of a risk-based maintenance
(RBM) strategy for a power-generating plant. J of Loss Prevention in the Process Industries
18(2):69−81
[11] Kierulff HE (2007) The replacement decision: Getting it right. Business Horizons
50(3):231−237
[12] Tsang AHC, Yeung WK, Jardine AKS, Leung BPK (2006) Data management for CBM
optimization. J of Quality in Maintenance Engineering 12(1):37−51

[13] Kallen MJ, van Noortwijk JM (2003) Optimal maintenance decisions under imperfect
inspection. Reliability Engineering & System Safety (Selected papers from ESREL 2003)
90(2−3):177−185
[14] Sun Y, Ma L, Mathew J (2004) Reliability prediction of repairable systems for single com-
ponent repair. in: Proceedings of International Conference on Intelligent Maintenance Sys-
tem. Arles, France: IMS, S2-A.
[15] Sun, Y, Ma L, Morris J (2009) A practical approach for reliability prediction of pipeline
systems. Eur J of Operational Research 198(1):210−214
[16] Sun Y, Ma L, Mathew J (2007) Prediction of system reliability for multiple component
repairs. in: Proceedings of The 2007 IEEE International Conference on Industrial Engineer-
ing and Engineering Management. 2007. Singapore: IEEE, 1186−1190
[17] Kelly A (1984) Maintenance Planning and Control. Butterworth & Co Ltd., Cambridge
[18] Pham H (2003) ed. Handbook of Reliability Engineering. Springer, London
[19] Blischke WR, Murthy DNP (2000) Reliability – Modelling, Prediction, and Optimization.
John Wiley & Sons Inc., New York 143−239
A Flexible Asset Maintenance Decision-Making
Process Model

Yong Sun, Colin Fidge and Lin Ma

Abstract Optimal Asset Maintenance (AM) decisions are imperative for effi-
cient asset management. Decision Support Systems (DSSs) are often used to help
asset managers make maintenance decisions, but high quality decision support
must be based on sound decision-making principles. For long-lived assets, a suc-
cessful AM decision-making process must effectively handle multiple time scales.
For example, high-level strategic plans are normally made for periods of years,
while daily operational decisions may need to be made within a space of mere
minutes. When making strategic decisions, one usually has the luxury of time to
explore alternatives, whereas routine operational decisions must often be made
with no time for contemplation. In this paper, we present an innovative, flexible
decision-making process model which distinguishes meta-level decision making,
i.e. deciding how to make decisions, from the information gathering and analysis
steps required to make the decisions themselves. The new model can accommo-
date various decision types. Three industrial cases are given to demonstrate its
applicability.

Keywords Decision-making processes, Decision support systems, Asset management, Asset maintenance decisions

__________________________________
Y. Sun
CRC for Integrated Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, Brisbane, QLD 4001, Australia
e-mail: y3.sun@qut.edu.au, Tel: (61 7) 3138 2442, Fax: (61 7) 3138 1469
C. Fidge
Faculty of Science and Technology, Queensland University of Technology,
Brisbane, QLD 4001, Australia
L. Ma
CRC for Integrated Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, Brisbane, QLD 4001, Australia

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information
Systems and Decision Models, Engineering Asset Management Review,
DOI 10.1007/978-1-4471-2924-0_8, © Springer-Verlag London Limited 2012 (p. 149)

1 Introduction

There is an increasing demand for optimising engineering Asset Maintenance (AM) decisions [1] because they have significant technical and financial conse-
quences for asset owners and operators. As AM decisions involve multiple factors,
and different objectives and constraints, optimising AM decisions is highly chal-
lenging, so automated Decision Support Systems (DSSs) are essential to assist in
decision making. DSSs have found broad applications, e.g. for ISO9000 certifica-
tion in the health service [2] and for intraenterprise production scheduling in small
and medium-sized enterprises [3]. To ensure that decisions are made efficiently
and on a scientific basis, an effective AM decision-making process is needed.
Such a process involves a sequence of interrelated activities, undertaken within the
context of an organisational structure and resource constraints. Decision-making
processes provide the foundation for developing an overall AM decision support
framework and an integrated DSS. The process defines executive-level workflow,
the required analysis tools, and data input and output requirements.
In practice, a DSS has various users who need to make different Asset Mainte-
nance decisions with different focuses and time scales. To make a decision effi-
ciently, users need to follow an effective process. Although specific processes can
be designed for particular types of AM decisions, this approach is impractical for a
general AM DSS because so many different decision-making processes are
needed. Instead, we can use a generic process model which can be applied to all
types of AM decisions. This model can also be used as a template for AM practi-
tioners to enable them to customise their own decision-making processes for spe-
cific AM activities. To date development of a generic process has proven difficult
due to the complex nature of AM decisions. Strategic decisions need to be made over the long term, such as annually; routine decisions are needed in the medium term, such as monthly; and urgent decisions may need to be made within a much shorter period, such as within hours or even minutes. In addition, AM decisions
often involve multiple roles in an organisation and have various, sometimes con-
flicting, objectives. Finally, different AM decisions often require different infor-
mation and data analyses.
While much attention has been paid to decision models [4, 5], there are few publi-
cations on the decision making process itself. Most existing publications focus on a
specific part of this process only. For instance, Wanyama and Homayoun [6] pre-
sented a process for automated agent negotiation, Zoeteman and Esveld [7] pre-
sented a railway maintenance planning process, and Khan and Haddara [8] presented
an approach for risk-based asset maintenance planning. Some process models have
been developed for specific enterprises. For example, Boccalatta and Prefumo [2]
presented a process for documentation in the ISO9000 certification DSS.
A notable exception is the decision process model for infrastructure project man-
agement defined by the New Zealand National Asset Management Steering
(NAMS) Group [1]. This model has a much more complete consideration of AM
decision-making activities and allowance for multiple criteria. Similarly, Rhodes
[9] presented a very generic five-step decision-making process model: (1) gathering
A Flexible Asset Maintenance Decision-Making Process Model 151

data and information, (2) finding an exhaustive set of possible options, (3) allocating to each of these a degree of desirability, (4) selecting the best option,
and (5) verifying the option. However, this model is highly abstract, making it diffi-
cult to directly apply in AM practice. In addition, most decision-making process
models awkwardly mix short-term decision-making activities with long-term in-
formation generation and analysis activities. This failure to separate activities that
occur at different time scales, and decision types that have consequences over dif-
ferent periods of time, makes the models confusing and difficult to apply directly.
It is worth observing that there have been different asset management models
which have established a solid foundation for developing an asset maintenance
process model. The most commonly applied models are the PAS 55 Asset Man-
agement specification [10] and the International Infrastructure Management Man-
ual (IIMM) [11]. PAS 55 has two parts: PAS 55-1 describes optimised manage-
ment of physical infrastructure assets, and PAS 55-2 provides guidelines for the
application of PAS 55-1. PAS 55 does not describe decision making processes
specifically, but it presents a number of other important processes such as the steps
for forming, implementing and maintaining the asset management policy, as well
as the process for performing effective risk assessment and control. IIMM presents
various asset management specifications and processes, including a decision mak-
ing process which is the same as the NAMS Group’s model [1]. The Australian
Asset Management Council (AMC) [12] has also developed a Capability Assurance model which consists of one asset management process (Plan-Do-Check-Act), four asset management principles (output focus, capabilities, level of assurance and leading organisation) and two supporting elements (culture and leadership). These documents provide excellent guidelines and principles for optimised
asset management. However, implementing this knowledge in real maintenance
decision practice is often a great challenge due to multiple interdependent factors.
In this paper, we present a novel Flexible Asset Maintenance Decision-making
Process (FAMDP) model to address the need for a more generic process. Our model
is based on an analysis of the characteristics of typical industrial AM decisions,
while also considering the NAMS Group’s decision process model, Rhodes’ five-
step process model, and the guidelines, specifications and asset management mod-
els provided by PAS 55, the IIMM and the AMC. It can address both “basic” AM
decision-making processes and the specific needs of the AM decision’s context. As
its name implies, the proposed process is mainly used for optimising maintenance
decisions, e.g. establishing optimal renewal, replacement and repair times. It is not
suitable for making high level asset management policies or strategies. A number of
process modelling techniques are available to represent AM decision-making proc-
esses. We favour simple flowcharts in this paper because they are well-established,
familiar to most engineers and business managers, and can be directly adopted as a
workflow model in developing an AM Decision Support System. Industrial case
studies have demonstrated that our model can serve as an effective generic process
model, and it is therefore useful for developing an effective AM DSS.
The rest of the article is organised as follows. AM decision types and their char-
acteristics are analysed in Section 2. Our “split” AM decision support framework is
152 Y. Sun, C. Fidge and L. Ma

described in Section 3. Following this, our FAMDP model is developed in Section 4. Some of the issues associated with its design are discussed in Section 5. Three case studies are presented in Section 6, while Section 7 concludes the article.

2 Characteristics of Asset Maintenance Decisions

To develop an effective, generic Asset Maintenance decision-making process model, it is essential to first understand AM decision types and their corresponding characteristics. AM decisions can be classified using different criteria such as
their relevant time scale and the organisational levels involved. With respect to the
relevant time scale, we recognise the following four types of decisions:
1) AM strategic decisions. Such decisions include defining AM objectives, con-
sistent with the asset management policy and strategy, as well as the business
objectives in an organisation, and developing long-term AM strategic plans for
deciding on each asset’s operational, maintenance and capital investment poli-
cies. Asset renewal decisions often belong to this category. AM strategic deci-
sions are normally made annually, every five years, or over an even longer pe-
riod.
2) AM technical decisions. This type of decision includes developing AM plans
based on overall strategic plans to determine major preventive maintenance and
upgrading activities, as well as operational regimes. This type of decision is
typically made annually, but it can be made quarterly, or monthly.
3) AM implementation decisions. This type of decision includes scheduling asset
operational and maintenance activities, workforce allocation, expenditure and
material delivery timetables based on AM plans for the short term, such as the
next week or month.
4) Reactive decisions. This type of corrective maintenance decision is needed
when unplanned events occur, e.g. a component fails or there is an unexpected
peak in demand. These decisions often have to be made in the short term, that
is, half an hour to a day, in order to decide, for instance, whether the failure-
related assets should be shut down or whether more resources must be de-
ployed. Since reactive decisions need to be made in a short time, detailed tech-
nical and cost analyses usually cannot be conducted. Therefore, to ensure the
accuracy of the decisions made, the potential situations, the corresponding
costs, and the appropriate responses are often defined in advance by the AM
strategic or technical planning stages.
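The advance definition of situations, costs and responses described above can be sketched as a simple lookup table; the situation names, cost figures and decision levels below are hypothetical illustrations, not data from the case studies.

```python
# Sketch of pre-packaged reactive decision rules, prepared in advance during
# the strategic or technical planning stages. All entries are invented.
# Each rule: situation -> (predefined response, estimated cost, decision level)
REACTIVE_RULES = {
    "pump_seal_leak": ("isolate pump, switch to standby", 5_000, "operational"),
    "economiser_tube_leak": ("shut down unit, schedule repair", 120_000, "managerial"),
    "demand_peak": ("deploy reserve generation capacity", 30_000, "managerial"),
}

# Fallback when an unplanned event has no pre-analysed rule.
DEFAULT_RULE = ("shut down affected asset, notify supervisor", None, "managerial")

def reactive_decision(situation):
    """Return the pre-analysed response for an unplanned event.

    No technical or cost analysis is run here: that work was done in
    advance, so the decision can be made within minutes.
    """
    return REACTIVE_RULES.get(situation, DEFAULT_RULE)

response, cost, level = reactive_decision("economiser_tube_leak")
```

Because the table is built during planning, the reactive decision itself reduces to a constant-time lookup, which is what makes the half-hour-to-a-day time scale feasible.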
With respect to the organisational roles involved, we recognise the following
three categories of decisions:
1) Executive level decisions. This type of decision is normally made by the board
of an enterprise or its CEO to decide on asset management policies, operations
and maintenance strategies, capital projects and the asset maintenance budget.
2) Managerial level decisions. This type of decision is normally made by general managers or local office managers to determine the asset operation plan,
maintenance job priorities, inventory levels, workforce allocations and/or
maintenance budgets.
3) Operational level decisions. This type of decision is normally made by a site
director or engineers to decide on maintenance/repair types, locations and pro-
cedures. Some reactive decisions need to be made by these personnel.
The decision types based on the first classification criterion and those based on
the second criterion have some corresponding relationships. Personnel at execu-
tive level mainly deal with AM strategic decisions, but they may also need to
understand technical decisions. People at the managerial level usually focus on
technical decisions, but they also need to consider implementation-level decisions
which are normally made at the operational level. Reactive decisions are usually
made at the lower levels of an organisation, but some of them may need to esca-
late to higher levels, even the executive level, if they have significant impacts on
the organisation, especially financial.
When developing a generic process model, we not only need to consider that
AM decisions operate over different time scales and involve a wide range of
personnel and maintenance activities, but also need to consider that making dif-
ferent types of decisions requires different information. Making lower level deci-
sions such as repair decisions usually needs more specific technical information,
such as failure locations and modes, whereas making higher level decisions, such
as planning capital renewal projects, needs more general summaries such as the
system’s overall condition as measured by recent system reliability and availabil-
ity. The relationship among the different AM decision types, time scales and
decision information can be described using a multiple-scale decision-making
conceptual model (Figure 1).

Figure 1 A Typical Multi-Scale Decision-Making Conceptual Model (a decision hierarchy running from strategic decisions at the long-term end of the time scale, e.g. 5 years, through technical and implementation decisions, down to reactive decisions at the short-term end, e.g. hours; the information needed ranges correspondingly from general to specific)


Asset Maintenance decisions have other characteristics such as multiple criteria and interactions. Different types of decisions are not isolated. They have interactions with each other, e.g. repeated needs to make short-term corrective repairs to a
particular asset may lead to a change in its long-term replacement strategy. On the
other hand, short-term decisions have to be in compliance with the long-term goals
of an organisation. An AM decision-making process has to enable decision makers
to deal effectively with these multiple decision criteria and interactions.

3 A “Split” Asset Maintenance Decision Support Framework

A generic Asset Maintenance decision-making process model has to address the different time scales and different information requirements of different decision
types, as well as the interactions among these decision types. However, previous
approaches do not solve this problem effectively. Existing decision-making proc-
ess models cannot cope with AM decisions with different time scales because they
mix ‘basic’ decision-making activities, such as defining and selecting the best
decision option, together with ‘meta-level’ decision-support information genera-
tion and analysis activities, such as identifying project objectives and statistical
analyses of previous failures. Although all of these activities are necessary for
decision making, some are related to AM strategic decisions and others to imple-
mentation-level or reactive decisions.
In practice, whereas ‘basic’ decision making is necessary for all AM decisions
at each level in Figure 1, not every AM decision needs to perform long-term
information gathering and analysis activities before a decision can be made. For
example, once typical failure modes have been identified for an asset, decision
makers merely need to use these results to determine corresponding responses to
a failure in subsequent decision making. They do not need to repeat the failure
mode analysis used to identify possible responses during each decision-making
task. Typically, less frequent and more time-consuming information gathering and
analysis activities must be performed in advance to support more immediate AM
decisions. For instance, when making a reactive decision, there is normally not
enough time to conduct sophisticated data analyses. This type of decision needs
to be based on previously-identified failure modes and predefined decision-
making rules.
To address these differences between low-level Asset Maintenance decision-
making activities and their supporting, higher-level information generating and
analysis activities, we divide the overall decision-making process into a ‘basic’
decision-making process, which focuses on decision-making activities only, and a
number of decision-supporting information acquisition and generation processes,
which provide inputs for decision making. This division leads to the concept of a
‘split’ Asset Maintenance decision support framework (Figure 2). This framework
is a conceptual model or process guideline for how to make AM decisions effectively through proper integration of various decision models and methodologies. It separates processes for obtaining the information needed for making AM decisions from the basic process of making decisions.

Figure 2 Our ‘Split’ AM Decision Support Framework (the basic AM decision-making process exchanges trigger information and requests for decision inputs with the AM information acquisition/generation processes, and the decision-required information is passed between them through a shared database)

High-level information acquisition and generation processes are triggered by ‘basic’ decision-making events at a
lower level. However, the processes of acquiring new information for decision
making typically occur at a much longer time scale than those of making the deci-
sions themselves. Therefore, the data generated is usually stored in a database to
support subsequent decisions at the lower level.
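As a rough illustration of the framework, the sketch below (with invented names rather than any real DSS API) shows a ‘basic’ decision-making process that requests its inputs from a database and triggers the corresponding information acquisition process only when the stored information is missing.

```python
# Minimal sketch of the 'split' framework: decision-making reads its inputs
# from a database; a missing input triggers the (typically long-running)
# acquisition/generation process, and the result is stored for later reuse.
class SplitDecisionSupport:
    def __init__(self):
        self.database = {}        # decision-required information
        self.acquisition = {}     # input key -> process that produces it

    def register(self, key, process):
        self.acquisition[key] = process

    def get_input(self, key):
        if key not in self.database:    # trigger information generation
            self.database[key] = self.acquisition[key]()
        return self.database[key]       # otherwise reuse the stored result

dss = SplitDecisionSupport()
# A slow failure-mode analysis, performed once rather than per decision.
dss.register("failure_modes", lambda: ["erosion leak", "weld crack"])

first = dss.get_input("failure_modes")   # runs the acquisition process
second = dss.get_input("failure_modes")  # served directly from the database
```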

4 A Flexible Asset Maintenance Decision-Making Process Model

Based on our ‘split’ Asset Maintenance (AM) decision support framework from
Section 3 above, and taking into account the NAMS Group’s decision process
model, Rhodes’ five-step process model, and the guidelines, specifications and
asset management models provided by PAS 55, IIMM and the AMC, we devel-
oped a Flexible Asset Maintenance Decision-making Process (FAMDP) model as
shown in Figure 3.
The first step in this process model is to identify an AM decision which needs
to be made. As mentioned above, asset maintenance involves numerous decisions,
from routine maintenance planning to how to respond to an unexpected failure.
Different decisions need different information and analyses. Therefore, when
making a decision using a Decision Support System, one first needs to specify the
kind of decision to be made.
The second step is to identify the objectives and the constraints for making the
decision. Accurately recognising the decision objectives and constraints is imperative because they define the criteria for optimising the decision. The objectives of a specific AM decision have to be in compliance with the asset management policy and strategy, as well as business objectives in an organisation. In order to identify the objectives and constraints, one often needs to conduct a
number of analyses which may take a long time to complete, and these analyses
are not suitable for those decisions that need to be made within a short time pe-
riod. Fortunately, although every decision must be based on a clear understanding
of the objectives and constraints, this does not necessarily mean that the objective
and constraint analyses have to be conducted during each decision making event.
Instead, the analysis of objectives and constraints can be completed in advance,
based on the experience and knowledge of domain experts, in order to produce a
set of ‘pre-packaged’ decision options which can be applied quickly based on the
current system or asset state only. To allow this, the AM decision objective and
constraint identification process has been separated from the basic decision-
making process in Figure 3. This design allows the interactions among decisions
to be considered. Some decisions may result in subsequent changes to business
objectives and constraints. These modified objectives and constraints will be
stored so that other decisions can use and test them.
The third step is to gather the health status and operational information of as-
sets which are associated with the identified decision. This step is essential for all
AM decision making. It includes identifying an asset’s failure modes and causes,
and assessing each relevant asset’s current condition. It may also include predict-
ing the next failure time of other, related assets, i.e. ‘backup’ assets which are
currently forced to take the load of the failed one. In engineering asset health
assessment, analysing interactions between failures, i.e. interactive failures [13,
14] is often necessary. The impact of AM decisions on potentially improving
asset health also needs to be analysed [15]. Asset health assessment and predic-
tion is often time-consuming since it typically involves gathering and analysing
historical data for a large number of assets. For the same reason as mentioned
above, this long-term asset health assessment and prediction process is separated
from the short-term ‘basic’ decision-making process in our model.
The fourth step is to identify all potential decision options. Making a decision
requires selecting the best one among several alternatives [16]. Thus, identifica-
tion of decision options (alternatives) is a crucial step in a decision-making proc-
ess. In engineering Asset Maintenance, some options are discrete (e.g. to replace
a component), while others are continuous (e.g. to increase the frequency of in-
spections). After the decision options have been identified, we then need to short-
list them against ‘deal-breaker’ rules, i.e. eliminate those options that cannot meet
overall business objectives and constraints. When there are a large number of
options or there are continuous options, obtaining a shortlist of decision options
becomes difficult. Note in Figure 3 that there is a feedback loop from decision
risk assessment and verification (the ninth step) to the decision option identification process. If all shortlisted decision options prove unsatisfactory, the decision maker
may need to reconsider those options previously discarded. To this end, these
options should be temporarily retained until the whole decision-making process is
closed. In some cases, discarded options become viable again because of changes
to the decision objectives and constraints.
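A minimal sketch of this shortlisting step, with invented options and ‘deal-breaker’ rules, might look as follows; note that discarded options are retained so that the feedback loop can reconsider them.

```python
# Sketch of shortlisting decision options against 'deal-breaker' rules.
# The options and rules are illustrative only.
options = [
    {"name": "replace component", "cost": 80_000, "downtime_h": 48},
    {"name": "patch repair", "cost": 10_000, "downtime_h": 6},
    {"name": "run to failure", "cost": 0, "downtime_h": 0, "availability_ok": False},
]

deal_breakers = [
    lambda o: o.get("availability_ok", True),  # must meet the availability objective
    lambda o: o["downtime_h"] <= 72,           # must fit the allowed outage window
]

shortlist = [o for o in options if all(rule(o) for rule in deal_breakers)]
# Discarded options are kept until the decision-making process is closed.
discarded = [o for o in options if o not in shortlist]
```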
Figure 3 Our Flexible Asset Maintenance Decision-Making Process (a flowchart of eleven steps — 1. identify an AM decision; 2. gather AM decision objectives/constraints information; 3. gather asset condition and operation information; 4. gather decision options; 5. gather option ranking information; 6. select the best option and check the decision; 7. gather the quantified relationship information; 8. identify the optimal decision parameters; 9. assess risk and verify the decision; 10. enact the decision; or 11. report and abort the decision process. Supporting processes — AM decision objectives/constraints identification, asset health assessment and prediction, option identification, option ranking, relationship analysis, and risk evaluation and what-if analysis — are invoked only when the required information has not already been identified, and feedback loops allow decision options and, where authorised, decision objectives to be revisited.)
The fifth step is to rank the decision options based on decision criteria which are
determined according to the decision objectives and constraints. In modern Asset
Maintenance, decisions often involve multiple factors, and different objectives and
constraints, i.e. AM decision making belongs to the class of ‘multiple criteria’ deci-
sion problems. As a result, ranking decision options is often difficult. To address
this issue, various option ranking models and methodologies have been developed,
e.g. Decision Trees, the Analytic Hierarchy Process, and fuzzy logic. These tech-
niques can effectively assist in AM decision ranking. For decision making in safety-
critical environments, a risk-based decision making approach may be applied. The
IIMM presents a risk analysis method, and a risk assessment and management proc-
ess [17]. However, no matter which methodology is used, applying it correctly
typically requires a sound knowledge of how it works and, in particular, an under-
standing of its limitations. In addition, in most cases, it also takes a significant
amount of time to conduct decision option ranking analyses, and hence the ranking
process is also separated from the basic decision-making process in our model.
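As a simple illustration of where ranking fits into the process — using a plain weighted-sum score rather than the more sophisticated techniques listed above, and with invented weights and normalised scores — one might write:

```python
# Weighted-sum ranking sketch for step 5. Weights come from the decision
# objectives/constraints; scores are normalised so that higher is better.
weights = {"cost": 0.5, "risk": 0.3, "duration": 0.2}

option_scores = {
    "replace component": {"cost": 0.4, "risk": 0.9, "duration": 0.5},
    "patch repair": {"cost": 0.9, "risk": 0.5, "duration": 0.9},
}

def weighted_score(scores):
    return sum(weights[c] * scores[c] for c in weights)

# Rank options from best to worst according to the aggregate score.
ranking = sorted(option_scores,
                 key=lambda o: weighted_score(option_scores[o]),
                 reverse=True)
best = ranking[0]
```

A real DSS would derive the weights more carefully (e.g. via the Analytic Hierarchy Process), but the surrounding steps stay the same, which is the benefit of separating the ranking process from the basic decision-making process.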
The sixth, seventh and eighth steps are to optimise decision parameters, such as
asset renewal times. After the fifth step, the decision options have been ranked.
Then one can determine a best option based on the rankings. However, deciding on
the best option does not necessarily mean that the decision can be finalised because
Asset Maintenance decisions are so complex. In practice, further analyses may be
needed to optimise those parameters which are associated with the selected deci-
sion option. For example, when the reliability of an asset is lower than an accept-
able level, a number of maintenance activities can be applied to improve its reli-
ability, including conducting preventive maintenance or renewing the whole asset.
If a decision to renew the asset is made, then one needs to further decide on the
optimal renewal time. To address these issues, our FAMDP model has additional
steps in which we need to identify data availability and then conduct an optimisa-
tion analysis using an appropriate optimisation model or method based on the deci-
sion objectives and constraints which have been identified in the second step.
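The kind of optimisation analysis meant here can be sketched for a renewal-time decision as follows; the Weibull failure model and all cost scales are assumptions made purely for illustration, not values from any case study.

```python
# Sketch of optimising a renewal interval by minimising an assumed expected
# annual cost: repair cost and production loss rise with the interval (the
# asset is more likely to fail), while the renewal cost is spread over it.
import math

BETA, ETA = 2.5, 8.0  # assumed Weibull shape and characteristic life (years)
REPAIR_SCALE, RENEWAL_COST, LOSS_SCALE = 50_000, 200_000, 120_000

def failure_probability(t):
    return 1.0 - math.exp(-((t / ETA) ** BETA))

def expected_annual_cost(interval):
    repair = REPAIR_SCALE * failure_probability(interval)   # rises with interval
    renewal = RENEWAL_COST / interval                       # falls with interval
    loss = LOSS_SCALE * failure_probability(interval)       # production losses
    return repair + renewal + loss

# Simple grid search over candidate renewal intervals (1 to 20 years).
candidates = [t / 2 for t in range(2, 41)]
optimal_interval = min(candidates, key=expected_annual_cost)
```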
The ninth step in our FAMDP model is to assess the risk and verify
the decision. Risk assessment of a decision is a part of the whole risk identifica-
tion, assessment and control system in an organisation. PAS 55 includes a well-
established methodology for risk identification, assessment and control. Decision
verification is an important step in an AM decision-making process. It usually
involves a number of ‘what-if’ analyses to ensure that the selected decision is
robust. Once the decision has been validated, it becomes the final one which leads
to the tenth step, to enact the decision. However, if the chosen decision option
proves unsatisfactory, and no other viable options are available, the decision
maker will need to modify the objectives or reconsider the decision options. Un-
fortunately, some decision makers, especially those at lower levels in an organisa-
tion, such as equipment operators, may not be allowed to change AM decision
objectives which are associated with the organisation’s business objectives. In this
case, the need for modification of objectives must be reported to their super-
visors – the eleventh step in our FAMDP model – and the whole decision-making
process is suspended until new AM objectives are determined.
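The ‘what-if’ verification of the ninth step can be sketched as a sweep over perturbed inputs, checking whether the selected option remains the best; the options, cost model and perturbation range below are invented.

```python
# Sketch of a robustness check: perturb an input (here, an electricity price
# factor) and see whether the chosen option stays optimal.
def total_cost(option, price_factor):
    # Hypothetical cost model: fixed maintenance cost plus a production
    # loss that scales with the electricity price.
    return option["fixed"] + option["loss"] * price_factor

options = {
    "renew tubing": {"fixed": 90_000, "loss": 10_000},
    "reactive only": {"fixed": 20_000, "loss": 95_000},
}

def best_option(price_factor):
    return min(options, key=lambda name: total_cost(options[name], price_factor))

chosen = best_option(1.0)                  # the nominal decision
robust = all(best_option(f) == chosen for f in (0.8, 0.9, 1.1, 1.2))
```

With these invented figures `robust` comes out `False`, because a 20 % price drop makes the reactive option cheaper; the decision maker would then be sent back through the feedback loop to reconsider the options or objectives.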
5 Discussion and Comparison

Decision makers need to go through a ‘basic’ decision-making process for every decision made, but they do not necessarily need to go through all the information
generation and/or analysis processes at the same time. They can do information
collection and analyses, such as cost analysis and failure prediction, less fre-
quently and over a longer period of time. The information will be stored in a data-
base to be used to inform later decisions. The basic decision-making process en-
ables decision makers to consider the decision inputs systematically so that the
required inputs can be prepared in advance. This capability is essential for making
lower-level decisions in a relatively short time, informed by higher-level analyses
conducted over a long period.
The flexible process is also beneficial for decision support software design, as shown in Figure 2. It allows a core decision-making module to implement
the ‘basic’ decision-making process, and a number of separate analysis modules to
acquire or generate inputs for decision making. The core module and the analysis
modules need to be loosely coupled only, making the overall software development
process easier. Users of the resulting system need only the core module and some selected analysis modules so that they can perform simple and common analyses
themselves. When the users need more sophisticated and/or unusual analyses, they
can access other analysis modules through stand-alone application software or web-
based services. Since the analysis modules are loosely linked to the core module,
they can be modified and extended without affecting the core module.
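The loose coupling described above can be sketched as a registry of analysis callables held by the core module; the module names and results are illustrative only.

```python
# Sketch of a core decision-making module that only knows a registry of
# analysis modules, so modules can be added, replaced, or backed by a web
# service without changing the core.
class CoreDecisionModule:
    def __init__(self):
        self._analyses = {}

    def register_analysis(self, name, func):
        self._analyses[name] = func      # loose coupling: just a callable

    def run_analysis(self, name, *args):
        return self._analyses[name](*args)

core = CoreDecisionModule()
core.register_analysis("repair_cost", lambda hours: 1_500 * hours)
core.register_analysis("failure_prediction", lambda: "low risk")  # could wrap a web service

estimate = core.run_analysis("repair_cost", 8)
```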
Our FAMDP has two feedback loops. One is from the verification step back to
identifying the decision objectives and constraints. The other is from verifying the
decision to defining decision options. As mentioned above, some AM decisions
have numerous options, and it is difficult to identify all potential options exhaustively at the outset. Therefore, reviewing options is often necessary.
For comparison, we present a simplified view of the NAMS infrastructure
management decision-making process model in Figure 4. Comparing this figure
with Figure 3, we can see that the NAMS Group’s decision process starts with
identifying project objectives because it was designed specifically for infrastruc-
ture project management. In contrast, our FAMDP model starts with identifying an
AM decision and then identifying the objectives of the decision. This arrangement
enables our process to accommodate more generic AM decisions. In reality, dif-
ferent AM decisions often have different objectives. It is necessary to define ob-
jectives clearly when making a decision. Decision objective identification can be
complex, and needs to follow an appropriate process. However, in many AM
decision making cases, objectives are often well defined in advance, especially for
decisions made at lower levels in an organisation where decision makers are often
not able to define business-critical objectives. In this case, decision makers are
only requested to gather and understand decision objectives. In addition, our proc-
ess requires not only identifying the decision objectives, but also identifying the
constraints for making the decision because identifying the constraints is crucial
for decision optimisation.
Figure 4 A Simplified Version of the NAMS Group’s Decision-Making Process for Infrastructure Projects [1] (a flowchart: 1. define project objectives; 2. identify potential failures, where the problem relates to an existing asset; 3. identify the nature of the opportunity; 4. define the criteria for failure; 5. define decision options; 6. analyse options against multiple criteria; 7. review options, modifying the project objectives if no preferred option can be selected from the remaining options; 8. complete financial analysis)

6 Case Studies

To validate our Flexible Asset Maintenance Decision-making Process model, we have applied it to ‘economiser’ maintenance decision making in an Australian
power generation company and pipeline renewal decision support for an Austra-
lian water supply company. The first two case studies below illustrate long-term
decision making and short-term decision making, respectively. The third case
study explains how the model was used as the basis for implementing a prototype
decision support tool. All three examples demonstrate the versatility of the
FAMDP by instantiating it for the particular decision making requirement at
hand, producing a model that precisely matches the respective company’s actual
decision-making processes. (To protect the companies’ commercial interests, the
data presented below have been modified.)
6.1 Case 1: Determination of an Optimal Economiser Maintenance Strategy

The economiser is a critical component for efficient operation of coal-fired power stations. It consists of a large system of water-filled tubes which extract heat from
the exhaust gases. When it fails, usually due to erosion causing a leak, the entire
generator unit must be shut down for repairs. In economiser maintenance man-
agement, there are a variety of decision-making requirements, each involving
different time frames. Assume that a coal-fired power station has two identical
600 MW electricity generation units which were built just over 30 years ago and
that the designed life of the station was 26 years. However, after assessing the
health of its assets, the electricity supplier decides to extend the units’ lives for
another 20 years. As the units were running at their wear-out stages, to ensure the
electricity generation units can meet the organisation’s ongoing business require-
ments, optimal strategies to operate and maintain the units need to be developed.
One of these strategies is an economiser maintenance strategy as the economisers
are critical components in the electricity generation units. A process to choose an
optimal economiser maintenance strategy for the long term can be defined as a
specific instantiation of the FAMDP model from Figure 3 as shown in Figure 5.
Step 1: Identify an AM decision. In this case study, the AM decision is to de-
cide the optimal maintenance strategy for an economiser in a coal-fired power
station.
Step 2: Define the objectives and constraints associated with economiser
maintenance management. On the first occasion, we need to go through the AM
decision objectives/constraints identification process to gather the required infor-
mation. In this case study, we assume the objectives and constraints have already
been defined by the organisation’s strategic plan. The major objectives and con-
straints are: to ensure the overall availability of the economisers to be greater than
98 % under normal circumstances and 100 % in peak hours while simultaneously
minimising the total maintenance cost; and to conduct a major planned outage for
maintenance every five years and a minor planned outage every two years. At most 10 weeks in total of planned and unplanned outages are allowed in every 10-year period. However, one should bear in mind that these
objectives/constraints need to be audited against asset health conditions and opera-
tional requirements regularly. These data have been stored in a database. When
users use a Decision Support System based on the FAMDP model, the system will
automatically retrieve these data to obtain the required information. Users do not
need to go through the information acquisition process again.
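Retrieving and applying the stored objectives and constraints can be sketched as follows; the constraint values are those stated above, while the candidate plan figures are invented.

```python
# Sketch of checking a candidate maintenance plan against the stored Step 2
# constraints: availability of at least 98 %, and at most 10 weeks of
# planned and unplanned outage in any 10-year period.
CONSTRAINTS = {"min_availability": 0.98, "max_outage_weeks_per_10y": 10}

def satisfies_constraints(plan):
    hours_in_10y = 10 * 365 * 24
    outage_hours = plan["outage_weeks_per_10y"] * 7 * 24
    availability = 1.0 - outage_hours / hours_in_10y
    return (availability >= CONSTRAINTS["min_availability"]
            and plan["outage_weeks_per_10y"] <= CONSTRAINTS["max_outage_weeks_per_10y"])

ok = satisfies_constraints({"outage_weeks_per_10y": 9})        # within budget
too_long = satisfies_constraints({"outage_weeks_per_10y": 12}) # violates both limits
```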
Step 3: Assess and predict economiser conditions. Since economisers are
complex, dynamic systems, we have to assess their current health and predict
future changes accurately through an asset health assessment and prediction proc-
ess to ensure the accuracy of the decision. (Economiser health prediction has been
studied extensively [18], but is beyond the scope of this article.) The health condition of an economiser can be represented by either (1) a failure probability function (or reliability function), or (2) tube thicknesses at installation plus their erosion rates, or both.
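The two representations can be sketched as follows; the Weibull parameters, initial wall thickness and erosion rate are invented for illustration.

```python
# Sketch of the two economiser health representations: (1) a reliability
# (survival) function, here an assumed Weibull model, and (2) tube wall
# thickness at installation minus uniform erosion.
import math

def reliability(t_years, beta=2.0, eta=12.0):
    """Assumed Weibull reliability R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t_years / eta) ** beta))

def remaining_thickness(initial_mm, erosion_rate_mm_per_year, t_years):
    """Tube wall thickness after t years of uniform erosion, floored at zero."""
    return max(0.0, initial_mm - erosion_rate_mm_per_year * t_years)

r5 = reliability(5.0)                       # survival probability at 5 years
wall = remaining_thickness(6.0, 0.2, 5.0)   # a 6 mm wall eroding at 0.2 mm/year
```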
Step 4: Define potential maintenance strategy options for economisers.
This work is done by domain experts based on their experience. The potential
options include reactive (corrective) maintenance, preventive maintenance, predic-
tive maintenance, renewal of the tubing system and various combinations of these
actions. Renewal of an economiser tubing system can be defined as replacing
more than 40 % of the individual tubes. In economiser maintenance, the type of
preventive maintenance is opportunistic, e.g. preventively replacing some worn
tubes when the economiser is shut down to repair a leaking tube or for some other
reason.
Step 5: Select the best option and check the decision parameters. After a
qualitative analysis, assume that a combined maintenance strategy has been se-
lected. The economiser tubing system will be renewed at a scheduled interval.
Between renewals, the economiser will be maintained based on reactive mainte-
nance and opportunistic preventive maintenance strategies. In this case, the re-
newal interval is a decision parameter which needs to be optimised. Another two
decision parameters are the renewal area (i.e. how much of the old tubing to cut
away and replace) and location (i.e. which erosion ‘hotspots’ to focus on).
Step 6: Optimise the renewal intervals. The aim of optimising the renewal in-
tervals is to minimise the expected total maintenance cost of the economisers
which includes expected repair costs, expected renewal costs and expected pro-
duction losses due to maintenance downtime. The other objectives which have
been identified in Step 2 become constraints. Here, the expected repair cost is
assumed to be proportional to the failure probability of the tubes. The proportional
scale can be assumed to be constant, i.e. ignoring the influence of inflation and
interest. The failure probability of the tubes is time dependent. Therefore, the
expected repair cost is a function of renewal intervals. The expected renewal cost
is assumed to be inversely proportional to the renewal interval; hence, it is also a
function of renewal intervals. Again, the proportional scale can be assumed to be
constant. The expected production loss is assumed to be proportional to the failure
probability of the tubes and the outage duration. As a result, it is also a function of
renewal intervals. However, the proportional scale cannot be assumed to be con-
stant in this case. Seasonal changes of the electricity market price have to be taken
into account (however, daily fluctuations in the price do not need to be considered
because the outage duration due to maintenance is always greater than one day).
Therefore, the expected production loss depends on both renewal intervals and the
calendar times when the renewal actions are conducted. Adding the expected re-
pair cost, the expected renewal cost and the expected production loss together, we
can obtain the expected total maintenance cost of the economisers which is a func-
tion of renewal intervals and the calendar times when the renewal actions are
conducted. Using an appropriate optimisation algorithm, one can then finally iden-
tify the optimal renewal intervals.
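The cost structure described above can be sketched as follows. Only the proportionality relationships come from the text; the functional forms, the constants `k_repair`/`k_renewal`/`k_loss` and the grid-search optimiser are illustrative assumptions.

```python
import math

# Illustrative sketch of the Step 6 expected total maintenance cost model.

def failure_probability(t, eta=8.0, beta=3.0):
    """Time-dependent tube failure probability, F(t) = 1 - exp(-(t/eta)**beta)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def seasonal_price(month):
    """Seasonal electricity price factor (assumed to peak twice a year)."""
    return 1.0 + 0.4 * abs(math.cos(math.pi * month / 6.0))

def expected_total_cost(interval_years, renewal_month, k_repair=100.0,
                        k_renewal=500.0, k_loss=80.0):
    repair = k_repair * failure_probability(interval_years)       # proportional to F(T)
    renewal = k_renewal / interval_years                          # proportional to 1/T
    loss = (k_loss * failure_probability(interval_years)
            * seasonal_price(renewal_month))                      # season-dependent scale
    return repair + renewal + loss

def optimise_renewal(intervals=range(2, 15), months=range(12)):
    """Grid search over the renewal interval and the calendar month in
    which the renewal outage is conducted."""
    return min((expected_total_cost(T, m), T, m)
               for T in intervals for m in months)
```

The optimiser returns the minimum expected total cost together with the corresponding interval and calendar month; any other optimisation algorithm could be substituted for the grid search.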
Step 7: Verify the decision using sensitivity analysis and risk assessment. If
the decision is satisfactory, we accept it and the decision making loop is closed.
Otherwise, we need to go back to review and modify the objectives and/or choose
other maintenance strategies. The verification of a selected decision is also out of
scope and not further discussed.

Figure 5 Economiser Maintenance Strategy Determination Process

6.2 Case 2: Determination of the Optimal Lead Time to Repair Leaking Tubes

Repairing leaking tubes is a form of corrective maintenance and is a type of
emergency decision in the power station. When a leak in an economiser has been
detected, the site manager needs to decide whether to shut down the electricity
generation unit and fix the problem immediately or to continue operating the unit
for a certain period and then fix the problem. A process for making this short-term
decision by instantiating the FAMDP model from Figure 3 is shown in Figure 6.
Step 1: Identify an AM decision. In this case study, the AM decision is to de-
cide the optimal repair lead time for an economiser when a leak is detected.
Step 2: Gather the objectives and constraints associated with economiser
repairs. Since the required decision is a type of emergency decision, the objec-
tives and the constraints have to be defined in advance because there is no time for
reflection and analysis when the decision is required. Fortunately, in this case
study, the objectives and constraints are the same as those which have been identi-
fied above for choosing the optimal maintenance strategy. However, this coinci-
dence also means that these two types of decisions have interactions. Changes in
objectives and constraints in one decision will result in changes to other decisions.
Step 3: Assess and predict economiser conditions. Although a leak has been
identified, one has to check its severity and predict the consequential failures if the
leak is not fixed. According to historical observations, leaving a leak unrepaired
will produce around three further leaks every 24 hours due to the high-pressure
water escaping from the leaking tube eroding neighbouring tubes, and, conse-
quently, an additional one day is needed to fix ‘consequent’ leaks. These conse-
quential failures have to be considered in the decision as they can significantly
increase repair costs and production losses.
Step 4: Obtain potential repair options. As an emergency decision, the op-
tions should be clearly defined in advance. In practice, this work is done by do-
main experts based on their experience. When a leak is identified, potential op-
tions are to (1) shut down the unit and fix the leak immediately; (2) continue
operating the unit and fix the leaks three days later; or (3) continue operating the
unit and fix the leaks six days later.
Step 5: Select the best option. The optimal repair action heavily depends on
the electricity market price at the time when the leak occurs. The electricity mar-
ket price fluctuates significantly, from a typical $25/MWh up to $2500/MWh in
some short-lived peaks. As a result, production losses due to outages of the same
duration occurring at different times can be dramatically different, compared to
relatively stable repair costs. The major objective for determining the best repair
option is to minimise the total cost, which includes production losses and repair
costs. In current practice, we assume that the electricity supplier makes their deci-
sions based on the following rules: if the electricity market price when the failure
occurs is less than $30/MWh, select option (1); if it is $30−$100/MWh, select
option (2); and if it is greater than $100/MWh, select option (3).
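These selection rules can be captured directly in code. The price thresholds come from the text; the function name and the returned lead times (in days) paraphrase the three options.

```python
# Price-threshold rules for selecting a repair option when a leak is detected.
# Thresholds in $/MWh follow the text; names are illustrative.

def repair_option(price_per_mwh):
    """Return (option number, repair lead time in days) for a detected leak."""
    if price_per_mwh < 30:
        return 1, 0   # shut down the unit and fix the leak immediately
    elif price_per_mwh <= 100:
        return 2, 3   # continue operating, fix the leaks three days later
    else:
        return 3, 6   # continue operating, fix the leaks six days later
```

Note that the Step 3 observation (roughly three further consequential leaks per day left unrepaired, plus an extra repair day) is what makes the longer lead times costly, so the thresholds would need recalibrating as repair costs and leak propagation rates change.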
As in this case no decision parameters need to be further optimised, the steps
for optimisation of decision parameters (i.e. the sixth, seventh and eighth steps) in
the FAMDP (Figure 3) are skipped. Furthermore, because fixing economiser leaks
is a responsive decision and there is not enough time to do a what-if analysis, the
selected decision in Step 5 normally becomes the final decision. However, we also
noticed that decisions made based on these previously well-defined selection crite-
ria may not always be optimal. Therefore, when time permits, a risk assessment
and what-if analysis is needed to justify the decisions and calibrate the rules (i.e.
by going through a risk assessment and what-if analysis process as per Step 6 in
Figure 6). This case study has once again demonstrated the importance of separat-
ing the basic decision making activities from the information generation and/or
analysis processes in a decision making process model used for emergency
decisions.

Figure 6 Optimal Repair Lead Time Determination Process

6.3 Case 3: Pipeline Renewal Decision Support

Our process model has also been used to design a pipeline renewal decision sup-
port tool for a water utility company. Pipeline renewal is a type of long-term (over
30 years) decision in the company. The decision tool software was designed to
assist users to follow the procedure shown in Figure 3 automatically.
Step 1: As a special-purpose decision support tool, the decision of interest is to
decide the optimal renewal time for each pipeline in terms of minimum total cost,
while meeting the company’s major business objectives.
Step 2: After discussion with maintenance staff in the company, the objective
was identified as minimising the total cost, which included repair costs due to
pipeline failures and replacement costs. Production losses can be ignored in this
application. The major constraints to achieve this goal were (1) business risk con-
trol and (2) customers’ requirements for service interruptions.
Step 3: The pipeline’s health status is one of the most critical factors for decid-
ing renewal times. As the company has over 1000 pipelines which are made of
various materials and have different lengths, diameters and working environments,
a special process was designed for pipeline health assessment and prediction,
which includes pipeline filtering and grouping, data quality analysis (censored
or complete data), and statistical analysis.
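A minimal sketch of this statistical analysis step, assuming a Weibull lifetime model for pipeline failures: failed pipelines contribute their failure density to the likelihood, while pipelines still in service enter as right-censored observations. A coarse grid search stands in for a proper maximum-likelihood solver, and all parameter ranges are illustrative.

```python
import math
import random  # used only to generate synthetic lifetimes in the usage example

# Censored Weibull fit: log L = sum log f(t_failed) + sum log S(t_censored),
# with S(t) = exp(-(t/eta)**beta) and f(t) the Weibull density.

def weibull_log_likelihood(failures, censored, eta, beta):
    ll = 0.0
    for t in failures:
        ll += (math.log(beta / eta) + (beta - 1.0) * math.log(t / eta)
               - (t / eta) ** beta)
    for t in censored:
        ll -= (t / eta) ** beta
    return ll

def fit_weibull(failures, censored):
    """Maximise the likelihood over a coarse (eta, beta) grid."""
    best = None
    for eta in [e / 2.0 for e in range(10, 121)]:      # 5.0 .. 60.0 years
        for beta in [b / 10.0 for b in range(5, 61)]:  # 0.5 .. 6.0
            ll = weibull_log_likelihood(failures, censored, eta, beta)
            if best is None or ll > best[0]:
                best = (ll, eta, beta)
    return best[1], best[2]
```

Applied to synthetic lifetimes censored at a fixed inspection age, the grid search recovers parameters close to the generating values.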
Steps 4, 5 and 6 were not relevant in this case study as the tool was specifically
designed for making renewal time decisions only, so there were no alternative
options to consider.
Step 7: The decision parameter ‘renewal time’ is what needs to be optimised in
this case. To this end, a total cost rate (i.e. the total cost per unit time) was formu-
lated as a function of repair cost per repair, renewal time, pipeline failure probabil-
ity and replacement cost per unit time. To evaluate the service interruption risk,
the quantitative relationship between service interruptions due to planned and
unplanned maintenance and the renewal time was also developed.
Step 8: The cost rate function, reliability function and service interruption
function were entered into a multi-criteria optimisation algorithm to calculate the
optimal renewal times which correspond to a minimal total cost rate and satisfy
the minimum reliability requirement and service interruption requirement. These
renewal times are then offered to decision makers.
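The cost rate and constrained optimisation described in these steps can be sketched as a grid search over candidate renewal times. The quantities entering the cost rate follow the text; the functional forms, parameter values and constraint thresholds are illustrative assumptions.

```python
import math

# Illustrative sketch of the pipeline renewal-time optimisation (Steps 7-8).

def failure_prob(t, eta=40.0, beta=2.2):
    return 1.0 - math.exp(-((t / eta) ** beta))

def cost_rate(T, repair_cost=5000.0, replacement_cost=120000.0):
    """Total cost per year for renewal at age T: expected repair cost up to T
    plus the replacement cost, spread over the renewal interval."""
    return (repair_cost * failure_prob(T) + replacement_cost) / T

def interruptions_per_year(T, unplanned_per_failure=2.0):
    """One planned interruption per renewal plus expected unplanned ones."""
    return (1.0 + unplanned_per_failure * failure_prob(T)) / T

def optimal_renewal_time(min_reliability=0.2, max_interruptions=0.25):
    """Minimise the cost rate subject to the reliability and service
    interruption requirements (a simple multi-criteria formulation)."""
    feasible = [T for T in range(5, 81)
                if 1.0 - failure_prob(T) >= min_reliability
                and interruptions_per_year(T) <= max_interruptions]
    return min(feasible, key=cost_rate) if feasible else None
```

A what-if analysis in the spirit of Step 9 can be run by re-invoking `optimal_renewal_time` with perturbed costs or thresholds and checking whether the recommended renewal time remains stable.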
Step 9: Because of the uncertainty in failures and costs, especially the pre-
dicted pipeline replacement cost, decision makers need to justify the recom-
mended renewal times through risk evaluations and what-if analyses, i.e. to see if
the decision is robust. An analysis tool was developed to calculate the changes of
failure probability, service interruptions and the total cost rate, as well as the fluc-
tuations of maintenance expenditure over a given decision horizon (e.g. 30 years)
corresponding to different renewal times. This function enables decision makers to
reschedule the renewal times while still meeting a particular risk control
level. However, for risk management, decision makers have to record their reasons
for such changes so that their decisions can be traced and audited.
Step 10: Once the renewal times of all pipelines have been determined, the de-
cision support system will automatically generate a renewal scheduling table
which shows the renewal time and cost of every pipeline and the total expected
repair cost over its life-span.
From the three case studies provided, it can be seen that our Flexible Asset
Maintenance Decision-making Process model can be instantiated for long-term
economiser maintenance strategy decision making, short-term economiser repair
decision making, and long-term pipeline renewal decision making. Importantly,
in all three cases it was possible to instantiate the model in a way that precisely
matched the relevant company’s existing maintenance practices.

7 Conclusion

Engineering Asset Maintenance (AM) involves various AM decisions which have
different characteristics. These decisions have different time scales and focuses.
They also involve different personnel and analyses. With respect to the time scale,
AM decisions can be classified into four categories: strategic decisions, technical
decisions, implementation decisions and reactive responses. These four types of
decisions have very different time scales ranging from years to minutes. Existing
decision-making process models which require implementing basic decision ac-
tivities as well as decision information generation and analysis activities sequen-
tially are not suitable for all of these different types of decisions, and hence cannot
be used as a sufficiently generic AM decision-making process model as needed for
developing an effective AM decision support system.
Here, we have presented a Flexible AM Decision-making Process (FAMDP)
model. In this new model, the ‘basic’ decision-making process focuses solely on
decision making activities, and has been separated from the decision-supporting
information acquisition and generation processes which provide inputs for making
decisions. This ‘split’ design can effectively address the issue that AM decisions
have different time scales and involve different roles. The rationale behind our
FAMDP model is that when making an AM decision, one always has to go
through the basic decision-making process, but it is not always necessary to go
through all the decision information acquisition and generation processes.
Three specific industrial maintenance decision making processes were pre-
sented to show that the FAMDP is a sufficiently generic model. It is applicable to
the wide range of decision making activities required in large-scale engineering
asset maintenance management. The model has proven useful as a framework for
developing an integrated AM decision support software system. We have already
developed a demonstrable prototype of such a system. The FAMDP model can
also be used as a reference model so that industrial personnel can quickly develop
their own customised decision-making process for different specific AM activities.

Acknowledgments This research was conducted within the CRC for Integrated Engineering
Asset Management, established and supported under the Australian Government’s Cooperative
Research Centres Program.
References

[1] NAMS Group (2004) Optimised Decision Making Guidelines: A sustainable approach to
managing infrastructure. NZ National Asset Management Steering Group, Thames
[2] Boccalatte A, Prefumo R (1997) A DSS for ISO 9000 certification in the health service:
a case study. In: P 1997 IEEE Int Conf Syst, Man Cy. Orlando, FL: IEEE, 577−581
[3] Becvar P, Smidl L, Psutka J, Pechoucek M (2007) An Intelligent Telephony Interface of
Multiagent Decision Support Systems. IEEE T Syst Man Cy C 37(4):553−560
[4] Zou X, Chen Y, Liu M, Kang L (2008) A New Evolutionary Algorithm for Solving Many-
Objective Optimization Problems. IEEE T Syst Man Cy B 38(5):1402−1412
[5] Sarkis J, Sundarraj RP (2006) Evaluation of Enterprise Information Technologies: A Deci-
sion Model for High-level Consideration of Strategic and Operational Issues. IEEE T Syst
Man Cy C 36(2):260−273
[6] Wanyama T, Homayoun Far B (2007) A protocol for multi-agent negotiation in a group-
choice decision making process. J Netw Comput Appl 30(3):1173−1195
[7] Zoeteman A, Eyveld C (2004) State of the art in railway maintenance management: plan-
ning systems and their application in Europe. In: P 2004 IEEE Int Conf Sys Man Cy. The
Hague, Netherlands, 4165−4170
[8] Khan FI, Haddara MM (2003) Risk-based maintenance (RBM): a quantitative approach for
maintenance/inspection scheduling and planning. J Loss Prevent Proc 16(6):561−573
[9] Rhodes PC (1993) Decision Support Systems: Theory and Practice. Alfred Waller Limited,
Henley-on-Thames
[10] Institution of Asset Management (2004) PAS 55 – Optimal management of physical as-
sets. British Standards Institution, London
[11] Institute of Public Works Engineering Australia, et al. (2006) International Infrastruc-
ture Management Manual. Institute of Public Works Engineering Australia, et al.
[12] Asset Management Council (2010) What is asset management.
http://www.amcouncil.com.au/files/Asset_Management_Council_0906_2000_084%20What%20is%20Asset%20Management.pdf.
Accessed on 24 June 2010
[13] Sun Y, Ma L, Mathew J, Zhang S (2006) An analytical model for interactive failures.
Reliab Eng Syst Safe 91(3):495−504
[14] Sun Y, Ma L, Mathew J (2009) Failure analysis of engineering systems with preventive
maintenance and failure interactions. Comput Ind Eng 57(2):539−549
[15] Sun Y, Ma L, Morris J (2009) A practical approach for reliability prediction of pipeline
systems. Eur J Oper Res 198(1):210−214
[16] Holloway CA (1979) Decision Making under Uncertainty: Models and Choices. Prentice-
Hall, Inc., Englewood Cliffs, NJ
[17] IPWEA (2006) International Infrastructure Management Manual
[18] Platfoot R (1990) Erosion life of tube banks in coal fired boilers. In: P Int Coal Eng Conf.
Institution of Engineers 237−241
Machine Prognostics Based on Health State
Estimation Using SVM

Hack-Eun Kim, Andy C.C. Tan, Joseph Mathew, Eric Y.H. Kim
and Byeong-Keun Choi

Abstract The ability to accurately predict the remaining useful life of machine
components is critical for machine continuous operation, and can also improve
productivity and enhance system safety. In condition-based maintenance (CBM),
maintenance is performed based on information collected through condition moni-
toring and an assessment of the machine health. Effective diagnostics and prog-
nostics are important aspects of CBM for maintenance engineers to schedule a
repair and to acquire replacement components before the components actually fail.
All machine components are subjected to degradation processes in real environ-
ments and they have certain failure characteristics which can be related to the
operating conditions. This paper describes a technique for accurate assessment of
the remnant life of machines based on health state probability estimation and in-
volving historical knowledge embedded in the closed loop diagnostics and prog-
nostics systems. The technique uses a Support Vector Machine (SVM) classifier

__________________________________
H.-E. Kim
CRC for Integrated Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, GPO Box 2434, Brisbane, QLD 4001, Australia
A.C.C. Tan
CRC for Integrated Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, GPO Box 2434, Brisbane, QLD 4001, Australia
J. Mathew
CRC for Integrated Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, GPO Box 2434, Brisbane, QLD 4001, Australia
E.Y.H. Kim
CRC for Integrated Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, GPO Box 2434, Brisbane, QLD 4001, Australia
B.-K. Choi
School of Mechanical and Aerospace Engineering, Gyeongsang National Univ.,
Tongyoung, Kyongnam, Korea

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information
Systems and Decision Models, Engineering Asset Management Review,
DOI 10.1007/978-1-4471-2924-0_9, © Springer-Verlag London Limited 2012
as a tool for estimating health state probability of machine degradation, which can
affect the accuracy of prediction. To validate the feasibility of the proposed model,
real life historical data from bearings of High Pressure Liquefied Natural Gas
(HP-LNG) pumps were analysed and used to obtain the optimal prediction of
remaining useful life. The results obtained were very encouraging and showed that
the proposed prognostic system based on health state probability estimation has
the potential to be used as an estimation tool for remnant life prediction in indus-
trial machinery.

Keywords Prognostics, Support Vector Machines (SVMs), Remaining Useful
Life (RUL), High Pressure LNG pump

1 Introduction

An important objective of CBM is to determine the optimal time for replacement
or overhaul of a machine. The ability to accurately predict the remaining useful
life of a machine system is critical for its operation and can also be used to im-
prove productivity and enhance system safety. In CBM, maintenance is usually
performed based on an assessment or prediction of the machine health instead of
its service time, which leads to intended usage of the machine, reduced down time
and enhanced operation safety. An effective prognostics program will provide
ample time for maintenance engineers to schedule a repair and to acquire replace-
ment components before catastrophic failures occur. Recent advances in comput-
ing and information technology have accelerated the production capability of
modern machines and reasonable progress has been achieved in machine failure
diagnostics but not in prognostics.
Prognostics is considerably more difficult to formulate since its accuracy is
subject to stochastic processes that have yet to occur. In general, although many
diagnostic engineers in industry have substantial information and experience
about machine failures and health states from continuous condition monitoring
and analysis, there are still no clear, systematic methodologies for predicting
machine remnant life. The task still relies on human expert knowledge and
experience.
Although a variety of prognostic methodologies have been reported in recent
years, their application in industry is still relatively new and is mostly focused on
predicting the degradation of specific components. These methods also lack
sufficiently fault-sensitive features for interpreting the machine degradation
process. Moreover, major challenges in the long-term prediction of remaining
useful life (RUL) remain to be addressed. There is therefore an urgent need to
continuously develop and improve effective prognostic models which can be
implemented in intelligent maintenance systems for industrial applications.
This paper presents an integrated diagnostics and prognostics framework based
on health state probability estimation for engineering systems. In the proposed
model, prior empirical (historical) knowledge is embedded in the integrated diag-
nostics and prognostics system together for the isolation of impending faults in
machine system and the accurate probability estimation of discrete degradation
states (health states) for the machine remnant life prediction. The methodology
assumes that machine degradation consists of a series of degraded states (health
states) which effectively represent the dynamic and stochastic process of machine
failure. The estimation of discrete health state probability for the prediction of
machine remnant life is performed using an ability of classification algorithms. In
this research, to validate the feasibility of the proposed model, bearing fault cases
of HP-LNG pumps were analysed to obtain the failure degradation process of
bearing failure. Then, predetermined failure states were trained for the estimation
of the machine health state probability by using an ability of SVM classifier. The
results showed that the proposed prognostic system has the potential to be used as
an estimation tool for machine remnant life prediction in industrial applications.
The remaining part of the paper is organized as follows. Section 2 presents the
proposed prognostic system based on health state probability estimation with em-
bedded historical knowledge. In Section 3, the methodology of health state prob-
ability estimation using SVMs for RUL prediction is described briefly. Section 4
presents the result of bearing failure cases for HP-LNG pumps. We conclude the
paper in Section 5 with a summary of future research.

2 Prognostics System Based on Health State Estimation

In this research, a new prognostics system based on health state estimation with
embedded historical knowledge is proposed. In terms of design and development
of intelligent maintenance systems, effective intelligent prognostics models using
condition monitoring techniques and failure pattern analysis for a critical dynamic
system can lead to a robust prognostics system in industry. Furthermore the com-
bined analysis of event data and condition monitoring data can be accomplished
by building a mathematical model that properly describes the underlying mecha-
nism of a fault or a failure.
For an accurate assessment of machine health, a significant amount of a priori
knowledge about the assessed machine or process is required because the corre-
sponding failure modes must be known and well-described in order to assess the
current machine or process performance [1].
Figure 1 illustrates the conceptual integration of diagnostics and prognostics
with embedded historical knowledge. To obtain the best possible prediction on the
machine remnant life, the proposed prognostics model is integrated with fault
diagnostics and empirical historical knowledge. Li et al. [2] suggested that a reli-
able diagnostic model is essential for the overall performance of a prognostics
system. To provide long range prediction, this model allows for integration with
diagnostics as remnant life prediction requires good diagnostic information before
progressing to prognostics. The outcome of a diagnostics module provides reliable
information for the estimation of machine health state and system redesign by
employing the precise failure pattern of the impending fault. Therefore, by using
an integrated system of diagnostics and prognostics, knowledge of a predeter-
mined dominant fault obtained in the diagnostic process can be used to improve
the accuracy of prognostics in predicting the remnant life.

Figure 1 Closed Loop Architecture of the Prognostics System
In this model, through prior analysis of the historical data and events, major
failure patterns that affect the entire life of the machine are identified for diagnos-
tics and prognostics. The historical knowledge provides the key information on
diagnostics and prognostics of this system such as empirical training data for the
classification of impending faults and historical failure patterns for the estimation
of current health state. Moreover, it also could be used to determine appropriate
signal processing techniques and feature extraction techniques for effective diag-
nostics and prognostics.

Figure 2 Flowchart of the Diagnostic and Prognostic System Based on Health State Estimation
Figure 2 presents the flowchart of the integration of historical knowledge, diag-
nostic system and prognostics system for health state estimation. The proposed
system consists of three subsystems, namely, historical knowledge, diagnostics
and prognostics. The entire sequence includes condition monitoring, classification
of impending faults, health state estimation and prognostics, and is performed by
linking them to case-based historical knowledge.
Through prior analysis of historical data, the historical knowledge provides
useful information for the selection of suitable condition monitoring techniques,
such as sensor (data) type and signal processing techniques, which are dependent
on machine fault type. In the proposed model, the feature extraction and selection
techniques in the diagnostics module are linked with the historical knowledge. The
predetermined discrete failure degradation of the machine located in the historical
knowledge module can be used to estimate the health state of the machine located
in the prognostics module. The final output of the prognostics module of certain
impending faults can also be accumulated to update the historical knowledge. This
accumulated historical knowledge can then be used for system updating and im-
proving of the prognostics model by providing reliable posterior degradation fea-
tures for diverse failure modes and fault types. In this proposed model, the health
states probability estimation of discrete failure degradation can be performed us-
ing classification algorithms. The authors employed the SVM classifier for
health state probability estimation in this paper because SVMs have shown
outstanding classification performance compared with other classifiers in the
recent literature [3–6].

3 Health State Probability Estimation Using SVMs for RUL Prediction

After identifying the impending fault in the diagnostic module, the discrete failure
degradation states determined in prior historical knowledge module are employed
in the health state estimation module as depicted in Figure 2. The historical failure
patterns also can be used to determine the optimum number of health states for the
prediction of the machine remnant life. In estimating the health state, predeter-
mined discrete degradation states were trained before being used to test the current
health state. Through prior training of each failure degradation state, current health
condition is obtained in terms of probabilities of each health state of the machine
using the capability of multiclassification. At the end of each prognostics process,
the output information will also be used to update the historical knowledge. This
section provides a brief summary of the proposed health state estimation method-
ology and the RUL prediction using the SVM classifier.
SVM is based on the statistical learning theory introduced by Vapnik and his co-
workers [7, 8]. SVM is also known as a maximum margin classifier, with the ability
to simultaneously minimize the empirical classification error and maximize the
geometric margin. Due to its excellent generalization ability, a number of success-
ful applications have been implemented in the past few years. The theory, method-
ology and software of SVM are readily available in references [7–10]. Although
SVMs were originally designed for binary classification, multi-classification can
be obtained by the combination of several binary classifications. Several methods
have been proposed, for example, “one-against-one,” “one-against-all,” and di-
rected acyclic graph SVMs (DAGSVM). Hsu and Lin [10] presented a comparison
of these methods and pointed out that the “one-against-one” method is more suitable
for practical use than the other methods. Consequently, in this study, the authors em-
ployed the “one-against-one” method to perform the classification of discrete fail-
ure degradation states.
Let xt = (xt1, xt2, ..., xtm) be the observations, where m is the number of
observations and t is the time index. Also, let yt be the health state (class)
at time t, with yt = 1, 2, …, n, where n is the number of health states. For
multiclassification of an n-health-state (class) event, the “one-against-one”
method constructs n(n−1)/2 classifiers, where each classifier is trained on
data from two classes. For training data from the ith and the jth classes, SVM
solves the following classification problem:

    minimize    (1/2) (w^ij)^T w^ij + C Σ_t ξt^ij

    subject to  (w^ij)^T φ(xt) + b^ij ≥ 1 − ξt^ij,   if yt = i,
                (w^ij)^T φ(xt) + b^ij ≤ −1 + ξt^ij,  if yt = j,                (1)
                ξt^ij ≥ 0,   t = 1, 2, ..., l
where the training data xt are mapped to a higher dimensional space by the
function φ (which defines the kernel function through the inner product
φ(xs)^T φ(xt)), (xt, yt) is the ith or jth training sample, w and b are the
weighting factors, ξt^ij is the slack variable and C is the penalty parameter.
Detailed explanations of the weighting factors, slack variable and penalty
parameter can be found in [7].
Different methods can be used for testing after all n(n−1)/2 classifiers are
constructed. After a series of tests, the decision is made using the following
strategy: if sign((w^ij)^T φ(x) + b^ij) says x is in the ith class, then the
vote for the ith class is increased by one; otherwise, the vote for the jth
class is increased by one. The class with the largest vote is then predicted.
This voting approach is also called the Max Win strategy [11]. From the SVM
multiclassification result (yt), the probabilities of each health state (Si)
are obtained using a smooth window and the indicator function (Ii) as follows:
    Prob(St = i | xt, …, xt+u−1) = Σ_{j=t}^{t+u−1} Ii(yj) / u                  (2)

    Ii(y) = { 0,  y ≠ i
            { 1,  y = i
where (St) is the smoothed health state and u is the width of the smooth window.
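As a side note, the “Max Win” voting described above is easy to sketch. In the
fragment below (plain Python), the pairwise decision rule and the class labels
are invented stand-ins for the trained binary classifiers; each of the
n(n−1)/2 classifiers casts one vote, and the class with the most votes wins:

```python
# Illustrative sketch of "Max Win" voting over one-against-one classifiers.
from itertools import combinations

def max_win(pairwise_decision, classes):
    votes = {c: 0 for c in classes}
    for i, j in combinations(classes, 2):
        votes[pairwise_decision(i, j)] += 1  # winner of the (i, j) classifier
    return max(classes, key=lambda c: votes[c])

# Toy decision rule standing in for sign((w_ij)^T phi(x) + b_ij): the sample is
# assumed to truly belong to class 3, so any pair containing 3 votes for 3.
decision = lambda i, j: 3 if 3 in (i, j) else min(i, j)
print(max_win(decision, classes=[1, 2, 3, 4]))  # -> 3
```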
Machine Prognostics Based on Health State Estimation Using SVM 175
Figure 3 Illustration of Health State Probability Distributions of a Simple
Linear Degradation Process
In the given smooth window subset, the health state probabilities sum to one,
as shown in Eq. (3):

    Σ_{i=1}^{n} Pr(St = i | xt, …, xt+u−1) = 1.                                (3)
From the result of each of the health probabilities, the probability
distribution of each health state with respect to time t can be obtained, as
illustrated in Figure 3. Figure 3 shows an example of a probability
distribution for a simple linear degradation process consisting of n discrete
health states. As the probability of one state decreases, the probability of
the next state increases. At the point of intersection there is a region of
overlap between two health states, which is a natural phenomenon in a linear
degradation process. In real life, the probability distribution of the failure
process is far more complex due to the dynamic and stochastic nature of
machine degradation.
After the estimation of the current and each health state in terms of the
probability distributions, the RUL of the machine is obtained from the
probability of each health state (St) and the historical operation time (age)
at each state (τi), and can be expressed as

    RUL(Tt) = Σ_{i=1}^{n} Pr(St = i | xt, …, xt+u−1) · τi                      (4)

where τi is the average remaining life at state i.
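The chain from Eqs. (2) to (4) can be sketched in a few lines of plain Python.
The class-label sequence, the number of states and the per-state remaining
lives below are invented for illustration; in the paper these would come from
the trained SVM classifier and the historical operation hours:

```python
# Sketch of Eqs. (2)-(4): smooth per-sample classifier labels into health state
# probabilities, then weight the per-state remaining lives by those probabilities.
# All numbers here are invented illustration data, not the paper's pump data.

def state_probabilities(labels, n_states, u):
    """Eq. (2): share of each state's votes inside a smooth window of width u."""
    probs = []
    for t in range(len(labels) - u + 1):
        window = labels[t:t + u]
        probs.append([window.count(i) / u for i in range(1, n_states + 1)])
    return probs

def rul(prob_row, tau):
    """Eq. (4): expected remaining life, weighting tau_i by state probability."""
    return sum(p * t for p, t in zip(prob_row, tau))

# Invented labels for a 3-state degradation (1 = new, 3 = near failure) and
# invented average remaining hours tau_i for each state.
labels = [1, 1, 1, 2, 1, 2, 2, 2, 3, 2, 3, 3, 3]
tau = [1000.0, 400.0, 50.0]

probs = state_probabilities(labels, n_states=3, u=4)
print(probs[0])                # early window dominated by state 1
print(rul(probs[0], tau))      # large estimate early in life
print(rul(probs[-1], tau))     # estimate shrinks as state 3 takes over
```

Each probability row also sums to one, as required by Eq. (3).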
4 Validation of Model Using HP-LNG Pump

4.1 High Pressure LNG Pump
Liquefied natural gas (LNG) condenses natural gas to one six-hundredth of its
volume by cooling the gas below its boiling temperature (−162 °C), which makes
storage and transportation much easier. In an LNG receiving terminal, high
pressure LNG pumps are used to boost the LNG pressure to 80 bar for evaporation
into highly compressed natural gas, which is then sent out via a pipeline
network across the nation. The number of high pressure LNG pumps determines the
amount of LNG sent out from the receiving terminal. The pump is a critical
piece of equipment in the LNG production process and should be maintained in
optimal condition. Therefore, the vibration and noise of high pressure LNG
pumps are regularly monitored and managed based on predictive maintenance
techniques.

Table 1 Pump Specifications

Capacity     Pressure       Impeller Stage  Speed     Voltage  Rating  Current
241.8 m3/hr  88.7 kg/cm2·g  9               3585 RPM  6600 V   746 kW  84.5 A

Table 1 shows the pump specifications. These high pressure LNG pumps are
submerged and operate at super-cooled temperatures. The bearings on both sides
of the rotor shaft and the tail bearing are self-lubricated using LNG. However,
due to the low viscosity of LNG (about 0.16 cP), the three bearings of the high
pressure LNG pump are poorly lubricated and must be specially designed. It is
difficult to detect the cause of pump failure at an early stage, because some
bearing components can fail rapidly under the poor lubricating conditions and
high operating speed (3600 rpm). In other words, if an abnormal problem occurs,
one may not have sufficient time to analyze the possible root cause before the
pump fails. In particular, due to the material property variations of cryogenic
pumps at very low temperatures and the difficulty of measuring vibration
signals on the submerged pump housing, there are restrictions on the diagnosis
of pump health and the study of vibration behaviour. Hence, there is a need to
use expert knowledge of the failure patterns for accurate estimation of remnant
life. Long-term prediction of such failures is also highly recommended for safe
operation and for a CBM program for these pumps.
Figure 4 Pump Schematic and Vibration Measuring Points

As shown in Figure 4, HP-LNG pumps are enclosed within a suction vessel
and mounted with a vessel top plate. Three ball bearings are installed to
support the entire dynamic load of the integrated shaft of the pump and motor.
The submerged motor is cooled, and the bearings are lubricated, by a
predetermined portion of the LNG being pumped. For condition monitoring of the
pumps, two accelerometers are installed on the housing near the bearing
assembly, in the horizontal and vertical directions respectively.
4.2 Acquisition of Bearing Failure Vibration Data
For machinery fault diagnostics and prognostics, signals such as vibration,
temperature and pressure are commonly used. In this research, the authors used
vibration data because they are readily available in industry and because the
trends of vibration features are closely related to the bearing failure
degradation process. Figure 5 shows the frequency spectrum plots of the P301 D
pump. The bearing resonance component increased over the period of operation
hours. The first symptom of a bearing failure was detected as early as
14 months before the final bearing failure. Other bearing fault components
appeared progressively until the final bearing failure, as shown in
plots (a)–(d) of Figure 5.
Vibration data were collected through two accelerometers installed on the pump
housing, as shown in Figure 4. The vibration data from two LNG pumps of
identical specification were used for prediction of the remaining useful life.
Due to the random operation of the pumps to meet the total production target of
LNG supply, there were some restrictions on collecting more complete data over
the entire life of the pumps. The acquired vibration data are summarized in
Table 2. As shown in Table 2, a total of 120 vibration samples for P301 C and
136 vibration samples for P301 D were collected over the full range of
operation of each pump, for training and testing of the proposed prognostic
model.

Figure 6 shows the damage: (a) the outer raceway spalling of P301 C and (b) the
inner raceway flaking of P301 D, respectively. Although these two bearing
faults had different fault severities on the inner race and the outer race, the
faults occurred on similar bearings located at the same location of the pump.
Table 2 Acquired Vibration Data of Bearing Failure

Machine No  Total operation hours  Reason of removal & root cause          No. of sample data  Sampling frequency
P301 C      4698 h                 High vibration; outer raceway spalling  120                 12,800 Hz
P301 D      3511 h                 High vibration; inner raceway flaking   136                 12,800 Hz
Figure 5 Spectrum Plots of P301 D Pump Bearing Failure

Figure 6 Outer and Inner Race Bearing Failures

4.3 Feature Calculation and Selection
Although bearing faults are the primary causes of machine breakdown, a number
of other component faults can also be embedded in bearing fault signals, which
makes bearing diagnostics/prognostics problematic. A number of physical
model-based prognostic approaches have been reported which focus on identifying
appropriate features of damage or faults. However, current prognostics research
concentrates only on specific component degradations and does not include other
types of fault. In this research, the authors aim to develop a generic and
scalable prognostic model which is applicable to different faults in an
identical machine. Conventional statistical parameters of the vibration signals
are used for the prognostic tests to establish this generic and scalable model.
In this work, a total of 28 features (14 parameters × 2 positions) were
calculated for health state probability estimation of bearing failure. The
features calculated from the two sets of vibration data of the HP-LNG pumps are
summarized in Table 3.

To achieve good fault classification performance and to reduce the
computational effort, effective features were selected using the distance
evaluation technique introduced by Knerr et al. [12], as depicted below.
The average distance (d_{i,j}) of all the features in state i can be defined as
follows:

    d_{i,j} = (1 / (N × (N − 1))) Σ_{m,n=1}^{N} |P_{i,j}(m) − P_{i,j}(n)|      (5)

The average distance (d′_{i,j}) of all the features in different states is

    d′_{i,j} = (1 / (M × (M − 1))) Σ_{m,n=1}^{M} |Pa_{i,m} − Pa_{i,n}|         (6)

where m ≠ n, P_{i,j} is the feature (eigen) value, i is the data index, j is
the class index, N is the number of features and M is the number of classes.
Table 3 Statistical Feature Parameters and Attributed Label

Position   Time Domain Parameters                 Frequency Domain Parameters
Acc. (A),  Mean (1), RMS (2), Shape factor (3),   RMS frequency value (11),
Acc. (B)   Skewness (4), Kurtosis (5),            Frequency centre value (12),
           Crest factor (6), Entropy estimation   Root variance frequency (13),
           value (7), Entropy estimation          Peak value (14)
           error (8), Histogram upper (9),
           Histogram lower (10)
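A few of the time-domain parameters in Table 3 can be sketched directly from
their standard definitions. The signal window below is invented, with a single
spike so that the impulsive-fault indicators (kurtosis, crest factor) stand
out:

```python
# Sketch of some Table 3 time-domain features on an invented signal window;
# the formulas are the standard textbook definitions, not the paper's code.
import math

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def kurtosis(x):
    mu = sum(x) / len(x)
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
    return sum(((v - mu) / sd) ** 4 for v in x) / len(x)

def crest_factor(x):
    return max(abs(v) for v in x) / rms(x)

signal = [0.1, -0.2, 0.15, -0.1, 0.9, -0.12, 0.08, -0.18]  # one spike at index 4
print(rms(signal), kurtosis(signal), crest_factor(signal))
```

For an impulsive bearing fault, the spike drives kurtosis above the Gaussian
reference value of 3 and raises the crest factor.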
When the average distance (d_{i,j}) inside a certain class is small and the
average distance (d′_{i,j}) between different classes is large, the features
are well separated among the classes. Therefore, the distance evaluation
criterion (α_i) can be defined as

    α_i = d′_{ai} / d_{ai}                                                     (7)

The optimal features can be selected from the original feature set according to
a large distance evaluation criterion (α_i).
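The idea behind Eqs. (5)–(7) can be sketched as follows. The per-state feature
values are invented, and the helper averages the within-class distances across
classes before taking the ratio, so this is an illustration of the criterion
rather than the authors' exact implementation:

```python
# Sketch of the distance evaluation criterion: a feature is effective when
# samples within a class sit close together (small d) while the class averages
# sit far apart (large d'). All feature values are invented.

def avg_pairwise_distance(values):
    n = len(values)
    return sum(abs(a - b) for a in values for b in values) / (n * (n - 1))

def distance_criterion(classes):
    """classes: one list of a single feature's values per health state."""
    d_within = sum(avg_pairwise_distance(c) for c in classes) / len(classes)
    class_means = [sum(c) / len(c) for c in classes]
    d_between = avg_pairwise_distance(class_means)
    return d_between / d_within  # alpha: larger means better separation

# Invented kurtosis-like values for three degradation states
good, minor, severe = [3.0, 3.1, 2.9], [4.0, 4.2, 3.8], [7.5, 7.0, 8.0]
print(distance_criterion([good, minor, severe]))
```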
In this work, 14 features per accelerometer position were evaluated to extract
effective features from each signal sample. The distance evaluation criteria
(α_i) of the 14 features are shown in Figure 7; the upper histogram value
(No. 9) is almost zero. In order to select the effective degradation features,
the authors required a normalized distance evaluation criterion greater
than 1.3, |α_i / α_N| > 1.3, where (α_i) is the distance evaluation criterion
and (α_N) is the mean value of (α_i). The ratio of 1.3 was selected based on
past historical records for this particular bearing/pump. From the results,
three features were selected for health state probability estimation, namely
Kurtosis (5), Entropy estimation value (7) and Entropy estimation error
value (8). They have large distance evaluation criteria (α_i) compared with the
other features. These features could minimize the classification training and
test error for each health state.

Figure 7 Distance Evaluation Criterion of Features
Figure 8 shows the trends of the selected features: kurtosis, entropy
estimation value and entropy estimation error value, respectively. All the
selected features show increasing trends which reflect the failure degradation
process of the machine over time, as shown in the plots.

Figure 8 Feature Trends of Selected Features

4.4 Selection of Number of Health States for Training
In this case study, to select the optimal number of health states of bearing
degradation, several numbers of health states were investigated using the data
sets of P301 D for training and prediction tests. A polynomial function was
used as the basic kernel function of the SVM. Multiclass classification using
the one-against-one (OAO) method was applied to perform the classification of
bearing degradation, as described in Section 3. Sequential minimal optimization
(SMO), proposed by Platt [13], was used to solve the SVM classification
problem. For the selection of the optimal kernel parameters (C, γ, d), the
cross-validation technique was also used in order to avoid over-fitting or
under-fitting of the classifier. The results of the investigation to select the
optimal number of health states are plotted in Figure 9. The average prediction
error was estimated using Eq. (8) as follows:

    Average prediction error = Σ_{i=1}^{N} |μ′_i − μ_i| / N                    (8)
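Eq. (8) is a mean absolute error between predicted and actual values; a
minimal sketch, with invented predicted and actual RUL fractions:

```python
# Eq. (8) as code: mean absolute error between predicted and actual values.
# The RUL fractions below are invented for illustration.
def average_prediction_error(predicted, actual):
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

predicted = [0.95, 0.80, 0.52, 0.30, 0.04]   # estimated RUL fractions per sample
actual    = [1.00, 0.85, 0.60, 0.28, 0.03]
print(average_prediction_error(predicted, actual))
```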
A total of nine different cases were investigated, ranging from two to ten
states. As shown in Figure 9, although small numbers of health states gave low
training error values, they showed high prediction error values compared with
larger numbers of health states. Conversely, large numbers of health states
gave high training error values but relatively low prediction error values.
From this result, the authors selected five states as the optimal number of
health states, because beyond five states the training error values increased
rapidly without a significant decrease in the prediction error values. The
training error and prediction error values using five states were 10 % and
5.6 %, respectively.

Figure 9 Result of Investigation to Determine Optimal Health States

Table 4 shows the training data sets of the selected five degradation states
used in this work, with eight sets of samples in each state using the three
selected features. Initially (State 1), the percentage of RUL is almost 100 %
(99.89 %) and it progressively reduces to 28.77 % at State 4. At the 5th state,
the remaining bearing life is about 3.02 %.

Table 4 Training Data Sets for the Health State Probability Estimation (P301 D)

State No.  No. of samples (u)  Average operation hours (τi)  RUL (%)  No. of features
1          1 ~ 8               4                             99.89 %  3
2          25 ~ 32             503                           85.67 %  3
3          41 ~ 48             843                           75.99 %  3
4          81 ~ 88             2501                          28.77 %  3
5          121 ~ 128           3405                          3.02 %   3
4.5 RUL Prediction of Bearing Failure
In this RUL prediction of bearing failure, closed and open tests were
conducted. In the closed test, the five states were trained using the training
data sets listed in Table 4, and the full data set from P301 D (136 data sets)
was tested to obtain the probabilities of the five degradation states.
Figure 10 shows the probabilities of each state of P301 D. The first state
probability starts at 100 % and decreases as the next state probability
increases. For example, the probability of the first state (solid line)
decreases at first, increases again to 90 %, and eventually drops to zero;
simultaneously, the second state (dotted line) reaches 100 %. Some overlaps
between the states and the nonuniformity of the distribution can be explained
by the dynamic and stochastic degradation process, the uncertainty of the
machine health condition, and inappropriate data acquisitions in a real
environment. The overall probability distributions of each state follow a
nonlinear degradation process and are distinctly separated.

Figure 10 Probability Distribution of Each Health State (Closed Test, P301 D)

Figure 11 Probability Distribution of Each Health State (Open Test, P301 C)
As an open test, the similar bearing fault data of P301 C, consisting of 120
sample sets, were tested to obtain the probability distribution of each health
state of P301 C, using the identical training data sets shown in Table 4.
Figure 11 shows the probability distribution of each health state of P301 C.
A similar nonlinear probability distribution and overlaps between states are
also observed, for the reasons explained above.
For the estimation of the remaining useful life (RUL), the expected life of the
machine was estimated using the historical operation hours (τi) of each
training data set described in Table 4 and their probabilities, evaluated using
Eq. (4). Figure 12 shows the closed test result for the estimated remnant life
and the comparison between the real remaining useful life and the estimated
life. As shown in Figure 12, although there are some discrepancies in the
middle zone of the plot, the overall trend of the estimated life follows the
gradient of the real remaining useful life of the machine. The average
prediction accuracy was 94.4 %, calculated using Eq. (8) over the entire range
of the data set. Furthermore, the estimated life at the final state matched the
real remaining useful life closely, with less than 1 % of remaining life.

Figure 12 Comparison of Real Remaining Useful Life and Estimated Life (Closed
Test, P301 D)

Figure 13 Comparison of Real Remaining Useful Life and Estimated Life (Open
Test, P301 C)
Figure 13 shows the open test result for the estimated remnant life and the
comparison between the real remaining useful life and the estimated life. There
is a large difference in remnant life at the initial degradation states, as
shown in Figure 13. In the open test, the estimated time was obtained from the
training data sets of P301 D, which had 3511 h of total operation. This causes
the discrepancy between the real remaining useful life and the estimated life
at the beginning of the test. However, as the pump approaches the final bearing
failure, the estimated life matches the real remaining useful life more closely
than in the initial and middle states.

5 Conclusion
This paper proposed an innovative machine prognostic model based on health
state probability estimation. Through prior analysis of historical data in
terms of historical knowledge, discrete failure degradation states were
employed to estimate discrete health state probabilities for long-term machine
prognostics. To verify the proposed model, bearing failure data from HP-LNG
pumps were used to extract prominent features and to determine the
probabilities of the degradation states. For optimum performance of the
classifier, effective features were selected using the distance evaluation
method. To select the optimal number of health states of bearing failure,
several numbers of health states were investigated. The health state
probability estimation was carried out over the full failure degradation
process of the machine, by optimally selecting the number of health states over
time from the new state to the final failure state. The result of the
industrial case study indicates that the proposed model has the capability to
provide accurate estimation of health condition for long-term prediction of
machine remnant life. The selection of the optimal number of health states of
bearing failure is vital to avoid high training error with no improvement in
prediction accuracy. However, knowledge of failure patterns and physical
degradation from different historical data of machine faults still needs
further investigation.
Acknowledgments This research was conducted with financial support from the
QUT International Postgraduate Award and the CRC for Integrated Engineering
Asset Management, established and supported under the Australian Government’s
Cooperative Research Centres Programme.
References
[1] AKS Jardine, D Lin, D Banjevic (2006) A review on machinery diagnostics and prognostics
implementing condition-based maintenance. Mech Sys Signal Pr 20:1483−1510.
[2] Y Li, S Billington, C Zhang, T Kurfess, S Danyluk, S Liang (1999) Adaptive Prognostics
for Rolling Element Bearing Condition. Mech Sys Signal Pr 13:103−113.
[3] M Pal, PM Mather (2004) Assessment of the effectiveness of support vector machines for
hyperspectral data. Future Gener Comp Sy 20:1215−1225.
[4] G Niu, JD Son, A Widodo, BS Yang, DH Hwang, DS Kang (2007) A comparison of classi-
fier performance for fault diagnosis of induction motor using multi-type signals. Struct
Health Monit 6:215−229.
[5] Y Weizhong, X Feng (2008) Jet engine gas path fault diagnosis using dynamic fusion of
multiple classifiers. In: Neural Networ. IJCNN 2008. (IEEE World Congress on Computa-
tional Intelligence). IEEE Int Joint Conf 1585−1591.
[6] G Niu, T Han, BS Yang, ACC Tan (2007) Multi-agent decision fusion for motor fault
diagnosis, Mech Sys Signal Pr Vol. 21.
[7] VN Vapnik (1995) The Nature of Statistical Learning Theory. Springer, New York.
[8] VN Vapnik (1999) An overview of statistical learning theory. IEEE Tr Neural
Networ 10(5):988−999.
[9] N Cristianini, NJ Shawe-Taylor (2000) An Introduction to Support Vector Machines. Cam-
bridge University Press, Cambridge.
[10] CW Hsu, CJ Lin (2002) A comparison of methods for multiclass support vector machines.
IEEE Tr Neural Networ 13:415−425.
[11] LM He, FS Kong, ZQ Shen (2005) Multiclass SVM based on land cover classification with
multisource data, In: Pr Fourth Intl Conf Mach Learn Cybernet 3541−3545.
[12] S Knerr, L Personnaz, G Dreyfus (1990) Single-layer learning revisited: a stepwise
procedure for building and training a neural network. Springer-Verlag, New York.
[13] J Platt (1999) Fast training of support vector machines using sequential minimal optimiza-
tion. In: B. Scholkopf et al Advances in Kernel Methods-Support Vector Learning. MIT
Press, Cambridge.
Modeling Risk in Discrete Multistate Repairable Systems

M.G. Lipsett and R. Gallardo Bobadilla

Abstract In production processes, maintenance decisions are often made based
on uncertain assessment of risk. This uncertainty appears not only in the
probability that a process component goes into a state of failure, but also in
the cost of associated repairs, consequential damage, and the opportunity cost
of lost production. In this paper, repair of a component is modeled as a Markov
process with multiple states, under the assumption that with a sufficient
number of states the Markovian property is valid, that is, the transition
probabilities from the current state describe the future state of the system.
A Markov formulation is developed for a system component with states
representing a range of operating, fault and repair situations. A risk function
is calculated as the sum of the products of cost estimate and transition
probability over the possible states.

Keywords Reliability, Repairable components and systems, Discrete Markov
modeling
1 Introduction
A repairable component or system can, after a failure, be restored to a
condition in which it can once again perform its intended function (or
functions) to a satisfactory standard, without having to replace the entire
system. This definition can be
__________________________________
M.G. Lipsett
Department of Mechanical Engineering, 5–8J Mechanical Engineering Building,
University of Alberta, Edmonton, Alberta, Canada T6G 2G8
e-mail: mlipsett@ualberta.ca
R.G. Bobadilla
Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta,
Canada T6G 2G8

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition,
Information Systems and Decision Models, Engineering Asset Management Review,
DOI 10.1007/978-1-4471-2924-0_10, © Springer-Verlag London Limited 2012
extended to include the possibility of additional maintenance actions aimed at
improving system performance [1].
Typical repairable component formulations have only two states: good and
failed. When the component has failed, it is replaced with another identical com-
ponent, thus bringing the component back into the good state. The traditional
concept of reliability can be extended by considering a system with more than two
states: as well as up-state (failure-free and capable of full performance) and down-
state (failed and under repair), the system also has states in which it performs at
levels of reduced efficiency [2]. This approach addresses situations when the sys-
tem is neither fully operable nor fully inoperable, provided that the change in
performance is related to reliability.
Combinatorial models such as fault-trees and reliability block diagrams are ef-
fective approaches to specifying and evaluating the reliability of systems. How-
ever, in such models, it is difficult to include conditional reliability relationships
and other types of dependency, for example, repair dependency and near-coinci-
dent-fault type dependency, transient and intermittent faults, and redundancy [3].
Markov models can describe such dependencies under some conditions [4]. In this
paper, we examine under what conditions a Markovian formulation can describe a
repairable component and how maintenance of the asset is managed.
2 Reliability Model of a Single Repairable Component
A repairable component is an object in a system that can have its reliability re-
stored after it has become unreliable. A description of the component reliability
and performance is needed to understand its contribution to system reliability [5].
In some cases, there is a threshold of performance that the component must ex-
ceed. In that case, it is appropriate to describe the component as a member of one
of two sets: good and failed. In other cases, the component may operate in a range
of service duty, and may be able to deliver acceptable performance even though
reliability and performance is compromised [6].
For a system with a range of performance and reliability, a more general de-
scription of component reliability is necessary. Ideally, this description is a mech-
anistic relationship for variables and constraints of both production and mainte-
nance. In reality, these relationships are difficult to develop and validate, and so a
simplified formulation is preferred.
Maintenance activities are usually described as discrete-event activities; and
many types of operating systems can have different operating conditions classified
discretely as well [1]. Since it is generally not possible to describe the operation
and maintenance of a single repairable component in a system as a deterministic
process, a reasonable formulation of this type of system uses a discrete-event,
stochastic process model [7].
One of the simplest formulations for a stochastic process is a Markovian
process, which can be either continuous or discrete. The key attribute of a
discrete-state continuous-time Markovian random process X(t) ∈ {1, 2, ...} is
that the past has no influence on the future if the present state is specified.
The conditional probabilities satisfy the relation

    Pr{X(t_n) = x_n | X(t_{n−1}) = x_{n−1}, …, X(t_2) = x_2, X(t_1) = x_1}
        = Pr{X(t_n) = x_n | X(t_{n−1}) = x_{n−1}}                              (1)

for t1 < t2 <... < tn–1 < tn. The conditional probabilities in Eq. (1) are called transi-
tion probabilities. The transition probabilities from state to state themselves do not
change over time, and they are described as negative exponential distributions.
The size of a Markov model for the evaluation of such a system may grow expo-
nentially with the number of components in the system [8,9].
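A minimal simulation of such a discrete-state Markov model illustrates the
memoryless property: the next state depends only on the current state. The
states and transition probabilities below are invented for illustration:

```python
# Minimal discrete-state Markov model of a repairable component with invented
# states and transition probabilities: the next state depends only on the
# current state, never on the path taken to reach it.
import random

P = {  # row: current state; entries: probability of each next state per step
    "good":     {"good": 0.95, "degraded": 0.04, "failed": 0.01},
    "degraded": {"good": 0.10, "degraded": 0.80, "failed": 0.10},
    "failed":   {"good": 0.50, "degraded": 0.00, "failed": 0.50},
}

def step(state, rng):
    r, acc = rng.random(), 0.0
    for nxt, p in P[state].items():
        acc += p
        if r < acc:
            return nxt
    return state  # guard against floating-point shortfall

rng = random.Random(0)
state = "good"
visits = {s: 0 for s in P}
for _ in range(10_000):
    state = step(state, rng)
    visits[state] += 1
print(visits)  # long-run occupancy approximates the stationary distribution
```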
For a single repairable component, these restrictions apply in some circum-
stances. Because the state transition probabilities must be stable over time, the
system, including operating and repair practices, must be mature and unchanging.
This implies no change in system duty, and no change in either operating practices
or the maintenance practices. In other words, if the system changes in some way,
such as a change in the effectiveness of operating and maintaining practices, then
the original Markovian process is no longer a correct representation of the system
[10].
If the system representation can be updated with new system behavior that
remains Markovian, then it is possible to describe the evolution of the system
as a set of Markov processes. If not, then a different formulation is required.
In maintenance practice, the Markovian property may not be valid. When the
transition time between states of a component is a random variable that does
not follow an exponential distribution, the use of discrete Markov chains to
describe the system is inappropriate. A semi-Markov process may be more
representative, since the transition probabilities in a semi-Markov process are
functions of the duration of time spent in a state of the system [11].
A semi-Markov process chooses its next state according to a Markov chain, but
the time spent in the current state is a random amount of time; the transition
rates in a particular state depend on the time already spent in that state,
although they do not depend on the path taken to reach the present state. The
transition times between states for a component do not necessarily follow an
exponential distribution.
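The distinction can be sketched as follows; the two-state chain and the
Weibull sojourn parameters below are invented for illustration:

```python
# Sketch of a semi-Markov step: the next state is chosen by a Markov chain,
# while the sojourn time in the current state follows a non-exponential
# (here Weibull) distribution. All parameters are invented.
import random

NEXT = {"up": ["down"], "down": ["up"]}              # Markovian next-state choices
SOJOURN = {"up": (500.0, 2.0), "down": (24.0, 1.5)}  # Weibull (scale, shape), hours

def semi_markov_step(state, rng):
    scale, shape = SOJOURN[state]
    dwell = rng.weibullvariate(scale, shape)   # random time spent in `state`
    return rng.choice(NEXT[state]), dwell      # next state ignores the past path

rng = random.Random(1)
t, state = 0.0, "up"
for _ in range(6):
    state, dwell = semi_markov_step(state, rng)
    t += dwell
print(state, round(t, 1))  # after six alternations the component is back up
```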
3 Multistate Reliability Modeling for a Discrete-Event System
For a repairable component, the number of reliability states depends on how the
system is operated and maintained, and the possible failure modes.
Each state has a transition probability μii of remaining in the current state i, as
well as transition probabilities of changing from state i to a different state j.
A transition probability that makes the system less reliable in new state j is
λij; a transition that improves system reliability is μij, with the convention
that a more
reliable state is a higher numbered state. State transitions have either reliability-
related causes or production causes. Reliability-related causes of state transitions
are natural damage accumulation rates and maintenance decisions. Production-
related causes are operating decisions, including service duty (demand on the
component) and delays in maintenance.
Figure 1 illustrates a system with a single repairable component that has eight
possible states of reliability and performance related to demand. The eight discrete
states are:
• spare;
• standby;
• derated duty;
• full normal duty;
• minor fault;
• major fault;
• failed;
• in repair.
A spare is a good component that is not currently available for operation.
A partially consumed spare has some of its reliability consumed previously (in
another state), and so it has a transition probability of moving to a degraded
(faulty) state that is higher than that of a new good component.

Figure 1 Discrete Reliability Model for a Repairable Component with Eight States

It can usually be
assumed that a spare component has little if any probability of becoming less
reliable over time while it remains a spare, however, in some cases, a spare part
has a “shelf life” and thus a finite transition probability of moving to a degraded
state.
During standby, the component is idle. It may or may not be consuming its reli-
ability while in this state. When a component is on hot standby, whereby it is
actively powered as a redundant part of a system and ready to operate on demand,
there is likely some consumption of reliability over time.
In derated duty, the component is operating, but at a reduced rating (lower per-
formance). There are many reasons for a component to operate in this state. Typi-
cally, derated duty consumes reliability at a lower rate than at full duty, but that is
not always the case.
At full normal duty, the component is operating at or above its nominal per-
formance level, is having no reliability issues, and is consuming reliability at a rate
that is related to its service demands. The production/reliability consumption rela-
tionship is usually not well characterized; but in the absence of other information,
the rate at which reliability is consumed is often assumed to be a linear function of
cumulative operating time.
A component with a minor fault has only incipient damage, meaning that there
is no effect on its performance in its intended service.
In contrast, a component with a major fault is no longer able to meet or exceed
its nominal level of performance. A component in such a state reduces the per-
formance of the overall system, and its reliability.
A failed component is unreliable and is unable to deliver any level of perform-
ance in the system. Because it is still part of an overall system, it affects the reli-
ability of the overall system.
A component that is in repair has been removed from the system, and is in the
process of having some level of reliability restored.

4 Transitions Between States

When a component is consuming reliability, there is a nonzero probability λij that


it will change to a less reliable (more damaged) state j than the current state i.
Similarly, when the component is being repaired or put into less severe service,
there is a nonzero probability μij that it will change to a more reliable (less dam-
aged) state j than the current state i. There may also be nonzero probabilities of
going to other states (more or less reliable), depending on the nature of the system.
If the reliability consumption rate is high, then the probability of remaining in the
current state decreases, and there is a higher transition probability of a change to a
lower reliability state (one of the fault states or the failed state).
This section gives physical explanations for the transitions between states for a
repairable component such as a process pump or a valve. While it is theoretically
192 M.G. Lipsett and R.G. Bobadilla

possible for a transition to occur between any two states, for this type of repairable
component, only some types of transitions have any real possibility of occurring.
The transition probabilities from one state into another state not only describe the
reliability of the process and the design of the components, but also the effective-
ness of operating and maintenance practices.

4.1 Spare (State 8)

• spare to standby (λ8,7): component goes into service but on standby rather than
operating service;
• spare to spare (μ8,8): component does not change state, and there is no con-
sumption of reliability over time.

4.2 Standby (State 7)

• standby to spare (μ7,8): component goes from standby service to spare;


• standby to derated duty (λ7,6): component is brought into partial operating duty;
• standby to full duty (λ7,5): component is brought into full operating duty;
• standby to minor fault (λ7,4): component goes into service, with an incipient
failure, because reliability has been consumed over time in the standby state;
• standby to standby (μ7,7): component does not change state.

4.3 Derated (State 6)

• derated to full duty (λ6,5): change in operating conditions;


• derated to standby (μ6,7): change in operating conditions;
• derated to derated (μ6,6): no change in operating conditions, component does
not change state, and there is a lower rate of reliability consumption than the
rate in the full duty state.

4.4 Full Normal Duty (State 5)

• full normal duty to full normal duty (μ5,5): there is no change in state, and reli-
ability is consumed at the nominal rate so the probability of changing to a lower
reliability state remains unchanged;
• full normal duty to minor fault (λ5,4): incipient failure but with no degradation
in performance;

• full normal duty to major fault (λ5,3): acute failure with degradation in per-
formance;
• full normal duty to standby (μ5,7): change in operation conditions or system
configuration.

4.5 Minor Fault (State 4)

• minor fault to minor fault (μ4,4): reliability is being consumed but the compo-
nent performance has not been compromised;
• minor fault to major fault (λ4,3): reliability has been consumed to the point that
the performance has been compromised;
• minor fault to full duty (μ4,5): reliability is restored without having to go to the
repair state, either through field service (condition-based), misdiagnosis of fault
and reclassification, or spontaneous self-repair;
• minor fault to derated (μ4,6): questionable component goes into derated service,
as a precaution;
• minor fault to standby (μ4,7): questionable component goes into standby ser-
vice, as a precaution.

4.6 Major Fault (State 3)

• major fault to major fault (μ3,3): component remains in service even though not
performing adequately, affecting system performance;
• major fault to minor fault (μ3,4): field repair to partially restore reliability;
• major fault to derated (μ3,6): change in operating condition to accommodate
achievable level of performance;
• major fault to failed (λ3,2): loss of reliability and function to the point of unac-
ceptable performance.

4.7 Failed (State 2)

• failed to failed (μ2,2): component has not changed, and system reliability has
not changed;
• failed to in repair (λ2,1): component leaves the operating system and goes into a
repair activity;
• failed to minor fault (μ2,4): component has only part of its reliability restored
without being removed from the operating system, either through a partial ser-
vicing repair or a spontaneous self-correction of an intermittent fault.

4.8 In Repair (State 1)

• in repair to in repair (μ1,1): ongoing repair work;


• in repair to spare (μ1,8): repair is complete and component goes to inventory as
a spare;
• in repair to standby (μ1,7): repaired component goes into standby service;
• in repair to full normal duty (μ1,5): component goes back into service right
away;
• in repair to minor fault (μ1,4): component goes back into service after only a
partial shop repair (which, either intentionally or unintentionally, has restored
only some of the component reliability);
• in repair to major fault (μ1,3): component has little or no effective restoration
during repair before being put back into service.
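The allowed transitions enumerated in Sections 4.1–4.8 can be collected into a small data structure. A minimal sketch (the helper name `transition_kind` is ours, not the chapter's) encodes which transitions the model permits and classifies each as a λ (degrading) or μ (non-degrading) transition:

```python
# Allowed transitions for the eight-state repairable component
# (state numbering as in Figure 1: 8=spare, 7=standby, 6=derated duty,
# 5=full normal duty, 4=minor fault, 3=major fault, 2=failed, 1=in repair).
# lambda_ij: move to a less reliable state (j < i); mu_ij: same or more reliable (j >= i).
ALLOWED = {
    8: [8, 7],               # spare -> spare / standby
    7: [8, 7, 6, 5, 4],      # standby -> spare/standby/derated/full duty/minor fault
    6: [7, 6, 5],            # derated -> standby/derated/full duty
    5: [7, 5, 4, 3],         # full duty -> standby/full duty/minor/major fault
    4: [7, 6, 5, 4, 3],      # minor fault -> standby/derated/full duty/minor/major
    3: [6, 4, 3, 2],         # major fault -> derated/minor/major/failed
    2: [4, 2, 1],            # failed -> minor fault/failed/in repair
    1: [8, 7, 5, 4, 3, 1],   # in repair -> spare/standby/full duty/minor/major/repair
}

def transition_kind(i, j):
    """Classify an allowed transition as 'lambda' (degrading) or 'mu' (non-degrading)."""
    if j not in ALLOWED[i]:
        raise ValueError(f"transition {i}->{j} is not in the model")
    return "lambda" if j < i else "mu"
```

All other entries of the 8 × 8 transition matrix are zero, which is what makes the matrix sparse in practice.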

5 Cost Functions

In practice, costs are easier to evaluate than transition probabilities, provided that
the cost of a transition between states is captured unambiguously in a cost ac-
counting system. Specific costs associated with each transition between states
depend on understanding the system in which the component operates (including
operational control and maintenance decision making) as well as the maintenance
processes involved. Examples include the kind of field repair undertaken, and
when a component is refurbished [0].
Costs should include all aspects of a transition between one state and another;
but the cost function should only include the cost of the transition to the new state.
This means that the costs associated with a state are allocated across the transitions
associated with arriving at that state, weighted by the probability of their respec-
tive occurrence. No future state transition costs are considered. For example, a
transition to a minor fault condition does not yet incur a cost, even though some-
time in the future there will very likely be a change to a major fault or failed state
(which will incur a cost to restore reliability and an opportunity cost of lost pro-
duction).
It is further assumed that the states are all known, and no state is misclassified.
If misclassification occurs, then there may be additional costs associated with
performing inappropriate actions based on incorrect information; for example, a
transition from Failed to Spare, a poor maintenance practice resulting from either
misdiagnosis of a fault condition or mishandling of a failed component into spares
inventory.
The eight-state reliability model has the following costs:
Spare to Spare: There is a very small cost associated with this transition, since
there is no change in state and there are no costs associated with handling or
shipping; the only cost is storage. There is no change in reliability of the
component within this state.

Spare to Standby: In this model, the only way that a spare is introduced into
service is by shutting down the system, and so the system is in a standby state. For
this reason, there is a set of transitions: Spare to Standby, and then from standby to
an operating state. A simplified model may eliminate the Standby state.
Standby to Standby: The Standby state has no cost, unless the system is nonre-
dundant and incurs an opportunity cost of lost production while in standby.
Standby to Derated Duty, Standby to Normal Duty: From the maintenance cost
point of view, the costs related with these transitions are mainly handling and
installation of the component into the system. This cost does not include the op-
portunity cost of lost production during the change, if the system has to be down
for the change-out for other reasons. If the system has to be shut down only to
install the component, then the cost of lost production should be included.
Standby to Minor Fault: Any degradation of the component during storage is
reflected in transition probabilities to the Minor Fault state.
Duty to Standby: The transition from operating state (normal or derated) to
standby has almost no cost, as it is simply a change of operating mode.
Standby to Spare: This transition has costs related to handling, relocation of the
component, and storage.
Duty to Duty (Normal or Derated): There is a small cost incurred in this transi-
tion, since there is no change in state, and the cost is only related to the component
working and performing its intended functions.
Duty (Normal or Derated) to Fault (Minor or Major): There is a small cost
associated with this transition, since the component remains operating and
performing its function; this small cost is only related to operation. If the fault condition
has a large negative impact on functional performance, there could be a high op-
portunity cost of lost production from wasted product or high cost of consequen-
tial damage to other components. Duty to Fail is not included in this model be-
cause it is assumed that the system always progresses through a fault state before
reaching functional failure.
Fault to Spare: This transition has costs associated with handling, relocation of
the component, and restoration of reliability (repair). Storage cost is incurred only
in the Spare to Spare transition.
Fault to Duty (Normal or Derated): Costs incurred in this transition are for minor
restoration of reliability and minor repairs that do not require leaving the
operating state for the In Repair state.
Fault to Fault: This transition has a cost associated with remaining in a fault
state due to compromised process performance. There is a higher transition cost
for transition from Minor Fault to Major Fault than from Major Fault to Minor
Fault.
Fault to Fail: There can be a large cost related to this transition due to produc-
tion losses. It is assumed that a component does not go directly from Minor Fault
to Failed, but progresses from some incipient problem to a Major Fault. This tran-
sition probability depends on the failure mode for the component. For a compo-
nent with multiple failure modes with conspicuously different hazard rates, the
model should be modified to incorporate multiple fault states.

Failed to In Repair: The direct maintenance costs are counted only in this tran-
sition and in the transition within the In Repair state.
Failed to Fault: In this model, there may be transitions from Failed to a Fault
(Minor or Major) representing the spontaneous self-restoration of functionality
that can occur after an intermittent fault, or a minor maintenance activity that does
not require a real repair activity. This model does not consider cases when a failed
component returns to Normal or Derated Duty (with component reliability fully
restored).
In Repair to In Repair: This state transition to the same state has the cost of
shop repairs, plus the opportunity cost of lost production when a nonredundant
component is out of service for shop repair.
Fail to Fail: This state transition to the same state implies a cost for ongoing
production losses, including field fixes that do not fix the problem.
In Repair to Failed: This state transition captures the cost of a bad shop repair
that does not fix the problem.
In Repair to Fault or Standby or Spare: These state transitions capture the costs
of going to a state of partial or full restoration of component reliability. The model
does not include transition from shop repair directly to operation, but rather goes
to standby before putting the component back into operation.
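The qualitative cost rules above can be summarized in code. The magnitudes below are placeholders for illustration only; a real model would take them from the cost accounting system:

```python
# Relative cost magnitudes (placeholders, not data from the chapter).
STORAGE, HANDLING, OPERATING = 1, 5, 2
FIELD_REPAIR, SHOP_REPAIR, LOST_PRODUCTION = 20, 100, 500

# Transition costs following the narrative of Section 5 (simplified state names).
COST = {
    ("spare", "spare"): STORAGE,         # storage only, no handling or shipping
    ("spare", "standby"): HANDLING,
    ("standby", "duty"): HANDLING,       # add lost production only if a shutdown is needed
    ("duty", "duty"): OPERATING,
    ("duty", "fault"): OPERATING,        # consequential damage may add to this
    ("fault", "duty"): FIELD_REPAIR,     # minor restoration without going to In Repair
    ("fault", "fail"): LOST_PRODUCTION,  # production losses dominate
    ("fail", "fail"): LOST_PRODUCTION,   # ongoing losses, ineffective field fixes
    ("fail", "repair"): SHOP_REPAIR,     # direct maintenance cost is counted here
}

# Sanity check: a transition into failure costs more than routine operation.
assert COST[("fault", "fail")] > COST[("duty", "duty")]
```

Transitions absent from the table correspond to transitions the model disallows (probability zero), so their costs never enter the risk calculation.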

6 Risk Modeling

Usually, maintenance decisions are based on the risk associated with the next
change in state. This can be represented by a single transition in a Markov process.
At other times, it may be of interest to evaluate the risk after multiple steps.

6.1 Risk After One Transition Step

The risk associated with a particular state i is the measure of the cost of transition
from that state and the probabilities of the transitions from that state, defined as
the sum of products of cost estimates and their respective transition probabilities
times the probability of being at state i at the current time (with respect to a step)
[13]. The total risk is the sum for all states:
\mathrm{Risk} = \sum_{i=1}^{n} P_i(0) \sum_{j=1}^{n} C_{ij} P_{ij} \quad (2)

where Pi ( 0) is the ith element of the Initial Probability Vector P ( 0) which repre-
sents the probability of being in state i as the initial state, Cij is the transition cost
of changing from state i to state j, Pij is the transition probability of changing from
state i to state j, and n is the number of states. In a real system, there may also be

an underlying cost dependency, in which case it would be appropriate to allocate
the cost on a weighted basis across the transitions that have the cost dependency.
In our model with eight possible states
\mathrm{Risk} = \sum_{i=1}^{8} P_i(0) \sum_{j=1}^{8} C_{ij} P_{ij} \quad (3)

where
P(0) = \left( P_1(0), P_2(0), P_3(0), P_4(0), P_5(0), P_6(0), P_7(0), P_8(0) \right), \quad \sum_{i=1}^{8} P_i(0) = 1. \quad (4)

If the initial state h is known, then the risk equation can be simplified as
\mathrm{Risk}_h = \sum_{j=1}^{n=8} C_{hj} P_{hj} \quad (5)

where Riskh is the risk associated with transitions from state h.
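Equations (3) and (5) are straightforward to evaluate once the cost and probability matrices are known. A minimal sketch, using illustrative four-state numbers rather than data from the chapter:

```python
# One-step risk from a known initial state h (Eq. 5): Risk_h = sum_j C[h][j] * P[h][j].
# C and P are illustrative four-state matrices (0-indexed), not data from the chapter.
C = [[0.0, 1.0, 5.0, 20.0],
     [2.0, 0.5, 4.0, 15.0],
     [8.0, 3.0, 1.0, 10.0],
     [30.0, 12.0, 6.0, 2.0]]
P = [[0.25, 0.25, 0.25, 0.25],
     [0.40, 0.30, 0.15, 0.15],
     [0.10, 0.40, 0.40, 0.10],
     [0.00, 0.10, 0.30, 0.60]]

def one_step_risk(h):
    """Expected cost of the next transition when the current state is h."""
    return sum(c * p for c, p in zip(C[h], P[h]))

def total_risk(P0):
    """Eq. (3): one-step risks weighted by an initial probability vector P0."""
    return sum(P0[i] * one_step_risk(i) for i in range(len(P0)))

print(one_step_risk(0))   # 0.0*0.25 + 1.0*0.25 + 5.0*0.25 + 20.0*0.25 = 6.5
```

When the initial state is known with certainty, `total_risk` with a one-hot vector reduces to `one_step_risk`, exactly as Eq. (5) reduces from Eq. (3).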

6.2 Risk After k Transition Steps

The risk for transitions from state i to another state j in multiple steps may differ
from that of one step, because intermediate transitions through other states may
occur, with different costs than those of a single step from state i to state j.
We define the probability matrix P as

P = \begin{pmatrix}
P_{11} & P_{12} & P_{13} & P_{14} & P_{15} & P_{16} & P_{17} & P_{18} \\
P_{21} & P_{22} & P_{23} & P_{24} & P_{25} & P_{26} & P_{27} & P_{28} \\
P_{31} & P_{32} & P_{33} & P_{34} & P_{35} & P_{36} & P_{37} & P_{38} \\
P_{41} & P_{42} & P_{43} & P_{44} & P_{45} & P_{46} & P_{47} & P_{48} \\
P_{51} & P_{52} & P_{53} & P_{54} & P_{55} & P_{56} & P_{57} & P_{58} \\
P_{61} & P_{62} & P_{63} & P_{64} & P_{65} & P_{66} & P_{67} & P_{68} \\
P_{71} & P_{72} & P_{73} & P_{74} & P_{75} & P_{76} & P_{77} & P_{78} \\
P_{81} & P_{82} & P_{83} & P_{84} & P_{85} & P_{86} & P_{87} & P_{88}
\end{pmatrix}
=
\begin{pmatrix}
\mu_{11} & \mu_{12} & \mu_{13} & \mu_{14} & \mu_{15} & \mu_{16} & \mu_{17} & \mu_{18} \\
\lambda_{21} & \mu_{22} & \mu_{23} & \mu_{24} & \mu_{25} & \mu_{26} & \mu_{27} & \mu_{28} \\
\lambda_{31} & \lambda_{32} & \mu_{33} & \mu_{34} & \mu_{35} & \mu_{36} & \mu_{37} & \mu_{38} \\
\lambda_{41} & \lambda_{42} & \lambda_{43} & \mu_{44} & \mu_{45} & \mu_{46} & \mu_{47} & \mu_{48} \\
\lambda_{51} & \lambda_{52} & \lambda_{53} & \lambda_{54} & \mu_{55} & \mu_{56} & \mu_{57} & \mu_{58} \\
\lambda_{61} & \lambda_{62} & \lambda_{63} & \lambda_{64} & \lambda_{65} & \mu_{66} & \mu_{67} & \mu_{68} \\
\lambda_{71} & \lambda_{72} & \lambda_{73} & \lambda_{74} & \lambda_{75} & \lambda_{76} & \mu_{77} & \mu_{78} \\
\lambda_{81} & \lambda_{82} & \lambda_{83} & \lambda_{84} & \lambda_{85} & \lambda_{86} & \lambda_{87} & \mu_{88}
\end{pmatrix} \quad (6)

where

\sum_{j=1}^{8} P_{ij} = 1, \quad i = 1, \ldots, 8. \quad (7)

The matrix P comprises the sum of an upper triangular matrix of transition
probability elements μij where j > i, representing changes to states of increased
reliability, a lower triangular matrix of elements λij where j < i, representing
changes to states of decreased reliability, and a diagonal matrix of elements μii in
which there is no change in state. In Eq. (6), P is shown for a repairable compo-
nent with eight states. As described in the previous section, some of the transition
probabilities may be zero, which may make the matrix sparse.
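The triangular structure of Eq. (6) and the row constraint of Eq. (7) are easy to check numerically. A sketch using the illustrative four-state matrix that appears later in Section 8 (the structural split works identically for the eight-state model):

```python
import numpy as np

# Transition matrix: row i gives the distribution over next states.
P = np.array([[0.25, 0.25, 0.25, 0.25],
              [0.40, 0.30, 0.15, 0.15],
              [0.10, 0.40, 0.40, 0.10],
              [0.00, 0.10, 0.30, 0.60]])

mu_part = np.triu(P)         # mu_ij, j >= i: same or more reliable state
lam_part = np.tril(P, k=-1)  # lambda_ij, j < i: less reliable state

assert np.allclose(mu_part + lam_part, P)   # P is the sum of the two parts
assert np.allclose(P.sum(axis=1), 1.0)      # Eq. (7): each row is a distribution
```

Zero entries in either triangular part correspond to transitions the physical model disallows.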
We then define the risk transition matrix (simply called the risk matrix) to be
the entry-wise product of the probability and cost matrices:
R = C ⋅ P. (8)
The risk of a process changing from state i to state j after k steps is represented
as Rij(k). Then, the transition risk matrix after k steps R(k) is equal to the risk matrix
to the power of k:

R^{(k)} = R^{k}. \quad (9)
The risk R_i^{(k)} of being at state i after k steps is the total risk; and for every k
steps there is a stochastic vector formed by all the total risks of this step:

R^{(k)} = \left( R_1^{(k)}, R_2^{(k)}, R_3^{(k)}, R_4^{(k)}, R_5^{(k)}, R_6^{(k)}, R_7^{(k)}, R_8^{(k)} \right), \quad (10)

where Ri(k) is the risk associated with being at state i after k steps. R(k) is also
known as the risk distribution after k steps. Using a discrete Markov process rep-
resentation, with a risk transition matrix R, we obtain the risk after a number of
steps:

R^{(1)} = P(0) R
R^{(2)} = R^{(1)} R = P(0) R^{2} \quad (11)
R^{(k)} = R^{(k-1)} R = P(0) R^{k}.

Then, after k steps, the risk is


R^{(k)} = \left( P_1(0), P_2(0), P_3(0), P_4(0), P_5(0), P_6(0), P_7(0), P_8(0) \right)
\begin{pmatrix}
R_{11} & R_{12} & R_{13} & R_{14} & R_{15} & R_{16} & R_{17} & R_{18} \\
R_{21} & R_{22} & R_{23} & R_{24} & R_{25} & R_{26} & R_{27} & R_{28} \\
R_{31} & R_{32} & R_{33} & R_{34} & R_{35} & R_{36} & R_{37} & R_{38} \\
R_{41} & R_{42} & R_{43} & R_{44} & R_{45} & R_{46} & R_{47} & R_{48} \\
R_{51} & R_{52} & R_{53} & R_{54} & R_{55} & R_{56} & R_{57} & R_{58} \\
R_{61} & R_{62} & R_{63} & R_{64} & R_{65} & R_{66} & R_{67} & R_{68} \\
R_{71} & R_{72} & R_{73} & R_{74} & R_{75} & R_{76} & R_{77} & R_{78} \\
R_{81} & R_{82} & R_{83} & R_{84} & R_{85} & R_{86} & R_{87} & R_{88}
\end{pmatrix}^{k} \quad (12)

or

R^{(k)} = P(0) R^{k} = \left( R_1^{(k)}, R_2^{(k)}, R_3^{(k)}, R_4^{(k)}, R_5^{(k)}, R_6^{(k)}, R_7^{(k)}, R_8^{(k)} \right). \quad (13)

Using an eight-element column vector of ones as a transformation vector V_1,
the risk after k steps can be calculated as

\mathrm{Risk} = \sum_{i=1}^{8} R_i^{(k)} = R^{(k)} V_1 = P(0) R^{k} V_1 \quad (14)

which is
\mathrm{Risk} = \left( R_1^{(k)}, R_2^{(k)}, R_3^{(k)}, R_4^{(k)}, R_5^{(k)}, R_6^{(k)}, R_7^{(k)}, R_8^{(k)} \right) V_1
\quad \text{or} \quad (15)
\mathrm{Risk} = R_1^{(k)} + R_2^{(k)} + R_3^{(k)} + R_4^{(k)} + R_5^{(k)} + R_6^{(k)} + R_7^{(k)} + R_8^{(k)}.

This equation includes the one-step case, so Eq. (14) reduces to Eq. (3) when
k = 1:

\mathrm{Risk} = \sum_{i=1}^{8} P_i(0) \sum_{j=1}^{8} C_{ij} P_{ij} = P(0) R V_1. \quad (16)

7 Simple Four-State Model

The model requires only a sufficient number of states to satisfy the Markovian
property. Having more states than necessary would complicate the model, and
may make the model difficult to validate and to apply in practice. For example,
rather than an eight-state model, it may be adequate to use only four states: spare,
duty, fault, and failed. A four-state model is illustrated in Figure 2.

[Figure 2 near here: state-transition diagram of the four states (Spare 4, Duty 3, Fault 2, Failed 1) with their λij and μij transition probabilities.]
Figure 2 Discrete Reliability Model for a Repairable Component with Four States

8 Verification

This basic approach to model formulation was examined in discrete-event simulation
using the software package RENO from ReliaSoft, following the general simulation
process illustrated in block diagram form in Figure 3.
A simple model was created in RENO to simulate the Markov process of the
four-state model of this study. In this model, a transition probability matrix was
arbitrarily created to test the modeling approach and the flow chart:

0.25 0.25 0.25 0.25 


0.40 0.30 0.15 0.15 

P= 
0.10 0.40 0.40 0.10 
0 0.1 0.3 0.6 

The limiting probabilities for these four states, given the transition probabilities
above, were found to be 0.1776, 0.2632, 0.2796 and 0.2796 for states 1, 2, 3 and 4,
respectively, since:
P^{(k)} = \begin{pmatrix} 0.25 & 0.25 & 0.25 & 0.25 \\ 0.40 & 0.30 & 0.15 & 0.15 \\ 0.10 & 0.40 & 0.40 & 0.10 \\ 0 & 0.1 & 0.3 & 0.6 \end{pmatrix}^{k} \;\xrightarrow{k \to \infty}\; \begin{pmatrix} 0.1776 & 0.2632 & 0.2796 & 0.2796 \\ 0.1776 & 0.2632 & 0.2796 & 0.2796 \\ 0.1776 & 0.2632 & 0.2796 & 0.2796 \\ 0.1776 & 0.2632 & 0.2796 & 0.2796 \end{pmatrix}.

In other words, for this specific transition matrix, the component would be
17.76 % of the time in “fail” (state 1), 26.32 % of the time in “fault”, 27.96 % in
“duty” and 27.96 % of the time in “spare.” This case was then run with the RENO
simulation. The same results were obtained when the number of steps (k) and the
number of simulations was sufficiently large. Very good numbers (close conver-
gence between limiting probabilities and values obtained with RENO simulation)

Figure 3 General Block Diagram of the Discrete-Event Simulation Process



Table 1 Simulation Results for the Four-State Markov Process

# Steps          5000    10      100     1,000   100     1000    5000      10,000
# Simulations    10      5000    1000    100     5000    1000    5000      10,000
State            % of time being in a certain state, obtained by simulation        Goal
Spare            27.75   24.73   27.46   27.73   27.73   27.98   27.9649   27.9614   27.96
  Vs Goal         0.21    3.23    0.5     0.23    0.23    0.02    0.0049    0.0014
Duty             17.5    28.3    27.86   27.83   27.94   27.88   27.9545   27.9592   27.96
  Vs Goal        10.46    0.34    0.1     0.13    0.02    0.08    0.0055    0.0008
Fault            26.7    28.73   26.84   26.61   26.5    26.3    26.3145   26.315    26.32
  Vs Goal         0.38    2.41    0.52    0.29    0.18    0.02    0.0055    0.005
Fail             18.3    18.22   17.84   17.82   17.84   17.83   17.7662   17.7644   17.76
  Vs Goal         0.27    0.46    0.08    0.06    0.08    0.07    0.0062    0.0044
Sum Differences  11.32    6.44    1.2     0.71    0.51    0.19    0.0221    0.0116

were reached with combinations of 5000 steps and 5000 simulations; and 10,000
steps and 10,000 simulations, as shown in Table 1. Good results were also ob-
tained for a combination of 1000 steps and 1000 simulations. The flowchart and
some of the results obtained with RENO are shown in Figure 4.
Once the flowchart was created in RENO, many different analyses were run to
find the best combination of number of steps and simulations necessary to obtain
acceptable results. Table 1 shows these runs and their results. The results confirm
that the larger the number of simulations and steps, the more accurate the numbers
obtained: the values approach the limiting probabilities expected for the discrete
Markov model.
In this numerical experiment, the term simulation is used to describe a single
pass through a flowchart or process. In the example of 5000 steps and 5000 simu-
lations, a complete pass through the flowchart (simulation) was only completed
when 5000 steps were reached. This process was carried out 5000 times in order to

Figure 4 Representation of a Single Discrete-Event Simulation for a Four-State Markov Process



complete the 5000 simulations. More than one simulation is carried out in order to
represent randomness of the process appropriately and minimize the effects of
outliers. An average of the 5000 sets of results is calculated.
The simulations were always run with a seed, which means that the software
was forced to use the same sequence of random numbers to start each simulation
in order to compare the results. Specifying the same seed for each simulation run
makes the results reproducible; in other words, the simulation can be duplicated.
A seed also helps when tracking changes in simulation results as the program is
changed. Without a seed, in some computer simulation scenarios, it would be hard
to determine whether changes in the outcome were due to changes in the code or
to different random numbers.
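The role of the seed can be illustrated in a few lines (a sketch of a generic seeded Markov sampler; RENO's internals are not shown here):

```python
import random

# Two runs of a stochastic simulation with the same seed produce identical
# state trajectories, which is what makes seeded runs comparable and duplicable.
P = [[0.25, 0.25, 0.25, 0.25],
     [0.40, 0.30, 0.15, 0.15],
     [0.10, 0.40, 0.40, 0.10],
     [0.00, 0.10, 0.30, 0.60]]

def simulate(steps, seed, start=0):
    rng = random.Random(seed)   # fixed seed -> fixed sequence of random numbers
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choices(range(4), weights=P[state])[0]
        path.append(state)
    return path

assert simulate(100, seed=42) == simulate(100, seed=42)   # duplicable run
```

Changing the seed changes the trajectory, so differences between two seeded runs of modified code can be attributed to the code change rather than to the random numbers.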
The number of steps has to be sufficiently large in order to imitate the infinite
number of steps (k→∞). The larger the number of steps, the closer the simulation
results will be to the limiting probabilities for a system with Markovian
properties.
Among the different analyses tested with an Intel Pentium 4 CPU 2.40 GHz, the
best results (closest to the limiting probabilities) were obtained with 10,000 steps
in each simulation and 10,000 simulations, followed by the test with 5000 steps in
each simulation and 5000 simulations. Considering that the 10,000-step, 10,000-simulation
run took 4 days and 30 minutes while the 5000-step, 5000-simulation run took
only 5 hours and 43 minutes, and that both sets of results had an error of less than
0.025 % with respect to the limiting probabilities, a run of 5000 simulations with
5000 steps each was considered sufficient. For more complicated scenarios, where
computation time becomes relevant to the process, a combination of 1000 steps in
each simulation and 1000 simulations should also be acceptable, since in this
exercise model this combination gave an error of less than
[Figure 5 near here: deviation of simulation results from the spare-state limiting probability, plotted for the runs with 100 or more steps and simulations.]
Figure 5 Comparison of Limiting Probability Versus Markov Simulation Results for Spare State

0.2 %. Some of these results are shown in Figure 5 for one of the possible states, in
this case, the spare state. This figure only shows the results of the cases where the
number of simulations and steps were equal to or greater than 100.

9 Using Discrete-Event Simulation for Sensitivity Analysis of Decision Variables in Asset Management

In most analyses, the system may not necessarily have constant parameters. Using
this framework, a sensitivity analysis can be conducted with changes in system
parameters over time. For example, a parameter of great interest to maintenance
planners is time interval between preventive maintenance activities.
New models can be constructed that consider changes such as an ongoing de-
crease of reliability after a certain number of steps until maintenance is performed,
or a continuous decrease of reliability at every time step. If these changes are well
behaved over the time interval, then they may be modeled as nonhomogeneous
Poisson processes. Reliability changes will affect the transition probabilities, and
changes in business activities can change the elements of the cost matrix. Multiple
analysis settings may be chosen to assess the impact on maintenance scheduling
for such changes. For example, a maintenance optimization goal may be to mini-
mize the “average total cost” of the process after 1000 steps. A set of simulations
covering multiple sensitivity analysis cases across the range of variables of interest
can show whether a near-optimal PM interval has been found.
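A sensitivity sweep of this kind can be sketched in a few lines. Here a single hypothetical decision parameter q (the per-step probability that a failed component is sent to repair) is varied and the k-step risk of Eq. (14) is recomputed; all matrices and numbers are illustrative, not taken from the chapter:

```python
import numpy as np

# Hypothetical four-state model (duty, fault, failed, repair); costs are placeholders.
C = np.array([[1.0, 2.0, 6.0, 3.0],
              [9.0, 1.5, 4.0, 2.0],
              [20.0, 7.0, 1.0, 5.0],
              [0.0, 0.0, 4.0, 0.5]])

def P_of_q(q):
    # q: probability that a failed component enters repair in a given step.
    return np.array([[0.6, 0.3, 0.1, 0.0],
                     [0.2, 0.5, 0.3, 0.0],
                     [0.0, 0.0, 1.0 - q, q],
                     [0.4, 0.1, 0.0, 0.5]])

def risk(q, k=10):
    R = C * P_of_q(q)                    # Eq. (8): entry-wise product
    P0 = np.array([1.0, 0.0, 0.0, 0.0])
    return float(P0 @ np.linalg.matrix_power(R, k) @ np.ones(4))  # Eq. (14)

qs = np.linspace(0.1, 0.9, 9)
best_q = float(min(qs, key=risk))        # decision value with the lowest k-step risk
```

The same loop applies to any decision variable that modifies P or C, such as the time interval between preventive maintenance activities.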
There are several considerations in applying the proposed model. Primarily, the
model should have an appropriate set of states. A component with more than one
failure mode may require a different state to describe that mode if that failure
mode has different transition probabilities to other states than those of other failure
modes.
Estimating transition probabilities between states in a system can be achieved
in two ways. If the system has a means of automatically identifying states, then it
is a fairly simple matter to collect the record of events when the system entered
and exited a particular state. An example is a mine equipment dispatching system
which records the time when each equipment operator enters a code describing the
state of the machine. Of course, manual entry of codes may be subject to error,
and so some data cleaning may have to be done.
From this information, the transition probabilities can be estimated using stan-
dard statistical analysis software. It is important to have both the entering and
exiting information for each state so that the set of events for each type of transi-
tion can be determined. In the case of an exponential distribution, the random
events will follow a Poisson process.
If the system does not have an automatic method for recording the state of the
system, then it may be possible to identify a particular state from a vector of fea-
tures that are observable from system processes (production and maintenance).

Once this parsing of states has been achieved, then estimation of the transition
probabilities proceeds as described above.
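The counting procedure described above can be sketched as follows; the state codes and the sample log are made up for illustration:

```python
from collections import Counter

def estimate_transition_matrix(sequence, states):
    """Maximum-likelihood estimate: count observed i->j transitions, normalize rows."""
    counts = Counter(zip(sequence, sequence[1:]))
    totals = Counter(sequence[:-1])
    return {i: {j: (counts[(i, j)] / totals[i] if totals[i] else 0.0)
                for j in states}
            for i in states}

# A made-up dispatch log of recorded machine states.
log = ["duty", "duty", "fault", "duty", "fault", "failed", "repair", "duty"]
P_hat = estimate_transition_matrix(log, ["duty", "fault", "failed", "repair"])

assert abs(sum(P_hat["duty"].values()) - 1.0) < 1e-12   # each row is a distribution
```

In practice the event records would first be cleaned and time-stamped so that both the entering and exiting events for each state are available, as noted above.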
Estimating costs for Cij, the transition cost of changing from state i to state j
within a period of time, may be more challenging. Ideally, the organization will
have an activity-based costing system. In that case, each transition between states
will map onto some cost that is recorded. Some transitions may have zero cost.
Benefits will have negative cost. Opportunity cost of lost production can be esti-
mated from the difference between the base-case cost and the costs associated
with the transition.

10 Conclusion

This work describes a formulation for modeling a single repairable component


with multiple states of reliability using a Markov process, and outlines the transi-
tion probabilities and costs associated with a risk function when the component is
in a particular state.
A particular challenge in reliability is estimation of transition probabilities. For
a Markov process, estimating a probability distribution entails several steps, in-
cluding collecting quality time data (and quarantining a portion of the data for
validation), feature vector extraction and classification into appropriate categories
representing states, and estimation of the probability distributions for transitions
between states. Since human decision making is part of the process, and subject to
its own transition probabilities, human learning may change the system, poten-
tially violating the Markov assumption. These model validation issues will be
addressed as part of future work.
Future work will also consider how to model more general cases (such as a re-
pairable component with transition probabilities that are not negative exponential
functions), how to include uncertainty in the cost estimation, and how to validate
models for actual systems.

References

[1] Lipsett M (2001) Modeling the Flow of Information in Mine Maintenance Systems. Proc.
CIM Annual Conference
[2] Virtanen I (2006) On The Concepts And Derivation Of Reliability In Stochastic Systems
With States Of Reduced Efficiency. Dissertation, University of Turku
[3] Lugtigheid D, Banjevic D, Jardine A (2004) Modeling Repairable System Reliability with
Explanatory Variables and Repair and Maintenance Actions. IMA J Manag Math
15:89−110. doi: 10.1093/imaman/15.2.89
[4] Ching WK (2006) Markov chains: models, algorithms and applications. Springer, New York
[5] Kececioglu D (1995) Maintainability, Availability and Operational Readiness Engineering
Handbook. Prentice Hall, Upper Saddle River

[6] Caldeira J, Taborda J, Trigo T (2006) Optimization of the preventive maintenance plan of
a series components system. Int J Press Vessel Pip 83:244−248.
doi: 10.1016/j.ijpvp.2006.02.016
[7] Lindqvist B (2006) On the Statistical Modeling and Analysis of Repairable Systems. Statistical Science 21(4):532−551. doi: 10.1214/088342306000000448
[8] Norris JR (1997) Markov Chains. Cambridge University Press, New York
[9] Sahner R, Trivedi K (1986) A Hierarchical Combinatorial-Markov Method of Solving Complex Reliability Models. In: Proceedings of FJCC 1986, pp 817−825. IEEE Computer Society Press, Los Alamitos CA
[10] Lisnianski A, Levitin G (2003) Multi-State System Reliability. World Scientific Publishing,
Singapore.
[11] D’Amico G, Janssen J, Manca R (2005) Credit Risk Migration Semi-Markov Models:
A Reliability Approach
[12] Zhang J (2005) Maintenance Planning and Cost Effective Replacement Strategies. Disserta-
tion, University of Alberta
[13] Modarres M, Kaminskiy M, Krivtsov V (1999) Reliability Engineering and Risk Analysis:
A Practical Guide. Marcel Dekker, New York
Managing the Risks of Adverse Operational
Requirements in Power Generation –
Case Study in Gas and Hydro Turbines

M. Salman Leong and Ng Boon Hee

Abstract Load demands in power generation for the national or district grid
often require turbo-generator sets to operate under adverse operational require-
ments with respect to maintenance and design ideals. Such instances typically
involve turbines operating beyond maintenance schedules or at part load condi-
tions. Part load operations for hydro turbines, in particular, present a set of unique
problems. Power generation managers have to manage the risks of machine dam-
age imposed on their engineering assets in attempt to ensure continuing and stable
electricity despatch. This paper presents two case studies examining the risks of
machine failures from adverse operating requirements and how they could be
managed by condition monitoring. One involves gas turbines operating beyond OEM
recommended operating hours between maintenance, where blade failures are a
potential concern. The risks were evaluated and managed with vibration monitor-
ing of the blade passing frequencies. The other case study relates to hydro tur-
bines operating in rough zones at part load conditions dictated by load stabiliza-
tion requirements of the electricity grid. Measurements of vibrations, draft tube
pressures and strain gauging showed distressed conditions when the turbines were
operated at part loads. Premature failures were experienced in these units.

Keywords Asset risk, Technical integrity, Equipment failure modes

__________________________________
M.S. Leong, B.Sc, PhD
Professor, Institute of Noise and Vibration, Universiti Teknologi Malaysia,
Jalan Semarak, 54100 Kuala Lumpur, Malaysia
e-mail: salman.leong@gmail.com
N.B. Hee, B.Sc
Research Associate, (formerly Power Station Manager, Tenaga Nasional Berhad),
Institute of Noise and Vibration, Universiti Teknologi Malaysia,
Jalan Semarak, 54100 Kuala Lumpur, Malaysia
e-mail: ngbh1@yahoo.com

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information
Systems and Decision Models, Engineering Asset Management Review,
DOI 10.1007/978-1-4471-2924-0_11, © Springer-Verlag London Limited 2012

1 Introduction

One of the many challenges that must be addressed by electricity generation
operators, planners and national grid administrators is the ability to meet the
requirements for the continuous supply of electricity to the national community
with the necessary reliability, taking into consideration technical, economic,
environmental and socio-political conditions.
supply having to meet electricity demand without fail. The dynamics between
supply and demand involve both long-term and daily short-term time frames. This
paper deals with maintenance and reliability issues faced by plant operators as a
result of having to ensure short-term power generation coping with immediate
supply (load despatch) from their facilities. Electricity demand fluctuates through-
out the day and night, peaking when industrial and consumer demands peak;
these demands are in turn influenced by many factors, including industrial usage,
climate and seasonal changes.
Recent experiences around the world have demonstrated that power generation for
national and district electricity grids has little excess load capacity (often
termed “spinning reserve”), partly due to the exorbitant capital cost of expanding
power generation plants and the inherent unplanned outages (non-availability) of
existing facilities. Under these scenarios, power generation plants are often oper-
existing facilities. Under these scenarios, power generation plants are often oper-
ated at maximum load capacities. In the event of unscheduled breakdowns or
equipment out on maintenance not brought back to service as originally planned,
plant operators often find themselves unable or not allowed to remove currently
operating units for maintenance based on the sole reason that a maintenance (in-
spection outage) is due. Maintenance schedules for large turbo-generator sets are
often guided by recommendations of the manufacturer (and insurance coverage
which may dictate compliance to such recommendations). This inevitably results
in a dilemma for a plant operator (and the national electricity grid Administra-
tor/National Load Despatch Centre) when national electricity load demands do
not permit units to be removed for maintenance. This paper, in part, examines how
such a dilemma needs to be managed.
Another problem relates to how electricity generation (MW power output) has
to be matched against electricity consumption. Base loads are provided by turbo-
generator sets on continuous operations, and peaking units are used to accommo-
date the varying peak load demands. In Malaysia, and probably in other countries,
base loads are usually assigned to steam and gas turbine sets (and nuclear if avail-
able) and peak electricity loads assigned to gas turbines and hydro turbines since
start ups and stoppage could be more readily accommodated on these turbine
types as compared to steam turbines, for example. This would, of course, be dic-
tated by the generation mix and availability unique to the country. Under such
scenarios of daily start stops, daily heat cycles are imposed on the gas
turbines. Some manufacturers use Equivalent Operating Hours (EOH) to reflect
the additional thermal reversal cycles imposed on the turbines, over and above
actual running hours.
To accommodate load stabilization requirements on the electricity grid, hy-
dro turbines are often used because of the almost instantane-
ous response of hydro units in electricity generation, merely from adjustments to
the wicket gates opening to the turbines. This poses another pertinent problem
where hydro units are then required to operate at part load conditions away from
peak capacity (full load design) operations. This obviously has undesirable conse-
quences to the long-term mechanical integrity of the hydro turbines. This paper
also presents issues and problems arising from such part load operations over the
service life of hydro turbine units.

2 Issues with Gas Turbines Operations

There are several issues of pertinent concern relating to gas turbine operations in
power generation which are fairly typical of industrial gas turbines.

2.1 Common Failures in Gas Turbines

Past experiences of power generation plants showed that blade failures are the
most common in gas turbines (see Figure 1). Rubs are also occasionally noticed on
the casing and rotor. This was consistent with experiences reported in the literature
that showed that blade failures are the most common fault in industrial gas tur-
bines. Meher-Homji [1, 2] cited statistics from a renowned insurance company
that blade failures accounted for as much as 42 % of failures in gas turbines. In a
more recent article by an insurance company (Allianz Technology Centre AZT
[3]), it was stated that statistical analysis of 714 gas turbine installation compo-
nents investigated by them during the last 10 years had shown that turbine blading
(14 %), compressor parts (9 %), casing (5 %), combustion chambers (5 %), rotors
(5 %) and burners (3 %) had the highest damage rates.

Figure 1 Common Gas Turbine Blade Failures Including: (a) Foreign Object Damage (FOD),
(b) Lost Parts, and (c) Cracks at Root

The more common problems in turbine blade rows are foreign object damage,
lost parts, cracks (at the blades and roots), rubs, loose disk coupling, deformation
and erosion. Lost parts usually produce an increased synchronous vibration
response and are readily detected from the increased amplitude and/or phase shift
of the x1 vibration vector. Cracks, looseness and rubs, unless they reach a
catastrophic stage, often remain undetected by the overall vibration level
monitoring typically used in equipment protection systems and in-plant
DCS/monitoring displays. Blade related faults have been shown to be more readily
detected from increased amplitudes of blade passing frequency components [4, 5].

2.2 Equivalent Operating Hours (EOH)

For equipment operated with variable loads, cycled frequently or operated in a
degraded service environment, the usable life before overhaul/replacement is po-
tentially reduced. A useful measure that accounts for varying wear rates as a func-
tion of operating history is the Equivalent Operating Hour (EOH). First developed
in the aerospace industry, the concept had been widely used by power plant opera-
tors and OEMs to give a normalized measure of service life for the turbines (gas
and steam turbines).
One of the major factors influencing the Equivalent Operating Hours of gas
turbines in peaking plants is the inherent daily start-stop cycle. For peak load
operations, the EOH typically increases by 4 to 6 h per daily operation. The gas
turbines are also subjected to load variations during daily operation, typically
from 40 to 100 MW (for a base load unit of 100 MW). This significantly shortens
the preventive maintenance intervals of the turbines and inevitably compels the
plant operator to squeeze the last bit of recommended running hours out of the
unit. Predicting component residual life and determining the optimal maintenance
interval is, at best, difficult, as it requires balancing maintenance and repair
costs against the risk of a component failing before the end of its useful life.
The situation becomes more complicated when the plant operator cannot remove a
unit for inspection and/or maintenance due to pressing electricity grid load
demands, forcing the Plant Operator/National Load Despatch Centre to extend the
EOH before a maintenance outage.
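To make the EOH arithmetic concrete, the sketch below accumulates EOH as fired hours plus a per-start penalty. The penalty value and the duty cycle are illustrative assumptions (the text cites 4 to 6 EOH added per daily start stop); actual OEM formulas also weight trips, fuel type and load level.

```python
def equivalent_operating_hours(fired_hours, starts, hours_per_start=5.0):
    """Accumulate EOH as fired hours plus a penalty per start-stop cycle.

    hours_per_start is an assumed mid-range value (the text cites 4 to 6 EOH
    added per daily start stop); real OEM formulas also weight trips,
    fuel type and load level.
    """
    return fired_hours + hours_per_start * starts

# A peaking unit fired 12 h/day with one start per day for a year:
days = 365
eoh = equivalent_operating_hours(fired_hours=12 * days, starts=days)
print(eoh)  # 6205.0 EOH accumulated from only 4380 fired hours

# Days until an assumed 4000 EOH minor-inspection interval is consumed:
per_day = equivalent_operating_hours(12, 1)  # 17 EOH per operating day
print(4000 / per_day)  # ~235 days instead of 333 days of pure running
```

The point of the sketch is that cycling, not running time, dominates how quickly the inspection interval is consumed for a peaking unit.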

2.3 Managing Risks of Operating Beyond Maintenance Schedules

This section presents a case study of a power plant that had to manage an
imposed situation where a gas turbine was operated beyond the recommended
maintenance schedule, prompting a re-examination of EOH schedules. This case
involved four identical gas turbines (GT3, GT4, GT5 and GT6) used for peak load dis-
patch (with daily start stops and occasional fuel change). One particular unit
(GT6) had to be operated well beyond OEM’s recommended EOH maintenance
schedules due to the pressing load despatch required of the plant as another unit
was not available due to unexpected delays in bringing that unit back to service
after a scheduled maintenance. A unique situation arose when there was a re-
quest by the OEM (and consequently the insurance company as well) for an im-
mediate outage.
Risks of unforeseen turbine failure and, in particular, potential blade failures
resulting from the continued operation of GT6 had to be assessed by the plant
operators. The risks of cracked or loose compressor blading and foreign object
damage (FOD), and particularly the costs of such damage, were weighed against
the potential revenue loss from an immediate unscheduled outage of the unit. These
economic considerations had to be balanced against potential savings from gen-
eration revenue and deferred (reduced) maintenance costs with an extended EOH
resulting from the continued operation of the unit. To ensure safe continuing op-
erations of the unit, vibration monitoring and analysis (FFT spectrum and analysis
of blades passing frequencies) were undertaken. Data interpretations were com-
pared with other “good units” (GT3 and GT4 which had recent maintenance works
that included compressor blades replacement).
In principle, blade faults could be detected from measurements and monitor-
ing of gas turbine operating parameters such as pressure, vibration, strain and
stress, and acoustic signals in an attempt to obtain information to assess the
blades’ condition. This is often easier said than done under practical operating
situations in the plant. Vibration analysis represents the most expedient tech-
nique. It had been reported in the literature that blade faults could be detected by
observing relative changes in the amplitudes of the BPF and its harmonics. Mitchell [4]
had shown that blade fault diagnosis (for pumps) can be done based on relative
changes in the blade passing frequency (BPF) and its harmonics. Kubiak
et al. [5] reported that blade rubbing could be detected if the blade passing
frequency (BPF) amplitude is found to be extremely high in the vibration spec-
trum. Figure 2 shows the blade passing frequencies of the compressor and tur-
bine blade rows, which were traceable to the individual rows, particularly if the
spectrum is high-pass filtered to exclude the higher-amplitude low-frequency
components (typically the synchronous x1 and x2 RPM components) and so improve
the vibration signals associated with the blades. Recent work by Lim and Leong [7]
on wavelet analysis on blade passing frequencies in a laboratory test rig showed
that additional information could be extracted from the time frequency display of
the wavelet for fault diagnosis.
In this particular case, the vibration spectral components of the BPFs of the
unit of concern (GT6) were compared against those of the other good units and
were found to have similar amplitudes. The BPFs from daily monitoring were also
trended over time. Particular attention was paid to sideband modulation at the
rotor speed. This allowed an assessment to be made of the blade condition.
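The comparison and trending approach described above can be sketched as a simple screening routine: compute each blade row's passing frequency from the shaft speed and blade count, then flag rows whose BPF amplitude exceeds a baseline built from the "good" sister units. The shaft speed, blade counts, amplitudes and alert ratio below are all hypothetical illustration values, not the data from this case.

```python
# Hedged sketch: BPF-based blade condition screening across sister units.
SHAFT_HZ = 50.0  # assumed 3000 rpm synchronous machine

# Hypothetical blade counts per compressor row (not the actual GT data)
blade_counts = {"row_1": 26, "row_2": 30, "row_12": 48}

def bpf(row):
    """Blade passing frequency = number of blades x shaft rotational frequency."""
    return blade_counts[row] * SHAFT_HZ

def screen(unit_amps, baseline_amps, alert_ratio=2.0):
    """Flag rows whose BPF amplitude exceeds the good-unit baseline by alert_ratio."""
    return [row for row, amp in unit_amps.items()
            if amp > alert_ratio * baseline_amps[row]]

# Illustrative BPF amplitudes (g): fleet baseline vs the unit being trended
baseline = {"row_1": 0.30, "row_2": 0.25, "row_12": 0.20}  # e.g. recently overhauled units
suspect  = {"row_1": 0.35, "row_2": 0.28, "row_12": 0.95}

print(bpf("row_12"))            # 2400.0 Hz
print(screen(suspect, baseline))  # ['row_12'] would warrant closer trending
```

In daily use the same comparison would be repeated on each new spectrum so that the flagged rows, and any growing sidebands around them, can be trended over time.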

Figure 2 Typical Blade Passing Frequencies (BPFs) of a Gas Turbine

Figure 3 Comparison of BPFs Vibration Spectra Between Different Units (amplitude in g versus
frequency in Hz for units GT3, GT4, GT5 and GT6)

As illustrated in Figure 3, there were no changes noted in the spectrum to suggest
any significant changes in blade conditions, and the unit was operated until such
time when a maintenance outage could be undertaken by the plant. The assessment
of the amplitude severity of the BPFs and of excessive sidebands generated by
these gas turbines was also shown to be able to detect blade rubs in another
gas turbine unit (GT4) a year later [6].
The unit of concern was subsequently taken off for its major overhaul (‘C’ third
inspection) at a more appropriate time. During this inspection, it was found that all
the compressor blades and its intermediate pieces were intact and undamaged.
However, eight intermediate pieces at Row 12 were found to be protruding out
due to blade looseness, but were nevertheless still within the acceptable limits.
This confirmed the correctness of the plant’s decision regarding the continued
operation of the unit.

2.4 Economics and Financial Risks/Gains of Extended EOH

This section summarizes the economics and financial risks/gains of the extended
EOH based on the experience of the power plant with respect to the unit of con-
cern (GT6). The financial risks were evaluated based on the potential cost of blade
failures (in all likelihood FOD damage) weighed against opportunity costs (revenue
and capacity payments from the electricity distribution party). Even with
FOD damage, an excess clause in the insurance coverage means that typical FOD
damage is not a claimable sum. The key is to ensure that the risks associated
with a major catastrophic failure of the turbine are avoided.
The maintenance schedule in accordance with the OEM’s recommendations was
16,000 EOH for a complete cycle of inspection, with intervals of 4000 EOH between
minor inspections. This unit was at approximately 64,000 EOH at the time of the
OEM’s request for an immediate outage (as compared to the scheduled 48,000 EOH).
When the unit was finally removed for overhaul at 65,953 EOH, this meant an
extension of 17,953 EOH, saving one complete cycle of inspection.
A more significant saving was achieved through availability, as reflected in the
capacity and energy payments that would have been lost had the unit been taken
out on an untimely outage. This unit was operated for more than 120 days beyond
the day when an immediate outage was recommended by the OEM, representing an
additional 120 days of availability. The Capacity Payment and Energy Payment for the gas
turbine payable to the plant were valued at USD 21,100 and USD 35,350 respec-
tively per machine per day, which amounted to USD 56,450 per day. This repre-
sented a revenue savings for the power plant of USD 6,774,000 for availability.
The combined savings to the plant for this extended EOH from maintenance sav-
ings and increased availability revenue were almost USD 14.6 million. Therefore,
it made financial sense for this plant to have considered and implemented the
extended EOH in an environment of pressing MW load demand.

Table 1 Examples of Scheduled Inspection Costs for Gas Turbines in Malaysia

Description             Average Cost (USD)

Cost of A Inspection    14,200
Cost of B Inspection    15,800
Cost of C Inspection    7,773,000
Extension in EOH        17,953 EOH
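The savings quoted above can be reproduced with a few lines of arithmetic, using the daily payments from the text and the 'C' inspection cost from Table 1:

```python
# Reproducing the savings arithmetic quoted in the text (all figures in USD).
capacity_payment_per_day = 21_100
energy_payment_per_day = 35_350
extra_days = 120                 # operation beyond the requested outage date
c_inspection_cost = 7_773_000    # one deferred complete inspection cycle (Table 1)

daily_revenue = capacity_payment_per_day + energy_payment_per_day
availability_saving = daily_revenue * extra_days
combined_saving = availability_saving + c_inspection_cost

print(daily_revenue)        # 56450 per machine per day
print(availability_saving)  # 6774000
print(combined_saving)      # 14547000, i.e. "almost USD 14.6 million"
```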

3 Issues with Hydro Turbines

While the effects of load variations on gas turbines are less obvious to the plant
operator on an immediate basis (notwithstanding their significant long-term impact
on useful life and EOH), load variations in hydro turbines are apparent almost
immediately. Hydro turbines inherently have a designated “rough zone” within
their performance curve (operating window). Due to the flow angles of the
working fluid (water) as it enters and
leaves the runner, fluid structural interaction under part load conditions results in
unbalanced hydraulic conditions in the working section and draft tube. In the part
load operating zone, the hydraulic efficiency drops and, more importantly, from a
life cycle perspective, vibrations (and stresses) induced on the turbine are substan-
tially increased. This section of the paper presents issues, arising from operating
hydro turbines under part load conditions, that increase the risks to the long-term
integrity of the turbines and are often not readily recognized by National Load
Despatch administrators (and perhaps even the plant operator).
This case study relates to four Francis turbine units (base load each 100 MW)
operating at a constant speed of 250 rpm (approximately 4.17 Hz). The hydro
turbines were, almost as a matter of routine, used to stabilize power supply to
the national electricity grid and, as such, operated over a broad load range over
extended periods in service.

3.1 Draft Tube Pressure Pulsations

Draft tube pressures (although often accessible for manual readings, but not neces-
sarily monitored for condition assessment) would exhibit dynamic variations aris-
ing from changes in flow conditions. A typical plot of draft tube pressures under
different load regimes is shown in Figure 4. The pressure variations with time
inherently result in pressure pulsations with frequency content. Fast Fourier
Transformation (FFT) of the pressure would yield dynamic pressures at sub-syn-
chronous frequencies of the shaft running speed. A pressure FFT is shown in Fig-
ure 5 for an operational load condition of 40 MW. A dominant pressure peak was
evident at 1.0 Hz (~25 % of runner RPM).
Operations of hydro turbines under part load conditions had been long known
to result in a spiral vortex flow as the water leaves the runner into the draft tube.
This flow vortex results in cyclic pressure fluctuations as evident in the above

Figure 4 Draft Tube Pressure Variations with Time for Different Load Conditions

Figure 5 FFT of Draft Tube Pressure Under Part Load Conditions

measurements. Flow turbulence and cavitation, in particular, result in erosion and
pitting on the runners and on the draft tube casing internal liner. Repairs to the
runner and replacement of the liner would inherently be required.
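The FFT step described in this section can be sketched as follows. A synthetic pressure record with an assumed 1.0 Hz vortex-rope component and added noise stands in for the measured draft tube signal; the sample rate, record length and amplitudes are illustrative assumptions.

```python
import numpy as np

FS = 64.0               # sample rate (Hz), assumed
RUNNER_HZ = 250 / 60.0  # 250 rpm, about 4.17 Hz

# Synthetic stand-in for a part-load draft tube pressure record:
# a 1.0 Hz vortex-rope pulsation plus broadband noise.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / FS)
p = 0.8 * np.sin(2 * np.pi * 1.0 * t) + 0.1 * rng.standard_normal(t.size)

# One-sided amplitude spectrum; search only below the running speed
amps = np.abs(np.fft.rfft(p)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / FS)
sub = freqs < RUNNER_HZ
peak_hz = freqs[sub][np.argmax(amps[sub])]

print(round(peak_hz, 2))              # 1.0 Hz dominant sub-synchronous peak
print(round(peak_hz / RUNNER_HZ, 2))  # 0.24, i.e. about 25 % of runner speed
```

On a real unit the same peak-finding step would be applied to the measured draft tube pressure record rather than a synthetic signal, and the peak trended against load.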

3.2 High Sub-Synchronous Vibrations

A consequence of the vortex flow generated within the runner and draft tube is
high sub-synchronous vibrations induced in the rotor. The sub-synchronous com-
ponent (1.03 Hz, corresponding to ~0.25xRPM) in fact exceeds the synchronous
x1 RPM component associated with residual rotor unbalance. This sub-synchronous peak
frequency at 0.25xRPM (1.03 Hz) was identical to the dynamic frequency of the
pressure peak measured at the draft tube. This confirmed that the sub-synchronous
peak was flow induced. A plot of vibration spectrum against load (as obtained
from controlled tests in load increments of 10 MW) is given in Figure 6. These
plots clearly showed the onset of relatively higher flow induced vibrations result-
ing from part load operations (often referred to by the OEM and plant operators as
the “rough zone”).
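A simple condition indicator consistent with the observation above is the ratio of the 0.25x sub-synchronous amplitude to the 1x synchronous amplitude: a ratio above unity suggests that flow-induced excitation dominates residual unbalance. The amplitudes below are illustrative, not the measured values.

```python
def rough_zone_indicator(amp_quarter_x, amp_1x):
    """Ratio of the flow-induced 0.25x component to the 1x unbalance component.

    Values above 1 mean the sub-synchronous vortex excitation exceeds
    residual unbalance, as observed at part load in this case study.
    """
    return amp_quarter_x / amp_1x

# Illustrative shaft displacement amplitudes (micrometres)
print(rough_zone_indicator(amp_quarter_x=180.0, amp_1x=60.0))  # 3.0 -> rough zone
print(rough_zone_indicator(amp_quarter_x=15.0, amp_1x=60.0))   # 0.25 -> normal
```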
A visually more dramatic insight on the effects of part load operations is ob-
tained when the shaft vibrations were displayed in time waveforms. Vibration time

Figure 6 Vibration Spectrum Plotted Against Generator Load (MW)

Figure 7 Vibration Time Waveforms for Base Load and Part Load Conditions

waveforms (rotor absolute displacements relative to the structural foundation)
were obtained under incremental load conditions from 0 MW (full speed, no load)
to base load. Comparisons between the baseline (100 MW) vibration time wave-
forms with the part load condition (40 MW) are given in Figure 7. The plots
showed relatively more severe impulsive vibrations at part load conditions as
compared to more regular harmonic type vibrations associated with residual rotor
unbalance.

3.3 Draft Tube Casing Stresses

A consequence of the pressure pulsations is physical deformation (flexing) of the
draft tube casing, which is often visually observable. In fact, the draft tube
steel casing for all four units of this particular hydro power plant had to be
stiffened with additional ribs soon after initial commissioning, as a result of
cracks in the external casing caused by excessive flexing. Even with the
additional steel rib reinforcement, flexing of the draft tube casing was still
visible.

Figure 8 Draft Tube Casing Strains (Maximum and Minimum Principal Stress) Versus Time
for Base Load and Part Load Conditions

Strain gauging of the draft tube casing was undertaken on one unit. Strain lev-
els were measured under incremental load conditions during the same time when
the above shaft vibrations were obtained. Comparisons of the baseline
(100 MW) strain time waveforms with the part load condition (40 MW) are given
in Figure 8. The time waveforms of the measured strain (which were then con-
verted to stress levels) showed dynamic characteristics similar to the shaft vibra-
tions for the same load conditions. Stress reversals at part load conditions were
typically five to ten times more extreme than at base load conditions. This
demonstrated that components subject to fluid-structure interaction were more
highly stressed, inevitably leading to reduced life.
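The five-to-ten-fold comparison can be expressed as a ratio of peak-to-peak stress ranges between the two load conditions. The synthetic records below are assumptions standing in for the converted strain-gauge data: regular harmonic cycling at base load versus larger impulsive reversals at part load.

```python
import numpy as np

def stress_range(stress):
    """Peak-to-peak stress reversal range of a time record (MPa)."""
    return float(np.max(stress) - np.min(stress))

t = np.linspace(0, 10, 2000)
# Stand-ins for converted strain-gauge records (MPa): low-amplitude harmonic
# cycling at base load vs large impulsive reversals at part load.
base_load = 5.0 * np.sin(2 * np.pi * 4.17 * t)
part_load = (20.0 * np.sin(2 * np.pi * 1.04 * t)
             + 15.0 * np.sign(np.sin(2 * np.pi * 0.2 * t)))

ratio = stress_range(part_load) / stress_range(base_load)
print(round(ratio, 1))  # 7.0, within the reported five-to-ten-fold band
```

A full fatigue assessment would apply cycle counting (e.g. rainflow) to the measured records rather than a single range ratio, but the ratio already conveys why part load operation consumes life faster.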

3.4 Potential Consequences

The most commonly recognized and perhaps accepted consequence of part load
operations in hydro turbines is repair and part replacement to the runner and
draft tube liner due to cavitation after several years of operation. The unit also
inherently operates at reduced hydraulic (cost) efficiency under part load condi-
tions. This may be deemed an acceptable price to pay arising from the necessity to
operate in the rough zone for load stabilization to the electricity grid. What is
unacceptable to plant operators would be the inability to operate at all at higher
loads due to high vibrations inherent with part load operations. In fact, there was
an incident with this particular power station where the main bearing pedestal
which supports the entire rotor train suffered structural cracks well before design
life of the unit, resulting in an inability of the unit to be operated for load dispatch
at higher loads. It was the considered opinion of the authors that this bearing ped-
estal structural failure was a result of extended operations in the rough zone under
part load operations of the unit.

4 Conclusion

Turbo-generator sets operating outside their design operating windows inevitably
suffer a higher risk of premature failure in addition to being less efficient, and
electricity generating costs per unit output are also higher. While such operation
may be operationally necessary due to pressing load demands and load stabilization
requirements, plant operators need to recognize and manage the risks of potential
failures associated with these operating regimes.
Managing the risks of adverse operational conditions would first require the
plant operator to recognize the nature and potential severity of the risk. Operating
beyond maintenance schedules would potentially exacerbate fatigue related fail-
ures. This requires the plant operator to closely monitor all available condition
indicators. In the case of the gas turbines, monitoring of the blade passing
frequencies and sideband activity was used to assess potential deterioration in
blade condition. For operations at part load or outside the design operating
window, all available monitoring tools should be used. For the hydro turbines,
this included monitoring and dynamic analysis (FFTs) of the draft tube pressures.

References

[1] Meher-Homji CB (1995) Blading vibration and failures in gas turbines: Part C – Detection
and troubleshooting. ASME no. 95-GT-420
[2] Meher-Homji CB (1995) Blading vibration and failures in gas turbines: Part D – Case
studies. ASME no. 95-GT-421
[3] Allianz Center for Technology (2008) Product service information 1/00. Information/Damage
analysis. www.en.allianz-azt.com
[4] Mitchell J (1975) Examination of pump cavitation, gear mesh and blade performance using
external vibration characteristics. In: Proc 4th Turbomachinery Symposium, Texas A&M
University, 39–45
[5] Kubiak J, Gonzalez G, Garcia G, Urquiza B (2001) Hybrid fault pattern for the diagnosis of
gas turbine component degradation. Int Joint Power Generation Conf New Orleans no.
PWR-19112
[6] Leong MS, Lim MH (2008) Detection of blade rubs and looseness in gas turbines – Opera-
tional field experience and laboratory study. 5th Int Conf Cond Monit Mach Failure Detect
Prev Tech Edinburgh 901–912
[7] Lim MH, Leong MS (2010) Improved blade fault diagnosis using discrete blade passing
energy packet and rotor dynamics wavelet analysis. ASME no. GT2010-22218, ASME
Turbo Expo 2010: Power for Land, Sea and Air, Glasgow
Field-Wide Integrated Planning in a Complex
and Remote Operational Environment:
Reflections Based on an Industrial Case Study

Yu Bai and Jayantha P. Liyanage

Abstract Oil and Gas (O&G) producers are challenged to increase working
efficiency while reducing production costs. This demands application of various
innovative techniques and novel work management solutions. In this context,
collaborative work and integration of work processes have become a major focus
of interest. One well-known initiative involves strategic and field-wide integrated
work planning that aims at more efficient and cost-effective coordination of activi-
ties by core disciplines and stakeholders for maximising business results.
This paper addresses issues related to Integrated Planning (IP) within an O&G
offshore production environment. It is based on an ongoing project in Norway in
close cooperation with the O&G industry.

Keywords Oil and gas assets, Work management, Operations and maintenance
performance

1 Introduction

According to the official energy statistics of the U.S. Government [1], the world’s
demand for oil continues to grow. The shortage of supplies together with the
growth of global requirements has significantly contributed to the rise in the
price of oil. Higher oil prices have led to a significant expansion of O&G pro-
duction and exploration [2, 3] to meet the energy demand and meet the raising

__________________________________
Y. Bai
Centre for Industrial Asset Management, University of Stavanger N-4036, Stavanger, Norway
J.P. Liyanage
Centre for Industrial Asset Management, University of Stavanger N-4036, Stavanger, Norway

J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew, Asset Condition, Information
Systems and Decision Models, Engineering Asset Management Review,
DOI 10.1007/978-1-4471-2924-0_12, © Springer-Verlag London Limited 2012

concerns of multiple stakeholders in business [4, 5]. This is particularly evident
today in the North Sea.
Although some new fields have been scheduled or have already completed ex-
ploration activities, the status of current production and limited expected reserves
have forced producers to improve oil field productivity. The central focus is on
increasing production at the lowest cost possible to enhance the maximum utilisa-
tion of available reserves. Following technological development and the imple-
mentation of new IT techniques and advanced infrastructure in recent years [6],
more and more O&G producers in the Norwegian Continental Shelf (NCS) have
started to realise opportunities of field-wide integration between offshore produc-
tion and onshore support [3, 7]. This is particularly seen in the offshore O&G
production environment in the North Sea [3] in relation to a major re-engineering
process termed “Integrated Operations” (IO). This began in 2004−2005 as a new
development scenario for the offshore industry [6, 8]. It has major benefits in
making work more efficient, reducing work conflicts, avoiding unnecessary re-
source waste and enhancing cost-effectiveness, etc.

1.1 Integrated Operations

Integrated Operations (IO) is a new baseline established in the NCS during the
past few years. It is seen as a way to optimise and improve business performance
by integrating operational disciplines, different phases of complex but inter-
dependent work processes, cooperative organisations, and different geographical
locations. This is under implementation through a number of innovative solutions
involving real time data integration, field-wide information sharing, interpretation,
support tools, management techniques, advanced technologies and new principles
of collaborative working [9, 10].
IO could also be seen as an operational setting where integration of both pro-
duction assets and the technical support environment [9] is required to create an
active collaborative environment that improves the efficiency of production assets
through enhanced capabilities. In some oil fields, as experienced today on the NCS, the estab-
lishment of a common digital infrastructure and reliable data management is al-
ready on schedule. Meanwhile, as one of the necessary factors, intelligent work
processes, which develop collaborative decision loops and task and activity flows
across disciplines both onshore and offshore, are also under focus as a prerequisite
for successful applications of IO [8].
In this context, initiatives related to Integrated Work Processes (IWP) are also
in progress to streamline decisions and activities. In principle, IWP involves an
effort to integrate work processes across operational disciplines by using Informa-
tion and Communication Technology (ICT) [9, 11]. It involves a series of technical
and managerial measures whereby information about operations is made available
to all parties involved, online and in real time, to enhance the collaborative work
management process with respect to time, quality, cost and risk.

Field-Wide Integrated Planning in a Complex and Remote Operational Environment 221

To realise the IWP, it is necessary to install an effective planning process for the
rearrangement of all tasks and activities within or between disciplines.
This paper focuses on the definition of Integrated Planning and other related
factors which could provide a framework for further research.

1.2 Method

This case study was performed with one of the major O&G producers in the
North Sea, with participation in the company’s planning process. The objective
was to identify Integrated Planning scenarios. It was addressed mainly by using
empirical data from the Norwegian Continental Shelf (NCS), participating in the
company’s internal programmes and projects, and drawing on the knowledge of
professionals in the field as well as the existing academic literature. The required
data was collected, and knowledge and understanding gathered, through commu-
nication with key offshore engineers, active co-operation with IP planners, review
of project reports and other company documents, and observation of internal
project workshops and meetings.
This paper focuses on the Integrated Planning concept and its possible appli-
cation levels in different environments. A brief introduction to the influential
factors, derived from aspects of dynamic business, cost, time, and quality, is also
given to illustrate the limits and constraints of an actual Integrated Planning
solution.

2 Integrated Planning

As Kayacan and Celik describe [12], Integrated Planning (IP) enables the align-
ment of key operational planning processes to provide a common perspective
across work plans. The major objective of IP is to integrate all operational plans
into a single, centralised planning system, realised online and based on a complete
database containing key data on critical processes.
Oil and gas production and exploration involve complex work processes.
According to Payne [13], historical operation planning fails to link strategic plans
to operational plans. Each operational segment focuses on its own plan, creating
conflicts and resource waste in the management of constraint factors [14]. In
addition, the lack of performance measurement results in deviation between
business strategy and execution [15]. This seriously harms the feasibility of
strategies and reduces production effectiveness. The effort of the O&G sector is
therefore to merge all activity-related information coming from multi-disciplinary
sources into an accurate, integrated plan with a seamless interface, efficiently
aligning needs and requirements with daily work.
222 Y. Bai and J.P. Liyanage

2.1 Operational Requirements of Integrated Planning

An IP process can influence three key operational requirements in a business
context [14]:
a. planning the future work with horizontal periodic plans based on constraint
factors;
b. creating commitment to work process milestones and templates for continuous
integrity in planning; and
c. enhancing the IT environment to suit users’ requirements and optimising the
Integrated Planning work process (e.g. web-based publishing board, automatic
data transfer and conversion tools).

2.2 Horizontal Periodic Planning

In principle, integrated planning contributes to efficiently coordinating, sched-
uling, and carrying out the work of field-wide operations. Following Dewhurst
and Horton [15, 16], not only are short-term plans required to guide the execution
of activities, but medium-term and long-term plans must also be in place in order
to organise a series of actions that achieve tactical and strategic business goals.
Information about required activities from the different operational disciplines
is aggregated into an independent system and database, and into periodic plans of
varying horizon. The periodic plans created through the integrated planning
process can be divided into three separate time periods, as shown in Figure 1.
The short-term plan (e.g. the weekly plan) is an operational plan which sched-
ules detailed operational activities with clear roles and responsibilities. In order to
ensure the success of a business objective, a set of quantitative measurements (e.g.
key performance indicators) is required for the planning stakeholders (e.g. the
onshore scheduler). This provides an effective, on-time interface between stra-
tegic, tactical and operational decisions [15].

Figure 1 Different periodic plans are addressed in Integrated Planning


Field-Wide Integrated Planning in a Complex and Remote Operational Environment 223

The medium-term plan involves important information summarising the status
of future work in relation to production continuity. Thus, as a tool, it can be used
to evaluate the possible constraint factors which limit offshore production capa-
city, by establishing a multi-discipline workshop between onshore and offshore
operations. This helps the effective coordination of work requirements and
conflicts, bringing together the current status and the strategic needs of the
immediate future.
The long-term plan (e.g. the one-year plan) is the reflection of the organisation’s
strategy, and involves information about cost, time, quality and risk, which are
fundamental components of business planning. Some specific constraint factors
(e.g. employee numbers, budget distribution) are also handled by high-level
managers in the long-term plan.
All three plans described here are involved in the Integrated Planning process.
They illustrate the relationships and structure at operational, tactical, and strategic
levels.
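The three horizons above can be illustrated with a toy bucketing rule. This sketch is not from the paper: the 7-day and 90-day boundaries and the activity names are assumptions for illustration (the paper only fixes the weekly and one-year horizons).

```python
def horizon(days_ahead):
    """Classify an activity by how far ahead it is due to start.
    Boundary values are illustrative assumptions, not from the paper."""
    if days_ahead <= 7:        # weekly operational (short-term) plan
        return "short-term"
    elif days_ahead <= 90:     # assumed medium-term window
        return "medium-term"
    else:                      # up to the one-year strategic plan
        return "long-term"

# Hypothetical activities: (name, days until planned start)
plans = {"short-term": [], "medium-term": [], "long-term": []}
for name, start in [("well test", 3), ("crane overhaul", 45), ("turnaround", 200)]:
    plans[horizon(start)].append(name)

print(plans)
```

In practice the aggregation would read from the independent planning database rather than an in-line list.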

2.3 Work Process Milestones and Templates for Continuous
Integrity in Planning

Integrated Planning is a continuous, repeatable process for sustained production
in O&G business environments. In their book [17], Hammer and Champy state
that “integrated planning with business requirements is the fundamental rethinking
and radical redesign of business processes to achieve dramatic improvements in
critical current constraints of performance, such as cost, quality, service, and
speed”.
Integrated Planning needs an efficient process design to concentrate attention
on critical constraint factors, namely, their consequence and frequency, to help
users to arrange their work, avoiding potential risks and conflicts [18]. Also, a
detailed process design with clear roles and responsibilities provides better coop-
eration and communication between disciplines, reducing potential pitfalls due to
misunderstanding.
A typical cycle of Integrated Planning processes in the O&G industry, as
shown in Figure 2, starts from information collection and ends in work execution
and reporting. Information from different disciplines is integrated into a database.
Related specialists identify potential conflicts through analysis based
on constraint factors (e.g. utilisation effect of critical equipment, loading rate of
ship space) and priority.

Figure 2 Integrated Planning Process

Planners, as coordinators, arrange multi-disciplinary workshops to evaluate the
frequency and consequences of conflict issues and
handle these problems through plan adjustments or by altering activity priorities.
With the agreement of key specialists and administrators, a baseline plan is
created and prepared for execution.
An important consideration here is that the above process is an effective can-
didate cycle for optimisation efforts. A final baseline does not mean that all the
activities prepared and made available can be executed precisely and immediately.
In fact, adequate time and field information are needed for adaptation. This means
that some deviation between offshore practice and the onshore planned baseline is
inevitable. The engineers on offshore platforms may perceive activity priorities
differently from onshore specialists. Also, IP is not the only focus for specialists
and users: unplanned tasks or critical performance delays always occur, resulting
in consequent schedule delays. So, in the first few months of an IP application, the
proportion of target plan attainment according to baseline estimates will not
always be high. Many problems emerge during execution efforts, pushing the
process template towards continuous optimisation.
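The plan-attainment proportion mentioned above can be made concrete as the share of baseline activities finished no later than planned. The sketch below is illustrative only: the record fields and the sample numbers are assumptions, not data from the case company.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Activity:
    planned_finish: int            # day number within the planning period
    actual_finish: Optional[int]   # day actually finished, or None if not done

def plan_attainment(activities):
    """Fraction of baseline activities finished no later than planned."""
    if not activities:
        return 0.0
    on_time = sum(1 for a in activities
                  if a.actual_finish is not None
                  and a.actual_finish <= a.planned_finish)
    return on_time / len(activities)

acts = [Activity(5, 4), Activity(10, 12), Activity(7, 7)]
print(plan_attainment(acts))  # two of three on time
```

Tracked period by period, a rising value of this kind of KPI would indicate that the process template is converging through the continuous optimisation described above.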

2.4 Enhancing the IT Environment to Suit Users’ Requirements
and the Optimisation of Integrated Planning Work Processes

Realisation of Integrated Planning relies on a highly efficient IT system. The
utilisation of advanced infrastructure and Information and Communication Tech-
nology (ICT) gives the group of engineers, specialists and planners better visuali-
sation, communication and work management, improving the capability of the
planning process and thereby the stability and reliability of the final plans. As
Holmstroem and Drejer [18] indicated, the IT system needs to support all steps of
the planning process and offer tools for the interfaces between the databases
involved in information delivery. Moreover, it must also satisfy the requirements
of integrated planners and system users.
In Integrated Planning processes, data integration, migration, cleanliness, and
standardisation are important concerns for information migrating between disci-
plines [19]. Given the huge amounts of data delivered, there are normally not
enough planners to manually check and monitor data quality. There is a clear need
for the IT group to reorganise, and to create tools tied to the original IT system
that take over some of these tasks, for instance by automatically prioritising and
scheduling activities according to a pre-defined set of rules, thereby releasing
planners for the key planning tasks.
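The rule-following automation mentioned above can be sketched as a simple sort over activity records. This is not the company’s tool: the rule order (safety-critical first, then production impact, then waiting time) and the field names are assumptions for illustration.

```python
# Hypothetical activity records; the fields are illustrative, not a real schema.
activities = [
    {"id": "A1", "safety_critical": False, "production_impact": 3, "days_waiting": 10},
    {"id": "A2", "safety_critical": True,  "production_impact": 1, "days_waiting": 2},
    {"id": "A3", "safety_critical": False, "production_impact": 5, "days_waiting": 4},
]

def priority_key(act):
    # Pre-defined rules: safety-critical work first, then higher production
    # impact, then longer-waiting requests. Negation yields descending order.
    return (not act["safety_critical"],
            -act["production_impact"],
            -act["days_waiting"])

schedule = sorted(activities, key=priority_key)
print([a["id"] for a in schedule])  # ['A2', 'A3', 'A1']
```

A real support tool would read from and write back to the IP database, leaving genuinely conflicting cases to the planners.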
From the users’ point of view, the major functionality of an integrated plan is
to track, search and monitor the schedules related to their work and workgroup. If
a web-based published information interface can be established, with sensible
links, references and charts for better visualisation, interpretation and application,
process efficiency can be significantly improved as information sharing goes
beyond a “need-to-know” basis [20].
By nature, Integrated Planning is much more than a simple, linear design of
plans and schedules. Departmental plans, together with temporary projects, con-
tribute to a complex mix of information and involve many kinds of inter-relation-
ships that are difficult to understand fully. This raises the requirement for a form
of Portfolio Management: a management tool for the constructive coordination of
different projects through project scope identification and organisation patterns.
The expectation here is that the IT system provides an adaptable portfolio struc-
ture for future developments, thus providing a platform to control the task port-
folio and its applications as the critical dimensions vary [21].

3 Status of Integrated Planning

To realise the operational objectives of IP, O&G producers need to evaluate
their current planning status and optimise it through work process integration and
by updating IT and infrastructure tools. However, for various reasons (e.g. busi-
ness requirements, financial limits, future growth prospects), it is not easy to
achieve all the business objectives of IP for every oil field. This challenges O&G
producers to evaluate their production capacity and environment, and to identify
the best solution based on an effective balance between the cost of establishing IP
and the benefits of its implementation.

3.1 Levels of Integrated Planning

Based on the degree of integration and available capacity, Integrated Planning in
the O&G industry can be classified into four levels, from the simplest integration
(level-1) to the most effective integration (level-4). A given oil field may decide to
limit itself to one specific level, subject to its business conditions, or it may
gradually proceed from one level to another in order to realise the business benefits
of full-scale integration. The IP levels are briefly described in the next subsections.
Level-1 (conventional status): Each discipline (e.g. drilling, logistics, mainte-
nance) prepares its own plan for the activities of the next period. According to its
own priorities, each discipline provides the required work activity list to the
onshore scheduler and supervisor for review. A multi-discipline workshop involv-
ing offshore or onshore schedulers, supervisors, and material coordinators is
established to select urgent or critical work, considering the various offshore
constraint factors. Progress on activities in the past period (e.g. percentage
completed) is reported to the supervisor or director of each department (see
Figure 3).

Figure 3 Level-1 of Integrated Planning

Level-2: The major characteristics of IP level-2 are (see Figure 4):
i. There is an independent database for IP.
ii. Key Performance Indicators (KPIs) are established to evaluate the planning
processes and execution.
Each discipline’s expected activity list for the next period is organised into an
independent database. Data delivery from the disciplines follows standard input
criteria (e.g. planned and actual start and finish times, duration, person in charge,
percentage complete, priority, and related resource and cost information), which
determine the types and scope of the data. An integrated planning process is
implemented to create field-wide time-horizon plans (i.e. short-, medium- and
long-term plans) with the agreement and support of the different disciplines. Some
critical constraint factors are established to reflect the status of offshore plan
execution, based on advanced ICT and information from the database, for better
surveillance of execution. These are reviewed in weekly multi-discipline work-
shops.
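The standard input criteria listed above amount to a common activity record that every discipline delivers into the independent database. A minimal sketch of such a record follows; the field names, types and defaults are assumptions for illustration, not the schema used on the NCS.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PlannedActivity:
    """One row of the independent IP database (illustrative schema only)."""
    activity_id: str
    discipline: str                      # e.g. "drilling", "logistics"
    planned_start: date
    planned_finish: date
    actual_start: Optional[date] = None
    actual_finish: Optional[date] = None
    duration_days: float = 0.0
    person_in_charge: str = ""
    percent_complete: float = 0.0
    priority: int = 3                    # 1 = highest
    cost_estimate: float = 0.0

a = PlannedActivity("M-042", "maintenance",
                    date(2009, 5, 4), date(2009, 5, 6),
                    duration_days=2.0, priority=2)
print(a.discipline, a.percent_complete)  # maintenance 0.0
```

Fixing such a record format is what makes field-wide aggregation and KPI calculation across disciplines possible.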
Figure 4 Level-2 of Integrated Planning

Level-3: The integration of planning into Onshore Centres (OC) is the key
characteristic of level-3. Following technological developments and the use of
advanced communication systems, O&G producers require such centres to manage
dynamic work content in multi-disciplinary work processes between onshore and
offshore. Such centres are normally equipped with high-quality communication
and monitoring tools, advanced visualisation technologies, and a convenient
working environment for real-time support and the coordination of dynamic
work [8].

Figure 5 Level-3 of Integrated Planning
As Figure 5 shows, the OC is a dynamic environment involving real-time data
delivery and multi-discipline workshops. Information and data on critical con-
straint factors can be displayed directly in the OC, which drives the necessary
follow-up initiatives for improved collaboration between disciplines. The infor-
mation flow processes are created and progressively optimised from manual to
automatic through the application of specific IT-based support tools. In addition,
a web-based IP publishing page can be provided for all designated users. On this
page, users can easily check the relevant integrated plans and their execution
status via filter tools and access rights. Planning of key constraints (e.g. accom-
modation on platforms), KPI dashboards, and planning process definitions and
explanations can also be incorporated into the IP pages.
Level-4: Integrated Planning at this stage is expanded to focus on cooperation
between operators and external vendors. New infrastructure (e.g. better monitor-
ing tools) is installed in the OC to extend communication and coordination with
other vendors and contractors (e.g. through external KPIs, real-time support for
vendors, etc.). This helps producers to involve business partners directly in the
planning processes, thus reducing the potential risk of work deviations (see
Figure 5).
In fact, progress from one level to the next is largely influenced by economic
status (e.g. budget, investment) and cost-profit calculations. Some limitations in
this regard can also be imposed by the growth focus of O&G producers.

3.2 Impact of Economic Limitations

Following the description above, IP level-1 is the basic, historical template for
planning in the O&G industry. When IP develops from level-1 to level-2, the cost
lies mostly in the adjustment of traditional work processes. The effort required for
an oil field to move to level-2, through the establishment of independent data-
bases, the organisation of multi-disciplinary workshops, common planning for-
mats, etc., is relatively modest.

Figure 6 Influence of Economic Status on Achievements in IP level

From level-2 to level-3, the high requirements for infrastructure and application
tools (e.g. OC establishment, advanced IT support, managing internal changes to
work routines) sharply increase the cost. There can be various other hidden costs
relating to fine-tuning and adjustment of the IP system, optimisation of the proc-
esses being implemented, and making IP a routine work process across the organ-
isation and its production assets. A major part of the effort required goes into
upgrading the IT infrastructure and work environment, incorporating many
advanced tools and support systems to optimise planning efficiency.
The development from level-3 to level-4 is decided by the scope of business
cooperation and the types of vendors involved. This needs further infrastructure
upgrades and tools for expanding communication and cooperation. The effort
required for IP implementation here is relatively moderate. Figure 6 shows the
impact of economic limits on the achievement of the desired IP level in the O&G
industry.
The budget for and investment in IP involve various costs, both direct and
indirect, including new infrastructure installation, exploitation tools, and human
resource development, and can be limited by a company’s current financial status.
As mentioned above, they can also be influenced by the company’s growth scope.
The returns on the investments need to show an acceptable positive margin within
a reasonable time period to meet the business benefits of IP.

3.3 Impact of Profit-Cost Assessment

As it appears, the profit-cost calculation is also a key criterion for IP develop-
ment. A mature, complex, large-scale, rich-reserve oil field with expected long-
term growth opportunities can be motivated by the major potential benefits of
developing IP: the work-related complexity in such a setting creates an immediate
need for IP. New and fairly small-reserve oil fields with short-term operating
contracts and limited growth opportunities, by contrast, may find that the situation
is not conducive to the development and implementation of IP on a large scale. In
such cases, efficiency improvements in work planning processes are weighed
against maximising production with limited budget consumption. Figure 7 illus-
trates the profit potential for the two cases.

Figure 7 Profit Potential for IP Varies from one Business Situation to Another
In Figure 7, line ‘AB’ represents the case of a small-reserve oil field with
limited growth opportunities, while line ‘CD’ represents that of a complex, rich-
reserve oil field with better growth opportunities. The difference in profit potential
arises because the impact of changed planning processes differs with the complex-
ity and scope of operations.
Furthermore, at least in the North Sea, developments within the O&G industry
related to IO have provided a common and effective basis for IP-type activities.
Even though economic status and profit-cost calculations have a large impact on
IP development and implementation, there are some other factors as well. These
are briefly presented in the next section.

4 Influence Factors for Integrated Planning

The implementation of Integrated Planning efforts is subject to influence from
other factors. These factors can be divided into three specific areas, namely
corporate business, integration, and systems development.

4.1 Influence Factors at the Corporate Business Level

Cost, time, quality and risk are among the key criteria for evaluating business
performance. The IP of all activities must satisfy the requirements of these criteria.

O&G production and exploration projects are characterised by large capital
investments and complex processes. As an optimising solution for O&G produc-
tion, IP is inevitably influenced by business scope, budget, profit, and related
strategy. Among the main factors are:
i. Scope of O&G production: The number of assets involved and the scale of
production.
ii. Company business strategies and policies: The business objectives and oppor-
tunities in the region.
iii. Growth opportunities: The business options to grow the activities.
iv. Life-extension: The production life of current producing assets.
v. Constraints from business cooperation: The types of business cooperation
available and the related needs of business partners.

4.2 Influence Factors at the Integration Level

IP development is not an independent process, but needs multi-disciplinary
support. Efficiency in any discipline may contribute to the optimisation of the IP
work processes. On the NCS, continuous development and research into IO started
a few years ago. Its focus is not only on IP, but also on the other components of
Integrated Operations (e.g. fibre-cable-based communication systems, logistics
optimisation). The research team consists of experts with knowledge of logistics,
IT, drilling, cost and budget, and so on. Integrated Planning takes into consid-
eration:
i. Organisational structure between IP and operational disciplines: This influ-
ences the efficiency of workshops, agreements and sign-off processes.
ii. Coupling of the independent logistics process: Logistics is an independent
process, and there are complexities in managing the logistics required by
production and other disciplines. An effective logistics process can thus
enhance the integration between material flow and offshore work-related
needs.
iii. Communication and understanding of IP scope and requirements: Principles
for communication should be established in multi-disciplinary workshops that
help all participants to clearly understand their roles and responsibilities.
iv. Performance measurement: An efficient measurement system is required to
reflect risk and to monitor work execution status against the integrated plans.
v. Authority and support from senior managers: For the organising of multi-
discipline workshops and quick, clear decision-making processes.

4.3 Influence Factors at the System Development Level

The IP level is limited by system capacity, involving both hardware functions
(infrastructure, IT) and software feasibility (communication techniques). Five
factors are involved:
i. Capacity of infrastructure: The realisation of complex communication and
monitoring between different geographical locations.
ii. Information Communication Technique (ICT): Improved Information Com-
munication Techniques enhance the capacity of infrastructure and realise co-
operation in field-wide IP.
iii. System Support: Different groups could provide support to IP by developing
tools (e.g. data delivery tools, data filtering tools) to accelerate the IP work
process.
iv. Method of communication: A convenient environment helps to ensure the
effectiveness of multi-disciplinary communication.
v. Competency of planners, workshop participants and users: Planners familiar
with project management and operations engineering control the complex IP
process, while experts contribute suggestions for optimising the IP work
processes.
As the description above indicates, it is not necessary to take the highest level
of integration as the final target. There is great diversity in field conditions. High-
level integration requires large investments, such as the cost of operating an OC
and of creating many tools for automatic data delivery between the different
disciplines and the IP database, which demands long-term planning and imple-
mentation with the IT group. So, for some oil fields, it makes no sense to establish
an elaborate and expensive IP. Finding the balance point between the profit earned
and the cost of IP implementation is a prerequisite for Integrated Planning devel-
opment.

5 Conclusion

Integrated Planning for the O&G industry in a remote-operation environment is a
large endeavour within a complex framework. Facing a variety of system applica-
tions, natural environments, platform conditions, and operational processes, it is
difficult to define an ideal template for integrated planning. Current planning
processes still cannot totally avoid deviations in implementation, which forces us
to identify the kind of integrated planning needed and to develop IP systems and
techniques based on the current situation. As this paper shows, planners need to
cooperate closely with operational disciplines to decide the goals of IP, and then
to develop detailed plans in order to find the potential capacity of each aspect of
the current plan.

References

[1] EIA (Energy Information Administration) (2008a) Short-term energy outlook.
http://www.eia.doe.gov/steo/pub/aug08.pdf
[2] EIA (Energy Information Administration) (2008b) Market trends.
http://www.eia.doe.gov/oiaf/aeo/pdf/trend_1.pdf
[3] Hart SM (2002) Norwegian workforce involvement in safety offshore: Regulatory frame-
work and participants’ perspectives. Employee Relat 24(5):496−498
[4] Midttun A, Dirdal T, Gautesen K, Omland T, Wenstoep S (2007) Integrating corporate
social responsibility and other strategic foci in a distributed production system: a transaction
cost perspective on the North Sea offshore petroleum industry. Corp Gov 7(2):194−197
[5] Jensen M (2001) Value maximization, stakeholder theory, and the corporate objective
function. JACF 14(3):8–22
[6] OLF (Oljeindustriens landsforening/Norwegian Oil Industry Association) (2003) eDrift for
norsk sokkel: Det tredje effektiviseringsspranget (eOperations in the Norwegian continental
shelf: The third efficiency leap). http://www.olf.no
[7] Zhang C, Orangi A, Bakshi A, Da Sie W, Prasanna VK (2006) Model-based framework for
oil production forecasting and optimization. SPE (Society of Petroleum Engineers).
www.spe.org, SPE 99979
[8] Liyanage JP, Herbert M, Harestad J (2006) Smart integrated e-operations for high-risk and
technologically complex assets: Operational networks and collaborative partnerships in the
digital environment. In: YC Wang, et al (Eds) Supply chain management: Issues in the new
era of collaboration and competition Idea Group, USA
[9] Liyanage JP, Langeland T (2009) Smart assets through digital capabilities. Information
Science and Technology (IST). Idea Group, USA. In press.
[10] OLF (Oljeindustriens landsforening/Norwegian Oil Industry Association) (2005) Integrated
work processes: Future work processes on the Norwegian Continental Shelf (NCS).
http://www.olf.no
[11] Truitt WB (2003) Business planning, A comprehensive framework and process. Quorum
Books, London
[12] Kayacan MC, Celik SA (2003) Process planning system for prismatic parts. J Manuf Tech
14(2):75–86
[13] Payne T (2008) Integrated business planning fills the gap between strategic planning and
S&OP. Gartner, Inc. http://www.gartner.com/DisplayDocument?id=681807&ref=g_sitelink
[14] Mourits M, Evers JJM (1996) Distribution network design: an integrated planning support
framework. LIM 9(1):45–54
[15] Dewhurst F, Barber KS, Rogers JJB (2001) Towards integrated manufacturing planning
with common tool and information sets. Int J Oper Prod Man 21(11):1460–1482
[16] Horton G, Dedigama T (2006) Drilling and petroleum engineering program and project
management at Santos Ltd. Society of Petroleum Engineers (SPE), www.spe.org. SPE
104062
[17] Hammer M, Champy J (1993) Reengineering the corporation: A manifesto for business
revolution. Nicholas Brealey Publishing, London
[18] Holmstroem J, Drejer A (1996) Re-engineering in sales and distribution-creating a flexible
and integrated operation. BPR 2(2):23–38
[19] Ormerod L, Sardoff H, Wllkinson J, Erlendson B, Cox B, Stephenson G (2007) Real-time
field surveillance and well services management in a large mature onshore field: Case study.
SPE (Society of Petroleum Engineers). www.spe.org. SPE 99949
[20] Rixse MG, Thorogood JL (2000) Building a system in a service company to assure techni-
cal integrity and institutionalize organizational learning. SPE (Society of Petroleum Engi-
neers). www.spe.org. SPE 62100.
[21] Colin A, Willett R, Lambrineas P (2011) Optimizing Budget Allocations in Naval Configu-
ration Management. EAMR 1(3):95–113
About the Editors

Joe Amadi-Echendu is a Professor of Engineering and Technology Management
at the University of Pretoria. Joe’s considerable experience is underpinned by his
doctoral research in digital signal processing, condition monitoring and diagnostic
engineering management of physical plants and processes. Joe has worked in indus-
try as a technician, engineer, project manager, systems analyst, managing consult-
ant and practice director, and was latterly involved in the implementation of “opera-
tional readiness” programmes for greenfield capital development in metals process-
ing and gas liquefaction projects. Professor Amadi-Echendu has published exten-
sively with numerous contributions to international conferences, journals and
books, and received a number of awards including the ISA England Section Distin-
guished Service Award. He is Editor-in-Chief of Engineering Asset Management
Review Series, a registered professional engineer, a member of the national IEC
committee as Chairman TC50 Standards South Africa, Founding Fellow and Board
Member of International Association for Engineering Asset Management, Found-
ing Director of Institute of Engineering, Technology and Innovation Management at
University of Port Harcourt, Visiting Fellow at University of Greenwich, and served
as the President of Southern African Maintenance Association from 2003 to 2005.
Kerry Brown is the Mulpha Chair in Tourism Asset Management and Director of the Centre for Tourism, Leisure and Work at Southern Cross University. Kerry is an editorial board member of the International Journal of Small Business and Globalization, the Journal of Organizational Change Management and the Journal of Management and Organisation. Professor Brown is an Executive Board Member of the International Society for Public Management, and an Executive Board Member and Founding Fellow of the International Society of Engineering Asset Management. She was recently awarded an Australia and New Zealand Academy of Management Research Fellowship (2009–2011). Her principal research areas are collaboration, networks and industry clusters; capability, strategy, management and policy for infrastructure and asset management; work-life balance and leisure; public sector management and policy; government-business relations; government-community relations; and employment relations.
J.E. Amadi-Echendu, K. Brown, R. Willett, J. Mathew (eds.), Asset Condition, Information Systems and Decision Models, Engineering Asset Management Review, DOI 10.1007/978-1-4471-2924-0, © Springer-Verlag London Limited 2012
Roger Willett is Professor and Head of the Department of Accountancy and Business Law at the University of Otago, New Zealand. Roger has held Chairs at the University of Wollongong (Dubai) and Queensland University of Technology, and positions at the ANU and the Universities of Wales and Aberdeen in the UK. Professor Willett is a member of the Institute of Chartered Accountants in England and Wales, and a past New Zealand President of the Accounting and Finance Association of Australia and New Zealand. He has published articles and books on statistical aspects of accounting measurement, international accounting, management accounting, auditing and other aspects of accounting. He is currently working on a number of projects relating to issues in the theory of accounting measurement, economic models, and asset return, risk and valuation measurement in organizations and markets.
Joseph Mathew is the Chief Executive Officer of the Cooperative Research Centre in Integrated Engineering Asset Management (CIEAM), located in Brisbane, Australia. He was previously Queensland University of Technology's Head of School of Mechanical, Manufacturing and Medical Engineering, and Monash University's Professor of Manufacturing and Industrial Engineering. He also served as Executive Director of Monash's Centre for Machine Condition Monitoring from 1993 to 1997. He has presented numerous invited lectures and addresses to professional societies and industrial organisations on engineering asset management, machine condition monitoring, and vibration and noise control. He serves as Chairman of the Board of the International Society of Engineering Asset Management (ISEAM), Chairman of the ISO subcommittee ISO/TC 108/SC 5 on Condition Monitoring and Diagnostics of Machines, and as General Chair of the World Congress on Engineering Asset Management (WCEAM).
