Asset Condition, Information Systems and Decision Models
Editors

Prof. Joe E. Amadi-Echendu
Graduate School of Technology Management
University of Pretoria
Pretoria 0002
South Africa

Prof. Kerry Brown
Southern Cross University
Tweed Heads NSW 2485
Australia
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be
reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of
the publishers, or in the case of reprographic reproduction in accordance with the terms of licences
issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms
should be sent to the publishers.
The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of
a specific statement, that such names are exempt from the relevant laws and regulations and therefore
free for general use.
The publisher makes no representation, express or implied, with regard to the accuracy of the
information contained in this book and cannot accept any legal responsibility or liability for any errors
or omissions that may be made.
Foreword
and equipment, including mobile assets. For example, rail companies must man-
age both plant and equipment, such as locomotives and carriages, and rail infra-
structure, such as tracks and bridges.
Many organizations utilize corporate enterprise resource planning (ERP) sys-
tems, which are gradually driving businesses to consider all types of assets in
a strategic and integrated way for effective decisions at the highest levels of gov-
ernance. The need to have an integrated view of EAM becomes imperative as a
result – representing the next big challenge for this field.
I trust that the selected papers in this and future EAM Reviews will continue to
add to our understanding and knowledge and assist in consolidating this integrated
and holistic systems-orientated view of our developing transdisciplinary field of
endeavour.
Approaches to Information Quality Management

P. Woodall, A.K. Parlikad and L. Lebrun
Abstract Maintaining good quality information is a difficult task, and many lead-
ing asset management (AM) organisations have difficulty planning and executing
successful information quality management (IQM) practices. The aims of this work
are, therefore, to understand how organisations approach IQM in the AM unit of
their organisation, to highlight general trends in IQM, and to provide guidance on
how organisations can improve IQM practices. Using the case study methodology,
the current level of IQM maturity was benchmarked for ten organisations in the
U.K. focussing on the AM unit of the organisation. By understanding how the most
mature organisations approach the task of IQM, specific guidelines for how organi-
sations with lower maturity levels can improve their IQM practices are presented.
Five critical success factors from the IQM-CMM maturity model were identified as
being significant for improving IQM maturity: information quality (IQ) manage-
ment team and project management, IQ requirements analysis, IQ requirements
management, information product visualisation and meta-information management.
__________________________________
P. Woodall
Institute for Manufacturing, Department of Engineering, University of Cambridge,
Cambridge, CB3 0FS, UK
e-mail: phil.woodall@eng.cam.ac.uk
A.K. Parlikad
Institute for Manufacturing, Department of Engineering, University of Cambridge,
Cambridge, CB3 0FS, UK
L. Lebrun
Institute for Manufacturing, Department of Engineering, University of Cambridge,
Cambridge, CB3 0FS, UK
1 Introduction
In this work, the term asset is used to describe physical engineering objects, and
examples of assets for the rail and utilities industries include trains, junction box-
es, rails, transformers, power cables and water pipelines. AM is defined as the
3 Information Quality
Different definitions have been used for IQ in the past 20 years [6], and currently, the most widely accepted definition of IQ is “fitness for use” [7, 8, 9, 10]. This definition expresses the fact that IQ is dependent on the context, and
Information Quality Management can be defined as “the function that leads the
organisation to improve information quality by implementing processes to meas-
ure, assess costs of, improve and control information quality, and by providing
guidelines, policies, and education for information quality improvement” [9], and
whose goal is to increase the organisation’s effectiveness by eliminating the costs
of poor information quality [16]. Some definitions incorporate knowledge man-
agement such as the work of Ge and Helfert [17], who defined three areas of re-
search for IQM: quality management, information management and knowledge
management. This work, however, excludes the complex area of knowledge man-
agement to focus on quality management and information management (Figure 2).
Moreover, no comprehensive framework has so far encompassed the three afore-
mentioned approaches to IQM [17], and it is still unclear exactly what IQM en-
compasses [18]. Note that another important area in IQM relates to the importance
A number of IQM maturity models have been developed with different levels of
complexity, methods of development and levels of usability (Table 1). The Infor-
mation Quality Management Capability Maturity Model (IQM-CMM) was devel-
oped and validated with AM organisations and is, therefore, ideally suited to the
focus of this study. Moreover, it also has a usable and extensive set of process
areas (PAs) and CSFs which can be used as appraisal criteria for determining the
level of maturity. These CSFs are defined for each of the maturity levels in the
IQM-CMM model (optimising, managed, measuring, reactive and chaotic).
A high-level view of the model is shown in Figure 3, which illustrates the ma-
turity levels with brief descriptions of the characteristics of each level. For each
maturity level, PAs are defined, and these contain a set of CSFs. The mapping of
all PAs to CSFs is shown in the results section in Table 3. Details of the meaning
of the CSFs can be found in [2]. The aim of a maturity assessment using this
model is therefore to determine the extent to which each CSF is satisfied within an
organisation. The results for each CSF are then aggregated to determine the extent
to which each PA is satisfied and then aggregated once again to determine whether
a maturity level is satisfied.
4 Assessment Process
The case study methodology was used to assess how organisations approach
IQM in the AM unit of their organisation. Case studies are ideal in the following
circumstances [25]:
1. The focus of the study is to answer ‘how’ or ‘why’ questions.
2. Study participants’ behaviour cannot be manipulated.
3. Contextual issues need to be addressed.
4. Boundaries between phenomena and their context are not clear.
Each of these is relevant to the characteristics of this study. The question for
this work (‘how do organisations approach IQM in the AM unit of their organisa-
tion?’) is a ‘how’-style question and therefore meets the first requirement. In
terms of manipulating the behaviour of the people involved with improving IQM,
while it may be possible to influence what will be done, it is not possible to influ-
ence what has been done to reach the current state of IQM maturity. We also
assert that IQM improvement in the AM unit of organisations must be related to
the context because IQM improvement will depend on details such as the strate-
gic direction of the organisation, the type of assets owned by the organisation
(and hence the type of data/information required), and the type of regulations
imposed on the organisation. Finally, the boundaries between the contextual de-
tails and IQM improvement are not clear because of the number of different con-
textual details and the current lack of understanding of the linkage between con-
textual details and IQM improvement.
Table 2 Business Sectors and Roles of the Interview Respondents for Each Organisation
To ensure suitable respondents were selected, a sample set of questions from the
interview was sent to each organisation prior to each interview. Each interview was
conducted either over the telephone (8 cases) or face-to-face (2 cases), and recorded
with the help of a Dictaphone. Notes were also taken by the interviewer during the
interview. The details of the full interview protocol are available on request from the
authors. Most organisations had respondents who were asset information specialists; only one organisation, case G, had a dedicated IQ manager (see Table 2). Cases F and H did not have information specialists, and cases D and I had IT specialists. For the two facility management organisations, the lack of dedicated IQM positions was due to resource constraints and business priorities.
To place each organisation on a particular maturity level, the answers to the 31 ma-
turity interview questions were used to determine the extent to which each CSF was
satisfied. The level of satisfaction was measured using an ordinal scale (not satisfied,
partially satisfied and fully satisfied). The actual levels of satisfaction for each CSF for the ten organisations (labelled A to J) are shown in Table 3, where ‘–’
represents not satisfied, ‘P’ partially satisfied and ‘S’ fully satisfied. The table also
shows the maturity level, process areas for each maturity level and the groups of
CSFs belonging to each process area. Note that maturity level 1 is not shown in Ta-
ble 3 because it is always satisfied. The final two columns show the frequencies of
partially satisfied (cP) and fully satisfied (cF) across all the organisations.
The processes and systems being analysed were complex, and determining whether they met the CSFs was not feasible beyond the scale used. Unfortunately, ‘partially satisfied’ cannot be interpreted simply as 50 % because in some cases it represented less than 50 % and in other cases more than 50 %. This means that the intervals between the categories are not
always equal. Therefore, calculating aggregate measures, such as the mean, using
these values for a set of CSFs would violate the restrictions imposed by ordinal
scales [26]. The following measures were therefore developed to aggregate the
values for the CSFs in Table 3 into maturity levels which could then be used to
determine the extent to which an organisation had satisfied each maturity level.
• F = Number of CSFs fully satisfied / Number of CSFs
• FP = Number of CSFs fully satisfied or partially satisfied / Number of CSFs
Table 4 shows the final maturity level of each organisation, and the values of ‘F’
and ‘FP’ for each maturity level are shown as percentages. For example, for organi-
sation A no CSFs were fully satisfied for maturity level 4, but 3 out of 13 CSFs were
fully or partially satisfied for maturity level 4, which is shown as 23 % in the FP
column for organisation A. A maturity level was deemed satisfied when F > 50 % and FP > 80 %; the final maturity levels of the organisations are shown in the bottom row of Table 4.
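For readers who want to reproduce the roll-up, the following Python sketch implements the F and FP measures and the F > 50 %, FP > 80 % rule. It is not from the paper: the CSF values are hypothetical, and the assumption that a level counts only if all lower levels are also satisfied reflects the staged structure of IQM-CMM rather than an explicit formula given by the authors.

```python
# A minimal sketch (not from the paper) of the maturity roll-up described above.
# CSF results use '-' (not satisfied), 'P' (partially) and 'S' (fully satisfied).

# Hypothetical CSF values for one organisation, keyed by maturity level.
csf_results = {
    5: ['-', '-', '-', '-'],
    4: ['P', '-', '-', '-', 'P', '-'],
    3: ['P', 'P', 'S', 'P', 'S', 'P', 'S', 'P', 'P'],
    2: ['S'] * 15,
}

def level_scores(values):
    """Return (F, FP) for one maturity level, as percentages."""
    f = 100.0 * values.count('S') / len(values)
    fp = 100.0 * (values.count('S') + values.count('P')) / len(values)
    return f, fp

def final_maturity(results):
    """Highest level whose own and all lower levels satisfy F > 50 and FP > 80.
    Level 1 ('chaotic') is treated as always satisfied, as noted for Table 3."""
    level = 1
    for lvl in sorted(results):  # walk upwards from level 2
        f, fp = level_scores(results[lvl])
        if f > 50 and fp > 80:
            level = lvl
        else:
            break
    return level

for lvl in sorted(csf_results, reverse=True):
    f, fp = level_scores(csf_results[lvl])
    print(f"Level {lvl}: F = {f:.0f} %, FP = {fp:.0f} %")
print("Final maturity level:", final_maturity(csf_results))
```

For these illustrative values the sketch reports a final maturity level of 2, since level 3 falls below the F threshold; applying the same roll-up to a column of Table 3 should yield that organisation's row in Table 4.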
Table 3 CSFs Satisfied by the Organisations (– = Not Satisfied, P = Partially Satisfied, S = Fully Satisfied)

Level  Process Area                     CSF                                                  A B C D E F G H I J  cP cF
5      IQ Firewall                      IQ Firewall                                          – – – – – – – – – –   0  0
5      IQ Management Performance        IQ Management Metrics                                – – – – – – – – – –   0  0
       Monitoring                       Analysis and Reporting                               – – – – – – – – – –   0  0
                                        IQ Management Benchmarking                           – – P – – – P – – –   2  0
4      Continuous IQ Improvement        IQ Problem Root-Cause Analysis                       – P S – – – – – – –   1  1
                                        IQ Risk Management and Impact Assessment             P – – P – – S P P –   4  1
                                        IQ Management Cost-Benefit Analysis                  – – S – P – – S – –   1  2
                                        Business Process Reengineering for IQ Improvements   – – S P – – – P – –   2  1
4      Enterprise Information           Enterprise Tier Management                           P P S P P P S S P P   7  3
       Architecture Management          Information Tier Management                          – P P – – – P – P P   5  0
3      IQ Needs Analysis                Requirements Elicitation                             P P S P P P P S P P   8  2
                                        Requirements Analysis                                – P S – – – S P – –   2  2
                                        Requirements Management                              – – S – – – S P – –   1  2
3      Information Product Management   Information Supply Chain Management                  – P S P – – S S P P   4  3
                                        Information Product Configuration Management         – S S S S – S S S S   0  8
                                        Information Product Taxonomy                         P S S S P P S S P P   5  5
                                        Information Product Visualisation                    P P S P P P S P P P   8  2
                                        Derived Information Products Management              S P S – P – – S – –   2  3
                                        Meta-information Management                          – P S – P – S P – –   3  2
2      Information Security Management  Security Classification of Information Products      S S S S S S S S S P   1  9
                                        Secure Transmission of Sensitive Information         S S S S S S S S S S   0 10
                                        Sensitive Information Disposal Management            S S S S S S S S S S   0 10
2      Access Control Management        Authentication                                       S S S S S S S S S S   0 10
                                        Authorisation                                        S S S S S S S S S S   0 10
                                        Audit Trail                                          S S S P S – P P S S   3  6
2      Information Storage Management   Physical Storage                                     S S S S S S S S S S   0 10
                                        Backup and Recovery                                  S S S S S S S S S S   0 10
                                        Archival and Retrieval                               S S S S S S S S S S   0 10
                                        Information Destruction                              S S S S S S S S S S   0 10
2      Information Needs Analysis       Stakeholder Management                               S S S S S S S S S P   1  9
                                        Conceptual Modelling                                 S S S S S S S S P P   2  8
                                        Logical Modelling                                    S S S S S S S S S P   1  9
                                        Physical Modelling                                   S S S S S S S S S P   1  9
Table 4 Final Maturity Level of Each Organisation with Percentage Values of F and FP for Each Maturity Level

                         A       B       C       D       E       F       G       H       I       J
Maturity Level           F  FP   F  FP   F  FP   F  FP   F  FP   F  FP   F  FP   F  FP   F  FP   F  FP
5 – Optimising           0   0   0   0   0  25   0   0   0   0   0   0   0  25   0   0   0   0   0   0
4 – Managed              0  23   8  62  54  92   0  46   0  46   0  15  15  54  15  46   8  46   0  23
3 – Measuring            7  33  13  67  73 100  20  47   7  53   0  20  53  87  33  73   7  47  13  47
2 – Reactive           100 100 100 100 100 100  93 100 100 100  93  93  93 100  93 100  93 100  64 100
1 – Chaotic            100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
Final Maturity Level     2       2       4       2       2       2       3       2       2       2
Figure 4 illustrates the aggregated (for all organisations) level of satisfaction for
each CSF. The actual values (cP and cF) for this figure are shown in the rightmost
columns of Table 3; in the figure, these values are represented as percentages.

Figure 4 Percentage of organisations that partially satisfied (cP %) and fully satisfied (cF %) each CSF, grouped by maturity level

For example, all organisations (100 %) fully satisfied the ‘information destruction’ CSF,
whereas 80 % of organisations partially satisfied and 20 % fully satisfied the ‘re-
quirements elicitation’ CSF; all organisations therefore attempted the ‘require-
ments elicitation’ CSF.
The bulk of the maturity level 2 CSFs (on the left of Figure 4) were fully satis-
fied by all organisations, whereas for level 3 and above, fewer CSFs were fully
satisfied and more partially satisfied or not satisfied. Three CSFs were not attempted by any organisation surveyed. These are all in maturity level 5 and
include the IQ firewall, IQ management metrics, and analysis and reporting (IQ
management performance monitoring). High-level organisations looking to under-
take new IQM activities can attempt to implement these practices.
The higher-level CSFs (level 3 and above), which were attempted by 70 % or
more of the organisations, include the following factors (see the two groups of
values in levels 3 and 4 in Figure 5):
• IP visualisation;
• IP taxonomy;
• IP configuration management;
• information supply chain management;
• enterprise tier management;
• application tier management;
• physical tier management;
• requirements elicitation.
Except for requirements elicitation, these CSFs fall into two categories defined
by the IQM-CMM model: Information Product Management and Enterprise In-
formation Architecture Management. Most organisations had partially satisfied
the IP visualisation CSF, which requires that the same information in multiple
systems be represented consistently to the user. This is because the systems used
by the asset managers contain ‘default’ forms which were designed with the sys-
tem. However, to fully satisfy this CSF requires that different systems have a
consistent look and feel for a given information product. Clearly, this is much
harder to achieve, and only the higher-level organisations have achieved this to a
certain degree. The IP taxonomy CSF concerns organising information products
into a hierarchical structure as well as identifying relationships between informa-
tion products, including aggregations, compositions and associations. IP configu-
ration management processes ensure that any changes to information are recorded
and can be rolled back. This process is managed by change requests, which are
initiated, reviewed, approved and tracked to closure. Formal audits are regularly
performed to assess compliance with the configuration management plan. The
implementation of these processes within the organisations was largely success-
ful. Information supply chain management refers to the fact that both internal and
external information suppliers have been identified and documented. Furthermore,
information flows have also been documented, and communication between in-
formation suppliers and users has been established with suitable agreements
in place.
All organisations expend significant effort on the development and use of their
information systems, and, hence, the CSFs related to enterprise information archi-
tecture feature prominently in Figure 5, despite being at a higher maturity level (4) than most organisations have reached. Enterprise tier management is
about maximising information integration and interoperability, and organisations
that have satisfied this have developed and documented their information architec-
ture. Most organisations have some level of information integration.

Figure 5 Percentage of organisations that partially satisfied (cP %) and fully satisfied (cF %) each CSF at maturity levels 3 and above
Five CSFs were fully satisfied by the highest maturity level organisations which
were not fully satisfied by any of the lower-level (level 2) organisations. The
higher-level organisations therefore demonstrated the feasibility of fully implementing these CSFs and attaining higher maturity levels (level 3 for case G and level 4
for case C). These five CSFs (Table 5) are therefore ideal candidates for level 2
organisations to focus on to improve their IQM practices.
The ‘IQ management team and project management’ CSF requires the formal
management of all IQM practices. This includes allocating the key roles for a
project, determining the scope of the work required, project deliverables, busi-
ness/technical aspects of the project, and estimating project costs and benefits [2].
In the process area of ‘IQ needs analysis’, the CSFs ‘requirements analysis’ and ‘requirements management’ received very little attention from the lower maturity organisations.
Table 5 Key CSFs for Improving IQM Practices for Organisations in Maturity Level 2

                                                                                    High maturity   Level 2
Process Area                              CSF                                        C  G           A  D  E  F  B  H  I  J
IQ Management Roles and Responsibilities  IQ Management Team and Project Management  S  S           P  –  P  –  P  –  –  P
IQ Needs Analysis                         Requirements Analysis                      S  S           –  –  –  –  P  P  –  –
IQ Needs Analysis                         Requirements Management                    S  S           –  –  –  –  –  P  –  –
Information Product Management            Information Product Visualisation          S  S           P  P  P  P  P  P  P  P
Information Product Management            Meta-information Management                S  S           –  –  P  –  P  P  –  –
7 Conclusion
The IQM maturity of the AM unit of ten organisations was benchmarked to de-
termine how the organisations approached IQM. Most of the organisations found
it a challenge to improve IQM and needed guidance on how to advance from their
current level of maturity. No organisation is currently at the top level of the matur-
ity model, and so there is room for improvement in all the organisations surveyed.
An analysis of how the CSFs in the IQM-CMM maturity model were satisfied
showed that five CSFs were fully satisfied by the two higher maturity level or-
ganisations, and these were never fully satisfied by any of the lower maturity or-
ganisations. It is recommended, therefore, that the lower maturity organisations
focus on these five CSFs to quickly improve their IQM practices. These five CSFs
concern IQ management team and project management, requirements analysis,
requirements management, information product visualisation, and meta-infor-
mation management. Further work is required to understand the order in which
organisations should implement the CSFs in the IQM-CMM maturity model to
improve their IQM practices and move up in the hierarchy of maturity levels.
Acknowledgments We would like to thank all the respondents for committing the time and
effort to take part in this study; their help is very much appreciated. We also thank Andy
Koronios and Jing Gao for assistance with the IQM-CMM maturity model, Alex Borek for help with proofreading this work, and EPSRC for supporting this research.
References
[1] Gao J, Baškarada S, Koronios A (2006) Agile maturity model approach to assessing and
enhancing the quality of asset information in engineering asset management information
systems. In: Proceedings of the 9th international conference on business information sys-
tems (BIS 2006), 31 May–2 June 2006, Klagenfurt, Austria, pp. 486–500.
[2] Baškarada S (2008) IQM-CMM: information quality management capability maturity
model. PhD thesis, University of South Australia, Adelaide, South Australia.
[3] British Standards Institution (2004) PAS 55-1: asset management. British Standards Institution, London.
[4] Ouertani MZ, Parlikad AK, McFarlane DC (2008) Towards an approach to select an asset
information management strategy. Int J Comput Sci Appl 5:25–44.
[5] Baškarada S, Koronios A, Gao J (2006) Towards a capability maturity model for informa-
tion quality management: a TDQM approach. In: Proceedings of the 11th international con-
ference on information quality (ICIQ-06), Cambridge, MA, 10–12 November 2006.
[6] Eppler MJ (2000) Conceptualizing information quality: a review of information quality
frameworks from the last ten years. In: Proceedings of the 5th international conference on
information quality, Cambridge, MA, pp. 83–96.
[7] Juran JM (1974) Quality control handbook. McGraw-Hill, New York.
[8] Wang R, Strong D (1996) Beyond accuracy: what data quality means to data consumers. J
Manage Inf Syst 12:5–34.
[9] Strong D, Lee YW, Wang R (1997) 10 potholes in the road to information quality. IEEE
Comput 30:38–46.
[10] Lin S, Gao J, Koronios A (2006) Key data quality issues for enterprise asset management in
engineering organisations. Int J Electron Bus 4:96–110.
[11] English L (1999) Improving Data warehouse and business information quality: methods for
reducing costs and increasing profits. Wiley, New York.
[12] Kahn B, Strong D, Wang R (2002) Information quality benchmarks: product and service
performance. Commun ACM 45:184–192.
[13] Al-Hakim L (2007) Information quality management: theory and applications. IGI Global,
Hershey, PA.
[14] Redman T (1996) Why care about data quality? In: Data Quality for the Information Age.
Artech House, Boston.
[15] Batini C, Cappiello C, Francalanci C, Maurino A (2009) Methodologies for Data Quality
Assessment and Improvement. ACM Comput Surv 41:1–52.
[16] English L (2002) The essentials of information quality management. Information Manage-
ment Magazine, 1 September 2002.
http://www.information-management.com/issues/20020901/5690-1.html
[17] Ge M, Helfert M (2007) A review of information quality research. In: Proceedings of the
12th international conference on information quality, 9–11 November 2007, Cambridge,
MA.
[18] Levis M, Helfert M, Brady M (2007) Information quality management: review of an evolv-
ing research area. In: Proceedings of the 12th international conference on information qual-
ity, 9–11 November 2007, Cambridge, MA.
[19] Ruževičius J, Gedminaitė A (2007) Business information quality and its assessment. Eng
Econ 2:18–25.
[20] DataFlux (2008) The Data Governance Maturity Model.
http://www.dataflux.com/DataFlux-Approach/Data-Governance-Maturity-Model.aspx
[21] Ryu K, Park J, Park J (2006) A data quality management maturity model. ETRI J
28:191−204.
[22] Institute of Asset Management (2009) Asset information guidelines – guidelines for the
management of asset information. Woodlands Grange, UK.
[23] Délez T, Hostettler D (2006) Information quality: a business-led approach. In: Proceedings
of the 11th international conference on information quality, Cambridge, MA, 10–12 No-
vember 2006.
[24] Caballero I, Caro A, Calero C, Piattini M (2008) IQM3: information quality management
maturity model. J Universal Comput Sci 14:3658–3685.
[25] Baxter P, Jack S (2008) Qualitative case study methodology: study design and implementa-
tion for novice researchers. Qual Rep 13:544–559.
[26] Fowler FJ (1993) Survey research methods, 2nd edn. Sage, Thousand Oaks, CA.
Information Systems Implementation
for Asset Management:
A Theoretical Perspective
Abrar Haider
__________________________________
Abrar Haider
School of Computer and Information Science, University of South Australia,
Mawson Lakes Campus, Mawson Lakes, South Australia 5095, Australia
1 Introduction
clude SCADA systems, CMMS and enterprise asset management systems. These
systems further provide inputs to maintenance planning and execution. However,
maintenance requires not only effective planning but also availability of spares,
maintenance expertise, work order generation and other financial and non-finan-
cial supports. This necessitates the integration of technical, administrative and
operational information of the asset lifecycle such that timely, informed and cost-
effective choices can be made about the maintenance of an asset. For example, a
typical water pump station in Australia is located far from major infrastructure and
has rather long pipeline assets that bring water from the source to the various des-
tinations. The demand for water exists 24 hours a day, 7 days a week. Although
the station may have an early warning system installed, maintenance labour at the
water stations and along the pipeline is limited and spare inventory is generally not
held at water stations. Therefore, it is important to continuously monitor asset
operation (which in this case constitutes equipment at the water station as well as
the pipeline) to sense asset failures as soon as possible. However, early-fault de-
tection is of little use if it is not backed up with the ready availability of excess
capacity and maintenance expertise. The expectations placed on a water station by
its stakeholders concern not just continuous availability of operational assets but
also the efficiency and reliability of support processes. IT systems or ISs therefore
need to enable maintenance workflow execution as well as decision support by
enabling information manipulation on such factors as asset failure and wear patterns; maintenance work plan generation; maintenance scheduling and follow-up actions; asset shutdown scheduling; maintenance simulation; spare water acquisition; testing after servicing/repair treatment; identification of asset design weaknesses; and asset operation cost-benefit analysis. An important measure of the
effectiveness of ITs, therefore, is the level of integration which they provide in
bringing together different functions of asset lifecycle management, as well as
stakeholders, such as business partners, customers, and regulatory agencies like
environmental and government organisations.
The lack of convergence between IT and OT is a major issue that has technical,
management and organisational dimensions. The root cause of this issue, however,
is the fact that IT and OT are managed and owned by different departments within
an organisation [21]. IT is generally governed by an IT department, whereas OT is
controlled by the department within which it is deployed. IT is thus managed by
an IT department and OT is managed by engineers. The absence of a common set
of rules to govern the implementation and use of OT and IT leads to the formation
of islands of isolated technologies within the organisation, which makes integra-
tion and interoperability of technologies cumbersome if not impossible. With
limited or no integration, there is poor leverage of learnings and benefits, and
decision support is unintelligible. Management of IT and OT by different functions is cost and effort intensive, as this multiplicity of strategies for managing technologies (which are essentially of the same stock) cannot connect properly with the business strategy and operational plans [22]. At the same time, this multiplicity
also results in a lack of accountability around standardisation of technology and
practice and policy enforcement.
The issues discussed here regarding IS implementation for asset lifecycle man-
agement are diverse. These issues have technical, human and organisational di-
mensions and significant consequences for business development. IS implementa-
tion should, therefore, not be treated as a support activity in the value chain of
asset management. It should be pursued proactively and aim to continuously align
technology with the organisational structure and infrastructure, process design and
strategic business considerations so as to realise the soft and hard benefits associ-
ated with the use of these systems. Thus when ISs are physically adopted and
socially and organisationally consistent, there will be consensus on what the tech-
nology is supposed to accomplish and how it is to be utilised. These systems
would then provide a learning platform to facilitate organisational evolution and
maturity where they act as business enablers and strategic translators.
IS institutionalisation is strongly underpinned by the political, economic and
cultural context of the organisations, which bring together individuals and groups
with particular interests and interpretations and help them in creating and sustain-
ing ISs as socio-technical systems [42]. The relationship between ISs and the
context of their implementation has been the focus of many research initiatives
such as the connection between planning sophistication and IS success [43], expe-
diency of strategic IS planning [44], differences between IS capabilities and man-
agement perceptions [45], impact of inter-organisational behaviour and organisa-
tional context on the success of IS planning [46] and identification of key
dimensions of IS planning and the systems’ effectiveness [47].
IS implementation planning is an intricate task with a complex mix of activities
[48]. It is a continuous process aimed at harmonising the objectives of ISs, defin-
ing strategies to achieve these objectives and establishing plans to implement these
strategies [49]. However, as IT environments in general and IS applications in
view; whereas the organisation technology and medium metaphors are debatable
and can conform to either view.
A review of the literature on IS adoption reveals that researchers have at-
tempted to address implementation of these systems from a variety of different
perspectives. At the same time, it also reveals that the value profile which organi-
sations attach to IS implementation spans from simple process automation to
providing decision support for strategic competitiveness. An in-depth literature
review of IS implementation and adoption from 2000 to 2007 was carried out for
this research (Appendix 2). This literature review identifies different theoretical
perspectives which originated from diversified fields of knowledge such as busi-
ness management, organisational behaviour, computer science, mathematics,
engineering, sociology and cognitive sciences. These theories can be classified
into three broad categories: technological determinism (such as information proc-
essing, task-technology fit and agency theory); socio-technical interactions (such
as actor network theory, socio-technical theory, and contingency theory) and
organisational imperatives (such as strategic competitiveness, resource-based
view theory and dynamic capabilities theory).
Technological determinism theories adopt a mechanistic view of organisations
where technology is applied to bring about predicted or desired effects. Socio-
technical theories are focused on the interaction of technology with the social and
cultural context of the organisation to produce desired results. Organisational
imperative theories focus on the relationships between the environment in which
the business operates, business strategies and strategic orientation, and the tech-
nology management strategies to produce desired results in the organisation. The
following sections discuss these perspectives in detail and examine their role in
effective implementation of ISs for engineering asset management.
the speed with which technology updates itself renders these strategic considera-
tions obsolete. Consequently, by the time strategy is fully implemented, the pri-
mary principles adopted and assumptions made about the business are outdated,
and this approach ends up strategising for the past and not for the future.
These three theoretical perspectives encompass the existing principles em-
ployed to implement technologies within business organisations. All have their
own limitations and benefits and are further dependent on a variety of intra- or
extra-organisational factors for their success. However, for implementation of ISs
for asset management, none of these theoretical perspectives could be considered
all-encompassing or all-inclusive. Theoretically, a hybrid approach which draws
on all three of these perspectives seems most appropriate for IS implementation
for asset management. The following sections describe how ISs must be imple-
mented to align strategic asset management considerations with technology, so as
to respond to external and internal challenges.
In asset management, ISs are not just business automation tools. Among the most
significant contributions of these systems are that they translate strategic objec-
tives into action and inform asset and business strategy through value-added deci-
sion support. However, the fundamental building block to enable such a value
profile is the quality of the alignment of strategic business objectives with the
physical, social and technical context of the organisation such as policies, internal
structures, systems and relationships which support business execution [76]. These
contexts and their mutual interaction help organisational maturity by shaping col-
laboration, empowerment, adaptability and learning in the organisation [77]. The
mutual interaction of these contexts depends on three critical aspects: firstly, the
design of the organisation, i.e. the organisation’s structure and functions, and the
reporting relationships that give shape to this structure; secondly, the business
processes and related information flows; and thirdly, the skills and competencies
required to execute business and operate enabling technologies, i.e. job design and
training, sourcing and management of human resources [78]. The concept of align-
ing strategic business objectives with the physical, social and technical context of
an organisation illustrates that IS implementation should be aimed at binding these
contexts together so that they contribute to the strategic advantage of the business
[79, 80]. As a result, institutionalisation of these systems contributes to the matur-
ity of these contexts and increases organisational responsiveness to internal and
external challenges [81].
Each implementation of an IS is unique, and it is not possible to follow particu-
lar theories (e.g. technological determinism, socio-technical alignment, organisa-
tional imperatives) regarding implementation in letter and spirit. For example, ISs
for asset management include operational technologies like sensors and other
tors which govern the domain, whereas imperatives illustrate the key aspects which need to be taken into account to manage the domain. This framework pro-
vides guidelines for strategic management of IT and ISs and their integration.
Earl [80] argues that the organisation must have answers to some fundamental
questions to align the four domains. Although the framework does not answer
these questions, it formalises them into the strategic agenda of the organisation
and points to the processes through which these questions are raised and answered
regularly. These questions could be as follows:
a. What IS and IT applications should the organisation develop to improve the
competitiveness of its business strategies?
b. What technological opportunities should the organisation consider to enhance
the efficiency and quality of its business processes?
c. Which IT platforms should the organisation be developing, and what plan and
policies are required to do that?
d. What IT capabilities should the organisation develop, and how may these be
acquired?
e. How should the IS activities be organised and what is the role of ISs?
f. How should IS/information technologies be governed and what kind of mana-
gerial profile best serves these needs?
The framework has an organisational strategy domain at its core and sug-
gests its two components as being the organisational intent interpreted through
strategic choices and the organisational context shaped by the organisational in-
frastructure and culture. The components and imperatives of an organisation’s
strategy need to be accounted for while formulating IS strategy. The organisa-
tional context and business intent are subjective, and therefore the process with
which they feed into information strategy is not always clear or formalised. Earl
[80] terms the understanding of these strategic considerations which influence the information strategy domain the ‘clarification process’ and argues that familiarity
with strategic business intent and the organisational context is essential for IS
implementation and management. IS strategy is, thus, developed in response to
this process of clarification. The two key components comprising IS strategy
domain are ‘alignment’ and ‘opportunity’. Alignment is based on the clarification
process and calls for aligning IS implementation with business intent, goals and
context. The aim of alignment is to keep IS implementation aligned with business
orientation through strategic business units by employing methodologies such as
critical success factors or through steering committees [82]. The opportunity
component seeks to seize opportunities for organisational growth and maturity
through creative use of technology by actively looking out for technology-centric
business improvement enablers and thus contributing to the ‘innovation process’.
The IS strategy domain influences other domains through this innovation process,
for instance, the promise of translating or informing organisational strategy with
ISs is much greater than making structural adjustments. At the same time, the IS
strategy domain prompts changes to information management when reconfigura-
tion of the functionality of these systems necessitates business process reengineering.

Figure: Strategic alignment domains – business scope, distinctive competencies and business governance; technology scope, systematic competencies and I/T governance (external); administrative infrastructure, processes and skills; architectures, processes and skills (internal)
IS implementation and its alignment with the organisational social and cultural
environment, structure, infrastructure and strategy do not follow a mechanistic
pattern and require time to take shape and deliver expected results. It is a process
which is socially and technically engendered in the organisation and, therefore,
requires a maturity of interacting actors and infrastructure to provide an appro-
priate level of alignment. Using available IS theories along with the lessons
learnt from the alignment theories discussed in previous sections, this section
attempts to develop an alternative approach to IS implementation and its align-
ment with the technical, organisational and social contexts of the organisation.
An IS-based engineering asset management alignment framework is illustrated
in Figure 4.
This framework treats alignment as a process which is technically and socially
composed and embedded in the organisation; in addition, it highlights the role of
information in shaping alignment. Proponents of contingency theory [83, 84]
suggest that the performance of an entity is contingent upon various internal and
external constraints. These theorists highlight four important points: (1) there is
no one best way to manage an organisation, (2) the subsystems of an organisation
need to be aligned with each other and with the overall organisation, (3) success-
ful organisations are able to extend this alignment to the organisational environ-
ment, and (4) organisational design and management must satisfy the nature and
needs of the task and work groups. Contingency theory stresses the multivariate
nature of organisations and, along with systems theory, assists in understanding
the interrelationships within and among subsystems of an organisation [85]. The
framework applies systems theory [86], and instead of considering an organisa-
tion’s or its constituent domains’ properties alone, it builds upon the relationships
and understanding of the domains which collectively provide for the IS alignment
within and with the organisation. This framework embodies these relationships
and applies the theory of dynamic capabilities to address the changing nature of
the asset management business environment by stressing integration, building and
reconfiguration of competencies to address the changing business environment
[87, 88].
The framework takes a resource-based view and proposes four domains: strate-
gic orientation, operational orientation, IS design and organisational design. Ana-
logous to Henderson and Venkatraman’s model, it argues that the strategic orien-
tation of the asset-managing organisation is defined through the interaction of
business scope, unique competencies and business governance choices. The opera-
tional orientation of asset management is derived from this strategic orientation.
The framework seeks to develop alignment based on goals of asset lifecycle man-
agement processes with the organisation’s overall objectives. This means that
asset lifecycle management processes conform to the strategic asset management
orientation.

Figure 4 IS-based engineering asset management alignment framework (goals alignment; business scope renewal; renewal cycle; risk, quality, lifecycle accounting, and supply and logistics management; standardisation of technology; business needs definition; competency management; business responsiveness development; collaborative culture and structure development; data acquisition and technology support infrastructure)

The asset lifecycle management domain is strategically aligned with
the organisational design domain in the sense that not only do the organisational
and social contexts conform to asset lifecycle management objectives but they also
contribute to the responsiveness of the organisation, and in so doing help asset
lifecycle management processes to adapt to changes in the internal and external
business environment.
In this framework, the information requirements of asset lifecycle processes
drive IS design. The framework treats operational and information technologies within a single domain, IS design. Thus, the alignment sought between the operational orientation of asset management and IS design aims at functional integration of the asset lifecycle. To ensure information integration and quality, the IS design do-
main takes a bottom-up approach and stresses standardised data acquisition and
technology support infrastructure, which facilitates information integration and
communication and consequently allows for information storage in a way that
makes information accessible and available throughout the organisation. This
helps with information and knowledge management and functional integration.
The analysis layer refers to both the analysis to evaluate if the existing standard
of information and information systems meets the process and organisational
objectives (hence the strategic alignment between the IS design domain and stra-
tegic orientation and operational orientation domains) and to the level of decision
support which is required at various stages of an asset’s lifecycle. The quality of
the asset lifecycle management processes strongly depends upon the quality of
information, and information quality itself is a measure of how effectively the ISs
cater for the information needs of the business processes. The analysis layer,
therefore, also measures the integration between ISs and business processes.
However, technologies, whether information or operational, are passive entities.
Their use and institutionalisation are not mechanistic processes and rely on the
culture, structure and human actors in an organisation. Therefore, the framework
proposes contextual alignment between IS design and organisation design
domains.
Organisational design takes time to develop, and its alignment with the IS is
also subject to the same time constraints. Therefore, the organisation design do-
main stresses the ‘development’ of a collaborative culture and structure as the
fundamental element of organisational design. This foundation provides the build-
ing block for developing an organisational infrastructure (internal structures,
policies and procedures put in place to support the strategic orientation of the
business), which shapes formal and informal relationships and drives human
resources management and skills development. Thus, organisational design pro-
vides for the development of core competencies which aid in utilising information
and operational technologies as well as executing asset management processes for
the advantage of the organisation through alignment based on organisational in-
tent (i.e. organisational vision, mission and objectives). In doing so, the social and
organisational contexts contribute to strategic orientation and are themselves
shaped in line with the strategic orientation. The organisational design domain thereby improves the responsiveness of the organisation to changes in the business environment. At the same time, since
the organisational design domain is strategically aligned with the operational
orientation domain, it accounts for the objectives of the overall business as well
as the asset lifecycle demands and goals. It thus provides the context within
which the ISs are employed, shaped and institutionalised. The context of the or-
ganisation is subject to change due to internal and external forces; therefore, the
framework suggests context-based dynamic alignment between the IS design and
organisational design domains.
This framework treats information as the key enabler of asset management and
emphasises that IS implementation is not merely a managerial process or activity. Rather, it is a social process continuously aimed at aligning IS capabilities with business objectives and requirements. The frame-
work also highlights that to achieve the desired results, it is important to account
for those organisational areas which influence technology implementation and
those which are influenced by it. This framework thus treats IS implementation as
a means to translate strategic asset management objectives into operational ac-
tions by enabling asset lifecycle processes and utilises the information generated
by the execution of these processes to inform asset management strategy for stra-
tegic reorientation and recalibration. In this way, IS implementation becomes a
generative learning process which helps in the maturity of the technical, social
and organisational context of the organisation.
9 Conclusions
References
[1] Earl MJ (1989) Management strategies for information technology. Prentice-Hall, Hemel
Hempstead, UK
[2] Galliers RD (1991) Strategic information systems: myths, realities and guidelines for
successful implementation. Eur J Inf Syst 1(1):55–64
[3] Lederer AL, Sethi V (1996) Key prescriptions for strategic information systems planning.
J Manage Inf Syst 13(1):35–62
[4] Haider A, Koronios A, Quirchmayr G (2006) You cannot manage what you cannot meas-
ure: an information systems based asset management perspective. In: Mathew J, Ma L,
Tan A, Anderson D (eds) Proceedings of the inaugural world congress on engineering as-
set management, 11–14 July 2006, Gold Coast, Australia
[5] Haider A, Koronios A (2005) ICT based asset management framework. In: Proceedings of
the 8th international conference on enterprise information systems (ICEIS), Paphos, Cy-
prus, vol 3, pp. 312–322
[6] Checkland P (1981) Systems thinking, systems practice. Wiley, Chichester
[7] Walsham G (2001) Making a world of difference: IT in a global context. Wiley, Chichester
[8] Giddens A (1984) The constitution of society: outline of the theory of structure. University
of California Press, Berkeley, CA
[9] Haider A (2007) Information systems based engineering asset management evaluation:
operational interpretations. Dissertation, University of South Australia, Adelaide, Australia
[10] Haider A (2009) Value maximisation from information technology in asset management –
a cultural study. In: Proceedings of the international conference of maintenance societies
(ICOMS), 2–4 June 2009, Sydney, Australia
[11] IIMM (2006) International infrastructure management manual. Association of Local
Government Engineering NZ, National Asset Management Steering Group, Thames, New Zealand, ISBN 0-473-10685-X
[12] Marosszeky M, Sauer C, Johnson K, Karim K, Yetton P (2000) Information technology in
the building and construction industry: the Australian experience. In: Li H, Shen Q, Scott
D, Love PED (eds) Proceedings of the INCITE 2000 conference: Implementing IT to ob-
tain a competitive advantage in the 21st century. Hong Kong Polytechnic University Press,
Hong Kong, pp. 78–92
[13] Power D (2005) Implementation and use of B2B-enabling technologies: five manufactur-
ing cases. J Manuf Technol Manage 16(5):554–572
[14] Songer AD, Young R, Davis K (2001) Social architecture for sustainable IT implementa-
tion in AEC/EPC. In: Coetzee G, Boshoff F (eds) Proceedings of IT in construction in Af-
rica, 30 May–1 June, Mpumalunga, South Africa
[15] Stewart R, Mohamed S (2002) IT/IS projects selection using multi-criteria utility theory.
Logist Inf Manage 15(4):254–270
[16] Laurindo FJB, de Carvalho MM (2005) Changing product development process through
information technology: a Brazilian case. J Manuf Technol Manage 16(3):312–327
[17] Small MH (2006) Justifying investment in advanced manufacturing technology: a portfo-
lio analysis. Ind Manage Data Syst 106(4):485–508
[18] Zipf PJ (2000) Technology-enhanced project management. J Manage Eng 16(1):34–39
[19] Weippert A, Kajewski SL, Tilley PA (2002) Internet-based information and communica-
tion systems on remote construction projects: a case study analysis. Construct Innovat
2(2):103–116
[20] Steenstrup K (2008) EAM and IT enabled assets: what is your equipment thinking about
today? In: Energy & Utilities Summit, 7–10 September 2008, JW Marriott Grande Lakes,
Orlando, FL
[21] Marsh L, Flanagan R (2000) Measuring the costs and benefits of information technology
in construction. Eng Construct Architect Manage 7(4):423–435
[22] Gindy NNZ, Cerit B, Hodgson A (2006) Technology roadmapping for the next generation
manufacturing enterprise. J Manuf Technol Manage 17(4):404–416
[23] Haider A, Koronios A (2003) Managing engineering assets: a knowledge based approach
through information quality. In: Proceedings of the 2003 international business informa-
tion management conference, Cairo, Egypt, pp. 443–452
[24] Haider A (2008) Information systems for asset lifecycle management: lessons from two
cases. In: 3rd world congress on engineering asset management, 27–30 October 2008, Bei-
jing, People’s Republic of China
[25] Haider A (2010) Governance of IT for engineering asset management. In: 14th business
transformation through innovation and knowledge management – an academic perspec-
tive, 23–24 June 2010, Istanbul, Turkey
[26] Lee I (2004) Evaluating business process-integrated information technology investment.
Bus Process Manage J 10(2):214–233
[27] O’Brien WJ (2000) Implementation issues in project web sites: a practitioner’s viewpoint.
J Manage Eng 16(3):34–39
[28] Abdel-Malek L, Das SK, Wolf C (2000) Design and implementation of flexible manufac-
turing solutions in agile enterprises. Int J Agile Manage Syst 2(3):187–195
[29] Paiva EL, Roth AV, Fensterseifer JE (2002) Focusing information in manufacturing:
a knowledge management perspective. Ind Manage Data Syst 102(7):381–389
[30] Whyte J, Bouchlaghem D (2001) IT innovation within the construction organisation. In:
Coetzee G, Boshoff F (eds) Proceedings of IT in construction in Africa, 30 May–1 June
2001, Mpumalunga, South Africa
[31] Haider A (2010) Enterprise architectures for information and operational technologies for
asset management. In: 5th world congress on engineering asset management, 25–27 Octo-
ber 2010, Brisbane, Australia
[32] Pun KF (2005) An empirical investigation of strategy determinants and choices in manu-
facturing enterprises. J Manuf Technol Manage 16(3):282–301
[33] Stephenson P, Blaza S (2001) Implementing technological change in construction orga-
nisations. In: Coetzee G, Boshoff F (eds) Proceedings of IT in construction in Africa,
30 May–1 June, Mpumalunga, South Africa
[34] Jaska PV, Hogan PT (2006) Effective management of the information technology func-
tion. Manage Res News 29(8):464–470
[35] Love PED, Irani Z, Li H, Cheng EWL, Tse RYC (2001) An empirical analysis of the
barriers to implementing e-commerce in small-medium sized construction contractors in
the state of Victoria, Australia. Construct Innovat 1(1):31–41
[36] Gordon SR, Gordon JR (2002) Organizational options for resolving the tension between
IT departments and business units in the delivery of IT services. Inf Technol People
15(4):286–305
[37] Voordijk H, Leuven AV, Laan A (2003) Enterprise resource planning in a large construction firm: implementation analysis. Construct Manage Econ 21(5):511–521
[38] Gomes CF, Yasin MM, Lisboa JV (2004) A literature review of manufacturing perform-
ance measures and measurement in an organizational context: a framework and direction
for future research. J Manuf Technol Manage 15(6):511–530
[39] Nitithamyong P, Skibniewski MJ (2004) Web-based construction project management
systems: how to make them successful? Automat Construct 13(4):491–506
[40] Alshawi M, Ingirige B (2003) Web-enabled project management: an emerging paradigm
in construction. Automat Construct 12(4):349–364
[41] Bjork BC (2002) The impact of electronic document management on construction infor-
mation management. In: Proceedings of the international council for research and innova-
tion in building and construction, Council for Research and Innovation in Building and
Construction Working Group 78 conference 2002, 12–14 June 2002, Aarhus, Denmark
[42] Bijker WE, Law J (eds) (1992) Shaping technology/building society: studies in sociotech-
nical change. MIT Press, Cambridge, MA
[43] Sabherwal R (1999) The relationship between information system planning sophistication
and information system success: an empirical assessment. Decis Sci 30(1):137–167
[44] Teo TSH, Ang JSK (1999) Critical success factors in the alignment of IS plans with busi-
ness plans. Int J Inf Manage 19(2):173–185
[45] Kunnathur AS, Shi Z (2001) An investigation of the strategic information systems plan-
ning success in Chinese publicly traded firms. Int J Inf Manage 21(6):423–439
[46] Lee GG, Pai RJ (2003) Effects of organizational context and inter-group behaviour on the
success of strategic information systems planning: an empirical study. Behav Inf Technol
22(4):263–280
[47] Grover V, Segars AH (2005) An empirical evaluation of stages of strategic information
systems planning: patterns of process design and effectiveness. Inf Manage 42(5):761–779
[48] Newkirk HE, Lederer AL, Srinivasan C (2003) Strategic information systems planning:
too little or too much. J Strateg Inf Syst 12(3):201–228
[49] Teo TSH, King WR (1997) Integration between business planning and information sys-
tems planning: an evolutionary-contingency perspective. J Manage Inf Syst 14(1):185–224
[50] Allen JP (2000) Information systems as technological innovation. Inf Technol People
13(3):210–221
[51] Kwon TH, Zmud RW (1987) Unifying the fragmented models of information systems
implementation. In: Boland RJ Jr, Hirshheim RA (eds) Critical issues in information sys-
tems research. Wiley, New York
[74] Mintzberg H (1990) The design school: reconsidering the basic premises of strategic
management. Strateg Manage J 11(3):171–195
[75] Davenport TH (1998) Putting the enterprise into the enterprise system. Harvard Bus Rev
July–August, pp. 121–131
[76] Scott Morton MS (ed) (1991) The corporation of the 1990s: information technology and
organizational transformation. Oxford University Press, Oxford
[77] Tapscott D, Caston A (1993) Paradigm shift: the new promise of information technology.
McGraw-Hill, New York
[78] Henderson JC, Venkatraman N (1993) Strategic alignment: leveraging information tech-
nology for transforming organizations. IBM Syst J 32(1):4–16
[79] Henderson JC, Venkatraman N (1992) Strategic alignment: a model for organizational
transformation through information technology. In: Kochan TA, Useem M (eds) Trans-
forming organizations. Oxford University Press, Oxford
[80] Earl M (1996) Integrating IS and the organization: a framework of organizational fit. In:
Earl MJ (ed) Information management: the organizational dimension. Oxford University
Press, Oxford
[81] Robson C (2004) Real world research, 2nd edn. Blackwell, Oxford
[82] Ward J, Griffiths P (1996) Strategic planning for information systems, 2nd edn. Wiley,
London
[83] Chin WW, Marcolin BL, Newsted PR (2003) A partial least squares latent variable model-
ling approach for measuring interaction effects: results from a Monte Carlo simulation
study and an electronic-mail emotion/adoption study. Inf Syst Res 14(2):189–217
[84] Khazanchi D (2005) Information technology (IT) appropriateness: the contingency theory
of fit and IT implementation in small and medium enterprises. J Comput Inf Syst
45(3):88–95
[85] Premkumar G, King WR (1994) Organizational characteristics and information systems
planning: an empirical study. Inf Syst Res 5(2):75–109
[86] Churchman CW (1994) Management science: science of managing and managing of
science. Interfaces 24(4):99–110
[87] Zahra SA, George G (2002) The net-enabled business innovation cycle and the evolution
of dynamic capabilities. Inf Syst Res 13(2):147–150
[88] Daniel EM, Wilson HN (2003) The role of dynamic capabilities in e-business transforma-
tion. Eur J Inf Syst 12(4):282–296
[89] Chan FTS, Chan MH, Lau H, Ip RWL (2001) Investment appraisal techniques for ad-
vanced manufacturing technology (AMT): a literature review. Integr Manuf Syst
12(1):35–47
[90] Huang C, Fisher N, Spreadborough A, Suchocki M (2003) Identifying the critical factors
of IT innovation adoption and implementation within the construction industry. In: Pro-
ceedings of the 2nd international conference on construction in the 21st century (CITC-II),
Sustainability and Innovation in Management and Technology, 10–12 December 2003,
Hong Kong
[91] Thorpe D (2003) Online remote construction management trials in Queensland department
of main roads: a participant’s perspective. Construct Innovat 3(2):65–79
[92] Stewart RA, Mohamed S, Marosszeky M (2004) An empirical investigation into the link
between information technology implementation barriers and coping strategies in the Aus-
tralian construction industry. Construct Innovat 4(3):155–171
[93] Abdel-Makoud AB (2004) Manufacturing in the UK: contemporary characteristics and
performance indicators. J Manuf Technol Manage 15(2):155–171
[94] Dangayach GS, Deshmukh SG (2005) Advanced manufacturing technology implementa-
tion: evidence from Indian small and medium enterprises (SMEs). J Manuf Technol Man-
age 16(5):483–496
[95] Adam A (2002) Exploring the gender question in critical information systems. J Inf Tech-
nol 17(2):59
[121] Chen ANK, Edgington TM (2005) Assessing value in organizational knowledge creation:
considerations for knowledge workers. MIS Q 29(2):279–309
[122] Chen JC, Chong PP, Chen Y (2001) Decision criteria consolidation: a theoretical founda-
tion of Pareto principle to Porter’s competitive forces. J Organ Comput Electron Com-
merce 11(1):1–14
[123] Chen Y, Chong PP, Chen JC (2000) Small business management: an IT-based approach.
J Comput Inf Syst 41(2):40–47
[124] Chin WW, Marcolin BL, Newsted PR (2003) A partial least squares latent variable model-
ling approach for measuring interaction effects: results from a Monte Carlo simulation
study and an electronic-mail emotion/adoption study. Inf Syst Res 14(2):189–217
[125] Chung WY, Fisher CW, Wang RY (2005) Redefining the scope and focus of information
quality work: a general systems theory perspective. In: Wang RY, Pierce WM, Madnick
SE, Fisher CW (eds) Advances in management information systems. ME Sharpe, Armonk,
NY
[126] Churchman CW (1994) Management science: science of managing and managing of
science. Interfaces 24(4):99–110
[127] Clemons EK, Hitt LM (2004) Poaching and the misappropriation of information: transac-
tion risks of information exchange. J Manage Inf Syst 21(2):87–107
[128] Cohen W, Levinthal D (1990) Absorptive capacity: a new perspective on learning and
innovation. Adm Sci Q 35(1):128–152
[129] Compeau D, Higgins CA, Huff S (1999) Social cognitive theory and individual reactions
to computing technology: a longitudinal study. MIS Q 23(2):145–159
[130] Cooper RB, Wolfe RA (2005) Information processing model of information technology
adaptation: an intra-organizational diffusion perspective. Database Adv Inf Syst
36(1):30−48
[131] Daniel EM, Wilson HN (2003) The role of dynamic capabilities in e-business transforma-
tion. Eur J Inf Syst 12(4):282–296
[132] Dennis AR, Garfield MJ (2003) The adoption and use of GSS in project teams: toward
more participative processes and outcomes. MIS Q 27(2):289
[133] Dennis AR, Wixom BH, Vandenberg RJ (2001) Understanding fit and appropriation
effects in group support systems via meta-analysis. MIS Q 25(2):167–193
[134] Dunn C, Grabski S (2001) An investigation of localization as an element of cognitive fit in
accounting model representations. Decis Sci 32(1):55–94
[135] Feeley TH, Barnett GA (1996) Predicting employee turnover from communication net-
works. Hum Commun Res 23(1):370–387
[136] Garicano L, Kaplan SN (2001) The effects of business-to-business E-commerce on trans-
action costs. J Ind Econ 49(4):463–485
[137] Garrity EJ (2002) Synthesizing user centred and designer centred is development ap-
proaches using general systems theory. Inf Syst Frontiers 3(1):107–121
[138] Gattiker TF, Goodhue DL (2005) What happens after ERP implementation: understanding
the impact of inter-dependence and differentiation on plant-level outcomes. MIS Q
29(3):559–585
[139] Gebauer J, Shaw MJ (2004) Success factors and impacts of mobile business applications:
results from a mobile e-procurement study. Int J Electron Commerce 8(3):19–41
[140] Ginzberg MJ (1980) An organizational contingencies view of accounting and information
systems implementation. Account Organ Soc 5(4):369–382
[141] Goodhue DL (1995) Understanding user evaluations of information systems. Manage Sci
41(12):1827–1844
[142] Goodhue DL, Thompson RL (1995) Task-technology fit and individual performance. MIS
Q 19(2):213–236
[143] Gregoire YM, Wade JH, Antia K (2001) Resource redeployment in an ecommerce envi-
ronment: a resource-based view. In: Proceedings of the American Marketing Association
conference, Long Beach, CA
[144] Griffith TL, Sawyer JE, Neale MA (2003) Virtualness and knowledge in teams: managing
the love triangle of organizations, individuals, and information technology. MIS Q
27(2):265–287
[145] Hansen T, Jensen JM, Solgaard HS (2004) Predicting online grocery buying intention: a
comparison of the theory of reasoned action and the theory of planned behavior. Int J Inf
Manage 24(6):539–550
[146] Hasan B, Ali JMH (2004) An empirical examination of a model of computer learning
performance. J Comput Inf Syst 44(4):27–34
[147] Heng MSH, de Moor A (2003) From Habermas’s communicative theory to practice on the
internet. Inf Syst J 13(4):331–352
[148] Henwood F, Hart A (2003) Articulating gender in the context of ICTs in health care: the
case of electronic patient records in the maternity services. Crit Soc Policy 23(2):249–267
[149] Hidding G (2001) Sustaining strategic IT advantage in the information age: how strategy
paradigms differ by speed. Strateg Inf Syst 10(3):201–222
[150] Hinds PJ, Bailey DE (2003) Out of sight, out of sync: understanding conflict in distributed
teams. Organ Sci 14(6):615–632
[151] Hoxmeier JA, Nie W, Purvis GT (2000) The impact of gender and experience on user
confidence in electronic mail. J End User Comput 12(4):11–20
[152] Humphreys PK, Lai MK, Sculli D (2001) An inter-organizational information system for
supply chain management. Int J Prod Econ 70(3):245–255
[153] Huseyin T (2005) Information technology relatedness, knowledge management capability,
and performance of multibusiness firms. MIS Q 29(2):311–335
[154] Iskandar BY, Kurokawa S, LeBlanc LJ (2001) Adoption of electronic data interchange:
the role of buyer-supplier relationships. IEEE Trans Eng Manage 48(4):505–517
[155] Jae-Nam L, Young-Gul K (2005) Understanding outsourcing partnership: a comparison of
three theoretical perspectives. IEEE Trans Eng Manage 52(1):43–58
[156] Jagodzinski P, Reid FJM, Culverhouse P, Parsons R, Phillips I (2000) A study of electron-
ics engineering design teams. Des Stud 21(4):375–402
[157] Janson M, Cecez-Kecmanovic D (2005) Making sense of e-commerce as social action. Inf
Technol People 14(4):311–343
[158] Jarvenpaa SL (1988) The importance of laboratory experimentation in information sys-
tems research. Commun ACM 31(12):1502–1504
[159] Jasperson J, Carter PE, Zmud RW (2005) A comprehensive conceptualization of post-
adoptive behaviors associated with information technology enabled work systems. MIS Q
29(3):525–557
[160] Jones M, Karsten H (2003) Review: structuration theory and information systems re-
search. WP 11/03. Judge Institute Working Papers, University of Cambridge.
http://www.jbs.cam.ac.uk/research/working_papers/2003/wp0311.pdf.
Accessed 3 December 2009
[161] Kauffman RJ, Mohtadi H (2004) Proprietary and open systems adoption in E-procure-
ment: a risk-augmented transaction cost perspective. J Manage Inf Syst 21(1):137–166
[162] Keil M, Smith HJ, Pawlowski S, Jin L (2004) Why didn’t somebody tell me? Climate,
information asymmetry, and bad news about troubled projects. Database Adv Inf Syst
35(2):65–84
[163] Kern T, Kreijger J, Willcocks L (2002) Exploring ASP as sourcing strategy: theoretical
perspectives, propositions for practice. J Strateg Inf Syst 11(2):153–177
[164] Khazanchi D (2005) Information technology (IT) appropriateness: the contingency theory
of fit and IT implementation in small and medium enterprises. J Comput Inf Syst
45(3):88–95
[165] Kim KK, Michelman JE (1990) An examination of factors for the strategic use of informa-
tion systems in the health care industry. MIS Q 14(2):201–215
[166] Kling R, McKim G, King A (2003) A bit more to it: scholarly communication forums as
socio-technical interaction networks. J Am Soc Inf Sci Technol 54(1):47–67
[167] Ko D, Kirsch LJ, King WR (2005) Antecedents of knowledge transfer from consultants to
clients in enterprise system implementations. MIS Q 29(1):59–85
[168] Kohli R, Kettinger WJ (2004) Informating the clan: controlling physicians’ costs and
outcomes. MIS Q 28(3):363–394
[169] Kuo FY, Chu TH, Hsu MH, Hsieh HS (2004) An investigation of effort-accuracy trade-off
and the impact of self-efficacy on Web searching behaviors. Decis Support Syst
37(3):331–342
[170] Lamb R, Kling R (2003) Reconceptualizing users as social actors in information systems
research. MIS Q 27(2):197–235
[171] Larsen T, Levine L, DeGross JI (eds) (1999) Information systems: current issues and
future changes. IFIP, Laxenburg, Austria
[172] Ledington PWJ, Ledington J (1999) The problem of comparison in soft systems method-
ology. Syst Res Behav Sci 16(4):329–339
[173] Leonard LNK, Cronan TP, Kreie J (2004) What influences IT ethical behavior intentions-
planned behavior, reasoned action, perceived importance, or individual characteristics? Inf
Manage 42(1):143–158
[174] Liaw SS, Chang WC, Hung WH, Huang HM (2006) Attitudes toward search engines as a
learning assisted tool: approach of Liaw and Huang’s research model. Comput Hum Be-
hav 22(2):177–190
[175] Lim K, Benbasat I (2000) The effect of multimedia on perceived equivocality and per-
ceived usefulness of information systems. MIS Q 24(3):449–471
[176] Loch CH, Huberman BA (1999) A punctuated equilibrium model of technology diffusion.
Manage Sci 45(2):160–177
[177] Madey G, Freeh V, Tynan R (2002) The open source software development phenomenon:
an analysis based on social network theory. In: Proceedings of Americas Conference on
Information Systems (AMCIS2002), Dallas, TX, pp. 1806–1813
[178] Mahaney RC, Lederer AL (2003) Information systems project management: an agency
theory interpretation. J Syst Softw 68(1):1–9
[179] Mahoney LS, Roush PB, Bandy D (2003) An investigation of the effects of decisional
guidance and cognitive ability on decision-making involving uncertainty data. Inf Organ
13(2):85–110
[180] Majchrzak A, Malhotra A, John R (2005) Perceived individual collaboration know-how
development through information technology-enabled contextualization: evidence from
distributed teams. Inf Syst Res 16(1):9–27
[181] Malhotra A, Gosain S, El Sawy OA (2005) Absorptive capacity configurations in supply
chains: gearing for partner-enabled market knowledge creation. MIS Q 29(1):145–187
[182] Markus ML, Majchrzak A, Gasser L (2002) A design theory for systems that support
emergent knowledge processes. MIS Q 26(3):179–212
[183] Massey AP, Montoya-Weiss MM (2006) Unraveling the temporal fabric of knowledge
conversion: a model of media selection and use. MIS Q 30(1):99–114
[184] McMaster TE, Mumford EB, Swanson EB, Warboys B, Wastell D (eds) (1997) Facilitat-
ing technology transfer through partnership: learning from practice and research. Chap-
man & Hall, London
[185] Melville N, Kraemer KL, Gurbaxani V (2004) Information technology and organizational
performance: an integrative model of IT business value. MIS Q 28(2):283–322
[186] Mirchandani DA, Lederer AL (2004) IS planning autonomy in US subsidiaries of multina-
tional firms. Inf Manage 41(8):1021–1036
[187] Mora M, Gelman O, Cervantes F, Mejia M, Weitzenfeld A (2003) A systemic approach
for the formalization of the information systems concept: why information systems are
systems? In: Cano JJ (ed) Critical reflections on information systems: a systemic ap-
proach. Idea Group, Hershey, PA
[188] Newman M, Robey D (1992) A social process model of user-analyst relationships. MIS Q
16(2):249–266
[189] Orlikowski WJ (2000) Using technology and constituting structures: a practice lens for
studying technology in organizations. Organ Sci 11(4):404–428
[190] Orlikowski WJ, Barley SR (2001) Technology and institutions: what can research on
information technology and research on organizations learn from each other? MIS Q
25(2):245–265
[191] Orlikowski WJ, Walsham G, Jones M, DeGross JI (eds) (1996) Information technology
and changes in organizational work, Chapman & Hall, London
[192] Palvia SC, Sharma RS, Conrath DW (2001) A socio-technical framework for quality
assessment of computer information systems. Ind Manage Data Syst 101(5–6):237–251
[193] Pawlowski SD, Robey D (2004) Bridging user organizations: knowledge brokering and
the work of information technology professionals. MIS Q 28(4):645–672
[194] Pollock TG, Whitbred RC, Contractor N (2000) Social information processing and job
characteristics: a simultaneous test of two theories with implications for job satisfaction.
Hum Commun Res 26(2):292–330
[195] Porra J, Hirschheim R, Parks MS (2005) The history of Texaco’s corporate information
technology function: a general systems theoretical interpretation. MIS Q 29(4):721–746
[196] Porter ME (2001) Strategy and the internet. Harvard Bus Rev 79(3):63–78
[197] Pozzebon M, Pinsonneault A (2005) Global-local negotiations for implementing configur-
able packages: the power of initial organizational decisions. J Strateg Inf Syst
14(2):121−145
[198] Premkumar G, Ramamurthy K, Saunders CS (2005) Information processing view of
organizations: an exploratory examination of fit in the context of interorganizational rela-
tionships. J Manage Inf Syst 22(1):257–294
[199] Qu Z, Brocklehurst M (2003) What will it take for china to become a competitive force in
offshore outsourcing? An analysis of the role of transaction costs in supplier selection.
J Inf Technol 18(1):53–67
[200] Rose J (2002) Interaction, transformation and information systems development – an
extended application of soft systems methodology. Inf Technol People 15(3):242–268
[201] Ryan SD, Harrison DA, Schkade LL (2002) Information-technology investment decisions:
when do costs and benefits in the social subsystem matter? J Manage Inf Syst
19(2):85−127
[202] Sabherwal R, Hirschheim R, Goles T (2001) The dynamics of alignment: insights from a
punctuated equilibrium model. Organ Sci 12(2):79–197
[203] Sahay S (1997) Implementation of information technology: a time-space perspective.
Organ Stud 18(2):229–260
[204] Sakaguchi T, Nicovich SG, Dibrell CC (2004) Empirical evaluation of an integrated
supply chain model for small and medium sized firms. Inf Resour Manage J 17(3):1–9
[205] Sambamurthy V, Bharadwaj A, Grover V (2003) Shaping firm agility through digital
options: reconceptualizing the role of it in contemporary firms. MIS Q 27(2):237–263
[206] Santhanam R, Hartono E (2003) Issues in linking information technology capability to
firm performance. MIS Q 27(1):125–153
[207] Schilling MA, Vidal P, Ployhart RE, Marangoni A (2003) Learning by doing something
else: variation, relatedness, and the learning curve. Manage Sci 49(1):39–56
[208] Scott J (2000) Social network analysis: a handbook, 2nd edn. Sage, London
[209] Scott SV, Wagner EL (2003) Networks, negotiations and new times: the implementation
of enterprise resource planning into an academic administration. Inf Organ 13(4):285–313
[210] Shaft TM, Vessey I (2006) The role of cognitive fit in the relationship between software
comprehension and modification. MIS Q 30(1):29–55
[211] Street CT, Meister DB (2004) Small business growth and internal transparency: the role of
information systems. MIS Q 28(3):473–506
[212] Sudweeks F, Mclaughlin ML, Rafaeli S (eds) (1998) Network and netplay. MIT Press,
Cambridge, MA
[213] Sutcliffe AG (2000) Requirements analysis for socio-technical system design. Inf Syst
25(3):213–233
[214] Teo TSH, Yu Y (2005) Online buying behavior: a transaction cost economics perspective.
Omega 33(5):451–465
[215] Venkatesh V, Morris MG, Davis GB, Davis FD (2003) User acceptance of information
technology: toward a unified view. MIS Q 27(3):425–478
[216] Vessey I (1991) Cognitive fit: a theory-based analysis of the graphs versus tables litera-
ture. Decis Sci 22(2):219–240
[217] Vessey I (2006) The theory of cognitive fit: one aspect of a general theory of problem
solving? In: Zhang P, Galletta D (eds) Human-computer interaction and management in-
formation systems: foundations. Advances in Management Information Systems Series.
ME Sharpe, Armonk, NY
[218] Vessey I, Glass RL (1994) Applications-based methodologies. Inf Syst Manage
11(4):53−57
[219] Wade M, Hulland J (2004) The resource-based view and information systems research:
review, extension and suggestions for future research. MIS Q 28(1):107–138
[220] Walsham G, Sahay S (1999) GIS for district-level administration in India: problems and
opportunities. MIS Q 23(1):39–65
[221] Walsham G (2002) Cross-cultural software production and use: a structurational analysis.
MIS Q 26(4):359–380
[222] Walther JB (1995) Relational aspects of computer-mediated communication. Organ Sci
6(2):186–203
[223] Whitworth B, De Moor A (2003) Legitimate by design: towards trusted socio-technical
systems. Behav Inf Technol 22(1):31–51
[224] Ying-Pin Y (2005) Identification of factors affecting continuity of cooperative electronic
supply chain relationships: empirical case of the Taiwanese motor industry. Supply Chain
Manage Int J 10(4):327–335
[225] Yoh E, Damhorst ML, Sapp S, Laczniak R (2003) Consumer adoption of the internet: the
case of apparel shopping. Psychol Market 20(12):1095–1118
[226] Zacharia ZG, Mentzer JT (2004) Logistics salience in a changing environment. J Bus
Logist 25(1):187–210
[227] Zaheer A, Dirks K (1999) Research on strategic information technology: a resource-based
perspective. In: Venkatraman N, Henderson JC (eds) Strategic management and informa-
tion technology. JAI, Greenwich, CT
[228] Zmud RW (1988) Building relationships throughout the corporate entity. In: Elam J,
Ginzberg M, Keen P, Zmud R (eds) Transforming the IS organization: the mission, the
framework, the transition. ICIT Press, Washington, DC
Appendix 1 Summary of Literature Relating to Barriers to Implementation of Information Systems

Study of barriers to IT implementation in engineering organisations in developing countries [15]
  Technology-related barriers: lack of quality IT infrastructure; lack of system compatibility; lack of information interoperability; unavailability of skill base
  Organisational barriers: lack of awareness of the multidisciplinary nature of IT; lack of support from middle managers; high staff workload
  Strategic barriers: industrial fragmentation; high cost of IT investments; decreased profit margins

Study identifying IT implementation success factors [19]
  Technology-related barriers: compatibility of technologies; information accessibility and reliability; quality and accuracy of information and data input
  Organisational barriers: lack of user involvement in IT adoption choices; lack of training and technical support
  Strategic barriers: narrow focus of management in making choices about technology investment

Study of user attitudes to electronic data management systems [41]
  Technology-related barriers: slow processing speed; lack of data and data communication standards; employee resistance to change; varying user attitudes towards technology adoption
  Organisational barriers: lack of resources for technology support and optimal utilisation
  Strategic barriers: organisational functional silos driving technology adoption strategies

Study of importance of information to knowledge management in manufacturing organisations [29]
  Technology-related barriers: lack of access to information; information accuracy; timeliness of information; task-technology mismatch
  Organisational barriers: mismatch between information needs of organisation and information systems; lack of trust among business partners to share data
  Strategic barriers: inability of top management to view information as an asset

Study of Dutch and US-based manufacturing organisations [36]
  Technology-related barriers: lack of requisite hardware and IT software infrastructure
  Organisational barriers: lack of IT coordination and control; non-supportive organisational culture
  Strategic barriers: high degree of centralisation of IT and business strategy and structure

Study of performance measurement literature in manufacturing organisations from 1988 to 2000 [38]
  Technology-related barriers: short-term focus on process automation; inability to appreciate multidimensional nature of technology implementation
  Organisational barriers: inability to take into account financial and non-financial benefits of IT investments in performance evaluation methods; inability to effect change management to adapt to technology; lack of pre- and post-implementation evaluation of IT
  Strategic barriers: lack of IT implementation as a means of business strategy translation; lack of matching organisational objectives, customer needs and organisational success factors with IT investments

Study of a business process integrated IT evaluation methodology which integrates business strategy, business process design and supporting IT investment [26]
  Technology-related barriers: lack of fit between IT and business processes
  Organisational barriers: inability to redesign business processes to adapt to new technology; inability to properly measure process requirements and manage IT configuration
  Strategic barriers: lack of strategic analysis of impact of IT investments

Study of Shanghai- and Hong Kong-based manufacturing organisations to identify and prioritise the strategy determinants for manufacturing enterprises [32]
  Technology-related barriers: lack of research and development capabilities on technology investments; lack of employee skills and competencies
  Organisational barriers: lack of fit of IT infrastructure with business objectives
  Strategic barriers: inability of technology to contribute to horizontal/vertical integration
Agency Theory
  Description: Study of ubiquitous agency and principal relationships, in which the principal delegates work to an agent. Agency theory addresses two issues which arise out of such a relationship: firstly, the conflicts between the aims of the principal and the agent and, secondly, the inability of the principal to verify the behaviour of the agent.
  Focus: efficiency through alignment of interests, risk sharing and contracting
  References from IS literature: [105, 121, 159, 162, 168, 178, 186]

Absorptive Capacity
  Description: Emphasises establishment by organisations of internal R&D capacities which aid IS development in line with existing familiarity with technology and through evaluation and incorporation of externally generated technical knowledge.
  Focus: capabilities through amount of knowledge absorption
  References from IS literature: [96, 128, 167, 181, 193, 207]

Cognitive Fit
  Description: Developed by Vessey [218], it proposes that there is a link between information presentation and the tasks enabled by the information. This relationship defines task performance for individual users.
  Focus: problem resolution; process enhancement; task performance
  References from IS literature: [112, 134, 179, 210, 216, 217, 218]

Critical Social Theory
  Description: Suggests that social reality has historical underpinnings and is constituted and reconstituted by people. Even though people or organisations can mindfully make an effort to alter their social and economic conditions, their ability to do so is hampered by the dominant social, cultural and political structures. It focuses on the conflicts and contradictions in the social environment and seeks to be a source of emancipation to alleviate dissonance.
  Focus: learning by doing; social emancipation
  References from IS literature: [95, 98, 109, 132, 147, 148, 180]

Contingency Theory
  Description: Optimal organisational performance is contingent upon various internal and external constraints. Important postulates of this theory:
  a. there is no one best way to manage an organisation;
  b. there must be a ‘fit’ between an organisation and its subsystems;
  c. successful organisations extend this fit to the organisational environment;
  d. organisational design and management must satisfy the nature of tasks and work groups.
  Focus: organisational efficiency
  References from IS literature: [106, 111, 124, 140, 164, 228]

Dynamic Capabilities
  Description: Stresses integration, building and reconfiguration of organisational competencies (external as well as internal) to address a changing business environment.
  Focus: competitiveness
  References from IS literature: [131, 205, 219]

Information Processing
  Description: Suggests that learning should be approached through use of memory. It is based on two ideas proposed by Miller (1956): firstly, the concept of ‘chunking’ and limited capacity, which posits that short-term memory can hold 5 to 9 chunks of meaningful information; secondly, that information processing mimics human capabilities of information processing.
  Focus: learning by doing; knowledge reuse
  References from IS literature: [100, 101, 102, 115, 130, 138, 199]

Knowledge-Based Theory of the Firm
  Description: Treats knowledge as the most strategically important resource of an organisation, due mainly to the social complexity and difficulty of imitation of knowledge-based resources. Organisational knowledge and competencies are therefore chief determinants of enhanced organisational performance and sustained competitive advantage.
  Focus: core competencies; sustained competitive advantage
  References from IS literature: [97, 153, 183]

Punctuated Equilibrium
  Description: In terms of organisational behaviour, this theory comprises three elements: deep structures, equilibrium periods and revolutionary periods. Deep structures are the sets of basic choices comprising a system, i.e. the fundamental parts into which its units are organised and the fundamental activity patterns that maintain the existence of the system. Equilibrium periods maintain organisational structure and activity patterns, with small-scale incremental changes made to the system so that it adapts to a changing environment without affecting the deep structures. Revolutionary periods occur when deep structures are changed, leading to a disorderly state, until choices are made to enact new structures for the system.
  Focus: strategic change
  References from IS literature: [158, 176, 188, 195, 202, 211]

Task-Technology Fit
  Description: Use of IT is expected to have a positive effect on people’s performance if the capabilities of the technology match the tasks which people must perform [143].
  Focus: technical fit; system utilisation
  References from IS literature: [133, 139, 141, 142, 175]
Improving Asset Management Process
Modelling and Integration
__________________________________
Yong Sun
CRC for Infrastructure and Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, Brisbane, QLD 4001, Australia
e-mail: y3.sun@qut.edu.au
Lin Ma
CRC for Infrastructure and Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, Brisbane, QLD 4001, Australia
Joseph Mathew
CRC for Infrastructure and Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, Brisbane, QLD 4001, Australia
1 Introduction
An enterprise often conducts various asset management (AM) activities which are
interlinked in different logical ways, resulting in different processes. These proc-
esses are termed AM processes. Inefficient AM processes can incur significant
costs for an organisation or even the failure of an organisation to achieve its AM
goals. AM processes can be improved using process modelling and reengineering
technology. AM process modelling is the documentation, analysis and design of
the structure of AM processes. Process working mechanisms, required resources,
external factors, constraints and their relationships with the environment in which
these processes operate are also included in process modelling. AM process mod-
els can be used for visualising processes, developing data requirements, coordinat-
ing AM activities among different personnel [1], generating workflow to develop
AM information systems and assisting in the integration of AM information sys-
tems with other IT systems. With improved processes, an organisation can achieve
its AM goals effectively with less consumption of its resources including time,
finances, labour, IT systems and materials.
Process modelling has attracted much attention from engineering researchers since
the beginning of the industrial revolution [2]. During the late 1980s and early
1990s, businesses started to become more interested in processes [3]. Modelling is
important as it provides managers, asset maintenance personnel, operators and
users with a common understanding of each process [4]. It also visualises proc-
esses so that they can be discussed and audited more intuitively [4]. A survey
conducted in 2006 [5] showed that process improvement in general is beneficial to
most users. AM processes have been used to guide AM practices [6, 7]. However,
these processes are modelled using flowcharts. This modelling method is insuffi-
cient for comprehensively describing the characteristics of AM activities.
Research on AM process modelling methods has also attracted increased atten-
tion in recent times [8, 9]. The research of Ma et al. [10] shows that AM processes
have common characteristics for different businesses. They are dynamic over a
long time span, generally focus on engineering assets which are hierarchically
structured, are closely related to decision support processes, and involve a diver-
sity of information and data. Modelling AM processes normally involves different
people in different departments or organisations and often outsourcing. Noting
these features, Frolov [9] studied AM process modelling and recognised that a
sound foundation to enable effective application needed to be developed.
This paper addresses this issue and focuses on analysing AM process modelling
requirements. The analysis considers the aspects discussed below.
To make AM processes intuitive and easy to follow, symbols and notations should
be straightforward [11]. Process modelling symbols that have specific meanings
will need to be learnt and hence can be hard to understand unless viewers have an
engineering background. On the other hand, notations must be comprehensive
enough to represent the required AM information.
Currently, flowcharts are still widely used to model AM processes [6, 7] be-
cause they are well established, familiar to most engineers and business managers
and can be readily adopted as workflow models in developing AM systems. How-
ever, flowcharts model the relationships of activities and judgements only, with-
out presenting other important information such as data flow and participants
simultaneously.
IDEF0 (one of the Integration DEFinition methods) has also been used for
modelling AM processes [12]. IDEF0 is one of the 16 modelling methods in the
family of IDEF, which was created by the United States Air Force. IDEF0 was
released as a standard for function (activity) modelling in 1993. It is a method
designed to model the decisions, actions and activities of an organisation or sys-
tem using simple boxes and arrows (Figure 1). Effective IDEF0 models enable the
analysis of a system and promote good communication between the analyst and
the customer.
In Figure 1, the box represents an activity. Input and output arrows represent
material and information (data) flow. ‘Control’ stands for something used to im-
plement the activity such as conditions, recipes or manuals. ‘Mechanisms’ stands
for the resources or organisations required by the activity.
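To make the four arrow roles concrete, an IDEF0 activity can be represented as a simple record. The following Python sketch is purely illustrative and independent of any IDEF0 tooling; the example activity and all of its labels are invented, AM-flavoured placeholders.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Idef0Activity:
    """One IDEF0 box: an activity with its four kinds of arrows."""
    name: str
    inputs: List[str] = field(default_factory=list)      # material/information transformed
    outputs: List[str] = field(default_factory=list)     # material/information produced
    controls: List[str] = field(default_factory=list)    # conditions, recipes or manuals
    mechanisms: List[str] = field(default_factory=list)  # resources or organisations

# Invented example: planning a maintenance work order
plan_work_order = Idef0Activity(
    name="Plan maintenance work order",
    inputs=["asset condition report"],
    outputs=["approved work order"],
    controls=["maintenance policy", "safety regulations"],
    mechanisms=["maintenance planner", "CMMS"],
)
print(plan_work_order)
```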
IDEF0 is a type of graph-plus-text notation which is easier to understand and
better for AM process management, especially for developing AM IT systems.
This graph-plus-text notation has different variations such as the five views used
in the Architecture of Information Integration Systems (ARIS) [13], the architecture
modelling notation (AMN) used by James Martin & Co. [4] and the Generic
Activity Model (GAM) in integrated enterprise modelling (IEM) [14].
Figure 1 An IDEF0 activity: Inputs enter and Outputs leave the Activity (function) box, with Controls above and Mechanisms below
The presentation of process models in ARIS is very similar to IDEF0 (Fig-
ure 2). The major difference is that in ARIS, the Control view and the Mechanism
view are activity (function) self-contained and do not link to other activities using
lines. This design makes ARIS process models more readable and clearer.
ARIS was developed to attempt to model all aspects of complex businesses.
However, Green and Rosemann [15] analysed the five views in ARIS and con-
cluded that “even when considering all five views in combination, problems may
arise in representing all potentially required business rules, specifying the scope
and boundaries of the system under consideration, and employing a ‘top-down’
approach to analysis and design”.
When using ARIS to model AM processes, Ma et al. [16] noted that the influ-
ence of decisions is not reflected in the general ARIS views. Information about
assets can be included in the Output view. In this case, the information of assets
is not highlighted. However, in the asset maintenance management process, one
emphasises the influence of decision making and the layout of the asset. Asset
maintenance management is a dynamic process which is closely related to deci-
sion support and information about assets. To accommodate the requirements of
AM, the authors suggested extending current ARIS-based views to include the
views for maintenance decision support and asset technical information while
developing AM process models using ARIS, i.e. adding a Decision view and an
Asset view when modelling asset maintenance management processes. The Deci-
sion view includes all aspects for maintenance decision making. The Asset view
includes the layout and the configuration of assets. The technical specifications of
assets are also allocated in the Asset view. Because the existing Output view
is often misleading, containing both input and output, the original Output view
was divided into an Input view and an Output view. Figure 3 shows the modified ARIS
views to accommodate the requirements of AM. The authors also indicated that
the modified views are still far from being a satisfactory solution. Further re-
search is therefore required.
Figure 2 The general ARIS business process views, including the Data, Control, Execute and Organisation views
Figure 3 The modified ARIS process views for asset management [16]
IEM also uses the concept of views. The representation method in IEM (Fig-
ure 4) is nearly the same as in ARIS [14].
A key feature of IDEF0, ARIS and IEM is that all conditions to complete an ac-
tivity are represented using separate boxes, and then these boxes are linked to an
activity box using lines. AMN is different from these three modelling methods in
that it includes an activity, the time the activity takes (metrics), the people who
complete the activity (roles) and the techniques and tools used to complete the
activity within the same box (Figure 5). The major advantage of this method of
representation is that a box contains more information so that the process models
become less cluttered. Another advantage is that the time used for implementing an
activity is explicitly presented. The major disadvantage of this design is that it
does not describe data flow. In addition, different properties in the same box will
create difficulties in software development.
Figure 4 The IEM activity representation: a Product, Order or Resource object in status n is transformed by the activity into status n + 1; an Order controls the execution and a Resource executes the activity
Figure 5 The AMN activity box, combining the activity with its metrics, roles, inputs, deliverables, techniques and tools
In recent years, Business Process Model and Notation (BPMN) has become an
increasingly important standard for process modelling. BPMN is also a type of
graph-plus-text notation similar to activity diagrams used in the Unified Model-
ling Language (UML). According to documentation provided by the Object
Management Group, “In BPMN a Process is depicted as a graph of Flow Elements,
which are a set of Activities, Events, Gateways, and Sequence Flows that define
finite execution semantics” [17]. BPMN adopts both an event-driven activity-
focused representation and Swimlanes to focus on participants (Figure 6). BPMN
is much richer than other existing notations. BPMN 2.0 has defined five basic
categories of notations: Flow Objects, Data, Connecting Objects, Swimlanes, and
Artefacts. Each category has several elements which can be further subdivided
into subelements. For example, three elements including events, activities and
gateways are included in the category of Flow Objects, whereas activities are
divided into non-atomic activities which can be expanded into subprocesses and
atomic activities which are termed Tasks. Therefore, in BPMN, the terms ‘activ-
ity’ and ‘task’ are both used because they have different meanings. Tasks are
further divided into different types with different notations, including service
task, send task, receive task, user task, manual task, business rule task and script
task.
Figure 6 Basic BPMN notations: events, tasks, sequence flows, message flows, associations, data objects and pools
The advantage of this richness is that it can be used to deal with the complexity
that is inherent in business processes. However, the richness also makes the
language more complicated to apply. End users often have difficulty
identifying the interface between process modelling and business rule modelling
[18].
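To illustrate the graph-of-flow-elements idea in the OMG definition quoted above, the sketch below encodes a small, invented repair process as events, tasks, gateways and sequence flows. It is a schematic Python representation only, not an implementation of the BPMN 2.0 metamodel.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class FlowElement:
    kind: str   # "event", "task" or "gateway" (BPMN Flow Objects)
    label: str

# Invented repair process as a graph of flow elements
elements: Dict[str, FlowElement] = {
    "start":   FlowElement("event", "Failure reported"),
    "inspect": FlowElement("task", "Inspect asset"),
    "split":   FlowElement("gateway", "Parallel split"),
    "repair":  FlowElement("task", "Repair asset"),
    "order":   FlowElement("task", "Order spare parts"),
    "join":    FlowElement("gateway", "Parallel join"),
    "end":     FlowElement("event", "Asset restored"),
}

# Sequence flows define the execution order of the flow elements
sequence_flows: List[Tuple[str, str]] = [
    ("start", "inspect"), ("inspect", "split"),
    ("split", "repair"), ("split", "order"),
    ("repair", "join"), ("order", "join"), ("join", "end"),
]

for src, dst in sequence_flows:
    print(f"{elements[src].label} -> {elements[dst].label}")
```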
A major advantage of BPMN is that it provides a mapping between the graph-
ics of notations and Web Services Business Process Execution Language
(WS-BPEL), or Business Process Execution Language (BPEL) for short. BPEL is
a standard executable language developed by OASIS for modelling actions within
business processes using Web-based services. However, BPEL cannot appropri-
ately describe the interconnection of multiple partners [19]. BPMN models can
also be mapped to the Yet Another Workflow Language (YAWL) environment
through the BPMN2YAWL component for execution [20]. YAWL was developed
by Wil van der Aalst at the Eindhoven University of Technology, the Netherlands,
and Arthur ter Hofstede at the Queensland University of Technology, Australia, in
2002, aiming to extend Petri nets’ support for various control flow patterns [20].
(Petri nets are reviewed in Section 4.) YAWL supports dynamic workflows, which
is particularly useful for modelling dynamic AM processes.
Systems thinking has also been used to model dynamic processes. A typical
system process model is demonstrated in Figure 7. The notations of systems think-
ing are also a type of graph-plus-text, but they are less intuitive. The process mod-
els developed using the systems thinking method have better simulation capabilities.
Figure 7 A typical system process model: initiation, process and outcome, linked to financial and resource models, external supplies, competitors and competing processes, and drawing on personnel, materials, tools and facilities
Figure 9 Factors likely to be involved in AM process modelling: asset manufacturers/dealers, business managers/planners, regulators/legal policy makers, workers/human resource managers, finance, inventory and technical manuals/drawings, all centred on the AM processes
During AM process modelling, modellers and users often need to evaluate differ-
ent process models. The evaluation has two objectives: (1) to evaluate whether the
process can achieve its goals and (2) to compare different process alternatives and
determine the best one for an enterprise. An evaluation of process models is im-
portant because some ineffective processes can cause significant financial losses
to an enterprise. AM process modelling must ensure that enterprises can gain
advantages from their investment.
AM processes are dependent on an organisation’s objectives/goals, structure,
business scale and ready access to resources. An evaluation of an AM process
should be made by considering the application environment of the process. A poor
AM process for one enterprise may be perfect for another enterprise. To quantify
the evidence for this argument, two possible processes for a virtual asset repair are
assumed in Figure 11. The implementation time of Process A is 3 hours 45 min-
utes, whereas the implementation time of Process B is 2 hours 45 minutes. If the
service interruption must be kept under 4 hours, both processes can be used. In
this case, Process A is more favourable because it can be implemented by a single
qualified maintenance technician, whereas Process B needs two technicians, and
scheduling the workload of two technicians is not straightforward. However, if
the interruption must be kept under 3 hours, only Process B can be selected. On
the other hand, if an organisation has only one qualified technician, only Process A
is possible.
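This comparison can be reproduced numerically. The sketch below takes the activity durations from Figure 11 (60, 60, 45, 30 and 30 minutes) and assumes, for Process B, that activity b runs in parallel with the sequence c then d; this structure is inferred only because it reproduces the stated total of 2 hours 45 minutes, and the actual branching in the figure may differ.

```python
# Activity durations in minutes, as given in Figure 11
durations = {"a": 60, "b": 60, "c": 45, "d": 30, "e": 30}

# Process A: all five activities in sequence (225 min = 3 h 45 min)
process_a = sum(durations.values())

# Process B (assumed structure): b in parallel with c -> d, between a and e
# (60 + max(60, 45 + 30) + 30 = 165 min = 2 h 45 min, needing two technicians)
process_b = (durations["a"]
             + max(durations["b"], durations["c"] + durations["d"])
             + durations["e"])

technicians_available = 2
crew_needed = {"A": 1, "B": 2}
duration = {"A": process_a, "B": process_b}

for limit in (4 * 60, 3 * 60):
    feasible = [p for p in ("A", "B")
                if duration[p] <= limit
                and crew_needed[p] <= technicians_available]
    print(f"interruption limit {limit} min -> feasible processes: {feasible}")
```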
Currently, a methodology to evaluate AM processes systematically awaits de-
velopment. The following three critical criteria must be considered in the evalua-
tion of AM processes:
(1) effectiveness, which measures the degree to which an AM process achieves the
AM goals for which it is designed. For example, an AM strategy planning
process without risk analysis would not be effective;
(2) efficiency, which measures the usage rate of enterprise resources including
time, finances, labour, IT systems and materials when implementing an AM
process to achieve its business goals. An optimised AM process would en-
able users to achieve their goals with minimum consumption of enterprise
resources;
(3) flexibility, which measures the adaptability of an AM process to frequently
changing organisational structures and dynamic business environments. The
knowledge about process changes can be captured using process-aware infor-
mation systems (PAISs) [23].
Figure 11 Example of AM process options: Process A executes activities a (60 min), b (60 min), c (45 min), d (30 min) and e (30 min) in sequence, whereas Process B executes some of these activities in parallel (logic AND)
To model time-dependent, probabilistic systems, stochastic Petri nets (SPNs) were developed. Two
of these SPNs are generalised stochastic Petri nets (GSPNs) and stochastic activ-
ity networks (SANs). Both of these can be used for numerical and simulation
analysis.
PN technology has been incorporated with other methodologies to enhance its
capability. An integration of PNs and the trace logic of the communicating se-
quential processes theory led to the event-driven GSPN-based modelling approach
for the construction of complex system models. A combination of PNs and activity
networks led to the SAN-based modelling approach, which can be used to model
timed and instantaneous activities [25]. An integration of PNs and workflow pat-
terns which are used as a benchmark for the suitability of a process specification
language led to YAWL [20]. While PN process models are too abstract to be un-
derstood by ordinary viewers including business managers and engineers, YAWL
is much more intuitive for both process designers and users.
Another example of process simulation tools is UPPAAL, which is an inte-
grated tool developed by Uppsala University in Sweden and Aalborg University in
Denmark. It can be used to model and validate real-world systems which are mod-
elled as networks of timed automata and, hence, has the potential for AM process
simulation. This tool has been used for systematic evaluation of fault trees [26].
However, the process models developed in UPPAAL cannot be easily understood
without a sound knowledge of this tool.
In general, simulation is more suitable for evaluating efficiency and flexibility
than effectiveness. Some analytic approaches with more specific concerns
have also been developed. Chen et al. [27] presented a data envelopment analysis
(DEA) non-linear model for measuring the impact of IT on a multistage business
process. Sarkis [28] presented an activity-based analysis methodology for the
selection or prioritisation of a set of candidate business processes or projects that
should undergo reengineering. The same concept may be applied to compare dif-
ferent AM process options. Although existing business process evaluation meth-
ods have potential for AM process evaluation, they only evaluate processes from
specific points of view. A method to evaluate AM processes systematically and
effectively has yet to be developed.
6 Conclusions
References
[11] Weichhardt F (1999) Modelling and evaluation of processes based on enterprise goals. In:
Scholz-Reiter B, Stahlmann H-D, Nethe A (eds) Process modeling. Springer, Berlin Heidel-
berg New York, pp. 115–131
[12] Gómez Fernández JF, Crespo Márquez A (2009) Framework for implementation of mainte-
nance management in distribution network service providers. Reliab Eng Syst Saf
94(10):1639–1649
[13] Scheer A-W (1999) ARIS – business process frameworks, 3rd edn. Springer, Berlin Heidel-
berg New York
[14] Mertins K, Jochem R (1999) Quality-oriented design of business processes. Kluwer, Boston
[15] Green P, Rosemann M (2000) Integrated process modelling: an ontological evaluation. Inf
Syst 25(2):73–87
[16] Ma L, Sun Y, Mathew J (2004) Asset management process modelling. In: Proceedings of
the international conference of maintenance societies. Maintenance Engineering Society of
Australia, Sydney, Australia
[17] Object Management Group (2010) Business Process Model and Notation (BPMN).
http://www.omg.org/spec/BPMN/2.0. Accessed 14 March 2012
[18] Recker JC (2010) Opportunities and constraints: the current struggle with BPMN. Bus
Process Manage J 16(1):181–201
[19] Decker G, et al. (2009) Interacting services: from specification to execution. Data Knowl
Eng 68(10):946–972
[20] Adams M (2010) YAWL – user manual.
http://www.yawlfoundation.org/yawldocs/YAWLUserManual2.0.pdf
[21] Hitchins DK (2003) Advanced systems thinking, engineering, and management. Artech,
Boston
[22] Muller J-A (1999) Automatic model generation in process modeling. In: Scholz-Reiter B,
Stahlmann H-D, Nethe A (eds) Process modeling. Springer, Berlin Heidelberg New York,
pp. 17–36
[23] Weber B, et al. (2009) Providing integrated life cycle support in process-aware information
systems. Int J Coop Inf Syst 18(1):115–165
[24] Volkner P, Werners B (2000) A decision support system for business process planning. Eur
J Oper Res 125(3):633–647
[25] Mazzocca N, Russo S, Vittorini V (1999) The modelling process and Petri nets: reasoning
on different approaches. In: Scholz-Reiter B, Stahlmann H-D, Nethe A (eds) Process model-
ing. Springer, Berlin Heidelberg New York, pp. 37–56
[26] Cha S, et al. (2003) System evaluation of fault trees using real-time model checker
UPPAAL. Reliab Eng Syst Saf 82(1):11–20
[27] Chen Y, et al. (2006) Evaluation of information technology investment: a data envelopment
analysis approach. Comput Oper Res 33:1368–1379
[28] Sarkis J, Presley A, Liles D (1997) The strategic evaluation of candidate business process
reengineering projects. Int J Prod Econ 50(2–3):261–274
[29] Kuhlmann T, Lamping R, Massow C (1998) Intelligent decision support. J Mater Process
Technol 76(2):257–260
[30] Holland CP, Shaw DR, Kawalek P (2005) BP’s multi-enterprise asset management system.
Inf Softw Technol 47(4):999–1007
Utilising Reliability and Condition Monitoring
Data for Asset Health Prognosis
__________________________________
Andy Chit Tan
Queensland University of Technology, Brisbane, QLD 4001, Australia
Aiwina Heng
Queensland University of Technology, Brisbane, QLD 4001, Australia
Joseph Mathew
Queensland University of Technology, Brisbane, QLD 4001, Australia
1 Introduction
An FFNN consists of a layer of input nodes, one or more layers of hidden nodes,
one layer of output nodes and connection weights. During training, input and tar-
get pairs are repetitively presented to a network. The network will draw the rela-
tionships between the inputs and targets and adjust its connection weights to pro-
duce outputs as close as possible to the targets. The FFNN used in this work has
one hidden layer, d + 1 input nodes (d is the number of delayed indices of asset
condition), and h output nodes (h is the desired number of time intervals to be
forecasted) (Figure 1).
Figure 1 Structure of the FFNN: inputs Y(t), Y(t − Δ), …, Y(t − dΔ) feed one hidden layer, producing outputs Ŝ(t + Δ), Ŝ(t + 2Δ), …, Ŝ(t + hΔ)
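A minimal sketch of such a network follows, using scikit-learn's MLPRegressor as an assumed stand-in for the authors' FFNN; the lagged condition values and survival targets are randomly generated toys, not real degradation data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # assumption: scikit-learn is available

d, h = 4, 6  # d delayed condition indices; h forecast intervals
rng = np.random.default_rng(0)

# Toy training data: each row holds d + 1 lagged condition values; the h
# survival targets would, in practice, come from Eqs. (1)-(3) and Eq. (6)
X = rng.random((200, d + 1))
y = np.clip(1.0 - X.mean(axis=1, keepdims=True) * np.linspace(0.5, 1.5, h), 0.0, 1.0)

# One hidden layer, matching the structure described in the text
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X, y)

# Feed the current index and d previous indices; read off h survival estimates
window = rng.random((1, d + 1))
survival_curve = np.clip(net.predict(window), 0.0, 1.0)
print(survival_curve)
```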
The FFNN training targets are estimates of the survival probabilities of each moni-
tored item in the training set. They are computed based on the actual survival
status of the historical item at the time of measurement, as well as on how the
health of this item compared to the health of the entire population at similar oper-
ating times. These two considerations are detailed in the following sections.
A historical dataset is considered complete if the monitored item has reached fail-
ure when removed from operation. Let i = 1, 2, …, m and m represent the number
Utilising Reliability and Condition Monitoring Data for Asset Health Prognosis 93
of monitored historical items. If item i has reached failure before repair or re-
placement, its survival probability is assigned a value of 1 up until its failure time
step, Ti, and a value of 0 thereafter:
$$S_{KM,i}(t) = \begin{cases} 1, & 0 \le t < T_i \\ 0, & t \ge T_i \end{cases} \qquad (1)$$
Note that we treat all functions discussed here as the true functions estimated
from the given degradation datasets and drop the hat “^” for notational clarity.
A historical dataset is considered suspended if the item has not reached failure but
has been repaired or removed from operation. For such suspended datasets, the
survival probability is similarly assigned a value of 1 up until the time interval in
which survival was last observed. Survival probabilities for subsequent time inter-
vals are computed using a variation of the KM estimator [21] based on the sur-
vival rate of the complete datasets from this moment onwards.
For suspended units which are overhauled/replaced due to non-deterioration
factors (e.g. calendar-time-based suspensions), the modified KM estimator tracks
the cumulative survival probability of the suspended unit i as follows:
$$S_{KM,i}(t) = \begin{cases} 1, & 0 \le t < L_i \\[4pt] \displaystyle\prod_{L_i \le t_j \le t} \left(1 - \frac{d_j}{n_j}\right), & t \ge L_i \end{cases} \qquad (2)$$

where $d_j$ is the number of failures up to time step $t_j$, $n_j$ is the number of units at risk
just prior to time $t_j$ and $L_i$ denotes the time interval in which historical unit $i$ was
last observed to be still surviving.
For suspended units which are repaired/replaced to prevent failures because a
fault has been detected (informative suspensions), the modified KM estimator
calculates the cumulative survival probability of the suspended unit i as follows:
$$S_{KM,i}(t) = \begin{cases} 1, & 0 \le t < L_i \\[2pt] \mu_i, & t = L_i \\[4pt] \displaystyle\prod_{L_i \le t_j \le t} \left(1 - \frac{\mu_i \, d_j}{n_j}\right), & t > L_i \end{cases} \qquad (3)$$

where $\mu_i$ is the health index estimated based on the fault severity of the unit at
repair/replacement and $0 \le \mu_i \le 1$.
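Eqs. (1)–(3) translate directly into code. The Python sketch below assumes the population failure times t_j, failure counts d_j and at-risk counts n_j are available as arrays; the numbers at the bottom are invented purely to exercise the functions.

```python
import numpy as np

def km_complete(T_i, t):
    """Eq. (1): survival of a run-to-failure (complete) history."""
    return 1.0 if t < T_i else 0.0

def km_noninformative(t, L_i, t_j, d_j, n_j):
    """Eq. (2): unit suspended for reasons unrelated to deterioration."""
    if t < L_i:
        return 1.0
    mask = (t_j >= L_i) & (t_j <= t)
    return float(np.prod(1.0 - d_j[mask] / n_j[mask]))

def km_informative(t, L_i, t_j, d_j, n_j, mu_i):
    """Eq. (3): unit suspended because a fault was detected (0 <= mu_i <= 1)."""
    if t < L_i:
        return 1.0
    if t == L_i:
        return mu_i
    mask = (t_j >= L_i) & (t_j <= t)
    return float(np.prod(1.0 - mu_i * d_j[mask] / n_j[mask]))

# Invented population history: failures at these time steps, with n_j units at risk
t_j = np.array([3, 5, 8, 10])
d_j = np.array([1, 2, 1, 1])
n_j = np.array([10, 9, 7, 6])

print(km_complete(T_i=8, t=6))                                    # 1.0
print(km_noninformative(t=9, L_i=4, t_j=t_j, d_j=d_j, n_j=n_j))
print(km_informative(t=9, L_i=4, t_j=t_j, d_j=d_j, n_j=n_j, mu_i=0.7))
```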
Let $Y_i(t)$ be the condition value for item $i$ at operating age $t$ and $\mathbf{Y}(t)$ a vector containing the condition values from all of the $m$ historical items in interval $t$:

$$\mathbf{Y}(t) = [Y_1(t);\ Y_2(t);\ \ldots;\ Y_m(t)] \qquad (4)$$
The PDF of condition values at an interval $t$ is denoted as $f(Y \mid t)$. The overall
survival probability in the case considered is defined as the probability of condition indices not exceeding the failure threshold $Y_{thresh}$:

$$S(t) = \Pr[Y(t) < Y_{thresh}] = \int_{-\infty}^{Y_{thresh}} f(y \mid t)\, dy \qquad (5)$$
The preceding equation shows that the reliability function can be estimated tak-
ing into account the mechanism of change in the condition of each historical item
(Figure 2).
To estimate the specific survival probability for each historical item i, we
successively multiply the probability of the items that have survived the preced-
ing intervals having condition indices higher than the observed index of item i
but lower than the threshold. We assume that the condition value, which re-
presents the degradation of the corresponding asset, will not decrease. This
assumption yields a conservative estimate of the survival probability.
Figure 2 Instantaneous reliability based on historical degradation processes: the portion of the condition value PDF $f(Y \mid t_j)$ below the failure threshold $Y_{thresh}$ gives the probability of survival at $t_j$, from which $R(t)$ and the failure PDF $f(T \mid Y_{thresh})$ follow
Let k = 1, 2, …; then the conditional probability of item i surviving interval t + kΔ is

$$
\begin{aligned}
&\Pr\left[ T_i > t + k\Delta \mid Y_i(t + k\Delta) \ge y_{i,t+k\Delta},\; T_i > t,\; Y_i(t) \ge y_{i,t},\, \ldots \right] \\
&= \prod_{j=1}^{k} \Pr\left[ T_i > t + j\Delta \mid Y_i(t + j\Delta) \ge y_{i,t+j\Delta},\; T_i > t + (j-1)\Delta,\; Y_i\big(t + (j-1)\Delta\big) \ge y_{i,t+(j-1)\Delta},\, \ldots \right] \\
&= \prod_{j=1}^{k} \frac{\Pr\left[ T_i > t + j\Delta,\; Y_i(t + j\Delta) \ge y_{i,t+j\Delta} \mid T_i > t + (j-1)\Delta,\; Y_i\big(t + (j-1)\Delta\big) \ge y_{i,t+(j-1)\Delta},\, \ldots \right]}{\Pr\left[ Y_i(t + j\Delta) \ge y_{i,t+j\Delta} \mid T_i > t + (j-1)\Delta,\; Y_i\big(t + (j-1)\Delta\big) \ge y_{i,t+(j-1)\Delta},\, \ldots \right]} \\
&= \prod_{j=1}^{k} \frac{\Pr\left[ y_{thresh} > Y_i(t + j\Delta) \ge y_{i,t+j\Delta} \mid y_{thresh} > Y_i\big(t + (j-1)\Delta\big) \ge y_{i,t+(j-1)\Delta},\, \ldots \right]}{\Pr\left[ Y_i(t + j\Delta) \ge y_{i,t+j\Delta} \mid y_{thresh} > Y_i\big(t + (j-1)\Delta\big) \ge y_{i,t+(j-1)\Delta},\, \ldots \right]} \\
&= \prod_{j=1}^{k} \frac{\int_{y_{i,t+j\Delta}}^{y_{thresh}} f(y \mid t + j\Delta)\, dy}{\int_{y_{i,t+j\Delta}}^{\infty} f(y \mid t + j\Delta)\, dy},
\end{aligned}
\qquad (6)
$$

where the numerator $\int_{y_{i,t+j\Delta}}^{y_{thresh}} f(y \mid t + j\Delta)\, dy$ is the integral of the conditional PDF between the observed degradation index of item i and the threshold, and the denominator $\int_{y_{i,t+j\Delta}}^{\infty} f(y \mid t + j\Delta)\, dy$ is the integral of the conditional PDF over all possible values equal to or higher than the observed degradation index.
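To illustrate how each factor of Eq. (6) might be evaluated numerically, the sketch below assumes, purely for illustration, that the conditional PDF f(y | t + jΔ) is normal with known parameters at each future interval; in the actual method these densities are estimated from the historical degradation datasets. All numbers are invented.

```python
from scipy.stats import norm

def conditional_survival(y_obs, mus, sigmas, y_thresh):
    """Probability of item i surviving k future intervals (Eq. 6).

    y_obs[j]          : observed degradation index at t + (j+1)*delta
    mus[j], sigmas[j] : assumed normal parameters of f(y | t + (j+1)*delta)
    Each factor is P(y_obs <= Y < y_thresh) / P(Y >= y_obs).
    """
    p = 1.0
    for y, mu, s in zip(y_obs, mus, sigmas):
        numerator = norm.cdf(y_thresh, mu, s) - norm.cdf(y, mu, s)
        denominator = 1.0 - norm.cdf(y, mu, s)
        p *= numerator / denominator
    return p

# Toy example: degradation index drifting towards a threshold of 0.6
print(conditional_survival(y_obs=[0.30, 0.33, 0.36],
                           mus=[0.32, 0.36, 0.40],
                           sigmas=[0.05, 0.05, 0.05],
                           y_thresh=0.6))
```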
The final estimated survival probability is then the mean of the two survival probability estimates, as given in Eq. (7):

$$S_i(t) = \operatorname{mean}\left[ S_{KM,i}(t),\; S_{PDF,i}(t) \right]. \qquad (7)$$
The training target vector for historical item i, denoted here by Di, consists of
the estimated survival probability in the h successive intervals:
$$
D_i(t) =
\begin{bmatrix}
S_i(t + \Delta) \\
S_i(t + 2\Delta) \\
\vdots \\
S_i(t + h\Delta)
\end{bmatrix}. \qquad (8)
$$
During training, the input and target vectors of the training sets are repetitively
presented to the neural network. The network attempts to produce output values
which are as close as possible to the target vectors. After training, when a series of
condition indices at the current time t and d previous time steps
$$
y(t) =
\begin{bmatrix}
Y(t) \\
Y(t - \Delta) \\
Y(t - 2\Delta) \\
\vdots \\
Y(t - d\Delta)
\end{bmatrix} \qquad (9)
$$
are fed into the input nodes, the network will produce an output vector
$$
O(t) =
\begin{bmatrix}
\hat{S}(t + \Delta) \\
\hat{S}(t + 2\Delta) \\
\vdots \\
\hat{S}(t + h\Delta)
\end{bmatrix}, \qquad (10)
$$
which can be plotted as the survival curve for that unit, estimated at time t. As the
next set of input values becomes available, a new updated output vector will be
produced, generating a new survival probability curve.
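The construction of the training pairs in Eqs. (8)–(10) amounts to a sliding-window operation. A minimal sketch for a single historical unit follows; the window lengths d and h and the synthetic series are illustrative values only.

```python
import numpy as np

def make_io_pairs(cond, surv, d=2, h=3):
    """Sliding-window input/target pairs for the survival-prediction network.

    cond[t] : condition index of a historical unit at time step t
    surv[t] : estimated survival probability at time step t
    Inputs follow Eq. (9) (current value plus d previous values); targets
    follow Eq. (8) (the h successive future survival probabilities).
    """
    X, D = [], []
    for t in range(d, len(cond) - h):
        X.append(cond[t - d:t + 1][::-1])   # [Y(t), Y(t-delta), ..., Y(t-d*delta)]
        D.append(surv[t + 1:t + h + 1])     # [S(t+delta), ..., S(t+h*delta)]
    return np.array(X), np.array(D)

# Toy usage with a synthetic degradation trend and survival estimate
cond = np.linspace(0.1, 0.9, 20)
surv = np.linspace(1.0, 0.2, 20)
X, D = make_io_pairs(cond, surv)
print(X.shape, D.shape)   # (15, 3) (15, 3)
```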
2 Model Validation
As the prediction output of the proposed model is survival probabilities, the exact
predicted failure times are not represented. For evaluation purposes, the predicted
failure time was identified by noting the first output unit which predicted a survival
probability of less than 0.5; each time step is 10 days. Table 1 shows the prediction results of the first test set, in which the actual failure time was at t = 600 days. Survival probabilities for the first 11 time steps are not presented because the pump was still in normal operation. Figure 3 shows the interpolated input data and a graphical representation of the predicted survival probability at selected time steps.
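The failure-time read-out used for evaluation can be sketched directly from the predicted survival curve; the survival values in the usage example are invented for illustration.

```python
import numpy as np

def predicted_failure_time(surv_curve, t0, step=10, level=0.5):
    """First future time at which predicted survival drops below `level`.

    surv_curve : network outputs [S(t0+step), S(t0+2*step), ...] in days
    step       : length of one time step (10 days in this study)
    """
    below = np.nonzero(np.asarray(surv_curve) < level)[0]
    return None if below.size == 0 else t0 + (below[0] + 1) * step

# Invented survival outputs for a prediction made at t0 = 560 days
print(predicted_failure_time([0.81, 0.74, 0.62, 0.48, 0.31], t0=560))
```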
Table 1 Survival probabilities predicted by the proposed model at selected time steps t (in days) for test set 1; each column lists the successive outputs of the prediction made at that time step

t = 200 t = 210 t = 220 t = 230 t = 240 t = 250 t = 260 t = 270 t = 280 t = 290 t = 300 t = 310 t = 320 t = 330 t = 340 t = 350
0.83 0.83 0.84 0.84 0.85 0.85 0.85 0.84 0.84 0.83 0.82 0.83 0.84 0.84 0.83 0.83
0.83 0.83 0.83 0.84 0.84 0.85 0.84 0.84 0.83 0.82 0.81 0.82 0.83 0.83 0.82 0.82
0.83 0.83 0.83 0.84 0.84 0.84 0.84 0.83 0.83 0.82 0.81 0.82 0.82 0.82 0.82 0.81
0.82 0.82 0.82 0.83 0.83 0.84 0.83 0.83 0.82 0.81 0.80 0.81 0.81 0.81 0.81 0.81
0.82 0.82 0.82 0.82 0.83 0.83 0.83 0.82 0.82 0.81 0.80 0.80 0.81 0.81 0.81 0.80

t = 360 t = 370 t = 380 t = 390 t = 400 t = 410 t = 420 t = 430 t = 440 t = 450 t = 460 t = 470 t = 480 t = 490 t = 500 t = 510
0.82 0.81 0.81 0.81 0.80 0.81 0.80 0.80 0.79 0.79 0.76 0.73 0.70 0.67 0.64 0.63
0.81 0.81 0.81 0.80 0.80 0.79 0.79 0.78 0.77 0.75 0.71 0.67 0.64 0.62 0.61 0.60
0.81 0.80 0.80 0.80 0.80 0.80 0.79 0.78 0.77 0.75 0.71 0.67 0.63 0.60 0.59 0.59
0.81 0.80 0.80 0.80 0.79 0.79 0.78 0.76 0.75 0.74 0.70 0.66 0.62 0.58 0.57 0.55
0.80 0.80 0.80 0.80 0.79 0.78 0.78 0.76 0.75 0.73 0.69 0.64 0.59 0.55 0.54 0.52
Figure 3 Graphical representation of prediction output by the proposed model at selected time
steps for test set 1 in Assessment I
The bearings in the training sets still had a certain amount of remaining useful life at replacement. This short time discrepancy may have introduced a slight bias into the failure data modelling. The bearing in this test set might have been run to a higher level of defect severity before being replaced, so its failure point appears later in the lifetime than the typical failure point the proposed ANN has learned to recognise. In fact, test set No. 1 has a longer period of decreasing vibration RMS value at the end of the bearing life compared to the training sets. This observation suggests that the bearing in test set 1 may indeed have been left running to a higher stage of damage than the bearings in the failure training sets.
The prediction results of the proposed model were compared with those of the
following models:
• FFNN with the same structure and training function but trained with the false
assumption that suspension times were failure times (Model A);
• FFNN with the same structure and training function but trained using only
complete failure datasets (Model B); and
• one-step-ahead time series prediction (Model C).
The test consisted of three assessments. In Assessment I, all 6 complete data-
sets and 16 suspended ones were made available for model training. In Assess-
ment II, only 3 complete training sets and the 16 suspended ones were used. In the last assessment, only 1 complete training set and the 16 suspended training sets were used.
The prediction results of the proposed model were also compared with those of
a recurrent neural network (RNN) which approached machine health prognosis as
a time-series prediction (Model C). RNNs are among the most commonly reported artificial intelligence prognostic models [20]. Based on the condition values in the failure datasets, a threshold value of 0.6 was selected. The RNN selected for comparison here is an Elman network with a Levenberg–Marquardt backpropagation training function and nine hidden nodes, predicting one step ahead. This structure was selected as the best trade-off between structural complexity, prediction horizon length and prediction accuracy, obtained through a post-training regression analysis.
For comparison of the proposed model with Models A, B and C, we define a
penalty function which considers the mean prediction accuracy and the prediction
horizon of a prognostic model:
$$p(y) = \frac{1}{c} \sum_{j=1}^{c} p_g(y_j) + p_h(y), \qquad (11)$$
The prediction accuracy function $p_g$ measures the discrepancy between the actual failure time $T$ and the predicted failure time $\hat{T}$ in each test set:

$$
p_g(y) =
\begin{cases}
\alpha (T - \hat{T}), & \hat{T} < T \\
0, & \hat{T} = T \\
\beta (\hat{T} - T), & \hat{T} > T
\end{cases}
\qquad (12)
$$

The prediction horizon function is

$$p_h(y) = e^{-\lambda h}. \qquad (13)$$
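A direct reading of Eqs. (11)–(13) might be implemented as in the sketch below; the weights α, β and λ are assumed values, with late predictions typically penalised more heavily than early ones.

```python
import numpy as np

def penalty(T_act, T_pred, h, alpha=1.0, beta=2.0, lam=0.1):
    """Penalty of a prognostic model over c test sets (Eqs. 11-13).

    T_act, T_pred : actual and predicted failure times for each test set
    h             : prediction horizon length
    alpha, beta, lam are assumed weighting constants.
    """
    pg = [alpha * (a - p) if p < a else beta * (p - a)
          for a, p in zip(T_act, T_pred)]          # Eq. (12); 0 when p == a
    return np.mean(pg) + np.exp(-lam * h)          # Eqs. (11) and (13)

print(penalty(T_act=[600, 580], T_pred=[590, 600], h=10))
```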
3 Conclusions
Acknowledgements The authors gratefully acknowledge the financial support from the QUT
Faculty of Built Environment and Engineering and the Cooperative Research Centre for Inte-
grated Engineering Asset Management (CIEAM). Thanks are also due to the Centre for Mainte-
nance Optimization and Reliability Engineering (C-MORE) at the University of Toronto and to
Irving Pulp and Paper for generously providing the pump data and contributing to the model
improvement.
References
[1] Goode KB, Moore J, et al (2000) Plant machinery working life prediction method utilizing
reliability and condition-monitoring data. Proc Inst Mech Eng 214:109–122
[2] Jardine AKS, Anderson M (1985) Use of concomitant variables for reliability estimation.
Maint Manage Int 5:135–140
[3] Jardine AKS, Anderson PM, et al (1987) Application of the Weibull proportional hazards
model to aircraft and marine engine failure data. Qual Reliab Eng Int 3:77–82
[4] Banjevic D, Jardine AKS (2006) Calculation of reliability function and remaining useful life
for a Markov failure time process. IMA J Manage Math 17(2):115–130
[5] Sundin PO, Montgomery N, et al (2007) Pulp mill on-site implementation of CBM decision
support software. In: Proceedings of the international conference of maintenance societies,
Melbourne, Australia
[6] Wang W (2002) A model to predict the residual life of rolling element bearings given moni-
tored condition information to date. IMA J Manage Math 13(1):3–16
[7] Wang W, Zhang W (2005) A model to predict the residual life of aircraft engines based upon oil analysis data. Naval Res Logist 52:276–284
[8] Heng A, Zhang S, Tan ACC, Mathew J (2009) Rotating machinery prognostics: state of the
art, challenges and opportunities. J Mech Syst Signal Process 23:724–739
[9] Kothamasu R, Huang SH, VerDuin WH (2006) System health monitoring and prognostics –
a review of current paradigms and practices. Int J Adv Manuf Technol 28:1012–1024
[10] Jardine AKS, Lin D, Banjevic D (2006) A review on machinery diagnostics and prognostics
implementing condition-based maintenance. Mech Syst Signal Process 20:1483–1510
[11] Vlcek BL, Hendricks RC, Zaretsky EV (2003) Determination of rolling-element fatigue life
from computer generated bearing tests. Tribology Transactions, 46(4):479–493, Oct 2003
[12] Groer PG (2000) Analysis of time-to-failure with a Weibull model. In: Proceedings of the Maintenance and Reliability Conference, Knoxville, TN, USA, 59.01–59.04
[13] Schomig A, Rose O (2003) On the suitability of the Weibull distribution for the approxima-
tion of machine failure. Proceedings of the conference on industrial engineering research,
Portland OR, June 2003
[14] Heng A, Tan ACC, Mathew J, Jardine AKS (2009) Intelligent condition based prediction of machine reliability. J Mech Syst Signal Process 23:1600–1614
[15] Li Y, Kurfess TR, Liang SY (2000) Stochastic prognostics for rolling element bearings.
Mech Syst Signal Process 14(5):747–762
[16] Qiu J, Seth BB, Liang SY, Zhang C (2002) Damage mechanics approach for bearing lifetime prognostics. Mech Syst Signal Process 16(5):817–829
[17] Roemer MJ, Byington CS, Kacprznski GJ, Vachtsevanos G (2005) An overview of selected
prognostic technology with reference to an integrated PHM architecture. Proceedings of
ISHEM forum, Napa Valley, CA, Nov 7–10, 2005, 65
[18] Huang R, Xi L, Li X, Richard Liu C, Qiu H, Lee J (2007) Residual life predictions for ball
bearings based on self-organizing map and back propagation neural network methods. Mech
Syst Signal Process 21:193–207
[19] Wang P, Vachtsevanos G (2001) Fault prognostics using dynamic wavelet neural networks.
Artif Intell Eng Des Anal Manuf 15:349–365
[20] Tse P, Atherton D (1999) Prediction of machine deterioration using vibration based fault
trends and recurrent neural networks. Trans ASME J Vibrat Acoust 121(3):355–362
[21] Kaplan EL, Meier P (1958) Nonparametric estimation from incomplete observations. J Am
Stat Assoc 53:457–481
Vibration-Based Wear Assessment
in Slurry Pumps
Abstract Centrifugal slurry pumps are widely used in various industries, includ-
ing Canada’s oil sands industry, to move mixtures of solids and liquids, typically
from mine sites to central processing facilities. In highly abrasive applications,
such as oil sand slurry, wear of wetted components is the main failure mode of the
pumps, and impellers are often the shortest-lived components. An accurate, non-
intrusive assessment of component wear in slurry pumps has yet to be developed.
This paper will outline a non-destructive vibration-based diagnosis platform based
on a novel hypothesis that a specific pattern of vibration – resulting from wear-
induced pressure pulsation alteration – can be observed and recorded. Specifically,
this method quantifies impeller vane trailing edge damage by analysing the ampli-
tude at the vane passing frequency (VPF) of vibration data. To counter data vari-
ability, we employ a combination of three approaches to analyse the acquired
vibration data according to the hypothesis.
First, a cumulative amplitude measure was evaluated from VPF amplitudes by
employing auto-scaling of time-domain vibration data followed by fast Fourier
transform (FFT). Second, an amplitude measure was evaluated from the first
component at VPF after utilizing principal component analysis (PCA) on mul-
tichannel time-domain data. Finally, an amplitude measure was evaluated from
the first component at VPF after utilizing PCA on frequency-domain data. It was
__________________________________
G. Mani
University of Alberta, Canada
D. Wolfe
Syncrude Research Centre, Canada
X. Zhao
University of Alberta, Canada
M.J. Zuo
University of Alberta, Canada
e-mail: ming.zuo@ualberta.ca
found that the final measure had great potential to be used for the identification
and estimation of impeller damage due to wear since its values followed the pro-
gression of the impeller damage. A viable wear assessment method based on this
platform can potentially be used to discern the extent of wear damage on a slurry
pump impeller.
1 Introduction
Centrifugal slurry pumps are widely used in mining, ore processing, waste treat-
ment, cement production and other industries. In oil sands operations, they are
crucial in moving the raw material for bitumen extraction and tailings disposal.
Maintaining and extending their useful life is thus essential to the reliable opera-
tion of these processes. Slurry pumps are subject to wear due to the existence of
solid particles in the pumped media. Consequently, they require regular mainte-
nance throughout their life, in contrast to regular centrifugal pumps, which can last
for years between repairs. Even with scheduled maintenance, undetected wear of
wetted components can result in costly unscheduled outages of slurry pumps.
Unscheduled outages cost oil sand companies millions of dollars each year.
Sophisticated on-line assessment of the wear status of wetted components in
slurry pumps thus has the potential to generate significant cost savings for slurry
pump operators. Reported studies on slurry pumps focus on improvement of their
design and understanding of wear mechanisms. As reported in [2], in a case study
conducted for a 10 × 14 in. pump in a fluid catalytic cracking unit (FCCU), the
initial cost of a fully lined pump was higher compared to conventional American
Petroleum Institute (API) pumps, but over a 6-year evaluation life, the total cost
(capital cost plus maintenance, repair and replacement parts) was 45 % lower.
Engin [3, 4] has studied the effect of solids on the performance of slurry pumps.
Liu et al. [5] investigated the erosive wear of the impellers and liner of centrifugal
slurry pumps. They studied the eroded material surfaces of impellers and liners
with a scanning electron microscope (SEM).
Some research work has been reported that deals with the investigation of dif-
ferent wetted components. Ridgway et al. [6] consider the life cycle tribology of
the slurry pump gland seal. Slurry pumps are commonly used in mineral process-
ing to transport two-phase mixtures of liquids and solid particles. The authors
concluded that the particle properties significantly influenced seal failure. They
also developed a hypothesis on gland seal failure and wear in a slurry environ-
ment, discussed alternative methods to quantify the wear including empirical and
experimental approaches, and presented some preliminary results from the work.
Khalid and Sapuan [7] focused on impeller wear patterns. They fabricated a wear
testing rig for a water pump impeller and selected a parameter that could be used
to determine the wear of slurry pump impeller as a function of operating hours.
Their main findings were that (a) erosion is the dominant type of wear, (b) the
weight loss of an impeller is due to material removal from the impeller as a result
of erosive wear, (c) the diameter loss of an impeller is attributed to the impinge-
ment of solid particles on the impeller vane trailing edge, and (d) the surface to-
pography under a microscope indicates that the region near the centre (vane lead-
ing edge) of the impeller encounters less wear compared to the region at the rim
(vane trailing edge) of the impeller.
In spite of all these findings, relatively limited research has been conducted in
the development of condition monitoring of slurry pumps [1], particularly using
non-invasive techniques. In this paper, we present a non-destructive wear assess-
ment technique based on vibration monitoring for damage assessment of impel-
lers, specifically of the vane trailing edge. Vane trailing edge wear is one of the
most important wear modes in pump impellers. The technique is based on a novel
hypothesis that connects two different phenomena: (a) pressure pulsation altera-
tion due to trailing edge wear and (b) ensuing vibration response.
Our hypothesis bridges the gap between knowledge gained from earlier pump research and a possible method for unobtrusive analysis of impeller wear patterns in slurry pumps. In particular, the studies of Srivastav [11] and Hodkiewicz [13] as
discussed above are relevant here. Both studies – one using vibration analysis and
the other using pressure pulsation – were done to focus on improvement of pump
design in terms of the radial gap between the impeller and the volute.
In our study here, we hypothesize that vane trailing edge wear of the impeller –
a very common form of wear in slurry pumps – will cause an effective increase of
‘periodic’ radial gap between the impeller and the volute. The term ‘periodic’
refers to the VPF. This increase will cause flow alteration, leading to a reduction
of pressure pulsations at the VPF, which in turn will manifest in the outside meas-
ured vibrations. Therefore, we expect a reduction in amplitude of the VPF compo-
nent in the frequency domain when trailing edge wear occurs. Note that we as-
sume all the vanes/blades of the impeller will experience identical amounts of
damage simultaneously.
The primary aim of this work was to develop a non-invasive technique for wear
assessment of slurry pump components that could be easily implemented while the
pumps are in service. It has been well established that machinery damage or de-
fects often manifest in vibrations. Most studies of machinery vibrations focus on
vibrations generated by mechanical damage in components such as bearings,
shafts or seals. Fluid interaction with mechanical components is an additional
aspect of pumps that can have an impact on perceived vibration from outside the
impeller casing, and this is the focus of this paper. The slurry pump monitored in
the experiments presented was run with a series of impellers with different levels
of artificially created wear. The damage progression levels are considered to be
slight, moderate and severe. The vibration data are measured in a non-intrusive
manner by sensors installed at three different locations outside the pump. Ampli-
tude measures are evaluated from vane pass frequency amplitudes by employing
three different approaches.
The experimental system for this study enabled pump speed, flow rate, slurry
density and inlet pressure to be controlled while using wetted components with
various levels of damage. The collected data include, e.g., vibration, acoustic,
pressure, flow rate and motor current. However, the focus of this paper is vibration
signal analysis.
Figure 3 Schematic of trailing edge vane damage levels (Aulakh and Wu, 2006)1
3 Signal Processing
To validate the hypothesis proposed in this paper, the vibration signals obtained
from experiments were numerically processed in the time and frequency domains
to evaluate measures that are representative of impeller wear in a slurry pump. This
procedure comprised a number of stages, as depicted in Figure 4. We employed a combination of three approaches to analyse the data: given that the system was very complex and considerable data variability was expected, multiple approaches were expected to yield more robust wear identification and estimation.
1 Amit S Aulakh and Siyan Wu, Slurry Pump CBM Project, Progress Report 35 (09), Syncrude Canada Ltd., Edmonton, Alberta, Canada, August 21, 2006.
Figure 4 Stages of the signal processing procedure: experiment (acquire multichannel vibration data), preprocessing (filter/normalize), …, confidence analysis and decision making

[Figure: amplitude spectrum of the acquired vibration data, amplitude (g) versus frequency as a multiple of pump speed (0–8), with the VPF component marked]
PCA is central to the study of multivariate data and is extremely versatile with
applications in many disciplines [19]. PCA continues to be the subject of much
research, ranging from new model-based approaches to algorithmic ideas from
neural networks. PCA has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension. Since such patterns are hard to find where the luxury of graphical representation is not available, PCA is a powerful tool for analysing this type of data.
The following steps are typically followed to calculate principal components: (1) assemble the multichannel data into a matrix; (2) subtract the mean from each channel; (3) compute the covariance matrix; (4) compute its eigenvalues and eigenvectors; (5) order the eigenvectors by decreasing eigenvalue; and (6) project the data onto the ordered eigenvectors to obtain the principal components.
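A minimal implementation of these steps, assuming the nine vibration channels are arranged column-wise in a samples-by-channels matrix, might look as follows (the usage data are random toy values):

```python
import numpy as np

def principal_components(X):
    """PCA of a (samples x channels) matrix via the covariance matrix."""
    Xc = X - X.mean(axis=0)                # steps 1-2: assemble, remove mean
    C = np.cov(Xc, rowvar=False)           # step 3: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)   # step 4: eigen-decomposition
    order = np.argsort(eigvals)[::-1]      # step 5: order by variance
    return Xc @ eigvecs[:, order], eigvals[order]   # step 6: project data

# Toy usage: 1000 samples of nine correlated channels
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 9))
scores, variances = principal_components(X)
print(scores.shape, variances[:3])   # the first PCs carry most of the variance
```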
Trendafilova et al. [20] used PCA for feature selection using frequency-domain vibration data in an effort to detect faults in aircraft wings. Huang [21] used PCA for visualising multivariate process data.
Figure 6 Vibration data: 1800 RPM, undamaged impeller: (a) sensor 1, x direction, (b) sensor 1, y direction, (c) sensor 1, z direction, (d) sensor 2, x direction, (e) sensor 2, y direction, (f) sensor 2, z direction, (g) sensor 3, x direction, (h) sensor 3, y direction, and (i) sensor 3, z direction
Figure 7 Application of PCA on vibration data: 1800 RPM, undamaged impeller: (a) first principal component (PC), (b) second PC, (c) third PC, (d) fourth PC, (e) fifth PC, (f) sixth PC, (g) seventh PC, (h) eighth PC, and (i) ninth PC
[Figure: frequency-domain responses of the nine acquired vibration signals (a)–(i), amplitude (g) versus frequency as a multiple of pump speed (0–10)]
Figure 9 Application of PCA on frequency-domain responses of acquired vibration data: (a) first principal component (PC), (b) second PC, (c) third PC, (d) fourth PC, (e) fifth PC, (f) sixth PC, (g) seventh PC, (h) eighth PC, and (i) ninth PC
Figure 10 Amplitude of vane pass frequency component for 1800 RPM: (a) all nine signals
from the three sensors, and (b) cumulative amplitude
Figure 11 Cumulative amplitude of the VPF component at 1800, 2200 and 2600 RPM as wear progresses
These values were then added to reduce variability, thereby obtaining the ‘cumula-
tive amplitude’ measure as depicted in Figure 10b. The amplitude values of dam-
aged cases and baseline cases (cases with undamaged impellers) indicate that the
trend is quite consistent. The trend can be seen even more clearly in the plot of the
cumulative amplitude measures (Figure 10b). The trend shows that a pump with a
worn impeller can clearly be discerned from one with an undamaged impeller.
This finding was validated by testing the signal processing procedure on data
collected at different pump speeds, as illustrated in Figure 11.
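A sketch of this first approach (auto-scaling, FFT, VPF amplitude extraction and summation over the measured signals) is given below; the sampling frequency and the assumed five-vane impeller are illustrative values, not taken from the experiments.

```python
import numpy as np

def vpf_amplitude(x, fs, pump_rpm, n_vanes=5):
    """Amplitude of the vane-passing-frequency component of one signal."""
    x = (x - x.mean()) / x.std()                 # auto-scaling
    spec = np.abs(np.fft.rfft(x)) * 2 / len(x)   # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    vpf = pump_rpm / 60.0 * n_vanes              # vane passing frequency (Hz)
    return spec[np.argmin(np.abs(freqs - vpf))]

def cumulative_amplitude(signals, fs, pump_rpm, n_vanes=5):
    """Sum of the VPF amplitudes over all measured signals (nine here)."""
    return sum(vpf_amplitude(x, fs, pump_rpm, n_vanes) for x in signals)

# Toy usage: a synthetic signal with a tone at the VPF (150 Hz at 1800 RPM)
fs, rpm = 5000, 1800
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 150 * t) + 0.1 * np.random.randn(t.size)
print(vpf_amplitude(sig, fs, rpm))
```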
In the time-domain PCA approach, the amplitude of the peak VPF was obtained for the most significant PC calculated for each test scenario. The result (Figure 12) clearly shows the expected decreasing trend, except for the 2200-RPM case with a severely worn impeller, which increases slightly over the moderately worn impeller case. Nevertheless, lower-level wear (undamaged or slight) can easily be discerned from higher-level wear (moderate or severe). Results of the frequency-domain PCA approach are shown in Figure 13, and similar observations can be made: the value for severely worn impeller cases is slightly higher than that for moderately worn impeller cases at both 2200 and 2600 RPM. In Figures 11–13, absolute monotonic trends cannot be obtained because the vibrations are generated by complex fluid and impeller interactions; however, the roughly monotonic trends still provide useful indications of impeller damage growth.
Figure 12 Time-domain PCA application – VPF amplitude of the first PC at 1800, 2200 and 2600 RPM as wear progresses
Figure 13 Frequency-domain PCA application – VPF amplitude of the first PC at 1800, 2200 and 2600 RPM as wear progresses
In Figures 14–16, VPF amplitudes of damaged cases are plotted and normal-
ized with respect to the undamaged case. The first approach is depicted in Fig-
ure 14, where the average and standard deviation are illustrated for the cumulative
amplitude approach. The average values at all pump speeds show a reduction in amplitude as wear progresses.
Figure 14 Cumulative amplitude reduction as wear progresses; each bar represents average of
VPF values at all speeds; each vertical line represents the standard deviation of those values
Figure 15 Time-domain PCA application – reduction of VPF amplitude of first PC as wear
progresses; each bar represents average of VPF values at all speeds; each vertical line represents
the standard deviation of those values
Figure 16 Frequency-domain PCA application – reduction of VPF amplitude of first PC as
wear progresses; each bar represents average of VPF values at all speeds; each vertical line
represents the standard deviation of those values
Wear on the vane trailing edge was found to alter a specific component of the measured vibration of the system. This specific component is the VPF component as predicted
by our hypothesis. The VPF component can be monitored to identify the extent of
wear on the vane trailing edge. In terms of estimation, higher-level damage can be
clearly distinguished from lower-level damage by a significantly diminished VPF
amplitude.
5 Conclusion
References
[1] Volk MW (2005) Pump characteristics and applications, 2nd edn. CRC, Boca Raton, FL
[2] Orchard B, Moreland C, Warne C (2007) Optimizing the working life of hydrocarbon
slurry pumps. World Pumps 492:50–54
[3] Engin T, Gur M (2003) Comparative evaluation of some existing correlations to predict
head degradation of centrifugal slurry pumps. J Fluids Eng 125:149–157
[4] Engin T (2007) Prediction of relative efficiency reduction of centrifugal slurry pumps:
empirical- and artificial-neural network-based methods. J Power Energy A Proc Inst Mech
Eng 221:41–50
[5] Liu J, Xu H, Qi L, Li H (2004) Study on erosive wear and novel wear-resistant materials
for centrifugal slurry pumps. In: Proceedings of the ASME conference on heat trans-
fer/fluids engineering, 11–15 July 2004, Charlotte, NC
[6] Ridgway N, O’Neill B, Colby C (2005) The life cycle tribology of slurry pump gland seals.
In: 18th international conference of fluid sealing, 12–14 October 2005, Antwerp, Belgium
[7] Khalid YA, Sapuan SM (2007) Wear analysis of centrifugal slurry pump impellers. Ind
Lubricat Tribol 59(1):18–28
[8] Rodriguez CG, Egusquiza E, Santos IF (2007) Frequencies in the vibration induced by the
rotor stator interaction in a centrifugal pump turbine. J Fluids Eng 129:1428–1435
[9] Wang J, Hu H (2006) Vibration-based fault diagnosis of pump using fuzzy technique.
Measurement 39:176–185
[10] Abbot P, Gedney C, Morton D, Celuzza S, Dyer I, Ehlers P, Vaicaitis R, Brown J,
Guinzburg A, Hodgson W (2000) Vibration and acoustic evaluation of a large centrifugal
wastewater pump, Part 1: Background and experiment. American Society of Mechanical
Engineers, Noise Control and Acoustics Division (Publication) NCA, 27:243–252, 2000
[11] Srivastav OP, Pandu KR, Gupta K (2003) Effect of radial gap between impeller and dif-
fuser on vibration and noise in a centrifugal pump. J Inst Eng India Mech Eng Div
84(1):36–39
[12] Weissgerber C, Day MW (1980) Reduction of pressure pulsations in fan pumps. TAPPI
63(4):143–146
[13] Hodkiewicz MR, Norton MP (2002) The effect of change in flow rate on the vibration of
double-suction centrifugal pumps. Proc Inst Mech Eng E J Process Mech Eng 216:47–58
[14] Guo SJ, Maruta Y (2005) Experimental investigations on pressure fluctuations and vibra-
tion of the impeller in a centrifugal pump with vaned diffusers. JSME Int J Ser B Fluids
Thermal Eng 48(1):136–143
[15] Rzentkowski G, Zbroja S (2000) Experimental characterization of centrifugal pumps as an
acoustic source at the blade-passing frequency. J Fluids Struct 14:529–558
[16] Morgenroth M, Weaver DS (1998) Sound generation by a centrifugal pump at blade pass-
ing frequency. J Turbomach Trans ASME 120(4):736–743
[17] Mani G, Wolfe D, Zhao X, Zuo MJ (2008) Slurry pump wear assessment through vibration
monitoring. In: Proceedings of WCEAM-IMS, 27–30 October, Beijing, China
[18] Sohn H, Farrar CR (2001) Damage diagnosis using time series analysis of vibration sig-
nals. Smart Mater Struct 10:446–451
[19] Jolliffe IT (2002) Principal component analysis, 2nd edn. Springer Series in Statistics,
Springer Berlin Heidelberg New York
[20] Trendafilova I, Cartmell MP, Ostachowicz W (2008) Vibration-based damage detection in
an aircraft wing scaled model using principal component analysis and pattern recognition.
J Sound Vibrat 313:560–566
[21] Huang X (2008) Visualizing principal components analysis for multivariate process data.
J Qual Technol 40(3):299–309
[22] Deng JS, Wang K, Deng YH, Qi GJ (2008) PCA-based land-use change detection and analysis using multitemporal and multisensor satellite data. Int J Remote Sens 29(16):4823–4838
The Concept of the Distributed Diagnostic
System for Structural Health Monitoring
of Critical Elements of Infrastructure Objects
Jedrzej Maczak
1 Introduction
__________________________________
J. Maczak
Institute of Automotive Engineering, Poland
e-mail: jma@mechatronika.net.pl
One of the most popular methods of determining the stress in machine design is
tensometry. Properly used, tensometry allows for stress/strain assessment at places
of applied strain gauge. This method could be adopted to measure the load applied
to a given structure. The only problem is that tensometric measurements are rela-
tive to some base measurement, usually the first measurement taken after applying
a strain gauge to the structure. This means that it is possible to obtain only incre-
mental stress measurements, not total stress values. For a new structure, tensometric methods can nevertheless be used, since strain gauges can be glued to the structure while a minimal or known load is applied. Alternatively, it is necessary to build a mathematical model of the structure with a distributed load for static load assessment and to determine the critical elements of the structure for proper placement of strain gauges.
An extension of classical tensometry is fibre-optic tensometry. Instead of using
strain gauges, it uses Bragg gratings connected by fibre optics. Using optical lines
simplifies cabling as several gauges can be added to the same fibre-optic line.
Tensometric methods are relatively inexpensive and widely used, so they are easily adapted for automatic on-line monitoring of the load applied to steel structures. The only remaining problem is the proper selection of critical points for installing strain gauges and the determination of border (limiting) values. On the other hand, the adoption of tensometric methods for existing prestressed concrete structures is very limited: there is usually no way to apply strain gauges to the cables and, worse, the load on these cables (the prestressing force) is usually unknown for old structures.
The prestressing force of old existing structures made of prestressed concrete is
very hard to evaluate because there are currently no ‘off-the-shelf’ methods that
one could apply. A very promising method currently in the development stage is
based on an analysis of the dynamic response of a structure such as a bridge [1]. The method analyses amplitude modulation phenomena in the vibroacoustic signal caused by the impact of a modal hammer or any other source of excitation. Preliminary tests show that it is possible to develop a diagnostic model that, contrary to currently used models, allows the relationships between the stress distribution in the transverse section and the parameters of the vibroacoustic signal to be analysed [2]. The basis of the model is the assumption that the initial prestress in the bent beam is accompanied by dispersion phenomena that cause
changes in the wave propagation parameters, mainly differences between group
and phase velocities. These changes engender modulation phenomena in the spec-
trum of beam acceleration signals. Assuming that existing damage in a beam would decrease the stress in the transverse section, this should cause measurable changes in the modulating frequencies. These frequency changes depend only on the beam characteristics and beam load and are independent of the excitation value of the signal [3]. The relation between the stress distribution in the concrete
and the steel beams allows one to build inverse diagnostic models and thus to determine qualitative changes in the technical state of the structure, such as the load and stress in the concrete or the prestressing beams.
Another very promising method of determining stress in ferromagnetic materi-
als is based on measurement of the free magnetic field of the construction material
[4, 5, 6]. The magnetic field of a steel construction’s element is related to the
stress concentration and is easily measured. Because this is a free field, there is no
need to magnetize the structure. The author's preliminary experiments using steel material samples confirm the possibility of using this method in monitoring systems. The problem which remains unsolved relates to the effect of disturbances caused by external magnetic fields. This method seems very promising as it is not limited to particular points of the structure, as with strain gauges, but rather allows the stress in whole elements of the structure to be assessed.
A distributed approach allows a single diagnostic centre to supervise objects spread over a large area, which limits costs and manpower. This approach is especially advantageous in cases involving great distances between machines and a diagnostic technician [8]. This method is thus limited only by network availability and performance.
This concept could be easily adopted for on-line monitoring of infrastructure
objects. A distributed diagnostic system (Fig. 1) is a network of intelligent, pro-
grammable units monitoring particular construction elements or machines (Fig. 2).
These units are built in accordance with the microprocessor controller’s capabili-
ties and are equipped with signal conditioning circuits well matched to the signal
sensors, measuring values linked to the object’s technical state. All controllers are
linked to the database which stores information about changes in the construction
technical state. This database is accessible to the technical staff overseeing the
diagnosed infrastructure objects who are able to make appropriate decisions re-
garding use of the system. These local networks are easily expanded into larger
e-monitoring networks (Fig. 3).
Local diagnostic units usually have the ability to communicate with their envi-
ronment using either TCP/IP or CAN networks for the purpose of informing users
and the managing unit about a structure’s current technical state or load and the
decisions made regarding use. TCP/IP networks are increasingly able to authorize
external access to the system.

Figure 2 Structure of a local diagnostic unit: acquisition threads (diagnostic and process signal acquisition from sensors in the external environment), signal analysis (calculation of signal estimates), information storage (database of estimates), diagnostic conclusion threads (signalization of the construction state and, where possible, change of construction parameters in reaction to a detected failure, via actuators) and external communication threads (TCP/IP connection to a programmable automation controller and database), all connected by an internal system bus with an interthread communication service

It is also very easy on such networks to implement
the automatic messaging module (e.g. e-mail, SMS) informing authorized person-
nel about current problems with monitored objects. The network could also be
used for communication with an external database storing processed measurement
results and information about a structure’s current technical state. Such a solution
would serve to release the controller from the necessity of handling the local data-
base and reduce the limitation imposed by hardware requirements.
The exact structure of the system and number of database units depend on the
type, size and number of infrastructure objects being monitored. Data from similar
objects could be stored in a single database, allowing for easy comparison of the
diagnostic data. If a main diagnostic centre exists, then a central database could be
established. The database of stored results allows diagnostic technicians to view historical trends and to modify diagnostic algorithms; it also makes it possible to compare the behaviour of objects of the same type.
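As an illustration of the acquire–analyse–store–signal cycle of a local diagnostic unit, a minimal sketch follows; the sensor-reading, estimation and notification functions are placeholders rather than parts of the system described here.

```python
import sqlite3
import time

def monitoring_loop(read_sensor, estimate, notify_staff, limit,
                    db="estimates.db", period_s=60):
    """Minimal local diagnostic unit: acquire -> analyse -> store -> signal.

    read_sensor()  : returns one raw signal block (placeholder)
    estimate(x)    : reduces a signal block to a scalar estimate (placeholder)
    notify_staff(v): messaging module, e.g. e-mail/SMS (placeholder)
    limit          : border (limiting) value for the monitored estimate
    """
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS est (ts REAL, value REAL)")
    while True:
        value = estimate(read_sensor())
        con.execute("INSERT INTO est VALUES (?, ?)", (time.time(), value))
        con.commit()                 # archive for historical trend viewing
        if value > limit:
            notify_staff(value)      # inform authorized personnel
        time.sleep(period_s)         # acquisition period
```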
Signals from different transducers can be used as sources of information about the current technical state of the monitored element. Strain gauges or fibre-optic Bragg gratings and magnetic field transducers can be used for determining the load applied to a structure. Piezoelectric accelerometers can additionally be used to analyse the dynamic behaviour of a structure and to determine the prestressing force in concrete elements [9]. Accelerometer signals can also be used to check an object's technical condition. This is based on the assumption
that the development of the degradation and fatigue processes emerging in infra-
structure objects causes modulation phenomena of measurable dynamic parame-
ters as well as a quantitative and qualitative increase in non-linear effects in sys-
tems in which static loads predominate. Application of these methods requires the
Figure 3 Example structure of an e-monitoring network: monitored structures (× NObj) equipped with diagnostic signal sources (fibre-optic FBG tensometry, classic tensometry, magnetometry, acceleration) and structure parameter control, connected to cRIO real-time controllers; a DSC (datalogging and supervisory control) layer with data backup in case of transmission errors; a system database server (SQL) archiving historical and momentary data; operator consoles (× NOper), signalization units (× NSign), reporting (MS Office ActiveX), optional HMI/SCADA, export of measured values (OPC server) and system diagnostics over UDP, linked by TCP/IP
4 Conclusions
References

Optimising Preventive Maintenance Strategy for Production Lines

1 This research was conducted within the CRC for Integrated Engineering Asset Management, established and supported under the Australian Government's Cooperative Research Centres Programme.
1 Introduction
long term PM schedule for new production lines, Percy et al. [5] postulated a new
Bayesian method based approach but did not develop an applicable algorithm. As
Reliability (or Risk) Based PM (RBPM) is generally more cost-effective than Time Based PM (TBPM), maintenance management has shifted its focus from TBPM to RBPM. Khan and Haddara [9] presented a risk-based maintenance approach composed of risk determination, risk evaluation and maintenance planning for optimising maintenance/inspection strategy. The risk-based
maintenance strategy has been used for a power generation plant [10]. Fault tree
analysis and Monte Carlo simulation are the major methods for probabilistic fail-
ure analysis in maintenance decision making [9]. The effect of PM has not been
investigated adequately. As financial risk is a major issue in maintenance strategy
determination, Kierulff [11] discussed the replacement issues from the financial
point of view. To reduce decision uncertainty, the Proportional Hazard Model
(PHM) based approach has been proposed for optimising Condition-based Main-
tenance (CBM) [12]. This PHM based method is generally used to optimise the
next maintenance time. More sophisticated maintenance optimisation models have
also been developed. For example, Kallen and van Noortwijk [13] proposed an adaptive Bayesian decision model to optimise periodic inspection and replacement policy for structural components. A practical model for determining the optimal PM strategy for production lines over their life-span is yet to be developed. The
major barrier to developing such a model is reliability prediction of production
lines with multiple PM actions over a long operational period. Production lines are
normally complex repairable systems and PM actions on these complex systems
are generally imperfect, i.e. the state of a production line after a PM action is be-
tween “as good as new” and “as bad as old”.
A Split System Approach (SSA) based methodology is developed in this paper
to remove this barrier. SSA was proposed by the authors [14] to predict the reli-
ability of systems with multiple PM actions over multiple intervals. In this paper,
the SSA is used to predict the reliability of production lines with multiple PM
actions. Only serial production lines are considered. A serial production line indi-
cates that the failure of any machine in this production line will cause the failure
of the whole system (production line). Serial production lines are commonplace in
manufacturing industries such as automobile manufacturing factories, food proc-
essing factories and clothes making factories.
The rest of the paper is organised as follows: in Section 2, the concept and
methodology of SSA are reviewed. In Section 3, a methodology for determining
the optimal PM strategy based on SSA is presented, and this is followed by an
example in Section 4. A conclusion is provided in Section 5.
The basic concept of the SSA is to separate repaired and unrepaired components
within a system virtually when modelling the reliability of a system after PM
actions. This concept enables the analysis of system reliability at the component
level, and stems from the fact that generally when a complex system has a PM
action, only some of components are repaired.
The following assumptions were made in developing SSA based models:
Figure 1 (a) An original serial system with M components, virtually split into Part 1 (the repaired components) and Part 2 (the remainder), with reliability functions R1(τ)i and R2(τ)i

Figure 2 Evolution of the system reliability Rs(t) over successive PM intervals Δt1, Δt2, …, Δtn at times t0, t1, …, tn, starting from R0
Note that Eqs. (1) and (2) both describe the reliability of a system which has
been preventively maintained for n times, i.e. these two equations both describe the
conditional probability of survival of a system with n PM intervals. To predict the
overall reliability of a system with multiple PM intervals, the cumulative effect of
multiple PM actions needs to be considered, i.e. the probability of survival of the
repaired components until their individual repair times should be considered [8].
The overall reliability function of a serial system after the first PM action is
$$R_{sc}(\tau)_1 = R_1(\Delta t_1)_0 \, R_s(\tau)_1, \qquad (3)$$
where Rsc(τ)1 is the cumulative reliability of the system after the first PM action.
R1(Δt1)0 is the probability of survival of Part 1 until time t1.
Generally, the overall reliability of the system over the n PM cycles can be ex-
pressed as
$$R_{sc}(\tau)_j = \left[ \prod_{i=1}^{j} R_1(\Delta t_i)_{i-1} \right] R_s(\tau)_j, \qquad (j = 1, 2, \ldots, n), \qquad (4)$$
where Rsc(τ)j is the overall reliability of the system after the jth PM action
(j = 1, 2, …, n).
The authors have also developed a model for calculating the reliability of a sys-
tem with multiple repaired components over multiple PM cycles [16].
Figure 3 Virtual decomposition of a production line into the repaired component(s) and the remainder of the assemblies
A bottom-up approach can be used for analysing the reliability of the produc-
tion line after a production line has been virtually decomposed as shown in Fig-
ure 3. The reliability functions of assemblies are estimated firstly at the component
level using SSA, and then the reliability functions of machines can be estimated at
the assembly level. Finally, the reliability function of the production line can be
estimated at the machine level. For simplification, only the last step is demon-
strated in this paper.
$$C_m = k_m N_T, \qquad (6)$$
where T is the operational period of the production line that an enterprise is inter-
ested in. Typically, T is the life span of the production line. R(T) is the reliability
of the production line at time T. Parameters kr and km are two scale constants. NT is
the required number of PM actions over the period of time T for maintaining the
production line above the reliability level of R(T).
Define the Total Expected Cost (TEC) as the sum of the expected risk-related cost and the expected maintenance-related cost, and the Total Expected Cost Index (TECI) as the TEC divided by km:
$$TEC = C_r + C_m, \qquad (7)$$

where $E[K_r] = \int_0^{\infty} k_r f_r(k_r)\, dk_r$ is the first moment of $K_r$.
4 Example
A PM strategy is required for a period of the next two years for an automated food
production line that has been operating for some time. This production line can be
described as a simplified serial system as shown in Figure 1. Part 1 is composed of
those machines that have very short mean time to failure compared with the re-
mainder of the production line and Part 2 is composed of the remainder of the
production line. The times of critical failures of Part 1 followed a Weibull distri-
bution and were expressed as
$$R_1(\tau)_0 = \exp\left[ -\left( \frac{\tau}{18} \right)^{2.1} \right]. \qquad (14)$$
Part 2 was assumed to have an exponential failure distribution, that is,
$$R_2(\tau)_0 = \exp\left( -\frac{\tau}{400} \right). \qquad (15)$$
In reality, the failure distributions and the parameters of the corresponding fail-
ure distribution functions can be determined based on historical failure data and
maintenance records of the production line.
Hence, the reliability of the entire production line was
$$R_s(\tau)_0 = \exp\left[ -\left( \frac{\tau}{18} \right)^{2.1} - \frac{\tau}{400} \right], \qquad (16)$$
where fc is termed the recovery coefficient, which represents the degree to which the reliability of Part 1 has recovered to its original reliability after a PM action. When fc = 0, the state of Part 1 after a PM action is as good as new; when fc = 1, the state of Part 1 after a PM action is as bad as old; and when 0 < fc < 1, Part 1 has undergone an imperfect repair.
Substituting Eq. (17) into Eq. (1) gives the conditional reliability function of the production line after the jth PM action (j = 1, 2, …, n):

$$R_s(\tau)_j = \frac{R_1\!\left( \tau + f_c \Delta t_{j-1} \right)_j \, R_s\!\left( \tau + \sum_{i=1}^{j} \Delta t_i \right)_0}{R_1\!\left( \tau + \sum_{i=1}^{j} \Delta t_i \right)_0}, \qquad (j = 1, 2, \ldots, n). \qquad (18)$$
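Using the failure distributions of this example, the reliability prediction of Eqs. (14)–(18) can be sketched as below. Since Eq. (17) is not reproduced here, the sketch approximates the repaired part's post-PM reliability by its original reliability function shifted by fc·Δt(j−1), which is one plausible reading of the model.

```python
import numpy as np

def R1(tau):                 # Eq. (14): Weibull reliability of Part 1
    return np.exp(-(tau / 18.0) ** 2.1)

def R2(tau):                 # Eq. (15): exponential reliability of Part 2
    return np.exp(-tau / 400.0)

def Rs0(tau):                # Eq. (16): serial system without repair
    return R1(tau) * R2(tau)

def Rs_after_pm(tau, dts, fc=0.05):
    """Conditional reliability after the j-th PM action, following Eq. (18).

    dts : preceding PM intervals [dt_1, ..., dt_j] in months
    fc  : recovery coefficient (0 = as good as new, 1 = as bad as old)
    Approximates R1(.)_j by the original R1 shifted by fc*dt_{j-1}.
    """
    total = sum(dts)
    return R1(tau + fc * dts[-1]) * Rs0(tau + total) / R1(tau + total)

# Reliability 2 months into the cycle after two 5.5-month PM intervals
print(Rs0(2.0), Rs_after_pm(2.0, dts=[5.5, 5.5]))
```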
[Figure: predicted reliability R(t) of the production line over 25 months under reliability-based and time-based PM, showing the reliability without repair, the conditional reliability and the cumulative reliability for each strategy; parameters: R0 = 0.9, MTTF1 = 16 months, PM interval 1 = 1 month, PM interval 2 = 5.5 months, 6 PM actions under both RBPM and TBPM, recovery coefficient = 0.05, minimum required operational time = 0.5 months]
Table 1 Optimal PM intervals and lowest TECI for different recovery coefficients (krm = 200)

fc | Optimal PM interval (months) | Lowest TECI (TBPM) | Lowest TECI (RBPM)
0.05 | 2 | 58.6 | 75.1
0.1 | 2 | 75.9 | 85.5
0.15 | 3 | 92.2 | 108
0.2 | 4 | 106.9 | inapplicable
0.3 | 6 | 129.9 | inapplicable
0.7 | 11.5 | 169.4 | inapplicable
0.75 | 24 | 170.6 | inapplicable
In Table 1, the word “inapplicable” means that RBPM is not applicable because
the PM interval required by this strategy will become shorter than the required
minimum operational time of the production line.
5 Conclusion
The methodology developed in this paper considers these factors simultaneously and analyses their effects on PM decisions quantitatively.
This research finds that the performance of a PM strategy can be measured by
its Total Expected Cost Index (TECI). A PM strategy with lower TECI is better.
The effectiveness of different types of PM strategies can vary in different scenar-
ios. The optimal PM interval is dependent on TECI, PM performance and the type
of PM strategy. A trade-off between reliability requirement and the number of PM
actions is often needed if one wishes to minimise the Total Expected Cost (TEC)
of using production lines.
While this paper focuses on serial production lines, the methodology developed
in the paper can be applied to other serially connected engineering systems such as
power generation units in coal-fired power stations.
Acknowledgments This research was conducted within the CRC for Integrated Engineering
Asset Management, established and supported under the Australian Government’s Cooperative
Research Centres Program.
References
[1] Dallery Y, Bihan HL (1999) An improved decomposition method for the analysis of pro-
duction lines with unreliable machines and finite buffers. Int J of Production Research
37(5):1093−1117
[2] Liberopoulos G, Tsarouhas P (2004) Reliability analysis of an automated pizza production
line. J of Food Engineering. In press
[3] Miltenburg J (2002) The effect of breakdowns on U-shaped production lines. Int J of Pro-
duction Research 38(2): 352−364
[4] Cavory G, Dupas R, Goncalves G (2001) A genetic approach to the scheduling of preven-
tive maintenance tasks on a single product manufacturing production line. Int J of Produc-
tion Economics 74(1):135−146
[5] Percy DF, Kobbacy KAH, Fawzi BB (1997) Setting preventive maintenance schedules
when data are sparse. Int J of Production Economics 51(2):223−234
[6] Charepnsuk C, Nagarur N, Tabucanon MT (1997) A multicriteria approach to the selection
of preventive maintenance intervals. Int J of Production Economics 49(1):55−64
[7] Jiang R, Murthy DNP (2008) Maintenance: decision models or management. Science Press,
Beijing
[8] Ebeling CE (1997) An Introduction to Reliability and Maintainability Engineering. The
McGraw-Hill Company Inc., New York 124–128
[9] Khan FI, Haddara MM (2003) Risk-based maintenance (RBM): a quantitative approach for
maintenance/inspection scheduling and planning. J of Loss Prevention in the Process Indus-
tries 16(6):561−573
[10] Krishnasamy L, Khan F, Haddara M (2005) Development of a risk-based maintenance
(RBM) strategy for a power-generating plant. J of Loss Prevention in the Process Industries
18(2):69−81
[11] Kierulff HE (2007) The replacement decision: Getting it right. Business Horizons
50(3):231−237
[12] Tsang AHC, Yeung WK, Jardine AKS, Leung BPK (2006) Data management for CBM
optimization. J of Quality in Maintenance Engineering 12(1):37−51
[13] Kallen MJ, van Noortwijk JM (2003) Optimal maintenance decisions under imperfect inspection. Reliability Engineering & System Safety (Selected papers from ESREL 2003) 90(2−3):177−185
[14] Sun Y, Ma L, Mathew J (2004) Reliability prediction of repairable systems for single com-
ponent repair. in: Proceedings of International Conference on Intelligent Maintenance Sys-
tem. Arles, France: IMS, S2-A.
[15] Sun Y, Ma L, Morris J (2009) A practical approach for reliability prediction of pipeline systems. Eur J of Operational Research 198(1):210−214
[16] Sun Y, Ma L, Mathew J (2007) Prediction of system reliability for multiple component
repairs. in: Proceedings of The 2007 IEEE International Conference on Industrial Engineer-
ing and Engineering Management. 2007. Singapore: IEEE, 1186−1190
[17] Kelly A (1984) Maintenance Planning and Control. Butterworth & Co Ltd., Cambridge
[18] Pham H (2003) ed. Handbook of Reliability Engineering. Springer, London
[19] Blischke WR, Murthy DNP (2000) Reliability – Modelling, Prediction, and Optimization.
John Wiley & Sons Inc., New York 143−239
A Flexible Asset Maintenance Decision-Making
Process Model
Abstract Optimal Asset Maintenance (AM) decisions are imperative for effi-
cient asset management. Decision Support Systems (DSSs) are often used to help
asset managers make maintenance decisions, but high quality decision support
must be based on sound decision-making principles. For long-lived assets, a suc-
cessful AM decision-making process must effectively handle multiple time scales.
For example, high-level strategic plans are normally made for periods of years,
while daily operational decisions may need to be made within a space of mere
minutes. When making strategic decisions, one usually has the luxury of time to
explore alternatives, whereas routine operational decisions must often be made
with no time for contemplation. In this paper, we present an innovative, flexible
decision-making process model which distinguishes meta-level decision making,
i.e. deciding how to make decisions, from the information gathering and analysis
steps required to make the decisions themselves. The new model can accommo-
date various decision types. Three industrial cases are given to demonstrate its
applicability.
__________________________________
Y. Sun
CRC for Integrated Engineering Asset Management, School of Engineering Systems, Queen-
sland University of Technology, Brisbane, QLD 4001, Australia
e-mail: y3.sun@qut.edu.au, Tel: (61 7) 3138 2442, Fax: (61 7) 3138 1469
C. Fidge
Faculty of Science and Technology, Queensland University of Technology,
Brisbane, QLD 4001, Australia
L. Ma
CRC for Integrated Engineering Asset Management, School of Engineering Systems, Queen-
sland University of Technology, Brisbane, QLD 4001, Australia
1 Introduction
[Figure: the asset maintenance decision hierarchy – from long-term (e.g. 5 years) strategic decisions requiring general information, through technical and implementation decisions, down to short-term (e.g. hours) reactive decisions requiring specific information]

[Figure: the 'split' AM decision support framework – AM decisions trigger information collection/generation; the basic AM decision-making process is separated from the information acquisition/generation processes, which serve requests for decision inputs and exchange the required information through a database]
Based on our ‘split’ Asset Maintenance (AM) decision support framework from
Section 3 above, and taking into account the NAMS Group’s decision process
model, Rhodes’ five-step process model, and the guidelines, specifications and
asset management models provided by PAS 55, IIMM and the AMC, we devel-
oped a Flexible Asset Maintenance Decision-making Process (FAMDP) model as
shown in Figure 3.
The first step in this process model is to identify an AM decision which needs
to be made. As mentioned above, asset maintenance involves numerous decisions,
from routine maintenance planning to how to respond to an unexpected failure.
Different decisions need different information and analyses. Therefore, when
making a decision using a Decision Support System, one first needs to specify the
kind of decision to be made.
The second step is to identify the objectives and the constraints for making the
decision. Accurately recognising the decision objectives and constraints is im-
perative because they define the criteria for optimising the decision. The objec-
tives of a specific AM decision have to be in compliance with the asset manage-
156 Y. Sun, C. Fidge and L. Ma
1. Identify an AM decision
Yes
Do the decision
parameters need to
be optimised?
No
Has the relationship
between decision
parameters and objectives
been quantified? Yes
No
The fifth step is to rank the decision options based on decision criteria which are
determined according to the decision objectives and constraints. In modern Asset
Maintenance, decisions often involve multiple factors, and different objectives and
constraints, i.e. AM decision making belongs to the class of ‘multiple criteria’ deci-
sion problems. As a result, ranking decision options is often difficult. To address
this issue, various option ranking models and methodologies have been developed,
e.g. Decision Trees, the Analytic Hierarchy Process, and fuzzy logic. These tech-
niques can effectively assist in AM decision ranking. For decision making in safety-
critical environments, a risk-based decision making approach may be applied. The
IIMM presents a risk analysis method, and a risk assessment and management proc-
ess [17]. However, no matter which methodology is used, applying it correctly
typically requires a sound knowledge of how it works and, in particular, an under-
standing of its limitations. In addition, in most cases, it also takes a significant
amount of time to conduct decision option ranking analyses, and hence the ranking
process is also separated from the basic decision-making process in our model.
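By way of illustration, the following sketch applies a simple weighted-sum scoring scheme, one of the simpler alternatives to the ranking methods listed above. The options, criterion scores and weights are hypothetical, not taken from this paper.

```python
# Minimal weighted-sum ranking of maintenance options (illustrative values only).
# Criterion scores are normalised to [0, 1]; higher is better for each criterion.

options = {
    # option: (cost_score, risk_score, downtime_score)
    "reactive maintenance":   (0.9, 0.3, 0.4),
    "preventive maintenance": (0.6, 0.7, 0.7),
    "renew tubing system":    (0.3, 0.9, 0.9),
}
weights = (0.5, 0.3, 0.2)  # relative importance of each criterion

def score(scores, weights):
    """Weighted sum of criterion scores."""
    return sum(s * w for s, w in zip(scores, weights))

ranked = sorted(options, key=lambda o: score(options[o], weights), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(rank, name, round(score(options[name], weights), 3))
```

A real multi-criteria analysis would normally validate the weights with the decision makers, which is one reason the ranking step is separated from the basic decision-making process in the model.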
The sixth, seventh and eighth steps are to optimise decision parameters, such as
asset renewal times. After the fifth step, the decision options have been ranked.
Then one can determine a best option based on the rankings. However, deciding on
the best option does not necessarily mean that the decision can be finalised because
Asset Maintenance decisions are so complex. In practice, further analyses may be
needed to optimise those parameters which are associated with the selected deci-
sion option. For example, when the reliability of an asset is lower than an accept-
able level, a number of maintenance activities can be applied to improve its reli-
ability, including conducting preventive maintenance or renewing the whole asset.
If a decision to renew the asset is made, then one needs to further decide on the
optimal renewal time. To address these issues, our FAMDP model has additional
steps in which we need to identify data availability and then conduct an optimisa-
tion analysis using an appropriate optimisation model or method based on the deci-
sion objectives and constraints which have been identified in the second step.
The ninth step in our FAMDP model is to assess the risk and verify
the decision. Risk assessment of a decision is a part of the whole risk identifica-
tion, assessment and control system in an organisation. PAS 55 includes a well-
established methodology for risk identification, assessment and control. Decision
verification is an important step in an AM decision-making process. It usually
involves a number of ‘what-if’ analyses to ensure that the selected decision is
robust. Once the decision has been validated, it becomes the final one which leads
to the tenth step, to enact the decision. However, if the chosen decision option
proves unsatisfactory, and no other viable options are available, the decision
maker will need to modify the objectives or reconsider the decision options. Un-
fortunately, some decision makers, especially those at lower levels in an organisa-
tion, such as equipment operators, may not be allowed to change AM decision
objectives which are associated with the organisation’s business objectives. In this
case, the need for modification of objectives must be reported to their super-
visors – the eleventh step in our FAMDP model – and the whole decision-making
process is suspended until new AM objectives are determined.
[Figure 4: A Simplified Version of the NAMS Group's Decision-Making Process for Infrastructure Projects [1], including the steps '7. Review options', the decision point 'Can a preferred option be selected from the remaining options?' and '8. Complete financial analysis']
6 Case Studies
… (1) the failure distribution (or reliability function), or (2) tube thicknesses at
installation plus their erosion rates, or both.
Step 4: Define potential maintenance strategy options for economisers.
This work is done by domain experts based on their experience. The potential
options include reactive (corrective) maintenance, preventive maintenance, predic-
tive maintenance, renewal of the tubing system and various combinations of these
actions. Renewal of an economiser tubing system can be defined as replacing
more than 40 % of the individual tubes. In economiser maintenance, the type of
preventive maintenance is opportunistic, e.g. preventively replacing some worn
tubes when the economiser is shut down to repair a leaking tube or for some other
reason.
Step 5: Select the best option and check the decision parameters. After a
qualitative analysis, assume that a combined maintenance strategy has been se-
lected. The economiser tubing system will be renewed at a scheduled interval.
Between renewals, the economiser will be maintained based on reactive mainte-
nance and opportunistic preventive maintenance strategies. In this case, the re-
newal interval is a decision parameter which needs to be optimised. Another two
decision parameters are the renewal area (i.e. how much of the old tubing to cut
away and replace) and location (i.e. which erosion ‘hotspots’ to focus on).
Step 6: Optimise the renewal intervals. The aim of optimising the renewal in-
tervals is to minimise the expected total maintenance cost of the economisers
which includes expected repair costs, expected renewal costs and expected pro-
duction losses due to maintenance downtime. The other objectives which have
been identified in Step 2 become constraints. Here, the expected repair cost is
assumed to be proportional to the failure probability of the tubes, with a constant
of proportionality, i.e. ignoring the influence of inflation and interest rates. The
failure probability of the tubes is time dependent, so the expected repair cost is a
function of the renewal interval. The expected renewal cost is assumed to be
inversely proportional to the renewal interval, again with a constant of
proportionality, and hence is also a function of the renewal interval. The expected
production loss is assumed to be proportional to the failure probability of the
tubes and the outage duration; as a result, it too is a function of the renewal
interval. In this case, however, the proportionality cannot be assumed constant:
seasonal changes in the electricity market price have to be taken
into account (however, daily fluctuations in the price do not need to be considered
because the outage duration due to maintenance is always greater than one day).
Therefore, the expected production loss depends on both renewal intervals and the
calendar times when the renewal actions are conducted. Adding the expected re-
pair cost, the expected renewal cost and the expected production loss together, we
can obtain the expected total maintenance cost of the economisers which is a func-
tion of renewal intervals and the calendar times when the renewal actions are
conducted. Using an appropriate optimisation algorithm, one can then finally iden-
tify the optimal renewal intervals.
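To make the structure of this cost model concrete, the sketch below minimises an expected total cost of the form just described over candidate renewal intervals and renewal months. The cost constants, failure probability curve and seasonal price factor are all hypothetical stand-ins, since the paper gives no numerical values.

```python
import math

# Hypothetical constants -- the paper does not give numerical values.
K_REPAIR, K_RENEW, K_LOSS = 5e4, 2e5, 8e4

def failure_prob(interval_yrs):
    # Assumed time-dependent failure probability over the renewal interval.
    return 1.0 - math.exp(-(interval_yrs / 8.0) ** 2.5)

def seasonal_factor(month):
    # Assumed seasonal variation of the electricity market price.
    return 1.0 + 0.5 * math.cos(2 * math.pi * (month - 1) / 12)

def expected_total_cost(interval_yrs, renewal_month):
    repair = K_REPAIR * failure_prob(interval_yrs)          # prop. to failure prob.
    renew = K_RENEW / interval_yrs                          # inverse to interval
    loss = K_LOSS * failure_prob(interval_yrs) * seasonal_factor(renewal_month)
    return repair + renew + loss

# Grid search over renewal intervals (years) and renewal calendar months.
best = min((expected_total_cost(t, m), t, m)
           for t in [i / 2 for i in range(2, 41)]   # 1 to 20 years
           for m in range(1, 13))
print("min expected cost %.0f at interval %.1f yrs, month %d" % best)
```

In practice a more capable optimisation algorithm would replace the grid search, but the decomposition of the objective into repair, renewal and production-loss terms follows the description above.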
Step 7: Verify the decision using sensitivity analysis and risk assessment. If
the decision is satisfactory, we accept it and the decision-making loop ends.
[Figure 5: Instantiation of the FAMDP model for economiser maintenance strategy decisions, including '1. Identify an AM decision', an AM decision objectives/constraints identification process feeding '2. Define the objectives and constraints associated with economiser maintenance management', '7. Verify the decision using sensitivity analysis and risk assessment', and the decision points 'Are any other options available?' and 'Is the decision satisfied?']
When a leak is detected, the operators must decide whether to shut the unit down
immediately or continue operating it for a certain period and then fix the
problem. A process for making this short-term
decision by instantiating the FAMDP model from Figure 3 is shown in Figure 6.
Step 1: Identify an AM decision. In this case study, the AM decision is to de-
cide the optimal repair lead time for an economiser when a leak is detected.
Step 2: Gather the objectives and constraints associated with economiser
repairs. Since the required decision is a type of emergency decision, the objec-
tives and the constraints have to be defined in advance because there is no time for
reflection and analysis when the decision is required. Fortunately, in this case
study, the objectives and constraints are the same as those which have been identi-
fied above for choosing the optimal maintenance strategy. However, this
coincidence also means that the two types of decisions interact: changes to the
objectives and constraints of one decision will result in changes to the other.
Step 3: Assess and predict economiser conditions. Although a leak has been
identified, one has to check its severity and predict the consequential failures if the
leak is not fixed. According to historical observations, leaving a leak unrepaired
will produce around three further leaks every 24 hours, because high-pressure
water escaping from the leaking tube erodes neighbouring tubes; consequently,
an additional day is needed to fix these 'consequent' leaks. These conse-
quential failures have to be considered in the decision as they can significantly
increase repair costs and production losses.
Step 4: Obtain potential repair options. As an emergency decision, the op-
tions should be clearly defined in advance. In practice, this work is done by do-
main experts based on their experience. When a leak is identified, potential op-
tions are to (1) shut down the unit and fix the leak immediately; (2) continue
operating the unit and fix the leaks three days later; or (3) continue operating the
unit and fix the leaks six days later.
Step 5: Select the best option. The optimal repair action heavily depends on
the electricity market price at the time when the leak occurs. The electricity mar-
ket price fluctuates significantly, from a typical $25/MWh up to $2500/MWh in
some short-lived peaks. As a result, production losses due to outages of the same
duration occurring at different times can be dramatically different, compared to
relatively stable repair costs. The major objective for determining the best repair
option is to minimise the total cost, which includes production losses and repair
costs. In current practice, we assume that the electricity supplier makes its
decisions based on the following rules: if the electricity market price when a
failure occurs is less than $30/MWh, select option (1); if it is between $30/MWh
and $100/MWh, select option (2); and if it is greater than $100/MWh, select
option (3).
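These selection rules are simple enough to express directly in code. The sketch below implements them as stated; the handling of the exact boundary values ($30/MWh and $100/MWh) is an assumption, since the text only gives ranges.

```python
def select_repair_option(price_per_mwh):
    """Apply the stated price-threshold rules for a detected economiser leak."""
    if price_per_mwh < 30:
        return 1  # shut down the unit and fix the leak immediately
    elif price_per_mwh <= 100:
        return 2  # keep operating, fix the leaks three days later
    else:
        return 3  # keep operating, fix the leaks six days later

for price in (25, 60, 2500):
    print("$%d/MWh -> option %d" % (price, select_repair_option(price)))
```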
As in this case no decision parameters need to be further optimised, the steps
for optimisation of decision parameters (i.e. the sixth, seventh and eighth steps) in
the FAMDP (Figure 3) are skipped. Furthermore, because fixing economiser leaks
is a responsive decision and there is not enough time to do a what-if analysis, the
selected decision in Step 5 normally becomes the final decision. However, we also
noticed that decisions made based on these previously well-defined selection crite-
ria may not always be optimal. Therefore, when time permits, a risk assessment
and what-if analysis is needed to justify the decisions and calibrate the rules (i.e.
[Figure 6: Instantiation of the FAMDP model for short-term economiser repair decisions, including '1. Identify an AM decision', an AM decision objectives/constraints identification process feeding '2. Gather the objectives and constraints associated with economiser repairs', a risk assessment and what-if analysis process guarded by the decision points 'Is time enough for what-if analysis?' and 'Authorised to modify objectives?', and '7. Enact the decision']
by going through a risk assessment and what-if analysis process as per Step 6 in
Figure 6). This case study has once again demonstrated the importance of
separating the basic decision-making activities from the information generation
and analysis processes in a process model used for emergency decisions.
Our process model has also been used to design a pipeline renewal decision sup-
port tool for a water utility company. Pipeline renewal is a type of long-term (over
30 years) decision in the company. The decision tool software was designed to
assist users to follow the procedure shown in Figure 3 automatically.
Step 1: Since this is a special-purpose decision support tool, the decision of
interest is fixed: to determine the optimal renewal time for each pipeline in terms
of minimum total cost, while meeting the company's major business objectives.
Step 2: After discussion with maintenance staff in the company, the objective
was identified as minimising the total cost, which included repair costs due to
pipeline failures and replacement costs. Production losses can be ignored in this
application. The major constraints to achieve this goal were (1) business risk con-
trol and (2) customers’ requirements for service interruptions.
Step 3: The pipeline’s health status is one of the most critical factors for decid-
ing renewal times. As the company has over 1000 pipelines which are made of
various materials and have different lengths, diameters and working environments,
a special process was designed for pipeline health assessment and prediction,
which includes pipeline filtering and grouping, data quality analysis (censored
or complete data), and statistical analysis.
Steps 4, 5 and 6 were not relevant in this case study as the tool was specifically
designed for making renewal time decisions only, so there were no alternative
options to consider.
Step 7: The decision parameter ‘renewal time’ is what needs to be optimised in
this case. To this end, a total cost rate (i.e. the total cost per unit time) was formu-
lated as a function of repair cost per repair, renewal time, pipeline failure probabil-
ity and replacement cost per unit time. To evaluate the service interruption risk,
the quantitative relationship of service interruptions due to planned and unplanned
maintenance versus the renewal time was also developed.
Step 8: The cost rate function, reliability function and service interruption
function were entered into a multi-criteria optimisation algorithm to calculate the
optimal renewal times which correspond to a minimal total cost rate and satisfy
the minimum reliability requirement and service interruption requirement. These
renewal times are then offered to decision makers.
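A minimal sketch of this constrained optimisation step is shown below, using a grid search over candidate renewal years. The cost-rate, reliability and interruption functions and the constraint limits are hypothetical, standing in for the company-specific models described above.

```python
# Sketch of Step 8: choose the renewal time minimising total cost rate subject
# to reliability and service-interruption constraints. All functions and limits
# below are hypothetical stand-ins for the company-specific models.
import math

def cost_rate(t):            # total cost per year if renewal occurs at year t
    repair = 2000.0 * (1 - math.exp(-(t / 40.0) ** 3))  # rising repair cost
    return repair + 50000.0 / t                          # amortised replacement

def reliability(t):          # assumed pipeline reliability at renewal time t
    return math.exp(-(t / 40.0) ** 3)

def interruptions(t):        # assumed expected service interruptions per year
    return 0.02 * t

R_MIN, I_MAX = 0.80, 0.9     # hypothetical constraint limits

feasible = [t for t in range(1, 61)
            if reliability(t) >= R_MIN and interruptions(t) <= I_MAX]
t_star = min(feasible, key=cost_rate)
print("optimal renewal year:", t_star, "cost rate: %.0f" % cost_rate(t_star))
```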
Step 9: Because of the uncertainty in failures and costs, especially the pre-
dicted pipeline replacement cost, decision makers need to justify the recom-
mended renewal times through risk evaluations and what-if analyses, i.e. to see if
the decision is robust. An analysis tool was developed to calculate the changes of
failure probability, service interruptions and the total cost rate, as well as the fluc-
tuations of maintenance expenditure over a given decision horizon (e.g. 30 years)
corresponding to different renewal times. This function enables decision makers to
reschedule the renewal times while still meeting a particular risk control
level. However, for risk management, decision makers have to record their reasons
for such changes so that their decisions can be traced and audited.
Step 10: Once the renewal times of all pipelines have been determined, the de-
cision support system will automatically generate a renewal scheduling table
which shows the renewal time and cost of every pipeline and the total expected
repair cost over its life-span.
From the three case studies provided, it can be seen that our Flexible Asset
Maintenance Decision-making Process model can be instantiated for long-term
economiser maintenance strategy decision making, short-term economiser repair
decision making, and long-term pipeline renewal decision making. Importantly,
in all three cases it was possible to instantiate the model in a way that precisely
matched the relevant company's existing maintenance practices.
7 Conclusion
Acknowledgments This research was conducted within the CRC for Integrated Engineering
Asset Management, established and supported under the Australian Government’s Cooperative
Research Centres Program.
References
Machine Prognostics Based on Health State Estimation Using SVM
Hack-Eun Kim, Andy C.C. Tan, Joseph Mathew, Eric Y.H. Kim
and Byeong-Keun Choi
Abstract The ability to accurately predict the remaining useful life of machine
components is critical for continuous machine operation, and can also improve
productivity and enhance system safety. In condition-based maintenance (CBM),
maintenance is performed based on information collected through condition moni-
toring and an assessment of the machine health. Effective diagnostics and prog-
nostics are important aspects of CBM for maintenance engineers to schedule a
repair and to acquire replacement components before the components actually fail.
All machine components are subjected to degradation processes in real environ-
ments and they have certain failure characteristics which can be related to the
operating conditions. This paper describes a technique for accurate assessment of
the remnant life of machines based on health state probability estimation and in-
volving historical knowledge embedded in the closed loop diagnostics and prog-
nostics systems. The technique uses a Support Vector Machine (SVM) classifier
__________________________________
H.-E. Kim
CRC for Integrated Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, GPO Box 2434, Brisbane, QLD 4001, Australia
A.C.C. Tan
CRC for Integrated Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, GPO Box 2434, Brisbane, QLD 4001, Australia
J. Mathew
CRC for Integrated Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, GPO Box 2434, Brisbane, QLD 4001, Australia
E.Y.H. Kim
CRC for Integrated Engineering Asset Management, School of Engineering Systems,
Queensland University of Technology, GPO Box 2434, Brisbane, QLD 4001, Australia
B.-K. Choi
School of Mechanical and Aerospace Engineering, Gyeongsang National Univ.,
Tongyoung, Kyongnam, Korea
as a tool for estimating health state probability of machine degradation, which can
affect the accuracy of prediction. To validate the feasibility of the proposed model,
real life historical data from bearings of High Pressure Liquefied Natural Gas
(HP-LNG) pumps were analysed and used to obtain the optimal prediction of
remaining useful life. The results obtained were very encouraging and showed that
the proposed prognostic system based on health state probability estimation has
the potential to be used as an estimation tool for remnant life prediction in indus-
trial machinery.
1 Introduction
In this research, a new prognostics system based on health state estimation with
embedded historical knowledge is proposed. In terms of design and development
of intelligent maintenance systems, effective intelligent prognostics models using
condition monitoring techniques and failure pattern analysis for a critical dynamic
system can lead to a robust prognostics system in industry. Furthermore, the com-
bined analysis of event data and condition monitoring data can be accomplished
by building a mathematical model that properly describes the underlying mecha-
nism of a fault or a failure.
For an accurate assessment of machine health, a significant amount of a priori
knowledge about the assessed machine or process is required because the corre-
sponding failure modes must be known and well-described in order to assess the
current machine or process performance [1].
Figure 1 illustrates the conceptual integration of diagnostics and prognostics
with embedded historical knowledge. To obtain the best possible prediction on the
machine remnant life, the proposed prognostics model is integrated with fault
diagnostics and empirical historical knowledge. Li et al. [2] suggested that a reli-
able diagnostic model is essential for the overall performance of a prognostics
system. To provide long-range prediction, this model allows for integration with …
Figure 2 Flowchart of the Diagnostic and Prognostic System Based on Health State Estimation
After identifying the impending fault in the diagnostic module, the discrete failure
degradation states determined in the prior historical knowledge module are
employed in the health state estimation module, as depicted in Figure 2. The
historical failure patterns can also be used to determine the optimum number of
health states for predicting the machine remnant life. In estimating the health
state, predetermined discrete degradation states are trained before being used to
test the current health state. Through prior training on each failure degradation
state, the current health condition is obtained in terms of the probability of each
health state of the machine, using the capability of multiclassification. At the end
of each prognostics process,
the output information will also be used to update the historical knowledge. This
section provides a brief summary of the proposed health state estimation method-
ology and the RUL prediction using the SVM classifier.
SVM is based on the statistical learning theory introduced by Vapnik and his co-
workers [7, 8]. SVM is also known as maximum margin classifier with the abilities
of simultaneously minimizing the empirical classification error and maximizing the
geometric margin. Due to its excellent generalization ability, a number of success-
ful applications have been implemented in the past few years. The theory, method-
ology and software of SVM are readily available in references [7–10]. Although
SVMs were originally designed for binary classification, multi-classification can
be obtained by the combination of several binary classifications. Several methods
have been proposed, for example, “one-against-one,” “one-against-all,” and di-
rected acyclic graph SVMs (DAGSVM). Hsu and Lin [10] presented a comparison
of these methods and pointed out that the "one-against-one" method is more
suitable for practical use than the other methods. Consequently, in this study, the authors em-
ployed the “one-against-one” method to perform the classification of discrete fail-
ure degradation states.
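As a minimal sketch of this scheme, the following code trains a one-against-one multiclass SVM on synthetic degradation-state data, assuming the scikit-learn library (whose SVC classifier implements the one-against-one decomposition internally). The data and parameter values are illustrative only, not the authors' settings.

```python
# One-against-one multiclass SVM on synthetic health-state data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_states, n_per_state = 5, 40

# Synthetic 3-feature samples whose means drift with the degradation state,
# loosely mimicking features (e.g. kurtosis) rising as a bearing degrades.
X = np.vstack([rng.normal(loc=s, scale=0.8, size=(n_per_state, 3))
               for s in range(n_states)])
y = np.repeat(np.arange(1, n_states + 1), n_per_state)

clf = SVC(kernel="poly", degree=3, C=10.0,
          decision_function_shape="ovo")  # n(n-1)/2 binary classifiers
clf.fit(X, y)
print("pairwise decision values per sample:",
      clf.decision_function(X[:1]).shape)  # (1, n(n-1)/2) = (1, 10)
```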
Let $\vec{x}_t = (x_{t1}, x_{t2}, \ldots, x_{tm})$ be the observations, where $m$ is the number of
observations and $t$ is the time index. Also, let $y_t$ be the health state (class) at
time $t$, with $y_t = 1, 2, \ldots, n$, where $n$ is the number of health states. For
multiclassification of an $n$-health-state (class) event, the "one-against-one" method
constructs $n(n-1)/2$ classifiers, where each classifier is trained on data from two
classes. For training data from the $i$th and the $j$th classes, the SVM solves the
following classification problem:

$$\min_{w^{ij},\, b^{ij},\, \xi^{ij}} \; \frac{1}{2} (w^{ij})^T w^{ij} + C \sum_t \xi_t^{ij}$$
$$\text{subject to} \quad (w^{ij})^T \phi(\vec{x}_t) + b^{ij} \ge 1 - \xi_t^{ij}, \;\text{if } y_t = i,$$
$$\qquad\qquad (w^{ij})^T \phi(\vec{x}_t) + b^{ij} \le -1 + \xi_t^{ij}, \;\text{if } y_t = j,$$
$$\qquad\qquad \xi_t^{ij} \ge 0 \tag{1}$$

where the training data $\vec{x}_t$ are mapped to a higher-dimensional space by the
function $\phi$ (the associated kernel function is $K(\vec{x}_s, \vec{x}_t) = \phi(\vec{x}_s)^T \phi(\vec{x}_t)$),
$(\vec{x}_t, y_t)$ is the $i$th or $j$th training sample, $w$ and $b$ are the weighting factors,
$\xi_t^{ij}$ is the slack variable and $C$ is the penalty parameter. Detailed explanations
of the weighting factors, slack variable and penalty parameter can be found in [7].
There are different strategies which can be used in future testing after all the
$n(n-1)/2$ classifiers are constructed. After a series of tests, the decision is made
using the following voting strategy: if $\mathrm{sign}\left( (w^{ij})^T \phi(\vec{x}_t) + b^{ij} \right)$ says
$\vec{x}_t$ is in the $i$th class, then the vote for the $i$th class is increased by one;
otherwise, the vote for the $j$th class is increased by one. The class with the
largest vote is then predicted. This voting approach is also called the Max Win
strategy [11]. From the SVM multiclassification result ($y_t$), we obtain the
probability of each health state ($S_i$) using a smoothing window and an indicator
function ($I_i$), as follows:
using the smooth window and indicator function (Ii) as following:
t +u −1
G G
Prob ( St = i xt ,… , xt +u −1 ) = I i ( y j ) u
j =t
(2)
0 y ≠ i
Ii ( y) =
1 y = i
where (St) is the smoothed health state and u is the width of the smooth window.
Within a given smoothing window, the health state probabilities sum to one, as
shown in Eq. (3):

$$\sum_{i=1}^{n} \mathrm{Prob}\left( S_t = i \mid \vec{x}_t, \ldots, \vec{x}_{t+u-1} \right) = 1. \tag{3}$$
From the result of each of the health probabilities, the probability distribution
of each health state subject to time (t) can be obtained as illustrated in Figure 3.
Figure 3 shows an example of probability distribution which has a simple linear
degradation process consisting of n number of discrete health states. As the prob-
ability of one state decreases, the probability of the next state increases. At the
point of intersection there is a region of overlap between two health states, which
is a natural phenomenon in a linear degradation process. In real life, the probability
distribution of the failure process is far more complex due to the dynamic and
stochastic nature of machine degradation.
After estimating the current and each health state in terms of probability
distributions, the RUL of the machine is obtained from the probability of each
health state ($S_t$) and the historical operation time (age) at each state ($\tau_i$),
and can be expressed as

$$\mathrm{RUL}(T_t) = \sum_{i=1}^{n} \mathrm{Prob}\left( S_t = i \mid \vec{x}_t, \ldots, \vec{x}_{t+u-1} \right) \cdot \tau_i \tag{4}$$
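A small sketch of Eqs. (2) to (4) is given below: it smooths hypothetical classifier outputs over a window of width u to obtain health state probabilities, then combines them with the state ages from Table 4. Note that subtracting the expected age from the total historical life to obtain the RUL is one possible reading of Eq. (4), not necessarily the authors' exact formulation.

```python
# Health state probabilities (Eqs. 2-3) and an RUL estimate (Eq. 4-style).
import numpy as np

def state_probabilities(y_pred, t, u, n_states):
    """Eq. (2): fraction of window [t, t+u-1] classified as each state."""
    window = y_pred[t:t + u]
    return np.array([np.mean(window == i) for i in range(1, n_states + 1)])

def rul_estimate(probs, tau, total_life):
    """Expected age from state probabilities, subtracted from the total
    historical life (an assumed interpretation of Eq. (4))."""
    expected_age = float(np.dot(probs, tau))
    return total_life - expected_age

y_pred = np.array([1]*6 + [2]*10)              # hypothetical SVM outputs
tau = np.array([4, 503, 843, 2501, 3405])      # state ages from Table 4 (hours)
probs = state_probabilities(y_pred, t=4, u=8, n_states=5)
print("P(S_t = i):", probs)                    # sums to 1, per Eq. (3)
print("estimated RUL: %.0f h" % rul_estimate(probs, tau, total_life=3511.0))
```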
Liquefied natural gas (LNG) is natural gas condensed to one six-hundredth of its
volume by cooling it below its boiling temperature (−162 °C), which makes
storage and …
For machinery fault diagnostics and prognostics, signals such as vibration, tem-
perature and pressure are commonly used. In this research, the authors used vibra-
tion data because they are readily available in industry, and the trends of vibration
features are closely related to the bearing failure degradation process. Figure 5 shows
the frequency spectrum plots of the P301 D pump. The bearing resonance component
increased over the period of operation hours. The first symptom of a bearing
failure was detected as early as 14 months before the bearing final failure. Other
bearing fault components appeared progressively until the final bearing failure, as
shown in plots (a)–(d) of Figure 5.
Vibration data were collected through two accelerometers installed on the
pump housing as shown in Figure 4. The vibration data from two LNG pumps of
identical specification were used for prediction of the remaining useful life. Due
to the random operation of the pumps to meet the total production target of LNG
supply, there were restrictions on collecting more complete data over the entire
life of the pumps. The acquired vibration data are summarized in Table 2. As
shown in Table 2, a total of 136 vibration samples for P301 C and 120 vibration
samples for P301 D were collected during the full range of operation over the life
of the pump, for training and testing of the proposed prognostic model.
Figure 6 shows the damage of (a) the outer raceway spalling of P301 C and (b)
the inner raceway flaking of P301 D, respectively. Although these two bearing
faults had different severities on the inner race and the outer race, the faults
occurred on similar bearings located at the same location in the pump.
Although bearing faults are a primary cause of machine breakdown, a number
of other component faults can also be embedded in bearing fault signals, which
makes bearing diagnostics/prognostics problematic. A number of
physical-model-based prognostics approaches have been reported which focus on
identifying appropriate features of damage or faults. However, current prognostics
research concentrates only on specific component degradations and does not
include other types of fault. In this research, the authors aim to develop a generic
and scalable prognostic model which is applicable to different faults in identical machines. The
conventional statistical parameters from the vibration signals are used for prognos-
tic tests to establish the generic and scalable prognostic model in this study. In this
work, a total of 28 features (14 parameters, 2 positions) were also calculated for
health state probability estimation of bearing failure. The calculated features from
the two sets of vibration data of HP-LNG pumps are summarized in Table 3.
To achieve good fault classification performance and reduce the
computational effort, effective features were selected using the distance evaluation
technique of feature effectiveness introduced by Knerr et al. [12], as described
below.
The average distance (di,j) of all the features in state i can be defined as follows:
$$d_{i,j} = \frac{1}{N(N-1)} \sum_{m,n=1}^{N} \left| P_{i,j}(m) - P_{i,j}(n) \right|. \tag{5}$$
When the average distance ($d_{i,j}$) inside a certain class is small and the average
distance ($d'_{i,j}$) between different classes is large, the features are well
separated among the classes. Therefore, the distance evaluation criterion ($\alpha_i$)
can be defined as

$$\alpha_i = \frac{d'_{ai}}{d_{ai}}. \tag{7}$$

The optimal features can be selected from the original feature sets according to
their large distance evaluation criteria ($\alpha_i$).
In this work, a total of 14 features were computed, so as to extract effective
features from each signal sample measured at the same accelerometer positions.
The distance evaluation criteria ($\alpha_i$) of the 14 features are shown in Figure 7;
feature No. 9 has an almost-zero value. In order to select the effective degradation
features, the authors required a normalized distance evaluation criterion greater
than 1.3, i.e. $|\alpha_i / \alpha_N| > 1.3$, where $\alpha_i$ is the distance evaluation
criterion and $\alpha_N$ is the mean value of the $\alpha_i$. The ratio of 1.3 was
selected based on past historical records for this particular bearing/pump. From
the results, three features were selected for health state probability estimation,
namely kurtosis (No. 5), entropy estimation value (No. 7) and entropy estimation
error value (No. 8), which have large distance evaluation criteria compared with
the other features. These features could minimize the classification training and
test errors for each health state.
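The sketch below illustrates the distance evaluation criterion and the |αi/αN| > 1.3 selection rule on randomly generated stand-in feature data. It averages over all sample pairs (including identical pairs), a small simplification of Eq. (5).

```python
# Distance evaluation criterion and the |alpha_i / alpha_N| > 1.3 rule,
# applied to random stand-in data for 14 features over 5 degradation states.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_samples, n_features = 5, 8, 14
# data[s, k, f]: value of feature f, sample k, state s; some features are made
# state-dependent so that a subset is genuinely discriminative.
data = rng.normal(loc=np.arange(n_states)[:, None, None] *
                  rng.uniform(0, 1, n_features), scale=1.0,
                  size=(n_states, n_samples, n_features))

# Within-state average pairwise distance per feature (cf. Eq. (5); the zero
# diagonal pairs are included here, a simplification).
within = np.mean([np.abs(data[s, :, None, :] - data[s, None, :, :]).mean((0, 1))
                  for s in range(n_states)], axis=0)
# Between-state average distance per feature, using the state means.
means = data.mean(axis=1)
between = np.abs(means[:, None, :] - means[None, :, :]).mean((0, 1))

alpha = between / within                       # cf. Eq. (7)
selected = np.where(np.abs(alpha / alpha.mean()) > 1.3)[0]
print("selected feature indices:", selected)
```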
Figure 8 shows the selected feature trends of kurtosis, entropy estimation and
entropy estimation error value, respectively. All the selected features show in-
creasing trends which indicate the failure degradation process of the machine over
time as shown in the plots.
In this case study, to select the optimal number of health states of bearing degrada-
tion, several health states were investigated using the data sets of P301 D for train-
ing and prediction tests. As the basic kernel function of SVM, a polynomial func-
tion was used in this work. Multiclass classification using the one-against-one
(OAO) method was applied to classify bearing degradation, as described in
Section 3. Sequential minimal optimization (SMO), proposed by Platt [13], was
used to solve the SVM classification problem. To select optimal kernel parameters
(C, γ, d), the cross-validation technique was also used in order to avoid over-fitting
or under-fitting of the classifier. The results of the investigation to select the
optimal number of health states are plotted in Figure 9. The
average prediction error was estimated using Eq. (8) as follows:

$$\text{Average prediction error} = \frac{\sum_{i=1}^{N} \left| \mu'_i - \mu_i \right|}{N}. \tag{8}$$
A total of nine different numbers of states were investigated, ranging from two to
ten. As shown in Figure 9, although small numbers of health states had low
training error values, they showed high prediction error values compared with
larger numbers of states. Conversely, large numbers of health states had high
training error values but relatively low prediction error values. From this result,
the authors selected five states as the optimal number of health states, because
beyond five states the training error values increased rapidly without a significant
decrease in the prediction error values. The training error and prediction error
values using five states were 10 % and 5.6 %, respectively.
Table 4 shows the training data sets for the five selected degradation states, with
eight samples in each state and three selected features.

Table 4 Training Data Sets for the Health State Probability Estimation (P301 D)

State No.   Sample nos. (u)   Average operation hours (τi)   RUL (%)   No. of features
1           1~8               4                              99.89     3
2           25~32             503                            85.67     3
3           41~48             843                            75.99     3
4           81~88             2501                           28.77     3
5           121~128           3405                           3.02      3

Initially (State 1), the percentage of RUL is almost 100 % (99.89 %) and
progressively reduces to 28.77 % by State 4. At the fifth state, the remaining
bearing life is about 3.02 %.
In this RUL prediction of bearing failure, closed and open tests were conducted. In
the closed test, the five states were trained using the listed training data sets shown
in Table 4, and full data sets from P301 D (136 data sets) were tested to obtain the
probabilities of the five degradation states. Figure 10 shows the probabilities of
each state of P301 D. The first state probability started at 100 % and decreased
as the next state probability increased. For example, the probability of the first
state (solid lines) decreases initially, recovers to 90 %, and eventually drops to
zero; simultaneously, the second state (dotted lines) reaches 100 %.
Some overlaps between the states and the nonuniformity of the distribution could
be explained by the dynamic and stochastic degradation process and the un-
certainty of machine health condition or inappropriate data acquisitions in a real
environment. The entire probabilities of each state follow a nonlinear degradation
process and are distinctly separated.
As an open test, the similar bearing fault data (P301 C), which consisted of 120
sample sets, was tested to obtain the probability distribution of each health state of
P301 C, using the identical training data sets shown in Table 4. Figure 11 shows
the resulting probability distributions. A similar nonlinear probability distribution
and overlaps between states are also observed, for the reasons explained above.
For the estimation of remaining useful life (RUL), the expected life of the ma-
chine was estimated by using the historical operation hours (τi) of each training
data set described in Table 4 and their probabilities evaluated using Eq. (4). Fig-
ure 12 shows the closed test result of estimated remnant life and the comparison
between real remaining useful life and estimated life. As shown in Figure 12,
although there are some discrepancies in the middle zone of the display, the over-
all trend of the estimated life follows the gradient of real remaining useful life of
the machine. The average prediction accuracy was 94.4 %, calculated
using Eq. (8) over the entire range of the data set. Furthermore, the estimated life
Figure 12 Comparison of Real Remaining Useful Life and Estimated Life (Closed Test,
P301 D)
Figure 13 Comparison of Real Remaining Useful Life and Estimated Life (Open Test, P301 C)
at the final state matched closely the real remaining useful life with less than 1 %
of remaining life.
Figure 13 shows the open test result of estimated remnant life and the compari-
son between real remaining useful life and estimated life. There is a large differ-
ence in remnant life at the initial degradation states, as shown in Figure 13. In the
open test, the estimated time was obtained from training data sets (P301 D) which
had 3511 h of total operation. This caused the discrepancy between real remaining
useful life and estimated life at the beginning of the test. However, as the pump
approached final bearing failure, the estimated life matched the real remaining
useful life more closely than in the initial and middle states.
5 Conclusion
This paper proposed an innovative machine prognostic model based on health state
probability estimation. Through prior analysis of historical data in terms of histori-
cal knowledge, discrete failure degradation states were employed to estimate dis-
crete health state probability for long-term machine prognostics. To verify the
proposed model, bearing failure data from HP-LNG pumps were used to extract
prominent features and to determine the probabilities of degradation states. For
optimum performance of the classifier, effective features were selected using the
distance evaluation method. To select the optimal health states of bearing failure,
several numbers of health states were investigated. The health state probability
estimation was carried out over the full failure degradation process of the machine,
by optimally selecting the number of health states over time from new to final
failure states. The results from the industrial case study indicate that the proposed
model has the capability to provide accurate estimation of health condition for
long-term prediction of machine remnant life. The selection of the optimal number
of health states for bearing failure is vital to avoid high training error with no
improvement in prediction accuracy. However, knowledge of failure patterns and
physical degradation from different historical data of machine faults still needs
further investigation.
Acknowledgments This research was conducted with financial support from the QUT
International Postgraduate Award and the CRC for Integrated Engineering Asset Management, established
and supported under the Australian Government’s Cooperative Research Centres Programme.
References
[1] AKS Jardine, D Lin, D Banjevic (2006) A review on machinery diagnostics and prognostics
implementing condition-based maintenance. Mech Sys Signal Pr 20:1483−1510.
[2] Y Li, S Billington, C Zhang, T Kurfess, S Danyluk, S Liang (1999) Adaptive Prognostics
for Rolling Element Bearing Condition. Mech Sys Signal Pr 13:103−113.
[3] M Pal, PM Mather (2004) Assessment of the effectiveness of support vector machines for
hyperspectral data. Future Gener Comp Sy 20:1215−1225.
[4] G Niu, JD Son, A Widodo, BS Yang, DH Hwang, DS Kang (2007) A comparison of classi-
fier performance for fault diagnosis of induction motor using multi-type signals. Struct
Health Monit 6:215−229.
[5] Y Weizhong, X Feng (2008) Jet engine gas path fault diagnosis using dynamic fusion of
multiple classifiers. In: Proc IEEE Int Joint Conf Neural Networks (IJCNN 2008, IEEE
World Congress on Computational Intelligence), 1585−1591.
[6] G Niu, T Han, BS Yang, ACC Tan (2007) Multi-agent decision fusion for motor fault
diagnosis. Mech Sys Signal Pr 21.
[7] VN Vapnik (1995) The Nature of Statistical Learning Theory. Springer, New York.
[8] VN Vapnik (1999) An overview of statistical learning theory. IEEE Tr Neural Networ
10(5):988−999.
[9] N Cristianini, J Shawe-Taylor (2000) An Introduction to Support Vector Machines. Cam-
bridge University Press, Cambridge.
[10] CW Hsu, CJ Lin (2002) A comparison of methods for multiclass support vector machines.
IEEE Tr Neural Networ 13:415−425.
[11] LM He, FS Kong, ZQ Shen (2005) Multiclass SVM based on land cover classification with
multisource data, In: Pr Fourth Intl Conf Mach Learn Cybernet 3541−3545.
[12] S Knerr, L Personnaz, G Dreyfus (1990) Single-layer learning revisited: a stepwise
procedure for building and training a neural network. Springer-Verlag, New York.
[13] J Platt (1999) Fast training of support vector machines using sequential minimal optimiza-
tion. In: B Scholkopf et al (eds) Advances in Kernel Methods: Support Vector Learning. MIT
Press, Cambridge.
Modeling Risk in Discrete Multistate Repairable Systems
M.G. Lipsett and R.G. Bobadilla
1 Introduction
A repairable component is an object in a system that can have its reliability re-
stored after it has become unreliable. A description of the component reliability
and performance is needed to understand its contribution to system reliability [5].
In some cases, there is a threshold of performance that the component must ex-
ceed. In that case, it is appropriate to describe the component as a member of one
of two sets: good and failed. In other cases, the component may operate in a range
of service duty, and may be able to deliver acceptable performance even though
reliability and performance are compromised [6].
For a system with a range of performance and reliability, a more general de-
scription of component reliability is necessary. Ideally, this description is a mech-
anistic relationship for variables and constraints of both production and mainte-
nance. In reality, these relationships are difficult to develop and validate, and so a
simplified formulation is preferred.
Maintenance activities are usually described as discrete-event activities; and
many types of operating systems can have different operating conditions classified
discretely as well [1]. Since it is generally not possible to describe the operation
and maintenance of a single repairable component in a system as a deterministic
process, a reasonable formulation of this type of system uses a discrete-event,
stochastic process model [7].
One of the simplest formulations for a stochastic process is a Markovian proc-
ess, which can be either continuous or discrete. The key attribute of a discrete-
state continuous-time Markovian random process $X(t) \in \{1, 2, \ldots\}$ is that the
past has no influence on the future if the present state is specified. The conditional
probabilities satisfy the relation

$$P\{X(t_n) = x_n \mid X(t_{n-1}) = x_{n-1}, \ldots, X(t_1) = x_1\} = P\{X(t_n) = x_n \mid X(t_{n-1}) = x_{n-1}\} \tag{1}$$

for $t_1 < t_2 < \ldots < t_{n-1} < t_n$. The conditional probabilities in Eq. (1) are called
transition probabilities. The transition probabilities from state to state do not
change over time, and the times between transitions are described by negative
exponential distributions.
The size of a Markov model for the evaluation of such a system may grow expo-
nentially with the number of components in the system [8,9].
For a single repairable component, these restrictions apply in some circum-
stances. Because the state transition probabilities must be stable over time, the
system, including operating and repair practices, must be mature and unchanging.
This implies no change in system duty, and no change in either operating practices
or the maintenance practices. In other words, if the system changes in some way,
such as a change in the effectiveness of operating and maintaining practices, then
the original Markovian process is no longer a correct representation of the system
[10].
If the system representation can be updated with new system behavior that re-
mains Markovian, then it is possible to describe the evolution of the system as a
set of Markov processes. If not, then a different formulation is required. In main-
tenance practices, the Markovian property may not be valid. When the transition
time between states of a component is a random variable that does not follow an
exponential distribution, the use of discrete Markov Chains for describing the sys-
tem is inappropriate. A semi-Markov process may be more representative since
the transition probabilities in a Semi-Markov process are functions of the duration
of time spent in a state of the system [11]. In general, a semi-Markov process
chooses its next state according to a Markov chain, but the time spent in the
current state is a random amount of time. In a semi-Markov model, the transition
rates in a particular state depend on the time already spent in that state; however,
the rates do not depend on the path taken to reach the present state. The transition
times between states for a component do not necessarily follow an exponential
distribution.
For a repairable component, the number of reliability states depends on how the
system is operated and maintained, and the possible failure modes.
Each state has a transition probability μii of remaining in the current state i, as
well as transition probabilities of changing from state i to a different state j.
A transition probability that makes the system less reliable in new state j is λij;
a transition that improves system reliability is μij, with the convention that a more
reliable state is a higher numbered state. State transitions have either reliability-
related causes or production causes. Reliability-related causes of state transitions
are natural damage accumulation rates and maintenance decisions. Production-
related causes are operating decisions, including service duty (demand on the
component) and delays in maintenance.
Figure 1 illustrates a system with a single repairable component that has eight
possible states of reliability and performance related to demand. The eight discrete
states are:
• spare;
• standby;
• derated duty;
• full normal duty;
• minor fault;
• major fault;
• failed;
• in repair.
A spare is a good component that is not currently available for operation.
A partially consumed spare has some of its reliability consumed previously (in
another state), and so it has a transition probability of moving to a degraded
[Figure 1: Discrete Reliability Model for a Repairable Component with Eight States: 8 (spare), 7 (standby), 6 (derated duty), 5 (full normal duty), 4 (minor fault), 3 (major fault), 2 (failed) and 1 (in repair), connected by degrading transitions λij and reliability-neutral or improving transitions μij]
(faulty) state that is higher than that of a new good component. It can usually be
assumed that a spare component has little if any probability of becoming less
reliable over time while it remains a spare, however, in some cases, a spare part
has a “shelf life” and thus a finite transition probability of moving to a degraded
state.
During standby, the component is idle. It may or may not be consuming its reli-
ability while in this state. When a component is on hot standby, whereby it is
actively powered as a redundant part of a system and ready to operate on demand,
there is likely some consumption of reliability over time.
In derated duty, the component is operating, but at a reduced rating (lower per-
formance). There are many reasons for a component to operate in this state. Typi-
cally, derated duty consumes reliability at a lower rate than at full duty, but that is
not always the case.
At full normal duty, the component is operating at or above its nominal per-
formance level, has no reliability issues, and is consuming reliability at a rate
that is related to its service demands. The production/reliability consumption rela-
tionship is usually not well characterized; but in the absence of other information,
the rate at which reliability is consumed is often assumed to be a linear function of
cumulative operating time.
A component with a minor fault has only incipient damage, meaning that there
is no effect on its performance in its intended service.
In contrast, a component with a major fault is no longer able to meet or exceed
its nominal level of performance. A component in such a state reduces the per-
formance of the overall system, and its reliability.
A failed component is unreliable and is unable to deliver any level of perform-
ance in the system. Because it is still part of an overall system, it affects the reli-
ability of the overall system.
A component that is in repair has been removed from the system, and is in the
process of having some level of reliability restored.
Although it is theoretically possible for a transition to occur between any two
states, for this type of repairable component only some types of transitions have
any real possibility of occurring.
The transition probabilities from one state into another state not only describe the
reliability of the process and the design of the components, but also the effective-
ness of operating and maintenance practices.
• spare to standby (λ8,7): component goes into service but on standby rather than
operating service;
• spare to spare (μ8,8): component does not change state, and there is no con-
sumption of reliability over time.
• full normal duty to full normal duty (μ5,5): there is no change in state, and reli-
ability is consumed at the nominal rate so the probability of changing to a lower
reliability state remains unchanged;
• full normal duty to minor fault (λ5,4): incipient failure but with no degradation
in performance;
• full normal duty to major fault (λ5,3): acute failure with degradation in per-
formance;
• full normal duty to standby (μ5,7): change in operation conditions or system
configuration.
• minor fault to minor fault (μ4,4): reliability is being consumed but the compo-
nent performance has not been compromised;
• minor fault to major fault (λ4,3): reliability has been consumed to the point that
the performance has been compromised;
• minor fault to full duty (μ4,5): reliability is restored without having to go to the
repair state, either through field service (condition-based), misdiagnosis of fault
and reclassification, or spontaneous self-repair;
• minor fault to derated (μ4,6): questionable component goes into derated service,
as a precaution;
• minor fault to standby (μ4,7): questionable component goes into standby ser-
vice, as a precaution.
• major fault to major fault (μ3,3): component remains in service even though not
performing adequately, affecting system performance;
• major fault to minor fault (μ3,4): field repair to partially restore reliability;
• major fault to derated (μ3,6): change in operating condition to accommodate
achievable level of performance;
• major fault to failed (λ3,2): loss of reliability and function to the point of unac-
ceptable performance.
• failed to failed (μ2,2): component has not changed, and system reliability has
not changed;
• failed to in repair (λ2,1): component leaves the operating system and goes into a
repair activity;
• failed to minor fault (μ2,4): component has only part of its reliability restored
without being removed from the operating system, either through a partial ser-
vicing repair or a spontaneous self-correction of an intermittent fault.
5 Cost Functions
In practice, costs are easier to evaluate than transition probabilities, provided that
the cost of a transition between states is captured unambiguously in a cost ac-
counting system. Specific costs associated with each transition between states
depend on understanding the system in which the component operates (including
operational control and maintenance decision making) as well as the maintenance
processes involved. Examples include the kind of field repair undertaken, and
when a component is refurbished.
Costs should include all aspects of a transition between one state and another;
but the cost function should only include the cost of the transition to the new state.
This means that the costs associated with a state are allocated across the transitions
associated with arriving at that state, weighted by the probability of their respec-
tive occurrence. No future state transition costs are considered. For example, a
transition to a minor fault condition does not yet incur a cost, even though some-
time in the future there will very likely be a change to a major fault or failed state
(which will incur a cost to restore reliability and an opportunity cost of lost pro-
duction).
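A toy sketch of this allocation rule follows: the expected cost of arriving in a state is split across its incoming transitions in proportion to their transition probabilities. All names and numbers are illustrative, not values from the paper.

```python
# Allocate a state's arrival cost across incoming transitions, weighted by
# their transition probabilities (illustrative values only).
incoming = {                 # transitions arriving at the 'major fault' state
    ("full duty", "major fault"): 0.02,    # hypothetical probabilities
    ("minor fault", "major fault"): 0.10,
}
state_cost = 10000.0         # hypothetical expected cost of arriving here
total_p = sum(incoming.values())
allocated = {t: state_cost * p / total_p for t, p in incoming.items()}
print(allocated)
```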
It is further assumed that the states are all known, and no state is misclassified.
If misclassification occurs, then there may be additional costs associated with
performing inappropriate actions based on incorrect information: for example,
a transition from Failed to Spare, a poor maintenance practice caused by either
a misdiagnosis of a fault condition or the mishandling of a failed component into
the spares inventory.
The eight-state reliability model has the following costs:
Spare to Spare: There is a very small cost associated with this transition, since
there is no change in state and there are no costs associated with handling or
shipping; the only cost is for storage. There is no change in the reliability of the
component within this state.
Spare to Standby: In this model, the only way that a spare is introduced into
service is by shutting down the system, and so the system is in a standby state. For
this reason, there is a set of transitions: Spare to Standby, and then from standby to
an operating state. A simplified model may eliminate the Standby state.
Standby to Standby: The Standby state has no cost, unless the system is nonre-
dundant and incurs an opportunity cost of lost production while in standby.
Standby to Derated Duty, Standby to Normal Duty: From the maintenance cost
point of view, the costs related to these transitions are mainly for handling and
installation of the component into the system. This cost does not include the
opportunity cost of lost production during the change if the system has to be down
for the change-out for other reasons. If the system has to be shut down only to
install the component, then the cost of lost production should be included.
Standby to Minor Fault: Any degradation of the component during storage is
reflected in transition probabilities to the Minor Fault state.
Duty to Standby: The transition from operating state (normal or derated) to
standby has almost no cost, as it is simply a change of operating mode.
Standby to Spare: This transition has costs related to handling, relocation of the
component, and storage.
Duty to Duty (Normal or Derated): There is a small cost incurred in this transi-
tion, since there is no change in state, and the cost is only related to the component
working and performing its intended functions.
Duty (Normal or Derated) to Fault (Minor or Major): There is a small cost
associated with this transition, since the component remains operating and
performing its function; the cost is only related to operation. If the fault condition
has a large negative impact on functional performance, there could be a high op-
portunity cost of lost production from wasted product or high cost of consequen-
tial damage to other components. Duty to Fail is not included in this model be-
cause it is assumed that the system always progresses through a fault state before
reaching functional failure.
Fault to Spare: This transition has costs associated with handling, relocation of
the component, and restoration of reliability (repair). Storage costs are incurred
only in the Spare to Spare transition.
Fault to Duty (Normal or Derated): Costs incurred in this transition are for
minor restoration of reliability and repairs, which do not require leaving an
operating state for the In Repair state.
Fault to Fault: This transition has a cost associated with remaining in a fault
state, due to compromised process performance. There is a higher cost for the
transition from Minor Fault to Major Fault than from Major Fault to Minor
Fault.
Fault to Fail: There can be a large cost related to this transition due to produc-
tion losses. It is assumed that a component does not go directly from Minor Fault
to Failed, but progresses from some incipient problem to a Major Fault. This tran-
sition probability depends on the failure mode for the component. For a compo-
nent with multiple failure modes with conspicuously different hazard rates, the
model should be modified to incorporate multiple fault states.
Failed to In Repair: The direct maintenance costs are counted only in this tran-
sition and in the transition within the In Repair state.
Failed to Fault: In this model, there may be transitions from Failed to a Fault
(Minor or Major) representing the spontaneous self-restoration of functionality
that can occur after an intermittent fault, or a minor maintenance activity that does
not require a real repair activity. This model does not consider cases when a failed
component returns to Normal or Derated Duty (with component reliability fully
restored).
In Repair to In Repair: This state transition to the same state has the cost of
shop repairs, plus any cost of lost production when a nonredundant component is
undergoing shop repair.
Fail to Fail: This state transition to the same state implies a cost for ongoing
production losses, including field fixes that fail to fix the problem.
In Repair to Failed: This state transition captures the cost of a bad shop repair
that does not fix the problem.
In Repair to Fault or Standby or Spare: These state transitions capture the costs
of going to a state of partial or full restoration of component reliability. The model
does not include transition from shop repair directly to operation, but rather goes
to standby before putting the component back into operation.
6 Risk Modeling
Usually, maintenance decisions are based on the risk associated with the next change in state, which can be represented by a single transition in a Markov process. At other times, it may be of interest to evaluate the risk after multiple steps. The risk associated with a particular state $i$ is a measure combining the costs and probabilities of the transitions from that state, defined as the sum of the products of the cost estimates and their respective transition probabilities, times the probability of being in state $i$ at the current time (with respect to a step) [13]. The total risk is the sum over all states:
$$\mathrm{Risk} = \sum_{i=1}^{n} P_i(0) \sum_{j=1}^{n} C_{ij}\,P_{ij} \qquad (2)$$
where $P_i(0)$ is the $i$th element of the initial probability vector $P(0)$, which represents the probability of being in state $i$ as the initial state; $C_{ij}$ is the transition cost of changing from state $i$ to state $j$; $P_{ij}$ is the transition probability of changing from state $i$ to state $j$; and $n$ is the number of states. In a real system, there may also be
where

$$P(0) = \left(P_1(0),\, P_2(0),\, P_3(0),\, P_4(0),\, P_5(0),\, P_6(0),\, P_7(0),\, P_8(0)\right), \qquad \sum_{i=1}^{8} P_i(0) = 1. \qquad (4)$$
If the initial state $h$ is known, then the risk equation can be simplified as

$$\mathrm{Risk}_h = \sum_{j=1}^{n=8} C_{hj}\,P_{hj} \qquad (5)$$
The risk for transitions from state $i$ to another state $j$ in multiple steps may differ from the one-step risk, because intermediate transitions through other states $k$, etc. may occur with costs different from those of a single step from state $i$ to state $j$.
We define the probability matrix $P$ as

$$P = \begin{pmatrix}
P_{11} & P_{12} & P_{13} & P_{14} & P_{15} & P_{16} & P_{17} & P_{18}\\
P_{21} & P_{22} & P_{23} & P_{24} & P_{25} & P_{26} & P_{27} & P_{28}\\
P_{31} & P_{32} & P_{33} & P_{34} & P_{35} & P_{36} & P_{37} & P_{38}\\
P_{41} & P_{42} & P_{43} & P_{44} & P_{45} & P_{46} & P_{47} & P_{48}\\
P_{51} & P_{52} & P_{53} & P_{54} & P_{55} & P_{56} & P_{57} & P_{58}\\
P_{61} & P_{62} & P_{63} & P_{64} & P_{65} & P_{66} & P_{67} & P_{68}\\
P_{71} & P_{72} & P_{73} & P_{74} & P_{75} & P_{76} & P_{77} & P_{78}\\
P_{81} & P_{82} & P_{83} & P_{84} & P_{85} & P_{86} & P_{87} & P_{88}
\end{pmatrix}
= \begin{pmatrix}
\mu_{11} & \mu_{12} & \mu_{13} & \mu_{14} & \mu_{15} & \mu_{16} & \mu_{17} & \mu_{18}\\
\lambda_{21} & \mu_{22} & \mu_{23} & \mu_{24} & \mu_{25} & \mu_{26} & \mu_{27} & \mu_{28}\\
\lambda_{31} & \lambda_{32} & \mu_{33} & \mu_{34} & \mu_{35} & \mu_{36} & \mu_{37} & \mu_{38}\\
\lambda_{41} & \lambda_{42} & \lambda_{43} & \mu_{44} & \mu_{45} & \mu_{46} & \mu_{47} & \mu_{48}\\
\lambda_{51} & \lambda_{52} & \lambda_{53} & \lambda_{54} & \mu_{55} & \mu_{56} & \mu_{57} & \mu_{58}\\
\lambda_{61} & \lambda_{62} & \lambda_{63} & \lambda_{64} & \lambda_{65} & \mu_{66} & \mu_{67} & \mu_{68}\\
\lambda_{71} & \lambda_{72} & \lambda_{73} & \lambda_{74} & \lambda_{75} & \lambda_{76} & \mu_{77} & \mu_{78}\\
\lambda_{81} & \lambda_{82} & \lambda_{83} & \lambda_{84} & \lambda_{85} & \lambda_{86} & \lambda_{87} & \mu_{88}
\end{pmatrix} \qquad (6)$$

where

$$\sum_{j=1}^{8} P_{ij} = 1, \qquad i = 1, \ldots, 8. \qquad (7)$$
The risk $R_i^{(k)}$ of being in state $i$ after $k$ steps is the total risk, and for every $k$ steps there is a stochastic vector $R^{(k)}$ formed by all the total risks of this step, where $R_i^{(k)}$ is the risk associated with being in state $i$ after $k$ steps; $R^{(k)}$ is also known as the risk distribution after $k$ steps. The $k$-step risk transition matrix is the $k$th power of the one-step risk transition matrix $R$:

$$R(k) = R^{k}. \qquad (9)$$

Using a discrete Markov process representation with the risk transition matrix $R$, we obtain the risk distribution after a number of steps:

$$R^{(1)} = P(0)\,R, \qquad R^{(2)} = R^{(1)}R = P(0)\,R^{2}, \qquad \ldots, \qquad R^{(k)} = R^{(k-1)}R = P(0)\,R^{k}. \qquad (11)$$
which is

$$\mathrm{Risk} = \left(R_1^{(k)}, R_2^{(k)}, R_3^{(k)}, R_4^{(k)}, R_5^{(k)}, R_6^{(k)}, R_7^{(k)}, R_8^{(k)}\right)V_1 \quad \text{or} \quad \mathrm{Risk} = R_1^{(k)} + R_2^{(k)} + R_3^{(k)} + R_4^{(k)} + R_5^{(k)} + R_6^{(k)} + R_7^{(k)} + R_8^{(k)}, \qquad (15)$$

where $V_1$ is a column vector of ones.
This equation includes the one-step case, so Eq. (5) is equivalent to Eq. (3) when $k = 1$:

$$\mathrm{Risk} = \sum_{i=1}^{8} P_i(0) \sum_{j=1}^{8} C_{ij}\,P_{ij} = P(0)\,R\,V_1. \qquad (16)$$
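As a concrete illustration, Eq. (16) can be evaluated directly. The following is a minimal sketch in Python (not the authors' implementation; the cost matrix and initial vector are purely hypothetical), using the four-state transition matrix from the verification section below:

```python
import numpy as np

# Transition probability matrix P (rows sum to 1); states ordered as in the
# four-state example: 1 Failed, 2 Fault, 3 Duty, 4 Spare.
P = np.array([[0.25, 0.25, 0.25, 0.25],
              [0.40, 0.30, 0.15, 0.15],
              [0.10, 0.40, 0.40, 0.10],
              [0.00, 0.10, 0.30, 0.60]])

# Hypothetical transition cost matrix C_ij (cost of moving from state i to j).
C = np.array([[500.0, 300.0, 150.0, 100.0],
              [250.0, 120.0,  60.0,  80.0],
              [ 90.0,  40.0,  10.0,  20.0],
              [ 30.0,  15.0,   5.0,   2.0]])

P0 = np.array([0.0, 0.0, 1.0, 0.0])  # initial probability vector: start in Duty

# Eq. (16): Risk = P(0) R V1, with R = C * P (elementwise) and V1 a ones vector.
R = C * P
risk = P0 @ R @ np.ones(4)
print(f"one-step risk: {risk:.2f}")
```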
The model requires only a sufficient number of states to satisfy the Markovian
property. Having more states than necessary would complicate the model, and
may make the model difficult to validate and to apply in practice. For example,
rather than an eight-state model, it may be adequate to use only four states: spare,
duty, fault, and failed. A four-state model is illustrated in Figure 2.
Figure 2 Discrete Reliability Model for a Repairable Component with Four States (1 Failed, 2 Fault, 3 Duty, 4 Spare, with transition rates λ and μ between states)
8 Verification
The limiting probabilities for these four states, given the transition probabilities
above, were found to be 0.1776, 0.2632, 0.2796 and 0.2796 for states 1, 2, 3 and 4,
respectively, since:
$$P^{(k)} = \begin{pmatrix} 0.25 & 0.25 & 0.25 & 0.25\\ 0.40 & 0.30 & 0.15 & 0.15\\ 0.10 & 0.40 & 0.40 & 0.10\\ 0 & 0.1 & 0.3 & 0.6 \end{pmatrix}^{\!k} \longrightarrow \begin{pmatrix} 0.1776 & 0.2632 & 0.2796 & 0.2796\\ 0.1776 & 0.2632 & 0.2796 & 0.2796\\ 0.1776 & 0.2632 & 0.2796 & 0.2796\\ 0.1776 & 0.2632 & 0.2796 & 0.2796 \end{pmatrix} \quad \text{as } k \to \infty.$$
In other words, for this specific transition matrix, the component would spend 17.76 % of the time in “fail” (state 1), 26.32 % of the time in “fault”, 27.96 % in “duty” and 27.96 % of the time in “spare.” This case was then run with the RENO simulation, and the same results were obtained when the number of steps (k) and the number of simulations were sufficiently large. Very close convergence between the limiting probabilities and the values obtained with the RENO simulation was reached with combinations of 5000 steps and 5000 simulations, and of 10,000 steps and 10,000 simulations, as shown in Table 1. Good results were also obtained for a combination of 1000 steps and 1000 simulations. The flowchart and some of the results obtained with RENO are shown in Figure 4.
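The limiting probabilities quoted above can also be checked without a simulation tool; a minimal sketch in Python (an independent check under the same transition matrix, not part of the original study) simply raises the matrix to a high power:

```python
import numpy as np

P = np.array([[0.25, 0.25, 0.25, 0.25],
              [0.40, 0.30, 0.15, 0.15],
              [0.10, 0.40, 0.40, 0.10],
              [0.00, 0.10, 0.30, 0.60]])

# P^k converges to a matrix with identical rows: the limiting distribution.
Pk = np.linalg.matrix_power(P, 1000)
print(Pk[0])  # approx. [0.1776 0.2632 0.2796 0.2796]
```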
Once the flowchart was created in RENO, many different analyses were run to determine the best combination of steps and simulations needed to obtain acceptable results. Table 1 shows these runs and their results. They confirm that the larger the number of simulations and steps, the more accurate the numbers obtained, and the closer the values approach the limiting probabilities expected for the discrete Markov model.
In this numerical experiment, the term simulation is used to describe a single pass through a flowchart or process. In the example of 5000 steps and 5000 simulations, a complete pass through the flowchart (one simulation) was completed only when 5000 steps were reached; this process was carried out 5000 times in order to complete the 5000 simulations. More than one simulation is carried out in order to represent the randomness of the process appropriately and to minimize the effect of outliers. An average of the 5000 sets of results is calculated.
The simulations were always run with a seed, which means that the software was forced to use the same sequence of random numbers to start each simulation so that results could be compared. Specifying the same seed for each simulation run yields identical results; in other words, the simulation can be duplicated. A seed also helps when tracking changes in simulation results after the program has been modified: without a seed, in some computer simulation scenarios, it would be hard to determine whether changes in the outcome were due to changes in the code or to different random numbers.
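The role of the steps, simulations and seed can be mimicked with a simple seeded random walk; the sketch below is a simplified stand-in for the RENO flowchart (all parameters assumed), estimating the fraction of time spent in each state:

```python
import numpy as np

P = np.array([[0.25, 0.25, 0.25, 0.25],
              [0.40, 0.30, 0.15, 0.15],
              [0.10, 0.40, 0.40, 0.10],
              [0.00, 0.10, 0.30, 0.60]])

def occupancy(P, steps, sims, seed=1):
    """Monte Carlo estimate of state occupancy: each simulation walks
    `steps` transitions; results are averaged over `sims` runs. A fixed
    seed forces the same random sequence, so runs are reproducible."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    counts = np.zeros(n)
    for _ in range(sims):
        state = rng.integers(n)                # arbitrary starting state
        for _ in range(steps):
            state = rng.choice(n, p=P[state])  # sample the next state
            counts[state] += 1
    return counts / (sims * steps)

# Larger steps/sims move the estimate toward (0.1776, 0.2632, 0.2796, 0.2796):
print(occupancy(P, steps=1000, sims=100))
```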
The number of steps has to be sufficiently large in order to imitate an infinite number of steps (k → ∞). The larger the number of steps, the closer the simulation results will be to the limiting probabilities for a system with Markovian properties.
Among the different analyses tested on an Intel Pentium 4 CPU at 2.40 GHz, the best results (those closest to the limiting probabilities) were obtained with 10,000 steps per simulation and 10,000 simulations, followed by the test with 5000 steps per simulation and 5000 simulations. However, the 10,000-step, 10,000-simulation run took 4 days and 30 minutes, whereas the 5000-step, 5000-simulation run took only 5 hours and 43 minutes, and both sets of results had an error of less than 0.025 % with respect to the limiting probabilities; a run of 5000 simulations with 5000 steps each was therefore considered sufficient. For more complicated scenarios, where computation time becomes relevant to the process, a combination of 1000 steps per simulation and 1000 simulations should also be acceptable, since in this exercise model that combination gave an error of less than 0.2 %. Some of these results are shown in Figure 5 for one of the possible states, in this case the spare state; the figure only shows the cases where the numbers of simulations and steps were equal to or greater than 100.

Figure 5 Comparison of Limiting Probability Versus Markov Simulation Results for Spare State (legend: 10 steps–5,000 sims; 100 steps–1,000 sims; 10,000 steps–10,000 sims; 1,000 steps–1,000 sims)
In most analyses, the system may not have constant parameters. Using this framework, a sensitivity analysis can be conducted with changes in system parameters over time. For example, a parameter of great interest to maintenance planners is the time interval between preventive maintenance activities.
New models can be constructed that consider changes such as an ongoing decrease of reliability after a certain number of steps until maintenance is performed, or a continuous decrease of reliability at every time step. If these changes are well behaved over the time interval, then they may be modeled as nonhomogeneous Poisson processes. Reliability changes will affect the transition probabilities, and changes in business activities can change the elements of the cost matrix. Multiple analysis settings may be chosen to assess the impact of such changes on maintenance scheduling. For example, a maintenance optimization goal may be to minimize the average total cost of the process after 1000 steps. A set of simulations covering multiple sensitivity analysis cases across the range of variables of interest can show whether a near-optimal PM interval has been found, as sketched below.
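By way of illustration, such a sensitivity sweep might look like the following sketch (all numbers hypothetical): a repair-success parameter q is varied, and the expected cumulative cost over 1000 steps is recomputed for each case.

```python
import numpy as np

def total_cost(P, C, P0, steps=1000):
    """Expected cumulative transition cost over `steps` steps, accumulating
    dist @ (C*P) @ 1 at each step while propagating the state distribution."""
    R = C * P
    dist, cost = P0.copy(), 0.0
    for _ in range(steps):
        cost += dist @ R @ np.ones(len(P0))  # expected cost of the next transition
        dist = dist @ P                      # propagate the state distribution
    return cost

C = np.array([[500.0, 300.0, 150.0, 100.0],  # hypothetical cost matrix
              [250.0, 120.0,  60.0,  80.0],
              [ 90.0,  40.0,  10.0,  20.0],
              [ 30.0,  15.0,   5.0,   2.0]])
P0 = np.array([0.0, 0.0, 1.0, 0.0])

for q in (0.6, 0.7, 0.8, 0.9):  # sweep a hypothetical repair-success probability
    P = np.array([[1 - q, 0.5 * q, 0.3 * q, 0.2 * q],
                  [0.40,  0.30,    0.15,    0.15],
                  [0.10,  0.40,    0.40,    0.10],
                  [0.00,  0.10,    0.30,    0.60]])
    print(f"q = {q}: expected cost after 1000 steps = {total_cost(P, C, P0):.0f}")
```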
There are several considerations in applying the proposed model. Primarily, the model should have an appropriate set of states. A component with more than one failure mode may require a separate state for a failure mode whose transition probabilities to other states differ from those of the other failure modes.
Estimating transition probabilities between states in a system can be achieved
in two ways. If the system has a means of automatically identifying states, then it
is a fairly simple matter to collect the record of events when the system entered
and exited a particular state. An example is a mine equipment dispatching system
which records the time when each equipment operator enters a code describing the
state of the machine. Of course, manual entry of codes may be subject to error,
and so some data cleaning may have to be done.
From this information, the transition probabilities can be estimated using stan-
dard statistical analysis software. It is important to have both the entering and
exiting information for each state so that the set of events for each type of transi-
tion can be determined. In the case of an exponential distribution, the random
events will follow a Poisson process.
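As a sketch of what this estimation can look like (assuming the event log has already been cleaned and mapped to integer state codes), the maximum-likelihood estimate simply counts observed transitions and normalizes each row:

```python
import numpy as np

def estimate_transition_matrix(state_log, n_states):
    """MLE of P_ij: count transitions i -> j in the log and normalize rows."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(state_log[:-1], state_log[1:]):  # consecutive state pairs
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical dispatch-system history of state codes (0..3):
log = [3, 2, 2, 1, 0, 0, 3, 2, 1, 3, 3, 2]
print(estimate_transition_matrix(log, 4))
```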
If the system does not have an automatic method for recording the state of the
system, then it may be possible to identify a particular state from a vector of fea-
tures that are observable from system processes (production and maintenance).
Once this parsing of states has been achieved, then estimation of the transition
probabilities proceeds as described above.
Estimating the costs $C_{ij}$ of changing from state $i$ to state $j$ within a period of time may be more challenging. Ideally, the organization will have an activity-based costing system; in that case, each transition between states maps onto some recorded cost. Some transitions may have zero cost, and benefits will have negative cost. The opportunity cost of lost production can be estimated from the difference between the base-case cost and the costs associated with the transition.
10 Conclusion
References
[1] Lipsett M (2001) Modeling the Flow of Information in Mine Maintenance Systems. Proc.
CIM Annual Conference
[2] Virtanen I (2006) On The Concepts And Derivation Of Reliability In Stochastic Systems
With States Of Reduced Efficiency. Dissertation, University of Turku
[3] Lugtigheid D, Banjevic D, Jardine A (2004) Modeling Repairable System Reliability with
Explanatory Variables and Repair and Maintenance Actions. IMA J Manag Math
15:89−110. doi: 10.1093/imaman/15.2.89
[4] Ching WK (2006) Markov chains: models, algorithms and applications. Springer, New York
[5] Kececioglu D (1995) Maintainability, Availability and Operational Readiness Engineering
Handbook. Prentice Hall, Upper Saddle River
[6] Caldeira J, Taborda J, Trigo T (2006) Optimization of the preventive maintenance plan of
a series components system. Int J Press Vessel Pip 83:244−248.
doi: 10.1016/j.ijpvp.2006.02.016
[7] Lindqvist B (2006) On the Statistical Modeling and Analysis of Repairable Systems. Stat Sci 21(4):532−551. doi: 10.1214/088342306000000448
[8] Norris JR (1997) Markov Chains. Cambridge University Press, New York
[9] Sahner R, Trivedi K (1986) A Hierarchical Combinatorial-Markov Method of Solving Complex Reliability Models. In: Proceedings of FJCC 1986, 817−825. IEEE Computer Society Press, Los Alamitos, CA
[10] Lisnianski A, Levitin G (2003) Multi-State System Reliability. World Scientific Publishing,
Singapore.
[11] D’Amico G, Janssen J, Manca R (2005) Credit Risk Migration Semi-Markov Models:
A Reliability Approach
[12] Zhang J (2005) Maintenance Planning and Cost Effective Replacement Strategies. Disserta-
tion, University of Alberta
[13] Modarres M, Kaminskiy M, Krivtsov V (1999) Reliability Engineering and Risk Analysis:
A Practical Guide. Marcel Dekker, New York
Managing the Risks of Adverse Operational
Requirements in Power Generation –
Case Study in Gas and Hydro Turbines
Abstract Load demands in power generation for the national or district grid often require turbo-generator sets to operate under adverse operational requirements with respect to maintenance and design ideals. Such instances typically involve turbines operating beyond maintenance schedules or at part load conditions. Part load operations for hydro turbines, in particular, present a set of unique problems. Power generation managers have to manage the risks of machine damage imposed on their engineering assets in an attempt to ensure continuing and stable electricity despatch. This paper presents two case studies examining the risks of machine failures from adverse operating requirements and how they can be managed by condition monitoring. One involves gas turbines operating beyond OEM-recommended operating hours between maintenance, where blade failures are a potential concern; the risks were evaluated and managed with vibration monitoring of the blade passing frequencies. The other case study relates to hydro turbines operating in rough zones at part load conditions dictated by the load stabilization requirements of the electricity grid. Measurements of vibrations, draft tube pressures and strain gauging showed distressed conditions when the turbines were operated at part loads. Premature failures were experienced in these units.
__________________________________
M.S. Leong, B.Sc, PhD
Professor, Institute of Noise and Vibration, Universiti Teknologi Malaysia,
Jalan Semarak, 54100 Kuala Lumpur, Malaysia
e-mail: salman.leong@gmail.com
N.B. Hee, B.Sc
Research Associate, (formerly Power Station Manager, Tenaga Nasional Berhad),
Institute of Noise and Vibration, Universiti Teknologi Malaysia,
Jalan Semarak, 54100 Kuala Lumpur, Malaysia
e-mail: ngbh1@yahoo.com
1 Introduction
One of the many challenges that must be addressed by electricity generation operators, planners and national grid administrators is the ability to meet the requirements for continuous supply of electricity to the national community with the necessary reliability, taking into consideration technical, economic, environmental and socio-political conditions. This relates, in particular, to electricity supply having to meet electricity demand without fail. The dynamics between supply and demand involve both long-term and daily short-term time frames. This paper deals with maintenance and reliability issues faced by plant operators as a result of having to ensure that short-term power generation copes with immediate supply (load despatch) from their facilities. Electricity demand fluctuates throughout the day and night, peaking when industrial and consumer demands peak, and is influenced by many factors, including industrial usage, climate and seasonal changes.
Recent experiences around the world have demonstrated that power generation for national and district electricity grids has little excess load capacity (often termed “spinning reserve”), partly due to the exorbitant capital cost of power generation plant expansion and the inherent unplanned outages (non-availability) of existing facilities. Under these scenarios, power generation plants are often operated at maximum load capacities. In the event of unscheduled breakdowns, or equipment out on maintenance not brought back to service as originally planned, plant operators often find themselves unable, or not allowed, to remove currently operating units for maintenance solely because a maintenance (inspection) outage is due. Maintenance schedules for large turbo-generator sets are often guided by the recommendations of the manufacturer (and insurance coverage may dictate compliance with such recommendations). This inevitably presents a dilemma to a plant operator (and the national electricity grid administrator/National Load Despatch Centre) when national electricity load demands do not permit units to be removed for maintenance. This paper, in part, examines how such a dilemma needs to be managed.
Another problem relates to how electricity generation (MW power output) has to be matched against electricity consumption. Base loads are provided by turbo-generator sets in continuous operation, and peaking units are used to accommodate the varying peak load demands. In Malaysia, and probably in other countries, base loads are usually assigned to steam and gas turbine sets (and nuclear if available), and peak electricity loads are assigned to gas turbines and hydro turbines, since start-ups and stoppages can be more readily accommodated on these turbine types than on steam turbines, for example. This would, of course, be dictated by the generation mix and availability unique to the country. Under such scenarios of daily starts and stops, daily heat cycles are imposed on gas turbines. Some manufacturers use Equivalent Operating Hours (EOH) to reflect the additional thermal reversal cycles imposed on the turbines in addition to actual running hours, as illustrated below.
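A hypothetical illustration only, since actual EOH formulas are OEM-specific and contractually defined; a common simplified form adds an equivalent-hours penalty for each start (thermal cycle) to the fired hours:

```python
# Hypothetical illustration only: real EOH formulas are OEM-specific.
# A simplified form charges an equivalent-hours penalty per start.
def equivalent_operating_hours(fired_hours, starts, hours_per_start=20.0):
    return fired_hours + starts * hours_per_start

# A peaking unit fired 12 h/day with daily starts accrues EOH much faster
# than its fired hours alone would suggest:
print(equivalent_operating_hours(fired_hours=12 * 365, starts=365))  # 11680.0
```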
There are several issues of pertinent concern relating to gas turbine operations in power generation that are fairly typical of industrial gas turbines.
Past experience at power generation plants shows that blade failures are the most common failures in gas turbines (see Figure 1). Rubs are also occasionally noticed on the casing and rotor. This is consistent with experience reported in the literature, which shows that blade failures are the most common fault in industrial gas turbines. Meher-Homji [1, 2] cited statistics from a renowned insurance company indicating that blade failures accounted for as much as 42 % of failures in gas turbines. In a more recent article by an insurance company (Allianz Technology Centre AZT [3]), it was stated that statistical analysis of 714 gas turbine installation components investigated during the last 10 years had shown that turbine blading (14 %), compressor parts (9 %), casings (5 %), combustion chambers (5 %), rotors (5 %) and burners (3 %) had the highest damage rates.
Figure 1 Common Gas Turbine Blade Failures Including: (a) Foreign Object Damage (FOD), (b) Lost Parts, and (c) Cracks at Root
The more common problems in turbine blade rows are foreign object damage, lost parts, cracks (at the blades and roots), rubs, loose disk coupling, deformation and erosion. Lost parts usually result in an increased synchronous vibration response and are more readily detected from the increased vibration amplitude and/or phase shift of the x1 vibration vector. Cracks, looseness and rubs, unless reaching a catastrophic stage, often remain undetected by the overall vibration level monitoring typically used in equipment protection systems and in-plant DCS/monitoring displays. Blade-related faults have been shown to be more readily detected from increased amplitudes of blade passing frequency components [4, 5], as sketched after the figure below.
[Figure: vibration amplitude (Gs) versus frequency (Hz), approximately 1150–4900 Hz, for units GT3, GT4, GT5 and GT6]
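A minimal sketch of this kind of monitoring follows (synthetic signal, with an assumed sample rate, shaft speed and blade count, since those details are machine-specific): the blade passing frequency is the blade count times the shaft rotation frequency, and its amplitude is read off the vibration spectrum.

```python
import numpy as np

fs = 20_000                # sample rate, Hz (assumed)
rpm, n_blades = 3000, 92   # hypothetical shaft speed and blade count
bpf = n_blades * rpm / 60  # blade passing frequency: 4600 Hz

# Synthetic 1-second casing vibration signal: BPF tone plus noise.
t = np.arange(0, 1.0, 1 / fs)
x = 0.3 * np.sin(2 * np.pi * bpf * t) + 0.05 * np.random.randn(t.size)

# Single-sided amplitude spectrum; trend the amplitude near the BPF.
spec = np.abs(np.fft.rfft(x)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > bpf - 10) & (freqs < bpf + 10)  # +/-10 Hz window around BPF
print(f"BPF {bpf:.0f} Hz amplitude: {spec[band].max():.3f} g")
```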
This section summarizes the economics and financial risks/gains of the extended EOH, based on the experience of the power plant with respect to the unit of concern (GT6). The financial risks were evaluated based on the potential cost of blade failures (in all likelihood FOD damage) weighed against opportunity costs (revenue and capacity payments from the electricity distribution party). Even with FOD damage, an excess clause in the insurance coverage means that typical FOD damage is not a claimable sum. The key is to ensure that the risks associated with a major catastrophic failure of the turbine are avoided.
The maintenance schedule in accordance with the OEM's recommendations was 16,000 EOH for a complete cycle of inspection, with intervals between minor inspections of 4000 EOH. This unit was at approximately 64,000 EOH at the time of the OEM's request for an immediate outage (as compared to the scheduled 48,000 EOH). When the unit was finally removed for overhaul at 65,953 EOH, this meant an extension of 17,953 EOH, saving one complete cycle of inspection.
A more significant saving was achieved through availability, as reflected in the capacity payment and energy payment that would have been lost had the unit been taken out on an untimely outage. This unit was operated for more than 120 days beyond the day when an immediate outage was recommended by the OEM, representing an additional 120 days of availability. The Capacity Payment and Energy Payment for the gas turbine payable to the plant were valued at USD 21,100 and USD 35,350 per machine per day respectively, amounting to USD 56,450 per day. This represented revenue savings for the power plant of USD 6,774,000 for availability (the arithmetic is reproduced below). The combined savings to the plant for this extended EOH, from maintenance savings and increased availability revenue, were almost USD 14.6 million. It therefore made financial sense for this plant to have considered and implemented the extended EOH in an environment of pressing MW load demand.
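The availability figures quoted above can be checked directly; this short script simply reproduces the paper's arithmetic.

```python
# Check of the availability figures quoted in the text (USD, per machine).
capacity_payment = 21_100                         # USD per day
energy_payment = 35_350                           # USD per day
daily_payment = capacity_payment + energy_payment # 56,450 USD per day
extra_days = 120                                  # operation beyond recommended outage

availability_revenue = daily_payment * extra_days # 6,774,000 USD
print(daily_payment, availability_revenue)
# The combined saving of ~USD 14.6 million quoted in the text additionally
# includes the avoided inspection cycle from the 17,953 EOH extension.
```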
While the effects of load variations in gas turbines are less obvious to the plant operator on an immediate basis (notwithstanding the fact that they have a significant long-term impact on useful life and on the EOH), load variations in hydro turbines are more immediately apparent. Hydro turbines inherently have a designated “rough zone” with respect to their performance curve (operating window). Due to the flow angles of the working fluid (water) as it enters and leaves the runner, fluid–structure interaction under part load conditions results in unbalanced hydraulic conditions in the working section and draft tube. In the part load operating zone, the hydraulic efficiency drops and, more importantly from a life cycle perspective, the vibrations (and stresses) induced on the turbine increase substantially. This section of the paper presents issues related to the increased risks to the long-term integrity of hydro turbines, often not readily recognized by National Load Despatch administrators (and perhaps even the plant operator), arising from operating hydro turbines under part load conditions.
This case study relates to four Francis turbine units (base load 100 MW each) operating at a constant speed of 250 rpm (4.1 cps). The hydro turbines were, almost as a matter of routine, used to stabilize power supply to the national electricity grid and, as such, operated over a broad load range for extended periods in service.
Draft tube pressures (although often accessible for manual readings, they are not necessarily monitored for condition assessment) exhibit dynamic variations arising from changes in flow conditions. A typical plot of draft tube pressures under different load regimes is shown in Figure 4. The pressure variations with time inherently result in pressure pulsations with frequency content. Fast Fourier Transformation (FFT) of the pressure yields dynamic pressures at sub-synchronous frequencies of the shaft running speed. A pressure FFT is shown in Figure 5 for an operational load condition of 40 MW. A dominant pressure peak was evident at 1.0 Hz (~25 % of runner RPM). Operations of hydro turbines under part load conditions have long been known to result in a spiral vortex flow as the water leaves the runner into the draft tube. This flow vortex results in cyclic pressure fluctuations, as evident in the above pressure measurements; a sketch of this type of analysis follows.
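A minimal sketch of this type of analysis (on a synthetic pressure signal, with an assumed sample rate) shows how the ~0.25x sub-synchronous vortex component stands out in the FFT:

```python
import numpy as np

fs = 100                      # sample rate, Hz (assumed)
run_speed = 250 / 60          # runner speed: ~4.17 Hz
t = np.arange(0, 60, 1 / fs)  # 60 s pressure record

# Synthetic signal: vortex component at ~0.25x runner speed, a smaller
# synchronous (1x) component, plus measurement noise.
p = (1.0 * np.sin(2 * np.pi * 0.25 * run_speed * t)
     + 0.2 * np.sin(2 * np.pi * run_speed * t)
     + 0.1 * np.random.randn(t.size))

spec = np.abs(np.fft.rfft(p)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"dominant peak at {freqs[1:][spec[1:].argmax()]:.2f} Hz")  # ~1.04 Hz
```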
A consequence of the vortex flow generated within the runner and draft tube is high sub-synchronous vibration induced in the rotor. The sub-synchronous component (1.03 Hz, corresponding to ~0.25x RPM) in fact exceeds the synchronous x1 RPM component associated with residual rotor unbalance. This sub-synchronous peak frequency at 0.25x RPM (1.03 Hz) was identical to the dynamic frequency of the pressure peak measured at the draft tube, confirming that the sub-synchronous peak was flow induced. A plot of vibration spectrum against load (as obtained from controlled tests in load increments of 10 MW) is given in Figure 6. These plots clearly show the onset of relatively higher flow-induced vibrations resulting from part load operations (often referred to by the OEM and plant operators as the “rough zone”).
A visually more dramatic insight into the effects of part load operations is obtained when the shaft vibrations are displayed as time waveforms; these are compared for base load and part load conditions in Figure 7.
Figure 7 Vibration Time Waveforms for Base Load and Part Load Conditions
A consequence of the pressure pulsations, often visually observed on the draft tube casing, is physical deformation (flexing) of the draft tube casing. The draft tube steel casing for all four units of this particular hydro power plant in fact had to be stiffened with additional ribs soon after initial commissioning, as a result of cracks in the draft tube external casing due to excessive vibration (flexing) of the casing. Even with the additional steel rib reinforcement for added rigidity, flexing of the draft tube casing was still visible.
Figure 8 Draft Tube Casing Strains (Maximum and Minimum Principal Stress) Versus Time
for Base Load and Part Load Conditions
Strain gauging of the draft tube casing was undertaken on one unit. Strain levels were measured under incremental load conditions at the same time as the shaft vibrations above were obtained. Comparisons of the baseline (100 MW) strain time waveforms against the part load condition (40 MW) are given in Figure 8. The time waveforms of the measured strain (subsequently converted to stress levels) showed dynamic characteristics similar to the shaft vibrations for the same load conditions. Stress reversals were significantly more extreme at part load than at base load, typically five to ten times higher. This demonstrated that components subject to fluid–structure interaction were more highly stressed, inevitably leading to reduced life.
The most commonly recognized, and perhaps accepted, consequence of part load operations in hydro turbines is repairs and part replacement to the runner and draft tube liner due to cavitation after several years of operation. The unit inherently operates at reduced hydraulic (cost) efficiency under part load conditions. This may be deemed an acceptable price to pay for the necessity of operating in the rough zone for load stabilization of the electricity grid. What is unacceptable to plant operators is the inability to operate at all at higher loads due to the high vibrations inherent in part load operations. In fact, there was an incident at this particular power station where the main bearing pedestal supporting the entire rotor train suffered structural cracks well before the design life of the unit, leaving the unit unable to be operated for load despatch at higher loads. It was the considered opinion of the authors that this bearing pedestal structural failure was a result of extended operations in the rough zone under part load.
4 Conclusion
and sideband activities were used to assess potential deterioration in blade condition. For operations under part load and within the operating window, all available monitoring tools should be used; for the hydro turbines, this included monitoring and dynamic analysis (FFTs) of the draft tube pressures.
References
[1] Meher-Homji CB (1995) Blading vibration and failures in gas turbines: Part C – Detection
and troubleshooting. ASME no. 95-GT-420
[2] Meher-Homji CB (1995) Blading vibration and failures in gas turbines: Part D – Case studies. ASME no. 95-GT-421
[3] Allianz Center for Technology (2008) Product service information 1/00. Information / Damage analysis. www.en.allianz-azt.com
[4] Mitchell J (1975) Examination of pump cavitation, gear mesh and blade performance using external vibration characteristics. In: Proc 4th Turbomachinery Symposium, Texas A&M University, 39–45
[5] Kubiak J, Gonzalez G, Garcia G, Urquiza B (2001) Hybrid fault pattern for the diagnosis of
gas turbine component degradation. Int Joint Power Generation Conf New Orleans no.
PWR-19112
[6] Leong MS, Lim MH (2008) Detection of blade rubs and looseness in gas turbines – Opera-
tional field experience and laboratory study. 5th Int Conf Cond Monit Mach Failure Detect
Prev Tech Edinburgh 901–912
[7] Lim MH, Leong MS (2010) Improved blade fault diagnosis using discrete blade passing energy packet and rotor dynamics wavelet analysis. ASME no. GT2010-22218, ASME Turbo Expo 2010: Power for Land, Sea and Air, Glasgow
Field-Wide Integrated Planning in a Complex
and Remote Operational Environment:
Reflections Based on an Industrial Case Study
Abstract Oil and Gas (O&G) producers are challenged to increase working efficiency while reducing production costs. This demands the application of various innovative techniques and novel work management solutions. In this context, collaborative work and integration of work processes have become a major focus of interest. One well-known initiative involves strategic, field-wide integrated work planning that aims at more efficient and cost-effective coordination of activities by core disciplines and stakeholders to maximise business results.
This paper addresses issues related to Integrated Planning (IP) within an O&G offshore production environment. It is based on an ongoing project in Norway in close cooperation with the O&G industry.
Keywords Oil and gas assets, Work management, Operations and maintenance
performance
1 Introduction
From the official energy statistics of the U.S. Government [1], the world's demand for oil continues to grow. The shortage of supplies, together with the growth of global requirements, has contributed significantly to the rise in the price of oil. Higher oil prices have led to a significant expansion of O&G production and exploration [2, 3] to meet the energy demand and meet the rising
__________________________________
Y. Bai
Centre for Industrial Asset Management, University of Stavanger N-4036, Stavanger, Norway
J.P. Liyanage
Centre for Industrial Asset Management, University of Stavanger N-4036, Stavanger, Norway
Integrated Operations (IO) is a new baseline established on the NCS during the past few years. It is seen as a way to optimise and improve business performance by integrating operational disciplines, different phases of complex but interdependent work processes, cooperating organisations, and different geographical locations. It is being implemented through a number of innovative solutions involving real-time data integration, field-wide information sharing and interpretation, support tools, management techniques, advanced technologies and new principles of collaborative working [9, 10].
IO can also be seen as an operational setting in which integration of both production assets and the technical support environment [9] is required to create an active collaborative environment for better efficiency of production assets based on enhanced capabilities. In some oil fields on the NCS today, the establishment of a common digital infrastructure and reliable data management is already on schedule. Meanwhile, as one of the necessary factors, intelligent work processes, which develop collaborative decision loops and task and activity flows across disciplines both onshore and offshore, are also in focus as a prerequisite for successful applications of IO [8].
In this context, initiatives related to Integrated Work Processes (IWP) are also in progress to streamline decisions and activities. In principle, IWP involves an effort to integrate work processes across operational disciplines by using Information and Communication Technologies (ICT) [11, 9]. It involves a series of technical and managerial measures in which information about operations must be made available to all parties involved, online and in real time, to enhance the collaborative work management process with better time, quality and cost performance and less risk. To realise the
1.2 Method
This case study was performed with one of the major O&G producers in the North Sea, with participation in the company's planning process. The objective was to identify Integrated Planning scenarios; this was addressed mainly by using empirical data from the Norwegian Continental Shelf (NCS), participating in the company's internal programs and projects, and drawing on the knowledge of professionals in the field and existing academic knowledge. The required data was collected, and knowledge and understanding gathered, through communication with key offshore engineers, active co-operation with IP planners, review of project reports and other company documents, and observation of internal project workshops and meetings.
This paper focuses on the Integrated Planning concept and its possible application levels in different environments. A brief introduction of influential factors derived from aspects of dynamic business, cost, time, and quality is also given to illustrate the limits and constraints of the actual Integrated Planning solution.
2 Integrated Planning
As Kayacan and Celik describe [12], Integrated Planning (IP) enables the align-
ment of key operational planning processes to provide a common perspective
across work plans. The major objective of IP is to integrate all operational plans
into a single centralised planning system which will be realised online and is based
on a complete database that contains key data of critical processes.
Oil and gas production and exploration involve complex work processes. According to Payne [13], historical operation planning fails to link strategic plans to operational plans. Each operational segment focuses on its own plan, creating conflicts and resource waste based on constraint-factor management [14]. Also, the lack of performance measurement results in deviation between business strategy and execution [15]. This seriously harms the feasibility of strategies and reduces production effectiveness. The effort of the O&G sector is to merge all activity-related information coming from multi-disciplinary sources into an accurate, integrated plan with a seamless interface, efficiently aligning needs, requirements and daily work.
with rational leverages, linkages, references and charts for better visualisation,
interpretation and application, process efficiencies can be significantly improved
as information sharing goes beyond a “need to know basis” [20].
By nature, Integrated Planning is much more than a simple, linear design of plans and schedules. All departmental plans, together with temporary projects, contribute to a complex mix of information and involve many kinds of interrelationships that are difficult to fully understand. This raises the requirement for a form of Portfolio Management: a management tool for constructively managing different projects through project scope identification and organisation patterns. The expectation here is that the IT system provides a movable portfolio structure for future developments, thus providing a platform to realise control of the task portfolio and applications following variability in the critical dimensions [21].
For realising the operational objectives of IP, O&G producers need to evaluate the current planning status and optimise it through work process integration and by updating IT and infrastructure tools. However, for various reasons (i.e. business requirements, financial limits, future growth prospects, etc.), it is not easy to achieve all the business objectives of IP for each oil field. O&G producers are challenged to evaluate their production capacity and environment, and to identify the best solution based on an effective balance between the cost of establishing IP and the benefits of its implementation.
Following the description above, IP level-1 is the basic, historical IP template for planning in the O&G industry. When IP develops from level-1 to level-2, the cost lies mostly in the adjustment of traditional work processes. The effort required for an oil field to move to level-2 (through the establishment of independent databases, organisation of multi-disciplinary workshops, common planning formats, etc.) would be relatively modest.
Figure 7 Profit Potential for IP Varies from One Business Situation to Another
small-reserve oil fields with short-term operating contracts and limited growth opportunities may find that the situation is not conducive to the development and implementation of IP on a large scale. In such cases, efficiency improvements in work planning processes are weighed against maximising production within a limited budget. Figure 7 illustrates the profit potential for the two cases.
In Figure 7, line ‘AB’ represents the case of the small-reserve oil field with limited growth opportunities, while line ‘CD’ represents that of a complex, rich-reserve oil field with better growth opportunities. The difference in profit potential arises because the impact of changed planning processes differs in relation to the complexity and scope of operations.
Furthermore, at least in the North Sea, developments within the O&G industry related to IO have provided a common and effective basis for IP-type activities. Even though economic status and profit-cost calculations have a large impact on IP development and implementation, there are other factors as well. These are briefly presented in the next section.
Cost, time, quality and risks are among the key criteria for evaluating business performance. The IP of all activities must satisfy the requirements of these criteria.
The O&G production and exploration projects are characterised by large capital investments and complex processes. As an optimising solution for O&G production, IP is inevitably influenced by business scope, budget, profit, and related strategy. Among the main factors are:
i. Scope of O&G production: The number of assets involved and the scale of
production.
ii. Company business strategies and policies: The business objectives and oppor-
tunities in the region.
iii. Growth opportunities: The business options to grow the activities.
iv. Life-extension: The production life of current producing assets.
v. Constraints from business cooperation: The types of business cooperation available and the related needs of business partners.
5 Conclusion
References