Kevin MacG. Adams

Non-functional Requirements in Systems Analysis and Design

Topics in Safety, Risk, Reliability and Quality, Volume 28
Series editor: Adrian V. Gheorghe, Old Dominion University, Norfolk, VA, USA

Kevin MacG. Adams
Information Technology and Computer Science
University of Maryland University College
Adelphi, MD, USA
At the end of the last century, corporations and government entities in the United
States showed increasing concern for the loss of competitive advantage previously
enjoyed by products designed and manufactured in the United States. The loss of
competitive advantage experienced by manufacturers of these products was
attributed to a variety of causes that both threatened the country’s standard of living
as well as its position within the larger world economy (Dertouzos et al. 1989). A
report by the National Research Council stated that:
Engineering design is a crucial component of the industrial product realization process. It
is estimated that 70 percent or more of the life cycle cost of a product is determined during
design. (NRC 1991, p. 1)
The engineering community agreed with this assessment, stating that “market
loss by U.S. companies is due to design deficiencies more than manufacturing
deficiencies” (Dixon and Duffey 1990, p. 13). A variety of studies on both man-
ufacturing and engineering design were undertaken in order to improve the situation
in both the industrial sector and in academia (NRC 1985, 1986, 1991). The engi-
neering community found that, in order to improve both the cost and efficacy of
products produced for the global economy, “To regain world manufacturing
leadership, we need to take a more strategic approach by also improving our
engineering design practices” (Dixon and Duffey 1990, p. 9).
The engineering design process in use in the industrial sector required improvement,
but more importantly, the theory of design and the design methodologies advocated
by the academic community had stagnated. A renewed emphasis on design and a new
subdiscipline in engineering design were adopted by the engineering community.
New requirements for design activities in the academic curricula were mandated,
and the national engineering accreditation organization included additional design
criteria as part of the accreditation assessment process. Major efforts to reinvigorate
design in both undergraduate and graduate engineering programs in the United
States have reemphasized the role of design in the engineering curricula. This text
has been developed to address a unique topic in engineering design, thereby filling
a void in the existing engineering literature.
The topic of this text—Nonfunctional Requirements in Systems Analysis and
Design—supports endeavors in the engineering of systems. To date, nonfunctional
requirements have only been addressed within highly focused subdisciplines of
engineering (e.g., reliability, maintainability, availability, traceability, testability,
survivability, etc.). The wider engineering community has not had access to
materials that permit them to develop a holistic, systemic perspective for non-
functional requirements that regularly affect the entire system. Having a basic
understanding of how the principal nonfunctional requirements affect the sustain-
ment, design, adaptability, and viability concerns of a system at a high level should
help fill a void in the existing engineering literature.
To support this approach to understanding nonfunctional requirements during
engineering design endeavors, the book is divided into six major parts: (1) Systems
Design and Nonfunctional Requirements; (2) Sustainment Concerns; (3) Design
Concerns; (4) Adaptation Concerns; (5) Viability Concerns; and (6) Conclusion.
Part I focuses on the purposeful design of systems and how nonfunctional
requirements fit into the design approach. Chapter 1 provides an introduction to the
design of engineering systems, and reviews how engineers in the various engi-
neering disciplines are responsible for developing designs for complex man-made
systems. It also addresses systematic design, the breadth and depth of disciplines
associated with design activities, and the use of life cycle models and supporting
processes in the systems design process. Chapter 2 provides a description of
engineering design and explains how it fits within the larger scientific paradigm. It
includes a description of the desirable features and thought processes invoked in
good engineering design methodologies. The chapter contains a high-level over-
view of seven historically significant design methodologies. It concludes with a
more detailed section on axiomatic design and explains why axiomatic design is
proposed as an effective system-based approach to the design of engineering sys-
tems. Chapter 3 provides a formal definition for nonfunctional requirements and the
role they play in the engineering design of man-made, complex systems. It
addresses the wide range of nonfunctional requirements, and introduces a number
of taxonomies that have been used to describe nonfunctional requirements. The
chapter concludes by presenting a notional taxonomy or framework for under-
standing nonfunctional requirements and their role as part of any system design
endeavor. This taxonomy distributes 27 nonfunctional requirements into four
concerns: sustainment concerns, design concerns, adaptation concerns, and viability
concerns. The four concerns serve as the headings for the next four Parts of the text.
Part II addresses sustainment concerns during systems design endeavors. It is
divided into two chapters which address five nonfunctional requirements. Chapter 4
addresses the nonfunctional requirements for reliability and maintainability. The
section on reliability reviews the basic theory, equations, and concepts that underlie
its utilization, addresses how reliability is applied in engineering design, and
explains techniques for determining component reliability. The section concludes
with a metric and measurable characteristic for
adaptability and flexibility are defined, and a method for distinguishing between
these two nonfunctional properties is proposed. Modifiability is defined, and a
distinction between it and both scalability and maintainability is provided.
Robustness is defined, and its impact on design considerations is discussed. The
chapter concludes by defining a measure and a means for measuring changeability
that is a function of all four nonfunctional requirements discussed in the chapter.
Chapter 10 addresses the nonfunctional requirements for extensibility, portability,
reusability, and self-descriptiveness. The chapter begins by reviewing extensibility,
its definitions, and how it is approached as an aspect of purposeful systems design.
Portability is defined, positioned as a desirable characteristic, and is discussed as it
relates to the four factors designers must consider in order to achieve portable
designs. Reusability is addressed by providing both a definition and an explanation
of its role in systems designs. Both top-down and bottom-up approaches, and three
unique techniques that address reusability are presented. The section concludes by
recommending two strategies and ten heuristics that support reusability in systems
designs. Self-descriptiveness is defined and discussed by emphasizing the types of
problems associated with poor self-descriptiveness. Seven design principles for
user-systems dialogue are proposed to decrease errors and improve system self-
descriptiveness. The chapter concludes by defining a measure and a means for
measuring adaptation concerns, which is a function of extensibility, portability,
reusability, and self-descriptiveness.
Part V addresses viability concerns during systems design endeavors. It is divided
into two chapters which address eight nonfunctional requirements. Chapter 11
addresses the nonfunctional requirements for understandability, usability, robust-
ness, and survivability. The first three nonfunctional requirements are defined and
positioned within the requirements for good system design. The fourth nonfunctional
requirement, survivability, is defined and 17 design principles that may be invoked
when designing for survivability are addressed. The chapter concludes by defining a
measure and a means for measuring core viability concerns, which is a function of
understandability, usability, robustness, and survivability. Chapter 12 addresses the
nonfunctional requirements for accuracy, correctness, efficiency, and integrity. The
chapter begins by reviewing accuracy, its definitions, and concepts related to ref-
erence value, precision, and trueness. The second section defines correctness, and
demonstrates how both verification and validation activities provide evaluation
opportunities to ensure correctness. Four design principles that support the devel-
opment of systems that correctly represent the specified requirements for the system
are reviewed. Efficiency is addressed by providing a clear definition for efficiency,
and by establishing a proxy for system efficiency. Integrity and the concept that
underlies its use as a nonfunctional requirement in systems designs are reviewed.
Thirty-three security design principles, and the life cycle stages where they should be
invoked when designing systems for integrity, are proposed. The chapter con-
cludes by defining a measure and a means for measuring other viability concerns,
which is a function of accuracy, correctness, efficiency, and integrity.
Part VI provides a conclusion in Chap. 13. The conclusion reviews the climate
that led to the small crisis in engineering design during the late 1980s and the need
for revision of the engineering curricula and accreditation criteria. The major efforts
to reinvigorate design in both undergraduate and graduate engineering programs in
the United States which reflected the reemphasis of the role of design in the
engineering curricula are covered. Finally, the rationale for the development of the
text, and the need to address nonfunctional requirements in systems analysis and
design endeavors are reviewed.
This book is intended for use by systems practitioners or in a graduate course in
either systems engineering or systems design where an understanding of nonfunc-
tional requirements as an element of the design process is required. Given its
discipline-agnostic nature, it is just as appropriate for use in a software, mechanical,
or civil engineering class on design or requirements. The book may be utilized in a
traditional 12- or 14-week schedule of classes. Part I should be taught in order of
appearance in the book to provide the proper theoretical foundation. Parts II–V can be
taught in any order, although, lacking any other preference, they can be taught in the
order in which they appear. The conclusion in Chap. 13 should follow the conclusion
of Parts I–V, as it builds on the information developed in Chaps. 4–12.
Upon completion of the text, the reader or student should have an improved
understanding and appreciation for the nonfunctional requirements present in
complex, man-made systems. Although the text addresses only 27 nonfunctional
requirements, the author recognizes that many additional nonfunctional require-
ments exist and that they may be required to be addressed in many systems design
endeavors. However, armed with the approach used in understanding the 27 defined
nonfunctional requirements (i.e., definition, design utilization, measurement, and
evaluation), additional nonfunctional requirements may be similarly treated.
As always, the author takes responsibility for the thoughts, ideas, and concepts
presented in this text. Readers are encouraged to submit corrections and suggestions
through correspondence with the author in the spirit of continuous improvement.
References
Dertouzos, M. L., Solow, R. M., & Lester, R. K. (1989). Made in America: Regaining the
Productive Edge. Cambridge, MA: MIT Press.
Dixon, J. R., & Duffey, M. R. (1990). The neglect of engineering design. California Management
Review, 32(2), 9–23.
NRC. (1985). Engineering Education and Practice in the United States: Foundations of Our
Techno-Economic Future. Washington, DC: National Academies Press.
NRC. (1986). Toward a New Era in U.S. Manufacturing: The Need for a National Vision.
Washington, DC: National Academies Press.
NRC. (1991). Improving Engineering Design: Designing for Competitive Advantage. Washington,
DC: National Academy Press.
Acknowledgments
I would like to start by acknowledging three inspirational naval engineers that led
the most successful naval engineering endeavors of the twentieth century. Their
legacy has directly affected me and my perspective on engineering.
• Admiral Hyman G. Rickover [1900–1986], Father of the Nuclear Navy, led the
group of engineers, scientists, and technicians that developed and maintained the
United States Navy’s nuclear propulsion program from its inception in 1946 until
his retirement in 1983. Acknowledged as “the most famous and controversial
admiral of his era,” (Oliver 2014, p. 1) Admiral Rickover’s personal leadership,
attention to detail, conservative design philosophy, and program of intense
supervision ensured that the Nuclear Navy has remained accident-free to this day.
I was privileged to serve in Admiral Rickover’s program for over 23 years, in
positions as an enlisted machinist’s mate, a submarine warfare officer, and a
submarine engineering duty officer. Admiral Rickover’s legacy was present in
all we did. The magnificent underwater vessels that continue to protect this
country are a tribute to both his engineering brilliance and his ability to persevere
in the face of great odds.
• Vice Admiral Levering Smith [1910–1993] served as the first technical director
for the Polaris submarine launched ballistic missile program, “the most con-
vincing and effective of the nation's strategic deterrent weapon systems”
(Hawkins 1994, p. 216). He served in this capacity from 1956 until his retirement
in 1974. During this time, Vice Admiral Smith led the team that conceived and
developed Polaris, transitioned the force from Polaris to Poseidon, and led the
conceptual development of the current Trident ballistic missile system. The
legacy of the Navy’s strategic systems program “may have set an unattainable
standard for any equally important national endeavor” (Hawkins 1994, p. 215).
Once again, I was privileged to serve on a Polaris-Poseidon capable submarine
for a period of five years. Vice Admiral Smith’s technical acumen was present
throughout the weapons department and the field activities providing support for
our operations.
• Rear Admiral Wayne Meyer [1926–2009], Father of Aegis, guided the devel-
opment and fielding of the Navy’s Aegis weapons system, the Shield of the Fleet,
from 1970 until 1983. Rear Admiral Meyer changed the way Navy surface
combatants were designed, built, and delivered to the Navy. The first Aegis
warship, “TICONDEROGA and her combat system were the product of a single
Navy Project Office (PMS 400), led by RADM Wayne E. Meyer and assigned
the total responsibility for designing, building, deploying, and maintaining the
AEGIS fleet. The creation of this single office with total responsibility repre-
sented a sea change in Navy policy and organization for acquisition of surface
combatants” (Threston 2009b, p. 109). As a member of the submarine force, I
had no personal involvement with this program. However, my father was a senior
manager at the Radio Corporation of America (RCA) which served as the prime
contractor for Aegis. My father introduced me to Rear Admiral Meyer when I
was a junior in high school. I had many occasions to hear firsthand stories about
how this amazing Naval engineer was changing the way surface warships were
being acquired by the Navy. The admiral’s “build a little, test a little, learn a lot”
approach was adopted throughout the program and has served me well in my
own career. The utilization of system budgets was revolutionary. “In addition to
obvious budgets (such as weight, space, power, and cooling), other budgets were
established for system errors, reliability, maintainability and availability, system
reaction time, maintenance man-hours, and a large number of others” (Threston
2009a, p. 96). The purposeful use and positive effects of systems budgets, which
address nonfunctional requirements, have stayed with me and have heavily
influenced the notions and structure for this book.
In addition, I have had the opportunity to serve with and work for a number of
engineers and technicians who have been tasked with operating and maintaining the
systems of Admirals Rickover, Smith, and Meyer. They took the time to teach me
the intricacies involved in operating and maintaining complex systems. Special
thanks to Art Colling, Stan Handley, Mel Sollenberger, John Almon, Butch Meier,
Joe Yurso, Steve Krahn, George Yount, John Bowen, Vince Albano, and Jim Dunn
for supporting me in the quest to understand the engineering of systems.
To the many students I have been privileged to teach at the University of
Maryland University College, the College of William and Mary, Virginia Wesleyan
College, and Old Dominion University: Your quest for knowledge has challenged
me to constantly renew and improve my own understanding as part of the learning
process.
To my parents, for providing the love, resources, and basic skills required to
thrive as an engineer. To my children, for many hours of challenges, joy, and
amazement. Finally, to my wife, for her constant support, companionship, and love
throughout the process of completing this book and our journey through life
together.
Kevin MacG. Adams
Contents

2 Design Methodologies
  2.1 Introduction to Design Methodologies
  2.2 Introduction to the Discipline of Engineering Design
    2.2.1 Features that Support Design Methodologies
    2.2.2 Thought in a Design Methodology
    2.2.3 Synthesis of Thought and Features that Support All Engineering Design Methodologies
  2.3 Methodological Terms and Relationships
    2.3.1 Paradigm
    2.3.2 Philosophy
    2.3.3 Methodology
    2.3.4 Method or Technique
    2.3.5 Relationship Between Scientific Terms
  2.4 Hierarchical Structure for Engineering Design
    2.4.1 Paradigm for Engineering as a Field of Science
    2.4.2 Philosophy for Engineering
    2.4.3 Methodology for Engineering Design

Part VI Conclusion
13 Conclusion
  13.1 Position and Importance of Engineering Design
  13.2 Education in Engineering Design
  13.3 Position of This Text Within Engineering Design
  13.4 Summary
  References
Index
Part I
Systems Design and Non-functional
Requirements
Chapter 1
Introduction to the Design
of Engineering Systems
The term engineering systems used in the title of this chapter may be puzzling. This
term is used purposefully in order to account for an emerging discipline in engineering
that addresses “systems that fulfill important functions in society” (de Weck et al.
2011, p. xi). The term describes “both these systems and new ways of analyzing and
designing them” (de Weck et al. 2011, p. xi). The new discipline, engineering sys-
tems, addresses technology and technical systems by harmonizing them with the
organizational, managerial, policy, political, and human factors that surround the
problem, allowing stakeholders’ needs to be met without harming the larger
society. Some may recognize these challenges as being closely related to
terminology such as socio-technical systems (STS), engineering-in-the-large, or
macro-engineering. Engineering systems is an approach that is in agreement with the
author’s own holistic worldview for systems endeavors, which invokes a systemic
view when dealing with systems, messes, and problems (Hester and Adams 2014).
The chapter will begin by reviewing design and how engineers in the various
engineering disciplines are responsible for developing the designs for complex
man-made systems. It will conclude by addressing systematic design, the breadth
and depth of disciplines associated with design activities, and the use of life cycle
models and supporting processes in the systems design process.
The chapter has a specific learning goal and associated competencies. The
learning goal of this chapter is to be able to discriminate engineering design from
the less formal design-in-the-small invoked by inventors, hobbyists, entrepreneurs,
and ordinary human beings who tend to limit their scope to single, focused pur-
poses, goals, and objectives. This chapter’s goal is supported by the following
objectives:
• Describe the role of engineering vis-à-vis science.
• Describe the disciplines that affect engineering design.
• Identify the elements that make design a systematic endeavor.
• Describe the 5 major stages in the system life cycle.
• Recognize the technical processes in the system design stage.
The ability to achieve these objectives may be fulfilled by reviewing the materials
in the chapter topics which follow.
Engineering is “the science by which the properties of matter and the sources of
power in nature are made useful to humans in structures, machines, and products”
(Parker 1994, p. ix). Theodore von Kármán [1881–1963], the great aeronautical
engineering pioneer and co-founder of Caltech’s Jet Propulsion Laboratory, is
credited with the following statement.
The scientist seeks to understand what is. The engineer seeks to create what never was
(Petroski 2010, p. 20).
Von Kármán simply stated what many committees, boards, and regulators have
struggled to capture in defining engineering. The word engineering is derived from
the Latin ingenium, which means innate or natural quality. Engineering has many
definitions. One of the more comprehensive and thoughtful has been assembled by
historians of engineering (Kirby et al. 1990).
The art of the practical application of scientific and empirical knowledge to the design and
production or accomplishment of various sorts of constructive projects, machines, and
materials of use or value to man (p. 2).
Fig. 1.2 The central activity of engineering design, integrating engineering with production, technology, engineering science, science, social relations, and finance [adapted from Fig. 1.1 in (Dixon 1966, p. 8)]
Dixon (1966) argues for a perspective of the central activity of engineering design,
shown in Fig. 1.2, that integrates it with disciplines such as art, science, politics, and production.
Penny (1970) discusses the principles of engineering design, using Dixon’s
(1966) perspective as a starting point for a discussion about the engineering
design process.
The process that we label design has evolved as human beings have constructed
larger systems of increased complexity. A very simple diagram may be used to
capture the essential process of engineering design. In Fig. 1.3 engineering design is
presented in an ICOM diagram. ICOM is an acronym for Input, Control, Output,
and Mechanism (NIST 1993), and describes the function of each arrow acting on
the process performed on the system or component in Fig. 1.3. Each of the
elements in Fig. 1.3 is described, in terms of one another and the processes used to
approach their solution, in Table 1.2.
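A minimal sketch of the ICOM convention in Python may help: it represents one design activity as a data structure with its four arrow types. The activity name and arrow labels below are hypothetical illustrations, not items taken from Fig. 1.3 or Table 1.2.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IcomActivity:
    """A process box described by its four ICOM arrow types (IDEF0 convention)."""
    name: str
    inputs: List[str] = field(default_factory=list)      # things transformed by the activity
    controls: List[str] = field(default_factory=list)    # constraints on how it is performed
    outputs: List[str] = field(default_factory=list)     # results produced by the activity
    mechanisms: List[str] = field(default_factory=list)  # resources that perform the activity

# Hypothetical design activity (labels are illustrative only)
design = IcomActivity(
    name="Design the system or component",
    inputs=["stakeholder needs", "requirements"],
    controls=["life cycle standards (e.g., ISO/IEC 15288)", "cost and schedule constraints"],
    outputs=["system design description", "specifications"],
    mechanisms=["design team", "modeling and analysis tools"],
)
print(f"{design.name}: controls={design.controls}, outputs={design.outputs}")
```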
In order to successfully accomplish the tasks in Table 1.2 the engineer is
required to acquire and maintain specialized knowledge, skills, and abilities
(KSAs). The requisite KSAs are gained through (1) formal education in under-
graduate-level engineering programs1 and (2) during training under the supervision
1 Engineering programs in the United States are accredited by the Accreditation Board for Engineering and Technology (ABET).
Fig. 1.3 Engineering design as an ICOM diagram: inputs, outputs, controls, and resources (mechanisms) acting on the system or component
2 All 50 states have licensing programs where engineers are designated as either engineers-in-training or engineering interns during the period after they have passed their fundamentals of engineering examination and are serving an apprenticeship. Licensed engineers are designated as Professional Engineers and use the title P.E.
The processes that develop, operate, and eventually retire modern systems are best
described as a life cycle. The life cycle is a model of the real-world events and
processes that surround the system. Life cycle models have distinct phases or stages
that mark important points in the life of the system. For systems endeavors life
cycle models are described using standards developed by stakeholders from
industry, government, and academia that are published by the Institute of Electrical
and Electronics Engineers (IEEE) and the International Organization for
Standardization (ISO) and the International Electrotechnical Commission (IEC).
The standard we use for man-made systems life cycles is IEEE and ISO/IEC
Standard 15288: Systems and software engineering—System life cycle processes
(IEEE & ISO/IEC 2008). Important concepts from IEEE Standard 15288 include:
• Every system has a life cycle. A life cycle can be described using an abstract
functional model that represents the conceptualization of a need for the system,
its realization, utilization, evolution and disposal.
• The stages represent the major life cycle periods associated with a system and
they relate to the state of the system description or the system itself. The stages
describe the major progress and achievement milestones of the system through
its life cycle. They give rise to the primary decision gates of the life cycle (IEEE
& ISO/IEC 2008, p. 10).
A typical systems life cycle model would consist of the stages and associated goals
shown in Table 1.4.
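Because Table 1.4 is not reproduced in this excerpt, the short sketch below assumes stage names and goals in common ISO/IEC 15288 usage; it simply encodes the idea that every stage has a goal and ends at a decision gate, and the book's own stage set may be grouped or named differently.

```python
from enum import Enum

class LifeCycleStage(Enum):
    """Stage names and goals assumed from common ISO/IEC 15288 usage (illustrative)."""
    CONCEPT = "identify stakeholder needs, explore concepts, propose viable solutions"
    DEVELOPMENT = "refine requirements, describe the solution, build and verify the system"
    PRODUCTION = "produce the system and perform inspection and test"
    UTILIZATION = "operate the system to satisfy user needs"
    SUPPORT = "provide sustained system capability"
    RETIREMENT = "store, archive, or dispose of the system"

def decision_gate(stage: LifeCycleStage, exit_criteria_met: bool) -> str:
    """Each stage ends at a decision gate: continue to the next stage or rework."""
    verdict = "proceed" if exit_criteria_met else "rework / hold"
    return f"{stage.name}: goal is to {stage.value} -> gate decision: {verdict}"

for stage in LifeCycleStage:
    print(decision_gate(stage, exit_criteria_met=True))
```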
The design stage of the system life cycle uses a number of technical processes
described in IEEE Standard 15288 (2008) to accomplish the goals of design.
Table 1.5 lists the technical processes, and their purposes, that are invoked to
accomplish the design stage.
The detailed outcomes, and associated activities and tasks for each technical
process are described in IEEE Std 15288 (2008).
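For orientation, the list below names the technical processes defined in the 2008 edition of the standard, drawn from general knowledge of IEEE/ISO/IEC 15288:2008 rather than from the book's Table 1.5, which selects and describes the subset invoked during the design stage and is not reproduced in this excerpt.

```python
# Technical processes of IEEE/ISO/IEC 15288:2008 (from general knowledge of the standard)
TECHNICAL_PROCESSES = (
    "Stakeholder Requirements Definition",
    "Requirements Analysis",
    "Architectural Design",
    "Implementation",
    "Integration",
    "Verification",
    "Transition",
    "Validation",
    "Operation",
    "Maintenance",
    "Disposal",
)

for number, process in enumerate(TECHNICAL_PROCESSES, start=1):
    print(f"{number:2d}. {process}")
```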
1.5 Summary
The information presented in this chapter introduced engineers and how they
transform human needs, through formal methods of design, into complex man-
made systems. The systematic nature of design and the breadth and depth of dis-
ciplines associated with design activities were also reviewed. The systematic nature
of engineering and the use of life cycle models and supporting processes were
highlighted as essential elements of engineering design.
The next chapter will discuss a number of engineering methodologies that may
be used to invoke repeatable processes for the engineering design of man-made
systems.
References
Ackoff, R. L. (1974). The systems revolution. Long Range Planning, 7(6), 2–20.
Baddour, R. F., Holley, M. J., Koppen, O. C., Mann, R. W., Powell, S. C., Reintjes, J. F., et al.
(1961). Report on engineering design. Journal of Engineering Education, 51(8), 645–661.
Bayazit, N. (2004). Investigating design: A review of forty years of design research. Design Issues,
20(1), 16–29.
Chang, H.-L., & Lin, J.-C. (2011). Factors that Impact the Performance of e-Health Service
Delivery System. In: Proceedings of the 2011 International Joint Conference on Service
Sciences (IJCSS) (pp. 237–241). Los Alamitos, CA: IEEE Computer Society.
Cross, H. (1952). Engineers and Ivory Towers. New York: McGraw-Hill.
de Weck, O. L., Roos, D., & Magee, C. L. (2011). Engineering systems: Meeting human needs in
a complex technological world. Cambridge, MA: MIT Press.
Dixon, J. R. (1966). Design engineering: Inventiveness, analysis, and decision making. New York:
McGraw Hill.
Ferguson, E. S. (1994). Engineering and the Mind’s Eye. Cambridge, MA: MIT Press.
Hester, P. T., & Adams, K. M. (2014). Systemic thinking—Fundamentals for understanding
problems and messes. New York: Springer.
IEEE, & ISO/IEC. (2008). IEEE and ISO/IEC Standard 15288: Systems and software engineering
—System life cycle processes. New York and Geneva: Institute of Electrical and Electronics
Engineers and the International Organization for Standardization and the International
Electrotechnical Commission.
IEEE, & ISO/IEC. (2010). IEEE and ISO/IEC Standard 24765: Systems and Software Engineering
—Vocabulary. New York and Geneva: Institute of Electrical and Electronics Engineers and the
International Organization for Standardization and the International Electrotechnical
Commission.
Ishii, K., Adler, R., & Barkan, P. (1988). Application of design compatibility analysis to
simultaneous engineering. Artificial Intelligence for Engineering Design, Analysis and
Manufacturing, 2(1), 53–65.
Kirby, R. S., Withington, S., Darling, A. B., & Kilgour, F. G. (1990). Engineering in history.
Mineola, NY: Dover Publications.
NIST. (1993). Integration Definition for Function Modeling (IDEF0) (FIPS Publication 183).
Gaithersburg, MD: National Institute of Standards and Technology.
NRC. (1991). Improving engineering design: Designing for competitive advantage. Washington,
DC: National Academy Press.
Pahl, G., Beitz, W., Feldhusen, J., & Grote, K.-H. (2011). Engineering design: A systematic
approach (K. Wallace & L. T. M. Blessing, trans) (3rd ed.). Darmstadt: Springer.
Parker, S. (Ed.). (1994). McGraw-Hill Dictionary of Engineering. New York: McGraw-Hill.
Penny, R. K. (1970). Principles of engineering design. Postgraduate Medical Journal, 46(536),
344–349.
Petroski, H. (2010). The essential engineer: Why science alone will not solve our global problems.
New York: Alfred A. Knopf.
Simon, H. A. (1996). The sciences of the artificial (3rd ed.). Cambridge, MA: MIT Press.
Chapter 2
Design Methodologies
This chapter will introduce a number of engineering methodologies that may be used
to invoke repeatable processes for the purposeful design of engineering systems. The
term engineering systems may be used either as (1) a noun—“systems that fulfill
important functions in society” (de Weck et al. 2011, p. xi) or (2) a verb—“new ways
of analyzing and designing them” (de Weck et al. 2011, p. xi). In the verb form
engineering systems addresses technology and technical systems by harmonizing
them with the organizational, managerial, policy, political, and human factors that
surround the problem, allowing stakeholders’ needs to be met without harming the
larger society. The analysis and design efforts for engineering systems
require formal methodologies in order to implement repeatable processes that both
invoke proven engineering processes and are subject to efforts to improve those
processes.
The chapter will begin by discussing the discipline of engineering design and its
sub-disciplines of design theory and design methodology. The features and modes
of thought that support engineering design endeavors are reviewed.
The next section defines the terminology required to understand how a meth-
odology is positioned in the scientific paradigm. The section that follows provides a
formal hierarchical relationship between the terms.
This is followed by a section that presents seven historically significant engi-
neering design methodologies. The basic tenets of each methodology, including
the major phases, stages, and steps associated with each model are reviewed.
References for further study of each methodology are provided.
The chapter concludes by presenting a formal methodology for accomplishing
engineering technical processes, the Axiomatic Design Methodology. The
Axiomatic Design Methodology provides the framework through which system
functional and non-functional requirements are satisfied by design parameters and
process variables in the system design.
The chapter has a specific learning goal and associated competencies. The
learning goal of this chapter is to be able to describe engineering design, its
position in the scientific paradigm, and a number of specific methodologies for
conducting engineering design endeavors. This chapter’s goal is supported by the
following objectives:
• Describe engineering design as a discipline.
• Differentiate between design theory and design methodology.
• Describe the desirable features of engineering design.
• Describe the double-diamond model of design.
• Construct a hierarchical structure that includes a paradigm, philosophy, meth-
odology, method, and technique.
• Differentiate between the seven historical design methodologies.
• Describe the major features of the Axiomatic Design Methodology.
The ability to achieve these objectives may be fulfilled by reviewing the materials
in the chapter topics which follow.
The study of engineering design is a discipline within the broader field of engi-
neering. The scholarly journals in the discipline that address transdisciplinary
engineering design topics are presented in Table 2.1.
Design theory (or design science) and design methodology represent two aca-
demic subjects within the discipline of engineering design that each have their own
unique viewpoints and research agendas. The two subject areas are simply defined
as follows:
• Design theory—is descriptive, as it indicates what design is (Evbuomwan et al.
1996, p. 302).
• Design methodology—is prescriptive, as it indicates how to do design
(Evbuomwan et al. 1996, p. 302).
Self conscious design contains many well-known activities such as decision making,
optimization, modeling, knowledge production, prototyping, ideation, or evaluation.
However, it cannot be reduced to any one of them or all of these activities (e.g., decisions
are made in design, but design is more than decision making). Thus, design theory is not
about modeling everything that one can find in a design practice, its goal is to precisely
address issues that are beyond the scope of the classical models that accompany its con-
stituent activities (decision making, prescriptive models, hypothetic-deductive model and
others). The questions this goal raises are of course: What, then, are the core phenomena of
Design? Is Design driven by novelty, continuous improvement, creativity, or imagination?
(Le Masson et al. 2013, pp. 97–98).
There are a number of features (or properties) that should be possessed by each
and every design methodology. The features are prominent elements characteristic
of each and every successful engineering design endeavor. While most design
methodologies do not formally address these features, they are unwritten elements
that both the methodology and team members must invoke during the design
endeavor. The features are foundational to every engineering design methodology
and ensure that the methodology effectively and efficiently executes the eight
technical processes of the design stage. Ten features that support design method-
ologies are presented in Table 2.3 (Evbuomwan et al. 1996).
All of the features in Table 2.3 represent unique facets (i.e., one side of some-
thing many-sided) that the design methodology must contain in order to effectively
and efficiently execute the technical processes of the design stage during the sys-
tems life cycle. The first letters of each of the features may be combined to create
the acronym ERICOIDITI. The features are depicted in Fig. 2.1.
The next section will address the types of thought invoked during the execution
of a design methodology.
To be successful, the design team must invoke both the desirable features and the
two modes of thinking as a matter of routine during the execution of the technical
processes within the design methodology adopted for the design endeavor. The
ability to apply thinking modes required for the technical processes and to include
each of the desirable features provides a solid framework for the design effort.
Figure 2.3 is a depiction of the synthesis of thought and desirable features that
provide support for all engineering design methodologies.
The section that follows will review the terminology and relationships associated
with a methodology.
2.3.1 Paradigm
2.3.2 Philosophy
2.3.3 Methodology
2.3.4 Method or Technique
Both method and technique are terms that require definition in order to both dif-
ferentiate them and provide for a common language for engineering design.
• Method: A systematic procedure, technique, or mode of inquiry employed by or
proper to a particular discipline or art (Mish 2009, p. 781).
• Technique: A body of technical methods (as in a craft or in scientific research)
(Mish 2009, p. 1283).
The section that follows will provide a hierarchical relationship between each of
the terms utilized in the description of an engineering methodology.
Fig. 2.4 Hierarchy of scientific terms, from the most abstract to the most specific:
• Paradigm—the worldview for a scientific community; at the highest level of abstraction; the network of beliefs and values; contains the systems laws, principles, theorems, and axioms used by the scientific community to address the world.
• Philosophy—the body of theoretical knowledge that underpins the worldview; the professional and education structure.
• Methodology—one of the systemic approaches that is used to guide scientific endeavor; a blend of more than one systemic methodology, becoming increasingly specific until it becomes a unique methodology.
• Method—focused; systematic; discipline related; narrow.
• Technique—step-by-step procedures; precise actions; standard results.
Figure 2.4 provides the structure within which an engineering design methodology
exists. Documents that support a methodology in engineering design will be dis-
cussed in the sections that follow.
The top level, the paradigm, that surrounds all engineering efforts is science, with
its scientific method and scientific community. The “sciences are organized bodies of
knowledge” (Nagel 1961, p. 3) which, at the highest level, include six major fields:
(1) natural sciences; (2) engineering and technology; (3) medical and health sci-
ences; (4) agricultural sciences; (5) social sciences; and (6) humanities (OECD
2007). Each science is guided by “the desire for explanation which are at once
systematic and controllable by factual evidence that generates science” (Nagel
1961, p. 4). “Science is not a rigid body of facts. It is a dynamic process of
discovery. It is alive as life itself” (Angier 2007, p. 19).
The second-level, philosophy, serves to focus all engineering efforts and contains a
guide to the theoretical body of knowledge that underpins the worldview for all
engineers. There is an overarching body of knowledge that encompasses general
engineering knowledge (NSPE 2013) and individual bodies of knowledge for each
engineering discipline. For instance, the Guide to the Systems Engineering Body of
Knowledge or SEBoK (BKCASE-Editorial-Board 2014) and the Guide to the
Software Engineering Body of Knowledge or SWEBOK (Bourque and Fairley
2014). The body of knowledge acts as a guide to the specific knowledge areas
required to effectively practice engineering in the discipline governed by the body
of knowledge. Each body of knowledge endeavors to:
• To promote a consistent worldwide view of the engineering discipline,
• To specify the scope of, and clarify the relationship of the engineering discipline
with other scientific fields and engineering disciplines,
• To characterize the contents of the engineering discipline,
• To provide a topical access to the body of knowledge in the extant literature, and
• To provide a foundation for curriculum development and for individual certi-
fication and licensing material in the discipline.
The third level, the methodology, serves to focus all engineering design efforts (a
discipline of engineering) in achieving the technical processes required to design
man-made systems. The definition of a design methodology was provided in
Chap. 1.
A systematic approach to creating a design consisting of the ordered application of a
specific collection of tools, techniques, and guidelines (IEEE and ISO/IEC 2010, p. 102).
The section that follows will present a number of formal methodologies that may be
utilized during the design of engineering systems.
2.5 Engineering Design Methodologies

Fig. 2.5 Asimow’s seven phases of design. Design Phases: Phase I—Feasibility analysis; Phase II—Preliminary design; Phase III—Detailed design. Production and Consumption Cycle Phases: Phase IV—Construction planning; Phase V—Distribution planning; Phase VI—Consumption planning; Phase VII—Retirement planning.
This step-wise execution of design phases and the associated processes should be
familiar to every engineer as it serves as the foundation for teaching the sequential
path of activities involved in delivering products and systems. The details of each of
the seven phases are presented as individual chapters in Asimow’s text Introduction
to Design (1962).
Nigel Cross, an emeritus professor of design studies at The Open University in the
United Kingdom and current editor-in-chief of the scholarly journal Design Studies,
published the first version of the eight-stage model of design shown in Fig. 2.6 in
1984. This model is unique in that it permits the user to visualize how the larger
design problem may be broken into sub-problems and sub-solutions which are then
synthesized into the total solution.
The three stages on the left hand side of the model and the one stage in the
bottom middle establish objectives, functions, requirements, and characteristics of
the problem. The three stages on the right hand side of the model and the one in the
upper middle generate, evaluate, and provide improvements to alternatives and
identify additional opportunities that may be relevant to the problem’s design
solution. The right hand side responds to and provides feedback to the left hand
side. The details of this model are presented in Cross’ text Engineering Design
Methods (Cross 2008), which is now in its 4th edition.
Fig. 2.6 Eight stages of the design process, decomposing the overall problem into sub-problems and sub-solutions that are synthesized into the overall solution [adapted from (Cross 2008, p. 57)]
Fig. 2.7 French’s model of the design process (cf. French 1998): Phase I—Analysis of problem, producing a statement of the problem; Phase II—Conceptual design, producing selected schemes; Phase III—Embodiment of schemes; Phase IV—Detailing, producing working drawings; with a feedback loop between the phases.
Vladimir Hubka [1924–2006] was the head of design education at the Swiss
Federal Technical University (ETH) in Zürich from 1970 until 1990. His area of
expertise was design science and the theory of technical systems. Hubka proposed a
four-phase, six step model that addressed elements of design from concept through
creation of assembly drawings. A simplified depiction of Hubka’s design process
model that represents the states of the technical processes during the design phases
is depicted in Fig. 2.8.
This is a unique model in that specific design documents are identified as
deliverable objects upon completion of the various steps. The details of this
innovative approach to design are described in the text Design Science:
Introduction to the Needs, Scope and Organization of Engineering Design
Knowledge (Hubka and Eder 1995). Hubka’s long-time colleague W. Ernst Eder
provides an excellent compilation on Hubka’s legacy, which includes his views on
both engineering design science and the theory of technical systems, providing a
glimpse into a number of fascinating views on these subjects (Eder 2011).
Stuart Pugh [1929–1993] was the Babcock Professor of Engineering Design and the
head of the Design Division at the University of Strathclyde in Glasgow, Scotland
from 1985 until his untimely death in 1993. During his time at Strathclyde he
completed his seminal work Total Design: Integrated Methods for Successful
Product Engineering (1991). Pugh was an advocate of participatory design using
transdisciplinary teams. Until Pugh fostered this idea in both his teaching and
consulting work, most engineers focused on technical elements of the design and
rarely participated in either the development process or the commercial aspects
associated with the product. Pugh’s use of transdisciplinary teams ensured that both
technical and non-technological factors were included in what he labeled Total
Design.
Pugh’s Total Design Activity Model has four parts. The first part is the design
core of six phases: (1) user need; (2) product specification; (3) conceptual design;
(4) detail design; (5) manufacture; and (6) sales. The six phases of the design
core are depicted in Fig. 2.9. The iterations between the phases account for changes
to the objectives for the product during the period of design.
The second part of the Total Design Activity Model is the product design
specification (PDS). The PDS envelops the design core and contains the major
specification elements required to design, manufacture and sell the product. The
major elements of a PDS are presented in Table 2.7.
When the PDS is placed on the design core the Total Design Activity Model is
represented by two of its four parts as depicted in Fig. 2.10. The lines that radiate
from and surround the core phases are the elements of the PDS relevant to the
particular product’s design.

Fig. 2.8 Depiction of Hubka’s design model, with deliverables such as the development contract, design specification, and release [adapted from Figs. 7–13 in (Hubka and Eder 1995)]

Fig. 2.9 Pugh’s design core, with iterations along the main design flow: market, specification, concept design, detail design, manufacture, sell

Fig. 2.10 The design core surrounded by the PDS: elements of the PDS (from Table 2.7) envelop each core phase, so that, for example, the conceptual design equates to the concept design specification and the detailed design equates to the detail design specification, keeping the design in balance with the specification [adapted from Fig. 1.5 in (Pugh 1991, p. 7)]
The third part of the Total Design Activity Model comprises the inputs from the
discipline-independent methods required to execute the design core. These include
the desirable features of engineering design, the two modes of thought depicted in
Fig. 2.3, and many others. The fourth and final part of the Total Design Activity
Model comprises the inputs from technology- and discipline-dependent sources.
Many discipline specific methods are required to execute the elements of the PDS
that surround the design core. Examples include stress and strain analysis, welding,
electromagnetic surveys, heat transfer studies, etc.

Fig. 2.11 The completed Total Design Activity Model, built around the design core: market, specification, concept design, detail design, manufacture, sell

The completed Total Design
Activity Model is depicted in Fig. 2.11.
The Total Design Activity Model depicted in Fig. 2.11 includes examples of both
technology and discipline specific methods and discipline independent methods to
be illustrative of the inputs to the model. Real-world implementation of this model
would involve many more methods. The details of this model are described in
Pugh’s seminal text Total Design: Integrated Methods for
Successful Product Engineering (1991).
In Germany, the Association of German Engineers (VDI) has a formal guideline for
the Systematic Approach to the Design of Technical Systems and Products (VDI
1987). The guideline proposes a generalized approach to the design of man-made
systems that is applicable across a wide range of engineering disciplines.
This approach is depicted in Fig. 2.12.
Fig. 2.12 General approach to design [adapted from Fig. 3.3 in (VDI 1987, p. 6)]. Starting from the task, the stages and their results are:
Phase I—Stage 1: Clarify and define the task (result: specification); Stage 2: Determine functions and their structure (result: function structure).
Phase II—Stage 3: Search for solution principles and their combinations (result: principle solution); Stage 4: Divide into realizable modules (result: module structure).
Phase III—Stage 5: Develop layouts of key modules (result: preliminary layouts); Stage 6: Complete overall layout (result: definitive layout).
Phase IV—Stage 7: Provide production and operating instructions (result: product documents), leading to further realization.
The model has four phases made up of seven stages and a specific result is
associated with each stage. The approach in Fig. 2.12 should be “… regarded as a
guideline to which detailed working procedures can be assigned. Special emphasis
is placed on the iterative nature of the approach and the sequence of steps must not
be considered rigid” (Pahl et al. 2011, p. 18).
The team of Gerhard Pahl, Wolfgang Beitz, Jörg Feldhusen, and Karl-Heinrich
Grote have authored one of the most popular textbooks on design, Engineering
Design: A Systematic Approach (2011). In this text they propose a model for
design that has four main phases: (1) planning and task clarification; (2) conceptual
design; (3) embodiment design; and (4) detailed design. The simple nature of the
model does not warrant a figure, but each of the phases are described in the
following:
• Task Clarification—the purpose of this phase “is to collect information about the
requirements that have to be fulfilled by the product, and also about the existing
constraints and their importance” (Pahl et al. 2011, p. 131).
• Conceptual Design—the purpose of this phase is to determine the principle
solution. “This is achieved by abstracting the essential problems, establishing
function structures, searching for suitable working principles and then com-
bining those principles into a working structure” (Pahl et al. 2011, p. 131).
• Embodiment Design—the purpose of this phase is to “determine the construc-
tion structure (overall layout) of a technical system in line with technical and
economic criteria. Embodiment design results in the specification of a layout”
(Pahl et al. 2011, p. 132).
• Detailed Design—the purpose of this phase is to finalize “the arrangement,
forms, dimensions, and surface properties of all the individual parts are finally
laid down, the materials specified, production possibilities assessed, costs esti-
mated, and all the drawings and other production documents produced. The
detailed design phase results in the specification of information in the form of
production documentation” (Pahl et al. 2011, p. 132).
The details of each of the phases in this model are presented in their text
Engineering Design: A Systematic Approach (Pahl et al. 2011) which is now in its
3rd edition.
The section that follows will discuss an eighth design methodology—Axiomatic
Design.
2.6 The Axiomatic Design Methodology

The strength of the Axiomatic Design Methodology (ADM) is not only its ability to
accomplish the technical processes for design presented in Table 2.2, but its ability
to invoke specific axioms of systems theory in order to develop quantitative measures
for evaluating systems design endeavors. None of the seven design methodologies
reviewed in Sect. 2.5 demonstrated that ability.
The sections that follow will introduce the basic elements of the ADM. The
central focus will be on its ability to selection the best design alternative based upon
a quantitative evaluation of the design’s ability to satisfy its functional and non-
functional requirements. The elimination of qualitative evaluation parameters and
cost is a major shift from every other design methodology. As such, the Axiomatic
Design Methodology is positioned as the premier methodology for systems design
endeavors.
A key concept in axiomatic design is that of domains. In the design world there are
four domains: (1) the customer1 domain, which is characterized by the customer
attributes that the customer and associated stakeholders would like to see in their
system; (2) the functional domain, where the customer’s detailed specifications are
expressed as both functional requirements (FR) and non-functional requirements
(NFR), or what Suh describes as constraints (C); (3) the physical domain, where the
design parameters emerge; and (4) the process domain, where process variables
enable the design. Figure 2.13 is a depiction of the four domains of the design world.
1 This chapter will adhere to Dr. Suh's term customer. However, note that this term is too narrowly focused. Therefore, the reader is encouraged to substitute the term stakeholder, which includes the larger super-set of those associated with any systems design.
Fig. 2.13 The four domains of the design world: customer attributes are mapped to functional requirements (FR) and constraints (C) or non-functional requirements (NFR), which are in turn mapped to design parameters and on to process variables
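As an informal illustration of the four domains, the sketch below traces one hypothetical need from customer attribute to process variable; none of the item names are taken from the book.

```python
# Illustrative mapping across the four domains of axiomatic design.
# All item names are hypothetical examples.
customer_attributes = ["CA1: keep stored food fresh"]

functional_requirements = {"FR1": "maintain interior temperature at 4 degrees C"}
constraints_nfr = {"C1/NFR1": "energy consumption below a stated limit"}

design_parameters = {"DP1": "thermostat-controlled compressor"}
process_variables = {"PV1": "compressor assembly process settings"}

domains = {
    "customer domain": customer_attributes,
    "functional domain": {**functional_requirements, **constraints_nfr},
    "physical domain": design_parameters,
    "process domain": process_variables,
}

# The design progresses by mapping between adjacent domains (CA -> FR/NFR -> DP -> PV).
for name, content in domains.items():
    print(f"{name}: {content}")
```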
A second key concept of axiomatic design is the independence axiom. The inde-
pendence axiom states:
Maintain the independence of the functional requirements (Suh 2005b, p. 23).
The relationship between the functional requirements2 and the design parameters is
given by the design equation

{FR} = [A]{DP}    (2.1)

where [A] is the design matrix that relates the FRs to the DPs:

[A] = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix}    (2.2)

2 Only functional requirements will be addressed in this description, but the concept also applies to the non-functional requirements that act as constraints on the system design.

Using the design matrix in Eq. 2.2, Eq. 2.1 may be expanded so that each functional
requirement is expressed as the sum

FR_i = \sum_{j=1}^{n} A_{ij} \, DP_j    (2.4)
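To make the design equation concrete, the sketch below evaluates Eq. 2.4 for an illustrative design matrix and checks for coupling between FRs and DPs. The matrix values are invented, and the uncoupled/decoupled/coupled labels follow Suh's commonly used terminology rather than wording from this excerpt.

```python
import numpy as np

# Illustrative 3 x 3 design matrix [A]; a nonzero A[i, j] means DP_j affects FR_i.
A = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 0.5],
])
DP = np.array([10.0, 3.0, 8.0])

# Eq. 2.4: FR_i = sum_j A_ij * DP_j
FR = A @ DP

def classify(design_matrix: np.ndarray) -> str:
    """Classify the design against the independence axiom, using Suh's
    commonly cited terms (uncoupled / decoupled / coupled)."""
    off_diagonal = design_matrix - np.diag(np.diag(design_matrix))
    if not off_diagonal.any():
        return "uncoupled: each FR is satisfied by exactly one DP"
    strictly_lower = np.tril(design_matrix, k=-1)
    strictly_upper = np.triu(design_matrix, k=1)
    if not strictly_lower.any() or not strictly_upper.any():
        return "decoupled: independence can be maintained by setting the DPs in sequence"
    return "coupled: the independence axiom is violated"

print("FR =", FR)
print("This design is", classify(A))
```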
The information axiom is one of the seven axioms of systems theory (Adams et al.
2014). The Information Axiom states:
Systems create, possess, transfer, and modify information. The information principles
provide understanding of how information affects systems (Adams et al. 2014, p. 119).
The information axiom’s principle invoked by Suh (1990, 2001) in his formulation
for Axiomatic Design is the principle of information redundancy. Information
redundancy is “the fraction of the structure of the message which is determined not
by the free choice of the sender, but rather by the accepted statistical rules gov-
erning the use of the symbols in question” (Shannon and Weaver 1998, p. 13). It is
the number of bits used to transmit a message minus the number of bits of actual
information in the message.
Shannon’s measure of information entropy is given in Eq. 2.5:

H = -\sum_{i=1}^{n} p_i \log_2 p_i    (2.5)

where
H is the information entropy, and
p_i is the probability of the information elements.

The reformulated equation for information content (I), as related to the probability
(p_i) of a design parameter (DP_i) satisfying a functional requirement (FR_i), is
presented in Eq. 2.6:

I_sys = -\sum_{i=1}^{n} \log_2 p_i    (2.6)
The information axiom, when used in this context, states that the system design
with the smallest Isys (i.e., the design with the least amount of information) is the
best design. This is perfectly logical, because such a design requires the least
amount of information to fulfill the design parameters.
The Axiomatic Design Methodology’s utilization of Shannon’s information
entropy is remarkable because a system’s design complexity, most often expressed
as a qualitative assessment, may be represented as a quantitative measure based on
the information entropy required to satisfy the design parameters.
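A short sketch of how Eq. 2.6 can be used to compare alternatives follows: the candidate designs and their per-requirement probabilities are hypothetical, and the selection rule is simply that the smallest I_sys wins.

```python
import math

def information_content(probabilities):
    """Eq. 2.6: I_sys = -sum_i log2(p_i), where p_i is the probability that
    design parameter DP_i satisfies functional requirement FR_i."""
    return -sum(math.log2(p) for p in probabilities)

# Hypothetical candidate designs and their per-requirement success probabilities
candidates = {
    "Design A": [0.95, 0.90, 0.99],
    "Design B": [0.99, 0.97, 0.99],
    "Design C": [0.80, 0.95, 0.90],
}

scores = {name: information_content(p) for name, p in candidates.items()}
for name, i_sys in sorted(scores.items(), key=lambda item: item[1]):
    print(f"{name}: I_sys = {i_sys:.3f} bits")

# The information axiom selects the design with the smallest information content.
best = min(scores, key=scores.get)
print("Selected design:", best)
```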
The design goals include not only the functional requirements (FRi), but constraints
(Ci) which place bounds on acceptable design solutions. Axiomatic design
addresses two types of constraints: (1) input constraints, which are specific to the
overall design goals and apply to all proposed designs; and (2) system constraints,
which are specific to a particular system design.
3 Information entropy is sometimes referred to as Shannon entropy. For more information on information theory the reader may review either Ash (1965), Information Theory, New York: Dover Publications, or Pierce (1980), An Introduction to Information Theory: Symbols, Signals and Noise (2nd, revised ed.), New York: Dover Publications.
Constraints affect the design process by generating a specific set of functional requirements,
guiding the selection of design solutions, and being referenced in design evaluation (Suh
2005a, p. 52).
2.7 Summary
In this chapter engineering design has been defined and positioned within the larger scientific paradigm and the engineering field. Desirable features of engineering design and the two modes of thought used in it were addressed. Terminology placing a methodology within a hierarchy of scientific approaches has been provided. Finally, seven historical methodologies and one preferred methodology for system design endeavors were presented.
The next chapter will review the definition for non-functional requirements and
the role they play in every engineering design of man-made systems. It will also
develop a notional taxonomy for identifying and addressing non-functional
requirements in a system design endeavor.
References
Adams, K. M., Hester, P. T., Bradley, J. M., Meyers, T. J., & Keating, C. B. (2014). Systems
theory: The foundation for understanding systems. Systems Engineering, 17(1), 112–123.
Adams, K. M., & Keating, C. B. (2011). Overview of the systems of systems engineering
methodology. International Journal of System of Systems Engineering, 2(2/3), 112–119.
Angier, N. (2007). The canon: A whirligig tour of the beautiful basics of science. New York:
Houghton Mifflin Company.
Ash, R. B. (1965). Information theory. New York: Dover Publications.
Asimow, M. (1962). Introduction to design. Englewood Cliffs: Prentice-Hall.
BKCASE-Editorial-Board. (2014). The guide to the systems engineering body of knowledge
(SEBoK), version 1.3. In R. D. Adcock (Ed.), Hoboken, NJ: The Trustees of the Stevens
Institute of Technology.
Bourque, P., & Fairley, R. E. (Eds.). (2014). Guide to the software engineering body of knowledge
(version 3.0). Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Carnap, R. (1934). On the character of philosophic problems. Philosophy of Science, 1(1), 5–19.
Checkland, P. B. (1999). Systems thinking. Systems Practice. Chichester: Wiley.
Cross, N. (2008). Engineering design methods (4th ed.). Hoboken, NJ: Wiley.
de Weck, O. L., Roos, D., & Magee, C. L. (2011). Engineering systems: Meeting human needs in
a complex technological world. Cambridge, MA: MIT Press.
Eder, W. E. (2011). Engineering design science and theory of technical systems: Legacy of
Vladimir Hubka. Journal of Engineering Design, 22(5), 361–385.
Evbuomwan, N. F. O., Sivaloganathan, S., & Jebb, A. (1996). A survey of design philosophies,
models, methods and systems. Proceedings of the Institution of Mechanical Engineers, Part B:
Journal of Engineering Manufacture, 210(4), 301–320.
French, M. J. (1998). Conceptual design for engineers (3rd ed.). London: Springer.
Honderich, T. (2005). The Oxford companion to philosophy (2nd ed.). New York: Oxford
University Press.
Hubka, V., & Eder, W. E. (1995). Design science: Introduction to the needs, scope and
organization of engineering design knowledge (2nd ed.). New York: Springer.
IEEE and ISO/IEC. (2008). IEEE and ISO/IEC Standard 15288: Systems and software
engineering—system life cycle processes. New York: Institute of Electrical and Electronics
Engineers and the International Organization for Standardization and the International
Electrotechnical Commission.
IEEE and ISO/IEC. (2010). IEEE and ISO/IEC Standard 24765: Systems and software
engineering—vocabulary. New York : Institute of Electrical and Electronics Engineers and
the International Organization for Standardization and the International Electrotechnical
Commission.
Kuhn, T. S. (1996). The structure of scientific revolutions. Chicago: University of Chicago Press.
Le Masson, P., Dorst, K., & Subrahmanian, E. (2013). Design theory: History, state of the art and
advancements. Research in Engineering Design, 24(2), 97–103.
Mingers, J. (2003). A classification of the philosophical assumptions of management science
methods. Journal of the Operational Research Society, 54(6), 559–570.
Mish, F. C. (Ed.). (2009). Merriam-Webster's collegiate dictionary (11th ed.). Springfield, MA:
Merriam-Webster, Incorporated.
Nagel, E. (1961). The structure of science: Problems in the logic of scientific explanation. New
York: Harcourt, Brace & World.
Norman, D. A. (2013). The design of everyday things (Revised and expanded ed.). New York:
Basic Books.
NSPE. (2013). Engineering body of knowledge. Washington, DC: National Society of Professional
Engineers.
OECD. (2007). Revised field of science and technology (FOS) classification in the Frascati manual.
Paris: Organization for Economic Cooperation and Development.
Pahl, G., Beitz, W., Feldhusen, J., & Grote, K.-H. (2011). Engineering design: A systematic
approach (K. Wallace & L. T. M. Blessing, Trans. 3rd ed.). Darmstadt: Springer.
Pierce, J. R. (1980). An introduction to information theory: Symbols, signals & noise (2nd Revised
ed.). New York: Dover Publications.
Proudfoot, M., & Lacey, A. R. (2010). The Routledge dictionary of philosophy (4th ed.).
Abingdon: Routledge.
Psillos, S. (2007). Philosophy of science A-Z. Edinburgh: Edinburgh University Press.
Pugh, S. (1991). Total design: Integrated methods for successful product engineering. New York:
Addison-Wesley.
Runes, D. D. (Ed.). (1983). The standard dictionary of philosophy. New York: Philosophical
Library.
Shannon, C. E., & Weaver, W. (1998). The mathematical theory of communication. Champaign,
IL: University of Illinois Press.
Suh, N. P. (1990). The principles of design. New York: Oxford University Press.
Suh, N. P. (2001). Axiomatic design: Advances and applications. New York: Oxford University
Press.
Suh, N. P. (2005a). Complexity in engineering. CIRP Annals Manufacturing Technology, 54(2),
46–63.
Suh, N. P. (2005b). Complexity: Theory and applications. New York: Oxford University Press.
VDI. (1987). Systematic approach to the design of technical systems and products (VDI Guideline
2221). Berlin: The Association of German Engineers (VDI).
Chapter 3
Introduction to Non-functional
Requirements
Abstract One of the most easily understood tasks during any systems design endeavor is to define the system's functional requirements. The functional requirements are a direct extension of the stakeholder's purpose for the system and the goals and objectives that satisfy it. Less easily understood are a system's non-functional requirements, or the constraints under which the entire system must operate. Identification of non-functional requirements should happen early in the conceptual design stage of the systems life cycle, for the same reason that functional requirements are defined up-front—that is, costs sky-rocket when new requirements are added late in a system's design sequence. Approaches for addressing non-functional requirements are rarely addressed in texts on systems design. In order to provide a logical and repeatable technique for addressing over 200 existing non-functional requirements, they must be reduced parsimoniously to a manageable number. Over 200 non-functional requirements are reduced using results reported in eight models from the extant literature. The 27 resultant non-functional requirements have been organized in a taxonomy that categorizes them within four distinct categories. Utilization of this taxonomy provides a framework for addressing non-functional requirements during the early system design stages.
This chapter will begin by reviewing the definition for non-functional requirements
and the roles they play in the engineering design of man-made systems. The chapter
will then address the wide range of non-functional requirements and introduce a
number of taxonomies used to describe non-functional requirements. It will con-
clude by presenting a notional taxonomy or framework for understanding non-
functional requirements and their role as part of any system design endeavor.
The chapter has a specific learning goal and associated objectives. The learning
goal of this chapter is to be able to describe nonfunctional requirements and a
taxonomy for addressing them during systems design endeavors. This chapter’s
goal is supported by the following objectives:
• Define a non-functional requirement.
• Discuss three aspects of non-functional requirements that complicate dealing
with them.
• Name 10 non-functional requirements.
• Discuss the historical development of frameworks for non-functional
requirements.
• Describe the elements of the Taxonomy of NFR for Systems.
• Describe the four-level structural map for measuring the attributes of non-
functional requirements.
All system design efforts start with a concept for the system. Concepts start as ideas and are moved forward in a series of actions during the concept design stage of the systems life cycle. As the system life cycle progresses, the system's concept is transposed into formal system-level requirements, where the stakeholder's needs are transformed into discrete requirements.
During the concept design stage two requirements-related technical processes,
depicted in Table 3.1, are invoked.
The requirements addressed in Table 3.1 are functional requirements.
There are additional definitions that provide further insight about this category of requirement. Table 3.2 provides definitions from two of the most popular systems engineering and systems design texts.
From these two definitions it is clear that functional requirements have the following essential characteristics (note that all of these are characterized by verbs):
1. Define what the system should do.
2. Are action oriented.
3. Describe tasks or activities.
4. Are associated with the transformation of inputs to outputs.
These are the requirements that the Axiomatic Design Methodology describes as
FRi which are mapped to Design Parameters DPi during the transformation from the
functional domain to the physical domain as part of the Axiomatic Design
Methodology described in Chap. 2.
In the Axiomatic Design Methodology the design goals include not only the
functional requirements (FRi), but constraints (Ci) which place bounds on accept-
able design solutions. Axiomatic design addresses two types of constraints: (1)
input constraints, which are specific to the overall design goals and apply to all
proposed designs; and (2) system constraints, which are specific to a particular
system design.
Constraints affect the design process by generating a specific set of functional requirements,
guiding the selection of design solutions, and being referenced in design evaluation. (Suh
2005, p. 52)
While the IEEE Guide for Developing System Requirements Specifications (IEEE
1998b) is silent on non-functional requirements, the IEEE Recommended Practice
for Software Requirements Specifications (IEEE 1998a) states that requirements
consist of “functionality, performance, design constraints, attributes, or external
interfaces” (p. 5). Descriptions of what each software requirement is supposed to
answer are presented in Table 3.3.
There are additional definitions that give further insight about non-functional requirements. Table 3.4 provides definitions from a variety of systems and software engineering and design texts.
There are three additional aspects to non-functional requirements that complicate
the situation.
1. Non-functional requirements can be ‘subjective’, since they can be viewed,
interpreted and evaluated differently by different people. Since NFRs are often
stated briefly and vaguely, this problem is compounded. (Chung et al. 2000,
p. 6)
2. Non-functional requirements can also be 'relative', since the interpretation and importance of NFRs may vary depending on the particular system being considered. Achievement of NFRs can also be relative, since we may be able to improve upon existing ways to achieve them. For these reasons a 'one solution fits all' approach may not be suitable. (Chung et al. 2000, pp. 6–7)
3. Non-functional requirements can also be 'interacting', since attempts to achieve one NFR can hurt or help the achievement of other NFRs (Chung et al. 2000).
From these definitions and conditions practitioners may easily conclude that non-
functional requirements have the following essential characteristics (note that all of
these are characterized by adjectives that define which, what kind of, or how many):
1. Define a property or quality that the system should have.
2. Can be subjective, relative, and interacting.
3. Describe how well the systems must operate.
4. Are associated with the entire system.
Practitioners responsible for systems design could benefit from a structured
approach to the identification, organization, analysis, and refinement of non-func-
tional requirements in support of design activities (Cleland-Huang et al. 2007;
Cysneiros and do Prado Leite 2004; Gregoriades and Sutcliffe 2005; Gross and Yu
2001; Sun and Park 2014).
It is precisely because non-functional requirements (NFR) describe important,
and very often critical requirements, that a formal, structured approach for their
identification, organization, analysis, and refinement is required as a distinct ele-
ment of systems design. Non-functional requirements include a broad range of
system needs that play a critical role in early development of the systems archi-
tecture (Nuseibeh 2001). Failure to formally identify and account for non-functional
requirements early in a system design may prove to be costly in later stages of the
systems life cycle. In fact, "failing to meet a non-functional requirement can mean that the whole system is unusable" (Sommerville 2007, p. 122).
The current state of affairs with respect to non-functional requirements has shown
that:
• There is not a single, agreed-upon, formal definition.
• There is not a complete list.
• There is not a single universal classification schema, framework, or taxonomy.
The three sections that follow will: (1) Present a list of non-functional requirements
with appropriate formal definitions; (2) Review the historical work and research
associated with the development of a universal classification schema, framework or
taxonomy for non-functional requirements; and (3) Recommend a notional model
for understanding the major non-functional requirements in systems design.
This section will identify and define the principal non-functional requirements associated with systems. It is important to recognize that non-functional requirements span the complete life cycle of a system, from conception to retirement and disposal, and that each non-functional requirement has its 20 minutes of fame and is accompanied by its own set of experts, zealots, and benefactors.
In their seminal work Engineering Systems: Meeting Human Needs in a
Complex Technological World, de Weck et al. (2011) of the Massachusetts Institute
of Technology, discuss the relationship between non-functional requirements and
what they term ilities.
In computer science ilities are discussed as nonfunctional requirements. (de Weck et al.
2011, p. 196)
Ilities are requirements of systems, such as flexibility or maintainability, often ending in the
suffix “ility”; properties of systems that are not necessarily part of the fundamental set of
functions or constraints and sometimes not in the requirements. (de Weck et al. 2011,
p. 187)
The quest to identify and organize non-functional requirements started in 1976 and
continues to this day. This section will review some of the major classification
models for non-functional requirements.
Barry Boehm and two of his colleagues at TRW conducted a study (Boehm et al. 1976) which produced 23 non-functional characteristics of software quality that
they arranged in a hierarchical tree. The lower-level branches in the tree contain
sub-characteristics of the higher-level characteristic. In the schema presented in
Fig. 3.1 the lower-level characteristics in the tree are necessary but not sufficient for
achieving the higher-level characteristics.
A number of models were developed at the United States Air Force’s Rome Air
Development Center between 1978 and 1985. Three of these models are presented
in the following sections.
[Fig. 3.1 (software quality characteristics tree) lists: device-independence, portability, self-containedness, accuracy, reliability, completeness, robustness/integrity, consistency, as-is utility, accountability, efficiency, device efficiency, accessibility, communicativeness, testability, self-descriptiveness, structuredness, maintainability, conciseness, understandability, legibility, modifiability, and augmentability.]
Fig. 3.1 Software quality characteristics tree [adapted from Fig. 1 in (Boehm et al. 1976, p. 595)]
[Fig. 3.2 pairs software quality factors with user questions: maintainability (Can I fix it?), flexibility (Can I change it?), testability (Can I test it?), portability (Will I be able to use it on another machine?), reusability (Will I be able to reuse some of the software?), and interoperability (Will I be able to interface it with another system?), grouped under product activities such as product operations.]
Fig. 3.2 Software quality factors [adapted from Fig. 2 in (Cavano and McCall 1978, p. 136)]
McCall and his colleague Mike Matsumoto, under the direction of James P. Cavano, continued the work in non-functional quality requirements and developed the Software Quality Measurement Manual (McCall and Matsumoto 1980). Their
Quality-Factor tree is depicted in Fig. 3.3.
[Fig. 3.3 relates the quality factors correctness, reliability, maintainability, and usability to criteria including traceability, consistency, completeness, error tolerance, accuracy, simplicity, conciseness, modularity, self-descriptiveness, operability, training, communicativeness, and data commonality.]
Fig. 3.3 USAF quality-factor tree [adapted from (McCall and Matsumoto 1980, p. 24)]
The final work on software quality was conducted between 1982 and 1984 and
resulted in the third volume of the Software Quality Evaluation Guidebook (Bowen
et al. 1985). The guidebook provides a comprehensive set of procedures and
techniques to enable data collection personnel to apply quality metrics to software
products and to evaluate the achieved quality levels. The associated model had 3
acquisition concerns, 13 quality factors, 29 criteria, 73 metrics, and over 300 metric
elements. Table 3.8 shows the relationship between the acquisition concern, quality
factors and criteria.
The FURPS Model was first introduced by Robert Grady and Deborah Caswell (Grady and Caswell 1987). The model's acronym is based on its five categories: (1) functionality; (2) usability; (3) reliability; (4) performance; and (5) supportability. The original FURPS Model "was extended to emphasize various specific attributes" (Grady 1992, p. 32) and was re-designated FURPS+.
Table 3.8 Software quality evaluation guidebook model (Bowen et al. 1985). Each system need factor (acquisition concern) is listed with its quality factors (user concerns) and the associated criteria.

Performance factor attributes—How well does it function?
  Efficiency—How well does it utilize a resource?: Effectiveness—communication; Effectiveness—processing; Effectiveness—storage
  Integrity—How secure is it?: System accessibility
  Reliability—What confidence can be placed in what it does?: Accuracy; Anomaly management; Simplicity
  Survivability—How well will it perform under adverse conditions?: Anomaly management; Autonomy; Distributedness; Modularity; Reconfigurability
  Usability—How easy is it to use?: Operability; Training
Design factor attributes—How valid is the design?
  Correctness—How well does it conform to the requirements?: Completeness; Consistency; Traceability
  Maintainability—How easy is it to repair?: Consistency; Modularity; Self-descriptiveness; Simplicity; Visibility
  Verifiability—How easy is it to verify its performance?: Modularity; Self-descriptiveness; Simplicity; Visibility
Adaptation factor attributes—How adaptable is it?
  Expandability—How easy is it to expand or upgrade its capability or performance?: Augmentability; Generality; Modularity; Self-descriptiveness; Simplicity; Virtuality
(continued)
The FURPS+ categories and attributes are depicted in Table 3.9 (Grady 1992, p. 32). The FURPS+ elements represent a number of the non-functional requirements presented in Tables 3.5, 3.6 and 3.7.
James K. Blundell, Mary Lou Hines, and Jerrold Stach of the University of
Missouri—Kansas City (Blundell et al. 1997) developed a highly detailed non-
functional quality measurement model that includes 39 quality measures that are
related to 18 characteristics, which are then each related to seven critical design
attributes. The seven critical design attributes are shown in Table 3.10.
The critical design characteristics are related to 18 characteristics desirable in a
software system. This relationship is shown in Table 3.11.
The final relationship is between the 18 desired characteristics and the 39 measures of quality, or ilities, which is shown in Table 3.12.
The most intriguing feature of this model is the relationship between the 39 non-functional quality measures and the seven design attributes (cohesion, complexity,
coupling, data structure, intra-modular complexity, inter-modular complexity, and
token selection). The least appealing feature of the model is that the 39 non-
functional quality measures are neither organized nor related, leaving the user to
face a huge array of relationships.
Table 3.10 Critical design attributes (Blundell et al. 1997, pp. 244–245)
Design attribute Attribute description
Cohesion (COH) The singularity of function of a single module
Complexity (COM) The complexity within modules
Coupling (COU) The simplicity of the connection between modules
Data structures (DAS) Data types based upon functional requirements
Intra-modular complexity (ITA) The complexity within modules
Inter-modular complexity (ITE) The complexity between modules
Token selection (TOK) The number of distinct lexical tokens in the program code
Table 3.11 Relationship between design attributes and desired characteristics (Blundell et al. 1997, p. 343)
#  Characteristic                        COH COM COU DAS ITA ITE TOK
1  Conciseness                           ✓ ✓ ✓ ✓ ✓ ✓
2  Ease of change                        ✓ ✓ ✓ ✓ ✓ ✓
3  Ease of checking conformance          ✓ ✓ ✓
4  Ease of coupling to other systems     ✓ ✓ ✓
5  Ease of introduction of new features  ✓ ✓ ✓ ✓
6  Ease of testing                       ✓ ✓ ✓ ✓ ✓ ✓ ✓
7  Ease of understanding                 ✓ ✓ ✓ ✓ ✓ ✓ ✓
8  Freedom from error                    ✓ ✓ ✓ ✓ ✓ ✓
9  Functional independence of modules    ✓ ✓ ✓ ✓
10 Precise computations                  ✓ ✓ ✓
11 Precise control                       ✓ ✓ ✓ ✓
12 Shortest loops                        ✓ ✓ ✓ ✓
13 Simplest arithmetic operators         ✓ ✓ ✓
14 Simplest data types                   ✓ ✓ ✓ ✓
15 Simplest logic                        ✓ ✓ ✓ ✓
16 Standard data types                   ✓ ✓
17 Ease of maintenance                   ✓ ✓ ✓ ✓ ✓ ✓ ✓
18 Functional specification compliance   ✓ ✓
[Figure: types of non-functional requirements, including performance, standards, legislative, space, privacy, reliability, safety, and portability requirements.]
ISO/IEC Standard 9126 (ISO/IEC 1991) and its replacement ISO/IEC Standard 25010 (ISO/IEC 2011)
include non-functional requirements, definitions, and how to measure them as part
of a systems endeavor. Table 3.13 lists the non-functional requirements addressed
in the latest international standard for systems quality, ISO/IEC Standard 25010:
Systems and software engineering—Systems and software Quality Requirements
and Evaluation (SQuaRE)—System and software quality models. The standard has
eight main characteristics, each with a set of supporting sub-characteristics.
and characteristics and lists the number of unique categories, factors, or criteria as
non-functional requirements to be considered for inclusion in the notional frame-
work. In this fashion the large body of non-functional requirements is reduced to 209.
By considering only the unique categories, factors, or criteria between the eight
models the total number of non-functional requirements treated in the extant lit-
erature is reduced from 209 to 96.
An analysis of the criteria in each of the seven historical models shows that not all
of the criteria are universally applied. Table 3.15 reveals the frequency of criteria in
the eight models.
The decision about which of the criteria in Table 3.15 to consider for inclusion in
the notional framework is aided by one final task, reviewing the established formal
definitions for each of the 96 non-functional requirements criteria.
Table 3.16 provides an alphabetical list and the formal definitions from IEEE Standard 24765, Systems and Software Engineering—Vocabulary, for 24 non-functional requirements criteria that achieved a frequency of 3 or higher. Table 3.16
also includes definitions for three other non-functional requirements that achieved a
frequency of 2 and three that received a score of 1 (indicated by an asterisk). All six
of these criteria were deemed worthy of inclusion in the final list. Note that com-
pleteness and human factors/engineering were not defined in IEEE Standard 24765
and will be eliminated from the final list of non-functional requirements attributes
that will be considered.
A review of the definitions reveals that operability is not unique and is actually
contained within the definition for availability. Based upon this, operability is
removed from the list of most frequent NFRs, leaving the list with 27 unique NFRs.
1
The on-line version of the IEEE standard was also used and is indicated by [SEVOCAB].
[Fig. 3.5: Taxonomy of NFR for Systems]
and (4) System Sustainment Concerns. Figure 3.5 shows the relationship between the four system concerns and the 27 non-functional requirements selected for consideration during a system's life cycle.
Utilization of the NFR Taxonomy for Systems requires a process for measuring the ability to achieve the non-functional requirement. By following the metrics concept articulated by Fenton and Pfleeger (1997), specific information about each attribute must be captured. A set of structural mappings that relates an individual NFR attribute from Fig. 3.5 to a specific metric and measurement entity is required. The framework for the structural mappings is based upon that described by Budgen (2003). A requisite four-level construct and example is presented in Table 3.17.
Each NFR attribute in Fig. 3.5 should have a structural map that clearly identifies
the measurement method or technique and the specific systems characteristic(s) that
will be used to measure the NFR.
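As one way to make the structural-map idea concrete, the minimal Python sketch below records such a mapping as a small data structure. The four level names used here (attribute, metric, measurement method, and measured characteristic) and the reliability example values are assumptions made for illustration; they are not the construct defined by Budgen (2003) or the entries of Table 3.17.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StructuralMap:
    """A minimal record relating an NFR attribute to how it will be measured.

    The level names below are illustrative assumptions, not the published construct.
    """
    attribute: str                 # NFR attribute from the taxonomy (e.g., reliability)
    metric: str                    # metric chosen for the attribute
    measurement_method: str        # how the metric will be obtained
    measured_characteristic: str   # system characteristic actually measured

# Hypothetical structural map for the reliability attribute.
reliability_map = StructuralMap(
    attribute="Reliability",
    metric="Mean time between failure (MTBF)",
    measurement_method="Analysis of recorded failure events over accumulated operating time",
    measured_characteristic="Operating hours and failure counts",
)
print(reliability_map)
```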
3.6 Summary
References
Blundell, J. K., Hines, M. L., & Stach, J. (1997). The measurement of software design quality.
Annals of Software Engineering, 4(1), 235–255.
Boehm, B. W., Brown, J. R., & Lipow, M. (1976). Quantitative evaluation of software quality. In
R. T. Yeh & C. V. Ramamoorthy (Eds.), Proceedings of the 2nd International Conference on
Software Engineering (pp. 592–605). Los Alamitos, CA: IEEE Computer Society Press.
Bowen, T. P., Wigle, G. B., & Tsai, J. T. (1985). Specification of software quality attributes:
Software quality evaluation guidebook (RADC-TR-85-37, Vol. III). Griffiss Air Force Base,
NY: Rome Air Development Center.
Budgen, D. (2003). Software design (2nd ed.). New York: Pearson Education.
Buede, D. M. (2000). The engineering design of systems: Models and methods. New York: Wiley.
Cavano, J. P., & McCall, J. A. (1978). A framework for the measurement of software quality.
SIGSOFT Software Engineering Notes, 3(5), 133–139.
Chung, L., Nixon, B. A., Yu, E. S., & Mylopoulos, J. (2000). Non-functional requirements in
software engineering. Boston: Kluwer Academic Publishers.
Cleland-Huang, J., Settimi, R., Zou, X., & Solc, P. (2007). Automated classification of non-
functional requirements. Requirements Engineering, 12(2), 103–120.
Cysneiros, L. M., & do Prado Leite, J. C. S. (2004). Nonfunctional requirements: From elicitation
to conceptual models. IEEE Transactions on Software Engineering, 30(5), 328–350.
Cysneiros, L. M., & Yu, E. (2004). Non-functional requirements elicitation. In J. do Prado Leite &
J. Doorn (Eds.), Perspectives on Software Requirements (Vol. 753, pp. 115–138). Norwell:
Kluwer Academic.
de Weck, O. L., Roos, D., & Magee, C. L. (2011). Engineering systems: Meeting human needs in
a complex technological world. Cambridge: MIT Press.
Ebert, C. (1998). Putting requirement management into praxis: dealing with nonfunctional
requirements. Information and Software Technology, 40(3), 175–185.
Fenton, N. E., & Pfleeger, S. L. (1997). Software metrics: A rigorous & practical approach (2nd
ed.). Boston: PWS Publications.
Grady, R. B. (1992). Practical software metrics for project management and process
improvement. Englewood Cliffs, NJ: Prentice-Hall.
Grady, R. B., & Caswell, D. (1987). Software metrics: Establishing a company-wide program.
Englewood Cliffs: Prentice-Hall.
Gregoriades, A., & Sutcliffe, A. (2005). Scenario-based assessment of nonfunctional requirements.
IEEE Transactions on Software Engineering, 31(5), 392–409.
Gross, D., & Yu, E. (2001). From non-functional requirements to design through patterns.
Requirements Engineering, 6(1), 18–36.
IEEE. (1998a). IEEE Standard 830—IEEE recommended practice for software requirements
specifications. New York: Institute of Electrical and Electronics Engineers.
IEEE. (1998b). IEEE Standard 1233: IEEE guide for developing system requirements
specifications. New York: Institute of Electrical and Electronics Engineers.
IEEE, & ISO/IEC. (2008). IEEE and ISO/IEC Standard 15288: Systems and software engineering
—system life cycle processes. New York and Geneva: Institute of Electrical and Electronics
Engineers and the International Organization for Standardization and the International
Electrotechnical Commission.
IEEE, & ISO/IEC. (2010). IEEE and ISO/IEC Standard 24765: Systems and software engineering
—vocabulary. New York and Geneva: Institute of Electrical and Electronics Engineers and the
International Organization for Standardization and the International Electrotechnical
Commission.
ISO/IEC. (1991). ISO/IEC Standard 9126: Software product evaluation—quality characteristics
and guidelines for their use. Geneva: International Organization for Standardization and the
International Electrotechnical Commission.
ISO/IEC. (2011). ISO/IEC Standard 25010: Systems and software engineering—Systems and
software quality requirements and evaluation (SQuaRE)—system and software quality models.
Geneva: International Organization for Standardization and the International Electrotechnical
Commission.
Kossiakoff, A., Sweet, W. N., Seymour, S. J., & Biemer, S. M. (2011). Systems engineering
principles and practice (2nd ed.). Hoboken: Wiley.
Mairiza, D., Zowghi, D., & Nurmuliani, N. (2010). An investigation into the notion of non-
functional requirements. In Proceedings of the 2010 ACM Symposium on Applied Computing
(pp. 311–317). New York: ACM.
McCall, J. A., & Matsumoto, M. T. (1980). Software quality measurement manual (RADC-TR-80-
109-Vol-2). Griffiss Air Force Base, NY: Rome Air Development Center.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity
for processing information. Psychological Review, 63(2), 81–97.
Nuseibeh, B. (2001). Weaving together requirements and architectures. Computer, 34(3),
115–119.
Pfleeger, S. L. (1998). Software engineering: Theory and practice. Upper Saddle River, NJ:
Prentice-Hall.
Robertson, S., & Robertson, J. (2005). Requirements-led project management. Boston: Pearson
Education.
Sommerville, I. (2007). Software engineering (8th ed.). Boston: Pearson Education.
Suh, N. P. (2005). Complexity in engineering. CIRP Annals—Manufacturing Technology, 54(2),
46–63.
Sun, L., & Park, J. (2014). A process-oriented conceptual framework on non-functional
requirements. In D. Zowghi & Z. Jin (Eds.), Requirements Engineering (Vol. 432, pp. 1–15)
Berlin: Springer.
Wiegers, K. E. (2003). Software requirements (2nd ed.). Redmond: Microsoft Press.
Part II
Sustainment Concerns
Chapter 4
Reliability and Maintainability
This chapter will address two major topics. The first topic is reliability and the
second is maintainability. The first topic will review reliability and the basic theory,
equations and concepts that underlie its utilization. It will then address how reli-
ability is applied in engineering design and as a technique for determining com-
ponent reliability. The section on reliability will conclude with a metric and
measurable characteristic for reliability.
The second major topic of this chapter will define maintainability and discuss
how it is used in engineering design. The terms used in the maintenance cycle are
defined and applied to specific maintainability equations. The maintenance and
support concept is introduced as an important element of the conceptual design
stage of the systems life cycle. The chapter concludes with a metric and measurable
characteristic for maintainability.
The chapter has a specific learning goal and associated objectives. The learning
goal of this chapter is to be able to identify how the attributes of reliability and
maintainability affect sustainment in systems endeavors. This chapter’s goal is
supported by the following objectives:
• Define reliability.
• Describe the reliability function and its associated probability distributions.
• Explain failure rate and the bathtub failure rate curve.
• Identify the component reliability models and their application in calculating
system reliability.
• Describe the reliability processes that take place in each of the systems design
phases.
• Describe how reliability is achieved in system design.
• Describe the 12 steps in a FMECA.
• Construct a structural map that relates reliability to a specific metric and mea-
surable characteristic.
• Define maintainability.
• Identify how the maintenance and support concept is included during conceptual
design.
• Describe the terminology used in the maintenance cycle.
• Construct a structural map that relates maintainability to a specific metric and
measurable characteristic.
• Explain the relationship between reliability and maintainability.
The ability to achieve these objectives may be fulfilled by reviewing the materials
in the chapter topics which include the following.
4.2 Reliability
There are other definitions, presented in Table 4.2, that may contribute to improved understanding.
Dissecting these definitions, the major elements are:
• Probability—Fraction or a percentage specifying the number of times that one
can expect an event to occur in a total number of trials.
• Satisfactory performance—Set of criteria to be met by a component or system.
• Time—A measure against which the degree of system or component perfor-
mance can be related (reliability span?).
• Specified operating conditions—Environment in which the system or compo-
nent functions.
Having a definition for and understanding the constituent elements of reliability is
fine. However, what does the application of reliability engineering add to a system or
component design? Simply stated, the objectives of reliability, in order of priority, are:
1. To apply engineering knowledge and specialist techniques to prevent or reduce
the likelihood or frequency of failures.
2. To identify and correct the causes of failures that do occur, despite the efforts to
prevent them.
3. To determine ways of coping with failures that do occur, if their causes have not
been corrected.
4. To apply methods for estimating the likely reliability of new designs, and for
analyzing reliability data (O’Connor and Kleyner 2012, p. 2).
Armed with a meaningful definition for reliability, and a basic set of objectives for
reliability engineering, the following sections will discuss how reliability is
approached during systems endeavors.
The reliability function is related similarly, with the chance of having a reliable
outcome being R(t) and the chance of a failed outcome being F(t) and is shown in
Eq. 4.2.
Reliability and Failure Equation

$$R(t) + F(t) = 1 \qquad (4.2)$$
Equation 4.2 may be re-written as Eq. 4.3 to show that reliability is a function of the
failure rate F(t):
General Reliability Equation

$$R(t) = 1 - F(t) \qquad (4.3)$$
The failure rate F(t) for a component may be represented as a probability density function (p.d.f.). By using the area under the curve for the p.d.f., Eq. 4.3 is re-written as Eq. 4.4 to show reliability using the p.d.f.
Reliability Equation with Probability Density Function

$$R(t) = 1 - \int_{0}^{t} (\mathrm{p.d.f.})\, dt \qquad (4.4)$$
Table 4.3 Primary probability density function types used in reliability calculations
Poisson: the likelihood of failure is very low in a large sample size
Normal (Gaussian distribution): the likelihood of failure is distributed equally on either side of the median failure rate
Lognormal (Galton distribution): the likelihood of failure is expressed as the multiplicative product of many independent random variables
Weibull: the probability of failure increases over time; used to model life data where wear-out due to aging is present
Exponential: the time to failure follows an exponential distribution, equivalent to a constant failure rate
The expressions for each of the probability density functions may be obtained from any number of statistical handbooks. For example, when an exponential p.d.f. is inserted into Eq. 4.4, the reliability equation becomes the expression in Eq. 4.5.
Reliability Equation with Mean Life and Time Interval

$$R(t) = e^{-t/\theta} \qquad (4.5)$$
where θ = mean life of the component and t = evaluation time period. To solve
Eq. 4.5 two new terms are introduced: (1) failure rate (λ) which will be defined as
λ = 1/θ; and (2) mean time between failure (MTBF), defined in Table 4.4.
The solution of Eq. 4.5, using the new terms, is shown in Eq. 4.6.
Reliability Equation with MTBF and Failure Rate

$$R(t) = e^{-t/\mathrm{MTBF}} = e^{-\lambda t} \qquad (4.6)$$
When the reliability function is used to calculate the failure rate for a large pop-
ulation of identical components over a period of time the following behavior is
observed.
The sample experiences a high failure rate at the beginning of the operations time due to
weak or substandard components, manufacturing imperfections, design errors, and instal-
lation defects. As the failed components are removed, the time between failures increases
which results in a reduction in the failure rate. The period of decreasing failure rate (DFR)
is referred to as the “infant mortality region”, the “shakedown” region, the “debugging”
region, or the “early failure” region. (Elsayed 2012, pp. 15–16)
The curve associated with this phenomenon is labeled the bathtub curve and is
depicted in Fig. 4.1.
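The three regions of the bathtub curve are often approximated with Weibull hazard functions, since the Weibull hazard h(t) = (beta/eta)(t/eta)^(beta-1) is decreasing for beta < 1 (infant mortality), constant for beta = 1 (useful life), and increasing for beta > 1 (wear-out). The Python sketch below evaluates this behaviour; the shape and scale parameters are invented for illustration.

```python
def weibull_hazard(t, beta, eta):
    """Weibull hazard (instantaneous failure) rate: h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Illustrative parameters only: scale eta = 1000 h in every region.
for label, beta in [("infant mortality (beta < 1)", 0.5),
                    ("useful life (beta = 1)", 1.0),
                    ("wear-out (beta > 1)", 3.0)]:
    rates = [weibull_hazard(t, beta, 1000.0) for t in (100.0, 500.0, 1000.0)]
    print(label, ["%.2e" % r for r in rates])
```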
As the design process moves through the conceptual and preliminary design stages
the next stage is detailed design. Detailed design is where the subsystems are
broken down into the required assemblies, subassemblies, components, and parts.
The specific relationships and configurations of the components directly affect the reliability of the system.
Three basic relationships between components may be chosen. Components may
be combined in series, in parallel, or in a combination of series and parallel rela-
tions. The sections that follow will address how reliability is calculated for each of
these relationships.
When components are arranged in a serial relationship, as depicted in Fig. 4.2, all
components must operate satisfactorily if the system is to function as designed.
The basic reliability for the components in Fig. 4.2 is shown in Eq. 4.7.
Reliability Equation for Three Components in Series
$$R_{sys} = R_a\, R_b\, R_c \qquad (4.7)$$
By inserting Eq. 4.6 into Eq. 4.7 the failure rate is introduced into the reliability
equation as depicted in Eq. 4.8.
Reliability Equation Using Failure Rate for Three Components in Series

$$R_{sys} = e^{-\lambda_a t}\, e^{-\lambda_b t}\, e^{-\lambda_c t} = e^{-(\lambda_a + \lambda_b + \lambda_c)t} \qquad (4.8)$$
The failure rate for a component is the inverse of the MTBF, as shown in Eq. 4.9.
Failure Rate and MTBF
$$\lambda_x = \frac{1}{\mathrm{MTBF}_x} \qquad (4.9)$$
By using Eq. 4.8 the overall reliability of the system over a period of 1000 h is:
$$R_{sys} = e^{-0.167}\, e^{-0.222}\, e^{-0.01}$$
$$R_{sys} = e^{-0.399}$$
$$R_{sys} = 0.601$$
This shows that the system, as configured, has a 60.1 % probability of surviving to
1000 h.
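A brief computational sketch of the series relationship follows. The MTBF values and the 1000-hour mission time are hypothetical and are not the figures used in the worked example above; the function simply combines Eqs. 4.8 and 4.9.

```python
import math

def series_reliability(mtbfs_hours, mission_time_hours):
    """R_sys for components in series with constant failure rates:
    lambda_x = 1 / MTBF_x (Eq. 4.9) and R_sys = exp(-(sum of lambdas) * t) (Eq. 4.8)."""
    total_failure_rate = sum(1.0 / mtbf for mtbf in mtbfs_hours)
    return math.exp(-total_failure_rate * mission_time_hours)

# Hypothetical MTBFs (hours) for three components in series, evaluated at t = 1000 h.
print(f"R_sys = {series_reliability([5000.0, 8000.0, 2000.0], 1000.0):.3f}")  # ~0.438
```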
When components are arranged in a parallel relationship, as depicted in Fig. 4.3, all
components must fail to cause a total system failure.
The basic reliability for the components A and B in Fig. 4.3 is shown in
Eq. 4.10.
Reliability Equation for Two Components in Parallel

$$R_{sys} = 1 - (1 - R_A)(1 - R_B) \qquad (4.10)$$
For a design with two identical power supplies, A and B, arranged in parallel, the corresponding system reliability is calculated from Eq. 4.10. If a third identical power supply is added in parallel to this system design, then the reliability of the system increases further.
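The parallel (redundant) case can be sketched the same way. The component reliability used below is hypothetical; the example simply shows how Eq. 4.10 generalizes so that adding a third identical unit in parallel raises the system reliability.

```python
def parallel_reliability(component_reliabilities):
    """R_sys for fully redundant components: the system fails only if every
    component fails, so R_sys = 1 - product(1 - R_x)."""
    unreliability = 1.0
    for r in component_reliabilities:
        unreliability *= (1.0 - r)
    return 1.0 - unreliability

r_supply = 0.90  # hypothetical reliability of one power supply over the mission
print(f"Two in parallel:   {parallel_reliability([r_supply] * 2):.4f}")  # 0.9900
print(f"Three in parallel: {parallel_reliability([r_supply] * 3):.4f}")  # 0.9990
```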
When components are arranged in relationships that combine both series and
parallel relations, as depicted in Fig. 4.4, a variety of failure combinations may
cause the system to fail.
The basic reliability for the components A, B, C and D in Fig. 4.4 is shown in
Eq. 4.12.
Reliability Equation for Components in a Combined Series-Parallel
Relationship
[Fig. 4.4: components A, B, C, and D arranged in a combined series-parallel relationship between the system input and output.]
… assesses failure modes, the effects, and the criticality of failure for design alternatives.
The hardware, software, and human elements of the design alternatives should be analyzed,
and historical or test data should be applied, to refine an estimate of the probability of
successful performance of each alternative. A failure modes and effects analysis (FMEA)
should be used to identify the strengths and weaknesses of the design solution. For critical
failures, the project conducts a criticality analysis to prioritize each alternative by its crit-
icality rating. The results of this analysis are used to direct further design efforts to
accommodate redundancy and to support graceful system degradation. (IEEE 2005, p. 52)
The section that follows will discuss the Failure Mode and Effect Analysis (FMEA)
mentioned in IEEE Standard 1220.
FMEA is also known as FMECA or Failure Modes, Effects, and Criticality Analysis
and is a popular and widely used reliability design technique. FMEA may be
applied to functional or physical entities and may start as early as the conceptual
design stage in the systems life cycle. The FMEA/FMECA goals are to:
• Improve inherent system reliability through the design process.
• Identify potential system weaknesses.
The process uses a bottom-up approach where the analysis considers failures at the
lowest level of the system’s hierarchy and moves upward to determine effects at the
higher levels in the hierarchy. The process traditionally uses 12 steps:
1. Define the system, outcomes, and technical performance measures
2. Define the system in functional terms
3. Do a top-down breakout of system level requirements
4. Identify failure modes, element by element
5. Determine causes of failure
6. Determine effects of failure
7. Identify failure/defect detection means
8. Rate failure mode severity
9. Rate failure mode frequency
10. Rate failure mode detection probability (based on item #7)
11. Analyze failure mode criticality where criticality is a function of severity (#8),
frequency (#9), and probability of detection (#10) as expressed in a Risk
Priority Number (RPN)
12. Initiate recommendations for improvement.
Bernstein (1985) provides an example of a FMECA for a mechanical system that
should be reviewed for understanding. Finally, there is an established international
standard for conducting a FMEA/FMECA which is contained in IEC Standard
60812 Analysis Techniques for System Reliability—Procedure for Failure Mode
and Effects Analysis (FMEA) (IEC 2006).
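As a small illustration of steps 8 through 11, the Python sketch below computes a Risk Priority Number for a few hypothetical failure modes. The 1-to-10 rating scales and all of the example values are assumptions of the kind commonly used on FMEA worksheets; they are not taken from IEC 60812.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic) -- assumed rating scale
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (almost certain detection) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk Priority Number: the product of severity, occurrence, and detection ratings."""
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes, ranked so the highest-criticality items are addressed first.
modes = [
    FailureMode("Seal leak", severity=7, occurrence=4, detection=6),
    FailureMode("Sensor drift", severity=5, occurrence=6, detection=3),
    FailureMode("Connector fatigue", severity=8, occurrence=2, detection=8),
]
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{mode.name:18s} RPN = {mode.rpn}")
```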
The next section will discuss how to measure reliability.
At the end of Chap. 3 the importance of being able to measure each non-functional
attribute was stressed. A structural mapping that relates reliability to a specific metric and measurement entity is required. The four-level construct for reliability is presented in Table 4.5.
The section that follows will discuss the non-functional attribute for
maintainability.
4.3 Maintainability
This section will review the basics of maintainability and how it is applied during
systems endeavors. Maintainability is closely related to reliability and is a central
element of sustainment.
There are other definitions, presented in Table 4.6, that may contribute to improved understanding.
Dissecting these definitions, the major elements are:
• Maintain—The ability to take the actions necessary to keep the system in a fully
operable condition.
• Maintenance—“the process of retaining a hardware system or component in, or
restoring it to, a state in which it can perform its required functions” (IEEE and
ISO/IEC 2010, p. 205).
[Figure: the corrective maintenance cycle, comprising detection, preparation, isolation, disassembly for access, removal of the faulty component (with ordering and receipt of a replacement component) or repair of the faulty component, reassembly, alignment and adjustment, and testing, collectively labeled maintenance downtime (MDT).]
Knowing that the Mean Active Maintenance Time (M) is the sum of the mean
corrective and mean preventive maintenance times, Eq. 4.13 may be expressed as
Eq. 4.14:
Expanded Maintenance Downtime Equation
$$\mathrm{MTBM} = \frac{1}{\dfrac{1}{\mathrm{MTBM}_u} + \dfrac{1}{\mathrm{MTBM}_s}} \qquad (4.15)$$
Both of these measures are primary measures for maintainability and will be
required to address the non-functional requirement attribute for availability in the
next chapter.
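A short sketch of Eq. 4.15 follows; it assumes, as is conventional, that MTBM_u and MTBM_s denote the mean times between unscheduled (corrective) and scheduled (preventive) maintenance, and the hour values are invented for illustration.

```python
def mean_time_between_maintenance(mtbm_unscheduled, mtbm_scheduled):
    """MTBM = 1 / (1/MTBM_u + 1/MTBM_s), combining unscheduled (corrective) and
    scheduled (preventive) maintenance frequencies (Eq. 4.15)."""
    return 1.0 / (1.0 / mtbm_unscheduled + 1.0 / mtbm_scheduled)

# Hypothetical values: corrective maintenance every 500 h, preventive every 250 h.
print(f"MTBM = {mean_time_between_maintenance(500.0, 250.0):.1f} h")  # ~166.7 h
```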
Much of the design team’s focus should be on the cost drivers and higher risk
elements that are anticipated to impact the system throughout its life cycle. Because
system operations and maintenance activities consume large portions of a systems
total life cycle costs, the maintenance support concept is particularly important. A
maintenance concept may be defined as:
The set of various maintenance interventions (corrective, preventive, condition based, etc.)
and the general structure in which these interventions are foreseen. (Waeyenbergh and
Pintelon 2002, p. 299)
Figure 4.6 depicts some of the major elements in a systems maintenance and
support concept.
[Fig. 4.6: major elements of a system's maintenance and support concept at three levels. Operational unit maintenance (operational environment): on-site corrective maintenance, planned preventive maintenance, supply support for critical items, low-level skills. Intermediate maintenance (field facility): corrective and preventive maintenance of subsystems, supply support for repair items, medium-level skills. Depot maintenance (depot maintenance provider, fixed facility): detailed maintenance of systems, overhaul of systems, supply support for overhaul items, detailed skills. A system of record is also shown.]
At the end of Chap. 3 the importance of being able to measure each non-functional attribute was stressed. To support this, a structural mapping that relates maintainability to a specific metric and measurement entity is required. The four-level construct for maintainability is presented in Table 4.9.
4.4 Summary
References
Bernstein, N. (1985). Reliability analysis techniques for mechanical systems. Quality and
Reliability Engineering International, 1(4), 235–248.
Blanchard, B. S., & Fabrycky, W. J. (2011). Systems engineering and analysis (5th ed.). Upper
Saddle River: Prentice-Hall.
Elsayed, E. A. (2012). Reliability engineering (2nd ed.). Hoboken: Wiley.
IEC. (2006). IEC Standard 60812: Analysis techniques for system reliability—procedure for
failure mode and effects analysis (FMEA). Geneva: International Electrotechnical Commission.
IEEE. (2005). IEEE Standard 1220: Systems engineering—application and management of the
systems engineering process. New York: Institute of Electrical and Electronics Engineers.
IEEE, & ISO/IEC. (2010). IEEE and ISO/IEC Standard 24765: Systems and Software Engineering
—Vocabulary. New York and Geneva: Institute of Electrical and Electronics Engineers and the
International Organization for Standardization and the International Electrotechnical
Commission.
Ireson, W. G., Coombs, C. F., & Moss, R. Y. (1996). Handbook of reliability engineering and
management (2nd ed.). New York: McGraw-Hill.
Kossiakoff, A., Sweet, W. N., Seymour, S. J., & Biemer, S. M. (2011). Systems engineering
principles and practice (2nd ed.). Hoboken: Wiley.
O’Connor, P. D. T., & Kleyner, A. (2012). Practical reliability engineering (5th ed.). West
Sussex: Wiley.
Waeyenbergh, G., & Pintelon, L. (2002). A framework for maintenance concept development.
International Journal of Production Economics, 77(3), 299–313.
Chapter 5
Availability, Operability, and Testability
This chapter will address two major topics. The first topic is availability and the second is testability. The first topic will review availability and the basic theory, equations and concepts that underlie its utilization. A section will address how availability is applied in engineering design and conclude with a metric and measurable characteristic for availability.
The second major topic of this chapter will define testability and discuss how it is used in engineering design. The relationship to availability is reviewed and applied to the availability equation. The section concludes with a metric and measurable characteristic for testability.
The chapter has a specific learning goal and associated objectives. The learning
goal of this chapter is to be able to identify how the attributes of availability and
testability affect sustainment in systems endeavors. This chapter’s goal is supported
by the following objectives:
• Define availability.
• Describe the terminology used to calculate availability.
• Describe the relationship between maintainability and availability.
• Construct a structural map that relates availability to a specific metric and
measurable characteristic.
• Define testability.
• Describe the relationship between testability and operational availability.
• Construct a structural map that relates testability to a specific metric and mea-
surable characteristic.
• Explain the relationship between testability and availability.
The ability to achieve these objectives may be fulfilled by reviewing the materials
in the sections that follow.
This section will review the basics of availability and how it is applied during systems endeavors. Availability is an important measure utilized in assessing a system's ability to provide the required functions and services to its stakeholders.
From these definitions it should be clear that the non-functional requirement for
operability is easily satisfied within the definition for availability. As a result, the
rest of this chapter will treat operability as part of the non-functional requirement
for availability.
Availability is usually simply stated as the ratio of the system uptime over the
sum of system uptime and system downtime. Availability as a general concept is
“the period of time for which an asset is capable of performing its specified
function, expressed as a percentage” (Campbell 1995, p. 174).
However, availability has a number of unique definitions, characterized by either
(1) the time interval being considered, or (2) the type of downtime (i.e., either
corrective repairs or scheduled maintenance). There are other definitions presented
in Table 5.1 that may contribute to improved understanding.
Table 5.1 Key elements that differentiate the definitions for availability
Inherent Availability (Ai): "Includes only the corrective maintenance of the system (the time to repair or replace the failed component), and excludes ready time, preventive maintenance downtime, logistics (supply) time, and waiting administrative time" (Elsayed 2012, p. 202)
Achieved Availability (Aa): "Includes corrective and preventive maintenance downtime. It is expressed as a function of the frequency of maintenance, and the mean maintenance time" (Elsayed 2012, p. 202)
Operational Availability (Ao): "The repair time includes many elements: the direct time of maintenance and repair and the indirect time which includes ready time, logistics time, and waiting or administrative downtime" (Elsayed 2012, p. 203)
$$A_o = \frac{\text{System uptime}}{\text{System total time (uptime + downtime)}} \qquad (5.1)$$
Equation 5.1 may be expanded by including the maintainability terms mean time
between failure (MTBF) and mean time to repair (MTTR) that were discussed in
Chap. 4 and by introducing a new term mean logistics delay time (MLDT). The
equation for operational availability is expanded and shown in Eq. 5.2.
Expanded Availability Equation

$$A_o = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR} + \mathrm{MLDT}} \qquad (5.2)$$
MLDT includes mean supply delay time (MSDT), mean outside assistance delay time (MOADT), and mean administrative delay time (MADT). The terms are defined as follows:
• MSDT includes delays during the acquisition of spare parts, test equipment, and
special tooling required to accomplish the maintenance.
• MOADT includes delays due to the arrival of specialized maintenance personnel
during travel to the systems’ operational location to perform maintenance.
The operational availability (Ao) equation may now be re-written to include the
additional terms from MLDT and is shown in Eq. 5.4.
Fully Expanded Availability Equation

$$A_o = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR} + \mathrm{MSDT} + \mathrm{MOADT} + \mathrm{MADT}} \qquad (5.4)$$
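To tie the terms together, the Python sketch below evaluates Eq. 5.4 for one set of values; every figure used is hypothetical and chosen only for illustration.

```python
def operational_availability(mtbf, mttr, msdt=0.0, moadt=0.0, madt=0.0):
    """A_o = MTBF / (MTBF + MTTR + MSDT + MOADT + MADT), per Eq. 5.4."""
    downtime_per_failure = mttr + msdt + moadt + madt
    return mtbf / (mtbf + downtime_per_failure)

# Hypothetical values, all in hours.
ao = operational_availability(mtbf=900.0, mttr=4.0, msdt=10.0, moadt=6.0, madt=2.0)
print(f"A_o = {ao:.4f}")  # ~0.9761
```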
At the end of Chap. 3 we stressed the importance of being able to measure each non-functional attribute. A structural mapping that relates availability to a specific metric and measurement entity is required. The four-level construct for availability is presented in Table 5.2.
The section that follows will discuss an additional sustainment concern by
addressing the non-functional attribute for testability.
5.3 Testability
In this section the basics of testability and how it is applied during systems
endeavors will be reviewed. Testability is an emerging measure that could be
utilized as a means for improving the ability to properly assess a system’s con-
formance with the functions and services required by its stakeholders.
Testability enters the availability calculation through the inherent availability relationship shown in Eq. 5.5.

$$A_i = \frac{\omega}{\omega + \tau + \rho} \qquad (5.5)$$

where
ω mean time to failure, or MTTF
τ mean time to find a failure, provided that test equipment, facilities, and manpower are available
ρ repairability, or mean time to repair (MTTR)
Re-written using the more familiar terms from the previous section, Eq. 5.5
becomes Eq. 5.6.
Inherent Availability Equation with MTTF and MTTR
$$A_i = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \tau + \mathrm{MTTR}} \qquad (5.6)$$
Because inherent availability considers only corrective maintenance, that is, faults
that are caused by failure, testability (τ) also has a role in the prediction of the
operational availability of a system. The operational availability Eq. 5.5 may be re-
written as shown in Eq. 5.7.
Expanded Availability Equation with Testability

$$A_o = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \tau + \mathrm{MTTR} + \mathrm{MSDT} + \mathrm{MOADT} + \mathrm{MADT}} \qquad (5.7)$$
Availability is dependent on how well the operator can assess the condition of the system,
how quickly he/she can detect and locate the cause of degraded or failed units, and how
efficiently he/she can rectify the malfunction (Kelley et al. 1990, p. 22).
At the end of Chap. 3 the importance of being able to measure each non-functional attribute was stressed. To support this, a structural mapping that relates testability to a specific metric and measurement entity is required. The four-level construct for testability is presented in Table 5.5.
5.4 Summary
In this chapter the non-functional requirements for availability and testability have
been reviewed. In each case a formal definition has been provided along with
additional explanatory definitions, terms, and equations. The ability to affect the non-functional requirement during the design process has also been addressed.
Finally, a formal metric and measurement characteristic have been proposed for
evaluating each non-functional requirement attribute.
The next Part of the text will shift the focus to concerns that are directly related
to the design itself. The first chapter in the Part on Design Concerns will address the
non-functional attributes for conciseness, modularity, simplicity, and traceability.
The second chapter in Part III on Design Concerns will address the non-functional
attributes for compatibility, consistency, interoperability, and safety.
References
Baudry, B., Le Traon, Y., & Sunye, G. (2002). Testability analysis of a UML class diagram.
Proceedings of the Eighth IEEE Symposium on Software Metrics (pp. 54–63). Los Alamitos,
CA: IEEE Computer Society.
Buschmann, F. (2011). Tests: The Architect’s best friend. IEEE Software, 28(3), 7–9.
Campbell, J. D. (1995). Uptime: Strategies for excellence in maintenance management. Portland,
OR: Productivity Press.
Elsayed, E. A. (2012). Reliability engineering (2nd ed.). Hoboken, NJ: Wiley.
IEEE. (2005). IEEE Standard 1220: Systems engineering—Application and management of the
systems engineering process. New York: Institute of Electrical and Electronics Engineers.
IEEE, and ISO/IEC. (2010). IEEE and ISO/IEC Standard 24765: Systems and software
engineering—Vocabulary. New York and Geneva: Institute of Electrical and Electronics
Engineers and the International Organization for Standardization and the International
Electrotechnical Commission.
Jiang, T., Klenke, R. H., Aylor, J. H., & Gang, H. (2000). System level testability analysis using
Petri nets. Proceedings of the IEEE International High-Level Design Validation and Test
Workshop (pp. 112–117). Los Alamitos, CA: IEEE Computer Society.
Kelley, B. A., D’Urso, E., Reyes, R., & Treffner, T. (1990). System testability analyses in the
Space Station Freedom program. Proceedings of the IEEE/AIAA/NASA 9th Digital Avionics
Systems Conference (pp. 21–26). Los Alamitos, CA: IEEE Computer Society.
Valstar, J. E. (1965). The contribution of testability to the cost-effectiveness of a weapon system.
IEEE Transactions on Aerospace, AS-3(1), 52–59.
Voas, J. M., & Miller, K. W. (1993). Semantic metrics for software testability. Journal of Systems
and Software, 20(3), 207–216.
Voas, J. M., & Miller, K. W. (1995). Software testability: The new verification. IEEE Software,
12(3), 17–28.
Part III
Design Concerns
Chapter 6
Conciseness, Modularity, Simplicity
and Traceability
Abstract The design of systems and components during the design stage of the
systems life cycle requires specific purposeful actions to ensure effective designs
and viable systems. Designers are faced with a number of design concerns that they
must embed into the design in every instance of thinking and documentation. Four
of these concerns are addressed by the non-functional requirements for conciseness,
modularity, simplicity, and traceability. Formal understanding of each of these non-
functional requirements requires definitions, terms, and equations, as well as the
ability to understand how to control their effect and measure their outcomes during
system design endeavors.
This chapter will address four major topics: (1) conciseness; (2) modularity; (3)
simplicity or complexity; and (4) traceability in design endeavors. The chapter
begins with a section that reviews conciseness and the basic terminology, equations
and concepts that underlie its utilization. A metric for measuring and evaluating
conciseness is proposed.
Section 6.3 discusses the concept of modularity and how it affects systems
designs. A number of specific modularity measures from the extant literature are
presented. The section completes with the selection of a measure for modularity and
a structural map relating the metric and the measurement attributes for modularity.
Section 6.4 in this chapter addresses simplicity by contrasting it with com-
plexity. Relevant measures for complexity from the related literature are reviewed
and three are presented for understanding. The section concludes with a metric and
measurable characteristic for complexity.
Section 6.5 presents traceability and how it impacts system design endeavors.
The need for traceability expressed in the IEEE Standard for the Application and
Management of the Systems Engineering Process (IEEE 2005) is used to develop a
metric for evaluating traceability in systems designs. The section completes by
relating the proposed measure for traceability as a metric and includes a structural
map for traceability.
The chapter has a specific learning goal and associated objectives. The learning
goal of this chapter is to be able to identify how the attributes of conciseness,
modularity, simplicity and traceability influence design in systems endeavors.
This chapter’s goal is supported by the following objectives:
• Define conciseness.
• Describe the terminology used to calculate conciseness.
• Construct a structural map that relates conciseness to a specific metric and
measurable characteristic.
• Define modularity.
• Describe the terminology used to represent modularity.
• Construct a structural map that relates modularity to a specific metric and
measurable characteristic.
• Describe the relationship between simplicity and complexity.
• Define complexity.
• Describe the terminology used to represent complexity.
• Construct a structural map that relates complexity to a specific metric and
measurable characteristic.
• Define traceability.
• Describe the terminology used to represent traceability.
• Construct a structural map that relates traceability to a specific metric and
measurable characteristic.
• Explain the significance of conciseness, modularity, simplicity and traceability
in systems design endeavors.
These objectives may be achieved by reviewing the materials in the sections that follow.
6.2 Conciseness
In this section the basics of conciseness and how it is applied during systems endeavors are reviewed. Conciseness is not a well-known non-functional requirement, and its attributes are neither obvious nor universally recognized.
In order to understand this attribute the review will start with a basic definition.
Simply stated, each functional requirement should be satisfied without affecting any
other functional requirement.
During the conceptualization process of engineering design, each functional requirement is transformed from the functional domain, where it states what is required, to the physical domain, where it is matched to a design parameter that defines how the function will be accomplished. An ideal, or concise, mapping has one design parameter for each unique functional requirement. A system that meets this requirement would be ideally concise. Mathematically, this relationship may be expressed as the conciseness ratio (CR), shown in Eq. 6.1.
The Conciseness Ratio
$$CR = \frac{\sum_{i=1}^{n} DP_i}{\sum_{j=1}^{n} FR_j} \quad (6.1)$$
where
i = the number of Design Parameters (DP)
j = the number of Functional Requirements (FR)
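As a minimal illustration, the conciseness ratio of Eq. 6.1 can be computed directly from the lists of design parameters and functional requirements; the requirement and parameter names below are hypothetical.

```python
def conciseness_ratio(design_parameters, functional_requirements):
    """Conciseness ratio (Eq. 6.1): total design parameters over total functional requirements."""
    return len(design_parameters) / len(functional_requirements)

# Hypothetical mapping: three functional requirements satisfied by four design parameters.
frs = ["FR1: regulate temperature", "FR2: circulate air", "FR3: report status"]
dps = ["DP1: thermostat", "DP2: compressor", "DP3: fan", "DP4: status display"]
print(f"CR = {conciseness_ratio(dps, frs):.2f}")  # CR = 1.33
```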
From the definition of conciseness and the associated conciseness ratio (CR) it
should be clear that designs with a higher conciseness ratio are more concise because
their design parameters and functional requirements are parsimonious. The next
section will address how conciseness is included in the design synthesis process.
Neither the term conciseness nor the word concise is directly addressed in IEEE
Standard 1220—Systems engineering—Application and management of the sys-
tems engineering process (IEEE 2005). However, requirements analysis (Sect. 6.1)
and functional analysis (Sect. 6.3) are serial inputs into the design synthesis process
(Sect. 6.5) where design solution alternatives are identified and evaluated.
• As an element of the synthesis task in Sect. 6.5.2 where
… alternatives and aggregates of alternatives are analyzed to determine which design
solution best satisfies allocated functional and performance requirements, interface
requirements, and design constraints and adds to the overall effectiveness of the system or
higher-level system (IEEE 2005, p. 51).
The conciseness ratio (CR) may be applied to design alternatives as a measure for
discrimination between competing alternatives.
At the end of Chap. 3 the importance of being able to measure each non-functional
attribute was stressed. A structural mapping that relates conciseness to a specific
metric and measurement entity is required. The four-level construct for concise-
ness is presented in Table 6.1.
The section that follows will address the non-functional requirement for mod-
ularity as a design concern.
6.3 Modularity
In this section the basics of modularity and how it is applied during systems
endeavors will be reviewed. Modularity is a well-established principle in many
forms of engineering (Budgen 2003); however, it is not universally applied as a
design discriminator. In order to understand this attribute the basic definition serves
as a logical starting point.
Table 6.3 provides definitions of cohesion and coupling in a format that permits the reader to easily contrast their differences.
There are a variety of specific types of both cohesion and coupling that are
factors to consider during the design of any system. However, systems designers are
interested in a higher level of abstraction, that of the system’s modularity, and will
use the notion of coupling in the development of metrics for modularity.
where:
M(u) the modularization function
u the number of New-To-Firm (NTF) components (i.e., modules)
N total number of components (i.e., modules)
s substitutability factor
δ degree of module coupling
The Design Structure Matrix or DSM (Browning 2001; Eppinger and Browning
2012; Steward 1981) is a widely used design approach that has, as its roots, both the
simplified N2 diagram (Becker et al. 2000) and the House of Quality (Hauser and
Clausing 1988). The DSM is a graphical method for representing the relationships
between the components (i.e., modules) of a system. The DSM is a square matrix
with identical labels for both the rows and columns. In a static, component-based DSM, the matrix represents the system architecture by depicting the relevant relationships between the system's constituent components or modules.
In order to illustrate a basic DSM, a simple refrigeration system is depicted in
Fig. 6.1.
Table 6.5 is a sample DSM for the simple refrigeration system depicted in
Fig. 6.1.
The type of interaction that occurs between the system components or modules
is characterized in Table 6.6.
The relationships between the components are coded based upon (1) the com-
ponent or module interaction type as described in Table 6.6, and (2) weighting the
interactions between components or modules relative to each other as presented in
Table 6.7.
The DSM structure for the system in Fig. 6.1 can be used to display any of the
interaction Types from Table 6.6. A completed DSM for the Energy interactions,
using the weighting schema from Table 6.7 is presented in Table 6.8.
Fig. 6.1 A simple refrigeration system: compressor, condenser, thermal expansion valve (TXV), evaporator, cooling fan, electrical power, cooling medium, and control circuitry, with low-pressure (LP) and high-pressure (HP) gas, vapor, and liquid flows between components
The study by Yu et al. (2007) proposes a modularity metric termed the Minimum
Description Length (MDL), which is based upon an evaluation of a design as rep-
resented in a DSM. The MDL evaluates the system’s modularity (represented by the
DSM) using the terms presented in Eq. 6.3.
Minimum Description Length
$$MDL = \frac{1}{3}\left(n_c \log n_n + \log n_n \sum_{i=1}^{n_c} cl_i\right) + \frac{1}{3}S_1 + \frac{1}{3}S_2 \quad (6.3)$$
where
nc the number of components (i.e., modules)
nn the number of rows or columns in the DSM
cl_i the size of module i
S1 the number of cells that lie within a module but are empty
S2 the number of non-empty (value 1) cells that lie between modules
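The sketch below computes the MDL terms for a small, hypothetical binary DSM and a proposed set of modules, following the reading of Eq. 6.3 given above; the use of the natural logarithm and the example matrix are assumptions, not values from Yu et al. (2007).

```python
import math

def mdl(dsm, modules):
    """MDL terms of Eq. 6.3 for a 0/1 DSM and a proposed modularization (list of index lists)."""
    nn = len(dsm)                    # rows (= columns) in the DSM
    nc = len(modules)                # number of modules
    cl = [len(m) for m in modules]   # size of each module
    member = {i: k for k, m in enumerate(modules) for i in m}

    # S1: empty cells inside a module; S2: non-empty cells between modules.
    s1 = sum(1 for i in range(nn) for j in range(nn)
             if i != j and member[i] == member[j] and dsm[i][j] == 0)
    s2 = sum(1 for i in range(nn) for j in range(nn)
             if member[i] != member[j] and dsm[i][j] == 1)

    return (nc * math.log(nn) + math.log(nn) * sum(cl) + s1 + s2) / 3.0

# Hypothetical 4-component DSM with two proposed modules {0, 1} and {2, 3}.
dsm = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 1, 1, 0]]
print(round(mdl(dsm, [[0, 1], [2, 3]]), 3))
```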
Table 6.7 Weighting schema for energy interaction between components (Browning 2001)
Required (+2): Energy transfer/exchange is necessary for functionality
Desired (+1): Energy transfer/exchange is beneficial, but not necessary for functionality
Indifferent (0): Energy transfer/exchange does not affect functionality
Undesired (−1): Energy transfer/exchange causes negative effects but does not prevent functionality
Detrimental (−2): Energy transfer/exchange must be prevented to achieve functionality
Table 6.8 DSM for energy actions in the system depicted in Fig. 6.1
A B C D E F G H
Compressor A A +2
Condenser B B +2
TXV C C +2
Evaporator D D
Cooling fan E E +2
Electrical power F +2 +2 +2 F +2
Cooling medium G +2 G
Control circuitry H +2 +2 H
Neither the term modularity nor the word coupling is directly addressed in IEEE Standard 1220—Systems engineering—Application and management of the systems engineering process (IEEE 2005). However, modularity is a feature in the purposeful design of systems because, from an engineering design perspective, it does many things:
• First, it makes the complexity of the system manageable by providing an
effective “division of cognitive labor.”
• Second, modularity organizes and enables parallel work.
• Finally, modularity in the ‘design’ of a complex system allows modules to be
changed and improved over time without undercutting the functionality of the
system as a whole (Baldwin and Clark 2006, p. 180).
However, modularity-in-design is not defined simply as a system with a defined
hierarchy of modules. “A complex engineering system is modular in design if (and
only if) the process of designing it can be split up and distributed across separate
modules, that are coordinated by design rules, not by ongoing consultations
amongst the designers” (Baldwin and Clark 2006, p. 181).
At the end of Chap. 3 the importance of being able to measure each non-functional
attribute was stressed as an essential element of the design process. A structural mapping that relates modularity to a specific metric and measurement entity is
required. The four-level construct for modularity is presented in Table 6.9.
The section that follows will address the non-functional requirement for sim-
plicity as a design concern.
6.4 Simplicity
In this section the basics of simplicity and how it is applied during systems endeavors will be
reviewed. “While simplicity cannot be assessed, one can at least seek measures for its
converse characteristic of complexity” (Budgen 2003, p. 75). In order to understand
simplicity and complexity it is best to once again start with their basic definitions.
Simplicity and its converse characteristic complexity are defined, from a systems
engineering perspective, in Table 6.10. The definitions are provided side-by-side to
permit the reader to easily contrast their differences.
The definition for complexity in Table 6.10 does not provide the level of detail required to understand what complexity is and how it appears in systems. It is
often useful to characterize complexity by the features that would be present in a
system that is characterized as complex. Typically, these include:
1. The system contains a collection of many interacting objects or agents.
2. These objects’ behavior is affected by memory or feedback.
3. The objects can adapt their strategies according to their history.
4. The system is typically open.
5. The system appears to be alive.
6. The system exhibits phenomena which are generally surprising, and may be
extreme.
7. The emergent phenomena typically arise in the absence of any sort of invisible
hand or central controller.
8. The system shows a complicated mix of ordered and disordered behavior
(Johnson 2007, pp. 13–15).
Armed with this improved understanding of complexity, how is it measured?
The next section will address methods for measuring complexity in systems.
One measure for system complexity that is particularly useful during system design
has been proposed by Huberman and Hogg (1986). The authors report that their
physical measure for system complexity is based on “its diversity, while ignoring its
detailed specification. It applies to discrete hierarchical structures made up of ele-
mentary parts and provides a precise, readily computable quantitative measure”
(p. 376). Their method relies upon the concept of hierarchy, and utilizes a hierarchy
tree to represent the structure of a system.
A powerful concept for understanding these systems is that of a hierarchy. This can cor-
respond to the structural layout of a system or, more generally, to clustering pieces by
strength of interaction. In particular, if the most strongly interacting components are
grouped together at the first level, then the most strongly interacting clusters are combined
at the next level and so on, one ends up with a tree reflecting the resulting hierarchy
(Huberman and Hogg 1986, p. 377).
The structure for a notional system is depicted in the hierarchy tree in Fig. 6.2.
The premise of hierarchy upon which this measure relies is that proposed by
Simon (1996).
To design such a complex structure, one powerful technique is to discover viable ways of
decomposing it into semi-independent components corresponding to its many functional
parts. The design of each component can then be carried out with some degree of inde-
pendence of the design of others, since each will affect the others largely through its
function and independently of the details of the mechanisms that accomplish the function
(p. 128).
$$C(T) = 1 - D(T) \quad (6.4)$$
$$D(T) = (2^k - 1)\prod_{j=1}^{k} D(T_j) \quad (6.5)$$
where
C complexity measure
D diversity measure
T system hierarchy tree whose diversity is being evaluated
j index over the non-isomorphic sub-trees, ranging from 1 to k
k the number of sub-trees of the tree being evaluated
The information axiom of the Axiomatic Design Methodology draws on Shannon's information entropy, shown in Eq. 6.6.
Information Entropy
$$H = -\sum_{i=1}^{n} p_i \log_B p_i \quad (6.6)$$
where
H the information entropy
B the base of the logarithm (base 2, due to the use of binary logic in information theory)
p_i the probability associated with each of the symbols
i the index over the n discrete messages
Expressed in terms of the probability of satisfying each design parameter (DP), the information content of a system design is shown in Eq. 6.7.
System Information Content
$$I_{sys} = -\sum_{i=1}^{n} \log_2\left[p(DP_i)\right] \quad (6.7)$$
The information axiom, when used in this context, states that the system design
with the smallest Isys (i.e., the design with the least amount of information) is the best
design. This is because this design requires the least amount of information to fulfill
the design parameters (DP). Once again, the Axiomatic Design Methodology’s
utilization of Shannon’s information entropy is remarkable because a system’s design
complexity, most often expressed as a qualitative assessment, may be represented as a
quantitative measure based on the information entropy required to satisfy the design
parameters.
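A minimal sketch of the information axiom follows: the information content of Eq. 6.7 is computed for two candidate designs, using hypothetical probabilities of satisfying each design parameter.

```python
import math

def information_content(dp_probabilities):
    """System information content per Eq. 6.7: I_sys = -sum(log2 p(DP_i))."""
    return -sum(math.log2(p) for p in dp_probabilities)

# Hypothetical probabilities of satisfying each design parameter.
design_a = [0.95, 0.90, 0.85]
design_b = [0.99, 0.80, 0.70]
print(round(information_content(design_a), 3))  # the design with the smaller I_sys is preferred
print(round(information_content(design_b), 3))
```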
None of the previous measures are easily employed as a truly generalizable measure
of a system’s complexity. However, there is a highly generalizable measure termed
variety that may be used as a measure of the complexity of a system. Variety is “the
total number of possible states of a system, or of an element of a system” (Beer
1981, p. 307). As such, it is an excellent measure of the complexity of a system.
Variety, as a measure of system complexity, computes the number of different
possible system states that may exist and is calculated using the relations in Eq. 6.8
(Flood and Carson 1993, p. 26).
Variety
$$V = Z^n \quad (6.8)$$
where
V variety, or the potential number of system states
Z the number of possible states of each system element
n the number of system elements
The variety of a simple system can quickly become enormous. Take, for example, a system that has 8 different subsystems, each with 8 possible channels capable of operating simultaneously, for a total of 64 system elements. In this system each element can have only 2 states: working or not working. The variety generated by this system shows that it may occupy 2^64, or 18,446,744,073,709,551,616, possible states!
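The worked example above can be reproduced with one line of arithmetic, as sketched below.

```python
# Variety per Eq. 6.8: V = Z**n for the example above.
Z = 2    # possible states of each element (working or not working)
n = 64   # 8 subsystems x 8 channels = 64 system elements
print(Z ** n)   # 18446744073709551616 possible system states
```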
Of the three methods for measuring complexity, variety seems to be the easiest
to compute based upon a minimum of required characteristics for its calculation.
As stated in each of the previous sections, being able to measure each non-functional attribute is essential in systems design endeavors. A structural mapping that relates complexity to a specific metric and measurement entity is
required. The four-level construct for complexity is presented in Table 6.12.
The section that follows will address the non-functional requirement for trace-
ability as a design concern.
6.5 Traceability
In this section the basics of traceability and how it is applied during systems
endeavors will be reviewed. In order to understand traceability its definition is
reviewed.
Fig. 6.3 Downward and upward traceability between system requirements, system design artifacts, and system elements and functions
Now that traceability has been formally defined, how it is instantiated as part of the
formal design process may be reviewed.
It is important to note that many important questions about the design of a system
can only be answered by understanding the relationships between the design layers
depicted in Fig. 6.3. “Documenting these relationships engenders greater reflection
and subjects your thinking to peer review” (Dick 2005, p. 14). The formal design
process is where the traceability relationships are developed.
Traceability is a major factor in ensuring a robust system design that satisfies the stakeholder's identified needs. Traceability is addressed in IEEE Standard 1220—Systems engineering—Application and management of the systems engineering process (IEEE 2005), which addresses traceability in nine specific areas.
1. As an element of the system definition stage described in Section 5.11.3.
System product functional and performance requirements should be allocated among the
subsystems so as to assure requirement traceability from the system products to their
respective subsystems, and from subsystems to their parent product (IEEE 2005, p. 22).
Now that traceability has been defined within the processes used to design systems,
how is it measured? This question is a tough one to answer because traceability is a
subjective, qualitative measure which differs from the objective, quantitative
measures developed for most of the non-functional requirements already addressed.
In order to understand how to approach a subjective, qualitative measure, a short
review of how to measure subjective, qualitative objects is required.
In order to evaluate traceability, questions must be answered that address both the presence (yes or no) and the quality of the effort (how well) to provide traceability in the nine areas where traceability is addressed in systems design. In this case each of the nine areas or objects must be related to a specific measurable attribute. Measures are important because they are the linkage between observable, real-world, empirical facts and the construct (i.e., traceability) that is created as an evaluation point. In this case a
measure is defined as “… an observed score gathered through self-report, interview,
observation, or some other means” (Edwards and Bagozzi 2000, p. 156).
The selection of a measurement scale is an important element in the development
of an adequate measure for traceability. A scale is defined as “… a theoretical
variable in a model, and scaling or measurement is the attachment to empirical
events of values of the variable in a model” (Cliff 1993, p. 89). Because the traceability constructs have no natural origin or empirically defined distance, an ordinal scale is selected as an appropriate scale for measuring the traceability attributes.
The numbers attached to the ordinal scale only provide a shorthand notation for
designating the relative positions of the measures on the scale. The use of a well-
known scale type, the Likert scale, is proposed for use in evaluating traceability.
Because Likert-type ordinal scales have been shown to have increased reliability (as
measured by Cronbach’s (1951) Coefficient alpha) up to the use of 5 points and “…
a definite leveling off in the increase in reliability after 5 scale points,” (Lissitz and
Green 1975, p. 13) the scales used for traceability in the next section have been
purposefully designed for increased reliability by using 5 points.
Before moving on to describing the measure for traceability two important points
must be made with respect to scale development. Scales are characterized as either a
proposed scale or a scale. “A proposed scale is one that some investigator(s) put
forward as having the requisite properties, and if it is indeed shown to have them,
then it is recognized as a scale” (Cliff 1993, p. 65). In this chapter the use of the
word scale is referring to proposed scales. This may seem to be an insignificant
point, but until the scale has been accepted and successfully utilized it remains
proposed. The second and final point is that the use of an ordinal scale limits the
measurement effort to but a few statistics such as “… rank order coefficient of
correlation, r, Kendall’s W, and rank order analysis of variance, medians, and
percentiles” (Kerlinger and Lee 2000, p. 363). Because the current evaluation
techniques for traceability use few if any measures, the statistical limitation
imposed by the use of ordinal scales may be evaluated as an acceptable one.
Armed with a construct, measurement attributes and an appropriate scale type, the
traceability measure may be constructed. In order to evaluate traceability, we will
need to answer questions that address both the presence (yes or no) and quality of
the effort (how well) to provide traceability in the nine areas (i.e., our measurement
constructs) from Section 5.4.2. Table 6.14 has rearranged the nine constructs and
associated measurement concerns based upon the life cycle stage and systems
engineering process.
In order to evaluate the design’s ability to conform to the notion of traceability, a
specific question should be developed which will evaluate each of the nine design
traceability measurement concerns. The answers to the questions will be contained
in a 5-point Likert scale. The measurement constructs and questions associated with
each of the measurements concerns are presented in Table 6.15.
The answer to each question in Table 6.15 will be scored using the 5-point Likert
measures in Table 6.16.
The overall measure for system traceability is a sum of the scores from the nine
traceability metrics as shown in Eq. 6.9.
Generalized Equation for System Traceability
$$T_{sys} = \sum_{i=1}^{n} T_i \quad (6.9)$$
$$T_{sys} = T_{cd} + T_{pd1} + T_{pd2} + T_{dd} + T_{fa} + T_{s1} + T_{s2} + T_{v1} + T_{v2} \quad (6.10)$$
The summation of the nine constructs in Eq. 6.10 provides the measure of the degree of traceability in a system design endeavor (Table 6.16).
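A minimal sketch of Eqs. 6.9 and 6.10 follows, summing hypothetical 5-point Likert scores for the nine traceability constructs.

```python
# Hypothetical Likert scores (1-5) for the nine traceability constructs of Eq. 6.10.
likert_scores = {"Tcd": 4, "Tpd1": 3, "Tpd2": 5, "Tdd": 4, "Tfa": 2,
                 "Ts1": 3, "Ts2": 4, "Tv1": 5, "Tv2": 4}
T_sys = sum(likert_scores.values())
print(T_sys)   # a higher total indicates a greater degree of design traceability
```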
As stated in each of the three previous sections, being able to measure each non-functional attribute is essential in systems design endeavors. A structural mapping that relates traceability to a specific metric and measurement entity is required. The four-level construct for traceability is presented in
Table 6.17.
6.6 Summary
The chapter that follows will address the non-functional requirements for compatibility, consistency, interoperability, and safety as part of the concern for design in systems endeavors.
References
Ameri, F., Summers, J. D., Mocko, G. M., & Porter, M. (2008). Engineering design complexity:
An investigation of methods and measures. Research in Engineering Design, 19(2–3),
161–179.
Ashby, W. R. (1958). Requisite variety and its implications for the control of complex systems.
Cybernetica, 1(2), 83–99.
Ashby, W. R. (1968). Variety, constraint, and the law of requisite variety. In W. Buckley (Ed.),
Modern systems research for the behavioral scientist (pp. 129–136). Chicago: Aldine
Publishing Company.
Baldwin, C. Y., & Clark, K. B. (2006). Modularity in the design of complex engineering systems.
In D. Braha, A. A. Minai, & Y. Bar-Yam (Eds.), Complex engineered systems (pp. 175–205).
Berlin: Springer.
Bar-Yam, Y. (2004). Multiscale variety in complex systems. Complexity, 9(4), 37–45.
Bashir, H. A., & Thomson, V. (1999). Estimating design complexity. Journal of Engineering
Design, 10(3), 247–257.
Becker, O., Asher, J. B., & Ackerman, I. (2000). A method for system interface reduction using
N2 charts. Systems Engineering, 3(1), 27–37.
Beer, S. (1981). Brain of the Firm. New York: Wiley.
Booch, G. (1994). Object-oriented analysis and design with applications (2nd ed.). Reading, MA:
Addison-Wesley.
Braha, D., & Maimon, O. (1998). The measurement of a design structural and functional
complexity. IEEE Transactions on Systems, Man and Cybernetics—Part A: Systems and
Humans, 28(4), 527–535.
Briand, L. C., Wüst, J., Daly, J. W., & Victor Porter, D. (2000). Exploring the relationships
between design measures and software quality in object-oriented systems. Journal of Systems
and Software, 51(3), 245–273.
Browning, T. R. (2001). Applying the design structure matrix to system decomposition and
integration problems: A review and new directions. IEEE Transactions on Engineering
Management, 48(3), 292–306.
Budgen, D. (2003). Software design (2nd ed.). New York: Pearson Education.
Chidamber, S. R., & Kemerer, C. F. (1994). A metrics suite for object oriented design. IEEE
Transactions on Software Engineering, 20(6), 476–493.
Chrissis, M. B., Konrad, M., & Shrum, S. (2007). CMMI: Guidelines for process integration and
product improvement (2nd ed.). Upper Saddle River, NJ: Addison-Wesley.
Cliff, N. (1993). What is and isn’t measurement. In G. Keren & C. Lewis (Eds.), A handbook for
data analysis in the behavioral sciences: Methodological issues (pp. 59–93). Hillsdale, NJ:
Lawrence Erlbaum Associates.
Conant, R. C. (1976). Laws of information which govern systems. IEEE Transactions on Systems,
Man and Cybernetics, SMC, 6(4), 240–255.
Coombs, C. H., Raiffa, H., & Thrall, R. M. (1954). Some views on mathematical models and
measurement theory. Psychological Review, 61(2), 132–144.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3),
297–334.
Dick, J. (2005). Design traceability. IEEE Software, 22(6), 14–16.
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between
constructs and measures. Psychological Methods, 5(2), 155–174.
Eppinger, S. D., & Browning, T. R. (2012). Design structure matrix methods and applications.
Cambridge, MA: MIT Press.
Faulconbridge, R. I., & Ryan, M. J. (2003). Managing complex technical projects: A systems
engineering approach. Norwood, MA: Artech House.
Flood, R. L., & Carson, E. R. (1993). Dealing with complexity: An introduction to the theory and
application of systems science (2nd ed.). New York: Plenum Press.
Gershenson, J. K., Prasad, G. J., & Zhang, Y. (2003). Product modularity: Definitions and benefits.
Journal of Engineering Design, 14(3), 295.
Gershenson, J. K., Prasad, G. J., & Zhang, Y. (2004). Product modularity: Measures and design
methods. Journal of Engineering Design, 15(1), 33–51.
Halstead, M. H. (1977). Elements of software science. Amsterdam: Elsevier North-Holland.
Hauser, J. R., & Clausing, D. P. (1988). The house of quality. Harvard Business Review, 66(3),
63–73.
Henneman, R. L., & Rouse, W. B. (1986). On measuring the complexity of monitoring and
controlling large-scale systems. IEEE Transactions on Systems, Man and Cybernetics, 16(2),
193–207.
Henry, S., & Kafura, D. (1984). The evaluation of software systems’ structure using quantitative
software metrics. Software: Practice and Experience, 14(6), 561–573.
Hölttä-Otto, K., & de Weck, O. (2007). Degree of modularity in engineering systems and products
with technical and business constraints. Concurrent Engineering, 15(2), 113–126.
Hornby, G. S. (2007). Modularity, reuse, and hierarchy: Measuring complexity by measuring
structure and organization. Complexity, 13(2), 50–61.
Huberman, B. A., & Hogg, T. (1986). Complexity and adaptation. Physica D: Nonlinear
Phenomena, 22(1–3), 376–384.
IEEE. (2005). IEEE standard 1220: Systems engineering—application and management of the
systems engineering process. New York: Institute of Electrical and Electronics Engineers.
IEEE, & ISO/IEC (2010). IEEE and ISO/IEC standard 24765: Systems and software engineering—
vocabulary. New York and Geneva: Institute of Electrical and Electronics Engineers and the
International Organization for Standardization and the International Electrotechnical
Commission.
Jarke, M. (1998). Requirements tracing. Communications of the ACM, 41(12), 32–36.
Johnson, N. (2007). Simply complexity: A clear guide to complexity theory. Oxford: Oneworld
Publications.
Jung, W. S., & Cho, N. Z. (1996). Complexity measures of large systems and their efficient
algorithm based on the disjoint cut set method. IEEE Transactions on Nuclear Science, 43(4),
2365–2372.
Kerlinger, F. N., & Lee, H. B. (2000). Foundations of behavioral research. Fort Worth: Harcourt
College Publishers.
Kitchenham, B. A., Pickard, L. M., & Linkman, S. J. (1990). An evaluation of some design
metrics. Software Engineering Journal, 5(1), 50–58.
Koomen, C. J. (1985). The entropy of design: A study on the meaning of creativity. IEEE
Transactions on Systems, Man and Cybernetics, SMC, 15(1), 16–30.
Lissitz, R. W., & Green, S. B. (1975). Effect of the number of scale points on reliability: A Monte
Carlo approach. Journal of Applied Psychology, 60(1), 10–13.
Martin, M. V., & Ishii, K. (2002). Design for variety: Developing standardized and modularized
product platform architectures. Research in Engineering Design, 13(4), 213–235.
McCabe, T. J. (1976). A complexity measure. IEEE Transactions on Software Engineering, SE,
2(4), 308–320.
McCabe, T. J., & Butler, C. W. (1989). Design complexity measurement and testing.
Communications of the ACM, 32(12), 1415–1425.
Mikkola, J. H., & Gassmann, O. (2003). Managing modularity of product architectures: Toward an
integrated theory. IEEE Transactions on Engineering Management, 50(2), 204–218.
Min, B.-K., & Soon Heung, C. (1991). System complexity measure in the aspect of operational
difficulty. IEEE Transactions on Nuclear Science, 38(5), 1035–1039.
Newcomb, P. J., Bras, B., & Rosen, D. W. (1998). Implications of modularity on product design
for the life cycle. Journal of Mechanical Design, 120(3), 483–490.
Nunnally, J. C. (1967). Psychometric theory. New York: McGraw-Hill.
Shannon, C. E. (1948a). A mathematical theory of communication, part 1. Bell System Technical
Journal, 27(3), 379–423.
Shannon, C. E. (1948b). A mathematical theory of communication, part 2. Bell System Technical
Journal, 27(4), 623–656.
Shannon, C. E., & Weaver, W. (1998). The mathematical theory of communication. Champaign,
IL: University of Illinois Press.
Simon, H. A. (1996). The sciences of the artificial (3rd ed.). Cambridge, MA: MIT Press.
Sosa, M. E., Eppinger, S. D., & Rowles, C. M. (2007). A network approach to define modularity of
components in complex products. Journal of Mechanical Design, 129(11), 1118–1129.
Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103(2684), 677–680.
Steward, D. V. (1981). The design structure system: A method for managing the design of
complex systems. IEEE Transactions on Engineering Management, EM, 28(3), 71–74.
Suh, N. P. (1990). The principles of design. New York: Oxford University Press.
Suh, N. P. (2001). Axiomatic design: Advances and applications. New York: Oxford University
Press.
Suh, N. P. (2005). Complexity: Theory and applications. New York: Oxford University Press.
Summers, J. D., & Shah, J. J. (2010). Mechanical engineering design complexity metrics: size,
coupling, and solvability. Journal of Mechanical Design, 132(2), 021004.
Torgerson, W. (1958). Theory and methods of scaling. New York: Wiley.
Wiegers, K. E. (2003). Software requirements (2nd ed.). Redmond, WA: Microsoft Press.
Yourdon, E., & Constantine, L. L. (1979). Structured design: Fundamentals of a discipline of
computer design and systems design. Englewood Cliffs, NJ: Prentice-Hall.
Yu, T.-L., Yassine, A. A., & Goldberg, D. E. (2007). An information theoretic method for
developing modular architectures using genetic algorithms. Research in Engineering Design,
18(2), 91–109.
Chapter 7
Compatibility, Consistency,
Interoperability
Abstract The design of systems and components during the design stage of the
systems life cycle requires specific purposeful actions to ensure effective designs
and viable systems. Designers are faced with a number of design concerns that they
must embed into the design in every instance of thinking and documentation. Three
of these concerns are addressed by the non-functional requirements for compati-
bility, consistency, and interoperability. Formal understanding of each of these non-
functional requirements requires definitions, terms, and equations, as well as the
ability to understand how to control their effect and measure their outcomes during
system design endeavors.
This chapter will address three major topics: (1) compatibility; (2) consistency; and
(3) interoperability in design endeavors. The chapter begins by reviewing com-
patibility and the basic terminology, equations and concepts that underlie its uti-
lization. Compatibility and its relation to standards are addressed and a measure for
evaluating compatibility in systems design is proposed.
Section 7.3 discusses the concept of consistency and how it affects systems
designs. Consistency is defined and reviewed with respect to design. The section
completes with a proposed measure for consistency that is based upon requirements
validation, functional verification, and design verification activities and provides a
structural map relating the metric and the measurement attributes for consistency.
Section 7.4 in this chapter addresses interoperability by providing a definition
and models of interoperability. A number of methods for evaluating interoperability
are discussed and a measurement method is proposed. The section concludes with a
metric and measurable characteristic for interoperability.
The chapter has a specific learning goal and associated objectives. The learning
goal of this chapter is to be able to identify how the attributes of compatibility, consistency, and interoperability influence design in systems endeavors.
7.2 Compatibility
This section will review the basics of compatibility and how it is applied during
systems endeavors. Compatibility is not a well-known non-functional requirement
and as such must be clearly defined and understood.
The second definition of compatibility is very close to the definition of interoperability, which is:
The ability of two or more systems or components to exchange information and to use the
information that has been exchanged. (IEEE and ISO/IEC 2010, p. 186)
Interoperability will be discussed in Sect. 7.4, so the discussion of compatibility in this section will use only the first definition, where a system's ability to work with other systems without having to be altered to do so is
the focus. By restricting the definition in this manner a direct linkage with the
notion of standards, which are the primary means for ensuring compatibility in
systems endeavors, is established.
professional, and technical organizations that support specific industries and asso-
ciated sectors in the economy. In the United States, the American National
Standards Institute’s (ANSI) stated mission is:
To enhance both the global competitiveness of U.S. business and the U.S. quality of life by
promoting and facilitating voluntary consensus standards and conformity assessment sys-
tems, and safeguarding their integrity.
1. Buyers view product compatibility as a benefit: because the product has been designed and produced to work according to an accepted and utilized standard, they are protected from stranding and from the limitations associated with a one-of-a-kind product.
2. Systems designers view compatibility from the perspective that standards place
limits on their design decisions. Because design decisions are constrained there
are potential financial costs associated with static losses due to limited variety
and dynamic losses due to limited innovation.
3. Management may view compatibility standards as both a means for muting
competition during early development and extending product life by ensuring
compatibility with other products over the life of the product.
In summary “Standards are an inevitable outgrowth of systems, whereby com-
plementary products work in concert to meet users’ needs” (Shapiro 2001, p. 82).
Design Compatibility Analysis (DCA) is the process that focuses on ensuring that
the proposed design is compatible with the specifications in the original design
requirements. While this may seem trivial, many proposed designs stray far from
the original requirements and associated specifications. The underlying goal for
DCA is:
Fig. 7.1 Concept for design compatibility analysis [based on figure in (Ishii et al. 1988)]: the proposed design is compared with the requirements, compatibility, and specifications using a compatibility knowledge base, leading to design verification or to redesign and respecification
One of the important tasks in engineering design is to ensure the compatibility of the
elements of the proposed design with each other and with the design specifications
(requirements and constraints). Major design decisions, such as selection of components,
determination of the system type, and sizing of components, must be made with the
compatibility issue in mind. Some design features make a good match while others may be
totally incompatible for the required design specifications (Ishii et al. 1988, p. 55).
The team that conducts the DCA should follow the concept depicted in Fig. 7.1
where the proposed design is compared with the requirements and specifications
while using the compatibility knowledge base as a guide. If the proposed design is
deemed to be non-compatible, then either a redesign or re-specification is required.
where:
utility(s) weight of the evaluation of a design element where Σ utility(s) = 1.0.
M(s) compatibility of a design element s.
K entire set of design elements.
While a full discussion and derivation of the match-index (MI) is beyond the scope
of this chapter, the reader is encouraged to review the literature on the use of the MI
as a measure of design compatibility (Ishii 1991; Ishii et al. 1988).
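The DCA flow of Fig. 7.1 can be illustrated with a simple rule-based check. The sketch below is only an illustration: the design elements, specification strings, and knowledge-base structure are hypothetical and do not reproduce the match-index formulation of Ishii et al.

```python
def design_compatibility_analysis(proposed_design, specifications, knowledge_base):
    """Compare each element of a proposed design against the specifications using a
    compatibility knowledge base (Fig. 7.1): verify, or flag for redesign/respecification."""
    findings = [(element, spec)
                for element in proposed_design
                for spec in specifications
                if (element, spec) in knowledge_base.get("incompatible", set())]
    decision = "redesign or respecify" if findings else "proceed to design verification"
    return decision, findings

# Hypothetical knowledge base, design elements, and specifications.
kb = {"incompatible": {("aluminum housing", "operating temperature > 600 C")}}
print(design_compatibility_analysis(["aluminum housing", "ceramic seal"],
                                    ["operating temperature > 600 C"], kb))
```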
At the end of Chap. 3 the importance of being able to measure each non-functional attribute was cited as essential in every system design endeavor. A structural mapping that relates compatibility to a specific metric and measurement entity is required. The four-level construct for compatibility is
presented in Table 7.3.
7.3 Consistency
In this section the basics of consistency and how it is applied during systems
endeavors will be discussed. Consistency is another non-functional requirement
that is not well-known and as such must be clearly defined and understood.
“Assessing consistency is really a process for ensuring that the different viewpoint
models … form projections from the same overall design model” (Budgen 2003,
p. 384). There is a dearth of information in the literature on methods and techniques
for how to assess consistency. Completion of the requirements validation (6.2.4),
functional verification (6.4.3), and design verification (6.6.3) tasks in IEEE
Standard 1220 (IEEE 2005) ensures that consistency is addressed as a high-level task. Boehm (1984) recommended using the following:
• Manual cross-referencing: Cross referencing involves both reading and the
construction of cross-reference tables and diagrams to clarify interactions among
design entities. For many large systems this can be cumbersome, leading to the
suggested use of automated cross-referencing tools.
As discussed during the development of the scales for traceability in the previous
chapter, the selection of a measurement scale is an important element in the
development of an adequate measure for design consistency. Because the three
design processes (i.e., requirements validation, functional verification, and design
verification) used to identify evaluation points have no natural origin or empirically
defined distance, the ordinal scale is selected as an appropriate scale for measuring
the consistency attributes. The well-known Likert scale is proposed for use in
evaluating design consistency. In order to improve its reliability, a five-point Likert scale will be used (Lissitz and Green 1975).
Before moving on to describing the measure for design consistency two
important points must once again be made with respect to scale development.
Scales are characterized as either a proposed scale or a scale. “A proposed scale is
one that some investigator(s) put forward as having the requisite properties, and if it
is indeed shown to have them, then it is recognized as a scale” (Cliff 1993, p. 65).
As stated earlier, the use of the word scale is referring to proposed scales. This may
seem to be an insignificant point, but until the scale has been accepted and suc-
cessfully utilized it remains proposed.
Armed with a construct, measurement attributes and an appropriate scale type, the
design consistency measure may be constructed. In order to evaluate design consistency, questions that address both the presence (yes or no) and the quality of the effort (how well) to provide a consistent systems design must be answered for the requirements validation, functional verification, and design verification tasks. These measurement constructs are presented in Table 7.5.
$$C_{sys} = \sum_{i=1}^{n} C_i \quad (7.2)$$
The summation of the three constructs in Eq. 7.3 provides the measure of the degree of consistency in a system design endeavor.
At the end of Chap. 3 the importance of being able to measure each non-functional
attribute during systems design endeavors was stressed. A structural mapping that
relates consistency to a specific metric and measurement entity is required. The
four-level construct for consistency is presented in Table 7.8.
7.4 Interoperability
In this section the basics of interoperability and how it is applied during systems
endeavors will be addressed. Interoperability is a common term, but one without a
generally agreed upon definition. As such, it must be clearly defined and
understood.
[Figure: the interoperability frame, relating hard characteristics, holistic worldviews, constructs, and conditions across conceptual, structural, syntactic, semantic, cultural, systemic, and contextual elements]
Between 1980 and 2014, sixteen separate models for evaluating interoperability in systems appeared in the general literature. However, only eight of these are either major enterprise reports or have been published in peer-reviewed scholarly journals. Table 7.12 presents the eight formal models.
While an in-depth review of each model is beyond the scope of this chapter, readers are
encouraged to consult the references in Table 7.12 for a more detailed description of
the development of each interoperability evaluation model. The next section will
describe one of these measures as an appropriate technique for measuring and
evaluating interoperability in systems.
The i-Score Model for evaluating system interoperability compares the similarity of
system interoperability characteristics and is based upon the following assumption:
If a pair of systems is instantiated only with system interoperability characters, then the
measure of their similarity is also a measure of their interoperability (Ford 2008, p. 52).
The example used to describe this technique includes the macro-system S, which is made up of the set of systems si, and the macro-characteristics of interoperability X, which are made up of the individual system interoperability characteristics xi. For two systems s1 and s2, we can have the interoperability relationships X(s) shown in Fig. 7.3.
Fig. 7.3 Interoperability relationships X(s) between systems s1 and s2: no interoperation (X(s) = x1 = 0), uni-directional interoperation in either direction (X(s) = x2 = 1, X(s) = x3 = 2), and bi-directional interoperation (X(s) = x4 = 4)
Once the set (S) of systems (si), their interoperability characters (xi), and the possible
states of those characters (ci) are identified, modeling of the interoperability rela-
tionships for the larger system (S) may commence. The individual systems (si) are
modeled, or instantiated, as a sequence that is representative of the states of each
system’s interoperability characteristics. The lower-case Greek character sigma (σ) is
used to denote the instantiation of the individual system (si), as depicted in Eq. 7.4.
Instantiation of si
$$\sigma_i = X(s_i) = \{x_1(s_i), x_2(s_i), \ldots, x_n(s_i)\} \quad (7.4)$$
In order to support meaningful system comparisons the larger system (S) is modeled
by aligning the instantiations (σi) of all of the member systems (si ∈ S). The upper-
case Greek characters sigma (Σ) is used to denote the instantiation alignment as
depicted in Eq. 7.5.
Instantiation Alignment of S
$$\sum_{i=1}^{n} X_i(s_i) = \{\sigma_1, \sigma_2, \ldots, \sigma_n\} \quad (7.5)$$
An interoperability function (I), shown in Eq. 7.6, has been proposed by Ford
(2008) that uses the modified Minkowski similarity function to derive a weighted,
normalized measure of the similarity of two system instantiations (σ′) and (σ″).
Interoperability Function
$$I = \left[\frac{\sum_{i=1}^{n} \sigma'(i) + \sum_{i=1}^{n} \sigma''(i)}{2\, n\, c_{max}}\right]\left[1 - \frac{1}{\sqrt[r]{n}}\left(\sum_{i=1}^{n}\left(\frac{\left|\sigma'(i) - \sigma''(i)\right|}{c_{max}}\right)^{r} b_i\right)^{1/r}\right] \quad (7.6)$$
where:
r the Minkowski parameter (usually set to r = 2)
n the number of interoperability characters in each system
b_i 0 if σ′(i) = 0 or σ″(i) = 0; otherwise b_i = 1
c_max the maximum value of any interoperability character
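The following sketch evaluates the interoperability function for a pair of system instantiations, assuming the reading of Eq. 7.6 shown above; the character values are hypothetical and the code is not Ford's reference implementation.

```python
def interoperability(sigma1, sigma2, c_max, r=2):
    """Weighted, normalized Minkowski similarity of two instantiations (Eq. 7.6 as read above)."""
    n = len(sigma1)
    b = [0 if (a == 0 or c == 0) else 1 for a, c in zip(sigma1, sigma2)]
    magnitude = (sum(sigma1) + sum(sigma2)) / (2 * n * c_max)
    distance = (sum(((abs(a - c) / c_max) ** r) * bi
                    for a, c, bi in zip(sigma1, sigma2, b))) ** (1 / r)
    return magnitude * (1 - distance / (n ** (1 / r)))

# Hypothetical instantiations of two systems over four interoperability characters (0-9 scale).
print(round(interoperability([3, 5, 0, 7], [4, 5, 2, 1], c_max=9), 3))
```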
Three systems s1, s2, and s3 (s1, s2, s3 ∈ S) have interoperability characteristics x1, x2, x3, and x4 (x1, x2, x3, x4 ∈ X), where the characteristics have a maximum value of 9 (C ∈ {R ∩ [0, 9]}) and r = 2. They can be represented as follows:
$$S = \{s_1, s_2, s_3\}$$
$$X = \{x_1, x_2, x_3, x_4\}$$
$$\{\sigma_1, \sigma_2, \sigma_3\} = \{x_1(s_1), x_2(s_1), x_3(s_1), x_4(s_1), x_1(s_2), x_2(s_2), x_3(s_2), x_4(s_2), x_1(s_3), x_2(s_3), x_3(s_3), x_4(s_3)\}$$
The interoperability function I may be calculated by inserting the values from Σ into
Eq. 7.6. The resulting interoperability matrix M is:
$$M = \begin{bmatrix} 0 & 0.207 & 0.162 \\ 0.207 & 0 & 0.276 \\ 0.162 & 0.276 & 0 \end{bmatrix}$$
7.5 Summary
References
Adams, K. M., & Meyers, T. J. (2011). Perspective 1 of the SoSE methodology: Framing the
system under study. International Journal of System of Systems Engineering, 2(2/3), 163–192.
Alberts, D. S., & Hayes, R. E. (2003). Power to the edge: Command and control in the
information age. Washington, DC: DoD Command and Control Research Program.
Audi, R. (Ed.). (1999). Cambridge dictionary of philosophy (2nd ed.). New York: Cambridge
University Press.
Boehm, B. W. (1984). Verifying and validating software requirements and design specifications.
IEEE Software, 1(1), 75–88.
Budgen, D. (2003). Software design (2nd ed.). New York: Pearson Education.
Chiu, D. K. W., Cheung, S. C., Till, S., Karlapalem, K., Li, Q., & Kafeza, E. (2004). Workflow
view driven cross-organizational interoperability in a web service environment. Information
Technology and Management, 5(3–4), 221–250.
Cliff, N. (1993). What is and isn’t measurement. In G. Keren & C. Lewis (Eds.), A handbook for
data analysis in the behavioral sciences: Methodological issues (pp. 59–93). Hillsdale, NJ:
Lawrence Erlbaum Associates.
David, P. A., & Greenstein, S. (1990). The economics of compatibility standards: An introduction
to recent research. Economics of Innovation and New Technology, 1(1–2), 3–41.
DiMario, M. J. (2006). System of systems interoperability types and characteristics in joint
command and control. In Proceedings of the 2006 IEEE/SMC International Conference on
System of Systems Engineering (pp. 236–241). Piscataway, NJ: Institute of Electrical and
Electronics Engineers.
DoD. (1998). C4ISR Architecture working group final report—levels of information system
interoperability (LISI). Washington, DC: Department of Defense.
Ford, T. C. (2008). Interoperability measurement. Wright-Patterson Air Force Base, OH: Air Force Institute of Technology.
Ford, T. C., Colombi, J. M., Jacques, D. R., & Graham, S. R. (2009). A general method of
measuring interoperability and describing its impact on operational effectiveness. The Journal
of Defense Modeling and Simulation: Applications, Methodology, Technology, 6(1), 17–32.
Grindley, P. (1995). Standards, strategy, and policy: Cases and stories. New York: Oxford
University Press.
Hamilton, J. A., Rosen, J. D., & Summers, P. A. (2002). An interoperability roadmap for C4ISR
legacy systems. Acquisition Review Quarterly, 28, 17–31.
Heiler, S. (1995). Semantic interoperability. ACM Computing Surveys, 27(2), 271–273.
IEEE. (2005). IEEE Standard 1220: Systems engineering—application and management of the
systems engineering process. New York: Institute of Electrical and Electronics Engineers.
IEEE, & ISO/IEC. (2010). IEEE and ISO/IEC Standard 24765: Systems and software engineering
—vocabulary. New York and Geneva: Institute of Electrical and Electronics Engineers and the
International Organization for Standardization and the International Electrotechnical
Commission.
Ishii, K. (1991). Life-cycle engineering using design compatibility analysis. In Proceedings of the
1991 NSF Design and Manufacturing Systems Conference (pp. 1059–1065). Dearborn, MI:
Society of Manufacturing Engineers.
Ishii, K., Adler, R., & Barkan, P. (1988). Application of design compatibility analysis to
simultaneous engineering. Artificial Intelligence for Engineering Design, Analysis and
Manufacturing, 2(1), 53–65.
Ishii, K., & Sugeno, M. (1985). A model of human evaluation process using fuzzy measure.
International Journal of Man-Machine Studies, 22(1), 19–38.
Kasunic, M., & Anderson, W. (2004). Measuring systems interoperability: Challenges and
opportunities (CMU/SEI-2004-TN-003). Pittsburgh, PA: Carnegie Mellon University.
Kinder, T. (2003). Mrs Miller moves house: The interoperability of local public services in Europe.
Journal of European Social Policy, 13(2), 141–157.
LaVean, G. E. (1980). Interoperability in defense communications. IEEE Transactions on
Communications, 28(9), 1445–1455.
Lissitz, R. W., & Green, S. B. (1975). Effect of the number of scale points on reliability: A Monte
Carlo approach. Journal of Applied Psychology, 60(1), 10–13.
Mensh, D., Kite, R., & Darby, P. (1989). A methodology for quantifying interoperability. Naval
Engineers Journal, 101(3), 251–259.
Ozok, A. A., & Salvendy, G. (2000). Measuring consistency of web page design and its effects on
performance and satisfaction. Ergonomics, 43(4), 443–460.
Rezaei, R., Chiew, T. K., & Lee, S. P. (2014). An interoperability model for ultra large scale
systems. Advances in Engineering Software, 67, 22–46.
Shapiro, C. (2001). Setting compatibility standards: Cooperation or collusion? In R. C. Dreyfuss,
D. L. Zimmerman, & H. First (Eds.), Expanding the boundaries of intellectual property:
Innovation policy for the knowledge society (pp. 81–101). New York: Oxford University Press.
Sheth, A. P. (1999). Changing focus on interoperability in information systems: From system,
syntax, structure to semantics. In M. Goodchild, M. Egenhofer, R. Fegeas, & C. Kottman
(Eds.), Interoperating geographic information systems (pp. 5–29). New York: Springer.
Shneiderman, B. (1997). Designing the user interface: Strategies for effective human-computer
interaction (3rd ed.). Boston: Addison-Wesley.
Tolk, A., Diallo, S. Y., & Turnitsa, C. D. (2007). Applying the levels of conceptual interoperability
model in support of integratability, interoperability, and composability for system-of-systems
engineering. Journal of Systemics, Cybernetics and Informatics, 5(5), 65–74.
Vetere, G., & Lenzerini, M. (2005). Models for semantic interoperability in service-oriented
architectures. IBM Systems Journal, 44(4), 887–903.
Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8(3), 338–353.
Chapter 8
System Safety
Abstract The design of systems and components during the design stage of the
systems life cycle requires specific purposeful actions to ensure effective designs
and viable systems. Designers are faced with a number of design concerns that they
must embed into the design in every instance of thinking and documentation. Safety
is one of these concerns and is addressed by the non-functional requirement for
safety, which is composed of seven attributes. The seven safety attributes were developed using Leveson's Systems-Theoretic Accident Model and
Processes (STAMP). Because STAMP is a system-theoretic approach appropriate
for evaluating safety in complex, systems-age engineering systems, the safety
attributes provide the ability to understand how to control safety and measure its
outcomes during system design endeavors.
This chapter will address system safety and how it is incorporated into system
design endeavors. Machine-age system safety is contrasted with systems-age
concerns. The need for safety expressed in the IEEE Standard for the Application
and Management of the Systems Engineering Process (IEEE 2005) is used to
develop a metric for evaluating safety in systems designs. The chapter completes by
relating the proposed measure for evaluating systems safety as a metric and includes
a structural map for systems safety.
The chapter has a specific learning goal and associated objectives. The learning
goal of this chapter is to be able to identify how the attributes of safety are ensured
through purposeful design efforts during systems endeavors. This chapter’s goal is
supported by the following objectives:
• Define safety in terms of emergence.
• Describe the relationship between systems safety and hazards.
Safety is a widely used term, but one which we will need to define clearly if we are
to apply it as a valid design concern during systems endeavors. Safety, from a
systems engineering perspective, is defined as:
The expectation that a system does not, under defined conditions, lead to a state in which
human life, health, property, or the environment is endangered. (IEEE and ISO/IEC 2010,
p. 315)
Safety has additional definitions, shown in Table 8.1 that may provide improved
understanding of the term when used as a non-functional requirement for a system.
The definitions for safety can be further improved by reviewing how safety has
been addressed in the literature on systems.
Safety is like motherhood and apple pie: who does not want it? However, under-
taking systems endeavors and ensuring that the associated system is safe is not an
abstract idea but a concrete requirement, most often satisfied by the inclusion of a
non-functional requirement for safety.
Most of the traditional literature on systems safety has focused on assumptions
that worked well for simple systems (i.e., the machine age), but no longer work for
complex systems (i.e., the systems age). In her seminal work Engineering a Safer
World: Systems Thinking Applied to Safety, Leveson (2011) of the Massachusetts
Institute of Technology, discusses the need to move away from the assumptions of
Table 8.2 Machine age and systems age safety model assumptions (Leveson 2011, p. 57)
1. Machine age: "Safety is increased by increasing system or component reliability. If components or systems do not fail, then accidents will not occur"
   Improved systems age: "High reliability is neither necessary nor sufficient for safety"
2. Machine age: "Accidents are caused by chains of directly related events. We can understand accidents and assess risk by looking at the chain of events leading to the loss"
   Improved systems age: "Accidents are complex processes involving the entire socio-technical system. Traditional event-chain models cannot describe this process adequately"
3. Machine age: "Probabilistic risk analysis based on event chains is the best way to assess and communicate safety and risk information"
   Improved systems age: "Risk and safety may be best understood and communicated in ways other than probabilistic risk analysis"
4. Machine age: "Most accidents are caused by operator error. Rewarding safe behavior and punishing unsafe behavior will eliminate or reduce accidents significantly"
   Improved systems age: "Operator behavior is a product of the environment in which it occurs. To reduce operator 'error' we must change the environment in which the operator works"
5. Machine age: "Highly reliable software is safe"
   Improved systems age: "Highly reliable software is not necessarily safe. Increasing software reliability or reducing implementation errors will have little impact on safety"
6. Machine age: "Major accidents occur from the chance simultaneous occurrence of random events"
   Improved systems age: "Systems will tend to migrate toward states of higher risk. Such migration is predictable and can be prevented by appropriate system design or detected during operations using leading indicators of increasing risk"
7. Machine age: "Assigning blame is necessary to learn from and prevent accidents or incidents"
   Improved systems age: "Blame is the enemy of safety. Focus should be on understanding how the system behavior as a whole contributed to the loss and not on who or what to blame for it"
the machine age and toward a new series of improved assumptions that can be successfully applied as a new model of systems safety appropriate for the systems age.1 Table 8.2 contrasts the machine age assumptions with those required in the new systems age.
The foundation for systems engineering is systems theory and its series of
supporting axioms and principles (Adams et al. 2014). Leveson’s new model for
system safety makes use of system theory’s centrality axiom and its pair of sup-
porting principles—hierarchy and emergence and communications and control.
Specifically, safety can be viewed as a control problem:
1 The reader is encouraged to read Chap. 2—Questioning the Foundations of Traditional Safety Engineering in Leveson (2011), Engineering a Safer World: Systems Thinking Applied to Safety (Cambridge, MA: MIT Press), for a thorough discussion of the rationale associated with each of the seven assumptions.
Emergent properties like safety are controlled or enforced by a set of constraints (control
laws) related to the behavior of the system components. (Leveson 2011, p. 67)
The interactive complexity and tight coupling in modern complex systems require
the new systems-age model of systems safety to approach safety by incorporating
both the social and technical elements of the system. The socio-technical system’s
context will dictate the non-functional requirements for safety that are invoked
during the systems design process.
In the system design process safety requirements are derived from accident hazards.
A safety requirement is a constraint derived from identified hazards. (Penzenstadler et al.
2014, p. 42)
The definition of hazards is based upon the system design. The major elements of the design that give insight into potential hazards include (1) the system components, (2) the component interconnections, (3) human interactions with the system, (4) connections to the environment, and (5) potential environmental disturbances. The formal design process should address each of these hazards and the constraints that may prevent their occurrence.
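Because each identified hazard must be traced to at least one preventive constraint, a design team may keep a simple hazard log keyed to the five design elements listed above. The Python sketch below is a hypothetical illustration of such a log; the reactor example, the hazards, and the constraint wording are invented for the example and are not drawn from any particular STAMP documentation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Hazard:
    """A system state or condition that, under worst-case environmental
    conditions, can lead to a loss."""
    identifier: str
    description: str
    design_element: str                      # component, interconnection, human interaction, ...
    safety_constraints: List[str] = field(default_factory=list)


# Hypothetical hazard log for an illustrative chemical-batch reactor.
hazard_log = [
    Hazard("H-1", "Reactor pressure exceeds design limit",
           design_element="component interconnection (relief line)",
           safety_constraints=["Relief valve must open before pressure exceeds 90% of limit"]),
    Hazard("H-2", "Operator commands heating while coolant flow is off",
           design_element="human interaction with the system",
           safety_constraints=["Controller must inhibit heating unless coolant flow is confirmed"]),
]

# The formal design process addresses each hazard by tracing it to at
# least one enforceable safety constraint.
for hazard in hazard_log:
    status = "constrained" if hazard.safety_constraints else "UNCONSTRAINED"
    print(f"{hazard.identifier} [{hazard.design_element}]: {status}")
```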
In the traditional systems design process invoked by IEEE Standard 1220—Systems engineering—Application and management of the systems engineering process (IEEE 2005), safety is touched upon in four process areas.
• As an element of the requirements analysis process in the following sections:
6.1.1—Stakeholder expectations are balanced with an analysis of the effects on
the overall system design and safety.
6.1.4—Measures of effectiveness are defined to reflect overall stakeholder expectations and satisfaction that include safety.
6.1.9.5—The design team accounts for the system design features that create
significant hazards.
• As an element of the functional analysis process in the following sections:
6.3.2.5 (1)—The design team analyzes and prioritizes potential functional fail-
ure modes to define failure effects and identify the need for fault detection and
recovery functions.
(Figure: hazard analysis technique (STPA) informing design decisions)
While an in-depth review of STAMP is beyond the scope of this chapter, readers are encouraged to consult Part III—Using STAMP in Leveson's (2011) text Engineering a Safer World: Systems Thinking Applied to Safety.
The next section will discuss a measure for evaluating system safety.
In the previous sections the use of a systems-based model for ensuring that system safety emerges as a purposeful result of the design process was advocated. In order to ensure that the system design process has invoked a holistic, socio-technical perspective, the design effort should be evaluated using the essential criteria of such a model. As with traceability, the criteria will be subjective, qualitative measures that will need to answer questions that address both the presence (yes or no) and the quality of the effort (how well) to provide a sufficiently robust systems-based model as an element of the system design process. In this case the STAMP criteria will need to be related to a specific measurable attribute that can be utilized as a measure. Once again, measures are important because they are the linkage between observable, real-world, empirical facts and the construct (i.e., the system safety model) that we create as an evaluation point.
In order to ensure improved reliability, a five-point Likert scale will be invoked (Lissitz and Green 1975).
Before moving on to describing the measure for system safety two important
points must once again be made with respect to scale development. Scales are
characterized as either a proposed scale or a scale. “A proposed scale is one that
some investigator(s) put forward as having the requisite properties, and if it is
indeed shown to have them, then it is recognized as a scale” (Cliff 1993, p. 65). In
this chapter use of the word scale is referring to proposed scales. As stated before,
this may seem to be an insignificant point, but until the scale has been accepted and
successfully utilized it remains proposed.
Armed with a construct, measurement attributes, and an appropriate scale type, the system safety measure may be constructed. In order to evaluate system safety, questions that address both the presence (yes or no) and the quality of the effort (how well) to provide system safety by invoking the principal elements of a systems-based model during systems design endeavors must be answered. The seven STAMP criteria (i.e., our measurement constructs) from Table 8.3 have been rearranged in Table 8.4, and, in order to evaluate the design's ability to conform to the STAMP criteria for system safety, a specific question has been developed which may be used to evaluate each of the seven system safety measurement concerns. The answers to the questions will be recorded on a 5-point Likert scale. The measurement constructs and the questions associated with each of the measurement concerns are presented in Table 8.5.
The answer to each question will be scored using the 5-point Likert measures in Table 8.6.
A generalized measure for system safety is shown in Eq. 8.1.
Generalized Equation for Systems Safety

$$S_{sys} = \sum_{i=1}^{n} S_i \qquad (8.1)$$
The overall measure for system safety is the sum of the scores from the seven system safety metrics, as shown in Eq. 8.2, and will measure the degree of system safety in a system design endeavor.

Expanded Equation for System Safety

$$S_{sys} = S_1 + S_2 + S_3 + S_4 + S_5 + S_6 + S_7 \qquad (8.2)$$
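As a small illustration of how the scored questions roll up into Eqs. 8.1 and 8.2, the Python sketch below sums seven 5-point Likert scores, one per STAMP-derived safety criterion. The criterion labels and the scores are hypothetical stand-ins, not the wording of Tables 8.5 and 8.6; the total necessarily falls between 7 and 35 under this measure.

```python
# Hypothetical Likert scores (1 = strongly disagree ... 5 = strongly agree)
# for seven STAMP-derived system safety criteria; the labels below are
# invented placeholders for the questions of Table 8.5.
safety_scores = {
    "S1 safety constraints identified": 4,
    "S2 hierarchical safety control structure defined": 3,
    "S3 component interactions analyzed": 5,
    "S4 operator context addressed": 2,
    "S5 software contribution to hazards analyzed": 4,
    "S6 migration toward higher risk monitored": 3,
    "S7 losses analyzed without assigning blame": 5,
}

# Eqs. 8.1 and 8.2: the system safety measure is the sum of the seven scores.
s_sys = sum(safety_scores.values())

print(f"S_sys = {s_sys} (possible range {len(safety_scores)}..{5 * len(safety_scores)})")
```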
At the end of Chap. 3 the importance of being able to measure each non-functional
attribute was stressed as an essential element in systems design endeavors. A
structural mapping that relates system safety to a specific metric and measurement
entity is required. The four-level construct for system safety is presented in
Table 8.7.
8.8 Summary
In this chapter the non-functional requirement for safety has been reviewed. A formal
definition for safety has been provided along with additional explanatory definitions,
terms, and equations. The ability to effect safety during the design process has also
been addressed. Finally, a formal metric and measurement characteristic have been
proposed for evaluating the non-functional requirement for safety.
The next Part of the text will shift the focus to adaptation concerns. Adaptation concerns address the system's ability to change and adapt in order to remain viable and continue to address the requirements of its stakeholders. The first chapter in the Part on Adaptation Concerns will address the non-functional attributes for adaptability, flexibility, modifiability and scalability, and robustness. The second chapter in Part IV on Adaptation Concerns will address the non-functional attributes for extensibility, portability, reusability, and self-descriptiveness.
References
Adams, K. M., Hester, P. T., Bradley, J. M., Meyers, T. J., & Keating, C. B. (2014). Systems
theory: The foundation for understanding systems. Systems Engineering, 17(1), 112–123.
Alberico, D., Bozarth, J., Brown, M., Gill, J., Mattern, S., & McKinlay, A. (1999). Software
system safety handbook: A technical and managerial team approach. Washington: Joint
Services Software Safety Committee.
Cliff, N. (1993). What is and isn’t measurement. In G. Keren & C. Lewis (Eds.), A handbook for
data analysis in the behavioral sciences: Methodological issues (pp. 59–93). Hillsdale:
Lawrence Erlbaum Associates.
DoD. (2000). Military Standard (MIL-STD-882D): Standard practice for system safety.
Washington: Department of Defense.
IEEE. (1994). IEEE Standard 1228: Software safety plans. New York: Institute of Electrical and
Electronics Engineers.
IEEE. (2005). IEEE Standard 1220: Systems engineering—application and management of the
systems engineering process. New York: Institute of Electrical and Electronics Engineers.
IEEE, & ISO/IEC. (2010). IEEE and ISO/IEC Standard 24765: Systems and software engineering—
vocabulary. New York, Geneva: Institute of Electrical and Electronics Engineers and the
International Organization for Standardization and the International Electrotechnical
Commission.
Leveson, N. G. (2004). A new accident model for engineering safer systems. Safety Science, 42(4),
237–270.
Leveson, N. G. (2011). Engineering a safer world: Systems thinking applied to safety. Cambridge:
MIT Press.
Leveson, N. G., Dulac, N., Marais, K., & Carroll, J. (2009). Moving beyond normal accidents and
high reliability organizations: A systems approach to safety in complex systems. Organization
Studies, 30(2–3), 227–249.
Lissitz, R. W., & Green, S. B. (1975). Effect of the number of scale points on reliability: A Monte
Carlo approach. Journal of Applied Psychology, 60(1), 10–13.
NASA. (2011). NASA System Safety Handbook (NASA/SP-2010-580). In System Safety
Framework and Concepts for Implementation (Vol. 1). Washington: National Aeronautics
and Space Administration.
Parker, S. (Ed.). (1994). McGraw-Hill dictionary of engineering. New York: McGraw-Hill.
Penzenstadler, B., Raturi, A., Richardson, D., & Tomlinson, B. (2014). Safety, security, now
sustainability: The nonfunctional requirement for the 21st century. IEEE Software, 31(3), 40–47.
Perrow, C. (1999). Normal accidents: Living with high-risk technologies. Princeton: Princeton
University Press.
Part IV
Adaptation Concerns
Chapter 9
Adaptability, Flexibility, Modifiability
and Scalability, and Robustness
Abstract The design of systems and components during the design stage of the
systems life cycle requires specific purposeful actions to ensure effective designs
and viable systems. Designers are faced with a number of adaptation concerns that
they must embed into the design in every instance of thinking and documentation.
The ability for a system to change is essential to its continued survival and ability to provide requisite functions for its stakeholders. Changeability includes the non-functional requirements for adaptability, flexibility, modifiability and robustness.
Purposeful design requires an understanding of each of these requirements and how
to measure and evaluate each as part of an integrated systems design.
This chapter will address four major topics: (1) adaptability; (2) flexibility;
(3) modifiability and scalability; and (4) robustness in design endeavors. The
chapter begins with a section that reviews the concept of changeability, its three unique elements, and a method for representing systems change using a state-transition diagram.
Section 9.3 defines adaptability and flexibility and provides a clear method for distinguishing between these two non-functional properties.
Section 9.4 in this chapter addresses modifiability by providing a clear definition and a distinction between it and both scalability and maintainability.
Section 9.5 defines robustness and discusses the design considerations related to robust systems.
Section 9.6 defines a measure and a means for measuring changeability that is a function of (1) adaptability; (2) flexibility; (3) modifiability; and (4) robustness. The chapter concludes by relating the proposed measure for changeability to a metric and includes a structural map for changeability.
The chapter has a specific learning goal and associated objectives. The learning goal of this chapter is to be able to identify how the attributes of adaptability, flexibility, modifiability and scalability, and robustness influence design in systems endeavors.
The motivation for change in existing systems is based upon three major factors:
(1) marketplace forces; (2) technological evolution; and (3) environmental shifts
(Fricke and Schulz 2005). These drivers of change must be addressed by systems
practitioners throughout the system lifecycle. Because real-world systems exist in a
constantly changing domain, they too are subject to change. The ability for a system
to change is termed changeability. Changeability is a term that is not formally
defined in the systems engineering vocabulary, but it encompasses a number of
defined terms that include adaptability, flexibility, modifiability, scalability, and
robustness. Each of these terms will be fully addressed in later sections.
Right now, the most important point is that changeability addresses differences in a system over time. Change can be thought of simply as the difference in a system between an initial or zero time t0 and a future time tf. During the transition between t0 and tf, the system, its environment, or both may have been altered. A system's life cycle is filled with changes to both the system and its related environment. The constant stream of changes that occur during the systems life cycle requires the system's designers and maintainers to plan for, recognize, and control changes to ensure the system remains both viable and functionally effective. Changes that occur in a system are characterized by a change event that contains three unique elements: (1) the cause or impetus for the change; (2) the mechanism that effects the change; and (3) the overall effect of the change on both the system and its environment. These elements will be discussed in the sections that follow.
The impetus for change originates as a result of one or more of the three change factors described in the previous section: (1) marketplace forces; (2) technological evolution; and (3) environmental shifts (Fricke and Schulz 2005). The source, which is the instigator, force, or impetus that effects the change, is labeled the change agent. The change agent is responsible for transforming the change factor into action (Ross et al. 2008).
The mechanism of change describes the path taken between time t0 and time tf while the system and/or its environment is being transformed from its initial state to the new, altered state (Ross et al. 2008). The pathway is the action that includes all of the mechanistic resources (i.e., material, manpower, money, minutes (time), methods, and information) required to effect the change.
The effects of change are the actual differences between the system and/or the environment at t0 and tf (Ross et al. 2008). The difference between the system at t0 and tf is described in terms of the changed or newly added characteristics (yi), which are the results of a series of discrete events put into motion by the change agent and accomplished by specific mechanisms (mi). Each mechanism may be depicted as a transition arc with a discrete event and subsequent action that cause a change in the system characteristics (Y, where yi ∈ Y).
The time-dependent behavior of the system and its environment, as a function of the three change elements just described, may be modeled in a state-transition diagram (Hatley and Pirbhai 1988). The state-transition diagram (STD) accounts for the impetus, agent, pathways, and effects during the change event. Figure 9.1 is an STD that defines change as a function of the change agent, events and actions, mechanisms, and effects, where:
• Y = A set of n system characteristics yi, where i = 1 to n and yi ∈ Y.
• yi = A system characteristic, where i = 1 to n.
Fig. 9.1 State-transition diagram for change: the change factors (marketplace forces, technological evolution, environmental shifts) activate the change agent; mechanisms M1, M2, …, Mn (each an event and its action) transition the system from State 1 at time t0, with characteristics Y = {y1, y2, …, yn}, to State 2 at time tf, with characteristics Y′ = {y1, y2, …, yn}; the effect of the change is Y′ − Y
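The change event just described can also be rendered as a small state-transition sketch. The Python below is an illustrative assumption about how the elements of Fig. 9.1 might be represented in code: the characteristic set Y is modeled as a simple set of named characteristics, and the change factor, agent, mechanism, and characteristics shown are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class ChangeEvent:
    """One transition arc of the state-transition diagram of Fig. 9.1."""
    change_factor: str   # marketplace forces, technological evolution, or environmental shift
    change_agent: str    # the instigator that turns the change factor into action
    mechanism: str       # the event and action applied between t0 and tf
    added: frozenset = frozenset()    # characteristics introduced by the change
    removed: frozenset = frozenset()  # characteristics removed by the change

    def apply(self, y_t0: frozenset) -> frozenset:
        """Transform the characteristic set Y at t0 into Y' at tf."""
        return (y_t0 - self.removed) | self.added


# State 1: hypothetical system characteristics Y at time t0.
y_t0 = frozenset({"y1 manual data entry", "y2 local storage"})

event = ChangeEvent(
    change_factor="technological evolution",
    change_agent="product owner",
    mechanism="development increment adding a synchronization service",
    added=frozenset({"y3 cloud synchronization"}),
)

# State 2: characteristics Y' at time tf; the effect of the change is Y' - Y.
y_tf = event.apply(y_t0)
print("Effect of change:", set(y_tf - y_t0))
```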
In this section the basics of adaptability and flexibility and how they are applied
during systems endeavors are discussed. Adaptability and flexibility have many
interpretations and as such must be clearly defined and understood.
Adaptability has additional definitions, shown in Table 9.1 that may provide
improved understanding of the term when used as a non-functional requirement for
a system.
The section that follows will provide the definition for flexibility.
Flexibility has additional definitions, shown in Table 9.2 that may provide
improved understanding of the term when used as a non-functional requirement for
a system.
The section that follows will show how adaptability and flexibility are related.
In this section the basics of modifiability and how it is applied during systems
endeavors are reviewed. Modifiability has been interpreted in many ways and as
such must be clearly defined and understood.
Fig. 9.2 Change agent location in distinguishing between adaptability and flexibility: an internal impetus for change (change agent located inside the system boundary) produces adaptable-type change, while an external impetus for change (change agent located in the environment, outside the system boundary) produces flexible-type change
Modifiability has additional definitions, shown in Table 9.3 that may provide
improved understanding of the term when used as a non-functional requirement for
a system.
It is important to note that modifiability is concerned with the set of system characteristics, where:
• Y = A set of n system characteristics yi, where i = 1 to n and yi ∈ Y.
• yi = A system characteristic, where i = 1 to n.
There are two important distinctions to make when considering the definition for modifiability (both are illustrated in the sketch that follows).
1. The magnitude or level of the individual characteristics is addressed by scalability and is not a concern for modifiability, since no new characteristics are being introduced into the set of system characteristics (Y). [Note: scalability will not be discussed further.]
2. The essential difference between maintainability and modifiability is that maintainability is concerned with the correction of bugs whereas modifiability is not.
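Purely for illustration, the Python sketch below represents the system characteristics Y as a mapping from characteristic names to magnitudes: a scalability change alters a magnitude without changing the membership of Y, while a modifiability change alters the membership of Y itself. The characteristics and values are invented for the example.

```python
# Hypothetical system characteristics Y, represented for illustration as
# a mapping from characteristic name to its magnitude or level.
y = {
    "concurrent users supported": 100,
    "report formats provided": 3,
}

# Scalability: the magnitude of an existing characteristic changes;
# the set of characteristics Y itself is unchanged.
y_scaled = {**y, "concurrent users supported": 10_000}
assert set(y_scaled) == set(y)

# Modifiability: a new characteristic is introduced into Y (no bug is
# being corrected, which is what distinguishes this from maintainability).
y_modified = {**y, "audit logging": 1}
assert set(y_modified) != set(y)

print("Scalability changed magnitudes only:", y_scaled)
print("Modifiability changed the set Y:    ", sorted(y_modified))
```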
9.5 Robustness
In this section the basics of robustness and how it is applied during systems
endeavors are reviewed. As with most of the non-functional requirements, robustness
has been interpreted in many ways and as such must be clearly defined and
understood.
Robustness has additional definitions, shown in Table 9.4 that may provide
improved understanding of the term when used as a non-functional requirement for
a system.
It is important to note that robustness often refers to the larger system, without addressing any particular system characteristic, individual component, subsystem, or environmental perturbation. In fact, robustness is thought to be a function of a system's internal structure and fragility.
Because robustness is achieved by very specific internal structures, when any of these
systems is disassembled, there is very little latitude in reassembly if a working system is
expected. Although large variations or even failures in components can be tolerated if they
are designed for through redundancy and feedback regulation, what is rarely tolerated,
because it is rarely a design requirement, is nontrivial rearrangements of the interconnection
of internal parts (Carlson and Doyle 2002, p. 2539).
The study of complex systems has provided a framework titled Highly Optimized
Tolerance (HOT) that seeks to focus attention on robustness through both tolerance
and configuration.
‘Tolerance’ emphasizes that robustness in complex systems is a constrained and limited
quantity that must be carefully managed and protected. ‘Highly optimized’ emphasizes that
this is achieved by highly structured, rare, non-generic configurations that are products
either of deliberate design or evolution. The characteristics of HOT systems are high
performance, highly structured internal complexity, and apparently simple and robust
external behavior, with the risk of hopefully rare but potentially catastrophic cascading
failure events initiated by possibly quite small perturbations (Carlson and Doyle 2002,
p. 2540).
In order to evaluate changeability, questions that address both the presence (yes or
no) and quality of the effort (how well) to provide changeability as a purposeful
effort during a system design endeavor must be developed and answered. In this case
each of the four non-functional requirements identified as constituting the measure
termed changeability must be addressed. The goal is to frame each of the four
non-functional requirements as an object with a specific measurable attribute.
The establishment of measures is important because they are the linkage between the
observable, real-world, empirical facts about the system and the construct
(i.e., changeability) devised as an evaluation point. In this case a measure is defined
as “... an observed score gathered through self-report, interview, observation, or
some other means” (Edwards and Bagozzi 2000, p. 156).
As we discussed during the development of the scales for both traceability (see Chap. 6) and system safety (see Chap. 8), the selection of a measurement scale is an important element in the development of an adequate measure for changeability. Because none of the non-functional requirements we have selected as criteria for changeability has a natural origin or empirically defined distance, an ordinal scale should be selected as an appropriate scale for measuring system changeability. The well-known Likert scale is proposed for use in evaluating changeability. In order to ensure improved reliability, a five-point Likert scale will be invoked (Lissitz and Green 1975).
Armed with a construct, measurement attributes, and an appropriate scale type, the changeability measure may be constructed. In order to evaluate changeability, questions that address both the presence (yes or no) and the quality of the effort (how well) to provide changeability as part of the system's design must be answered.
At the end of Chap. 3 the importance of being able to measure each non-functional attribute was highlighted as an important element of systems design.
A structural mapping that relates changeability to a specific metric and measurement
entity is required. The four-level construct for changeability is presented in
Table 9.7.
9.7 Summary
This chapter has addressed the adaptation concern for changeability and reviewed
its four non-functional requirements: (1) adaptability; (2) flexibility; (3) modifi-
ability; and (4) robustness. In each case a formal definition has been provided along
with additional explanatory definitions and terms. The ability to effect the non-
functional requirement during the design process has also been addressed. Finally, a
formal metric and measurement characteristic have been proposed for evaluating
changeability.
The chapter that follows will address the non-functional requirements for extensibility, portability, reusability, and self-descriptiveness as part of the concern for adaptation in systems endeavors.
References
Andrzejak, A., Reinefeld, A., Schintke, F., & Schütt, T. (2006). On adaptability in grid systems. In
V. Getov, D. Laforenza, & A. Reinefeld (Eds.), Future generation grids (pp. 29–46). New
York, US: Springer.
Baldwin, C. Y., & Clark, K. B. (2006). Modularity in the design of complex engineering systems.
In D. Braha, A. A. Minai, & Y. Bar-Yam (Eds.), Complex engineered systems (pp. 175–205).
Berlin: Springer.
Bengtsson, P., Lassing, N., Bosch, J., & van Vliet, H. (2004). Architecture-level modifiability
analysis (ALMA). Journal of Systems and Software, 69(1–2), 129–147.
Bordoloi, S. K., Cooper, W. W., & Matsuo, H. (1999). Flexibility, adaptability, and efficiency in
manufacturing systems. Production and Operations Management, 8(2), 133–150.
Carlson, J. M., & Doyle, J. (2002). Complexity and robustness. Proceedings of the National
Academy of Sciences of the United States of America, 99(3), 2538–2545.
Cliff, N. (1993). What is and isn't measurement. In G. Keren & C. Lewis (Eds.), A handbook for
data analysis in the behavioral sciences: Methodological issues (pp. 59–93). Hillsdale, NJ:
Lawrence Erlbaum Associates.
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between
constructs and measures. Psychological Methods, 5(2), 155–174.
Engel, A., & Browning, T. R. (2008). Designing systems for adaptability by means of architecture
options. Systems Engineering, 11(2), 125–146.
Fricke, E., & Schulz, A. P. (2005). Design for changeability (DfC): Principles to enable changes in
systems throughout their entire lifecycle. Systems Engineering, 8(4), 342–359.
Hatley, D. J., & Pirbhai, I. A. (1988). Strategies for real-time system specification. New York:
Dorset House.
IEEE. (2005). IEEE Standard 1220: Systems engineering—Application and management of the
systems engineering process. New York: Institute of Electrical and Electronics Engineers.
IEEE, & ISO/IEC. (2010). IEEE and ISO/IEC Standard 24765: Systems and software engineering
—Vocabulary. New York and Geneva: Institute of Electrical and Electronics Engineers and the
International Organization for Standardization and the International Electrotechnical
Commission.
Lissitz, R. W., & Green, S. B. (1975). Effect of the number of scale points on reliability: A Monte
Carlo approach. Journal of Applied Psychology, 60(1), 10–13.
Ross, A. M., Rhodes, D. H., & Hastings, D. E. (2008). Defining changeability: Reconciling
flexibility, adaptability, scalability, modifiability, and robustness for maintaining system
lifecycle value. Systems Engineering, 11(3), 246–262.
Chapter 10
Extensibility, Portability, Reusability
and Self-descriptiveness
Abstract The design of systems and components during the design stage of the
systems life cycle requires specific purposeful actions to ensure effective designs
and viable systems. Designers are faced with a number of adaptation concerns that
they must embed into the design in every instance of thinking and documentation.
The ability for a system to adapt is essential to its continued survival and ability to
provide requisite functions for its stakeholders. Adaptation concerns include the
non-functional requirements for extensibility, portability, reusability, and self-
descriptiveness. Purposeful design requires an understanding of each of these
requirements and how to measure and evaluate each as part of an integrated systems
design.
This chapter will address four major topics: (1) extensibility, (2) portability, (3) reusability, and (4) self-descriptiveness. Each of these topics is associated with adaptation concerns in design endeavors. The chapter begins by reviewing extensibility, its definitions, and how it is approached as an aspect of purposeful systems design.
The second section defines portability, provides a perspective on why portability is a desired characteristic, and describes four factors designers must consider in order to achieve portable designs.
The third section in this chapter addresses reusability by providing a clear
definition and by addressing reusability in systems designs. Design for reuse can be
achieved by using either a top-down or bottom-up approach and three unique
techniques. The section concludes by recommending 2 strategies and 10 heuristics
that support reusability in systems designs.
The fourth section defines self-descriptiveness and discusses the types of
problems associated with poor self-descriptiveness. The section also discusses how
utilization of the seven design principles for user-system dialogue and adoption
and application of an appropriate standard for user-system dialogue can decrease
errors and improve system self-descriptiveness.
The final section defines a measure and a means for measuring adaptation concerns that is a function of extensibility, portability, reusability, and self-descriptiveness. The section concludes by relating the proposed measure for adaptation concerns to a metric and includes a structural map for extensibility, portability, reusability, and self-descriptiveness.
The chapter has a specific learning goal and associated objectives. The learning goal of this chapter is to be able to identify how the attributes of extensibility, portability, reusability, and self-descriptiveness influence design in systems endeavors. This chapter's goal is supported by the following objectives:
• Define extensibility.
• Discuss how extensibility is achieved during purposeful systems design.
• Define portability.
• Describe the four factors that affect portability in systems designs.
• Define reusability.
• Describe the two approaches to reusability that may be used during design
endeavors.
• Define self-descriptiveness.
• Describe the three levels of problems associated with poor self-descriptiveness.
• Construct a structural map that relates adaptation concerns to a specific metric and measurable characteristic.
• Explain the significance of extensibility, portability, reusability, and self-
descriptiveness in systems design endeavors.
The ability to achieve these objectives may be fulfilled by reviewing the materials
in the sections that follow.
10.2 Extensibility
In this section the basics of extensibility and how it is applied during systems
endeavors will be reviewed. As with many of the other non-functional require-
ments, extensibility is not well understood or used in ordinary discussions about
systems requirements. To validate this assertion, take a minute and review the index
of a systems engineering or software engineering text and look for the word
extensibility. Is it missing? It would not be surprising to hear that the word is
missing from just about every major text. Therefore, extensibility and its charac-
teristics must be carefully reviewed in order to provide a common base for both
learning and application during systems design endeavors.
Extensibility (which is preferred over the word extendability) has additional defi-
nitions, shown in Table 10.1 that may provide further meaning for the term when
applied as a non-functional requirement for a system.
Additional meaning for extensibility may be obtained by reviewing the defini-
tion for its synonym, expandability, which is shown in Table 10.2.
From these definitions extensibility may be defined as the ability to extend a
system, while minimizing the level of effort required for implementing extensions
and the impact to existing system functions. Having settled on this basic definition,
the next section will discuss how extensibility may be used as a purposeful element
during systems design endeavors.
Extensibility has been practiced in the design of most hardware products for many years. For instance, imagine purchasing a car that has a design that would prohibit the addition of optional accessories. The dealers would be required to roll the dice when they made their selection of stock for their lots, and customers would have to make trade-offs that they were not comfortable with. Instead, local dealers
are able to add factory options because the design included the ability to add
components. The ability to add components to a design is enabled by common
interfaces and established standards. Designs that include established standards for
common interfaces for component connections are able to accept new technology in
a manner that permits seamless integration. One-of-a-kind designs with a lack of
standardization are notoriously unable to accept new technological improvements.
The electronics industry includes extensibility as a major non-functional require-
ment that permits components to be interconnected based on designs that routinely
include interface points based on accepted industry standards.
The same can be said for most modern software products. Vendors that provide large enterprise resource planning (ERP) suites have modular designs for their products that permit consumers to select any number of individual software modules that perform specific business functions (e.g., financial accounting, human resource management, etc.). The vendor's architecture includes interface points between its own functional modules and those of third-party vendors that provide unique support applications (e.g., customer resource management, scheduling, etc.).
Modern software frameworks also include the ability to incorporate extensibility. For instance, Microsoft has developed the Managed Extensibility Framework (MEF) as a library within its .NET development framework for creating lightweight, extensible applications. MEF's components, called parts, declaratively specify both their dependencies (imports) and the capabilities (exports) they make available. When a programmer creates a part, the MEF composition engine satisfies its imports with what is available from other parts. Similarly, Oracle's Enterprise Manager has an Extensibility Exchange, a library where programmers are able to find plug-ins and connectors they may utilize.
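The idea common to these frameworks, stable interface points plus discoverable extensions, can be shown with a minimal, framework-neutral sketch. The Python below is not MEF or the Extensibility Exchange; it is a hypothetical plug-in registry illustrating how new components can be added through a common interface without modifying existing functions.

```python
from typing import Callable, Dict

# The common interface point: an extension receives a report record and
# returns a rendered string. Existing system functions never change when
# a new exporter is registered.
Exporter = Callable[[dict], str]

_exporters: Dict[str, Exporter] = {}


def register_exporter(name: str) -> Callable[[Exporter], Exporter]:
    """Decorator used by extensions to plug into the established interface."""
    def decorator(func: Exporter) -> Exporter:
        _exporters[name] = func
        return func
    return decorator


@register_exporter("csv")
def export_csv(record: dict) -> str:
    return ",".join(f"{k}={v}" for k, v in record.items())


# A later extension: added without touching any of the code above.
@register_exporter("text")
def export_text(record: dict) -> str:
    return "\n".join(f"{k}: {v}" for k, v in record.items())


record = {"id": 42, "status": "nominal"}
for name, exporter in _exporters.items():
    print(f"--- {name} ---\n{exporter(record)}")
```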
In conclusion, extensibility is a purposeful design function that permits systems
to be extended—added to or modified—during their design life with a minimum of
effort and subsequent disruption to the system and its users. The eminent software
pioneer David Lorge Parnas makes the point about how engineers must account for
change during the design stages when he states “One of the clearest morals in the
earlier discussion about design for change as it is taught in other areas of engi-
neering is that one must anticipate changes before one begins the design” (Parnas
1979, p. 130).
The section that follows will address the non-functional requirement for
portability.
10.3 Portability
In this section the basics of portability and how it is applied during systems
endeavors will be addressed. As with many of the other non-functional require-
ments, portability is not well understood or used in ordinary discussions about
systems requirements. Once again, take a minute and review the index of a favorite
systems engineering or software engineering text and look for the word portability.
Is it missing? It would not be surprising to hear that the word is missing from just
about every major text. Therefore, a careful review of portability and its charac-
teristics is in order to provide a common base for both learning and its application
during systems design endeavors.
Portability has additional definitions from the literature, listed in Table 10.3, that may provide further help in understanding the term when applied as a non-functional requirement for a system.
From these definitions portability is the degree to which a system can be
transported or adapted to operate in a new environment. Having settled on this
basic definition, the next section will discuss how portability may be used as a
purposeful element during systems design endeavors.
In Chap. 9 it was stated that changes occur in systems based upon one or more of the following major factors: (1) marketplace forces; (2) technological evolution; and (3) environmental shifts (Fricke and Schulz 2005). Because real-world systems exist in a constantly changing domain, they too are subject to change, and these changes may require a system to be transported or adapted to operate in a new or different environment.
In order to achieve systems that are portable, designers are required to address four major factors: (1) impediment factors; (2) human factors; (3) environmental factors; and (4) cost factors (Hakuta and Ohminami 1997). A simple way of recording these factors is sketched after the list below.
• Impediment factors include all of the technical issues that cloud, restrict, or prohibit the movement of the system from its current environment to the new or target environment. Some examples of technical issues include reusability of hardware and software; compatibility of hardware, software, and standards; interfaces between hardware and software; data structures, size, and restructuring effort; and compatibility of design tools as well as development and testing environments.
• Human factors address the knowledge and experience of the design team and
their ability to address the tasks required to transport or adapt the system to
operate in the new or target environment.
• Environmental factors address the target environment. Specifically, the “set of
elements and their relevant properties, which elements are not part of the sys-
tem, but a change in any of which can cause or produce a change in the state of
the system” (Ackoff and Emery 2006, p. 19). New target environments are often
a significant challenge to design teams unfamiliar with the elements in the new
environment.
• Cost factors address the aggregate of the individual costs attributed to the impediment, human, and environmental factors associated with the transportation and adaptation of the existing system required for it to operate in the new or target environment.
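As noted before the list, one simple way a design team might record its assessment of the four factors is sketched below. The structure, the target environment, and the factor entries are hypothetical illustrations and are not part of Hakuta and Ohminami's method.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class PortabilityAssessment:
    """Qualitative notes on the four portability factors for one target environment."""
    target_environment: str
    impediment_factors: List[str] = field(default_factory=list)   # technical restrictions
    human_factors: List[str] = field(default_factory=list)        # team knowledge and experience
    environmental_factors: List[str] = field(default_factory=list)
    cost_factors: List[str] = field(default_factory=list)

    def open_issues(self) -> int:
        return (len(self.impediment_factors) + len(self.human_factors)
                + len(self.environmental_factors) + len(self.cost_factors))


# Hypothetical assessment for porting an application to a new operating environment.
assessment = PortabilityAssessment(
    target_environment="containerized Linux platform",
    impediment_factors=["proprietary database driver", "hard-coded file paths"],
    human_factors=["team unfamiliar with target toolchain"],
    cost_factors=["licensing of replacement middleware"],
)

print(f"{assessment.target_environment}: {assessment.open_issues()} open portability issues")
```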
All of these factors must be addressed by the systems designer when evaluating the
decision to incorporate portability requirements as part of the purposeful design
during systems endeavors. The section that follows will address system reusability.
10.4 Reusability
In this section the basics of reusability and how it is applied during systems
endeavors will be reviewed. Compared to the other non-functional requirements
addressed so far, reusability is a term that is used frequently during discussions
about systems requirements. Despite its frequent use, we will review its formal
systems vocabulary definition as well as some definitions from the literature in
order to solidify a common usage for the term during systems design endeavors.
Reusability has additional definitions from the literature, listed in Table 10.4, that may provide further help in understanding the term when applied as a non-functional requirement for a system.
From these definitions reusability is the degree to which a system repeats the use
of any part of an existing system in a new system. Having settled on this basic
definition, the next section will discuss how reusability may be used as a purposeful
element during systems design endeavors.
The challenges associated with reusability are based upon two important characteristics required in order to effect reuse in a design. Every designer must carefully analyze an existing system element's ability to satisfy both (1) the required functionality and (2) the required interfaces. As a result, designers of system elements will be required to make tradeoffs between the functionality and interface requirements in their design and the functionality and interface requirements of potentially reusable system elements. Few existing system elements provide both identical functionality and identical interfaces, so tradeoffs take on additional importance when purposefully including reusability as a non-functional requirement in a system's design.
An additional design consideration must be made when invoking reusability as a
non-functional requirement for a system. The design team must decide on a reus-
ability approach: Will the reuse design be top-down (often labeled generative) or
bottom-up (often termed compositional)?
Component-oriented reuse as the major bottom-up concept is based on the idea to build a
system from smaller, less complex parts by reusing or adapting existing components; top-
down reuse approaches—in contrast—are more challenging, as they require a thorough
understanding of the overall structure of the engineered solution (Stallinger et al. 2011,
p. 121).
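One common way to resolve the functionality/interface tradeoff in bottom-up (compositional) reuse is to wrap the existing element behind the interface the new design expects. The Python sketch below is a hypothetical illustration; the legacy probe, the target interface, and the adapter are invented for the example and are not taken from Stallinger et al.

```python
from typing import Protocol


class TemperatureSensor(Protocol):
    """Interface required by the new system design (degrees Celsius)."""
    def read_celsius(self) -> float: ...


class LegacyFahrenheitProbe:
    """Existing, reusable element with the right functionality but the wrong interface."""
    def sample(self) -> float:
        return 72.5  # stand-in for a real measurement, in Fahrenheit


class FahrenheitProbeAdapter:
    """Adapter that lets the legacy element satisfy the new interface unchanged."""
    def __init__(self, probe: LegacyFahrenheitProbe) -> None:
        self._probe = probe

    def read_celsius(self) -> float:
        return (self._probe.sample() - 32.0) * 5.0 / 9.0


def log_temperature(sensor: TemperatureSensor) -> None:
    print(f"temperature = {sensor.read_celsius():.1f} °C")


log_temperature(FahrenheitProbeAdapter(LegacyFahrenheitProbe()))
```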
10.5 Self-descriptiveness
In this section the basics of self-descriptiveness and how it is applied during systems endeavors will be addressed. Self-descriptiveness, when compared to the other non-functional requirements addressed so far, is a term that is rarely used during discussions about systems requirements. Because of its infrequent use in ordinary conversation, a review of both its formal systems vocabulary definition and some definitions from the literature is required. This will solidify a common meaning for the term for the discussion of its use in systems design endeavors.
Self-descriptiveness has additional definitions from the literature, listed in Table 10.7, that may be used to understand the term when applied as a non-functional requirement for a system.
From these definitions self-descriptiveness is the characteristic of a system that
permits an observer to determine or verify how its functions are achieved. Having
settled on this basic definition, the next section will discuss how self-descriptive-
ness is achieved during systems design endeavors.
The concept that underlies self-descriptiveness is related to the dialogue the user
has with the system under consideration. In this context dialogue is defined as the
“interaction between a user and an interactive system as a sequence of user actions
(inputs) and system responses (outputs) in order to achieve a goal” (ISO 2006,
p. vi). More simply stated, dialogue is the interaction between a user and the
system of interest required to achieve a desired goal. As part of this dialogue a
system designer must strive to understand how the system and its user (be that the
designer or the person utilizing the system to complete its intended functions)
communicate. International Standard 9241, Part 110—Dialogue Principles (ISO
2006) lists seven principles, the second of which is self-descriptiveness, which
define how usable designs should behave. Research has demonstrated “that among
the dialogue principles, self-descriptiveness is the most important” (Watanabe et al.
2009, p. 825).
Table 10.8 describes three levels of problems that are associated with self-
descriptiveness.
By adopting the seven design principles for user-system dialogue, designers can
greatly reduce (1) generalized dialogue errors, (2) the specific self-descriptiveness
errors described in Table 10.8, and (3) the broader range of seven systems errors
(Adams and Hester 2012, 2013). Inclusion of the non-functional requirement for
self-descriptiveness in a design requires the design team to formally adopt and
apply an appropriate standard for user-system dialogue (ISO 2006).
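A small, hypothetical illustration of self-descriptive dialogue is sketched below: the prompt states what input is expected, and a rejected input is explained in terms the user can act on rather than with a bare error code. The wording is invented and is not taken from ISO 9241-110.

```python
def ask_sample_interval() -> int:
    """Self-descriptive dialogue: say what is expected and why an input was rejected."""
    prompt = "Sample interval in seconds (whole number between 1 and 3600): "
    while True:
        raw = input(prompt)
        if not raw.strip().isdigit():
            print(f"'{raw}' is not a whole number. Please enter digits only, e.g. 30.")
            continue
        value = int(raw)
        if not 1 <= value <= 3600:
            print(f"{value} is outside the allowed range of 1-3600 seconds.")
            continue
        return value


if __name__ == "__main__":
    print("Interval accepted:", ask_sample_interval())
```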
Self-descriptiveness’ importance moves beyond the system design and directly
impacts the system’s implementation and continued viability during the operation
and maintenance stage and through retirement and disposal. It is precisely because
the user-system dialogue has such lasting effects that it gains importance in systems
design endeavors. “Self-descriptiveness is necessary for both testability and
understandability” (Boehm et al. 1976, p. 606) and can be extrapolated to other
requirements as well.
The next section will discuss how the four non-functional requirements for
extensibility, portability, reusability, and self-descriptiveness can be measured and
evaluated.
As discussed during the development of the scales for traceability (see Chap. 6),
system safety (see Chap. 8), and changeability (see Chap. 9) the selection of a
measurement scale is an important element in the development of an adequate
measure. Because none of the non-functional requirements selected as criteria has a
natural origin or empirically defined distance, an ordinal scale should be selected as
an appropriate scale for measuring system extensibility, portability, reusability, and
self-descriptiveness. In order to ensure improved reliability, a five-point Likert scale
will be invoked (Lissitz and Green 1975).
Armed with a construct, measurement attributes, and an appropriate scale type, the
measures for extensibility, portability, reusability, and self-descriptiveness may be
constructed. In order to evaluate these, questions that address both the presence (yes
or no) and quality of the effort (how well) to provide effective and meaningful
levels of extensibility, portability, reusability, and self-descriptiveness as part of the
system’s design must be addressed. Each of the four criteria (i.e., the measurement
constructs) has a specific question, shown in Table 10.9, which may be used to
evaluate each one’s contribution to system adaptation concerns.
The answer to each question in Table 10.9 will be scored using the five-point
Likert measures in Table 10.10.
The summation of the four constructs in Eq. 10.1 will be the measure of the
degree of adaptation in a system design endeavor.
Expanded Equation for System Adaptability Concerns
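A sketch of one way Eq. 10.1 can be written out is given below, assuming the four construct scores are denoted A_E (extensibility), A_P (portability), A_R (reusability), and A_SD (self-descriptiveness); these symbols are assumptions for illustration and may differ from the notation used in Table 10.11.

```latex
% Sketch only: the symbols A_E, A_P, A_R, and A_SD are assumed for illustration.
\begin{equation}
  A_{adaptation} = \sum_{i=1}^{4} A_i = A_{E} + A_{P} + A_{R} + A_{SD}
  \tag{10.1}
\end{equation}
```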
At the end of Chap. 3 the importance of being able to measure each non-functional
attribute was emphasized. A structural mapping that relates adaptation concerns to
four specific metrics and measurement entities is required. The four-level construct
for adaptation concerns is presented in Table 10.11.
10.7 Summary
References
Lopes, T. P., Neag, I. A., & Ralph, J. E. (2005). The role of extensibility in software standards for
automatic test systems, Proceedings of AUTOTESTCON—The IEEE systems readiness
technology conference (pp. 367–373). Piscataway, NJ: Institute of Electrical and Electronics
Engineers.
Lynex, A., & Layzell, P. J. (1998). Organisational considerations for software reuse. Annals of
Software Engineering, 5(1), 105–124.
Mooney, J. D. (1990). Strategies for supporting application portability. Computer, 23(11), 59–70.
Park, K. S., & Lim, C. H. (1999). A structured methodology for comparative evaluation of user
interface designs using usability criteria and measures. International Journal of Industrial
Ergonomics, 23(5–6), 379–389.
Parnas, D. L. (1979). Designing software for ease of extension and contraction. IEEE Transactions
on Software Engineering, SE-5(2), 128–138.
Pfleeger, S. L. (1998). Software engineering: Theory and practice. Upper Saddle River, NJ:
Prentice-Hall.
Pressman, R. S. (2004). Software engineering: A practitioner’s approach (5th ed.). New York:
McGraw-Hill.
Stallinger, F., Neumann, R., Vollmar, J., & Plösch, R. (2011). Reuse and product-orientation as
key elements for systems engineering: aligning a reference model for the industrial solutions
business with ISO/IEC 15288, Proceedings of the 2011 International Conference on Software
and Systems Process (pp. 120–128). New York: ACM.
Stallinger, F., Plösch, R., Pomberger, G., & Vollmar, J. (2010). Integrating ISO/IEC 15504
conformant process assessment and organizational reuse enhancement. Journal of Software
Maintenance and Evolution: Research and Practice, 22(4), 307–324.
Watanabe, M., Yonemura, S., & Asano, Y. (2009). Investigation of web usability based on the
dialogue principles. In M. Kurosu (Ed.), Human Centered Design (pp. 825–832). Berlin:
Springer.
Part V
Viability Concerns
Chapter 11
Understandability, Usability, Robustness
and Survivability
Abstract The design of systems and components during the design stage of the
systems life cycle requires specific purposeful actions to ensure effective designs
and viable systems. Designers are faced with a number of core viability concerns
that they must embed into the design to ensure the system remains viable. The
ability for a system to remain viable is critical if it is to continue to provide required
functionality for its stakeholders. Core viability concerns include the non-func-
tional requirements for understandability, usability, robustness, and survivability.
Purposeful design requires an understanding of each of these requirements and how
to measure and evaluate each as part of an integrated systems design.
This chapter will address four major topics: (1) understandability, (2) usability, (3) robustness, and (4) survivability. Each of these topics is associated with core viability concerns in design endeavors. The chapter begins by reviewing understandability, its definitions, and how it is approached as an aspect of purposeful systems design.
Section 11.3 defines usability, provides a perspective on why usability is a
desired characteristic, and describes the four attributes traditionally associated with
usability.
Section 11.4 in this chapter addresses robustness by providing a clear definition and by addressing robustness as an element of systems designs. The section concludes with a discussion of design and the concepts associated with robustness.
Section 11.5 defines survivability and its three major contributing elements. The
section also discusses 17 design principles that may be invoked when designing for
survivability.
Section 11.6 defines a measure and a means for measuring core viability concerns that is a function of understandability, usability, robustness, and survivability. The section concludes by relating the proposed measure for core viability concerns to a metric and includes a structural map for understandability, usability, robustness, and survivability.
The chapter has a specific learning goal and associated objectives. The learning goal of this chapter is to be able to identify how the attributes of understandability, usability, robustness, and survivability influence design in systems endeavors. This chapter's goal is supported by the following objectives:
• Define understandability.
• Discuss how understandability is achieved during purposeful systems design.
• Define usability.
• Describe the four attributes traditionally associated with usability.
• Define robustness.
• Discuss the element associated with designing for robustness.
• Define survivability.
• Describe the three elements of survivability.
• Discuss some of the design principles associated with survivability.
• Construct a structural map that relates core viability concerns to a specific metric and measurable characteristic.
• Explain the significance of understandability, usability, robustness, and sur-
vivability in systems design endeavors.
The ability to achieve these objectives may be fulfilled by reviewing the materials
in the sections that follow.
11.2 Understandability
In this section the basics of understandability and how it is applied during systems endeavors will be reviewed. The definition for the non-functional requirement for understandability would seem to be a clear-cut one and a topic of discussion when developing systems. However, when a search of the many texts and scholarly articles on design is conducted, little mention of understandability is revealed. In order to improve upon this situation, to be precise, and to develop an appropriate measure for understandability, a common definition and associated terminology for understandability must be constructed in order to describe its use in systems design endeavors.
Understandability has additional definitions, shown in Table 11.1 that may provide
further meaning for the term when applied as a non-functional requirement for a
system.
Additional meaning for understandability may be obtained by reviewing the
definition for a closely related word, unambiguity. It is important to note that
“unambiguity and understandability are interrelated (according to some, they would
be even the same property), since, if a requirement is ambiguous, then it cannot be
properly understood” (Génova et al. 2013, p. 28). Two formal definitions for un-
ambiguity are presented in Table 11.2.
From these definitions, from a systems user’s perspective, understandability is
the ability to comprehend any portion of a system without difficulty. Having settled
on this basic definition, the next section will discuss the elements that contribute to
human understandability in systems.
Knowing that the system’s designers are directly responsible for the understand-
ability of the system and that this is a function of readily comprehendible design
artifacts (e.g., user documentation, help screens, the functional flow of information;
application of business rules, etc.), the design must adopt formal approaches to
ensure understandability. Failure to adopt such an approach may lead to the fol-
lowing scenario for both users and the system’s maintainers.
If we can’t learn something, we won’t understand it. If we can’t understand something, we
can’t use it—at least not well enough to avoid creating a money pit. We can’t maintain a
system that we don’t understand—at least not easily. And we can’t make changes to our
system if we can’t understand how the system as a whole will work once the changes are
made. (Nazir and Khan 2012, p. 773)
In order to avoid this type of pitfall, system designers must adopt and implement
a clear conceptual model for understandability. “To me, the most important part of a
successful design is the underlying conceptual model. This is the hard part of
design: formulating an appropriate conceptual model and then assuring that
everything else be consistent with it” (Norman 1999, p. 39).
In simple terms, a conceptual model of a system is a mental model that people
use to represent their individual understanding of how the system works. The
conceptual or mental model is just that: a perceived structure for the real-world
system that they are encountering. The utility of the conceptual model includes both
(1) prediction (how things will behave) and (2) functional understanding (the
relationships between the system's components and the functions they perform). A
conceptual model "specifies both the static and the dynamic aspects of the application
domain. The static aspect describes the real world entities and their relationships and
attributes. The dynamic aspect is modeled by processes and their interfaces and
behavior" (Kung 1989, p. 1177). The static portion of conceptual models includes
things and their associated properties, while the dynamic aspect addresses events and
their supporting processes. The static and dynamic perspectives of conceptual models
have four purposes (Kung and Solvberg 1986):
Fig. 11.1 System image and the user's and designer's conceptual models (the system
image comprises the physical structure and information about the system, such as
documentation, instructions, web-sites, help screens, and signifiers)
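The static/dynamic split that Kung describes can be made concrete with a small data-structure sketch. The following Python fragment is purely illustrative; the entity and process names are invented and are not drawn from the text.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Entity:
    # Static aspect: a real-world thing with its properties and relationships
    name: str
    attributes: Dict[str, str] = field(default_factory=dict)
    relationships: List[str] = field(default_factory=list)

@dataclass
class Process:
    # Dynamic aspect: an event-driven process with its interfaces (inputs/outputs)
    name: str
    inputs: List[str]
    outputs: List[str]

@dataclass
class ConceptualModel:
    entities: List[Entity]
    processes: List[Process]

# A fragment of a hypothetical user's conceptual model of an ordering system
model = ConceptualModel(
    entities=[Entity("Order", {"status": "open"}, ["placed-by Customer"])],
    processes=[Process("Submit order", inputs=["Order"], outputs=["Confirmation"])],
)

A designer who documents a model of this kind in the system image (documentation, help screens, and the like) gives users a better chance of forming a conceptual model that matches the design intent.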
11.3 Usability
In this section, the basics of usability and how it is applied during systems
endeavors will be addressed. As with many of the other non-functional require-
ments, usability sounds like a very clear concept. However, arriving at a common
definition and understanding for the term is problematic. Once again, take a minute
and review the index of a favorite systems engineering or software engineering text
and look for the word usability. Is it missing? It would not be surprising to hear that
the word is missing from just about every major text. Therefore, a careful review of
usability and its characteristics is in order to provide a common base for both
learning and its application during systems design endeavors.
Usability has additional definitions, from the literature, that are listed in Table 11.3
that may provide further help in understanding the term when applied as a non-
functional requirement for a system.
As presented in the previous section, the system image is the physical structure and
information about the system from which the users construct their own conceptual
model of the system. When deviations exist between the intentions represented in
the designer's conceptual model and the actual understanding encapsulated in the
user's conceptual model (which is a direct result of the system image), understandability
and system usability are diminished.
When usability in a system's design is discussed, it includes design parameters
such as ease of use, ease of learning, error protection, error recovery, and efficiency
of performance (Maxwell 2001). Jakob Nielsen, one of the pioneers of human-centered
design, points out the importance of usability and notes that "usability
has multiple components and is traditionally associated with these five attributes:
• Learnability: The system should be easy to learn so that the user can rapidly
start getting some work done with the system.
• Efficiency: The systems should be efficient to use, so that once the learner has
learned the system, a high level of productivity is possible.
• Memorability: The system should be easy to remember, so that the casual user is
able to return to the system after some period of not using it, without having to
learn everything all over again.
• Errors: The system should have a low error rate, so that users make few errors
during the use of the system, and so that if they do make errors they can easily
recover from them. Further, catastrophic errors must not occur.
• Satisfaction: The system should be pleasant to use, so that users are subjectively
satisfied with using it; they like it.” (Nielsen 1993, p. 26)
During the engineering design stages of the systems life cycle, specific processes
that address usability elements are invoked. The processes are labeled usability
engineering or human centered design and provide formal methods for ensuring
that usability characteristics are specified early in the design stages and that they
continue to be measured throughout all of the subsequent life cycle stages.
Table 11.4 lists some of the concerns during human-computer interaction (HCI) and
associated sample measures.
As with understandability, usability involves a strong cognitive component
because users need to comprehend the system to some extent in order to utilize it.
The relationship between data and information is an important element of human
cognition.
Most data is of limited value until it is processed into a useable form. Processing data into a
useable form requires human intervention, most often accomplished with the use of an
information system. The output of this process is information. Information is contained in
descriptions, answers to questions that begin with such words as who, what, where, when,
and how many. These functional operations are performed on data and transform it into
information. (Hester and Adams 2014, p. 161)
Table 11.4 Concerns during HCI [adapted from (Zhang et al. 2005, p. 522)]
HCI concern area | Attribute | Description | Sample measure items
Physical | Ergonomic | System fits our physical strengths and limitations and does not cause harm to our health | Legible; audible; safe to use
Cognitive | Usability | System fits our cognitive strengths and limitations and functions as the cognitive extension of our brain | Fewer errors and easy recovery; easy to use; easy to remember how to use; easy to learn
Affective, emotional, and intrinsically motivational | Satisfaction | System satisfies our aesthetic and emotional needs, and is attractive for its own sake | Aesthetically pleasing; engaging; trustworthy; satisfying and enjoyable; entertaining and fun
Extrinsically motivational | Usefulness | Using the system would provide rewarding consequences | Supports individual's tasks; can do some tasks that could not be done without the system; extends one's capability; rewarding
We have many tactics to follow to help people understand how to use our designs. It is
important to be clear about the distinctions among them, for they have very different
functions and implications. Sloppy thinking about the concepts and tactics often leads to
sloppiness in design. And sloppiness in design translates into confusion for users. (Norman
1999, p. 41)
All of these factors discussed in this section must be addressed by the systems
designer when evaluating the decision to incorporate usability requirements as part
of the purposeful design during systems endeavors. The section that follows will
address system robustness.
11.4 Robustness
In this section, robustness and how it may be applied during systems endeavors will
be reviewed. As with many of the other non-functional requirements addressed so
far, robustness is a term that is infrequently used during discussions about systems
requirements. Based upon its infrequent use, a thorough review of both the formal
definition from the systems vocabulary as well as some definitions from the liter-
ature is in order to solidify a common usage for the term in the sections that follow.
Robustness has additional definitions, from the literature, that are listed in
Table 11.5 that may provide further help in understanding the term when applied as
a non-functional requirement for a system.
From these definitions, robustness is the ability of a system to maintain a desired
characteristic despite fluctuations caused by either internal changes or its envi-
ronment. Armed with this composite definition, robustness and how it may be used
as a purposeful element during systems design endeavors may be reviewed.
In order to adequately ensure viability, the design process must account for both
the system’s users and the environments in which it will operate. It is important to
note that both users and environments are plural. This is because systems must be
designed to accommodate not only the currently required user base and defined
environment, but changes that will add new users and which will then position the
system within an ever-changing environment. Failure to do so will restrict the
system’s ability to operate and require expensive modification or changes to its
structure. The range of operating conditions specified for the system should be
broad enough to permit changes in users and environment without a significant loss
of system functionality. “A product with a wide range of permissible operating
conditions is more robust than a product that is more restrictive” (Schach 2002,
p. 148).
The principal method used to achieve robustness in systems has been redundancy.
Redundancy “is key to flexibility and robustness since it enables capacity, func-
tionality, and performance options as well as fault-tolerance” (Fricke and Schulz
2005, p. 355). Traditional design methods ensure that a system’s elements (i.e., the
components of which it is constituted) have sufficient redundancy to permit oper-
ations when some number of systems elements fail or malfunction. This is the
traditional role of reliability during systems design and includes a purposeful
configuration and interconnection of systems elements.
Because robustness is achieved by very specific internal structures, when any of these
systems is disassembled, there is very little latitude in reassembly if a working system is
expected. Although large variations or even failures in components can be tolerated if they
are designed for through redundancy and feedback regulation, what is rarely tolerated,
because it is rarely a design requirement, is nontrivial rearrangements of the interconnection
of internal parts. (Carlson and Doyle 2002, p. 2539)
Clausing (2004) advocates the use of an operating window (OW) to define the
range within which a specific system characteristic is expected to operate successfully.
The OW is the range in at least one critical functional variable of the system within which
the failure rate is less than some selected value. The range is bounded by thresholds (or a
threshold) beyond which the performance is degraded to some selected bad level. (Clausing
2004, p. 26)
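Clausing's operating-window idea lends itself to a simple check: bound at least one critical functional variable with thresholds and treat values outside the window as degraded performance. The sketch below is a minimal Python illustration; the variable, thresholds, and sample values are hypothetical and are not taken from Clausing (2004).

def within_operating_window(value: float, lower: float, upper: float) -> bool:
    # True when the critical functional variable lies inside the OW thresholds
    return lower <= value <= upper

# Hypothetical example: supply voltage for a subsystem specified as 11.0-14.5 V
samples = [11.8, 12.6, 14.9, 10.7]
failures = [v for v in samples if not within_operating_window(v, 11.0, 14.5)]
print(f"{len(failures)} of {len(samples)} samples fall outside the operating window")

A wider window, achieved through redundancy or feedback regulation, corresponds to a more robust design in the sense used above.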
11.5 Survivability
In this section the basics of system survivability and how it is applied during
systems endeavors will be addressed. Survivability, as a non-functional requirement,
is a term that is very rarely used during discussions about systems requirements.
Because of its infrequent use in ordinary conversation, a review of the formal
systems vocabulary definition and some definitions from the literature is required.
This will solidify a common meaning for the term during discussions of its use and
application during systems design endeavors.
Survivability’s additional definitions are taken from the literature and are listed in
Table 11.6. All of these definitions may be used to better understand the term when
applied as a non-functional requirement for a system.
From these definitions survivability is the ability of a system to continue to
operate in the face of attacks or accidental failures or errors. Having settled on this
basic definition, the next section will discuss how survivability is achieved during
systems design endeavors.
Generally, systems must remain survivable at all times. However, there are specific
times when system survivability requirements may be relaxed. This includes times
when the system is removed from operation (e.g., for upgrade, repair, or overhaul)
or is placed in a reduced state of readiness (e.g., standby mode or training). The
systems stakeholders should be queried about these types of situations as part of the
requirements analysis process. Knowledge of limited periods during which the
system does not have to remain survivable is an important element of the conceptual
design for the system.
During the conceptual design of the system a number of additional items must be
considered as part of the survivability approach. “Survivability requires more than
simply a technical solution, but rather an integrated collaboration of technical
aspects, business considerations and analysis techniques” (Redman et al. 2005,
p. 187).
The ability to understand, measure, and evaluate the non-functional requirements for
understandability, usability, robustness, and survivability when included as require-
ments in a system is a valuable capability. Having the ability to measure and evaluate
each of these non-functional requirements provides additional perspectives and insight
into the future performance and viability of all elements of the system being designed.
Based upon the understanding developed in the previous sections on under-
standability, usability, robustness, and survivability and how they are used in systems
design endeavors, a technique for measurement may be developed. Once again, this is a
difficult assignment because each of these non-functional requirements is a subjective,
qualitative measure, which differs greatly from most of the objective, quantitative
measures that have been developed for many of the other non-functional requirements.
In order to understand how to approach a subjective, qualitative measure, a review of
how to construct and measure subjective, qualitative objects will be presented.
As discussed during the development of the scales for traceability (see Chap. 6),
system safety (see Chap. 8), changeability (see Chap. 9), and adaptation concerns (see
Chap. 10), the selection of a measurement scale is an important element in the
development of an adequate measure. Because none of the non-functional requirements
selected as criteria has a natural origin or empirically defined distance, an
ordinal scale should be selected as an appropriate scale for measuring system
understandability, usability, robustness, and survivability. In order to ensure improved
reliability, a five-point Likert scale will be invoked (Lissitz and Green 1975).
Armed with a construct, measurement attributes, and an appropriate scale type, the
measures for understandability, usability, robustness, and survivability are
constructed. In order to evaluate these, questions that address both the presence (yes
or no) and the quality of the effort (how well) to provide effective and meaningful
levels of understandability, usability, robustness, and survivability as an element of
the system's design must be answered. Each of the four criteria (i.e., the measurement
constructs) has a specific question, shown in Table 11.8, which may be
used to evaluate each one's contribution to core viability concerns.
The answer to each question in Table 11.8 will be scored using the five-point
Likert measures in Table 11.9.
The summation of the four constructs in Eq. 11.1 will be the measure of the
degree of core viability in a system design endeavor.
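Equation 11.1 is not reproduced here; based on the measurable characteristics listed in Table 11.10, it can reasonably be read as the simple sum of the four construct scores (the composite symbol V_{core} on the left-hand side is an assumption, not the author's notation):

V_{core} = V_{under} + V_{use} + V_{robust} + V_{surviv} \qquad (11.1)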
At the end of Chap. 3, the importance of being able to measure each non-functional
attribute was stressed. A structural mapping that relates core
viability concerns to four specific metrics and measurement entities is required. The
four-level construct for core viability concerns is presented in Table 11.10.
Table 11.10 Four-level structural map for measuring core viability concerns
Level | Role
Concern | Systems viability
Attribute | Core viability concerns
Metrics | Understandability, usability, robustness, and survivability
Measurable characteristics | Sum of (1) understandability (Vunder), (2) usability (Vuse), (3) robustness (Vrobust), and (4) survivability (Vsurviv)
11.7 Summary
In this chapter, the core viability concerns contained in the four non-functional
requirements (understandability, usability, robustness, and survivability) have been
addressed. In each case, a formal definition has been provided along with additional
explanatory definitions and terms. The ability to effect each of the four non-functional
requirements during the design process has also been addressed. Finally,
a formal metric and measurement characteristic have been proposed for evaluating
core viability concerns through metrics for understandability, usability, robustness,
and survivability.
The chapter that follows will address additional non-functional requirements
associated with system viability.
References
Audi, R. (Ed.). (1999). Cambridge dictionary of philosophy (2nd ed.). New York: Cambridge
University Press.
Bevan, N. (2001). International standards for HCI and usability. International Journal of Human-
Computer Studies, 55(4), 533–552.
Blundell, J. K., Hines, M. L., & Stach, J. (1997). The measurement of software design quality.
Annals of Software Engineering, 4(1), 235–255.
Boehm, B. W., Brown, J. R., & Lipow, M. (1976). Quantitative evaluation of software quality. In
R. T. Yeh & C. V. Ramamoorthy (Eds.), Proceedings of the 2nd International Conference on
Software Engineering (pp. 592–605). Los Alamitos, CA: IEEE Computer Society Press.
Bowen, T. P., Wigle, G. B., & Tsai, J. T. (1985). Specification of software quality attributes:
Software quality evaluation guidebook (RADC-TR-85-37, Vol. III). Griffiss Air Force Base,
NY: Rome Air Development Center.
Box, G. E. P., & Fung, C. A. (1993). Quality quandries: Is your robust design procedure robust?
Quality Engineering, 6(3), 503–514.
Branscomb, L. M., & Thomas, J. C. (1984). Ease of use: A system design challenge. IBM Systems
Journal, 23(3), 224–235.
Carlson, J. M., & Doyle, J. (2002). Complexity and robustness. Proceedings of the National
Academy of Sciences of the United States of America, 99(3), 2538–2545.
Carroll, J. M., & Thomas, J. C. (1982). Metaphor and the cognitive representation of computing
systems. IEEE Transactions on Systems, Man and Cybernetics, 12(2), 107–116.
Cavano, J. P., & McCall, J. A. (1978). A framework for the measurement of software quality.
SIGSOFT Software Engineering Notes, 3(5), 133–139.
Mekdeci, B., Ross, A. M., Rhodes, D. H., & Hastings, D. E. (2011). Examining survivability of
systems of systems. In Proceedings of the 21st Annual International Symposium of the
International Council on Systems Engineering (Vol. 1, pp. 564–576). San Diego, CA:
INCOSE-International Council on Systems Engineering.
Nakano, T., & Suda, T. (2007). Applying biological principles to designs of network services.
Applied Soft Computing, 7(3), 870–878.
Nazir, M., & Khan, R. A. (2012). An empirical validation of understandability quantification
model. Procedia Technology, 4, 772–777.
Nielsen, J. (1993). Usability engineering. Cambridge: Academic Press Professional.
Norman, D. A. (1999). Affordance, conventions, and design. Interactions, 6(3), 38–43.
Norman, D. A. (2013). The design of everyday things (Revised and expanded ed.). New York:
Basic Books.
Ottensooser, A., Fekete, A., Reijers, H. A., Mendling, J., & Menictas, C. (2012). Making sense of
business process descriptions: An experimental comparison of graphical and textual notations.
Journal of Systems and Software, 85(3), 596–606.
Redman, J., Warren, M., & Hutchinson, W. (2005). System survivability: A critical security
problem. Information Management & Computer Security, 13(3), 182–188.
Richards, M. G., Ross, A. M., Hastings, D. E., & Rhodes, D. H. (2008). Empirical validation of
design principles for survivable system architecture. In Proceedings of the 2nd Annual IEEE
Systems Conference (pp. 1–8).
Richards, M. G., Ross, A. M., Hastings, D. E., & Rhodes, D. H. (2009). Survivability design
principles for enhanced concept generation and evaluation. In Proceedings of the 19th Annual
INCOSE International Symposium (Vol. 2, pp. 1055–1070). San Diego, CA: INCOSE-
International Council on Systems Engineering.
Ross, A. M., Rhodes, D. H., & Hastings, D. E. (2008). Defining changeability: Reconciling
flexibility, adaptability, scalability, modifiability, and robustness for maintaining system
lifecycle value. Systems Engineering, 11(3), 246–262.
Schach, S. R. (2002). Object-oriented and classical software engineering (5th ed.). New York:
McGraw-Hill.
Wagner, S. (2013). Quality models software product quality control (pp. 29–89). Berlin: Springer.
Westrum, R. (2006). A typology of resilience situations. In E. Hollnagel, D. D. Woods, & N.
Leveson (Eds.), Resilience engineering: Concepts and precepts (pp. 55–65). Burlington:
Ashgate.
Zhang, P., Carey, J., Te’eni, D., & Tremaine, M. (2005). Integrating human-computer interaction
development into the systems development life cycle: A methodology. Communications of the
Association for Information Systems, 15, 512–543.
Chapter 12
Accuracy, Correctness, Efficiency,
and Integrity
Abstract The design of systems and components during the design stage of the
systems life cycle requires specific purposeful actions to ensure effective designs
and viable systems. Designers are faced with a number of other viability concerns
that they must embed into the design to ensure the system remains viable. The
ability for a system to remain viable is critical if it is to continue to provide required
functionality for its stakeholders. Other viability concerns include the non-functional
requirements for accuracy, correctness, efficiency, and integrity. Purposeful
design requires an understanding of each of these requirements and how to measure
and evaluate each as part of an integrated systems design.
This chapter will address four major topics: (1) accuracy, (2) correctness, (3) efficiency,
and (4) integrity. Each of these topics is associated with other viability
concerns in design endeavors. The chapter begins by reviewing accuracy, its
definitions, and concepts related to the reference value, precision, and trueness. The
section will conclude with a discussion of how accuracy is approached as an aspect
of purposeful systems design.
Section 12.2 will define correctness and demonstrate how both verification and
validation activities provide evaluation opportunities to ensure correctness. The
section will also include four design principles that support the development of
systems that correctly represent the specified requirements for the system being
addressed by the design team.
Section 12.3 will address efficiency by providing a clear definition for the term
and establishing a proxy for system efficiency. The section will conclude by detailing
a generic attribute and method for evaluating efficiency in systems design endeavors.
Section 12.4 will define integrity and the concept that underlies its use as a non-functional
requirement in systems designs. The section will also discuss 33
security design principles and the life cycle stages where they should be invoked
when designing systems for integrity.
Section 12.6 will define a measure and a means for measuring other viability
concerns as a function of accuracy, correctness, efficiency, and integrity. The
section will conclude by relating the proposed measure for other viability concerns
as a metric and will include a structural map for accuracy, correctness, efficiency,
and integrity.
The chapter has a specific learning goal and associated objectives. The learning
goal of this chapter is to be able to understand the other viability concerns and to
identify how the non-functional requirements of accuracy, correctness, efficiency,
and integrity affect design in systems endeavors. This chapter’s goal is supported by
the following objectives:
• Define accuracy.
• Discuss how accuracy is achieved during purposeful systems design.
• Define correctness.
• Describe how verification and validation are related to correctness.
• Describe the design principles that may be used to support correctness in design
endeavors.
• Define efficiency.
• Discuss how resources may be used as a proxy for efficiency in systems design
endeavors.
• Define integrity.
• Discuss some of the 33 security design principles associated with integrity in
systems design endeavors.
• Construct a structural map that relates other viability concerns to a specific metric
and measurable characteristic.
• Explain the significance of accuracy, correctness, efficiency, and integrity in
systems design endeavors.
The ability to achieve these objectives may be fulfilled by reviewing the materials
in the sections that follow.
12.2 Accuracy
In this section the basics of accuracy and how it is applied during systems
endeavors will be discussed. The definition for the non-functional requirement
termed accuracy would, on the surface, seem to be straightforward. However, this
very common term is routinely used incorrectly and is often misrepresented as
precision. To improve understanding of accuracy and to use this term as it was
intended, both a common definition and associated terminology will be constructed
and a detailed graphic will be used to represent a concept for accuracy. This is an
important first step if accuracy is to be understood as an element of systems design
endeavors.
Accuracy has additional definitions, shown in Table 12.1, that may provide further
meaning for the term when applied as a non-functional requirement for a system.
These definitions may be adequate for metrologists and scientists who engage in
measurement activities on a daily basis. However, for those engaged in engineering
and design activities, the definitions in Table 12.1 require additional description if
accuracy is to be properly invoked as a meaningful non-functional requirement in a
design endeavor. The next section will provide additional context for the term
accuracy in support of a common meaning.
The measurement process includes a number of elements that include the: “(a)
measurement method, (b) a system of causes, (c) repetition, and (d) capability of
control” (Murphy 1969, p. 357). The measurement process is how the measurement
method is implemented. The system of causes includes the resources required to
execute the test (i.e., materials, test personnel, instruments, test environment, and
specific time). An important element is the notion of control. The test must be
capable of statistical control if it is included as part of a formal measurement
process. Without the presence of statistical control the process cannot use the
routine statistical measures that permit us to discuss both accuracy and precision.
All measurement and testing occurs with respect to a reference level or target value.
The target value is most often established as either (1) a property of a material or (2)
a physical characteristic of a system component. The target value serves as the
design value against which we will measure during the conduct of a test. Accuracy
connotes the agreement between the long-run average of the actual measurements
and the target value. Accuracy is “a qualitative performance characteristic
expressing the closeness of agreement between a measurement result and the value
of the measurand” (Menditto et al. 2007). The qualitative performance characteristic
of the measurement, which is accuracy, includes both precision and trueness.
Accuracy is described by both precision and trueness. The definitions for each term
are as follows:
• Precision: “Closeness of agreement between indications or measured quantity
values obtained by replicate measurements on the same or similar objects under
specified conditions” (JCGM 2012, p. 22)
• Trueness: “Closeness of agreement between the average of an infinite number of
replicate measured quantity values and a reference quantity value” (JCGM
2012, p. 21).
Figure 12.1 is a depiction of how precision and trueness are used to describe the
accuracy of a measurement context. From both the depiction in Fig. 12.1 and a
knowledge of basic statistics, it is clear that the “standard deviation is an index of
precision” (Murphy 1969, p. 360). The process improvement notion of Six Sigma,
popularized by efforts at Motorola (Hahn et al. 1999), is based upon a precision that
is a multiple of six standard deviations (6σ); a process meeting this criterion is said
to be under control, with a minuscule rejection rate of 0.00034 % and an
associated yield of 99.99966 %.
Fig. 12.1 [Depiction of accuracy: probability density of measurement values relative to a
reference value, with trueness shown as the offset of the mean from the reference value and
precision shown as the spread of the distribution]
There is a relationship between the types of errors present during measurement and
testing endeavors and the associated performance characteristics represented by
accuracy, precision, and trueness.
• Trueness Errors: The difference between the mean of the measurement process
and the reference value is termed a systematic error and is expressed by a
quantitative value we term bias.
• Precision Errors: The measurements that contribute to the mean and exist
within some index of precision are caused by random errors and are expressed
by a quantitative value we term the standard deviation.
• Accuracy Errors: The total error encountered during the measurement process,
attributable to both systematic and random errors, is expressed as measurement
uncertainty.
The relationship between these types of errors, performance characteristics, and the
quantitative expression is depicted in Fig. 12.2. It is important to recognize that the
accuracy in a measurement is a parameter that expresses the measurement uncer-
tainty or more precisely, “the dispersion of the values that could reasonably be
attributed to the measurand” (i.e. the quantity to be measured) (Menditto et al. 2007,
p. 46).
Measurement uncertainty conveys more correctly the slight doubt which is attached to any
measurement result. Thus a doubtful meaning of ‘accuracy’ (doubtful because tied to ‘true
value’) is replaced by a practical one: ‘measurement uncertainty’. (De Bièvre 2006, p. 645)
Fig. 12.2 Relationships between type of error, performance characteristic (trueness, accuracy,
precision), and quantitative expression
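The error taxonomy above translates directly into computation: bias (trueness) is the offset of the mean from the reference value, and the standard deviation is the index of precision. The Python sketch below is illustrative only; the reference value and replicate measurements are made-up numbers.

import statistics

def trueness_bias(measurements, reference):
    # Systematic error: difference between the long-run mean and the reference value
    return statistics.mean(measurements) - reference

def precision_stdev(measurements):
    # Random error: sample standard deviation as the index of precision
    return statistics.stdev(measurements)

reference = 250.0                                    # hypothetical target value
measurements = [247.8, 251.2, 249.5, 250.9, 248.6]   # hypothetical replicates

print(f"bias = {trueness_bias(measurements, reference):+.2f}")
print(f"standard deviation = {precision_stdev(measurements):.2f}")
# Both contributions feed the overall measurement uncertainty attributed to the measurand.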
In this section the specific tasks and activities accomplished during the design stage
of the systems life cycle that require accuracy as an element of the performance
specification will be identified. Accuracy is an important non-functional require-
ment where measures of effectiveness (MOE), measures of performance (MOP),
and technical performance measures (TPM) are required. Accuracy is a factor in
determining system performance requirements and is addressed in IEEE Standard
1220—Systems engineering—Application and management of the systems engi-
neering process (IEEE 2005) specifically in the following areas.
Section 5.1.1.3 in IEEE Standard 1220 (IEEE 2005) requires the following:
Identify the subsystems of each product and define the design and functional interface
requirements among the subsystems and their corresponding performance requirements and
design constraints. System product functional and performance requirements should be
allocated among the subsystems so as to assure requirement traceability from the system
products to their respective subsystems, and from subsystems to their parent product. (IEEE
2005, pp. 21–22)
The design team develops top-level performance measures that are labeled
Measures of Effectiveness (MOE). MOEs are defined as:
Standards against which the capability of a solution to meet the needs of a problem may be
judged. The standards are specific properties that any potential solution must exhibit to
some extent. MOEs are independent of any solution and do not specify performance or
criteria. (Sproles 2001, p. 146)
MOEs have two key characteristics: (1) the ability to be tested, and (2) the ability to
be quantified in some manner. For example, if a team is tasked with designing an
electric vehicle, a valid MOE may be stated as: The electric vehicle must be able to
drive fully loaded from Norfolk, VA to Washington, DC without recharging. This
MOE is clearly measurable and quantifiable. MOEs are typically supported by a
lower-level hierarchy of Measures of Performance (MOP).
Section 5.2.1.1 in IEEE Standard 1220 (IEEE 2005) requires the following:
Subsystem performance requirements are allocated among the assemblies so as to assure
requirements traceability from subsystems to appropriate assemblies, and from assemblies
to the parent subsystem. (IEEE 2005, p. 25)
Section 5.2.1.2 in IEEE Standard 1220 (IEEE 2005) requires the following:
Assembly performance requirements are allocated among the components so as to assure
requirement traceability from the assemblies to their respective components, and from
components to their parent assembly. (IEEE 2005, p. 25)
The design team develops mid-level performance measures that are labeled
Measures of Performance (MOP). MOPs are defined as:
Performance requirements describe how well functional requirements must be performed to
satisfy the MOEs. These performance requirements are the MOPs that are allocated to
subfunctions during functional decomposition analysis and that are the criteria against which
design solutions [derived from synthesis (see 6.5)] are measured. There are typically several
MOPs for each MOE, which bind the acceptable performance envelope. (IEEE 2005, p. 41)
In this case the performance requirement for the electric vehicle that was used as an
example during the establishment of the type of MOE in the earlier conceptual
design phase can be clearly traced. In the example the MOE stated that the electric
vehicle must be able to drive fully loaded from Norfolk, VA to Washington, DC
without recharging. In support of this requirement, more than one MOP may be
developed. An example of a supporting MOP is: the vehicle range must be equal to
or greater than 250 miles. This establishes a more precise requirement than the
distance from Norfolk, VA to Washington, DC. MOPs are supported by any
number of specific Technical Performance Measures (TPMs).
Section 5.3.4.1 in IEEE Standard 1220 (IEEE 2005) requires the following:
Component reviews should be completed for each component at the completion of the
detailed design stage. The purpose of this review is to ensure that each detailed component
definition is sufficiently mature to meet measure of effectiveness/measure of performance
(MOE/MOP) criteria. (IEEE 2005, p. 31)
Section 6.1.13 in IEEE Standard 1220 (IEEE 2005) requires the following:
Identify the technical performance measures (TPMs), which are key indicators of system
performance. Selection of TPMs are usually limited to critical MOPs that, if not met, put
the project at cost, schedule, or performance risk. Specific TPM activities are integrated into
the SEMS [Systems Engineering Master Schedule] to periodically determine achievement
to date and to measure progress against a planned value profile. (IEEE 2005, p. 42)
The design team defines detailed Technical Performance Measures (TPMs) for all of
the MOPs associated with the systems requirements. TPMs are quantitative in nature
and are derived directly from and support the mid-level MOPs. The TPMs are used
to assess compliance with requirements in the system’s requirements breakdown
structure (RBS) and also assist in monitoring and tracking technical risk.
In the previous example for an electric vehicle the MOE stated that the electric
vehicle must be able to drive fully loaded from Norfolk, VA to Washington, DC
without recharging. In support of this MOE an MOP was developed and stated the
vehicle range must be equal to or greater than 250 miles. This established a more
precise requirement than the distance from Norfolk, VA to Washington, DC. In
support of this MOP, a series of specific Technical Performance Measures (TPMs)
are invoked. The TPMs for this example may include performance measures such as
battery capacity, vehicle weight, drag, power train friction, etc. Each of the TPMs
must have a specified measurement accuracy, which requires a reference value,
precision, and trueness for each and every measure.
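The MOE, MOP, and TPM levels for the electric-vehicle example can be captured in a small traceability structure so that every TPM can be traced upward to the requirement it supports. The Python sketch below is illustrative; the TPM names, units, and targets are placeholders rather than values from IEEE Standard 1220.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TPM:
    name: str
    unit: str
    target: float   # reference value against which measurement accuracy is specified

@dataclass
class MOP:
    statement: str
    tpms: List[TPM] = field(default_factory=list)

@dataclass
class MOE:
    statement: str
    mops: List[MOP] = field(default_factory=list)

range_moe = MOE(
    "Drive fully loaded from Norfolk, VA to Washington, DC without recharging",
    mops=[MOP("Vehicle range equal to or greater than 250 miles",
              tpms=[TPM("Battery capacity", "kWh", 85.0),
                    TPM("Vehicle weight", "kg", 2100.0)])],
)

for mop in range_moe.mops:
    for tpm in mop.tpms:
        print(f"MOE -> {mop.statement} -> {tpm.name} ({tpm.unit}, target {tpm.target})")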
In conclusion, accuracy is a purposeful design function that permits systems to
be tested and evaluated against standard reference values throughout the entire
system life cycle.
The section that follows will address the non-functional requirement for
correctness.
12.3 Correctness
In this section, the basics of correctness and how it is applied during systems
endeavors will be reviewed. As with many of the other non-functional require-
ments, correctness sounds like a very clear concept. However, arriving at a common
definition and understanding for how correctness is invoked during requirements
deliberations in systems design endeavors may turn out to be problematic. Once
again, take a minute and review the index of a favorite systems engineering or
software engineering text and look for the word correctness. Is it missing? Once
again, it would not be surprising to hear that the word is missing from just about
every major text. Therefore, a careful review of correctness and its characteristics is
in order to provide a common base for both learning and application during systems
design endeavors.
Correctness has additional definitions, from the literature, that are listed in
Table 12.2 that may provide further help in understanding the term when applied as
a non-functional requirement for a system.
From these definitions, correctness is the degree to which a system satisfies its
specified design requirements. Having settled on this basic definition, the next
section will discuss how correctness is evaluated during systems design endeavors.
The definition chosen for correctness, the degree to which a system satisfies its
specified design requirements, seems to sound very much like the process
requirements for verification and validation activities that are performed throughout
the systems life cycle. As a result, a review of the definitions for both verification
and validation will be conducted.
12.3.2.1 Verification
12.3.2.2 Validation
Now that correctness, verification, and validation have been defined and an understanding
of the activities used to analyze correctness as part of the design effort has been
established, how designers ensure correctness during design activities will be
addressed.
Systems design activities must satisfy the specified requirements for the system.
Correctness of design means that the design is sufficient. A sufficient design is one
that satisfactorily incorporates the elements of the System Theory’s Design Axiom
(Adams et al. 2014). This is accomplished by integrating the Design Axiom’s
supporting principles into the design rules used by the design team. The four
supporting principles from the Design Axiom are addressed in the following
sections.
The Law of Requisite Parsimony states that human beings can only deal simultaneously
with between five and nine observations. This is based on
George Miller’s seminal paper The Magical Number Seven, Plus or Minus Two:
Some Limits on Our Capacity for Processing Information (1956). Miller makes
three points, based on empirical experimentation, in this paper.
1. Span of attention: Experiments showed that when more than seven objects were
presented the subjects were said to estimate, and for fewer than seven objects they
were said to subitize. The break point was at the number seven.
2. Span of immediate memory: He reports the fact that the span of immediate
memory, for a variety of test materials, is about seven items in length.
3. Span of absolute judgment: This is the clear and definite limit based on the
accuracy with which we can identify absolutely the magnitude. This is also in
the neighborhood of seven.
Nobel Laureate Herbert A. Simon [1916–2001] followed Miller’s work with a
paper (Simon 1974) that questioned Miller’s vagueness with respect to the size of a
chunk used in memory and is a worthy companion to Miller's work. Application of
the Law of Requisite Parsimony during systems design endeavors is
clear.
No matter how complex our models, designs, or plans may be, we should avoid subjecting
people to more than nine concepts simultaneously, and should as a matter of routine,
involve fewer. Parsimony should be invoked in systems engineering in order to make sure
that the artefacts and methods we use in designing systems does not inherently try to force
people to make judgments that exceed their short term cognitive capacities. (Adams 2011,
p. 149)
The ability to conceptualize, design to, or implement ideas that have more than nine
individual elements strains normal human memory and recall. Design artifacts,
systems hierarchies, and organizations should therefore adopt structures that utilize
spans of nine or fewer elements. This application of parsimony improves the ability
to conceptualize, design, and implement systems that are correct.
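One practical way to apply parsimony during design reviews is to flag any artifact whose fan-out exceeds nine elements. The hierarchy in the Python sketch below is hypothetical.

PARSIMONY_LIMIT = 9

hierarchy = {
    "System": ["Propulsion", "Structure", "Avionics", "Power"],
    "Avionics": [f"LRU-{i}" for i in range(1, 12)],   # 11 children: exceeds the limit
}

def oversized_spans(tree, limit=PARSIMONY_LIMIT):
    # Return nodes whose number of direct children exceeds the parsimony limit
    return {node: len(children) for node, children in tree.items()
            if len(children) > limit}

print(oversized_spans(hierarchy))    # {'Avionics': 11}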
The design team must make numerous decisions, and in doing so some elements of
the design will take priority over others. Requisite saliency has particular
importance to the design team because, as they conduct trade-off analyses, solve problems,
and process data and information into knowledge, requisite saliency provides the
means for making these rational decisions. As a result, all design “processes must
include a specific provision for uncovering relative saliency for all the factors in a
system as a critical element in the overall system design” (Adams 2011, p. 149).
The father of modern socio-technical systems design, Albert Cherns, has spent a
major portion of his career emphasizing the need to limit the scope of the design
process to that which is required, no more and no less. Failure to limit the process in
this manner leads to terms such as brass-plating and polishing-the-cannonball,
which serve to indicate that objects or systems have been far over-designed and
include elements never envisioned by the system's stakeholders. Cherns' principle
of minimum critical specification states:
This principle has two aspects, negative and positive. The negative simply states that no
more should be specified than is absolutely essential; the positive requires that we identify
what is essential. (Cherns 1987, p. 155)
The basic utility of this principle is clear: design as little as possible and only
specify what is essential. However, all designs require redundancy to effect many of
the non-functional requirements described in this book. The design becomes a
balance between all of the system’s requirements and the resources (i.e., primarily
financial) allocated to implement the design. Cherns expressed concern over the
tendency for design teams to focus on a potential solution too early in the design
process, thereby closing options before the team could rationally evaluate all
alternatives.
This premature closing of options is a pervasive fault in design; it arises, not only because
of the desire to reduce uncertainty, but also because it helps the designer to get his own
way. We measure our success and effectiveness less by the quality of the ultimate design
than by the quantity of our ideas and preferences that have been incorporated into it.
(Cherns 1987, p. 155)
The ability to balance the specifications for the design against the known
requirements is most often a function of incomplete knowledge of the system (i.e.,
the principle of darkness), which only improves as we improve our understanding
of the system during analysis.
Whatever benefits we plan to achieve through specification become obsolete (often at a
rapid pace) as the contextual elements surrounding the design become better defined. In
many cases, early over specification may have a crippling effect on the ability of the design
team to adapt to evolving changes in context. (Adams 2011, p. 150)
The Pareto Principle was named after the 19th century Italian economist Vilfredo
Pareto [1848–1923], who noticed that in Italy about 80 % of wealth was in the
hands of 20 % of the population. Since that time a variety of sociological,
economic, and political phenomena have been shown to have the same pattern.
The well-known statistical quality control expert Joseph M. Juran [1904–2008]
“ … claims credit for giving the Pareto Principle its name. Juran’s Pareto Principle
is sometimes known as the Rule of 80/20” (Sanders 1987, p. 37).
The Pareto Principle states that in any large complex system 80 % of the output will be
produced by only 20 % of the system. The corollary to this is that 20 % of the results absorb
80 % of the resources or productive efforts. (Adams 2011, p. 147)
This fairly simple principle can be utilized during the design process by under-
standing that some system-level requirements will consume more design resources
(e.g., time, manpower, methods, etc.) than others. Similarly, elements or compo-
nents of the system may consume more power, require more information, process
more data, or fail more often than others. By using the Pareto principle and its
accompanying ratios, the design team may be able to better understand the
relationships between the system and its constituent elements.
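A Pareto check of this kind can be run on any resource-consumption breakdown. The component figures in the Python sketch below are invented purely for illustration.

consumption = {"compressor": 410, "controller": 35, "pump": 280,
               "display": 20, "heater": 190, "sensors": 15}   # e.g., watts

total = sum(consumption.values())
running, vital_few = 0.0, []
for name, value in sorted(consumption.items(), key=lambda kv: kv[1], reverse=True):
    running += value
    vital_few.append(name)
    if running / total >= 0.80:      # stop once ~80 % of the resource is covered
        break

print(f"{len(vital_few)} of {len(consumption)} components consume "
      f"{running / total:.0%} of the resource: {vital_few}")

Components that surface in the vital few are natural candidates for the deeper trade-off analyses discussed above.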
12.4 Efficiency
The non-functional requirement for efficiency will be approached in the sections
that follow by reviewing its formal definition, discussing the concepts that surround
its utilization, and describing how it is treated during systems design endeavors.
Efficiency has additional definitions, from the literature, that are listed in Table 12.6
that may provide further help in understanding the term when applied as a non-
functional requirement for a system.
In the section that follows a proxy for system efficiency and a detailed generic
attribute for evaluating efficiency in systems design endeavors are presented.
\text{Efficiency (physical)} = \frac{\text{output}}{\text{input}} \qquad (12.1)

\text{Efficiency (economic)} = \frac{\text{worth}}{\text{cost}} \qquad (12.2)
The proxy for efficiency, the number of steps to complete a task and the resources
utilized, is a valid method to evaluate efficiency in a systems design. Systems
expert Russell Ackoff [1919–2009] also related efficiency to resources (M5I),
stating:
Information, knowledge, and understanding enable us to increase efficiency, not effec-
tiveness. The efficiency of behavior or an act is measured relative to an objective by
determining either the amount of resources required to obtain that objective with a specified
probability, or the probability of obtaining that objective with a specified amount of
resources. (Ackoff 1999, p. 171)
In a formally defined engineering design process (which is a system), the process that
is clearly defined and orderly requires less energy to execute than one that is poorly
defined and disorganized. A poorly defined and disorganized process inefficiently
utilizes resources and ends up “diverting energy to exploring new paths (thereby
wasting energy and reducing efficiency)” (Colbert 2004, p. 353). Based upon this
analysis, resource efficiency may serve as a valid proxy for systems design efficiency.
In 1999 the Electronic Industries Association (EIA) issued interim standard 731-1,
the Systems Engineering Capability Model or SECM (EIA 1999).
The SECM was a merger of two previous widely used systems engineering models: the
Systems Engineering Capability Maturity Model® (SE-CMM®) and the Systems
Engineering Capability Assessment Model (SECAM). The EIA completed this effort and
published the SECM version 1.0 as an Interim Standard in January 1999. This document
was then used as the main systems engineering source document for the CMMISM
development. (Minnich 2002, p. 62)
The SECM contains general attributes (GA) for both usefulness and cost effec-
tiveness. The cost effectiveness GA is defined as “the extent to which the benefits
received are worth the resources invested. Cost effectiveness is determined through
the use of an intermediate parameter—resource efficiency” (Wells et al. 2003,
p. 304). The GA for cost effectiveness turns out to be a useful measure for the
efficiency of the design process. The resource efficiency calculation is a ratio of the
actual resources required to produce results against the benchmarked standard, as
depicted in Eq. 12.3.
\text{Resource Efficiency} = \frac{\text{actual resources}}{\text{benchmarked standard}} \qquad (12.3)
Table 12.7 Resource Efficiency Likert Scale [Adapted from Wells et al. (2003, Table II)]
Measure | Descriptor | Measurement criteria
0.0 | E−− (E minus, minus) | Resources required to produce the work product(s) or result(s) exceeded the expected (benchmarked) values by more than 50 %
0.5 | E− (E minus) | Resources required to produce the work product(s) or result(s) were more than the expected (benchmarked) values by 5–50 %
1.0 | E | Resources required to produce the work product(s) or result(s) were within 5 % of the expected (benchmarked) values
1.5 | E+ (E plus) | Resources required to produce the work product(s) or result(s) were less than the expected (benchmarked) values by 5–50 %
2.0 | E++ (E plus, plus) | Resources required to produce the work product(s) or result(s) were less than the expected (benchmarked) values by more than 50 %
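Equation 12.3 and the scale in Table 12.7 combine naturally into a small scoring helper. The Python sketch below is an interpretation: the deviation bands follow Table 12.7, but the exact treatment of the band boundaries is an assumption.

def resource_efficiency(actual: float, benchmark: float) -> float:
    # Eq. 12.3: ratio of actual resources to the benchmarked standard
    return actual / benchmark

def likert_measure(ratio: float):
    # Translate the ratio into (measure, descriptor) per Table 12.7
    if ratio > 1.50:
        return 0.0, "E--"   # exceeded the benchmark by more than 50 %
    if ratio > 1.05:
        return 0.5, "E-"    # exceeded the benchmark by 5-50 %
    if ratio >= 0.95:
        return 1.0, "E"     # within 5 % of the benchmark
    if ratio >= 0.50:
        return 1.5, "E+"    # under the benchmark by 5-50 %
    return 2.0, "E++"       # under the benchmark by more than 50 %

ratio = resource_efficiency(actual=140.0, benchmark=100.0)   # hypothetical labor hours
print(likert_measure(ratio))   # (0.5, 'E-')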
12.5 Integrity
In this section the basics of system integrity and how it is applied during systems
endeavors will be reviewed. Integrity, as a non-functional requirement, is a term that is
very rarely used during discussions about systems requirements. Because of its
infrequent use in ordinary conversation, the formal systems vocabulary definition as
well as some definitions from the literature will be reviewed. This will solidify a
common meaning for the term during discussions of its use during systems design
endeavors.
Integrity’s additional definitions are taken from the literature and are listed in
Table 12.8. All of these definitions may be used to better understand the term when
applied as a non-functional requirement for a system.
When the definitions in Table 12.8 are reviewed it is clear that, over time, the
definition for integrity has taken on new meaning. From these shifts and interpre-
tations, the integrity of a system can be taken to mean the system's ability to ensure
program correctness, noninterference, and information assurance. Integrity is
concerned with information modification rather than information disclosure or
availability. That is, integrity is something different from confidentiality or denial of
service.
In order to remain viable, systems must maintain integrity at all times. The system’s
stakeholders must have a high degree of trust that their system and its attendant data
and information are correct (i.e., precise, accurate, and meaningful), valid (created,
modified and deleted only by authorized users), and invariant (i.e., consistent and
unmodified).
The National Institute of Standards and Technology (NIST) has issued its
Engineering Principles for Information Technology Security (Stoneburner et al.
2004) and in this publication they define system security as “The quality that a
system has when it performs its intended function in an unimpaired manner, free
from unauthorized manipulation of the system, whether intentional or accidental”
(p. A-4). During the engineering stages of the systems life cycle the design team is
tasked with ensuring that the system’s design includes both requirements and means
to ensure adequate system integrity. There are 33 security principles that may be
used to enhance system integrity during system design endeavors. The section that
follows will discuss how each of these security principles may be applied to deliver
integrity as a purposeful element of the system design process.
Table 12.9 (excerpt) Security design principles
Increase resilience (reduce vulnerability):
17. Design and operate an IT system to limit damage and to be resilient in response
18. Provide assurance that the system is, and continues to be, resilient in the face of expected threats
19. Limit or contain vulnerabilities
20. Isolate public access systems from mission critical resources (e.g., data, processes, etc.)
21. Use boundary mechanisms to separate computing systems and network infrastructures
22. Design and implement audit mechanisms to detect unauthorized use and to support incident investigations
23. Develop and exercise contingency or disaster recovery procedures to ensure appropriate availability
Reduce vulnerabilities:
24. Strive for simplicity
25. Minimize the system elements to be trusted
26. Implement least privilege
27. Do not implement unnecessary security mechanisms
28. Ensure proper security in the shutdown or disposal of a system
29. Identify and prevent common errors and vulnerabilities
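During design reviews it can be useful to track which of the 33 principles a given set of design artifacts actually addresses. The Python sketch below shows one way to do so; the artifact names and the artifact-to-principle mapping are hypothetical.

# Principle numbers refer to Table 12.9; the mapping below is invented for illustration.
addressed = {
    "interface control document": {20, 21},
    "audit subsystem design": {22},
    "disaster recovery plan": {23},
}

selected_principles = set(range(17, 30))          # principles 17-29 listed above
covered = set().union(*addressed.values())
uncovered = sorted(selected_principles - covered)
print(f"Principles not yet addressed by any artifact: {uncovered}")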
Solms 2005, p. 606). Integrity is particularly concerned with preserving the
confidentiality and integrity of the information contained within the system. The
next section will discuss how the non-functional requirements for accuracy, cor-
rectness, efficiency, and integrity may be measured and evaluated.
As discussed during the development of the scales in Chaps. 7 through 11, the
selection of a measurement scale is an important element. Because the non-functional
requirements for accuracy, correctness, efficiency, and integrity have no
natural origin or empirically defined distance, an ordinal scale is an appropriate
scale for measuring these criteria. As discussed in Chaps. 7 through 11, a five-point
Likert scale will be invoked (Lissitz and Green 1975) in order to ensure improved
reliability.
Armed with a construct, measurement attributes, and an appropriate scale type, the
measures for accuracy, correctness, efficiency, and integrity are constructed. In
order to evaluate these, two essential questions must be answered. One addresses
the presence (yes or no) and the other addresses the quality of the effort (how well)
to provide effective and meaningful levels of accuracy, correctness, efficiency, and
integrity during the system's design. Each of the four criteria is a measurement
construct and has a specific question, shown in Table 12.10, which may be used
to evaluate each one's contribution to a system's other viability concerns.
The answer to each question in Table 12.10 will be scored using the five-point
Likert measures in Table 12.11.
The summation of the four constructs in Eq. 12.4 will be the measure of the
degree of other viability in a system design endeavor.
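Equation 12.4 is not reproduced here; based on the measurable characteristics in Table 12.12, it can reasonably be read as the simple sum of the four construct scores (the composite symbol V_{other} on the left-hand side is an assumption, not the author's notation):

V_{other} = V_{accuracy} + V_{correctness} + V_{efficiency} + V_{integrity} \qquad (12.4)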
Table 12.12 Four-level structural map for measuring other viability concerns
Level | Role
Concern | Systems viability
Attribute | Other viability concerns
Metrics | Accuracy, correctness, efficiency, and integrity
Measurable characteristics | Sum of (1) accuracy (Vaccuracy), (2) correctness (Vcorrectness), (3) efficiency (Vefficiency), and (4) integrity (Vintegrity)
In each of the previous chapters the importance of being able to measure each
non-functional attribute was stressed. A structural mapping that relates other viability
concerns to four specific metrics and measurement entities is required. The four-level
construct for other viability concerns is presented in Table 12.12.
12.7 Summary
In this chapter, the non-core or other viability concerns have been addressed. These
include four non-functional requirements: (1) accuracy, (2) correctness, (3) efficiency,
and (4) integrity. In each case, a formal definition has been provided along
with additional explanatory definitions and terms. The ability to effect each of the
four non-functional requirements during the design process has also been
addressed. Finally, a formal metric and measurement characteristic have been
proposed for evaluating other viability concerns through metrics for accuracy,
correctness, efficiency, and integrity.
The next chapter will discuss the use of the complete taxonomy of non-func-
tional requirements as part of the purposeful design of complex systems during
systems design endeavors.
References
Ackoff, R. L. (1999). Ackoff’s best: His classic writings on management. New York: Wiley.
Adams, K. M. (2011). Systems principles: foundation for the SoSE methodology. International
Journal of System of Systems Engineering, 2(2/3), 120–155.
Adams, K. M., Hester, P. T., Bradley, J. M., Meyers, T. J., & Keating, C. B. (2014). Systems
theory: The foundation for understanding systems. Systems Engineering, 17(1), 112–123.
ANSI/EIA. (1998). ANSI/EIA standard 632: Processes for engineering a system. Arlington, VA:
Electronic Industries Alliance.
Biba, K. J. (1975). Integrity considerations for secure computer systems (MTR 3153). Bedford,
MA: MITRE.
Blundell, J. K., Hines, M. L., & Stach, J. (1997). The measurement of software design quality.
Annals of Software Engineering, 4(1), 235–255.
Boehm, B. W., Brown, J. R., & Lipow, M. (1976). Quantitative evaluation of software quality. In
R. T. Yeh & C. V. Ramamoorthy (Eds.), Proceedings of the 2nd international conference on
software engineering (pp. 592–605). Los Alamitos, CA: IEEE Computer Society Press.
Boulding, K. E. (1966). The impact of social sciences. New Brunswick, NJ: Rutgers University
Press.
Bowen, J. P., & Hinchey, M. G. (1998). High-integrity system specification and design. London:
Springer.
Bowen, T. P., Wigle, G. B., & Tsai, J. T. (1985). Specification of software quality attributes:
Software quality evaluation guidebook (RADC-TR-85-37) (Vol. III). Griffiss Air Force Base,
NY: Rome Air Development Center.
Bresciani-Turroni, C. (1937). On Pareto’s law. Journal of the Royal Statistical Society, 100(3),
421–432.
Cavano, J. P., & McCall, J. A. (1978). A framework for the measurement of software quality.
SIGSOFT Software Engineering Notes, 3(5), 133–139.
Cherns, A. (1976). The principles of sociotechnical design. Human Relations, 29(8), 783–792.
Cherns, A. (1987). The principles of sociotechnical design revisited. Human Relations, 40(3),
153–161.
Churchman, C. W., & Ratoosh, P. (Eds.). (1959). Measurement: Definitions and theories. New
York: Wiley.
Cliff, N. (1993). What is and isn’t measurement. In G. Keren & C. Lewis (Eds.), A handbook for
data analysis in the behavioral sciences: Methodological issues (pp. 59–93). Hillsdale, NJ:
Lawrence Erlbaum Associates.
Colbert, B. A. (2004). The complex resource-based view: Implications for theory and practice in
strategic human resource management. Academy of Management Review, 29(3), 341–358.
Courtney, R. H., & Ware, W. H. (1994). What do we mean by integrity? Computers & Security, 13
(3), 206–208.
Creedy, J. (1977). Pareto and the distribution of income. Review of Income and Wealth, 23(4),
405–411.
De Bièvre, P. (2006). Accuracy versus uncertainty. Accreditation and Quality Assurance, 10(12),
645–646.
Del Mar, D. (1985). Operations and industrial management. New York: McGraw-Hill.
Sanders, R. E. (1987). The Pareto principle: Its use and abuse. The Journal of Services Marketing,
1(2), 37–40.
Sandhu, R. S., & Jajodia, S. (1993). Data and database security and controls. In H. F. Tipton &
Z. G. Ruthbert (Eds.), Handbook of information security management (pp. 481–499). Boston:
Auerbach.
Sandhu, R. S., & Jajodia, S. (1994). Integrity mechanisms in database management systems. In
M. D. Abrams, S. Jajodia, & H. J. Podell (Eds.), Information security: An integrated collection
of essays (pp. 617–635). Los Alamitos, CA: IEEE Computer Society Press.
Simon, H. A. (1974). How big is a chunk? Science, 183(4124), 482–488.
Sproles, N. (2001). The difficult problem of establishing measures of effectiveness for command
and control: A systems engineering perspective. Systems Engineering, 4(2), 145–155.
Stoneburner, G., Hayden, C., & Feringa, A. (2004). Engineering principles for information
technology security (A baseline for achieving security), [NIST special publication 800-27 Rev
A]. Gaithersburg, MD: National Institute of Standards and Technology.
Szilagyi, A. D. (1984). Management and performance (2nd ed.). Glenview, IL: Scott, Foresman
and Company.
Thuesen, G. J., & Fabrycky, W. J. (1989). Engineering economy. Englewood Cliffs, NJ: Prentice-
Hall.
Warfield, J. N. (1999). Twenty laws of complexity: Science applicable in organizations. Systems
Research and Behavioral Science, 16(1), 3–40.
Wells, C., Ibrahim, L., & LaBruyere, L. (2003). A new approach to generic attributes. Systems
Engineering, 6(4), 301–308.
Part VI
Conclusion
Chapter 13
Conclusion
Abstract The design of systems and components is a crucial element that affects
both the cost and efficacy of products produced for the world economy. Design is a
characteristic function of engineering. The structure of engineering education
underwent a major shift after WWII. The nationwide shift toward more science-
based curricula at all levels of education led to design-type courses being devalued
and even omitted in engineering education. Recent efforts to re-invigorate design in
both undergraduate and graduate engineering programs in the United States have
re-emphasized the role of design in the engineering curricula. The current text has
been developed to address a unique topic in engineering design—non-functional
requirements in systems analysis and design endeavors, thereby seeking to fill a
perceived void in the existing engineering literature.
The competitive difficulties in the world market that are faced by products manu-
factured in the United States have been attributed to a variety of causes. The MIT
Commission on Industrial Productivity addressed the recurring weaknesses of
American industry that continue to threaten the country’s standard of living and its
position in the world economy (Dertouzos et al. 1989). “To regain world manu-
facturing leadership, we need to take a more strategic approach by also improving
our engineering design practices” (Dixon and Duffey 1990, p. 9).
“Market loss by U.S. companies is due to design deficiencies more than man-
ufacturing deficiencies” (Dixon and Duffey 1990, p. 13). Engineering design and its
associated activities, though often overshadowed by more glamorous activities such
as marketing and sales, directly affect the cost and long-term efficacy of products
produced for the world market.
Engineering design is a crucial component of the industrial product realization process. It is
estimated that 70 % or more of the life cycle cost of a product is determined during design.
(NRC 1991, p. 1)
Clearly, design activities and the engineers tasked with implementing them are
important elements of the global economy. Design is a characteristic function
within the field of engineering. Although not all engineers are directly involved in
design, a 1982 study of the primary activities of engineers, displayed in Table 13.1,
reported that 28 % were working in development- and design-related activities.
This text is positioned as a guide for a course in engineering design that focuses on
the elements of the design that do not provide a direct function in support of the
stakeholders’ processes. The non-functional requirements addressed in the text are
elements of the design that affect performance of the entire system, and are not
attributable to any one specific function or process mandated by the system’s
stakeholders. In fact, the system’s stakeholders may not recognize terms such as
survivability, robustness, and self-descriptiveness. It is the job of the engineer
conducting the design to ensure that appropriate system-wide, non-functional
requirements are addressed and invoked in order to effectively treat sustainment,
design, adaptation, and viability concerns.
As such, this text satisfies a subset of the goals established for engineering
design that were addressed in the previous section. Specifically:
• Undergraduate engineering design education actions to:
1. Teach students what the design process entails and familiarize them with the
basic tools of the process;
2. Demonstrate that design involves not just function but also producibility,
cost, customer preference, and a variety of life cycle issues; and
• Graduate design education actions aimed at:
3. Developing competence in advanced design theory and methodology;
4. Familiarizing graduate students with state-of-the-art ideas in design, both
from academic research and from worldwide industrial experience and
research;
• ABET Accreditation Criteria:
5. Criterion 3—Student outcomes: (c) an ability to design a system, compo-
nent, or process to meet desired needs within realistic constraints such as
economic, environmental, social, political, ethical, health and safety,
manufacturability, and sustainability; and (d) an ability to function on
multidisciplinary teams.
6. Criterion 5—Curriculum: (b) Engineering design is the process of devising a
system, component, or process to meet desired needs. It is a decision-making
process (often iterative), in which the basic sciences, mathematics, and the
engineering sciences are applied to convert resources optimally to meet these
stated needs.
13.4 Summary
References
ABET. (2013). Criteria for accrediting engineering programs: Effective for reviews during the
2014–2015 accreditation cycle (E001 of 24 Feb 2014). Baltimore, MD: Accreditation Board
for Engineering and Technology.
Corbett, J., & Crookall, J. R. (1986). Design for economic manufacture. CIRP Annals—
Manufacturing Technology, 35(1), 93–97.
Dertouzos, M. L., Solow, R. M., & Lester, R. K. (1989). Made in America: Regaining the
productive edge. Cambridge, MA: MIT Press.
Dixon, J. R., & Duffey, M. R. (1990). The neglect of engineering design. California Management
Review, 32(2), 9–23.
NRC. (1985). Engineering education and practice in the United States: Foundations of our
techno-economic future. Washington, DC: National Academies Press.
NRC. (1986). Toward a new era in U.S. manufacturing: The need for a national vision.
Washington, DC: National Academies Press.
NRC. (1991). Improving engineering design: Designing for competitive advantage. Washington,
DC: National Academy Press.
Simon, H. A. (1996). The sciences of the artificial (3rd ed.). Cambridge, MA: MIT Press.
Tadmor, Z. (2006). Redefining engineering disciplines for the twenty-first century. The Bridge, 36
(2), 33–37.
Whitney, D. E. (1988). Manufacturing by design. Harvard Business Review, 66(4), 83–91.